Is the following statement about Turing machines true?
Question: Here's the statement:

Take a set of finite inputs from some alphabet. If, for any two Turing machines: all inputs in the set produce the same output for both machines, and in both machines every state transition (i.e. every row in the state table) occurs at least once for at least one of the inputs in the set before halting, then both machines will produce the same output as each other for any input.

This is an informal statement, it probably needs a bit of refining. But for now the question is, is this true?

Answer: This is an interesting statement, but it is not true. I'll give a proof sketch and leave the construction of the actual machines to you (read as: I am too lazy :) ).

For example, take two functions

$f : \mathbb{N} \setminus \{0\} \to \mathbb{N}, n \mapsto n^2$
$g : \mathbb{N}\setminus \{0\} \to \mathbb{N}, n \mapsto 2^n$

and two Turing machines $M_f, M_g$ which compute $f$ and $g$, respectively. Without loss of generality, we may exclude $0$ because I think it might require special handling in $M_g$, but this depends on the actual construction of the Turing machine - it does not matter this way.

It is safe to assume that minimal Turing machines for $f$ and $g$ would need each of their transitions for any value bigger than $1$, since they both have to implement some kind of looping scheme. This comes from the fact that $f$ and $g$ can be defined primitive-recursively, where $0$ and/or $1$ are the recursion base cases (depending on your exact definition).

Now if we look at the values of these two functions:

$\begin{align} n&:& &1,& &2,& &3,& &4,& &5,& ... \\ f(n)&:& &1,& &\color{green}{4},& &9,& &\color{green}{16},& &25,& ... \\ g(n)&:& &2,& &\color{green}{4},& &8,& &\color{green}{16},& &32,& ... \\ \end{align}$

we see that for the input set $I = \{ 2, 4 \}$, $M_f$ and $M_g$ compute the same value for all input values, and since $\forall n \in I : n > 1$, they would also need all of their transitions.
Still, for any input value $n \geq 1, n \notin I$, they would differ. To generalize: take any two sequences of numbers which are recursively defined and computable and also equal in at least two places; then the statement does not hold.

I overcame my laziness and wrote one Turing machine for the calculation of $2^n$ in unary. The code can be executed on this Turing machine simulator to check that for inputs $11$ and $1111$, every state transition is required.

; Compute 2^n in unary
; First go to the end of the input string and write a single 1.
; The tape content is then <input>01
0 1 1 r 0 ; Move right until we are at the end of the input string.
0 _ 0 r 1 ; Write 0 separator
1 _ 1 l 2 ; Write an initial 1, since this is the value of 2^0.
; Then rewind to the beginning
2 1 1 l 2 ; Keep moving left without altering 1s
2 0 0 l 2 ; Keep moving left without altering 0s
2 _ _ r 3 ; We move back to the start.
; Now we double the string after the separating zero until
; there are no more 1s in the beginning (= decrement counter until 0)
3 1 _ r 4 ; Remove the first 1, this is equal to decrementing the counter
3 0 _ r halt ; if there are no more 1s, we are finished.
4 1 1 r 4
4 0 0 r 5 ; Move right till the separator is found
5 1 1 r 5 ; Move to the end of the result string
5 _ _ l 6
6 1 _ r 7 ; And replace the last 1 with a blank, the initial position marker.
7 _ 1 l 8 ; Now comes the actual doubling
8 1 1 l 8 ; Move left until the blank marker is found
8 _ 1 l 9 ; Move left until the blank marker is found
9 1 _ r 10 ; Shift the blank marker to the left
10 1 1 r 10 ; Move to the end of the string...
10 _ 1 l 8 ; ... and write a 1.
9 0 0 l 11 ; Once we hit 0, we are finished doubling
; And again rewind to the beginning
11 1 1 l 11 ; Keep moving left without altering 1s
11 _ _ r 3 ; We move back to the start.

The computation of $n^2$ is similar, except that instead of doubling the output $n$ times, you add $n$ to it exactly $n$ times.
{ "domain": "cs.stackexchange", "id": 4264, "tags": "turing-machines" }
Actionlib: preempt vs cancel
Question: Hi, I'm currently trying to understand the Actionlib state machine and am not clear about what "preempt" exactly means in this context. The wiki page (http://wiki.ros.org/actionlib/DetailedDescription) describes the "Preempted" state as:

Preempted - Processing of the goal was canceled by either another goal, or a cancel request sent to the action server

For the SimpleActionServer it says here (http://wiki.ros.org/actionlib#Using_the_ActionClient) that:

New goals preempt previous goals based on the stamp in their GoalID field (later goals preempt earlier ones)

However, here (http://wiki.ros.org/actionlib/DetailedDescription) it also says about the SimpleActionClient:

For simplicity, the Simple Action Client tracks only one goal at a time. When a user sends a goal with the simple client, it disables all callbacks associated with the previous goal and also stops tracking its state. Note that it does not cancel the previous goal!

The latter seems to make a distinction between cancelling and preempting, and I am unclear whether there is actually a difference, or what the SimpleActionServer actually does when a new goal arrives. For example: if I have a goal which changes over time, is it enough to simply send the updated goal via the SimpleActionClient to the server, or should I actively cancel the previous goal in advance? I don't need any of the previous goals and don't want to have any overhead on the server regarding them. I am simply interested in the most recent goal. I'd appreciate any kind of information on this topic, thanks!

Originally posted by ssssss on ROS Answers with karma: 33 on 2019-09-18
Post score: 3

Answer: Hi, to make it really clear:

Cancel: stop processing goal(s)
Preempt: stop processing current goal(s) in favor of new goal(s) given.

Does that make sense?
Originally posted by stevemacenski with karma: 8272 on 2019-09-18
This answer was ACCEPTED on the original site
Post score: 8

Original comments

Comment by gvdhoorn on 2019-09-19: "Cancel: stop processing goal(s)" - I'd add: "stop processing currently active goal(s)". The way it's currently phrased makes it sound like the action server will never again accept a new goal, which is not the case.

Comment by ssssss on 2019-09-23: That does make sense now. Thanks for clearing that up!
{ "domain": "robotics.stackexchange", "id": 33785, "tags": "ros-kinetic, actionlib" }
Java class to compute and get an MD5 hash as a string
Question: Here is my class file to compute MD5 hashes on a file. I wanted to be able to easily calculate them without having to create a lot of overhead in the code using the class. Can anyone tell me a list of improvements to make this class acceptable?

package Md5Generator;

/**
 *
 * @author jacob
 */
import java.io.IOException;
import java.security.*;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.Path;

/**
 * The point of this class is to compute the MD5 sum of a file
 */
public class Md5 {

    // two instance variables, one to store the file path and one to store the MD5 sum
    private String path;
    private String md5Sum;

    /**
     * Constructor that takes a file path and calcs the MD5 sum
     *
     * @param filePath the string that contains the full path to the file
     */
    public Md5(String filePath) {
        path = filePath;
        calcMd5(path);
    }

    private void calcMd5(String filePath) {
        // create a MessageDigest object to compute an MD5 sum
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            // create a Path to locate the bytes of the file
            Path path = Paths.get(filePath);
            // read the bytes from the file and put them in the message digest
            md.update(Files.readAllBytes(path));
            // digest the bytes and generate an MD5 sum that's stored in an array of bytes
            byte[] hash = md.digest();
            // convert the byte array to its hex counterparts and store it as a string
            md5Sum = toHexString(hash);
        } catch (IOException | NoSuchAlgorithmException ex) {
            ex.printStackTrace();
        }
    }

    private String toHexString(byte[] bytes) {
        char[] hexArray = {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C', 'D', 'E', 'F'};
        char[] hexChars = new char[bytes.length * 2];
        int v;
        for (int j = 0; j < bytes.length; j++) {
            v = bytes[j] & 0xFF;
            hexChars[j * 2] = hexArray[v / 16];
            hexChars[j * 2 + 1] = hexArray[v % 16];
        }
        return new String(hexChars);
    }

    /**
     * Returns the MD5 sum as a String
     *
     * @return the string that contains the MD5 sum
     */
    public String getMd5Sum() {
        return md5Sum;
    }
}

Answer:
The most striking issue I can see in your code is that you're severely limited in the size of files you can compute the hash on. This line of code is particularly jarring:

md.update(Files.readAllBytes(path));

You read the entire file into memory, which limits you in two ways: the size of your file is limited to the size of your memory; and even if you have lots of memory, a file larger than 2 GB cannot fit in a single byte array. The reason the MessageDigest class has an update method is to deal with larger files and allow you to hash them in parts.

I would use this as an opportunity to learn the somewhat older NIO features: Channels and Buffers. Consider the code:

MessageDigest md;
try {
    // MessageDigest is not AutoCloseable, so it is created outside the try-with-resources
    md = MessageDigest.getInstance("MD5");
} catch (NoSuchAlgorithmException nsae) {
    ....
}
try (FileChannel fc = FileChannel.open(Paths.get(filePath))) {
    long meg = 1024 * 1024;
    long len = fc.size();
    long pos = 0;
    while (pos < len) {
        long size = Math.min(meg, len - pos);
        // map at most 1 MiB of the file at a time and feed it to the digest
        MappedByteBuffer mbb = fc.map(MapMode.READ_ONLY, pos, size);
        md.update(mbb);
        pos += size;
    }
} catch (IOException ioe) {
    ....
}
byte[] hash = md.digest();
.....

The advantage of the above code is that it plays much closer to the operating system. In theory, it should be the fastest possible way for a Java program to read the file, and it should never transfer the actual file content into the Java memory space. It removes at least one data copy between the file represented in the OS on disk and the copy in memory. There is no limit to the size of file you can compute the hash on.
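The same streaming idea is language-independent. As a hedged illustration (this sketch is mine, not part of the original review), here is a minimal Python version that hashes a file in fixed-size chunks via the standard hashlib module, so memory use stays constant regardless of file size:

```python
import hashlib

def md5_of_file(path, chunk_size=1024 * 1024):
    """Compute the MD5 hex digest of a file, reading it in 1 MiB chunks
    so the whole file is never held in memory at once."""
    md = hashlib.md5()
    with open(path, "rb") as f:
        # iter(..., b"") yields chunks until read() returns the empty-bytes sentinel
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md.update(chunk)
    return md.hexdigest().upper()  # upper-case to match the Java class's hex table
```

The repeated update() calls are the direct analogue of feeding one MappedByteBuffer at a time to the Java MessageDigest.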
{ "domain": "codereview.stackexchange", "id": 23980, "tags": "java, file, cryptography, hashcode" }
How do you change $Q$ in thermodynamics?
Question: I know that changing volume changes internal energy and work for an ideal gas. However, how does one add $Q$ into it? What exactly is the mechanism for doing so? I know the definition of $Q$ is that it is the net heat transfer. But how does one "transfer heat" - isn't heat already energy in transfer? And are temperature gradients the only way to do this?

Answer: If I understand your question, you are asking: if the definition of heat is "energy in transfer", then why do we speak of "transfer of heat" - how do we "transfer a transfer"? The second part is: what is the exact mechanism of that transfer? I will answer the second part first.

Heat is transferred in one of three ways: conduction, convection and radiation. Firstly you should understand that heat and temperature are related but are not exactly the same. Heat is measured in joules, which is a unit of energy, whereas temperature is measured in kelvin, which strictly speaking is a measure of the vibration, rotation and translation of individual molecules. For example, a cup of boiling water has a higher temperature than an Olympic swimming pool, but the pool can be thought of as containing more heat. Although even that is not strictly correct - see part 2.

Conduction - Micro explanation: occurs when a hot substance comes into contact with a cold substance and the highly vibrating molecules at the surface of the hot substance bump into less energetic molecules at the surface of the cold substance, and the collisions between the two cause the molecules in the cold substance to start moving more energetically. The ones at the surface then bump into the ones next to them, and so on, so that eventually all of the molecules of the formerly cold substance are moving at an increased level of energy. Real-world example: put a cold metal teaspoon into a cup of hot tea; gradually the spoon warms up.
Convection - Micro explanation: a large body of molecules with a higher vibrational level of energy expands because the molecules are pinging around more, thereby becoming less dense, and so moves up or across due to buoyancy or pressure differences. As it does so, it displaces a large mass of colder molecules. Note that convection only happens in liquids and gases. Real-world example: place a heat source in a cold room; the air immediately around it will be heated by conduction, but that air will rise up to the ceiling by convection. After some time the air near the ceiling will be considerably warmer than the air close to the floor. (If you are ever in a house fire, keep your head near the ground!)

Radiation - Heat is transferred across space, even empty space, by electromagnetic waves. The explanation of exactly what they are is a bit beyond me, but suffice to say it does not require any molecules in the space in between. Real-world example: on a cold but sunny day, sit out in a chair in the sun with a good book (or your iPhone!) with one side of your face facing the sun. After 20 minutes or so you will feel that side of your face is quite a bit warmer than the other, even though the air temperature is the same all around you.

Part 2 - Because heat is defined as energy in transfer, it is not useful to speak of the "heat" of something in isolation. People say this in common speech, but in physics terms it only makes sense in reference to something else. Thus in the swimming pool example in part 1, it's more correct to say something like: the pool has a bigger heat capacity, or a capacity to transfer more joules. If there is no transfer, then instead of heat we talk of internal energy ($U$) or enthalpy ($U+pV$).
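To make the cup-versus-pool comparison concrete, here is a small hedged Python sketch (the masses are my own illustrative assumptions, not from the answer) computing the energy transferred when each body of water changes temperature by the same amount, using $Q = m\,c\,\Delta T$:

```python
# Energy transferred as heat when a body of water changes temperature: Q = m * c * dT
C_WATER = 4186.0  # specific heat capacity of water, J/(kg*K)

def heat_transferred(mass_kg, delta_t_kelvin, c=C_WATER):
    """Return Q in joules for a temperature change of delta_t_kelvin."""
    return mass_kg * c * delta_t_kelvin

# Illustrative numbers: a 0.25 kg cup of boiling water vs. a 2.5e6 kg Olympic pool,
# each cooling by just 1 K.
cup_q = heat_transferred(0.25, 1.0)
pool_q = heat_transferred(2.5e6, 1.0)
# The pool transfers vastly more energy for the same temperature drop, which is
# the sense in which it "contains more heat" despite its lower temperature.
```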
{ "domain": "physics.stackexchange", "id": 67528, "tags": "thermodynamics" }
Domain of simple quantum harmonic oscillator
Question: When discussing the spectral theory of unbounded operators, one often starts with an operator defined on a dense subspace of a Hilbert space, and then proves that the operator is essentially self-adjoint, which allows you to apply a spectral theory. In the case of the simple quantum harmonic oscillator one starts with the operator $$L = -\partial_{xx} + x^2$$ defined on the dense domain of smooth compactly supported functions, $C^{\infty}_0(\mathbb{R}) \subset L^2(\mathbb{R})$. Then one proceeds to show that $L$ is essentially self-adjoint, which means that the closure of $L$ (a closed extension of $L$ to some larger domain) is self-adjoint.

My question is: What is the domain of this extension? My guess is that $L$ extends to $\bar{L}$ where the domain of $\bar{L}$ is given by: $$D(\bar{L}) = \{ e^{-x^2/2}P(x) : P(x)\,\, \text{is a polynomial}\, \}$$ Is this true? If not, does anyone know explicitly the domain of $\bar{L}$? Any references illustrating this fact would be greatly appreciated.

Answer: It is usually very difficult to give a characterization of the domain of self-adjointness of an operator. The harmonic oscillator, however, is a well-known operator. Unfortunately, this does not mean there is a completely explicit form of its domain. Anyway, I will give you what in my opinion is the best shot at explicitness: as you may know, the eigenfunctions of $L$ are given by the Hermite functions, which also form a basis of $L^2(\mathbb{R})$. Let's call $\{h_n(x)\}_{n\in\mathbb{N}}$ such a basis. Also, as usual, denote by $$\langle h_n,\psi\rangle=\int_{\mathbb{R}}\bar{h}_n(x)\psi(x)dx$$ the scalar product between the $n$-th Hermite function and a generic function $\psi\in L^2(\mathbb{R})$.
Then the domain of $\bar{L}$ can be written as: $$D(\bar{L})=\Bigl\{\psi\in L^2(\mathbb{R})\, :\, \sum_{n=0}^\infty n^2 \lvert\langle h_n,\psi\rangle\rvert^2<+\infty\Bigr\}\equiv \Bigl\{\psi\in L^2(\mathbb{R})\, :\, \bigl(n\langle h_n,\psi\rangle\bigr)_{n\in\mathbb{N}} \in l^2\Bigr\}$$ where $l^2$ is the usual sequence space over $\mathbb{C}$.
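As a hedged numerical illustration of this membership condition (the coefficient sequences below are invented examples, not from the answer), one can truncate the sum $\sum_n n^2 |\langle h_n,\psi\rangle|^2$ and watch whether the partial sums stabilize:

```python
# Numerically probe (on a truncation) whether a coefficient sequence c_n = <h_n, psi>
# satisfies sum_n n^2 |c_n|^2 < infinity, i.e. whether psi lies in the domain.
def domain_partial_sum(coeff, n_max):
    """Partial sum of n^2 |c_n|^2 up to n_max for a coefficient function coeff(n)."""
    return sum(n * n * abs(coeff(n)) ** 2 for n in range(n_max + 1))

# c_n = 1/(n+1)^2: the terms behave like 1/n^2, so the sum converges -> in the domain.
in_domain = lambda n: 1.0 / (n + 1) ** 2
# c_n = 1/(n+1): the terms tend to 1, so the sum diverges -> not in the domain,
# even though this sequence is itself square-summable (psi is still in L^2).
not_in_domain = lambda n: 1.0 / (n + 1)

s_small = domain_partial_sum(in_domain, 10_000)   # stabilizes near a finite value
s_large = domain_partial_sum(not_in_domain, 10_000)  # grows roughly linearly in n_max
```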
{ "domain": "physics.stackexchange", "id": 16911, "tags": "quantum-mechanics, mathematical-physics, operators, hilbert-space" }
Simulation of DNA sequences through substitution rates
Question: I'm looking for a little bit of guidance. My question is regarding the simulation of DNA sequences with a fixed substitution rate. The majority of the programs for simulating sequences use continuous-time Markov chain models, with different instantaneous rate matrices that can be modified according to the free parameters of each model (JC69, HKY, GTR, etc.). For example, in HKY the free parameters are the transition/transversion rate (kappa) and the nucleotide frequencies (pi). In these programs (for example INDELible, Seq-Gen, Pyvolve), the substitution rate is expected to be represented by the branch lengths of the phylogenetic tree used to create the simulations.

"...Rate matrices are rescaled by INDELible such that the branch lengths represent the expected number of substitutions per site (or the average expected number of substitutions per site under a heterogeneous-sites model)." (P2 - Simulation of substitutions) - INDELible: a flexible simulator of biological sequence evolution

"...Each branch length is assumed to denote the mean number of nucleotide substitutions per site that will be simulated along that branch..." (P2 - Algorithm) - Seq-Gen: an application for the Monte Carlo simulation of DNA sequence evolution along phylogenetic trees

Let's say I want to simulate sequences with a mean substitution rate of 2.5 x 10^-7 per year. How would I prepare a tree that can accurately represent that substitution rate and be used in the simulation? In other words, how can the branch lengths accurately represent that substitution rate? Any insight is appreciated! Thanks in advance!

Answer: UPDATE based on comment - Simplest answer: just rescale your substitution rate from years to units of branch length. (Stop thinking in terms of years.)

Original answer: The short answer is that you have to calibrate the branch lengths of the tree to yield the substitution rates per year you want. (Leave the topology alone.)
Tree branch lengths are usually agnostic to chronological time (e.g. years). Instead, they reflect whatever model was used to construct the tree. This can be simple substitution counts (e.g. Jukes-Cantor), or a sophisticated model that takes into account a lot of information about relative rates of different changes, possibly learned from the dataset itself (e.g. GTR). You yourself will have to decide what the conversion factor is between your branch lengths (presumably derived from such a substitution model or a distance metric) and units of chronological time. In some trees the branch lengths may already be calibrated to units of chronological time, but those will be rare.

So what I'd do is make sure that the branch lengths of the tree are of a length that will, on average, yield the number of substitutions that you expect based on how long (in years) a given branch on the tree is supposed to be. For example, if you have a tree of primates, then you probably want root-to-tip distances on the order of tens of millions of years. If you have a tree of all bacteria, you want the root-to-tip distances to be on the order of billions of years. Note that this is identical to rescaling the substitution matrix to substitutions per year. What is crucial is that you know what you mean by "substitutions per year" and how you are defining that in terms of the tree.
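A minimal way to do that rescaling can be sketched in Python (the rate is the example value from the question; the helper name and the example Newick string are my own assumptions):

```python
def branch_length(rate_per_site_per_year, years):
    """Convert a clock rate and a branch duration in years into the expected
    number of substitutions per site, which is how simulators such as
    INDELible and Seq-Gen interpret branch lengths."""
    return rate_per_site_per_year * years

# With the question's rate of 2.5e-7 substitutions per year, a branch spanning
# one million years should be written with length 0.25 substitutions/site.
bl = branch_length(2.5e-7, 1_000_000)
# A Newick tree using such lengths, e.g. "(A:0.25,B:0.25);", then encodes the
# desired per-year rate implicitly.
```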
{ "domain": "bioinformatics.stackexchange", "id": 1989, "tags": "phylogeny, phylogenetics" }
Why will the Hindley-Milner algorithm never yield a type like t1 -> t2?
Question: I'm reading about the Hindley-Milner typing algorithm while writing an implementation, and see that, as long as every variable is bound, you'll always get either atomic types or types where the arguments determine the final type, such as t1 -> t1 or (t1 -> t2) -> (t1 -> t2), where t1 and t2 are type variables. I cannot think of a way you'd get something like t1 -> t2 or simply t1, which I understand would mean the algorithm is broken, since there would be no way to determine the actual type of the expression. How do you know you'll never get a type such as these "broken" ones as long as every variable is bound? I know the algorithm yields types with variables, but these are always resolved once you pass the arguments to the function, which wouldn't be the case in a function with type t1 -> t2. This is why I want to know how we know for sure the algorithm will never yield such types. (It seems that you can get these "broken" types in ML, but I'm asking about lambda calculus.)

Answer: In the lambda calculus with no constants with the Hindley-Milner type system, you cannot get any such types where the result of a function is an unresolved type variable. All type variables have to have an "origin" somewhere. For example, there is no term of type $\forall \alpha,\beta.\; \alpha\mathbin\rightarrow\beta$, but there is a term of type $\forall \alpha.\; \alpha\mathbin\rightarrow\alpha$ (the identity function $\lambda x.x$). Intuitively, a term of type $\forall \alpha,\beta.\; \alpha\mathbin\rightarrow\beta$ requires being able to build an expression of type $\beta$ out of thin air. It is easy to see that there is no value which has such a type. More precisely, if the type variable $\beta$ does not appear in the type of any term variable in the environment, then there is no term of type $\beta$ which is in head normal form.
You can prove this by structural induction on the term: either the variable at the head would have to have the type $\beta$, or one of the arguments would have to have a principal type involving $\beta$, i.e. there would be a smaller suitable term. Just because there is no value of a certain type doesn't mean that there is no term of that type: there could be a term with no value, i.e. a non-terminating term (precisely speaking, a term with no normal form). The reason why there is no lambda term with such types is that all well-typed HM terms are strongly normalizing. This is a generalization of the result that states that the simply typed lambda calculus is strongly normalizing. It is a consequence of the fact that System F is strongly normalizing: System F is like HM, but allows type quantifiers everywhere in types, not just at the top level. For example, in System F, $\Delta = \lambda x. x \, x$ has the type $(\forall \alpha. \alpha) \rightarrow (\forall \alpha. \alpha)$, but $\Delta\,\Delta$ is not well-typed. HM and System F are examples of type systems that have a Curry-Howard correspondence: well-typed terms correspond to proofs in a certain logic, and types correspond to formulas. If a type system corresponds to a consistent theory, then that theory does not allow proving theorems such as $\forall A, \forall B, A \Rightarrow B$; therefore there is no term of the corresponding type $\forall \alpha,\beta.\; \alpha\mathbin\rightarrow\beta$. The type system allows one to deduce "theorems for free" about functions over data structures. This result breaks down as soon as you add certain constants to the calculus. For example, if you allow a general fixpoint combinator such as $Y$, it is possible to build terms of arbitrary type: $Y (\lambda x.x)$ has the type $\forall \alpha. \alpha$.
The equivalent of a general fixpoint combinator in the Curry-Howard correspondence is an axiom that states $\forall A, \forall B, A \Rightarrow B$, which makes the logic obviously unsound. Finding the fine line between type systems that ensure strong normalization and type systems that don't is a difficult and interesting problem. It is an important problem because it determines which logics are sound, in other words which programs embody proofs of theorems. You can go a lot further than System F, but the rules become more complex. For example, the calculus of inductive constructions which is the basis of the Coq proof assistant, is strongly normalizing yet is capable of describing common inductive data structures and algorithms over them, and more. As soon as you get to real programming languages, the correspondence breaks down. Real programming languages have features such as general recursive functions (which may not terminate), exceptions (an expression that always raises an exception never returns a value and hence can have any type in most type systems), recursive types (which allow non-termination to sneak in), etc.
{ "domain": "cs.stackexchange", "id": 2218, "tags": "lambda-calculus, type-theory, typing, type-inference" }
Is this an ok implementation of an UnorderedPair?
Question: I want to create a pair where new UnorderedPair(ObjA, ObjB) == new UnorderedPair(ObjB, ObjA) returns true. I also want to avoid any (un)boxing. Have I gone about this in the correct way?

public struct UnorderedPair<T> : IEquatable<UnorderedPair<T>>
{
    public T A;
    public T B;

    public UnorderedPair(T a, T b)
    {
        A = a;
        B = b;
    }

    public override int GetHashCode()
    {
        return A.GetHashCode() ^ B.GetHashCode();
    }

    public bool Equals(UnorderedPair<T> other)
    {
        return (other.A.Equals(A) && other.B.Equals(B))
            || (other.A.Equals(B) && other.B.Equals(A));
    }

    public override bool Equals(object obj)
    {
        return Equals((UnorderedPair<T>)obj);
    }

    public static bool operator ==(UnorderedPair<T> a, UnorderedPair<T> b)
    {
        return a.Equals(b);
    }

    public static bool operator !=(UnorderedPair<T> a, UnorderedPair<T> b)
    {
        return !(a == b);
    }
}

Answer: One thing: I would make the struct immutable, since mutable structs have really bad unintended effects. Otherwise, this is a good, solid implementation.

public struct UnorderedPair<T> : IEquatable<UnorderedPair<T>>
{
    private readonly T a;
    private readonly T b;

    public UnorderedPair(T a, T b)
    {
        this.a = a;
        this.b = b;
    }

    public T A { get { return this.a; } }

    public T B { get { return this.b; } }

    public override int GetHashCode()
    {
        return this.a.GetHashCode() ^ this.b.GetHashCode();
    }

    public bool Equals(UnorderedPair<T> other)
    {
        return (other.A.Equals(this.a) && other.B.Equals(this.b))
            || (other.A.Equals(this.b) && other.B.Equals(this.a));
    }

    public override bool Equals(object obj)
    {
        return this.Equals((UnorderedPair<T>)obj);
    }

    public static bool operator ==(UnorderedPair<T> a, UnorderedPair<T> b)
    {
        return a.Equals(b);
    }

    public static bool operator !=(UnorderedPair<T> a, UnorderedPair<T> b)
    {
        return !(a == b);
    }
}

And because time marches on, here's a more compact and performant C# 7.2 version:

public readonly struct UnorderedPair<T> : IEquatable<UnorderedPair<T>>
{
    public UnorderedPair(in T a, in T b) => (this.A, this.B) = (a, b);

    public T A { get; }

    public T B { get; }

    public override int GetHashCode() => this.A.GetHashCode() ^ this.B.GetHashCode();

    public bool Equals(UnorderedPair<T> other)
        => (other.A.Equals(this.A) && other.B.Equals(this.B))
        || (other.A.Equals(this.B) && other.B.Equals(this.A));

    public override bool Equals(object obj) => this.Equals((UnorderedPair<T>)obj);

    public static bool operator ==(in UnorderedPair<T> a, in UnorderedPair<T> b) => a.Equals(b);

    public static bool operator !=(in UnorderedPair<T> a, in UnorderedPair<T> b) => !(a == b);
}
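For comparison, the same unordered-equality contract can be sketched in Python (a hedged illustration of the idea, not part of the review); note that the hash must be symmetric in the two elements so that equal pairs hash alike:

```python
class UnorderedPair:
    """A pair for which UnorderedPair(a, b) == UnorderedPair(b, a)."""
    __slots__ = ("a", "b")

    def __init__(self, a, b):
        self.a, self.b = a, b

    def __eq__(self, other):
        if not isinstance(other, UnorderedPair):
            return NotImplemented
        return (self.a == other.a and self.b == other.b) or \
               (self.a == other.b and self.b == other.a)

    def __hash__(self):
        # XOR is commutative, so hash(P(a, b)) == hash(P(b, a)),
        # mirroring the C# GetHashCode above.
        return hash(self.a) ^ hash(self.b)
```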
{ "domain": "codereview.stackexchange", "id": 1255, "tags": "c#" }
Why does this infinitesimal transformation tell us the boson vectors carry charge?
Question: I'm studying from "Cheng, Li - Gauge theory of elementary particle physics", and in section 8.1, where it talks about non-Abelian gauge symmetry and the theory you can construct by imposing local $SU(2)$ symmetry, considering infinitesimal transformations, the book arrives at this infinitesimal transformation law for the boson vectors $A_{\mu}^i(x)$: $$A_{\mu}^{i\prime}(x)=A_{\mu}^i(x)+\varepsilon^{ijk}\theta^jA_{\mu}^k(x)-\frac{1}{g}\partial_{\mu}\theta^i(x)\tag{8.28}$$ where $\vec{\theta}(x)$ is the vector of parameters of the local $SU(2)$ transformation. Then the book says:

The second term is clearly the transformation for a triplet (the adjoint) representation under $SU(2)$. Thus the $A_{\mu}^i$s carry charges, in contrast to the Abelian gauge field.

I don't understand this sentence. Why does this equation imply that these fields carry charge?

Answer: More generally, the generators $$Q_a=R(T_a)\tag{1}$$ of global (= $x$-independent) gauge transformations $$\phi^{\prime}= e^{i\theta^a Q_a}\phi\tag{2}$$ of a field $\phi$ are associated with its (Noether) charges. Here the $T_a$ are a Lie algebra basis, and $R$ denotes a representation. Noether's theorem proves charge conservation. Conversely, the charges generate global gauge symmetries. Cheng & Li are in eq. (8.28) considering infinitesimal gauge transformations of the gauge field $A_{\mu}$ in the adjoint representation. The third term drops out for global gauge transformations, so eq. (8.28) becomes of the form of eq. (2). Alternatively, the charges (1) also appear in the Feynman rules for cubic vertices, which lead to non-Abelian generalizations of repulsion/attraction between charges.
{ "domain": "physics.stackexchange", "id": 91931, "tags": "quantum-field-theory, charge, gauge-theory, group-theory, yang-mills" }
Why is cyclopropane-1,1-diol stable?
Question: The reason given in my book is that it's stable due to steric relief. The answer given on Quora said "there is a lot of angle strain." What exactly is steric relief? Can anyone help me elaborate on how these points contribute to the answer? Also, what is the minimum angle needed to provide enough strain that a geminal diol is stable?

Answer: The bonds in cyclopropane-1,1-diol are less strained than those in cyclopropanone. The preferred/lowest-energy tetrahedral bond angle is 109.5 degrees, so the closer the angles can get to that, the lower the energy. In cyclopropanes the angles are 60 degrees. The lowest-energy angle for the R2C=O system (sp2) is 120 degrees; in cyclopropanone it is 60 degrees. This is a very strained and unstable system, and anything that reduces that strain is a positive change energy-wise. This problem is discussed here; see problem 2.
{ "domain": "chemistry.stackexchange", "id": 12166, "tags": "organic-chemistry" }
Voluntary actions of Autonomic Nervous System
Question: "Somatic motor activity is always voluntary, while ANS motor activity is usually involuntary." (p. 762) This indicates the possibility of voluntary activity by the ANS. So, what are the voluntary actions of the ANS? My attempt: Wikipedia says: "Most autonomous functions are involuntary but they can often work in conjunction with the somatic nervous system which provides voluntary control." So, is it right to say that the fight-or-flight response is the voluntary activity of the ANS?

Answer: I have no idea what the author has in mind, but their statement is wrong from the beginning: somatic motor activity is not always voluntary. Reflexes are a good and obvious example. The "fight or flight" response is basically half of the entire ANS (the sympathetic part), so that's incredibly broad. Parts of that response are definitely involuntary. I think it's more likely that Wikipedia is thinking of things like breathing, which is normally under the ANS but which you can also control manually. They may also be thinking of things you can do to directly influence your autonomic responses; for example, if you start running, you are voluntarily moving your muscles, but the autonomic nervous system will also participate (for example, by increasing your heart and breathing rates).
{ "domain": "biology.stackexchange", "id": 7485, "tags": "human-biology, neuroscience, autonomic-nervous-system" }
Least number of guesses needed to determine all unknown subsets of a set
Question: Say I have a set $\mathbb{S}=\{1,2,...,n\}$. I have an adversary who breaks up $\mathbb{S}$ into $k$ unknown and disjoint subsets. Denote this new set $\mathbb{A}$. I can guess any combination $s$, and my adversary has to tell me if $s$ is itself a subset of any element of $\mathbb{A}$. For example, if $n=4$, a valid $\mathbb{A}$ might be $\{ \{2\}, \{1,3,4\}\}$. All "true" guesses are $\{1\}$, $\{2\}$, $\{3\}$, $\{4\}$, $\{1,3\}$, $\{3,4\}$, $\{1,4\}$, $\{1,3,4\}$. What's the fastest algorithm, and what is the upper bound on the number of guesses I need to make to fully reverse-engineer $\mathbb{A}$?

Obviously the brute-force algorithm takes $\binom{n}{1} + \binom{n}{2} + ... + \binom{n}{n}$ guesses, one for every possible subset. But I think I only need to test all possible subsets of size 2, i.e. exactly $\binom{n}{2}$ guesses. Then I can represent each of the correct guesses of size 2 as an undirected edge in a graph. Every member of $\mathbb{A}$ of size 3 or greater should form a simple cycle in this graph. So I just need to DFS from every node in this graph, removing the longest cycle detected from the graph, and repeat for all remaining nodes until only lone edges (subsets of size 2) or unconnected vertices (subsets of size 1) remain. This graph has $n$ vertices and at most $\binom{n}{2}$ edges. So the fastest algorithm has a time complexity of $O(n^{2}\cdot\binom{n}{2})$. Is there an approach with fewer guesses or a faster graph algorithm?

P.S.: In case anyone's interested in the context: one example use case is if $n$ known services are hosted on $k$ servers with unknown grouping, and you have some way to overload any subset of services to mount a denial-of-service attack on a server. The problem becomes whether you can profile the servers with the least number of queries. I came across a similar scenario and thought this was a pretty interesting problem with no literature on it.
Answer: A simple algorithm may be: Loop: take the next unassigned element Ei and create a new set Si, assigning Ei to it. Then loop over all unassigned elements Ej, testing the pair {Ei,Ej}. If the test returns true, assign Ej to Si. In the worst case, this is O(kn), as you won't run the outer loop more than k times. In the average case, I am not sure...
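The answer's loop can be sketched concretely; `is_subset` stands in for the adversary's oracle, and the function and variable names are illustrative, not from the post:

```python
def reverse_engineer(n, is_subset):
    """Recover the unknown partition of {1..n} with O(k*n) pairwise queries.

    `is_subset` is the adversary's oracle: it takes a set and returns
    True iff that set is contained in one block of the partition.
    """
    unassigned = list(range(1, n + 1))
    blocks = []
    while unassigned:
        head = unassigned.pop(0)            # start a new block Si with the next free element
        block = {head}
        still_free = []
        for other in unassigned:
            if is_subset({head, other}):    # one pairwise query {Ei, Ej}
                block.add(other)
            else:
                still_free.append(other)
        unassigned = still_free
        blocks.append(block)
    return blocks

# The example from the question: A = {{2}, {1,3,4}}
target = [{2}, {1, 3, 4}]
oracle = lambda s: any(s <= b for b in target)
print(reverse_engineer(4, oracle))
```

The outer loop runs once per block, so at most k times, matching the O(kn) bound.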
{ "domain": "cs.stackexchange", "id": 12862, "tags": "graphs, time-complexity, runtime-analysis, graph-traversal" }
Carefully approach obstacle and ignore costmap?
Question: Hi folks, please imagine the following situation: my robot shall drive close to a specific object on the map, say, a shelf or something like that. It shall move to a pretty precise position relative to the shelf, turn its rear side parallel to the shelf's front and move backwards to about 5mm from the shelf. It then shall perform some special actions like moving an arm, opening a hatch, pulling something from the shelf in and stuff like that... doesn't really matter, actually (I know, that can be achieved by use of smach for example). My question however is how to make the robot move to that specific position, turn around and set back to the given distance (or other similar tasks)? Assuming that there is a set of sensors available that measure the distance to the cabinet, it most likely has to be some kind of control loop, right? And I suppose they are already implemented in some stack/package coming with ROS. So what packages/stacks of ROS can be used for that? Is move_base still applicable for tasks like this, where obstacle avoidance and complex path planning aren't needed, but instead just slowly moving to reach a given distance from an object? Another question that probably has pretty much the same answer is: how do I make my robot follow a wall for a given length, keeping a given distance? I suppose it's the same or at least a similar principle, hence no separate question. Thanks a lot! Cheers, Hendrik Originally posted by Hendrik Wiese on ROS Answers with karma: 1145 on 2013-09-25 Post score: 0 Answer: Is move_base still applicable for tasks like this..? In its current state, no. how do I make my robot follow a wall for a given length, keeping a given distance? I would introduce a costmap layer that makes the lowest cost occur on the given path you want, which would allow you to still perform obstacle avoidance but get the desired path. However, this may not be the optimal solution.
Originally posted by David Lu with karma: 10932 on 2013-09-25 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Hendrik Wiese on 2013-09-25: So far it seems I have to implement a new action with an underlying control stack that performs these kinds of tasks. I'd appreciate any advice regarding ROS components that I could utilize to make the implementation easier and more ROS-conformant. Unless somebody comes up with a better approach. Comment by tfoote on 2013-09-28: Your application is quite specific and will likely need application-specific code. Usually for this type of application people write their own base controller which outputs cmd_vel velocities and uses the odometry from the still-running navigation stack. (Use a cmd_vel mux to switch off the nav stack.)
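The "write your own base controller" suggestion can be sketched without any ROS dependencies as a simple proportional loop on the measured distance. The gain, speed limit and the `approach_speed` name are illustrative assumptions, not from the thread; a real node would publish the result as the linear part of a cmd_vel Twist message:

```python
def approach_speed(measured_gap, target=0.005, gain=0.5, max_speed=0.05):
    """Proportional controller: commanded speed (m/s) toward the shelf.
    Returns zero once the target gap (5 mm in the question) is reached."""
    error = measured_gap - target
    return max(0.0, min(max_speed, gain * error))

# Simulate a 0.5 m approach with a 50 ms control period.
gap, dt = 0.5, 0.05
for _ in range(2000):
    gap -= approach_speed(gap) * dt

print(round(gap, 3))   # settles at the 0.005 m target without overshoot
```

Because the commanded step `gain * error * dt` is always smaller than the remaining error, the robot creeps up on the shelf instead of overshooting into it.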
{ "domain": "robotics.stackexchange", "id": 15653, "tags": "navigation, move-base" }
Concurrent algorithm for strongly connected components (SCCs)
Question: Is anybody aware of a concurrent version of Tarjan's SCC algorithm, Kosaraju's algorithm or any other fast, O(|V| + |E|) algorithm for finding SCCs? Neither of those algorithms seems to be very hard to multithread, but I'd be happy for somebody else to have done this job. What I'm trying to handle here is an 8 GB directed graph, which I keep in RAM using a big AWS instance, and I'd like to make good use of all 16 cores. Answer: This answer might come late, but there's work by Gavin Lowe: "Concurrent Depth-First Search Algorithms", Proceedings of TACAS 2014, p. 202--216, Volume 8413 of LNCS, Springer, that describes parallel algorithms for computing strongly connected components, lasso-shaped paths, and finding nodes that are part of a cycle.
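As a sequential baseline for what the cited paper parallelises, here is an iterative sketch of Tarjan's algorithm (recursion replaced by an explicit work stack, which matters for graphs of this size); the names are illustrative:

```python
def tarjan_scc(graph):
    """Iterative Tarjan, O(|V| + |E|). `graph` maps a vertex to its successors.
    Returns a list of strongly connected components."""
    index, lowlink = {}, {}
    stack, on_stack = [], set()
    sccs = []
    counter = 0
    for root in graph:
        if root in index:
            continue
        index[root] = lowlink[root] = counter; counter += 1
        stack.append(root); on_stack.add(root)
        work = [(root, iter(graph.get(root, ())))]   # explicit DFS call stack
        while work:
            v, it = work[-1]
            advanced = False
            for w in it:                             # resumes where it left off
                if w not in index:
                    index[w] = lowlink[w] = counter; counter += 1
                    stack.append(w); on_stack.add(w)
                    work.append((w, iter(graph.get(w, ()))))
                    advanced = True
                    break
                if w in on_stack:
                    lowlink[v] = min(lowlink[v], index[w])
            if advanced:
                continue
            work.pop()                               # v is fully explored
            if work:
                parent = work[-1][0]
                lowlink[parent] = min(lowlink[parent], lowlink[v])
            if lowlink[v] == index[v]:               # v is the root of an SCC
                component = []
                while True:
                    w = stack.pop(); on_stack.discard(w)
                    component.append(w)
                    if w == v:
                        break
                sccs.append(component)
    return sccs

g = {1: [2], 2: [3], 3: [1], 4: [3, 5], 5: [4], 6: []}
print(tarjan_scc(g))   # components {1,2,3}, {4,5}, {6}
```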
{ "domain": "cstheory.stackexchange", "id": 3059, "tags": "ds.algorithms, graph-algorithms, ds.data-structures" }
Quicksort in JavaScript, using nested functions
Question: I come from a Java/Swift environment. So, I'm used to showing the user of my code only "what is necessary". For JavaScript I use Visual Studio Code. When I try to use the Intellisense feature, it shows all my functions/methods when I import it. So, to hide the unnecessary functions, I opted for using nested functions. Check out this quicksort code: /** * Sorts the Array in-place * * @param {Array.<Number | String>} array Array to be sorted */ function sort(array) { quickSort(array, 0, array.length - 1); /** * Quicksort implementation * * @param {Array.<Number | String>} array * @param {Number} start Start index of the array for quicksort * @param {Number} end End index of the array for quicksort */ function quickSort(array, start, end) { if (start < end) { let pivot = partition(array, start, end); quickSort(array, start, pivot - 1); quickSort(array, pivot + 1, end); } /** * Partitions the array for Quicksort * * @param {Array.<Number | String>} array * @param {Number} left Starting index of starting of array/sub-array * @param {Number} right Ending index of starting of array/sub-array * @returns {Number} Returns pivot index */ function partition(array, left, right) { let pivot = array[right]; let i = left - 1; for (var j = left; j < right; j++) { if (array[j] <= pivot) { i++; let temp = array[j]; array[j] = array[i]; array[i] = temp; } } let temp = array[i + 1]; array[i + 1] = array[right]; array[right] = temp; return i + 1; } } } Is this nesting appropriate? Because I (a beginner) feel like if the nested functions are big, readability (I mean figuring out what the code is doing) is an issue. Or should I just follow the classic _ before the function names to indicate it as private or not meant to be used by the user? A simple code-breaking example: For the user of sort(), partition() and quickSort() are unnecessary functions.
Which in turn can cause collisions if the user names their own function partition() (because I might not have mentioned partition() in my API, as it is not usable on its own). And also any advice to improve the above code is welcome. Answer: Nesting functions There is nothing wrong with your approach; a few years back I would not have said so, as some browsers would parse functions within functions each time they were called, with the obvious performance hit. That is a thing of the past, and functions are parsed then cached for use the next time the function is called. Modules to keep global scope clean. Modern JS has modules that let you define exports and then import them as needed in other modules. Each module has its own top-level context and thus does not add to the global scope. Use the export token to define what can be imported. The only way to access items in the module is via the import token, and only those items explicitly exported. Modules also use strict mode by default. Thus you can flatten your code without worrying about name conflicts. If you create a default export in a module you can also import it under another name. Example of a module, flattening your quicksort. /* in file quickSort.js */ export default function sort(array) { quickSort(array, 0, array.length - 1); } function quickSort(array, start, end) { if (start < end) { let pivot = partition(array, start, end); quickSort(array, start, pivot - 1); quickSort(array, pivot + 1, end); } } function partition(array, left, right) { let pivot = array[right]; let i = left - 1; for (var j = left; j < right; j++) { if (array[j] <= pivot) { i++; let temp = array[j]; array[j] = array[i]; array[i] = temp; } } let temp = array[i + 1]; array[i + 1] = array[right]; array[right] = temp; return i + 1; } To access it from another module import sort from "quickSort"; sort([4,3,6,3,7,8]); Or use another name, but only for the default export.
import qSort from "quickSort"; qSort([4,3,6,3,7,8]); Note that modules must be in files for both the exporting code and the importing code, and the script element must be of type "module", e.g. <script type="module" src="quickSort.js"></script> Also, as a convention rather than a requirement, modules have the extension .mjs rather than .js, and some will use .es.mjs. They all have the same MIME type text/javascript. Singleton Before modules it was common to use the singleton pattern (AKA an immediately invoked function expression, IIFE) to reduce or completely eliminate any intrusion into the global scope. Example of a singleton const sort = (() => { function sort(array) { quickSort(array, 0, array.length - 1); } function quickSort(array, start, end) { if (start < end) { let pivot = partition(array, start, end); quickSort(array, start, pivot - 1); quickSort(array, pivot + 1, end); } } function partition(array, left, right) { let pivot = array[right]; let i = left - 1; for (var j = left; j < right; j++) { if (array[j] <= pivot) { i++; let temp = array[j]; array[j] = array[i]; array[i] = temp; } } let temp = array[i + 1]; array[i + 1] = array[right]; array[right] = temp; return i + 1; } return sort; })(); sort([4,3,6,3,7,8]);
{ "domain": "codereview.stackexchange", "id": 31433, "tags": "javascript, beginner, functional-programming, quick-sort" }
Why is Voyager 1 approaching Earth?
Question: When I read JPL's mission status for Voyager 1 and Voyager 2, the distance between Earth and Voyager 1 is decreasing. Is that right? Answer: Voyager $1$ is headed away from the Sun at around $17$ km per second at an angle to the ecliptic of around $35^\circ$. The orbital velocity of the Earth is $29.8$ km per second, and multiplying by $\cos 35^\circ$ to get the component of the velocity in Voyager's direction gives a shade over $24$ km per second. So at the point in its orbit where Earth is moving closest to the direction of Voyager's travel, we are actually catching up with it at around $24 - 17 = 7$ km per second. Conversely, at the point in Earth's orbit where it is moving opposite to Voyager, it is moving away from us at $24 + 17 = 41$ km per second.
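The arithmetic is easy to check (a sketch using the answer's approximate figures):

```python
import math

# Assumed values from the answer: Voyager ~17 km/s at ~35 degrees to the
# ecliptic, Earth's orbital speed ~29.8 km/s.
v_voyager = 17.0
v_earth = 29.8
component = v_earth * math.cos(math.radians(35))  # Earth's speed along Voyager's direction

print(round(component, 1))               # 24.4 -- "a shade over 24"
print(round(component - v_voyager))      # 7  -- closing speed
print(round(component + v_voyager))      # 41 -- receding speed
```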
{ "domain": "physics.stackexchange", "id": 46586, "tags": "solar-system, relative-motion, space-mission" }
Is there a 'very fast growing' hierarchy that would capture System F?
Question: Particular ordinals in slow-growing and fast-growing hierarchies can capture the expressiveness of many predicative type systems. Is there a hierarchy of functions that could possibly capture impredicative System F? Answer: Typically, fast-growing hierarchies are characterized by ordinal notations, which are really just ways to express fast-growing functions (but it's sometimes convenient to see them as ordinals in the mathematical sense). There is a somewhat generic way of assigning an ordinal (notation) to a consistent theory, though it is very non-constructive. For various technical reasons, the ordinal for $\mathrm{PA}_2$ corresponds to the fast-growing functions expressible in System $F$. So the question really is: Is there an ordinal notation for $\mathrm{PA}_2$? As far as I am aware, we do not know the answer to this question, and it is a very important open question in proof theory/ordinal analysis. The following mathoverflow question explains this idea in more detail: https://mathoverflow.net/questions/144041/proof-theoretic-ordinal-of-zfc-or-consistent-zfc-extensions
{ "domain": "cstheory.stackexchange", "id": 4411, "tags": "type-systems, hierarchy-theorems, normalization" }
What's the most accurate way to obtain a locational fix using GPS?
Question: GPS is the most obvious and accessible technology for obtaining a locational "fix" for a robot at any particular time. However, while it's great sometimes, in other locations and situations it's not as accurate as I'd like, so I'm investigating whether there's a relatively easy way to improve on this accuracy (or not, if that turns out to be the case). I've considered the following options, but found limited information online: Would using a much better antenna help, especially for low-signal areas? I'm thinking yes to this, but if so how would I construct such an antenna and know that it's an improvement? Are there any good guides on how to do this? I could use a ready-made antenna if they're not too expensive. Would using multiple separate receivers in tandem help, or would they likely all be off by a similar amount, or would I not be able to extract a meaningful average with this approach? What sort of characteristics should I look for when choosing a good GPS receiver to help accuracy? Is there anything else I should consider which I've missed? Answer: Can't answer all your questions, but based on your use case, differential GPS might help you. Modern tractors are using this for precisely navigating on fields (in autonomous mode). Here fixed ground stations are used, which know their exact position and calculate the error in the current signal. This correction is then used by the other GPS receivers in the area.
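The base-station correction the answer describes can be sketched in a few lines. The coordinates are flat local x/y metres and every number is made up for illustration; real DGPS broadcasts per-satellite pseudorange corrections rather than a simple position delta:

```python
def dgps_correction(base_true, base_measured):
    """Correction vector = surveyed base position minus measured position."""
    return (base_true[0] - base_measured[0], base_true[1] - base_measured[1])

def apply_correction(rover_measured, correction):
    return (rover_measured[0] + correction[0], rover_measured[1] + correction[1])

# A shared atmospheric/clock error of (+3.2, -1.5) m affects both receivers,
# which is exactly why the base station's measurement is informative:
error = (3.2, -1.5)
base_true = (100.0, 200.0)
base_measured = (base_true[0] + error[0], base_true[1] + error[1])
rover_true = (150.0, 180.0)
rover_measured = (rover_true[0] + error[0], rover_true[1] + error[1])

corr = dgps_correction(base_true, base_measured)
print(apply_correction(rover_measured, corr))   # recovers the rover's true position
```

This also illustrates why averaging several nearby receivers (the question's second idea) helps little: they all see nearly the same correlated error, so the mean retains it.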
{ "domain": "robotics.stackexchange", "id": 325, "tags": "localization, gps" }
Number of Configurations of LBA(Linear Bounded Automaton)
Question: The lemma is: Let $M$ be an LBA with $q$ states and $g$ symbols in the tape alphabet. There are exactly $qng^n$ distinct configurations of $M$ for a tape of length $n$. I want to know why an LBA has $qng^n$ configurations. Following is the proof: A configuration of $M$ is like a snapshot in the middle of its computation. A configuration consists of the state of the control, position of the head, and contents of the tape. Here, $M$ has $q$ states. The length of its tape is $n$, so the head can be in one of $n$ positions, and $g^n$ possible strings of tape symbols appear on the tape. The product of these three quantities is the total number of different configurations of $M$ with a tape of length $n$. Answer: If you have $g$ symbols (including a blank) and a tape of size $n$ then there are $g^n$ words of length $n$. This is really basic combinatorics: The reasoning is that you have $g$ options for the first symbol, $g$ options for the second symbol, i.e. $g^2$ options for the first two symbols. Then again $g$ options for the third symbol, giving you $g^3$ options for the first three symbols and so on. If you are not convinced, try it with, for example, a $3$-symbol alphabet. Since your tape has length $n$ there are $n$ possible head positions, so you multiply the number of strings on the tape by $n$. Finally, there are $q$ states, so for every possible string on the tape and every possible head position, we can be in any of the $q$ states, making $qng^n$ configurations of the Turing machine.
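The counting argument can be verified by brute-force enumeration for tiny parameters; a configuration is exactly the (state, head position, tape contents) triple described above:

```python
from itertools import product

def count_configs(q, n, g):
    """Enumerate every (state, head position, tape contents) triple
    for a tape of length n over a g-symbol alphabet, and count them."""
    configs = product(range(q),                        # control state: q choices
                      range(n),                        # head position: n choices
                      product(range(g), repeat=n))     # tape contents: g^n strings
    return sum(1 for _ in configs)

print(count_configs(3, 4, 2))   # 3 * 4 * 2**4 = 192, matching q*n*g^n
```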
{ "domain": "cs.stackexchange", "id": 14544, "tags": "turing-machines, automata, linear-bounded-automata" }
Among N documents, how to summarize the most unique content in each document?
Question: I now have $N$ documents, which share common content but each also has its own unique content. Say I have $3$ legal documents related to the same person. Document $A$ is about land law, document $B$ is about company law and document $C$ is about marriage law. How can I extract the land, company and marriage content from each document respectively and skip the common personal information? It sounds like text summarization but with a very different nature. Any ideas are welcome. Edit: In my situation, $N$ varies and the nature of the unique content is unknown. Answer: It might be worthwhile to try TF-IDF and see if that works for you. Score each term in each document proportional to how often it occurs in that document, but inversely proportional to how often it occurs across multiple documents. Then look at the terms that have the highest scores for each document. You can use scikit-learn's TF-IDF Vectorizer to help you with this, if you are using Python. Presumably, words that are highly specific and relevant to each of your three documents will stand out, and words that relate to personal information (as well as generic legal terms and non-specific English words) will be common to multiple documents and get filtered out. Note: This will get you the specific words that are particular to each document. If the type of "content" you are seeking to extract goes beyond the word level, then you might have to take a different approach. Perhaps one way is to use the words obtained from TF-IDF to highlight the places in each document where the desired content might be found.
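A minimal TF-IDF sketch in pure Python (the answer's actual suggestion is scikit-learn's TfidfVectorizer; the toy documents below are made up). A term appearing in every document gets idf = log(N/N) = 0, which is precisely the "skip the common personal information" effect:

```python
import math
from collections import Counter

def tfidf(docs):
    """docs: list of token lists. Returns one {term: score} dict per document."""
    n = len(docs)
    df = Counter()                       # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    result = []
    for doc in docs:
        tf = Counter(doc)
        result.append({t: (c / len(doc)) * math.log(n / df[t])
                       for t, c in tf.items()})
    return result

# Toy stand-ins for the three legal documents:
docs = [["smith", "land", "deed"],
        ["smith", "company", "shares"],
        ["smith", "marriage", "certificate"]]
scores = tfidf(docs)
print(scores[0]["smith"])                 # 0.0 -- the shared name is filtered out
print(max(scores[0], key=scores[0].get))  # a land-specific term tops document A
```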
{ "domain": "ai.stackexchange", "id": 3051, "tags": "natural-language-understanding, text-summarization" }
How can I increase the discharge time for a capacitor?
Question: I was trying to make a capacitor and used aluminium foil and paper to do this. Then I went to test it, but I found that the capacitor was not working. First I charged it with a 4 V battery and then took off the capacitor to light up a 3 V LED. I also tested for the correct terminals, but still the LED was not lighting up. I think the capacitor's negative terminal is quickly losing its charge to the paper, through which electrons move to the positive terminal. So how can I make it work? Also, how can I increase the discharge time to about 8 to 10 seconds? Answer: Unless you were using a capacitor with a relatively large capacitance, I am not surprised that your LED did not light up. As described, your experiment failed on one or both of two counts. Newspaper absorbs water from the atmosphere and so becomes quite a reasonable conductor. In mentioning waxed paper, @RobTristram was suggesting that you stop the absorption of water by the paper by first carefully heating it to drive out the water and then coating it with wax to ensure that the water did not return. In fact, you are much better off using plastic sheet dried with a fan heater, to remove surface moisture, and I have found that thin acetate sheets of the type used with OHPs (overhead projectors) or a bin bag work well. However, that is not the end of the story. Suppose that you have an LED with a rating of $3\,\rm V$ and $10\, \rm mA$ and want it to be on for $10$ seconds. The charge which the capacitor must deliver would be $10\times 10 = 100\,\rm mC$. If the voltage across the capacitor was about $3\,\rm V$ then you would need a $100/3 \approx 30\,\rm mF$ capacitor. I now come to the second count, which is the capacitance of your capacitor. Suppose the paper has a thickness of $0.1\,\rm mm$, a dielectric constant of about $4$ and the capacitor has a side length of $50\,\rm cm$.
The capacitance of such a capacitor, with there being no air gaps, will be about $90\,\rm \color{red}{nano}farads$ and thus much too small to store sufficient charge at the voltages you are wanting to use. There is another thing which you must consider, and that is the rate of discharge of the capacitor. When an LED is conducting it presents a fairly low resistance to the circuit, and this means that if you had just a capacitor and an LED in the circuit the time constant of the circuit, $CR$, might be fairly small and so the LED would not be on for very long. Large-value capacitors, $\approx \rm F$, are available, so perhaps one of those with an appropriate series resistor and a higher initial charging voltage for the capacitor should enable you to light up your LED. Perhaps, if nothing else, your failure has shown you how difficult it is to store a large amount of electrical energy, e.g. to power an electric car!
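Both estimates can be checked in a few lines (a sketch using the answer's stated values; the parallel-plate formula $C = \varepsilon_0 \varepsilon_r A/d$ ignores air gaps and fringing):

```python
# Values are the answer's assumptions: 0.1 mm paper, dielectric constant ~4,
# a 50 cm square plate, and a 3 V / 10 mA LED run for 10 s.
eps0 = 8.854e-12            # permittivity of free space, F/m
eps_r = 4.0                 # dielectric constant of paper, roughly
A = 0.5 * 0.5               # plate area, m^2
d = 0.1e-3                  # paper thickness, m

C_foil = eps0 * eps_r * A / d
print(f"homemade capacitor: {C_foil * 1e9:.0f} nF")   # tens of nanofarads

Q = 10e-3 * 10              # charge = I * t = 100 mC
C_needed = Q / 3            # at ~3 V across the capacitor
print(f"needed: {C_needed * 1e3:.0f} mF")             # ~33 mF, hundreds of thousands of times larger
```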
{ "domain": "physics.stackexchange", "id": 84944, "tags": "electricity, capacitance" }
Why do I have an extra factor of 3 for self-gravity?
Question: So, I'm trying to calculate the "acceleration" (force / mass) on a spherical object of mass $M$ and radius $R$ due to its own gravity that holds it together. So, here is what I figured. The "acceleration" between two point particles separated by a distance $r$ in the sphere is given by $dF = \frac{Gdm}{r^2}$. So, I figured I can just integrate $dF$ over the sphere. First, to find $dm$: $\frac{dm}{M} = \frac{dV}{V}$ $dm = \frac{M}{V}dV$ So, we have $F = \iiint_\text{sphere} \frac{GM}{Vr^2}dV$. In polar coordinates this is $F = \int_{r = 0}^{R} \int_{\theta = 0}^{\pi} \int_{\phi = 0}^{2\pi} \frac{GM}{\frac{4}{3}\pi r^2 R^3} r^2 \sin\theta \; dr d\theta d\phi $. Simplifying, we have $F = \frac{3GM}{4\pi R^3} \int_0^R dr \int_0^\pi \sin\theta \; d\theta \int_0^{2\pi} d\phi$ Evaluating the integrals gives $F = \frac{3GM}{4\pi R^3} \times R \times 2 \times 2\pi$, which simplifies to $F = \frac{3GM}{R^2}$ However, my lecturer gives this force as $\frac{GM}{R^2}$ without explanation. I already tried googling it, but all I can find is things about self-gravitational potential energy, which I've gathered is something else. So, can someone see where I'm getting an extra factor of 3? And if I'm going about this entirely the wrong way, can someone point me in the direction of a correct derivation? If anyone is wondering about the context, I want to calculate this force because I want to balance it with tidal forces. Answer: I was looking for the surface gravity, which is indeed just given by $\frac{GM}{R^2}$. My integration was nonsense as I did not consider the direction of the gravitational forces, only the magnitude.
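A numeric check (a sketch with $G = M = R = 1$) reproduces both numbers: integrating only the magnitudes of the contributions gives the question's $3GM/R^2$, while summing the actual force vectors on a test mass at the surface gives $GM/R^2$, as the shell theorem requires:

```python
import math

# G = M = R = 1, so the density is rho = 3/(4*pi); the test mass sits at (0, 0, 1).
rho = 3 / (4 * math.pi)
n_r, n_t = 300, 800                 # midpoint grid in r and theta (phi integrates to 2*pi)
dr, dt = 1.0 / n_r, math.pi / n_t

mag_total = 0.0                     # sum of magnitudes |dF| -- the question's integral
z_total = 0.0                       # z-component of the true vector sum
for i in range(n_r):
    r = (i + 0.5) * dr
    for j in range(n_t):
        th = (j + 0.5) * dt
        dV = 2 * math.pi * r * r * math.sin(th) * dr * dt
        mag_total += rho / (r * r) * dV              # 1/r^2 with r the radial coordinate
        s2 = r * r + 1 - 2 * r * math.cos(th)        # true squared distance to the test mass
        z_total += rho * (r * math.cos(th) - 1) / s2 ** 1.5 * dV

print(round(mag_total, 2))     # ~3.0: the spurious factor of 3
print(round(abs(z_total), 2))  # ~1.0 = GM/R^2, from the vector sum
```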
{ "domain": "physics.stackexchange", "id": 65237, "tags": "newtonian-gravity, coordinate-systems, integration, tidal-effect" }
Proof of expression to combine different redshifts
Question: I am trying to arrive at the following expression: $1+z = (1+z_C)(1+z_G)(1+z_D)$ so the total redshift is the product of the cosmological redshift, the gravitational redshift and the Doppler redshift. I know that $1+z = \frac{\lambda_{observed}}{\lambda_{emitted}}$ but I don't know why you would take the product of them in order to get the total redshift. How do you derive the formula for the total redshift? Answer: Suppose light starts out with wavelength $\lambda_\mathrm{emit}$. If you were not moving with respect to the source in either a proper sense (Doppler shift) or comoving sense (cosmological redshift), but the light has to climb out of a potential well, it would have wavelength $\lambda_\mathrm{G} = (1 + z_\mathrm{G}) \lambda_\mathrm{emit}$. Now consider a photon emitted with $\lambda_\mathrm{G}$, but cosmologically far away. You would observe the wavelength $\lambda_\mathrm{CG} = (1 + z_\mathrm{C}) \lambda_\mathrm{G}$. Finally, suppose you are receiving photons of wavelength $\lambda_\mathrm{CG}$, but then you start moving very fast. Now those same photons will appear to you to have wavelength $\lambda_\mathrm{CGD} = (1 + z_\mathrm{D}) \lambda_\mathrm{CG}$. With all three effects at work, you simply multiply the factors: $$ \lambda_\mathrm{CGD} = (1 + z_\mathrm{C}) (1 + z_\mathrm{G}) (1 + z_\mathrm{D}) \lambda_\mathrm{emit}. $$ The same argument holds for any combination of these effects in any order: $$ \lambda_\mathrm{obs} = \lambda_\mathrm{emit} \prod_i (1 + z_i). $$
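A quick numeric illustration of the product rule (the z values are made up for the example; the wavelength is the H-alpha rest wavelength in nm):

```python
z_C, z_G, z_D = 0.50, 0.02, 0.001       # cosmological, gravitational, Doppler

factor = (1 + z_C) * (1 + z_G) * (1 + z_D)
z_total = factor - 1

lam_emit = 656.28                       # H-alpha rest wavelength, nm
lam_obs = lam_emit * factor

print(round(z_total, 5))   # 0.53153 -- note this is not z_C + z_G + z_D = 0.521
print(round(lam_obs, 1))   # ~1005.1 nm
```

The total redshift only reduces to the sum $z_C + z_G + z_D$ in the limit where all the individual redshifts are small.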
{ "domain": "physics.stackexchange", "id": 18802, "tags": "general-relativity, cosmology, doppler-effect, gravitational-redshift" }
Turtlebot laser striking frame in Gazebo simulation
Question: I'm currently experimenting with navigation for a simulated turtlebot using the turtlebot_navigation package. I was having trouble getting a stable position estimate when I noticed that the simulated LIDAR appears to be striking the frame of the robot, which appears to be interfering with the pose estimate generated by the amcl package. I have modified the minimum range of the laser to 0.3m, but this seems to be only a temporary solution. Should the simulated sensor be moved so that it does not strike the frame, or is there a method for directing the navigation system to ignore LIDAR returns that do not meet specific criteria? Originally posted by JonW on ROS Answers with karma: 586 on 2012-11-19 Post score: 0 Answer: The simulated sensor is designed to work the same as the real sensor, which also can hit the poles. Typically obstacle avoidance algorithms have methods for eliminating self-observations. Commonly you ignore anything inside the footprint or known to be near parts of the robot model. Originally posted by tfoote with karma: 58457 on 2015-06-29 This answer was ACCEPTED on the original site Post score: 0
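The "ignore anything inside the footprint" idea can be sketched without ROS types: drop any laser return whose endpoint lands within the robot's footprint radius. The 0.22 m radius, the sensor offset and the function name are illustrative assumptions, not from the thread; a real implementation would transform each endpoint into the base frame and test it against the actual footprint polygon:

```python
import math

def filter_self_hits(ranges, angle_min, angle_increment,
                     sensor_offset=(0.0, 0.0), footprint_radius=0.22):
    """Replace returns that hit the robot itself with NaN."""
    cleaned = []
    for i, r in enumerate(ranges):
        a = angle_min + i * angle_increment
        x = sensor_offset[0] + r * math.cos(a)   # endpoint in the base frame
        y = sensor_offset[1] + r * math.sin(a)
        if math.hypot(x, y) <= footprint_radius:
            cleaned.append(float("nan"))         # self-observation: discard
        else:
            cleaned.append(r)
    return cleaned

scan = [0.15, 1.2, 3.4, 0.10]                    # first and last returns strike the frame
print(filter_self_hits(scan, -math.pi / 2, math.pi / 3))
```

Unlike a blanket minimum-range cut, this keeps genuine close obstacles that are outside the footprint.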
{ "domain": "robotics.stackexchange", "id": 11798, "tags": "gazebo, simulation, turtlebot, turtlebot-navigation, turtlebot-simulator" }
ROS does not detect packages installed with apt-get
Question: Hello, I tried to install packages for ROS Indigo by running the command: sudo apt-get install ros-indigo-<package name> The packages are downloaded and installed but neither roscd nor rosrun can find them, despite sourcing the file setup.bash after the download. Am I missing a step? Best Regards Originally posted by Audren Cloitre on ROS Answers with karma: 5 on 2015-01-29 Post score: 0 Answer: Could be that your cache is (somehow) out-of-date. Try a rospack profile after you've sourced setup.bash. Originally posted by gvdhoorn with karma: 86574 on 2015-01-29 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Audren Cloitre on 2015-01-29: Thank you so much gvdhoorn! It worked and ROS can now retrieve the packages installed. All good. Comment by gvdhoorn on 2015-01-29: Np. Please mark the question as answered by clicking the check mark.
{ "domain": "robotics.stackexchange", "id": 20730, "tags": "ros, package, roscd, rosrun" }
AsyncTcpClient (Asynchronous TcpClient)
Question: I've been doing network programming using C#'s TcpClient for several years. The code below is an asynchronous wrapper for TcpClient that I developed throughout these years. The key methods are: ConnectAsync() - connects asynchronously; RemoteServerInfo is a simple class containing Host, Port, and a boolean indicating whether this is an SSL connection. StartReceiving() - initiates the data reading callbacks; this method is necessary to allow time for events to be hooked up before data starts being processed DataReceivedCallback() - processes data as it is received and passes it on to any subscribed event handlers SendAsync() - sends data asynchronously Some other things to notice: The code uses the old Asynchronous Programming Model for asynchrony. There is some play with buffer sizes. The intention of this is to have adaptive buffer sizes - using a small amount of memory most of the time, but expanding to better cater for larger incoming data (up to a specified maximum) when necessary. I am using a goto statement. This might send cold shivers down your spine, but I thought it was fine in this case. Read this answer if you're religious about never using goto in any situation whatsoever. I would really appreciate code review from other developers (especially from network programmers) to see if this implementation can be improved further. Some things that come to mind include better performance, better use of TAP over APM, and any possible subtle bugs I might have missed. 
Here is the code for AsyncTcpClient: public class AsyncTcpClient : IDisposable { private bool disposed = false; private TcpClient tcpClient; private Stream stream; private int minBufferSize = 8192; private int maxBufferSize = 15 * 1024 * 1024; private int bufferSize = 8192; private int BufferSize { get { return this.bufferSize; } set { this.bufferSize = value; if (this.tcpClient != null) this.tcpClient.ReceiveBufferSize = value; } } public int MinBufferSize { get { return this.minBufferSize; } set { this.minBufferSize = value; } } public int MaxBufferSize { get { return this.maxBufferSize; } set { this.maxBufferSize = value; } } public int SendBufferSize { get { if (this.tcpClient != null) return this.tcpClient.SendBufferSize; else return 0; } set { if (this.tcpClient != null) this.tcpClient.SendBufferSize = value; } } public event EventHandler<byte[]> OnDataReceived; public event EventHandler OnDisconnected; public bool IsConnected { get { return this.tcpClient != null && this.tcpClient.Connected; } } public AsyncTcpClient() { } public async Task SendAsync(byte[] data) { try { await Task.Factory.FromAsync(this.stream.BeginWrite, this.stream.EndWrite, data, 0, data.Length, null); await this.stream.FlushAsync(); } catch (IOException ex) { if (ex.InnerException != null && ex.InnerException is ObjectDisposedException) // for SSL streams ; // ignore else if (this.OnDisconnected != null) this.OnDisconnected(this, null); } } public async Task ConnectAsync(RemoteServerInfo remoteServerInfo, CancellationTokenSource cancellationTokenSource = null) { try { await Task.Run(() => this.tcpClient = new TcpClient()); await Task.Factory.FromAsync(this.tcpClient.BeginConnect, this.tcpClient.EndConnect, remoteServerInfo.Host, remoteServerInfo.Port, null); // get stream and do SSL handshake if applicable this.stream = this.tcpClient.GetStream(); if (remoteServerInfo.Ssl) { var sslStream = new SslStream(this.stream); sslStream.AuthenticateAsClient(remoteServerInfo.Host); this.stream = 
sslStream; } if (cancellationTokenSource != null && cancellationTokenSource.IsCancellationRequested) { this.Dispose(); this.stream = null; } } catch(Exception) { // if task has been cancelled, then we don't care about the exception; // if it's still running, then the caller must receive the exception if (cancellationTokenSource == null || !cancellationTokenSource.IsCancellationRequested) throw; } } public void StartReceiving() { byte[] buffer = new byte[bufferSize]; this.stream.BeginRead(buffer, 0, buffer.Length, DataReceivedCallback, buffer); } protected virtual void DataReceivedCallback(IAsyncResult asyncResult) { try { byte[] buffer = asyncResult.AsyncState as byte[]; int bytesRead = this.stream.EndRead(asyncResult); if (bytesRead > 0) { // adapt buffer if it's too small / too large if (bytesRead == buffer.Length) this.BufferSize = Math.Min(this.BufferSize * 10, this.maxBufferSize); else { reduceBufferSize: int reducedBufferSize = Math.Max(this.BufferSize / 10, this.minBufferSize); if (bytesRead < reducedBufferSize) { this.BufferSize = reducedBufferSize; if (bytesRead > this.minBufferSize) goto reduceBufferSize; } } // forward received data to subscriber if (this.OnDataReceived != null) { byte[] data = new byte[bytesRead]; Array.Copy(buffer, data, bytesRead); this.OnDataReceived(this, data); } // recurse byte[] newBuffer = new byte[bufferSize]; this.stream.BeginRead(newBuffer, 0, newBuffer.Length, DataReceivedCallback, newBuffer); } else this.OnDisconnected(this, null); } catch(ObjectDisposedException) // can occur when closing, because tcpclient and stream are disposed { // ignore } catch(IOException ex) { if (ex.InnerException != null && ex.InnerException is ObjectDisposedException) // for SSL streams ; // ignore else if (this.OnDisconnected != null) this.OnDisconnected(this, null); } } protected virtual void Dispose(bool disposing) { if (!disposed) { if (disposing) { // Dispose managed resources. 
if (this.tcpClient != null) { this.tcpClient.Close(); this.tcpClient = null; } } // There are no unmanaged resources to release, but // if we add them, they need to be released here. } disposed = true; // If it is available, make the call to the // base class's Dispose(Boolean) method // base.Dispose(disposing); } public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } } Answer: You should only use Task.Run to start threads, which you don't want to do when you're doing IO, at least directly. You should let the runtime make that decision. Also you need to make sure your tcpClient isn't already connected. There's also a tcpClient.ConnectAsync that returns a Task, so you should use that. Also you should never pass a CancellationTokenSource to an async method; use CancellationToken. And the .NET async library is designed to throw an OperationCanceledException on cancellation so the task is marked canceled, so use that. It's also bad practice to dispose of a class from within itself, as that can have some very undesired effects; simply close and dispose of your tcpClient. So ConnectAsync could look like: private async Task Close() { await Task.Yield(); if(this.tcpClient != null) { this.tcpClient.Close(); this.tcpClient.Dispose(); this.tcpClient = null; } if(this.stream != null) { this.stream.Dispose(); this.stream = null; } } private async Task CloseIfCanceled(CancellationToken token, Action onClosed = null) { if(token.IsCancellationRequested) { await this.Close(); if(onClosed != null) onClosed(); token.ThrowIfCancellationRequested(); } } public async Task ConnectAsync(RemoteServerInfo remoteServerInfo, CancellationToken cancellationToken = default(CancellationToken)) { try { //Connect async method await this.Close(); cancellationToken.ThrowIfCancellationRequested(); this.tcpClient = new TcpClient(); cancellationToken.ThrowIfCancellationRequested(); await this.tcpClient.ConnectAsync(remoteServerInfo.Host, remoteServerInfo.Port); await
this.CloseIfCanceled(cancellationToken); // get stream and do SSL handshake if applicable this.stream = this.tcpClient.GetStream(); await this.CloseIfCanceled(cancellationToken); if (remoteServerInfo.Ssl) { var sslStream = new SslStream(this.stream); sslStream.AuthenticateAsClient(remoteServerInfo.Host); this.stream = sslStream; await this.CloseIfCanceled(cancellationToken); } } catch(Exception) { this.CloseIfCanceled(cancellationToken).Wait(); throw; } } There are also async methods on Stream that return Tasks, so use those. Also, your OnDisconnected event was not thread safe; you need to assign it to an internal variable. You should also never pass null EventArgs. Also you can simplify the BeginRead receive callback to just a receive method and put it in a loop with async/await and a cancellation token. Also I would remove the goto and replace it with a do/while (btw I'm not 100% sure the reduce buffer size logic works, someone else might want to address that): public async Task SendAsync(byte[] data, CancellationToken token = default(CancellationToken)) { try { await this.stream.WriteAsync(data, 0, data.Length, token); await this.stream.FlushAsync(token); } catch (IOException ex) { var onDisconnected = this.OnDisconnected; if (ex.InnerException != null && ex.InnerException is ObjectDisposedException) // for SSL streams ; // ignore else if (onDisconnected != null) onDisconnected(this, EventArgs.Empty); } } public async Task Receive(CancellationToken token = default(CancellationToken)) { try { if(!this.IsConnected || this.IsReceiving) throw new InvalidOperationException(); this.IsReceiving = true; byte[] buffer = new byte[bufferSize]; while(this.IsConnected) { token.ThrowIfCancellationRequested(); int bytesRead = await this.stream.ReadAsync(buffer, 0, buffer.Length, token); if(bytesRead > 0) { if(bytesRead == buffer.Length) this.BufferSize = Math.Min(this.BufferSize * 10, this.maxBufferSize); else { do { int reducedBufferSize = Math.Max(this.BufferSize / 10, this.minBufferSize); if(bytesRead >= reducedBufferSize) break; 
this.BufferSize = reducedBufferSize; } while(bytesRead > this.minBufferSize); } var onDataReceived = this.OnDataReceived; if(onDataReceived != null) { byte[] data = new byte[bytesRead]; Array.Copy(buffer, data, bytesRead); onDataReceived(this, data); } } buffer = new byte[bufferSize]; } } catch(ObjectDisposedException){} catch(IOException ex) { var evt = this.OnDisconnected; if (ex.InnerException != null && ex.InnerException is ObjectDisposedException) // for SSL streams ; // ignore else if(evt != null) evt(this, EventArgs.Empty); } finally { this.IsReceiving = false; } }
{ "domain": "codereview.stackexchange", "id": 12404, "tags": "c#, networking, tcp" }
Unintentional point cloud duplication during depth image conversion
Question: I have quite a challenging question. I use ROS Melodic on Ubuntu 18.04. My goal is to transform a ROS depth image to pointcloud2 format and show it in rviz. The theory seems OK, but I do not know where I fail. My problem can be seen in the image: it looks as if the point cloud is somehow duplicated, with the two copies shifted relative to each other, although there is only one corridor in front of the camera. I uploaded my code as a gist where I transform the depth image. What I now see in rviz is this: there is a corridor in front of my camera, but somehow twice. I thought about the camera calibration and parameters, but I use the data from CameraInfo messages. Although I played with the fx, fy, cx, cy parameters, I cannot get the two corridors to overlap. Where am I wrong? Answer: At first I thought there was some sort of systematic raw-format or resolution error, sort of like ROS + kinect depth data duplication. Then I noticed that each corridor appears in a different perspective, as if the sensor turned to point a different way between two captured frames. You didn't say what sensor you're using, but I assume it's not two cams that happen to point 45 degrees apart! One might surmise that there are actually two or more frames piled up or appended in the same buffer -- sort of like a double exposure -- and the sensor actually did turn between the shots. Maybe some frame buffer or index needs to be cleared or reinitialized, or something, between frames?
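For reference, the standard pinhole back-projection that such a conversion implements can be sketched in a few lines of NumPy (this is an illustrative sketch, not the poster's gist; the function name and parameters are hypothetical, with fx, fy, cx, cy taken from the CameraInfo message):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into an N x 3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

If this is run on a buffer that accumulates frames, or its output is appended to a cloud that is never cleared between callbacks, the result is exactly the "double exposure" the answer describes: two back-projected corridors from two different poses in one message.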
{ "domain": "robotics.stackexchange", "id": 2453, "tags": "ros, point-cloud, rviz" }
What is "string universality"?
Question: I encountered the notion of "string universality" in the TASI lectures on Supergravity (April 2011). What does this actually mean? Does it mean that every consistent (in particular, with cancellation of anomalies) supergravity theory has a string theory as its UV completion? After a bit of googling, it rather seems to be considered a bit problematic, because according to the concept every kind of consistent supergravity would have a UV string completion, even a 4D supergravity theory. As I am a novice in string theory (I have some basic knowledge of supergravity, though), I would appreciate an explanation without too much jargon. Thank you. Answer: String universality is the conjecture that every consistent supergravity QFT arises as the low-energy effective QFT of a suitable compactification (essentially a choice of "small" compact dimensions that are too small to detect as dimensions in the effective theory) of (10-dimensional) string theory. The origin of this idea is in "String Universality in Six Dimensions" by Kumar and Taylor, where they show this for a large class of 6d SUGRAs. The dimensionality of the effective QFT is "adjustable" by choosing a suitable compactification - e.g. for 6d SUGRAs the compactification manifold is four-dimensional, and for 4d SUGRAs the compactification manifold is six-dimensional.
{ "domain": "physics.stackexchange", "id": 77188, "tags": "string-theory, supergravity" }
Prove that $L_1$ is regular if $L_2$, $L_1L_2$, $L_2L_1$ are regular
Question: Prove that $L_1$ is regular if $L_2$, $L_1L_2$, $L_2L_1$ are regular. These are the things that I would use to start. As $L_1L_2$ is regular, the homomorphic image $h(L_1L_2)$ is regular. Let $h(L_1) = L_2$ and $h(L_2) = L_1$; then $h(L_1L_2) = L_2L_1$ is regular (we already know that), or $h(L_2) = \epsilon$ and we get $L_1$. By reversal, $L_1L_2 = (L_2L_1)^{R}$, same result. But I don't know how to, for example, intersect something that gives me $L_1$ in order to preserve closure and finally conclude that $L_1$ is regular. Any help? Answer: Here is a counterexample. Let $L_1$ be any language containing the empty string, over some alphabet $\Sigma$, and let $L_2 = \Sigma^*$. Then $L_2 = L_1L_2 = L_2L_1 = \Sigma^*$ are all regular, but $L_1$ need not be; in fact, it could even be uncomputable!
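The counterexample is easy to sanity-check on a finite slice of $\Sigma^*$. A small Python check (the length bound and the choice of $L_1$ are arbitrary; here $L_1$ is the empty string together with a slice of the non-regular language $\{a^k b^k\}$):

```python
from itertools import product

def words_up_to(sigma, n):
    """All words over the alphabet sigma of length <= n (a finite slice of Sigma*)."""
    return {"".join(p) for k in range(n + 1) for p in product(sigma, repeat=k)}

def concat(A, B, n):
    """The concatenation AB, truncated to words of length <= n."""
    return {a + b for a in A for b in B if len(a + b) <= n}

sigma, n = "ab", 4
sigma_star = words_up_to(sigma, n)
L1 = {""} | {"a" * k + "b" * k for k in range(1, 3)}  # contains the empty string
assert concat(L1, sigma_star, n) == sigma_star  # L1 Sigma* = Sigma*
assert concat(sigma_star, L1, n) == sigma_star  # Sigma* L1 = Sigma*
```

Since $\epsilon \in L_1$, both products contain $\Sigma^*$ and are trivially contained in it, which is all the regularity assumptions ever see.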
{ "domain": "cs.stackexchange", "id": 1350, "tags": "regular-languages, finite-automata, proof-techniques, closure-properties" }
How are quantum entangled states maintained?
Question: Whenever I search for this question I always get articles on how to make a quantum entangled pair, and not how it's kept or maintained. If I were to entangle a pair of electrons, how would I keep the particles entangled over long distances? Answer: Short answer: In theory, it is easy to maintain the entanglement of two particles: just do nothing. In practice, doing nothing is much harder than it sounds. You need to isolate both particles, so that their entangled degree of freedom (typically the spin for an electron) does not interact with anything else. It is hard to do with electrons, since they are charged and interact with the environment. That’s why most entanglement experiments use photons: they do not interact much, and travel fast, allowing one to demonstrate long-distance entanglement even if they are short-lived. When one needs to keep entanglement for some time, electrically neutral material systems (cold atoms, crystal defects, etc.) and relatively complicated protocols can be used. An explicit protocol involving entangled states of NV centres in diamond: Below, I give an explicit protocol¹ considered by experimentalists to create and maintain entanglement in NV centres in diamond. As you could easily guess from the description, this protocol is specific to this kind of experimental system, and has to be seen as an example, and a way to convince you that there is no short simple answer to your question (beyond “keep your system isolated”). Let us look, for example, at a protocol to entangle two NV centres in diamonds located far away from each other (a system recently used for a loophole-free test of Bell’s inequalities). At each location, an entangled state can be created between the spin of an electron and a photon. 
To maintain the entanglement, each electron spin state can be mapped, through carefully crafted microwave pulses, to a more isolated nuclear spin state. The two photons are measured together in an intermediate place, effectively teleporting one photon to the spin entangled with the other. After photon measurement, the two nuclear spins are entangled, and can be maintained as long as the nuclear spins do not interact with (chemical or isotopic) impurities in the diamond (i.e. a few ms at room temperature). To perform measurements on the entangled states, one needs to use microwave pulses to map the nuclear spin state to the electronic spin state, and then optically probe the NV centres. Footnote: ¹: All the elements of this protocol have been demonstrated in various combinations, but I don’t think the whole protocol above has been demonstrated yet. I know, however, that it is on the roadmap of several research groups.
{ "domain": "physics.stackexchange", "id": 36084, "tags": "quantum-mechanics, quantum-entanglement" }
What are the exact axioms to uniquely define the Minkowski metric tensor as a bilinear map?
Question: I have read that the definition of a metric tensor is a map with the following axioms: a bilinear form from the tangent vector space (of a smooth manifold) to the real field; symmetric; nondegenerate. [Question] Now, from a purely mathematical perspective: given a map X (defined on a 4D tangent space), is it enough to say that: $X$ is a metric tensor and $X$ has signature $(-, +, +, +)$ or $(+, -, -, -)$, to deduce that $X$ is the Minkowski metric tensor? Note: if the answer is yes, it would mean that Minkowski is the only metric tensor that, as a bilinear form, has the signature $(-, +, +, +)$. I think that these axioms are not enough, because in GR we work with metric tensors with the same signature (see this question). Therefore: [Subquestion part a] Which additional axioms should we include to uniquely define the Minkowski metric tensor as a map? [Subquestion part b] Would the additional axiom simply be explicitly stating that the coefficients of the bilinear form are all 1 (so -1,+1,+1,+1)? Answer: Given a non-degenerate, symmetric, bilinear form over the tangent space, expressed as $g_{μν} = g(∂_μ, ∂_ν)$, or equivalently as the tensor $g = g_{μν} dx^μ ⊗ dx^ν$ (summation convention used), where $\{x^μ: μ = 0, 1, ..., N\}$ are the coordinates (at least, locally) and $∂_μ = ∂/∂x^μ$ are the partial differential operators comprising the tangent frame, impose the following extra conditions, expressed in terms of the Lie derivatives $ℒ_X$ of certain vector fields $X$: (1) Homogeneity: $ℒ_{∂_ρ} g = 0$, for all $ρ = 0, 1, ..., N$; (2) N+1-Isotropy: $ℒ_{x_σ ∂_ρ - x_ρ ∂_σ} g = 0$, for all $ρ, σ = 0, 1, ..., N$ (without loss of generality, you can take $ρ ≠ σ$ or even $ρ < σ$); where $x_0 = x^0$ and $x_i = -x^i$, for $i = 1, 2, ..., N$. 
That gives you bona fide spatial isotropy with respect to the space-like dimensions $1, 2, ..., N$ and non-accelerationosity (for lack of a better term) with respect to the mixed combinations of the time-like dimension $0$ with each of the spatial dimensions. Then, the metric is a Minkowski metric (up to a non-zero constant multiple), if $N > 1$. (The Minkowski metric $η = η_{ρσ} dx^ρ ⊗ dx^σ$ is sneaked into the conditions as the constant diagonal matrix $(+1,-1,-1,-1)$ of coefficients in $x_ρ = η_{ρσ} x^σ$. There is no escaping The $η$.) For $N = 3$ and 3+1 dimensions, the 10 Lie vectors, in 3D vector notation, are: $${∂ \over ∂t},$$ $$∇ = \left({∂ \over ∂x}, {∂ \over ∂y}, {∂ \over ∂z}\right),$$ $$\vec{x}×∇ = \left(y {∂ \over ∂z} - z {∂ \over ∂y}, z {∂ \over ∂x} - x {∂ \over ∂z}, x {∂ \over ∂y} - y {∂ \over ∂x}\right),$$ $$t∇ + \vec{x}\,{∂ \over ∂t} = \left(t {∂ \over ∂x} + x {∂ \over ∂t}, t {∂ \over ∂y} + y {∂ \over ∂t}, t {∂ \over ∂z} + z {∂ \over ∂t}\right),$$ where $t = x^0$ and $\vec{x} = \left(x, y, z\right) = \left(x^1, x^2, x^3\right)$. The four sets of Lie vectors are, respectively, for Stationarity, Spatial Homogeneity, Spatial Isotropy and Non-Accelerationosity. The metric is to be stationary, spatially homogeneous, isotropic and non-accelerating (for lack of a better term). First, we do (1). Since $$ℒ_{∂_ρ} dx^μ = ∂_ρ ˩ ddx^μ + d(∂_ρ ˩ dx^μ) = ∂_ρ ˩ 0 + d(δ_ρ^μ) = 0,$$ and $ℒ_{∂_ρ} g_{μν} = ∂_ρ g_{μν}$, then using the product rule for $ℒ_{∂_ρ}$, we have: $$0 = ℒ_{∂_ρ} g_{μν} dx^μ ⊗ dx^ν = \left(∂_ρ g_{μν}\right) dx^μ ⊗ dx^ν + g_{μν} (0) ⊗ dx^ν + g_{μν} dx^μ ⊗ (0) = ∂_ρ g_{μν} dx^μ ⊗ dx^ν,$$ from which it follows that $∂_ρ g_{μν} = 0$, i.e. that the components $g_{μν}$ are all constant. Second, we do (2). 
In general $$ℒ_X dx^μ = X ˩ ddx^μ + d(X ˩ dx^μ) = X ˩ 0 + dX^μ = dX^μ,$$ so for $X = x_σ ∂_ρ - x_ρ ∂_σ$, we have $X^μ = x_σ δ_ρ^μ - x_ρ δ_σ^μ$ and, thus: $$ℒ_{x_σ ∂_ρ - x_ρ ∂_σ} dx^μ = d\left(x_σ δ_ρ^μ - x_ρ δ_σ^μ\right) = δ_ρ^μ dx_σ - δ_σ^μ dx_ρ.$$ Also, since the components $g_{μν}$ are constant, we have $ℒ_X g_{μν} = X^ρ ∂_ρ g_{μν} = 0$, regardless of what $X$ is. Thus, using the product rule again, we have: $$0 = ℒ_{x_σ ∂_ρ - x_ρ ∂_σ} g = (0) dx^μ ⊗ dx^ν + g_{μν} \left(δ_ρ^μ dx_σ - δ_σ^μ dx_ρ\right) ⊗ dx^ν + g_{μν} dx^μ ⊗ \left(δ_ρ^ν dx_σ - δ_σ^ν dx_ρ\right),$$ or $$0 = g_{ρν} dx_σ ⊗ dx^ν - g_{σν} dx_ρ ⊗ dx^ν + g_{μρ} dx^μ ⊗ dx_σ - g_{μσ} dx^μ ⊗ dx_ρ,$$ or, componentwise, using the symmetry of $η$ and the (assumed) symmetry of $g$ to swap indices: $$0 = g_{νρ} η_{μσ} - g_{νσ} η_{μρ} + g_{μρ} η_{νσ} - g_{μσ} η_{νρ}.$$ This condition is trivial if $ρ = σ$ or $μ = ν$; particularly, if $N = 0$. If $N = 1$, then without loss of generality, we could take $(ρ,σ) = (0,1) = (μ,ν)$ and write $$0 = g_{10} (0) - g_{11} (+1) + g_{00} (-1) - g_{01} (0) = -\left(g_{00} + g_{11}\right),$$ so $g_{00} + g_{11} = 0$. That's the best you can do. The metric forms a constant symmetric trace-free $2×2$ matrix. If $N > 1$, choose any $μ$, $ρ ≠ μ$ and $σ = ν ≠ μ, ρ$. Then, we have: $$0 = g_{νρ} (0) - g_{νσ} (0) + g_{μρ} η_{νσ} - g_{μσ} (0) = ±g_{μρ}.$$ Thus, $g_{μρ} = 0$ for $ρ ≠ μ$, and $g$ forms a diagonal matrix. Next, choose any $ρ = μ$ and $σ = ν ≠ μ, ρ$. Then, we have: $$0 = g_{νρ} (0) - g_{νσ} η_{μρ} + g_{μρ} η_{νσ} - g_{μσ} (0) = -g_{νσ} η_{μρ} + g_{μρ} η_{νσ}.$$ From this, it follows that $g_{μρ}/η_{μρ} = g_{νσ}/η_{νσ}$. Therefore, $g$ is a constant multiple of the Minkowski metric $η$. Since $g$ is assumed to be non-degenerate, the constant multiple must be non-zero. Otherwise, if it's degenerate, the constant multiple is 0, and then $g$ must be 0 and totally degenerate.
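The final claim can also be checked symbolically: for $N = 3$, the componentwise condition $0 = g_{νρ} η_{μσ} - g_{νσ} η_{μρ} + g_{μρ} η_{νσ} - g_{μσ} η_{νρ}$ is a linear system whose solution space is one-dimensional and spanned by $η$. A SymPy sketch (assuming SymPy is available):

```python
import sympy as sp

N = 3                           # spatial dimensions; spacetime is (N+1)-dimensional
d = N + 1
eta = sp.diag(1, *([-1] * N))   # Minkowski metric diag(+1, -1, -1, -1)

# unknown symmetric metric g
sym = {(i, j): sp.Symbol(f"g{min(i, j)}{max(i, j)}") for i in range(d) for j in range(d)}
g = sp.Matrix(d, d, lambda i, j: sym[(i, j)])

# impose the componentwise isotropy condition for all index choices
eqs = [g[n, r] * eta[m, s] - g[n, s] * eta[m, r]
       + g[m, r] * eta[n, s] - g[m, s] * eta[n, r]
       for m in range(d) for n in range(d)
       for r in range(d) for s in range(d)]

sol = sp.solve(eqs, list(set(sym.values())), dict=True)[0]
g_sol = g.subs(sol)
# the only surviving freedom is one overall constant multiplying eta
assert sp.simplify(g_sol - g_sol[0, 0] * eta) == sp.zeros(d, d)
```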
{ "domain": "physics.stackexchange", "id": 91613, "tags": "general-relativity, special-relativity, differential-geometry, metric-tensor" }
How to calculate the pressure of a trapped, bubble of air 1m below water surface
Question: Ahoy hive mind! The rough scenario I'm looking for some help over is: Picture a tub of $\mathrm{10 \ (l) × 5 \ (w) × 2 \ (h)}$ floating in a body of water. The full mass is $50,000 \ \mathrm{kg}$, so the displaced $50\ \mathrm{m^3}$ of water reaches half way up the tub sides. Add a mirrored cavity, equivalent to the top, except with only $\mathrm{1\ m}$ vertical walls (structure included in the $50,000\ \mathrm{kg}$ of mass). This occurs in freshwater, and at sea level. Now bubble air into the inverted cavity, filling the entire $50\ \mathrm{m^3}$ and raising the double-tub $1\ \mathrm{m}$ higher in the water, to the point where the top of the trapped air pocket is level with the water surface. My question: How would one go about calculating the pressure of the trapped air? Any help appreciated! The idea is to hypothesise how vast a submerged tank would be needed to qualify as a realistic, low-pressure, energy storage reservoir (any further thoughts would also be welcome!) Answer: The pressure of the air at the bottom is the pressure on top plus the pressure due to the change in height: $p=W/A+\rho_{air}gh$. You can also calculate that pressure by considering the pressure of the water there: $p=p_{atmosphere}+\rho_{water}gh$. From these two you can get both $p$ and $h$.
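Plugging the stated numbers into those two relations (a small Python sketch; both expressions are read as gauge pressures, with atmospheric pressure added at the end for the absolute value, and the tiny weight of the air column retained for completeness):

```python
g = 9.81            # m/s^2
rho_water = 1000.0  # kg/m^3, freshwater
rho_air = 1.2       # kg/m^3, nearly negligible here
p_atm = 101_325.0   # Pa, at sea level

m = 50_000.0        # kg, full mass of the double-tub
A = 10.0 * 5.0      # m^2, horizontal cross-section

# the trapped air supports the weight: W/A + rho_air*g*h = rho_water*g*h
h = (m * g / A) / ((rho_water - rho_air) * g)  # depth of the pocket bottom
p = p_atm + rho_water * g * h                  # absolute pressure of the trapped air
print(f"h = {h:.3f} m, p = {p:.0f} Pa")
```

This gives $h \approx 1\ \mathrm{m}$ (consistent with the geometry above) and $p \approx 1.11 \times 10^5\ \mathrm{Pa}$, i.e. the trapped air sits only about 10% above atmospheric pressure, which is what makes the required tank for a low-pressure reservoir so vast.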
{ "domain": "physics.stackexchange", "id": 83972, "tags": "pressure, buoyancy, volume" }
Why does the direction of endolymph flow oppose direction of body motion?
Question: In the semicircular canals, the endolymph always flows in a direction that is opposite to the motion of the vestibular apparatus itself. I’m having trouble grasping why this is, and I would greatly appreciate it if anyone could give me some physics insight into this. Answer: The endolymph inside the ear, like any kind of matter, has mass and so is subject to inertia. This means any change in speed and direction will be resisted. The ear and semicircular canals themselves are fairly solidly connected to the rest of the body and so accelerate smoothly. But since endolymph is fluid, it cannot be brought up to speed as fast, so it lags behind in moving. So, as the ear moves forward, the endolymph stays behind and so in relation to the ear, flows backward. This is like what you experience when a car or airplane accelerates: you "flow" backwards into your seat as the vehicle surges forward.
{ "domain": "biology.stackexchange", "id": 4638, "tags": "biophysics, human-ear, balance" }
Enabling bitset-like behavior for enum classes (C++20)
Question: I want to enable bitmask-like behavior (ie. overloaded operator|, operator& and operator^) for some enum classes. This is what I came up with: #pragma once #include <type_traits> /// \brief Marks an enum class as enabled for bitmask behavior. /// \param type The name of the enum for which to enable the bitmask. /// \details This macro works by specializing \see{is_bitmask}. #define ENABLE_BITMASK(type) template<> \ struct is_bitmask<type> { \ constexpr static bool value = true; \ private: \ is_bitmask() = delete; \ }; /// This is a marker struct to enable bitmask behavior for enum classes. Should be /// automatically specialized by using \see ENABLE_BITMASK. /// \tparam T The enum class for which to enable this behavior. template <typename T> struct is_bitmask final { constexpr static bool value = false; private: is_bitmask() = delete; }; template <typename T> concept is_enum_bitmask = std::is_enum_v<T> && (is_bitmask<T>::value == std::true_type::value); /// Enables bitmask behavior for enum classes by overloading operator|, operator&, operator^ and operator~. /// \tparam Bits The enum to turn into a bitmask. This template parameter is constrained by \see is_enum_bitmask. 
template <is_enum_bitmask Bits> struct EnumBitset { private: using Type = std::underlying_type_t<Bits>; Type bits_ = 0; constexpr EnumBitset(Type b) noexcept { bits_ = b; } public: constexpr EnumBitset(Bits bit) noexcept { bits_ = static_cast<Type>(bit); } constexpr EnumBitset() noexcept { bits_ = 0; } constexpr EnumBitset(const EnumBitset& other) = default; constexpr EnumBitset(EnumBitset&& other) noexcept = default; constexpr EnumBitset& operator=(const EnumBitset& other) = default; constexpr EnumBitset& operator=(EnumBitset&& other) noexcept = default; constexpr ~EnumBitset() noexcept = default; [[nodiscard]] constexpr inline EnumBitset<Bits> operator|(const EnumBitset<Bits>& b) const noexcept { return EnumBitset{this->bits_ | b.bits_}; } [[nodiscard]] constexpr inline EnumBitset<Bits> operator&(const EnumBitset<Bits>& b) const noexcept { return EnumBitset{this->bits_ & b.bits_}; } [[nodiscard]] constexpr inline EnumBitset<Bits> operator^(const EnumBitset<Bits>& b) const noexcept { return EnumBitset{this->bits_ ^ b.bits_}; } constexpr inline void operator|=(const EnumBitset<Bits>& b) noexcept { this->bits_ |= b.bits_; } constexpr inline void operator&=(const EnumBitset<Bits>& b) noexcept { this->bits_ &= b.bits_; } constexpr inline void operator^=(const EnumBitset<Bits>& b) noexcept { this->bits_ ^= b.bits_; } [[nodiscard]] constexpr inline bool operator==(const EnumBitset<Bits>& b) const noexcept { return this->bits_ == b.bits_; } [[nodiscard]] constexpr inline bool operator!=(const EnumBitset<Bits>& b) const noexcept { return this->bits_ != b.bits_; } constexpr inline EnumBitset<Bits> operator~() const noexcept { return EnumBitset{~this->bits_}; } constexpr inline operator bool() const noexcept { return bits_ != 0; } constexpr inline explicit operator Type() const noexcept { return bits_; } [[nodiscard]] constexpr inline Type getBits() const { return bits_; } }; template <typename T, typename U> requires (is_enum_bitmask<T> && 
std::is_constructible_v<EnumBitset<T>, U>) constexpr auto operator|(T left, U right) -> EnumBitset<T> { return EnumBitset<T>(left) | right; } template <typename T, typename U> requires (is_enum_bitmask<T> && std::is_constructible_v<EnumBitset<T>, U>) constexpr auto operator&(T left, U right) -> EnumBitset<T> { return EnumBitset<T>(left) & right; } template <typename T, typename U> requires (is_enum_bitmask<T> && std::is_constructible_v<EnumBitset<T>, U>) constexpr auto operator^(T left, U right) -> EnumBitset<T> { return EnumBitset<T>(left) ^ right; } Here's how it's supposed to work: One can enable bitmask behavior by using ENABLE_BITMASK(MyEnum). I'm really unsure if this is a good way of doing this or if there are better ways to tag an enum as "bitmask enabled". I don't want to enable it for all enum classes since this would make enum classes pointless. All enum class constants can be used with each other, but we can't mix two enums. All operations should be constexpr so they can be optimized by the compiler. (Known Limitation): std::is_enum_v<T> works for both enum classes and enums, I'll fix this with C++23's std::is_scoped_enum<T> as soon as it's available. Here's some example code: enum class BitmaskableEnum { kFirst = 1, kSecond = 2, kThird = 4, kFirstAndSecond = 3 }; ENABLE_BITMASK(BitmaskableEnum); enum class SecondBitmaskableEnum { kFirst = 1 }; ENABLE_BITMASK(SecondBitmaskableEnum); enum class RegularEnum { kFirstBit = 1, kSecondBit = 2, kBoth = 3 }; void tests() { // Should compile auto a = BitmaskableEnum::kFirst | BitmaskableEnum::kSecond ^ BitmaskableEnum::kThird; a ^= BitmaskableEnum::kThird; // Should be true assert(a == BitmaskableEnum::kFirstAndSecond); // Those should not compile: const auto b = RegularEnum::kFirstBit | RegularEnum::kSecondBit; const auto c = BitmaskableEnum::kFirst | SecondBitmaskableEnum::kFirst; } I also have a minor question: I'll be using this in a larger project of mine and would like to put this in the util namespace. 
Would this work with ADL? If not: Which parts need to be kept in the global namespace? Answer: Simplify is_bitmask Since you are using C++20 anyway, you can avoid making is_bitmask a struct, and make it a constexpr template variable instead. This simplifies the first part of the code a lot: #define ENABLE_BITMASK(type) template<> \ inline constexpr bool is_bitmask<type> = true; template <typename T> inline constexpr bool is_bitmask = false; template <typename T> concept is_enum_bitmask = std::is_enum_v<T> && is_bitmask<T>; Missing [[nodiscard]]s? I see you use [[nodiscard]] for most of the operators. Of course it should not be used for |=, &= and ^=, but why not use it for ~, bool() and Type() as well? Missing global operators You have a just three global operators overloaded for bitmasks, and they all result in an EnumBitset<T> result. But I would also like to be able to write: auto x = BitmaskableEnum::kFirst; auto y = ~x; // no match for operator~ auto z = !x; // no match for operator! Missing noexcept There are a few member functions that are missing noexcept. There's nothing that should be able to throw. Unnecessary trailing return types I don't see a reason for the trailing return types for the global operators. The leading auto return type will already do the right thing. Putting it in a namespace You can move everything into its own namespace, with some changes. First, preprocessor macros are expanded at the point they are used, so you have to make sure they declare is_bitmask in the right namespace: namespace util { #define ENABLE_BITMASK(type) template<> \ inline constexpr bool util::is_bitmask<type> = true; ... Second, ADL is not the issue, but regular lookup is the problem. You have to tell the compiler that you want to search for functions inside the util namespace, either by doing: using namespace util; Or if you want to restrict it only to the operators, then you have to write: using util::operator&; using util::operator|; using util::operator^; ... 
Once the operators have been found and they return a util::EnumBitset<T>, the compiler already has the full name of the type and doesn't have to do any further lookups.
{ "domain": "codereview.stackexchange", "id": 42094, "tags": "c++, enum, c++20, constant-expression" }
Optimization of list's sublist
Question: The problem is to find the total number of sub-lists of a given list that don't contain numbers greater than a specified upper bound, say right, and whose maximum is at least a lower bound, say left. Suppose my list is x=[2, 0, 11, 3, 0], the upper bound for sub-list elements is 10 and the lower bound is 1; then my sub-lists are [[2],[2,0],[3],[3,0]], as sub-lists are always contiguous. My script runs well and produces correct output but needs some optimization.

def query(sliced, left, right):
    count = 0
    leng = len(sliced)
    for i in range(leng):
        stack = []
        end_index = i
        while end_index < leng and sliced[end_index] <= right:
            stack.append(sliced[end_index])
            if max(stack) >= left:
                count += 1
            end_index += 1
    print(count)

origin = [2, 0, 11, 3, 0]
left = 1
right = 10
query(origin, left, right)  # output: 4

Answer: You can streamline by dropping the use of append, and by ignoring values in your for-loop that are automatically disqualified. Benchmarking shows these steps reduce execution time by about 2x. Original:

%%timeit
origin = [2, 0, 11, 3, 0]
left = 1
right = 10
query(origin, left, right)
# 3.6 µs ± 950 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

New version:

def query2(sliced, left, right):
    count = 0
    leng = len(sliced)
    for i in range(leng):
        if sliced[i] <= right:
            stack_ct = 0
            end_index = i
            found_max = False
            while end_index < leng and sliced[end_index] <= right:
                if not found_max:
                    found_max = sliced[end_index] >= left
                end_index += 1
                if found_max:
                    stack_ct += 1
            count += stack_ct
    return count

Benchmarking:

%%timeit
origin = [2, 0, 11, 3, 0]
left = 1
right = 10
query2(origin, left, right)
# 1.83 µs ± 41.6 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
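For larger inputs there is also a single-pass O(n) approach that avoids enumerating sub-lists altogether: count the contiguous sub-lists whose elements are all <= right, then subtract those whose elements are all < left (a sketch assuming integer bounds, so that "max >= left" is exactly the complement of "all elements <= left - 1"):

```python
def query_linear(xs, left, right):
    def count_leq(bound):
        # contiguous sub-lists with all elements <= bound:
        # a maximal run of length k contributes k*(k+1)//2 of them
        total = run = 0
        for v in xs:
            run = run + 1 if v <= bound else 0
            total += run
        return total
    return count_leq(right) - count_leq(left - 1)

print(query_linear([2, 0, 11, 3, 0], 1, 10))  # 4, same as query()
```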
{ "domain": "codereview.stackexchange", "id": 28114, "tags": "python, performance, algorithm, python-3.x" }
Is the scalar field operator self-adjoint?
Question: In A. Zee's QFT in a Nutshell, he defines the field for the Klein-Gordon equation as $$ \tag{1}\varphi(\vec x,t) = \int\frac{d^Dk}{\sqrt{(2\pi)^D2\omega_k}}[a(\vec k)e^{-i(\omega_kt-\vec k\cdot\vec x)} + a^\dagger(\vec k)e^{i(\omega_kt-\vec k\cdot\vec x)}] $$ When calculating $\pi=\partial_0\varphi^\dagger$, I came to $$ \tag{2}\varphi^\dagger(\vec x,t) = \int\frac{d^Dk}{\sqrt{(2\pi)^D2\omega_k}}[a^\dagger(\vec k)e^{i(\omega_kt-\vec k\cdot\vec x)} + a(\vec k)e^{-i(\omega_kt-\vec k\cdot\vec x)}] $$ But this would imply that $\varphi^\dagger=\varphi$. Is that correct? (Intuitively it would make sense, because in QM we also consider self-adjoint operators.) If it's correct, then why do we explicitly write $\pi=\partial_0\varphi^\dagger$ instead of just $\pi=\partial_0\varphi$? Why bother distinguishing $\varphi$ from $\varphi^\dagger$ at all? In case it is not correct, then the first two equations of this answer are most likely wrong. Answer: For a real scalar field I think what you have written is correct..But if you want to describe a complex scalar field then we need to distinguish between $\phi$ and $\phi^{\dagger}$...
{ "domain": "physics.stackexchange", "id": 26336, "tags": "quantum-field-theory, operators" }
Can vinyl LPs store audio above 20kHz and can it be played back?
Question: If, in the future, humans genetically engineered themselves to be able to hear frequencies significantly above 20kHz (say, up to 100kHz), would they find vinyl LPs a viable method of storing and playing back those frequencies? If not, what limits the medium's ability to do this? Edit: by medium I broadly refer to everything in the pressing and playback mechanism. Edit: I am asking about consumer vinyl as it currently exists in 2016. Something you could conceivably buy in a shop today. Answer: Wikipedia states that The high frequency response of vinyl depends on the cartridge. CD4 records contained frequencies up to 50 kHz, while some high-end turntable cartridges have frequency responses of 120 kHz while having flat frequency response over the audible band (e.g. 20 Hz to 15 kHz +/-0.3 dB).[5] In addition, frequencies of up to 122 kHz have been experimentally cut on LP records.[6] so the answer seems to be "Yes, it is possible", though some non-standard technology might be required. (Ref. [5] refers to the "Technics EPC-100CMK4" cartridge, which apparently has a frequency response of 5 Hz to 120,000 Hz; 20 Hz to 20,000 Hz ±0.3 dB; 15 Hz to 80,000 Hz ±3 dB.)
{ "domain": "physics.stackexchange", "id": 27689, "tags": "acoustics, material-science, estimation" }
An online algorithm to find the Pareto frontier elements
Question: I'm looking for an online algorithm that takes a stream of elements and preserves the elements that are on the Pareto frontier (e.g. all non-dominated elements). For instance. Given the following inputs, the retained Pareto frontier set would evolved as follows: (3,7) insert element b/c it's the first element pareto set now includes {(3,7)} (7,3) insert element b/c it's not dominated in the first pareto set now includes {(3,7), (7,3)} (8,4) insert element b/c it's not dominated; remove (7,3) which it is dominated in both dimensions pareto set now includes {(3,7), (8,4)} (1,1) do not insert because it's dominate in both dimensions pareto set now includes {(3,7), (8,4)} (9,9) insert element b/c it's not dominated; remove all other elements because this dominates them in both dimensions pareto set now includes {(9,9)} In my example I'm using 2-tuples, but I'm looking for an algorithm that could handle N-tuples for "small" N (say <10). The naive solution is to just to compare each element with all elements currently in the set. In practice the naive approach might not be so bad (e.g. sub $O(n^2)$) because elements will regularly be expelled by the comparison set. But I was wondering if there was a known efficient algorithm for this. I'm interested in efficiency in memory and in computational complexity. (Ha! And as a matter of fact, I'm looking for the set of algorithms that are Pareto optimal with respect to memory and computational complexity.) My current application of this is in building a Lucene search-document Collector that doesn't collect the most relevant documents (the typical use case for a search engine), but collects the Pareto optimal documents along specified dimensions. Answer: In two dimensions, each update can be done in $O(\lg n)$ time, by using a balanced binary tree data structure. But when you are working in a high-dimensional space, I don't know of any efficient solution. Let me describe an efficient algorithm for the 2D case. 
Let $F$ denote the set of points in the Pareto frontier. Store $F$ in a balanced binary tree, using the $x$-coordinate of each point as its key. Note that when you sort $F$ by increasing $x$-coordinate, they'll also be sorted by decreasing $y$-coordinate. Now, given a new point $(x_q,y_q)$, you can check efficiently whether it is Pareto-dominated by any element of $F$. Find the first element of $F$ to the right of $(x_q,y_q)$ (i.e., the element $(x,y) \in F$ such that $x \ge x_q$ and $x$ is minimal); then checking whether it dominates $(x_q,y_q)$. Also, given a new point $(x_q,y_q)$, you can efficiently find whether it Pareto-dominates any element of $F$. In particular, you can find indices $i,j$ such that the points $(x_i,y_i),(x_{i+1},y_{i+1}),\dots,(x_j,y_j)$ of $F$ are all Pareto-dominated by $(x_q,y_q)$ (assuming that the points of $F$ have been ordered by $x$-coordinate, Pareto-dominated points will be in a consecutive interval). Here's how. Find the first element of $F$ to the left of $(x_q,y_q)$ (i.e., the element $(x_j,y_j) \in F$ such that $x_j \le x_q$ and $x_j$ is as large as possible), and check whether $(x_q,y_q)$ dominates it. If yes, find the smallest index $i$ such that $i<j$ (so $x_i<x_j$) and $y_i \le y_q$. Both of these steps can be done in $O(\lg n)$ time. (Finding $i$ can be done in $O(\lg n)$ time by treating the tree as branching on the $y$-coordinate of points, and taking advantage of the fact that the points of $F$ are sorted by decreasing $y$-coordinate.) Now this tell us what to do. If $(x_q,y_q)$ is dominated by some point of $F$, then don't add it to $F$; you're done. Alternatively, if $(x_q,y_q)$ dominate points $i..j$ of $F$, then you need to delete those points from $F$ and add $(x_q,y_q)$ into $F$. 
This can be done in $O(\lg n)$ time, by noting that any interval of consecutive indices can be expressed as the union of $O(\lg n)$ subtrees of the binary tree (roughly speaking, you work with the siblings of the nodes along the path from $i$ to the root, and the same for the path from $j$ to the root); you can delete each subtree in $O(1)$ time. This lets us delete an entire range of consecutive points in $F$ in $O(\lg n)$ time, no matter how large the range is. For details, see Delete a consecutive range of leaves from a binary tree. All of this can be done in $O(\lg n)$ time, using a balanced binary tree data structure. This works in 2 dimensions (i.e., 2-tuples). In higher dimensions, the problem gets much harder. You can find references to the literature, with techniques for higher dimensions, at How to find a subset of potentially maximal vectors (of numbers) in a set of vectors; but I'm afraid that in high dimensions, all the known algorithms are likely to be fairly slow (they have a factor that is something like $O((\lg n)^{d-1})$ where $d$ is the number of dimensions).
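For comparison, the naive approach from the question can be sketched in a few lines of Python (maximizing in every dimension, as in the example; the function names are my own):

```python
def dominates(a, b):
    """True if a dominates b: a >= b in every dimension and a != b."""
    return all(x >= y for x, y in zip(a, b)) and a != b

def update_frontier(frontier, p):
    """Insert p into the frontier, keeping only non-dominated points.
    O(len(frontier)) comparisons per insert -- the naive online scheme."""
    if any(dominates(q, p) or q == p for q in frontier):
        return frontier                      # p is dominated (or a duplicate): discard
    return [q for q in frontier if not dominates(p, q)] + [p]

frontier = []
for p in [(3, 7), (7, 3), (8, 4), (1, 1), (9, 9)]:
    frontier = update_frontier(frontier, p)
print(frontier)  # [(9, 9)], matching the last step of the example
```

The same code works unchanged for N-tuples; only the cost of each dominance test grows with N.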
{ "domain": "cs.stackexchange", "id": 5927, "tags": "algorithms, search-algorithms" }
How to self-learn data science?
Question: I am a self-taught web developer and am interested in teaching myself data science, but I'm unsure of how to begin. In particular, I'm wondering: What fields are there within data science? (e.g., Artificial Intelligence, machine learning, data analysis, etc.) Are there online classes people can recommend? Are there projects available out there that I can practice on (e.g., open datasets)? Are there certifications I can apply for or complete? Answer: Welcome to the site, Martin! That's a pretty broad question, so you're probably going to get a variety of answers. Here's my take. Data Science is an interdisciplinary field generally thought to combine classical statistics, machine learning, and computer science (again, this depends on who you ask, but others might include business intelligence here, and possibly information visualization or knowledge discovery as well; for example, the Wikipedia article on data science). A good data scientist is also skilled at picking up on the domain-specific characteristics of whatever domain they are working in. For example, a data scientist working on analytics for hospital records is much more effective if they have a background in Biomedical Informatics. There are many options here, depending on the type of analytics you're interested in. Andrew Ng's coursera course is the first resource mentioned by most, and rightly so. If you're interested in machine learning, that's a great starting place. If you want an in-depth exploration of the mathematics involved, Tibshirani's The Elements of Statistical Learning is an excellent, but fairly advanced, text. There are many online courses available on coursera in addition to Ng's, but you should select them with a mind for the type of analytics you want to focus on, and/or the domain in which you plan on working. Kaggle: start with Kaggle if you want to dive in on some real-world analytics problems.
Depending on your level of expertise, it might be good to start off simpler, though. Project Euler is a great resource for one-off practice problems that I still use as warm-up work. Again, this probably depends on the domain you wish to work in. However, I know Coursera offers a data science certificate if you complete a series of data science-related courses. This is probably a good place to start. Good luck! If you have any other specific questions, feel free to ask me in the comments, and I'll do my best to help!
{ "domain": "datascience.stackexchange", "id": 577, "tags": "beginner, self-study" }
Particle hole symmetry of single site?
Question: Suppose I have a system with an equal number of spin-up and spin-down particles. Now I consider a single site of the system, with the state $c_{i\uparrow}^{\dagger}\mid 0\rangle$. Under a particle-hole transformation it goes $$c_{i\uparrow}^{\dagger}\mid0\rangle\to c_{i\uparrow}\mid0\rangle.$$ Can I write the transformation in the form $c_{i\uparrow}^{\dagger}\mid0\rangle=c_{i\downarrow}^{\dagger}\mid0\rangle$? I am a little confused about this transformation, because we normally use the transformation $c_{i\uparrow}^{\dagger}=c_{i\downarrow}^{\dagger}$ when we have equal numbers of spin-up and spin-down particles in a system, and creating a hole with spin up means creating a particle with spin down because it corresponds to a net increase in spin down. But if I consider only a single site with net spin up, then under a particle-hole transformation how can it correspond to the creation of a spin-down particle when there is no spin-down contribution at the site? Answer: No, you cannot write this transformation as $c_{i\uparrow}^\dagger|0\rangle=c_{i\downarrow}^\dagger|0\rangle$, because $c_{i\uparrow}^\dagger|0\rangle$ and $c_{i\downarrow}^\dagger|0\rangle$ are two orthogonal quantum states: they cannot be equal. The transformation $c_{i\uparrow}^\dagger|0\rangle\to c_{i\uparrow}|0\rangle$ you start with is also wrong, because the resulting state is $c_{i\uparrow}|0\rangle= 0$, which makes the transformation not unitary. The relation you normally use, $c_{i\uparrow}^\dagger=c_{i\downarrow}^\dagger$, is still wrong, because $c_{i\uparrow}^\dagger$ and $c_{i\downarrow}^\dagger$ are different operators and cannot be equal. The correct way of writing everything is to start by defining a particle-hole transformation operator $\mathcal{P}$, such that its action on the fermion operator is $$\mathcal{P}c_{i\sigma}^\dagger\mathcal{P}^{-1} = c_{i\sigma}, \tag{1}$$ and its action on the vacuum state is $$\mathcal{P}|0\rangle = \prod_{i,\sigma} c_{i\sigma}^\dagger|0\rangle.
\tag{2}$$ It is important that the vacuum state $|0\rangle$ must also transform under $\mathcal{P}$. Physically, this is because the vacuum state is the state with no particle occupation, which means it is a state that is fully occupied by holes. So under the particle-hole transformation, the vacuum state will become a fully occupied state of particles, as expressed in Eq.(2). Mathematically, Eq.(2) follows from Eq.(1) as a result of consistency, because the vacuum state is defined as the state annihilated by all the annihilation operators, i.e. $c_{i\sigma}|0\rangle=0$. Now if we apply $\mathcal{P}$ to both sides of the equation, we have $\mathcal{P}c_{i\sigma}|0\rangle=\mathcal{P}0=0$ (because any linear operator acting on $0$ gives $0$). However, the left-hand side reads $\mathcal{P}c_{i\sigma}|0\rangle=\mathcal{P}c_{i\sigma}\mathcal{P}^{-1}\mathcal{P}|0\rangle=c_{i\sigma}^\dagger\mathcal{P}|0\rangle$, meaning that the state $\mathcal{P}|0\rangle$ is annihilated by the creation operators $c_{i\sigma}^\dagger$ instead, so the state $\mathcal{P}|0\rangle$ has to be a fully occupied state. Applying Eq.(1) and Eq.(2) to a single site (omitting the site index $i$), we have $$\mathcal{P}c_{\uparrow}^\dagger|0\rangle = (\mathcal{P}c_{\uparrow}^\dagger\mathcal{P}^{-1})(\mathcal{P}|0\rangle)=c_{\uparrow}c_{\uparrow}^\dagger c_{\downarrow}^\dagger|0\rangle=c_{\downarrow}^\dagger|0\rangle.$$ So this means that under the particle-hole transformation, the spin-up state $c_{\uparrow}^\dagger|0\rangle$ is transformed to a spin-down state $c_{\downarrow}^\dagger|0\rangle$.
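The single-site result above can also be verified numerically with a Jordan-Wigner matrix representation of the two spin modes; this is only a sanity check of the operator algebra, and the variable names are mine:

```python
import numpy as np

# Jordan-Wigner representation of two fermionic modes (up, down) on one site.
I2 = np.eye(2)
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilator
sz = np.diag([1.0, -1.0])                 # JW string factor

c_up = np.kron(sm, I2)                    # c_up
c_dn = np.kron(sz, sm)                    # c_dn (string on the up mode)
vac = np.kron([1.0, 0.0], [1.0, 0.0])     # |0>: both modes empty

# P|0> = c_up^dag c_dn^dag |0> is the fully occupied state (Eq. (2) for one site).
full = c_up.T @ c_dn.T @ vac

# P c_up^dag |0> = (P c_up^dag P^-1)(P|0>) = c_up (c_up^dag c_dn^dag |0>) = c_dn^dag |0>
assert np.allclose(c_up @ full, c_dn.T @ vac)
```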
{ "domain": "physics.stackexchange", "id": 22167, "tags": "condensed-matter, symmetry" }
Why astatic needles are immune to only terrestrial magnetic fields and not other magnetic fields?
Question: This web article says: Astatic needles are basically two needles of approximately the same magnetic strength, mounted on top of one another. They are parallel, but their magnetic poles are on opposite sides, as shown in the diagram. This way, when inside (any) external magnetic field, the torque exerted on one needle is equal and opposite to the torque exerted on the other. Since they are connected, the system overall remains unaffected by the field. Thus it seems to me that astatic needles should also be immune to other magnetic fields. Why is it not so? Answer: The astatic needles should be immune to other uniform magnetic fields. The web article you refer to describes the use of a pair of these needles with a galvanometer but one of the needles is inside the coil and one is outside the coil (as shown in the initial photographs). The direction of the field inside and outside the coil is opposite. Because the polarities of the two needles are opposite, putting them in fields in the opposite direction will result in two torques in the same direction.
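The torque argument is easy to check numerically with $\tau = m \times B$ (the numbers are illustrative; `numpy` is used for the cross products):

```python
import numpy as np

m = np.array([1.0, 0.0, 0.0])          # moment of needle 1 (A*m^2, illustrative)
B = np.array([0.0, 0.3, 0.0])          # a uniform external field (T, illustrative)

tau1 = np.cross(m, B)                  # torque on needle 1
tau2 = np.cross(-m, B)                 # needle 2: opposite polarity, same field
assert np.allclose(tau1 + tau2, 0)     # uniform field: net torque vanishes

tau2_coil = np.cross(-m, -B)           # galvanometer: field reversed outside the coil
assert np.allclose(tau2_coil, tau1)    # opposite field + opposite polarity: torques add
```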
{ "domain": "physics.stackexchange", "id": 45468, "tags": "electromagnetism, forces, electricity, magnetic-fields, torque" }
Calculate average values for each day of the week for each Meter
Question: I have a program with these two methods. One method to import a set of data from a CSV within the given time period and store them in a dictionary. Here the data in CSV file is stored in following format with meter Id, Timestamp in UTC and value. For each meter there's approximately 2000 values. "MeterId","Ts","Value" 10411,"2022-10-31T16:00:00.000Z","null" 10411,"2022-10-31T16:30:00.000Z",5150 My Read CSV method loads each line to a dictionary and loads each meter ID to a separate list. I passed the dictionary and list as reference since I'm using them for second method. public static void ReadCSVData(DateTime mFromTime, DateTime mToTime, ref Dictionary<Tuple<string, DateTime>, float> mDictLines, ref List<string> mUniqueList) { Console.WriteLine(DateTime.Now.ToString() + " Reading CSV..."); List<string> lstMeters = new List<string>(); using (var reader = new StreamReader(gSourceFile)) { while (!reader.EndOfStream) { //read single line var line = reader.ReadLine(); var values = line.Split(','); DateTime timeValue; if (DateTime.TryParse(values[1].Trim('"'), out timeValue)) { if (timeValue >= mFromTime && timeValue < mToTime) { if (values[2].Trim('"') != "null") { float meterValue = float.Parse(values[2].Trim('"'), CultureInfo.InvariantCulture.NumberFormat); DateTime meterDate = DateTime.Parse(values[1].Trim('"')).AddHours(8); mDictLines.Add(Tuple.Create(values[0], meterDate), meterValue); lstMeters.Add(values[0].Trim('"')); } } } } } mUniqueList.AddRange(lstMeters.Distinct()); } My second method get the dictionary and list as arguments and calculate average values for each day of the week for each meter. 
public static void CalculateAverageValues(Dictionary<Tuple<string, DateTime>, float> mDictLines, List<string> mUniqueList) { List<object> arrDoc = new List<object>(); IEnumerable<string> ieUnique = from itemUnique in mUniqueList select itemUnique; foreach (string meterIdUnique in ieUnique) { float mondayTot = 0; float tuesdayTot = 0; float wednesdayTot = 0; float thursdayTot = 0; float fridayTot = 0; float saturdayTot = 0; float sundayTot = 0; int mondayCount = 0; int tuesdayCount = 0; int wednesdayCount = 0; int thursdayCount = 0; int fridayCount = 0; int saturdayCount = 0; int sundayCount = 0; foreach (KeyValuePair<Tuple<string, DateTime>, float> dictItem in mDictLines) { Tuple<string, DateTime> lineId = dictItem.Key; if (meterIdUnique == lineId.Item1) { switch (lineId.Item2.DayOfWeek) { case DayOfWeek.Monday: mondayTot += dictItem.Value; mondayCount++; break; case DayOfWeek.Tuesday: tuesdayTot += dictItem.Value; tuesdayCount++; break; case DayOfWeek.Wednesday: wednesdayTot += dictItem.Value; wednesdayCount++; break; case DayOfWeek.Thursday: thursdayTot += dictItem.Value; thursdayCount++; break; case DayOfWeek.Friday: fridayTot += dictItem.Value; fridayCount++; break; case DayOfWeek.Saturday: saturdayTot += dictItem.Value; saturdayCount++; break; case DayOfWeek.Sunday: sundayTot += dictItem.Value; sundayCount++; break; } } } float avgMonday = (mondayTot / mondayCount); float avgTuesday = (tuesdayTot / tuesdayCount); float avgWednesday = (wednesdayTot / wednesdayCount); float avgThursday = (thursdayTot / thursdayCount); float avgFriday = (fridayTot / fridayCount); float avgSaturday = (saturdayTot / saturdayCount); float avgSunday = (sundayTot / sundayCount); Console.WriteLine(DateTime.Now.ToString() + " Processed data for meter " + meterIdUnique); InsertDataToCSV(csv, meterIdUnique, avgMonday, avgTuesday, avgWednesday, avgThursday, avgFriday, avgSaturday, avgSunday); } File.WriteAllText(gResultFile, csv.ToString()); Console.WriteLine(DateTime.Now.ToString() + " 
Process over"); } This code works fine but it takes approximately one minute to process data for 1500 meters. Is there a way I could speed it up? Answer: Please note: the code has been reviewed in two parts. Some modifications about the first part can be found under update #1 . As I suggested in the comments you can take advantage of CsvHelper to parse csv for you. I would suggest to split your ReadCSVData into two methods to embrace reusability: The first method (ReadCSV) should read and parse the csv. The second (FilterMeters) should perform the necessary filtering. ReadCSV The Meter class class Meter { public int MeterId { get; set; } public DateTime TimeStamp { get; set; } public int? MeterValue { get; set; } } You might need to adjust the names of the fields and their data types for your needs The ReadCSV method static List<Meter> ReadCSV() { using var fileReader = new StreamReader("sample.csv"); using var csvReader = new CsvReader(fileReader, CultureInfo.InvariantCulture); csvReader.Context.RegisterClassMap<MeterMap>(); csvReader.Context.TypeConverterOptionsCache.GetOptions<int?>().NullValues.Add("null"); return csvReader.GetRecords<Meter>().ToList(); } It reads all the lines and try to parse them as Meter objects The field mapping is defined inside MeterMap (see next section) The null value handling for the MeterValue property is done via TypeConverterOptions The MeterMap class sealed class MeterMap : ClassMap<Meter> { public MeterMap() { Map(m => m.MeterId).Name("MeterId"); Map(m => m.TimeStamp).Name("Ts"); Map(m => m.MeterValue).Name("Value"); } } This separation allows you to use different property names than the csv's column names FilterMeters static (Dictionary<(int, DateTime), int> groupedMeterValues, List<int> meterIds) FilterMeters(List<Meter> meters, DateTime fromTime, DateTime toTime) { var filteredMeters = meters.Where(m => m.TimeStamp >= fromTime && m.TimeStamp < toTime && m.MeterValue.HasValue); return (filteredMeters.ToDictionary(m => 
(m.MeterId, m.TimeStamp), m => m.MeterValue.Value), filteredMeters.Select(m => m.MeterId).Distinct().ToList()); } You could use Tuples (as well as ValueTuples) as a return value as well. So, you don't need to pass parameters via ref. The filtering can be easily done via Linq's Where. The ToDictionary allows you to transform your data for your needs. BTW, for this particular example we could also have used Linq's GroupBy here. Linq's Distinct allows you to select duplicate values only once. UPDATE #1: Suggestions for CalculateAverageValues After starting to work with the CalculateAverageValues method I realized that we don't need the unique meter ids collection. So, the above FilterMeters could be rewritten like this: static Dictionary<(int, DateTime), int> FilterMeters(IEnumerable<Meter> meters, DateTime fromTime, DateTime toTime) { return meters.Where(m => m.TimeStamp >= fromTime && m.TimeStamp < toTime && m.MeterValue.HasValue) .ToDictionary(m => (m.MeterId, m.TimeStamp), m => m.MeterValue.Value); } If you define a helper class/struct/record like below: class AveragedMeter { public int MeterId { get; set; } public DayOfWeek DayOfWeek { get; set; } public double AverageMeterValue { get; set; } } then the whole CalculateAverageValues could be as simple as this: public static List<AveragedMeter> CalculateAverageValues(Dictionary<(int MeterId, DateTime TimeStamp), int> groupedMeterValues) { return groupedMeterValues .GroupBy(m => new { m.Key.MeterId, m.Key.TimeStamp.DayOfWeek }) .Select(g => new AveragedMeter { MeterId = g.Key.MeterId, DayOfWeek = g.Key.DayOfWeek, AverageMeterValue = g.Average(m => m.Value) }) .ToList(); } As you can see, here I removed the lines that are responsible for writing to csv. Finally, let's connect the dots: var meters = ReadCSV(); var groupedMeterValues = FilterMeters(meters, from, till); var averagedMeterValues = CalculateAverageValues(groupedMeterValues); WriteAverageCSV(averagedMeterValues);
{ "domain": "codereview.stackexchange", "id": 44390, "tags": "c#, performance, hash-map, console, csv" }
Identification of particles and anti-particles
Question: The identification of an electron as a particle and the positron as an antiparticle is a matter of convention. We see lots of electrons around us so they become the normal particle and the rare and unusual positrons become the antiparticle. My question is, when you have made the choice of the electron and positron as particle and anti-particle does this automatically identify every other particle (every other fermion?) as normal or anti? For example the proton is a particle, or rather the quarks inside are. By considering the interactions of an electron with a quark inside a proton can we find something, e.g. a conserved quantity, that naturally identifies that quark as a particle rather than an antiparticle? Or do we also just have to extend our convention so say that a proton is a particle rather than an antiparticle? To complete the family I guess the same question would apply to the neutrinos. Answer: Yes, to some extent. Once you choose which of the electron or positron is to be considered the normal particle, then that fixes your choice for the other leptons, because of neutrino mixing. Similarly, choosing one quark to be the normal particle fixes the choice for the other flavors and colors of quarks. But I can't think of a reason within the standard model that requires you to make corresponding choices for leptons and quarks. In particle terms, you can think about it like this: say you start by choosing the electron to be the particle and the positron to be the antiparticle. You can then distinguish electron neutrinos and electron antineutrinos because in weak decay processes, an electron is always produced with an antineutrino and a positron with a normal neutrino. Then, because of neutrino oscillations, you can identify the other two species of neutrinos that oscillate with electron antineutrinos as antineutrinos themselves, and in turn you can identify the muon and tau lepton from production associated with their corresponding antineutrinos. 
In terms of QFT, the relevant (almost-)conserved quantity is the "charge parity," the eigenvalue of the combination of operators $\mathcal{CP}$.
{ "domain": "physics.stackexchange", "id": 25443, "tags": "particle-physics, standard-model, definition, antimatter, conventions" }
Generic binary search tree in Java
Question: I've posted the code before, but this time I believe I fixed the bug with remove. The class implements a Binary Search Tree without rebalancing, since unbalanced tree is not an issue in my case. I implemented the basic functions to make it minimally functional: Add Remove Contains Remove any To string Size Code: package custom.trees; public class BinarySearchTree<E extends Comparable> { private BinaryTree<E> root; private int size; public void add(E value) { if (root == null) { root = new BinaryTree<>(value); ++size; return; } recursiveAdd(root, value); } private void recursiveAdd(BinaryTree<E> current, E value) { int comparisonResult = current.getValue().compareTo(value); if (comparisonResult < 0) { if (current.getRightChild() == null) { current.setRightChild(new BinaryTree<>(value)); ++size; return; } recursiveAdd(current.getRightChild(), value); } else if (comparisonResult > 0) { if (current.getLeftChild() == null) { current.setLeftChild(new BinaryTree<>(value)); ++size; } recursiveAdd(current.getLeftChild(), value); } } public boolean contains(E value) { return containsRecursive(root, value); } private boolean containsRecursive(BinaryTree<E> current, E value) { if (current == null) { return false; } int comparisonResult = value.compareTo(current.getValue()); if (comparisonResult == 0) { return true; } else if (comparisonResult < 0) { return containsRecursive(current.getLeftChild(), value); } else { return containsRecursive(current.getRightChild(), value); } } public boolean remove(E value) { return removeRecursive(root, null, value); } private boolean removeRecursive(BinaryTree<E> current, BinaryTree<E> parent, E value) { if (current == null) { return false; } int comparisonResult = value.compareTo(current.getValue()); if (comparisonResult < 0) { return removeRecursive(current.getLeftChild(), current, value); } else if (comparisonResult > 0) { return removeRecursive(current.getRightChild(), current, value); } int childCount = 0; childCount += 
(current.getLeftChild() == null) ? 0 : 1; childCount += (current.getRightChild() == null) ? 0 : 1; if (childCount == 0) { if (current == root) { root = null; --size; return true; } if (parent.getLeftChild() == current) { parent.setLeftChild(null); } else { parent.setRightChild(null); } --size; return true; } else if (childCount == 1) { if (current == root) { if (root.getLeftChild() != null) { root = root.getLeftChild(); } else { root = root.getRightChild(); } --size; return true; } BinaryTree<E> child = (current.getLeftChild() != null) ? current.getLeftChild() : current.getRightChild(); if (parent.getLeftChild() == current) { parent.setLeftChild(child); } else { parent.setRightChild(child); } --size; return true; } //every other case already returned until now BinaryTree<E> successor = getLeftMostChild(current.getRightChild()); current.setValue(successor.getValue()); BinaryTree<E> successorsParent = current.getRightChild(); while (successorsParent.getLeftChild() != null && successorsParent.getLeftChild() != successor) { successorsParent = successorsParent.getLeftChild(); } if (successorsParent == successor) { current.setRightChild(successor.getRightChild()); } else { successorsParent.setLeftChild(successor.getRightChild()); } --size; return true; } public void removeAny() { if (size == 0) { throw new IllegalStateException("Calling removeAny on empty tree"); } remove(getLeftMostChild(root).getValue()); } private BinaryTree<E> getLeftMostChild(BinaryTree<E> current) { while (current.getLeftChild() != null) { current = current.getLeftChild(); } return current; } public int size() { return size; } @Override public String toString() { StringBuilder builder = new StringBuilder(); builder.append('['); buildString(root, builder); if (size != 0) { builder.deleteCharAt(builder.length() - 1); } builder.append(']'); return builder.toString(); } private void buildString(BinaryTree<E> node, StringBuilder builder) { if (node == null) { return; } buildString(node.getLeftChild(), 
builder); builder.append(node.getValue().toString()); builder.append(' '); buildString(node.getRightChild(), builder); } } I've run tests that are given by my professor and they passed. Also, I run randomized tests with Integer as type parameter. It randomly generated arrays and added/removed from the tree and from the TreeSet from standard library itself. The following code didn't throw after being run 10'000 times: BinarySearchTree<Integer> tree = new BinarySearchTree<>(); Set<Integer> correctAnswer = new TreeSet<>(); int[] arr = generateRandomizedArray(); for (int i = 0; i < arr.length; ++i) { tree.add(arr[i]); correctAnswer.add(arr[i]); } int removeCount = random.nextInt(5, arr.length - 1); for (int i = 0; i < removeCount; ++i) { int val = random.nextInt(0, 30); tree.remove(val); correctAnswer.remove(val); } int addCount = random.nextInt(5, VALUE_UPPER_BOUND); for (int i = 0; i < addCount; ++i) { int val = random.nextInt(0, VALUE_UPPER_BOUND); tree.add(val); correctAnswer.add(val); } removeCount = random.nextInt(0, tree.size() - 1); for (int i = 0; i < removeCount; ++i) { int val = random.nextInt(0, 40); tree.remove(val); correctAnswer.remove(val); } String str = tree.toString(); String correctStr = correctAnswer.toString(); StringBuilder builder = new StringBuilder(); for (int i = 0; i < correctStr.length(); ++i) { if (correctStr.charAt(i) != ',') { builder.append(correctStr.charAt(i)); } } correctStr = builder.toString(); if (!str.equals(correctStr)) { throw new TestFailed("answer and correct answer don't match"); } The only thing I'm worried about is if internal representation is messed up, but it produces the same output as TreeSet, so I'm calm. As a side effect, the code also tests add() and toString() functions. removeAny() is implemented in terms of remove(), so should be correct as well. The part of the code that worries me the most is handling cases with root being null, but any other advices on making code better are welcome. 
Answer: This declaration needs improvement: public class BinarySearchTree<E extends Comparable> { With this declaration, the compiler should give you warnings about type safety on these lines: int comparisonResult = current.getValue().compareTo(value); // ... int comparisonResult = value.compareTo(current.getValue()); To fix that, declare like this: public class BinarySearchTree<E extends Comparable<E>> {
{ "domain": "codereview.stackexchange", "id": 24913, "tags": "java, tree" }
bound charges in materials
Question: I have a question regarding bound charges in electrostatics; I think I am a bit confused. On the one hand, I have read that the bound charges of a capacitor with a dielectric between the plates are on the surface of the dielectric material. On the other hand, in other books bound charges refer to the charges of molecules which cannot move freely, unlike the free electrons in metals. So, if I put a solid dielectric inside a capacitor, will the bound charges be only on the surface? What about the charges on the molecules which compose the solid? If I take, for instance, a glass of water, are there bound charges? Answer: The charges in the middle cancel, leaving only sheets of charge on the surface, and where the electric field changes. An easy classical model of a dielectric is a collection of small conducting spheres in a non-dielectric insulator. The average charge in any macroscopic region is zero, but the polarization isn't.
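The "cancel in the bulk, survive at the surface" picture can be made quantitative with the standard formulas $\sigma_b = \mathbf{P}\cdot\hat{n}$ and $\rho_b = -\nabla\cdot\mathbf{P}$. A quick sketch for a uniformly polarized slab (the numbers are illustrative):

```python
import numpy as np

P = np.array([0.0, 0.0, 2e-6])       # uniform polarization along z, C/m^2 (illustrative)
n_top = np.array([0.0, 0.0, 1.0])    # outward normal of the top face
n_bottom = -n_top                    # outward normal of the bottom face

sigma_top = P @ n_top                # bound surface charge density, sigma_b = P . n
sigma_bottom = P @ n_bottom
rho_bulk = 0.0                       # rho_b = -div P = 0 for uniform P: nothing in the bulk

assert sigma_top == -sigma_bottom    # equal-and-opposite sheets on the two faces
```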
{ "domain": "physics.stackexchange", "id": 1702, "tags": "electromagnetism, dielectric" }
Can Fe(CO)5 adopt both square pyramidal and trigonal bipyramidal geometries?
Question: I know the hybridization of $\ce{Fe(CO)5}$ is $\mathrm{dsp^3}$. According to my book, coordination compounds with coordination number 5 can interchange between square pyramidal and trigonal bipyramidal geometries. Is it true for this compound? Doesn't seem so according to the picture: Was my understanding wrong somewhere? Answer: Your book was correct that a five-coordinate metal complex is able to adopt both square pyramidal and trigonal bipyramidal geometries, and in both cases the $\mathrm{sp^3d}$ hybridisation scheme applies (if you believe in hybridisation...). Which geometry is adopted depends upon a combination of steric and electronic factors, and isn't necessarily trivial to predict, though MO theory and crystal field theory can help understand which might be more favourable. The structure of iron pentacarbonyl is widely agreed to be trigonal bipyramidal, with three CO ligands occupying equatorial positions, and two CO ligands occupying axial positions. Trigonal bipyramidal structure of iron pentacarbonyl Whilst both geometries are possible (and indeed observed; $\ce{IF5}$ is square pyramidal), I suspect that your confusion arises from the fact that iron pentacarbonyl is able to undergo a phenomenon known as Berry pseudorotation. Mechanism of Berry pseudorotation Berry pseudorotation is the process in which two of the equatorial ligands are switched out for the two axial ligands via a square pyramidal intermediate. This process is sufficiently rapid that, if observed by NMR, only one environment for the carbon of the CO ligands is observed (i.e. the pseudorotation is faster than the NMR timescale).
{ "domain": "chemistry.stackexchange", "id": 8748, "tags": "inorganic-chemistry, coordination-compounds, molecular-structure, carbonyl-complexes" }
What does it mean for a prior to be improper
Question: Does it mean that its use will never result in a posterior distribution which integrates (or sums) to 1? Answer: An improper prior doesn't integrate/sum to 1, hence it is not a proper probability distribution on its own. Depending on the likelihood, the posterior distribution may or may not integrate to one. An example would be a constant function on the infinite line, e.g. $p: \mathbb{R} \rightarrow \mathbb{R}, x\mapsto 1$. It is not normalisable (since its integral is infinite), hence improper, yet it may serve as an uninformative prior.
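The example can be checked numerically: pair the improper flat prior with a Gaussian likelihood for a single observation, and the posterior (computed on a truncated grid) normalizes to 1 even though the prior does not. A sketch with illustrative numbers:

```python
import numpy as np

x = np.linspace(-15.0, 15.0, 3001)          # grid over the parameter
dx = x[1] - x[0]
prior = np.ones_like(x)                     # flat "prior" p(x) = 1: not normalisable on R
likelihood = np.exp(-0.5 * (x - 1.3) ** 2)  # Gaussian likelihood, one observation y = 1.3
unnorm = prior * likelihood
Z = unnorm.sum() * dx                       # finite (about sqrt(2*pi)): posterior is proper
posterior = unnorm / Z
assert abs(posterior.sum() * dx - 1.0) < 1e-9
```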
{ "domain": "datascience.stackexchange", "id": 4887, "tags": "data-science-model" }
Extracting message type in callback (Python)
Question: Hey all, I want to extract data of a specific type from the message in the callback. I have only one callback, where I get messages from different topics of different types like Float64, Odometry, etc. Now I want to extract only the data of type Quaternion inside any message of any topic. My callback looks like this (although it's not working correctly): def callback(self, msg, arg): for m in msg.slots: if type(m) == [Quaternion]: self.process(m) but Quaternion is not directly recognized. Also I have to dig into the messages: Odometry has quaternion data, but inside pose.pose.orientation. How do I check the whole structure of a message for Quaternion-typed data? Thanks in advance, Originally posted by safzam on ROS Answers with karma: 111 on 2013-09-02 Post score: 0 Original comments Comment by dornhege on 2013-09-03: Are you sure that you can't find a better/cleaner design for what you want? Seems kinda odd to me. Comment by safzam on 2013-09-03: thanks for the suggestion! can you please just write me a hint for the better/cleaner design. I just want to extract Quaternion data xyzw (if any) from the msgs, AND I do not know in advance how many topics I shall get and of which type. so I have one generic callback for all the topics...regards Comment by dornhege on 2013-09-03: The thing is that I'm not sure why you'd want something like that. Answer: Might it be geometry_msgs.Quaternion? Digging through the structure can be achieved in the same way that you access msg.slots, just recursively. Originally posted by dornhege with karma: 31395 on 2013-09-03 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by safzam on 2013-09-03: thanks for the comment. but msg.slots gives the sub-field names, but their type is 'str' rather than Point, etc. is there any way to dig out the msg structure type by type dynamically? Comment by dornhege on 2013-09-03: There is _slot_types defined for ros msgs.
Comment by safzam on 2013-09-03: yes but it does not help for further digging like I do: for l1 in msg._slot_types: print l1 and output i get [std_msgs/Header,string,geometry_msgs/PoseWithCovariance,geometry_msgs/TwistWithCovariance] but when I want to dig using 'print l1.slots' or 'print l1._slot_types' gives error. ? Comment by safzam on 2013-09-03: Hallo donhege, thanks a lot, now it works. the only problem was that 'print l1.slots' or 'print l1._slot_types' gives error when datatype encountered is simple and basic datatype because it does not have further substructure. now it works, thank you. regards
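The accepted idea (recurse over the message's slots) can be sketched roughly like this. The message classes below are minimal stand-ins for the real geometry_msgs/nav_msgs types (real ROS 1 Python messages expose the same `__slots__` attribute), and the function name is mine:

```python
# Minimal stand-ins for genpy message classes (real ROS msgs also define __slots__).
class Quaternion:
    __slots__ = ['x', 'y', 'z', 'w']
    def __init__(self):
        self.x = self.y = self.z = 0.0
        self.w = 1.0

class Pose:
    __slots__ = ['position', 'orientation']
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)
        self.orientation = Quaternion()

class Odometry:
    __slots__ = ['pose']
    def __init__(self):
        self.pose = Pose()

def find_quaternions(msg):
    """Recursively collect every Quaternion found anywhere inside a message."""
    if isinstance(msg, Quaternion):
        return [msg]
    found = []
    if hasattr(msg, '__slots__'):           # primitive leaves (float, str, ...) have none
        for slot in msg.__slots__:
            found.extend(find_quaternions(getattr(msg, slot)))
    return found

qs = find_quaternions(Odometry())
assert len(qs) == 1 and qs[0].w == 1.0
```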
{ "domain": "robotics.stackexchange", "id": 15404, "tags": "rospy" }
Dependency on actionlib when building .action files
Question: As of today, the actionlib wiki page lists actionlib, along with actionlib_msgs, as a required dependency to build a package that includes .action files. However, in Catkin's doc (melodic) only actionlib_msgs is a required dependency: To generate actions, add actionlib_msgs as a dependency Which one is correct? UPDATE1: Responding to the comment from @Dirk Thomas: The referenced wiki page does not only mention building a package with .action files but also contains examples how to use the action server / client. Therefore I would highly suggest to revert your change since it will make the page incomplete / incorrect. I still think the wiki should present only the minimal dependency example first, so showing a dependency on actionlib_msgs suffices, because: The portion of the wiki I referred to is specifically under the ".action file" section. It's not official AFAIK, but my understanding is that the practice in ROS is to separate message/service/action definition files into a designated package (e.g. #q11835). For a package that contains .action files but no code, adding a dependency on actionlib_msgs should be enough, according to your answer. Of course creating a package that contains both .action files and code that depends on the actionlib API is not restricted at all, but I'm not sure if actionlib's wiki page needs to present that info, let alone in the ".action file" section. Originally posted by 130s on ROS Answers with karma: 10937 on 2018-10-12 Post score: 0 Answer: Both are right. It depends on what you want to do. If a package has .action files it needs a dependency on actionlib_msgs since that package provides the CMake API like add_action_files(). If your package wants to use the API of the actionlib package to implement an action server or use an action client then it obviously needs a dependency on that package too.
Originally posted by Dirk Thomas with karma: 16276 on 2018-10-12 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by 130s on 2018-10-12: Thanks for clarification. Then, because the wiki page section I referred to is about building a package that contains .action but not necessarily the code, I dropped actionlib from there. Comment by Dirk Thomas on 2018-10-12: The referenced wiki page does not only mention building a package with .action files but also contains examples how to use the action server / client. Therefore I would highly suggest to revert your change since it will make the page incomplete / incorrect. Comment by Dirk Thomas on 2018-10-15: Reverted in http://wiki.ros.org/action/diff/actionlib?action=diff&rev1=199&rev2=200 Comment by 130s on 2018-10-21: "Reverted in http://wiki.ros.org/action/diff/actionlib?action=diff&rev1=199&rev2=200" @Dirk Thomas I responded by updating OP. Comment by Dirk Thomas on 2018-10-22: The previous edit of the page reduced the dependencies for only building action files but the remaining of the page actually describes how to use action servers and clients. Comment by Dirk Thomas on 2018-10-22: So in order to be consistent if you want to only mention the dependencies for the first part then the second part needs to be updated as well that additional dependencies are necessary using the action server and client. Otherwise the information is misleading (as in your previous edit).
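For the "messages-only package" case described in the answer, the minimal build wiring looks roughly like this. This is a sketch following the catkin documentation's pattern; the package name my_robot_msgs and the file DoTask.action are hypothetical:

```cmake
# CMakeLists.txt fragment for a hypothetical my_robot_msgs package that
# contains only .action files: actionlib_msgs is needed, actionlib is not.
find_package(catkin REQUIRED COMPONENTS actionlib_msgs)

add_action_files(
  DIRECTORY action
  FILES DoTask.action
)

generate_messages(DEPENDENCIES actionlib_msgs)

catkin_package(CATKIN_DEPENDS actionlib_msgs)
```

A package that additionally implements an action server or client with the actionlib API would then also declare a dependency on actionlib, exactly as the answer states.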
{ "domain": "robotics.stackexchange", "id": 31899, "tags": "ros, ros-melodic, genmsg" }
Can the Walsh Hadamard transform be computed for non power of 2 sizes?
Question: Can the Walsh-Hadamard transform be calculated for odd image block sizes such as 5x5 or 7x7? Most of the examples I've seen are for 4x4 and 8x8. I fear it probably can't, from the description I read on Wikipedia (though I'm still trying to fully digest that page). Answer: The Walsh-Hadamard transform requires a Hadamard matrix -- which has entries $\pm 1$ and whose rows are orthogonal vectors. An $n \times n$ Hadamard matrix is known to exist for $n = 2$. Larger Hadamard matrices can exist only if $n$ is a multiple of $4$, though it is not known if there is a Hadamard matrix for every multiple of $4$. However, there is a recursive construction that can be used to construct a $2^m \times 2^m$ Hadamard matrix from a $2^{m-1}\times 2^{m-1}$ Hadamard matrix, and this structure allows for the use of the Fast Hadamard Transform algorithm, which reduces the computational cost from $2^{2m}$ additions and subtractions to $m2^m$ additions and subtractions (just like the $N$-point FFT reduces the number of multiplications from $O(N^2)$ to $O(N\log N)$). With this as background, the answer is yes, you can use a Walsh-Hadamard transform of length $N$ if you can find an $N\times N$ Hadamard matrix, but your choices for $N$ are necessarily restricted. Also, fast algorithms may not exist for your choice of $N$, though some speed-up is usually possible. Note that Walsh-Hadamard transforms (WHTs) do not support cyclic convolutions, but they do support what is sometimes called Poisson convolution. If $H$ denotes a $2^m\times 2^m$ Hadamard matrix and $\mathbf x$ and $\mathbf y$ are vectors of length $2^m$ with WHTs $\mathbf xH$ and $\mathbf yH$, then the inverse WHT of the term-by-term multiplication of the entries in $\mathbf xH$ and $\mathbf yH$ can be described as follows. Suppose that the entries in $\mathbf x$ and $\mathbf y$ etc. are indexed not by integers $0$ through $2^m-1$ but rather by the $m$-bit representations of these numbers.
Thus, we talk not of $x[k]$ as the $k$-th entry in $\mathbf x$ but rather of $x[\mathbf k]$ where $\mathbf k$ is the $m$-bit representation of $k$. Then, the iWHT of the term-by-term product of the entries in $\mathbf xH$ and $\mathbf yH$ has $\mathbf k$-th entry $$\sum_{\mathbf i} x[\mathbf i]y[\mathbf k\oplus \mathbf i]$$ which is eerily reminiscent of $$\sum_i x[i]y[k-i]$$ and even more so if one notes that modulo two, addition and subtraction are the same and so that $\mathbf k\oplus \mathbf i$ could as well have been written as $\mathbf k\ominus \mathbf i$.
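The Sylvester-type recursive construction and the XOR ("Poisson"/dyadic) convolution property described above can be verified with a short script. This is a pure-Python sketch using the plain O(N^2) matrix transform, not the fast algorithm:

```python
def hadamard(m):
    """Build the 2^m x 2^m Sylvester Hadamard matrix: H_2n = [[H, H], [H, -H]]."""
    H = [[1]]
    for _ in range(m):
        H = [row + row for row in H] + [row + [-v for v in row] for row in H]
    return H

def wht(x):
    """Walsh-Hadamard transform of x (length must be a power of two)."""
    n = len(x)
    H = hadamard(n.bit_length() - 1)
    return [sum(x[i] * H[i][k] for i in range(n)) for k in range(n)]

def dyadic_convolution(x, y):
    """z[k] = sum_i x[i] * y[k XOR i] -- the 'Poisson' convolution above."""
    n = len(x)
    return [sum(x[i] * y[k ^ i] for i in range(n)) for k in range(n)]

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2, 0, 1, 3, 1, 0, 4, 1]

# Convolution theorem for the WHT: transforming the dyadic convolution gives
# the term-by-term product of the individual transforms (exact, all integers).
assert wht(dyadic_convolution(x, y)) == [a * b for a, b in zip(wht(x), wht(y))]

# The inverse WHT is the same matrix scaled by 1/N: applying H twice gives N*x.
assert wht(wht(x)) == [8 * v for v in x]
```

Since this $H$ is symmetric and $HH = NI$, the inverse transform is just another application of $H$ divided by $N$, which is why the last assertion holds.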
{ "domain": "dsp.stackexchange", "id": 729, "tags": "image-processing, filters, dct" }
Tagging the directories and switching between them by tags - follow-up (Part 2/2: business logic)
Question: This is the bash script responsible for changing the directories via textual tags. (See here the entire project.)

    dt_script ~/.dt/dt $@ > ~/.dt/dt_tmp
    COMMAND_TYPE=$(head -n 1 ~/.dt/dt_tmp)
    if [ $# -eq 0 ]; then
        NEXT_PATH=$(~/.dt/dt prev | tail -n 1) # Let dt return pwd if no prev tag.
        ~/.dt/dt --update-prev $(pwd)
        if [ "$COMMAND_TYPE" == "message" ]; then
            tail -n 1 ~/.dt/dt_tmp
        else
            cd $NEXT_PATH
        fi
    else
        if [ "$COMMAND_TYPE" == "switch_directory" ]; then
            NEXT_PATH=$(tail -n 1 ~/.dt/dt_tmp)
            ~/.dt/dt --update-prev $(pwd)
            cd $NEXT_PATH
        elif [ "$COMMAND_TYPE" == "show_tag_entry_list" ]; then
            tail -n +2 ~/.dt/dt_tmp
        elif [ "$COMMAND_TYPE" == "message" ]; then
            tail -n 1 ~/.dt/dt_tmp
            #echo $(cut -d " " -f 2)
        else
            echo "Unknown command: $COMMAND_TYPE"
        fi
    fi
    rm ~/.dt/dt_tmp

Any further suggestions for improvement are welcomed.

Answer: Some suggestions:

- Use More Quotes™: $@ should be double quoted to pass the parameters without word splitting. Any variable use should be quoted. It is very rare that word splitting is actually what you want, and it sometimes causes very subtle bugs. Command substitution should be quoted.
- Use good bashisms such as [[ instead of [.
- Local variables should be lowercase to distinguish them from system variables, which are uppercase.
- There is no need for semicolons in Bash scripts. This may be more contentious, but I generally write like this:

        if [[ "$variable" = 'value' ]]
        then
            …

- Apropos: == to compare strings is a bit of a historical accident. The original operator is just =, but == is probably not going away.
- $(pwd) can be simplified to . - it also means the current directory.
- The home directory should not be used for temporary file storage (unless that is actually your configured temporary directory). In general you should use something like working_dir="$(mktemp --directory)" (or -d if you don't have GNU coreutils mktemp).
This has a few advantages:

- It means your home directory isn't going to fill up with temporary files which are never cleaned up (but more about that below).
- It is often a memory-mapped filesystem, so it may be much faster than your home directory.

Long if/elif sections testing the value of a single variable can usually be written more easily as a case block.

head does not fail if run on an empty file:

    $ touch foo
    $ head foo
    $ echo $?
    0

So your first test could fail for the wrong reason. You might want to use [[ -s "$path" ]] to check that the file exists and is non-empty.

The file name "dt_tmp" doesn't tell me anything about what it contains or what it's used for - in general I find that adding the project name to anything within the project is redundant, and marking something as temporary is not particularly helpful to know what it is, unless it really can contain anything. Is there a better name you can give it? It looks like it's some sort of command queue, maybe?
{ "domain": "codereview.stackexchange", "id": 33627, "tags": "bash, console, file-system" }
Subscribing to multiple topics using one (meta)callback
Question: Hello Guys, I have 13 different topics of the same class (control_msgs::JointControllerState) that I would like to subscribe to using ROSCPP. If I follow the standard procedure, I would need to define 13 different callbacks, which will be pretty similar to each other. This would make my code more extensive and harder to maintain. Is it possible to create one common callback (like a meta callback) for all these topics? I do not need any sort of sync. I have been thinking of creating an array of subscribers and callbacks. Using typedef I created an array of pointers to functions (the callbacks), but the problem comes when I need to assign a specific topic to each of the callbacks. Any suggestions? Thanks in advance, Charlie Originally posted by CharlieMAC on ROS Answers with karma: 28 on 2019-05-28 Post score: 0 Original comments Comment by PeteBlackerThe3rd on 2019-05-28: You can achieve exactly what you want using lambda functions. I'm a bit rusty on the syntax, but you need to write a normal callback function but with an additional parameter, say string topic_name or int id. Then you pass a lambda to each subscribe call which passes the appropriate additional parameter. That's the gist of it. Comment by PeteBlackerThe3rd on 2019-05-28: This answer here gives you some examples: https://answers.ros.org/question/308386/ros2-add-arguments-to-callback/ Comment by gvdhoorn on 2019-05-28: @PeteBlackerThe3rd: that Q&A only applies to ROS 2. Comment by PeteBlackerThe3rd on 2019-05-28: That's true, I'll put together a ROS 1 example quickly Comment by lxg on 2020-10-15: @PeteBlackerThe3rd, is there something similar to that for python ROS1? Answer: I've just modified the chatter C++ example code here so that it subscribes to two different topics using a single callback, as I was trying to describe above.
    #include "ros/ros.h"
    #include "std_msgs/String.h"

    void chatterCallback(const std_msgs::String::ConstPtr& msg, std::string topicName)
    {
        ROS_INFO("I heard: [%s] on topic \"%s\"", msg->data.c_str(), topicName.c_str());
    }

    int main(int argc, char **argv)
    {
        ros::init(argc, argv, "listener");
        ros::NodeHandle n;

        ros::Subscriber sub1 = n.subscribe<std_msgs::String>("chatter", 1, boost::bind(&chatterCallback, _1, "chatter"));
        ros::Subscriber sub2 = n.subscribe<std_msgs::String>("other_chatter", 1, boost::bind(&chatterCallback, _1, "other_chatter"));

        ros::spin();
        return 0;
    }

Hopefully this will give you a simple ROS 1 example you can adapt to your needs. Originally posted by PeteBlackerThe3rd with karma: 9529 on 2019-05-28 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by gvdhoorn on 2019-05-28: If this is just about the topic name I would suggest to use a MessageEvent. Comment by CharlieMAC on 2019-05-29: @PeteBlackerThe3rd, how could I do this if my callback is a class method? Comment by gvdhoorn on 2019-05-29: @CharlieMAC: this topic has come up multiple times on ROS Answers. If you search for something like subscriber extra argument site:answers.ros.org (with Google) it should return quite a few posts that answer your question.
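For the Python question raised in the comments: the same single-callback pattern works in rospy, whose Subscriber accepts a callback_args argument that is handed to the callback alongside each message; functools.partial achieves the same effect. The plumbing can be sketched without ROS at all (topic names and the string payloads below are placeholders):

```python
from functools import partial

received = []

# One shared callback that also receives the name of the topic it serves.
def chatter_callback(msg, topic_name):
    received.append((topic_name, msg))

# Stand-ins for what rospy.Subscriber(topic, String, chatter_callback,
# callback_args=topic) would wire up: one partially-applied callable per topic.
callbacks = {topic: partial(chatter_callback, topic_name=topic)
             for topic in ("chatter", "other_chatter")}

# Simulate a message arriving on each topic.
callbacks["chatter"]("hello")
callbacks["other_chatter"]("world")

assert received == [("chatter", "hello"), ("other_chatter", "world")]
```

With ROS installed, the dictionary disappears and each rospy.Subscriber call carries its own callback_args value.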
{ "domain": "robotics.stackexchange", "id": 33075, "tags": "ros, ros-kinetic, roscpp, callbacks" }
According to Einstein & Brian Greene, does the photon remain stationary in the fourth dimension?
Question: According to Einstein and Brian Greene, does it logically follow that the photon remains stationary in the fourth dimension? In The Elegant Universe, Brian Greene writes: “Einstein found that precisely this idea—the sharing of motion between different dimensions—underlies all of the remarkable physics of special relativity, so long as we realize that not only can spatial dimensions share an object’s motion, but the time dimension can share this motion as well. In fact, in the majority of circumstances, most of an object’s motion is through time, not space. Let’s see what this means.” Space, Time, and the Eye of the Beholder, The Elegant Universe, Brian Greene, p. 49 Brian Greene and Albert Einstein also state that there is one and only one velocity for all entities through the four dimensions--the velocity of light or c. A photon travels at c through the three spatial dimensions. All of its velocity is directed through the three spatial dimensions. Thus Brian and Einstein are stating that a photon must be stationary in the fourth dimension. For if the photon had any velocity component in the fourth dimension, its velocity would be different from c, which is not the case. On the other hand, if an object is stationary in the three spatial dimensions, it must be moving at c through the fourth dimension. We can summarize this as: Axiom: All entities have one velocity through the four dimensions--c. (Einstein & Brian Greene). Axiom: The velocity of light (photons) is c through the three spatial dimensions. (Einstein) Theorem: The photon remains stationary in the fourth dimension, as all of its velocity c is through the three spatial dimensions. Does this logic make sense? Also, do you prefer using the word "Axiom" or "Postulate"? An Axiom or a Postulate is a Truth. A Theorem is that which follows logically from Axioms. Answer: Photons experience infinite time dilation and hence, time is stationary for them.
Does photon experience time?

"photon travels at c through the three spatial dimensions. All of its velocity is directed through the three spatial dimensions. Thus Brian and Einstein are stating that a photon must be stationary in the fourth dimension. For if the photon had any velocity component in the fourth dimension, its velocity would be different from c, which is not the case."

All of this is nonsense, because there is no such thing as a "speed of time" in the actual world; it is purely a Minkowski-space concept. You cannot compare the speed of an object along the time axis with its speed through the three-dimensional world.

Calculating speed in 4D

Axioms and postulates serve as a basis for deducing other truths. The ancient Greeks recognized the difference between these two concepts: axioms are self-evident assumptions, which are common to all branches of science, while postulates are related to the particular science. Your two axioms are therefore postulates and not axioms, since they are relevant to only one branch, physics. The theorem seems to be correct.
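For reference, the bookkeeping behind Greene's "shared motion" picture can be written down explicitly; the following is the standard special-relativity identity (added here for context, not part of the quoted answer):

```latex
% Norm of the four-velocity of a massive particle (proper time \tau):
c^{2}\left(\frac{dt}{d\tau}\right)^{2}
  - \left|\frac{d\mathbf{x}}{d\tau}\right|^{2} = c^{2}
% Dividing by (dt/d\tau)^2 and writing v = |d\mathbf{x}/dt| gives
\left(c\,\frac{d\tau}{dt}\right)^{2} + v^{2} = c^{2},
\qquad
c\,\frac{d\tau}{dt} = c\sqrt{1 - v^{2}/c^{2}}.
```

The quantity $c\,d\tau/dt$ is what Greene calls the "speed through time". For a photon $d\tau = 0$, so the proper-time parametrization, and with it this whole decomposition, is not defined at all, which is exactly why the proposed theorem cannot be made rigorous.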
{ "domain": "physics.stackexchange", "id": 31059, "tags": "special-relativity, photons, relativity" }
Calculate the Hamming difference between two DNA strands
Question: Write a program that can calculate the Hamming difference between two DNA strands.

    GAGCCTACTAACGGGAT
    CATCGTAATGACGGCCT
    ^ ^ ^  ^ ^    ^^

The Hamming distance between these two DNA strands is 7.

    import java.util.Optional;

    public class Hamming {

        public static int compute(String s1, String s2) {
            validateInputs(s1, s2);

            int hammingDistance = 0;
            int stringLength = s1.length();

            for (int i = 0; i < stringLength; i++) {
                if (s1.charAt(i) != s2.charAt(i)) {
                    hammingDistance++;
                }
            }

            return hammingDistance;
        }

        private static void validateInputs(String s1, String s2) {
            if (s1.length() != s2.length()) {
                throw new IllegalArgumentException();
            }
        }
    }

Test suite:

    import static org.hamcrest.CoreMatchers.*;
    import static org.junit.Assert.*;

    import org.junit.Test;

    public class HammingTest {

        @Test
        public void testNoDifferenceBetweenIdenticalStrands() {
            assertThat(Hamming.compute("A", "A"), is(0));
        }

        @Test
        public void testCompleteHammingDistanceOfForSingleNucleotideStrand() {
            assertThat(Hamming.compute("A", "G"), is(1));
        }

        @Test
        public void testCompleteHammingDistanceForSmallStrand() {
            assertThat(Hamming.compute("AG", "CT"), is(2));
        }

        @Test
        public void testSmallHammingDistance() {
            assertThat(Hamming.compute("AT", "CT"), is(1));
        }

        @Test
        public void testSmallHammingDistanceInLongerStrand() {
            assertThat(Hamming.compute("GGACG", "GGTCG"), is(1));
        }

        @Test(expected = IllegalArgumentException.class)
        public void testValidatesFirstStrandNotLonger() {
            Hamming.compute("AAAG", "AAA");
        }

        @Test(expected = IllegalArgumentException.class)
        public void testValidatesOtherStrandNotLonger() {
            Hamming.compute("AAA", "AAAG");
        }

        @Test
        public void testLargeHammingDistance() {
            assertThat(Hamming.compute("GATACA", "GCATAA"), is(4));
        }

        @Test
        public void testHammingDistanceInVeryLongStrand() {
            assertThat(Hamming.compute("GGACGGATTCTG", "AGGACGGATTCT"), is(9));
        }
    }

Questions: Am I using the correct data structures? Can I improve performance?
Right now it seems to be $\Theta(n)$ since it has to check all the characters in the string.

Answer: Your code is really all that there is to it. It is clear and concise: one method to validate the input (validateInputs) and the rest of the method to calculate the Hamming distance with a simple loop. A couple of comments:

In case of inputs of different length, you are (rightfully) throwing an IllegalArgumentException like this:

    if (s1.length() != s2.length()) {
        throw new IllegalArgumentException();
    }

This is a generic IllegalArgumentException with no message. You might want to add a message so that it is clearer to the caller what went wrong; it would also help the debugging (imagine a bigger application).

To resemble what is done by the existing Objects class, you could rename your validating method to requireSameLength and make it return the length:

    private static int requireSameLength(String s1, String s2) {
        if (s1.length() != s2.length()) {
            throw new IllegalArgumentException();
        }
        return s1.length();
    }

This has the advantage that the method name is now self-documenting and it is returning the correct value, which means we can chain the result (just like requireNonNull, which returns the non-null value or throws an exception).

import java.util.Optional; is not needed, you aren't using it.

Consider making the class final since it looks like a utility class (only public static methods).

If you're using Java 8, you could write this a bit shorter using the Stream API:

    public static int compute(String s1, String s2) {
        int length = requireSameLength(s1, s2);
        return IntStream.range(0, length).map(i -> s1.charAt(i) == s2.charAt(i) ? 0 : 1).sum();
    }

    private static int requireSameLength(String s1, String s2) {
        if (s1.length() != s2.length()) {
            throw new IllegalArgumentException();
        }
        return s1.length();
    }

It maps each index to 0 or 1 based on whether the two input Strings have equal characters at that index, and sums the result.
{ "domain": "codereview.stackexchange", "id": 19812, "tags": "java, algorithm, programming-challenge, edit-distance" }
Are Nilaparvata nymphs winged?
Question: Source: nbair.res.in I'm wondering if the first individual on the left is a nymph. Even if not do Nilaparvata sp have wings in nymphal stage? I'd appreciate any authentic source for reference. Answer: The family-level identification I had given I now see was unnecessary; the Wikipedia site for an economically important member of the genus (there called the Brown planthopper) is here: https://en.wikipedia.org/wiki/Brown_planthopper. The species covered there has long- and short-winged forms; other members of the genus are probably similar. All of the individuals in the photograph above are adult. A late-instar nymph will have wing pads (not yet wings) which are fused to the thoracic segment to which they're attached (a life-cycle diagram with drawings of two nymphal instars -- there are probably more -- is found here: http://www.cpsskerala.in/OPC/pages/ricePestbrownplant.jsp [the eighth slide]). So nymphs and short-winged adults won't be flying out of danger -- they'll be hopping away.
{ "domain": "biology.stackexchange", "id": 7177, "tags": "species-identification, entomology" }
Heterogeneous ion solution
Question: I am a college student and for my chemistry of solutions class, I had a lab where we had to titrate orange juice to check the calcium content. However, the experimental results are 20% off the labelled calcium content, and in my report I have to provide a "Tentative explanation of the discrepancy." Is it possible that the orange juice was not completely homogeneous? What I mean is that maybe some of the calcium and other 2+ ions were not well mixed and there were more/less in our sample. Is that possible or are ions always equally dispersed throughout the solvent when in solution? Answer: The calcium amount may change from one fruit to the next one, or from one day to another one, or from one field to another one. 20 % is not a significant difference between such a measurement and another one. And this may happen for any constituant, not specially for calcium.
{ "domain": "chemistry.stackexchange", "id": 13538, "tags": "solubility, solutions, ions, titration" }
Critique of Cardinal Direction Enum
Question: I'm working on a simple game in which I need to track the cardinal direction of an object. I experimented with using the enum's ordinal value, as well as using switches for the rotation, but both seemed wrong. This is what I ended up with. Is the following an adequate solution?

    // Defines cardinal direction
    public enum Direction {

        NORTH(0) {
            @Override
            public String getMessage() {
                return getDegrees() + " degrees due north";
            }
        },
        EAST(90) {
            @Override
            public String getMessage() {
                return getDegrees() + " degrees due east";
            }
        },
        SOUTH(180) {
            @Override
            public String getMessage() {
                return getDegrees() + " degrees due south";
            }
        },
        WEST(270) {
            @Override
            public String getMessage() {
                return getDegrees() + " degrees due west";
            }
        };

        private final int degrees;

        public abstract String getMessage();

        private Direction(final int degrees) {
            this.degrees = degrees;
        }

        public int getDegrees() {
            return degrees;
        }

        private static final Map<Integer, Direction> lookup = new HashMap<Integer, Direction>();

        static {
            for (Direction d : EnumSet.allOf(Direction.class))
                lookup.put(d.getDegrees(), d);
        }

        public static Direction get(int degrees) {
            return lookup.get(degrees);
        }

        public Direction rotateRight() {
            return Direction.get((degrees + 90) % 360);
        }

        public Direction rotateLeft() {
            return Direction.get((degrees + 270) % 360);
        }
    }

Answer: Conceptually it is a great solution. I can suggest some changes, but in reality they are minor. It would also be 'fun' to tune it a bit, but in the bigger picture the changes would be minuscule... still... My biggest observation is that the constructors can be simplified a lot.
Instead of having each Enum member create a unique method implementation (and that method builds a String each time - though the compiler will probably fix that), you could simplify it a lot with:

    private final int degrees;
    private final String message;

    private Direction(final int degrees, final String name) {
        this.degrees = degrees;
        this.message = degrees + " degrees due " + name;
    }

    public int getDegrees() {
        return degrees;
    }

    public String getMessage() {
        return message;
    }

With the change to the constructor, you have:

- no abstract methods to implement
- a simple message which is a constant, instead of re-creating it each time you call getMessage()
- enums that are initialized in a simpler way:

        NORTH(0, "North"),
        EAST(90, "East"),
        SOUTH(180, "South"),
        WEST(270, "West")

Apart from this, the code is pretty good, but there is a way you can 'play' with the lookup system to use the ordinals only.... consider the following code (I'll leave it to you to figure out.... ;-):

    public static Direction get(final int degrees) {
        int ordinal = ((degrees % 360) / 90) - ( 4 * (degrees % 90));
        return ordinal < 0 ? null : values()[ordinal];
    }
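For the curious, the arithmetic in that get() puzzle can be checked mechanically. Python is used here only as a scratch pad (its // matches Java's integer / for the non-negative inputs exercised below):

```python
def ordinal(degrees):
    """Transliteration of the Java get() arithmetic, for non-negative input."""
    o = ((degrees % 360) // 90) - (4 * (degrees % 90))
    return None if o < 0 else o

# Multiples of 90 map straight onto the declaration order NORTH..WEST...
assert [ordinal(d) for d in (0, 90, 180, 270, 360, 450)] == [0, 1, 2, 3, 0, 1]

# ...while anything else goes hugely negative, i.e. "no such direction".
assert ordinal(45) is None and ordinal(100) is None
```

The trick: for exact multiples of 90 the second term vanishes and the first term is the enum index, while any other input makes the subtracted term dominate and the result negative.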
{ "domain": "codereview.stackexchange", "id": 5538, "tags": "java, enum, lookup" }
Would an astronaut in this spacecraft feel weightless?
Question: A spacecraft is placed in orbit around Saturn so that it is Saturn-stationary (the Saturn equivalent of geostationary – the spacecraft is always over the same point on Saturn’s surface on the equator). Information the question provided: mass of Saturn = $5.68\times 10^{26}\ kg$, mass of spacecraft = $2.0 \times 10^{3}\ kg$, period of rotation of Saturn = $10$ hours $15$ minutes. Information I calculated: radius of orbit = $1.1 \times 10^8\ m$. Now part d) Would an astronaut in this spacecraft feel weightless? Explain your answer. I am unsure how to answer this question. I guess I first need to define "weightlessness"? From what I know the sensation of weightlessness is the absence of normal force? Or is it the sensation that you weigh less than your normal weight? I also calculated $g = 3.13\ m/s^{2}$, if that's of any use. Answer: Weightlessness is when your proper acceleration is zero. The proper acceleration is an important concept in general relativity because it is a scalar invariant but, despite the fearsome reputation for complexity that general relativity enjoys, the proper acceleration has a simple physical interpretation. To determine your proper acceleration simply drop an object and measure the acceleration of the object relative to you. Then your proper acceleration is the negative of the acceleration you've just measured. For example if I drop a pen it accelerates away from me at $a = -9.81 \mathrm{m/s}^2$ (the minus sign means the acceleration is downwards) so my proper acceleration is $a = +9.81 \mathrm{m/s}^2$. Suppose now I leap off a cliff and I drop a pen. The pen and I fall at the same rate (ignoring air resistance) so the pen remains stationary alongside me. In this case my proper acceleration is zero, and therefore I would be weightless. A less fatal example would be an astronaut aboard the International Space Station. The ISS and all its contents orbit the Earth with the same period.
If an astronaut aboard the ISS drops a pen it remains stationary beside them because the astronaut and pen are both orbiting the Earth in the same orbit. Hence the astronauts aboard the ISS are weightless (as countless YouTube videos testify). So in your case you need only consider whether the spaceship, the astronaut aboard it, and any items the astronaut drops are following the same orbit.
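The numbers quoted in the question can be checked from the synchronous-orbit condition; a small sketch (G and the Saturn data come from the question, so expect small rounding differences):

```python
import math

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
M = 5.68e26                # mass of Saturn, kg (from the question)
T = (10 * 60 + 15) * 60    # rotation period 10 h 15 min, in seconds

# Saturn-stationary orbit: gravity supplies the centripetal force,
#   G M / r^2 = (2 pi / T)^2 * r   =>   r^3 = G M T^2 / (4 pi^2)
r = (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)

# Local gravitational field strength at that radius.
g = G * M / r**2

print(f"r = {r:.3g} m, g = {g:.3g} m/s^2")   # roughly 1.1e8 m and 3.2 m/s^2
```

Note that the spacecraft's mass cancels out of the orbit condition, which is one way to see why everything inside the craft, astronaut and dropped pen alike, follows the same orbit.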
{ "domain": "physics.stackexchange", "id": 73035, "tags": "homework-and-exercises, newtonian-mechanics, reference-frames, free-body-diagram" }
Are numbers types and what is "Number"?
Question: I was pondering about what numbers are. It seems like a number is a data type. I mean, like Maybe in Haskell. Because, for instance, one on its own means nothing to me. However, one apple tells me how many apples one is talking about. Thus, numbers are quite abstract. If somebody asks you "how is it going" you can't answer "10". 10 what? But you could say 10 out of 10. But this is another abstraction. So, what I want to say is that a number is an abstraction. It can only be used to quantify other things. Thus, very often people say that "Number" is a type or set or category, etc. But then if a concrete number is an abstraction, what is Number? I hope you've got my idea and question, and sorry if I used some terms incorrectly. I'm trying to study CS on my own and do make lots of mistakes. Answer: So, you have to be very careful to distinguish values, and the types of those values. We say $v : T$ when $v$ is a value with type $T$.

The type Number

When people say "Number" is a type, usually they're referring to the type of natural numbers. But each number isn't a type, it's a value, and $Number$ is their type. So we can say $1 : Number$, $2 : Number$, etc. Notation isn't always consistent. Sometimes people will refer to $Number$ as a type class that contains $Nat, Int, Float$ etc. But that's more complicated. Now, as for each number being a type, in a language like Haskell (or any Hindley-Milner language), this isn't the case, because numbers are values, and types and values are separate. But there are a few ways we can look at this.

Numbers as singletons

Using dependent types or GADTs in Haskell, Agda, Idris, or other languages with advanced types, you can form a type like this (using Agda-like pseudocode):

    data TypeNat : Nat -> Set where
      TZero : TypeNat Zero
      TSucc : {n : Nat} -> TypeNat n -> TypeNat (Succ n)

For any number n, TypeNat n is a type, containing exactly one value, the TypeNat representation of n.
So here, the number isn't the type, but there's a 1:1 correspondence between the numbers and the types. Numbers to Quantify other things We can use numbers to quantify things, but in a dependent-type system, usually this is done by indexing types. So you can have a type like Vec a n, which is the type of Vectors containing exactly n elements of type a. So here, our types are indexed by numbers, but the number itself is not a type.
{ "domain": "cs.stackexchange", "id": 7164, "tags": "data-structures, mathematical-foundations, abstract-data-types" }
How would i implement position and heading readings into my SLAM system?
Question: I am trying to build a low-cost SLAM system with an MPU-6050 and GY-271 (magnetometer). Currently, I have a robot with an Arduino that collects the sensor data and a Raspberry Pi that (hopefully) will do the SLAM calculations. I want my robot to be able to use all three sensor readings in SLAM to create a 2D map of the environment. However, considering that I want a 2D map, I will not need all the axis readings, correct? I read another post on here where one of the answers said that only the yaw from the gyroscope, and the x and y from the accelerometer, would be needed. My question is, how would I implement this into my SLAM robot? I was thinking of passing the accelerometer and odometry readings through a Kalman filter on the Arduino and then the same for the gyro and magnetometer readings. Would that be correct? Would I also need to use all the axis (x, y, and z) readings from the magnetometer? Or just one or two axes? Thanks. Answer: Based on your comments thread, I think what you want is a Kalman filter. I think this AHRS implementation will be helpful for you. If you search "mpu6050 ahrs" you'll also find a dozen or so other implementations to look through. If you want you can modify it so that it explicitly sets the Z coordinate to zero, but it should actually figure that out automatically. To map the room (and do SLAM), you will need a sensor which can tell you how far away the walls are. Sonar tends to be a good low-cost entry point, though it has plenty of implementation difficulties.
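The answer recommends a proper Kalman/AHRS filter; as a much smaller illustration of the same fusion idea in the 2D case (integrate the gyro yaw rate, then nudge the estimate toward the magnetometer heading), here is a complementary-filter sketch. The gain, sample period, and the fake sensor values are all invented for the example:

```python
ALPHA = 0.98   # gyro weight: trust the gyro at short timescales
DT = 0.01      # sample period, s

def step(yaw_est, gyro_z, mag_heading):
    """One update: integrate the gyro, then pull toward the magnetometer."""
    predicted = yaw_est + gyro_z * DT
    return ALPHA * predicted + (1 - ALPHA) * mag_heading

# A stationary robot: the magnetometer reads a constant 1.0 rad heading and
# the gyro has a small constant bias. Pure gyro integration would drift
# without bound; the filtered estimate settles near the true heading.
yaw = 0.0
for _ in range(2000):
    yaw = step(yaw, gyro_z=0.002, mag_heading=1.0)

assert abs(yaw - 1.0) < 0.05
```

A real implementation additionally needs angle wrapping (headings live on a circle) and tilt compensation of the magnetometer, which is part of what a full AHRS like the one linked in the answer handles.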
{ "domain": "robotics.stackexchange", "id": 1475, "tags": "slam, imu, accelerometer, gyroscope, magnetometer" }
Why is it "bad taste" to have a dimensional quantity in the argument of a logarithm or exponential function?
Question: I've been told it is never seen in physics, and is considered "bad taste", to have a dimensional quantity as the argument of a logarithm or in the exponent of $e$. I can't seem to understand why, although I suppose it would be weird to raise a dimensionless number to the power of something with a dimension. Answer: It's not "bad taste", it's uncalculable to the point of meaninglessness. The whole point of dimensional analysis is that there are some quantities that are not comparable to each other: you can't decide whether one meter is bigger or smaller than ten amperes, and trying to add five volts to ten kelvin will only yield inoperable nonsense. (For details on why, see What justifies dimensional analysis? and its many linked duplicates on the sidebar on the right.) This is precisely what goes on with, say, the exponential function: if you wanted the exponential of one meter, then you'd need to be able to make sense of $$ \exp(1\:\rm m) = 1 + (1\:\rm m) + \frac12(1\:\rm m)^2 + \frac{1}{3!}(1\:\rm m)^3 + \cdots, $$ and that requires you to be able to add and compare lengths with areas, volumes, and other powers of position. You can try to just trim out the units and deal with it, but keep in mind that it needs to match, exactly, the equivalent $$ \exp(100\:\rm cm) = 1 + (100\:\rm cm) + \frac12(100\:\rm cm)^2 + \frac{1}{3!}(100\:\rm cm)^3 + \cdots, $$ and there's just no invariant way to do it. Now, to be clear, the issue is much deeper than that: the real problem with $\exp(1\:\rm m)$ is that there's simply no meaningful way to define it in a way that will (i) be independent of the system of units, and (ii) keep a set of properties that will really earn it the name of an exponential.
If what one wants is a simple clear-cut way to see it, a good angle is noting that, if one were to define $\exp(x)$ for $x$ with nontrivial dimension, then among other things you'd ask it to obey the property $$ \frac{\mathrm d}{\mathrm dx}\exp(x)=\exp(x), $$ which is dimensionally inconsistent if $x$ (and therefore $\mathrm d/\mathrm dx$) is not dimensionless. It's also been noted in the comments, and indeed in a published paper, that you can indeed have Taylor series over dimensional quantities, by simply setting $f(x) = \sum_{n=0}^\infty \frac{1}{n!} \frac{\mathrm d^nf}{\mathrm dx^n}(0)x^n$, and that's true enough. However, for the transcendental functions we don't want any old Taylor series, we want the canonical ones: they're often the definition of the functions to begin with, and if someone were to propose a definition of, say $\sin(x)$ for dimensionful $x$, then unless it can link back to the canonical Taylor series, it's simply not worth the name. And, as explained above, the canonical Taylor series have fundamental scaling problems that render them dead in the water. That said, for logarithms you can on certain very specific occasions talk about the logarithm of a dimensional quantity $q$, but there you're essentially taking some representative $q_0$ and calculating $$\log(q/q_0)=\log(q)-\log(q_0),$$ where in making sense of the latter you require that the two numerical values be in the same units ─ in which case the final answer is independent of the unit itself. If the situation also allows you to drop additive constants, or incorporate them into something else (such as when solving ODEs, for example, with a representative case being the electrostatic potential of an infinite line charge, or when doing plots in log scale) then you might get rid of the $\log(q_0)$ in the understanding that it will come out in the wash when you come back to dot the i's. 
However, just because it can be done in the specific case of the logarithm, which is unique in turning multiplicative constants into additive ones, doesn't mean you can use it in other contexts ─ and you can't.
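The scaling argument can be made concrete with a minimal Python sketch (this is a toy, not a real units library such as pint): a quantity carries a length exponent, addition refuses mismatched dimensions, and the exponential series fails at its very first `+`.

```python
# Minimal dimension tracking: `length_dim` is the exponent of length,
# so meters have length_dim=1, areas length_dim=2, and so on.
class Quantity:
    def __init__(self, value, length_dim=0):
        self.value = value
        self.length_dim = length_dim

    def __add__(self, other):
        # Addition is only defined between quantities of the same dimension.
        if self.length_dim != other.length_dim:
            raise TypeError("cannot add quantities of different dimensions")
        return Quantity(self.value + other.value, self.length_dim)

    def __mul__(self, other):
        # Multiplication adds the exponents: m * m = m^2.
        return Quantity(self.value * other.value,
                        self.length_dim + other.length_dim)

one_meter = Quantity(1, length_dim=1)
area = one_meter * one_meter   # fine: 1 m^2
# exp(1 m) = 1 + (1 m) + ... already fails at the first addition:
try:
    Quantity(1) + one_meter
except TypeError as e:
    print(e)   # cannot add quantities of different dimensions
```

Note that rescaling `one_meter` to `Quantity(100, length_dim=1)` (i.e. 100 cm) changes nothing: the failure is structural, not numerical.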
{ "domain": "physics.stackexchange", "id": 91978, "tags": "conventions, units, dimensional-analysis" }
Why is boiling point of hydrogen greater than of helium?
Question: If we compare the boiling points of hydrogen and helium using the molecular-weight criterion (both are nonpolar, so both have only London dispersion forces as intermolecular forces of attraction; by that reasoning, the one with the higher molecular weight should have the stronger intermolecular attraction), then helium should have the greater boiling point. But if we look at the boiling point data for H2 and helium, we find that H2 boils at approximately 20 K while He boils at approximately 4.3 K. Answer: Higher molecular weight is not the determining factor. Rather, the number of electrons that could be polarized and the volume of space over which they may be polarized are the key factors in dispersion forces. For species with similar structures, higher molecular weight goes along with more or larger atoms, thus more electrons and greater polarizability; but "monatomic" and "diatomic" are not really similar structures. Compared with helium, hydrogen has as many electrons (two), and the presence of two atoms instead of one allows an opportunity for polarization over more volume. So hydrogen will have more dispersion forces.
{ "domain": "chemistry.stackexchange", "id": 17024, "tags": "boiling-point" }
cmd_vel and move_base
Question: What are the conventions for the cmd_vel message produced by move_base? Is positive linear.x the forward or backward direction? Is positive angular.z clockwise or anti-clockwise as viewed from above? Originally posted by JediHamster on ROS Answers with karma: 995 on 2011-04-01 Post score: 4 Answer: You have to think in the local frame of the robot. I'll give a couple of examples in the 2D plane for ground robots: In the local robot base frame, the robot is always looking toward the infinity of the x-axis (no matter where it is located on the map). So if you command a Twist {linear.x=1, linear.y=0} the robot will go forward. Note that if we are working with a non-holonomic robot (like the Erratic robot) the linear.y has no meaning. On the other hand, if you work with a holonomic robot (like the PR2 base), linear.y means lateral movement. The angular.z is the yaw rotation, that is, the only possible rotation in the 2D plane where the robot stands (if we are talking about a ground robot in the plane). The angular.z velocity is in radians per second and positive anti-clockwise. So a Twist message with angular.z > 0 means turn left in the local frame. See details in: Standard Units of Measure and Coordinate Conventions Originally posted by Pablo Iñigo Blasco with karma: 2982 on 2011-04-01 This answer was ACCEPTED on the original site Post score: 10
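The sign conventions can be summarized in a small illustrative snippet (plain Python, not the actual geometry_msgs/Twist API — just a stand-in with the same field meanings):

```python
from dataclasses import dataclass

@dataclass
class Twist:
    """Stand-in for geometry_msgs/Twist, expressed in the robot's local base frame."""
    linear_x: float = 0.0   # + forward, - backward
    linear_y: float = 0.0   # + left strafe (holonomic bases only)
    angular_z: float = 0.0  # yaw rate in rad/s, + is anti-clockwise (turn left)

def describe(cmd: Twist) -> str:
    """Human-readable reading of a command, per the conventions above."""
    parts = []
    if cmd.linear_x > 0:
        parts.append("forward")
    elif cmd.linear_x < 0:
        parts.append("backward")
    if cmd.angular_z > 0:
        parts.append("turning left")
    elif cmd.angular_z < 0:
        parts.append("turning right")
    return ", ".join(parts) or "stopped"

print(describe(Twist(linear_x=0.5)))               # forward
print(describe(Twist(linear_x=0.5, angular_z=1)))  # forward, turning left
```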
{ "domain": "robotics.stackexchange", "id": 5258, "tags": "navigation, move-base" }
Finding preferred pair of $m$-sequence for Gold code
Question: I would like to find a preferred pair of $m$-sequences of degree 34 (not a multiple of 4) for a Gold code. However, I am lost when checking William Wesley Peterson's table in his Error-Correcting Codes, Appendix C. I first tried to fix the first entry for my first $m$-sequence. To find the other sequence: since $n = 34$ is even, then $$2^{\left(\frac{34+2}{2}\right)}+1 = 2^{18} + 1$$ or $$2^{\left(\frac{34-2}{2}\right)}+1 = 2^{16} + 1$$ But neither number maps to any entry in the table, so I cannot proceed to find the other $m$-sequence. I assume I should find the other $m$-sequence which has the equivalent root of $$r^{\left(2^{\left(\frac{34+2}{2}\right)}+1\right)}$$ or $$r^{\left(2^{\left(\frac{34-2}{2}\right)}+1\right)}$$ To elaborate, the following is what I consider an easy case of a preferred pair for a Gold sequence using Peterson's table and Gold's theorem, presented in this paper. I have no access to Dixon's book, therefore I don't know what the correct procedure is. The author applies the equation in the conjugate form of Gold's theorem in Gold's paper. Answer: Printed tables of irreducible polynomials of large degrees are usually incomplete because there are so many of them! What one can do is use various tricks of the trade to figure out a set of $68 = 2\times 34$ consecutive bits of one m-sequence from the other m-sequence (with known generator polynomial as taken from the Peterson and Weldon table) and then run the Berlekamp-Massey algorithm on these consecutive bits to get the shortest shift register that generates the desired m-sequence. If ${\bf s} = s_0, s_1, s_2, \ldots $ is one of the m-sequences, then the other m-sequence ${\bf t} = t_0, t_1, t_2, \ldots $ is related to ${\bf s}$ via $t_i = s_{qi}$ where $q = 2^{16}+1 = 65,537$. See the paper "Cross-correlation properties of pseudorandom and related sequences," Proc. 
IEEE, vol.68, pp.593-619, May 1980 (unfortunately behind IEEE's paywall, but copies can be found in the Internet) if you are unfamiliar with this idea. So, we crank up the LFSR that generates ${\bf s}$ and run it to produce the sequence ${\bf s}$, recording $$t_0 = s_0, ~~t_1 = s_{65537},~~ t_2 = s_{131074},~~ \cdots, t_{67} = s_{4390979}.$$ Once upon a time, cranking out 4 million+ bits would have taken some effort, but these days, it's no big deal. What next? Well, the Berlekamp-Massey algorithm can be viewed as a shift-register synthesis algorithm: it finds the shortest LFSR that can generate any given sequence. Thus, applying the Berlekamp-Massey algorithm to the first $68$ bits of $\bf t$ which we have just found gives us the LFSR that generates the m-sequence $\bf t$, and away we go!!
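The whole pipeline is easy to try at a toy scale. The sketch below uses degree $n = 6$ instead of 34 (the mechanics are identical, only the numbers are small): an LFSR for the primitive polynomial $x^6 + x + 1$ produces an m-sequence ${\bf s}$ of period 63, we decimate by $q = 5$ (coprime to 63) to get ${\bf t}$, and a textbook GF(2) Berlekamp-Massey recovers a length-6 register for ${\bf t}$ from $2n$ or more bits. (The polynomial and decimation here are chosen for illustration only; no claim is made that they form a preferred pair.)

```python
def lfsr_sequence(length):
    # m-sequence from the primitive polynomial x^6 + x + 1:
    # recurrence s[i] = s[i-5] XOR s[i-6], period 2^6 - 1 = 63.
    s = [0, 0, 0, 0, 0, 1]
    while len(s) < length:
        s.append(s[-5] ^ s[-6])
    return s

def berlekamp_massey(bits):
    """Return (L, c): length and connection polynomial coefficients c[0..L]
    (c[0] = 1) of the shortest LFSR over GF(2) generating `bits`."""
    n = len(bits)
    c, b = [0] * n, [0] * n
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:                          # discrepancy: correct c by a shifted b
            prev = c[:]
            for j in range(n - i + m):
                c[i - m + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, prev
    return L, c[:L + 1]

s = lfsr_sequence(63)                      # one full period of the m-sequence
t = [s[(5 * i) % 63] for i in range(20)]   # t_i = s_{q i} with q = 5
L, taps = berlekamp_massey(t)              # 20 >= 2n bits is enough
print(L)                                   # 6: the decimated sequence has degree 6 too
```

For the actual degree-34 problem the only changes are the generator polynomial (from the table), $q = 2^{16}+1$, and producing $68$ decimated bits, which requires cranking the LFSR out to index $67q \approx 4.4$ million, exactly as described above.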
{ "domain": "dsp.stackexchange", "id": 9536, "tags": "digital-communications, coding" }
slam kinect packages
Question: Hi Besides gmapping and RGBD SLAM, are there any other packages you can use for SLAM with the Kinect ? Thank you Originally posted by ap on ROS Answers with karma: 42 on 2013-06-19 Post score: 0 Answer: Vslam (http://www.ros.org/wiki/vslam), but I wasn't able to install it. Maybe you'll be luckier! There's also Hector Mapping (http://www.ros.org/wiki/hector_mapping), but I read it doesn't work well with the Kinect (you would need a real laser scanner). Originally posted by Zayin with karma: 343 on 2013-06-19 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 14628, "tags": "slam, navigation, kinect" }
[Error Msg] The root link_base has an inertia specified in the URDF, but KDL
Question: Hi guys, since I put the inertia value into the base link I keep getting the following error: The root link base_link has an inertia specified in the URDF, but KDL does not support a root link with an inertia. As a workaround, you can add an extra dummy link to your URDF. Now, I would create another link as suggested by the message, BUT in that case I must define a joint. Since the base_link can move with 6 DOF and the "floating" joint has been deprecated and is not usable, I don't know which constraints I should use for the joint. Any idea on how to get rid of the message? Originally posted by Andromeda on ROS Answers with karma: 893 on 2014-09-16 Post score: 2 Original comments Comment by filipposanfilippo on 2016-04-19: I am having the same issue. Could you please post or better explain your solution? Comment by Andromeda on 2016-04-19: Create a frame 'odom' and a frame 'base_link' or whatever you like. Then join them together with a fixed joint. The inertia must be put on the child frame, in this case 'base_link', otherwise you get the error. Comment by filipposanfilippo on 2016-04-19: This is the message that I get when running roslaunch rrbot_control rrbot_control.launch: The root link link1 has an inertia specified in the URDF, but KDL does not support a root link with an inertia. As a workaround, you can add an extra dummy link to your URDF. Any ideas? Comment by Andromeda on 2016-04-19: After link1 create a dummy frame which has mass and inertia properties. You fix them together. You move the inertia from link1 to link_dummy Comment by filipposanfilippo on 2016-04-20: Thank you, Andromeda! I have done the suggested modification. However, now my link1 has a gray color (the one set is orange). How can I fix this? Also in the robot tree, link1 is not appearing (it only shows link_dummy, link2, ...). Is this normal? Answer: Would using a fixed joint with [0 0 0 0 0 0] (xyz, rpy) not work? 
Originally posted by gvdhoorn with karma: 86574 on 2014-09-17 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Andromeda on 2014-09-17: No, I already tried it. Putting "fixed" will literally fix the robot to the root link. No movements are possible. "floating" would be the right choice but it has been deprecated and is not usable Comment by gvdhoorn on 2014-09-17: I think you're assuming the 'dummy link' should be the parent of base_link. I'm not sure whether that is actually a requirement? If you make your dummy a child with a fixed joint, wouldn't that work? Comment by Andromeda on 2014-09-17: You are a genius. Now I've got it... Thanks! Comment by Andromeda on 2014-09-17: But in that case I must move the inertia into the child frame. Will ROS take it into account? And will it move the parent link accordingly? I hope you can understand me.
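The accepted workaround (a dummy child link on a fixed joint, carrying the inertia) can be sketched in URDF like this. All link and joint names and every numeric value below are placeholders for illustration, not values from the thread:

```xml
<robot name="example">
  <!-- Root link: no <inertial> element, per the KDL requirement -->
  <link name="base_link"/>

  <!-- Dummy child link carries the mass and inertia (placeholder values) -->
  <link name="base_link_inertia">
    <inertial>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <mass value="10.0"/>
      <inertia ixx="0.1" ixy="0.0" ixz="0.0" iyy="0.1" iyz="0.0" izz="0.1"/>
    </inertial>
  </link>

  <!-- Fixed joint rigidly attaches the dummy to the root -->
  <joint name="base_link_inertia_joint" type="fixed">
    <origin xyz="0 0 0" rpy="0 0 0"/>
    <parent link="base_link"/>
    <child link="base_link_inertia"/>
  </joint>
</robot>
```

Because the joint is fixed with a zero offset, the dummy frame coincides with the root frame, so the dynamics are unchanged while KDL's "no inertia on the root link" restriction is satisfied.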
{ "domain": "robotics.stackexchange", "id": 19422, "tags": "ros" }
what is the one hot encoding for cancer data classification
Question: I am working on a project to classify a lung CT dataset using a CNN and TensorFlow. I know that the order for the category is cancer/no-cancer (only 2 classes). In more than one GitHub repository I see that they did one-hot encoding like the code below: if label == 1: label = np.array([0, 1]) elif label == 0: label = np.array([1, 0]) What confuses me is: 1 means cancer and 0 means no-cancer, so as I understand it, it should be: if label == 1: label = np.array([1, 0]) elif label == 0: label = np.array([0, 1]) But why did they do one-hot encoding like this? I don't know whether I am wrong or there is something I did not understand. Can anyone explain it for me, or give me a better way to do the encoding for my data, with code? Answer: Both ways would work equally well, but the way you see in the GitHub repos is more standard. The standard way of converting an integer label $y_i$ (from 0 to K-1) into a one-hot vector encoding is by creating an all-zero vector of length K and setting the element indexed by $y_i$ to 1, i.e. label_one_hot = np.zeros(K) label_one_hot[label] = 1
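The rule in the answer is easy to restate in plain Python (a numpy-free sketch of the same encoding):

```python
def one_hot(label, k):
    """All-zero vector of length k with a 1 at index `label`."""
    v = [0] * k
    v[label] = 1
    return v

# The repos' convention is exactly this standard mapping:
print(one_hot(1, 2))   # [0, 1]  -> cancer    (label 1)
print(one_hot(0, 2))   # [1, 0]  -> no-cancer (label 0)
```

Either ordering works as long as it is applied consistently: at prediction time, the argmax over the network's output must be mapped back to a class using the same convention that was used during training.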
{ "domain": "datascience.stackexchange", "id": 3776, "tags": "machine-learning, python, tensorflow" }
About free quarks and confinement
Question: I simply know that a single free quark does not exist. What is the reason that we cannot get a free quark? If we can't get a free quark then what is single-top-quark? Answer: A free quark is like the free end of a rubber band. If you want to make the ends of a rubber band free you have to pull them apart, however the farther apart you pull them the more energy you have to put in. If you wanted to make the ends of the rubber band truly free you'd have to make the separation between them infinite, and that would require infinite energy. What actually happens is that the rubber band snaps and you get four ends instead of the two you started with. Similarly, if you take two quarks and try to pull them apart the force between them is approximately independent of distance, so to pull them apart to infinity would take infinite energy. What actually happens is that at some distance the energy stored in the field between them gets high enough to create more quarks, and instead of two separated quarks you get two pairs of quarks. This doesn't happen when you pull apart a proton and electron because the force between them falls according to the inverse square law. The difference between the electron/proton pair and a pair of quarks is that the force between the quarks doesn't fall according to the inverse square law. Instead at sufficiently long distances it becomes roughly constant. I don't think this is fully understood (it certainly isn't fully understood by me :-), but it's thought to be because the lines of force in the quark-quark field represent virtual gluons, and gluons attract each other. This means the lines of force collect together to form a flux tube. By contrast the electron-proton force is transmitted by virtual photons and photons do not attract each other. Finally, top quarks are usually produced as a top anti-top pair. 
It is possible to create a single top quark, but it's always paired with a quark of a different type so you aren't creating a free quark.
{ "domain": "physics.stackexchange", "id": 5540, "tags": "particle-physics, standard-model, quarks, confinement, color-charge" }
Why quicksort instead of a B-tree construction?
Question: As far as I know (despite some variations which provide empirical average-case improvements) quicksort is worst case $O(n^2)$, with the original Hoare partition scheme having particularly bad behavior for already-sorted, reverse-sorted, and repeated-element lists. On the other hand a B-tree has $O(\log n)$ insertions, meaning worst-case $O(n\log n)$ to process an array of $n$ elements. There is also an easy optimization, memoizing the memory addresses of the lowest and highest nodes, that would make it possible to process sorted / reverse-sorted / repeated-element lists in $O(n)$. While there are more favored sorting algorithms than quicksort now (e.g. timsort), what originally favored its use? Is it its suitability for parallelization (plus in-place swaps and lower memory complexity)? Otherwise, why not just use a B-tree? Answer: A B-tree has one significant disadvantage on fast machines with deep cache hierarchies: it depends on pointers. So as the size grows, each access has a greater and greater risk of causing a cache miss or TLB miss. Effectively the constant factor becomes z*x, where x is the expected number of cache/TLB misses per access (L1-TLB misses are typically proportional to size of tree / total cache size) and z ~= the access time of the smallest cache, or main memory, that can hold the entire tree. On the other hand, the "average case" quicksort streams memory at maximum prefetcher speed. The only drawback here is that the average case also causes a stream to be written back. And after some partitions the entire active set sits in the caches and gets streamed even quicker. Both algorithms suffer heavily from branch mispredictions, but quicksort just needs to back up a bit, while a B-tree additionally needs to read in a new address to fetch from, as it has a data dependency which quicksort doesn't. Few algorithms are implemented as their pure textbook versions. Nearly all have some heuristics to fix their worst problems; Tim-sort is the exception, as it is built out of heuristics. Merge-sort and quicksort implementations often check for already-sorted ranges, just like Tim-sort. 
Both also fall back to an insertion sort for small sets, typically fewer than 16 elements; Tim-sort is built up from these small runs. The C++ std::sort is a quicksort hybrid with insertion sort, with an additional fallback for the worst-case behaviour: if the recursion depth exceeds twice the expected depth it switches to a heap-sort algorithm (this hybrid is known as introsort). The original quicksort used the first element of the array as the pivot; this was quickly abandoned for a (pseudo)random element, typically the middle one. Some implementations changed to median-of-three (random elements) to get a better pivot; more recently a median-of-5-medians (of all elements) was used, and the last I saw, in a presentation from Alexandrescu, was a median-of-3-medians (of all elements) to get a pivot close to the actual median (within 1/3 or 1/5 of the span).
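The heuristics above are simple to sketch. Below is a short Python quicksort (illustrative only, not the std::sort implementation) using a median-of-three pivot plus a three-way "fat" partition, which together avoid the classic quadratic blowups on sorted, reverse-sorted, and repeated-element inputs:

```python
def quicksort(a):
    if len(a) <= 1:
        return a
    # Median-of-three: middle value of the first, middle, and last elements.
    # On sorted or reverse-sorted input this picks the true median.
    pivot = sorted((a[0], a[len(a) // 2], a[-1]))[1]
    # Three-way partition: equal keys are never recursed into,
    # so repeated-element inputs cost only one pass per level.
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 1, 2, 3, 1]))   # [1, 1, 2, 3, 3]
```

Note this functional version allocates new lists, unlike the in-place partitioning that gives real quicksorts their cache-streaming behavior; it is only meant to show the pivot and partition heuristics.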
{ "domain": "cs.stackexchange", "id": 7825, "tags": "algorithms, sorting, efficiency" }
Why are rain clouds darker?
Question: I was taught in school that clouds are white due to the scattering of light. Since all rays are reflected it appears as white. But I am wondering about rain clouds. Why are rain clouds darker? Answer: Rain clouds are dark because the part of the cloud you see is in the shade. Clouds are white because they contain tiny water droplets that scatter light of all colors equally in all directions. "Scatters light of all colors equally in all directions" means "white". But if you put a layer of white stuff over another layer of white stuff, the top layer will scatter light from the Sun, reflecting a lot of it into space. That means there's less left to light up the layer underneath. Compared to the top layer, the bottom layer will look darker. For a cloud to produce rain, it needs to be fairly tall (thick). That means the upper parts of the cloud reflect away most of the sunlight, leaving the lower parts in the shade. If you're under the cloud, the lower part is all you see -- and it looks dark.
{ "domain": "physics.stackexchange", "id": 87557, "tags": "optics, visible-light, everyday-life, atmospheric-science, weather" }
Laser scan to probabilistic map based on occupancy grid
Question: Hi all, I would like to transform laser data (LaserScan) into a local 2D occupancy grid (OccupancyGrid). I tried to use "gmapping" and it worked well, but gmapping builds a global map while I only need the local one. Is there a simple solution (a package or a node) to build a local map around the robot? Could you please tell me what I should do in detail, or show me some references? Thank you. Originally posted by sadek on ROS Answers with karma: 58 on 2015-03-23 Post score: 0 Answer: Hi, I finally found a ROS node that can do that, but it can be run only on ROS Indigo. More information on this page: http://wiki.ros.org/local_map Originally posted by sadek with karma: 58 on 2015-03-24 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by jmrrosa on 2016-09-19: Hi! Can you please tell me how you run local_map? I tried to run the package but it tells me that there is no executable. Thanks!
{ "domain": "robotics.stackexchange", "id": 21216, "tags": "navigation, gmapping" }
Non uniform NC-hierarchy collapse?
Question: Jianer Chen's paper "Characterizing parallel hierarchies by reducibilities" Information Processing Letters 39(1991) 303-307 shows the theorem: If $NC^{k+1} = NC^k$ then we have $NC = NC^k$ His proof is for uniform circuit classes. What is for non uniform classes? Does the result still hold? Answer: Suppose that every circuit of depth $C\log^{k+1} n$ and size $Dn^\ell$ can be converted to an equivalent circuit of depth $C'\log^k n$ and size $D' n^{\ell'}$. Now suppose we are given a circuit of depth $C\log^{k+2} n$ and size $Dn^\ell$, and assume furthermore that it's a levelled circuit (making a circuit levelled only increases the size polynomially). Make a list of all nodes at levels $C\log^{k+1} n,2C\log^{k+1} n,\ldots,C\log^{k+2} n$. A node at level $\alpha C\log^{k+1} n$ is the value of a circuit of depth $C\log^{k+1} n$ and size at most $Dn^\ell$ whose leaves are nodes at level $(\alpha+1)C\log^{k+1}n$. We can replace each of these circuits by an equivalent circuit of depth $C'\log^k n$ and size $D' n^{\ell'}$. In total, the new circuit will have depth $C'\log^{k+1} n$ and size at most $DD' n^{\ell+\ell'}$. This shows that $NC^{k+1} = NC^k$ implies $NC^{k+2} = NC^{k+1}$.
{ "domain": "cstheory.stackexchange", "id": 2670, "tags": "concurrency, hierarchy-theorems" }
Would a black hole be slowed down as it passed through a large solid object?
Question: Say I get careless and drop a black hole. It falls through the Earth, passes through the core, and comes out on the other side. Idealised as point masses, the black hole would reach an equal height on the other side of the Earth, and if I phone up a friend on the other side perhaps they could catch it. But if the mass of the Earth retards its motion the black hole would lose energy and not gain enough height on the other side, so it would "bounce" a number of times and end up in the core for as long as the core continues to exist. If the black hole were an ordinary ballistic object it would encounter resistance. But on the other hand the black hole is really just a region of space that everything falls into and since spacetime gets stretched I intuitively expect there to be more "room" for everything as it falls in (spaghettification) so there is no impact. But on the other other hand this would cause ripples in the rest of the planet so I suppose that energy has to come from somewhere. That third point leads me to conclude that the black hole would lose energy and fail to reach an equal height on the other side, but because of the second point I don't see a mechanism for this energy loss. So what actually happens? Answer: The black hole would be slowed by dynamical friction. The black hole pulls material toward itself, but as it continues to move, that material ends up behind it. This makes an excess of material behind the black hole, and the gravitational pull of that excess mass slows the black hole. A secondary effect would be that as the black hole accretes material, it would pick up momentum from that material, also slowing it.
{ "domain": "physics.stackexchange", "id": 100425, "tags": "gravity, black-holes, projectile" }
Water-stable organoaluminum compounds in which aluminum is part of the aromatic ring
Question: The more common organoaluminum compounds, such as trialkylaluminum, decompose on contact with oxygen-containing molecules (water, ethanol, etc.). However, my guess is that organoaluminum compounds with an aluminum atom being part of the aromatic system might be more stable towards oxygen compounds, since the high affinity of aluminum for oxygen could be (at least partially) offset by the high energy barrier to destroying the aromatic ring. My questions are: is my guess correct, and is there any specific aromatic organoaluminum compound that is not destroyed by (or at worst reacts reversibly with) water, ethanol, THF, etc.? Answer: You're probably not going to get what you want putting aluminum into an aromatic ring. Aluminum and carbon have relatively poor pi overlap, so your ring will not have the full delocalization necessary for strong aromatic stabilization. And even strong aromatic stabilization may not be all that strong compared with aluminum preferring to bond with oxygen or nitrogen rather than carbon. Instead of trying to design a compound that somehow tames the reactivity of aluminum-carbon bonds, choose the right polar solvent. Protic solvents are bad. According to Wikipedia, organoaluminum compounds "readily form adducts with bases such as pyridine, THF and tertiary amines. These adducts are tetrahedral at Al." The named examples are aprotic and coordinate through nitrogen or oxygen.
{ "domain": "chemistry.stackexchange", "id": 13076, "tags": "organic-chemistry" }
Logic Programming - A definite program for the theory of groups
Question: I am studying theoretical computer science using Ayala's book "Fundamentos da Programação Lógica e Funcional" (the book is written in Portuguese), but the part I am studying right now is based on Lloyd's book "Foundations of Logic Programming". I am currently in the part about definite programs, trying to solve the following textbook exercise: Give a definite program for the theory of groups, specified with the axioms $x \cdot (y \cdot z) = (x \cdot y) \cdot z$, $x \cdot e = x$ and $x x^{-1} = e$. Prove that $e x = x$ and $x^{-1} x = e$. My doubt is in the beginning. I think that the terms should be formed using the constant $e$ (to represent the identity of the group), the unary function $i$ (to represent the inverse), the binary function $\cdot$ (to represent the group operation) and variables $x, y, z, \ldots$ But what should I use as a predicate symbol in my definite program? Thanks in advance. Answer: I am not sure it's the most elegant approach, but you can use a predicate symbol for equality. Let's denote this predicate symbol by $eq$. 
A definite program for the task would contain the following 3 definite program clauses for the 3 axioms: $eq(x \cdot (y \cdot z), (x \cdot y) \cdot z) \leftarrow $ $eq(x \cdot e, x) \leftarrow$ $eq(x \cdot i(x), e) \leftarrow$ I would also add the following clauses to operate easily with the idea of equality: $eq(x, x) \leftarrow$ $eq(x, y) \leftarrow eq(y, x)$ $eq(x, y) \leftarrow eq(x, z), eq(z, y)$ $eq(x \cdot y, x' \cdot y') \leftarrow eq(x, x'), eq(y, y')$ $eq(i(x), i(x')) \leftarrow eq(x, x')$ Then, proving $x^{-1} x = e$ means proving the goal $G_0: \ \leftarrow eq(i(x) \cdot x, e)$, which can be done as sketched below (I switched to traditional group notation): Obtain $eq(e, (x^{-1}x) (x^{-1}x)^{-1})$ Obtain $eq(x^{-1}x, x^{-1}(x x^{-1}) x)$ and hence obtain $eq((x^{-1}x) (x^{-1}x)^{-1}, (x^{-1}x) (x^{-1}x) (x^{-1}x)^{-1})$ Obtain $eq((x^{-1}x) (x^{-1}x) (x^{-1}x)^{-1}, x^{-1} x) $ Combine the results from steps 1, 2 and 3. Finally, proving $e \cdot x = x$ means proving the goal $G_1: \ \leftarrow eq(e \cdot x, x)$, which can be done as sketched below (I switched to traditional group notation): Obtain $eq(ex, xx^{-1}x)$ Use what we just proved, $x^{-1} x = e$, to obtain $eq(xx^{-1}x, xe)$. Obtain $eq(xe, x)$ and, combining that with the two previous steps, the result will follow.
{ "domain": "cs.stackexchange", "id": 20758, "tags": "group-theory, logic-programming" }
can not save map with octomap_server / octomap_saver from rgbdslam ROS Fuerte
Question: Hi, I have seen on this ROS Answer topic that someone already had trouble saving maps from rgbd slam using octomap. I have exactly the same problem, I click on "Graph" -> Send Model (ctrl+M) inside the rgbdslam window, the model and the map seem to be sent correctly (30 out of 30 nodes, for example), but when I run "rosrun octomap_server octomap_saver mymap.bt" there is only 1 node saved to the map file (so empty map in octovis). Let me explain my setup : rgbdslam from the wiki svn, adapted with many efforts to be used in fuerte and built with rosmake. I can capture a scene (by pressing space), but no map can be saved with octomap_saver as said above. I have also set up the octomap_mapping.launch file accordingly (fixed frame "openni_camera" and "cloud_in" remapped to "/rgbdslam/batch_clouds"). I am also using the experimental branch of octomap. Here are the steps that I did to build it in ROS Fuerte: downloaded mapping_msgs from svn (ros wiki) downloaded point_cloud_perception from svn (ros wiki) downloaded geometric_shapes_msgs from svn (ros wiki) downloaded octomap_mapping-experimental from svn https://alufr-ros-pkg.googlecode.com/svn/branches/octomap_mapping-experimental/ Error : /home/micmac/fuerte_workspace/octomap_mapping-experimental/octovis/src/octovis/TrajectoryDrawer.cpp:63:5: erreur: ‘GLUquadricObj’ was not declared in this scope > Edited /home/micmac/fuerte_workspace/octomap_mapping-experimental/octovis/src/octovis/TrajectoryDrawer.h to add #include <GL/glu> > and added GL and GLU to target_link_libraries to octomap_mapping-experimental/octovis/CMakeLists.txt Error: /home/micmac/fuerte_workspace/octomap_mapping-experimental/octomap_ros/src/conversions.cpp:45:114: erreur: explicit instantiation shall not use ‘inline’ specifier [-fpermissive] Error: /home/micmac/fuerte_workspace/octomap_mapping-experimental/octomap_ros/src/conversions.cpp:46:125: erreur: explicit instantiation shall not use ‘inline’ specifier [-fpermissive] > Edited 
/home/micmac/fuerte_workspace/octomap_mapping-experimental/octomap_ros/src/conversions.cpp to remove 'inline' specifier on both lines 45 and 46 Then rosmake octomap_mapping-experimental : Built 39 packages with 0 failures So both RGBDSLAM and octomap_mapping-experimental seem to run fine, but I can't save any map using "rosrun octomap_server octomap_saver mymap.bt" I have also tried to publish a static transform from /rgbdslam/batch_clouds to octomap_server by adding this line to octomap_mapping.launch: I should also mention that I am using the Asus Xtion Pro Live cam, so the topics may be a little different here and there (camera_rgb_optical_frame instead of openni_rgb_optical_frame), but I also tried all the required changes without success. Do you have any advice on how to save a map from rgbdslam with octomap_mapping? I've been spending months on this without success so far... Originally posted by micmac on ROS Answers with karma: 141 on 2012-10-01 Post score: 2 Original comments Comment by AHornung on 2012-10-02: Do you really need the experimental branch of octomap_mapping? I think it's pretty outdated and not maintained by anyone... Comment by micmac on 2012-10-02: Well, I stuck to what is said in the wiki. Furthermore, I have read somewhere (cannot remember the url, sorry) that the experimental branch includes some transforms or other required stuff in order for the communication between rgbdslam and octomap server to work. Comment by Felix Endres on 2012-10-25: Could you update from the repository and see whether it works now? Answer: **2nd Edit: The latest release of my rgbdslam package includes the ability to compute the octomap internally. You can save it directly from the gui or via ros service call. See the wiki or the readme file for instructions.** Edit: I found a bug in rgbdslam, where the tf frames of the old openni_camera driver are hardwired in the code. 
Because of this, octomap_server will not find a transformation between /map and the point cloud frame when using openni_launch. I'll try to release a fix soon. This sounds like your settings for the interplay between rgbdslam and octomap_server are incorrect. The fixed frame should not be openni_camera, but whatever you set as the parameter "fixed_frame_name" for rgbdslam. rxgraph lets you easily see whether the rgbdslam and octomap_server nodes are communicating on the same topic. The point clouds streamed to the octomap server need a lot of bandwidth and a lot of processing by the octomap server. Make sure your computer is capable of keeping up, or adapt rgbdslam's parameter "send_clouds_rate". Using the Xtion Pro is fine; you only need to adapt the input topics of rgbdslam, which seems to work, as you see the cloud correctly in rgbdslam's GUI. The transform publisher you use makes no sense. The frame id set in /rgbdslam/batch_clouds is the fixed_frame_name as mentioned above. Make sure you set that same frame name for the octomap_server (see rgbdslam/launch/octomap_server.launch for an example). Originally posted by Felix Endres with karma: 6468 on 2012-10-02 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by micmac on 2012-12-20: I have also corrected that in the code itself, but it didn't help. I have not tried yet with your new repository though... I will try it as soon as I can, thanks for updating the code :)
{ "domain": "robotics.stackexchange", "id": 11194, "tags": "ros, slam, navigation, octomap-mapping, ros-fuerte" }
Is it possible to determine which combination of three degenerate states an atom is in, not destroying the state?
Question: Suppose we have somehow determined that an atom's outer shell electron is in a $p$ state (i.e. with $l=1$). For example, waited enough for a cold boron atom to almost certainly come into electronic ground state. Since the $p$ shell has threefold degeneracy, the atom now is in some state like $$\newcommand\ket[1]{\left|#1\right\rangle} \alpha\ket{m=-1}+\beta\ket{m=0}+\gamma\ket{m=1}.$$ Having chosen some $z$ axis, can we find differences of phases and magnitudes between $\alpha$, $\beta$ and $\gamma$ without destroying or only slightly changing the atom's state? If yes, then how? If no, why? EDIT OK, it seems that by no-cloning theorem this can't be done perfectly. But what about approximate measurement of $\alpha,\beta,\gamma$? Answer: What you are trying to do is an instance of state estimation for a qutrit in some state $$|\psi⟩=\alpha|-1⟩+\beta|0⟩+\gamma|1⟩$$ using a single copy of the state. This is essentially impossible, due mostly to the no-cloning theorem. The most you can do is choose some orthogonal basis $\{|a⟩,|b⟩,|c⟩\}$, and measure along it, which will get you a hit on only one of the states, say $|a⟩$. The only information you will get from that is that $$⟨a|\psi⟩\neq0.$$ That is, you can conclude that the component of $|\psi⟩$ along some chosen vector is nonzero. However, this does not (and cannot) give you any information about the magnitude of this component, or about the existence or not of components along vectors orthogonal to $|a⟩$. It's important to note that this procedure will completely destroy the state, and this is pretty much a necessary condition for gaining any information about it. The case of a qubit is slightly easier to understand geometrically. There, if you measure on a basis $\{|a⟩,|b⟩\}$ and get $|a⟩$, then you know that the state is not on the state $|b⟩$; that is, you know a single point on the Bloch sphere that the state vector doesn't point along. 
However, this does not rule out points very close to that forbidden point (for which $⟨a|\psi⟩$ is small but nonzero). Quantum states really are very dark black boxes. If you have access to the box factory - that is, if you have access to multiple copies of states produced by the same preparation - then you can perform tomography on the boxes and get a fairly good idea of what the state is (including whether it's pure or mixed). The theory of quantum state tomography tells you how many different measurements you need to perform to estimate the state, depending on its dimensionality. However, it's important to note that each 'measurement' is itself performed on an ensemble of boxes and aims to estimate a given probability. If you only have access to a finite number of boxes, this will bound the accuracy of your tomography, in well-studied ways. If you only have a single box, though, there's very little indeed that you can say.
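The single-copy situation is easy to simulate. Below is a minimal sketch (the function and variable names are my own, not from any quantum library) of one projective measurement on a qutrit: you get one basis index out, and the most you learn is that the overlap with that basis vector was nonzero.

```python
import random

def measure_once(state, basis):
    """Simulate one projective measurement of `state` (a list of complex
    amplitudes) in an orthonormal `basis` (a list of basis vectors).
    Returns a single outcome index; the pre-measurement amplitudes are
    then gone, which is why one copy tells you almost nothing."""
    probs = []
    for b in basis:
        # Born rule: P(outcome) = |<b|psi>|^2
        amp = sum(bc.conjugate() * sc for bc, sc in zip(b, state))
        probs.append(abs(amp) ** 2)
    return random.choices(range(len(basis)), weights=probs)[0]

# One shot on alpha|-1> + beta|0> + gamma|1> in the computational basis:
psi = [0.6, 0.8j, 0.0]                       # example amplitudes
z_basis = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
outcome = measure_once(psi, z_basis)
```

A single call can only ever rule out outcomes whose overlap is exactly zero; estimating $\alpha,\beta,\gamma$ requires repeating this on many identically prepared copies, which is precisely the tomography scenario described above.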
{ "domain": "physics.stackexchange", "id": 19053, "tags": "quantum-mechanics, measurements" }
Isn't polynomial identity testing over arithmetic *expressions* trivial?
Question: Polynomial identity testing is the standard example of a problem known to be in co-RP but not known to be in P. Over arithmetic circuits, it does indeed seem hard, since the degree of the polynomial can be made exponentially large by repeated squaring. This question addresses the issue of how to work around this and keep the problem in randomized polynomial time. On the other hand, when the problem is initially presented (e.g. here), it is often illustrated over arithmetic expressions containing only constants, variables, addition, and multiplication. Such polynomials have total degree at most polynomial in the length of the input expression, and for any such polynomial the size of the output value is polynomial in the size of the input values. But since a polynomial of degree $d$ has at most $d$ roots, isn't this trivial? Just evaluate the polynomial over the rationals at any $d + 1$ distinct points and check whether the result is zero at each point. This should take only polynomial time. Is this correct? If so, why are arithmetic expressions without shared subexpressions often used as examples, when sharing is essential to the difficulty of the problem? Answer: That isn't known to be trivial. The polynomial ​$x \cdot y$ ​ has infinitely many roots. (When either variable is zero, the other variable won't affect the polynomial's value.)
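The standard randomized fix for the multivariate case is the Schwartz-Zippel lemma: a nonzero $n$-variate polynomial of total degree $d$ vanishes at a uniformly random point of $S^n$ with probability at most $d/|S|$. A sketch (function and parameter names are mine, not from any library):

```python
import random

def probably_zero(poly, num_vars, degree_bound, trials=30):
    """Schwartz-Zippel identity test for a black-box polynomial.
    If poly is identically zero this always returns True; otherwise each
    trial wrongly passes with probability <= degree_bound / |S|."""
    sample_set = range(2 * degree_bound + 2)  # |S| strictly above the degree
    for _ in range(trials):
        point = [random.choice(sample_set) for _ in range(num_vars)]
        if poly(*point) != 0:
            return False  # found a witness: definitely nonzero
    return True  # zero with high probability
```

The univariate "evaluate at $d+1$ points" idea fails exactly because a polynomial like $x \cdot y$ vanishes on both coordinate axes (infinitely many roots), while random points expose it almost surely.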
{ "domain": "cs.stackexchange", "id": 5882, "tags": "randomized-algorithms, polynomial-time, circuits" }
Given a monetary amount, calculate the equivalent change
Question: I have written a program that takes in some amount of money, and prints the equivalent change (starting at hundred dollars, to fifty, to twenty, down to the pennies). Here is the code: System.out.print("Enter amount of money: "); Scanner scan = new Scanner(System.in); double value = scan.nextDouble(); int valueIntegral = (int) value; int valueFractional = (int) Math.round(100 * value - 100 * valueIntegral); // Integral values int hundred = valueIntegral / 100; int fifty = (valueIntegral % 100) / 50; int twenty = ((valueIntegral % 100) % 50) / 20; int ten = (((valueIntegral % 100) % 50) % 20) / 10; int five = ((((valueIntegral % 100) % 50) % 20) % 10) / 5; int one = (((((valueIntegral % 100) % 50) % 20) % 10) % 5) / 1; // Fractional values int quarter = valueFractional / 25; int dime = (valueFractional % 25) / 10; int nickel = ((valueFractional % 25) % 10) / 5; int penny = (((valueFractional % 25) % 10) % 5) / 1; System.out.println(hundred + " hundred dollar bills\n" + fifty + " fifty dollar bills\n" + twenty + " twenty dollar bills\n" + ten + " ten dollar bills\n" + five + " five dollar bills\n" + one + " one dollar bills\n" + quarter + " quarters\n" + dime + " dimes\n" + nickel + " nickels\n" + penny + " pennies"); What I want to know is: without using loops or any iterative structures, is this an acceptable way to accomplish this task? What would be a more elegant way? (those chains of modulo division are ugly) Answer: Without loops or similar "advanced" structures, there isn't much you can do to simplify this. You could save the duplication in extra variables. It might look like this: int hundred = valueIntegral / 100; int remainderHundred = valueIntegral % 100; int fifty = remainderHundred / 50; int remainderFifty = remainderHundred % 50; int twenty = remainderFifty / 20; int remainderTwenty = remainderFifty % 20; int ten = remainderTwenty / 10; [...] You have to decide if it's more readable.
It's certainly less complex (and mistakes/typos are easier to catch), but it might take away some clarity. Misc: You have a bit too much vertical space; not every line needs its own paragraph. Your comments don't add all that much value; the variable names already tell me everything the comments do.
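The answer's point about mistakes being easier to catch is clearest in the loop form the question rules out, where each denomination appears exactly once. A sketch (in Python for brevity, since the idea is language-agnostic; the denomination table mirrors the Java above):

```python
def make_change(cents):
    """Greedy change for an amount given in whole cents.
    Returns (denomination, count) pairs, largest denomination first."""
    denominations = [10000, 5000, 2000, 1000, 500,  # bills, in cents
                     100, 25, 10, 5, 1]             # $1 bills and coins
    result = []
    for d in denominations:
        count, cents = divmod(cents, d)  # how many of d fit, then the rest
        result.append((d, count))
    return result
```

Each `divmod` step is exactly one quotient/remainder pair from the refactored Java, so the denomination value is written once per step instead of twice, leaving fewer places for a typo to hide.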
{ "domain": "codereview.stackexchange", "id": 18811, "tags": "java, beginner, change-making-problem" }
How can I modify a sound signal based on its MFCC
Question: I'm working on an NN algorithm with which I want to filter noise from sound signals. I have some clean speech sound samples and noise samples; by combining them I get the signal that I want to filter. I get the 24 MFCC values, which I feed into a NN, and the output is another 24 MFCC values. How do I modify the original sound sample using the MFCC values to actually see the difference in it? I've read about this topic here and he does something similar, but doesn't quite explain how he modifies the sound signal using those MFCC values. By the way, I'm using Keras and TensorFlow in a Jupyter notebook. Answer: The authors in the page you are linking are using the analog of an equaliser, but the more accurate representation of what they do is a vocoder. The typical application of a vocoder in music is to modulate one signal with the spectrum envelope of another. This sounds like this or this or this and so many other uses today; I hope these three examples are descriptive enough. In this particular application, the Neural Network (NN) is "learning" what is signal and what is noise, something that looks very much like adaptive filtering. Therefore, the signal path looks like this:

Input -+-----------------> Vocoder --> Output
       |                      ^
       |                      |
       +-- MFCC --> NN -------+

Hope this helps. EDIT: In terms of actually doing this, you would first have to set up the basic pipeline that splits the Input into frames of N samples, calculates the Discrete Fourier Transform of each frame, derives the spectrum, modulates the spectrum with the coefficients of the NN and then does the inverse DFT to go back to samples in the time domain. This basic pipeline is usually handled with one of the overlap-add or overlap-save methods, and Python's SciPy module includes everything you need to implement that. A good start would be the `fftpack` module.
In addition to this, you can search for Python implementations of these two methods and see if there is something there already that you could draw inspiration from. (e.g. this one). A detail here would be that if you want to use MFCC coefficients, you would have to split your DFT coefficients in the same bands (as the MFCC coefficients are derived from). This would cover the Vocoder part which accepts the spectrum of the signal at its input and the MFCC "weights" and applies the weights to the spectrum. The training / operation of the Neural Network can be handled with a number of other modules like those that you mention (or maybe even Shogun which is an older but very useful toolkit even for large problems). EDIT 2: The act of filtering each of the signal frames that result from an overlap-add or overlap-save method is a typical Digital Signal Processing operation. To do that, all you have to do is multiply the DFT spectrum of your signal with the frequency response of the filter. The operation is concluded when you apply the inverse transform and you go from the frequency domain to the time domain to playback your signal. What you get from the NN is basically the filter. So, the short answer is that you need to multiply your NN coefficients with the spectrum of a given sound frame. The difference here is that instead of using all the possible coefficients of the DFT spectrum, you divide the spectrum in 22 bands. In addition to that, these bands are not at equal spacing to each other. For example, in a typical filtering application, you could decompose your spectrum in 20 DFT coefficients and those would look like equally spaced band pass filters. But the Mel Scale or the Bark Scale are not dividing the frequency range at equal spaces. So, prior to applying the "weights" that you get from the NN to the signal, you need to construct and apply those bandpass filters that divide the spectrum according to the Mel or Bark scales.
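The per-band spectrum weighting described in EDIT 2 can be sketched for a single frame as follows. This is a minimal illustration, not the linked authors' code: `band_edges` and the gains are placeholders, and a real pipeline would compute mel-spaced edges and overlap-add successive frames (NumPy assumed available).

```python
import numpy as np

def apply_band_gains(frame, band_edges, gains):
    """Scale the DFT spectrum of one frame band by band, then invert.
    band_edges: rFFT bin indices delimiting the bands (len(gains)+1 of
    them); gains: one multiplier per band, e.g. the NN's output."""
    spectrum = np.fft.rfft(frame)
    for i, g in enumerate(gains):
        lo, hi = band_edges[i], band_edges[i + 1]
        spectrum[lo:hi] = spectrum[lo:hi] * g
    return np.fft.irfft(spectrum, n=len(frame))
```

With all gains equal to 1 the frame comes back unchanged, which is a handy sanity check before wiring in the network; substituting unequal, mel-spaced bands turns this into the vocoder-style filter described above.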
{ "domain": "dsp.stackexchange", "id": 7026, "tags": "python, mfcc" }
What's the correspondence between Feynman diagrams and field configurations?
Question: If I understand correctly, a Feynman diagram represents a finite set of "interactions", such as the exchange of a photon between two electrons. You can think of it as a graph in which the vertices are the interactions. Otherwise, each particle propagates freely from one vertex to the next (such that the inter-vertex propagator for an electron, for example, comes from the free Dirac equation). Then you integrate over the coordinates where these interactions can happen. For example, in the simple case of a single photon exchange, you'd integrate over the spacetime location $\mathbf{x_1}$ where the first electron could be when it emits the photon, as well as the position $\mathbf{x_2}$ where the second electron could receive it -- under the constraint that $\mathbf{x_1}$ and $\mathbf{x_2}$ are null-separated, since we're talking about a photon. Finally, you add up the integrals corresponding to every possible interaction graph, to get a net amplitude for each final state. This is what I (think I) understood from reading Feynman's 1949 paper, Space-Time Approach to QED. But then, I've also read that, in general, the path integral approach to QFT says to integrate over all possible field configurations. Which doesn't seem to me to be quite the same thing. The former is an integral over graphs with a discrete set of vertices, which is surely a smaller space than the set of all configurations, each of which has an independent value at every point in infinite spacetime. So, is there some constraint on the actual allowable field configurations that renders these two sets equivalent? If so, what manner of constraint? Or is it rather that the number of vertices in the diagram can be arbitrarily large, so that it can approach any continuous field configuration in the limit? In that case, I guess the simpler Feynman diagrams would actually be more like delta-functions when construed as field configurations. 
Yet we know that these contribute much more to the overall amplitude, so could we conclude that nature tends to behave more like delta-functions than delocalized clouds? Answer: A Feynman diagram and a field configuration are not the same thing. The field configurations are the integration field variables over which the path integral is evaluated. However, the path integral for each specific process can be used to generate all the Feynman diagrams to all orders for that process. It is done by adding source terms for the different fields to the action and then pulling out the part with the interaction terms using functional derivatives. So, what does a field configuration have to do with what the Feynman diagram represents? It is always dangerous and potentially misleading to try and give physical meaning to part of a mathematical process that computes a physical result. In this case, one has the infinite sum of Feynman diagrams on one hand in which fields interact via the vertices, and on the other hand one has the functional integration of the process as represented by the action for a given process over all field configurations. Perhaps a way to see this is that the functional integral "tests" each field configuration by applying it to the action under the constraints of the input and output fields to obtain a weight in the form of a probability amplitude for that field configuration. All the probability amplitudes for the field configurations are then added by the integral. In terms of the Feynman diagrams, these field configurations are represented by the basis functions (plane waves) in terms of which the propagators are computed. If you want to think in terms of the field configurations, then they are again "tested" by applying them to the structure of interactions as represented by the diagram, to obtain probability amplitudes.
In this case, the interactions are integrated over all positions leading to different probability amplitudes for each field configuration. So, in effect the sum over Feynman diagrams adds all the probability amplitudes for all the field configurations and all possible positions of the interactions in all the Feynman diagrams. It sounds like a rather complicated business if you try to think of it like this, but it is all included in the whole calculation process.
{ "domain": "physics.stackexchange", "id": 95566, "tags": "quantum-field-theory, feynman-diagrams, path-integral, perturbation-theory, interactions" }
Find the robot pose from three beacon measurements
Question: At every timestep my robot gets sensor measurements from a scanner that finds three beacons with known poses $B_1 = (B_{1x}, B_{1y})^T, B_2 = (B_{2x}, B_{2y})^T, B_3 = (B_{3x}, B_{3y})^T$. These measurements include the distance and angle to the beacon; the measurement for $B_1$ would be $m_{1t} = (d_{1t}, \omega_{1t})^T$, and equivalently for the other beacons. From these measurements I want to calculate the robot's pose containing its position and orientation $x_t = (p_{xt}, p_{yt}, \Theta_{xt})^T$. Calculating the position can be done by trilateration, but I can't seem to find a way to get the orientation of the robot from these measurements. Is there perhaps a model to calculate both in a single calculation? If not, a solution for finding the orientation of the robot would be great. Answer: If you have already computed the position of the robot, a single(!) angle towards a beacon is sufficient to get its orientation. The computation could look roughly like // b1 is position of beacon1, r is computed position of robot alpha = atan2(b1.y-r.y, b1.x-r.x) So if the robot is looking towards east, the angle measurement of B1 should be alpha. So, roughly, alpha minus the measured angle $\omega_{1t}$ is your robot's orientation.
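The whole computation can be sketched as below. This is only an illustration under assumed conventions (atan2 takes (Δy, Δx), angles in radians, the scanner angle measured counter-clockwise from the robot's forward axis); names are mine.

```python
import math

def robot_heading(robot_xy, beacon_xy, measured_angle):
    """Heading = world-frame bearing of the beacon minus the angle at
    which the robot's scanner sees it, wrapped into [-pi, pi)."""
    dx = beacon_xy[0] - robot_xy[0]
    dy = beacon_xy[1] - robot_xy[1]
    world_bearing = math.atan2(dy, dx)      # bearing of beacon in world frame
    theta = world_bearing - measured_angle  # subtract the body-frame angle
    return (theta + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
```

If the trilaterated position is noisy, averaging the headings implied by all three beacons (via a circular mean) would be more robust than trusting a single one.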
{ "domain": "robotics.stackexchange", "id": 1407, "tags": "mobile-robot, localization" }
I am doing a project on robotic surgeries! Can anyone help me and give me some details related to this topic?
Question: Can anyone help me, because I am doing a project on robotic surgeries and I would like someone to help me and advise me. I wonder if anyone could give me some data on tests he or she has run on a surgical robot... Thank you for your attention! Anything else will be much appreciated! Answer: There is a significant amount of information on this site: (http://allaboutroboticsurgery.com/surgicalrobots.html) That should give you the history and details about the past and current devices, which should hopefully allow you to fine-tune your questions and get more detailed, specific answers. Good luck. -Frank
{ "domain": "robotics.stackexchange", "id": 613, "tags": "sensors, robotic-arm, automatic" }
rqt custom imageView. Showing images from different cameras
Question: Hi to all, On several occasions I have created rqt plugins for monitoring different devices, and for interacting with robots graphically. In this occasion, i have a robot with eight video cameras and i want to visualize all of them in rqt. My idea is to create a menu in "Plugins", with a group named Cameras and within it, all the cameras of the robot. I mean: Plugins |_ Cameras |_ Top_camera |_ Right_camera |_ Left_camera |_ ... I already know that exists the image_view plugin for this type of things. But i need that an end user can choose between all of the cameras easily. When the user select, for example, Top_camera, it should appear an image_view screen with the current images of that camera. (In other words, i need an image_view with the image topic already selected. Totally transparent for the user). Is possible to do this, reusing the already implemented image_view plugin? I am modifying the plugin.xml file to launch multiple instances, unsuccessfully: <class name="rqt_image_view/ImageView" type="rqt_image_view::ImageView" base_class_type="rqt_gui_cpp::Plugin"> <description> Cameras </description> <qtgui> <group> <label>Cameras</label> <icon type="theme">folder-new</icon> </group> <label>PTZ top</label> <icon type="theme">camera-photo</icon> <statustip>Great user interface.</statustip> </qtgui> </class> I should need eight different image_view with the corresponding topic preselected. How can i do it? Thank you very much in advance ;) Originally posted by Jose Luis on ROS Answers with karma: 375 on 2015-01-13 Post score: 4 Answer: Finally I have achieved that i want. I have followed the suggestions of @130s and I have added other things that were necessary. Perhaps my solution is not the best one or there are another possibilities, but it works perfect. To obtain a custom ImageView plugin for each camera, I have done the following: I've copied the original code of imageView into my own package. 
I've modified some lines in the code for getting the image topic that I want for each camera. I've created one class for each camera. (This is necessary because you need to instantiate the classes in the "plugin.xml" file and the attribute must be different for each instance.) I only use one .ui file. All the classes are using the same file: --- CMakeLists.txt --- set(my_cameras_UIS src/my_cameras/image_view.ui) The main concept to understand is in plugin.xml: The Class name tag must be different for each instance. I have obtained this: Hopefully this will be useful for others. Originally posted by Jose Luis with karma: 375 on 2015-01-19 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Dirk Thomas on 2015-01-19: Instead of copying the code it would be much better to reuse the code of the existing plugin. If it does not provide the necessary API for that you might consider providing a pull request to enhance the existing plugin in a way that you can reuse its functionality for your custom plugin. Comment by Jose Luis on 2015-01-20: I know what you say is the best way to do this. But in this case, I needed a very concrete thing, with little changes in the code. It is not very reusable for others. I think that a pull request should be done with generic code. Thus, I have done it this way. I appreciate your contribution, thanks
{ "domain": "robotics.stackexchange", "id": 20560, "tags": "ros, rqt-image-view, image-view, rqt" }
EM force, blocking force carrier photons in a static electric field
Question: I am doing some personal research in this specific area and wanted to ask something related to photons and the EM force. Here is a thought experiment that doesn't add up to observed results; what's the flaw? Initial assumption: Virtual photons are the carriers of the EM force even in static situations. Initial setup: Two oppositely charged parallel metallic plates are set up like a simple capacitor; between the plates is a uniform static electric field (fringe fields are ignored). Based on the force carrier model, the exchange of virtual photons is responsible for this uniform electric field. A plate of non-conductive, non-polar material should be capable of blocking any photons. In this case, the force carriers would be unable to reach their destination and there would be no electric field. But in reality the electric field passes straight through the non-conductive plate. Where have I gone wrong? Cheers Zed. Answer: Virtual photons do not pass from one plate to the other. They leave a plate and then return to the plate and the electron or positive ion they left from - all in a very short time. The only time they leave a plate and do not return is when they have imparted some momentum to another charged particle (imagine a Feynman diagram for two electrons scattering off each other). The other misconception is that the non-conducting plate will block the virtual photons. It may be true that the plate would block real photons of certain energies, but virtual photons are not real photons - they can pass through non-conducting bodies. On the other hand, the virtual photons will also, sometimes, interact with the plate and cause the atoms of the plate to become small electric dipoles. These dipoles will become additional sources of virtual photons that will change the electric field between the plates. This is the way dielectrics in capacitors change the capacitance of this kind of capacitor.
The whole concept of virtual photons is really just an expression of how we are able to calculate, using Feynman diagrams, how a test charge will interact with an electromagnetic field. It is not a good idea to read too much "reality" into the presence of virtual particles. The results of the Feynman-diagram calculations are the really meaningful physical output; virtual photons are a somewhat hand-wavy tool for talking about those diagrams.
{ "domain": "physics.stackexchange", "id": 5000, "tags": "electromagnetism, electrostatics, electric-fields, virtual-particles" }
Why does a helium filled balloon move forward in a car when the car is accelerating?
Question: I noticed that when I had a helium filled, latex balloon inside of my car, it moved forward in the cabin as I accelerated forward. The faster I accelerated forward, the faster the balloon went from the back of the car to the front of the car. The balloon didn't have a string. This became a game with my 4 year old as we drove home. We figured out where the balloon would go based on how fast I accelerated, turned corners etc. I expected that it would act a lot like the water in a cup does, but it was the total opposite it seemed. What forces caused this behavior? I assumed it has something to do with the fluid dynamics in the closed cabin, but I can't figure it out. Answer: It travels forwards instead of backwards in an accelerating car for the same reason that a helium balloon travels upwards instead of downwards under the influence of gravity. Why is that? In an accelerating car, for all intents and purposes the acceleration can be considered a change in the amount and direction of gravity, from pointing straight down to pointing downwards and backwards. The balloon doesn't know and doesn't care if the acceleration is from gravity or from the acceleration of the car; it just tries to move in the direction it naturally moves, namely, against the direction of the acceleration. Thus, it moves forwards when you accelerate. Hopefully you find this explanation intuitively satisfying. Another more rigorous way to view the problem is through Lagrangian minimization. The balloon can be considered a low-density object embedded in a higher-density fluid constrained within the confines of the car. Under the influence of gravity pointing sideways, the total system potential energy decreases the farther forward the balloon is situated. Since the force is the gradient of the potential, the balloon will try to move forward.
{ "domain": "physics.stackexchange", "id": 100518, "tags": "forces, fluid-dynamics, reference-frames, fluid-statics, buoyancy" }
What is the origin of the 'source Gene ID' references given in the 'gene_presence_absence.csv' output of Roary?
Question: I am learning to use Roary for preparing a pan genome for some lactobacillus strains. In the 'gene_presence_absence.csv' output of Roary (which I view in excel), a 'source Gene ID' is given for each of the strains included in the analysis. The Roary web page states that this is: Presence and absence of genes in each sample, with the corresponding source Gene ID. Is this 'source Gene ID' generated by Roary, or does it relate back directly to the .gff files I used as input? If I take one of these 'source Gene ID' references and Grep in the directory containing my .gff files I get no matches, so I am assuming it is generated by Roary...? I had hoped that this reference would link back directly to the gene within the annotation file, but that seems not to be the case? Should/can I use this reference for anything? Answer: Looking at this again, I have now worked out (and should probably have noticed previously) that the 'source Gene ID' field from the 'gene_presence_absence.csv' file originates from the ID field of the .gff file for the sequence in question. You can therefore easily tie the results from the 'gene_presence_absence.csv' file back to the annotation from the sequence in question, and therefore to the location of the nucleotide sequence for the gene in question. So for instance, if you take a 'source Gene ID' from the gene_presence_absence file - say GEICCBEJ_00950 - and use it in a grep command referencing the genome from which it came, grep "GEICCBEJ_00950" my_annotation_file.gff, then you should see the line from the annotation corresponding to this sequence. The 'source Gene ID' is also referenced in the pan_genome_reference.fa output from ROARY, which gives "single representative nucleotide sequence from each of the clusters in the pan genome (core and accessory)".
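When checking many IDs, a small script makes the round trip easier than repeated grep calls. This is only a sketch under the assumption stated above (that the source Gene ID appears as the GFF `ID=` attribute, as Prokka writes it); the file and gene names are placeholders.

```python
def gff_lines_for_gene(gene_id, gff_path):
    """Return the GFF annotation lines whose ID attribute matches a
    Roary 'source Gene ID', e.g. GEICCBEJ_00950."""
    matches = []
    with open(gff_path) as gff:
        for line in gff:
            # Prokka-style GFF stores the locus tag as ID=<gene_id>;...
            if "ID=" + gene_id in line:
                matches.append(line.rstrip("\n"))
    return matches
```

Looping this over the 'source Gene ID' columns of gene_presence_absence.csv recovers, for each cluster, the coordinates of the representative gene in every genome.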
{ "domain": "bioinformatics.stackexchange", "id": 1871, "tags": "genome, genomics, bacteria" }
Robot localization with AMCL and EKF
Question: Hello, I have been following an example here with jackal. My goal so far was to understand how AMCL works. However, in his example he uses ekf_localization and AMCL is just a tool to output a cloud with the current robot's position. Here is the rqt_graph from when I ran the localization example (which is running gazebo, amcl and rviz). I thought that AMCL was a separate algorithm for localization and it did not need anything else. What am I missing? Why are they using both EKF and AMCL? Screenshot of rqt_graph: http://tinyurl.com/z73k3nj Originally posted by murdock on ROS Answers with karma: 75 on 2016-04-13 Post score: 1 Answer: AMCL is a global localization algorithm in the sense that it fuses LIDAR scan matching with a source of odometry to provide an estimate of the robot's pose w.r.t. a global map reference frame. It is common to use an EKF/UKF such as those implemented in the robot_localization package to fuse wheel odometry with an IMU (or other sensors) and create an improved odometry estimate (local pose estimation) for AMCL. I am not sure where the example you mention uses both AMCL and an EKF, but it is probably something similar. Another way to use an EKF together with AMCL is to fuse two global estimates, e.g. to fuse the pose provided by AMCL with the pose provided by another global localization method (e.g. beacon-based triangulation). If you haven't done it already, I suggest you have a look at the robot_localization wiki page and this talk from ROSCon 2015 Originally posted by al-dev with karma: 883 on 2016-04-13 This answer was ACCEPTED on the original site Post score: 5 Original comments Comment by murdock on 2016-04-14: Thank you very much for your answers! Comment by murdock on 2016-04-15: @al-dev How could I use ONLY a UKF for global localization in ROS using IMU, lidar, odometry data? Is it possible? Comment by al-dev on 2016-04-16: See this question. Comments relating to the EKF apply to the UKF as well.
Comment by murdock on 2016-05-15: Which parameters would you suggest I choose to test whether ekf_localization_node or ukf_localization_node was better? Comment by dinesh on 2021-08-07: Does this mean the robot position in a given map calculated from amcl can't be fused with odom and IMU for better global localization?
{ "domain": "robotics.stackexchange", "id": 24371, "tags": "localization, navigation, jackal, ekf-localization, amcl" }
Matching girls with boys without mutual attraction (variant of maximum bipartite matching)
Question: Let us say you have a group of guys and a group of girls. Each girl is either attracted to a guy or not, and vice versa. You want to match as many people as possible to a partner they like. Does this problem have a name? Is it feasibly solvable? Sounds hard to me... P.S. note that since the attraction is not necessarily mutual, the standard max-flow solution does not work. Answer: I think it is still the standard bipartite maximum matching problem, which can be solved by the algorithm of Hopcroft and Karp. You put an edge in the bipartite boys-girls graph iff there is mutual attraction. Then you maximize your matching and voilà. Notice that you would never assign a boy to a girl with one-sided attraction, since this does not increase your objective function (# of mutual attractions). If you want to maximize the number of happy people, then you set the weights of the complete bipartite boys-girls graph as follows: mutual attraction = 2 happy people, one-sided attraction = 1 happy person, otherwise = 0 happy people. You can then compute the maximum weighted bipartite matching.
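A compact sketch of the unweighted case, using Kuhn's augmenting-path algorithm (the simple O(V·E) cousin of Hopcroft-Karp); all names here are illustrative, not from any library:

```python
def max_mutual_matching(boys, girls, boy_likes, girl_likes):
    """Maximum matching over mutual-attraction edges only.
    boy_likes / girl_likes map each person to the set of people they like."""
    # Keep an edge boy-girl iff the attraction is mutual.
    adj = {b: [g for g in girls
               if g in boy_likes.get(b, set()) and b in girl_likes.get(g, set())]
           for b in boys}
    match_of_girl = {}

    def try_augment(b, visited):
        # Depth-first search for an augmenting path starting at boy b.
        for g in adj[b]:
            if g in visited:
                continue
            visited.add(g)
            if g not in match_of_girl or try_augment(match_of_girl[g], visited):
                match_of_girl[g] = b
                return True
        return False

    for b in boys:
        try_augment(b, set())
    return {b: g for g, b in match_of_girl.items()}
```

For the "maximize happy people" variant one would instead run a maximum-weight matching (e.g. the Hungarian algorithm) with the 2/1/0 weights described above.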
{ "domain": "cs.stackexchange", "id": 687, "tags": "algorithms, graphs, bipartite-matching" }
How can the electromagnetic stress energy tensor be restricted to flat space-time
Question: The Wikipedia article describing the electromagnetic stress energy tensor seems to suggest that this tensor can only be defined in flat space-time. How is it possible to define an electromagnetic stress energy tensor this way since any available electromagnetic energy/momentum must render the space-time curvature nonzero? How in practice would someone extract useful information with this stress energy tensor? Answer: The electromagnetic stress tensor can be defined in all spacetimes: $$T_{ab} = F_{a}{}^{c}F_{bc} - \frac{1}{4}g_{ab}F^{cd}F_{cd},$$ obtained by varying the electromagnetic Lagrangian with respect to the metric, $T_{ab} = -\frac{2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}\,\mathscr{L})}{\delta g^{ab}}$. This reduces to the expression in the Wikipedia article in the case of flat spacetime. Note that it is still fine to define this in flat spacetime because: 1) Electromagnetism is perfectly consistent in special relativity 2) There are many limits where the contribution of the electromagnetic field to spacetime curvature is very, very negligible (for instance, nearly every electromagnetic experiment ever run on Earth's surface).
{ "domain": "physics.stackexchange", "id": 13886, "tags": "electromagnetism, general-relativity, stress-energy-momentum-tensor" }
Converting input from a FileReader to JSON and outputting it again
Question: Below is my code which I use to read data from a remote URL (which is GZipped), convert it to a Map, process the map (remove various unwanted fields, etc), then write it back to a file in JSON format. Unfortunately, it's ugly. I'm doing multiple things in the same method, but can't think of a good way to break them apart, as the input files can have hundreds of thousands of lines, so they will cause me to run out of memory quickly if I try to read in the whole thing, then process it, then output it. Can anyone offer any assistance/suggestions? private void importTdatFile(String fileURL) { String filename = getFilename(fileURL) + ".gz"; try { URL url = new URL(fileURL); // set up input GZIPInputStream gzis; if (new File(filename).isFile()) { InputStream is = ClassLoader.getSystemResourceAsStream(fileURL); gzis = new GZIPInputStream(is); System.out.println("Using tdat header from classes directory"); } else { gzis = new GZIPInputStream(url.openStream()); } BufferedReader reader = new BufferedReader(new InputStreamReader(gzis)); // set up output BufferedWriter writer = new BufferedWriter(new FileWriter(catalog.getName() + ".json")); // create a template so I only have to create a map once Map<String, String> template = new LinkedHashMap<String, String>(catalog.getFieldData().size()); for (String fieldName : catalog.getFieldData().keySet()) { template.put(fieldName, null); } // start processing while (reader.ready()) { String line = reader.readLine(); if (line.matches("^(.*?\\|)*$")) { Map<String, String> result = new HashMap<String, String>(); String[] fieldNames = catalog.getFieldData().keySet().toArray(new String[]{}); String[] fieldValues = line.split("\\|"); for (int i = 0; i < fieldValues.length; i++) { FieldData fd = catalog.getFieldData().get(fieldNames[i]); if (catalog.getFieldDataSet().contains(fd)) { result.put(fieldNames[i], fieldValues[i]); } } result = removeNulls(result); result = removeUnwantedFields(result, catalog); result =
fixFieldPrefixes(result, catalog); result = fixFieldNames(result, catalog); writer.write(getJsonLine(result)); } } writer.close(); reader.close(); gzis.close(); } catch (MalformedURLException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } } Answer: An idea: private BufferedReader getReader(final String fileUrl) throws IOException { final String filename = getFilename(fileUrl) + ".gz"; final URL url = new URL(fileUrl); final InputStream stream; if (new File(filename).isFile()) { stream = ClassLoader.getSystemResourceAsStream(fileUrl); System.out.println("Using tdat header from classes directory"); } else { stream = url.openStream(); } final GZIPInputStream gzipStream = new GZIPInputStream(stream); final InputStreamReader gzipStreamReader = new InputStreamReader(gzipStream, "UTF-8"); final BufferedReader reader = new BufferedReader(gzipStreamReader); return reader; } You don't have to close the GZIPInputStream, Reader.close() does it. I'd invert the condition inside the while loop: // start processing while (reader.ready()) { final String line = reader.readLine(); if (!line.matches("^(.*?\\|)*$")) { continue; } Map<String, String> result = new HashMap<String, String>(); ... } It makes the code flatter, which is easier to read. This code may be unnecessary, since no one uses the template object: // create a template so I only have to create a map once final Map<String, String> template = new LinkedHashMap<String, String>(catalog.getFieldData().size()); for (final String fieldName : catalog.getFieldData().keySet()) { template.put(fieldName, null); } You should specify the charset when you call the constructor of the InputStreamReader. final InputStreamReader gzipStreamReader = new InputStreamReader(gzipStream, "UTF-8"); Omitting it could lead to 'interesting' surprises, since it will use the default charset which varies from system to system. Here is the code after a few more method extractions. 
Check the comments please and feel free to ask if something isn't clear. public void importTdatFile(final String fileUrl) throws MyAppException { try { doImport(fileUrl); } catch (final IOException e) { // MalformedURLException is a subclass of IOException throw new MyAppException("Cannot import", e); } } private void doImport(final String fileUrl) throws IOException { BufferedReader reader = null; BufferedWriter writer = null; try { reader = getReader(fileUrl); writer = getWriter(); while (reader.ready()) { final String line = reader.readLine(); final Map<String, String> results = processLine(line); filterResults(results); final String jsonLine = getJsonLine(results); writer.write(jsonLine); } } finally { closeQuietly(reader); writer.close(); // do NOT ignore output errors } } private BufferedReader getReader(final String fileUrl) throws IOException { final InputStream stream = getStream(fileUrl); final BufferedReader reader = createGzipReader(stream); return reader; } private InputStream getStream(final String fileUrl) throws MalformedURLException, IOException { final InputStream stream; // I don't really like this 'if' here if (isGzipFile(fileUrl)) { // I'm not sure that the condition is correct for classpath loading // or not stream = ClassLoader.getSystemResourceAsStream(fileUrl); // I would put this println to somewhere else (after refactoring the 'if') System.out.println("Using tdat header from classes directory"); } else { final URL url = new URL(fileUrl); stream = url.openStream(); } return stream; } private BufferedReader createGzipReader(final InputStream stream) throws IOException, UnsupportedEncodingException { final GZIPInputStream gzipStream = new GZIPInputStream(stream); final InputStreamReader gzipStreamReader = new InputStreamReader(gzipStream, "UTF-8"); final BufferedReader reader = new BufferedReader(gzipStreamReader); return reader; } private boolean isGzipFile(final String fileUrl) { final String filename = getFilename(fileUrl) + ".gz"; return 
new File(filename).isFile(); } private BufferedWriter getWriter() throws IOException { // TODO: FileWriter uses the default character encoding (see javadoc), // maybe you should use a FileOutputStream with a specified encoding. final String outputFilename = getOutputFilename(); final FileWriter fileWriter = new FileWriter(outputFilename); return new BufferedWriter(fileWriter); } private String getOutputFilename() { return catalog.getName() + ".json"; } private Map<String, String> processLine(final String line) { final Map<String, String> result = new HashMap<String, String>(); if (!isProcessableLine(line)) { return result; } // It's hard to refactor without the internals of catalog. final String[] fieldValues = line.split("\\|"); for (int i = 0; i < fieldValues.length; i++) { // TODO: possible ArrayIndexOutOfBoundsException here? final String fieldName = catalog.getFieldName(i); final FieldData fieldData = catalog.getFieldData(fieldName); if (catalog.fieldDataSetContains(fieldData)) { final String fieldValue = fieldValues[i]; result.put(fieldName, fieldValue); } } return result; } private void filterResults(final Map<String, String> results) { removeNulls(results); // TODO: catalog is probably a field, so it should be visible inside these // methods without passing them as a parameter removeUnwantedFields(results, catalog); fixFieldPrefixes(results, catalog); fixFieldNames(results, catalog); } private boolean isProcessableLine(final String line) { // TODO: A precompiled regexp may be faster, but it would be premature // optimization, so don't do that without profiling return line.matches("^(.*?\\|)*$"); } private void closeQuietly(final Closeable closeable) { if (closeable == null) { return; } try { closeable.close(); } catch (final IOException e) { // TODO: log the exception with a logger instead of the // printStackTrace(); e.printStackTrace(); } } Anyway, your streaming approach is fine, you shouldn't read the whole file into memory.
{ "domain": "codereview.stackexchange", "id": 1021, "tags": "java" }
problem installing android_core
Question: I've been following http://wiki.ros.org/android/Tutorials/hydro/Installation%20-%20Ros%20Development%20Environment. After completing the rosjava tutorials I tried to install ros_android_core, since I am trying to build an app for my tablet. When running wstool init -j4 ~/android/src https://raw.github.com/rosjava/rosjava/hydro/android_core.rosinstall I get the error Using initial elements from: https://raw.github.com/rosjava/rosjava/hydro/android_core.rosinstall ERROR in config: Unable to download URL [https://raw.github.com/rosjava/rosjava/hydro/android_core.rosinstall]: <urlopen error [Errno -2] Name or service not known> Can someone help me? Thanks, Stefan Originally posted by stefan on ROS Answers with karma: 15 on 2014-07-31 Post score: 0 Answer: It could be that you're just hitting a transient network error. Try again and see if it works now. Alternatively, there could be something wrong with your network configuration (although unlikely). Try to see whether this succeeds: $ nslookup raw.github.com Also make sure you have no http(s) proxy configured anywhere. It might be interfering. Originally posted by gvdhoorn with karma: 86574 on 2014-07-31 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by stefan on 2014-07-31: thanks - solved Comment by gvdhoorn on 2014-07-31: So what was it in the end? Transient error? Proxy? Also: on this site answers are accepted by clicking the check mark at the left of an answer. We rarely close questions, as there might still be some discussion. Comment by stefan on 2014-07-31: proxy settings
{ "domain": "robotics.stackexchange", "id": 18840, "tags": "android-core, android" }
Principle of energy conservation
Question: I am new to physics and I hope you can help me solve this problem. When two objects collide with each other and stick, the collision forces are internal to the system, so linear momentum is conserved. But when I calculate the initial and final kinetic energies of the system, it shows me that energy is lost when the collision occurs. This shows me that it is an inelastic collision. But how is the energy lost without the kinetic energy being converted into some other form? I thought that it is due to the friction between the two objects, but my thinking must be wrong because at this stage, friction is not the cause of the decrease in the kinetic energy of the system. So, can you tell me what happened to the lost energy of the system? Into which form of energy was it converted? I hope for a brief explanation. Answer: In the case of a perfectly inelastic collision (zero coefficient of restitution), the two bodies stick together. In such a collision, kinetic energy is lost in bonding the two bodies together, and this gives the maximum possible kinetic-energy loss consistent with momentum conservation. As you've rightly said, only momentum is conserved in an inelastic collision. The lost kinetic energy goes into deforming and bonding the bodies, ultimately appearing as internal (thermal) energy, sound, and permanent deformation.
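The bookkeeping can be made concrete with a small numeric sketch (the masses and speeds here are assumed illustration values, not from the question): momentum conservation fixes the common final velocity, and the kinetic energy after sticking comes out strictly smaller.

```python
# Perfectly inelastic collision of two point masses (illustrative values).
m1, m2 = 1.0, 1.0      # kg
v1, v2 = 4.0, 0.0      # m/s

# Momentum conservation: m1*v1 + m2*v2 = (m1 + m2) * v_final
v_final = (m1 * v1 + m2 * v2) / (m1 + m2)

ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
ke_after = 0.5 * (m1 + m2) * v_final**2

print(v_final)              # 2.0 m/s
print(ke_before, ke_after)  # 8.0 J before, 4.0 J after
```

The missing 4 J is exactly the energy that went into deforming and bonding the bodies.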
{ "domain": "physics.stackexchange", "id": 91512, "tags": "newtonian-mechanics, thermodynamics, energy-conservation, collision, dissipation" }
Intensity of light through medium of glass
Question: I'm stuck on the following question: Calculate I2 when the incident light (Ii) is unpolarized. Values: θ1 = 0° and Ii = 625 W/m^2 Answer: Unpolarised light loses half of its intensity when it passes through an (ideal) polariser.
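The full optical setup is not recoverable from the problem statement, but assuming the common textbook arrangement (unpolarised light hits a first polariser, then a second whose transmission axis makes angle θ1 with the first), the computation is a two-step sketch: halve at the first polariser, then apply Malus's law at the second.

```python
import math

def intensity_after_polarisers(I_unpolarised, theta1_deg):
    """Unpolarised light halves at the first polariser; Malus's law at the second."""
    I1 = I_unpolarised / 2
    I2 = I1 * math.cos(math.radians(theta1_deg)) ** 2
    return I1, I2

I1, I2 = intensity_after_polarisers(625.0, 0.0)
print(I1, I2)  # 312.5 312.5 -- with theta1 = 0 the second polariser removes nothing
```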
{ "domain": "physics.stackexchange", "id": 63298, "tags": "homework-and-exercises, optics, polarization" }
How to use Bosch LRR3 using the Bosch package?
Question: How can I incorporate the Bosch LRR3 radar into ROS? Originally posted by fuzail on ROS Answers with karma: 1 on 2012-09-18 Post score: 0 Original comments Comment by dejanpan on 2012-09-22: Sorry, maybe I misunderstood the question: do you already have some code that can talk to the device? In that case wrapping it into a ROS node is really trivial and we would be happy to help. D. Answer: Hi there, I am one of the admins of the bosch-ros-pkg. Our personal robotics group at Bosch Research unfortunately does not have a ROS driver for LRR3 yet. If there is a greater demand for it, maybe we could try to get it from our business unit. Can you send me a private message regarding this? Originally posted by dejanpan with karma: 1420 on 2012-09-22 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 11062, "tags": "ros" }
Algorithm for segmentation of sequence data
Question: I have a large sequence of vectors of length N. I need some unsupervised learning algorithm to divide these vectors into M segments. For example: K-means is not suitable, because it puts similar elements from different locations into a single cluster. Update: The real data looks like this: Here, I see 3 clusters: [0..50], [50..200], [200..250] Update 2: I used modified k-means and got this acceptable result: Borders of clusters: [0, 38, 195, 246] Answer: Please see my comment above; this is my answer according to what I understood from your question: As you correctly stated, you do not need Clustering but Segmentation. Indeed, you are looking for Change Points in your time series. The answer really depends on the complexity of your data. If the data is as simple as the above example, you can use the difference of vectors, which overshoots at changing points, and set a threshold detecting those points like below: As you see, for instance, a threshold of 20 (i.e. $dx<-20$ or $dx>20$) will detect the points. Of course, for real data you need to investigate more to find the thresholds. Pre-processing Please note that there is a trade-off between the accurate location of the change points and the accurate number of segments, i.e. if you use the original data you'll find the exact change points but the whole method is too sensitive to noise, while if you smooth your signals first you may not find the exact changes but the noise effect will be much less, as shown in the figures below: Conclusion My suggestion is to smooth your signals first and go for a simple clustering method (e.g. using GMMs) to find an accurate estimate of the number of segments in the signals. Given this information, you can start finding change points constrained by the number of segments you found in the previous part. I hope it all helped :) Good Luck! UPDATE Luckily your data is pretty straightforward and clean. I strongly recommend dimensionality reduction algorithms (e.g. simple PCA). 
I guess it reveals the internal structure of your clusters. Once you apply PCA to the data, you can use k-means much more easily and accurately. A Serious(!) Solution According to your data, I see that the generative distributions of different segments are different, which is a great chance for you to segment your time series. See this (original, archive, other source) which is probably the best and most state-of-the-art solution to your problem. The main idea behind this paper is that if different segments of a time series are generated by different underlying distributions, you can find those distributions, set them as ground truth for your clustering approach and find clusters. For example, assume a long video in which for the first 10 minutes somebody is biking, in the second 10 minutes he is running, and in the third he is sitting. You can cluster these three different segments (activities) using this approach.
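The difference-and-threshold idea from the answer can be sketched in a few lines (the threshold of 20 and the toy step signal are assumptions for illustration, shaped like the [0..50], [50..200], [200..250] example from the question):

```python
import numpy as np

def change_points(signal, threshold=20.0):
    """Indices where the first difference overshoots the threshold."""
    dx = np.diff(signal)
    # +1 so the index points at the first sample of the new segment
    return np.where(np.abs(dx) > threshold)[0] + 1

# Toy step signal with three flat segments and jumps at 50 and 200.
signal = np.concatenate([np.full(50, 0.0), np.full(150, 100.0), np.full(50, 0.0)])
print(change_points(signal))  # [ 50 200]
```

On real, noisy data you would smooth first (e.g. a moving average) before differencing, exactly as the pre-processing trade-off above warns.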
{ "domain": "datascience.stackexchange", "id": 6477, "tags": "machine-learning, clustering, sequence" }
Notifying players about certain events
Question: This class looks ugly as hell to me. I feel like there is a better way but can't really think of a good one. Too many instanceofs and else ifs. What OOP practices or design patterns would you suggest to change and improve the code? Any tips, advice, and recommended resources are welcome! What does the code do: Basically, the public method accepts an event object, and every event object contains a user/player object in some form. The reason we are getting the user object from the event is to check whether that particular user will receive a notification about the event. Which of course depends on the user's notification settings, which are inside the user object. I've added the enum class; maybe there is a way to modify it for better code quality. @Service public class NotificationSettingsCheckServiceImpl implements NotificationSettingsCheckService { @Override public Boolean checkIfEventCanPass(ApplicationEvent event) { if (SYSTEM_EVENTS.getListOfClasses().contains(event.getClass())) { return true; } Boolean eventPasses = false; LotteryUser user; user = event instanceof NotificationApplicationEvent ? 
getUserBasedOnNotificationEvent(event) : getUserBasedOnEmailEvent(event); if (CAMPAIGN_EVENTS.getListOfClasses().contains(event.getClass()) && user.getNotificationSettings().getCampaignEvents()) { eventPasses = true; } else if (DRAW_RESULT_EVENTS.getListOfClasses().contains(event.getClass()) && user.getNotificationSettings().getDrawResultEvents()) { eventPasses = true; } else if (TRANSACTION_EVENTS.getListOfClasses().contains(event.getClass()) && user.getNotificationSettings().getTransactionEvents()) { eventPasses = true; } else if (USER_WON_EVENTS.getListOfClasses().contains(event.getClass()) && user.getNotificationSettings().getTransactionEvents()) { eventPasses = true; } return eventPasses; } private LotteryUser getUserBasedOnEmailEvent(EventObject event) { LotteryUser user = null; if (event instanceof UserThanksEvent) { user = ((UserThanksEvent) event).getUser(); } else if (event instanceof UserOrderCanceledEvent) { user = ((UserOrderCanceledEvent) event).getUser(); } else if (event instanceof DrawResultEvent) { user = ((DrawResultEvent) event).getPlayer(); } else if (event instanceof UserWinCongratulationEvent) { user = ((UserWinCongratulationEvent) event).getPlayer(); } else if (event instanceof UserAddedToCampaignEvent) { user = ((UserAddedToCampaignEvent) event).getLotteryUser(); } return user; } private LotteryUser getUserBasedOnNotificationEvent(EventObject event) { LotteryUser user = null; if (event instanceof UserReceivedBonusMoneyEvent) { user = ((UserReceivedBonusMoneyEvent) event).getLotteryUser(); } else if (event instanceof UserReceivedBonusInNonDepositCampaignEvent) { user = ((UserReceivedBonusInNonDepositCampaignEvent) event).getLotteryUser(); } else if (event instanceof UserReceivedBonusInDepositCampaignEvent) { user = ((UserReceivedBonusInDepositCampaignEvent) event).getLotteryUser(); } else if (event instanceof UserTakesPartInDepositCampaignEvent) { user = ((UserTakesPartInDepositCampaignEvent) event).getUser(); } else if (event instanceof 
UserTakesPartInNonDepositCampaignEvent) { user = ((UserTakesPartInNonDepositCampaignEvent) event).getUser(); } else if (event instanceof DrawResultNotificationEvent) { user = ((DrawResultNotificationEvent) event).getPlayer(); } return user; } } Here is the ENUM class I'm using. public enum NotificationSettingsType { SYSTEM_EVENTS(Arrays.asList(OnRegistrationCompleteEvent.class, ResetPasswordEvent.class, ManagerRegistrationEvent.class, DrawResultNotFoundEvent.class, CurrencyUpdatedEvent.class, WalletLockedEvent.class, UserWonBigPrizeEvent.class)), CAMPAIGN_EVENTS(Arrays.asList(UserReceivedBonusInDepositCampaignEvent.class, UserReceivedBonusInNonDepositCampaignEvent.class, UserReceivedBonusMoneyEvent.class, UserTakesPartInDepositCampaignEvent.class, UserTakesPartInNonDepositCampaignEvent.class, UserAddedToCampaignEvent.class)), DRAW_RESULT_EVENTS(Arrays.asList(DrawResultNotificationEvent.class, DrawResultEvent.class)), TRANSACTION_EVENTS(Arrays.asList(UserThanksEvent.class, UserOrderCanceledEvent.class)), USER_WON_EVENTS(Collections.singletonList(UserWinCongratulationEvent.class)); private List<Class> listOfClasses; NotificationSettingsType(List<Class> listOfClasses) { this.listOfClasses = listOfClasses; } public List<Class> getListOfClasses() { return listOfClasses; } } Here are the Event Object example and the class that it extends. I have several of these depending on the event they notify. This is just an example. Using Lombok plugin for Getter/Setter annotations if that confuses anyone. 
@Getter @Setter public class UserReceivedBonusMoneyEvent extends NotificationApplicationEvent { private final LotteryUser lotteryUser; private final BigDecimal bonusAmount; private final String currency; public UserReceivedBonusMoneyEvent( LotteryUser lotteryUser, BigDecimal bonusAmount, String currency) { super(bonusAmount); this.lotteryUser = lotteryUser; this.bonusAmount = bonusAmount; this.currency = currency; } @Override public void accept(NotificationEventVisitor visitor) { visitor.visit(this); } } And here is the NotificationApplicationEvent, which extends Spring framework's Application Event. public abstract class NotificationApplicationEvent extends ApplicationEvent { /** * Create a new ApplicationEvent. * * @param source the object on which the event initially occurred (never {@code null}) */ public NotificationApplicationEvent(Object source) { super(source); } public abstract void accept(NotificationEventVisitor visitor); } And This is NotificationEventListener class where checkIfEventCanPass method is placed. @Component @Slf4j public class NotificationEventListener implements ApplicationListener<NotificationApplicationEvent> { @Autowired private NotificationEventVisitor notificationEventVisitor; @Autowired private NotificationSettingsCheckService notificationSettingsCheckService; @Override public void onApplicationEvent(NotificationApplicationEvent event) { if (notificationSettingsCheckService.checkIfEventCanPass(event)) { event.accept(notificationEventVisitor); } } } Answer: Boolean vs boolean Prefer to use the primitive type boolean instead of Boolean (The difference is, simply put, that Boolean can also be null). Varargs, defensive copy You're passing a list of classes to your enum constructor. You could instead use varargs and pass Class... (or possibly Class<?>... although that might give you a warning, if so just ignore it). Then you can get rid of wrapping in Arrays.asList when calling the constructor and instead do it in the constructor itself. 
Also, don't return the list as it is, return a copy of it. Otherwise some calling code can do getListOfClasses().clear(); and screw up everything. In fact, all you really need is the contains method of the list, so skip the getListOfClasses method and make a contains method on your enum instead. Also, make listOfClasses final. Improved code: private final List<Class<?>> listOfClasses; NotificationSettingsType(Class<?>... classes) { this.listOfClasses = Arrays.asList(classes); } public boolean contains(Class<?> clazz) { return listOfClasses.contains(clazz); } Getting a user... Let the EventObject itself know how to get a user. Make a method somewhere, depending on your class hierarchy for your events, that returns a LotteryUser based on email or based on notification event. Then you can simply call this: private LotteryUser getUserBasedOnEmailEvent(EventObject event) { return event.getUserByEmail(); } private LotteryUser getUserBasedOnNotificationEvent(EventObject event) { return event.getUserByNotification(); } This removes the need for these methods completely. Depending on whether or not each EventObject really has multiple users, you could even do return event.getUser();
{ "domain": "codereview.stackexchange", "id": 25566, "tags": "java, performance, object-oriented, event-handling" }
Doppler effect observed in octaves
Question: I have a question about the interpretation of the Doppler effect, when you look at the results as a change in octaves. Nothing actually changes when you look at the result in octaves instead of frequency shift, obviously, but it suddenly seems a whole lot less intuitive. This makes me wonder whether I understand things correctly. If we assume a stationary listener and medium and a moving sound source ($-c < v_s < c$, where $c \approx 343 \frac{m}{s}$, speed of sound), the observed frequency $f_L$ is: $$f_L = \left(\frac{c}{c - v_s} \right) f_0$$ where $f_0$ is the emitted frequency. For an object moving away from you (negative $v_s$), the observed pitch can drop at most to $\frac{1}{2} f_0$ (in the limit $v_s \to -c$), which is exactly one octave down. Intuitively, I find that this makes sense: all frequencies are pitched down, but not by a crazy amount. When the source moves toward you ($v_s > 0$), however, one octave up is observed at half the speed of sound: $v_s = \frac{1}{2} c \Rightarrow f_L = 2 f_0$. And at $v_s = \frac{3}{4}c$, it's two octaves up, and you can go all the way up to infinity. So in this case, all frequencies are suddenly scaled to a larger and larger range of frequencies, which seems odd. Is this actually a correct way of looking at it? And what happens in the limit of $v_s \rightarrow c$ (just before the sonic boom)? Answer: You are visualizing fine, the Doppler effect is usually experienced at ground speeds (much less than the speed of sound). As you approach the sound barrier, waves will compress so much that instead of hearing a high pitch, you will experience a sudden shock wave (known as a "sonic boom"). And at speeds higher than sound, the Doppler effect for the incoming object will not work practically, because you will not hear anything (the object is moving faster than sound). But once it passes you, and you receive the shock wave, you will hear a much lower pitch of the source sound.
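The three cases in the question can be checked numerically by expressing the shift in octaves, $\log_2(f_L/f_0) = \log_2\!\big(c/(c-v_s)\big)$, using the $c = 343\ \mathrm{m/s}$ from the question:

```python
import math

c = 343.0  # speed of sound in m/s, as in the question

def octave_shift(v_s):
    """Octave shift of the observed pitch: log2(f_L / f_0) = log2(c / (c - v_s))."""
    return math.log2(c / (c - v_s))

print(octave_shift(-c))        # -1.0: receding at c, one octave down
print(octave_shift(c / 2))     #  1.0: approaching at c/2, one octave up
print(octave_shift(3 * c / 4)) #  2.0: approaching at 3c/4, two octaves up
```

The shift indeed diverges as $v_s \to c$, matching the "all the way up to infinity" observation, while it is bounded below by one octave down as $v_s \to -c$.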
{ "domain": "physics.stackexchange", "id": 11579, "tags": "acoustics, doppler-effect" }
How to add parameters to a subscriber callback function given that it is also an action_client
Question: Hello, I'm new to ROS and my situation is the following: I have a node which subscribes to a Topic, so of course I have my callback function (which by the way uses a custom message so the problem may be there, I don't know. I don't really know C++ either :D ). The point is that this node must also be a client for an actionlib Client-Server structure. So every time my node reads something from the topic it is subscribed to, I also need to inform my actionlib Server. This is why I need to have a "Client" parameter in my callback function, to also send a message to the Server. My callback SHOULD look like this: void polygonCallback(const es_1::Draw msg, Client* cl){//Stuff} And I just defined "Client" like this typedef actionlib::SimpleActionClient<es_1::polygonAction> Client; Now, I found online that to do what I want to do I should use boost::bind() and I did. Just like this: ros::Subscriber sub = n.subscribe<es_1::Draw>("draw", 1000, boost::bind(polygonCallback,_1,&client)); But when I try to "catkin_make" this is the error that pops out: In file included from /usr/include/boost/bind.hpp:22:0, from /opt/ros/kinetic/include/ros/publisher.h:35, from /opt/ros/kinetic/include/ros/node_handle.h:32, from /opt/ros/kinetic/include/ros/ros.h:45, from /home/canniz/Desktop/catkin_ws/src/es_1/src/polygon_action_client.cpp:1: /usr/include/boost/bind/bind.hpp: In instantiation of ‘void boost::_bi::list2<A1, A2>::operator()(boost::_bi::type<void>, F&, A&, int) [with F = void (*)(es_1::Draw_<std::allocator<void> >, actionlib::SimpleActionClient<es_1::polygonAction_<std::allocator<void> > >*); A = boost::_bi::list1<const boost::shared_ptr<const es_1::Draw_<std::allocator<void> > >&>; A1 = boost::arg<1>; A2 = boost::_bi::value<actionlib::SimpleActionClient<es_1::polygonAction_<std::allocator<void> > >*>]’: /usr/include/boost/bind/bind_template.hpp:47:59: required from ‘boost::_bi::bind_t<R, F, L>::result_type boost::_bi::bind_t<R, F, L>::operator()(const A1&) [with A1 = 
boost::shared_ptr<const es_1::Draw_<std::allocator<void> > >; R = void; F = void (*)(es_1::Draw_<std::allocator<void> >, actionlib::SimpleActionClient<es_1::polygonAction_<std::allocator<void> > >*); L = boost::_bi::list2<boost::arg<1>, boost::_bi::value<actionlib::SimpleActionClient<es_1::polygonAction_<std::allocator<void> > >*> >; boost::_bi::bind_t<R, F, L>::result_type = void]’ /usr/include/boost/function/function_template.hpp:159:11: required from ‘static void boost::detail::function::void_function_obj_invoker1<FunctionObj, R, T0>::invoke(boost::detail::function::function_buffer&, T0) [with FunctionObj = boost::_bi::bind_t<void, void (*)(es_1::Draw_<std::allocator<void> >, actionlib::SimpleActionClient<es_1::polygonAction_<std::allocator<void> > >*), boost::_bi::list2<boost::arg<1>, boost::_bi::value<actionlib::SimpleActionClient<es_1::polygonAction_<std::allocator<void> > >*> > >; R = void; T0 = const boost::shared_ptr<const es_1::Draw_<std::allocator<void> > >&]’ /usr/include/boost/function/function_template.hpp:940:38: required from ‘void boost::function1<R, T1>::assign_to(Functor) [with Functor = boost::_bi::bind_t<void, void (*)(es_1::Draw_<std::allocator<void> >, actionlib::SimpleActionClient<es_1::polygonAction_<std::allocator<void> > >*), boost::_bi::list2<boost::arg<1>, boost::_bi::value<actionlib::SimpleActionClient<es_1::polygonAction_<std::allocator<void> > >*> > >; R = void; T0 = const boost::shared_ptr<const es_1::Draw_<std::allocator<void> > >&]’ /usr/include/boost/function/function_template.hpp:728:7: required from ‘boost::function1<R, T1>::function1(Functor, typename boost::enable_if_c<boost::type_traits::ice_not<boost::is_integral<Functor>::value>::value, int>::type) [with Functor = boost::_bi::bind_t<void, void (*)(es_1::Draw_<std::allocator<void> >, actionlib::SimpleActionClient<es_1::polygonAction_<std::allocator<void> > >*), boost::_bi::list2<boost::arg<1>, 
boost::_bi::value<actionlib::SimpleActionClient<es_1::polygonAction_<std::allocator<void> > >*> > >; R = void; T0 = const boost::shared_ptr<const es_1::Draw_<std::allocator<void> > >&; typename boost::enable_if_c<boost::type_traits::ice_not<boost::is_integral<Functor>::value>::value, int>::type = int]’ /usr/include/boost/function/function_template.hpp:1077:16: required from ‘boost::function<R(T0)>::function(Functor, typename boost::enable_if_c<boost::type_traits::ice_not<boost::is_integral<Functor>::value>::value, int>::type) [with Functor = boost::_bi::bind_t<void, void (*)(es_1::Draw_<std::allocator<void> >, actionlib::SimpleActionClient<es_1::polygonAction_<std::allocator<void> > >*), boost::_bi::list2<boost::arg<1>, boost::_bi::value<actionlib::SimpleActionClient<es_1::polygonAction_<std::allocator<void> > >*> > >; R = void; T0 = const boost::shared_ptr<const es_1::Draw_<std::allocator<void> > >&; typename boost::enable_if_c<boost::type_traits::ice_not<boost::is_integral<Functor>::value>::value, int>::type = int]’ /home/canniz/Desktop/catkin_ws/src/es_1/src/polygon_action_client.cpp:29:102: required from here /usr/include/boost/bind/bind.hpp:313:35: error: could not convert ‘(& a)->boost::_bi::list1<A1>::operator[]<const boost::shared_ptr<const es_1::Draw_<std::allocator<void> > >&>(boost::_bi::storage1<boost::arg<I> >::a1_<1>)’ from ‘const boost::shared_ptr<const es_1::Draw_<std::allocator<void> > >’ to ‘es_1::Draw_<std::allocator<void> >’ unwrapper<F>::unwrap(f, 0)(a[base_type::a1_], a[base_type::a2_]); ^ es_1/CMakeFiles/polygon_action_client.dir/build.make:62: recipe for target 'es_1/CMakeFiles/polygon_action_client.dir/src/polygon_action_client.cpp.o' failed make[2]: *** [es_1/CMakeFiles/polygon_action_client.dir/src/polygon_action_client.cpp.o] Error 1 CMakeFiles/Makefile2:1462: recipe for target 'es_1/CMakeFiles/polygon_action_client.dir/all' failed make[1]: *** [es_1/CMakeFiles/polygon_action_client.dir/all] Error 2 Makefile:138: recipe for target 'all' 
failed make: *** [all] Error 2 Invoking "make -j4 -l4" failed Please, if someone knows how to solve this problem, that would be fantastic... I'm sure the solution is just changing a couple of characters, but I need to know which characters. Originally posted by Canni on ROS Answers with karma: 5 on 2017-08-02 Post score: 0 Answer: First, change your callback argument const es_1::Draw msg to const es_1::Draw::ConstPtr& msg. As mentioned in section 2.2 of the roscpp tutorial, "The message is passed in a boost shared_ptr" Now, since the callback accepts the argument as a boost::shared_ptr, you may encapsulate your action client like this: boost::shared_ptr<Client> client; client.reset(new Client(..)); // initialize ros::Subscriber sub = n.subscribe<es_1::Draw>("draw", 1000, boost::bind(polygonCallback, _1, client)); ... void polygonCallback(const es_1::Draw::ConstPtr &msg, const boost::shared_ptr<Client> &cl){//Stuff} // use cl by -> Originally posted by naveedhd with karma: 161 on 2017-08-03 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Canni on 2017-08-03: Thank you so much! I didn't study C++ so all this stuff is absolutely new to me. One day maybe I'll understand in detail what I'm doing ahahah Now it's compiling!
{ "domain": "robotics.stackexchange", "id": 28517, "tags": "ros, callback, boost, action-server, action-client" }
$\omega=\frac{v_t}{r}$ using this equation only prove that constant angular velocity is possible
Question: Is it possible to have constant angular velocity? According to $\omega=\frac{v_t}{r}$, angular velocity is directly proportional to tangential velocity, and since tangential velocity is a vector that is always changing direction in uniform rotational motion, won't we get different values of angular velocity depending on the direction of the tangential velocity? So we can't get constant angular velocity? Answer: First of all, in circular motion $$\vert\vec\omega\vert=\frac{\vert\vec{v_t}\vert}{r}$$ Here the magnitude of the angular velocity vector is directly proportional to the magnitude of the tangential velocity vector. Only the magnitudes are related by this equation; the directions are related by $\vec{v_t}=\vec\omega\times\vec r$. In uniform circular motion $\vec\omega$ points along the fixed rotation axis, so the angular velocity vector stays constant even though $\vec{v_t}$ keeps changing direction.
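A quick numeric check of this (the radius and angular speed are assumed illustration values): the components of $\vec{v_t}$ keep changing with time, but its magnitude, and hence $\omega = \vert\vec{v_t}\vert/r$, stay constant.

```python
import math

r, omega = 2.0, 3.0  # m, rad/s (illustrative values)

def tangential_velocity(t):
    """Velocity vector of a particle in uniform circular motion at time t."""
    angle = omega * t
    return (-omega * r * math.sin(angle), omega * r * math.cos(angle))

for t in (0.0, 0.5, 1.0):
    vx, vy = tangential_velocity(t)
    speed = math.hypot(vx, vy)
    # components change, but |v_t| = omega * r = 6.0 and |v_t|/r = 3.0 throughout
    print(f"t={t}: v_t=({vx:+.2f}, {vy:+.2f})  |v_t|={speed:.2f}  |v_t|/r={speed / r:.2f}")
```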
{ "domain": "physics.stackexchange", "id": 67059, "tags": "rotational-kinematics" }