SVM math question
Question: I'm studying support vector machines and came across this paper. The following equation doesn't make sense to me, especially the part with the 0 ∀i. Any help understanding the basics of SVMs? yi * (xi * w + b) - 1 >= 0 ∀i xi - one input yi - the classification (+1 or -1) w and b - variables to find in order to "train up" the SVM (?) Please forgive my ignorance on SVMs, a lot is still unclear to me and I'm trying to learn these bit by bit. Thanks! Answer: $\forall$ means "for all", i.e., the equation $$y_i (x_i w + b) - 1 \ge 0$$ is supposed to hold for every value of $i$. To learn more about that, pick up a book on discrete mathematics and read its introduction to boolean logic and propositional logic. (You might need a certain amount of background in mathematics to make sense of those equations, I'm afraid.)
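Since the constraint is just a statement about every index $i$, it can be checked mechanically. A minimal NumPy sketch (the data and the $(w, b)$ values are made-up toy numbers, not from the paper):

```python
import numpy as np

# Hypothetical toy data: two 2-D points with labels +1 and -1,
# and a (w, b) that separates them with margin >= 1.
X = np.array([[2.0, 2.0], [-2.0, -2.0]])  # inputs x_i
y = np.array([1.0, -1.0])                 # classifications y_i
w = np.array([0.5, 0.5])
b = 0.0

# "forall i" simply means every entry of this vector must pass the test:
margins = y * (X @ w + b) - 1
holds_for_all_i = bool(np.all(margins >= 0))
print(holds_for_all_i)  # True
```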
{ "domain": "cs.stackexchange", "id": 2204, "tags": "machine-learning" }
Why in this problem I had to take the average of the electric field?
Question: An outward electric force acts on an electrically charged soap bubble. We are given the total charge of the bubble, $Q$, and its radius, $R$. What is the value of the total force acting to move one half of the bubble away from the other? I understood the solution except for a small point that disturbs me - they said $dF=Edq$ and then $E=\frac\sigma{2\epsilon_0}$. It seems like they took the average of the electric field inside and outside the surface of the bubble (which is, respectively, $0$ and $\frac\sigma{\epsilon_0}$). Is this always done when the electric field on a surface is to be used in a calculation? Answer: It seems like they took the average of the electric field inside and outside the surface of the bubble (which is, respectively, $0$ and $\frac\sigma{\epsilon_0}$). If you gave the surface some (still small) thickness and a uniform volume charge density, then the electric field would be $0$ inside and $\frac\sigma{\epsilon_0}$ outside, and would change roughly linearly (when the shell is thin) between the inside and the outside surface. Is this always done when the electric field on a surface is to be used in a calculation? It is always the case that you treat a singular density (whether a volume, surface, or line charge density) as the limit of nicer (and finite) volume charge densities. Whether the result is as simple as an average depends on the situation.
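A quick numerical illustration of the averaging step (the charge and radius below are arbitrary made-up values; the closed form $Q^2/(32\pi\epsilon_0 R^2)$ is the standard result for the force on one hemisphere):

```python
import numpy as np

eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

# Hypothetical numbers just to illustrate the calculation
Q = 1e-8   # total charge, C
R = 0.05   # bubble radius, m

sigma = Q / (4 * np.pi * R**2)   # surface charge density
E_avg = sigma / (2 * eps0)       # average of 0 (inside) and sigma/eps0 (outside)

# dF = E_avg dq, projected on the symmetry axis, integrates to
# (electrostatic pressure) * (projected area) = sigma * E_avg * pi R^2
F = sigma * E_avg * np.pi * R**2

# Closed form: Q^2 / (32 pi eps0 R^2)
F_closed = Q**2 / (32 * np.pi * eps0 * R**2)
print(F, F_closed)
```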
{ "domain": "physics.stackexchange", "id": 49231, "tags": "homework-and-exercises, electrostatics, electric-fields" }
Calculate the sum over a list of minima of two vectors
Question: I have the following metric: def d(x, y): """The metric.""" return 1 - sum(min(xi, yi) for xi, yi in zip(x, y)) where x and y are numpy arrays of equal length (~3000). I know that the elements of x and the elements of y each sum up to 1 and are non-negative. Can this be calculated faster? Answer: The element-wise minimum can be vectorized: import numpy as np def f(x, y): return 1 - np.minimum(x, y).sum()
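A quick check that the two versions agree on inputs satisfying the stated constraints (sizes and seed are arbitrary):

```python
import numpy as np

def d(x, y):
    """Original, pure-Python version."""
    return 1 - sum(min(xi, yi) for xi, yi in zip(x, y))

def f(x, y):
    """Vectorized version from the answer."""
    return 1 - np.minimum(x, y).sum()

rng = np.random.default_rng(0)
x = rng.random(3000); x /= x.sum()   # non-negative, sums to 1
y = rng.random(3000); y /= y.sum()

print(d(x, y), f(x, y))  # same value; the vectorized call avoids the Python loop
```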
{ "domain": "codereview.stackexchange", "id": 27254, "tags": "python, performance, mathematics, numpy" }
Binding energy per nucleon error
Question: I am given that the mass deficit of 114-Cd is -90.01 MeV, and asked to calculate the binding energy, which should simply be 90.01 MeV then, due to the use of natural units (i.e. where $c=1$). I'm then asked to calculate the binding energy per nucleon which should simply be $\frac{90.01}{114}$. However, according to the solutions manual, the binding energy per nucleon is 8.54 MeV instead. I'm not entirely sure where the discrepancy arises. Any help would be appreciated. Answer: The mass deficit is related to, but different from, the binding energy. The bare neutron and bare proton have zero binding energy, while the canonical source for mass deficits defines $\Delta[{}^{12}\rm C] = 0$. Note also that the NNDC mass deficits are for neutral atoms, and therefore include electron masses.
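A sketch of where the manual's 8.54 MeV comes from, using tabulated mass excesses (the numerical values below are approximate, quoted from memory; check a current atomic mass evaluation before relying on them):

```python
# Mass excesses in MeV (approximate values; 1H includes its electron,
# so electron masses cancel against the neutral 114Cd atom).
delta_H1 = 7.289      # mass excess of 1H
delta_n = 8.071       # mass excess of the neutron
delta_Cd114 = -90.02  # mass excess of neutral 114Cd

Z, A = 48, 114
N = A - Z

# Binding energy = (mass of constituents) - (mass of nucleus),
# expressed via mass excesses relative to the 12C = 0 convention:
B = Z * delta_H1 + N * delta_n - delta_Cd114
print(B / A)  # ~8.53 MeV per nucleon, consistent with the manual's 8.54
```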
{ "domain": "physics.stackexchange", "id": 37447, "tags": "homework-and-exercises, nuclear-physics, mass-energy, binding-energy, textbook-erratum" }
The Rust Book: HashMap Challenge
Question: I'm working my way through the Rust Book: Ch.8 Challenge: Using a hash map and vectors, create a text interface to allow a user to add employee names to a department in the company. For example, “Add Sally to Engineering” or “Add Amir to Sales”. Then let the user retrieve a list of all people in a department or all people in the company by department, sorted alphabetically. use std::collections::HashMap; #[derive(Debug)] enum Command { Add(String, String), List, Unknown } fn command_filter(input: String) -> Command { let mut command_string = input.split_whitespace(); match command_string.next() { Some("Add") => { match (command_string.next(), command_string.last()){ (Some(name), Some(dept)) => Command::Add(name.to_string(), dept.to_string()), _ => Command::Unknown } }, Some("List") => Command::List, _ => Command::Unknown } } fn process_command(command: Command, employee_map: &mut HashMap<String, String>) { match command { Command::Add(name, dept) => { employee_map.insert(name, dept); () }, Command::List => { for (name, dept) in employee_map { println!("Name: {} Dept: {}", name, dept); }; }, Command::Unknown => println!("Unknown command!"), }; } fn main() { let mut employee: HashMap<String, String> = HashMap::new(); let command = command_filter(String::from("Add John to Eng")); process_command(command, &mut employee); let command = command_filter(String::from("Add Devon to Bio")); process_command(command, &mut employee); let command = command_filter(String::from("List")); process_command(command, &mut employee); } As with most devs starting to learn Rust I am finding difficulty with borrowing. I do not come from a C/C++ background and so I'm also finding pointers versus actual object a little tricky too. Some specific thoughts: I am suspicious that match statements inside match statements is not good practice. The Command::Add branch in process_command feels a little hacky with the () return. 
Edit: After thinking more about this problem, I think if I wrapped the branch responses in Ok() then I could return a Result from that function. Do you have any feedback? Answer: General It feels strange to have functions just hanging around; why not create some methods on types? I moved command_filter to From and moved process_command to Command::process. The latter would probably make more sense as a process method on an EmployeeMap type, though. Command It's unclear what the two values are for Command::Add - use named fields instead. It could contain &strs instead, since they need to live no longer than the string they were parsed from. Ah I see, and I would only need a new string (and hence allocation?) if the lifetime of the string was shorter than the slice? You would need to use a String (which is an allocation, yes) if the Command needs to contain the string value longer than the input string value is available. In this case, all of your input strings are hard-coded literals so they are available "forever". In a different case, you might have read some user input into another String. So long as the Command went out of scope before that String moved, it would be good. Lifetimes are hard Yes... and no. The thing is that languages like C or C++ have the exact same problems, but the languages don't stop you from doing The Wrong Thing. Languages with a garbage collector don't allow you to achieve the same level of efficiency. Rust's lifetimes allow you to be efficient while preventing memory unsafety. command_filter Could take a &str as an argument since it doesn't make use of the allocation of the String. This advances the split_whitespace iterator before checking the result of a previous next call: match (command_string.next(), command_string.last()) This isn't guaranteed to do what you want: once an iterator has returned None, calling next() again may or may not start returning Some(item) again. You should use Iterator::fuse.
Could choose to flatten out the match to one level, but this would require advancing the iterator when you don't need to. Nested matches aren't inherently bad. process_command You don't need to say () as the return expression, that's the default value for a statement (lines ending in ;). main There's no reason to specify the type of employees, the compiler can infer it. employees is a collection, so it should be a plural noun. use std::collections::HashMap; #[derive(Debug)] enum Command<'a> { Add { name: &'a str, dept: &'a str }, List, Unknown, } impl<'a> From<&'a str> for Command<'a> { fn from(input: &'a str) -> Self { let mut command_string = input.split_whitespace().fuse(); match command_string.next() { Some("Add") => { match (command_string.next(), command_string.last()) { (Some(name), Some(dept)) => Command::Add { name, dept }, _ => Command::Unknown, } }, Some("List") => Command::List, _ => Command::Unknown, } } } impl<'a> Command<'a> { fn process(self, employees: &mut HashMap<String, String>) { match self { Command::Add { name, dept } => { employees.insert(name.to_owned(), dept.to_owned()); } Command::List => { for (name, dept) in employees { println!("Name: {} Dept: {}", name, dept); } } Command::Unknown => println!("Unknown command!"), }; } } fn main() { let mut employees = HashMap::new(); let command = Command::from("Add John to Eng"); command.process(&mut employees); let command = Command::from("Add Devon to Bio"); command.process(&mut employees); let command = Command::from("List"); command.process(&mut employees); }
{ "domain": "codereview.stackexchange", "id": 30852, "tags": "beginner, rust" }
Solenoidal forces
Question: As far as I know, a solenoidal vector field is one such that $$\vec\nabla\cdot \vec F=0.$$ However, I saw a book on mechanics defining a solenoidal force as one for which the infinitesimal work identically vanishes, $$dW=\vec F\cdot d\vec r=0.$$ In this case, a solenoidal force would satisfy $\vec F\perp\vec v$, where $\vec v=\frac{d\vec r}{dt}$. I could not find anyone else discussing a solenoidal force in the latter sense. Are the two definitions equivalent? How so? Answer: A counterexample to equivalence is the electric field created in a closed copper wire by a varying magnetic flux: such a field closes on itself, i.e. $$\vec\nabla\cdot\vec E_M =0$$ However, it does do work, because it causes the electrons to move in the conductor and create a current, which means $$dW\neq 0$$
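A numerical illustration with the textbook solenoidal field $\vec F = (-y, x, 0)$ (my own example, not from the answer): its divergence vanishes everywhere, yet the work around the unit circle does not.

```python
import numpy as np

# F(x, y) = (-y, x): div F = d(-y)/dx + d(x)/dy = 0 everywhere,
# yet the work around the unit circle r(t) = (cos t, sin t) is nonzero.
n = 100000
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dt = 2.0 * np.pi / n
x, y = np.cos(t), np.sin(t)
dxdt, dydt = -np.sin(t), np.cos(t)        # dr/dt along the circle

# Line integral of F . dr (the integrand is sin^2 + cos^2 = 1):
W = np.sum((-y) * dxdt + x * dydt) * dt
print(W)  # 2*pi, not zero
```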
{ "domain": "physics.stackexchange", "id": 29476, "tags": "classical-mechanics, forces, work, vector-fields" }
Definition of the $S$-Matrix in Schwartz QFT-Book: Why is $\langle f, t_f | i, t_i \rangle$ in the Schroedinger picture, and not Heisenberg-picture?
Question: On page 51 (equation 5.1), Matthew Schwartz introduces the $S$-matrix as \begin{align} \langle f| S | i \rangle_{Heisenberg} = \langle f, \infty | i, -\infty \rangle_{Schrödinger} \end{align} where $|i, t\rangle$ is a Schrödinger state at time $t$ (at least if I understood correctly). So now my question is: Shouldn't it be exactly the other way around? I think that it should be the other way around because of several things: If I set up Schrödinger states at times $-\infty$ and $\infty$, then the time evolution of the system (which is the interesting part) doesn't have an effect on $\langle f, \infty | i, -\infty \rangle$ at all. Schwartz later, when proving the LSZ formula starting at equation 6.6, uses the fields and annihilation/creation operators in the Heisenberg picture. Yet he just calculates the $S$-matrix simply as the scalar product between the defined in- and out-states, as this question points out. Can somebody clear things up here? Especially, if an answer writes something like "$|i, t_i \rangle$ is a state at time $t$", or "state $|i, t_i \rangle$ fixed at time $t$", it would be great to additionally say what is meant by that, because that phrase's meaning is very ambiguous. Answer: This is a common confusion, and I don't think Schwartz's description is clear at all. An in state $|i, \text{in} \rangle$ is a Heisenberg state, fixed at time $t = 0$, which would look like several widely separated, incoming particles if you evolved it like a Schrödinger state to $t \to -\infty$. (Explicitly, if $U(t_f, t_i)$ is the time evolution operator in the Schrödinger picture, then $|i, \text{in} \rangle = U(0, -\infty) |i \rangle$. That is, it's the Schrödinger state evaluated at time $t = 0$.) Similarly, an out state $| f, \text{out} \rangle$ is a Heisenberg state which has widely separated, outgoing particles as $t \to \infty$. Here, $i$ and $f$ stand for a specification like "two particles, of momenta $p^\mu$ and $q^\mu$".
We want to know the transition amplitudes $\langle f, \text{out} | i, \text{in} \rangle$. The $S$-matrix is defined by $$|i, \text{in} \rangle = S | i, \text{out} \rangle.$$ Upon using the unitarity property $S^\dagger = S^{-1}$, this implies $$\langle f, \text{out} | i, \text{in} \rangle = \langle f, \text{in} | S | i, \text{in} \rangle.$$ Again, all of these are Heisenberg states. But "in" and "out" states are a weird kind of Heisenberg state defined by how they would look if they were time evolved like Schrödinger states, even though they actually aren't, which is why books can be unclear on this point.
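For readers who want the unitarity step spelled out, applying the same definition to the final state gives (a small elaboration, not in the original answer):

```latex
% |f, in> = S |f, out>  together with  S^\dagger = S^{-1}
% gives  \langle f, \text{out}| = \langle f, \text{in}| S,  hence
\begin{align}
\langle f, \text{out} | i, \text{in} \rangle
  = \big( S^\dagger \, | f, \text{in} \rangle \big)^\dagger \, | i, \text{in} \rangle
  = \langle f, \text{in} | \, S \, | i, \text{in} \rangle .
\end{align}
```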
{ "domain": "physics.stackexchange", "id": 80567, "tags": "quantum-field-theory, hilbert-space, terminology, definition, s-matrix-theory" }
What is the purpose of a quantum RAM in quantum algorithms?
Question: I see many papers (e.g. Quantum principal component analysis) in which the existence of qRAM is necessary. What's the actual purpose of qRAM in quantum algorithms? Answer: This is discussed in chapter 5 of Ciliberto et al.. The purpose of most quantum(-enhanced) machine learning algorithms is to speed-up the processing of classical data over what is possible with classical machine learning algorithms. In other words, the context is that you have a set of classical vectors $\{\boldsymbol x_k\}_k$, and you want to compute some function $\boldsymbol f(\boldsymbol x_k)$ of this data (which may then be used as an estimator of some property, or as a function characterising a classifier to be used for new data points, or something else). Most quantum machine learning algorithms tell you that, provided you are able to efficiently perform the mapping $$\{\boldsymbol x_k\}_k\mapsto\lvert \{\boldsymbol x_k\}\rangle= N\sum_{kj} x_{kj}\lvert k,j\rangle,$$ then it is sometimes possible to compute $\boldsymbol f(\{\boldsymbol x_k\})$ more efficiently. It is, however, highly nontrivial how to perform such mapping efficiently. To maintain the potential exponential speed-ups of the quantum algorithms, this conversion needs to be efficient. If this is not the case, then one ends up in a situation in which the quantum algorithm can solve the problem very efficiently, but only after a lengthy preprocessing of the data has been performed, therefore killing the whole point of using the quantum algorithm. This is where QRAMs come into play. A QRAM is a device that can (theoretically) encode $N$ $d$-dimensional classical vectors into (the amplitudes of) a quantum state of $\log(Nd)$ qubits, in time $\mathcal O(\log(Nd))$. As discussed in Ciliberto et al., as well as in this related answer, the actual feasibility of QRAMs is still not entirely clear, and many caveats remain.
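A purely classical sketch of the state a QRAM is supposed to prepare (toy sizes of my choosing; no actual quantum hardware involved, this only shows the bookkeeping of the encoding):

```python
import numpy as np

# N classical vectors of dimension d, flattened into the amplitudes of
# a state on log2(N*d) qubits, as in the mapping in the answer.
rng = np.random.default_rng(1)
N, d = 4, 8                   # N*d = 32, so the state lives on 5 qubits
X = rng.random((N, d))        # the classical data {x_k}

amps = X.ravel()                        # index |k, j> -> amplitude x_{kj}
amps = amps / np.linalg.norm(amps)      # the normalisation factor N in the text

n_qubits = int(np.log2(N * d))
print(n_qubits, np.sum(np.abs(amps)**2))  # 5 qubits, unit norm
```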
{ "domain": "quantumcomputing.stackexchange", "id": 80, "tags": "quantum-memory" }
Question about a relativistically accelerated harmonic oscillator
Question: How is the oscillation frequency of a harmonic oscillator affected if it is somehow accelerated to a relativistic speed perpendicular to its oscillation? Can this be compared with the effect on a relativistic laser clock? Answer: In the oscillator's own inertial frame, the frequency/speed is not affected. But in the frame in which the oscillator moves, the frequency is lower because of time dilation. In that frame, the reduced frequency can be explained by the mass (which should rather be called inertia) increasing (see relativistic mass) and the spring constant decreasing.
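A numeric check that the answer's two routes agree (all numbers are arbitrary; this is only a sketch of the time-dilation bookkeeping, with $c = 1$):

```python
import numpy as np

# Proper frequency of the oscillator: f0 = sqrt(k/m) / (2 pi)
k, m = 100.0, 1.0
f0 = np.sqrt(k / m) / (2 * np.pi)

v = 0.8                        # speed as a fraction of c
gamma = 1 / np.sqrt(1 - v**2)

# Route 1: time dilation directly
f_lab = f0 / gamma

# Route 2: the answer's heuristic, m -> gamma*m and k -> k/gamma
f_heuristic = np.sqrt((k / gamma) / (gamma * m)) / (2 * np.pi)

print(f_lab, f_heuristic)  # identical: sqrt(k/(gamma^2 m)) = sqrt(k/m)/gamma
```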
{ "domain": "physics.stackexchange", "id": 71163, "tags": "special-relativity, harmonic-oscillator, elementary-particles" }
Energy of a signal using Rayleigh's theorem
Question: I have to find the energy of a signal using Rayleigh's theorem. The signal is $$ x(t) = A e^{-At} u(t) $$ assuming $A>0$. Using the classic definition of $E$, I found that it should be $$ \frac{A}{2} $$ Using Rayleigh's theorem I should compute $$ \int_{-\infty}^{\infty} \left| \frac{A}{A+i2\pi f}\right|^2 df $$ because I previously found $X(f)$. From this I obtained $$\frac{A^2}{8 \pi^2 f} \log \sqrt{A^2 +4\pi^2f^2} $$ evaluated from $-\infty$ to $\infty$, using $$ \log \sqrt{x} = \frac{\log x}{2} $$ but now I have no idea how to continue, or whether this is correct. Thank you. Answer: If I understand correctly, you want to verify the energy calculation in the frequency domain by computing the energy as $$E_x=\int_{-\infty}^{\infty}|X(f)|^2df\tag{1}$$ with $$X(f)=\mathcal{F}\big\{x(t)\big\}=\frac{A}{A+i2\pi f}\tag{2}$$ From $(2)$ we get $$|X(f)|^2=\frac{A^2}{A^2+(2\pi f)^2}=\frac{1}{1+\left(\frac{2\pi f}{A}\right)^2}\tag{3}$$ With $(3)$ and with the substitution $x=2\pi f/A$, the integral $(1)$ becomes $$E_x=\frac{A}{2\pi}\int_{-\infty}^{\infty}\frac{1}{1+x^2}dx=\frac{A}{2\pi}\arctan(x){\huge|}_{-\infty}^{\infty}=\frac{A}{2\pi}\cdot\pi=\frac{A}{2}\tag{4}$$
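Both sides of the theorem can be sanity-checked numerically (the frequency integral is truncated, so it only approximates $A/2$; the value of $A$ is arbitrary):

```python
import numpy as np

def trap(y, x):
    """Plain trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

A = 3.0  # any A > 0

# Time domain: E = integral_0^inf A^2 e^{-2At} dt = A/2
t = np.linspace(0, 20 / A, 200001)
E_time = trap((A * np.exp(-A * t))**2, t)

# Frequency domain (Rayleigh): E = integral |X(f)|^2 df,
# with |X(f)|^2 = A^2 / (A^2 + (2 pi f)^2); tails truncated at |f| = 2000.
f = np.linspace(-2000, 2000, 1000001)
E_freq = trap(A**2 / (A**2 + (2 * np.pi * f)**2), f)

print(E_time, E_freq)  # both close to A/2 = 1.5
```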
{ "domain": "dsp.stackexchange", "id": 8253, "tags": "fourier-transform, fourier, fourier-series" }
How to calculate translation and rotation of pulley
Question: This is not a homework question, because I do not want help with solving my homework; rather, I would like an explanation of the logic behind this and how it works. Background information My problem I know that the pulley should lower at some speed and rotate if $V_A \neq V_B$? That much I can understand using intuition, I think. However, I do not understand the relation between the speed of the cables and the rotation/speed of the pulley. What is the "best" way to approach this kind of problem? If $V_A=V_B$ then there is no rotation? But if the speeds are different, how do the cable speeds and the rotation/pulley speed relate? Answer: You can start building your intuition from special cases (like you have done for $v_A = v_B$). The next special case to consider is that only one of the side pulleys is giving rope, i.e. let's say $v_B = 0$. Then it is easy to see that the main pulley should be descending at $0.5v_A$. How about the rotation? Because we assume the rope does not stretch or slip, the rim point of the main pulley on each side must move at that side's rope speed. With $v_B = 0$ the rope on side B is stationary, so the main pulley effectively rolls down it: the rim point touching rope B is momentarily at rest, which requires $v_C = \omega_C r_C$, i.e. $\omega_C = \frac{v_C}{r_C} = \frac{v_A}{2r_C}$. Now what happens if $v_A \neq v_B$ and neither is zero? For the height, the main thing to notice is that it does not matter from which side the rope is given; the main pulley makes sure that there are equal lengths of rope on both sides. One way to think of this is that both pulleys feed rope independently, but each contributes only half of what it feeds, i.e. $v_C = 0.5(v_A + v_B)$. The rotation of the middle pulley can be found with the same mindset, the main difference being that the two sides turn the main pulley in opposite directions. Writing the rim velocities on the two sides, $v_C + \omega_C r_C = v_A$ and $v_C - \omega_C r_C = v_B$, and subtracting, we get $\omega_C = \frac{v_A - v_B}{2r_C}$.
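The no-slip conditions can be verified directly with numbers (arbitrary values; the asserts check that the rim point on each side moves at that side's rope speed):

```python
# Claimed kinematics of the movable pulley:
#   centre descends at the average rope speed,
#   rotation comes from the difference of rope speeds.
vA, vB, rC = 3.0, 1.0, 0.5    # rope speeds and pulley radius (arbitrary)

vC = 0.5 * (vA + vB)           # descent speed of the pulley centre
omega = (vA - vB) / (2 * rC)   # angular speed

# No slip: each rim point must match its rope's speed.
assert abs((vC + omega * rC) - vA) < 1e-12  # rim point on side A
assert abs((vC - omega * rC) - vB) < 1e-12  # rim point on side B
print(vC, omega)
```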
{ "domain": "physics.stackexchange", "id": 62930, "tags": "rotational-dynamics, rigid-body-dynamics" }
What is the difference between an algorithm, a language and a problem?
Question: It seems that on this site, people will often correct others for confusing "algorithms" and "problems." What is the difference between these? How do I know when I should be considering algorithms and considering problems? And how do these relate to the concept of a language in formal language theory? Answer: For simplicity, I'll begin by only considering "decision" problems, which have a yes/no answer. Function problems work roughly the same way, except instead of yes/no, there is a specific output word associated with each input word. Language: a language is simply a set of strings. If you have an alphabet, such as $\Sigma$, then $\Sigma^*$ is the set of all words containing only the symbols in $\Sigma$. For example, $\{0,1 \}^*$ is the set of all binary sequences of any length. An alphabet doesn't need to be binary, though. It can be unary, ternary, etc. A language over an alphabet $\Sigma$ is any subset of $\Sigma^*$. Problem: A problem is some question about some input we'd like answered. Specifically, a decision problem is a question which asks, "Does our given input fulfill property $X$?" A language is the formal realization of a problem. When we want to reason theoretically about a decision problem, we often examine the corresponding language. For a decision problem $X$, the corresponding language is: $L = \{w \mid w$ is the encoding of an input $y$ to problem $X$, and the answer to input $y$ for problem $X$ is "Yes" $ \}$ Determining if the answer for an input to a decision problem is "yes" is equivalent to determining whether an encoding of that input over an alphabet is in the corresponding language. Algorithm: An algorithm is a step-by-step way to solve a problem. Note that an algorithm can be expressed in many ways and in many languages, and that there may be many different algorithms solving any given problem. Turing Machine: A Turing Machine is the formal analogue of an algorithm.
A Turing Machine over a given alphabet, for each word, either will or won't halt in an accepting state. Thus for each Turing Machine $M$, there is a corresponding language: $L(M) = \{w \mid M$ halts in an accepting state on input $w\}$. (There's a subtle difference between Turing Machines that halt on all inputs and those that halt only on "yes" inputs; this distinction separates the classes $\mathsf{R}$ and $\mathsf{RE}$.) The relationship between languages and Turing Machines is as follows: every Turing Machine accepts exactly one language; there may be more than one Turing Machine that accepts a given language; and there may be no Turing Machine that accepts a given language. We can say roughly the same thing about algorithms and problems: every algorithm solves a single problem, but there may be 0, or many, algorithms solving a given problem. Time Complexity: One of the most common sources of confusion between algorithms and problems is in regards to complexity classes. The correct allocation can be summarized as follows: an algorithm has a time complexity; a problem belongs to a complexity class. We say an algorithm has a worst-case complexity bound of $f(n)$ if the algorithm halts in at most $f(n)$ steps for any input of size $n$. Problems don't have run-times, since a problem isn't tied to a specific algorithm which actually runs. Instead, we say that a problem belongs to a complexity class if there exists some algorithm solving that problem with a given time complexity. $\mathsf{P}, \mathsf{NP}, \mathsf{PSPACE}, \mathsf{EXPTIME}$ etc. are all complexity classes. This means they contain problems, not algorithms. An algorithm can never be in $\mathsf{P}$, but if there's a polynomial-time algorithm solving a given problem $X$, then $X$ can be classified in complexity class $\mathsf{P}$.
Other algorithms with different time complexities (e.g. exponential-time algorithms) may also solve the same problem, but since a single polynomial-time algorithm solving $X$ exists, $X$ is in $\mathsf{P}$.
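To make the distinction concrete, here is one decision problem, a finite slice of its language, and two different algorithms deciding it (a toy illustration of my own, not from the original answer):

```python
# Problem: "Does this binary string encode an even number?"

def decides_even_v1(w: str) -> bool:
    """Algorithm 1: inspect the last symbol only."""
    return w.endswith('0')

def decides_even_v2(w: str) -> bool:
    """Algorithm 2: different steps, same problem, same language."""
    return int(w, 2) % 2 == 0

# A finite slice of the corresponding language
# L = { w in {0,1}* : w encodes an even number }:
L_slice = {format(n, 'b') for n in range(0, 16, 2)}

# Both algorithms decide membership in L for these short words:
for w in ('0', '10', '11', '110', '101'):
    assert decides_even_v1(w) == decides_even_v2(w) == (w in L_slice)
print(sorted(L_slice))
```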
{ "domain": "cs.stackexchange", "id": 19069, "tags": "algorithms, complexity-theory, formal-languages, terminology, reference-question" }
Prove "Vertex Cover OR Clique" is NP complete
Question: Instance: An undirected graph $G$ and a positive integer $k$ Question: Does $G$ contain a vertex cover of size $\leq k$ or a clique of size $\geq k$? Obviously, this problem is solved by polynomial reduction, but is it from Clique or Vertex Cover? And how? I've tried to reduce from both problems, but seem to get stuck. If I reduce from Clique, for example, It seems I'd need to guarantee that there isn't a Vertex Cover, but it's not clear to me how to do that. It seems reducing from Vertex Cover is more promising, but I run into the same issue Answer: You can easily reduce from clique as follows. First, notice that the clique problem remains NP-hard even if we restrict $k$ to lie in $3 \leq k \leq n$ (because outside this range the problem is trivially solvable in polynomial time). Given a graph $G$ on $n$ vertices and $3\leq k \leq n$, construct a graph $G'$ by taking the disjoint union of $G$ with a perfect matching on $2(n+1)$ additional vertices. Now, $G$ has a clique of size $\geq k$ if and only if $G'$ has a clique of size $\geq k$. This is because if we find a clique of size $\geq k\geq 3$ in $G$ then it cannot lie in the matching part, as this has no cliques of size greater than $2$. Moreover, $G'$ has no vertex cover of size $\leq k$, as you need at least $n+1$ vertices to cover the matching. Thus, $G$ has a clique of size $k$ if and only if $G'$ has a clique of size $\geq k$ or a vertex cover of size $\leq k$ (this second condition always being false).
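The construction is easy to write down concretely; a small sketch (the graph encoding as an edge list over vertices 0..n-1 is my own choice):

```python
def reduce_clique_to_vc_or_clique(n, edges, k):
    """G' = G disjoint-union a perfect matching on 2(n+1) new vertices.

    Returns (n', edges', k) describing the instance of the combined problem.
    """
    assert 3 <= k <= n  # the range on which Clique stays NP-hard
    matching = [(n + 2 * i, n + 2 * i + 1) for i in range(n + 1)]
    return n + 2 * (n + 1), list(edges) + matching, k

# Tiny example: a triangle with k = 3.
n_, edges_, k_ = reduce_clique_to_vc_or_clique(3, [(0, 1), (1, 2), (0, 2)], 3)
print(n_, len(edges_))  # 11 vertices, 3 original + 4 matching edges
```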
{ "domain": "cs.stackexchange", "id": 21949, "tags": "np-complete, reductions, np, clique, vertex-cover" }
Is it possible that the value of a continuous mother wavelet at origin is zero, i.e. $\psi(t=0)=0$?
Question: According to Fourier transform, a continuous wavelet could be written as $$ \psi(t)=\frac{1}{2\pi}\int\hat\psi(k)\text{e}^{-ikt}\text{d}k $$ From the equation above, we know that $\psi(t=0)$ is $$ \psi(0)=\frac{1}{2\pi}\int\hat\psi(k)\text{d}k $$ Is it possible that $\psi(0)=0$? Answer: Yes: a sufficient criterion for a valid wavelet is being zero mean in the time domain, i.e. $\hat \psi (0) = 0$. For $\psi(0) = 0$ we instead require that $\hat \psi$ integrates to zero; one example is the higher-order Generalized Morse Wavelets of Olhede & Walden (though the example below is only approximately zero-sum). Such wavelets can be admissible and analytic, enabling CWT inversion. Trivial Python example below; GMWs are also implemented in ssqueezepy. Code import numpy as np import matplotlib.pyplot as plt from numpy.fft import ifft, ifftshift t = np.linspace(0, 1, 256, endpoint=False) wf = np.exp(-(t - .08)**2 * 4096) wf -= np.roll(wf, 25) # shift by 25 samples and subtract w = ifftshift(ifft(wf)) # take to time domain and center plt.plot(wf) plt.show() plt.plot(w.real) plt.plot(w.imag) plt.show()
{ "domain": "dsp.stackexchange", "id": 10161, "tags": "continuous-signals, wavelet" }
Tetris clone in C++ using ncurses
Question: Recently, just for fun, I've created a clone of the widely popular game Tetris using C++. Since I am still a novice when it comes to C++, I would really appreciate all the feedback I can get from those with more experience. The code below is also on GitHub. Main.cpp #include "Game.h" #include <ncurses.h> #include <clocale> int main() { Game game; setlocale(LC_ALL, ""); initscr(); start_color(); init_pair(0, COLOR_GREEN, COLOR_BLACK); init_pair(1, COLOR_RED, COLOR_BLACK); init_pair(2, COLOR_BLUE, COLOR_BLACK); init_pair(3, COLOR_YELLOW, COLOR_BLACK); init_pair(4, COLOR_GREEN, COLOR_BLACK); curs_set(FALSE); raw(); noecho(); nodelay(stdscr, TRUE); game.matrix_init(); while (!game.isGameOver()) { bool can_create_block = false; can_create_block = game.get_last_block().move_down(); if (can_create_block) { game.destroy(); game.create_block(); } game.controls(); napms(game.getSpeed()); if (game.getSpeed() < DEFAULT_SPEED) game.setSpeed(DEFAULT_SPEED); game.draw(); game.gameOverChecker(); } endwin(); return 0; } cCoord.h #ifndef TETRIS_CCOORD_H #define TETRIS_CCOORD_H #define MAX_COORDINATES 4 class cCoord { private: int x, y; public: // Getter functions int get_x() const; int get_y() const; // Setter functions cCoord set_x(int a); cCoord set_y(int b); cCoord(int a, int b) : x(a), y(b) {}; cCoord() = default; ~cCoord() = default; }; #endif //TETRIS_CCOORD_H cCoord.cpp #include "cCoord.h" int cCoord::get_x() const { return x; } int cCoord::get_y() const { return y; } cCoord cCoord::set_y(int b) { y = b; return *this; } cCoord cCoord::set_x(int a) { x = a; return *this; } Block.h #ifndef TETRIS_BLOCK_H #define TETRIS_BLOCK_H #include "cCoord.h" class Block { private: cCoord coord; public: Block(cCoord c); Block(int x, int y); Block() = default; ~Block() = default; void move_down(); void move_right(); void move_left(); // Setter functions Block set_x(int x) { coord.set_x(x); return *this; } Block set_y(int y) { coord.set_y(y); return *this; } // Getter functions int 
get_x() const { return coord.get_x(); } int get_y() const { return coord.get_y(); } }; #endif //TETRIS_BLOCK_H Block.cpp #include "Block.h" #include "Game.h" Block::Block(cCoord c) : coord(c) {} Block::Block(int x, int y) : coord(x, y) {} void Block::move_down() { coord.set_y(coord.get_y() + 1); } void Block::move_right() { coord.set_x(coord.get_x() + 1); } void Block::move_left() { coord.set_x(coord.get_x() - 1); } Structure.h #ifndef TETRIS_STRUCTURE_H #define TETRIS_STRUCTURE_H #include "cCoord.h" #include "Block.h" #include <vector> class Structure { private: int struct_type; // The type of block, according to the key cCoord origin; int color; public: Structure(int type, int c); Structure(const Structure&); // Rotation methods Structure rotate_left(); Structure rotate_right(); // Movement methods bool move_down(); Structure move_left(); Structure move_right(); std::vector<Block> coords; // Getters int getColor() const; }; #endif //TETRIS_STRUCTURE_H Structure.cpp #include <cmath> #include "Structure.h" #include "Game.h" inline void rotate_point(cCoord &origin, float angle, Block &p) { int x1 = static_cast<int>(round(cos(angle) * (p.get_x() - origin.get_x()) - sin(angle) * (p.get_y() - origin.get_y()) + origin.get_x())); int y1 = static_cast<int>(round(cos(angle) * (p.get_y() - origin.get_y()) + sin(angle) * (p.get_x() - origin.get_x()) + origin.get_y())); p.set_x(x1); p.set_y(y1); } Structure::Structure(int type, int c) : struct_type(type), origin(Game::struct_origins[type]), color(c) { coords.resize(4); for (int i = 0; i < MAX_COORDINATES; ++i) { coords.at(i).set_x(Game::struct_coords[type][i].get_x()); coords.at(i).set_y(Game::struct_coords[type][i].get_y()); } } Structure::Structure(const Structure &s) : struct_type(s.struct_type), origin(s.origin), coords(s.coords), color(s.color) {} Structure Structure::rotate_left() { std::vector<Block> temp(coords); // Create a temporary array to make // sure the structure doesn't go out of bounds for (auto &b : temp) { 
rotate_point(origin, 1.5708, b); // If out of bounds, do not rotate the original structure if (b.get_x() > Game::width - 1 || b.get_x() < 0 || b.get_y() > Game::height - 1 || b.get_y() < 0 || Game::collision_detector_x(b.get_x(), b.get_y())) return *this; } for (int i = 0; i < coords.size(); ++i) coords[i] = temp[i]; return *this; } Structure Structure::rotate_right() { std::vector<Block> temp(coords); // Create a temporary array to make // sure the structure doesn't go out of bounds for (auto &b : temp) { rotate_point(origin, -1.5708, b); // If out of bounds, do not rotate the original structure if (b.get_x() > Game::width - 1 || b.get_x() < 0 || b.get_y() > Game::height - 1 || b.get_y() < 0 || Game::collision_detector_x(b.get_x(), b.get_y())) return *this; } for (int i = 0; i < coords.size(); ++i) coords[i] = temp[i]; return *this; } bool Structure::move_down() { for (auto &b : coords) { if (b.get_y() >= Game::height - 1 || Game::collision_detector_y(b.get_x(), b.get_y() + 1)) return true; } for (auto &b : coords) b.move_down(); if (origin.get_y() <= Game::height - 1) origin.set_y(origin.get_y() + 1); return false; } Structure Structure::move_left() { std::vector<Block> temp(coords); // Create a temporary array to make sure the // structure doesn't go out of bounds for (auto &b : temp) { b.move_left(); // If out of bounds, do not move the original structure if (b.get_x() > Game::width - 1 || b.get_x() < 0 || Game::collision_detector_x(b.get_x() - 1, b.get_y())) return *this; } for (int i = 0; i < coords.size(); ++i) coords[i] = temp[i]; origin.set_x(origin.get_x() - 1); return *this; } Structure Structure::move_right() { std::vector<Block> temp(coords); // Create a temporary array to make sure the // structure doesn't go out of bounds for (auto &b : temp) { b.move_right(); // If out of bounds, do not move the original structure if (b.get_x() > Game::width - 1 || b.get_x() < 0 || Game::collision_detector_x(b.get_x() + 1, b.get_y())) return *this; } for (int i = 0; 
i < coords.size(); ++i) coords[i] = temp[i]; origin.set_x(origin.get_x() + 1); return *this; } int Structure::getColor() const { return color; } Game.h #ifndef TETRIS_GAME_H #define TETRIS_GAME_H #include "Structure.h" #include <vector> #include "Globals.h" #define DEFAULT_SPEED 300 class Game { friend class Block; private: int prev_block = 99; /* Previous block, represented by the key */ bool gameOver = false; int speed = 250; public: /* * Block structures key: * 0 : L shaped * 1 : Square shaped * 2 : Stick shaped * 3 : Stair shaped * 4 : T shaped */ constexpr static int height = 24; constexpr static int width = 10; constexpr static long blockChar = L'\u2588'; /* Constant which represents the value of the block character */ static cCoord struct_coords[][MAX_COORDINATES + 1]; static cCoord struct_origins[MAX_COORDINATES + 1]; Game(); /* Constructor */ // Block/Structure functions void create_block(); Structure& get_last_block(); void destroy(); // Destroy blocks in a line and then make all blocks ontop fall down static bool collision_detector_y(int x, int y); static bool collision_detector_x(int x, int y); // Getters int get_next_block(); bool isGameOver() const; int getSpeed() const; // Setters void setSpeed(int speed); // General game methods void matrix_init(); void draw(); void controls(); void gameOverChecker(); // Checks for game over }; #endif //TETRIS_GAME_H Game.cpp #include <random> #include <ncurses.h> #include "Game.h" int Game::get_next_block() { int val; while (true) { std::random_device generator; std::uniform_int_distribution<int> distribution(0,4); if((val = distribution(generator)) != prev_block) return val; } } // Stores template for all the different tetris pieces cCoord Game::struct_coords[][MAX_COORDINATES + 1] = {{ /* Row: 1 */ {0, 0}, {1, 0}, {2, 0}, /* Row: 2 */ {0, 1}, }, { /* Row: 1 */ {0, 0}, {1, 0}, /* Row: 2 */ {0, 1}, {1, 1}, }, { /* Row: 1 */ {0, 0}, /* Row: 2 */ {0, 1}, /* Row: 3 */ {0, 2}, /* Row: 4 */ {0, 3}, }, { /* Row: 1 */ {1, 
0}, {2, 0}, /* Row: 2 */ {0, 1}, {1, 1}, }, { /* Row: 1 */ {1, 0}, /* Row: 2 */ {0, 1}, {1, 1}, {2, 1}, }}; // Stores the origins coords for all the different tetris pieces cCoord Game::struct_origins[MAX_COORDINATES + 1] = { /* L Shaped */ {0, 0}, /* Square shaped */ {0, 0}, /* Stick shaped */ {0, 0}, /* Stair shaped */ {1, 0}, /* T shaped */ {1, 1}, }; Game::Game() { create_block(); } inline void Game::create_block() { s.push_back(Structure(get_next_block(), get_next_block())); } inline Structure& Game::get_last_block() { return *(s.end() - 1); } bool Game::isGameOver() const { return gameOver; } void Game::matrix_init() { int x, y; for (y = 0; y < height; y++) { for (x = 0; x < width; x++) { bool foundBlockFlag = false; // Cycle through x and y, if x and y match with block, draw block for (auto iter1 = s.cbegin(); iter1 != s.cend(); ++iter1) for (auto iter2 = iter1->coords.cbegin(); iter2 != iter1->coords.cend(); ++iter2) if (x == iter2->get_x() && y == iter2->get_y()) { attron(COLOR_PAIR(iter1->getColor())); printw("█"); attroff(COLOR_PAIR(iter1->getColor())); foundBlockFlag = true; break; } // If nothing matches, draw a space if (!foundBlockFlag) { move(y, x); printw(" "); } } move(y, x); printw("\n"); } } void Game::draw () { for (int y = 0; y < height; y++) { for (int x = 0; x < width; x++) { bool foundBlockFlag = false; // Cycle through x and y, if there is a block where there isn't a block drawn, draw one for (auto iter1 = s.cbegin(); iter1 != s.cend(); ++iter1) for (auto iter2 = iter1->coords.cbegin(); iter2 != iter1->coords.cend(); ++iter2) if (x == iter2->get_x() && y == iter2->get_y() && static_cast<char>(mvinch(y, x)) != blockChar) { attron(COLOR_PAIR(iter1->getColor())); move(y, x); printw("█"); attroff(COLOR_PAIR(iter1->getColor())); foundBlockFlag = true; break; } // If nothing matches, draw a space if (!foundBlockFlag) { move(y, x); printw(" "); } } } } void Game::controls () { switch(getch()) { case 'q' : case 'Q' : 
get_last_block().rotate_left(); break; case 'e' : case 'E' : get_last_block().rotate_right(); break; case 'a' : case 'A' : get_last_block().move_left(); break; case 'd' : case 'D' : get_last_block().move_right(); break; case 'x' : case 'X' : gameOver = true; break; case 's' : case 'S' : setSpeed(100); break; } } void Game::destroy() { int counter = 0; int delete_y; bool fall_flag; for (int y = height-1; y >= 1; --y) { fall_flag = false; for (int x = 0; x < width; ++x) { if (mvinch(y, x) == blockChar) { ++counter; } if (counter >= width) { delete_y = y; for (auto iter1 = s.begin(); iter1 != s.end(); ++iter1) for (auto iter2 = iter1->coords.begin(); iter2 != iter1->coords.end();) { if (iter2->get_y() == delete_y) { iter2 = iter1->coords.erase(iter2); fall_flag = true; continue; } ++iter2; } } } if (fall_flag) for (int y = delete_y - 1; y >= 0; --y) { for (auto iter1 = s.begin(); iter1 != s.end(); ++iter1) for (auto iter2 = iter1->coords.begin(); iter2 != iter1->coords.end(); ++iter2) { if (iter2->get_y() == y) iter2->move_down(); } } counter = 0; } } void Game::gameOverChecker() { if(s.size() < 2) return; Structure block = *(s.end() - 2); for (auto iter1 = block.coords.cbegin(); iter1 != block.coords.cend(); ++iter1) { if (iter1->get_y() <= 1) { gameOver = true; return; } } } int Game::getSpeed() const { return speed; } void Game::setSpeed(int speed) { Game::speed = speed; } bool Game::collision_detector_y(int x, int y) { for (auto i1 = s.cbegin(); i1 != s.end() - 1; ++i1) for (auto i2 = i1->coords.cbegin(); i2 != i1->coords.cend(); ++i2) if (i2->get_y() == y && i2->get_x() == x) return true; return false; } bool Game::collision_detector_x(int x, int y) { for (auto i1 = s.cbegin(); i1 != s.end() - 1; ++i1) for (auto i2 = i1->coords.cbegin(); i2 != i1->coords.cend(); ++i2) if (i2->get_x() == x && i2->get_y() == y) return true; return false; } Globals.h #ifndef TETRIS_GLOBALS_H #define TETRIS_GLOBALS_H #include <vector> #include "Structure.h" // Contains all the block 
structures that have fallen down, and are still falling extern std::vector<Structure> s; #endif //TETRIS_GLOBALS_H Globals.cpp #include "Globals.h" // Contains all the block structures that have fallen down, and are still falling std::vector<Structure> s; Answer: I see a number of things that may help you improve your code. Hide implementation details It was surprising to me to find so many lines of more or less raw curses code in main. The Model-View-Controller design pattern is often useful for programs like this. The model is the internal state of the game that's mostly already within your Game class, the view is currently split between main and various parts of the Game class and the controller is essentially just the game.controls() function. Separating the I/O from the game logic will help you write cleaner code and also assist if you were to decide to port the game to some other platform. Understand random The get_next_block member function of Game is currently this: int Game::get_next_block() { int val; while (true) { std::random_device generator; std::uniform_int_distribution<int> distribution(0,4); if((val = distribution(generator)) != prev_block) return val; } } There are a few problems with this. First, and most importantly, you should generally not use the std::random_device except to seed another random number generator such as mt19937. The reason for this is that std::random_device is sometimes very slow and sometimes hardware based. It tends to slow down a lot if the underlying entropy of the device is low. The second problem is that the generator should probably be static so that the distribution is pulling from the same random number generator every time instead of creating a new one. 
I'd rewrite it like this: int Game::get_next_block() { static std::mt19937 generator(std::random_device{}()); std::uniform_int_distribution<int> distribution(0,4); int val; for (val = distribution(generator); val == prev_block; val = distribution(generator)) { } return val; } Prefer const variables to #define Since you're using C++, there is little reason to use #define to define a numerical constant. Instead of this: #define DEFAULT_SPEED 300 Use this: constexpr int default_speed{300}; Note that I've also changed that from all capital letters (which is the convention for macros) to a regular variable name according to whichever convention you're using. Use const where practical There are a number of places in the code where variables could be declared const such as in Game.h: static const cCoord struct_coords[][MAX_COORDINATES + 1]; static const cCoord struct_origins[MAX_COORDINATES + 1]; Avoid the use of global variables I see that s (which is a poor name, by the way) is a global variable. It's generally better to explicitly pass variables your function will need rather than using the vague implicit linkage of a global variable. In this case, I think it would make more sense to have this be the Game object and have a separate Screen object as the Model and View classes of a Model-View-Controller. Write member initializers in declaration order The Structure class has this constructor Structure::Structure(const Structure &s) : struct_type(s.struct_type), origin(s.origin), coords(s.coords), color(s.color) {} That looks fine, but in fact, coords will be initialized after color because members are always initialized in declaration order and color is declared before coords in this class. 
To avoid misleading another programmer, you should swap the order of those such that it says instead: Structure::Structure(const Structure &s) : struct_type(s.struct_type), origin(s.origin), color(s.color), coords(s.coords) {} This way the initialization actually proceeds from left to right as one might expect at first glance. Be careful with signed and unsigned In several cases, the code compares an int i with an unsigned std::size_t coords.size(). It would be better to declare i to also be std::size_t. Pass object references where needed It doesn't really make much sense for the collision detector functions to be static since they need the current game state to actually operate correctly. Instead, make them regular member functions and then pass a reference to the Game object for all of the various Structure functions that call one of the collision functions. Doing so will help you eliminate the ugly global variable as mentioned above. Rethink your classes The cCoord class is not doing anything except cluttering the code. This isn't Java, and the "getters and setters" idiom used there is not generally acceptable in modern C++. Instead, your cCoord class could simply be a plain struct since anything can set or read it anyway. Similarly, the Block class is also doing very little and doesn't benefit from containing a cCoord -- it could be much simpler to have an x and y directly as part of the Block class. Also, it would make more sense if the Block class actually represented a Block (with the associated coordinates and origin currently stored in Game) and handled its own rotation and reported its minimum and maximum coordinates for use in collision detection. Understand the libraries you use The mvinch() curses function returns a chtype and not a char. The existing code attempts to deal with that using a static cast, but that's not really correct. Instead, I'd recommend changing the blockChar constant to be of type chtype.
Avoid floating point where practical The current rotate_point within Structure uses sin and cos to rotate the blocks, but the blocks are only ever rotated by +90 or -90 degrees. It would make more sense to simply precompute the sin and cos values of those angles and use those directly, which would be much faster. Think of the user The game doesn't currently show a score, the boundaries of the playing field or any hint as to which keys do what to control the game. Each of these would definitely enhance the user's experience. Omit return 0 When a C or C++ program reaches the end of main the compiler will automatically generate code to return 0, so there is no need to put return 0; explicitly at the end of main. Note: when I make this suggestion, it's almost invariably followed by one of two kinds of comments: "I didn't know that." or "That's bad advice!" My rationale is that it's safe and useful to rely on compiler behavior explicitly supported by the standard. For C, since C99; see ISO/IEC 9899:1999 section 5.1.2.2.3: [...] a return from the initial call to the main function is equivalent to calling the exit function with the value returned by the main function as its argument; reaching the } that terminates the main function returns a value of 0. For C++, since the first standard in 1998; see ISO/IEC 14882:1998 section 3.6.1: If control reaches the end of main without encountering a return statement, the effect is that of executing return 0; All versions of both standards since then (C99 and C++98) have maintained the same idea. We rely on automatically generated member functions in C++, and few people write explicit return; statements at the end of a void function. Reasons against omitting seem to boil down to "it looks weird". If, like me, you're curious about the rationale for the change to the C standard read this question. 
Also note that in the early 1990s this was considered "sloppy practice" because it was undefined behavior (although widely supported) at the time. So I advocate omitting it; others disagree (often vehemently!) In any case, if you encounter code that omits it, you'll know that it's explicitly supported by the standard and you'll know what it means.
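As a quick illustration of the floating-point advice above (sketched in Python rather than C++, purely for brevity): rotating a point by exactly ±90° needs no sin or cos calls at all, because cos(±90°) = 0 and sin(±90°) = ±1, so the rotation matrix collapses to a swap-and-negate of the integer offsets.

```python
# Quarter-turn rotation about an origin using only integer arithmetic,
# expressed in standard math axes (y up); with screen coordinates (y down)
# the visual direction of the turn is mirrored.

def rotate_90(origin, point, clockwise=False):
    ox, oy = origin
    dx, dy = point[0] - ox, point[1] - oy
    if clockwise:
        return (ox + dy, oy - dx)   # (dx, dy) -> (dy, -dx)
    return (ox - dy, oy + dx)       # (dx, dy) -> (-dy, dx)
```

Four successive quarter-turns bring a block back to its starting cell, which makes a convenient sanity check, and the integer version cannot accumulate the rounding drift that repeated sin/cos rotations can.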
{ "domain": "codereview.stackexchange", "id": 23482, "tags": "c++, beginner, c++11, curses, tetris" }
Simple DRY Rails - check if a model belongs to a user
Question: I can clearly see I'm repeating myself, but I don't know how to refactor this without making it really ugly. What I have: class ChildModelController < ApplicationController def new @model = ParentModel.find(params[:id]) # Does the current user have permissions on the parent model if signed_in? unless current_user.admin? or current_user.id == @model.user_id flash[:error] = t('errors.messages.not_authorized') flash.keep redirect_to root_path end else unless session and session[:session_id] == @model.author_session_token flash[:error] = t('errors.messages.not_authorized') flash.keep redirect_to root_path end end #do stuff to the model end end The only way I can think to make it smaller is to make it ugly: class ChildModelController < ApplicationController def new @model = ParentModel.find(params[:id]) # Does the current user have permissions on the parent model if (!signed_in? and session and session[:session_id] == @model.author_session_token ) || (signed_in and current_user.admin? or current_user.id == @model.user_id) flash[:error] = t('errors.messages.not_authorized') flash.keep redirect_to root_path end #do stuff to the model end end Am I missing an even easier way? Answer: I suggest you extract two methods: authorized? to check if the user is authorized, and not_authorized to handle behaviour. You would then end up with this: def new @model = ParentModel.find(params[:id]) not_authorized unless authorized? end private def authorized? if signed_in? # If you have your associations setup correctly you don't have # to check equality on id. You can just compare the objects current_user.admin?
|| current_user == @model.user else # See aside number 2 for why I removed session # session[:session_id] will return nil if it's not present # which is false in ruby, so it's safe to do this session[:session_id] == @model.author_session_token end end def not_authorized flash[:error] = t('errors.messages.not_authorized') flash.keep redirect_to root_path end But I suggest you think about using an authorisation library, or extracting authorisation logic into a policy object. This way you gain the benefit of DRY, and an extensible way to add and reuse permissions. def new @model = ParentModel.find(params[:id]) not_authorized unless authorized?(current_user, @model) end In a separate policy object you could: # Choose a better name for this class ModelPolicy attr_reader :user, :model def initialize(user, model) @user, @model = user, model end def authorized? user.admin? || user.id == model.user_id end end This example illustrates my point but it is not complete. You must wire the policy to the controllers, maybe include it in a concern. But it's not that hard. There's a great gem called Pundit that gives you similar interface. Asides Try not to use unless with || and && conditions because it becomes confusing. It's clearer to use negation: !current_user. Some people are boolean impaired, including myself. I'm not sure under what circumstances the session hash does not exist. So checking for its presence may be redundant.
{ "domain": "codereview.stackexchange", "id": 11694, "tags": "beginner, ruby, ruby-on-rails" }
Putting a matrix from a text file into a list
Question: I'm trying to read a text file with matrix and put it in a list, but I am using two loops here and I want my function to be faster. def read_file(path_to_file): mylist=[] for eachLine in open(path_to_file,'rt'): mylist.append([int(eachRow) for eachRow in eachLine.split()]) return mylist Answer: Not sure if it's possible to make this faster. But it can be better: with open(path_to_file, 'rt') as fh: return [[int(value) for value in line.split()] for line in fh] First of all, you should always use a with ... context manager when working with files. That way you cannot forget to close the file handle after you are done reading from it. Other improvements: More natural variable names Simpler and more compact writing style using a list comprehension Also keep in mind PEP8, the Python style guide.
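For completeness, here is the suggested rewrite as a full, runnable sketch, with the parsing split out so it can be exercised without touching the filesystem:

```python
def parse_matrix(lines):
    # one inner list per line, one int per whitespace-separated token
    return [[int(value) for value in line.split()] for line in lines]

def read_file(path_to_file):
    # 'with' closes the handle even if int() raises on bad input
    with open(path_to_file, 'rt') as fh:
        return parse_matrix(fh)
```

Because a file handle iterates over its lines, parse_matrix works on either an open file object or a plain list of strings.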
{ "domain": "codereview.stackexchange", "id": 18983, "tags": "python, performance, parsing, file, matrix" }
How will I draw the following signal?
Question: I want to know that if I have any signal given how will I draw it according to the following equation $$x(t-1)-1/2$$ what will be the role of $1/2$ when drawing the new signal? Answer: A signal $x(t)$ is a function that maps a time instant $t$ to a corresponding value. Thus, the manipulations that you described in your question apply as they would with any other function: $x(t-1)$ will effect a right shift along the $t$ axis by $1$ unit. $-\frac{1}{2}$ will effect a downward shift along the $x(t)$ axis by $\frac{1}{2}$ units. That is, the role of $-\frac{1}{2}$ is to just subtract that value from the signal at all values of $t$.
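A quick numerical check of both shifts, using x(t) = t² as an arbitrary example signal (any function would do):

```python
def x(t):
    return t ** 2          # example signal, chosen only for illustration

def y(t):
    # y(t) = x(t - 1) - 1/2: shift right by 1 along t, then down by 1/2
    return x(t - 1) - 0.5
```

Whatever value x takes at t = t0 reappears in y at t = t0 + 1, lowered by 1/2.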
{ "domain": "dsp.stackexchange", "id": 6888, "tags": "continuous-signals" }
Apparent contradiction between First and Second Laws of Thermodynamics
Question: How is the Second Law of Thermodynamics not either false or incompatible with the First Law of Thermodynamics? In practice, the Second Law entails that, given an infinite amount of time, there should be virtually no useful energy in the cosmos. There is, of course, lots of useful energy; thus, either the First Law was, at some point, broken (new energy was created) or the Second Law is simply incorrect and there is or was an 'uphill flow' of energy. The common response I've gotten from lay-persons is that the Second Law applies within the context of time, but, as space-time had a beginning (i.e., the Big Bang), neither the First nor the Second Law ever had to be violated, per se. This never satisfied me: As far as I am aware, there is no direct evidence to suggest that space-time began at the Big Bang. Instead, all the evidence that we have points to the fact that the universe was once very dense, but not necessarily to some absolute beginning. Clearly, scientists cleverer than me have been working on this for years; so I'm sure there's a real answer. Can you give me some input? Answer: Three points to answer: Even without discussing the Big Bang, for which there is plenty of evidence, it is just as possible that time had a beginning as that it hadn't. So your argument only applies in the case that the universe has existed for an infinite amount of time, and with all the same physical laws for that infinite time. Recognize that this is quite a large assumption and not at all entailed in the usual formulation of physics. The "laws" of thermodynamics hold effectively, for larger time-scales and larger systems, especially the second one. There are known cases where entropy decreases for a short time in a small system. In classical mechanics, we also have the Poincare recurrence theorem that hints at the possibility that the rise in entropy cannot hold at all time scales and must come to an end somewhen (but this is not totally clear, either.) 
So, in a nutshell, the laws of thermodynamics don't have the same fundamental status as other laws of physics like Maxwell's equations, Schrödinger's equation and so on. So we should not wonder too much if we get alleged contradictions when applying them to the whole, possibly infinite, history of the universe. The way you connect the first and second law is maybe also problematic, since at first sight they are about different physical entities: the first about energy, and the second about entropy. The connection, that entropy somehow measures how much "useful energy" there is, is rather loose.
{ "domain": "physics.stackexchange", "id": 50399, "tags": "thermodynamics, big-bang" }
Gravity and velocity increase and time effects problem
Question: First I must specify I am a math major and not currently taking any classes on physics and this is a casual interest/hobby of mine. Therefore I apologize in advance if my questions seem naive. An object sits near a gravitational well. The object begins to accelerate toward the center of the gravity well. There exists a reference frame outside which sees this object falling in. The object itself is said to experience time at a slower rate as the gravity increases. Query: Does the object experience time slower itself, or is the time dilation only measurable and therefore visible to an outside reference frame but not known to the object experiencing said dilation? i.e., its life expectancy is some arbitrary time and it lives that time according to its own clock, but to an outside observer it lives much longer. Now let us state this object, call it particle A, is moving at 99.999% of c. It is being followed by the outside reference frame at the same speed from a separate entity. Call these two particles: Particle A, the one which will intersect a black hole or some sufficiently large gravity well, and Particle B, which travels at the same velocity parallel to particle A but far enough away that it can observe particle A's interaction with the gravity well without being a part of the gravitational interaction. Query: Let us assume for a moment that an object can actually reach the limit of the speed of light for its velocity. Particle A is said to be moving so close to c that a sufficiently large gravitational well should give it the boost necessary to reach that limit as it falls in or approaches. Since Particle A should be experiencing time at a slower rate, what does particle B see in this scenario without worrying about the particle's frame slowing down, etc.? The first question is of the most importance to myself. -Thank you for your help.
Answer: To your first question: the particle falling in seems to move slower with respect to an observer at rest outside. The word "observer" is the most important, as it denotes the reference frame. The object falling into the gravitational well is in the same frame as its clock. So all things move slowly for it, including its biological clocks (heart, pulse, etc.) and its brain; the object itself cannot perceive any slowdown in its environment. But to an outside observer the whole system of the falling body is moving at a slower rate, so to that observer its time appears dilated. As for the increase in life expectancy, it also varies from observer to observer with the change in reference frame. To the outside observer, as in this case, the life expectancy of the falling man increases because his biological clocks run slower with respect to that observer. For example, imagine the lifetime of a man is 50,000 beats of the heart. With the slowdown of the heart rate it takes a longer time to complete 50,000 beats, and thus the life expectancy appears to increase. But remember the most important thing here: it varies from observer to observer and from reference frame to reference frame.
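The heartbeat illustration can be made quantitative for the purely kinematic part of the question with the special-relativistic dilation factor γ = 1/√(1 − v²/c²). Note this sketch covers only the velocity contribution; the gravitational contribution would need the Schwarzschild factor instead:

```python
import math

def lorentz_gamma(beta):
    # beta = v/c; factor by which the outside observer sees
    # the moving clock's intervals stretched
    return 1.0 / math.sqrt(1.0 - beta ** 2)

gamma = lorentz_gamma(0.99999)  # the question's 99.999% of c
```

At 99.999% of c, γ is roughly 224, so the 50,000-beat lifetime from the example above would appear stretched by a factor of a few hundred to the outside observer.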
{ "domain": "physics.stackexchange", "id": 25713, "tags": "special-relativity, time-dilation" }
Resources about implementing a database table
Question: I'd like to understand more about how data can be stored on disk so it can be read quickly. I don't want to say database record, because I'm not asking how to build a database. My question is closer to how I would build a table. I'm not asking about indexing. I'm thinking of the case where you have an application that already has a pointer to a record. I've been trying to find some references, e.g. papers or books, but everything so far focuses on database design, as in writing DDL, or on structures for indexing like B+ trees. Are there any references that talk about how tables are actually implemented and what the tradeoffs are? Answer: Databases have some issues that "mere mortal" data storage doesn't concern itself with, such as transaction safety and disk failure. Here are a few classic papers on the topic: Astrahan et al (1976), System R: A Relational Approach to Data Base Management, ACM TOMS 1:2, pp 97-137. Stonebraker et al (1976), The Design and Implementation of INGRES, ACM TOMS 1:3, pp 189-222. Stonebraker (1987), The Design of the POSTGRES Storage System, VLDB '87, pp 289-300. There are also some good books that cover the relevant topics, such as Database System Implementation by Garcia-Molina, Ullman, and Widom.
{ "domain": "cs.stackexchange", "id": 17596, "tags": "data-structures, database-theory, databases" }
Data enrichment of geographical records
Question: I have a user_data table with various fields, some of them are based on geography. I'd like to enrich the data with additional columns, like expected_income_in_region, city_population, life_expectancy_in_state, etc... for each user record. I'd like to use these extended fields with my sql query (prior to additional python manipulations) Is the standard way of getting hold of this data , or do I need to scrape from the US Census bureau and format it myself ? Answer: If you are looking to use US Census data, the American FactFinder website: https://factfinder.census.gov/faces/nav/jsf/pages/index.xhtml allows you to search for Census data in different ways and then download tables in csv format. For example, on the Community Facts site, you can enter a state, county, city, town, or zip code and obtain the population of the geography that you entered. Then you can click on the subjects listed on the left side of the page (Age, Business and Industry, Education, etc.) to see other interesting information about your geography, or click the links under "Popular tables for this Geography" to see tables containing other data for your community. Or you can use the Advanced Search feature to search all available data using all of the search features provided by the site. Also, I don't know much about this myself, but there exists at least this one Github repository with "SQL scripts and other supporting files for importing recent American Community Survey releases from the US Census Bureau datasets". The scripts are released under the GNU General Public License.
{ "domain": "datascience.stackexchange", "id": 5139, "tags": "dataset, sql" }
Is there a difference if input nodes have discrete or range value
Question: If you have some input nodes containing fruit values like {apple=0, pear=1, oranges=2} vs temperature values like {5, 10, 30, 50}, is there any difference in how you set up the neural network to learn the output? I'm guessing in the first case the neural network only learns about the inputs you used to train. For example, if you then try with banana the result will be something random. But in the case of temperature, if I input 35 I would expect a result similar to 30. Also, I'm guessing the first case would need much more data to learn than the second case. So I'm wondering if I have to take different considerations into account in each case. Answer: If categorical values are 1-hot encoded, then there is a node for each item in the category. There would be an input node that indicates the presence or absence of each categorical value: an apple node, a pear node, and an orange node. If apple is present, then the apple input node would take a value of 1; 0 otherwise. Numerical values have one node per feature. The activation of the node is higher if the numerical value is higher. Typically, these input nodes are rescaled to between 0 and 1, so the activation of an input node can take on any value between 0 and 1. In the case of temperature, it depends on how temperature is measured. If temperature is measured continuously, then it can be encoded as a numerical value. If temperature can only take a limited subset of values, then it cannot be encoded as a numerical value; it would be measured on an ordinal scale.
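A minimal sketch of the two encodings described in this answer, using the fruit and temperature values from the question:

```python
def one_hot(value, categories):
    # categorical input: one node per category, 1 only for the present one
    return [1 if c == value else 0 for c in categories]

def min_max_scale(value, lo, hi):
    # numerical input: a single node rescaled into [0, 1]
    return (value - lo) / (hi - lo)

fruits = ["apple", "pear", "orange"]
```

This also shows the asker's intuition directly: an unseen category like "banana" activates no node at all (one_hot returns all zeros), while an unseen temperature of 35 scales to a value close to that of 30.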
{ "domain": "datascience.stackexchange", "id": 8864, "tags": "machine-learning, neural-network" }
Why does ROS have nodes?
Question: I've read the documentation about nodes and topics, but they just describe what nodes and topics are, not why they are useful. Granted, I'm very new to designing robot source code and I am guessing that there are particular cases when they are needed, I have just yet to see them. I'm currently working for a robotics startup (as an intern) that has a pretty messy code base and I really want to clean it up--but I want to use best practices and a very good design. AFAIK, nodes are useful to support inter process communication. Well, there are many parts in the system I'm looking at and I think they've made too many nodes where there doesn't really need to be. It seems like nodes are being used in place of designing a proper architecture that passes information internally. Therefore, I'd like to hear the community's opinion on nodes. Should we avoid using them unless absolutely necessary? When are they absolutely necessary? What is the problem with having one monolithic process? Originally posted by cpagravel on ROS Answers with karma: 156 on 2017-03-17 Post score: 0 Answer: From http://wiki.ros.org/Nodes The use of nodes in ROS provides several benefits to the overall system. There is additional fault tolerance as crashes are isolated to individual nodes. Code complexity is reduced in comparison to monolithic systems. Implementation details are also well hidden as the nodes expose a minimal API to the rest of the graph and alternate implementations, even in other programming languages, can easily be substituted. Separate nodes gives better fault tolerance and debugging; it's much easier to see what's broken when a single node is crashing, and the OS prevents nodes from overwriting the memory of other nodes. Using a pub-sub middleware like ROS also hides the thread/process synchronization that happens in any large, multithreaded system and provides a more obvious way to distribute the processing load across many cores. 
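To make the pub-sub idea concrete, here is a toy, single-process stand-in for the topic layer (plain Python, no ROS dependency, names invented for illustration; real ROS nodes run in separate OS processes, which is exactly the part this toy leaves out):

```python
class TopicBus:
    """Toy stand-in for the ROS topic layer -- illustration only."""

    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # publishers never know who is listening -- that is the decoupling
        for callback in self.subscribers.get(topic, []):
            callback(message)
```

Any subscriber can be swapped for an alternate implementation (even one in another language, in real ROS) without the publisher changing at all.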
Obviously there's a performance hit for serializing and deserializing messages, but for small messages this is negligible. If you're passing around large messages such as images or point clouds, the performance gain from using nodelets outweighs their complexity. For more best practices, have a look at the ROS Patterns and Best Practices section of the wiki. Originally posted by ahendrix with karma: 47576 on 2017-03-17 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by cpagravel on 2017-03-17: Was there ever an instance when you found that putting a piece of code into a separate node was extremely beneficial or mission critical to do? I'd appreciate to hear some personal experience. I'm trying to architect a design which allows for future modularity (which is why I want a feel for nodes). Comment by gvdhoorn on 2017-03-18: I would add to @ahendrix's answer that ROS is (a form of a) component based software framework, making CBSE possible. Nodes are just the concrete implementation of the component concept in ROS, which offer services and consume and produce datastreams. Packages then group several such components .. Comment by gvdhoorn on 2017-03-18: .. in coherent sets (although guarding the coherency is the responsibility of the author / maintainer, so is not necessarily guaranteed). My personal experience is that if you see all of ROS as one concrete example of a more abstract conceptual framework (ie: CBSE), things make sense sooner.
{ "domain": "robotics.stackexchange", "id": 27351, "tags": "ros" }
Looking for efficient algorithm to pick out a unique equivalence class representative for 2D arrays with lots of symmetries
Question: Given 2D arrays (number of rows between 0 and 10, number of columns between 0 and 10, elements are integers between 0 and 31) Two arrays $A,B$ are equivalent $A\sim B$ if $A$ can be transformed into $B$ via permutations of rows columns alphabet e.g. $\begin{pmatrix} 1&2&3\\ 2&2&3 \end{pmatrix} \sim \begin{pmatrix} 1&1&2\\ 1&3&2 \end{pmatrix} $ by permuting the two rows permuting columns 1 and 2 permuting the digits 2->1, 3->2, 1->3. I am looking for an efficient algorithm to pick out a unique member of each of these equivalence classes for lookup in a cache table. Currently what I am doing is: Loop through all permutations of rows Loop through all permutations of columns Permute digits so that this array is lexicographically minimal (the first digit is 0, the next un-mapped digit is 1, etc). And taking the minimum (again, lexicographically) of these. Obviously, one can improve on that by only considering as the first row any row that has the greatest multiplicity of characters. But I'm sure there is massive speedup to be had here. I am just not seeing a great way of finding it. Edit I want a function that is a many-to-one mapping of these 2D arrays to a representative of the equivalence class, i.e. all matrices that are equivalent are mapped to the same output. I want this operation to be quick; fewer operations is better. Ideally each equivalence class can be mapped to a hashable object (like an integer or string) But if the output is a unique class representative that works too. Answer: Your problem is related to graph isomorphism, for which there is no known polynomial-time algorithm. Therefore, you should not expect any efficient algorithm that works on all cases. In particular, your problem is GI-hard, so there is unlikely to be any efficient algorithm that works correctly on all problem instances (unless the matrices are quite small). In practice, you can use a software package like nauty, bliss, etc. Why? 
Well, I will show that if there is a polynomial-time algorithm for your problem, then there is a polynomial-time algorithm for testing isomorphism of two bipartite graphs. It is known that the latter implies a polynomial-time algorithm for graph isomorphism. But no such algorithm is known, and it is a famous open problem whether such an algorithm exists. So don't hold your breath waiting for a polynomial-time algorithm for your problem, either. Here is the reduction. Let $G_1,G_2$ be two bipartite graphs, and $A_1,A_2$ be their $n\times n$ adjacency matrices (here $n=|V_1|+|V_2|$). Then $G_1,G_2$ are isomorphic iff their adjacency matrices are equivalent up to row/column swaps. (You don't have to worry about permutations of the alphabet, since the alphabet is $\{0,1\}$, and a swap of 0/1 will increase the number of 1's in the adjacency matrix.) If we had a polynomial-time algorithm for your problem, we could apply it to $A_1,A_2$ to get their canonical representatives $R_1,R_2$; now $R_1=R_2$ iff $A_1,A_2$ are row/column-swap-equivalent, i.e., iff $G_1,G_2$ are isomorphic. If you have to solve your problem in practice, you could use existing software tools for graph isomorphism, e.g., nauty, bliss, etc., to compute a canonical representative for the equivalence class under graph isomorphisms of the colored complete bipartite graph induced by your matrix. Specifically, if you have an $m\times m$ matrix $M$, then the corresponding bipartite graph has $2m$ vertices, with an edge from $i$ to $j$ carrying color $M[i,j]$ for each $i,j$. See also https://math.stackexchange.com/q/159992/14578, https://mathoverflow.net/q/59991/37212, https://en.wikipedia.org/wiki/Graph_canonization.
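For small instances, the brute-force canonicalisation described in the question is straightforward to implement. Here is a minimal Python sketch (the function name and structure are my own, not from the question): it minimises over all row and column permutations, relabelling symbols in first-seen order, which is the lexicographically minimal choice among all alphabet permutations.

```python
from itertools import permutations

def canonical(matrix):
    """Lexicographically minimal representative over row, column,
    and alphabet permutations (brute force; small matrices only)."""
    rows = len(matrix)
    cols = len(matrix[0]) if rows else 0
    best = None
    for rp in permutations(range(rows)):
        for cp in permutations(range(cols)):
            relabel, flat = {}, []
            for i in rp:
                for j in cp:
                    v = matrix[i][j]
                    # First-seen relabelling is the lexicographically
                    # minimal choice among all alphabet permutations.
                    if v not in relabel:
                        relabel[v] = len(relabel)
                    flat.append(relabel[v])
            key = tuple(flat)
            if best is None or key < best:
                best = key
    return best

# The two equivalent matrices from the question get the same representative:
A = [[1, 2, 3], [2, 2, 3]]
B = [[1, 1, 2], [1, 3, 2]]
print(canonical(A) == canonical(B))  # True
```

The returned tuple is hashable, so it can serve directly as a cache key. The running time is $O(r!\,c!\,rc)$, which is exactly why the answer points to graph-canonisation tools such as nauty for anything larger.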
{ "domain": "cs.stackexchange", "id": 21347, "tags": "algorithms, optimization, sorting" }
Systematic way to draw all inequivalent Feynman diagrams
Question: I am wondering whether there is some systematic approach to finding Feynman diagrams for the S-matrix (or, to be more precise, for $S-1$, since I am interested in the scattering amplitude). For example, in $\phi^3$ theory and its variations (e.g. $\phi^2\Phi$) there is a ridiculous number of diagrams even at the two-loop level. I am particularly interested in $\phi\phi \rightarrow \phi\phi$ or $\phi\Phi \rightarrow \phi\Phi$ scattering. What I usually do (for this kind of scattering) is this: I draw tree-level diagrams One-loop diagrams are obtained from tree-level diagrams by connecting lines together with a single additional line in every possible manner (e.g. adding a loop on an internal line, or connecting an external leg and an internal line ...) Two-loop diagrams are obtained from one-loop diagrams by adding a line as in the previous point. I do not add loops on external legs since those are irrelevant (at least for the S-matrix). Some of the options generated with this algorithm result in the same diagrams - I use Wick's theorem to check whether diagrams correspond to the same contraction or not; if yes, the redundant diagrams are erased. I think that the above algorithm should work (please correct me if I am wrong), however it is very cumbersome and impractical. It also does not work for $\phi^4$ theories, since one cannot simply "connect lines" there - but this does not cause much trouble because $\phi^4$ has pretty simple diagrams up to the two-loop level. So my question is - is there some useful method to obtain Feynman diagrams, at least up to the two-loop level, in scalar field theory(ies)? Keep in mind I am a beginner in QFT. Answer: OP has discovered on their own a primitive application of the Schwinger-Dyson equations. Congratulations! A very gentle introduction to the Schwinger-Dyson equations. ... or how to calculate correlation functions without Feynman diagrams, path integrals, operators, canonical quantisation, the interaction picture, field contractions, etc. 
Note: we will include operators and Feynman diagrams anyway so that the reader may compare our discussion to what they already know. The diagrams below have been generated using the LaTeX package TikZ. You can click on the edit button to see the code. Feel free to copy, modify, and use it yourself. Note: we will not be careful with signs and phases. Factors of $\pm i$ may be missing here and there. Consider an arbitrary QFT defined by an action $S$. The most important object in the theory is the partition function, $Z$. Such an object can be defined either in the path-integral formalism or in the operator formalism (cf. this PSE post): \begin{equation} Z[j]\equiv N^{-1}\int\mathrm e^{iS[\varphi]+j\cdot\varphi}\mathrm d\varphi\equiv\langle \Omega|\mathrm T\ \mathrm e^{ij\cdot\phi}|\Omega\rangle\tag1 \end{equation} where $N$ is a normalisation constant; $\Omega$ is the vacuum state; and $\mathrm T$ is the (covariant) time ordering symbol. In either case, one can show that $Z[j]$ satisfies the functional differential equation \begin{equation} \color{red}{(iS'[\delta]-j)Z[j]\equiv 0}\tag2 \end{equation} known as the Schwinger-Dyson (SD) equation. (Here, $\delta=\frac{\delta}{\delta j}$ denotes functional differentiation with respect to $j$.) A fascinating fact about the SD equation is that it can be used to introduce a third formulation of QFT, together with the path-integral and the operator formalisms. In the SD formulation, one forgets about path-integrals and operators. The only object is the partition function $Z[j]$, which is defined as the solution of the SD equation. The only postulate is SD, and everything else can be derived from it. In this answer we shall illustrate how the standard perturbative expansion of QFT is contained in SD. Intuitively speaking, the method is precisely OP's algorithm "take the lower order, and connect any two lines in all possible ways". 
For completeness, we stress that SD also contains all the non-perturbative information of the theory (e.g., the Ward-Takahashi-Slavnov-Taylor identities), but we will not analyse that. Scalar theory. Our main example will be so-called $\phi^4$ theory: \begin{equation} \mathcal L=\frac12(\partial\phi)^2-\frac12 m^2\phi^2-\frac{1}{4!}g\phi^4\tag3 \end{equation} where $\phi\colon\mathbb R^d\to\mathbb R$ is a real scalar field. The SD equation for the partition function is \begin{equation} \left[\partial^2\frac{\delta}{\delta j(x)}+m^2\frac{\delta}{\delta j(x)}+\frac{1}{3!}g\frac{\delta^3}{\delta j(x)^3}-ij(x)\right]Z[j]\equiv 0\tag4 \end{equation} If we take a functional derivative of this equation of the form \begin{equation} \frac{\delta}{\delta j(x_1)}\frac{\delta}{\delta j(x_2)}\cdots \frac{\delta}{\delta j(x_n)}\tag5 \end{equation} and then set $j\equiv 0$, we get \begin{equation} \begin{aligned} (\partial^2+m^2)G(x,x_1,&\dots,x_n)+\frac{1}{3!}gG(x,x,x,x_1,\dots,x_n)=\\ &=i\sum_{m=1}^n\delta(x-x_m)G(x_1,\dots,\hat x_m,\dots,x_n) \end{aligned}\tag6 \end{equation} where the hat $\hat\cdot$ over an argument means that it is to be omitted. Here, $G(x_1,\dots,x_n)$ is the $n$-point function, \begin{equation} G(x_1,\dots,x_n)\equiv \langle 0|\mathrm T\ \phi(x_1)\cdots\phi(x_n)|0\rangle\tag7 \end{equation} which in the SD formalism is defined as \begin{equation} G(x_1,\dots,x_n)\equiv \frac{\delta}{\delta j(x_1)}\frac{\delta}{\delta j(x_2)}\cdots \frac{\delta}{\delta j(x_n)}Z[j]\bigg|_{j=0}\tag8 \end{equation} We see that the SD equations are nothing but a system of partial differential equations for the correlation functions. In general, these equations are impossible to solve explicitly (essentially, because they are non-linear), so we must resort to approximation methods, i.e., to perturbation theory. 
Let us begin by introducing the inverse of $(\partial^2+m^2)$, the propagator: \begin{equation} \Delta(x)\equiv (\partial^2+m^2)^{-1}\delta(x)=\int\frac{\mathrm e^{ipx}}{p^2-m^2+i\epsilon}\frac{\mathrm dp}{(2\pi)^d}\tag9 \end{equation} We may use the propagator to integrate the SD equations as follows: \begin{equation} \begin{aligned} \color{red}{G(x,x_1,\dots,x_n)}&\color{red}{=\frac{1}{3!}g\int\Delta(x-y)G(y,y,y,x_1,\dots,x_n)\,\mathrm dy+}\\ &\color{red}{+i\sum_{m=1}^n\Delta(x-x_m)G(x_1,\dots,\hat x_m,\dots,x_n)} \end{aligned}\tag{10} \end{equation} which is a system of coupled integro-differential equations of the Fredholm type, whose solutions can formally be written as a Liouville-Neumann series in powers of $g$. This is the basis of perturbation theory. Moreover, these equations are precisely the formalisation of OP's algorithm. We stress that the whole paradigm of perturbation theory is contained in equation $(10)$. In particular, one need not introduce Feynman diagrams at all: the perturbation series can be extracted directly from $(10)$. That being said, and to let the reader compare our upcoming discussion to the standard formalism, let us introduce the following graphical notation: a four-vertex is represented by a node with four lines, and a propagator is represented by a line and the $n$-point function is represented as a disk with $n$ lines: In graphical terms, one typically represents the SD equations $(10)$ as follows: Perturbation theory is based on the (somewhat unjustified) assumption that a formal power series of the form \begin{equation}\tag{14} G\sim G^{(0)}+g G^{(1)}+g^2G^{(2)}+\cdots+\mathcal O(g^k) \end{equation} should be, in a certain sense, a good approximation to the real $G$. In practice, this series is observed to be asymptotic, so things work rather well as long as $g\ll 1$. 
The first thing we notice is that, due to equation $(10)$, the term of order zero in $g$ satisfies \begin{equation}\tag{15} G^{(0)}(x,x_1,\dots,x_n)=i\sum_{m=1}^n\Delta(x-x_m)G^{(0)}(x_1,\dots,\hat x_m,\dots,x_n) \end{equation} which, by iteration, leads to \begin{equation}\tag{16} \color{red}{G^{(0)}(x_1,\dots,x_n)=\sum_\mathrm{pairings}\prod i\Delta(x_i-x_j)} \end{equation} which is usually known as Wick's theorem. The higher orders satisfy \begin{equation} \begin{aligned} G^{(k)}(x,x_1,\dots,x_n)&=\frac{1}{3!}\int\Delta(x-y)G^{(k-1)}(y,y,y,x_1,\dots,x_n)\,\mathrm dy+\\ &+i\sum_{m=1}^n\Delta(x-x_m)G^{(k)}(x_1,\dots,\hat x_m,\dots,x_n) \end{aligned}\tag{17} \end{equation} With this, we see that we may calculate any correlation function, to any order in perturbation theory, as an iterated integral over combinations of propagators. To calculate the $n$-point function to order $k$, we need the $(n-1)$-point function to order $k$, and the $(n+3)$-point function to order $k-1$, which can be iteratively calculated, by the same method, in terms of the corresponding correlation functions of lower $k$. When $k$ becomes zero we may use Wick's theorem, which means that the algorithm terminates after a finite number of steps. Let us see how this works in practice. We begin with the zero-order approximation to the two-point function. 
By Wick's theorem, we see that the propagator provides us with a very crude approximation to the two-point function, \begin{equation}\tag{18} G^{(0)}(x_1,x_2)=i\Delta(x_1-x_2) \end{equation} which, as expected, agrees with the diagram By a similar analysis (Wick's theorem), the four-point function is given, to zero order in perturbation theory, by \begin{equation} \begin{aligned} G^{(0)}(x_1,x_2,x_3,x_4)&=i\Delta(x_1-x_2)i\Delta(x_3-x_4)\\ &+i\Delta(x_1-x_3)i\Delta(x_2-x_4)\\ &+i\Delta(x_1-x_4)i\Delta(x_2-x_3) \end{aligned}\tag{20} \end{equation} which, once again, agrees with the diagrams We next calculate the first order approximation to the two-point function; using $(17)$, we see that it is given by \begin{equation}\tag{22} G^{(1)}(x_1,x_2)=\frac{1}{3!}\int\Delta(x_1-y)G^{(0)}(y,y,y,x_2)\,\mathrm dy \end{equation} We already know the value of the factor $G^{(0)}(y,y,y,x_2)$: \begin{equation}\tag{23} -G^{(0)}(y,y,y,x_2)=3\Delta(y-y)\Delta(x_2-y) \end{equation} so that \begin{equation}\tag{24} G^{(1)}(x_1,x_2)=\frac{i}{2}\int\Delta(x_1-y)\Delta(y-y)\Delta(x_2-y)\,\mathrm dy \end{equation} which is precisely what the one-loop diagram predicts: We can use the same technique to compute the first order correction to the four-point function. 
The reasoning is the same as before; equation $(17)$ reads \begin{equation} \begin{aligned} G^{(1)}(x_1,x_2,x_3,x_4)&=\frac{1}{3!}\int\Delta(x-y)G^{(0)}(y,y,y,x_2,x_3,x_4)\,\mathrm dy\\ &+i\Delta(x_1-x_2)G^{(1)}(x_3,x_4)\\ &+i\Delta(x_1-x_3)G^{(1)}(x_2,x_4)\\ &+i\Delta(x_1-x_4)G^{(1)}(x_2,x_3) \end{aligned}\tag{26} \end{equation} From our previous calculation, we already know the value of $G^{(1)}(x_1,x_2)$; on the other hand, the term $G^{(0)}(y,y,y,x_2,x_3,x_4)$ can be efficiently computed using Wick's theorem; in particular, \begin{equation} \begin{aligned} iG^{(0)}(y,y,y,x_2,x_3,x_4)=\,&3\Delta(y-y)\Delta(y-x_2)\Delta(x_3-x_4)\\ +&3\Delta(y-y)\Delta(y-x_3)\Delta(x_2-x_4)\\ +&3\Delta(y-y)\Delta(y-x_4)\Delta(x_2-x_3)\\ +&6\Delta(y-x_2)\Delta(y-x_3)\Delta(y-x_4) \end{aligned}\tag{27} \end{equation} so that \begin{equation} \begin{aligned} -G^{(1)}(x_1,x_2,x_3,x_4)&=\frac12\Delta(x_1-x_2)\int\Delta(x_3-y)\Delta(y-y)\Delta(x_4-y)\,\mathrm dy\\ &+\frac12\Delta(x_1-x_3)\int\Delta(x_2-y)\Delta(y-y)\Delta(x_4-y)\,\mathrm dy\\ &+\frac12\Delta(x_1-x_4)\int\Delta(x_2-y)\Delta(y-y)\Delta(x_3-y)\,\mathrm dy\\ &+\frac{1}{2}\Delta(x_3-x_4)\int\Delta(x_1-y)\Delta(y-y)\Delta(y-x_2)\,\mathrm dy\\ &+\frac{1}{2}\Delta(x_2-x_4)\int\Delta(x_1-y)\Delta(y-y)\Delta(y-x_3)\,\mathrm dy\\ &+\frac{1}{2}\Delta(x_2-x_3)\int\Delta(x_1-y)\Delta(y-y)\Delta(y-x_4)\,\mathrm dy\\ &+\int\Delta(x_1-y)\Delta(y-x_2)\Delta(y-x_3)\Delta(y-x_4)\,\mathrm dy \end{aligned}\tag{28} \end{equation} which, as expected, agrees with the value of the one-loop diagrams: As a final example, let us compute the second order correction to $G(x_1,x_2)$, to wit, \begin{equation}\tag{30} G^{(2)}(x_1,x_2)=\frac{1}{3!}\int\Delta(x_1-y)G^{(1)}(y,y,y,x_2)\,\mathrm dy \end{equation} where $G^{(1)}(y,y,y,x_2)$ is given by $(26)$. 
The final result is \begin{equation} \begin{aligned} G^{(2)}(x_1,x_2)&=\frac{1}{3!}\int\Delta(x_1-y)\Delta(y-z)\Delta(z-y)\Delta(z-y)\Delta(z-x_2)\,\mathrm dy\,\mathrm dz\\ &+\frac{1}{3}\int\Delta(x_1-y)\Delta(y-y)\Delta(y-z)\Delta(z-z)\Delta(z-z)\,\mathrm dy\,\mathrm dz\\ &+\frac{1}{3}\int\Delta(x_1-y)\Delta(y-z)\Delta(z-z)\Delta(z-y)\Delta(y-z)\,\mathrm dy\,\mathrm dz \end{aligned}\tag{31} \end{equation} which, once again, agrees with the value of the diagrams Continuing this way, we may calculate any correlation function to any order in perturbation theory. It is interesting to note that this method allows one to compute any correlation function, to any order in perturbation theory, by a rather efficient method. In particular, we didn't need to draw any Feynman diagram (although we drew them anyway, for the sake of comparison), and neither did we have to compute any symmetry factor. In fact, I have a strong suspicion that numerical computations of higher order loop corrections use some variation of this algorithm. A simple application of this algorithm in Mathematica can be found in this Mathematica.SE post. The reader will also note that no vacuum bubbles have been generated in the calculation of correlation functions. Recall that when working with path integrals or the Dyson series, such diagrams are generated and subsequently eliminated by noticing that they also appear in the denominator. Such graphs are divergent (both at the level of individual diagrams and at the level of summing them all up), so their cancellation is dubious. Here, the diagrams simply don't appear, which is an advantage of the formalism. Yukawa theory. For completeness, let us mention how this works in more general theories: those with non-scalar fields. The philosophy is exactly the same, the main obstacle being the notation: indices here and there make the analysis cumbersome. 
Assume you have a field $\phi_a(x)$ which satisfies \begin{equation}\tag{33} \mathscr D\phi(x)=V'(x) \end{equation} for some matrix-valued differential operator $\mathscr D$, and some vector-valued operator $V'$. In terms of the action, $\mathscr D\phi=S_0'$ and $V'=S'_\mathrm{int}$, where $S_0$ is the quadratic part of $S$ and $S_\mathrm{int}$ is the rest of the terms. With this, the SD equations read \begin{equation}\tag{34} \mathscr D_1 \langle\phi_1\cdots\phi_n\rangle=i\langle V'_1\phi_2\cdots\phi_n\rangle+\sum_{m=2}^n\delta_{1m}\langle \phi_2\cdots\hat\phi_m\cdots\phi_n\rangle \end{equation} where I have introduced the short-hand notation $i=(x_i,a_i)$. Also, $\delta_{ij}=\delta_{a_ia_j}\delta(x_i-x_j)$. By analogy with our discussion above, we see that the algorithm is essentially the same, but now the propagator is $\mathscr D^{-1}$, and there is a factor of $V$ on every vertex. Let me sketch how this works in the Yukawa theory with a scalar field $\phi$ and a Dirac field $\psi$, interacting through $V=g\phi\bar\psi\psi$. The Lagrangian reads \begin{equation}\tag{35} \mathcal L=i\bar \psi\!\!\not\!\partial\psi-m\bar\psi \psi+\frac{1}{2}(\partial \phi)^2-\frac{1}{2}M^2\phi^2-g\phi\bar\psi\psi \end{equation} with $\psi\colon\mathbb R^d\to \mathbb C_a$ and $\phi$ as before. The equations of motion are \begin{equation} \begin{aligned} -(-i\!\!\not\!\partial+m)\psi=g\phi\psi\equiv U\\ -(\partial^2+M^2)\phi=g\bar\psi\psi\equiv V \end{aligned}\tag{36} \end{equation} As usual, we define the propagators as \begin{equation} \begin{aligned} (-i\!\!\not\!\partial+m) S_{12}&=\delta_{12}\\ (\partial^2+M^2) \Delta_{12}&=\delta_{12} \end{aligned}\tag{37} \end{equation} that is, \begin{equation} \begin{aligned} S(p)&=\frac{1}{\!\!\not\!p-m+i\epsilon}\\ \Delta(p)&=\frac{1}{p^2-M^2+i\epsilon} \end{aligned}\tag{38} \end{equation} We now need to introduce the correlation functions. 
Let me use a hybrid notation which I hope will simplify the notation as much as possible: \begin{equation}\tag{39} iG(1^\alpha,2_\beta,3,\dots)\equiv\langle \Omega|\mathrm T\ \psi^\alpha(x_1) \bar\psi_\beta(x_2)\phi(x_3)\cdots|\Omega\rangle \end{equation} or, in other words, every upper index corresponds to $\psi$; every lower index corresponds to $\bar\psi$; and every space-time point with no indices corresponds to $\phi$. In terms of the partition function, the correlation function is defined as \begin{equation}\tag{40} iG(1^\alpha,2_\beta,3,\dots)\equiv \left[\frac{\delta}{\delta \eta_\alpha(x_1)}\frac{\delta}{\delta \bar\eta^\beta(x_2)}\frac{\delta}{\delta j(x_3)}\cdots\right]Z[\eta,\bar\eta,j]\bigg|_{j=\eta=\bar\eta=0} \end{equation} With this, the SD equations of the theory read \begin{equation} \begin{aligned} iG(1^\alpha,2_\beta,3,\dots)&=\int S_{1y}^\alpha{}_\gamma\langle U^\gamma(y)\bar\psi_\beta(x_2)\phi(x_3)\cdots\rangle \mathrm dy\\ &+iS^\alpha_{12\beta}\langle\phi(x_3)\cdots\rangle+\cdots\\ iG(1,2,3^\alpha,\dots)&=\int \Delta_{1y}\langle V(y)\phi(x_2)\psi^\alpha(x_3)\cdots\rangle\mathrm dy\\ &+i\Delta_{12}\langle\psi^\alpha(x_3)\cdots\rangle+\cdots \end{aligned}\tag{41} \end{equation} More generally, given an arbitrary correlation function $G$, the corresponding SD equations are obtained by replacing any field by its propagator and vertex function, and adding all possible contact terms with the same propagator. In fact, the general structure of the SD equations is rather intuitive: it is simply given by what index placement suggests; in general there is one and only one way to match up indices on both sides of the equation so that the propagators and fields are contracted in the correct way. The calculation of $G$ is rather similar to that of the scalar theory above. 
As before, we assume it makes sense to set up a power series in $g$, \begin{equation}\tag{42} G=G^{(0)}+gG^{(1)}+g^2G^{(2)}+\cdots \end{equation} Perturbation theory is obtained by constructing $G^{(k)}$ from the known value of the correlation functions of lower order. For example, the one point function $iG(1)=\langle\phi(x_1)\rangle$ satisfies \begin{equation}\tag{43} G(1)=g\int \Delta_{1y} G(y_\alpha, y^\alpha)\mathrm dy \end{equation} To lowest order, $G^{(0)}(1)=0$; the first correction reads \begin{equation} \begin{aligned} iG^{(1)}(1)&=i\int \Delta_{1y} G^{(0)}(y_\alpha, y^\alpha)\mathrm dy=\\ &=-i\int \Delta_{1y}\text{tr}(S_{yy})\mathrm dy \end{aligned}\tag{44} \end{equation} where the negative sign is due to the fermionic statistics of $\psi,\bar\psi$ (or, equivalently, of their corresponding sources, $\eta,\bar\eta$). In particular, $iG^{(0)}(1_\alpha, 2^\alpha)=\langle\bar\psi_\alpha(x_1)\psi^\alpha(x_2)\rangle=-\langle\psi^\alpha(x_2)\bar\psi_\alpha(x_1)\rangle=-\text{tr}(S_{21})$; more generally, we always have a negative sign associated to traces over fermionic indices. The expression above agrees with the standard one-loop Feynman diagram, to wit where the dashed line represents a scalar propagator and a solid one a spinorial one. Similarly, the two point function $i G(1,2)=\langle\phi_1\phi_2\rangle$ satisfies \begin{equation}\tag{46} i G(1,2)=ig\int \Delta_{1y} G(y_\alpha, y^\alpha,2)\mathrm dy+i\Delta_{12} \end{equation} As usual, to lowest order we have $G^{(0)}(1,2)=\Delta_{12}$; the first correction is \begin{equation}\tag{47} i G^{(1)}(1,2)=i\int \Delta_{1y} G^{(0)}(y_\alpha, y^\alpha,2)\mathrm dy=0 \end{equation} since $G^{(0)}(y_\alpha, y^\alpha,2)=0$. 
In order to calculate the next correction we need the three point function, which satisfies \begin{equation}\tag{48} i G(1^\alpha,2_\beta,3)=ig\int S_{1y}^\alpha{}_\gamma G(y,y^\gamma,2_\beta,3)\mathrm dy+S^\alpha_{12\beta} G(3) \end{equation} that is, \begin{equation} \begin{aligned} i G^{(1)}(1^\alpha,2_\beta,3)&=i\int S_{1y}^\alpha{}_\gamma G^{(0)}(y,y^\gamma,2_\beta,3)\mathrm dy+S^\alpha_{12\beta} G^{(1)}(3)=\\ &=-\int \Delta_{3y}(S_{1y}S_{y2})^\alpha{}_\beta -\Delta_{3y}S^\alpha_{12\beta}\text{tr}(S_{yy})\mathrm dy \end{aligned}\tag{49} \end{equation} which agrees with the one-loop diagrams With this, we now have what we need in order to compute the first non-trivial correction to the two-point function $G(1,2)$: \begin{equation} \begin{aligned} i G^{(2)}(1,2)&=i\int \Delta_{1y} G^{(1)}(y_\alpha, y^\alpha,2)\mathrm dy=\\ &=\int \Delta_{1y}\Delta_{2z}\text{tr}(S_{yz}S_{zy})-\Delta_{1y}\Delta_{z2}\text{tr}(S_{yy})\text{tr}(S_{zz})\ \mathrm dy\,\mathrm dz \end{aligned}\tag{51} \end{equation} which agrees with the diagrams Finally, the fermionic two-point function $G(1^\alpha,2_\beta)$ satisfies \begin{equation}\tag{53} i G(1^\alpha,2_\beta)=ig\int S_{1y}^\alpha{}_\gamma G(y,y^\gamma,2_\beta)\mathrm dy+iS^\alpha_{12\beta} \end{equation} which, to order zero in $g$, becomes $G^{(0)}(1^\alpha,2_\beta)=S^\alpha_{12\beta}$, as expected. The first correction is \begin{equation}\tag{54} iG^{(1)}(1^\alpha,2_\beta)=i\int S_{1y}^\alpha{}_\gamma G^{(0)}(y,y^\gamma,2_\beta)\mathrm dy=0 \end{equation} since $G^{(0)}(y,y^\gamma,2_\beta)=0$. 
To second order in $g$, \begin{equation} \begin{aligned} i G^{(2)}(1^\alpha,2_\beta)&=i\int S_{1y}^\alpha{}_\gamma G^{(1)}(y,y^\gamma,2_\beta)\mathrm dy=\\ &=-\int \Delta_{yz} (S_{1y}S_{yz}S_{z2})^\alpha{}_\beta-(S_{1y}S_{y2})^\alpha{}_\beta\Delta_{yz}\text{tr}(S_{zz})\mathrm dy\,\mathrm dz \end{aligned}\tag{55} \end{equation} which, as one would expect, agrees with the one-loop Feynman diagram The calculation of higher order correlation functions, to higher loop orders, is analogous. Hopefully, the worked out examples above are enough to illustrate the general technique. It's a nice formalism, isn't it?
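As a tiny sanity check on the combinatorics of the Wick's theorem step $(16)$ used throughout the above, one can enumerate the complete pairings of $2n$ points with a short script (Python, purely illustrative; nothing here is specific to the field theory). For four external points it returns the $3$ pairings appearing in $(20)$, and in general $2n$ points give $(2n-1)!!$ pairings:

```python
def pairings(points):
    """All ways to split an even-sized list of distinct points
    into unordered pairs (the sum over pairings in Wick's theorem)."""
    if not points:
        return [[]]
    first, rest = points[0], points[1:]
    result = []
    for k, partner in enumerate(rest):
        # Pair the first point with each remaining point in turn,
        # then recursively pair up whatever is left.
        remaining = rest[:k] + rest[k + 1:]
        for sub in pairings(remaining):
            result.append([(first, partner)] + sub)
    return result

print(len(pairings([1, 2, 3, 4])))    # 3, as in the free 4-point function (20)
print(len(pairings(list(range(6)))))  # 15 = 5!!
```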
{ "domain": "physics.stackexchange", "id": 47814, "tags": "quantum-field-theory, scattering, feynman-diagrams" }
Confusion with Carnot Cycle Problem
Question: Consider two fluids $F_1$ and $F_2$ with equal volume and heat capacity, at temperatures $T_1$ and $T_2$ respectively, where $T_1 > T_2$. One uses a Carnot cycle to transfer heat from $F_1$ to $F_2$ in small cycles, such that after a certain number of cycles the temperatures are equal, $T_1 = T_2 = T_0$. Now, I want to find this temperature $T_0$ in terms of $T_1$, $T_2$ and $C_V$. My confusion is that the problem requires pumping from the hot to the cold temperature. Isn't this a spontaneous process? How would this be different from simply putting the two fluids in direct contact and figuring out their equilibrium temperature? Could an approach maybe be to consider the problem of pumping hot to cold like in a refrigeration process, and then take the negative of that process? Any hints would be appreciated! Edit- Equation for Entropy: $$\begin{align}\Delta S =& \int_{T_1}^{T_0} C_v\frac{dT}{T} + \int_{T_2}^{T_0} C_v\frac{dT}{T} \\ =& \ C_v\ln(\frac{T_0}{T_1}) + C_v\ln(\frac{T_0}{T_2}) \\ =& \ C_v\ln(\frac{T_0^2}{T_1T_2})\end{align} $$ Answer: Here's a hint. If you just allow the fluids to equilibrate, the amount of heat that F1 loses is equal to the amount of heat that F2 gains. But, if you run a Carnot engine between them, the amount of heat that F1 loses exceeds the amount of heat that F2 gains. The difference is the amount of work that the Carnot engine does.
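Following the hint: for the reversible (Carnot) route the total entropy change — the expression already written in the question — must vanish, which gives $T_0=\sqrt{T_1T_2}$, whereas direct contact gives the arithmetic mean $(T_1+T_2)/2$. A quick numerical sketch (the temperatures below are illustrative, not from the problem) shows the difference is exactly the work extracted by the engine:

```python
import math

C_v = 1.0              # illustrative heat capacity (cancels in the entropy condition)
T1, T2 = 400.0, 300.0  # illustrative initial temperatures, in kelvin

# Reversible operation: C_v * ln(T0^2 / (T1*T2)) = 0  =>  T0 = sqrt(T1*T2)
T0_carnot = math.sqrt(T1 * T2)

# Direct thermal contact instead: heat lost by F1 equals heat gained by F2
T0_contact = (T1 + T2) / 2

# Work delivered by the engine: heat F1 loses minus heat F2 gains
work = C_v * (T1 - T0_carnot) - C_v * (T0_carnot - T2)

print(round(T0_carnot, 2), T0_contact, round(work, 2))  # 346.41 350.0 7.18
```

The geometric mean is always below the arithmetic mean, so the Carnot route leaves the fluids cooler than direct contact — that energy left the system as work.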
{ "domain": "physics.stackexchange", "id": 70365, "tags": "homework-and-exercises, thermodynamics, carnot-cycle" }
Template parsing function
Question: I am using preg_replace_callback(), and I am concerned that I might have some sort of security risk. I am hoping to get a review of the below substitute() function. The purpose of the function is to parse a user provided template with user provided data. A quick description on how it is used is shown below. The user posts data to the server, which is saved in a database (escaped for SQL injection of course). Note that the application will limit which array indexes are allowed (i.e. firstname, lastname, and phone only in this example). $userProvidedString='Hello {?firstname?} {?lastname?}, Our database shows that your phone number is {?phone?}.'; $userProvidedArray=array( 0 => array('firstname'=>'John','lastname'=>'Doe','phone'=>'(555) 222-1212'), 1 => array('firstname'=>'Jane','lastname'=>'Doe','phone'=>'(555) 333-1212'), 2 => array('firstname'=>'Mike','lastname'=>'Moe','phone'=>'(444) 222-1212'), ); Later the data is pulled from the database, and parsed as follows: foreach($userProvidedArray AS $contact) { $customMessage=substitute($userProvidedString,$contact); } My question relates to the substitute() function. Specifically, does my use of preg_replace_callback() pose any security issues? Previously, I was using $template_new= preg_replace('/\{\?(\w+)\?\}/e', '$contact["$1"]', $userProvidedString); which uses the /e flag which I assume can be dangerous. Does that danger still remain? Any other recommendations for it? I am currently using PHP 5.3.18, but will hopefully upgrade to 5.4 soon. function substitute($template, $values, $deliminator='{??}') { //Replaces occurrences of {?name?} with $values['name] if it exists in $values. 
Deliminators can be changed $deliminator="\\".implode("\\",str_split($deliminator)); $half = (int) ( (strlen($deliminator) / 2) ); $pattern = '/'.substr($deliminator, 0, $half).'(\w+)'.substr($deliminator, $half).'/'; return preg_replace_callback( $pattern, function ($matches) use ($values) { if (isset($values[$matches[1]])) { return $values[$matches[1]]; } return $matches[0]; }, $template); } Answer: Escaping the RegExp You should never do manual escaping if an appropriate function exists. For RegExp, this is preg_quote. Another risk is that you're escaping before splitting, so you might destroy an escape sequence. Better do function substitute($template, $values, $delimiter = '{??}') { $half = (int) ( (strlen($delimiter) / 2) ); $begin = preg_quote(substr($delimiter, 0, $half), '/'); $end = preg_quote(substr($delimiter, $half), '/'); $pattern = '/' . $begin . '(\w+)' . $end . '/'; Escaping the Values The user-provided values must be escaped according to the context where they are used. In this case it is HTML, so you must suppress unwanted tags, especially scripting tags. You can use strip_tags() or htmlspecialchars() for that purpose. I prefer the latter, because the user's value is not changed. return preg_replace_callback( $pattern, function ($matches) use ($values) { if (isset($values[$matches[1]])) { return htmlspecialchars($values[$matches[1]]); } return $matches[0]; }, $template); } Conclusion With these two small changes you have done everything to make that function safe.
{ "domain": "codereview.stackexchange", "id": 3755, "tags": "php, security, dependency-injection" }
Average value of non-projective observables
Question: I am quite confused about how to measure observables (like Pauli spins). For example, in exercise 2.66 of Nielsen and Chuang's textbook: Show that the average value of the observable $X_1Z_2$ for a two qubit system measured in the state $(\vert00\rangle+\vert11\rangle)/\sqrt2$ is zero. My first take is to decompose the state into eigenstates of the operators: $$(\vert00\rangle+\vert11\rangle)/\sqrt2 = \left(\frac{\vert+\rangle+\vert-\rangle}{\sqrt2}\vert0\rangle + \frac{\vert+\rangle-\vert-\rangle}{\sqrt2}\vert1\rangle\right)/\sqrt2$$ So, each one of the four possibilities appears with the same probability 1/4, and the average is $$\frac{1*1\ +\ (-1)*1\ +\ 1*(-1)\ +\ (-1)*(-1)}{4}=\frac{1-1-1+1}{4}=0$$ Now, I observe that just computing $\langle\phi\vert X_1Z_2\vert\phi\rangle$ also works although $X_1Z_2$ is not projective: $$\begin{align} 2 * \langle\phi\vert X_1Z_2\vert\phi\rangle &= \langle00\vert X_1Z_2 \vert00\rangle + \langle00\vert X_1Z_2 \vert11\rangle + \langle11\vert X_1Z_2 \vert00\rangle + \langle11\vert X_1Z_2 \vert11\rangle \\ &= \langle00|10\rangle - \langle00|01\rangle + \langle11|10\rangle - \langle11|01\rangle\\ &= 0+0+0+0=0 \end{align}$$ I'm not really sure about this method. I also found this solution (given as an image that is not reproduced here), and I cannot understand what they compute. Answer: I think when your question asks for the "average value" it means the expectation value. The expectation value of the operator $A$ in the state $|\psi\rangle$ is given as $\langle\psi|A|\psi\rangle$. https://en.wikipedia.org/wiki/Expectation_value_(quantum_mechanics) In your case, $A=X_1Z_2$ and $|\psi\rangle = (|00\rangle+|11\rangle)/\sqrt{2}$. Now just plug and chug. If $|0\rangle$ or $|1\rangle$ is not an eigenstate of $X$ or $Z$ then you will need to re-express your state vector in the correct eigenbasis, as you mentioned in your question. I don't know how any of your operators and their eigenvectors/eigenvalues are defined, as I don't have that textbook on hand. 
But I think your first step is likely correct, and you just need to plug that into the expression for the expectation value.
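The plug-and-chug computation above is easy to check numerically; a minimal sketch using NumPy, where the computational-basis ordering $|00\rangle,|01\rangle,|10\rangle,|11\rangle$ is an assumption of this sketch:

```python
import numpy as np

# Single-qubit Pauli operators
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# Two-qubit observable: X on qubit 1, Z on qubit 2
X1Z2 = np.kron(X, Z)

# |phi> = (|00> + |11>)/sqrt(2), components ordered 00, 01, 10, 11
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

expectation = phi @ X1Z2 @ phi  # <phi| X1Z2 |phi>
print(expectation)  # → 0.0
```

Since $(X_1Z_2)^2 = I$, its eigenvalues are $\pm1$, consistent with the four outcomes averaged in the question.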
{ "domain": "physics.stackexchange", "id": 51251, "tags": "quantum-mechanics, observables" }
Max Stress on Weld
Question: A part is welded to a plate; based on the free body diagram below, how would I determine the max stress in the weld? It seems half the weld is in tension and the other half is in compression. Answer: Sum loads about the centroid of the welds. Max. weld stresses: $f_{c,t} = \dfrac{P}{A} \pm \dfrac{M}{S}$ (lbs/in, along each weld length) Alternative: If the above weld pattern does not produce enough strength, before increasing the weld size, try to switch the welds to other sides as shown below. Other than the increase in section modulus, everything remains the same. Note: The square tubing usually has round corners, which will result in a shorter effective weld length.
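The combined stress formula can be evaluated directly; a minimal sketch in Python, where the load P, weld-line area A, moment M, and section modulus S are hypothetical example values, not taken from the question:

```python
def weld_line_stresses(P, A, M, S):
    """Combine direct and bending components: f = P/A +/- M/S.

    Returns (tension-side, compression-side) weld line loads.
    """
    direct = P / A    # uniform component from the direct load
    bending = M / S   # peak component from the moment about the weld-group centroid
    return direct + bending, direct - bending

# Hypothetical numbers: P = 2000 lb, A = 10 in of weld, M = 6000 lb-in, S = 16.7 in^2
f_t, f_c = weld_line_stresses(2000.0, 10.0, 6000.0, 16.7)
print(f_t, f_c)
```

Consistent with the question's observation, whenever the bending term dominates the direct term, one side of the weld group carries tension (f_t > 0) and the other compression (f_c < 0).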
{ "domain": "engineering.stackexchange", "id": 4287, "tags": "structural-engineering, mechanical" }
Suggestions requested for writing ROS plugins
Question: I am new to writing RViz plugins. What would be the preferred methods/RViz classes to look into for visualizing line segments, circles, and Bezier curves when writing an RViz plugin? Thanks Originally posted by Joy16 on ROS Answers with karma: 112 on 2016-06-29 Post score: 0 Answer: You'll want to subclass rviz::Display. For examples (assuming C++), you can download the RViz source code and look at the default plugin classes. Edit: also, go over the Display plugin tutorial. I personally work better from actual source code examples, but the tutorial may be helpful. Originally posted by kramer with karma: 1470 on 2016-06-30 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 25104, "tags": "ros" }
Escape velocity for hydrogen molecules
Question: The question: At what temperature is the RMS speed of Hydrogen molecules equal to the escape speed from the earth's surface? Only the values of the radius of the earth ($r$) and the gas constant $R$ have been supplied. We know that the escape velocity of a body on earth is given by $(2rg)^{1/2}$. Putting in the values of $r$ and $g$ we get $11.2\ \text{km/s}$ as the answer. We also know that for a mole of ideal gas $PV=RT=\dfrac13M[c_{rms}]^2$, where $M$ is the molecular weight of the gas. So substituting for $T$ we get $T=\dfrac{M[c_{rms}]^2}{3R}$. I have put $11.2\ \text{km/s}$ in place of $c_{rms}$ and $M=2$ and put in the value of $R$, and I have got $904.8\ \text{K}$ as my answer, which is equal to $631.8\ ^\circ\text{C}$. For the moon the answer came as $98\ ^\circ\text{C}$. My question: the moon certainly has a temperature lower than $98\ ^\circ\text{C}$, so shouldn't it have an atmosphere of hydrogen? But the moon has no atmosphere!! I cannot understand where I have gone wrong. Answer: I believe your mistake is with units, and it is the following: $$T=\dfrac{M[c_{rms}]^2}{3R} = \dfrac{\left( 1 \text{amu} \right) [11.2 \frac{km}{s}]^2}{3 \left( 8.3144621 \frac{\mathrm{J}}{\mathrm{\text{mol} K}} \right)} $$ This doesn't even cancel out because you're left with a $\text{mol}$ unit. Add Avogadro's number. $$T = \dfrac{\left( 1 \text{amu} \right) [11.2 \frac{km}{s}]^2}{3 \left( 8.3144621 \frac{\mathrm{J}}{\mathrm{\text{mol} K}} \right)} \left( 6.022 \times 10^{23} \frac{1}{\text{mol}} \right) = 5,028 K $$ This is the case for Earth. For the Moon: $$T = \dfrac{\left( 1 \text{amu} \right) [2.4 \frac{km}{s}]^2}{3 \left( 8.3144621 \frac{\mathrm{J}}{\mathrm{\text{mol} K}} \right)} \left( 6.022 \times 10^{23} \frac{1}{\text{mol}} \right) = 230 K $$ This is negative in Celsius units. This is not a problem. It is merely saying that even freezing temperatures are enough for a lone Hydrogen atom to escape the gravity of the moon.
All we required was that this number be less than the temperature of the sun, which it is. Any surface that faces away from the sun will again see those photons within a month, unless it's in a crater on one of the poles, which we know can have ice near the surface. So that makes sense.
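The unit bookkeeping in the answer is easier to see with Boltzmann's constant $k_B = R/N_A$; a quick numeric check, using the answer's single hydrogen atom of mass ~1 amu:

```python
# T = m v^2 / (3 k_B) follows from (3/2) k_B T = (1/2) m v_rms^2.
m_H = 1.6605e-27   # mass of one hydrogen atom (~1 amu), kg
k_B = 1.3807e-23   # Boltzmann constant, J/K

def escape_temperature(v_escape, m=m_H):
    """Temperature (K) at which v_rms of a particle of mass m equals v_escape (m/s)."""
    return m * v_escape**2 / (3 * k_B)

T_earth = escape_temperature(11.2e3)  # ~5.0e3 K, matching the answer's 5,028 K
T_moon = escape_temperature(2.4e3)    # ~231 K, matching the answer's 230 K
print(T_earth, T_moon)
```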
{ "domain": "physics.stackexchange", "id": 7837, "tags": "homework-and-exercises, thermodynamics, atmospheric-science, ideal-gas, hydrogen" }
closed-loop inverse kinematics
Question: Hi ROS fans.. I have made a node that controls AX-12 servo motors using an inverse kinematics algorithm. The result is good, but I want to improve the performance further, so I have to use a closed-loop inverse kinematics algorithm. I have never tried it and I want to try it now. Are there ROS packages from which I can learn about closed-loop inverse kinematics? Thanks.. I hope someone can help me :) Originally posted by Rizqa on ROS Answers with karma: 45 on 2013-04-26 Post score: 0 Original comments Comment by cagatay on 2013-04-27: for closed-form inverse kinematics, you can use OpenRAVE's ikfast compiler. OpenRAVE has a wrapper for ROS Answer: You can try following this tutorial to generate an IKFast solution, as suggested by @cagatay. You'll need to create a DAE file from your robot's URDF, and will need to install the openrave software. But, after that, the steps are fairly straightforward. Note that IKFast is pretty good, but is unable to find solutions for all robot geometries. In particular, it only handles robots with 6 joints; additional joints beyond that must be specified as "free joints". It can handle prismatic joints, but with a few limitations. Good luck! Originally posted by Jeremy Zoss with karma: 4976 on 2013-04-27 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Rizqa on 2013-04-28: Thanks @Jeremy Zoss and @cagatay
{ "domain": "robotics.stackexchange", "id": 13970, "tags": "ros" }
Why does gravity-induced quantum interference in quantum mechanics show that gravity is not purely geometric at the quantum level?
Question: I am asking about a problem from J.J. Sakurai and Jim J. Napolitano's book Modern Quantum Mechanics, 2nd edition. It reads as follows: "...This experiment also shows that gravity is not purely geometric at the quantum level because the effect depends on $(m/\hbar)^2$." Is it because $m$ is comparable to $\hbar$, or because this is the combination in which they appear, or something else? Pages 134-135. Answer: Your text, in §2.7.2, 3rd ed, points out that the coupling to the gravitational potential in the Schrödinger equation, $$ \left (i{\hbar\over m} \partial_t + {\hbar^2\over 2m^2}\nabla^2 -\Phi_{grav}\right )\psi=0,\tag{2.331} $$ does not drop out as in classical physics equations of motion and hence trajectories; therefore it is the quantity $(m/\hbar)^2$ with dimensions $T^2L^{-4}$ that will have to appear in gravity-induced quantum interference experiment phases such as the important Colella, Overhauser, & Werner (1975) "Observation of gravitationally induced quantum interference", Physical Review Letters 34(23), p1472, experiment. That is, the neutron mass in this interference experiment does not drop out, as it would in classical physics, leading to the celebrated Einstein geometrical formulation, where all objects move the same regardless of mass, there. The scale of the masses involved is just right to actually observe quantum interference (which vanishes in the classical limit, $\hbar/m\to 0$). This is not altogether surprising since, after all, $\hbar$ appears/presents at the microscopic level of "elementary" particles. Of course, on similar dimensional grounds, the same type of functional integral combination of dimensionful quantities appears in the functional integral undergirding most attempts at quantizing gravity, a willful forfeiture of geometry: individual masses matter. Edit in response to comment by @flippiefanus Another answer links the entire 2nd edition of S&N, whose 3rd edition I use and reference. (2.331) of the 3rd edition above, is (2.7.11) there.
"Pure geometric", in its language, is merely the dropping out of mass from particle trajectories in a gravitational field, so individual masses of the moving "test" particles don't matter. Through an admirably sly footnote, they avoid exegesis of the equivalence principle, that GR readers might relish dragging them into the rabbit hole of. See next. The COW experiment, in footnote 6, points out the gravitational potential entering the Schrödinger equation can be eliminated if all the macroscopic equipment is in an equivalent accelerated frame, which I understand as a reassurance that quantum experiments do not unsubtly violate the equivalence principle. But, as S&N remark, individual test particle masses do enter in the interference phase, so, in that sense (perhaps only), QM interference phases depend on the test particle's mass, here the neutron, which is their seat-of-the pants effective definition of non-geometrical. (It's almost as though the tower of Pisa paradigm-experiment involves interferometers for particles of different mass? )
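To see the $(m/\hbar)^2$ dependence numerically, here is a rough order-of-magnitude sketch; it assumes the standard COW phase expression $\Delta\phi = (m/\hbar)^2\, g\, A\, \lambda \sin\delta / 2\pi$ (enclosed area $A$, de Broglie wavelength $\lambda$, tilt $\delta$), and the interferometer area used is an illustrative round number, not the actual COW geometry:

```python
import math

hbar = 1.0546e-34   # J*s
g = 9.81            # m/s^2
m_n = 1.675e-27     # neutron mass, kg

def cow_phase(wavelength, area, sin_delta=1.0, mass=m_n):
    """Gravity-induced interference phase; note the explicit (mass/hbar)**2 factor."""
    return (mass / hbar) ** 2 * g * area * wavelength * sin_delta / (2 * math.pi)

phase = cow_phase(wavelength=1.42e-10, area=1.0e-3)  # ~1.4 A neutrons, ~10 cm^2 loop
print(phase)  # tens of radians: easily observable fringes
```

Doubling the test mass quadruples the phase, which is exactly the sense in which individual masses matter here, unlike for classical trajectories in a gravitational field.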
{ "domain": "physics.stackexchange", "id": 96341, "tags": "quantum-mechanics, gravity, mass, interference, equivalence-principle" }
Why use DNA polymerase in making cDNA?
Question: RT is capable of synthesizing a complementary DNA strand (as in the HIV life cycle). Then why is DNA polymerase used to synthesize the second strand of cDNA from mRNA (e.g., to construct a cDNA library)? Answer: Reverse transcriptase, as the name suggests, uses an RNA template to create a DNA transcript (i.e., complement). Once the DNA complement has been made, DNA polymerase is used because it uses a DNA template to produce a DNA complementary strand. In your specific example, HIV contains a positive-stranded, RNA genome. The HIV RT can use either RNA or DNA templates to produce DNA complements, and this is how the single-stranded RNA genome is transcribed into double-stranded DNA, before integration. To my knowledge, the RT enzymes sold commercially (for applications like RT-PCR) are typically a mix of an RT enzyme and a DNA polymerase, isolated from other retroviruses, such as murine leukemia virus. An important consideration for lab application is the production of cDNA that truly represents the RNA template of interest. Since RT enzymes typically lack a 3'->5' exonuclease activity, they are unable to "proof-read" the complement strand as it is being produced. In contrast, there are very efficient proof-reading DNA polymerases, and this is why a DNA polymerase (with high proof-reading ability) is used after first-strand synthesis by RT.
{ "domain": "biology.stackexchange", "id": 1551, "tags": "dna, rna, biotechnology, retrovirus" }
Guess-the-number game by a Python beginner
Question: For some background I'm pretty much a beginner and attempting to learn Python currently. So I decided after a few days of lessons I'd look for some beginner projects to test my knowledge. A website suggested I do a 'Guess the Number' game and basically suggested I follow this guideline.

Random function
Variables
Integers
Input/Output
Print
While loops
If/Else statements

In short I read the documentation on random and decided to give it a shot. Now for the actual question. Is this a good way of doing this or can I simplify it? I've pretty much been working at this for longer than I would like to admit (approximately 90 minutes), because I'd get stumped and then rewrite what I was thinking.

# guess the number game
import random

print "Welcome to guess the number!"
print "You have 10 tries to guess the correct number!"
print "The range of numbers will be from 0-25"

# variables to store winning number, user guess and number of tries
number = random.randint(0, 25)
guess = raw_input("Please enter a number (0-25): ")
tries = 0

# check to see if the user guessed the right number
if int(guess) == number:
    print "Congratulations you\'ve won!"

# noticed that you could input invalid numbers and it counted as a guess so this is how i solved that
while int(guess) > 25 or int(guess) < 0:
    guess = raw_input("Please enter a VALID guess: ")
else:
    # my attempt at making the game loop
    while tries < 10 and int(guess) != number:
        guess = raw_input("Please guess again: ")
        tries = tries + 1
        # i noticed if i guessed the right answer out of the loop it would just exit so i duplicated here to prevent it
        if int(guess) == number:
            print "Congratulations you\'ve won!"
        # implemented the lose mechanic
        elif tries == 10:
            print "You've Lost!"
        # same with the correct answer issue i had so i put it in the loop as well
        elif int(guess) > 25 or int(guess) < 0:
            while int(guess) > 25 or int(guess) < 0:
                guess = raw_input("Please enter a VALID guess: ")
            # this is here because I didn't want to take tries away for invalid guesses
            tries = tries

So the game for me works as expected. You can win, lose, guess invalid numbers (I haven't tried letters but I won't get into that yet). Just not sure if this is the most efficient I can get. But if it's good enough for a beginner, I'll take it.

Answer: This is a good first stab at a guess the number game. Here's a few things:

You should be learning/using Python 3 instead of Python 2. So far the only difference for you will be raw_input becomes input and print "foo" becomes print("foo").

The line tries = tries doesn't do anything meaningful. You don't need it.

You should put all of this inside a function called main and then at the bottom you run it with this (tests if this script is being run standalone):

if __name__ == '__main__':
    main()

You do int(guess) a lot. This is something that can fail (if someone types abc for example). You should do it once and check for failure.

try:
    guess = int(guess)
except ValueError:
    print('Guess must be between 0 and 25')
    # ask for another guess

It's a good idea to comment. That's a fantastic habit to get into. However, you run the risk of over-commenting. And that is more distracting than having too few. As a rule, don't explain what, explain why. More concretely, "check to see if a user guessed the right number" is obvious from the if guess == secret_number:. Finding a balance is a skill, but if you work on it and read good quality open source code, you'll pick it up.

When you do your range check, you can do it in a much more pythonic way. Instead of checking guess < 0 or guess > 25, you can do if not 0 <= guess <= 25

tries = tries + 1 can be tries += 1

You don't need to escape ' inside a " (so "Congratulations you\'ve won!" can be "Congratulations you've won!")

The overarching issue you have though is most of your logic is duplicated in several different places. This becomes a problem if you want to, say, change the number range from 0-25 to 0-50. You'd need to change that in 6 places. What happens if you miss one? Then your game will break in weird ways. What's the solution? Look to pull out duplicate logic like this into smaller, manageable chunks. In this case, it's helpful to identify the steps of your game.

Generate a secret number
Collect a guess from the user
If the user guessed the secret or more than 10 attempts have been made, end, else go back to 2

One easy thing you can pull out is "collect a guess from the user." You can do this in a function that gets the number, converts it to an int (handling the exception), and checks it is within range.

LOW, HIGH = 0, 25

def get_guess():
    while True:
        try:
            guess = int(input(f'Guess a number between {LOW}-{HIGH}: '))
        except ValueError:
            print('Please enter a number')
        else:
            if LOW <= guess <= HIGH:
                return guess
            print(f'Number must be between {LOW} and {HIGH}')

Now you can use get_guess() when you need to get the guess from the user without having to add any extra control structures (like wrapping everything in a while) or duplicate any logic. Now, when using get_guess() you can concern yourself with fewer facets of the game. Namely, checking the number of attempts and if the guess was correct:

from random import randint

MAX_ATTEMPTS = 10

def main():
    secret = randint(LOW, HIGH)
    for _ in range(MAX_ATTEMPTS):
        guess = get_guess()
        if guess == secret:
            print('You win!')
            break
    else:
        print('Too many attempts; you lose.')
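For completeness, the two fragments in the answer can be stitched into one runnable script; a sketch under light assumptions (the ask parameter and the play return value are additions of this sketch, purely so the loop can be exercised without a console):

```python
import random

LOW, HIGH = 0, 25
MAX_ATTEMPTS = 10

def get_guess(ask=input):
    """Prompt (via ask) until the user enters an integer in [LOW, HIGH]."""
    while True:
        raw = ask(f'Guess a number between {LOW}-{HIGH}: ')
        try:
            guess = int(raw)
        except ValueError:
            print('Please enter a number')
            continue
        if LOW <= guess <= HIGH:
            return guess
        print(f'Number must be between {LOW} and {HIGH}')

def play(secret=None, ask=input):
    """Run one game; returns True on a win, False otherwise."""
    if secret is None:
        secret = random.randint(LOW, HIGH)
    for _ in range(MAX_ATTEMPTS):
        if get_guess(ask) == secret:
            print('You win!')
            return True
    print('Too many attempts; you lose.')
    return False

if __name__ == '__main__':
    play()
```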
{ "domain": "codereview.stackexchange", "id": 33353, "tags": "python, beginner, python-2.x, number-guessing-game" }
Subscribing to nav_msgs::Path message
Question: Hi everyone, I am trying to subscribe to a message of type nav_msgs::Path. Following is the callback function: nav_msgs::Path gPlan; void planCallback(nav_msgs::Path::ConstPtr& msg) { int i=0; for(std::vector<geometry_msgs::PoseStamped>::const_iterator it= msg->poses.begin(); it!= msg->poses.end(); ++it) { gPlan.poses[i] = *it; i++; } } The subscriber has been defined as: ros::Subscriber planSub = n.subscribe("global_plan", 1000, planCallback); The error I am getting is as follows: In file included from /opt/ros/indigo/include/ros/serialization.h:37:0, from /opt/ros/indigo/include/ros/publisher.h:34, from /opt/ros/indigo/include/ros/node_handle.h:32, from /opt/ros/indigo/include/ros/ros.h:45, from /home/lead/catkin_ws_autoware/src/PPpublisher/src/purepursuitPub.cpp:1: /opt/ros/indigo/include/ros/message_traits.h: In instantiation of ‘static const char* ros::message_traits::MD5Sum<M>::value() [with M = boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >]’: /opt/ros/indigo/include/ros/message_traits.h:228:103: required from ‘const char* ros::message_traits::md5sum() [with M = boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >]’ /opt/ros/indigo/include/ros/subscribe_options.h:89:50: required from ‘void ros::SubscribeOptions::initByFullCallbackType(const string&, uint32_t, const boost::function<void(P)>&, const boost::function<boost::shared_ptr<typename ros::ParameterAdapter<P>::Message>()>&) [with P = boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >&; std::string = std::basic_string<char>; uint32_t = unsigned int; typename ros::ParameterAdapter<P>::Message = boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >]’ /opt/ros/indigo/include/ros/node_handle.h:658:5: required from ‘ros::Subscriber ros::NodeHandle::subscribe(const string&, uint32_t, void (*)(M), const ros::TransportHints&) [with M = boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >&; std::string = std::basic_string<char>; uint32_t 
= unsigned int]’ /home/lead/catkin_ws_autoware/src/PPpublisher/src/purepursuitPub.cpp:71:74: required from here /opt/ros/indigo/include/ros/message_traits.h:121:29: error: ‘__s_getMD5Sum’ is not a member of ‘boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >’ return M::__s_getMD5Sum().c_str(); ^ /opt/ros/indigo/include/ros/message_traits.h: In instantiation of ‘static const char* ros::message_traits::DataType<M>::value() [with M = boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >]’: /opt/ros/indigo/include/ros/message_traits.h:237:105: required from ‘const char* ros::message_traits::datatype() [with M = boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >]’ /opt/ros/indigo/include/ros/subscribe_options.h:90:54: required from ‘void ros::SubscribeOptions::initByFullCallbackType(const string&, uint32_t, const boost::function<void(P)>&, const boost::function<boost::shared_ptr<typename ros::ParameterAdapter<P>::Message>()>&) [with P = boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >&; std::string = std::basic_string<char>; uint32_t = unsigned int; typename ros::ParameterAdapter<P>::Message = boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >]’ /opt/ros/indigo/include/ros/node_handle.h:658:5: required from ‘ros::Subscriber ros::NodeHandle::subscribe(const string&, uint32_t, void (*)(M), const ros::TransportHints&) [with M = boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >&; std::string = std::basic_string<char>; uint32_t = unsigned int]’ /home/lead/catkin_ws_autoware/src/PPpublisher/src/purepursuitPub.cpp:71:74: required from here /opt/ros/indigo/include/ros/message_traits.h:138:31: error: ‘__s_getDataType’ is not a member of ‘boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >’ return M::__s_getDataType().c_str(); ^ In file included from /opt/ros/indigo/include/ros/subscription_callback_helper.h:35:0, from /opt/ros/indigo/include/ros/subscriber.h:33, from 
/opt/ros/indigo/include/ros/node_handle.h:33, from /opt/ros/indigo/include/ros/ros.h:45, from /home/lead/catkin_ws_autoware/src/PPpublisher/src/purepursuitPub.cpp:1: /opt/ros/indigo/include/ros/parameter_adapter.h: In instantiation of ‘static ros::ParameterAdapter<M>::Parameter ros::ParameterAdapter<M>::getParameter(const Event&) [with M = boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >&; ros::ParameterAdapter<M>::Parameter = boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >&; ros::ParameterAdapter<M>::Event = ros::MessageEvent<const boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > > >; typename boost::remove_reference<typename boost::remove_const<M>::type>::type = boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >]’: /opt/ros/indigo/include/ros/subscription_callback_helper.h:144:54: required from ‘void ros::SubscriptionCallbackHelperT<P, Enabled>::call(ros::SubscriptionCallbackHelperCallParams&) [with P = boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >&; Enabled = void]’ /home/lead/catkin_ws_autoware/src/PPpublisher/src/purepursuitPub.cpp:127:1: required from here /opt/ros/indigo/include/ros/parameter_adapter.h:78:30: error: invalid initialization of reference of type ‘ros::ParameterAdapter<boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >&>::Parameter {aka boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >&}’ from expression of type ‘const boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >’ return *event.getMessage(); ^ In file included from /opt/ros/indigo/include/ros/publisher.h:34:0, from /opt/ros/indigo/include/ros/node_handle.h:32, from /opt/ros/indigo/include/ros/ros.h:45, from /home/lead/catkin_ws_autoware/src/PPpublisher/src/purepursuitPub.cpp:1: /opt/ros/indigo/include/ros/serialization.h: In instantiation of ‘static void ros::serialization::Serializer<T>::read(Stream&, typename boost::call_traits<T>::reference) [with Stream = 
ros::serialization::IStream; T = boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >; typename boost::call_traits<T>::reference = boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >&]’: /opt/ros/indigo/include/ros/serialization.h:163:32: required from ‘void ros::serialization::deserialize(Stream&, T&) [with T = boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >; Stream = ros::serialization::IStream]’ /opt/ros/indigo/include/ros/subscription_callback_helper.h:136:34: required from ‘ros::VoidConstPtr ros::SubscriptionCallbackHelperT<P, Enabled>::deserialize(const ros::SubscriptionCallbackHelperDeserializeParams&) [with P = boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >&; Enabled = void; ros::VoidConstPtr = boost::shared_ptr<const void>]’ /home/lead/catkin_ws_autoware/src/PPpublisher/src/purepursuitPub.cpp:127:1: required from here /opt/ros/indigo/include/ros/serialization.h:136:5: error: ‘class boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >’ has no member named ‘deserialize’ t.deserialize(stream.getData()); ^ make[2]: *** [PPpublisher/CMakeFiles/purepursuitPub.dir/src/purepursuitPub.cpp.o] Error 1 make[1]: *** [PPpublisher/CMakeFiles/purepursuitPub.dir/all] Error 2 make: *** [all] Error 2 Invoking "make -j4 -l4" failed Where am I going wrong? 
edit: The whole code is:

#include <ros/ros.h>
#include <nav_msgs/Path.h>
#include <nav_msgs/Odometry.h>
#include <tf/transform_listener.h>
#include <sensor_msgs/Imu.h>
#include <std_msgs/Float32.h>
#include <std_msgs/String.h>
#include <geometry_msgs/Twist.h>
#include <geometry_msgs/PoseStamped.h>
#include <geometry_msgs/TwistStamped.h>
#include <autoware_msgs/lane.h>
#include <tf2_msgs/TFMessage.h>
#include <tf2_geometry_msgs/tf2_geometry_msgs.h>
#include <tf2_ros/transform_listener.h>
#include <iostream>
#include <cstring>
#include <std_msgs/Float32.h>

using namespace std;

tf2_msgs::TFMessage tfmsg;
nav_msgs::Path gPlan;
float curVel;

void tfCallback(const tf2_msgs::TFMessage& msg)
{
    tfmsg = msg;
}

void planCallback(nav_msgs::Path::ConstPtr& msg)
{
    int i=0;
    for(std::vector<geometry_msgs::PoseStamped>::const_iterator it= msg->poses.begin(); it!= msg->poses.end(); ++it)
    {
        gPlan.poses[i] = *it;
        i++;
    }
}

void encCallback(const std_msgs::Float32::ConstPtr& enc_msg)
{
    curVel = enc_msg->data;
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "pp_listener");
    ros::NodeHandle n;

    geometry_msgs::PoseStamped curPose;
    geometry_msgs::PoseStamped curPoseTransformed;
    autoware_msgs::lane finWP;
    autoware_msgs::waypoint wp;
    tf::TransformListener listener;

    ros::Subscriber tfSub = n.subscribe("tf", 1000, tfCallback);
    ros::Subscriber planSub = n.subscribe("global_plan", 1000, planCallback);
    ros::Subscriber encSub = n.subscribe("enc_vel", 1000, encCallback);

    ros::Publisher cur_posePub = n.advertise<geometry_msgs::Pose>("current_pose", 1000);
    ros::Publisher cur_velPub = n.advertise<std_msgs::Float32>("current_velocity", 1000);
    ros::Publisher fin_wpPub = n.advertise<autoware_msgs::lane>("final_Waypoints", 1000);

    ros::Rate r(10.0);
    while(ros::ok())
    {
        ros::spinOnce();
        ros::Time now = ros::Time::now();
        //curVel = odommsg.twist.twist.linear.x;
        if(tfmsg.transforms[2].child_frame_id == "base_link")
        {
            curPose.pose.position.x = tfmsg.transforms[2].transform.translation.x;
            curPose.pose.position.y = tfmsg.transforms[2].transform.translation.y;
            curPose.pose.position.z = tfmsg.transforms[2].transform.translation.z;
            curPose.pose.orientation = tfmsg.transforms[2].transform.rotation;
            curPose.header = tfmsg.transforms[2].header;
            listener.transformPose("map",curPose,curPoseTransformed);
        }

        finWP.header = gPlan.header;
        finWP.increment =1;
        for(int i=0; i<=gPlan.poses.size(); i++)
        {
            wp.pose.header = gPlan.header;
            wp.twist.header = gPlan.header;
            wp.twist.twist.linear.x = 2;
            wp.pose.pose.position = gPlan.poses[i].pose.position;
            wp.pose.pose.orientation = gPlan.poses[i].pose.orientation;
            finWP.waypoints.push_back(wp);
        }

        cur_posePub.publish(curPose);
        cur_velPub.publish(curVel);
        fin_wpPub.publish(finWP);
        r.sleep();
    }
    return 0;
}

Originally posted by kllysin on ROS Answers with karma: 13 on 2018-11-19 Post score: 0 Answer: If you just want to create a copy of the path message, then the copy constructors mean you can do this: gPlan = nav_msgs::Path(msg); Hope this helps. Originally posted by PeteBlackerThe3rd with karma: 9529 on 2018-11-19 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by kllysin on 2018-11-19: Thanks for your answer! Comment by MichelleHusbands on 2020-11-12: Thanks, @PeteBlackerThe3rd, it worked like a charm!
{ "domain": "robotics.stackexchange", "id": 32074, "tags": "ros, nav-msgs, ros-indigo" }
One instance per method call or multiple method call per instance
Question: I have the following class designs, and I'm looking for the best practices, aligned to the OOP design principles/patterns. The ParameterParser will be used inside a foreach block through 10 items approximately. Any comments regarding the method body will also be appreciated, though the focus of the question is on the class design/usage.

Use the constructor to take the parameter and parse (one instance per parse):

public class ParameterParser
{
    private readonly Type _propertyType;
    private readonly string _parameter;

    public ParameterParser(Type propertyType, string parameter)
    {
        _propertyType = propertyType;
        _parameter = parameter;
    }

    public object Parse()
    {
        if (typeof(IEnumerable).IsAssignableFrom(_propertyType) || typeof(IEnumerable<>).IsAssignableFrom(_propertyType))
            return _parameter.Split(new[] { ',', ';', '|' }, StringSplitOptions.RemoveEmptyEntries);

        if (_propertyType.IsEnum)
            return Enum.Parse(_propertyType, _parameter);

        return Convert.ChangeType(_parameter, _propertyType);
    }
}

Parameterless constructor, multiple parsing per instance:

public class ParameterParser2
{
    public object Parse(Type propertyType, string parameter)
    {
        //same code
    }
}

A helper class, static method:

public class ParameterHelper
{
    public static object Parse(Type propertyType, string parameter)
    {
        //same code
    }
}

I think the first one is more likely to be the best, but I can't explain which exactly are the advantages over the others. The question is "Considering design principles, which one is the best and why? Is there any pattern which would fit the scenario?". Answer: Option 1 is surprising. I find it funny that you have return Enum.Parse(_propertyType, _parameter); in a parameterless Parse() method. Enum.Parse is just an example (int.Parse would be another); these methods are commonly used by C# programmers and your single-use parser seems to break POLS in my opinion. Option 2 is [more] consistent with the framework.
I don't mean to repeat what I just said, but it feels natural for a Parse method to take in all the parameters it needs. Now there is another problem here: your API is exposing object, which means the parsed value will need to be cast from the calling code, into the correct type. This is bad. Very bad. Nothing should ever be returning an object. If you're parsing a value type, you're boxing it right there. Is that not likely? How about keeping it type-safe and taking a generic type argument instead of a Type parameter?

public T Parse<T>(string value)
{
    ...
    return (T)Convert.ChangeType(value, typeof(T));
}

This way if T is a value type, you're not boxing it. And you're not returning an object, and it's still the calling code that decides what type to parse the string with.

Option 3 feels wrong. Whenever I feel the need for a class to be called xxxxxHelper, something itches. The class should be static as well, if it's only going to expose static methods. That said, as @Simon has noted (oh, well, Simon says...) this brings DI to its knees. Best stay clear from that.
{ "domain": "codereview.stackexchange", "id": 5406, "tags": "c#, object-oriented, design-patterns, classes" }
How to build single pipeline with multiple estimators supporting fit and predict?
Question: I have a ML problem where I want to divide the prediction task into subproblems (where I believe specialized models will do better). All these prediction tasks operate independently and will use the same input data - but will have different estimators/targets. For example:

single dataset (A)
shared transformations A -> B
estimator #1: random forests with target Y1
estimator #2: GBM classifier with target Y2
estimator #3: logistic regression with target Y2
the predictions of each of these models will be output as a tuple (#1, #2, #3)

I'm looking for a simple (or best practice) way to define the above pipeline, train it, and be able to use it for prediction. I have looked at sklearn Pipeline but best I can tell you can't use that to have multiple estimators for training/predictions (would love to learn I'm wrong on this). My fallback option is to build a class that supports fit and predict_proba but under the hood just calls these models sequentially (training in sequence & generating predictions in sequence before returning the tuple of results). Is there a better way to go about this problem? Answer: Scikit-learn pipelines are designed to chain operations; they are not designed to handle conditional logic. Your problem is better handled in Python-based logic. Something like:

from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

pipe_1 = make_pipeline(StandardScaler(), RandomForestClassifier())
pipe_2 = make_pipeline(StandardScaler(), GradientBoostingClassifier())
pipe_3 = make_pipeline(StandardScaler(), LogisticRegression())

pipe_1.fit(X, y1)
pipe_2.fit(X, y2)
pipe_3.fit(X, y2)

predictions = (pipe_1.predict(X), pipe_2.predict(X), pipe_3.predict(X))
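The question's fallback idea (one wrapper class that fits and predicts several estimators in sequence) is also straightforward to sketch. MultiTargetModel and the toy _Majority estimator below are hypothetical names invented for this sketch; any object exposing fit/predict, such as the sklearn pipelines in the answer, can be plugged in:

```python
class MultiTargetModel:
    """Fit several independent estimators on shared inputs, one target each."""

    def __init__(self, named_estimators):
        # named_estimators: list of (name, estimator, target_key) tuples;
        # each estimator must expose fit(X, y) and predict(X).
        self.named_estimators = named_estimators

    def fit(self, X, targets):
        # targets: dict mapping target_key -> label sequence, e.g. {'y1': ..., 'y2': ...}
        for _, est, key in self.named_estimators:
            est.fit(X, targets[key])
        return self

    def predict(self, X):
        # One prediction vector per estimator, returned in declared order.
        return tuple(est.predict(X) for _, est, _ in self.named_estimators)


class _Majority:
    """Toy stand-in estimator: always predicts the most common training label."""

    def fit(self, X, y):
        y = list(y)
        self.label_ = max(set(y), key=y.count)
        return self

    def predict(self, X):
        return [self.label_] * len(X)


X = [[0], [1], [2]]
model = MultiTargetModel([('rf', _Majority(), 'y1'),
                          ('gbm', _Majority(), 'y2'),
                          ('lr', _Majority(), 'y2')])
model.fit(X, {'y1': [0, 0, 1], 'y2': [1, 1, 1]})
preds = model.predict(X)  # a 3-tuple, one entry per estimator
```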
{ "domain": "datascience.stackexchange", "id": 11024, "tags": "machine-learning, scikit-learn" }
Why is fillet weld assumed to be in a state of pure shear stress?
Question: According to Building codes, when calculating the maximum load a fillet weld can take, one checks only that the stress in pure shear is below the maximum shear strength. We know that shear yielding stress and tensile yield stress are related (using the Von Mises Criterion for the onset of yield): $$\sigma_s = \frac{\sigma_y}{\sqrt(3)}\approx0.6*\sigma_y$$ where $\sigma_s$ is the yield stress in yield and $\sigma_y$ is the yield stress in tension. But why do we assume the weld is in a state of pure shear? Why is this a valid assumption? Answer: First of all, one small but important note: The relationship between shear yield stress $S_{sy}$ and the (tensile) yield stress $S_y$ is dependent on the failure theory. Von Mises: $S_{sy} = 0.577 S_y\approx 0.6 S_y$ Tresca: $S_{sy} = 0.5 S_y$ I.e. the Tresca is a more conservative criterion.. That is probably the reason that it is preferred for materials with brittle failure. And altough normally steel can be considered as ductile, the Heat Affected Zone (HAZ) around the weld usually exhibits a more brittle failure. Therefore, Tresca seems to be more appropriate. Also I don't know if the Building code you are referring to explicitly states the Von Mises relation, or is just saying "shear stress" Let's proceed to the calculation, the total force passing through each weld is $\frac F 2$. Also let's assume a length of weld equal to l. The force needs to pass through every cross-section that passes from the lower left corner of the blownup image of the weld. We can examine the following 3 cases. horizontal crosssection (cross-section area $\sqrt 2 a l$) normal stress diagonal cross-section (cross-section area $a l$) combination of normal and shear vertical cross-section (cross-section area $\sqrt 2 a l$) shear stress In the following analysis I will use the following equation for simplicity $$\sigma_0= \frac{F}{2\sqrt 2 a l}$$ If you calculate the stress for: 1. 
horizontal cross-section: $$\sigma_1 = \frac{F/2}{\sqrt 2 a l}= \frac{F}{2\sqrt 2 a l}=\sigma_0\le S_y$$ 3. vertical cross-section: $$\tau_3 = \frac{F/2}{\sqrt 2 a l}= \frac{F}{2\sqrt 2 a l}=\sigma_0 \le S_{sy}$$ Finally, case 2 for the combined normal and shear stress. From the geometry ($45^\circ$ plane) the total force of $\frac F 2$, has a normal component with magnitute $\frac{F}{2}\frac{\sqrt 2}{2}= \frac{F}{2\sqrt{2}}$ and a shear component of equal magnitute. Therefore for case 2, you can calculate $$\sigma_2 =\frac{\frac{F}{2\sqrt{2}}}{a l}=\frac{F}{2\sqrt{2} a l}=\sigma_0, \quad \tau_2 =\frac{\frac{F}{2\sqrt{2}}}{a l}=\frac{F}{2\sqrt{2} a l}=\sigma_0$$ using the von Mises criterion for the equivalent general plane stress $$\sigma_{v,eq} = \sqrt{\sigma_2^2 + 3\tau_2^2}= \sqrt{\sigma_0^2 + 3*\sigma_0^2}= 2 \sigma_0<=S_y$$ If summarise the results the equations are: $$\begin{cases} (1.) \quad\sigma_0\le S_y\\ (2.) \quad2\sigma_0\le S_y\\ (3.) \quad\sigma_0\le S_{sy}\end{cases} \rightarrow \begin{cases} (1.) \quad\sigma_0\le S_y\\ (2.) \quad2\sigma_0\le S_y\\ (3.) \quad\sigma_0\le 0.5 S_{y} (Tresca)\end{cases} $$ It is obvious that (2.) and (3.) are equivalent and they are also more conservative than case (1.). Also the calculations of (3.) are simpler. Bottom line: The pure shear stress is as stringent as any other state of stress encountered at any plane of the weld, and its easier to download. (thanks @Jonathan R Swift )
{ "domain": "engineering.stackexchange", "id": 3483, "tags": "stresses, welding" }
Reading the sequence number from a transform
Question: Hi folks! I'd like to be able to use the sequence number from a transform that I am getting from a motion capture vrpn. I use this piece of code to read the transform: tf_listener.lookupTransform("/optitrak", "/quad", ros::Time(0), quad); and am attempting to read the sequence via: seq = quad.header.seq; But, naturally, the transform does not include that header. How can I get both the transform and its associated sequence number? Thanks, Parker Originally posted by 0pc0 on ROS Answers with karma: 78 on 2012-08-27 Post score: 0 Answer: As far as I know, it is not possible to get the sequence number due to the design of how tf works. Since there can be multiple producers of tf info, data can arrive out of order and may not be synchronized. It is the consumer's responsibility to check for the required tf using the timestamp, hence the lookupTransform with an associated time (ros::Time(0), for example, to get the latest tf). If it is absolutely necessary to have a sequence number, it is probably best to go back to the producer (in your case, whoever broadcasts /optitrak->/quad) and publish that transform with a stamp. But I'm wondering about the necessity of this, since by your tf lookup, you know it is the latest tf, and it can just be seq+1 by default. Originally posted by weiin with karma: 2268 on 2012-08-27 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by dornhege on 2012-08-27: AFAIK tf even interpolates between frames, so there might not be any single message behind a tf request.
{ "domain": "robotics.stackexchange", "id": 10786, "tags": "ros, quadcopter, tf2, ros-electric, transform-listener" }
Why is RGB and depth image synchronization not working? [Solved]
Question: Why does this work, but not the bottom one? #include <message_filters/subscriber.h> #include <message_filters/time_synchronizer.h> #include <sensor_msgs/Image.h> #include <sensor_msgs/CameraInfo.h> #include <cv_bridge/cv_bridge.h> #include <opencv2/imgproc/imgproc.hpp> #include <opencv2/highgui/highgui.hpp> #include <opencv2/core/core.hpp> #include <opencv2/features2d/features2d.hpp> #include <iostream> #include <ros/ros.h> #include <pcl_ros/point_cloud.h> #include <pcl/point_types.h> #include <pcl/filters/voxel_grid.h> #include <boost/foreach.hpp> #include <sensor_msgs/PointCloud2.h> #include <pcl/io/pcd_io.h> #include <image_transport/image_transport.h> #include <cv_bridge/cv_bridge.h> #include <sensor_msgs/image_encodings.h> #include <sstream> #include <string> //names that will appear at the top of each window static const std::string windowName = "Original Image"; using namespace std; using namespace sensor_msgs; using namespace message_filters; void callback(const ImageConstPtr& image_rgb, const CameraInfoConstPtr& cam_info) { // Solve all of perception here... cv::Mat image_color = cv_bridge::toCvCopy(image_rgb)->image; cv::imshow(windowName,image_color); cv::waitKey(3); } int main(int argc, char** argv) { ros::init(argc, argv, "vision_node"); ros::NodeHandle nh; message_filters::Subscriber<Image> rgb_sub(nh,"/camera/rgb/image_color", 1); message_filters::Subscriber<CameraInfo> info_color_sub(nh,"/camera/rgb/camera_info", 1); TimeSynchronizer<Image, CameraInfo> sync(rgb_sub, info_color_sub, 10); sync.registerCallback(boost::bind(&callback, _1, _2)); ros::spin(); return 0; } With the upper code I'm able to see the RGB image, BUT the code below is not giving any result. I'm trying to synchronize both images and then separate them inside the callback, then process the RGB image to find the object. I want to find the depth of a particular pixel (u, v), but this is not working when I try to send both the RGB and depth images.
void callback(const ImageConstPtr& image_rgb, const ImageConstPtr& image_depth_source) { // Solve all of perception here... cv::Mat image_color = cv_bridge::toCvCopy(image_rgb)->image; cv::Mat image_depth = cv_bridge::toCvCopy(image_depth_source)->image; cv::imshow(windowName1,image_color); cv::imshow(windowName2,image_depth); cv::waitKey(3); } int main(int argc, char** argv) { ros::init(argc, argv, "vision_node"); ros::NodeHandle nh; message_filters::Subscriber<Image> rgb_sub(nh,"/camera/rgb/image_color", 1); message_filters::Subscriber<Image> depth_sub(nh,"/camera/depth_registered/image_raw", 1); TimeSynchronizer<Image, Image> sync(rgb_sub, depth_sub, 10); sync.registerCallback(boost::bind(&callback, _1, _2)); ros::spin(); return 0; } Originally posted by PKumars on ROS Answers with karma: 92 on 2016-02-18 Post score: 0 Original comments Comment by PKumars on 2016-02-19: Hello I tried to check for the topic /camera/depth_registered/image_raw in rostopic list. but with rostopic hz it's giving no response. then I changed to /camera/depth/image and I got average rate: 29.960 min: 0.031s max: 0.037s std dev: 0.00194s window: 27. then also I'm unable to get result. Comment by PKumars on 2016-02-19: with RGB image I got this subscribed to [/camera/rgb/image_color] average rate: 30.049 min: 0.030s max: 0.036s std dev: 0.00210s window: 18 average rate: 30.022. Comment by PKumars on 2016-02-19: message_filters::Subscriber rgb_sub(nh,"/camera/rgb/image_color", 1); message_filters::Subscriber depth_sub(nh,"/camera/depth/image", 1); TimeSynchronizer<Image, Image> sync(rgb_sub, depth_sub, 10); sync.registerCallback(boost::bind(&callback, _1, _2)); now I'm doing this . 
Comment by PKumars on 2016-02-19: void callback(const ImageConstPtr& image_rgb, const ImageConstPtr& image_depth_source) { cv::Mat image_color = cv_bridge::toCvCopy(image_rgb)->image; cv::Mat image_depth = cv_bridge::toCvCopy(image_depth_source)->image; cv::imshow(windowName1,image_color); cv::imshow(windowName2,image_depth);} Answer: Are you sure that your drivers for your device are set up to publish on both of those topics? What does rostopic hz report for each topic? Are you entering your callback? Depending on how synchronized your messages are, you may also need an approximate synchronizer. Originally posted by jarvisschultz with karma: 9031 on 2016-02-19 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by PKumars on 2016-02-19: Hello, thanks a lot. I solved this problem. I used the Approximate-time Policy http://wiki.ros.org/message_filters#ApproximateTime_Policy Comment by gvdhoorn on 2016-02-26: I've marked @jarvisschultz's answer as the answer, as he provided you with the solution.
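The reason the exact TimeSynchronizer fails here is that the RGB and depth messages carry slightly different stamps, so exact matching never fires; the ApproximateTime policy instead pairs messages whose stamps are merely close. A toy illustration of that idea (a greedy sketch only — not the actual message_filters algorithm):

```python
def approximate_pairs(stamps_a, stamps_b, slop):
    """Pair each stamp in A with the closest unused stamp in B that lies
    within `slop` seconds. Greedy sketch of approximate-time matching;
    ROS message_filters uses a more careful adaptive algorithm."""
    pairs, used = [], set()
    for ta in stamps_a:
        best = None
        for i, tb in enumerate(stamps_b):
            if i in used or abs(ta - tb) > slop:
                continue
            if best is None or abs(ta - tb) < abs(ta - stamps_b[best]):
                best = i
        if best is not None:
            used.add(best)
            pairs.append((ta, stamps_b[best]))
    return pairs

# Exact matching (slop = 0) finds nothing when stamps differ slightly,
# which is why the exact TimeSynchronizer callback never fired:
print(approximate_pairs([0.00, 1.00], [0.01, 1.02], slop=0.0))   # []
print(approximate_pairs([0.00, 1.00], [0.01, 1.02], slop=0.05))  # [(0.0, 0.01), (1.0, 1.02)]
```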
{ "domain": "robotics.stackexchange", "id": 23829, "tags": "ros, opencv, synchronization, depth" }
ROS MoveIt PR2 Demo Launch Fails
Question: Hi, an interesting one this, as I'm not sure whether anyone else will be affected by this, but here goes. From a fresh install of ROS hydro and MoveIt hydro, and installation of the 'ros-hydro-moveit-full-pr2' package, I have been trying to run the simple moveit demo.launch file, i.e. 'roslaunch pr2_moveit_config demo.launch'. However, it unfortunately fails to load / work, giving the error: [ INFO] [1377888324.708150830]: Starting scene monitor [ INFO] [1377888324.710933161]: Listening to '/planning_scene' [ INFO] [1377888324.711348039]: Starting world geometry monitor [ INFO] [1377888324.712989217]: Listening to '/collision_object' using message notifier with target frame '/odom_combined' [ INFO] [1377888324.720032859]: Listening to '/planning_scene_world' for planning scene world geometry [move_group-5] process has died [pid 11657, exit code -4, cmd /opt/ros/hydro/lib/moveit_ros_move_group/move_group __name:=move_group __log:=/home/michael/.ros/log/4ed43f30-11a4-11e3-b291-08863b2589c1/move_group-5.log]. log file: /home/michael/.ros/log/4ed43f30-11a4-11e3-b291-08863b2589c1/move_group-5*.log [ERROR] [1377888327.017819527]: Object recognition action: Unable to connect to action server within allotted time [rviz_Rinnegan_11584_115703587-6] process has died [pid 11685, exit code -4, cmd /opt/ros/hydro/lib/rviz/rviz -d /opt/ros/hydro/share/pr2_moveit_config/launch/moveit.rviz __name:=rviz_Rinnegan_11584_115703587 __log:=/home/michael/.ros/log/4ed43f30-11a4-11e3-b291-08863b2589c1/rviz_Rinnegan_11584_115703587-6.log]. log file: /home/michael/.ros/log/4ed43f30-11a4-11e3-b291-08863b2589c1/rviz_Rinnegan_11584_115703587-6*.log
Certainly this is new to me, though I do seem to have a poor run-in with action servers: [ERROR] [1377888327.017819527]: Object recognition action: Unable to connect to action server within allotted time It is possible to go the long way around by loading up rviz separately, along with a robot_description, and then adding motion planning / robot_state within Rviz; however, without running move_group I get no planning library loaded, and running move_group.launch separately also fails. What I find odd is that two weeks ago everything ran fine on two separate computers. I went on holiday, took my slow netbook, installed on it, the same as the others, ROS hydro / MoveIt etc., and it runs fine. In fact I can load the damn thing up and run demo.launch with no problems right now, but coming back, the other two computers seem to no longer like me, even after a purge and reinstall of ROS. Has anything changed in the last two weeks with regards to MoveIt or move_group? I'm kind of worried about whether I should touch my slow netbook at all in case it gets the same disease :) Originally posted by cdrwolfe on ROS Answers with karma: 137 on 2013-08-30 Post score: 2 Original comments Comment by Martin Günther on 2013-09-01: Good question! In case you don't get any answers here, you should probably post to the moveit google group, perhaps also with a link to this question. Most of the moveit support seems to be happening over there. Comment by cdrwolfe on 2013-09-02: I've managed to somewhat get around this problem but probably not in the correct way. I git cloned moveit_core and moveit_ros and catkin_maked them in my src, for some reason this partially failed and didn't complete moveit_ros_move_group, so I copied over the move_group node from opt/ros/...
into the devel/lib/moveit_ros_move_group and it all seemed to work fine :S Answer: There was a recent change in Rviz (https://github.com/ros-visualization/rviz/issues/680), which forced some plugins to be updated - including MoveIt's MotionPlanning plugin. The fix is there, but currently only available via the sources (hydro-devel) version. I expect there will be another release soonish. Originally posted by bit-pirate with karma: 2062 on 2013-09-16 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by cdrwolfe on 2013-09-24: Well this is somewhat frustrating :). I decided in some infinite glory to start over again as the Hydro deb has finally been released and reinstall everything. Of course this drove me straight back to the problem above again; however, this time my earlier workaround no longer works :). Anyway, first off some clarification: I have been under the assumption that all you really need to do is catkin_make everything once you have essentially git cloned it into source; however, I swear half the time the libraries are not being created. I assume moveit_ros should compile the move_group node into the moveit_ros_move_group folder within catkin_ws/devel/lib as it did before, but now nothing. Which brings me to my next question: how exactly do I go about applying the fix above? Comment by bit-pirate on 2013-09-24: Did you check for missing system dependencies using rosdep? You also can use for installing catkinized and released ROS packages. In your workspace do rosdep install --from-paths src -i -y
{ "domain": "robotics.stackexchange", "id": 15393, "tags": "moveit, pr2" }
Faster than light morse code v2
Question: Imagine a theoretical superhard steel cylinder pointed towards the moon, 1 cm in radius and 380,000 km long, traveling at 1 km per hour towards the moon. Parallel to this cylinder, 1 meter away, we have a magnetic cylinder of equal dimensions, not moving. If we pushed the magnetic cylinder "down" towards the moving steel cylinder, causing the steel cylinder to stop, would both the earth side and the moon side stop at the same time? Would this pulse be a faster-than-light morse code machine? Answer: If you synchronize the pushes at the two ends then you can make the ends stop at the same time. Or you can make one stop before the other, either way around. Consider the following related idea. Suppose New Year's day is approaching and we arrange firework displays all around the world. Everyone has an accurate clock. We can arrange the start of the displays so that a sequence of rocket launch events happens in all the cities, one just before the next, and with careful preparation the sequence can proceed faster than the speed of light around the world! Or to have an example involving longer times, put the events on different planets. Then the timekeeping will not have to be so precise. It will be pretty easy to arrange for an event on Mars to start just a few seconds after an event on Earth for example. But you don't do it by sending signals from Earth to Mars just when the Earth event happens. Rather, you plan ahead and a day or so before you could tell the people on Mars to get ready and then do it at the agreed time. The point is that by preparation one can make things at different places happen simultaneously, or one after another, or one before another, however we like. But the preparation signals---the ones used to set up the experiment---will all have to travel slower than light.
(If with the long rod you just stop one end, then an impulse will travel down the rod at the speed of sound and eventually the other end will stop, but that was not the experiment you had in mind.)
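For scale, here is the back-of-the-envelope comparison for the one-end-stop case, using an approximate speed of sound in steel (constants are rough, illustrative values):

```python
# A "stop" pulse in the rod propagates at the material's speed of sound,
# not instantly.  Approximate constants:
rod_length_m = 380_000e3          # Earth-Moon distance from the question
c_sound_steel = 5960.0            # m/s, rough value for steel
c_light = 299_792_458.0           # m/s

t_pulse = rod_length_m / c_sound_steel   # roughly 6.4e4 s (~18 hours)
t_light = rod_length_m / c_light         # roughly 1.3 s

assert t_pulse > t_light   # the mechanical pulse is vastly slower than light
```

So the stop signal traveling through the rod itself arrives tens of thousands of times later than a light signal would.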
{ "domain": "physics.stackexchange", "id": 76740, "tags": "speed-of-light, faster-than-light" }
Finding the Initial Launch Velocity of a Ball Whose Launch Angle is Known and Trajectory Contains a Given Point (Accounting for Air Resistance)
Question: I am working on a project that involves launching a ball through a basket (similar to basketball, but on a smaller scale for now). I haven't started building the project yet, as I wanted to work out all the theory first. However, the system I am thinking of uses a single flywheel configuration, driven by a DC motor, that imparts the launch velocity on a Nerf ball. The launch angle of the ball is controlled by a servo motor. The height of the basket is known, and I am measuring the distance to the basket using a range finder. For the theory, I wanted to work out a general equation that finds the launch velocity of the ball given the launch angle and the X and Y displacement of the basket (Y is the height of the basket and X is the measured distance to the basket). I've already solved this while ignoring air resistance, but since the Nerf balls are very light, I think I should account for air resistance. I already have the equations for X and Y displacement, accounting for air resistance. They are given below: $$ x = v_{x0}\tau(1 - e^{-t/\tau}) $$ $$ y = (v_{y0} + v_t)\tau(1 - e^{-t/\tau}) - v_t t$$ Where: $v_{x0}$ = horizontal component of launch velocity; $v_{y0}$ = vertical component of launch velocity; $t$ = time; $\tau$ = time constant (defined by $\frac{m}{k}$, where $m$ is the mass of the ball and $k$ is the constant defined by the drag equation, i.e. $k = \frac{\rho AC_d}{2}$). (I got the above equations from this video, and the equations assume linear air resistance. Please let me know if I should use quadratic air resistance instead.) Now, for the project, I want to figure out a way to use the above equations to calculate the initial launch velocity needed to launch the ball given the X and Y displacement of the basket and the initial launch angle.
I rearranged the above equations into the below equations to find the initial components: $$ v_{x0} = \frac{x}{\tau(1-e^\frac{-t}{\tau})} $$ $$ v_{y0} = \frac{y + v_tt}{\tau(1-e^\frac{-t}{\tau})}-v_t $$ So, can I use the above equations to solve for my initial velocity? I'm a bit worried that the equations don't fully define the parameters of my trajectory in order to solve for the initial launch velocity. Also, is there a better way to account for air resistance while keeping the math simpler? Again, I was able to figure it out using a regular parabolic trajectory, which was relatively simple math. Are there some tricks that could let me adjust for air resistance without going through complicated math? Answer: First of all, you should write the explicit form of $v_t$ in your second equation, $$v_t=v_{y0}e^{-t/\tau}-(1-e^{-t/\tau})g\tau.$$ Now, add this equation to your system: $$\frac{v_{y0}}{v_{x0}}=\tan\theta,$$ where $\theta$ is the launch angle. You have 3 equations and 3 unknowns, $v_{x0}$, $v_{y0}$ and $t$ (the time of the impact). You need to solve (probably numerically) for $v_{x0}$ and $v_{y0}$ and compute the launch velocity, $$v_0=\sqrt{v_{x0}^2+v_{y0}^2}.$$
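Taking the standard linear-drag identification $v_t = g\tau$ (the terminal velocity), the angle constraint $v_{y0} = v_{x0}\tan\theta$ lets the three equations collapse to a single monotone equation in the flight time $t$, solvable by plain bisection — no library solver needed. A sketch (the function name and numeric values are illustrative):

```python
import math

def launch_speed(x, y, theta, tau, g=9.81):
    """Launch speed needed to pass through (x, y) at angle theta (radians)
    under linear drag with time constant tau, taking v_t = g * tau.
    Eliminating v_x0 via the x-equation leaves one equation in t:
        f(t) = x*tan(theta) - y + g*tau^2*(1 - exp(-t/tau)) - g*tau*t = 0
    f starts positive (if the angle aims above the target) and is strictly
    decreasing, so bisection finds the root."""
    tan_th = math.tan(theta)
    if x * tan_th <= y:
        raise ValueError("launch angle does not aim above the target")

    def f(t):
        return x * tan_th - y + g * tau * tau * (1.0 - math.exp(-t / tau)) - g * tau * t

    lo, hi = 1e-9, 1.0
    while f(hi) > 0.0:          # grow the bracket until f changes sign
        hi *= 2.0
    for _ in range(200):        # bisection
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    t = 0.5 * (lo + hi)

    v_x0 = x / (tau * (1.0 - math.exp(-t / tau)))
    return v_x0 / math.cos(theta), t
```

The returned pair is (launch speed, flight time); substituting them back into the two trajectory equations reproduces the target coordinates.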
{ "domain": "physics.stackexchange", "id": 67885, "tags": "homework-and-exercises, newtonian-mechanics, kinematics, projectile, drag" }
Entity Component System Using C++
Question: Introduction I'm new to C++, so please go easy on me :) I am currently working on a low-level game engine using C++, OpenGL, and GLFW; I've implemented the Event System and the Input Manager, which you can check out here: Improved Event System & InputManager Using C++. Graph For clarity, I've included a graph: Components Entity: This is a simple type alias for size_t. Each entity in the system is represented by a unique Entity ID. Component: This is an abstract base class for all components. Components are the data that entities can possess. System: This is an abstract base class for all systems. Systems are where the logic of the game lives. Each system operates on entities that have a specific set of components. For example, a MovementSystem might operate on all entities that have both PositionComponent and VelocityComponent. BaseComponentPool and ComponentPool: These classes manage the storage of components. Each type of component has its own ComponentPool. The BaseComponentPool is an interface that allows the ComponentManager to store a collection of ComponentPool objects of different types. ComponentManager: This class manages all the ComponentPools. It provides methods to add and retrieve components to/from entities. EntityManager: This class manages the creation and destruction of entities. SystemContext: This class provides the context for systems to operate. It provides access to the EntityManager and ComponentManager, allowing systems to query entities and their components. SystemManager: This class manages all the systems. It provides methods to add, remove, enable, and disable systems. It also provides a method to update all systems, which is typically called once per game loop. ECSManager: This is the main interface to the ECS. It provides methods to create and destroy entities, add components to entities, and add systems. It also provides a method to update all systems, which is typically called once per game loop.
Control Flow The flow of the ECS is as follows: Entities are created using the ECSManager. Components are added to entities using the ECSManager, which delegates to the ComponentManager. Systems are added using the ECSManager, which delegates to the SystemManager. Each game loop, the ECSManager's updateSystems method is called, which delegates to the SystemManager's updateSystems method. Each system is updated, operating on the entities and their components as needed. Concerns Question: Should I replace the simple Entity = size_t, with an Object Entity? This would improve type safety but would also reduce speed and increase complexity, since now every call to get the Entity's ID would be through the interface of the object. Problem: This design does not adhere to the dependency inversion principle, because SystemContext, which is a high-level object, is exported to the Systems (which are low-level). Normally, this would be solved by using interfaces, but not here, because the SystemContext object consists of templated methods, which cannot be virtual. However, I do not think this is a mark of bad design because the DIP is just a principle, not a hard-rule. Any feedback/suggestions would be very much appreciated.
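For orientation, the control flow above fits in a few dozen lines when stripped of the C++ machinery; a language-agnostic sketch (Python here purely for brevity; every name is illustrative and deliberately simpler than the real design — dict-based pools instead of type-indexed ComponentPools, no exceptions):

```python
class MiniECS:
    """Toy sketch of the described flow: create entities, attach
    components, query by component set, run systems each frame."""

    def __init__(self):
        self.next_id = 0
        self.free = []        # recycled IDs, like EntityManager's freeEntities
        self.pools = {}       # component type -> {entity_id: component}
        self.systems = []     # callables run once per update

    def create_entity(self):
        if self.free:
            return self.free.pop()
        self.next_id += 1
        return self.next_id - 1

    def destroy_entity(self, e):
        for pool in self.pools.values():   # drop all of e's components
            pool.pop(e, None)
        self.free.append(e)                # recycle the ID

    def add_component(self, e, component):
        self.pools.setdefault(type(component), {})[e] = component

    def entities_with(self, *types):
        # intersection query, like getEntitiesWithComponents
        return [e for e in range(self.next_id)
                if all(e in self.pools.get(t, {}) for t in types)]

    def update(self, dt):
        for system in self.systems:        # "updateSystems", once per loop
            system(self, dt)
```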
Source Code Entity.h #pragma once using Entity = size_t; Component.h #pragma once class Component { public: virtual ~Component() = default; }; ComponentPoolExceptions.h #pragma once #include <stdexcept> #include "Entity.h" class ComponentEntityNotFoundException : public std::runtime_error { public: ComponentEntityNotFoundException(Entity entity, const std::string& componentType) : std::runtime_error(entity + " not found in component pool: " + componentType) { } }; ComponentPool.h #pragma once #include <stdexcept> #include <vector> #include <memory> #include "Component.h" #include "ComponentPoolExceptions.h" class BaseComponentPool { public: virtual ~BaseComponentPool() = default; virtual void destroyEntityComponent(Entity entity) = 0; virtual bool hasComponent(Entity entity) const = 0; }; template <typename ComponentType> class ComponentPool : public BaseComponentPool { public: void addComponent(Entity entity, std::unique_ptr<ComponentType> component) { if (entity >= pool.size()) pool.resize(entity + 1); pool[entity] = std::move(component); } void destroyEntityComponent(Entity entity) override { if (entity < pool.size()) pool[entity].reset(); } const ComponentType& getComponent(Entity entity) const { if (entity >= pool.size() || pool[entity] == nullptr) throw ComponentEntityNotFoundException(entity, typeid(ComponentType).name()); return *pool[entity]; } bool hasComponent(Entity entity) const override { return entity < pool.size() && pool[entity] != nullptr; } private: std::vector<std::unique_ptr<ComponentType>> pool; }; ComponentManager.h #pragma once #include <typeindex> #include "ComponentPool.h" class ComponentManager { public: template<typename ComponentType, typename... Args> void addComponent(Entity entity, Args&&... 
args) { auto component = std::make_unique<ComponentType>(std::forward(args)...); getComponentPool<ComponentType>().addComponent(entity, std::move(component)); } template<typename ComponentType> const ComponentType& getComponent(Entity entity) { return getComponentPool<ComponentType>().getComponent(entity); } template<typename ComponentType> bool hasComponent(Entity entity) { return getComponentPool<ComponentType>().hasComponent(entity); } void destroyEntityComponents(Entity entity) { for (auto& [type, pool] : componentPools) pool->destroyEntityComponent(entity); } private: template<typename ComponentType> ComponentPool<ComponentType>& getComponentPool() { std::type_index typeIndex(typeid(ComponentType)); auto it = componentPools.find(typeIndex); if (it == componentPools.end()) { auto newPool = std::make_unique<ComponentPool<ComponentType>>(); it = componentPools.emplace(typeIndex, std::move(newPool)).first; } return static_cast<ComponentPool<ComponentType>&>(*it->second); } private: std::unordered_map<std::type_index, std::unique_ptr<BaseComponentPool>> componentPools; }; EntityExceptions.h #pragma once #include <stdexcept> #include <string> #include "Entity.h" class EntityOutOfBoundsException : std::runtime_error { public: EntityOutOfBoundsException(Entity entity) : runtime_error("Entity: " + std::to_string(entity) + " is out of bounds!") { } }; EntityManager.h #pragma once #include <queue> #include <unordered_set> #include "Entity.h" class EntityManager { public: Entity createEntity(); void destroyEntity(Entity entity); const std::unordered_set<Entity>& getActiveEntities() const { return activeEntities; } private: std::queue<Entity> freeEntities; std::unordered_set<Entity> activeEntities; Entity nextEntity = 0; }; EntityManager.cpp #include "EntityManager.h" #include "EntityExceptions.h" Entity EntityManager::createEntity() { Entity entity; if (!freeEntities.empty()) { entity = freeEntities.front(); freeEntities.pop(); } else { entity = nextEntity++; } 
activeEntities.insert(entity); return entity; } void EntityManager::destroyEntity(Entity entity) { if (activeEntities.find(entity) == activeEntities.end()) throw EntityOutOfBoundsException(entity); freeEntities.push(entity); activeEntities.erase(entity); } SystemContext.h #pragma once #include "ComponentManager.h" #include "EntityManager.h" class SystemContext { public: SystemContext(EntityManager& EntityManager, ComponentManager& componentManager) : EntityManager(EntityManager), componentManager(componentManager) { } template<typename ComponentType> requires std::derived_from<ComponentType, Component> const ComponentType& getComponent(Entity entity) { return componentManager.getComponent<ComponentType>(entity); } template<typename ComponentType> requires std::derived_from<ComponentType, Component> bool hasComponent(Entity entity) { return componentManager.hasComponent<ComponentType>(entity); } template<typename... ComponentTypes> requires (std::derived_from<ComponentTypes, Component> && ...) 
std::vector<Entity> getEntitiesWithComponents() { std::vector<Entity> entities; auto& activeEntites = EntityManager.getActiveEntities(); for (const auto& entity : activeEntites) if ((componentManager.hasComponent<ComponentTypes>(entity) && ...)) entities.push_back(entity); return entities; } private: EntityManager& EntityManager; ComponentManager& componentManager; }; System.h #pragma once #include "SystemContext.h" class System { public: virtual ~System() = default; virtual void onAdded() = 0; virtual void update(float deltaTime, SystemContext& context) = 0; virtual void onRemoved() = 0; void enable(bool enabled) { this->enabled = enabled; } bool isEnabled() const { return enabled; } private: bool enabled = true; }; SystemExceptions.h #pragma once #include <stdexcept> class SystemNotFoundException : public std::runtime_error { public: SystemNotFoundException(const std::string& systemType) : std::runtime_error("System not found: " + systemType) {} }; class SystemAlreadyAddedException : public std::runtime_error { public: SystemAlreadyAddedException(const std::string& systemType) : std::runtime_error("System is already added: " + systemType) {} }; SystemManager.h #pragma once #include <list> #include <memory> #include <stdexcept> #include <unordered_map> #include "System.h" #include "SystemExceptions.h" class SystemManager { public: template<typename SystemType, typename... Args> void addSystem(Args&&... 
args) { std::type_index typeIndex(typeid(SystemType)); if (systemLookup.find(typeIndex) != systemLookup.end()) throw SystemAlreadyAddedException(typeid(SystemType).name()); std::unique_ptr<System> system = std::make_unique<SystemType>(std::forward<Args>(args)...); system->onAdded(); systems.emplace_back(std::move(system)); systemLookup[typeIndex] = systems.size() - 1; } template<typename SystemType> void removeSystem() { std::type_index typeIndex(typeid(SystemType)); auto it = systemLookup.find(typeIndex); if (it == systemLookup.end()) throw SystemNotFoundException(typeid(SystemType).name()); systems[it->second]->onRemoved(); systems.erase(systems.begin() + it->second); systemLookup.erase(it); } template<typename SystemType> bool hasSystem() const { std::type_index typeIndex(typeid(SystemType)); return systemLookup.find(typeIndex) != systemLookup.end(); } template<typename SystemType> void enableSystem(bool enabled) { auto system = getSystem<SystemType>(); system->enabled(enabled); } void updateSystems(float deltaTime, SystemContext& context) const { for (auto& system : systems) if(system->isEnabled()) system->update(deltaTime, context); } private: std::vector<std::unique_ptr<System>> systems; std::unordered_map<std::type_index, size_t> systemLookup; }; ECSManager.h #pragma once #include "SystemManager.h" class ECSManager { public: ECSManager() : context(entityManager, componentManager) { } Entity createEntity() { return entityManager.createEntity(); } void destroyEntity(Entity entity) { componentManager.destroyEntityComponents(entity); entityManager.destroyEntity(entity); } template<typename ComponentType, typename... Args> requires std::derived_from<ComponentType, Component> void addComponent(Entity entity, Args&&... args) { componentManager.addComponent<ComponentType>(entity, std::forward(args)...); } template<typename SystemType, typename... Args> requires std::derived_from<SystemType, System> void addSystem(Args&&... 
args) { systemManager.addSystem<SystemType>(std::forward<Args>(args)...); } template<typename SystemType> requires std::derived_from<SystemType, System> void removeSystem() { systemManager.removeSystem<SystemType>(); } template<typename SystemType> requires std::derived_from<SystemType, System> bool hasSystem() const { return systemManager.hasSystem<SystemType>(); } template<typename SystemType> requires std::derived_from<SystemType, System> void enableSystem(bool enabled) { systemManager.enableSystem<SystemType>(enabled); } void updateSystems(float deltaTime) { systemManager.updateSystems(deltaTime, context); } private: EntityManager entityManager; ComponentManager componentManager; SystemManager systemManager; SystemContext context; }; Answer: About your concerns Question: Should I replace the simple Entity = size_t, with an Object Entity? This would improve type safety but would also reduce speed and increase complexity, since now every call to get the Entity's ID would be through the interface of the object. It would indeed improve type safety, but there is no reason why this would reduce speed, and only perhaps a little complexity is added to the source code. Consider: struct Entity { std::size_t id; }; Now consider how we have to rewrite, for example, hasComponent() to use this: bool hasComponent(Entity entity) const override { return entity.id < pool.size() && pool[entity.id] != nullptr; } Nothing changed except the addition of a few .ids. Also, since this struct only contains a std::size_t, the assembly code generated by the compiler will be exactly the same as when using Entity = std::size_t, even with optimization disabled! Problem: This design does not adhere to the dependency inversion principle, because SystemContext, which is a high-level object, is exported to the Systems (which are low-level). Normally, this would be solved by using interfaces, but not here, because the SystemContext object consists of templated methods, which cannot be virtual.
However, I do not think this is a mark of bad design because the DIP is just a principle, not a hard-rule. As long as you only depend on the public interface, not the internals, I think it's fine. You don't need to make an abstract base class for everything. Technically speaking, you can still replace SystemContext with a completely different implementation, and if it has the same interface, your code will still compile fine. The only drawback of not having an abstract base class is that you cannot call System::update() with different types of derived SystemContexts. You probably could make it work by using type erasure, but that would require drastic changes to your code base. Unused class Component What does class Component actually do? I don't see it used in any of the code. It also only has a virtual destructor, no other members, so the only thing you could use it for is to store pointers-to-base in a std::vector. But you're not doing that in your code. I would just remove it, otherwise it will just add dead weight to classes that inherit from it. ComponentPool can store components without using std::unique_ptr Since an instance of ComponentPool will only ever store components of the same type, there is no need to use std::unique_ptr to store them in a std::vector, you could just write: std::vector<ComponentType> pool; Of course, that only makes sense if ComponentPool::addComponent() gets the component as a raw value or reference, but that's easy: just make ComponentManager::addComponent() pass a component by value. Or even better: have it forward Args...: class ComponentManager { public: template<typename ComponentType, typename... Args> void addComponent(Entity entity, Args&&... args) { getComponentPool<ComponentType>().addComponent( entity, std::forward<Args>(args)... ); } … }; template <typename ComponentType> class ComponentPool: public BaseComponentPool { public: template<typename... Args> void addComponent(Entity entity, Args&&... 
args) { if (entity >= pool.size()) pool.resize(entity + 1); pool[entity] = ComponentType(std::forward<Args>(args)...); } … private: std::vector<ComponentType> pool; }; One issue now is how to have ComponentPool know whether a given entity actually has a valid component associated with it. You could keep track of that in a separate data structure, like a std::vector<bool> hasComponent, or you can store std::optional<ComponentType>s in the std::vector pool. Another reason not to do this is if ComponentType is a large type: this would waste space in the pool for entities that don't have that component associated with them. But more about that below. Avoid temporary memory allocations It looks like in a system's update() function, it's supposed to call context.getEntitiesWithComponents() to get a std::vector<Entity> with all the entities that have the required components. This requires several memory allocations, as that std::vector is being grown dynamically. Then the caller has to loop over that vector to do its update thing on each entity in that std::vector. However, that is unnecessary; consider instead having a way to pass a function to SystemContext that it will apply to any matching entities: template<typename... ComponentTypes> requires (std::derived_from<ComponentTypes, Component> && ...) void updateEntitiesWithComponents(std::function<void(Entity)> update) { auto& activeEntites = entityManager.getActiveEntities(); for (const auto& entity : activeEntites) if ((componentManager.hasComponent<ComponentTypes>(entity) && ...)) update(entity); } Performance The code looks clean and functionally correct. However, especially since Entity Component Systems were designed to handle lots of objects, you might want to think about performance, both in terms of CPU efficiency and memory efficiency. Consider for example something like activeEntities. This is a std::unordered_set<Entity>. You might think that since it has \$O(1)\$ insertions and lookups, it will be fast.
However, each insertion will cause memory to be allocated for a single Entity and some associated bookkeeping for the hash table. All the allocations might be spread around instead of being close together in memory. When doing a lookup, a hash value has to be calculated and pointers have to be followed. What if you had used a std::vector<bool> instead to keep track of which entity IDs are active? With most C++ standard library implementations, this uses only one bit per entity ID, and all those bits are stored sequentially in memory. Doing an insertion and lookup is just a few assembler instructions. The only drawback is that if you have very few active entities compared to the largest entity ID, then you might be wasting more memory than std::unordered_set, although you'll probably never reach that point in practice in a well-designed game. I already mentioned that you can avoid the need for std::unique_ptr in ComponentPool, and the need for temporary storage when updating systems. However, there still is the issue that each time update() is called, it has to scan through all entities to find out which ones it should update. And the set of entities to update might not even have changed since the last time update() was called. Ideally, a System has a densely packed std::vector<Entity> of all the entities belonging to that system. In that case, each time update() is called it can just loop over that vector, without having to do any other checks. This would require some bookkeeping though, and might increase the overhead of adding and removing entities from the ECS. Related to that, it would also be nice if ComponentPools would have densely packed std::vector<ComponentType>s to store the components. When updating a system with only one component, you can then just linearly go through one such vector. For systems with multiple components it's not that easy, but you'd still save memory by densely packing components.
The drawback of course is that you need some other forms of bookkeeping to keep track of which entities have which components. Consider using static inline template member variables It's quite likely that you only need one ECS in a given program. In that case, you can make use of static inline template member variables, which have been possible since C++17. This is how it looks: class ComponentManager { public: template<typename ComponentType, …> void addComponent(Entity entity, …) { componentPools<ComponentType>.addComponent(entity, …); } … private: template<typename ComponentType> static inline ComponentPool<ComponentType> componentPools; }; The static and template part are just how they always worked. The inline ensures that even if multiple source files cause instantiations of pools of the same ComponentType, they will be merged at link time (see also the One Definition Rule). Take inspiration from existing ECS frameworks You might want to look at some existing ECS frameworks to get some inspiration for the interface and/or performance optimizations. A well known framework for C++ is EnTT.
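The "densely packed vector plus separate bookkeeping" idea sketched above is what EnTT calls a sparse set. Here is a minimal illustration of the technique, written in Python for brevity rather than C++ (the structure translates directly); the names are illustrative, and the swap-and-pop trick is the part that keeps the component array gap-free in O(1):

```python
class SparseSet:
    """Dense component storage keyed by entity id (a 'sparse set').

    `dense` packs all live components contiguously (cache-friendly
    iteration); `entities[i]` records which entity owns `dense[i]`;
    `sparse[entity]` maps an entity id back to its dense index.
    """

    def __init__(self):
        self.dense, self.entities, self.sparse = [], [], {}

    def add(self, entity, component):
        self.sparse[entity] = len(self.dense)
        self.dense.append(component)
        self.entities.append(entity)

    def remove(self, entity):
        # Swap-and-pop: move the last component into the freed slot so
        # the dense array stays contiguous; everything is O(1).
        i = self.sparse.pop(entity)
        last = len(self.dense) - 1
        if i != last:
            self.dense[i] = self.dense[last]
            moved = self.entities[last]
            self.entities[i] = moved
            self.sparse[moved] = i
        self.dense.pop()
        self.entities.pop()

    def has(self, entity):
        return entity in self.sparse

    def get(self, entity):
        return self.dense[self.sparse[entity]]


pool = SparseSet()
for e, c in [(7, "pos-7"), (3, "pos-3"), (9, "pos-9")]:
    pool.add(e, c)
pool.remove(3)  # the dense array stays gap-free, no scan needed
```

A system that owns one of these can iterate `dense` linearly with no per-entity checks; a C++ version would typically replace the dict with a (possibly paged) `std::vector` of indices.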
{ "domain": "codereview.stackexchange", "id": 44870, "tags": "c++, object-oriented, design-patterns, entity-component-system, constrained-templates" }
What initiates primase to add an RNA primer to a DNA strand and what makes it stop?
Question: What initiates primase to add an RNA primer to a DNA strand and what makes it stop adding RNA nucleotides? Are there tags added to the DNA backbone? Answer: Prokaryotic primases are activated by DNA helicase [1, 2] while the eukaryotic ones are triggered when they form a complex with DNA polymerase alpha and its accessory B subunit [2]. I couldn't find too much information about what exactly triggers activation, but according to De Falco M et al. (2004): [...] synthetic function (of the prokaryotic primase) is specifically activated by thymine-containing synthetic bubble structures that mimic early replication intermediates. [3] It stops when it finishes reading a DNA template: The Sso DNA primase utilizes poly-pyrimidine single-stranded DNA templates with low efficiency for de novo synthesis of RNA primers [3]. References: Wikipedia contributors, "Primase," Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/w/index.php?title=Primase&oldid=593928284 (accessed December 1, 2014). David N. Frick, and Charles C. Richardson. DNA PRIMASES. Annual Review of Biochemistry. Vol. 70: 39-80 (Volume publication date July 2001). DOI: 10.1146/annurev.biochem.70.1.39 De Falco M, Fusco A, De Felice M, Rossi M, Pisani FM. The DNA primase of Sulfolobus solfataricus is activated by substrates containing a thymine-rich bubble and has a 3'-terminal nucleotidyl-transferase activity. Nucleic Acids Res. 2004 Sep 30;32(17):5223-30. Print 2004.
{ "domain": "biology.stackexchange", "id": 3128, "tags": "biochemistry, molecular-genetics" }
PARITY $\notin$ $AC^0$ with bounded fanout: easy proof?
Question: $AC^0$ is the class of constant-depth polynomial-size circuits with NOT gates and unbounded fan-in AND and OR gates, where inputs and gates also have unbounded fanout. Now consider a new class, call it $AC^0_{bf}$, which is like $AC^0$ but for which inputs and gates have fanout at most $O(1)$. This class is clearly in $AC^0$. In fact, it is strictly contained in $AC^0$, as noted here. Therefore, PARITY is obviously not in $AC^0_{bf}$. Is there a proof of PARITY $\notin AC^0_{bf}$ which does not also go through for $AC^0$? In other words, is there a proof which does not use powerful techniques like the switching lemma or the Razborov/Smolensky method? Answer: I might be missing something, but isn't $AC^0_{bf}$ the same as a formula? Since every input bit can have an effect on at most a bounded number of gates, we can simply suppose that every gate has only one output (after possibly duplicating a few things), and we can push down NOT gates as well. We know that the formula size of parity is $n^2$ (see Troy J. Lee, "The formula size of PARITY", 2007), and since on every level of our circuit we can only have $O(n)$ gates, this shows that parity is not in $AC^0_{bf}$.
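The counting behind the last sentence can be made explicit. A sketch (my own elaboration, not part of the original answer), writing $c$ for the fanout bound and $d$ for the depth, both $O(1)$:

```latex
% Unfold the circuit into a tree, i.e. a formula. A leaf of the tree is an
% occurrence of some input x_i, which corresponds to a path from x_i to the
% output gate. Every vertex on such a path has fanout at most c and the path
% has length at most d, so each input occurs at most c^d times:
L(\mathrm{PARITY}_n) \;\le\; n \cdot c^{d} \;=\; O(n),
% which contradicts the quadratic formula-size lower bound for parity
% (Khrapchenko; see also the Lee reference above):
L(\mathrm{PARITY}_n) \;\ge\; n^{2}.
```

Note that rewriting an unbounded fan-in gate as a tree of binary gates does not change the leaf count, so the quadratic leaf-count lower bound still applies.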
{ "domain": "cstheory.stackexchange", "id": 990, "tags": "cc.complexity-theory, circuit-complexity" }
Balloon from float to sink?
Question: Every time I walk by the balloon stall, I see that they use helium to make the balloons float in the air. But my problem is why, when I leave a balloon aside for a while, it sinks to the floor. Does anything happen to the helium inside? Answer: When the balloon is fully inflated with helium, its density (mass/volume) is less than that of the air around it, allowing it to float. Over time, some of the helium will "leak" out of the balloon. As the helium is released the mass of the balloon decreases slightly, but the volume decreases at a much faster rate, resulting in a balloon with a density greater than that of the air around it, which is why it falls to the ground.
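The density argument is easy to put numbers on. A rough sketch with illustrative values (a ~2 g latex skin and approximate room-temperature densities of air and helium; all numbers are assumptions, not measurements):

```python
import math

RHO_AIR = 1.20   # kg/m^3, air at ~20 degrees C (approximate)
RHO_HE = 0.18    # kg/m^3, helium at ~20 degrees C (approximate)
M_SKIN = 0.002   # kg, mass of the latex skin (assumed)

def net_lift(radius_m):
    """Buoyant force minus weight, in newtons. Positive => balloon rises."""
    g = 9.81
    volume = 4 / 3 * math.pi * radius_m ** 3
    buoyancy = RHO_AIR * volume * g
    weight = (M_SKIN + RHO_HE * volume) * g
    return buoyancy - weight

fresh = net_lift(0.12)     # freshly inflated, 12 cm radius: positive lift
deflated = net_lift(0.07)  # after helium has leaked, 7 cm radius: negative
```

The skin's mass barely changes, but the displaced volume shrinks as the cube of the radius, so the balloon's average density eventually crosses that of air and the sign of the net force flips.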
{ "domain": "chemistry.stackexchange", "id": 2670, "tags": "everyday-chemistry, noble-gases" }
If sound travels through matter what medium does light travel through?
Question: So sound is a wave and is basically just vibrations: an atom vibrates, causing the one next to it to vibrate, and so on until it finally reaches our ears to become sound. If that's normally how waves behave, what about light? I understand it's also a particle, but something must have caused its starting point to influence its next point in space, thereby allowing it to travel through space. What is the cause-and-effect relationship that allows light to propagate through space? I ruled out matter because most visible light is blocked by matter. Answer: The process of light propagation is described by the Maxwell equations. $$ \nabla\cdot{\bf D} = \rho $$ $$ \nabla\cdot{\bf B} = 0 $$ $$ \nabla\times{\bf E} = - {{\partial{\bf B}}\over{\partial t}} $$ $$ \nabla\times{\bf H} = {\bf J} + {{\partial{\bf D}}\over{\partial t}} $$ These equations say (in simple terms) that a change in the electric field causes a change in the magnetic field, while a change in the magnetic field causes a change in the electric field. The original source of electromagnetic waves is some oscillating charge (for instance an electron), which has an electric field around it. This field is changing (because the charge is oscillating). Therefore (according to the fourth equation) a magnetic field $H$ is formed. But the creation of a magnetic field is in fact a change in magnetic field. This leads (according to the third equation) to the creation of a new electric field $E$. But this change in $E$ leads to $H$, which leads to $E$, etc.
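This mutual "E drives H, H drives E" bootstrapping can be watched numerically with a textbook 1D finite-difference time-domain (FDTD) scheme. A bare-bones sketch in normalized units (the grid size, pulse shape and Courant factor of 0.5 are arbitrary choices for illustration):

```python
import math

n, steps, c = 200, 300, 0.5  # cells, time steps, Courant number
E = [0.0] * n                # electric field samples
H = [0.0] * n                # magnetic field samples (staggered grid)

for t in range(steps):
    # Faraday's law in 1D: the spatial variation of E updates H ...
    for i in range(n - 1):
        H[i] += c * (E[i + 1] - E[i])
    # ... and the Ampere-Maxwell law: the spatial variation of H updates E.
    for i in range(1, n):
        E[i] += c * (H[i] - H[i - 1])
    # Soft source near the left edge: a brief "oscillating charge".
    E[10] += math.exp(-((t - 30) / 8) ** 2)
```

After the source has died out, nothing keeps pushing the pulse, yet it keeps travelling across the grid: at each step the two curl equations regenerate one field from the other, which is exactly the hand-off described above.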
{ "domain": "physics.stackexchange", "id": 16009, "tags": "electromagnetic-radiation, visible-light, aether" }
How many kilograms of graphite are consumed while producing 4 kg of Al through Hall-Heroult process?
Question: How many kilograms of graphite are consumed while producing 4 kg of $\ce{Al}$ through Hall-Heroult process? This looks like a simple stoichiometry problem: $$\ce{4Al^3+ + 12e- -> 4Al} \; \; \; \; \; \text{(at cathode)} \\ \ce{3C ->[O2] 3CO2 + 12e^-} \; \; \; \; \; \text{(at anode)}$$ For every $4$ moles of $\ce{Al}$ consumed, $3$ moles of $C$ are consumed. The mass of graphite consumed is thus: $$M = \frac{4000}{27} \times \frac{3}{4} \times 12 = \pu{1.33 kg}$$ However, my textbook mentions the answer to be $\pu{2 kg}$, along with the following explanation: for every kilogram of $\ce{Al}$ produced, approximately $\pu{0.5 kg}$ of graphite is consumed. Why is this so? Could anyone provide a valid reference? Answer: According to Aluminium Production1: The production of 1 ton of aluminum typically requires 420 kg of carbon, 1920 kg $\ce{Al2O3}$, and 16 kg $\ce{AlF3}$ .... The theoretical carbon consumption is only $\pu{333 kg C/t} \space \ce{Al}$. The excess carbon consumption is mainly due to air oxidation of the hot anodes in the hooded cells. The reaction occurs preferentially in the pitch binder matrix and leads to the physical loss of coke particles to generate dust that floats on top of the electrolyte. Theoretically, your answer holds good. However, it fails practically. It's a stupid tricky question by the problem setters, who are apparently testing your knowledge of metallurgy rather than stoichiometry. References: Tabereaux, Alton T., and Ray D. Peterson. “Aluminum Production.” Treatise on Process Metallurgy, Elsevier, 2014, pp. 839–917. doi:10.1016/B978-0-08-096988-6.00023-7
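The gap between the two answers is just the stoichiometric minimum versus the industrial consumption figure quoted from the reference. A quick check (the 420 kg of carbon per tonne of Al is the cited practical figure):

```python
M_AL, M_C = 27.0, 12.0     # molar masses, g/mol
mass_al = 4000.0           # g of aluminium produced

mol_al = mass_al / M_AL    # 4 Al^3+ + 12 e- -> 4 Al at the cathode
mol_c = mol_al * 3 / 4     # 3 C -> 3 CO2 + 12 e-: 3 mol C per 4 mol Al
theoretical_kg = mol_c * M_C / 1000  # stoichiometric graphite, in kg

industrial_kg = 4.0 * 0.420  # ~420 kg C consumed per tonne Al in practice
```

The stoichiometric answer is about 1.33 kg; real cells burn roughly 0.42 kg of carbon per kg of Al (air oxidation and dusting of the anodes), and a textbook rounding that up to "0.5 kg per kg" arrives at 2 kg.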
{ "domain": "chemistry.stackexchange", "id": 14044, "tags": "physical-chemistry, stoichiometry, metallurgy" }
Console-like printing of message JavaScript
Question: I'm learning JavaScript and I've made a very simple little document-modifying script that prints a message as if it was someone typing into a console. It practices recursion and in my opinion, I feel a bit iffy about the number of if-cases in the code. My main concern is the JavaScript though I know there's still a lot I have to learn about CSS animations. var text = document.getElementById('text'); var greeting = ['Hello. I am a console wannabe.', 'All systems are functioning.', 'I like pie.']; text.innerHTML = '<i>▮</i>'; (function greet() { if (greeting.length > 0 && greeting.length < 3) { text.insertBefore(document.createElement('br'), text.lastChild); } var line = greeting.shift(); if (!line) { return; } line = line.split(''); (function type() { var character = line.shift(); if (!character) { return setTimeout(greet, 2000); } text.insertBefore(document.createTextNode(character), text.lastChild); setTimeout(type, 300); }()); }()); body{ background-color: #000000; color: #99ffcc; font-family: Courier; } i{ font-style: unset; font-size: 1em; animation: blink 1100ms linear infinite; } @keyframes blink { 49% { opacity: 1; } 50% { opacity: 0; } 89% { opacity: 0; } 90% { opacity: 1; } } <div id="text"></div> I would like as many opinions as you can have about my simple little script. Does it look like I have a small grasp of JavaScript? Answer: Your code is really easy to understand, and there's not much wrong with it. I promise I'll go easy on you. Right on the first lines, something bothers me... You have this: var text = document.getElementById('text'); var greeting = ['Hello. I am a console wannabe.', 'All systems are functioning.', 'I like pie.']; text.innerHTML = '<i>▮</i>'; What's wrong? The names! text implies some actual text, but you have a <div>. greeting is alright, but it is actually text. And that <i> is a prompt. I propose the following rewrite: var screen = document.getElementById('screen'); var text = ['Hello. 
I am a console wannabe.', 'All systems are functioning.', 'I like pie.']; screen.innerHTML = '<b id="prompt" class="idle">&marker;</b>'; It all makes sense, except for the class, which I will explain later on. And please, use the HTML entities, to avoid problems with file encodings. Also, don't use <i> for this, since it will have problems when the animation is stopped. The best would be to move the prompt to the HTML and ignore that line. Let's analyse the function greet(). You have the following code: (function greet() { if (greeting.length > 0 && greeting.length < 3) { text.insertBefore(document.createElement('br'), text.lastChild); } var line = greeting.shift(); if (!line) { return; } line = line.split(''); (function type() { var character = line.shift(); if (!character) { return setTimeout(greet, 2000); } text.insertBefore(document.createTextNode(character), text.lastChild); setTimeout(type, 300); }()); }()); Huh? The greet() function is preparing to type the text? And only works for this example? Well, let me re-write this for you. function type(text, screen) { //You have to check for lines and if the screen is an element if(!text || !text.length || !(screen instanceof Element)) { return; } //if it is not a string, you will want to make it into one if('string' !== typeof text) { text = text.join('\n'); } //normalize newlines, and split it to have a nice array text = text.replace(/\r\n?/g,'\n').split(''); //the prompt is always the last child var prompt = screen.lastChild; prompt.className = 'typing'; var typer = function(){ var character = text.shift(); screen.insertBefore( //newlines must be written as a `<br>` character === '\n' ? document.createElement('br') : document.createTextNode(character), prompt ); //only run this again if there are letters if( text.length ) { setTimeout(typer, 300); } else { prompt.className = 'idle'; } }; setTimeout(typer, 300); }; Wow, some fine German Overengineering going on there!
You can pass it a string or an array of strings, and a 'screen', where the message will be added. Now, enough of JavaScript, let's talk about CSS! It is a pretty straightforward CSS, without many complications. But, remember that it suffered a rewrite. Replace your i CSS with this: #prompt{ font-style: unset; font-size: 1em; } #prompt.idle { animation: blink 1100ms linear infinite; -webkit-animation: blink 1100ms linear infinite; } On your HTML, simply change the id. Remember, we re-wrote it: <div id="screen"><b id="prompt" class="idle">&marker;</b></div> Final result: And now, all together: function type(text, screen) { //You have to check for lines and if the screen is an element if(!text || !text.length || !(screen instanceof Element)) { return; } //if it is not a string, you will want to make it into one if('string' !== typeof text) { text = text.join('\n'); } //normalize newlines, and split it to have a nice array text = text.replace(/\r\n?/g,'\n').split(''); //the prompt is always the last child var prompt = screen.lastChild; prompt.className = 'typing'; var typer = function(){ var character = text.shift(); screen.insertBefore( //newlines must be written as a `<br>` character === '\n' ? document.createElement('br') : document.createTextNode(character), prompt ); //only run this again if there are letters if( text.length ) { setTimeout(typer, 300); } else { prompt.className = 'idle'; } }; setTimeout(typer, 300); }; window.onload=function(){ var screen = document.getElementById('screen'); var text = [ 'Hello. I am a better console wannabe.', 'All systems are functioning.', 'I like pie.'
]; type(text, screen); }; body { background-color: #000000; color: #99ffcc; font-family: Courier; } #prompt { font-style: unset; font-size: 1em; } #prompt.idle { animation: blink 1100ms linear infinite; -webkit-animation: blink 1100ms linear infinite; } @keyframes blink { 49% {opacity: 1;} 50% {opacity: 0;} 89% {opacity: 0;} 90% {opacity: 1;} } @-webkit-keyframes blink { 49% {opacity: 1;} 50% {opacity: 0;} 89% {opacity: 0;} 90% {opacity: 1;} } <div id="screen"><b id="prompt" class="idle">&marker;</b></div> To resemble a console a little more, the animation is only applied when there's no typing. With minor changes, it is possible to make it work with keypresses on the keyboard. And with a write queue. But that's left as an exercise for the O.P. Also, one cool thing you can try is to use a random timeout, to create a varying effect on the writing, to simulate real typing! You can even add an extra interval to special characters, to be more human-like: function type(text, screen) { //You have to check for lines and if the screen is an element if(!text || !text.length || !(screen instanceof Element)) { return; } //if it is not a string, you will want to make it into one if('string' !== typeof text) { text = text.join('\n'); } //normalize newlines, and split it to have a nice array text = text.replace(/\r\n?/g,'\n').split(''); //the prompt is always the last child var prompt = screen.lastChild; prompt.className = 'typing'; var typer = function(){ var character = text.shift(); screen.insertBefore( //newlines must be written as a `<br>` character === '\n' ?
document.createElement('br') : document.createTextNode(character), prompt ); //only run this again if there are letters if( text.length ) { var delay, next = text[0]; //based on a qwerty pt-PT keyboard, these delays are subjective if(next.match(/[a-z\d\t\-\.,º]/)){ //fastest characters delay = 50; } else if(next == ' ' || next == '\n' || next.match(/[\\\|\!\"\#\$\%\&\/\(\)\=\?\'»\*ª_:;>A-Z]/)) { //medium-slow keys delay = 100; } else if(next.match(/[\@\€\£\§\{\[\]\}\~\´]/)) { //slow keys delay = 150; } else { //Yes, that slow! delay = 250; } //repeated characters are typed faster if(next == character) { delay -= 25; //reduces the delay by 25 } setTimeout(typer, delay + (Math.random() * 50)); } else { prompt.className = 'idle'; } }; setTimeout(typer, 50 + (Math.random() * 50)); }; window.onload=function(){ var screen = document.getElementById('screen'); var text = [ 'Hello. I am a better console wannabe.', 'All systems are functioning.', 'I like pie.', 'É só pra teste (just for testing).', 'ASCII PARTY!"#$%&/()=?!!!!!' ]; type(text, screen); }; body { background-color: #000000; color: #99ffcc; font-family: Courier; } #prompt { font-style: unset; font-size: 1em; } #prompt.idle { animation: blink 1100ms linear infinite; -webkit-animation: blink 1100ms linear infinite; } @keyframes blink { 49% {opacity: 1;} 50% {opacity: 0;} 89% {opacity: 0;} 90% {opacity: 1;} } @-webkit-keyframes blink { 49% {opacity: 1;} 50% {opacity: 0;} 89% {opacity: 0;} 90% {opacity: 1;} } <div id="screen"><b id="prompt" class="idle">&marker;</b></div>
{ "domain": "codereview.stackexchange", "id": 14743, "tags": "javascript, beginner, css, console, animation" }
Hermitian Conjugation in 2D CFT
Question: When we take the Hermitian conjugate of an operator in $D$ dimensions we have: $$ \mathcal{O}_{flat}(r,\vec{n})^\dagger=\frac{1}{r^{2\Delta}}\mathcal{O}_{flat}\left(\frac{1}{r},\vec{n}\right) $$ where $\vec{n}$ are the "spherical" coordinates of our radial quantisation, $\Delta$ is the weight of our field and $r$ is the radial coordinate. When we consider complex coordinates in 2D CFT this becomes $$ \mathcal{O}(z,\bar{z})^\dagger=\frac{1}{z^{2\bar{h}}}\frac{1}{\bar{z}^{2h}}\mathcal{O}\left(\frac{1}{\bar{z}},\frac{1}{z}\right) $$ where $z,\bar{z}$ are the complex coordinates and the weight of the field is $(h,\bar{h})$. How can I prove the 2D case from the $D$-dimensional case? Note: In the 2D case we use $z=re^{i\theta},r=|z|,\theta=\arg(z)$, the standard complex plane coordinates. Answer: The conjugation formula you have in general dimensions is true only for scalar operators. In $d=2$, this corresponds to an operator with $h={\bar h} = \frac{\Delta}{2}$. We therefore need to prove $$ {\cal O}^\dagger(z,{\bar z}) = \frac{1}{(z {\bar z})^{\Delta} } {\cal O} \left( \frac{1}{ {\bar z} } , \frac{1}{z} \right) \qquad \qquad (1) $$
{ "domain": "physics.stackexchange", "id": 64998, "tags": "homework-and-exercises, operators, conventions, conformal-field-theory" }
Parallel Transport along a surface
Question: I'm currently learning differential geometry for General Relativity. That being said, I would like to know how you would compute the parallel transport of a vector along a surface, $\nabla_{\vec{u}}F =0$, using a symmetric metric: $$g_{\mu\nu} = \begin{pmatrix} g_{11} & 0 & 0\\ 0 & g_{22} & 0 \\ 0 & 0 & g_{33} \end{pmatrix}$$ $$\vec{u} \cdot\nabla F = u_1 \frac{\partial F}{ \partial x^1} g_{11} + u_2 \frac{\partial F}{ \partial x^2} g_{22} + u_3 \frac{\partial F}{ \partial x^3} g_{33} = 0$$ This leaves me with one equation with three parameters. What are the other conditions I would have to impose to get the values of $u$? Answer: This is not the way the parallel transport equation is written. You're rewriting the geometric definition incorrectly, as the directional derivative of a scalar field, but what you should have is 3 equations for the components of the parallel-transported vector field. First of all, we would say that something is parallel transported along a curve or a vector field, not a surface. This is because the parallel transport is defined as happening along a path, not a surface. Of course, your surface can contain many paths, and parallel transport could happen along any of these. Here's how you'd rewrite the equation, supposing that $\mathbf{F}$ is also a vector field: $$ \nabla_{\mathbf{u}}\mathbf{F} = 0 \\[0.5em] u^{\alpha}\nabla_{\alpha}F^{\beta} = 0\\[0.5em] u^{\alpha}\big{[}\partial_{\alpha}F^{\beta} + \Gamma_{\alpha\mu}^{\beta}F^{\mu} \big{]} = 0 $$ The transition from the first to the second line is done by just expressing the covariant derivative of vector field $\mathbf{F}$ in components and making use of the fact that $\nabla$ is $\mathbb{R}$-linear in the first (bottom) argument. Then, we're using the definition of the covariant derivative. To obtain the Christoffel symbols, we use: $$ \Gamma_{\alpha\mu}^{\beta} = \frac{1}{2}g^{\beta\nu}(g_{\alpha\nu,\mu} + g_{\mu\nu,\alpha} - g_{\alpha\mu,\nu} )$$.
Plugging this into the third equality would yield an equation that has to be satisfied in order for the vector field $\mathbf{F}$ to be parallel to $\mathbf{u}$. However, if we want to rephrase the task in the language of parallel transport of a vector ($\mathbf{F}(t=0)$) and looking for its evolution, we re-express the third equality by assuming that $\mathbf{u}$ is affinely parametrized by a parameter $t$. From the very definition of the vector field as a differential operator, it holds that the action of $\mathbf{u}$ on any function can be written as: $$ \mathbf{u}(f) = \frac{d}{dt}(f \circ \gamma)(t) = \frac{d}{dt}(f \circ x^{-1} \circ x \circ \gamma )(t) = \frac{\partial}{\partial x^{\alpha}}(f\circ x^{-1}) \frac{d}{dt}(x^{\alpha} \circ \gamma)(t) \equiv \partial_{\alpha}f \dot{\gamma}^{\alpha} \equiv u^{\alpha} \partial_{\alpha}f$$ In the above, the vector field $\mathbf{u}$ could be defined on the whole manifold, but we're concentrating on one of its integral curves. In the second and third equality I wanted to show how a coordinate expression is derived, using the Leibniz rule on a composition of $f\circ x^{-1}$ and $x \circ \gamma $. We inserted $x^{-1} \circ x = id$, where $x: \mathcal{M} \rightarrow R^{n}$ is the coordinate map. We can therefore treat the expression $u^{\alpha}\partial_{\alpha}F^{\beta}$ as a derivative of the composition $F^{\beta} \circ \gamma$ with respect to parameter $t$. Mind you, of course, that one needs to solve for an integral curve $\gamma$ of the $\mathbf{u}$ that one wishes to perform the parallel transport along. This requires solving the equation $\mathbf{u}(\gamma(t)) = \frac{d\gamma}{dt}$, or in coordinates: $u^{\alpha}(\gamma(t)) = \frac{d\gamma^{\alpha}}{dt}$. 
For example, if $\mathbf{u} = x\partial_{y} - y\partial_{x}$, we'd be solving for $\gamma(t)=(\gamma^{x}(t), \gamma^{y}(t))$ such that: $$ \frac{d\gamma}{dt} = \frac{d\gamma^{x}}{dt}\frac{\partial}{\partial x} + \frac{d\gamma^{y}}{dt}\frac{\partial}{\partial y} $$ which gives: $$ \frac{d\gamma^{x}}{dt} = - \gamma^{y}(t) \\ \frac{d\gamma^{y}}{dt} = \gamma^{x}(t)$$ Taking into account all of the above, the parallel transport equation is expressed as a system of ODEs: $$ \frac{dF^{\beta}(t)}{dt} = -\dot{\gamma}^{\alpha}(t)\Gamma^{\beta}_{\alpha\mu}(t)F^{\mu}(t), \\ F^{\beta}(0) = F^{\beta}_{0}.$$
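The resulting system of ODEs can be integrated directly. A small worked sketch of my own (not part of the original answer): the flat plane in polar coordinates, whose only nonzero Christoffel symbols are $\Gamma^r_{\phi\phi}=-r$ and $\Gamma^\phi_{r\phi}=\Gamma^\phi_{\phi r}=1/r$, transporting a vector once around the circle $r=R$; since the space is flat, the vector must come back to itself:

```python
import math

def transport_around_circle(R, F0, steps=200_000):
    """Integrate dF^b/dt = -gamma'^a Gamma^b_{a m} F^m along
    gamma(t) = (R, t) for t in [0, 2*pi]. With gamma'^phi = 1 the
    transport equations reduce to
        dF^r/dt   =  R * F^phi     (from Gamma^r_{phi phi} = -R)
        dF^phi/dt = -F^r / R       (from Gamma^phi_{phi r} = 1/R)
    integrated here with a plain forward-Euler step.
    """
    Fr, Fp = F0
    dt = 2 * math.pi / steps
    for _ in range(steps):
        Fr, Fp = Fr + dt * R * Fp, Fp - dt * Fr / R
    return Fr, Fp

Fr, Fp = transport_around_circle(R=2.0, F0=(1.0, 0.0))
# flat space has trivial holonomy: (Fr, Fp) returns to approximately (1, 0)
```

On a genuinely curved surface (say, a latitude circle on a sphere) the same integration returns a rotated vector; the mismatch is the holonomy of the loop.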
{ "domain": "physics.stackexchange", "id": 84989, "tags": "general-relativity, differential-geometry, metric-tensor, tensor-calculus" }
Can someone explain conservation laws in terms of state space?
Question: "Whenever a dynamical law divides the state space into separate cycles, there is a memory of which cycle they started in. Such a memory is called a conservation law." —What is meant by this statement? Can someone please explain this with an example? This is from Leonard Susskind's "Theoretical Minimum" on classical mechanics. I asked this before once but no one answered; here is the link to my previous question: What does separate cycles dividing the state-space have to do with Conservation laws? Answer: [this answer is very qualitative - in words rather than numbers. However, some simple computation is present in the hope of making the argument more complete without using complex math. It is also pretty long, so feel free to skim through the talking - but I hope the result is explanatory!] The problem Susskind's example is an extension of the rule of classical mechanics, more in particular of Newton's law $F=ma$ which, given an initial position and velocity, dictate the evolution of the system in a deterministic way. In classical mechanics, systems evolve over time according to Newton's laws. Take the easiest system you can think of: a bouncing ball. You take a ball, you drop it from some height and observe its trajectory: it will go down, hit the ground (we assume, bouncing perfectly back), it will go up, it will go down, it will go up, it will go down: it is, in a way, in a cycle (up, down, up, down, etc.). The rules of physics and the way you dropped the ball (in this case, Newton's equation) force the ball to stay in this cycle: it can not go below ground, it can not go higher than it started, it can not go left or right if you dropped it perfectly straight. What is fascinating is that, once you know the forces involved, you can describe the whole cycle just by giving it a number: the initial energy. Why are we in a cycle Why is this? It is because of conservation of energy. 
You might know that for a ball of mass $m$ at a height $h$ in a constant gravity field $g$ the energy of the ball is given by $E_0=mgh$. So when you drop the ball, it has a given energy ($E_0$) and that energy can convert into kinetic energy $K={1\over 2}mv^2$ i.e. into velocity ($v$). This happens, however, at the expense of height: if you accelerate, you go down. If you decelerate, you go up (on the way back). You are forced by conservation of energy to have, at time $t$, a total energy $$E(t)=mgh(t)+{1\over 2}mv(t)^2=E_0$$ i.e. the total energy has to be $E_0$. This fixes the "limits" of your cycle: you can change $h(t)$ and you can change $v(t)$ but they are forced to satisfy the above equation, so they live in a "finite" configuration space: you cannot just give $h$ and $v$ any value, you can only choose couples $(h, v)$ which satisfy that equation! [More generally, the equation is $U(x)+{1\over 2}mv^2=E_0$ where $U(x)$ is an energy depending on the position (and sometimes velocity) of your system.] Why is this a cycle...? That's not all of it: because everything is governed by Newton's law and Newton's second law ($F=ma$) is a differential equation, the state at time $t+dt$ is completely determined by the state at time $t$. This means that not only are you in a bounded space (i.e. you can only choose $(h, v)$ according to the equation above) but you are also in a cycle, meaning that once you choose the first value, the second one is pre-determined, and the third one too and so on: you move from one "state" (from a point $(h, v)$) to another state according to Newton's rule, you are constrained in a "finite" space and eventually you have to come back where you started: this is a cycle. Fixed in a cycle: describing your cycle with energy So you see, there are only two things describing your ball: its position $h(t)$ at time $t$ and its velocity $v(t)$. 
The couples $(h, v)$ completely determine your system because of Newton's equation, and the space containing all the couples $(h, v)$ is called "phase space". Can we describe our cycle somewhat? Imagine you drop your ball starting from a height $h(t=0)=1$ (in some units). The ball has a mass of $m=1$ in some units and the acceleration of gravity is $g=10$ in some units. This means that $$E_0=mgh(t=0)=1*1*10=10$$ so now the above equation of conservation of energy, putting in the numbers, becomes $$E_0=mgh(t)+{1\over 2} mv(t)^2 = 1*10*h(t)+0.5*1*v(t)^2= 10.$$ i.e. $$10h(t)+{1\over 2}v^2(t)=10$$ This is an equation giving you, once you choose $h(t)$, the corresponding value of $v$ according to $$v = \pm \sqrt{ 2(10-10h)}$$ and you already notice a couple of things: $h$ cannot be higher than $h=1$ (the starting height) and for each $h$ there are two values of $v$ (one going up, one going down). Indeed, if you plot these points (the points describing a system are called "points in phase space") you get this: The arrows indicate motion according to Newton's equation: if you start at (h=1, v=0), you go down up to $h=0$ increasing in (negative, pointing down) velocity, you bounce reversing your speed (dashed line - the example we chose is a bit tricky in that it involves this violent speed reversal, but let's just take it for granted!) and you go up again until you are at (h=1, v=0) and the cycle starts again. But you could start anywhere on the cycle (at height $h=0.75$ with speed $2.23$ pointing up or down) and you would still follow the cycle because evolution is dictated by Newton's law. And remember: you can choose starting positions only according to conservation of energy, i.e. the starting position must be on the cycle and have a total energy $E_0$! The $size$ and $shape$ of this cycle are given by the energy: by choosing the initial value $E_0=mgh(0)$ you set the size of the cycle according to the equation above! 
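To make the arithmetic concrete, here is a short Python sketch (using the same made-up units as the text: $m=1$, $g=10$, starting height $1$) that checks that every couple $(h, v)$ on the cycle carries the same total energy $E_0=10$:

```python
import math

m, g, h0 = 1.0, 10.0, 1.0            # units as in the text
E0 = m * g * h0                       # = 10: the number that labels the cycle

def v_of(h):
    """|v| allowed by energy conservation at height h, i.e. sqrt(2(10 - 10h))."""
    return math.sqrt(2 * (E0 - m * g * h) / m)

# sample couples (h, v) along the cycle and verify E(h, v) == E0 at each one
for i in range(11):
    h = h0 * i / 10
    E = m * g * h + 0.5 * m * v_of(h) ** 2
    assert abs(E - E0) < 1e-9

print(v_of(0.75))   # 2.236..., the speed quoted in the text at h = 0.75
```

Any choice of $h$ between $0$ and $1$ gives back a point on the same cycle; energies never mix.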
Here (in red) is an example of the same system with an initial energy of $E_0=5$: it looks exactly the same, but it's smaller because it has less energy. So what Susskind is saying is that, once you put your system in its cycle, you can describe the complete cycle using one number, i.e. the total energy $E_0$. That will give you the complete set of points (position, velocity) that describe your system and it will also tell you, given an initial condition, how it will evolve in time (simply follow the arrows in the pictures). The fact that the system can never leave this cycle is a demonstration of conservation of energy, which adds a constraint on the couples $(h, v)$. You can sort cycles according to the energy that they have because they never mix, because the same cycle can only have one energy. In a way, as Susskind says, once you release the particle it will "remember" where it started from and get back there. But the particle is not actually remembering: it's just constrained by physics to end up back there, as if walking in circles. Also, the different cycles he mentions could be cycles of the same system with different energies (one has energy $Q=1$, one $Q=0$, etc.) or they could be cycles of different systems (a pendulum, etc.). One thing is for sure: once you decide which forces are acting (gravity? a spring?), each starting position of the system corresponds to a unique cycle with fixed total energy. That's just how Newton's equation works! Another way to see it, more similar to Susskind's analysis, is that instead of points $(h, v)$ you have discrete states (i.e. you don't move continuously but you jump from one state to the other) and there also is a rule saying that if you are in a state in which the velocity points downwards, you jump to a state of lower altitude; if you are in a state with velocity pointing upwards, you go to higher altitudes (with the exception of the velocity reversal at $h=0$). 
To this corresponds the following cycle, with discrete points along the same curve as before: Again, the shape of the cycle is given only by the energy, and the way you move by the rule you gave your system (in the previous case: $F=ma$, but you can invent any rule and get a different dynamical system, provided the rule is such that from a given state you always go to a single other one). The point is, given a set of states and a rule to jump between them, you are forced to obey the rule and thus cannot escape the cycle, and thus you can describe the whole system by a single parameter: the "identifier" of the cycle you are in. In classical mechanics, that identifier is the total energy. That identifier will be conserved throughout the cycle! Note that this concept (a dynamical description of your system) will be dramatically expanded in statistical mechanics and quantum mechanics, and the interesting thing is that at the base of such different descriptions there are the same concepts, i.e. the fact that systems evolve in a space described by (position, velocity) [this will be expanded a lot in QM] and conservation of energy [which holds up to a point in SM].
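Susskind's discrete-state picture can also be sketched directly: any rule that maps each state to exactly one successor (and each state has exactly one predecessor) partitions the state space into disjoint cycles, and the cycle a state belongs to never changes under evolution. A toy example (states and rule invented for illustration):

```python
# A toy "dynamical law": a deterministic rule mapping each state to the next.
# Because every state has exactly one successor and one predecessor, the
# state space splits into disjoint cycles, and "which cycle am I in?" is a
# conserved quantity -- Susskind's point in the quoted passage.
rule = {0: 1, 1: 2, 2: 0,      # one cycle: 0 -> 1 -> 2 -> 0
        3: 4, 4: 3,            # another cycle: 3 -> 4 -> 3
        5: 5}                  # a fixed point is a cycle of length 1

def cycle_of(state):
    """Follow the rule until we return to the start; the visited states label the cycle."""
    seen = [state]
    nxt = rule[state]
    while nxt != state:
        seen.append(nxt)
        nxt = rule[nxt]
    return frozenset(seen)

# The cycle label never changes under evolution: it is "remembered".
for s in rule:
    assert cycle_of(rule[s]) == cycle_of(s)

print(cycle_of(0))   # the cycle containing state 0, namely {0, 1, 2}
```

The conserved quantity here is just the cycle label itself; in the bouncing-ball example that label happens to be the total energy $E_0$.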
{ "domain": "physics.stackexchange", "id": 75715, "tags": "newtonian-mechanics, classical-mechanics, lagrangian-formalism, conservation-laws, hamiltonian-formalism" }
SKLearn PCA explained_variance_ratio_ cumsum gives array of 1
Question: I have a problem with PCA. I read that PCA needs clean numeric values. I started my analysis with a dataset called trainDf with shape (1460, 79). I did my data cleaning and processing by removing empty values, imputing and dropping columns and I got a dataframe transformedData with shape (1458, 69). Data cleaning steps are: LotFrontage imputing with mean value MasVnrArea imputing with 0s (less than 10 cols) Ordinal encoding for categorical columns Electrical imputing with most frequent value I found outliers with IQR and got withoutOutliers with shape (1223, 69). After this, I looked at histograms and decided to apply PowerTransformer on some features and StandardScaler on others and I got normalizedData. Now I tried doing PCA and I got this: pca = PCA().fit(transformedData) print(pca.explained_variance_ratio_.cumsum()) plt.plot(pca.explained_variance_ratio_.cumsum()) plt.xlabel('number of components') plt.ylabel('cumulative explained variance') the output of this PCA is the following: [0.67454179 0.8541084 0.98180307 0.99979932 0.99986346 0.9999237 0.99997091 0.99997985 0.99998547 0.99999044 0.99999463 0.99999719 0.99999791 0.99999854 0.99999909 0.99999961 0.99999977 0.99999988 0.99999994 0.99999998 0.99999999 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. ] Then I tried: pca = PCA().fit(withoutOutliers) print(pca.explained_variance_ratio_.cumsum()) plt.plot(pca.explained_variance_ratio_.cumsum()) plt.xlabel('number of components') plt.ylabel('cumulative explained variance') out: [0.68447278 0.86982875 0.99806386 0.99983727 0.99989606 0.99994353 0.99997769 0.99998454 0.99998928 0.99999299 0.9999958 0.99999775 0.99999842 0.99999894 0.99999932 0.99999963 0.9999998 0.9999999 0.99999994 0.99999998 0.99999999 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 
] Finally: pca = PCA().fit(normalizedData) print(pca.explained_variance_ratio_.cumsum()) plt.plot(pca.explained_variance_ratio_.cumsum()) plt.xlabel('number of components') plt.ylabel('cumulative explained variance') Out: [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.] How is it possible that the last execution gives such an output? Here are data distributions transformedData withoutOutliers normalizedData I'll add any further data if necessary, thanks in advance to any who can help! Answer: With PCA it is really important to put all (!) features on the same scale using standardization, e.g. using StandardScaler, i.e. having mean 0 and standard deviation 1. Also see this post and this one. The reason for this is that PCA looks at the variance explained by the different features. So in order to make the features comparable, standardization is required.
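To see why scaling matters, here is a small self-contained sketch (pure Python rather than sklearn, with made-up data): two independent features, one on a much larger scale, give a first principal component that appears to explain essentially all the variance — until both are standardized. The `explained_ratios` helper mirrors what `PCA().explained_variance_ratio_` reports for two features, via the eigenvalues of the 2x2 covariance matrix:

```python
import math
import random

random.seed(0)
n = 500
f1 = [random.gauss(0, 1) for _ in range(n)]        # small-scale feature
f2 = [random.gauss(0, 10000) for _ in range(n)]    # large-scale feature (e.g. an area in sq ft)

def explained_ratios(a, b):
    """Explained-variance ratios for two features:
    normalized eigenvalues of their 2x2 covariance matrix."""
    ma, mb = sum(a) / n, sum(b) / n
    va = sum((x - ma) ** 2 for x in a) / (n - 1)
    vb = sum((y - mb) ** 2 for y in b) / (n - 1)
    cab = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (n - 1)
    half_tr = (va + vb) / 2
    disc = math.sqrt((va - vb) ** 2 / 4 + cab ** 2)
    l1, l2 = half_tr + disc, half_tr - disc        # eigenvalues, l1 >= l2
    return l1 / (l1 + l2), l2 / (l1 + l2)

def standardize(a):
    """Mean 0, standard deviation 1 -- what StandardScaler does."""
    m = sum(a) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in a) / (n - 1))
    return [(x - m) / sd for x in a]

print(explained_ratios(f1, f2))    # first ratio ~1.0: the large-scale feature dominates
print(explained_ratios(standardize(f1), standardize(f2)))   # roughly (0.5, 0.5)
```

With 69 unscaled features where a handful (prices, areas) dwarf the rest, the same effect produces a cumulative ratio that hits 1.0 after a few components.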
{ "domain": "datascience.stackexchange", "id": 10560, "tags": "python, scikit-learn, pca, data-analysis, variance" }
Finding a minimum of a noisy function
Question: I have a certain function that calculates numerically, for every $x \in [0,10]$, a value $y\geq 0$. I want to find an approximate minimum point of that function. A possible solution is to calculate $y$ for e.g. 10000 values of $x$ in the range $[0,10]$ and return the $x$ for which $y$ is smallest. I am looking for the fastest solution. I know that the general shape of the function is like the following plot: I.e., it is like a function with a single minimum point, but with some added random noise of bounded size. Without the noise, I could easily find the minimum point using gradient methods, but with noise gradients seem useless. What other search algorithm can I use here? NOTE: Naturally the approximation quality can depend on the noise size, for example, in the above plot, any answer between $\approx 0.3$ and $\approx 0.7$ would be considered good enough. Answer: Some things you could explore are metaheuristics. The following methods don't make assumptions about the function to be optimized, and some are relatively simple to understand and implement. Some of these maintain a population of possible solutions, recombining them to create new possible solutions and keeping the best ones. Differential evolution Particle Swarm Evolutionary algorithms There are a lot more, with a lot of published variations. It is my understanding that they are very capable; you can expect the error to be much smaller than the error margin you mentioned, and the acceptable error can be used as a parameter for the halting condition of the algorithm. If the target function is computationally cheap to evaluate, these methods should be very competitive. In particular, differential evolution could be a place to start exploring. You can use an already written implementation in scipy's optimize. There are other approaches, some of which make assumptions about the function to be optimized. You can see some examples on Wikipedia (Computational optimization techniques section).
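As an illustration of how a population-based method copes with noise, here is a minimal 1-D differential evolution sketch in pure Python (the test function, parameter values, and evaluation-averaging trick are all invented for the example; in practice `scipy.optimize.differential_evolution` is the ready-made choice):

```python
import random

def minimize_noisy(f, lo, hi, pop_size=20, generations=80, F=0.7,
                   reps=5, seed=42):
    """Minimize a noisy 1-D function on [lo, hi] with differential evolution.

    Each candidate is scored by averaging `reps` evaluations, which damps
    the noise instead of chasing it the way a gradient method would.
    """
    rng = random.Random(seed)

    def score(x):
        return sum(f(x) for _ in range(reps)) / reps

    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    scores = [score(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            others = [x for j, x in enumerate(pop) if j != i]
            a, b, c = rng.sample(others, 3)
            trial = min(hi, max(lo, a + F * (b - c)))  # mutate and clip to domain
            s = score(trial)
            if s < scores[i]:                          # greedy selection
                pop[i], scores[i] = trial, s
    return min(zip(scores, pop))[1]

random.seed(1)
def f(x):
    """A noisy bowl: true minimum at x = 0.5, bounded noise of size 0.05."""
    return 4 * (x - 0.5) ** 2 + random.uniform(-0.05, 0.05)

x_min = minimize_noisy(f, 0.0, 10.0)
print(x_min)   # lands between 0.3 and 0.7 -- "good enough" per the question
```

The averaging makes each comparison between candidates reliable at the cost of extra function calls, which is the usual trade-off when the objective is stochastic.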
{ "domain": "cs.stackexchange", "id": 9996, "tags": "optimization, numerical-algorithms" }
Peer to Peer GraphQL with Relay over WebSocket
Question: Here is my Relay Network: async function fetchQuery( operation, variables, ) { // hash graphql query for unique listener const hash = await digestMessage(operation.text); // send query to websocket doSend(JSON.stringify({ message: 'request', query: operation.text, hash, variables, })); // listen on response from WebSocket, events to promise return await response({ eventEmitter, hash }) } Here is my WebSocket listener: function onMessage (evt: MessageEvent) { const payload = JSON.parse(evt.data) if (payload.message === 'request') { // run graphql query if WebSocket message is a 'request' graphql(schema, payload.query).then(result => { doSend( JSON.stringify({ message: 'response', hash: payload.hash, data: result.data }) ) }) } else if (payload.message === 'response') { // emit local event with results of graphql resolver if WebSocket message is a 'response' eventEmitter.emit(payload.hash, { data: payload.data }) } } Design Goals: GraphQL support with minimal backing services, here just a WebSocket server. Use client devices, the browser, for persistence instead of a server. Design Constraints / GraphQL and Relay Goals: Provide data in a form that is useful to product developers building a view. Fetch data more efficiently. Decouple product code and server logic. Attempted Approaches: WAMP: Needs a server for router. Web-RTC: Needs a WebSocket server for signaling. For a broadcast, each peer needs to connect to all peers without a server or use WebTorrent / DHT WebSocket in React useEffect: No React reconciliation. No Relay declarative data-fetching. Work in Progress: Load balancing: Limiting only one response, and deciding who responds next. Versioning: If peers have an old schema or resolver logic changes. Authentication: Verifying OAuth tokens in browser. Local storage security and risk: Browser now stores credentials, like payment processing tokens, instead of server. Abuse: Just anyone joins WebSocket and sends out bad data. Feedback welcome. 
I looked for similar projects and found a lack of references. Answer: Don't! After working on this for too long, I'm moving on to something else, and don't suggest you do this. Typical overreach. When building an internet application, you have to ask yourself, are internet servers and the internet itself really the problem? Here are some specifics: Are Servers Really a Problem? WebRTC requires signaling. Free platforms or WebSocket servers can still be DDoSed, requiring something else in front of it. Servers are an easy, often free utility compared to storage and compute on an abstraction of client devices. Who's Really Offline Today? Mobile and Wifi are ubiquitous. There's web workers. 5G has billions invested in using client devices in conjunction with data providers down to the chips with efficient power consumption. Peer Discovery A lot of the products that bill themselves as decentralized are surprisingly centralized when it comes to peer discovery. Authoritative Data Anyone can resolve a query with anything. Requests require complex validation, dropping the peer, and reporting to a signaling server. Dedicated Coverage No peer instances require a server. Headless browsers may drop WebRTC support. Private data Servers provide an area of mitigated security, as in, if someone's rooted your server and reads API secrets, you have bigger problems. Peers require specialization, as in adding a Stripe secret in local storage, and announcing checkout support. HTTP Cookie This was the final deal breaker. Cookies with the secure header prevent JavaScript access to secrets. Server required, non-negotiable. At best, you have a very complex and insecure infrastructure still running servers. A better alternative is to find a graphql platform. Then run fail2ban for DDoS protection on a free tier Google Cloud Engine server, and put that in front of the graphql platform. For more scaling, like expensive compute, set up redis as a queue, and process on a local private network. 
Rest here is left as a reference: I created an npm based on this concept, with GitHub repo. Feedback and contributions welcomed. Here are some highlights after about a month's work Load Balancing Create a random delay on each peer from 0 to 3s in intervals of 20ms. During delay, use GraphQL's local state management to save incoming responses. After delay, check local state for response. Clear state if exists then short circuit exit. Source Authentication Use a public key crypto library like nacl. When sending a response, sign the hash, and provide public key. When receiving a response, verify the signature. Validation Use io-ts to validate the WebSocket message. My hash, query, and signature properties are strings. My variable property is a record of strings and my uri property is either "request" or "response". graphql runs the request's query in browser with variables so no injection. A custom graphql-tools codegen takes the graphql schema, and generates io-ts types. These types decode WebSocket response data at runtime. Authoritative Data Even with all this, there are no authoritative data sources. If you build an online store, someone can join and start responding with {price: 0.00}. Load balancing times them out, authentication tags them, validation makes them do some work, so there's momentum working here. It's not just opening a URL. Still it's possible to open a browser window and send bad data. My plan is to use brain.js. Train a test set of good and bad source detection. Bundle with the app. Property, value, and source should be enough for consensus and alerting the peer network of bad sources. Other ideas are transparency with good UI and let the user detect bots, and also references. Send a supplier list along with the price, and let the app independently validate. All these are domain specific so I left them out of the graphql networking implementation. Rendering, Suspense API Work in progress, will report back soon. 
Update: 10/21/20 RelayEnvironmentProvider component suspends. Since this is at the top level of the app, the entire app suspends as opposed to individual components. The relay issue-tracker demo has a custom react experimental version. I'm creating an npm to see if I can drop this functionality into Relay networking. Bundle Size react, react-relay, relay-runtime, and graphql are all 100kb+ each. brain.js is at ~580kb. Working in the browser, and built for the browser are two different things. To keep bundle size small, I webpack these as external modules loaded from the vendor's CDN, excluded from webpack build. Other graphql-tools has limited browser support. file includes do not work. Instead I went with compile time codegen while building io-ts runtime decoding from graphql schema. jest has limited support too, especially with create-react-app. I wanted to import all files in a folder automatically. browser supports this but jest doesn't. If I want that extensibility, compile time codegen would be the best. Also jest does not support TextEncoder, which relay uses, and was used but no longer in my nacl implementation. I used StandardLib instead. I looked into modeling a state machine for this. xState and graphql is redundant. graphql is really nice, the best state management there is. It's best to avoid complex state as much as possible. Straight through pure functions with just input and output, best you can do.
{ "domain": "codereview.stackexchange", "id": 39649, "tags": "graphql" }
Remove GCC optimizations
Question: Hi, I'm trying to debug an issue with some code via GDB, but some values are apparently optimized away. How can I disable compiler optimization in CMakeLists.txt? I tried doing rosbuild_remove_compiler_flags(my_executable "-O2") and then rosbuild_add_compiler_flags(my_executable "-O0") but this doesn't appear to have done anything, according to the output of make VERBOSE=1 it's still using -O2. What am I doing wrong? (the correct answer is trying to debug a ROS node using GDB, clearly...) Originally posted by Ibrahim on ROS Answers with karma: 307 on 2011-08-21 Post score: 1 Answer: When you open the CMakeLists.txt file generated by roscreate-pkg, it should contain the following lines: # Set the build type. Options are: # Coverage : w/ debug symbols, w/o optimization, w/ code-coverage # Debug : w/ debug symbols, w/o optimization # Release : w/o debug symbols, w/ optimization # RelWithDebInfo : w/ debug symbols, w/ optimization # MinSizeRel : w/o debug symbols, w/ optimization, stripped binaries #set(ROS_BUILD_TYPE RelWithDebInfo) Just uncomment the line set(ROS_BUILD_TYPE RelWithDebInfo) and set it to set(ROS_BUILD_TYPE Debug) Originally posted by Lorenz with karma: 22731 on 2011-08-21 This answer was ACCEPTED on the original site Post score: 7
{ "domain": "robotics.stackexchange", "id": 6475, "tags": "ros, gdb, cmake, rosbuild" }
Logging library
Question: Due to several limitations of the standard library's logging module, I wrote my own. Below are the most important parts. You can find the whole library here. Any feedback is welcome. # fancylog - A library for human readable logging. # # Copyright (C) 2017 HOMEINFO - Digitale Informationssysteme GmbH # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. """A library for beautiful, readable logging.""" from datetime import datetime from enum import Enum from sys import stdout, stderr from threading import Thread from time import sleep from traceback import format_exc from blessings import Terminal __all__ = [ 'logging', 'LogLevel', 'LogEntry', 'Logger', 'TTYAnimation', 'LoggingClass'] TERMINAL = Terminal() def logging(name=None, level=None, parent=None, file=None): """Decorator to attach a logger to the respective class.""" def wrap(obj): """Attaches the logger to the respective class.""" logger_name = obj.__name__ if name is None else name obj.logger = Logger(logger_name, level=level, parent=parent, file=file) return obj return wrap class LogLevel(Enum): """Logging levels.""" DEBUG = (10, '', TERMINAL.bold) SUCCESS = (20, '✓', TERMINAL.green) INFO = (30, 'ℹ', TERMINAL.blue) WARNING = (70, '⚠', TERMINAL.yellow) ERROR = (80, '✗', TERMINAL.red) CRITICAL = (90, '☢', lambda string: TERMINAL.bold(TERMINAL.red(string))) FAILURE = (100, '☠', lambda string: TERMINAL.bold(TERMINAL.magenta( 
string))) def __init__(self, ident, symbol, format_): """Sets the identifier, symbol and color.""" self.ident = ident self.symbol = symbol self.format = format_ def __int__(self): """Returns the identifier.""" return self.ident def __str__(self): """Returns the colored symbol.""" return self.format(self.symbol) def __eq__(self, other): try: return int(self) == int(other) except TypeError: return NotImplemented def __gt__(self, other): try: return int(self) > int(other) except TypeError: return NotImplemented def __ge__(self, other): return self > other or self == other def __lt__(self, other): try: return int(self) < int(other) except TypeError: return NotImplemented def __le__(self, other): return self < other or self == other def __hash__(self): return hash((self.__class__, self.ident)) @property def erroneous(self): """A log level is considered erroneous if it's identifier > 50.""" return self.ident > 50 class LogEntry(Exception): """A log entry.""" def __init__(self, *messages, level=LogLevel.ERROR, sep=None, color=None): """Sets the log level and the respective message(s).""" super().__init__() self.messages = messages self.level = level self.sep = ' ' if sep is None else sep self.color = color self.timestamp = datetime.now() def __hash__(self): """Returns a unique hash.""" return hash(self._hash_tuple) @property def _hash_tuple(self): """Returns the tuple from which to create a unique hash.""" return (self.__class__, self.level, self.messages, self.timestamp) @property def message(self): """Returns the message elements joint by the selected separator.""" return self.sep.join(str(message) for message in self.messages) @property def text(self): """Returns the formatted message text.""" if self.color is not None: return self.color(self.message) return self.message class Logger: """A logger that can be nested.""" CHILD_SEP = '→' def __init__(self, name, level=None, parent=None, file=None): """Sets the logger's name, log level, parent logger, log entry template 
and file. """ self.name = name self.level = level or LogLevel.INFO self.parent = parent self.file = file self.template = '{1.level} {0}: {1.text}' def __str__(self): """Returns the logger's nested path as a string.""" return self.CHILD_SEP.join(logger.name for logger in self.path) def __hash__(self): """Returns a unique hash.""" return hash((self.__class__, self.name)) def __enter__(self): """Returns itself.""" return self def __exit__(self, _, value, __): """Logs risen log entries.""" if isinstance(value, LogEntry): self.log_entry(value) return True return None @property def root(self): """Determines whether the logger is at the root level.""" return self.parent is None @property def path(self): """Yields the logger's path.""" if not self.root: yield from self.parent.path yield self @property def layer(self): """Returns the layer of the logger.""" return 0 if self.root else self.parent.layer + 1 def log_entry(self, log_entry): """Logs a log entry.""" if log_entry.level >= self.level: if self.file is None: file = stderr if log_entry.level.erroneous else stdout else: file = self.file print(self.template.format(self, log_entry), file=file, flush=True) return log_entry def inherit(self, name, level=None, file=None): """Returns a new child of this logger.""" level = self.level if level is None else level file = self.file if file is None else file return self.__class__(name, level=level, parent=self, file=file) def log(self, level, *messages, sep=None, color=None): """Logs messages of a certain log level.""" log_entry = LogEntry(*messages, level=level, sep=sep, color=color) return self.log_entry(log_entry) def debug(self, *messages, sep=None, color=None): """Logs debug messages, defaulting to a stack trace.""" if not messages: messages = ('Stacktrace:', format_exc()) if sep is None: sep = '\n' return self.log(LogLevel.DEBUG, *messages, sep=sep, color=color) def success(self, *messages, sep=None, color=None): """Logs success messages.""" return self.log(LogLevel.SUCCESS, 
*messages, sep=sep, color=color) def info(self, *messages, sep=None, color=None): """Logs info messages.""" return self.log(LogLevel.INFO, *messages, sep=sep, color=color) def warning(self, *messages, sep=None, color=None): """Logs warning messages.""" return self.log(LogLevel.WARNING, *messages, sep=sep, color=color) def error(self, *messages, sep=None, color=None): """Logs error messages.""" return self.log(LogLevel.ERROR, *messages, sep=sep, color=color) def critical(self, *messages, sep=None, color=None): """Logs critical messages.""" return self.log(LogLevel.CRITICAL, *messages, sep=sep, color=color) def failure(self, *messages, sep=None, color=None): """Logs failure messages.""" return self.log(LogLevel.FAILURE, *messages, sep=sep, color=color) Use example: #! /usr/bin/env python3 """divide. Usage: divide <dividend> <divisor> [options] Options: --help, -h Show this page. """ from docopt import docopt from fancylog import LogLevel, LogEntry, Logger def divide(dividend, divisor): try: return dividend / divisor except ZeroDivisionError: raise LogEntry('Cannot divide by zero.', level=LogLevel.ERROR) def main(options): with Logger('Division logger', level=LogLevel.SUCCESS) as logger: try: dividend = float(options['<dividend>']) except ValueError: logger.error('Dividend is not a float.') return try: divisor = float(options['<divisor>']) except ValueError: logger.error('Divisor is not a float.') return logger.success(divide(dividend, divisor)) if __name__ == '__main__': main(docopt(__doc__)) Answer: The code reads well, I only have a few nitpicks: Use default values where applicable instead of None: it will ease usage of the API as they can be introspected; class LogEntry(Exception): """A log entry.""" def __init__(self, *messages, level=LogLevel.ERROR, sep=' ', color=None): class Logger: """A logger that can be nested.""" CHILD_SEP = '→' def __init__(self, name, level=LogLevel.INFO, parent=None, file=None): Change the value of a parameter instead of reassigning it 
through a ternary: this makes it more explicit about what you’re trying to do; def inherit(self, name, level=None, file=None): """Returns a new child of this logger.""" if level is None: level = self.level if file is None: file = self.file return self.__class__(name, level=level, parent=self, file=file) Avoid return None, especially as the last instruction: it’s just noise. Now, this is neat code and all, but it feels… rather limited. The two main concerns I have about this module are that: it only logs to files (or similar); it gives veeery little control over the formatting of the log message. I sometimes need to send logs to the system syslog daemon and I can do it using a handler provided by Python's logging module, but if I want to do that using yours, I would have to wrap syslog and your module myself… Same goes for sending logs over TCP (to a listening logstash for instance). Same goes for the formatting of the message. As much as I hate the % formatting imposed by the logging module, it is at least possible to write a Formatter that is format aware and pass large objects to the logger which will use the format string to traverse it and retrieve values that I need to log only if it actually logs something. Using your approach, I can either write a non-flexible __str__ method on each object I want to log so I can have desired information (which is not always possible nor wanted, if I want to log several different things in several different places) or split the format string (bad for readability) and traverse the object to retrieve attributes myself which is a bit of overhead (some may be expensive properties, even) if the log ends up discarded. Other missing things include the ability to configure a logger hierarchy through a file and to retrieve/create any logger by name in any module. I also don't really understand the use of the contextmanager, especially the LogEntry exception. 
As much as I find it interesting to create a specific logger for a specific portion of the code, why would I want to use the LogEntry "exception": it swallows the actual error, making the calling code unaware of any problem: if your example code were to use the value returned from divide, it would have a hard time doing so as None would be returned; it changes the type of the exception, making any attempt to catch real errors moot. Why not keep it to the simple except XXX: logger.xxx(…) return that you use in your main? (or even better: except XXX: logger.xxx(…); raise) Can't you instead provide a utility function like current_logger that uses this context manager to return the appropriate one (or create a root logger if none is in use)? Or something akin to decimal.localcontext where you provide module-level logging functions (debug, success, info…) that apply to the current logger and other utilities to select which logger is the current one.
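For what it's worth, the `decimal.localcontext`-style suggestion could look roughly like this (a sketch only; this `Logger` stub and all names here are hypothetical, not part of fancylog):

```python
import threading

_local = threading.local()                 # one logger stack per thread

class Logger:
    """Minimal stand-in for a logger, just enough to show the idea."""
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        _stack().append(self)              # entering makes this the current logger
        return self

    def __exit__(self, *exc_info):
        _stack().pop()
        return False                       # don't swallow exceptions

def _stack():
    if not hasattr(_local, "loggers"):
        _local.loggers = []
    return _local.loggers

def current_logger():
    """Return the innermost active logger, or a root fallback."""
    stack = _stack()
    return stack[-1] if stack else Logger("root")

def info(*messages):
    """Module-level convenience that targets the current logger."""
    print(current_logger().name, *messages)

with Logger("worker"):
    info("processing")                     # prints: worker processing
print(current_logger().name)               # prints: root
```

Nested `with` blocks then select loggers lexically, and module-level helpers always hit the innermost one, without any exception-based control flow.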
{ "domain": "codereview.stackexchange", "id": 29532, "tags": "python, python-3.x, reinventing-the-wheel, library, logging" }
Green feather-like formations in tap water - what is it? (with pictures)
Question: These things recently appeared in my tap water in Copenhagen, Denmark, this summer 2013. They are very small, and have a feather-like structure. I've called the water supply, and they thought that it was some tracer put in there by humans to trace if the warm water supply leaks into the tap water. But I could not show them the pictures. What are they? Are they dangerous? For size comparison: the green stuff is located on the top right of a piece of A4 paper. Answer: I have had a biologist out taking some samples of the water. She examined them in a microscope, and identified the algae as blue-green algae/cyanobacteria. She could not identify the species further. She said the reason probably was that my pipes were exposed to sunlight, and also that the pipes with hot and cold water ran too close to each other, creating a nice lukewarm temperature for the algae to thrive in. I'm of course going to change the pipes :)
{ "domain": "biology.stackexchange", "id": 5947, "tags": "zoology, botany, species-identification" }
Why do some karyotype graphs contain 46 pairs of chromosomes?
Question: The human genome consists of 23 pairs of chromosomes. Two copies of each pair connect to each other at the centromere. Normal karyotype graphs should look like this: But some karyotype graphs contain 46 pairs of chromosomes, such as this one: Why is this the case? Answer: They both contain 23 pairs of homologous chromosomes (and... yes as Luis suggested the first one is male and the second one female). In Figure 1 each chromosome is composed by 1 chromatid, while in Figure 2 each chromosome is composed by 2 chromatids. This is probably because the two pictures were taken at different stages of Mitosis. In Figure 1 the picture was taken probably in anaphase, while in Figure 2 was probably taken in or before metaphase. You might want to search the internet to check the difference between chromatids and chromosomes and to read something about the cell cycle and mitosis.
{ "domain": "biology.stackexchange", "id": 10757, "tags": "dna, chromosome, karyotype" }
A grey water bird with brown head
Question: A greyish bird, with brown head and relatively long, pointy beak. Size comparable to mallard. Spotted in central Poland, on small, artificial pond, populated by mallards and Eurasian coots. Photo has been taken today (late September). I'd be grateful for any information that could be used to identify this bird. Answer: Based on the location and picture I believe this a Goosander (Mergus merganser) which is the European name, the North American name is the Common Merganser. You can find more information about them here. They undergo two main molts, breeding plumage in December and eclipse plumage in May. Here is a description of the eclipse molt by Bent, 1923: In May the adult male goosander begins to assume its eclipse plumage. The adult male in August has the crown reddish brown, with a gray tinge; chin white, and the rest of the head and upper neck rich red brown. By the looks of your photo, you either have 2 females during the breeding plumage phase (January - May) or either sex during the eclipse phase (June - December). It will depend on when the photo was taken. Here is an image with a breeding plumage male (left) alongside some females: References: Bent, Arthur Cleveland. 1923. Smithsonian Institution United States National Museum Bulletin 126 (Part 1): 1-13. United States Government Printing Office
{ "domain": "biology.stackexchange", "id": 7821, "tags": "species-identification, ornithology" }
Do any entanglement measures for mixed states exist that use only single site correlation functions?
Question: For a pure state $\rho_{AB}$, the entropy of entanglement of subsystem $A$ is \begin{equation} S(\rho_A) = -\mathrm{tr} (\rho_A \log \rho_A) \end{equation} where $\rho_A$ is the reduced density matrix of $A$. For a single site of a spin chain, $\rho_A$ can be written in terms of single site correlation functions $\langle \sigma_l^\alpha \rangle$ where $\alpha = x,y,z$. Are there any entanglement measures for mixed states that use only the same correlation functions, $\langle \sigma_l^\alpha \rangle$ where $\alpha = x,y,z$? Answer: It seems that such a measure for mixed states is fundamentally impossible, since you can have both entangled and separable states which have exactly the same local expectation values. For pure states, monogamy of entanglement ensures that the impurity of a reduced density matrix (which can be inferred from the expectation values of local Pauli operators) is directly related to entanglement. However for mixed states, this is not the case, as the following example will hopefully make clear: Consider a two qubit system, in which the two reduced density matrices are maximally mixed. In this case, it is possible the system is separable, composed of two copies of the maximally mixed state, or it is maximally entangled, composed of a single EPR pair, or anything in between. Thus no function of local expectation values can distinguish separable from entangled states in general. However, purity (which is a function of single site correlation functions) can indeed be used as a bound on the entanglement of a system, again due to monogamy of entanglement. If the local system is not maximally mixed, then it is not maximally entangled, and hence the maximum amount of entanglement possible for a system is a monotonic function of its (im)purity.
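A quick numerical illustration of this point (a NumPy sketch; the partial-trace helper is ad hoc): the separable two-copies-of-maximally-mixed state and the Bell state have the same single-qubit marginal, so no function of local expectation values can tell them apart.

```python
import numpy as np

# Two-qubit density matrices: a separable product of maximally mixed states,
# and a maximally entangled Bell state |Phi+> = (|00> + |11>)/sqrt(2).
rho_sep = np.eye(4) / 4
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_bell = np.outer(phi, phi.conj())


def reduced_A(rho):
    """Partial trace over qubit B of a 4x4 two-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)           # indices: (a, b, a', b')
    return np.trace(r, axis1=1, axis2=3)  # sum over b = b'


# Both states give the maximally mixed single-qubit marginal I/2,
# hence identical local Pauli expectation values.
print(reduced_A(rho_sep))   # I/2
print(reduced_A(rho_bell))  # I/2 as well
```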
{ "domain": "physics.stackexchange", "id": 3353, "tags": "research-level, quantum-entanglement, spin-models" }
Are "Gradient Boosting Machines (GBM)" and GBDT exactly the same thing?
Question: In the category of Gradient Boosting, I find some terms confusing. I'm aware that XGBoost includes some optimizations in comparison to conventional Gradient Boosting. But are Gradient Boosting Machines (GBM) and GBDT the same thing? Are they just different names? Apart from GBM/GBDT and XGBoost, are there any other models that fall into the category of Gradient Boosting? Answer: Boosting is an ensemble technique where predictors are ensembled sequentially, one after the other (YouTube tutorial). The term gradient in gradient boosting means that they are ensembled using the optimization technique called gradient descent (Boosting Algorithms as Gradient Descent). Given this, you can boost any kind of model that you want (as far as I know). Moreover, in the scikit-learn library, gradient boosting is under the ensemble folder. You can boost any kind of model (linear, SVM); it's just that decision trees normally achieve great results with this kind of ensemble. In the same way, you can do bagging with any kind of estimator, but if you do it with a decision tree and add a couple more technicalities, you can call it Random Forest. From the scikit-learn documentation: GB builds an additive model in a forward stage-wise fashion; it allows for the optimization of arbitrary differentiable loss functions. In each stage a regression tree is fit on the negative gradient of the given loss function. But are Gradient Boosting Machines (GBM) and GBDT the same thing? Are they just different names? Gradient boosting machines are a kind of ensemble, and Gradient Boosting Decision Tree is the particular case when a tree is used as the estimator. Apart from GBM/GBDT and XGBoost, are there any other models that fall into the category of Gradient Boosting? You can use any model that you like, but decision trees are experimentally the best.
"Boosting has been shown to improve the predictive performance of unstable learners such as decision trees, but not of stable learners like support vector machines (SVM)." Kai Ming Ting, Lian Zhu, Springer
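A toy sketch of the "fit the next learner to the negative gradient" idea for squared loss, using one-feature decision stumps as the base estimator (an illustrative pure-Python sketch, not how scikit-learn or XGBoost are actually structured):

```python
# For squared loss, the negative gradient is just the residual y - F(x),
# so each boosting stage fits a stump to the current residuals.

def fit_stump(xs, residuals):
    """Two-leaf stump minimizing squared error on the residuals.

    Assumes xs contains at least two distinct values.
    """
    best = None
    for t in sorted(set(xs))[:-1]:  # the largest x would leave an empty right leaf
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - (lm if x <= t else rm)) ** 2 for x, r in zip(xs, residuals))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda v, t=t, lm=lm, rm=rm: lm if v <= t else rm


def gradient_boost(xs, ys, n_stages=20, lr=0.5):
    """Return a boosted model F(x) = f0 + lr * sum of stage stumps."""
    f0 = sum(ys) / len(ys)
    stages = []
    for _ in range(n_stages):
        pred = [f0 + lr * sum(s(x) for s in stages) for x in xs]
        residuals = [y - p for y, p in zip(ys, pred)]  # negative gradient
        stages.append(fit_stump(xs, residuals))
    return lambda v: f0 + lr * sum(s(v) for s in stages)


model = gradient_boost([0, 1, 2, 3], [0.0, 0.0, 1.0, 1.0])
```

Swapping fit_stump for any other regressor fitted to the residuals gives "gradient boosting" over that base model, which is exactly why trees are one choice among many.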
{ "domain": "datascience.stackexchange", "id": 8332, "tags": "xgboost, ensemble-modeling, gbm, ensemble-learning" }
Orienting the robot properly
Question: I'm trying to position my robot in a particular way after it reaches a particular checkpoint. locations['A'] = Pose(Point(6.72, 5.91, 0.000), Quaternion(0.000, 0.000, 0.223, 0.975)) locations['B'] = Pose(Point(0.0646, -3, 0.000), Quaternion(0.000, 0.000, -0.670, 0.743)) locations['C'] = Pose(Point(6.11, -4.95, 0.000), Quaternion(0.000, 0.000, 0.733, 0.680)) Here in the above code I use the quaternion values to set the pose (orientation). I want to know how I can check the quaternions in real time when the robot is moving. I can rostopic echo /odom and see the x,y,z as well as x,y,z,w (which I guess are the quaternion values). Is there any other way of fetching quaternion values? Secondly, if I rotate the robot to a position, check the quaternion (x,y,z,w) values, move it around and later come to the same position, the quaternion values are changing every time. I'm not sure if this is a valid behaviour because as I know, if a quaternion is a representation of rotation (or pose), then if the robot is in the same pose after moving around, it shouldn't change, isn't it? My goal is to use the locations part of the code attached above to orient the robot according to my requirement. Can this be done in any other approach, or is there any way I can get proper quaternion values? Thanks. Originally posted by spiritninja on ROS Answers with karma: 48 on 2019-06-26 Post score: 0 Answer: I can rostopic echo /odom and see the x,y,z as well as x,y,z,w (which I guess are the quaternion values). Is there any other way of fetching quaternion values? yes: lookup the transform between base_link of your robot and the odom or map frame. if I rotate the robot to a position, check the quaternion (x,y,z,w) values, move it around and later come to the same position, the quaternion values are changing every time.
I'm not sure if this is a valid behaviour because as I know, if quaternion is a representation of rotation (or pose), then if the robot is in the same pose after moving around, it shouldn't change isn't it? If you're just looking at /odom this can probably be explained by realising that most wheeled robots rely heavily on encoder ticks to calculate their odometry. Displacement (both pure translation as well as rotation) would then be the result of integration. You probably know that due to wheel slip and other factors wheel-based odometry is far from ideal, so cumulative errors could lead to what you are seeing (ie: returning to "the same" position doesn't result in seeing the same odometry values). To improve this situation, you may want to use a sensor fusion package like robot_localization. That takes in raw odometry measurements and transforms them into a better state estimate for your robot. Note: this will most likely still not result in seeing the exact same values, but, properly tuned, should result in better estimates. Originally posted by gvdhoorn with karma: 86574 on 2019-06-26 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by spiritninja on 2019-06-26: Thank you @gvdhoorn. Yeah, I agree with your above statement. I guessed somehow wheel slip factor would add to what I was seeing. So, what you're saying is I get the quaternion values from a proper package such as robot_localization, then feeding the same to my code would likely help in the orientin the robot, isnt't it? Comment by gvdhoorn on 2019-06-26: It will probably improve the situation, yes, but it will probably never become perfect. For that, you'd need an external, absolute reference that is not based on integration (but that is just my own understanding: I'm not a navigation expert). Comment by spiritninja on 2019-06-26: Alright. Thanks a lot for your insight. 
On the contrary, is there any way in which I can easily orient the robot in a particular position (through code) other than the method (quaternion) I'm following? (Kindly don't treat this as a separate question). Comment by gvdhoorn on 2019-06-26: What's so special about the "quaternion"? It's just a particular representation of an orientation. Off the top of my head you have two options: a body-local referenced orientation an external reference for your orientation option 2 would require some external reference. Perhaps a marker. Option 1 depends on things like odometry and state estimation. Kindly don't treat this as a separate question it is a separate / follow-up question though. Comment by spiritninja on 2019-06-26: Alright. Thanks a lot for helping out!
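For the planar case discussed here, a small pure-Python sketch (independent of ROS; in practice tf.transformations or tf2 provide equivalents) of converting between yaw and the (x, y, z, w) quaternion seen on /odom:

```python
import math

# For a planar robot, the odom orientation quaternion (x, y, z, w) is a pure
# rotation about Z, so yaw is the only meaningful angle in it.


def yaw_to_quaternion(yaw):
    """Quaternion (x, y, z, w) for a rotation of `yaw` radians about Z."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))


def quaternion_to_yaw(x, y, z, w):
    """Recover yaw from a z-rotation quaternion (standard ZYX Euler formula)."""
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
```

Note also that q and -q encode the same orientation (quaternions double-cover rotations), which is one more harmless way the raw numbers can differ for "the same" pose, on top of odometry drift.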
{ "domain": "robotics.stackexchange", "id": 33258, "tags": "ros, navigation, pose, ros-kinetic, quaternion" }
What's the physical explanation for the attenuation of sound in Newtonian fluid?
Question: My question refers to Stokes's law of sound attenuation in a viscous medium. At this point I don't try to understand the mathematical form of the law - I simply don't understand how viscosity affects planar waves - this law states that planar waves decay exponentially, and I don't understand how a shear stress emerges in the propagation of perfectly planar waves. So what do shear stresses and sound propagation have in common? Answer: Newton's law of viscosity, when properly extended using mathematical rigor to 3 dimensions, results in a linear relationship between the 3D stress tensor and the 3D rate of deformation tensor, and includes not only viscous shear stresses and shear rates, but also normal stresses and normal strain rates. These normal stresses and strains are operative in the situation(s) that you are describing. The relationship between the stress tensor and the rate of deformation tensor reduces to the usual equation relating shear stress to shear rate for the special case of pure shear (with only one velocity gradient). Even for the case of pure shear, if the stresses and strains are resolved into their principal components (say by transforming using a coordinate system rotation), the only stress components present will be normal stresses and the only strain rate components present will be normal strain rates.
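For reference, the classical Stokes attenuation coefficient has the textbook form $\alpha = 2\eta\omega^2/(3\rho v^3)$, so a plane wave decays as $e^{-\alpha x}$ (this formula is standard background, not taken from the answer above). A quick sketch of evaluating it:

```python
import math


def stokes_attenuation(eta, rho, v, f):
    """Classical Stokes attenuation coefficient alpha (Np/m) for a plane wave.

    eta: shear viscosity (Pa*s), rho: density (kg/m^3),
    v: speed of sound (m/s), f: frequency (Hz).
    """
    omega = 2.0 * math.pi * f
    return 2.0 * eta * omega**2 / (3.0 * rho * v**3)


# Water at roughly 20 C, 1 MHz (illustrative property values)
alpha = stokes_attenuation(eta=1.0e-3, rho=998.0, v=1481.0, f=1e6)
amplitude_ratio_1m = math.exp(-alpha * 1.0)  # fraction of amplitude left after 1 m
```

Note the $\omega^2$ dependence: doubling the frequency quadruples the attenuation, which is why viscous losses matter mainly at ultrasonic frequencies.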
{ "domain": "physics.stackexchange", "id": 33636, "tags": "fluid-dynamics" }
Prefix transformations from OpenCR
Question: I am trying to run rrt_exploration on my turtlebot3. I need to prefix all per-robot nodes, topics, and transforms with a per-robot name such as "robot_1". I can do this the usual way for all nodes and topics with <group ns="robot_1"> in my launch file and for almost all transforms with <param name="tf_prefix", value="robot_1"/> in the node that provides the transform. However, this does not work for transforms that are published by the OpenCR via rosserial_python such as odom. How can I change the name of the transforms published by the rosserial_python node? Re-flashing the openCR is not an option, as this must be extensible to multi-robot systems. Originally posted by qrfaalnz on ROS Answers with karma: 1 on 2018-02-08 Post score: 0 Original comments Comment by lucasw on 2018-02-20: How about adding tf_prefix support to rosserial_python, and only the python on the computer would have to be changed? Comment by AmateurHour on 2018-08-01: Hello! Did you ever solve this? Having the same issue and the answers to this question were not correct. Comment by qrfaalnz on 2018-08-01: I solved this similarly to @lucasw's comment. Modify rosserial_python to accept a tf_prefix parameter and change Publisher's handlePacket() to add the prefix to child_frame_id and header.frame_id of each tf/tfMessage. Unfortunately I can't post code. Comment by AmateurHour on 2018-08-01: Ok! Do you think you could post the code as an answer to my question https://answers.ros.org/question/299330/where-does-turtlebot3_core-define-odom-and-base_footprint-names/ ? Comment by AmateurHour on 2018-08-01: or could you perhaps elaborate on where to find Publisher's handlePacket()? Comment by qrfaalnz on 2018-08-01: That is part of rosserial in SerialClient.py. Clone rosserial into your catkin_ws and you should be able to search for it. Look at how other parameters are handled. Comment by Darby Lim on 2018-08-01: TurtleBot3 supports multi-robot application. 
This link might help you http://emanual.robotis.com/docs/en/platform/turtlebot3/applications/#load-multiple-turtlebot3s Answer: The package has been updated; now you don't need namespace prefixing for a single robot. There is an example launch file (simple.launch); you can check it out. Originally posted by hasauino with karma: 91 on 2018-03-16 This answer was ACCEPTED on the original site Post score: 0
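A sketch of the frame-id prefixing that such a patch performs (function names here are hypothetical; per the comments, the real change goes inside SerialClient.py's Publisher.handlePacket(), prefixing header.frame_id and child_frame_id of each tf message):

```python
def prefix_frame(frame_id, tf_prefix):
    """Prepend tf_prefix to a frame id, e.g. 'odom' -> 'robot_1/odom'.

    Frame ids that already carry the prefix (or an empty prefix)
    are returned unchanged.
    """
    if not tf_prefix or frame_id.startswith(tf_prefix + "/"):
        return frame_id
    return tf_prefix + "/" + frame_id.lstrip("/")


def prefix_transform(msg, tf_prefix):
    """Apply the prefix to one TransformStamped-like message in place."""
    msg.header.frame_id = prefix_frame(msg.header.frame_id, tf_prefix)
    msg.child_frame_id = prefix_frame(msg.child_frame_id, tf_prefix)
    return msg
```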
{ "domain": "robotics.stackexchange", "id": 29998, "tags": "ros-kinetic, opencr, rosserial, turtlebot3, tf-prefix" }
How to get the polysemes of a word in WordNet or any other API?
Question: How can I get the polysemes of a word in WordNet or any other API? I am looking for any API usable from Java; any idea is appreciated. Answer: There are several third-party Java APIs for WordNet listed here: http://wordnet.princeton.edu/wordnet/related-projects/#Java In the past, I've used JWNL the most: http://sourceforge.net/projects/jwordnet/ The documentation for JWNL isn't great, but it should provide the functionality you need.
{ "domain": "datascience.stackexchange", "id": 123, "tags": "nlp" }
Generic C++ exception catch handler macro
Question: I have this set of legacy C++ projects with a large number of public functions. At the start, none of those publicly exposed functions had try..catch insulation inside them. When a C++ exception fired across that boundary, if the caller wasn't compiled with the same C++ compiler and the same project settings, then it easily caused a crash. To help insulate against this problem I first went into every public function and wrapped the public function bodies with all-encompassing try...catch blocks, with the catch attempting to log a message about the exception. Basically just handling the (...) case and logging "Unexpected Exception". With hundreds of public functions (working up to thousands), this became tedious when I decided to add specialized std::exception handlers to them all, other dev team members update some to do something different, etc. In the interests of DRY (Don't Repeat Yourself), I chose to make a generic catch block which gets applied in those instances where I wanted something fast to insulate a function from throwing an exception but get as much info as possible into the log. C++ exceptions being what they are, I couldn't figure out a nice simple portable way to do it without using a macro. Yes, yes, sad but true, another macro is born, but I am interested to learn about any other options considering it must be portable and fast. I don't mean to say macros are portable and fast, but this implementation fits the bill... #define CatchAll( msg ) \ catch( const Poco::Exception &e ) \ { \ try{ LogCritical( Logs.System(), std::string( e.displayText() ).append( msg ) );}catch(...){assert(0);} \ } \ catch( const std::exception &e ) \ { \ try{LogCritical( Logs.System(), std::string( e.what() ).append( msg ) );}catch(...){assert(0);} \ } \ catch(...) \ { \ try{ LogCritical( Logs.System(), std::string( "Exception caught in " __FUNCTION__ ". " ).append( msg ) );}catch(...){assert(0);} \ } So there you have it. The code above gets called like this... 
try{ // statements that can throw } CatchAll("*Special extra message about what was attempted in the try block*") So that's it, that's the magic. This isn't meant to replace an intelligently coded block of specific exception handling, this is meant to quickly put bare-bones insulation where none existed before, and do it in a DRY, portable, fast and easy to grok way. Ok, ok, macros are evil, I know, but what else could be done here? And as goes the way of the macro, they proliferate and multiply. Here's a secondary macro to set an rc code in addition to logging, so the function can return a failing rc if an exception throws in the insulated function... /// Catch all generic exceptions and log appropriately. /// Logger is insulated from throwing, so this is a NO THROW operation. /// Sets rc (integer parameter) so wrapping function can perform cleanup /// before returning the error number. #define CatchAllSetRC( rc, msg ) \ catch( const Poco::Exception &e ) \ { \ (rc) = -1; \ try{ LogCritical( Logs.System(), std::string( e.displayText() ).append( msg ));}catch(...){assert(0);} \ } \ catch( const std::exception &e ) \ { \ (rc) = -1; \ try{ LogCritical( Logs.System(), std::string( e.what() ).append( msg ));}catch(...){ assert(0); } \ } \ catch(...) \ { \ (rc) = -1; \ try{ LogCritical( Logs.System(), std::string( "Exception caught in " __FUNCTION__ ". " ).append( msg ));}catch(...){ assert(0); } \ } This expanded version gets called with an rc code so the caller can return it... int rc = 0; try{ // statements that can throw } CatchAllSetRC(rc, "Extra info to append to logged exception message") Answer: In the catch block, you can rethrow the exception. This can be done in a function: int rc = 0; try{ // statements that can throw }catch(...){ HandleException(rc, "Extra info to append to logged exception message"); } void HandleException(int rc, std::string msg) { try{ throw; }catch(Poco::Exception &e){ // ... }catch( const std::exception &e){ // ... } }
{ "domain": "codereview.stackexchange", "id": 10880, "tags": "c++, error-handling, exception, macros, poco-libraries" }
Constructing a single qubit gate from S, H and Pauli gates
Question: How can we construct a single qubit gate $U = \mathrm{e}^{\frac{\mathrm{i}\pi}{4}}*\exp(-\frac{\mathrm{i}\pi}{4} Y)$ from $S$, $H$ (Hadamard), and Pauli gates? I already know that the final answer will be: $$SHSHS^3 = SHSHS^\dagger$$ I just don't know the process that led to this answer. Answer: There may be a simpler way of doing this, but this certainly works. First find the matrix representation of $U$ by multiplying out the terms. Remember that $e^{i \theta Y} = \cos(\theta) I + i \sin(\theta)Y$. The final result (thank you Sympy) is $$ U = \frac{1}{2} \begin{bmatrix} 1 + i & -1 - i \\ 1 + i & 1 + i \\ \end{bmatrix} $$ Now all single qubit transforms can be translated mechanically into the form $e^{i\gamma} R_Z(\phi) R_X(\theta)R_Z(\lambda)$. Qiskit provides a method to do that: from qiskit.quantum_info import OneQubitEulerDecomposer decomposer = OneQubitEulerDecomposer('ZXZ') phi, theta, lam, gamma = decomposer.angles_and_phase(U) You learn that $\theta = \phi = \frac{\pi}{2}$, $\lambda = \frac{-\pi}{2}$, and $\gamma = \frac{\pi}4$. Now $R_Z(\phi) = R_Z(\pi/2)$ is just $S$, and $R_Z(\lambda) = R_Z(-\pi/2)$ is just $S^\dagger$. Any rotation around the X axis can be expressed as a rotation around the Z axis preceded and followed by an H. So $R_X(\theta) = H R_Z(\theta) H = H S H$ We have a leftover global phase $\gamma$ which we can ignore. Putting the pieces together you get $S HSH SSS$.
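The decomposition can be checked numerically (a NumPy sketch; the product is composed so that it acts right-to-left on a state):

```python
import numpy as np

S = np.diag([1, 1j])                          # phase gate; S^3 = S-dagger
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard
Y = np.array([[0, -1j], [1j, 0]])             # Pauli Y

# Target gate U = e^{i pi/4} exp(-i pi/4 Y)
U = np.exp(1j * np.pi / 4) * (np.cos(np.pi / 4) * np.eye(2)
                              - 1j * np.sin(np.pi / 4) * Y)

# Candidate circuit S H S H S^3 as a matrix product
V = S @ H @ S @ H @ np.linalg.matrix_power(S, 3)

print(np.allclose(V, U))  # True: here even the global phases agree exactly
```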
{ "domain": "quantumcomputing.stackexchange", "id": 4329, "tags": "quantum-gate, quantum-state, linear-algebra" }
Magnifying glass as a mirror?
Question: I understand how a magnifying glass works as a convex lens, however I am curious as to how one can explain using it as a mirror. When I observe the reflections off of a magnifying glass, strangely enough it seems to create two reflections: one that is inverted and one that is upright. How is this so? Shouldn't a magnifying glass simply act as a convex mirror, just as it acts as a convex lens? Which images are real, and which ones are virtual images? I've attached an image I took as an example. In the image, the same house is reflected twice--one is inverted, one is upright. Answer: When light goes from one medium to another some is reflected. You get a virtual upright image formed by reflection off the near surface of the glass and an inverted real image formed by reflection off the back surface.
{ "domain": "physics.stackexchange", "id": 30867, "tags": "reflection" }
What are the individual probabilities after √SWAP gate?
Question: Say, qubit $\left|a\right\rangle = \alpha_1|0\rangle + \beta_1|1\rangle$ and $|b\rangle = \alpha_2|0\rangle + \beta_2|1\rangle$. After $\sqrt{\text{SWAP}}$(a,b) what are the new probability amplitudes of $a$ and $b$ in terms of $\alpha_1,\, \alpha_2,\, \beta_1,\, \beta_2$? Answer: The overall state of the input is $|a\rangle|b\rangle$, which we can represent as: $$ \left(\begin{array}{c} \alpha_1\alpha_2 \\ \alpha_1\beta_2 \\ \beta_1\alpha_2 \\ \beta_1\beta_2 \end{array}\right) $$ We apply the square root of swap gate (note that there are different ways that this matrix could be written), $$ \sqrt{\text{SWAP}}=\left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & e^{i\pi/4}/\sqrt{2} & e^{-i\pi/4}/\sqrt{2} & 0 \\ 0 & e^{-i\pi/4}/\sqrt{2} & e^{i\pi/4}/\sqrt{2} & 0 \\ 0 & 0 & 0 & 1 \end{array}\right). $$ Hence, we're after the calculation $$ \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & e^{i\pi/4}/\sqrt{2} & e^{-i\pi/4}/\sqrt{2} & 0 \\ 0 & e^{-i\pi/4}/\sqrt{2} & e^{i\pi/4}/\sqrt{2} & 0 \\ 0 & 0 & 0 & 1 \end{array}\right)\left(\begin{array}{c} \alpha_1\alpha_2 \\ \alpha_1\beta_2 \\ \beta_1\alpha_2 \\ \beta_1\beta_2 \end{array}\right)=\left(\begin{array}{c} \alpha_1\alpha_2 \\ e^{i\pi/4}(\alpha_1\beta_2-i\beta_1\alpha_2)/\sqrt{2} \\ e^{-i\pi/4}(\alpha_1\beta_2+i\beta_1\alpha_2)/\sqrt{2} \\ \beta_1\beta_2 \end{array}\right). $$ Converting this back to Dirac notation gives us the final answer $$\alpha_1\alpha_2|00\rangle+\frac{e^{i\pi/4}}{\sqrt{2}}(\alpha_1\beta_2-i\beta_1\alpha_2)|01\rangle+ \frac{e^{-i\pi/4}}{\sqrt{2}}(\alpha_1\beta_2+i\beta_1\alpha_2)|10\rangle+\beta_1\beta_2|11\rangle. $$ Note that the probability amplitudes of each term are joint probability amplitudes of the whole system. As a general rule, you cannot separate this state into (state of a)$\otimes$(state of b), and therefore cannot identify individual probability amplitudes for each system. Perhaps I could also suggest a bit of a notational change?
If you want to refer to the two qubits as $a$ and $b$, do not also refer to the states of the two qubits as $|a\rangle$ and $|b\rangle$, as that's only likely to lead to confusion when the states change.
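A numerical sanity check of the matrix used above (a NumPy sketch): it squares to SWAP, and applying it to a concrete product state reproduces the joint amplitudes from the formula.

```python
import numpy as np

p = np.exp(1j * np.pi / 4) / np.sqrt(2)
m = np.exp(-1j * np.pi / 4) / np.sqrt(2)
sqrt_swap = np.array([[1, 0, 0, 0],
                      [0, p, m, 0],
                      [0, m, p, 0],
                      [0, 0, 0, 1]])

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])
print(np.allclose(sqrt_swap @ sqrt_swap, SWAP))  # True

# Apply to |a>|b> with |a> = |+> (alpha1 = beta1 = 1/sqrt(2)), |b> = |0>
a = np.array([1, 1]) / np.sqrt(2)
b = np.array([1.0, 0.0])
out = sqrt_swap @ np.kron(a, b)

# The |01> amplitude should match e^{i pi/4}(alpha1 beta2 - i beta1 alpha2)/sqrt(2)
expected_01 = np.exp(1j * np.pi / 4) * (a[0] * b[1] - 1j * a[1] * b[0]) / np.sqrt(2)
print(np.isclose(out[1], expected_01))  # True
```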
{ "domain": "quantumcomputing.stackexchange", "id": 684, "tags": "quantum-gate, quantum-state" }
Rubbing a rod with silk?
Question: Rubbing a glass rod with silk causes charges to be exchanged and consequently both objects get charged. Why do the objects have to be "rubbed"? I get that one has a stronger pull on the electrons than the other, but shouldn't just allowing the objects to make contact be enough.? I would appreciate a "visualization" of whats happening. Similar questions: Why two objects get charged by rubbing? How does rubbing cause the transfer of electrons from one object to the other? Neither question addresses why the objects need to be rubbed instead of just making contact. Answer: The link you posted in your question contains a link to a wikipedia page on the triboelectric effect, which in turns contains the answer to your question. From the "Cause" section: Although the word comes from the Greek for "rubbing", τρίβω (τριβή: friction), the two materials only need to come into contact and then separate for electrons to be exchanged. After coming into contact, a chemical bond is formed between some parts of the two surfaces, called adhesion, and charges move from one material to the other to equalize their electrochemical potential. This is what creates the net charge imbalance between the objects. When separated, some of the bonded atoms have a tendency to keep extra electrons, and some a tendency to give them away, though the imbalance will be partially destroyed by tunneling or electrical breakdown (usually corona discharge). In addition, some materials may exchange ions of differing mobility, or exchange charged fragments of larger molecules. The triboelectric effect is related to friction only because they both involve adhesion. However, the effect is greatly enhanced by rubbing the materials together, as they touch and separate many times.
{ "domain": "physics.stackexchange", "id": 11515, "tags": "electromagnetism, electricity" }
Does more fire create a hotter fire?
Question: Suppose we lit a single fire fueled by a substance that burns at 500°F, and then lit a second fire directly against and surrounding the original one, whose fuel also burns at 500°F. Would the original fire increase in temperature? If yes, is there any related topic I could use to research this further? Answer: By 500 F I assume you are talking about the ignition temperature of the material being burned when it first ignited, not the temperature of the surface and the flame above the burning material after ignition, which would be considerably higher than 500 F (500 F is around the ignition temperature of wood). Once the fuel, say wood, ignites, the surface temperature and flames will be greater than 500 F. In that case, I would think that making the fire larger (increasing the circumference of the fire) may increase the temperature of the burning surface and flames and the rate of burning of the fire (heat release) as follows: The figure below is a simplified representation of a burning surface showing heat and mass transfers (based on Drysdale's fire protection textbook "Introduction to Fire Dynamics"). Part of the heat released in the combustion process is fed back to the burning surface to maintain the combustion process. Part of the heat released is lost to the atmosphere, $\dot Q_{atm}$. (A key to the other terms is given below, if you are interested.) One would expect that most of the heat loss occurs above the flame. But some heat loss also occurs to the side of the flame (i.e., around the perimeter of the burning area). Thinking of the perimeter as the surface of a cylinder surrounding the burning area, the larger the perimeter the lower the surface area to volume ratio. That could favor heat retention in the center of the flaming area and an increase in the temperature of the flames at the center. A more detailed explanation of the figure is below.
In particular I draw your attention to the last two equations. The first gives the heat release rate. The greater that rate the higher the burning temperature, all other things being equal. It is proportional to the mass burning rate. The last equation gives the mass burning rate. Note that the lower the heat loss rate the greater the mass burning rate. The lower surface to volume ratio of the fire the lower the heat loss rate, all other things being equal. Hope this helps Key to Terms in Figure: $\dot Q_{c}$ = rate at which energy (heat) is released in the fire (kW) $\dot Q_{F}$ = the heat flux supplied by the flame fed back to the fuel surface ($\frac{kW}{m^2}$) $\dot Q_{L}$ = heat losses expressed as the heat flux through the fuel surface ($\frac {kW}{m^2}$) $\dot Q_{atm}$ = heat lost to atmosphere (kW) $\dot m$ = the rate of burning of the fuel ($\frac{kg}{m^{2}.s}$) According to Drysdale, "...the rate at which energy is released in a fire ($\dot Q_{c}$) is the most important single factor which characterizes its behavior" The heat release rate ($\dot Q_{C}$) can be roughly related to the rate of burning $\dot m$ and the heat of combustion of the fuel by the following: $$\dot Q_{c}= x.\dot m . A_{f}\Delta H_{c} $$ where $A_f$ is the fuel surface area ($m^2$) $\Delta H_{c}$ is the heat of combustion of the volatiles ($\frac{kJ}{kg}$) and $x$ is a factor to account for incomplete combustion (<1.0) which is a function of the mixing of air drawn in from the surrounding atmosphere with the volatiles. The rate of burning can, in turn, be expressed generally as $$\dot m=\frac{\dot Q_{F}-\dot Q_{L}}{L_{v}}$$ Where $L_{v}$ is the heat required to convert the fuel into volatiles. For a liquid, that would be the latent heat of vaporization.
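Plugging illustrative numbers into the last two equations (the values below are order-of-magnitude assumptions for a wood fire, chosen for the example; they are not from Drysdale):

```python
# Heat release rate: Q_c = x * m_dot * A_f * dH_c
x = 0.7           # combustion efficiency factor (< 1.0), assumed
m_dot = 0.013     # burning rate, kg/(m^2 s), assumed
A_f = 1.0         # fuel surface area, m^2
dH_c = 15_000     # heat of combustion of the volatiles, kJ/kg, assumed

Q_c = x * m_dot * A_f * dH_c   # heat release rate, kW
print(round(Q_c, 1))           # 136.5 kW

# The burning rate itself comes from the surface heat balance:
# m_dot = (Q_F - Q_L) / L_v, so reducing the heat loss flux Q_L
# (e.g. via a lower surface-to-volume ratio) raises m_dot and hence Q_c.
Q_F, Q_L, L_v = 25.0, 5.0, 1500.0            # kW/m^2, kW/m^2, kJ/kg, assumed
m_dot_from_balance = (Q_F - Q_L) / L_v       # about 0.0133 kg/(m^2 s)
```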
{ "domain": "physics.stackexchange", "id": 75965, "tags": "thermodynamics, temperature, combustion" }
Is robot_pose_ekf used by another node?
Question: Are there any examples of the output of robot_pose_ekf being used as a service by another node? Originally posted by jdt141 on ROS Answers with karma: 118 on 2012-02-03 Post score: 0 Original comments Comment by jdt141 on 2012-02-06: I tried to elaborate. Let me know if I need to expand this. Comment by tfoote on 2012-02-03: Please elaborate on this question. One "node using another" is not well defined. Answer: PR2 navigation uses the odom_combined --> base_footprint transform, which is published by the robot_pose_ekf node, for its navigation stack!
{ "domain": "robotics.stackexchange", "id": 8104, "tags": "navigation, robot-pose-ekf" }
Why isn't washing up liquid dry
Question: Washing up liquid is wet and it is added to water. Why does it not come in a half size package and you add the water yourself? In the same way washing powder is? Answer: There's no technical reason why you couldn't make a powdered dishwashing detergent and I suspect that someone probably does sell it, just as there are liquid, tablet, and powdered automatic dishwasher detergents and laundry detergents. It does seem more convenient to use a liquid detergent for washing dishes in the sink. A liquid doesn't need time to dissolve, is easy to dispense without a scoop and if you're only washing a couple of dishes, you can simply squirt a small amount directly on a dish or use one of those brushes with a detergent reservoir. I haven't seen any dishwashing detergent that had a full ingredients list, but the one I have says "contains anionic surfactants and enzymes". Anionic surfactant typically implies sodium dodecyl sulfate, which is itself a solid, usually found as a powder. The enzymes, while they could be freeze-dried, could retain more activity if stored in solution. Likewise, the fragrances often added to these detergents may be more suited to a liquid form.
{ "domain": "chemistry.stackexchange", "id": 2351, "tags": "everyday-chemistry, surfactants, cleaning" }
Why is the tension $\vec T$ downwards in the system?
Question: Link to video Doubt: At pulley $A$, consider the free-body diagram. Why is the tension, $\vec{T}$, in both hands downwards? I know $\vec{T}_2$ is upwards since it is the force acting upwards in order to prevent breaking the rope. I don't understand the logic of $\vec T$ in the left and $\vec T$ in the right. Please help me. Answer: Tension in an ideal light string is a scalar, not a vector. The tension is maintained by forces at the ends of the string equal in magnitude to the tension. If we are considering forces on pulley $A$, then the geometry of the attachment means that the forces from the string are downward. If you were thinking about the forces on whatever is pulling down on the strings, then that force would be upward. By declaring the force on the pulley to be $\vec T$, the problem is using shorthand to say that the force is equal in magnitude to the tension. For simple problems like this, it can be easy to miss the difference because the direction is pretty clear from examination. I don't get your point. In system (a) Tension is pulling upward. Why there is a change in the system (b)? "Tension" doesn't have a direction and doesn't pull in any direction. One end of the string (because of its orientation) pulls upward and the other end pulls downward. Both ends pull with the same magnitude. You have to define which end you are talking about to tell what direction the force from the string is oriented. Because $A$ is above the string, it is pulled downward by the string. Because $B$ is below the string, it is pulled upward by the string.
{ "domain": "physics.stackexchange", "id": 59938, "tags": "newtonian-mechanics, forces, vectors, free-body-diagram" }
Problem solving moments in equilibrium
Question: Two straight ladders, AC and BC, have equal weights $W$. They stand in equilibrium with their feet, A and B, on rough horizontal ground and their tops freely hinged at C. Angle CAB = $60^\circ$, angle CBA = $30^\circ$ and AB = $l$ (see diagram). Find the vertical reactions at A and B. Resolving moments about A: $$\frac12l\sin30^\circ\cdot\sin30^\circ\cdot W + (l-\frac12l\sin60^\circ\cdot\sin60^\circ)W = lN$$ $$\implies N = \frac{3W}4$$ Resolving moments about B: $$\frac12l\sin60^\circ\cdot\sin60^\circ\cdot W + (l-\frac12l\sin30^\circ\cdot\sin30^\circ)W = lR$$ $$\implies R = \frac{5W}4$$ Find the magnitude and direction of the force exerted on BC by AC. I couldn't work out how to solve this. The required answer is $\frac12W$, $30^\circ$ above horizontal. Answer: You'll need to include a vertical and horizontal force at point C due to ladder AC (since we shouldn't assume a direction for the total force -- even though our intuition may prove to be correct). The vertical forces on ladder BC are then related by $F_v + N - W = 0$ and you have already established that $N = \frac{3W}{4}$. There are now five forces acting on ladder BC, but if we look at the force moments about B, we can neglect the torque due to $N$ and the static friction acting on the foot of the ladder. We can then sum the torques due to $W$, $F_v$, and $F_h$ to zero. You should indeed find that the total force $F$ at C (using the Pythagorean Theorem) comes to $\frac{W}{2}$ and the direction is $30^\circ$ above the horizontal (I find it pointing to the right).
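A quick numerical check of both parts (a sketch with $W = l = 1$; the lever arms follow the geometry described in the answer, with BC of length $l\sin 60^\circ$ inclined at $30^\circ$ to the ground):

```python
import math

W, l = 1.0, 1.0                       # unit weight and unit base length
deg = math.radians
s30, s60 = math.sin(deg(30)), math.sin(deg(60))

# Vertical reactions from the moment equations in the question
N = (0.5 * l * s30 * s30 * W + (l - 0.5 * l * s60 * s60) * W) / l   # at B
R = (0.5 * l * s60 * s60 * W + (l - 0.5 * l * s30 * s30) * W) / l   # at A

# Ladder BC alone: vertical force balance gives the vertical hinge force at C
F_v = W - N
# Moments about B (weight of BC acts at its midpoint)
BC   = l * s60
d_W  = 0.5 * BC * math.cos(deg(30))   # horizontal lever arm of BC's weight
d_Fv = BC * math.cos(deg(30))         # horizontal lever arm of F_v at C
d_Fh = BC * math.sin(deg(30))         # vertical lever arm of F_h at C
F_h = (W * d_W - F_v * d_Fv) / d_Fh

F = math.hypot(F_v, F_h)                     # magnitude of the hinge force
angle = math.degrees(math.atan2(F_v, F_h))   # direction above horizontal
```

This reproduces $N = 3W/4$, $R = 5W/4$, and a hinge force of $W/2$ at $30^\circ$ above the horizontal.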
{ "domain": "physics.stackexchange", "id": 7235, "tags": "homework-and-exercises, forces, statics" }
How to help the subway to save electric energy?
Question: You get in the middle of a subway car. The subway accelerates, coasts, decelerates, optionally recouping energy. Is there anything you can do to make the subway use less energy? Walk or roll forward or backwards during acceleration, coast or deceleration? Rolling on a bike backwards during acceleration with the generator and light on will extract energy, so that seems wrong. Walking is similar (it's like walking down a mountain, which will convert potential energy to heat). Would walking forward during acceleration help? Or changing position during the coast phase? I think walking back while decelerating with energy recuperation will give the subway extra energy. Answer: Yes you can. It might even be practical for very specific constraints. Essentially you have to become a kinetic energy storage device that is able to extract kinetic energy from the train during the braking phase and return it to the train during the restart/acceleration phase. You have to become a flywheel. Now I assume you do not want to be strapped to a rotating mechanical device that is part of the drivetrain, so you will become the linear equivalent of a flywheel. Step 1 During the braking phase, you 1) run or 2) roll on your bike from the back of the train towards the front. Because the train is decelerating with respect to the track, you will experience a forward acceleration which helps you accumulate velocity/KE inside the train. In option 1) you will actually assist in braking if you can achieve a sprint acceleration higher than the train's braking acceleration. In option 2) you are simply maintaining your personal kinetic energy with respect to the track even as the train loses its kinetic energy from braking. Remember...flywheel.... Step 2 Do not run out of room before the train stops. If you have to stop sprinting or rolling, the experiment ends and you achieved nothing.
The moment you begin to slow down, your deceleration in the train will cause the train to accelerate (and the train will just bleed off that accumulated KE in the form of heat after having to apply more brake). We need a really long train. Step 3 Hopefully before you reach the end of the really really really long train, it has come to a full stop and it is time for it to depart again. As it begins to accelerate from the track, stop running/rolling. Your accumulated kinetic energy will hopefully and mostly be transferred back to the train and assist in the acceleration. Step 4 Congratulations, you have achieved linear flywheel-hood by assisting both braking and acceleration of the train to reduce fuel costs. You might end up being hungry though so probably not worth the trouble.
{ "domain": "physics.stackexchange", "id": 51638, "tags": "newtonian-mechanics, energy, energy-conservation" }
How does pressure measured by sphygmomanometer translate directly to blood pressure?
Question: As per my understanding a sphygmomanometer when wrapped around the arm and inflated only measures the pressure of the air inside the cuff, doesn't it? How does that translate directly to the pressure value of blood flowing across the artery? Answer: Korotkoff sounds! The blood pressure measurement process is fairly cool, and goes like this. Inflate the cuff to well over plausible blood pressures (250mmHg or so). Slowly deflate the cuff while listening to the artery. When you start to hear sounds, that's when the systolic blood pressure is higher than the cuff pressure and the heart can squeeze a little blood through the cuff, which makes a little squirty noise. As cuff pressure continues to drop it stays between the systolic and diastolic blood pressures and therefore bloodflow stops and starts and creates audible turbulence. When the artery stops making noise, the cuff pressure is below the diastolic blood pressure and the cuff has no effect on the artery, so it goes back to laminar flow that doesn't make noise. Measuring the pressure of the cuff is just part of controlling the pressure of the cuff. You could conceivably measure blood pressure by submerging their arm in water(mercury would not require a conversion, but vats of mercury are unpopular in doctor's offices) and listening for Korotkoff sounds and measuring the depths where they start and stop. Edit: The actual question here is how does cuff pressure physically change the arterial surroundings, not how do we link the two clinically. The body is mostly water, and water is incompressible. For small strains(amount of deformation) flesh isn't that springy and mostly "flows" like water. When you pressurize the cuff around the arm, the flesh of the arm equilibrates to the cuff pressure the way balloons equilibrate to the air pressure, only less dramatically because water is mostly incompressible. 
For small pressures and large cuffs, therefore, the cuff pressure is the pressure on the artery wall, and by listening for Korotkoff sounds while varying the cuff pressure the blood pressure can be roughly established. The reason why you don't lose all blood flow to your extremities while diving underwater is because the systemic pressure on all of your body raises your internal blood pressure, making the net external pressure on any part of your body zero. (The cuff works by exerting a relative pressure difference on part of your body.) At high altitude the sphygmomanometer is still accurate-ish because the cuff pressure relative to the atmosphere is still the same as the blood pressure, relative to the atmosphere. If you had air for blood (or any compressible fluid) your veins would collapse underwater. This is why your ears hurt while diving deeply if you don't equalize them: your ear is full of compressible air, and at depth the air attempts to shrink more than the flesh around it, pulling things out of place painfully.
{ "domain": "biology.stackexchange", "id": 4210, "tags": "blood-pressure, measurement" }
de Sitter space vs de Sitter universe
Question: I have heard of the term de Sitter space. From this post user G. Smith writes, De Sitter spacetime is curved; specifically, it has the same positive scalar curvature at every point. Likewise, when I go to the Wikipedia page, we see In mathematical physics, n-dimensional de Sitter space (often abbreviated to dSn) is a maximally symmetric Lorentzian manifold with constant positive scalar curvature. It is the Lorentzian analogue of an n-sphere (with its canonical Riemannian metric). But I have also come across a similar term, which is the de Sitter universe. This Wikipedia page says, A de Sitter universe is a cosmological solution to the Einstein field equations of general relativity, named after Willem de Sitter. It models the universe as spatially flat and neglects ordinary matter, so the dynamics of the universe are dominated by the cosmological constant, thought to correspond to dark energy in our universe or the inflaton field in the early universe. When people are saying that de Sitter space has a constant positive scalar curvature, I'm assuming this implies that the spatial part of the spacetime follows spherical geometry (in particular its spatial slices are compact). On the other hand, a de Sitter universe is spatially flat (and so its spatial slices are not compact). Thus the de Sitter space and the de Sitter universe can't be the same thing, right? The two terms seem so similar, yet they describe different things. Am I correct in this assertion? I think it is important to confirm/disconfirm this to prevent any confusion. Are the above quotes accurate? Answer: They are the same manifold. 3+1D de Sitter spacetime has 3D spacelike slices of constant zero, negative, and positive curvature, as shown in the Wikipedia article in the sections "flat slicing", "open slicing" and "closed slicing" respectively.
(The same is true in Euclidean space, where you can embed planes, spheres and pseudospheres – though the Euclidean pseudosphere covers only a finite area of the hyperbolic plane, while in de Sitter space you can embed the whole thing.) I think the phrase "de Sitter universe" is not widely used, and the Wikipedia article with that title is low-profile and not accurate. You can cover a patch of de Sitter space, such as our universe in the future according to ΛCDM, with spatially flat FLRW coordinates in many different inequivalent ways, or with spatially elliptical/hyperbolic FLRW coordinates in many inequivalent ways (the flat/open/closed slicings in the first Wikipedia article are all FLRW metrics). They can be expanding ($\dot a>0$) or contracting. None is more correct than any other. In the present day it's a different story: the matter distribution breaks the symmetry, and only one FLRW coordinate system makes sense. For example, there are special spatial slices across which the CMBR temperature is constant, and those slices are flat/Euclidean. During slow-roll inflation, the time-varying inflaton potential breaks the symmetry, and surfaces of constant potential are flat. The spacetime curvature (Ricci scalar) of de Sitter space is constant, and that's what G. Smith and the first Wikipedia article are referring to when they talk about "scalar curvature".
{ "domain": "physics.stackexchange", "id": 87340, "tags": "cosmology, spacetime, terminology, definition, de-sitter-spacetime" }
how to calculate difference of two messages in callback function on same topic
Question: Hi Sir, I am actually reading my hand pose from a Leap Motion sensor and I want to calculate how much the hand moves in X direction. My question is how to read the message in callback function(as you know, the message is coming through a normal topic ) and show the difference between the previous message and the current one? Please take a look at my call back function: public: spaceNavToTwist() : spinner_(1) { joy_sub_ = n_.subscribe("leapmotion/data", 1, &spaceNavToTwist::joyCallback, this); // Changed "spacenav/joy" to topic "/leapmotion/data" twist_pub_ = n_.advertise<geometry_msgs::TwistStamped>("jog_arm_server/delta_jog_cmds", 1); joint_delta_pub_ = n_.advertise<jog_msgs::JogJoint>("jog_arm_server/joint_delta_jog_cmds", 1); spinner_.start(); ros::waitForShutdown(); }; private: ros::NodeHandle n_; ros::Subscriber joy_sub_; ros::Publisher twist_pub_, joint_delta_pub_; ros::AsyncSpinner spinner_; void joyCallback(const jog_msgs::leapros::ConstPtr& msg) { // Variables to store the previous values of handpose along X,Y and Z-Axes double previousX, previousY, previousZ; // Variables to store the current values of handpose along X,Y and Z-Axes double currentX, currentY, currentZ; // Get the initial values of handpose along X,Y and Z-Axes previousX = msg->palmpos.x; // Read the current values of handpose along X,Y and Z-Axes currentX = msg->palmpos.x; ROS_ERROR("previous %f", previousX); ROS_ERROR("current %f", currentX); ROS_ERROR("difference %f", currentX-previousX); I don't know why when I print the previous message and the current one out, they are always the same and difference is 0. 
Thanks a lot Originally posted by stevensu1838 on ROS Answers with karma: 27 on 2018-12-20 Post score: 0 Answer: There is something wrong here: // Get the initial values of handpose along X,Y and Z-Axes previousX = msg->palmpos.x; // Read the current values of handpose along X,Y and Z-Axes currentX = msg->palmpos.x; You are actually doing previousX = currentX since you assigned them the same value. What you want to do is to use global variables so that when the callback is called again you still have the value stored somewhere, I'll give you an example: //global variable that needs to be set to a default value double previousX = 0; void joyCallback(const jog_msgs::leapros::ConstPtr& msg) { double difference; difference = msg->palmpos.x - previousX; ROS_INFO("previous %f", previousX); previousX = msg->palmpos.x; ROS_INFO("current %f", previousX); ROS_INFO("difference %f", difference); } Originally posted by Delb with karma: 3907 on 2018-12-20 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by stevensu1838 on 2018-12-22: Hi buddy, thank you so much for your help. And I am right now calculating the speed of my hand in the following link. Can you please take a look? link
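The fix relies on state that outlives a single callback invocation. Stripped of ROS, the same previous-value pattern can be sketched in plain Python (names hypothetical):

```python
class DeltaTracker:
    """Keep the previous reading between callbacks and report the change."""

    def __init__(self):
        self.prev_x = None

    def callback(self, x):
        # First message: there is no previous value yet, so report no change
        if self.prev_x is None:
            self.prev_x = x
            return 0.0
        diff = x - self.prev_x      # change since the last callback
        self.prev_x = x             # remember for the next invocation
        return diff
```

The key point in both versions is the same: the previous value lives outside the callback (a member variable here, a global in the C++ answer), so it survives between invocations.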
{ "domain": "robotics.stackexchange", "id": 32195, "tags": "ros, ros-kinetic, message, topic, callbacks" }
If a system is deterministic, will it still be deterministic if time is reversed?
Question: If you were to drop a ball, it would be easy to calculate when it will hit the ground and how much energy will be absorbed by the ground (let's assume there's no air resistance and the ball does not bounce). If you were to then reverse time, would energy from the ground gather rapidly toward the location of the ball, creating a powerful but microscopic impulse capable of launching the ball to its original height? If the answer is yes, then the time reversed system should in fact be deterministic. If the answer is no, then there could not possibly be any way to predict when the ball will bounce or how high it could go. The system would not be deterministic. Which result is closer to reality? Answer: If the equations in whatever deterministic theory you are using are reversible in time, then you can use the current state to predict both the future and the past of your system just as easily, because it's all in the mathematics. However, deterministic equations can suffer from sensitivity to initial conditions. Generally, that happens in both directions of time too, so you can only predict the weather about as far into the future as you can into the past. You might be able to predict how much heat a bouncing ball will lose, and thereby predict the height it was dropped from if you know how many bounces it took, but not if you wait until the ball comes to a stop because stopping a ball is not going to happen using equations that are time reversible.
{ "domain": "physics.stackexchange", "id": 34393, "tags": "determinism, time-reversal-symmetry" }
how to restart dynamixel after overload error?
Question: Hi. I'm using the dynamixel_motor package to control my robot. Often my dynamixel shows an overload error because of tough usage. Once this error happens, I have to switch off my dynamixel and switch it on again. This way is not useful, so I would like to restart that dynamixel through a program after an overload error. Could you tell me how to do this? Originally posted by baozhang on ROS Answers with karma: 39 on 2016-07-09 Post score: 1 Answer: It depends on the model of Dynamixel you are using. Some of the newer models (XM and Pro series) have reboot commands builtin. For the others it is a combination of toggling the torque enable register and setting the torque limit register http://forums.trossenrobotics.com/showthread.php?7289-How-to-clear-Dynamixel-Overload-error&p=66440#post66440. Buggy behavior has also been noted in some AX-12 firmware releases, so the exact ordering of these steps may be different in some configurations http://forums.trossenrobotics.com/showthread.php?7289-How-to-clear-Dynamixel-Overload-error&p=66441#post66441. Finally, if overload occurs it's usually a good idea to review your design and determine how it can be avoided. I've burnt up MOSFETs on several AX-12s by ignoring and resetting overload errors. At the very least be sure to let the motor cool down if the temperature is also high. Originally posted by penguin8 with karma: 16 on 2016-07-11 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 25193, "tags": "dynamixel" }
At which point does a cell switch its function?
Question: In multicellular organisms there are many cells with different functions. How / when does a cell change its function compared to its parent? Is it done by the influence of other cells after it has been created? Answer: As MattDMo suggested there are so many possibilities. I would recommend taking a look at this information: Cellular differentiation wiki The Cell: A Molecular Approach. 2nd edition (NCBI) I will summarize some points, however note that this will not include every aspect. Every cell contains the same genome (same DNA), so what can make these cells differ so much from each other (e.g. neurons vs muscle cells)? As noted in the Wikipedia article: Since each cell, regardless of cell type, possesses the same genome, determination of cell type must occur at the level of gene expression I added this figure just to give a representation of the hierarchy. However probably you already knew this because you were asking about the mechanism of how these cells can become different from their parent cells. I will provide two important examples (again notice that this is just a simple summary): first let's talk about Cytoplasmic determinants Cytoplasmic determinants are special molecules which play a very important role during oocyte maturation, in the female's ovary (wiki) the cytoplasmic determinants are molecules such as mRNA or proteins which are e.g. unequally distributed in the egg cell, which means that if this cell divides one half will get different proteins (e.g. transcription factors (TFs)) and mRNA than the other half. Different TFs --> different gene expression --> different proteins --> different cells.
Secondly we have induction which is really important in cell differentiation: frequently we find that cells become different from one another as a result of inductive signals coming either from other cells or from their external environment (source) These two (induction and cytoplasmic determinants) are important to determine the fates of cells (we also have gastrulation, HOX genes, etc). But how do induction and cytoplasmic determinants cause a change in gene expression? The same is true for this, there are a lot of different ways this can happen (see gene silencing): Genomic Imprinting Paramutation Transposon silencing (or Histone Modifications) Transgene silencing Position effect RNA-directed DNA methylation I can write pages about cellular differentiation because this question is really broad. So I will advise you to read the following articles: Differential gene expression and development The process of differentiation Cellular differentiation
Is this language NP Hard?
Question: $L=\{$$($$m$,$w$,$n$$)$| $m$ is an encoding of a non-deterministic Turing machine, $w$ is any word/string in the closure of alphabet, i.e. $w\in\Sigma^*$, $n$ is any positive integer, i.e. $n\in\Bbb{Z}$ and $n>0$, and $m$ accepts $w$ for a maximum of $n$ moves/steps/operations $\}$ Alphabet is $\Sigma=\{0,1\}$ If this language is NP Hard then what reduction proves it? My effort to answer this question myself is: If $L$ is NP Hard then every NP language reduces to $L$ in polynomial time deterministically. More formally $L\in\Bbb{NPH}\implies\forall L'\in\Bbb{NP}:\;L'\le_PL$ Let $L'$ be any arbitrary NP language. According to the assumption that $L'\in\Bbb{NP}$ by definition of the complexity class $\Bbb{NP}$ exists non-deterministic Turing machine $m$ that decides $L'$ in polynomial time. This means that for every word/string $w\in\Sigma^*$: If $w\in L'$ then $m$ accepts $w$ for a maximum of $c\cdot N^k$ moves/steps/operations AND If $w\notin L'$ then $m$ rejects $w$ for a maximum of $c\cdot N^k$ moves/steps/operations Where $c$ and $k$ are constant non-negative integers and $N$ is the length of $w$. 
If $L'\le_PL$ then there exists a deterministic Turing machine $m'$ that computes a function $f:\Sigma^*\rightarrow\Sigma^*$ in polynomial time so that for every word/string $w\in\Sigma^*$: $w\in L'\iff f(w)\in L$ Let $m'$ be a deterministic Turing machine that in the beginning inserts the encoding of $m$ before the input word/string $w$, which takes constant time $\mathcal{O}(1)$, because the encoding of $m$ is a constant word/string and thus the length of the encoding of $m$ is a constant non-negative integer; then $m'$ computes the length of the input word/string $w$ by counting all symbols of $w$, which takes linear time $\mathcal{O}(N)$ where $N$ is the length of the input word/string $w$; then it raises the computed length to the power of the constant non-negative integer $k$, which takes polynomial time; then it multiplies the result by the constant non-negative integer $c$, which also takes polynomial time; and at last it appends the result after $mw$, so the content of the tape of $m'$ will be $(m,w,c\cdot N^k)$ where $n=c\cdot N^k$. After doing all these actions $m'$ halts. $m'$ is indeed a deterministic Turing machine that runs in polynomial time, and $w\in L'\iff (m,w,c\cdot N^k)\in L$ follows from the definitions of the complexity class $\Bbb{NP}$ and non-deterministic Turing machines. Hence by definition $L'\le_PL$ because of $m'$. Conclusion: For an arbitrary language $L'$ I showed that $L'\le_PL$ holds. Therefore every NP language reduces to $L$ in polynomial time deterministically and by definition $L$ is NP Hard. What I wrote above is supposed to be a proof that $L$ is NP Hard, but the proof doesn't reduce an already proven NP Hard language to $L$, so I think that I made some mistake. Answer: Indeed, this is the prototypical NP-complete language. Recall what NP-hardness means: For all languages $L'$ in NP, there is a polynomial time reduction from $L'$ to $L$.
Normally, when one wants to prove NP-hardness, one takes a language $L''$ which is itself already known to be NP-hard and establishes a reduction from $L''$ to $L$. But not so in this case! It is much more natural to directly construct a reduction from $L'$ to our $L$, only using the fact that $L'$ belongs to NP.
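The reduction in the question really is just the map $w \mapsto (m, w, c\cdot N^k)$; a sketch in Python, where the machine encoding and the time-bound constants are placeholders:

```python
def reduction(w, m_enc="<encoding of M>", c=2, k=3):
    """Map an instance w of L' to the triple (m, w, n) with n = c * N**k.

    m_enc stands in for the fixed encoding of the polynomial-time NTM
    deciding L'; c and k stand in for the constants of its time bound.
    """
    n = c * len(w) ** k   # the polynomial bound, itself computable in poly time
    return (m_enc, w, n)
```

Since the machine encoding is a fixed string and computing c * N**k is polynomial-time, the whole map runs in polynomial time, which is all the reduction needs.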
{ "domain": "cs.stackexchange", "id": 11376, "tags": "complexity-theory, turing-machines, np-hard, np, nondeterminism" }
Any Open Source Fast Wavelet transform libraries?
Question: I am in need of an open source library for computing Fast wavelet transforms (FWT) and Inverse fast wavelet transforms (IFWT) - this is to be part of a bigger code I am currently writing. The things I am looking for in the library: 1) Contains a good variety of wavelet families (Daub, Haar, Coif, etc.) 2) Ability to run in parallel - VERY IMPORTANT 3) Reasonable documentation, so that I can include new wavelets of my own without having to modify the complete source (maybe an OOP-based approach would help?) I am flexible about the language - C/C++/Fortran90/Fortran77/Python .... Any language would do for me, though I would prefer something which is optimized for speed and in parallel. So far, I have found PyWavelets - it looks good, but it is in Python (therefore considerably slow) and it doesn't run in parallel. I am going to use the DWT for processing really huge datasets, so speed is an important concern for me. It's understandable there may not be anything with all the requirements I mentioned. But I wanted to hear from the community if they have any suggestions. Answer: You can have a look at LTFAT's wavelet module http://ltfat.sourceforge.net/doc/wavelets/index.php It runs in Matlab/Octave with a backend written in C. It has a fairly large database of wavelet filters and new ones can be added easily. What exactly do you mean by 2) Ability to run in parallel - VERY IMPORTANT Should the computation itself be parallelized, or should it be possible to run several batches of FWT calculations? I guess that running several instances of Octave should do the trick in the latter case.
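For reference, the computation an FWT/IFWT pair performs can be sketched in minimal, serial, pure Python for the orthonormal Haar case on length-2^n signals (illustrative only, nothing like the speed or wavelet variety the question asks for):

```python
import math

def haar_fwt(x):
    """Full orthonormal fast Haar wavelet transform of a length-2**n signal."""
    x, out = list(x), []
    s = 1 / math.sqrt(2)
    while len(x) > 1:
        approx = [(a + b) * s for a, b in zip(x[0::2], x[1::2])]
        detail = [(a - b) * s for a, b in zip(x[0::2], x[1::2])]
        out = detail + out              # coarser detail levels go in front
        x = approx                      # recurse on the approximation
    return x + out                      # [approximation, coarse ... fine]

def haar_ifwt(c):
    """Inverse of haar_fwt: rebuild the signal level by level."""
    c = list(c)
    x, pos, s = c[:1], 1, 1 / math.sqrt(2)
    while pos < len(c):
        detail = c[pos:pos + len(x)]
        x = [v for a, d in zip(x, detail) for v in ((a + d) * s, (a - d) * s)]
        pos += len(detail)
    return x
```

A real library adds longer filters (Daubechies, Coiflets), boundary handling, and vectorization; the transform is also naturally parallel across signal blocks, which is why batch-level parallelism (e.g. several Octave instances, as the answer suggests) is often good enough.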
{ "domain": "dsp.stackexchange", "id": 11956, "tags": "wavelet" }
Cooling with fans in narrow space
Question: I'm building an LED ceiling lamp. The idea is to mix different tones of white to get the desired tone. It will initially have 120W LEDs and maybe more later. It will probably never run at full power but I want to be able to. It consists of two parts: an upper part where the PSU and electronics are, and a lower part where the LEDs are. The parts are separated by sheet metal. The dimensions are 1020x620x150 mm. The upper part's height is 50 mm. The upper part is sealed from the top (by the ceiling) and from the bottom (by the metal sheet). The upper part will have holes in it. I want to make the lamp as shallow as possible and for that reason I've come up with this idea for airflow generation: Use large fans to lower noise generation. Turn them on their side. To force air to move in one direction, block the top on one side of the fans, and the bottom on the other side (images 2, 4). Would this kind of configuration work or will I have to use small fans at high rpm?
{ "domain": "engineering.stackexchange", "id": 1984, "tags": "airflow, cooling" }
How to perform active search in point feature based monoSLAM?
Question: I am modifying an implementation of SLAM with a single camera, MonoSLAM [1]. Instead of image patches, I want to use feature points (ORB) to track landmarks. MonoSLAM uses an EKF framework. So for every feature tracked, I have a covariance matrix which defines the uncertainty of the measurement in the current camera frame. Had this been an image patch, as in Davison et al. [1], one has to center a sliding window on every pixel inside the ellipse defined by the covariance, then check for correlation of pixel intensities between the tracked patch and the patch from the new frame. Something like the figure below. For features, should I search in a similar fashion by calculating the descriptor for each pixel in the ellipse? or Since features come with their own detectors, should I leave it to the detector to find the list of features within the ellipse and then match them with my previous frame feature? [1]: Davison, Andrew J., et al. "MonoSLAM: Real-time single camera SLAM." IEEE Transactions on Pattern Analysis & Machine Intelligence 6 (2007): 1052-1067. Answer: This is not the answer to your question, but ORB is a feature detection algorithm. Feature detection and tracking are different. Once you have found a feature it is better to track it with a feature tracking algorithm such as KLT. You can also track the features with feature matching algorithms but it is not very efficient. Since features come with their own detectors, should I leave it to the detector to find the list of features within the ellipse and then match them with my previous frame feature? -> Yes. The ellipse is usually used to define a Mahalanobis distance that properly accounts for the uncertainty. If the Mahalanobis distance of two matched features is small enough, the match can be used. This is how you find matched features in EKF SLAM. Just remember that this is quite an old approach.
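The per-feature gating the answer alludes to (accepting a match only when the Mahalanobis distance of the innovation is small) is typically a chi-square test; a minimal 2-D sketch (the 5.991 default is the 95% chi-square value for 2 degrees of freedom; the plain-list matrix layout is an assumption for illustration):

```python
def mahalanobis_gate(pred, meas, cov, chi2_thresh=5.991):
    """Accept a candidate feature whose innovation lies inside the search
    ellipse. cov is the 2x2 innovation covariance [[a, b], [b, c]];
    the default threshold is the 95% chi-square bound for 2 DOF."""
    dx, dy = meas[0] - pred[0], meas[1] - pred[1]
    a, b, c = cov[0][0], cov[0][1], cov[1][1]
    det = a * c - b * b                  # positive for a valid covariance
    # Squared Mahalanobis distance: innovation^T * inverse(cov) * innovation
    d2 = (c * dx * dx - 2 * b * dx * dy + a * dy * dy) / det
    return d2 <= chi2_thresh, d2
```

With an isotropic covariance of 4 px^2 (std 2 px), a candidate 2 px away passes (d^2 = 1) while one 10 px away is rejected (d^2 = 25), regardless of whether the candidates came from dense descriptor evaluation or from a detector's feature list.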
{ "domain": "robotics.stackexchange", "id": 1846, "tags": "slam, computer-vision, ekf, monocular, data-association" }
Stable marriage problem preferential to asking side
Question: Watching this YouTube video: https://www.youtube.com/watch?v=w1leqkpDaRw it describes a property of the stable marriage problem: the asking side gets a better deal than the asked side. Meaning that the result may differ if the women ask the men instead of the men asking the women. I have tested this and found it to be true, but I cannot find any scientific papers on this, so I was hoping that some of you might know of one or more? Answer: One place to look at is the classic book The stable marriage problem. The link provides a relevant excerpt, showing that the matching produced by the standard Gale–Shapley algorithm is male optimal and female pessimal: any man gets the best possible partner (in his view) he can get in any perfect matching, and any woman gets the worst possible partner (in her view) she can get in any perfect matching. If we switch men and women, we will obtain the female optimal, male pessimal matching. The quoted book, as well as Knuth's book on the subject, contain other material on stable marriages, including the number of different stable marriages, the lattice structure of stable marriages, sampling a random stable marriage, related problems such as the stable roommate problem, and much more. From another angle, a distinctly different algorithm for the stable marriage problem has been developed by Subramanian and picked up by Feder; see for example Section 6 of this paper.
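The asymmetry is easy to reproduce directly: the deferred-acceptance algorithm below is proposer-optimal by construction, and swapping which side proposes can change the matching. A compact Python sketch (names hypothetical):

```python
def gale_shapley(proposer_prefs, reviewer_prefs):
    """Proposer-optimal stable matching; prefs map name -> ordered list."""
    # rank[r][p]: position of proposer p in reviewer r's list (lower = better)
    rank = {r: {p: i for i, p in enumerate(lst)}
            for r, lst in reviewer_prefs.items()}
    free = list(proposer_prefs)           # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                            # reviewer -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]  # best reviewer not yet tried
        next_choice[p] += 1
        if r not in match:
            match[r] = p                  # reviewer tentatively accepts
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])         # reviewer trades up; old partner freed
            match[r] = p
        else:
            free.append(p)                # rejected; p will try the next choice
    return {p: r for r, p in match.items()}
```

On the small instance in the test below, men-proposing gives every man his first choice, while women-proposing gives every woman hers, and the two matchings differ, which is exactly the effect the video describes.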
{ "domain": "cs.stackexchange", "id": 3110, "tags": "algorithms, assignment-problem" }
Velocity of an orbiting body
Question: The velocity of an orbiting body is given by: $$v = \sqrt{\frac{Gm}{r}}$$ I was trying to derive this formula earlier but I was struggling with incorporating $G$ into my derivation. I tried looking at centripetal force in order to derive the equation: $$F = ma = \frac{mv^2}{r}$$ $$F = \frac{mv^2}{r}$$ So $$v = \sqrt{\frac{Fr}{m}}$$ But this didn't seem to be the right way to go. Answer: The force of gravity equals mass times acceleration so: $ F_g = ma $ We also know that the force of gravity equals the gravitational constant, G, multiplied by the mass of Earth and the mass of the satellite, all over the distance from the center of Earth to the satellite squared. $F_g = \frac{GM_em }{r^2} $ And we also know that centripetal acceleration equals: $a=\frac{v^2}{r}$ Therefore, $\frac{GM_em }{r^2} = m(\frac{v^2}{r})$ Solving for $v$, we get: $v=\sqrt{\frac{GM_e}{r}}$
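As a numerical check of the final formula (assuming standard values for $G$ and Earth's mass, and an illustrative 400 km altitude, roughly the ISS orbit):

```python
import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
M_e = 5.972e24        # mass of Earth, kg
r = 6.371e6 + 400e3   # Earth's radius plus an assumed 400 km altitude, m

v = math.sqrt(G * M_e / r)
print(f"{v:.0f} m/s")  # about 7700 m/s, the familiar low-Earth-orbit speed
```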
{ "domain": "physics.stackexchange", "id": 55499, "tags": "newtonian-mechanics, classical-mechanics, newtonian-gravity, orbital-motion" }
Peskin and Schroeder: derivation of Dirac fields commutator
Question: I'm perplexed by the following unnumbered equation on page 54 of Peskin & Schroeder, right between $(3.92)$ and $(3.93)$ $$ [\psi_a(x),\overline{\psi}_b(y)]=\int\frac{d^3p}{(2\pi)^3}\frac{1}{2E_\mathbf{p}}\sum_s\left(u_a^s(p) \overline{u}_b^s(p)e^{-ip\cdot(x-y)}+v_a^s(p) \overline{v}_b^s(p)e^{ip\cdot(x-y)}\right)\\=\int\frac{d^3p}{(2\pi)^3}\frac{1}{2E_\mathbf{p}}\left((\not p+m)_{ab}e^{-ip\cdot(x-y)}+(\not p-m)_{ab}e^{ip\cdot(x-y)}\right)\\=(i\not\partial_x+m)_{ab}\int\frac{d^3p}{(2\pi)^3}\frac{1}{2E_\mathbf{p}}\left(e^{-ip\cdot(x-y)}-e^{ip\cdot(x-y)}\right)\\=(i\not\partial_x+m)_{ab}[\phi(x),\phi(y)]$$ where $\psi_a$ are Dirac fields. I have questions both about the notation and the actual content. What is meant by the subscript $ab$ on operators? E.g. $(\not p +m)_{ab}.$ On the fields I interpreted it as if we had multiple fields with the same Lagrangian, i.e. a total Lagrangian density given by $$\mathcal{L}=\sum_a \overline{\psi}_a(i\not \partial -m)\psi_a$$ but the two subscripts don't make sense to me written that way. Looking at this computation, it seems like $$u_a^s(p) \overline{u}_b^s(p)=(\not p+m)_{ab}$$ (whatever that means) and similar for the antiparticles, but I would've expected $$ u_a^s(p) \overline{u}_b^s(p)=2m\delta_{ab}.$$ How can I resolve this? Answer: (1) Recall the written-out form $${\not p } + m = \gamma^\mu p_\mu + m 1_4$$ and so $$({\not p } + m )_{ab} = (\gamma^\mu p_\mu + m 1_4)_{ab} $$ is just denoting the $ab$ entry of the $4\times 4$ matrix. (2) $$u_a^s(p) \overline{u}_b^s(p)=(\not p+m)_{ab}$$ Not quite. Rather, $$\sum_s u_a^s(p) \overline{u}_b^s(p)=(\not p+m)_{ab}.$$ This is shown on page 34 of Fun with Spinor indices. Your concern about thinking that $"u_a^s(p) \overline{u}_b^s(p)=2m\delta_{ab}"$ is also addressed on that page. 
Edit: So in short, $ab$ are, in general, spinor indices, meaning that $\psi_a$ is the $a$-th component of the Dirac spinor $\psi$, and, say, $\bar{\psi}_a\gamma^\mu_{ab}\psi_b = \bar{\psi} \gamma^\mu \psi$, where the latter is found by the usual matrix multiplication. That is, $ab$ represent the components of a matrix in spinor space.
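The spin sum can also be verified numerically in the rest frame, where $p=(m,0,0,0)$ and, in the Weyl (chiral) basis, $u^s(p)=\sqrt{m}\,(\xi^s,\xi^s)$. This is an illustrative sketch, not code from the book:

```python
import numpy as np

Z = np.zeros((2, 2))
I2 = np.eye(2)
gamma0 = np.block([[Z, I2], [I2, Z]])        # gamma^0 in the Weyl basis

m = 2.0                                      # arbitrary test mass
spin_sum = np.zeros((4, 4))
for xi in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    u = np.sqrt(m) * np.concatenate([xi, xi])  # rest-frame spinor u^s(p)
    ubar = u @ gamma0                          # u is real here, so u-dagger = u^T
    spin_sum += np.outer(u, ubar)              # outer product u_a * ubar_b

pslash_plus_m = m * gamma0 + m * np.eye(4)   # p-slash + m at p = (m, 0, 0, 0)
print(np.allclose(spin_sum, pslash_plus_m))  # True
```

A single spin term gives a rank-one matrix, not $2m\,\delta_{ab}$; only after summing over $s$ does the full $(\not p+m)_{ab}$ appear, which is the point the answer makes.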
{ "domain": "physics.stackexchange", "id": 54777, "tags": "quantum-field-theory, notation, dirac-equation, spinors" }
Why did Venus not lose its atmosphere without a magnetic field?
Question: It is often stated that the magnetosphere not only shields the planet from cosmic radiation, but also prevents atmospheric loss. Why then did Venus not lose most of its atmosphere if it doesn't have a strong magnetic field? Is there another mechanism at play, or is the statement about the importance of the magnetosphere to atmospheric loss prevention wrong? Answer: There is an interesting article on the magnetosphere of Venus on the ESA Science and Technology site. You can find the article here and it will probably answer your question. The article states, like you did, that some planets, like Earth, Mercury, Jupiter and Saturn, have magnetic fields internally induced by their iron core. These magnetic fields shield the atmosphere from particles coming from solar winds. It also confirms your statement that Venus lacks this intrinsic magnetosphere to shield its atmosphere from the solar winds. The interesting thing, however, is that spacecraft observations, like the ones made by ESA's Venus Express, have shown that the Venusian ionosphere's direct interaction with the solar winds causes an externally induced magnetic field, which deflects the particles from the solar winds and protects the atmosphere from being blown away from the planet. However, the article also explains that the Venusian magnetosphere is not as protective as Earth's magnetosphere. Measurements of the Venusian magnetic field show several similarities to Earth's, such as deflection of the solar winds and the reconnections in the tail of the magnetosphere, causing plasma circulations in the magnetosphere. The differences might explain the fact that some gases and water are lost from the Venus atmosphere. The magnetic field of Venus is about 10 times smaller than Earth's magnetic field. The shape of the magnetic field is also different. Earth has a sharper magnetotail facing away from the Sun and Venus has a more comet-shaped magnetotail. 
During the reconnections most of the plasma is lost from the atmosphere. The article explains therefore that although Venus does not have an intrinsic magnetic field, the interaction of the thick atmosphere with the solar winds causes an externally induced magnetic field that deflects the particles of the solar winds. The article suggests, however, that the different magnetic field may mean that lighter gases are not as protected and therefore are lost into space. I hope this sufficiently answers the question.
{ "domain": "astronomy.stackexchange", "id": 5126, "tags": "planetary-atmosphere, magnetic-field, venus" }
rosmake does not respect ROS_PARALLEL_JOBS or --pjobs
Question: rosmake in ROS Fuerte does not seem to respect either the ROS_PARALLEL_JOBS=-jN environment variable or the --pjobs=N command line option for compiling with N+1 processes. Instead, the number of rosmake instances spawned is always equal to the number of cores detected (regardless of whether or not that is more or fewer than the number specified via ROS_PARALLEL_JOBS or --pjobs). The ROS wiki page on environment variables says setting ROS_PARALLEL_JOBS should work, but I have multiple installations of ROS that display this behavior. Has anyone else had this problem? Originally posted by yoos on ROS Answers with karma: 78 on 2013-02-26 Post score: 0 Answer: That bug is known: https://code.ros.org/trac/ros/ticket/4036 https://github.com/ros/ros/issues/4 and a fix will soon be released for Fuerte and later. In the meantime a workaround is using this (for experts): export MAKEFLAGS="-j8" Originally posted by KruseT with karma: 7848 on 2013-02-27 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 13088, "tags": "ros, ros-fuerte, rosmake, parallel, environment-variables" }
A crude clojure progress reporting function
Question: The purpose of this function is to report processing progress to the terminal. It loops through an array of maps that contain two properties: :sequence and :function. Within each sequence-function-map, it will loop through the sequence, and run the function on each item in the sequence. This is my first Clojure program that will be used in a production environment. This particular function does the job, but I suspect that it's not very lispy. (defn run-with-reporting [sequence-function-maps stop-time] (def beginning-time (java.util.Date.)) (let [seq-counts (map #(count (:sequence %)) sequence-function-maps)] (def total-sequence-counts (reduce + seq-counts))) (println (str "total counts = " total-sequence-counts)) (def prog-rpt-ref (ref {:todo total-sequence-counts, :done 0, :total-time-taken 0})) (doseq [sequence-function-map sequence-function-maps] (let [sequence (:sequence sequence-function-map), function (:function sequence-function-map)] (loop [loop-seq sequence] (if (and stop-time (clj-time/after? (clj-time/now) stop-time)) (println "stopped at " (clj-time/now) ". 
Requested stop at " stop-time) (if loop-seq (do (def item (first loop-seq)) (def start-time (java.util.Date.)) (function item) (def end-time (java.util.Date.)) (def time-delta (- (.getTime end-time) (.getTime start-time))) (let [derefd-rpt (deref prog-rpt-ref)] (dosync (alter prog-rpt-ref assoc :done (inc (:done derefd-rpt)), :total-time-taken (+ (:total-time-taken derefd-rpt) time-delta)))) (let [derefd-rpt (deref prog-rpt-ref)] (let [average-time (/ (:total-time-taken derefd-rpt) (:done derefd-rpt))] (println "Avg time / each = " (hrs-min-sec average-time) ", Estimated total time left = " (hrs-min-sec (* average-time (- (:todo derefd-rpt) (:done derefd-rpt))))))) (recur (next loop-seq)))))))) (let [derefd-rpt (deref prog-rpt-ref)] (println "Total time taken = " (hrs-min-sec (- (.getTime (java.util.Date.)) (.getTime beginning-time))) ", Done = " (:done derefd-rpt) "/" (:todo derefd-rpt)))) Here's a testing function (defn test-run-w-reporting [stop-clj-time] (def testing-func (fn [item] (. java.lang.Thread sleep (rand 1000)))) (let [sequence-function-maps [{:sequence (range 30), :function testing-func}, {:sequence (range 15), :function testing-func}]] (run-with-reporting sequence-function-maps stop-clj-time))) Answer: I will point out stuff that I'd do differently, though I am by no means a clojure expert, I do have some experience in it. Numbered items will reappear inside the code as comments. no DEF inside defn body def will define the var for your entire namespace as such, you will pollude your namespace with temporaries use LET instead no DO necessary inside the body of IF furthermore, I find it more idiomatic to use WHEN in the case that no else branch exists using @ instead of DEREF could reduce verbosity by using UPDATE-IN instead of INC, you can simply pass a transformation function note the vector notation however: UPDATE-IN takes a vector of keywords You can pass a partial function, i.e. 
a function which is partially applied as the transformation function to update the total-time-taken No need to nest LETs, you can write all definitions into a single let Code: (defn run-with-reporting [sequence-function-maps stop-time] (let [beginning-time (java.util.Date.) ;; 1. seq-counts (map #(count (:sequence %)) sequence-function-maps) total-sequence-counts (reduce + seq-counts) prog-rpt-ref (ref {:todo total-sequence-counts, :done 0, :total-time-taken 0})] (println (str "total counts = " total-sequence-counts)) (doseq [sequence-function-map sequence-function-maps] (let [sequence (:sequence sequence-function-map), function (:function sequence-function-map)] (loop [loop-seq sequence] (if (and stop-time (clj-time/after? (clj-time/now) stop-time)) (println "stopped at " (clj-time/now) ". Requested stop at " stop-time) (when loop-seq ;; 2. (let [item (first loop-seq) start-time (java.util.Date.)] (function item) (let [end-time (java.util.Date.) time-delta (- (.getTime end-time) (.getTime start-time)) derefd-rpt @prog-rpt-ref] ;; 3. (dosync (alter prog-rpt-ref update-in ;; 4. [:done] inc, [:total-time-taken] (partial + time-delta))) ;; 5. (let [derefd-rpt @prog-rpt-ref ;; 3. & 6. average-time (/ (:total-time-taken derefd-rpt) (:done derefd-rpt))] (println "Avg time / each = " (hrs-min-sec average-time) ", Estimated total time left = " (hrs-min-sec (* average-time (- (:todo derefd-rpt) (:done derefd-rpt))))))) (recur (next loop-seq)))))))) (let [derefd-rpt (deref prog-rpt-ref)] (println "Total time taken = " (hrs-min-sec (- (.getTime (java.util.Date.)) (.getTime beginning-time))) ", Done = " (:done derefd-rpt) "/" (:todo derefd-rpt)))) ) I would try to reduce verbosity and increase readability (in the literate sense) by utilising descriptive helper functions - current-time instead of (.getTime (java.util.Date.)) Personally I like to keep the amount of variables as low as possible. 
Variables are great for producing side-effects, describing complex results or for when you need a result several times. In the case of the ref, writing @ instead of using a temporary seems more natural to me. As such: I would try to reduce the amount of side-effects and use descriptive functions to calculate complex results and place them where they belong without temporary variables. You can use destructuring for maps (let [{:keys [key1 key2]} mymap] ...) will let-bind key1 and key2 by extracting :key1 & :key2 from mymap Code: (defn current-time "Return the current time. Same as (.getTime (java.util.Date.))." [] (.getTime (java.util.Date.))) (defn run-with-reporting [sequence-function-maps stop-time] (let [beginning-time (current-time) ;; 1. seq-counts (map #(count (:sequence %)) sequence-function-maps) total-sequence-counts (reduce + seq-counts) prog-rpt-ref (ref {:todo total-sequence-counts, :done 0, :total-time-taken 0})] (println (str "total counts = " total-sequence-counts)) (doseq [sequence-function-map sequence-function-maps] (let [{:keys [sequence function]} sequence-function-map] ;; 9. (loop [loop-seq sequence] (if (and stop-time (clj-time/after? (clj-time/now) stop-time)) (println "stopped at " (clj-time/now) ". Requested stop at " stop-time) (when loop-seq ;; 2. (let [item (first loop-seq) start-time (current-time)] ;; 7. (function item) (dosync (alter prog-rpt-ref update-in ;; 4. [:done] inc, [:total-time-taken] (partial + (- (current-time) start-time)))) ;; 5., 7. & 8. (let [average-time (/ (:total-time-taken @prog-rpt-ref) (:done @prog-rpt-ref))] ;; 8. (println "Avg time / each = " (hrs-min-sec average-time) ", Estimated total time left = " (hrs-min-sec (* average-time (- (:todo @prog-rpt-ref) (:done @prog-rpt-ref)))))) ;; 8. (recur (next loop-seq)))))))) (println "Total time taken = " (hrs-min-sec (- (current-time) beginning-time)) ", Done = " (:done @prog-rpt-ref) "/" (:todo @prog-rpt-ref)))) Now, let's take a look at the function itself. 
In general, when writing in clojure you should try to keep your functions generic and simple. By keeping them simple and focusing on only a single task, they are usually easier to compose and reuse. I would thus recommend splitting up your function. In your case what you want to do is really: map a function over a sequence: clojure already provides a function for that, namely MAP. stop the mapping operation if a specified time has been reached. print out the average time per call. print out total time taken. The first point is trivial. For the second point, we can exploit the fact that clojure has lazy sequences. This allows us to use the built-in TAKE-WHILE function with a predicate which checks the time and stops when it has passed your stop-time: (defn take-until "Returns a lazy seq of items from coll until the STOP-TIME has been reached." [stop-time coll] (take-while (fn [item] (if (not (> (current-time) stop-time)) true (do (println "stopped at " (current-time) ". Requested stop at " stop-time) false))) coll)) The third point can be achieved by simply mapping over a collection and printing out the necessary information. This again is really just what you did, but refactored into a function that works on any sequence: (defn measure-coll-retrieval "Returns a lazy seq of items from coll. Will print to stdout the average time between element extractions." 
;; you can use destructuring inside the argument list as well, here ;; we're additionally specifying some defaults for when the caller ;; does not provide a value [coll & {:keys [start-count total-count] :or {start-count 0 total-count nil}}] (let [beginning-time (current-time)] (map-indexed (fn [index item] (let [index (+ start-count index 1) average-time (/ (- (current-time) beginning-time) index)] (print "Avg time / each = " (hrs-min-sec average-time)) (if total-count (println ", Estimated total time left = " (hrs-min-sec (* average-time (- total-count index)))) (println)) item)) coll))) Before writing the reporting function, a little helper so we can get around seq-chunking in clojure: (defn unchunk "takes a chunked sequence and turns it into an unchunked sequence" [s] (lazy-seq (when-let [[x] (seq s)] (cons x (unchunk (rest s)))))) Now we can write the actual reporting function: (defn run-with-reporting [sequence-function-maps stop-time] (let [beginning-time (current-time) ;; 1. seq-counts (map (comp count :sequence) sequence-function-maps) total-sequence-counts (reduce + seq-counts) intermed-count (atom 0)] (println (str "total counts = " total-sequence-counts)) ;; note how it is possible to immediately destructure the map (doseq [{:keys [sequence function]} sequence-function-maps] (swap! intermed-count + (count ;; this is where everything happens: apply function, ;; take-until stop-time reached & measure average time (measure-coll-retrieval (take-until stop-time (map function (unchunk sequence))) :start-count @intermed-count :total-count total-sequence-counts)))) (println "Total time taken = " (hrs-min-sec (- (current-time) beginning-time)) ", Done = " @intermed-count "/" total-sequence-counts))) (defn test-run-w-reporting [stop-clj-time] (def testing-func (fn [item] (. 
java.lang.Thread sleep (rand 1000)))) (let [sequence-function-maps [{:sequence (range 30), :function testing-func}, {:sequence (range 15), :function testing-func}]] (run-with-reporting sequence-function-maps stop-clj-time))) I hope this has helped you some. Kind regards, Thomas
{ "domain": "codereview.stackexchange", "id": 1162, "tags": "beginner, clojure, lisp" }
Naive unification of scalar QFT and GR is possible?
Question: I am thinking about the Klein-Gordon equation with curved (non-diagonal) metrics. Is it possible? Does it have some inherent contradiction? If yes, what? If no, what is this combined formula? Answer: The relativistic Schrödinger equation is known as the Klein-Gordon equation, and is the pre-quantized version of the quantum field theory of a noninteracting scalar field. If you attempt to couple this to general relativity, you get what is known as the Einstein-Klein-Gordon equation. If you attempt to naïvely quantize this theory, you encounter all of the usual issues with renormalizability of the gravitational degrees of freedom. Additionally, you run into issues when trying to impose causality conditions, since the lightcones are now dynamically determined: you have to know the gravitational field in order to know how your degrees of freedom can propagate. Even pure gravity does not quantize cleanly due to these issues, let alone once you add in a scalar field.
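For reference, a sketch of the combined system the question asks for, in natural units ($c=\hbar=1$) with mostly-plus signature (sign conventions vary by textbook): the curved-space Klein-Gordon equation coupled to Einstein's equations sourced by the scalar field's stress-energy.

```latex
% Curved-space Klein-Gordon equation (covariant d'Alembertian):
\frac{1}{\sqrt{-g}}\,\partial_\mu\!\left(\sqrt{-g}\,g^{\mu\nu}\,\partial_\nu\phi\right)
  - m^2 \phi = 0
% Einstein's equations, sourced by the scalar field's stress-energy:
G_{\mu\nu} = 8\pi G\, T_{\mu\nu}, \qquad
T_{\mu\nu} = \partial_\mu\phi\,\partial_\nu\phi
  - g_{\mu\nu}\left(\tfrac{1}{2}\,g^{\alpha\beta}\,\partial_\alpha\phi\,\partial_\beta\phi
  + \tfrac{1}{2}\,m^2\phi^2\right)
```

As a classical field system this is perfectly consistent; the difficulties the answer describes only appear upon quantization.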
{ "domain": "physics.stackexchange", "id": 14596, "tags": "quantum-field-theory, general-relativity, renormalization, klein-gordon-equation" }
Cosmic ray hazards
Question: The Pierre Auger Observatory site mentions the detection of a 3E20 eV (48 J) cosmic ray whose energy, well above the GZK cutoff, was based on an analysis of its atmospheric shower. This was equivalent to the kinetic energy of a baseball with a speed of 79.5 m/s or 177 mph. Of course, cosmic rays with such ultra-high energies are extremely rare. What kind of damage would occur if an astronaut or a space vehicle encountered such a cosmic ray? How would the damage differ from that from the hypothetical 79.5 m/s baseball? Answer: One must also keep in mind that it is the particle, not the shower, that goes through the astronaut in dmckee's estimate above, where he treats the relativistic particle going through matter. The shower in your question, which gave the energy estimate of the parent particle, is generated by cascade/sequential collisions of deep-angle scattering over a long path. The energy is not released in one go unless the astronaut is very unlucky. The deep inelastic scattering cross-section at those energies is still not up to barn values (a barn is about the size of a uranium nucleus), so the astronaut would have to be very unlucky even to get one energetic scatter, let alone to start a shower.
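The headline unit conversion is easy to check (the electron-volt value below is the exact SI one; the baseball comparison itself depends on the mass assumed, which the question does not state):

```python
EV_TO_J = 1.602176634e-19    # joules per electron-volt (exact SI value)

energy_ev = 3e20             # the Auger event's estimated primary energy
energy_j = energy_ev * EV_TO_J
print(f"{energy_j:.1f} J")   # 48.1 J, matching the figure in the question
```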
{ "domain": "physics.stackexchange", "id": 620, "tags": "particle-physics, energy, radiation, cosmic-rays" }
How is it possible for trajectory and streamline to coincide in case of stationary velocity field?
Question: In continuum mechanics, the definition of a trajectory or pathline is the locus of the positions occupied by a given particle in space throughout time. And streamlines are a family of curves which, at every instant in time, are everywhere tangent to the velocity field. For a stationary velocity field, trajectories and streamlines coincide. How is this possible? Even if the velocity didn't depend on time, and was only changing with the change of position, it doesn't make sense for the streamline and trajectory to coincide because they hold different physical meanings. Answer: How is this possible? Even if the velocity didn't depend on time, and was only changing with the change of position, it doesn't make sense for the streamline and trajectory to coincide because they hold different physical meanings. The only thing that represents a single physical thing is a trajectory. Pathlines, streaklines, and streamlines can all be thought of as aggregations of trajectories that were assembled using different rules. They are traces in space, and somebody decided to make life easy and define them in a way that they are all congruent for a constant velocity field. A trajectory is a point in space and time. If you start with a velocity field, you need an initial point and time to get started. The trajectory point is then found by integrating to the final time. $$P\small{(x_0\,,y_0\,,t_0\,,t_1)}\normalsize{ =} \int_{t_0}^{t_1} \mathbf{V}(\small{x(t)\,,y(t)\,,t}\,\normalsize{)\,dt}$$ If $\mathbf{V}$ is a stationary field, several things are simpler. $\mathbf{V}$ doesn't have a time parameter, and the bounds of the integral can be shifted by any constant we like. $$P\small{(x_0\,,y_0\,,t_1)}\normalsize{ =} \int_{0}^{t_1} \mathbf{V}(\small{x(t)\,,y(t)}\,\normalsize{)\,dt}$$ A pathline is a collection of trajectory points that all have the same starting point $P$, with $t_1$ being anything. A streamline is basically the same thing, but we travel across the field infinitely fast. 
No time elapses. But this makes collecting data points from the integral difficult. So some clever person realized that the same curve can be gotten by parameterizing the curve differently. Instead of using time, we can use distance as the variable of integration. And instead of integrating the velocity, we integrate the normalized velocity. So $s$ is the new variable of integration, and it is related to $t$ by $ds=|\mathbf{V}\small{(x\,,y)}\normalsize{|\,dt}$ And a streampoint becomes $\displaystyle P\small{(x_0\,,y_0\,,s_1)}\normalsize{ =} \int_{0}^{s_1} \mathbf{v}(\small{x(s)\,,y(s)}\,\normalsize{)\,ds}$ where $\displaystyle\mathbf{v}=\frac{\mathbf{V}}{|\mathbf{V}|}$ So what we are doing is reparameterizing the integral to get rid of an infinity problem and provide nice bounds on the definite integral. This parameterization is different at each point in the field, but it all comes back together when you integrate over the field. Different parameterizations, same shaped curve. And that's okay, because with pathlines, streakline, and streamlines, we don't keep an index of the parameter values that produced a given point the way we do with a trajectory. A trajectory should retain this timestamp info. Now if you want to test your calculus chops, assume $\mathbf{V}$ isn't stationary, and $ds=|\mathbf{V}\small{(x\,,y\,,t)}\normalsize{|\,dt}$ and figure out what the real restrictions on $\displaystyle\mathbf{v}=\frac{\mathbf{V}}{|\mathbf{V}|}$ are such that streamlines and pathlines are the same. A constant $\mathbf{V}$ is overly strict, there are certain $\mathbf{V}(t)$ for which this is true as well.
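The claim that both parameterizations trace the same curve can be checked numerically. This is a sketch using a rigid-rotation field $\mathbf{V}=(-y,x)$ (an illustrative choice; any stationary field would do), integrating the raw field (pathline) and the normalized field (streamline) from the same seed:

```python
import math

def V(x, y):
    """Stationary rigid-rotation field; its integral curves are circles."""
    return (-y, x)

def v_hat(x, y):
    """Same direction field, normalized to unit speed (the s-parameterization)."""
    vx, vy = V(x, y)
    speed = math.hypot(vx, vy)
    return (vx / speed, vy / speed)

def rk4_step(p, h, field):
    """One classical Runge-Kutta step of dp/dt = field(p)."""
    x, y = p
    k1 = field(x, y)
    k2 = field(x + h / 2 * k1[0], y + h / 2 * k1[1])
    k3 = field(x + h / 2 * k2[0], y + h / 2 * k2[1])
    k4 = field(x + h * k3[0], y + h * k3[1])
    return (x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

pathline, streamline = [(2.0, 0.0)], [(2.0, 0.0)]
for _ in range(1000):
    pathline.append(rk4_step(pathline[-1], 0.01, V))          # t-parameterized
    streamline.append(rk4_step(streamline[-1], 0.01, v_hat))  # s-parameterized

# Both traces stay on the radius-2 circle: the same curve, traversed at
# different rates (the pathline moves at speed 2, the streamline at speed 1).
drift = max(abs(math.hypot(x, y) - 2.0) for x, y in pathline + streamline)
print(drift)
```

After the same number of steps the two integrations sit at different points of the circle, which is exactly the "same curve, different parameterization" statement above.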
{ "domain": "engineering.stackexchange", "id": 3898, "tags": "fluid-mechanics, solid-mechanics" }
What DSP techniques can be used to change the timbre of complex tone?
Question: I'm experimenting with applying unusual 'spectral mangling' type special effects to audio. 1) What DSP techniques can be used to change the timbre of a complex tone, such as that produced by the voice or a musical instrument? Are there any ways to manipulate the level of specific frequencies in the spectral profile of a complex tone? E.g., assuming a tone with a fundamental frequency of 440 Hz, use some DSP method to augment, for example, the 2nd overtone at 1320 Hz, or remove another peak at, say, 3 kHz. What's the best way to remove specific frequencies, or as narrow bands of frequencies as possible, using DSP? I use mixing/mastering, offline, and real-time DSP VSTs, standalone programs, and outboard effects units, but am interested in any methods which could work. 2) What alternatives (if any) are there to using techniques which introduce significant ringing or other artifacts which degrade the sound? 3) Are there any ways to 'clean up' audio after processing, removing artifacts which have been introduced? Answer: Phase vocoders with custom resynthesizers can be used for this, as in time-pitch stretching code that attempts to preserve the spectral formant envelope. But phase vocoder analysis/resynthesis is not artifact-free.
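As a sketch of direct spectral manipulation for question 1 (a toy single-frame FFT edit on a synthetic tone, not production-ready; a real effect would process windowed, overlapping frames to limit the ringing that question 2 asks about):

```python
import numpy as np

fs = 48000                       # sample rate (an assumption)
t = np.arange(fs) / fs           # one second of signal -> 1 Hz bin spacing
# Synthetic complex tone: 440 Hz fundamental plus two partials
x = (1.0 * np.sin(2 * np.pi * 440 * t)
     + 0.5 * np.sin(2 * np.pi * 1320 * t)
     + 0.3 * np.sin(2 * np.pi * 3000 * t))

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)

X[np.abs(freqs - 1320) < 2] *= 2.0   # augment the overtone at 1320 Hz
X[np.abs(freqs - 3000) < 2] = 0.0    # remove the peak at 3 kHz
y = np.fft.irfft(X, n=len(x))

mag = np.abs(np.fft.rfft(y))         # bins are 1 Hz wide: index == frequency
print(mag[3000] / mag[440])          # ~0: the 3 kHz partial is gone
print(mag[1320] / mag[440])          # ~1.0: the 0.5-amplitude partial, doubled
```

Zeroing bins like this is the frequency-domain analogue of an extremely sharp notch; on real, non-stationary audio it is precisely the kind of operation that produces ringing, which is why practical implementations smooth the edit across neighbouring bins and overlap-add windowed frames.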
{ "domain": "dsp.stackexchange", "id": 282, "tags": "filters, noise, frequency-spectrum" }