Simple way to prove $\left \{ 0^{n}1^{m} \mid (n-m) \bmod 5=0 \right \}$ is regular?
Question: Prove: $\left \{ 0^{n}1^{m} \mid (n-m) \bmod 5=0 \right \}$ is regular. Is it reasonable to get a DFA with at least 30 states for this language? Is there an easier way to prove it is regular? Answer: $(n-m)\bmod 5$ can only be $0,1,2,3,4$. So intuitively, the language in the question is a regular language. We can show that conclusion rigorously in the following two ways. Regular expression The simplest way is to verify that the language is described by the following regular expression. $$(00000)^*(\epsilon+01+0011+000111+00001111)(11111)^*.$$ Intuitively, the language consists of words each of which is zero or more groups of five 0s, followed by between zero and four 0s, followed by the same number of 1s, followed by zero or more groups of five 1s. Minimal DFA Here are the states of the minimal DFA for this language. $Z_{0}$, which represents words each of which is zero or more groups of five 0s. $Z_{1}$, which represents words each of which is zero or more groups of five 0s, followed by 0. $Z_{2}$, which represents words each of which is zero or more groups of five 0s, followed by 00. $Z_{3}$, which represents words each of which is zero or more groups of five 0s, followed by 000. $Z_{4}$, which represents words each of which is zero or more groups of five 0s, followed by 0000. $S_{0}$, which represents words each of which is zero or more groups of five 0s, followed by nothing or 01 or 0011 or 000111 or 00001111, followed by zero or more groups of five 1s, ending with 1. $S_{1}$, which represents words each of which is zero or more groups of five 0s, followed by 0 or 001 or 00011 or 0000111 or 1111, followed by zero or more groups of five 1s, ending with 1. $S_{2}$, which represents words each of which is zero or more groups of five 0s, followed by 00 or 0001 or 000011 or 111 or 01111, followed by zero or more groups of five 1s, ending with 1. $S_{3}$, which represents words each of which is zero or more groups of five 0s, followed by 000 or 00001 or 11 or 0111 or 001111, followed by zero or more groups of five 1s, ending with 1. 
$S_{4}$, which represents words each of which is zero or more groups of five 0s, followed by 0000 or 1 or 011 or 00111 or 0001111, followed by zero or more groups of five 1s, ending with 1. $R$, which represents all other words, i.e., the words that are not of the form $0^*1^*$. In other words, besides the "rejecting" state $R$ that behaves as a black hole, we have, for $i=0,1,2,3,4$, a state $Z_i$ which represents words of the form $0^*$ whose number of 0s is congruent to $i$ modulo 5, and a state $S_i$ which represents words of the form $0^*1^*1$ whose number of 0s is congruent to $i$ plus its number of 1s modulo 5. It should not be difficult for you to figure out the initial state, the accepting states and the transitions between the states. Is it reasonable to get a DFA with at least 30 states for this language? It is totally fine if you can construct a DFA with 30 states or 2019 states for this language. That will show the language is regular. On the other hand, it is sometimes interesting or helpful or challenging to find the DFA with the minimal number of states. For this language, 11 states is the minimum. Here are two related exercises. Exercise 1. Show $\left \{ 0^m1^n \mid (n-m) \bmod 5=2 \right \}$ is a regular language. How many states are there in the minimal DFA for it? Exercise 2. Show $\left \{ 0^k1^m2^n \mid (k+m-n) \bmod 2=0 \right \}$ is a regular language. How many states are there in the minimal DFA for it?
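As a sanity check, the regular expression above can be verified exhaustively against the defining condition for small $n$ and $m$. A quick sketch using Python's `re` module rather than a hand-built DFA:

```python
import re

# (00000)* (eps|01|0011|000111|00001111) (11111)*  -- the regex from the answer
pattern = re.compile(r'^(00000)*(|01|0011|000111|00001111)(11111)*$')

# Compare against the defining condition (n - m) mod 5 == 0 for all n, m <= 20
for n in range(21):
    for m in range(21):
        w = '0' * n + '1' * m
        assert bool(pattern.match(w)) == ((n - m) % 5 == 0), w
print("regex agrees with the mod-5 condition for all n, m <= 20")
```

The decomposition behind the check: every word splits as $5a$ leading 0s, then $p$ 0s and $p$ 1s with $0 \le p \le 4$, then $5b$ trailing 1s, giving $n-m = 5(a-b) \equiv 0 \pmod 5$.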
{ "domain": "cs.stackexchange", "id": 13411, "tags": "regular-languages, finite-automata" }
I am struggling to get the right answer regarding the quantification of entropy
Question: 1 kg of air in a piston-cylinder apparatus can exchange heat only with a reservoir maintained at 300 K. When 10 kJ of work is done on the air, its state is asserted to change from 1 bar, 300 K to 2.5 bar, 310 K. (a) What is the entropy change of the air? (b) What is the heat transfer from the air? For part (a), $s = c_p\ln(T_2/T_1) - R\ln(p_2/p_1)$, so the entropy change of the air is $s = 1.004\ln(310/300) - 0.287\ln(2.5/1)$, i.e. the change in entropy for the air is -0.23 kJ/K. For part (b), I am using the equation ΔS = ΔQ/T(average): ΔQ = -0.23 × 305 = -70.15 kJ of heat transferred from the air. My textbook says the correct answer for part (b) is -2.82 kJ. I am struggling to understand how to arrive at this answer and I would appreciate it if someone could explain. Thanks. Answer: You can work out how much the internal energy of the air has changed ($mc_V\Delta T$). You know how much work has been done on the air ($10\,\mathsf{kJ}$). The difference between the two is the heat transferred out of the air. As to why the method you tried didn't produce the right answer: I think @RC_23 is right about "irreversible work". What that means is that the work-doing movement of the piston happens so fast that the air is no longer a single thermodynamic system at a uniform temperature (e.g. the piston moves supersonically and a shock front forms), and there are internal heat transfers from higher-temperature subsystems of the air to lower-temperature subsystems of the air. Those internal heat transfers increase entropy, but don't contribute anything to the heat transfer out of the air.
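The first-law calculation the answer describes can be written out explicitly. A minimal sketch, assuming the usual textbook constants for air ($c_v \approx 0.718$, $c_p \approx 1.004$, $R \approx 0.287\ \mathrm{kJ/(kg\,K)}$ — these values are assumptions, not stated in the original post):

```python
import math

m_air = 1.0                          # kg of air
c_v, c_p, R = 0.718, 1.004, 0.287    # kJ/(kg K), typical textbook values for air
T1, T2 = 300.0, 310.0                # K
p1, p2 = 1.0, 2.5                    # bar
W_on = 10.0                          # kJ of work done ON the air

# Part (a): entropy change of an ideal gas between the two end states
dS = m_air * (c_p * math.log(T2 / T1) - R * math.log(p2 / p1))   # kJ/K

# Part (b): first law with work done on the gas: dU = Q + W_on
dU = m_air * c_v * (T2 - T1)   # kJ
Q = dU - W_on                  # kJ; negative means heat leaves the air

print(round(dS, 3), round(Q, 2))   # -0.23 kJ/K and -2.82 kJ
```

This reproduces both the asker's part (a) result and the textbook's part (b) answer; the ΔQ = ΔS·T(average) shortcut fails because the entropy change is not due to reversible heat transfer alone.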
{ "domain": "engineering.stackexchange", "id": 4699, "tags": "thermodynamics, homework" }
Quadratic probing maximum load factor with $c_1 = c_2 = 0.5$ to guarantee successful insertion
Question: For quadratic hashing, i.e., an open-addressed table with a hash function of the form $h(x, i)= (h'(x) + c_1i + c_2i^2) \bmod m$, setting $c_1 = c_2 = 1/2$ and $m$ to some power of 2 leads to the hash function $h(x, i) = (h'(x) + 1 + 2 + \cdots + i) \bmod m$. I have read that the load factor of a quadratically probed table should not exceed $0.5$ to guarantee that insertion succeeds whenever an empty cell exists in the table. Wikipedia's article here https://en.wikipedia.org/wiki/Quadratic_probing#Limitations says that "With the exception of the triangular number case for a power-of-two-sized hash table, there is no guarantee of finding an empty cell once the table gets more than half full, or even before the table gets half full if the table size is not prime." So what is the maximum load factor that will guarantee successful insertion for this case? Answer: For the case you describe ($c_1=c_2=1/2$, $m=2^k$), you can reach a load factor of 1. The probe sequence touches all the cells in the table. As a practical matter, you probably don't want to get really close to a load factor of 1, as the probing can take $O(n)$ time.
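The answer's claim is easy to check by brute force: with $c_1=c_2=1/2$ the probe offsets are the triangular numbers $i(i+1)/2$, and for a power-of-two table size they visit every slot. A small verification sketch (not part of the original answer):

```python
# For m = 2^k, the offsets i(i+1)/2 mod m for i = 0..m-1 form a
# permutation of the slots, so probing always finds an empty cell.
for k in range(1, 11):
    m = 2 ** k
    offsets = {(i * (i + 1) // 2) % m for i in range(m)}
    assert offsets == set(range(m)), f"probe sequence misses slots for m={m}"
print("triangular probing visits every slot for m = 2, 4, ..., 1024")
```

This is exactly the "triangular number case" that Wikipedia carves out as the exception.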
{ "domain": "cs.stackexchange", "id": 9085, "tags": "hash-tables" }
rosparam load into shell script
Question: Hello, I would like to run move_stack from the binary. What I am actually trying to do is to load the parameters from a shell script. The shell script looks like this: `#!/bin/bash clear echo "Executing Move_base" rosparam load -v ~/ros_stacks_sc/navigation_kostas/cfg_rafa/costmap_common_params.yaml /global_costamap rosparam load -v ~/ros_stacks_sc/navigation_kostas/cfg_rafa/costmap_common_params.yaml /local_costamap rosparam load -v ~/ros_stacks_sc/navigation_kostas/cfg_rafa/base_global_planner_params.yaml rosparam load -v ~/ros_stacks_sc/navigation_kostas/cfg_rafa/base_local_planner_params.yaml rosparam load -v ~/ros_stacks_sc/navigation_kostas/cfg_rafa/global_costmap_params.yaml rosparam load -v ~/ros_stacks_sc/navigation_kostas/cfg_rafa/local_costmap_params.yaml cd ~/ros_stacks_sc/navigation_kostas/move_base/bin/ ./move_base home/kostasof/Desktop/test_t420/Move_Base.xml wlan0` As you can see, I am loading the parameters using the "rosparam load" command, and afterwards I go to the folder where I have compiled my distribution of "move_base" and execute the binary, giving 2 more arguments (necessary for my changes to the code). The problem is that when I execute the binary, it says that it loads the parameters (-v: verbose mode), but during the move_base execution the parameters that have been loaded are not taken into account. Any ideas? Originally posted by kostasof on ROS Answers with karma: 1 on 2013-03-21 Post score: 0 Answer: This is not a direct answer to your question, as I am not sure what's going wrong with your approach. I suspect some namespacing issue is in play here. Anyway, here are 2 alternative solutions for what you're trying to do using launch files: The roslaunch syntax allows for supplying custom arguments to your binary. See the args attribute inside the node element here: http://www.ros.org/wiki/roslaunch/XML/node You can convert your binary arguments to ROS parameters, and add appropriate parameters to your launch file. 
See http://www.ros.org/wiki/roscpp/Overview/Parameter%20Server Originally posted by piyushk with karma: 2871 on 2013-03-21 This answer was ACCEPTED on the original site Post score: 1
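For the launch-file route, something along these lines might work. This is a sketch only: the `navigation_kostas` package reference, the node's pkg/type, and the costmap namespaces are assumptions inferred from the question's paths, and the file locations would need to match the actual package layout:

```xml
<launch>
  <node pkg="move_base" type="move_base" name="move_base"
        args="home/kostasof/Desktop/test_t420/Move_Base.xml wlan0">
    <!-- Loading inside the <node> element puts the parameters in the node's
         private namespace, which sidesteps the suspected namespacing issue. -->
    <rosparam file="$(find navigation_kostas)/cfg_rafa/costmap_common_params.yaml"
              command="load" ns="global_costmap" />
    <rosparam file="$(find navigation_kostas)/cfg_rafa/costmap_common_params.yaml"
              command="load" ns="local_costmap" />
    <rosparam file="$(find navigation_kostas)/cfg_rafa/global_costmap_params.yaml"
              command="load" />
    <rosparam file="$(find navigation_kostas)/cfg_rafa/local_costmap_params.yaml"
              command="load" />
    <rosparam file="$(find navigation_kostas)/cfg_rafa/base_global_planner_params.yaml"
              command="load" />
    <rosparam file="$(find navigation_kostas)/cfg_rafa/base_local_planner_params.yaml"
              command="load" />
  </node>
</launch>
```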
{ "domain": "robotics.stackexchange", "id": 13474, "tags": "navigation, shell, rosparam, script, move-base" }
Number of strings in elementary particles
Question: I've seen many articles about string theory and have a very simple question: I'd like to know how many strings are in a quark or an electron? Answer: There are a lot of different ways to get quantum field theories that look like the Standard Model in string theory. In some string theory models (such as the heterotic models), every particle that the Standard Model treats as point-like (electrons, quarks, etc) is a single elementary string. But there are other more complicated models in which the Standard Model particles are not built out of strings at all, but instead realized as the low energy excitations of D-branes wrapped around various kinds of singularities. We don't know which (if any) of these models is actually correct, so we can't say with certainty that string theory predicts that an electron is made up of some number N of strings.
{ "domain": "physics.stackexchange", "id": 6643, "tags": "string-theory" }
Generalized translation on graph
Question: David I. Shuman, in "Vertex-Frequency Analysis on Graphs", claims: "we generalize one of the most important signal processing tools – windowed Fourier analysis – to the graph setting. When we apply this transform to a signal with frequency components that vary along a path graph, the resulting spectrogram matches our intuition from classical discrete-time signal processing. Yet, our construction is fully generalized and can be applied to analyze signals on any undirected, connected, weighted graph." In this paper, a generalized translation operator is defined that allows us to shift a window around the vertex domain so that it is localized around any given vertex, just as we shift a window along the real line to any center point in the classical windowed Fourier transform for signals on the real line. The generalized translation operator is $$(T_{i}f)(n) := \sqrt{N}(f*\delta_{i})(n) = \sqrt{N}\sum_{\ell=0}^{N-1}\hat{f}(\lambda_{\ell}) \chi^{*}_{\ell}(i)\chi_{\ell}(n) .$$ Question: How do I understand translation on the graph? How do I label irregular graphs with high-dimensional data? Thanks. Answer: To understand either of these, you first have to understand the basic premise behind Graph Signal Processing (GSP), which is to map a signal to a graph and then work with it in the "graph space". This is possible due to certain similarities between classic DSP concepts and Algebraic Graph Theory. So, it is easier to start from the second question because, before we start applying GSP, we first need a graph. How do I label irregular graphs with high-dimensional data? The short answer is that this is still an open problem: currently, there are "signals" that map naturally onto graphs and others where the mapping is either arbitrary or in some way constructed. Signals that naturally map onto graphs are usually expressed as weights of the graph's edges through a Weight Matrix that is similar to an Adjacency Matrix. 
Typical examples are usually items and some form of similarity between them. For example, suppose that you have a set of $n$ time series $X$ and you evaluate their cross correlation. This will result in a Weight Matrix (let's call it $W$) whose $i^{th}, j^{th}$ element ($W_{i,j}$) is the cross correlation between time series $X_{:,i}$ and $X_{:,j}$. (So, $X$ is an $m \times n$ matrix of $n$ time series signals, each being $m$ samples long.) What does this graph look like? It looks like a Clique. In other words, because we have examined all-to-all cross correlations, all nodes are considered connected with each other. But the strength of each connection is expressed by some weight. So, yes, they are all connected, but some are much closer than others. For signals that do not naturally map onto graphs, you first have to solve the corresponding graph labeling problem. This is generally done in two ways: either by coming up with a function that maps a signal to some graph, or arbitrarily. In the arbitrary case, you select some graph whose order (the number of nodes) is equal to the number of samples in your signal. That graph's nodes can be arbitrarily connected; it could for example be an entirely random graph where there is an equal chance for any two nodes to be connected. This is what the author of the paper that you link is actually doing. They take an arbitrary graph (a road network) and they map onto it an exponential decay signal. How? Arbitrarily. Does it make sense? No, but it illustrates the point they are trying to make about showing the effect of the operators. (See page 4: "Note that the definitions of the graph Fourier transform and its inverse [...] depend on the choice of graph Laplacian eigenvectors, which is not necessarily unique. Throughout this paper, we do not specify how to choose these eigenvectors, but assume they are fixed. 
The ideal choice of the eigenvectors in order to optimize the theoretical analysis conducted here and elsewhere remains an interesting open question; however, in most applications with extremely large graphs, the explicit computation of a full eigendecomposition is not practical anyhow, and methods that only utilize the graph Laplacian through sparse matrix-vector multiplication are preferred.") The other way that you can do the mapping is with an intuitive or model fitting (in the sense of optimisation) way. So, an intuitive way to map a signal to a graph is to put the samples of some $x[n]$ time series on the nodes of a graph that are simply connected as a "line" (so, something looking like $x[0] \rightarrow x[1] \rightarrow x[2] \rightarrow x[3] \ldots \rightarrow x[n]$ ). And a constructed way is to use optimisation in order to construct a graph whose connectivity represents SOME aspect of your original signal $x[n]$. Which brings us to the first question: How do I understand translation on the graph? The short answer is that translation on a graph is equivalent to a re-ordering of the edges that effects a new connectivity pattern on the nodes of the graph. In this way, the nodes appear to have "moved" or translated to a different "position". So now the question is how do you define "position" and to an extent this question is a bit related to the first one because "position" and how you represent the signal are related. But, here is a very simple example, just to demonstrate a trivial translation. Say we have this signal: $x = \left\{ 0,1,2,3,2,1,0,1,2,3,2,1,0 \right\}$ and we map it to the "line" graph $G(V,E)$ we saw earlier that looks like $x[0] \rightarrow x[1] \rightarrow x[2] \rightarrow x[3] \ldots $. In other words, we assign $x[0]$ to $v_0$, $x[1]$ to $v[1]$ and so on and we assume that nodes are connected "sequentially" (and cyclically). 
The adjacency matrix (or the weight matrix) of this graph is: $$A = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{bmatrix}$$ Notice here that I have connected $x[|V|-1]$ back to $x[0]$ with that last row. So, how do you "move" nodes around? In classic DSP, if you wanted to shift things in time, you did something like: $y[n] = x[n+2], n \in \left\{0, \ldots, |x|-1 \right\}$, where the shift is cyclic. After this, our shifted sequence $y$ looks like: $y = \left\{ 2,3,2,1,0,1,2,3,2,1,0,0,1 \right\}$ Right, so how could we achieve the same, solely utilising $A$, to effect the same shifting on our graph signal? Well, that's easy here because instead of having our initial time series $x$ connected as $x[n] \rightarrow x[n+1]$, it is as if we now connect $x[0] \rightarrow x[2], x[1] \rightarrow x[3], x[2] \rightarrow x[4], \ldots$. So basically, it is the same $A$ as above, only that now the $1$s appear two places to the right of their current position. Did you notice how we expressed something that happened in the time domain as something that happens in the graph domain? The key idea here is that we translated the signal by changing the way the nodes are connected. Translation is basically an operator on the connectivity of the graph. BUT! 
Notice here that we said earlier that we assume a "line" graph as the underlying graph for our signal. We made an arbitrary decision. We could have mapped our signal on the road network of some city (as the authors of the paper that you link have done). Then, how do you define translation on that thing? This is where the Graph Laplacian and Algebraic Graph Theory come into play. To cut a long story short, the Graph Laplacian plays a role similar to that of the Discrete Fourier Transform for signals in the time domain. It has been the topic of a lot of research in pure mathematics, and its eigenvectors and eigenvalues are supposed to return a lot of information about the graph's connectivity structure (for example, whether it contains cycles or not, what sort of lengths of cycles, whether it is completely connected or not, etc). So, basically, what the authors are working on in the paper that you linked is a translation operator and a "DFT"-equivalent operator on the graph Laplacian, so that you can "translate" nodes around an arbitrary connected graph (not only one looking like a line; it could have any shape) and decompose and recompose the graph connectivity matrix into elementary components no matter how complex the graph is. You can see now how the representation of the graph and "translation" are connected. The Laplacian of the graph depends on the values of its adjacency (or weight) matrix (i.e. its structure). You map your signal $x[n]$ on the node set of the graph $V$ and you assume (or construct) the edge set $E$. Therefore, any notions of "translation" or "frequency" now depend on the structure of the adjacency matrix. Therefore, don't try to understand why Fig. 7 in the paper that you link looks the way it looks. First of all, the mapping of the signal on the road network is arbitrary and second, the "translation" depends both on the mapping and the connectivity matrix of the road network. Conceptually, this particular example does not have an immediate connection with reality. 
But at the same time, conceptually it shows you what translation means over a graph signal and a graph that can have arbitrary connectivity. Perhaps it is easier to think about GSP in terms of linear algebra because, at the end of the day, this is what it is all based on. If we forget about graphs, adjacencies, nodes, edges, mappings, etc. for a minute and focus on the Laplacian: the whole point of GSP is to come up with a new representation for $x[n]$ in the form of a matrix. A new "decomposition" if you like, similar to the way the DFT matrix decomposes a signal or similar to the way Wavelets decompose a signal. In fact, wavelets are part of the family of "constructed" graphs I was talking about earlier. They are basically a matrix. This matrix could also be expressed as a graph. When it is expressed as a graph, it opens the door to "new" ways of working with signals or "new" ways of working with graphs. For more information on this line of thinking please see this paper or this paper and this paper (for methods of discovering graph representations). Hope this helps.
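The trivial shift example above can be reproduced in a few lines of NumPy. The signal and the cyclic adjacency matrix are exactly those of the example; this is only an illustration of the shift-via-connectivity idea, not the paper's generalized operator:

```python
import numpy as np

# The example signal, mapped onto a 13-node cyclic "line" graph
x = np.array([0, 1, 2, 3, 2, 1, 0, 1, 2, 3, 2, 1, 0])
N = len(x)

# Adjacency matrix of the directed cycle: A[i, (i+1) % N] = 1
A = np.zeros((N, N), dtype=int)
A[np.arange(N), (np.arange(N) + 1) % N] = 1

# One application of A maps x[n] -> x[n+1] (cyclically); two give y[n] = x[n+2]
y = A @ A @ x
print(y)   # [2 3 2 1 0 1 2 3 2 1 0 0 1]
```

Applying $A$ twice is the matrix form of "moving the $1$s two places to the right", and it yields exactly the shifted sequence $y$ from the text.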
{ "domain": "dsp.stackexchange", "id": 6739, "tags": "signal-analysis, fourier-transform, neural-network, graph-theory" }
Running ROS on Digital Ocean?
Question: Has anyone done this? Create a "droplet" or whatever they call it and set up ROS, and be able to actually run roscore, gazebo, rviz, etc. from there? I can imagine various complications, like having a static IP, or figuring out what to do with graphics apps like gazebo. Is there any way? Thanks! Answer: I did install a YARP-based environment on my droplet along with graphics and it works great. I use it for testing purposes. The installation in my case was very standard, and when it came to graphics I simply searched for online guidance, which is usually super useful on Digital Ocean (DO). This is why I tend to prefer DO over other online premises: it's easy and highly configurable. An example of how to deal with graphics installation can be found here. Compared with ROS, YARP is a gentler beast, but I can imagine the installation is still doable, even though you would definitely require a droplet with more resources.
{ "domain": "robotics.stackexchange", "id": 1826, "tags": "ros, gazebo, rviz" }
Many Body Physics: Hamiltonian block structure and Symmetries
Question: Consider a many body problem of a small cluster, e.g. the 'Hubbard cluster' (albeit the question may be of relevance for other Hamiltonians as well): $$\mathcal{H}=\sum_{<ij>\sigma} t_{ij} (c^\dagger_{i\sigma}c^{}_{j\sigma}+ c.c.) + U\sum_i n_{i\uparrow}n_{i\downarrow} -\mu N$$ It is well understood that when such an operator commutes with an observable like the density $N=\sum_i n_i,\; n_i=n_{i\uparrow}+n_{i\downarrow}$ and/or the magnetic moment $M=\sum_i m_i,\; m_i=n_{i\uparrow}-n_{i\downarrow}$, these are good quantum numbers, and under an appropriate sorting of Fock states $|\psi_\alpha\rangle$ the Hamiltonian matrix $$\mathcal{H}_{\alpha\beta} = \langle\psi_\alpha|\mathcal{H}|\psi_\beta\rangle$$ decomposes into blocks of constant particle number and magnetic moment. However, in most of the literature it is also mentioned that if the cluster is invariant under a symmetry operation $S$, the problem may be further simplified, i.e. the Hamiltonian may be decomposed into yet smaller blocks by a unitary transformation. Now here are my questions: Is there any systematic understanding of this simplification? Given a symmetry $S$, what is the unitary transformation simplifying the Hamiltonian? Once a symmetry operation has been found, how small are the resulting blocks? Can their size be predicted? If there are several symmetry operations at hand, which one results in the greatest simplification of the problem? How can exact solutions be found, using symmetry related unitary transformations? Answer: You may find the following paper useful: A Symbolic Solution of the Hubbard Model for Small Clusters, by J. Yepez. You may also want to review group theory for condensed matter physics, because your questions essentially span the basics of group and representation theory. Many texts give good overviews of the fundamentals of group theory as applied to solid state crystals, and some are available on-line, for instance "Symmetry in Condensed Matter Physics" by P. G. Radaelli (U. 
of Oxford link) or "Applications of Group Theory to the Physics of Solids" by M. S. Dresselhaus (MIT link). Anyway, the basic ideas are as follows: Is there any systematic understanding of this simplification? Given a symmetry S, what is the unitary transformation simplifying the Hamiltonian? Yes, there is a systematic way to do this, and it has to do with the group of symmetry transformations of the Hamiltonian $H$. A simplifying unitary transformation can be found starting from any arbitrary basis of states as follows: i) In the given basis, generate the matrix representations for the Hamiltonian and the symmetry generators. These matrices produce in general a reducible representation of the symmetry group. ii) Use the generator matrices and group theoretical techniques to construct projectors on states that are invariant under the symmetry transformations. These new states are not energy eigenstates, just symmetry-invariant states. iii) Construct the new basis states and the unitary operation that transforms the original basis into the new one. This is the unitary transformation you are looking for. See below for further details. Once a symmetry operation has been found, how small are the resulting blocks? Can their size be predicted? Yes, the size of the blocks and the degeneracy of the energy eigenstates are determined by the nature of the symmetry group. The new symmetry-invariant basis contains groups of states that transform into each other under the symmetry transformations, but cannot be decomposed into any smaller groups with the same property. Each of these groups generates what is called an irreducible representation of the symmetry group. The unitary transformation described above decomposes the original reducible representation into some number of such irreducible representations. The states corresponding to any given irreducible representation mix only among themselves in the Hamiltonian matrix. 
Therefore the transformation to symmetry invariant states and the decomposition into irreducible representations resolves the matrix of the Hamiltonian into a simpler block diagonal form. The possible dimensions of the blocks are known and are determined by the structure of the symmetry group, independently of the particular form of the Hamiltonian. That is, Hamiltonians of completely different systems that share the same symmetry group will have block-diagonal forms with blocks of the same pre-determined dimensions. These are the dimensions of the irreducible representations of the symmetry group. They differ from group to group, but have been calculated and tabulated for all important symmetry groups. If there are several symmetry operations at hand, which one results in the greatest simplification of the problem? The general rule is that operations of higher symmetry generate reducible representations with respect to operations of lower symmetry. Therefore the operation that actually determines the resolution into the smallest blocks is the one of lowest symmetry among the operations of the symmetry group. How can exact solutions be found, using symmetry related unitary transformations? Basically symmetry simplifications of the Hamiltonian matrix produce decompositions into much smaller diagonal blocks (irreducible representations) that can then be diagonalized independently. So the procedure is to diagonalize a block, generate the associated unitary transformation, and then use the latter to find the exact eigenstates. Sometimes the diagonalization can be done analytically, but even if it needs to be done numerically, it is still a much simpler problem than the original one.
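As a toy illustration of steps i)-iii), here is a sketch under stated assumptions: a single-particle tight-binding ring rather than a Hubbard cluster, chosen because its cyclic translation group is abelian, so every irreducible representation is one-dimensional and the "blocks" are $1\times1$:

```python
import numpy as np

# Single-particle hopping Hamiltonian on a 4-site ring, t = 1
N, t = 4, 1.0
S = np.roll(np.eye(N), 1, axis=0)   # cyclic translation (the symmetry operation)
H = -t * (S + S.T)                  # nearest-neighbour hopping

# The symmetry commutes with H, so they can be diagonalized simultaneously
assert np.allclose(H @ S, S @ H)

# The eigenvectors of the translation are the Fourier modes; transforming H
# into that basis block-diagonalizes it (here fully, since all irreps of the
# cyclic group are one-dimensional)
k = np.arange(N)
F = np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)   # unitary DFT matrix
H_sym = F.conj().T @ H @ F
assert np.allclose(H_sym, np.diag(np.diag(H_sym)))   # off-diagonal terms vanish

# The diagonal carries the spectrum -2t*cos(2*pi*k/N)
print(np.round(np.diag(H_sym).real, 6))
```

For a non-abelian symmetry group (e.g. the point group of a square cluster) the same construction produces blocks whose sizes equal the dimensions of the irreducible representations, rather than full diagonalization.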
{ "domain": "physics.stackexchange", "id": 25028, "tags": "quantum-mechanics, solid-state-physics, symmetry, linear-algebra, many-body" }
Can we tell if a particle has collapsed due to a measurement?
Question: Suppose we have two electrons, A and B. My friend measures the spin of electron B; the value is +1/2, and he writes the value on a piece of paper. Electron A has not been measured, so its spin is in superposition. Now my friend asks me to guess which of the two electrons has collapsed to a known state; is there a way to do that? My doubt is that if I can't tell which of the two electrons has collapsed, it means that I am unable to prove that wave function collapse exists, or that it doesn't affect a different observer. If I am able to check which of the two has collapsed, it means that I can use quantum entanglement to transfer information. As an example: I can take 10 electrons on Earth, numbered from 0 to 9, and another 10 entangled ones on Mars. On Earth I measure electron 3, and on Mars I check them to know if they have collapsed; I find the number 3, so information has been transferred, and as far as I know this is not possible. Since quantum superposition and wave function collapse have been demonstrated, I suppose that, since I am a different observer, both electrons are still in superposition until I make a measurement; only after that do I determine the value. But this involves a paradox, since I don't need to make a measurement to know the state: I just need to read the paper note where my friend has written the value. It's like admitting that the piece of paper is in a quantum superposition until I read its value (very like the Schrödinger's cat paradox). But as far as I know, due to quantum decoherence a macroscopic thing can't be in superposition. Now my guess is that a different observer can't know if the electron has collapsed into a known state; the piece of paper will be in superposition until quantum decoherence has occurred, and quantum decoherence propagates at light speed. Answer: A study was done on a very similar question. 
The question was whether a person who was flipping coins, and the state of the coin, could be in superposition for someone else who hadn't observed the first person and the coin flip. According to the study, which used sets of photons, the answer is yes. You can find a layman's article and the link to the original study here: https://phys.org/news/2019-11-quantum-physics-reality-doesnt.html So if this study holds, the answer to your question is that which electron got measured should be in superposition for you, WHILE it is not in superposition for the person who measured the electron. Note: This means that there is no way for YOU to directly tell which electron got measured until you look at the paper or until the first person tells you.
{ "domain": "physics.stackexchange", "id": 63521, "tags": "quantum-information, quantum-entanglement, superposition, wavefunction-collapse, decoherence" }
Why do bacteria adapt quickly to antibiotics but not to much more common threats?
Question: People cook food all the time; why don't common bacteria adapt to extreme heat? It's winter in either hemisphere every year; why don't common bacteria adapt to extreme cold? People use soap or spray cleaners; why don't bacteria adapt to those chemicals? Why is it specifically only antibiotics that bacteria adapt to so quickly? Answer: Antibiotics usually target a single protein (most commonly components of the ribosome machinery). If you have a large population of bacteria, there are bound to be cells with mutations in the gene encoding the target of the antibiotic that render the antibiotic ineffective. Microbes also have drug efflux systems that can help with resistance to antibiotics. Temperature, on the other hand, has broad lethal effects, from disrupting the cell membrane to damaging DNA and affecting the reaction rates of vital enzymes, so it is impossible to adapt in such a short time by gaining a few mutations.
{ "domain": "biology.stackexchange", "id": 9478, "tags": "antibiotics" }
Calculation of the $\langle H \rangle$ for a particle in a box
Question: I am working through a problem in which a particle is in an infinite potential well of length $L$ at $t=0$, before the spontaneous change of the box being expanded to length $2L$. I have calculated the wave function $$\Psi(x,t)=\Sigma_{n=0}^{\infty}c_{n}\sqrt{\dfrac{2}{L}}\sin(\frac{n \pi x}{L})\exp(-i(n^{2}\pi^{2}\hbar^{2}/2m(2L)^{2})t)$$ including all coefficients $c_{n}$, where $c_{n}=0$ if $n$ is even and $c_{n}=\dfrac{\pm 4\sqrt{2}}{\pi(4-n^{2})}$ if $n$ is odd. To calculate $\langle H \rangle$ I'd like to use $\langle H \rangle=\Sigma_{n=1}^{\infty}|c_{n}|^{2}E_{n}$, where the allowed values of $E$ after the change in the well length are $E_{n}=\dfrac{n^{2}\pi^{2}\hbar^{2}}{2m(2L)^{2}}$. My result is $$\langle H \rangle=\dfrac{4\hbar^{2}}{mL^{2}}\Sigma_{n=0}^{\infty}\dfrac{(2n+1)^{2}}{(4-(2n+1)^{2})^{2}}=\dfrac{\pi^{2}\hbar^{2}}{4mL^{2}},$$ which is different from $\langle H \rangle$ before the change in length ($\dfrac{\pi^{2}\hbar^{2}}{2mL^{2}}$). I suspected that $\langle H \rangle$ should not change, since, after all, $2L$ is just a label, and I could call that distance some other number without a factor of 2, and it shouldn't change the physics involved. What is $\langle H \rangle$ after the change? If it is different from $\langle H \rangle$ before the change, then why is it different? Answer: Would it be simpler to consider only the second Hamiltonian with the appropriate initial condition? As an initial condition, I am imagining some non-zero amplitude from $0$ to $L$ and zero amplitude from $L$ to $2L$. That would be consistent with the particle having been confined to the narrower well prior to time $t=0$. Is the wavefunction, as you have written it, consistent with the condition that the wavefunction is initially non-zero only over an interval that is $L$ wide? I am not sure what your "$\pm$" means in your expression for $c_n$, but it looks to me like the wavefunction is spread over the entire $2L$ interval at $t=0$. 
Also, shouldn't the exponential factors be $\exp(-i(n^{2}\pi^{2}\hbar/8mL^{2})t)$, consistent with a well of width $2L$?

Approach

Just to be clear, the hamiltonian could be written as a single expression involving step functions of time. It is usually said that such a time dependent system does not have eigenfunctions and eigenvalues. Various approaches are available for dealing with time dependent hamiltonians. To predict what happens after the potential well changes suddenly from width $L$ to width $2L$, one approach is to consider a first time independent hamiltonian that acts up to time $t=0$ and a second time independent hamiltonian that acts after that time. The first hamiltonian is considered only to the extent that it is required to determine the initial conditions for predicting the dynamics after time $t=0$.

The first hamiltonian, $H_1$, and its eigenfunctions

The first hamiltonian is $H_1=\hat{p}^2/2m+V_1(x)$ where $V_1(x)=0$ on the interval $[0,L]$ and infinite everywhere else. The infinite potential energy just means the probability of finding the particle in that region is zero. The normalized eigenfunctions are $u_j(x)=\sqrt{\frac{2}{L}}\sin(j \pi x/L)$ on the interval $[0,L]$ and $u_j(x)=0$ everywhere else. This ensures the probability of finding the particle outside of the interval $[0,L]$ is zero. It also means the wavefunction is zero outside the interval $[0,L]$. There is a discontinuity in the slope of the eigenfunctions at $0$ and at $L$. This is usually accepted by making an analogy to a classically rigid wall.

Expectation value of $H_1$

Calculating $\langle H_1\rangle$ is tricky, because of the infinite potential function. I do not know a mathematical argument for saying $\int u^*_j(x)V_1(x)u_j(x) dx =0$, but physically the integral is zero because there is no wavefunction outside the interval $[0,L]$ and $V_1(x)$ is zero where there is a wavefunction.
Except for that issue, it is straightforward to show $\langle H_1 \rangle=\Sigma_{j=1}^{\infty}|a_{j}|^{2}\epsilon_{j}$ where $\epsilon_j=j^2\pi ^2 \hbar ^2/2mL^2$ and $\psi (x)=\Sigma_{j=1}^{\infty} a_{j}u_j(x)$.

The second hamiltonian, $H_2$, and its eigenfunctions

The second hamiltonian is $H_2=\hat{p}^2/2m+V_2(x)$ where $V_2(x)=0$ on the interval $[0,2L]$ and infinite elsewhere. The normalized eigenfunctions are $w_n(x)=\sqrt{\frac{1}{L}}\sin(n \pi x/2L)$ on the interval $[0,2L]$ and $w_n(x)=0$ everywhere else. The eigenvalues are $E_n=n^2\pi ^2 \hbar ^2/8mL^2$. Note that the eigenfunctions $w_n(x)$ of $H_2$ cannot be expanded in terms of the $u_j(x)$ because the $u_j(x)$'s are zero on the interval $[L,2L]$. Except for the usual details, it is possible to expand the $u_j(x)$'s in terms of the $w_n(x)$'s. One detail is continuity of $\hat{p}\psi(x)$ at $x=L$ and at $x=2L$. With the first hamiltonian the momentum was discontinuous at $x=L$, consistent with the classical turning point. With the second hamiltonian, there should be no turning point at $x=L$. That is, the momentum should be continuous there.

The wavefunction in the second potential well

If the wavefunction is $\psi (x,0)=\Sigma_{j=1}^{\infty} a_{j}u_j(x)$ at $t=0$, the wavefunction in the second potential well will evolve according to $\psi (x,t)=\Sigma_{n=1}^{\infty} b_{n}w_n(x)\exp(-iE_nt/\hbar)$ for $t>0$, where the $b_n$'s are to be determined. The derivation of an expression for the $b_n$'s in terms of the $a_n$'s goes like this: $$ \psi (x,0)=\Sigma_{n=1}^{\infty} b_{n}w_n(x)=\Sigma_{j=1}^{\infty} a_{j}u_j(x)$$ $$ \Sigma_{n=1}^{\infty} b_{n}w_k^*(x)w_n(x)=\Sigma_{j=1}^{\infty} a_{j}w_k^*(x)u_j(x) $$ $$ b_{k}=\Sigma_{j=1}^{\infty} a_{j}\int_{-\infty}^{\infty}w_k^*(x)u_j(x)dx $$ $$ b_{k}=\Sigma_{j=1}^{\infty} a_{j}\int_{0}^{L}w_k^*(x)u_j(x)dx $$ Note that the integration only goes from zero to $L$ because the $u_j(x)$'s are zero everywhere else.
A simple example

For example, if a particle in an infinite square well of width $2L$ with hamiltonian $H_2$ somehow starts out with a wavefunction $\psi(x,0)=u_1(x)$, then $$ b_n=\frac{4 \sqrt{2}\sin(n\pi /2)}{(4-n^2)\pi}, n=1,2,3,5,7,... $$ $$b_k=0, k=4,6,8,10,...$$ The particle is not in an energy eigenstate. Note that in this example I am not saying the particle started in a narrow well or that there are two hamiltonians, but I am saying these results are the same as in the original problem with the time dependent hamiltonian. What is the expectation value $\langle H_2 \rangle$? It is straightforward to show $$\langle H_2 \rangle=\frac{\hbar^2 \pi^2}{8mL^2}\Sigma_{n=1}^{\infty}n^2 \left| b_n\right|^2=\frac{\hbar^2 \pi^2}{2mL^2}=\epsilon_1 $$ I used Mathematica to do the integrals and derive that the $n^2 \left| b_n \right| ^2$'s all add up to 4. This shows that the expectation value of the energy is equal to the eigenvalue of $H_1$ acting on $u_1(x)$, as expected. This is only an expectation value, though. It is not "the energy", since a particle with that initial condition does not have a definite energy in this square well. It looks like the difference between this result and yours is at $b_2=1/\sqrt{2}$. Leaving that $2^2b_2^2$ out of the sum reduces the sum by a factor of 2, so your result was too small by a factor of 2.
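As a cross-check of the claim that the $n^2|b_n|^2$'s add up to 4 (and hence $\langle H_2 \rangle = \epsilon_1$), here is a small numerical sketch. The coefficient formula is the one quoted in the answer above, with $b_2 = 1/\sqrt{2}$ handled separately (the general formula is 0/0 at $n=2$); the cutoff of $10^5$ terms is my own choice, since the tail of the energy sum falls off only like $1/n^2$:

```python
import numpy as np

def b(n):
    """Expansion coefficients of u_1(x) in the w_n(x) basis, as quoted above.
    The n = 2 case is handled separately (the general formula is 0/0 there)."""
    if n == 2:
        return 1 / np.sqrt(2)
    if n % 2 == 0:
        return 0.0
    return 4 * np.sqrt(2) * np.sin(n * np.pi / 2) / ((4 - n**2) * np.pi)

ns = np.arange(1, 100001)
coeffs = np.array([b(n) for n in ns])

norm = np.sum(coeffs**2)                 # completeness: should approach 1
energy_sum = np.sum(ns**2 * coeffs**2)   # should approach 4, so <H_2> = eps_1
```

With this many terms, `norm` should land within about $10^{-3}$ of 1 and `energy_sum` within about $10^{-3}$ of 4, consistent with the Mathematica result quoted above.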
{ "domain": "physics.stackexchange", "id": 21368, "tags": "quantum-mechanics, homework-and-exercises, wavefunction, hamiltonian" }
How to change speed/velocity when moving a robot using JointState?
Question: I am able to move a 2-arm ABB YuMi robot (7 joints and a gripper per arm) using the following function as intended (http://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/JointState.html)

```python
def go_joint(self, position, arm, reuse_controller=False):
    if not reuse_controller:
        self.request_controller(arm)
    state = JointState()
    state.header = Header()
    state.header.stamp = rospy.Time.now()
    if arm == LEFT:
        state.name = ['yumi_robl_joint_1','yumi_robl_joint_2','yumi_robl_joint_3','yumi_robl_joint_4','yumi_robl_joint_5','yumi_robl_joint_6','yumi_robl_joint_7','gripper_l_joint']
    else:
        state.name = ['yumi_robr_joint_1','yumi_robr_joint_2','yumi_robr_joint_3','yumi_robr_joint_4','yumi_robr_joint_5','yumi_robr_joint_6','yumi_robr_joint_7','gripper_r_joint']
    state.position = position
    state.velocity = [10]*8
    state.effort = []
    self.cur_arm.go(state, wait=True)
    print(state)
```

However, despite the fact that the velocity is being registered (here is a print of the state)

```
header:
  seq: 0
  stamp:
    secs: 1672929166
    nsecs: 529814243
  frame_id: ''
name:
- yumi_robl_joint_1
- yumi_robl_joint_2
- yumi_robl_joint_3
- yumi_robl_joint_4
- yumi_robl_joint_5
- yumi_robl_joint_6
- yumi_robl_joint_7
- gripper_l_joint
position: [0.25920000672340393, -1.1437000036239624, 1.0319000482559204, 0.2806999981403351, -0.002400000113993883, 0.9562000036239624, -0.2125999927520752, 0.0]
velocity: [10, 10, 10, 10, 10, 10, 10, 10]
effort: []
```

the speed of the movement is not changing. Any idea why this is happening?

Originally posted by ramyun on ROS Answers with karma: 3 on 2023-01-05

Post score: 0

Answer: Yes, the reason is that the JointState message (http://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/JointState.html) is more like a container than a controller. You send a piece of information (e.g. the position of a particular joint) and the robot moves to that coordinate. The velocity field in the message is not changing the velocity (meant as a state) of the robot.
But the velocity values are used more for "information purposes". If you want to change the speed/velocity of the robot for moving from point A to point B slower or quicker, then you can only control the frequency of the JointState messages. By increasing or reducing the time lapse between two messages, you can control the speed/velocity of your robot around one or more joints. Is it clear?

EDIT: For instance, let's take the tutorial on this site and change it to move quickly around the 3 joints: https://docs.ros.org/en/humble/Tutorials/Intermediate/URDF/Using-URDF-with-Robot-State-Publisher.html

Looking in the code, we find that a "frequency" has been passed as an argument to the method loop_rate:

```python
...
loop_rate = self.create_rate(30)
...
```

If we take the same code and replace the fixed value 30 with a variable:

```python
...
self.frequency = 30
...
loop_rate = self.create_rate(self.frequency)
...
self.frequency += 10  # Or self.frequency -= 10 if you want to decrease your speed.
...
```

then you have indirectly changed the frequency at which you send the JointState msg. But pay attention: the example above is not professional and definitely not elegant. I personally would keep the loop_rate constant and change only the frequency at which the single joint state messages are sent. In pseudocode:

```python
if (time_current - time_previous) > delta_time:  # Where delta_time is a variable which can be changed according to your desired speed
    self.joint_pub.publish(joint_state)
```

Originally posted by Andromeda with karma: 893 on 2023-01-09

This answer was ACCEPTED on the original site

Post score: 0

Original comments

Comment by ramyun on 2023-01-09: Thank you for your comment about the velocity. I am actually new to this, can you please elaborate on your second comment (about the time lapse)? Because I think I did try increasing it and there was an overutilization of the channel. Maybe I did it the wrong way, I would appreciate any further details or any link that can guide me through it. Thanks again.
Comment by Andromeda on 2023-01-09: I edited my answer above
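To make the pseudocode in the answer concrete, here is a minimal, ROS-free sketch of the gating logic (the class name `RateGate` and its interface are my own invention, not part of any ROS API); the same pattern would wrap the `self.joint_pub.publish(joint_state)` call:

```python
import time

class RateGate:
    """Accept a call only if at least `period` seconds have passed since
    the last accepted call; shrink `period` to move the joints faster,
    grow it to move them slower."""

    def __init__(self, period):
        self.period = period
        self._last = None

    def ready(self, now=None):
        # `now` can be injected for testing; defaults to a monotonic clock.
        now = time.monotonic() if now is None else now
        if self._last is None or (now - self._last) >= self.period:
            self._last = now
            return True
        return False

gate = RateGate(period=0.1)  # ~10 joint-state messages per second
# In the control loop, one would write something like:
#     if gate.ready():
#         joint_pub.publish(joint_state)
```

Keeping the node's loop rate constant and adjusting only `period` avoids flooding the channel the way shrinking the loop rate itself can.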
{ "domain": "robotics.stackexchange", "id": 38211, "tags": "ros, moveit, python3" }
Is it equally difficult to lift something as it is to lower it?
Question: This just seems a bit counterintuitive cause lifting stuff seems harder, but if I do them at the same, constant speed, I will apply the same force opposing gravity, just over opposite displacements, resulting in equal (but opposite) work. Is this correct? Answer: First of all note that human muscles require energy even when doing no work, so be a bit cautious about analysing any situation involving humans manipulating objects. See Why does holding something up cost energy while no work is being done? for more details. If we replace the human by some form of mechanical cantilever then you are quite correct that the force is the same whether you are going up or down, so the magnitude of the work done is the same. However when lifting the arm is doing work on the object i.e. the energy of the arm decreases and the energy of the object increases. When going down the object is doing work on the arm i.e. the energy of the arm increases and the energy of the object decreases. So the same amount of work is done in both cases, but the energy flows in a different direction. If you want to see how the maths describes this then we start with the equation for the work: $$ W = \int_{x=a}^{x=b} \mathbf F\cdot\mathrm d \mathbf x $$ Where $a$ is the value of the position $\mathbf x$ when the motion starts and $b$ is the position when it ends. In the 1D case where $\mathbf x$ is just distance up and down and the force $\mathbf F$ is the constant $mg$ the integral becomes: $$ W = mg(b - a) $$ When we are going up $b > a$ so $W_\text{up}$ is positive, and when we are going down $b \lt a$ so $W_\text{down}$ is negative: $$ W_\text{up} = - W_\text{down} $$
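A two-line numerical check of $W = mg(b-a)$, with made-up numbers ($m = 2\,$kg, a 1.5 m lift at constant speed):

```python
m, g = 2.0, 9.81  # mass in kg, gravitational acceleration in m/s^2

def work_by_lifter(a, b):
    """Work done by the constant supporting force mg, moving from height a to height b."""
    return m * g * (b - a)

W_up = work_by_lifter(0.0, 1.5)    # lifting: positive work done on the object
W_down = work_by_lifter(1.5, 0.0)  # lowering: equal magnitude, opposite sign
```

Same magnitude either way; only the direction of the energy flow differs.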
{ "domain": "physics.stackexchange", "id": 32167, "tags": "gravity, work" }
Basic Linear Momentum and Conservation
Question: I have begun learning about Momentum and the Conservation of Momentum. For some reason, I have really struggled with understanding this topic. Right, so I understand momentum is given by $$ p = mv $$ However, I fully don't understand the following statement: "If no external forces are acting on our system, the total momentum of the system remains constant" If this is true, apparently the following holds: $$ m_1v_1 + m_2v_2 = m_1v'_1 + m_2v'_2 $$ I understand the formulaic approach of showing this $$ \displaystyle \frac{dp}{dt} = m\frac{dv}{dt} = \frac{d(mv)}{dt}=ma $$ $$ \therefore \frac{dp}{dt} = F_{net} $$ The first thing that throws me off with the above statement is the external forces part. Why doesn't it hold if internal forces are acting on our system? Furthermore, what is an external force? Something like gravity, right? The second part throws me off even more! Why does the total momentum remain constant?? I understand this question may be very vague but it's very hard to describe what you don't understand!

Answer: "External" and "internal" are relative to the system you are working with. If your system is made of, for example, two bodies A and B, then the internal forces are: the force A exerts on B, and the force exerted by B on A. Internal forces, by the action and reaction principle, always come in pairs and, furthermore, they are opposite! So they cancel when you calculate the total force. Then only external forces appear, the ones that bodies other than A and B in the example apply to the system. The complete demonstration is: \begin{equation} \frac{dp}{dt} = F_{net} \end{equation} If we separate the sum into external and internal forces, we have \begin{equation} F_{net} = \sum F = \sum F_{int} + \sum F_{ext} \end{equation} Then, by Newton's third law, $\sum F_{int} = 0$, and so \begin{equation} F_{net} = \sum F = \sum F_{ext} \rightarrow \frac{dp}{dt} = \sum F_{ext} \end{equation} In this image, there are 4 bodies.
If your system is composed of the red balls, then the internal forces are the grey arrows and the external forces are the black ones. Note that in that system the black ones don't have a pair. About the second question, you'll see that if the derivative of $p$ over time is zero, then $p$ is constant ;)
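A quick numerical illustration with a 1D elastic collision (textbook final-velocity formulas, arbitrary numbers of my own choosing): the individual momenta change, but $m_1v_1 + m_2v_2$ does not, because the collision forces are internal to the two-body system:

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities of a 1D elastic collision between two point masses."""
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p

m1, v1, m2, v2 = 2.0, 3.0, 1.0, -1.0
v1p, v2p = elastic_collision_1d(m1, v1, m2, v2)

p_before = m1 * v1 + m2 * v2   # total momentum before the collision
p_after = m1 * v1p + m2 * v2p  # equal to p_before
```

Adding an external force (say, gravity plus a wall) would break this equality for the two-ball system alone, which is exactly the content of the statement in the question.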
{ "domain": "physics.stackexchange", "id": 40878, "tags": "classical-mechanics" }
How can I fit a DN16 flange to a DN10 flanged pipe?
Question: One of the valves on my 3 bar line just failed. Without it, our production has ground to a stop and we can't operate until we get a replacement (which we have already ordered). In order to make a deadline, I am trying to find a way to use a (properly rated for pressure) DN 16 flanged valve to fit with the existing DN 10 piping. Is there any way to (safely) connect these two flanges together temporarily? Answer: If you have access to a lathe you should be able to quickly machine a flanged reducer out of Nylon 6. Not sure what temperature you're operating at but make it a bit thick and it will easily be able to handle 3 bar.
{ "domain": "engineering.stackexchange", "id": 21, "tags": "steam, piping" }
Why is the bispectrum not commonly used in experimental physics?
Question: Power spectra, coherence spectra, and linear transfer functions are ubiquitous tools of experimental physics. However, our instruments often retain small nonlinear effects which can contaminate measurements. It appears that higher order spectra, in particular the bispectrum, would be ideal tools to investigate nonlinear interactions. Nonetheless, I've never actually seen them put to use in experimental physics. For example, consider the (frequency-domain) coherence: $C_{xy}(f) = \frac{\langle X(f)Y(f)^*\rangle}{\sqrt{\langle X(f)X(f)^*\rangle\langle Y(f)Y(f)^*\rangle}}$ The bicoherence considers not two but three signals, and looks for correlations between oscillations at frequencies $f_1$ and $f_2$ combining nonlinearly to produce a signal at $f_1+f_2$: $C_{xyz}(f_1,f_2) = \frac{\langle X(f_1) Y(f_2) Z^*(f_1 + f_2)\rangle}{ \sqrt{\langle X(f_1)X^*(f_1)\rangle\langle Y(f_2)Y^*(f_2)\rangle\langle Z(f_1+f_2)Z^*(f_1+f_2)\rangle} } $ ...which seems like a useful thing to do. Why are the bispectrum and bicoherence not used more frequently in experimental physics? I am specifically thinking about time domain, multi-input/multi-output systems where one is looking for nonlinear couplings between various signals. One of the top Google hits on the subject is for the Matlab Higher Order Spectral Analysis (HOSA) Toolbox, which seems like a nice resource (though it appears to be no longer maintained and now suffering from bit-rot). Answer: Such objects are used all the time. The mathematics is done in terms of quantum fields, which to some extent conceals what's going on. For example, your "(frequency-domain) coherence" is a correlation coefficient, which is normalized, whereas Physicists typically work in terms of correlation functions, which typically are not, but they largely amount to the same thing. Your observables $X(f_1)$, etc., are constructed as functions of frequency, however this is a singular object in quantum field theory. 
In quantum field theory, we instead construct observables $\phi(F_1)$, etc., as functionals of test functions $F_1(x)$, etc. One singular choice would be $F_1(x)=\exp(if_1\cdot x)$, which makes $\phi(F_1)$ essentially the same object as your $X(f_1)$; it's singular, however, because $F_1(x)$ is not square integrable. Another choice of singular test function is, of course, $\delta(x-y)$, which gives the value of the field at a point, which we might write in your terms as something like $X(y)$. For a quantum field, this is also a rather singular object. In fact, when you say $X(f_1)$, what you really mean is $\int X(f) {\mathrm d}f$, over some small range of frequencies, and in the mathematical and experimental details this has to be taken into account. Making everything precise requires that we know what the frequency range of each of the measurements is, which an experimenter either must characterize or must read off from a manufacturer's data sheets. In even more detail, we will have to construct a weight function, saying that frequencies near $f_1$ are still registered by the measurement device, but not as strongly as $f_1$ itself. We may well take the weight function, as a first approximation, to be Gaussian. This corresponds to taking the test function $F_1(x)$ to be that Gaussian. Signal analysis usually calls the test function a window function. Test (or window) functions can be difficult to become familiar with, but I believe it's well worth getting there. In these terms, your $C_{xyz}$ is a particular choice of (normalized) 3-point function. The choice of $f_1+f_2$ for the third frequency is of course not necessary, we can consider 3-point correlations between any three frequencies $f_1,f_2,f_3$ (and their vicinities). In quantum field theory, we would represent the 3-point correlation function, in the vacuum state, as $\left<0\right|\phi(F_1)\phi(F_2)\phi(F_3)\left|0\right>$. Replace the vacuum vector by some other state vector, if you like.
In the particular case when quantum field observables are mutually commutative, it can be understood to generate probability measures that correspond to an equivalent description in terms of probability measures over classical random variables, and hence quite precisely to a stochastic signal analysis. When quantum field observables do not commute, everything gets lots more complicated, but a remnant of the signal processing point of view can be maintained. There is a mathematics of random fields that is used in cosmology because it is generally not necessary to worry about measurement incompatibility in that context. Mathematicians generally present signal analysis in Hilbert space terms unless they are writing for an engineering audience.
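On the estimator side, the bicoherence defined in the question is easy to compute numerically. Below is a self-contained sketch for the single-signal case $x=y=z$, with a toy signal of my own construction: two tones at FFT bins $f_1$ and $f_2$ plus a third at $f_1+f_2$ whose phase is either locked to $\varphi_1+\varphi_2$ (quadratic coupling) or drawn independently. Averaging over segments plays the role of the $\langle\cdot\rangle$ expectation:

```python
import numpy as np

def bicoherence(segments, i, j):
    """Estimate |C(f_i, f_j)| from equal-length segments, using FFT
    bin indices i, j and i + j (single-signal case x = y = z)."""
    num, p1, p2, p3 = 0j, 0.0, 0.0, 0.0
    for s in segments:
        X = np.fft.rfft(s)
        num += X[i] * X[j] * np.conj(X[i + j])
        p1 += abs(X[i])**2
        p2 += abs(X[j])**2
        p3 += abs(X[i + j])**2
    M = len(segments)
    return abs(num / M) / np.sqrt((p1 / M) * (p2 / M) * (p3 / M))

rng = np.random.default_rng(0)
N, M, f1, f2 = 256, 200, 20, 32   # samples per segment, segments, bin indices
t = np.arange(N) / N              # one-second segments sampled at N Hz

def make_segments(coupled):
    segs = []
    for _ in range(M):
        p1, p2 = rng.uniform(0, 2 * np.pi, 2)
        p3 = p1 + p2 if coupled else rng.uniform(0, 2 * np.pi)
        segs.append(np.cos(2 * np.pi * f1 * t + p1)
                    + np.cos(2 * np.pi * f2 * t + p2)
                    + 0.5 * np.cos(2 * np.pi * (f1 + f2) * t + p3))
    return segs

b_coupled = bicoherence(make_segments(True), f1, f2)
b_random = bicoherence(make_segments(False), f1, f2)
```

With phase coupling the estimate sits at 1; with independent phases it decays toward roughly $1/\sqrt{M}$, which is the usual statistical floor of the estimator.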
{ "domain": "physics.stackexchange", "id": 543, "tags": "experimental-physics, fourier-transform, signal-processing" }
What canonical momenta are the "right" ones?
Question: I'm doing some classical field theory exercises with the Lagrangian $$\mathscr{L} = -\frac{1}{4}F_{\mu \nu}F^{\mu \nu}$$ where $F_{\mu \nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$. To find the conjugate momenta $\pi^\mu_{\ \ \ \nu} = \partial \mathscr{L} / \partial(\partial_\mu A^\nu)$, I can use two methods. First method: directly apply this to $\mathscr{L}$. We get a a factor of $2$ since there are two $F$'s, and another factor of $2$ since each $F$ contains two $\partial_\mu A_\nu$ terms, giving $$\pi^\mu_{\ \ \ \nu} = -F^\mu_{\ \ \ \nu}.$$ Second method: get $\mathscr{L}$ in terms of $A$ by expanding and integrating by parts, yielding $$\mathscr{L} = \frac{1}{2}(\partial_\mu A^\mu)^2 - \frac{1}{2}(\partial_\mu A^\nu)^2.$$ Differentiating this gets factors of $2$ and gives $$\pi^\mu_{\ \ \ \nu} = \partial_\rho A^\rho \delta^\mu_\nu - \partial^\mu A_\nu.$$ These two answers are different! (They do give the same equations of motion, at least.) I guess that means doing the integration by parts changed the canonical momenta. Is this something I should be worried about? In particular, I have another exercise that wants me to show that one of the canonical momenta vanishes -- this isn't true for the ones I get from the second method! Plus, my stress-energy tensor is changed too. When a problem asks for "the" canonical momenta, am I forbidden from integrating by parts? Answer: OP is pondering if the corresponding Hamiltonian formulation is affected if the Lagrangian density $$\tag{1} {\cal L}~\longrightarrow~\tilde{\cal L}~:=~ {\cal L}+\sum_{\mu=0}^3d_{\mu}F^{\mu}$$ is modified with a total divergence$^1$ term $d_{\mu}F^{\mu}$, so that the definition of canonical momentum $$\tag{2} p_i~:=~ \frac{\delta L}{\delta v^i}-\frac{d}{dt}\frac{\delta L}{\delta \dot{v}^i}+\ldots, \qquad L~:=~\int\! d^3x~{\cal L}, $$ is modified as well? That's a good question. Some technical notes: (i) The reason for the functional (rather than partial) derivatives in eq. 
(2) is because of the presence of spatial directions in field theory (as opposed to point mechanics), cf. e.g. this Phys.SE post. (ii) The ellipsis $\ldots$ in eq. (2) denotes possible dependence of higher time derivatives in the Lagrangian $L[q,v,\dot{v},\ddot{v},\dddot{v},\ldots;t]$. (We assume implicitly that all the dependence $\dot{q},\ddot{q},\dddot{q},\ldots,$ has been replaced with $v,\dot{v},\ddot{v},\ldots,$ in the Lagrangian, respectively.) Although we are only here interested in the normal physical case where the Euler-Lagrange equations contain at most two time derivatives, there could still be higher time derivatives inside a total divergence term in the action. Higher time derivatives are not just a purely academic exercise. E.g. the Einstein-Hilbert (EH) action contains higher time derivatives, cf. e.g. this Phys.SE post. We briefly return to higher time derivatives in Section 7. (iii) Changing the action with a total divergence term may affect the choice of consistent boundary conditions. E.g. the EH action is amended with a Gibbons–Hawking–York (GHY) boundary term for consistency reasons. OP is not asking about the Lagrangian formulation, and already knows that the Euler-Lagrange equations are not changed, cf. e.g this Phys.SE post. Let us from now on focus on the Legendre transformation and the Hamiltonian formulation. The transformation (1) consists of two types of transformations: (i) a change by a total spatial derivative $$\tag{3} {\cal L}~\longrightarrow~\tilde{\cal L}~:=~ {\cal L} +\sum_{k=1}^3d_kF^k, $$ which does not change the momentum definition (2); and (ii) a change by a total time derivative$^1$ $$ \tag{4}L~\longrightarrow~\tilde{L}~:=~L+\frac{\partial G}{\partial t}+ \int\! d^3x\left[\frac{\delta G}{\delta q^i} v^i + \frac{\delta G}{\delta v^i} \dot{v}^i+\ldots \right] ~\approx~L+\frac{dG}{dt}. $$ We will for simplicity only consider the latter transformation (4) from now on. Let us for simplicity consider point mechanics. 
(The field theoretic generalization is straightforward.) Eq. (2) and (4) then become $$\tag{5} p_i~:=~ \frac{\partial L}{\partial v^i}-\frac{d}{dt}\frac{\partial L}{\partial \dot{v}^i}+\ldots, $$ $$ \tag{6}L~\longrightarrow~\tilde{L}~:=~L+\frac{\partial G}{\partial t}+ \frac{\partial G}{\partial q^i} v^i + \frac{\partial G}{\partial v^i} \dot{v}^i+\ldots ~\approx~L+\frac{dG}{dt}, $$ respectively. The canonical momentum (5) changes as $$ \tag{7} P_i~=~p_i+\frac{\partial G}{\partial q^i} +\frac{\partial^2 G}{\partial v^i\partial q^j} (v^j-\dot{q}^j) ~\approx~p_i+\frac{\partial G}{\partial q^i} . $$ [The $\approx$ symbol means equality modulo equations of motion or $v^i\approx\dot{q}^i$.] First let us assume that the Legendre transformation $v\leftrightarrow p$ is regular. If $G$ does not depend on the velocity fields $v^i$ and higher time-derivatives in the transformation (6), this is Exercise 8.2 (Exercise 8.19) in Goldstein, Classical Mechanics, 3rd edition (2nd edition), respectively. One can use a type 2 canonical transformation $$ \tag{8}p_i\dot{q}^i-H ~=~ -\dot{P}_iQ^i-K + \frac{dF_2}{dt},$$ $$ \tag{9} \qquad F_2~:=~P_i q^i-G, $$ where $$\tag{10} Q^i~:=~q^i, \qquad P_i~:=~p_i+\frac{\partial G}{\partial q^i}, \qquad K~:=~H-\frac{\partial G}{\partial t}.$$ An Hamiltonian action principle based on either the lhs. or rhs. of eq. (8) has the Hamilton's equations $$ \tag{11} \dot{q}^i~\approx~ \frac{\partial H}{\partial p_i}, \qquad -\dot{p}_i~\approx~ \frac{\partial H}{\partial q^i}, $$ and the Kamilton's equations $$ \tag{12} \dot{Q}^i~\approx~ \frac{\partial K}{\partial P_i}, \qquad -\dot{P}_i~\approx~ \frac{\partial K}{\partial Q^i}, $$ as stationary point, respectively. Hence eqs. (11) and (12) are equivalent under the transformation (6). If $G$ depends on the velocity fields $v^i$, there appear higher time derivatives inside the total time-derivative term $\frac{dG}{dt}$, cf. eq. (6). Then additional complications arise (in writing down an equivalence proof). 
E.g. the relation for the next Ostrogradsky momentum $$ \tag{13} P^{(2)}_i~:=~ \frac{\partial \tilde{L}}{\partial \dot{v}^i}+\ldots~=~\frac{\partial G}{\partial v^i}+\ldots, $$ can typically not be inverted to eliminate the acceleration $\dot{v}^j$. In other words, the Legendre transformation is singular. In case of singular Legendre transformations, it is less clear, but widely believed, that the modified Hamiltonian formulation (resulting from the Dirac-Bergmann constrained analysis) is still equivalent. OP's case (E&M) has constraints (Gauss's law), but in that case, it is easy to check explicitly the equivalence. -- $^1$ Note this subtlety.
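A minimal point-mechanics illustration of eq. (7) (my own toy example, not taken from the post): a free particle with a velocity-independent total-derivative term generated by $G(q)=\frac{\lambda}{2}q^2$,

$$ L=\frac{m\dot{q}^2}{2}\quad\longrightarrow\quad \tilde{L}=L+\frac{dG}{dt}=\frac{m\dot{q}^2}{2}+\lambda q\dot{q}. $$

The momenta are

$$ p=\frac{\partial L}{\partial \dot{q}}=m\dot{q},\qquad \tilde{p}=\frac{\partial \tilde{L}}{\partial \dot{q}}=m\dot{q}+\lambda q=p+\frac{\partial G}{\partial q}, $$

in agreement with eq. (7), while both Lagrangians give the same equation of motion $m\ddot{q}=0$. The type 2 generating function (9), $F_2=\tilde{p}q-\frac{\lambda}{2}q^2$, maps one Hamiltonian formulation into the other, with $\tilde{H}=\frac{(\tilde{p}-\lambda q)^2}{2m}$ and $K=H$ since $\partial G/\partial t=0$.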
{ "domain": "physics.stackexchange", "id": 24580, "tags": "lagrangian-formalism, momentum, hamiltonian-formalism, field-theory, variational-principle" }
Looking for research for separating conversational audio files
Question: I have been looking into separating conversational audio speech. Searching for research around this topic, I came across the Asteroid framework, which was applied to the LibriMix dataset for audio speech separation where two voices overlap with each other (You can listen to such a sample here). But when we talk about conversational speech data (e.g. two people conversing with each other), the audio of those people does not overlap. Instead, people take turns speaking (Here is a sample). While looking online for research on audio separation for conversational audio, I wasn't able to find any. Datasets like WSJ, LibriMix, etc. do exist, but they are not designed for conversational problems and are more oriented towards the problem of overlapping voices. If there is any project, dataset or research around this that anyone knows of, it would be really helpful.

Answer: The task of separating a conversation with multiple speakers taking turns is called Speaker Diarization. This is different from Speech Separation, which is a special case of Source Separation and refers to separating out multiple overlapping/concurrent sources from an audio stream. Some free datasets include:

- ICSI Meeting Corpus
- AMI Meeting Corpus
- VoxConverse

More can be found in the Github project wq2012/awesome-diarization.
{ "domain": "datascience.stackexchange", "id": 9215, "tags": "deep-learning, audio-recognition" }
Energy-work theorem and dissipation of energy by an accelerating charge
Question: By the work-energy theorem we have that the total energy of a nonrelativistic point charge $q_0$ of mass $m$, moving in an electric field $\mathbf{E}$, is $$ E = E_k + U_e = \frac{1}{2}mv^2 + q_0V. \tag{1} $$ As I read in my text: The electrical charge acquires potential energy. If the charge is released, work is done by the field and the charge accelerates. It means that its potential energy is converted into kinetic energy. But if an accelerating charge emits electromagnetic radiation, some electric potential energy should be dissipated and could not be converted into kinetic energy. How can Eq. 1 hold? The text mentions the dissipation of energy by accelerating charges only later in the course, and says nothing when treating energy conservation of moving charges in the electrostatic field.

Answer: The conservation of energy in this problem is of course an approximation - like many things in physics (and one could even say that physics is only an approximation to the real world.) The energy dissipated by an accelerated charge can be calculated using the Larmor formula: $$ P=\frac{2}{3}\frac{q^2a^2}{c^3},$$ where $a$ is the charge acceleration. The speed of light entering the denominator hints that this is a relativistic correction - something one is likely to ignore in problems where the kinetic energy can be described by the non-relativistic expression $\frac{mv^2}{2}$.
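To put a number on "likely to ignore", here is a back-of-the-envelope sketch in Gaussian units (the $10^{-9}\,$s acceleration time is an arbitrary assumption of mine). For a charge brought from rest to speed $v$ with constant acceleration over time $T$, the Larmor formula gives a radiated-to-kinetic energy ratio $\frac{(2/3)(q^2/c^3)(v/T)^2\,T}{mv^2/2}=\frac{4q^2}{3mc^3T}$, independent of $v$:

```python
q = 4.803e-10   # electron charge, esu (Gaussian units)
c = 2.998e10    # speed of light, cm/s
m = 9.109e-28   # electron mass, g

T = 1e-9        # assumed duration of the acceleration, s
ratio = (4.0 / 3.0) * q**2 / (m * c**3 * T)  # radiated / kinetic energy
# ratio comes out around 1e-14: utterly negligible, so Eq. 1 is an
# excellent approximation in nonrelativistic problems
```

The ratio grows only if the acceleration time approaches $q^2/mc^3 \sim 10^{-23}\,$s, far outside any textbook scenario.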
{ "domain": "physics.stackexchange", "id": 94072, "tags": "electromagnetism, electromagnetic-radiation, energy-conservation, charge, acceleration" }
Is there a floating joint in Gazebo?
Question: The URDF description of the robot allows one to specify a 'floating' joint -- a joint with 6 degrees of freedom. But when the robot description gets translated into SDF, the floating joint is converted into a fixed joint. Is there a way to achieve a floating joint in Gazebo? I want to completely control the child link's movement with my plugin.

Originally posted by kumpakri on Gazebo Answers with karma: 755 on 2019-10-18

Post score: 0

Original comments

Comment by awck on 2020-12-04: Hi, did you figure out a way to do 6DOF pose control of a robot model in Gazebo?

Answer: A floating joint in Gazebo (SDF) is just no joint. Unlike URDF, links in SDF can just exist as children of the robot model without being connected to each other in a tree structure.

Originally posted by chapulina with karma: 7504 on 2019-10-18

This answer was ACCEPTED on the original site

Post score: 0
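For illustration, a minimal SDF sketch of the pattern the answer describes (model and link names are made up, and real links would also carry `<inertial>`, `<collision>` and `<visual>` elements): the `free_body` link is simply not referenced by any `<joint>`, so the physics engine treats it as a free 6-DoF body that a model plugin can drive directly:

```xml
<model name="floating_example">
  <link name="base">
    <pose>0 0 0.1 0 0 0</pose>
  </link>
  <!-- No <joint> references this link, so it floats freely. -->
  <link name="free_body">
    <pose>0 0 1.0 0 0 0</pose>
  </link>
</model>
```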
{ "domain": "robotics.stackexchange", "id": 4449, "tags": "gazebo-7" }
Is my answer incomplete? Checking the stability of a system
Question: Yesterday, during my exam, I had the following exercise: Given $$H(s) = \frac{1}{s^2+2s+4}$$ check if it's stable. It was supposed to be the hardest (since it was the last one). From my knowledge, I quickly found the poles, checked their real part (which was $-1$ for both) and said that the system is stable. Is my argument incomplete? Everyone in the classroom seemed to write a lot more for that final exercise.

Answer: Did you mention the region of convergence? The system can be stable but anti-causal, depending on the ROC.
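The pole check itself is one line numerically (assuming the causal, right-sided interpretation, where poles strictly in the left half-plane imply stability):

```python
import numpy as np

# Denominator of H(s) = 1 / (s^2 + 2s + 4)
poles = np.roots([1.0, 2.0, 4.0])    # -1 +/- j*sqrt(3)
stable = bool(np.all(poles.real < 0))
```

The answer's point is that this conclusion silently assumes the ROC is the right half-plane Re(s) > -1; for the left-sided ROC the same poles give an anti-causal, unstable-for-causal-use system.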
{ "domain": "dsp.stackexchange", "id": 5514, "tags": "continuous-signals, linear-systems, stability" }
How is the waveform deviation from the equilibrium related to the air molecule movement?
Question: I'm not sure if this is a stupid question. I've been assuming that the deviation from equilibrium reflects the air pressure, with larger deviations reflecting higher air pressure. But in Reetz and Jongman's Phonetics, it's said that one reason to use RMS amplitude is to reflect the fact that more energy is required to move the air molecules further away. This reason seems to assume that the waveform deviation also reflects the air molecule movement. My question is: does a larger deviation necessarily reflect further movement of the air molecules, or must the higher air pressure derive from further movement of the air molecules?

Answer: The mean free path of a molecule in the air at normal pressure and temperature is about 70 nm, which is very short compared to the wavelength of typical sound waves (34 cm at 1 kHz). Therefore the motion of single molecules is not affected by the fact that a sound wave is propagating through the air. What really propagates is a small difference in air pressure.
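The numbers are easy to reproduce with the standard kinetic-theory estimate $\ell = k_BT/(\sqrt{2}\,\pi d^2 p)$; the effective molecular diameter $d \approx 3.7\times10^{-10}\,$m for air is an assumed textbook value:

```python
from math import pi, sqrt

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 293.0           # room temperature, K
p = 101325.0        # atmospheric pressure, Pa
d = 3.7e-10         # assumed effective molecular diameter of air, m

mfp = kB * T / (sqrt(2) * pi * d**2 * p)  # mean free path, ~7e-8 m
wavelength = 343.0 / 1000.0               # 1 kHz sound at c = 343 m/s
ratio = wavelength / mfp                  # millions: wavelength >> mean free path
```

Since the wavelength exceeds the mean free path by a factor of several million, the sound wave rides on collective pressure fluctuations, not on the trajectory of any individual molecule.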
{ "domain": "physics.stackexchange", "id": 47415, "tags": "acoustics, molecules, mean-free-path" }
The existence of sodium carbide
Question: (Here, I quote a question from an Indonesian undergraduate Chemistry competition: "sodium carbide reacts with water to form ethyne gas." This is the problem that caused me to ask this question on the site.) Please provide references on the existence or non-existence of sodium carbide. Thank you in advance to those who answer my question. Edit 2: The type of answers I look for are the ones which include analysis of probable sodium carbide structures, but a deeper explanation will also be highly appreciated. Thank you to "Community" who suggested an edit to this question. Edit 3: I (Galen) have changed the title of this question and rephrased my question. Thanks to Oscar Lanzi and others who have attempted to answer my question. Answer: As the comments imply, sodium carbide certainly does exist. However, it is difficult to get because it is only metastable. Therefore it cannot be made from direct combination of the elements in thermodynamically stable form, a procedure often used (in some cases indirectly) with stable binary compounds. Sodium carbide can be made by using the sodium/CO reaction mentioned in a comment by Waylander (https://patents.google.com/patent/US2642347A/en). This patent gives brief references to some other methods: It has been proposed to produce sodium carbide by reacting sodium vapor with carbon in an electric arc (German Patent 526,627), by reacting calcium carbide with sodium monoxide or sodium hydroxide (Vaughn U. S. P. 2,156,365) or by reacting metallic sodium with acetylene (British Patent 336,516 of 1930). Note that in no case is the carbon in a thermodynamically stable form when the carbide is formed; for example, without the sodium component, the carbon monoxide in the sodium/CO reaction would spontaneously decompose on cooling to the actual carbide formation temperature. In the case of the electric arc, the arc takes the carbon (and the sodium) into a higher-energy, metastable state. Sodium carbide is not alone. 
Magnesium carbide presents a similar issue. Wikipedia gives a curious explanation of the lack of full stability (and thus the impact on possible synthesis techniques) in terms of the related process of forming graphite intercalation compounds: Different from other alkali metals, the amount of Na intercalation is very small. Quantum-mechanical calculations show that this originate from a quite general phenomenon: among the alkali and alkaline earth metals, Na and Mg generally have the weakest chemical binding to a given substrate, compared with the other elements in the same group of the periodic table.[1] The phenomenon arises from the competition between trends in the ionization energy and the ion–substrate coupling, down the columns of the periodic table.[1] Cited Reference 1. Liu, Yuanyue; Merinov, Boris V.; Goddard, William A. (5 April 2016). "Origin of low sodium capacity in graphite and generally weak substrate binding of Na and Mg among alkali and alkaline earth metals". Proceedings of the National Academy of Sciences 113 (14): 3735–3739. arXiv:1604.03602. Bibcode:2016PNAS..113.3735L. https://doi.org/10.1073/pnas.1602473113. PMC 4833228. PMID 27001855.
{ "domain": "chemistry.stackexchange", "id": 15985, "tags": "inorganic-chemistry, stoichiometry, electronic-configuration" }
Does the length of DFT points (duration of a signal) affect the "amplitude" of the power spectral density?
Question: For example, performing a DFT on a 10-second-long and a 20-second-long signal with the same sampling frequencies will change the "amplitude" of the power spectral density (PSD) at each frequency because of the difference in frequency resolution, but the integral of the PSD will be the same. Is my understanding correct? Answer: Yes, I believe the OP understands correctly, with some clarification. Ultimately we need to properly label the spectrum as either a spectrum plot with a given resolution bandwidth, or a true PSD that has been normalized to frequency units (such as dBm/Hz). This is no different from a measurement of a noise or wideband signal using a spectrum analyzer, for those familiar with that. The properly normalized PSD plot will not change as we increase or decrease the total number of bins, while a spectrum plot will have the measurement level for a noise or wideband signal go up and down as the number of bins is changed (changing the resolution bandwidth accordingly). Each bin in the DFT is the integration of the total power under the frequency response for that bin as a bandpass filter (the frequency response of each bin is an aliased Sinc function, specifically the Dirichlet kernel). For white noise, this total power is equivalent to that of a brickwall filter that is 1 bin wide. Thus for noise signals, where the power is spread across multiple bins and given as a power spectral density in W/Hz (or other power-per-frequency units), when the DFT is properly scaled (see those details further below) the total magnitude squared of the DFT (as the integrated total power within each bin) will scale with the number of bins used, given the same sampling rate, according to the resolution bandwidth for that bin. And the total power summed over all the bins (the sum of $\lvert\frac{1}{N}X[k]\rvert^2$) will be equal to the total power in the time domain, as given by Parseval's theorem. 
The power in each bin is referred to as the "DFT Noise Floor", which I detail further in this post: Does the duration of a signal affect its frequency component's amplitude? Also, does the sampling frequency affect the power of a signal? This post explaining how the DFT is equivalently a bank of filters should be helpful too in understanding how the noise floor of the DFT for white noise signals scales with the total number of bins (and samples) used: Can anyone explain how dft works as a filter bank? A scaling of the DFT result by $1/N$ is required for it to have the same amplitude as the time domain signal. Consider the simple case of a sinusoid where the sampling rate is an integer multiple of the frequency (to avoid getting into spectral leakage effects for a simple example): $$x[n] = A\cos(2 \pi (f/f_s) n)$$ Where $A$ is the real magnitude, and $f_s$ is the sampling rate as an integer multiple of $f$. Using Euler's relationship we know this is: $$x[n] = A\cos(\omega_n n) = \frac{A}{2}e^{j\omega_n n} + \frac{A}{2}e^{-j\omega_n n}$$ Where $\omega_n = 2 \pi (f/f_s)$. Every component in the DFT result is the coefficient for an $e^{j\omega n}$ in the time domain, so the DFT of a sinusoid with the corrected magnitude should have two components, each with magnitude $A/2$ per the formula shown above (one component for the positive frequency given by $e^{j\omega_n n}$, and another for the negative frequency given by $e^{-j\omega_n n}$). For real signals the positive and negative frequencies will be complex conjugate symmetric, so we can provide all information from the positive frequencies only; in that case, if we only used the half of the DFT bins relating to the positive frequency components, we would need to increase the resulting PSD by 3 dB to include the total power from both sides. 
Observe the DFT formula given as: $$X(k) = \sum_{n=0}^{N-1}x[n]e^{-j2\pi n k/N}$$ When $2\pi k/N = \omega_n$, the product of the positive-frequency component with the summation kernel is the constant $A/2$, and so the sum grows to $NA/2$; dividing by $N$ then matches the amplitude of the coefficient of the $e^{j\omega_n n}$ time domain waveform. Further, since the magnitude of $\frac{A}{2}e^{j\omega_n n}$ is constant, the power is simply $(\frac{A}{2})^2$ and is not divided by $2$ as we would do with the sinusoid. And all bins sum in power so that the total power of $x[n]$ correctly equals $(\frac{A}{2})^2 + (\frac{A}{2})^2 = 2(\frac{A}{2})^2 = \frac{A^2}{2}$, and we happily get that factor of 2 for the sinusoid we are familiar with. For single tones and narrow-band signals that are less than the resolution bandwidth of the DFT bin, there is no effect from resolution bandwidth, as the power for the signal has no distribution (meaning the power of the signal occupies a bandwidth that is 0 wide, so no matter how tight or loose we make the resolution bandwidth on a spectrum analyzer like the DFT, we will get the same magnitude result when properly scaled, as I demonstrated). It is when we get into waveforms with bandwidth (and noise in general) that we must be careful about the effects of resolution bandwidth, windowing, etc., in using the DFT for accurate power spectral density measurements. I address this in these other posts here on StackExchange: How can I get the power of a specific frequency band after FFT? Proof for the energy correction factor of DFT
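The $1/N$ amplitude scaling and the Parseval power balance described above can be checked numerically with a small standard-library DFT (a sketch; $N = 8$ and $A = 2$ are my own example values):

```python
import cmath

N, A, k0 = 8, 2.0, 1        # 8 samples, amplitude 2, exactly one cycle
x = [A * cmath.cos(2 * cmath.pi * k0 * n / N).real for n in range(N)]

# Direct DFT: X[k] = sum_n x[n] * e^{-j 2 pi n k / N}
X = [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
     for k in range(N)]

# After dividing by N, bins k0 and N-k0 each hold the magnitude A/2
mags = [abs(Xk) / N for Xk in X]

# Parseval: mean time-domain power equals sum of |X[k]/N|^2 (here A^2/2 = 2)
time_power = sum(v * v for v in x) / N
freq_power = sum(m * m for m in mags)
print(mags[k0], time_power, freq_power)
```

The two non-zero bins each carry $(A/2)^2$, and their sum reproduces the familiar $A^2/2$ sinusoid power.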
{ "domain": "dsp.stackexchange", "id": 12467, "tags": "fft, fourier-transform, dft" }
How can I use GetPlan service in Python?
Question: Hello! I want to use make_plan service of move_base package in order to create a path towards a specified goal. How am i supposed to deal with this service in Python ? start = PoseStamped() start.header.seq = 0 start.header.frame_id = "map" start.header.stamp = rospy.Time(0) start.pose.position.x = robot_x #2.6 start.pose.position.y = robot_y #1.3 Goal = PoseStamped() Goal.header.seq = 0 Goal.header.frame_id = "map" Goal.header.stamp = rospy.Time(0) Goal.pose.position.x = goal_x #-6.2 Goal.pose.position.y = goal_y #-3.0 srv = GetPlan() srv.request.start = start srv.request.goal = Goal srv.request.tolerance = 1.5 I assume that in this way I can declare the robot's current position(start) and target's coordinates(goal) as presented here. In which way should I initialize the server so that i can obtain the path ? What's the crucial part that I have to add in my code? Originally posted by Dimi on ROS Answers with karma: 17 on 2019-08-13 Post score: 1 Answer: you have to call the service: get_plan = rospy.ServiceProxy('/move_base/make_plan', nav_msgs.GetPlan) req = nav_msgs.GetPlan() req.start = start req.goal = Goal req.tolerance = .5 resp = get_plan(req.start, req.goal, req.tolerance) print(resp) Originally posted by ct2034 with karma: 862 on 2019-08-13 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Dimi on 2019-08-13: Thanks, indeed. Just to point out that I need to modify the last line : resp = get_plan(req) to : resp = get_plan(req.start, req.goal, req.tolerance) otherwise I get an error ! Comment by LukeBowersox on 2020-04-14: I followed ct2304's advice in ros kinetic and it worked also I had to import nav_msgs.srv
{ "domain": "robotics.stackexchange", "id": 33614, "tags": "ros-kinetic" }
How to give a goal using robot_localization
Question: I've set up move_base and robot_localization and I'm getting Datum UTM coordinate is (483236.261047, 6674529.600528) when I launch the localization file. How do I know who's publishing these values, and how do I publish these values to move_base? I created the robot_localization files referring to husky's files. I've read that I have to convert LL to UTM, but why is it giving me UTM values? edit: launch file <?xml version="1.0"?> <launch> <group ns="gps_nav"> <rosparam command="load" file="$(find sk_navigation)/params/ekf_params.yaml" /> <rosparam command="load" file="$(find sk_navigation)/params/navsat_params_sim.yaml" /> <node pkg="robot_localization" type="ekf_localization_node" name="ekf_se_odom" clear_params="true"/> <node pkg="robot_localization" type="ekf_localization_node" name="ekf_se_map" clear_params="true"> <remap from="odometry/filtered" to="odometry/filtered_map"/> </node> <node pkg="robot_localization" type="navsat_transform_node" name="navsat_transform" clear_params="true" output="screen" > <remap from="odometry/filtered" to="odometry/filtered_map"/> <remap from="gps/fix" to="/sk/gps/fix"/> <remap from="/imu/data" to="/sk/imu"/> </node> </group> </launch> params ekf_se_odom: frequency: 30 sensor_timeout: 0.1 two_d_mode: true transform_time_offset: 0.0 transform_timeout: 0.0 print_diagnostics: true debug: false map_frame: map odom_frame: odom base_link_frame: base_link world_frame: odom # ------------------------------------- # Wheel odometry: odom0: /odom odom0_config: [false, false, false, false, false, false, true, true, true, false, false, false, false, false, false] odom0_queue_size: 10 odom0_nodelay: true odom0_differential: false odom0_relative: false # -------------------------------------- # imu configure: imu0: /sk/imu imu0_config: [false, false, false, true, true, false, false, false, false, true, true, true, true, true, true] imu0_nodelay: false imu0_differential: false imu0_relative: false imu0_queue_size: 10 
imu0_remove_gravitational_acceleration: true use_control: false use_control: false process_noise_covariance: [1e-3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3] initial_estimate_covariance: [1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0] ekf_se_map: frequency: 30 sensor_timeout: 0.1 two_d_mode: true transform_time_offset: 0.0 transform_timeout: 0.0 print_diagnostics: true debug: false map_frame: map odom_frame: odom base_link_frame: base_link world_frame: map # ------------------------------------- # Wheel odometry: odom0: /odom odom0_config: [false, false, false, false, false, false, true, true, true, false, false, true, false, false, false] odom0_queue_size: 10 
odom0_nodelay: true odom0_differential: false odom0_relative: false # ------------------------------------- # GPS odometry: odom1: /sk/gps/fix odom1_config: [true, true, false, false, false, false, false, false, false, false, false, false, false, false, false] odom1_queue_size: 10 odom1_nodelay: true odom1_differential: false odom1_relative: false # -------------------------------------- # imu configure: imu0: /sk/imu imu0_config: [false, false, false, true, true, false, false, false, false, true, true, true, true, true, true] imu0_nodelay: true imu0_differential: false imu0_relative: false imu0_queue_size: 10 imu0_remove_gravitational_acceleration: true use_control: false process_noise_covariance: [1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3] initial_estimate_covariance: [1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0] topics /gps_nav/gps/filtered /gps_nav/imu/data /gps_nav/odometry/filtered /gps_nav/odometry/filtered_map /gps_nav/odometry/gps /gps_nav/set_pose Originally posted by me_saw on ROS Answers with karma: 31 on 2020-04-19 Post score: 0 Answer: Actual navigation of the robot is handled by move_base. You will give position goals to move_base and move_base will provide velocity goals to your controller to bring your robot to the final pose without colliding into obstacles. robot_localization is a package used to obtain an accurate estimate of your current position by applying a Kalman filter over it. Ensuring the GPS data is in UTM format is crucial since it follows the ROS ENU standard. If you want to know who is publishing what, please use rqt_graph and it will display the relationship between topics and ROS nodes. In move_base, you have to configure the odometry data by using ROS transform frames. If your GPS was published in a map to base_link frame, you can specify map as the global frame and base_link as the robot frame in the move_base config file. move_base will use this information along with any costmap provided to navigate. Originally posted by hashirzahir with karma: 228 on 2020-04-19 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by me_saw on 2020-04-20: can you show me an example of a robot_localization rqt_graph so that I can have an idea how it should look Comment by hashirzahir on 2020-04-20: The robot_localization node should take in one or two sensor inputs such as sensor_msgs::IMU from your IMU and maybe nav_msgs::Odometry from your GPS publisher. The robot_localization node will publish 1 topic, something like /odom/filtered. If you are not sure what your node is doing, you can do rosnode list on the command line and rosnode info NODE_NAME for whatever node you want to see. 
As mentioned, ideally, robot_localization will take in a few topics from your different sensors (in the correct ENU orientation) and output 1 filtered odometry topic that is a good localization estimate of your robot. Please check here to see the published topics of robot_localization Comment by me_saw on 2020-04-20: I referred to the husky robot's localization to configure my files and I'm getting so many topics, so I'm confused which is which. I've edited the question and the params, please check it once Comment by hashirzahir on 2020-04-20: First of all, why do you have 2 nodes <node pkg="robot_localization" type="ekf_localization_node" name="ekf_se_odom" clear_params="true"/> running? ekf_se_odom and ekf_se_map. The data sources are exactly the same for odom0 and imu0 so why do you need 2 nodes? Comment by me_saw on 2020-04-20: I thought for the map I should use another node, so it's not required? Comment by hashirzahir on 2020-04-20: What is the map frame for in your use case? Traditionally, ROS follows map -> odom -> base_link where odom->base_link is the dead-reckoning based estimation that is used for low level controls. It updates at a high frequency. map->odom is a correction provided by some algorithm (say SLAM or GPS) to fix the finalized odometry if the GPS updates at a slow rate. Please explain what is your use case and what is your TF tree. What is your end goal? Comment by me_saw on 2020-04-20: I'm using GPS and I want to know the current position of my robot, and I want to input GPS coordinates as a goal to move_base to move to a certain position Comment by me_saw on 2020-04-20: I got /odom/filtered; it's giving me the twist and pose values. What are these values, and how do I use them to move to a certain position by giving them to move_base? I'm also launching navsat_transform and it's publishing two topics; what are they and how do I use them? Comment by hashirzahir on 2020-04-20: Please read the documentation on the Odometry message. 
It clearly states what is twist (speed) and pose (position). As long as this is correct, move_base can work correctly. You don't need to use robot_localization as your start point. Just create a simple publisher that publishes the TF from map -> base_link and an Odometry message over some topic using GPS alone.
{ "domain": "robotics.stackexchange", "id": 34794, "tags": "ros, gps, ros-melodic" }
First order Wiener–Hopf filter design
Question: Consider a random process with auto-correlation function: $$r_{\rm dd} [k] = \beta^{\lvert k \rvert}\quad\text{where}\quad 0 < \beta < 1. $$ Suppose also that the observation is: $$ x[n] = d[n] + v[n] $$ where $v[n]$ is uncorrelated white noise with variance $\sigma^2$. Design a first order Wiener–Hopf filter to reduce the noise in $x[n]$ of the form $$W(z) = w(0) + w(1)z^{-1}$$ Answer: I refer to the notation from the Wikipedia article. Your received signal is $x[n]$, and its autocorrelation is given by $R_x[k]=R_d[k]+R_v[k]$ when noise and signal are uncorrelated. Hence, $$ R_x[k]=\beta^{|k|}+\sigma^2\delta[k] $$ The cross-correlation between the received signal and the signal of interest is $$R_{xd}[k]=E[d[n](d[n+k]+v[n+k])]=R_d[k]$$ again under the assumption that signal and noise are uncorrelated. Now, the Wiener-Hopf equation gets you $$ \begin{pmatrix}R_x[0] & R_x[1]\\ R_x[1] &R_x[0]\end{pmatrix}\begin{pmatrix}w[0]\\w[1]\end{pmatrix}=\begin{pmatrix}R_{xd}[0]\\R_{xd}[1]\end{pmatrix} $$ Filling in the variables, we get $$ \begin{pmatrix}1+\sigma^2 & \beta\\ \beta &1+\sigma^2\end{pmatrix}\begin{pmatrix}w[0]\\w[1]\end{pmatrix}=\begin{pmatrix}1\\\beta\end{pmatrix} $$ Now, you just need to solve for $w[0],w[1]$ to get your filter coefficients.
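For concreteness, the final 2×2 system can be solved by hand or numerically; here is a standard-library sketch with example values $\beta = 0.5$ and $\sigma^2 = 0.1$ (my own choices, not from the problem statement):

```python
beta, sigma2 = 0.5, 0.1     # example values

# Wiener-Hopf system:  [[1+s2, b], [b, 1+s2]] @ [w0, w1] = [1, b]
a, b = 1.0 + sigma2, beta
det = a * a - b * b                  # determinant of the symmetric Toeplitz matrix
w0 = (a * 1.0 - b * beta) / det
w1 = (a * beta - b * 1.0) / det

print(f"W(z) = {w0:.4f} + {w1:.4f} z^-1")
```

As a sanity check, substituting the coefficients back into the system reproduces the right-hand side $(1, \beta)^T$.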
{ "domain": "dsp.stackexchange", "id": 4646, "tags": "filter-design, homework, adaptive-filters, random-process" }
Training CNN for Regression
Question: Background: I am using CNN to predict forces acting on a circular particle in a granular medium. Based on the magnitude of the forces, particle exhibits different patterns on its surface. The images are greyscaled 64-by-64 pixels. You can see different pictures with the magnitude of the corresponding force on the x-axis attached below. My attempt at a solution: I am relatively new to deep learning and data science and decided to use a simple conv net to run a regression. My code is provided below. I tried to fit the model using adam optimizer and MSE as a loss function, but it takes forever and sometimes aborts execution by itself. What could be the problem? I am running it on a PC with 8GB RAM, 1TB SSD, Intel i7 CPU, and GTX 1080 GPU. def build_model(): model = Sequential() model.add(Conv2D(64, kernel_size=(3, 3), strides = (1,1), padding = 'valid', activation='relu', input_shape=input_shape)) model.add(Conv2D(64, kernel_size = (3, 3), strides = (1,1), padding = 'valid', activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2), strides = (2,2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(53824, activation='relu')) model.add(Dense(53824, activation='relu')) model.add(Dense(53824, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1, activation='linear')) return model Images of 9 particles with different force labels on x-axis: Answer: Although building neural network models is admittedly still an art rather than a science, there are some (unwritten) rules, at least for initial approaches to a problem, such as yours here (I guess). One of them is that dense layers with 50,000 nodes are too large, and AFAIK I have never seen such large layers in practice; multiply this x3 (layers), and no wonder your code takes forever. I would certainly suggest to experiment with a dense layer size between 100 - 1000, and even start with less than 3 dense layers. Reducing your 1st CNN layer to 32 is also certainly an option.
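To see why those dense layers stall training, it helps to count parameters. The back-of-the-envelope arithmetic below (my own addition, assuming 64×64×1 inputs as described) shows the first Dense(53824) layer alone holds roughly 3.1 billion weights, about 11.5 GB in float32, which exceeds both the 8 GB of system RAM and the GTX 1080's video memory:

```python
# Shape bookkeeping for the model above (valid padding, stride 1, 64x64 input)
h = w = 64
h, w = h - 2, w - 2          # Conv2D 3x3 'valid'  -> 62 x 62 x 64
h, w = h - 2, w - 2          # Conv2D 3x3 'valid'  -> 60 x 60 x 64
h, w = h // 2, w // 2        # MaxPooling2D 2x2    -> 30 x 30 x 64
flat = h * w * 64            # Flatten             -> 57600 features

dense_units = 53824
params = flat * dense_units + dense_units     # weights + biases of first Dense
gigabytes = params * 4 / 1024 ** 3            # float32 storage

print(f"first Dense layer: {params:,} parameters, ~{gigabytes:.1f} GB")
```

With dense layers in the suggested 100-1000 range, the same computation gives on the order of millions of parameters, which fits comfortably on the GPU.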
{ "domain": "datascience.stackexchange", "id": 6108, "tags": "deep-learning, keras, tensorflow, cnn, image" }
Rounded borders for different controls (Button, TextBox, ComboBox) via Attached Property
Question: Suppose you just want to set border radius for different controls: Button TextBox ComboBox The most frequent approach I usually find is totally override default template like in answers https://stackoverflow.com/a/4779905/1548895 https://stackoverflow.com/a/6746271/1548895 https://stackoverflow.com/a/17681374/1548895 I find overriding default styles for every CornerRadius value of Button, TextBox, ComboBox very bad idea for the following reasons: You'll have to carry a lot of styles in your resources and they will be polluted with a lot of default properties. You create duplicate code for every property value if you need different values You don't have clean, elegant code for setting border radius itself So I've implemented Attached property for this. The most changing part of implementing was to implement this for ComboBox public class CornerRadiusSetter { public static CornerRadius GetCornerRadius(DependencyObject obj) => (CornerRadius)obj.GetValue(CornerRadiusProperty); public static void SetCornerRadius(DependencyObject obj, CornerRadius value) => obj.SetValue(CornerRadiusProperty, value); public static readonly DependencyProperty CornerRadiusProperty = DependencyProperty.RegisterAttached(nameof(Border.CornerRadius), typeof(CornerRadius), typeof(CornerRadiusSetter), new UIPropertyMetadata(new CornerRadius(), CornerRadiusChangedCallback)); public static void CornerRadiusChangedCallback(object sender, DependencyPropertyChangedEventArgs e) { Control control = sender as Control; if (control == null) return; control.Loaded += Control_Loaded; } private static void Control_Loaded(object sender, EventArgs e) { Control control = sender as Control; if (control == null || control.Template == null) return; control.ApplyTemplate(); CornerRadius cornerRadius = GetCornerRadius(control); Control toggleButton = control.Template.FindName("toggleButton", control) as Control; if (control is ComboBox && toggleButton != null) { toggleButton.ApplyTemplate(); // Set border 
radius for the toggle button's templateRoot border Border toggleButtonBorder = toggleButton.Template.FindName("templateRoot", toggleButton) as Border; toggleButtonBorder.CornerRadius = cornerRadius; // Expand padding for combobox to avoid text clipping by border radius control.Padding = new Thickness( control.Padding.Left + cornerRadius.BottomLeft, control.Padding.Top, control.Padding.Right + cornerRadius.BottomRight, control.Padding.Bottom); // Decrease width of dropdown and center it to avoid showing "sticking" dropdown corners Popup popup = control.Template.FindName("PART_Popup", control) as Popup; if (popup != null) { double offset = cornerRadius.BottomLeft - 1; if (offset > 0) popup.HorizontalOffset = offset; } SystemDropShadowChrome shadowChrome = control.Template.FindName("shadow", control) as SystemDropShadowChrome; if (shadowChrome != null) { double minWidth = control.ActualWidth - cornerRadius.BottomLeft - cornerRadius.BottomRight; if (minWidth > 0) shadowChrome.MinWidth = minWidth; } } // setting borders for non-combobox controls Border border = control.Template.FindName("border", control) as Border; if (border == null) return; border.CornerRadius = cornerRadius; } } So now you can set borders either in styles or in code via a single property <Button local:CornerRadiusSetter.CornerRadius="10">Button</Button> <Button local:CornerRadiusSetter.CornerRadius="5,7,10,12">Button</Button> <TextBox local:CornerRadiusSetter.CornerRadius="3,0,0,3" /> <TextBox local:CornerRadiusSetter.CornerRadius="7,8,2,1" /> <ComboBox local:CornerRadiusSetter.CornerRadius="5" /> <ComboBox local:CornerRadiusSetter.CornerRadius="7" /> Or for multiple controls inside Resources: <Style TargetType="Button"> <Setter Property="local:CornerRadiusSetter.CornerRadius" Value="10" /> </Style> Results of styling Answer: This is a nice feature, and all in all it works well. 
A couple of remarks though: Split up the main method in two methods - one for ComboBoxes and one for other controls. It is more readable and easier to maintain: private static void Control_Loaded(object sender, EventArgs e) { Control control = sender as Control; if (control == null || control.Template == null) return; control.ApplyTemplate(); if (control is ComboBox) SetComboBox(control as ComboBox); else SetOtherControl(control); } Be aware that you set the Loaded event handler every time you change the CornerRadius. You probably don't change the corner radius after app load, but if you do, you attach a new handler for the controls Loaded event. It's no big deal because it is only called once (at load time) - but anyway it looks bad. You could probably handle it like this: public static void CornerRadiusChangedCallback(object sender, DependencyPropertyChangedEventArgs e) { Control control = sender as Control; if (control == null) return; control.Loaded -= Control_Loaded; control.Loaded += Control_Loaded; } You should be aware that SystemDropShadowChrome is defined in more than one assembly (Aero and Aero2 for instance) so you'll have to link to the right assembly. Which one that is in use I think is system dependent(?). You could maybe use Reflection to overcome this(?) Setting the properties of the Popup/Dropdown of the ComboBox at load time is a bad idea, because if the ComboBox changes size (width), the settings you made at load time are not adjusted to the new size. Instead you should make the settings for the ComboBox in both the Loaded and the SizeChanged events or maybe just in the DropDownOpened event.
{ "domain": "codereview.stackexchange", "id": 31061, "tags": "c#, .net, wpf" }
Newton Mechanics
Question: I am confused about this question, what does it mean? A particle has a mass of $2\ \mathrm{kg}$ and a force $$F = 24t^2 i + ( 36t - 6 ) j - 12tk$$ acting on it. At the time $t = 0$ the particle is at position $r = 3i - j + 4k$ and has initial velocity $v=6i+15j-8k$. Find the velocity as a function of $t$ and the position at every time. Answer: I am not going to answer your question for you, as that would deprive you of learning the material. I can clear up a few things for you though. The mass is $2\ \mathrm{kg}$. The force is given as three component vectors in the x, y, z directions ($i,j,k$). The velocity is given as three component vectors as well. You are given Force, Velocity ($V_i$), Mass, and position. It asks you to find $V_f(t)$. Use Newton's second law and calculus to solve for $V_f$. I'll do one of the components for you to get an idea of what you are looking for. Doing the entire problem is just time consuming. As you probably already know, any vector is the sum of all component vectors: $\overrightarrow {F_i} = \overrightarrow {F_x}+\overrightarrow {F_y}+\overrightarrow {F_z}$ Let's just look at the x-component for all given information. You are dealing, like I said, with Force, Mass, $V_o$, and position ($\Delta S$): $F_x = 24t^2N$ $\Delta S_x = 3m$ $V_{x}= 6 ms^{-1}$ $m=2kg$ If you are lost at this point, go look at the original information you provided to see where I found the numbers above. Now it asks you to provide a function of both velocity and position with respect to time; in layman's terms it's asking you to find $V_f(t)$ and $\Delta S (t)$. We are going to do this just for the x-component vector (you still need to do it for Y and Z, which I won't do here). 
So with the knowns that you have, you can find $V_f$ by using Newton's second law: $$ a={F \over m} $$ Then use integral calculus to find velocity as a function of time: $$ {d v \over d t } = {F_x \over m} $$ Rearrange the equation and simplify: $$ {dv } = {24t^2 \over m}\, d t $$ $$ \int_{V_o}^{V_f} {dv} = {1 \over m} \int_{t_o}^{t_f} {24t^2}\, dt $$ $$ V_f - V_o = {1 \over m} \Big[ 8t^3 \Big]_{t_o}^{t_f} = {8t_f^3 - 8t_o^3 \over m} $$ Taking $t_o = 0$ and writing $t$ for $t_f$: $$ V_{fx}(t)= {8 t^3 \over m} +V_{ox} $$ Plugging in the known values ($m = 2\ \mathrm{kg}$, $V_{ox} = 6\ \mathrm{m\,s^{-1}}$) you get $$ V_{fx}(t)=4t^3 +6 $$ Now this is where I leave you to figure out the rest for yourself, because now you have to take what you are given and find a function of position with respect to time, and not only that, but you are going to have to find functions for both velocity and position for the remaining vector components Y and Z. Hope this helps. Hint: to easily find the position from the velocity formula, use calculus (integrate once more).
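The x-component result can also be sanity-checked numerically; here is a small sketch in Python (the chosen time value and step count are arbitrary illustration choices, not part of the problem):

```python
# Numerical cross-check of the x-component result V_fx(t) = 4 t^3 + 6,
# obtained by integrating a_x = F_x / m = 24 t^2 / 2 from 0 to t.
m = 2.0       # kg
v0x = 6.0     # m/s, initial x-velocity

def a_x(t):
    return 24.0 * t**2 / m   # F_x / m

def v_x(t, steps=100000):
    """Midpoint-rule integration of a_x from 0 to t, plus the initial velocity."""
    dt = t / steps
    return v0x + sum(a_x((i + 0.5) * dt) for i in range(steps)) * dt

t = 3.0
exact = 4 * t**3 + 6         # closed form from the derivation above
print(v_x(t), exact)         # the two values should agree to several decimal places
```

This is only a check of the integration step, not a substitute for doing the y- and z-components by hand.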
{ "domain": "physics.stackexchange", "id": 26146, "tags": "homework-and-exercises, newtonian-mechanics, forces, vectors" }
Find the average score, given data in a table with labeled columns not in a fixed order
Question: I was on Hacker Rank for the first time and there was a question that required to get the inputs from STDIN and use collections.namedtuple(). It was my first time working with getting inputs from STDIN so the way I acquired the inputs and used them to calculate the data might not be efficient. The question was to calculate the average mark of the students and was given a multiple line string input with specific data about the student (like: NAME, CLASS, ID) in which one of them is MARKS (I have added an image of a sample STDIN input data below to explain better). But the layout of the input data could change so the code must be able to figure out where the MARKS are in the data and also get the number of students in order to calculate the average of their marks. I came up with a quick way to solve this but was wondering how to appropriately and efficiently acquire the input data and perform the calculation. Briefly, what is a better (pythony, efficient, shorter) way to write my code? # Enter your code here. Read input from STDIN. Print output to STDOUT from collections import namedtuple import sys data = [line.rstrip().split() for line in sys.stdin.readlines()] sOMCache = []; n = 0 dataLength = len(data) if 'MARKS' in data[1] : marksIndex = data[1].index('MARKS') for i in range(dataLength) : if n > 1 : sOMCache.append(data[n][marksIndex]) n += 1 sOM = sum([int(x) for x in sOMCache]) Point = namedtuple('Point','x,y') pt1 = Point(int(sOM), int((data[0][0]))) dot_product = ( pt1.x / pt1.y ) print (dot_product) Samples Testcase 01: 5 ID MARKS NAME CLASS 1 97 Raymond 7 2 50 Steven 4 3 91 Adrian 9 4 72 Stewart 5 5 80 Peter 6 Expected output: 78.00 Testcase 02: 5 MARKS CLASS NAME ID 92 2 Calum 1 82 5 Scott 2 94 2 Jason 3 55 8 Glenn 4 82 2 Fergus 5 Expected output: 81.00 Answer: I liked the fact that you used the sum() builtin function, but I have no idea what sOM or sOMCache mean in sOM = sum([int(x) for x in sOMCache]) — "Sum of marks", maybe? 
But the capitalization is weird by Python standards, and this isn't a cache — you wouldn't just eject data from this "cache", would you? I think, as you probably suspected, that you missed the mark for this exercise. The main issues with your solution are: Misunderstanding the Point. What you're computing here is an average, not a dot product. The dot product in the tutorial was just to illustrate how you can access member fields within a namedtuple using .x or .y. Failure to take advantage of namedtuple. If you do things right, you should be able to write something like row.MARKS to get the value of the MARKS column for a particular row. Lack of expressiveness, such that it's not obvious what the code intends to do at a glance. Suggested solution from collections import namedtuple from statistics import mean import sys def records(line_iter): """ Read records, one per line, space-separated, following a header. The header consists of two lines: the first line contains an integer specifying the number of records, and the second line specifies the column names. Records are yielded as namedtuples, where the fields have names specified by the second header row. """ n = int(next(line_iter)) Record = namedtuple('Record', next(line_iter)) for _ in range(n): yield Record(*next(line_iter).split()) print(mean(float(rec.MARKS) for rec in records(sys.stdin))) Explanation: The final line of code drives all the work: read records from STDIN, and print the mean of the MARKS column. I've chosen to interpret the marks as floats for versatility, but if you're sure that the data are all integers then you could call int(rec.MARKS) instead. All of the work to read the input is handled by a generator function. The docstring describes what it does. As you can see, it takes advantage of the namedtuple to let you refer to the column of interest later.
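As a quick check that the generator behaves as described, the first sample testcase can be fed in via io.StringIO in place of sys.stdin (the generator is repeated here so the snippet is self-contained):

```python
# Demo of the records() generator from the answer, run on Testcase 01.
from collections import namedtuple
from io import StringIO
from statistics import mean

def records(line_iter):
    n = int(next(line_iter))                     # first header line: record count
    Record = namedtuple('Record', next(line_iter))  # second header line: column names
    for _ in range(n):
        yield Record(*next(line_iter).split())

sample = StringIO(
    "5\n"
    "ID MARKS NAME CLASS\n"
    "1 97 Raymond 7\n"
    "2 50 Steven 4\n"
    "3 91 Adrian 9\n"
    "4 72 Stewart 5\n"
    "5 80 Peter 6\n"
)
avg = mean(float(rec.MARKS) for rec in records(sample))
print(f"{avg:.2f}")  # 78.00
```

Because the column name is looked up by attribute, the same code passes Testcase 02 unchanged even though MARKS is in a different position.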
{ "domain": "codereview.stackexchange", "id": 42419, "tags": "python, python-3.x, programming-challenge, parsing" }
K&R - The C Programming Language, Exercise 1-12
Question: I am working through K&R and just finished exercise 1-12; below is my solution. Exercise 1-12: Write a program that prints its input one word per line. #include <stdio.h> main() { int c; while((c = getchar()) != EOF) { if(c == ' ' || c == '\t' || c == '\n') { putchar('\n'); while(c == ' '|| c == '\t' || c == '\n') c = getchar(); } putchar(c); } } My question is that both the author and the solutions here define and use a state variable. In comparison, my solution is pretty simple, but it works. Am I missing something? Maybe there is a case where my solution won't work well? Looking forward to your comments. Thanks. Answer: To get a historically correct impression of programming in C, K&R is probably the best book available. In addition to that book, you should also get a more modern book about C since K&R doesn't teach you anything about function prototypes and buffer overflows (one of the main reasons that software written in C is often unreliable). For example, the page you linked has a solution containing char buffer[1024];. That solution will fail with unpredictable behavior as soon as you pass it a file containing very long words, as the buffer will overflow and the C program will not reliably crash but is free to do anything it wants, including crashing or making daemons fly out of your nose. This is called undefined behavior. There's not much to improve about your code. To make it modern, you simply have to replace one line: main() // before int main(void) // after After that, you should tell your editor to format the source code. This will indent the innermost line c = getchar() a bit more, so that it is clearly inside the while loop. // before: while(c == ' '|| c == '\t' || c == '\n') c = getchar(); // after: while (c == ' '|| c == '\t' || c == '\n') c = getchar(); Some other reviewers will say that you must always use braces in the if and while statements, but I disagree.
It's enough to let your editor or IDE format the source code automatically; this will make it obvious when your code is indented wrongly.
{ "domain": "codereview.stackexchange", "id": 37430, "tags": "c" }
Spinning bucket of water in zero gravity
Question: Everyone knows what the surface of a spinning bucket of water looks like on earth - parabolic. But what if we turned off gravity (for instance by doing the experiment in a freely falling lift)? Would the surface still be parabolic? I'll explain my confusion in more detail. The velocity of the spinning bucket is transferred to the water by means of frictional forces arising at the boundary between the bucket and the water. But these frictional forces exist whether or not there is gravity. So if I consider the whole bulk of water inside the bucket as a single system, this frictional force would give it a positive torque; thus the water has to rotate. For sustained rotation of the water, a centripetal force has to exist. In normal gravity, the water surface changes its shape into a paraboloid so that there is a net force on any particle directed inward. But in free fall, there is no pressure on a particle inside the liquid. Thus the only force that can supply the centripetal acceleration is the inter-molecular force between the particles, which is too weak to sustain large velocities. So what exactly happens? Answer: Assuming the bucket has a lid, you will end up with most of the water lining the outside of the bucket. This is how your basic artificial-gravity spinning habitat works, after all. Some water could, in principle, remain floating in the center, but that configuration is not stable. If the bucket has no lid, the water oozes up against the sides and runs out the open end.
{ "domain": "physics.stackexchange", "id": 5136, "tags": "classical-mechanics" }
What is a fermion doublet exactly?
Question: I am trying to prove that the Weinberg operator is the only dimension-5 operator that can be constructed out of Standard Model fields. To that end, I've tried to write up all the dimension-5 operators I can think of, and then apply Lorentz, $U(1)_Y$, and $SU(2)_L$ transformations to see which are invariant. Problem is, I'm very unsure what the properties of the fermion doublets, $$ L = \begin{pmatrix} \nu_L \\ e_L \end{pmatrix}, $$ are. Schwartz writes that it transforms as a left-handed Weyl spinor under $SU(2)$, while the singlet $e_R$ transforms as a right-handed Weyl spinor. Does this mean that $\nu_L$, $e_L$ and $e_R$ are Weyl spinors? I think I have a good understanding of Dirac and Weyl spinors, $$ \psi = \begin{pmatrix} \psi_1 \\ \psi_2 \\ \psi_3 \\ \psi_4 \end{pmatrix} = \begin{pmatrix} \psi_L \\ \psi_R \end{pmatrix}, \quad \psi_L = \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix}, \quad \psi_R = \begin{pmatrix} \psi_3 \\ \psi_4 \end{pmatrix}. $$ I know you can build Dirac spinor bilinears that transform nicely under the Lorentz group using the $\gamma^\mu$-matrices, $$ \overline{\psi}\psi, \quad \overline{\psi}\gamma^5\psi, \quad \overline{\psi}\gamma^\mu\psi, \quad \overline{\psi}\gamma^5\gamma^\mu\psi, \quad \overline{\psi}[\gamma^\mu,\gamma^\nu]\psi, $$ and I know that for Weyl spinors, the $\sigma^\mu$-matrices are used instead. For instance, in the Weyl basis, $$ \overline{\psi}\gamma^\mu\psi = \begin{pmatrix} \psi_L^\dagger & \psi_R^\dagger \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & \sigma^\mu \\ \overline{\sigma}^\mu & 0 \end{pmatrix} \begin{pmatrix} \psi_L \\ \psi_R \end{pmatrix} = \psi_L^\dagger\overline{\sigma}^\mu\psi_L + \psi_R^\dagger\sigma^\mu\psi_R. $$ But when I write something like $\overline{L}\gamma^\mu L$, what exactly does it mean? 
Is this just confusing notation, or does it really mean $$ \overline{L}\gamma^\mu L = \overline{\nu}_L\gamma^\mu\nu_L + \overline{e}_L\gamma^\mu e_L, $$ such that each component in the doublet is a Dirac spinor? I'm very confused. Answer: Everything with a subscript of $L$ or $R$ is a Weyl spinor and always will be. As you note, a Dirac spinor can be expressed in the Weyl basis as \begin{equation} \psi = \begin{pmatrix} \psi_L \\ \psi_R \end{pmatrix} \end{equation} meaning the bilinear $\bar{e}_L \gamma^\mu e_L$ say, which appears in $\bar{L} \gamma^\mu L$, should be evaluated by the rule \begin{equation} e_L \mapsto \begin{pmatrix} e_L \\ 0 \end{pmatrix} \end{equation} so that only one block of the $\gamma$ matrices act on it. There would be less of an abuse of notation if we wrote $L^\dagger \bar{\sigma}^\mu L$ to begin with.
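Applying the embedding rule explicitly, with the same conventions as the $\overline{\psi}\gamma^\mu\psi$ computation in the question (the $\psi_R$ slot is simply zero), gives
$$
\overline{e}_L\gamma^\mu e_L
= \begin{pmatrix} e_L^\dagger & 0 \end{pmatrix}
\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} 0 & \sigma^\mu \\ \overline{\sigma}^\mu & 0 \end{pmatrix}
\begin{pmatrix} e_L \\ 0 \end{pmatrix}
= e_L^\dagger\,\overline{\sigma}^\mu e_L,
$$
so that
$$
\overline{L}\gamma^\mu L = \nu_L^\dagger\,\overline{\sigma}^\mu \nu_L + e_L^\dagger\,\overline{\sigma}^\mu e_L = L^\dagger\,\overline{\sigma}^\mu L,
$$
with each entry of the doublet a two-component Weyl spinor throughout.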
{ "domain": "physics.stackexchange", "id": 94270, "tags": "quantum-field-theory, standard-model, spinors, electroweak, isospin-symmetry" }
How to prove that $\{\$x\$\}$ is a regular language if $x$ is derived from $L=\{w\}$ by substituting substrings?
Question: Prove that if $L$ is regular over $\Sigma=\{0,1,2\}$ then the following language over $\{0,1,2,\$\}$ is also regular: $$ G=\{\$x\$|\exists w\in L: x\text{ is derived from }w\text{ by substituting } 01 \text{ with }\$\$ \} $$ For example, if $10112\in L$ then $\$1\$\$12\$\in G$. I think this can be solved using closure properties of regular languages. 1) Let $H=\$L\$$. $H$ is also regular because it's a concatenation. 2) Let $h:\Sigma\to \Sigma^*$ be defined as follows: $$ h(\$)=h(0)=h(1)=\$ $$ Then: $$ G=h^{-1}(H)\cap \$(\Sigma^*\$\$\Sigma^*)^+\$ $$ which is regular because regular languages are closed under $h^{-1}$, intersection, and the operations used in regular expressions. I wonder whether my proof using closure is correct or whether an automaton should be built in this case? In addition, if I managed to think of a regular expression describing $G$, would this alone have proved that $G$ is regular? Answer: Unfortunately, your argument breaks down at step 2). For example, let $L=\{01\}$. Then $G =\{\$\$\$\$\}.$ $H=\{\$01\$\}$. $h^{-1}(H)=\emptyset$ since every word in the range of $h$ contains neither 0 nor 1. $h^{-1}(H)\cap \$(\Sigma^*\$\$\Sigma^*)^+\$=\emptyset.$ However, $G$ is not empty. If I managed to think of a regular expression describing G, would this alone have proved that G is regular? Of course. However, it looks like it is not immediate to figure out a regular expression for $G$ even if we have been given a regular expression or a DFA for $L$. Here is a way to show $G$ is regular by DFA. Let the DFA for $L$ be $(\Sigma,Q, q_0,\delta_L, F)$.
Define (an incomplete) DFA $D$ with alphabet $\{0,1,2,\$\}$, states $Q\times \{s_0, s_1, o\}$, initial state $(q_0, s_0)$, accepting states $F\times\{s_0, o\}$, and transition function $\delta_D$ such that $\delta_D((q, s_0), 0)= (\delta_L(q, 0), o) $ $\delta_D((q, s_0), 1)= (\delta_L(q, 1), s_0) $ $\delta_D((q, s_0), 2)= (\delta_L(q, 2), s_0) $ $\delta_D((q, s_0), \$)= (\delta_L(q, 0), s_1) $ $\delta_D((q, s_1), \$)= (\delta_L(q, 1), s_0) $ $\delta_D((q, o), 0)= (\delta_L(q, 0), o) $ $\delta_D((q, o), 2)= (\delta_L(q, 2), s_0) $ $\delta_D((q, o), \$)= (\delta_L(q, 0), s_1) $ (in every case the first $\$$ of a $\$\$$ pair simulates reading the $0$ of the substituted $01$, and the second $\$$ simulates reading the $1$). Here is how we can understand the states: state $(q,o)$ corresponds to the states that are reached by words that end with $0$. state $(q,s_1)$ corresponds to the states that are reached by words that end with an odd number of $\$$'s. state $(q,s_0)$ corresponds to the states that are reached by words that end with $1$ or $2$ or an even number of $\$$'s. We can check that $G=\$L(D)\$$. Exercise. Prove that if $L$ is regular over $\Sigma=\{0,1\}$ then the following language over $\{0,1,2\}$ is also regular. $$ G=\{x\mid \exists w\in L: x\text{ is derived from }w\text{ by substituting } 00 \text{ and } 11 \text{ with } 2 \text{ from left to right} \}$$ For example, if $w=1000111110$, then $x=1202210$.
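The construction can be sanity-checked mechanically. Below is a small Python sketch (my own illustration, not part of the original answer) encoding the transition table, with the convention that the first $ of a $$ pair simulates reading the 0 of the substituted 01 and the second $ simulates the 1:

```python
# delta_L is the DFA for L as a dict (state, symbol) -> state; missing
# entries stand for a dead state (reject).

def step(delta_L, state, c):
    q, s = state
    if s == 's0' and c in '012':
        return (delta_L.get((q, c)), 'o' if c == '0' else 's0')
    if s == 's0' and c == '$':
        return (delta_L.get((q, '0')), 's1')
    if s == 's1' and c == '$':
        return (delta_L.get((q, '1')), 's0')
    if s == 'o' and c == '0':
        return (delta_L.get((q, '0')), 'o')
    if s == 'o' and c == '2':
        return (delta_L.get((q, '2')), 's0')
    if s == 'o' and c == '$':
        return (delta_L.get((q, '0')), 's1')
    return None  # undefined transition: reject

def accepts(delta_L, q0, finals, x):
    state = (q0, 's0')
    for c in x:
        state = step(delta_L, state, c)
        if state is None or state[0] is None:
            return False
    return state[0] in finals and state[1] in ('s0', 'o')

# L = {10112}: a chain DFA with states 0..5, accepting state 5.
dL = {(i, c): i + 1 for i, c in enumerate('10112')}
print(accepts(dL, 0, {5}, '1$$12'))   # True: 10112 with 01 -> $$
print(accepts(dL, 0, {5}, '10112'))   # False: the 01 must be substituted
```

The two prints match the worked example in the question ($10112 \in L$ gives $\$1\$\$12\$ \in G$, i.e. $1\$\$12 \in L(D)$).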
{ "domain": "cs.stackexchange", "id": 13173, "tags": "formal-languages, regular-languages, regular-expressions" }
Visualising gas temperature and gas pressure
Question: Gas pressure is created when gas molecules collide with the wall of the container, creating a force. Gas temperature is a measure of how fast the molecules are moving/vibrating. However, both seem to be concerned with the "kinetic energy" of the molecules or, in other words, the "collisions" they impose on the target. How do we visualize the difference between the pressure and the temperature of a gas? Is there any obvious difference between the two? The same question in another form: A gas is hot when the molecules collide with your measuring device. A gas has high pressure when the molecules collide with your measuring device. So what is the difference between the two "collisions" in the physical sense, and how do we visualize it? For simplicity: How can a hot gas be low pressured? (The molecules are supposed to have high kinetic energy since the gas is hot, so it should be high pressured at all times! But no.) How can a high-pressured gas be cold? (The molecules are supposed to collide extremely frequently with the walls of the container, so the gas should be hot at all times! But no.) Answer: Background Let us assume we have a function, $f_{s}(\mathbf{x},\mathbf{v},t)$, which defines the number of particles of species $s$ in the following way: $$ dN = f_{s}\left( \mathbf{x}, \mathbf{v}, t \right) \ d^{3}x \ d^{3}v $$ which tells us that $f_{s}(\mathbf{x},\mathbf{v},t)$ is the particle distribution function of species $s$ that defines a probability density in phase space. We can define moments of the distribution function as expectation values of any dynamical function, $g(\mathbf{x},\mathbf{v})$, as: $$ \langle g\left( \mathbf{x}, \mathbf{v} \right) \rangle = \frac{ 1 }{ N } \int d^{3}x \ d^{3}v \ g\left( \mathbf{x}, \mathbf{v} \right) \ f\left( \mathbf{x}, \mathbf{v}, t \right) $$ where $\langle Q \rangle$ is the ensemble average of quantity $Q$.
Application If we define a set of fluid moments with similar format to that of central moments, then we have: $$ \text{number density [$\# \ (unit \ volume)^{-1}$]: } n_{s} = \int d^{3}v \ f_{s}\left( \mathbf{x}, \mathbf{v}, t \right) \\ \text{average or bulk velocity [$length \ (unit \ time)^{-1}$]: } \mathbf{U}_{s} = \frac{ 1 }{ n_{s} } \int d^{3}v \ \mathbf{v}\ f_{s}\left( \mathbf{x}, \mathbf{v}, t \right) \\ \text{kinetic energy density [$energy \ (unit \ volume)^{-1}$]: } W_{s} = \frac{ m_{s} }{ 2 } \int d^{3}v \ v^{2} \ f_{s}\left( \mathbf{x}, \mathbf{v}, t \right) \\ \text{pressure tensor [$energy \ (unit \ volume)^{-1}$]: } \mathbb{P}_{s} = m_{s} \int d^{3}v \ \left( \mathbf{v} - \mathbf{U}_{s} \right) \left( \mathbf{v} - \mathbf{U}_{s} \right) \ f_{s}\left( \mathbf{x}, \mathbf{v}, t \right) \\ \text{heat flux tensor [$energy \ flux \ (unit \ volume)^{-1}$]: } \left(\mathbb{Q}_{s}\right)_{i,j,k} = m_{s} \int d^{3}v \ \left( \mathbf{v} - \mathbf{U}_{s} \right)_{i} \left( \mathbf{v} - \mathbf{U}_{s} \right)_{j} \left( \mathbf{v} - \mathbf{U}_{s} \right)_{k} \ f_{s}\left( \mathbf{x}, \mathbf{v}, t \right) \\ \text{etc.} $$ where $m_{s}$ is the particle mass of species $s$, the product of $\mathbf{A} \mathbf{B}$ is a dyadic product, not to be confused with the dot product, and a flux is simply a quantity multiplied by a velocity (from just dimensional analysis and practical use in continuity equations). In an ideal gas we can relate the pressure to the temperature through: $$ \langle T_{s} \rangle = \frac{ 1 }{ 3 } Tr\left[ \frac{ \mathbb{P}_{s} }{ n_{s} k_{B} } \right] $$ where $Tr\left[ \right]$ is the trace operator and $k_{B}$ is the Boltzmann constant. In a more general sense, the temperature can be (loosely) thought of as a sort of pseudotensor related to the pressure when normalized properly (i.e., by the density). Answers How can a Hot gas be Low Pressured? 
If you look at the relationship between pressure and temperature I described above, then you can see that for low scalar values of $P_{s}$, even smaller values of $n_{s}$ can lead to large $T_{s}$. Thus, you can have a very hot, very tenuous gas that exerts effectively no pressure on a container. Remember, it's not just the speed of one collision, but the collective collisions of the particles that matter. If you gave a single particle enough energy to impose the same effective momentum transfer on a wall as $10^{23}$ particles at much lower energies, it would not bounce off the wall but rather tear through it! How can a High Pressured gas be Cold? Similar to the previous answer: if we have large scalar values of $P_{s}$ and even larger values of $n_{s}$, then one can have small $T_{s}$. Again, as I stated in the previous answer, it is the collective effect of all the particles on the wall, not just the individual particles, that matters. So even though each particle may have a small kinetic energy, if you have $10^{23}$ hitting a wall all at once, the net effect can be large.
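For a rough numerical feel, here is a small sketch assuming the ideal-gas relation $P = n k_B T$ implied above; the density and temperature values are my own illustrative order-of-magnitude choices, not data from the answer:

```python
k_B = 1.380649e-23  # Boltzmann constant, J/K

def pressure(n, T):
    """Scalar pressure of an ideal gas with number density n [m^-3] at temperature T [K]."""
    return n * k_B * T

# Hot but tenuous (e.g. a very dilute laboratory plasma): high T, tiny P.
p_hot = pressure(n=1e14, T=1e6)

# Cold but dense (a gas near liquid-nitrogen temperature at high density): low T, large P.
p_cold = pressure(n=3e27, T=77)

print(f"hot/tenuous: {p_hot:.2e} Pa")   # millipascal range, far below atmospheric
print(f"cold/dense:  {p_cold:.2e} Pa")  # megapascal range, tens of atmospheres
```

The hot gas here exerts orders of magnitude less pressure than the cold one, purely because of the density factor.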
{ "domain": "physics.stackexchange", "id": 26423, "tags": "pressure, temperature, ideal-gas" }
Java beginner exercise : Write a class "Air Plane"
Question: I had to do this exercise for a further-education course in which I'm currently enrolled: Write a Java class "Air Plane". Object-property names and types are given and compulsory. Write the corresponding constructor and getter and setter methods. Check within the constructor that the given values are valid. Moreover, the following methods are to be implemented: info, load, fillUp, fly, getTotalWeight, getMaxReach. Further requirements concerning the implementation of the methods I have written into my code as comments. Here's my Plane class: package plane; public class Plane { private double maxWeight; private double emptyWeight; private double loadWeight; private double travelSpeed; private double flyHours; private double consumption; private double maxFuel; private double kerosinStorage; public Plane( double maxWeight, double emptyWeight, double loadWeight, double travelSpeed, double flyHours, double consumption, double maxFuel, double kerosinStorage ) { this.maxWeight = maxWeight; this.emptyWeight = emptyWeight; this.loadWeight = loadWeight; this.travelSpeed = travelSpeed; this.flyHours = flyHours; this.consumption = consumption; this.maxFuel = maxFuel; this.kerosinStorage = kerosinStorage < this.maxFuel ?
kerosinStorage : this.maxFuel; } public double getMaxWeight() { return maxWeight; } public double getEmptyWeight() { return emptyWeight; } public double getLoadWeight() { return loadWeight; } public double getTravelSpeed() { return travelSpeed; } public double getFlyHours() { return flyHours; } public double getConsumption() { return consumption; } public double getMaxFuel() { return maxFuel; } public double getKerosinStorage() { return kerosinStorage; } public void setMaxWeight(double maxWeight) { this.maxWeight = maxWeight; } public void setEmptyWeight(double emptyWeight) { this.emptyWeight = emptyWeight; } public void setLoadWeight(double loadWeight) { this.loadWeight = loadWeight; } public void setTravelSpeed(double travelSpeed) { this.travelSpeed = travelSpeed; } public void setFlyHours(double flyHours) { this.flyHours = flyHours; } public void setConsumption(double consumption) { this.consumption = consumption; } public void setMaxFuel(double maxFuel) { this.maxFuel = maxFuel; } public void setKerosinStorage(double kerosinStorage) { this.kerosinStorage = this.kerosinStorage + kerosinStorage > maxFuel ? maxFuel : this.kerosinStorage + kerosinStorage; } /* Returns the total weight of the plane, which is: emptyWeight + weight of load + weight of kerosin. Count 1 liter of kerosin as 0.8 kg. */ public double getTotalWeight () { return emptyWeight + loadWeight + (kerosinStorage * 0.8); } /* How far can the plane fly with the current kerosin storage? */ public double getMaxReach () { return (kerosinStorage / consumption) * travelSpeed; } /* Prevent flying further than possible (with the current kerosin) ! */ public boolean fly (double km) { if (km <= 0 || getMaxReach() < km || getTotalWeight() > maxWeight) { return false; } flyHours += (km / travelSpeed); kerosinStorage -= (km / travelSpeed) * consumption; return true; } /* ! The parameter 'liter' can be a negative number. Doesn't have to be overfilled.
Prevent a negative number as value of the 'kerosinStorage' property ! */ public void fillUp (double liter) { if ((kerosinStorage + liter) > maxFuel) { kerosinStorage = maxFuel; } else if ((kerosinStorage + liter) < 0) { kerosinStorage = 0; } else { kerosinStorage += liter; } } /* Prevent illogical value-assignments ! */ public boolean load (double kg) { if ((loadWeight + emptyWeight + kg) > maxWeight) { return false; } else if ((emptyWeight + kg) < 0) { loadWeight = 0; return true; } else { loadWeight += kg; return true; } } // Display flying hours, kerosin storage & total weight on the terminal. public void info () { System.out.println("Flying hours: " + flyHours + ", Kerosin: " + kerosinStorage + ", Weight: " + getTotalWeight()); } } And my Plane test class: package plane; public class TestPlane { public static void main (String[] args) { Plane jet = new Plane( 70000, 35000, 10000, 800, 500, 2500, 25000, 8000); jet.info(); jet.setKerosinStorage(1000); System.out.println(jet.getKerosinStorage()); System.out.println(jet.getTotalWeight()); System.out.println("Maximal reach: " + jet.getMaxReach()); System.out.println("Fly hours 1: " + jet.getFlyHours()); jet.fly(5000); System.out.println("Fly hours 1: " + jet.getFlyHours()); jet.load(10000); jet.info(); } } They ran automated tests on the code. It passed, but I'm still not sure about it. Therefore I would appreciate your comments and hints concerning my implementation of the described task. Answer: Builder pattern Consider the "builder pattern". When I started to pass arguments to the constructor, it was hard to keep the semantics right. The builder pattern helps the developer to abstract from argument input order, handle a lot of constructor arguments, abstract from sensible default values, and make arguments optional, thereby avoiding telescoping constructors. The builder pattern has only one assertion: it doesn't matter how many arguments you passed in, it will always build a consistent object.
Avoid multiple return-statements Return-statements are structurally identical to goto-statements, although they are a formalized version. What all goto-like statements (return, continue, break) have in common: they are not refactoring-stable. They hinder you from applying refactorings like "extract method". If you have to insert a new case into an algorithm that uses break-, continue- and return-statements, you may have to rethink the whole algorithm so that your change will not break it. Avoid inexpressive return values You may see return values like true/false to indicate that something has been processed well or not. These return values may be sufficient for trivial cases in trivial environments where few exceptional cases occur. In a complex environment a method execution may fail for several reasons: a connection to the server was lost, an inconsistency on the database side was recognized, the execution failed because of security restrictions... to name only the tip of the iceberg. For such cases modern languages introduce a concept for "exceptional" situations: exceptions. E.g. you have the following signature: public boolean load (double kg) Besides the fact that you have mixed two concerns in one method (load/unload) that you treat differently (overload will not be allowed, unload will be corrected), you also try to publish success information via the return value. I suggest not publishing true or false; instead have either no return value or return the new value of loadWeight. Exceptional cases I would handle with the concept of exceptions. I would expect a signature like this: public double load (double kg) throws OverloadedException The OverloadedException may not be signature-relevant (RuntimeException), but it expresses the intention of the method. Besides that, I would split responsibilities and introduce a method: public double unload (double kg) Avoid comments If you feel the need to write comments, it is an indicator that your code itself may not be clear enough.
I intentionally said "avoid comments" but not "do not comment anything". First make the things that will be compiled and run as clear as possible. Then, if you still think it's necessary to comment, comment. Comments have to be maintained separately. They are "uncompiled" code and cannot be put under test, so they may lie if they diverge from your code's semantics. E.g. you have the following signature: public void fillUp (double liter) In your comment you mentioned that "liter" may be negative. This is an allowed value, but your method signature says "fillUp". So one of them is lying. You now have two possibilities: Think about a name that abstracts from draining or filling up fuel (adjust?), so it is clear that you may pass a negative argument, or ... ... separate the concerns (draining, filling up) into separate methods to match the SRP (single responsibility principle). The best "comment" for a "procedure", "function" or "method" is a set of tests that show its usage, so other developers can see how your code will behave in different situations. Instead of testing your object in a main scope, I suggest writing ... Unit Tests Following the suggestions you can write expressive unit tests: public class TestPlane { /** * A plane's fuel can be filled up. */ @Test public void fillUpNormal() { Plane plane = new PlaneBuilder().setMaxFuel(2000).setInitialKerosinStorage(1700).build(); Assert.assertEquals(1800, plane.fillUp(100)); } /** * A plane cannot be filled up beyond max fuel. */ @Test public void fillUpOverfilled() { Plane plane = new PlaneBuilder().setMaxFuel(2000).setInitialKerosinStorage(1700).build(); try { plane.fillUp(400); Assert.fail(); } catch (OverfilledException e) { Assert.assertEquals(100, e.getOverfilledBy()); } } } You should decide which coverage you want to aim for. I prefer condition coverage over statement coverage because it forces you to keep your methods small. Methods under condition coverage have at least 2^condition_elements test cases.
If you have long methods with several conditions your test case count may explode. As you see in the test cases, I have comments. They describe the business rules you want to enforce.
{ "domain": "codereview.stackexchange", "id": 34457, "tags": "java, beginner, object-oriented" }
How does the tautological one-form convert a velocity to a momentum?
Question: The Wikipedia page on the "tautological one-form" $\theta$ says that it is used to create a correspondence between the velocity of a point in a mechanical system and its momentum, thus providing a bridge between Lagrangian mechanics and Hamiltonian mechanics and that velocities are appropriate for the Lagrangian formulation of classical mechanics, but in the Hamiltonian formulation, one works with momenta, and not velocities; the tautological one-form is a device that converts velocities $\dot{q}$ into momenta $p$. I certainly understand why this device is physically useful, but unfortunately, the Wikipedia doesn't explicitly explain how $\theta$ maps $\dot{q}$ to $p$. In order to make sure that I understand the tautological one-form correctly, I'd like to explain it in very concrete detail with a minimum of mathematical formalism; if anything below is incorrect, then please let me know. As I understand it, $\theta$ is constructed via the following steps: We start with the configuration space, which is represented by an $n$-dimensional smooth manifold $Q$. The velocities $v := \dot{q}$ live in the tangent space $TQ$. The cotangent bundle $T^*Q$ is a $2n$-dimensional smooth manifold that can (loosely) be thought of as the set of all ordered pairs $(q, p|_q)$, where $q \in Q$ and $p|_q$ is a one-form that linearly maps the tangent space $T_qQ$ at the point $q$ to $\mathbb{R}$. $q$ and $p|_q$ both have $n$ degrees of freedom, so the cotangent space $T^*Q$ is a $2n$-dimensional smooth manifold. It can therefore be locally parameterized by local coordinate charts $(U \subset T^*Q) \to \mathbb{R}^{2n}$, which we can split up into $2n$ different coordinate charts $(U \subset T^*Q) \to \mathbb{R}$ that we'll call $q^i,\ i = 1, \dots, n$ and $p_j,\ j = 1, \dots, n$, where we've used the cotangent bundle geometry to distinguish the $q^i$ from the $p_j$. 
Specifically, we separate the coordinate charts such that the $n$ charts $q^i$ only depend nontrivially on the first argument $q$ in each $(q,p|_q) \in T^*Q$, while the other $n$ coordinate charts $p_j$ can depend nontrivially on both $q$ and $p|_q$. These are just coordinate charts $(U \subset T^*Q) \to \mathbb{R}$, not vectors or one-forms or tensors of any kind. Fix an ordered pair $m = (q, p|_q) \in T^*Q$, where $q \in Q$ and $p|_q \in T^*_qQ$, which maps to a fixed set of $2n$ real numbers $q^i(m)$ and $p_i(m)$. The tautological one-form corresponding to the element $m \in T^*Q$ is the fixed linear functional $$\theta_m := \sum_{i=1}^n p_i(m)\ dq^i(m).$$ Despite the notation, the sum on $i$ is not a tensor contraction, but just a plain old sum. If we now let $m$ vary over $T^*Q$, then we get a one-form $$\theta \in T^*T^*Q := \{(m, \theta_m) | m \in T^*Q\}$$ over the whole cotangent bundle manifold $T^*Q$. For a given $i$, $q^i$ is a real-valued function of (an open subset of) the $2n$-dimensional cotangent bundle smooth manifold $T^*Q$. The one-form $\theta$ over $T^*Q$ is therefore technically $2n$-dimensional. But as mentioned above, we used the cotangent-bundle structure of $T^*Q$ to separate out the coordinate charts $q^i$ and $p_i$ so that the $n$ functions $q^i$ only depend nontrivially on the $q \in Q$ in the element $(q, p|_q) \in T^*Q$. But each summand (with fixed $i$) in the definition of $\theta$ is a one-form that points along one of the coordinate basis directions $dq^i$, and none of the summands point along a $dp_j$ direction. So while the linear functional $\theta_m$ technically acts on a $2n$-dimensional space, it is only nonzero within the $n$-dimensional subspace spanned by the $dq^i$. The defining map $m \to \theta_m$ technically maps the $2n$-dimensional cotangent bundle $T^*Q$ to $T^*_mT^*Q$, the $2n$-dimensional dual space spanned by the $dq^i$ and $dp_j$.
But the image of this map lies entirely within the $n$-dimensional dual subspace spanned by the $dq^i$, so without loss of generality we can restrict the target space down to that dual subspace and think of the defining map $m \mapsto \theta_m$ as a map from the $2n$-dimensional cotangent bundle $T^*Q$ to the $n$-dimensional dual space $T^*_qQ$. Is everything above correct (or at least correct enough for a physics level of rigor)? If so, I still don't see exactly what the map from velocities $v \in TQ$ to momenta $p \in T^*Q$ is. Given an explicit position $q \in Q$ and a list of coefficients $\dot{q}^i$ in the expansion $v = \dot{q}^i(v) \frac{\partial}{\partial q^i}$, how exactly do we use the tautological one-form $\theta$ to figure out the corresponding momentum $(q,p|_q) \in T^*Q$? Answer: Your statements look correct to me. However, your final claim is not quite right: you need to be careful about which manifolds are being identified. The identification this construction gives is between $T(T^*M)$ and $T^*(T^*M)$. This is done by the symplectic structure. Taking the exterior derivative of the tautological one-form, you get a two-form on the cotangent bundle: $$ \omega=d\theta $$ The non-degeneracy of $\omega$ induces at every $m\in T^*M$ an isomorphism between $T_m(T^*M)$ and $T_m^*(T^*M)$ given by: $$ \alpha \in T_m(T^*M) \to \omega_m(\alpha,\cdot)\in T_m^*(T^*M) $$ which extends to the entire bundles $T(T^*M)$ and $T^*(T^*M)$. However, this does not give an identification from $TM$ to $T^*M$. For this, you will need a Lagrangian (or conversely a Hamiltonian). Explicitly, given a Lagrangian $L$ defined on $TM$, its "partial derivative with respect to velocity", $p_i = \partial L/\partial \dot{q}^i$, gives the map. Its inverse is constructed by taking the Legendre transform of $L$, called the Hamiltonian, and differentiating with respect to the conjugate momentum. Intuitively, you need to find a way to raise/lower the indices, which cannot be done using only the differential structure of $M$; you need something more.
Hope this helps.
{ "domain": "physics.stackexchange", "id": 93792, "tags": "classical-mechanics, differential-geometry, hamiltonian-formalism, phase-space, canonical-conjugation" }
Gravity Disabled on Robot Pinned to World
Question: I have a simple robot with four links and four revolute joints. The first joint is pinned to the world. When I run the simulation, the robot is not affected by gravity. I know gravity is enabled because when I drag another object like a box and drop it into the world, it falls down. I suspect that gravity is not affecting my four-link robot because it is pinned to the world. Could this be the case? How can I enable gravity for my robot? Update: The problem was fixed by specifying collision elements for each link in the robot. This is strange since collisions and dynamics should be decoupled. Should this be considered a bug? Originally posted by liangfok on Gazebo Answers with karma: 21 on 2012-11-08 Post score: 2 Original comments Comment by hsu on 2012-11-12: can you post your robot model? (with and without collision bodies?) Comment by hsu on 2012-11-30: can you change your question's title to reflect the issue? i.e. dynamics disabled if missing collision element? thanks. Answer: It appears that if a model does not have any collision elements, it behaves statically; this is ticketed here. Originally posted by hsu with karma: 1873 on 2012-11-30 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by scpeters on 2013-01-11: This issue has been resolved.
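A minimal sketch of the fix described in the update: in URDF, each link gets a <collision> element alongside its <visual>. The link name and geometry values below are placeholders, not taken from the original model:

```xml
<link name="link1">
  <visual>
    <geometry><box size="0.1 0.1 0.5"/></geometry>
  </visual>
  <!-- Without a collision element, the Gazebo version in question
       treated the model as static (the ticketed bug above). -->
  <collision>
    <geometry><box size="0.1 0.1 0.5"/></geometry>
  </collision>
  <inertial>
    <mass value="1.0"/>
    <inertia ixx="0.01" iyy="0.01" izz="0.01" ixy="0" ixz="0" iyz="0"/>
  </inertial>
</link>
```

An inertial block is included as well, since links without mass properties are another common reason a simulated robot fails to respond to gravity.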
{ "domain": "robotics.stackexchange", "id": 2799, "tags": "gazebo" }
ImportError: No module named rospkg
Question: I am using ROS kinetic on Ubuntu 16.04. When I typed this command, "roslaunch turtlebot_gazebo turtlebot_world.launch", the following error occurred. Please teach me how to solve it.... ... logging to /home/yuki/.ros/log/fafca31c-4c63-11e7-a590-ac2b6eec753e/roslaunch-yuki-Inspiron-13-7368-13183.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. Traceback (most recent call last): File "/opt/ros/kinetic/share/xacro/xacro.py", line 55, in <module> import xacro File "/opt/ros/kinetic/lib/python2.7/dist-packages/xacro/__init__.py", line 42, in <module> from roslaunch import substitution_args File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/__init__.py", line 48, in <module> import rospkg ImportError: No module named rospkg while processing /opt/ros/kinetic/share/turtlebot_gazebo/launch/includes/kobuki.launch.xml: Invalid <param> tag: Cannot load command parameter [robot_description]: command [/opt/ros/kinetic/share/xacro/xacro.py '/opt/ros/kinetic/share/turtlebot_description/robots/kobuki_hexagons_asus_xtion_pro.urdf.xacro'] returned with code [1]. Param xml is <param command="$(arg urdf_file)" name="robot_description"/> The traceback for the exception was written to the log file Originally posted by Yuki on ROS Answers with karma: 1 on 2017-06-08 Post score: 0 Original comments Comment by 130s on 2018-02-17: As this is FAQ, you should be able to find many other threads that are already answered. While the accepted answer should work around your issue, I've asked #q282982 as a more fundamental question. Answer: try to install the package sudo apt-get install python-rospkg Originally posted by angeltop with karma: 351 on 2017-06-15 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 28082, "tags": "ros, roslaunch, ros-kinetic, ubuntu, rospkg" }
Choosing Gazebo , ROS , Linux version
Question: Hi, I'm quite new to ROS. First question: I want to download ROS and Gazebo in my VMware, but I am confused about the compatibility. My current Linux version is Xenial 16.04.1. According to this page, Gazebo 5.0 is compatible with ROS Jade, so I plan to use them. HOWEVER, my Ubuntu version is Xenial, and according to that page they (Gazebo 5 and ROS Jade) are not supported on Xenial (X) Ubuntu. Moreover, there is no Gazebo version supported on Xenial. AND the only ROS version supported on Xenial is Kinetic, which is not compatible with any Gazebo. Do you have any suggestion and explanation on this? My last question is: For every Gazebo and ROS version we can see that there is an EOL. For example, the EOL for Gazebo 5 is 2017-01-25, and the EOL for ROS Jade according to this ROS distribution page is May 2017. Does it mean that after the EOL, we can no longer use it (Gazebo and ROS)? If so, then how? If not, what does it mean? Please explain. Big thanks. Originally posted by alienmon on ROS Answers with karma: 582 on 2016-08-04 Post score: 0 Answer: I suspect that Gazebo page is out of date. The migration page for ROS Kinetic says: The Gazebo official versions supported in ROS Kinetic are the 7.x series. Also, Kinetic is currently the recommended ROS release, so right now the recommended setup is Ubuntu 16.04 Xenial Xerus + ROS Kinetic Kame + Gazebo 7.1. End of Life (EOL) simply means that support ends for that version; in other words, the developers are no longer going to spend any time updating, patching, adding features, troubleshooting, etc. for that version. The reason they do this is so they can spend their time wisely on supporting more current versions and working on new features for future versions rather than fixing things that most people aren't using anymore. You are free to use something that is EOL, but if you find a bug the developers are probably not going to fix it for you.
You could fix it yourself on your local copy, put up with the bug, or upgrade to the current version. Originally posted by Airuno2L with karma: 3460 on 2016-08-05 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 25453, "tags": "ros-jade" }
Proton spin/flavor wavefunction
Question: I am currently working through Griffiths' Introduction to Elementary Particles and I'm a little confused about a particle's spin/flavor wavefunctions. As a specific example, I've attached Griffiths' solution for the proton's wavefunction, and the formula he used to get it. I understand the solution, but what confuses me is the ordering for the antisymmetric spins/flavors. As an example, looking at the first term, wouldn't the flavors still be antisymmetric in particles 1 and 2 if we just switched the udu and duu terms? We would get a different final solution for the wavefunction now due to terms cancelling out after expanding. Answer: Note that Griffiths is very careful to match each of these terms, $$udu ~~\Leftrightarrow~ \uparrow \downarrow \uparrow,$$and if you match both terms at once you get two sign flips: $$(\downarrow \uparrow \uparrow - \uparrow \downarrow \uparrow)\otimes (duu - udu) = (-1)^2 (\uparrow \downarrow \uparrow - \downarrow \uparrow \uparrow)\otimes(udu - duu) $$and since $(-1)^2 = 1$ this is a non-issue. So the real question you're asking is, why do we have to match these terms? And that's a good question, and it has to do with how the 3 terms all play together (a sign flip on any individual term does nothing for consistency or inconsistency). So the expression takes the form of "we're going to insert some $u_\uparrow$ state into the twice-antisymmetrized 2-quark state $$d_\downarrow u_\uparrow - d_\uparrow u_\downarrow - u_\downarrow d_\uparrow + u_\uparrow d_\downarrow,$$because we know we have two $\uparrow$ spins and two $u$ quarks and so one of these up-quarks has to be in the spin-up state." (Note that under $1^\text{st}\leftrightarrow2^\text{nd}$ interchange the above is in fact symmetric, that last term being exactly the first term with the two particles switching places.)
Now the expression chooses to symmetrically insert this $u_\uparrow$ quark in the first position, the second position, and the third position, so that the result will still be symmetric here and will become antisymmetric after correcting for color charge. What you're proposing by flipping the sign of the first term, therefore, is not symmetrically inserting this $u_\uparrow$ quark in each of the three spots, but inserting it in the first place with a 180-degree phase shift. And that naturally will not be properly symmetric here or antisymmetric afterwards.
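The double sign flip described in the answer can be checked mechanically. Here is a small sketch (my own, not from the book) that expands both tensor products term by term and confirms they are the same state, while flipping only one factor, as the question proposes, gives a different state:

```python
from collections import defaultdict

def expand(spin_terms, flavor_terms):
    """Expand a tensor product of two signed sums into a coefficient map
    keyed by (spin string, flavor string)."""
    out = defaultdict(int)
    for s_coef, s in spin_terms:
        for f_coef, f in flavor_terms:
            out[(s, f)] += s_coef * f_coef
    return dict(out)

# (↓↑↑ - ↑↓↑) ⊗ (duu - udu)
lhs = expand([(1, "↓↑↑"), (-1, "↑↓↑")], [(1, "duu"), (-1, "udu")])

# Flip the sign of *both* factors: (↑↓↑ - ↓↑↑) ⊗ (udu - duu); (-1)^2 = 1.
rhs = expand([(1, "↑↓↑"), (-1, "↓↑↑")], [(1, "udu"), (-1, "duu")])
assert lhs == rhs  # the two sign flips cancel

# Flipping only the flavor factor (the asker's proposal) changes the state.
one_flip = expand([(1, "↓↑↑"), (-1, "↑↓↑")], [(1, "udu"), (-1, "duu")])
assert one_flip != lhs
```

The expansion makes the answer's point concrete: a simultaneous sign flip of both factors is invisible after expansion, but a lone flip negates every coefficient.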
{ "domain": "physics.stackexchange", "id": 37538, "tags": "particle-physics, wavefunction, standard-model, quarks, protons" }
How to spawn robot in Gazebo from robot_description topic?
Question: I am trying to launch a Gazebo simulator and spawn a model in the simulator from the /robot_description topic. This already worked sometimes, but now I get the following error after calling the launch file: [create-2] [ERROR] [1706829850.265189123] [ros_gz_sim]: Must specify either -file, -param, -stdin or -topic This is how I try to spawn my robot in my launch file: spawn = Node( package='ros_gz_sim', executable='create', parameters=[{'topic': 'robot_description'}], output='screen', ) I am using ROS 2 Humble and Gazebo Fortress on Ubuntu 22.04. What could be the problem? How could I try to debug this? Answer: The way you pass the topic in your launch file is through the ROS 2 parameter approach. But the create node expects to simply get it through a classical command-line argument, like you would write: ros2 run ros_gz_sim create -topic /robot_description The equivalent in a launch file would be: spawn = Node( package='ros_gz_sim', executable='create', arguments=['-topic', '/robot_description'], # <- this output='screen', )
{ "domain": "robotics.stackexchange", "id": 38964, "tags": "gazebo, roslaunch, spawn-model, spawn, gz-sim" }
What exactly is a one particle density?
Question: In Density Functional Theory (DFT) we derive the Grand Potential as a functional of a so-called one-particle density (OPD). I have trouble imagining what exactly that is. Could someone help me with that? I know the mathematical background of the performed calculus of variations, but I can't figure out what this OPD represents. Answer: The one-particle density can be viewed as the localization probability of a particle in the system, with integration over all the state vectors except that of the single particle of interest. For example, suppose you are interested in the positions $\mathbf{x}_i$ of $N$ electrons in a many-electron system in which the $i$-th electron is in spin state $\sigma_i$. The OPD for a single electron is then given as $$ n(\mathbf{x_1})=N\int \mathrm{d}\mathbf{x}_2\mathrm{d}\mathbf{x}_3...\mathrm{d}\mathbf{x}_N\sum_{\mathbf{\sigma}} |\Psi(\mathbf{x_1}\sigma_1,\mathbf{x_2}\sigma_2,...,\mathbf{x}_N\sigma_N)|^2. $$ Similarly, you can define the two-particle density $n(\mathbf{x}_1,\mathbf{x}_2)$ with a similar integral over the vector positions $\mathbf{x}_3...\mathbf{x}_N$. The density matrix for your full system is given by these one-particle terms on the diagonal (appropriately normalised such that the trace is equal to unity), and the pairwise exchange terms (the Pauli correlation) as the off-diagonal elements [1]. [1] http://th.physik.uni-frankfurt.de/~engel/2pd_helium.html
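As a toy numerical illustration of the definition above (my own example, not from the answer): for two non-interacting particles in the symmetric product state $\Psi(x_1,x_2)=\varphi(x_1)\varphi(x_2)$, with $\varphi$ the 1D harmonic-oscillator ground state, integrating out $x_2$ gives $n(x_1)=2|\varphi(x_1)|^2$, which integrates to the particle number $N=2$:

```python
import math

def phi(x):
    # Normalized 1D harmonic-oscillator ground state (hbar = m = omega = 1)
    return math.pi ** -0.25 * math.exp(-x * x / 2)

def psi_sq(x1, x2):
    # |Psi(x1, x2)|^2 for the symmetric product state phi(x1) * phi(x2)
    return (phi(x1) * phi(x2)) ** 2

def opd(x1, lo=-6.0, hi=6.0, steps=400):
    # n(x1) = N * integral dx2 |Psi(x1, x2)|^2 with N = 2 (trapezoidal rule)
    h = (hi - lo) / steps
    s = 0.5 * (psi_sq(x1, lo) + psi_sq(x1, hi))
    s += sum(psi_sq(x1, lo + k * h) for k in range(1, steps))
    return 2.0 * s * h

# Here the OPD reduces to 2|phi(x1)|^2 ...
assert abs(opd(0.0) - 2 * phi(0.0) ** 2) < 1e-9
# ... and integrating n(x1) over x1 recovers the particle number N = 2.
h = 12.0 / 400
total = sum(opd(-6.0 + k * h) * h for k in range(401))
assert abs(total - 2.0) < 1e-6
```

This is the simplest possible case; for interacting or antisymmetrized wavefunctions the OPD no longer factorizes, but the "integrate out everything except one coordinate" recipe is the same.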
{ "domain": "physics.stackexchange", "id": 22804, "tags": "definition, density, density-functional-theory" }
ROS rgbdslam fuerte support?
Question: I want to know when there will be support for rgbdslam in fuerte. I tried to tinker with the electric-supported version to make it run in fuerte but could not get it to work. If someone has found a way around this, please let me know. The errors are listed below: rosdep was unable to find eigen (resolved), gl2ps, qt4, libglew, libdevil. src files in g2o do not seem to automatically resolve header files (I had to change many paths to make these errors go away). Originally posted by gpsinghsandhu on ROS Answers with karma: 231 on 2012-07-28 Post score: 1 Original comments Comment by Sudhan on 2012-07-31: exactly what problem do you have while running it? Not able to build the package successfully Comment by Sudhan on 2012-08-01: check below Answer: follow this: 1. First build the ROS package g2o in the proper ROS package path. Don't make any changes to paths; nothing has to be edited in g2o. 2. Next, edit manifest.xml in the rgbdslam package. Delete the line '' and add the following. Then you can find something similar to a loop starting and ending with . Within that, add the following line without editing the other lines: 3. Then install qt4. Simply use the Ubuntu Software Centre; in Developer Tools -> IDE you can find it (most probably under the name Qt Creator). 4. Then $ sudo apt-get install libglew1.5-dev libdevil-dev libsuitesparse-dev $ sudo apt-get install libgsl0-dev 5. Then do $ rosmake rgbdslam. In case you find any error, try it with $ rosmake --pre-clean rgbdslam. I hope it should work. Originally posted by Sudhan with karma: 171 on 2012-07-31 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 10403, "tags": "slam, navigation, rgbd6dslam, rgbd, ros-fuerte" }
Which hyperparameters in neural network are accesible to users adjustment
Question: I am new to neural networks and my questions are still very basic. I know that most neural networks allow and even ask the user to choose hyperparameters like: number of hidden layers, number of neurons in each layer, number of inputs and outputs, batches and epochs, steps, and some stuff related to backpropagation and gradient descent. But as I keep reading and youtubing, I understand that there are other important "mini-parameters" such as: activation function type, activation function fine-tuning (for example shift and slope of sigmoid), whether there is an activation function at the output, range of weights (are they from zero to one, or from -1 to 1, or -100 to +100, or any other range), whether the weights are normally distributed or just random, etc... Actually the question is: Part a: Do I understand right that most neural networks do not allow changing those "mini-parameters", as long as you are using "ready-made" solutions? In other words, if I want to have access to those "mini-parameters", do I need to program the whole neural network by myself, or are there "semi-finished products"? Part b: (edited) For someone who uses neural networks as an everyday routine tool to solve problems (like a data scientist), how common and how often do those people deal with fine-tuning the things which I refer to as "mini-parameters"? Or are those parameters usually adjusted by the neural-network developers who create the frameworks like PyTorch, TensorFlow, etc.? Thank you very much. Answer: In general, many of the parameters you mentioned are called hyperparameters. All hyperparameters are user-adjusted (or user-programmed) in the training phase. Some hyperparameters are: learning rate, batch size, epochs, optimizer, layers, activation functions, etc. To answer part (a) of your question, there are obviously many frameworks and libraries, for example in Python: TensorFlow, PyTorch and so on.
You might never create a net from the very beginning; maybe only in order to understand the forward and backpropagation algorithms. When we speak of from-scratch networks, we mean that these networks are trained from scratch, with learnable weights and chosen hyperparameters, with no transfer learning. To answer part (b) of your question, I understand from it that you are asking when a net is good enough. Depending on your data, of course, a neural network is good enough when it is trained adequately on them. That is, you should be aware of overfitting, underfitting, and in general of the model you are trying to train, with all its parameters and hyperparameters. Since you are at the very beginning with machine learning, I propose you read some books in order to get everything needed, in terms of the mathematical and computer-science aspects.
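On part (a): the "mini-parameters" from the question are easy to see in plain Python. A minimal sketch (my own names and ranges, purely illustrative) of a sigmoid with tunable slope and shift, plus an explicit weight-initialization range:

```python
import math
import random

def sigmoid(x, slope=1.0, shift=0.0):
    # A "fine-tunable" sigmoid: slope stretches it, shift moves its center.
    return 1.0 / (1.0 + math.exp(-slope * (x - shift)))

def init_weights(n_in, n_out, low=-1.0, high=1.0, seed=0):
    # Explicit control over the weight range (uniform here; a framework
    # would also let you pick, e.g., a normal distribution instead).
    rng = random.Random(seed)
    return [[rng.uniform(low, high) for _ in range(n_in)]
            for _ in range(n_out)]

w = init_weights(3, 2, low=-0.5, high=0.5)
assert all(-0.5 <= wij <= 0.5 for row in w for wij in row)
assert sigmoid(0.0) == 0.5                        # standard sigmoid center
assert sigmoid(2.0, slope=1.0, shift=2.0) == 0.5  # shifted center
```

In practice, frameworks like TensorFlow and PyTorch expose these same knobs through activation arguments and weight-initializer options, so you rarely need to write a network from scratch just to change them.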
{ "domain": "ai.stackexchange", "id": 2363, "tags": "activation-functions, hyper-parameters, weights, pretrained-models" }
How to choose between taking the real part or the absolute value of an inverse discrete Fourier transform?
Question: I kind of expected this to have been asked before, so if this is a duplicate please point me in the right direction; however, I couldn't find anything. I am currently experimenting with low-pass filtering. I take an image and apply a 2-dimensional fast Fourier transform to the image; next I apply a shift such that the zero-frequency component is moved to the center (using Matlab this is fftshift). Then I zero Fourier coefficients which are not within some distance $r$ of the center pixel. I apply the inverse of the shift and then an inverse 2-dimensional fast Fourier transform. Now I end up with a matrix of complex values, but to display an image I want a matrix of real values. With the matrix obtained after applying the IDFT, I mapped the pixel values to their real part, imaginary part, magnitude and phase so that I could view the matrix as an image. I found that the real part and the magnitude looked like the original image, and taking the imaginary part and phase rendered the image unrecognizable compared to the original image. I also found that there wasn't a recognizable difference in quality between taking the real value of the transformed image or the absolute value. My question is: are there advantages or disadvantages to taking the real part or the absolute value after filtering (low-pass, or in general)? This is my first question on DSP.SE, so please leave comments on how to improve my question if it's not up to standard. Answer: If your processed 2D FFT is conjugate symmetric, then any imaginary components in the complex IFFT result are just numerical noise or rounding errors. Thus using them to compute an absolute value is mostly a waste of CPU cycles.
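The conjugate-symmetry point is easy to check numerically. A 1D sketch (my own, using a naive DFT rather than an FFT for brevity): zeroing bins in conjugate pairs keeps the spectrum symmetric, so the inverse transform is real up to rounding error, while breaking the symmetry leaves a genuinely complex result:

```python
import cmath

def dft(x):
    # Naive discrete Fourier transform (fine for a tiny demo)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

x = [1.0, 3.0, 2.0, 5.0, 4.0, 0.0, 1.0, 2.0]  # a real "scanline"
X = dft(x)
N = len(x)

# Low-pass by zeroing bins *in conjugate pairs*: for real input,
# X[N - k] = conj(X[k]), so keeping {0, 1, N-1} preserves the symmetry.
y = idft([X[k] if k in {0, 1, N - 1} else 0.0 for k in range(N)])
max_imag = max(abs(v.imag) for v in y)      # only rounding noise remains

# Keeping bin 1 but zeroing its partner N-1 breaks the symmetry.
y_bad = idft([X[k] if k in {0, 1} else 0.0 for k in range(N)])
max_imag_bad = max(abs(v.imag) for v in y_bad)

assert max_imag < 1e-9
assert max_imag_bad > 0.1
```

The same holds in 2D: a circular mask around the (shifted) DC bin is symmetric about the center, so the filtered spectrum stays conjugate symmetric and the real part is all you need.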
{ "domain": "dsp.stackexchange", "id": 11267, "tags": "image-processing, fourier-transform" }
In thermodynamics, how can $\oint \frac{dQ}{T}$ make sense for an irreversible process?
Question: In thermodynamics, a reversible process can be identified by a curve in the $pV$ plane, that is, a sufficiently regular function $\gamma :[a,b]\rightarrow \mathbb{R^2}$. A cycle, then, will simply be a closed curve, i.e., a curve such that $\gamma (a)=\gamma (b)$. It is well known that for such processes $$\oint_\gamma \frac{dQ}{T}=0$$ So far so good. But let's instead consider an irreversible cycle; specifically, one that is not even quasi-static: there is, then, no corresponding closed curve we can associate to such a process, because the system may not have a well-defined temperature or pressure at all times. But it is equally well known that for this kind of cycle $$\oint \frac{dQ_{irr.}}{T}<0$$ Mathematically, the integral of a differential form must be calculated along some line (hence, "line integral"). It simply doesn't make sense to calculate $\int d\omega$ along no curve! But in the case of irreversible paths there is no corresponding curve along which we can integrate, therefore what does $\oint \frac{dQ_{irr.}}{T}<0$ even mean? Answer: But in the case of irreversible paths there is no corresponding curve along which we can integrate, therefore what does $\oint \frac{dQ_{irr.}}{T}<0$ even mean? The Clausius inequality you ask about is more accurately written in this way: $$ \oint \frac{dQ_{irr.}}{T_{reservoir}} < 0. $$ That is, the temperature in the denominator is actually the temperature of the reservoir in thermal contact with the system, not necessarily equal to the temperature of the system itself. The integral does not refer to integration in the space of thermodynamical states of the system. Instead, it refers to the following theoretical construction (the simplest one, where the system exchanges heat with at most one reservoir at a time; it could be extended to more complicated situations, but that is not necessary here).
1) It is assumed there is a physical process where the system studied can possibly get into non-equilibrium states, but at the end it ends up in the same equilibrium state it started with. Let us introduce a real number $t$ that tracks states of the system, with $t=0$ at the beginning and $t=t_{max}$ at the end. 2) The system accepts and gives off energy by heat transfer only with bodies that have a defined temperature at all times. Those bodies are called reservoirs. 3) Let $Q(t)$ be the net heat accepted by the system in the time interval $[0;t]$ and let $T_{reservoir}(t)$ be the temperature, at time $t$, of the reservoir that is in contact with the body at time $t$. 4) Then it can be shown, based on the second law of thermodynamics, that the sum of reduced heats over the whole cycle is less than or equal to zero: $$ \int_0^{t_{max}} \frac{dQ/dt}{T_{reservoir}(t)}\,dt \leq 0. $$ Because the values of the time variable $t$ and $t_{max}$ do not play much role in the derivation, it is customary to rewrite the integral using the loop integral symbol $\oint$, understanding that the integration is over the actual progression of (possibly non-equilibrium) states that begins and ends in the same equilibrium state. The simplification of writing the integral over a monotonically increasing real variable $t$ into a general form, free of any auxiliary variable, is quite reasonable. After all, the value of the integral can often be calculated even without knowledge of the functions $Q(t), T(t)$, for example by splitting it into several integrals and using the substitution method. However, the ubiquitous omission of the subscript "reservoir" from "$T$" in the integral as given in many study documents is, I think, a significant mistake. If this point is not stressed enough by the teacher, students, when studying from their notes, are bound to confuse $T$ with the temperature of the system. In hindsight it seems better to always write the subscript, making the difference between the system and the reservoir explicit.
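As a toy numerical illustration (my own numbers, not from the answer): for a cycle that absorbs $Q_h$ from a hot reservoir at $T_h$ and rejects $Q_c$ to a cold reservoir at $T_c$, the loop integral reduces to $Q_h/T_h - Q_c/T_c$, which vanishes only in the reversible (Carnot) limit:

```python
# Reservoir temperatures (K) and heat absorbed from the hot reservoir (J)
T_h, T_c = 500.0, 300.0
Q_h = 1000.0

# Reversible (Carnot) cycle: Q_c / T_c = Q_h / T_h exactly.
Q_c_rev = Q_h * T_c / T_h                      # 600 J rejected
clausius_rev = Q_h / T_h - Q_c_rev / T_c       # = 0

# Irreversible cycle with the same intake but extra heat rejected
# (e.g. friction dissipates some of the work back into the cold reservoir).
Q_c_irr = 700.0
clausius_irr = Q_h / T_h - Q_c_irr / T_c       # strictly negative

assert abs(clausius_rev) < 1e-12
assert clausius_irr < 0.0
```

Note that both denominators are reservoir temperatures, which is exactly why the expression stays well defined even though the system itself passes through states with no well-defined temperature.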
{ "domain": "physics.stackexchange", "id": 42808, "tags": "thermodynamics, entropy, reversibility" }
Can lymph be in peripheral blood?
Question: I read an argument that 1-3% of lymph is in peripheral blood. However, I am not sure if this is about lymphocytes in peripheral blood, not lymph itself. Lymph gets exchanged between capillaries through the Starling law, powered by the heart and the Frank-Starling mechanism. In this way, it makes sense that some lymph is leaking out from capillaries into the blood. Can lymph be in the peripheral blood? Answer: Depends if you're talking about interstitial fluid or lymph: As per Starling forces: the oncotic pressure of the late capillary plus the hydrostatic pressure of the interstitial space would be higher than the oncotic pressure of the interstitium [though I dunno why, since interstitial GAGs are very water hungry and they are responsible for the negative (hydrostatic?) pressures found in the interstitium... read lots of articles, still makes no sense to me] and the hydrostatic pressure of the capillary... Thus there would be a net influx of interstitial fluid in the very ending part of the capillary, toward the venous side... I say interstitial fluid, because it's not lymph yet since the interstitial fluid hasn't yet entered the adjacent lymph capillaries... if the interstitial fluid entered the lymph capillary then it would be stuck there; the mechanism of this one-way flow is the junctions between the endothelial cells in these blind-ended lymph capillaries... These junctions only open up gaps between the endothelial cells when the interstitial pressure outside the lymph capillaries is higher than the pressure inside the lymph capillaries... The lymph then goes forward through lymphatic vessels/nodes because of external pressure (pulsing of an adjacent artery, contraction of adjacent skeletal muscle, changes in thoracic pressure during breathing, etc.), and there is little back flow because of valves (just like venous valves, which prevent back flow in the low-pressure venous system)...
Ultimately the lymph returns by either the thoracic duct (in the angle between the left subclavian and left jugular vein) or the right lymphatic duct (in the angle between the right subclavian and right jugular vein)... There are some sweet review articles from the University of Bergen, Norway that go much more in depth on the physiology of interstitial fluid... Here is one of their more recent review articles (it's free): http://m.physrev.physiology.org/content/92/3/1005 IN SUMMARY: the Starling forces bring interstitial fluid into the blood (the "peripheral part" right after the capillary)... The thoracic and right lymphatic ducts drain lymph into the blood (I'm assuming this does not apply by your definition of peripheral, since this venous blood is about to be drained into the right atrium and is destined for the lungs)... P.S. aqueous humor (in the eye), endolymph (cochlea, aka inner ear) and CSF are also absorbed back into the blood... Also, the brain has an interstitial fluid that is different from the CSF found in the subarachnoid space of the meninges; I actually don't know how that drains... But anyway, the head is so special, I'm sure you weren't wondering about these specifics...
{ "domain": "biology.stackexchange", "id": 2444, "tags": "physiology, hematology, lymphatic-system" }
Intuitive explanation for the existence of an energy gap in superconductors
Question: In 2012 there was a nice answer explaining basic superconductivity. It ends with the sentence: The trouble is you're now going to ask for an intuitive description of why the electron correlations open a gap in the energy spectrum, and I can't think of any way to give you such a description. Sorry :-( Can anyone please follow up with an intuitive explanation of the energy gap in superconductors? In theory it can be derived from BCS theory -- but this is not what I would call intuitive ;) Answer: The short, simple, and intuitive explanation is that in the superconducting state, electrons are paired (in the BCS case) because there is an effective attractive interaction between them. To destroy such a pair and produce free electrons you need to invest a minimum of energy, set by the energy gap $\Delta$ ($2\Delta$ for the pair, $\Delta$ per quasiparticle). This produces an excitation (2 free electrons); remember that the superconducting state is the ground state of the system. This is not so different from band-gap excitations in semiconductors, where you need a minimum of energy to go from the valence to the conduction band. Hope it helps
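For reference, the standard BCS result (not part of the original answer) makes the gap explicit: the quasiparticle excitation energies above the BCS ground state are

$$ E_k = \sqrt{\xi_k^2 + \Delta^2} \;\geq\; \Delta, $$

where $\xi_k$ is the normal-state single-particle energy measured from the Fermi level. Even right at the Fermi surface ($\xi_k = 0$) an excitation costs $\Delta$, so breaking a pair into two quasiparticles costs at least $2\Delta$: this is exactly the gap in the spectrum.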
{ "domain": "physics.stackexchange", "id": 21077, "tags": "electrons, superconductivity, bosons" }
"multiple definition of `main'" error
Question: I have 2 nodes/files (one node for each file) in my package, like in the talker and listener tutorial. However, after adding a new one (a second node/file), I received an error of "multiple definition of `main'". I've checked all the braces; they are well-placed. What could be the possible culprits? Originally posted by alfa_80 on ROS Answers with karma: 1053 on 2011-12-22 Post score: 1 Original comments Comment by alfa_80 on 2011-12-22: I've just updated the question. Comment by DimitriProsser on 2011-12-22: Could you be more clear? You added a second what? Answer: I just forgot to change the node name in the "rosbuild_add_executable" parameter in the CMakeLists.txt. I had only given different names to the two files, but not to the executable targets. Originally posted by alfa_80 with karma: 1053 on 2011-12-22 This answer was ACCEPTED on the original site Post score: 3
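A sketch of what the fixed rosbuild-era CMakeLists.txt would look like (target and file names follow the talker/listener tutorial mentioned in the question; yours will differ): each source file that defines its own main() gets its own executable target.

```cmake
# One executable target per node. Listing both .cpp files under a single
# target (or duplicating the target name) links two main() functions and
# produces the "multiple definition of `main'" error above.
rosbuild_add_executable(talker src/talker.cpp)
rosbuild_add_executable(listener src/listener.cpp)
```

The same rule carries over to modern catkin/ament builds with add_executable(): one target per main().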
{ "domain": "robotics.stackexchange", "id": 7715, "tags": "roscpp" }
Where does the partial derivative come from in Sakurai's derivation of the momentum operator?
Question: How is the momentum operator derived in Dirac formalism? I am reading Quantum Mechanics by Sakurai and he gives the following derivation. But I don't understand how he goes from the third equation to the last equation in (1.7.15). What I don't understand is where the partial derivative with respect to $x^{\prime}$ comes from. Here is the derivation from the book. Momentum Operator in the Position Basis We now examine how the momentum operator may look in the $x$-basis - that is, in the representation where the position eigenkets are used as base kets. Our starting point is the definition of momentum as the generator of infinitesimal translations: $$\begin{align} \biggl(1 - \frac{ip\Delta x'}{\hbar}\biggr)\lvert\alpha\rangle &= \int dx' \mathcal{J}(\Delta x')\lvert x'\rangle\langle x'\lvert\alpha\rangle \\ &= \int dx' \lvert x' + \Delta x'\rangle\langle x'\lvert\alpha\rangle \\ &= \int dx' \lvert x'\rangle\langle x' - \Delta x'\lvert\alpha\rangle \\ &= \int dx' \lvert x'\rangle\biggl(\langle x'\lvert\alpha\rangle - \Delta x'\frac{\partial}{\partial x'}\langle x'\lvert\alpha\rangle\biggr).\tag{1.7.15} \end{align}$$ Comparison of both sides yields $$p\lvert\alpha\rangle = \int dx'\lvert x'\rangle\biggl(-i\hbar\frac{\partial}{\partial x'}\langle x'\lvert\alpha\rangle\biggr)\tag{1.7.16}$$ or $$\langle x'\rvert p\lvert\alpha\rangle = -i\hbar\frac{\partial}{\partial x'}\langle x'\lvert\alpha\rangle.\tag{1.7.17}$$ Answer: He's doing a linear approximation. Suppose $\Delta x$ is very small. Then $\langle x - \Delta x | \alpha \rangle$ is almost equal to $\langle x | \alpha \rangle$, but not quite, because $\Delta x$ isn't zero. So we do a first-order approximation: Let's write $\langle x | \alpha \rangle$ as $f(x)$. Then $f(x - \Delta x) \approx f(x) - \Delta x \frac{\partial f}{\partial x}$, with the derivative evaluated at $x$.
In Dirac's notation, $\langle x - \Delta x | \alpha \rangle \approx \langle x | \alpha \rangle - \Delta x \frac{\partial}{\partial x} \langle x | \alpha \rangle$.
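A quick numerical check of this first-order Taylor expansion (my own sketch, not from the book): the leftover error of $f(x-\Delta) \approx f(x) - \Delta f'(x)$ is $O(\Delta^2)$, so halving $\Delta$ divides it by roughly 4:

```python
import math

def f(x):
    return math.sin(x)

def fprime(x):
    return math.cos(x)

def approx_error(x, delta):
    # |f(x - delta) - (f(x) - delta * f'(x))|: what is dropped in the
    # first-order expansion used in Sakurai's step
    exact = f(x - delta)
    linear = f(x) - delta * fprime(x)
    return abs(exact - linear)

x = 0.7
e1 = approx_error(x, 1e-3)
e2 = approx_error(x, 5e-4)

assert e1 < 1e-6               # error is already tiny at delta = 1e-3
assert 3.9 < e1 / e2 < 4.1     # quadratic scaling: halving delta -> /4
```

This is exactly why (1.7.15) keeps only the term linear in $\Delta x'$: the neglected terms vanish faster than $\Delta x'$ itself in the infinitesimal limit.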
{ "domain": "physics.stackexchange", "id": 17055, "tags": "quantum-mechanics, operators, momentum" }
Optimal torch placement in voxel games
Question: Problem definition The world consists of an infinite three-dimensional cartesian grid, i.e. every position is in $P = \mathbb{Z}^3$. Neighbours of a position $p \in P$ are defined by $N(p) = \{p + e_1, p - e_1, p + e_2, p - e_2, p + e_3, p - e_3\}$, where $e_i$ are the unit vectors, for example $e_1 = (1,0,0)^T$. For a torch placement $T \subset P$, at each position $p$, there is a voxel $b(T, p)$, which is air, a torch or stone: $$ M \subset 2^{P} \\ b: M \times P \rightarrow \{air, torch, stone\} \\ b(T, p) = torch \iff p \in T $$ Furthermore, a torch can only exist next to stone and the positions of stone voxels are fixed: $$ \forall p \in P: \left(b(T, p) = torch \Rightarrow \exists n \in N(p): b(\{\}, n) = stone\right) \\ \forall p \in P: \left(b(T, p) = stone \iff b(\{\}, p) = stone \right) $$ Light originates from torches, recursively spreads to neighbouring positions, and does not go into stone. This means the light $l(T, p)$ at the position $p$ is defined by $$ l(T, p) = \begin{cases} 0, & b(T, p) = stone \\ 12, & b(T, p) = torch \\ max(\{1\} \cup \{l(T, n) | n \in N(p)\}) - 1, & b(T, p) = air \end{cases} $$ Light at $p$ has an effect on visibility ($v(p)$) if and only if it is next to stone and not in stone itself, i.e. 
$$ v(p) \iff b(\{\}, p) \neq stone \land \exists n \in N(p): b(\{\}, n) = stone $$ The optimisation problem is to find a torch placement $T$ such that everything is visible with a light of at least one and as few torches as possible are placed: $$ T_{opt} \in argmin_{T}\left\{|T| : \forall p \in P: (v(p) \Rightarrow l(T, p) > 0) \right\} $$ To make the optimisation feasible, we assume that everything outside a cube of side length $a$ is stone: $$ \forall p \in P: \forall i \in \{1,2,3\}: (p_i < 0 \lor p_i \geq a \Rightarrow b(\{\}, p) = stone) $$ My Question I know that it is possible to find a $T$ with a greedy algorithm which places torches one after another until everything is lit up sufficiently, but it does not give an optimal solution in general, so it gives only an upper bound on $|T_{opt}|$. Is there a computational and memory-efficient algorithm to find a $T_{opt}$ and what is its asymptotic time and memory requirement with respect to $a$? Answer: No, assuming $\mathsf{P} \neq \mathsf{NP}$ there is no efficient algorithm for optimal torch placement, even if the world is just a 2D grid. This can be shown by providing a reduction from the vertex cover problem on cubic planar graphs, which is known to be $\mathsf{NP}$-hard. I'm only going to sketch the general idea of the reduction, so I'll be somewhat sloppy. Let $G = (V, E)$ be a cubic planar graph with $n = |V|$ vertices and consider an embedding on the infinite grid such that all edges can be drawn with axis-parallel polylines between the corresponding vertices. Now "scale up" this embedding so that each vertex becomes a square and there is "enough space" between different edges and vertices. The whole world (a plane) consists of stone except for the following "room" and "tunnel" gadgets, which will encode vertices and edges (respectively). Each vertex $v$ of the graph is encoded by the following "room". 
This room has three openings on the top which we will use to connect the three edges incident to $v$. Moreover, this room has the following properties: Regardless of the placement of the torches outside of the room, there must be at least one torch inside the room. Regardless of the placement of the torches outside of the room, there is exactly one way to illuminate the whole room with one torch, i.e., placing the torch in the spot highlighted in yellow. A torch in the yellow spot "provides no light" to the locations immediately after the three openings (marked with $x$). Any light coming from one of the three openings cannot "reach" any other opening. Using two torches (e.g. in the yellow and in the red spot), it is possible to light the whole room and provide a light level of at least $4$ to all the locations marked with $\times$. Each edge $e=(u,v)$ is encoded with a "tunnel of width 1" having a length of $23k_e + h_e$ where $k_e$ is a positive integer and $h_e \in \{1,2\}$. The tunnel can twist and bend in 90-degree turns and connects an opening of $u$ with an opening of $v$. Notice that it is always possible to create tunnels of the prescribed length. Indeed, it suffices to ensure that the tunnels detaching from a node continue straight for at least $30$ blocks. In this way each tunnel has at least $12$ sections like the one shown below in figure (a) (i.e., $6$ sections for each endvertex). Each of these $12$ sections can be kept unaltered or it can be replaced with the one of figure (b) to extend the length of the tunnel by $2$. This allows us to extend the length by any even amount ranging from $0$ to $24$, which is always enough to satisfy the constraint. Consider any (partial) placement of torches that lights up all the rooms and let $23k_e + h_e$ be the length of a generic tunnel that corresponds to edge $e = (u,v)$. 
If $u$ and $v$ have exactly one torch each, at least $k_e+1$ torches need to be placed in the tunnel (since each torch lights up at most $23$ spots). If at least one of $u$ and $v$ provides a light level of at least $2$ on the first location of the tunnel (marked with $\times$), then $\left\lceil \frac{23k_e+h_e - 2}{23} \right\rceil \le k_e$ torches suffice. Moreover, regardless of the light levels provided by $u$ and $v$ on their respective entrances of the tunnel (which can be at most $11$), at least $\left\lceil\frac{23k_e+h_e - 2 \cdot 11}{23} \right\rceil \ge \left\lceil \frac{23k_e - 21}{23} \right\rceil = k_e$ torches are needed to light up the tunnel. This shows that if there exists a vertex cover $S$ of size at most $x$, we can light up all the world with at most $n + x + \sum_{e \in E} k_e$ torches (place a torch in the yellow spot of each room, place a torch in the red spot of the rooms corresponding to vertices in $S$, use $k_e$ torches for each tunnel $e$). On the other hand, if we can light up the whole world with some arrangement of at most $n + x + \sum_{e \in E} k_e$ torches, then there must also be an arrangement that uses at most the same number of torches in total but uses exactly $k_e$ torches for each tunnel $e$ (there cannot be less than $k_e$ torches in the tunnel; if there are more than $k_e$ torches then add one to an arbitrary endvertex of $e$ and re-light the tunnel using exactly $k_e$ torches). Consider such an arrangement: since each tunnel $e=(u,v)$ uses exactly $k_e$ torches, at least one of the rooms of $u$ and $v$ contains more than one torch. Then the set $S$ of vertices whose rooms contain $2$ or more torches is a vertex cover. Since each room must use at least one torch, $|S| \le \left( n + x + \sum_{e \in E} k_e \right) - \sum_{e \in E} k_e - n = x$.
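The recursive light definition $l(T, p)$ from the question can be evaluated iteratively as a multi-source BFS. A minimal 2D sketch (the grid, helper name, and torch position are illustrative, not part of the question or the reduction):

```python
from collections import deque

def light_levels(stone, torches, max_light=12):
    """Compute l(T, p) on a 2D grid: torches have level 12, light decays
    by 1 per step, does not enter stone, and never goes below 0."""
    rows, cols = len(stone), len(stone[0])
    level = [[0] * cols for _ in range(rows)]
    queue = deque()
    for r, c in torches:
        level[r][c] = max_light
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not stone[nr][nc]:
                # Relax the neighbour if this torch provides more light.
                if level[r][c] - 1 > level[nr][nc]:
                    level[nr][nc] = level[r][c] - 1
                    queue.append((nr, nc))
    return level

# A 3x5 all-air room with a single torch at (1, 0):
stone = [[False] * 5 for _ in range(3)]
levels = light_levels(stone, torches=[(1, 0)])
```

With a single torch, the light level at a cell is simply `12` minus its grid distance from the torch (floored at 0), which matches the "each torch lights up at most 23 spots" counting used in the tunnel argument above (for a width-1 tunnel: 11 cells on each side plus the torch cell).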
{ "domain": "cs.stackexchange", "id": 21572, "tags": "algorithms, optimization" }
What is the unit of bending strain?
Question: In plate theory, there is this equation: $$\epsilon_x=-z\frac{\partial^2 w}{\partial x^2}$$ For example, you can see this equation here. And there is a conflict, which I'd like to resolve. Wikipedia says that the strain is unitless, i.e. its SI unit is 1. But, in the right side of the quoted equation, there is $z$, which has the unit of meter, and $\frac{\partial^2 w}{\partial x^2}$ is unitless. So, on the right side, meter is the unit, and on the left side, $\epsilon_x$ is unitless. Where am I wrong? Answer: The quantity $\frac{\partial^2 w}{\partial x^2}$ is not unitless; it has units of 1/length. Think about how the second derivative is defined: $$\frac{d^2y}{dx^2}=\lim_{h\to 0}\frac{y(x+h)-2y(x)+y(x-h)}{h^2}$$ Since $h$ is added to $x$ in the argument of $y$, it clearly has the same units as $x$, so if $y$ and $x$ have units of length, then the above quantity is an inverse length.
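A quick numerical check of the finite-difference definition quoted in the answer (the test function and step size are my own choices). The $h^2$ in the denominator carries units of length squared, which is exactly where the $1/\text{length}$ of $\partial^2 w/\partial x^2$ comes from:

```python
# Central second difference from the answer:
#   d2y/dx2 ≈ (y(x+h) - 2*y(x) + y(x-h)) / h**2
# Dividing by h**2 (units of length^2) gives the result units [y]/[x]^2,
# i.e. 1/length when y and x are both lengths.
def second_diff(y, x, h):
    return (y(x + h) - 2 * y(x) + y(x - h)) / h**2

w = lambda x: x**3          # a deflection-like test curve (illustrative)
approx = second_diff(w, 2.0, 1e-4)
exact = 6 * 2.0             # analytic second derivative of x**3 at x = 2
```

So $\epsilon_x = -z\,\partial^2 w/\partial x^2$ is (length) × (1/length) and comes out dimensionless, as Wikipedia says.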
{ "domain": "physics.stackexchange", "id": 49109, "tags": "elasticity" }
Ruby install script; packages+installs as a .deb or .rpm from source
Question: Is this bad practice? Also, how can it be improved?

#!/usr/bin/env bash

RUBY_VERSION=2.1.0

printf "Installing Ruby $RUBY_VERSION\\n"

if [ -d ruby_build ]; then
  rm -Rf ruby_build
fi

if [[ `command -v ruby` && `ruby --version | colrm 11` == "ruby $RUBY_VERSION" ]] ; then
  echo "You already have this version of Ruby: $RUBY_VERSION"
  exit 113
fi

sudo apt-get build-dep -y ruby1.9.1

mkdir ruby_build && cd ruby_build
curl -O "http://ftp.ruby-lang.org/pub/ruby/2.1/ruby-$RUBY_VERSION.tar.gz"
tar xf "ruby-$RUBY_VERSION.tar.gz"
cd "ruby-$RUBY_VERSION"
./configure
make
sudo checkinstall -y --pkgversion "$RUBY_VERSION" --provides "ruby-interpreter" --replaces="ruby-1.9.2"
cd ../.. && rm -Rf ruby_build

Answer: First, I would like to point out that not all .deb-based distributions are as fond of sudo as Ubuntu is. For example, Debian doesn't use sudo out of the box. It appears that you're trying to build a newer version of Ruby than is available in the stock package repository. In that case, why not make a proper .deb package with its .dsc and publish it in a private APT repository? Then everything integrates properly into the distribution the way package management is supposed to work, including the build process and subsequent updates. Everyone will have a better experience when you work with the system instead of against it.
{ "domain": "codereview.stackexchange", "id": 6139, "tags": "bash, installer" }
Numeric method to calculate the charge distribution on a conducting surface?
Question: If I have an arbitrary (closed?) conducting surface and a nearby charge density, is there a simple numeric way of computing the induced charge distribution on the surface? Answer: There is no simple way. The "standard" way is to solve the Poisson equation with proper boundary conditions (constant $\varphi$ at the surface). From the potential distribution it is easy to extract the charge distribution. For simple shapes (infinite plane, sphere, etc.) it is possible to solve the problem analytically. For an arbitrary shape there is no simple solution.
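A minimal sketch of that standard route in 2D, using Jacobi relaxation for the Laplace equation with the conductor held at constant potential (the geometry, grid size, and iteration count are arbitrary choices of mine, purely for illustration):

```python
import numpy as np

# Relaxation solution of Laplace's equation on a 2D grid.
# Conductor: left edge held at phi = 1; the rest of the boundary is grounded.
# The induced surface charge is proportional to the normal E-field at the
# conductor: sigma = -eps0 * dphi/dn (Gauss's law), up to grid constants.
n = 50
phi = np.zeros((n, n))
phi[:, 0] = 1.0                          # fixed potential on the conductor

for _ in range(5000):                    # Jacobi sweeps over interior points
    phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                              phi[1:-1, 2:] + phi[1:-1, :-2])
    # boundary columns/rows are untouched by the slice above, so the
    # boundary conditions stay imposed automatically

# Finite-difference normal derivative at the conductor -> charge profile.
sigma = 1.0 - phi[:, 1]
```

`sigma` then gives the relative charge density along the conductor; in a real 3D solver the same idea applies, just with a Poisson source term if free charges sit inside the domain.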
{ "domain": "physics.stackexchange", "id": 1820, "tags": "electromagnetism, computational-physics" }
How do we recover fusion energy from gamma photons
Question: Most fusion reactions create energy in the form of hard gamma radiation. A single gamma photon carries energy orders of magnitude higher than any ionization level. Even if we develop some hardcore version of a "solar" panel based on the photoelectric effect, most energy will still be scattered or pass through. In effect the reactor will glow in hard X-rays. Even if we don't care about losing a significant portion of the energy, it's still not a good idea to have an X-ray lighthouse anywhere near humans. Answer: Of the released energy of 17.6 MeV per fusion reaction (deuterium + tritium), 14.1 MeV is in the form of kinetic energy of the neutron and 3.5 MeV in kinetic energy of the helium. The neutrons are unaffected by the magnetic field and reach the blanket, where they release their energy as heat by collisions. The heat can be used to turn steam turbines to generate electricity, as with any other power plant. The gammas can also be converted into heat. The photons generated have energies in the range of 1 MeV. At that photon energy the absorption coefficient for water is $\mu \approx 0.07 \,\text{cm}^{-1}$, which means that one meter of water shielding will absorb $1-e^{-\mu x}\approx 99.9$% of all gammas and turn them into heat.
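The quoted shielding estimate is easy to verify numerically:

```python
import math

# Beer-Lambert attenuation with the values quoted in the answer:
# mu ≈ 0.07 / cm for ~1 MeV photons in water, x = 100 cm (one metre).
mu, x = 0.07, 100.0
absorbed = 1 - math.exp(-mu * x)   # fraction of gammas absorbed
```

With $\mu x = 7$, the transmitted fraction $e^{-7}$ is below $10^{-3}$, i.e. roughly 99.9% absorbed, as stated.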
{ "domain": "physics.stackexchange", "id": 59266, "tags": "fusion, gamma-rays" }
Electric Field from a Uniformly Charged Disk
Question: As I was reading the solution of this problem the author gave the electric field at the point P as follows: $$ \vec{E} = \sigma /(2 \epsilon ) [1-x/(x^2+R^2)^{1/2}]\ \hat \imath$$ Where: $\sigma$ is the surface charge density on the disk $x$ is the distance from the center of the disk to the point P $R$ is the radius of the disk Here the question comes: for $R \rightarrow 0$ while keeping $Q$ (the total charge) constant, why should $E$ go to the field of a point charge? According to my knowledge, $E$ goes to zero as we calculate the limit of $E$ while $R$ tends to zero. Even if the absolute value of $x$ gives $-x$ we still get another constant. Answer: First, I think a little intuition could help. If you imagine a disk with a charge $Q$ getting smaller, but all the while keeping the charge $Q$ intact, shouldn't it geometrically approach a point with charge $Q$? So, in theory, we should expect the field to approach that due to a point charge. Now, intuition aside, let's go to the mathematics. While the factor $\left[ 1-x/(x^2 + R^2)^{1/2} \right]$ does go to zero as $R\to0$, the charge density $\sigma = Q/(\pi R^2)$ goes to infinity. This is the source of your problem, because you end up with an indeterminate form $\infty \times 0$ while calculating the limit, and not 0. To evaluate the limit, you could use l'Hôpital's rule, after rewriting your formula as an appropriate fraction (and substituting for $\sigma$ its expression in terms of $R$). As a bonus, a quick way to do this would be to use this handy approximation, which works for small $y$ values: $$ (1+y)^n \approx 1+ny. $$ You can use this if you factor $x^2$ from the square root: $$ \frac{x}{ \left(x^2 + R^2\right)^{1/2} } = \frac{x}{ \lvert x \rvert \left(1 + (R/x)^2\right)^{1/2} } = \frac{x}{\lvert x \rvert}\left(1+\left(R/x\right)^2 \right)^{-1/2}\approx\frac{x}{\lvert x \rvert} \left( 1 - \frac{R^2}{2x^2}\right). $$ Here I used $y=R/x$ and $n=-1/2$. 
If $R$ goes to 0, then $y$ goes to 0, and this approximation gets better. Replacing in the expression you have given, we end up with $$ \vec E = \frac{Q}{4 \pi \epsilon x^2} \frac{x \hat \imath}{\lvert x \rvert} = \frac{Q}{4 \pi \epsilon x^2} \frac{\vec x}{\lvert x \rvert}, $$ which is the expression for a field due to a point charge. (Notice that the term $\vec x / \lvert x \rvert$ only gives you the direction of the field, but doesn't change its magnitude.) Edit: if you try to do the calculations for $x<0$ you'll end up in trouble. The actual formula for the electric field should be $$ \vec E = \frac{\sigma}{2\epsilon}\left[ \frac{x}{\lvert x \rvert} - \frac{x}{\left(x^2 + R^2 \right)^{1/2}} \right] \hat \imath, $$ which you can see if you follow the derivation of the equation.
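A quick numerical illustration of this limit (values chosen arbitrarily, not from the original answer): shrinking $R$ at fixed total charge $Q$ drives the disk field toward the point-charge value $Q/(4\pi\epsilon x^2)$.

```python
from math import pi, sqrt

# Disk field on the axis (x > 0 branch) with total charge Q held fixed,
# compared against the point-charge field Q / (4*pi*eps*x^2).
Q, eps, x = 1.0, 1.0, 2.0

def disk_field(R):
    sigma = Q / (pi * R**2)                       # sigma blows up as R -> 0 ...
    return sigma / (2 * eps) * (1 - x / sqrt(x**2 + R**2))  # ... bracket -> 0

point = Q / (4 * pi * eps * x**2)

for R in (1.0, 0.1, 0.01):
    print(R, disk_field(R), point)
```

The relative deviation shrinks like $R^2/x^2$, so the $\infty \times 0$ product indeed settles on the Coulomb value.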
{ "domain": "physics.stackexchange", "id": 29934, "tags": "homework-and-exercises, electrostatics" }
Is dimensional analysis wrong?
Question: In many physics textbooks dimensional analysis is introduced as a valid method for deducing physical equations. For instance, it is usually claimed that the period of a pendulum cannot possibly depend on its mass because if it did the units would not match. However, I think that this kind of argument is not correct. Let's imagine we were trying to deduce Coulomb's law. We would make an educated guess by stating that the force between two charges depends on its charge and the distance. Nevertheless it is obvious that it is not possible to obtain the unit Newton from Coulombs and Meters. If we repeated the argument we used with the pendulum, we would end up with a different law for the force between two charges. Am I wrong? If not, why and when is dimensional analysis used? Answer: Dimensional analysis is used when you're trying to figure out how a certain set of parameters (your "inputs") can be combined to yield a quantity with a particular set of units (your "output"). It only works under the following assumptions: You know what all of your inputs are, and the list of inputs is finite; There are no redundancies in your list of inputs (i.e. each input has different units), or, if there are redundancies, there must be additional information that constrains the behavior of the redundant inputs (e.g. "the Coulomb force must involve both $Q_1$ and $Q_2$"); Each of these quantities are expressed in units that are compatible with each other and with your output (in practice, this means that each of them should be expressible in only SI base units); Any constant factors are assumed to be pure numbers, and The number of additive terms is constrained to be finite. If these assumptions are satisfied, dimensional analysis will yield a set of possible combinations of inputs ("formulas") that will have the same units as your output. 
If you're lucky, there will only be one; if you're not, there will be a few, which serve as arguments for an arbitrary function (this follows from the Buckingham Pi theorem). Let's try this procedure on your two examples. In the first one, we have a pendulum. Our output is the period, which has units of time ($T$). The inputs are: The length of the pendulum rod, which has units of length ($L$); The local gravitational acceleration, which has units $\frac{L}{T^2}$, and The mass of the pendulum, which has units of mass ($M$) (since we don't know a priori that it can't be an input, we include it as a possible input). Using dimensional analysis, we can constrain the possible forms for the formula using the following equation and solving for the powers $a$, $b$, and $c$: $$T=L^a\left(\frac{L}{T^2}\right)^bM^c$$ (any proportionality constant $k$ in the formula is a pure number, so it does not appear in this units equation). Equating the powers of $L$, $T$, and $M$ on the left-hand side with their powers on the right-hand side: $$0=a+b\quad\quad\quad 1=-2b\quad\quad\quad 0=c$$ Solving this system gives you $a=\frac{1}{2}$, $b=-\frac{1}{2}$, $c=0$. So, even though we didn't know that the period didn't depend on mass before, we have just proven that this is the case, assuming that our list of inputs was complete. Substituting the powers back into the original expression, we get an expression for the period $\tau$: $$\tau=k \ell^{1/2}g^{-1/2}m^0=k\sqrt{\frac{\ell}{g}}$$ Since the solution to the linear system above is unique, there is only one term. For Coulomb's law, we must use a system in which the units of each quantity are compatible; as such, we use Gaussian electromagnetic units, in which charge has units of $M^{1/2}L^{3/2}T^{-1}$ (i.e. units of g$^{1/2}$ cm$^{3/2}$/s). The units of the separation are, of course, $L$. In this case, we have three inputs: the two charges $Q_1$ and $Q_2$, which both have the above units, and the separation between the charges. Our output is force, which has units of $MLT^{-2}$ (i.e. g cm/s$^2$). 
Setting up dimensional analysis again: $$MLT^{-2}=(M^{1/2}L^{3/2}T^{-1})^a(M^{1/2}L^{3/2}T^{-1})^bL^c$$ Solving for the powers: $$1=\frac{a}{2}+\frac{b}{2}\quad\quad\quad 1=\frac{3a}{2}+\frac{3b}{2}+c\quad\quad\quad -2=-a-b$$ This system is degenerate, and gives you one free parameter, so $(a,b,c)=(a,2-a,-2)$. As such, the abstract general form of the force that you get with dimensional analysis is $$F=f\left(\left\{k_a\frac{Q_1^aQ_2^{2-a}}{r^2}\right\}_{a\in\mathbb{R}}\right)$$ for some arbitrary function $f$. Note that we immediately get the $1/r^2$ nature of the force just through dimensional analysis. It is also very easy to experimentally eliminate all but one of these possible arguments, by invoking one piece of additional information, that can easily be gleaned from experiment: exchange symmetry. The force on two charges does not change if you swap the two charges with each other. This means that the powers on $Q_1$ and $Q_2$ must be equal. As such, the only possible powers are $(a,b,c)=(1,1,-2)$. This additional empirical information eliminates the redundancy in this dimensional analysis, and so we arrive at the correct formula: $$F=k\frac{Q_1Q_2}{r^2}$$ This all comes with one major caveat: if there's an input that you don't know about and don't include, then you could get an entirely different formula. For example, let's look at the pendulum again. Let's assume that in this case, the rod is very slightly elastic, with an effective spring constant $K$ that has the usual units of N/m, or abstractly $MT^{-2}$. Now, redoing the dimensional analysis yields: $$T=L^a\left(\frac{L}{T^2}\right)^b M^c \left(\frac{M}{T^2}\right)^d$$ Which yields the following system: $$0=a+b\quad\quad\quad 1=-2b-2d\quad\quad\quad 0=c+d$$ Note that there are 4 variables and 3 equations, so there is 1 free parameter, which I will take to be $c$. 
As such, we solve the system to obtain $(a,b,c,d)=(-c+1/2,c-1/2,c,-c)$, which gives us an abstract general formula: $$\tau=f\left(\left\{ k_c\left(\frac{\ell}{g}\right)^{1/2-c}\left(\frac{m}{K}\right)^c\right\}_{c\in\mathbb{R}}\right)$$ again for some arbitrary function $f$. Now, if $c$ is nonzero, then the period of a slightly elastic pendulum does depend on the mass of the pendulum, and the particular way that it does must be measured. But now let's consider the limit of very slight elasticity (i.e. the limit of large $K$). Equivalently, suppose $\tau$ varies slowly enough with $m/K$ that $\log\tau$ vs. $\log(m/K)$ is well-approximated by a line. This means that there is only one nonzero term in the above set of arguments, since a linear log-log plot corresponds to power-law behavior. This simplifies our expression considerably: $$\log\tau=\log k+\left(\frac{1}{2}-c\right)\log\left(\frac{\ell}{g}\right)+c\log\left(\frac{m}{K}\right)$$ Therefore, we have reduced a complicated physical task (finding the period of a pendulum with a slightly elastic rod) to the much easier task of empirically finding the two constants $k$ and $c$.
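The exponent-matching step for the rigid pendulum can be done mechanically as a linear solve; a small sketch (not part of the original answer):

```python
import numpy as np

# Exponent-matching system for T = L^a (L/T^2)^b M^c:
#   L: 0 = a + b,   T: 1 = -2b,   M: 0 = c
A = np.array([[1.0,  1.0, 0.0],   # powers of L contributed by a, b, c
              [0.0, -2.0, 0.0],   # powers of T
              [0.0,  0.0, 1.0]])  # powers of M
rhs = np.array([0.0, 1.0, 0.0])   # target: T^1, L^0, M^0

a, b, c = np.linalg.solve(A, rhs)
print(a, b, c)   # 0.5, -0.5, 0.0  ->  tau = k*sqrt(l/g), mass drops out
```

A singular (degenerate) system here is exactly the Coulomb situation above: a free parameter survives and extra physical input, like exchange symmetry, is needed to pin it down.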
{ "domain": "physics.stackexchange", "id": 47780, "tags": "dimensional-analysis" }
The computational complexity of spectral norm of a matrix
Question: How hard is computing the spectral norm of a matrix? This paper says, ... it suffices to say that, except for few particular cases, the Matrix Norm problem is NP-hard. I expected that the relevant chapter 2 would describe those exceptional cases but failed to find it. Can anyone suggest a more definitive reference for the computational complexity of the spectral norm of a matrix? Answer: The answer to your question is the contents of section 1.3.2, titled "[w]hen $\mathcal{P}_{p,r}$ is known to be difficult". (Here $\mathcal{P}_{p,r}$ is the problem of computing the norm $\|A\|_{p,r} = \sup_{\|x\|_p=1} \|Ax\|_r$.) According to that section, the only cases which are known to be difficult are $\mathcal{P}_{\infty,1},\mathcal{P}_{\infty,2},\mathcal{P}_{2,1}$. For example, $\mathcal{P}_{\infty,1}$ (even restricted to positive semidefinite matrices) is a generalization of MAX CUT. Since $\|B'B\|_{\infty,1} = \|B\|_{\infty,2}^2$, $\mathcal{P}_{\infty,2}$ is also hard. Finally, $\mathcal{P}_{2,1}$ is as hard as $\mathcal{P}_{\infty,2}$ as part of the more general observation (proved in section 1.3.1) that $\mathcal{P}_{p,r}$ is as hard as $\mathcal{P}_{1/(1-1/r), 1/(1-1/p)}$. The thesis goes on to prove that $\mathcal{P}_{p,r}$ is hard whenever $p > r$ - this is the chapter you were reading (chapter 2). Section 1.3.1 described some easy cases: $\mathcal{P}_{1,\ast}$, the symmetric $\mathcal{P}_{\ast,\infty}$, and the case that MCH mentioned, $\mathcal{P}_{2,2}$. Section 1.3.3 covers some approximability results, several novel ones of which are described in section 1.4 and the remaining chapters. The title of section 1.3.2 appears in the table of contents (page iii) - just a hint for next time.
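For completeness: the easy case $\mathcal{P}_{2,2}$ is the spectral norm in the usual sense, i.e. the largest singular value, computable in polynomial time via the SVD. A quick illustration (the random matrix is my own example):

```python
import numpy as np

# ||A||_{2,2} = sup_{||x||_2 = 1} ||Ax||_2 = largest singular value of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

spectral = np.linalg.norm(A, ord=2)                 # operator 2-norm
largest_sv = np.linalg.svd(A, compute_uv=False)[0]  # singular values, descending
print(spectral, largest_sv)                         # the two agree
```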
{ "domain": "cstheory.stackexchange", "id": 2101, "tags": "np-hardness, quantum-computing, matrices, quantum-information, norms" }
Technical details of the VSLAM package
Question: Hi, Can anyone help me with the technical details of the VSLAM package? Like What feature matching techniques were used (in the VSLAM_SYSTEM package available in ROS), what was the main uncertainty algo used - Kalman Filter? EKF? or Monte-Carlo Implementation? Or is there any paper that I could refer to that would help me figure these out. I want to know what methods were used to implement the VSLAM package available in ROS. I'm trying to analyze the performance of VSLAM for my thesis and I really need to find out what algorithms have been used in order for me to get started. Thank you. Originally posted by Divya on ROS Answers with karma: 46 on 2012-03-19 Post score: 0 Answer: The VSLAM package in ROS (http://www.ros.org/wiki/vslam), is (a) unsupported (b) out of date and (c) very brittle, as most VSLAM systems are. If you want to know how it works, read the code. If you're looking for a highly general VSLAM that will work on your system, for your data, we recommend you look elsewhere. Originally posted by Mac with karma: 4119 on 2012-03-20 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Divya on 2012-03-20: Um.. I know its slightly out-dated. Unfortunately, my project is based on this particular version of VLSAM implementation and I've already done my testing using this particular set of packages! That's precisely why I need the technical details of this particular implementation! :| .. Comment by Divya on 2012-03-20: I'm also using another Visual odometry method called libviso which works with OpenCV.. I am basically trying to compare the performance of that with VLSAM.. Sorry if I am making it all the more confusing! Comment by Mac on 2012-03-21: Then you'll need to read the code; since it's not supported anymore, that's the best we can do. Comment by Divya on 2012-03-21: ahh!! I read through most of the code. To be honest, I don't understand most of it! Anyways, thank you :)
{ "domain": "robotics.stackexchange", "id": 8650, "tags": "ros, vslam" }
Effect of magnetism on dissolved Na+ and Cl- ions in water
Question: Suppose we have put some salt in water. My state of knowledge is that a hydration envelope is formed from the oxygen side of the water molecules on the Na+ ions, and another from the hydrogen side of the water molecules on the Cl- ions, to drag these out of the crystal. My questions now are: When the hydration envelope is formed on the ions, does it (partially) change the charge of the ions? (Now the main question) If I put two magnets with opposite poles on both sides of the container in which the salt solution is, would the ions be pulled to the side of the opposite pole (if yes or no, then why)? I made a little sketch and left out the hydration envelope for the sake of space. And one other thing: there is no reason to downvote my question, because I am not playing around here; the question just jumped into my head. Answer: There are a couple of issues: first, a uniform magnetic field doesn't exert a force on stationary charges, and secondly, when the charges are moving the force is at right angles to the magnetic field. There are a few ways to get the charges moving. You could have an electric current flow from the bottom of your figure to the top. The $\text{Na}^+$ ions would move with the current, up the page, and since the magnetic field points to the left in your figure, the $\text{Na}^+$ ions would be pushed towards the reader by the right hand rule, resulting in a higher electric potential near the reader. This is called a positive Hall effect. The $\text{Cl}^-$ ions would move against the current, down the page, and paradoxically would be pushed towards the reader again by the right hand rule, this time resulting in a lower electric potential near the reader: a negative Hall effect. Another way is to have the electric current directed out of the page. Positive $\text{Na}^+$ ions move towards the reader and then are pushed down the page by the magnetic field according to the right hand rule. 
Negative $\text{Cl}^-$ ions move away from the reader and are also pushed down the page by the magnetic field via the right hand rule. This is the principle of magnetohydrodynamic propulsion. Another way is to have the brine flowing up the page which forces $\text{Na}^+$ charges towards the reader and $\text{Cl}^-$ charges away from the reader by the right hand rule, both resulting in higher electrostatic potential on the near side of the stream. This would be magnetohydrodynamic propulsion in reverse operating as a generator.
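The right-hand-rule bookkeeping in the first scenario can be checked with a cross product. The axes below are chosen to match the described figure ($B$ to the left, current up the page); this sketch is mine, not part of the original answer:

```python
import numpy as np

# Lorentz force F = q * (v x B) for both ion species in the Hall-effect case.
B = np.array([-1.0, 0.0, 0.0])   # field points left (-x) as in the figure
v = np.array([0.0, 1.0, 0.0])    # Na+ drift with the current, up the page (+y)
q_na, q_cl = +1.0, -1.0          # ion charges (arbitrary units)

F_na = q_na * np.cross(v, B)     # Na+: force along +z, toward the reader
F_cl = q_cl * np.cross(-v, B)    # Cl- drifts down the page; force also +z
print(F_na, F_cl)
```

Both species are pushed toward the reader, so the sign of the resulting Hall voltage depends on the carrier charge, matching the positive/negative Hall effect described above.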
{ "domain": "chemistry.stackexchange", "id": 14929, "tags": "salt" }
(Brilliant.org) Arrays Intermediate 3: Contiguous Knapsack
Question: Problem Given an array and a number \$t\$, write a function that determines if there exists a contiguous sub-array whose sum is \$t\$. Source (Note that the URL may not link to the exact problem as it's from a quiz site, and questions seem to be randomly generated). I was able to come up with a \$O(n^2)\$ solution, and further thinking hasn't improved upon it:

def substring_sum(lst, target):
    sums = []
    for val in lst:
        if val == target:
            return True
        for idx, _ in enumerate(sums):
            sums[idx] += val
            if sums[idx] == target:
                return True
        sums.append(val)
    return False

Answer: You are currently calculating the sum of all possible subarrays. A different approach is to imagine a sliding window on the array. If the sum of the elements in this window is smaller than the target sum, you extend the window by one element; if the sum is larger, you start the window one element later. Obviously, if the sum of the elements within the window is the target, we are done. This algorithm only works if the array contains only non-negative numbers (as seems to be the case here when looking at the possible answers). 
Here is an example implementation:

def contiguous_sum(a, target):
    start, end = 0, 1
    sum_ = sum(a[start:end])
    # print(start, end, sum_)
    while sum_ != target and start < end < len(a):
        if sum_ < target:
            end += 1
        else:
            start += 1
            if start == end:
                end += 1
        sum_ = sum(a[start:end])
        # print(start, end, sum_)
    return sum_ == target

This algorithm can be further improved by only keeping a running total, from which you add or subtract:

def contiguous_sum2(a, t):
    start, end = 0, 1
    sum_ = a[0]
    # print(start, end, sum_)
    while sum_ != t and start < end < len(a):
        if sum_ < t:
            sum_ += a[end]
            end += 1
        else:
            sum_ -= a[start]
            start += 1
            if start == end:
                sum_ += a[end]
                end += 1
        # print(start, end, sum_)
    return sum_ == t

The implementation can be streamlined further by using a for loop, since we actually only loop once over the input array, as recommended in the comments by @Peilonrayz:

def contiguous_sum_for(numbers, target):
    end = 0
    total = 0
    for value in numbers:
        total += value
        while total > target:
            total -= numbers[end]
            end += 1
        if total == target:
            return True
    return False

All three functions are faster than your algorithm for random arrays of all lengths (containing values from 0 to 1000 and the target always being 100).
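As an extra sanity check (not part of the original answer), the single-pass sliding-window approach can be cross-checked against an O(n^2) brute force on random non-negative inputs:

```python
import random

def window_has_sum(numbers, target):
    # Sliding window: grow from the right, shrink from the left while too big.
    # Correct for non-negative inputs and positive targets.
    end, total = 0, 0
    for value in numbers:
        total += value
        while total > target:
            total -= numbers[end]
            end += 1
        if total == target:
            return True
    return False

def brute_force(numbers, target):
    # Reference: check every contiguous sub-array explicitly.
    return any(sum(numbers[i:j]) == target
               for i in range(len(numbers))
               for j in range(i + 1, len(numbers) + 1))

random.seed(1)
for _ in range(200):
    a = [random.randint(0, 10) for _ in range(random.randint(0, 12))]
    assert window_has_sum(a, 15) == brute_force(a, 15)
```

The order of operations matters: the window must be shrunk *before* testing for equality, otherwise a sum reached by dropping elements from the left is missed.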
{ "domain": "codereview.stackexchange", "id": 33895, "tags": "python, performance, beginner, algorithm, programming-challenge" }
How to connect a computer with something like a little electric car?
Question: I am a complete beginner and I wondered how to connect hardware like a controller with, for example, a little electric car. So when I want to program a controller like a PS4 controller so that it can drive a little electric car, how can I do that? (Programming language, in the best case, Python.) Answer: The Raspberry Pi is a popular single-board computer that can be programmed to achieve this. Somebody actually already built a full-sized electric car, called the Teslonda. If you want to go smaller than the Raspberry Pi Zero, you can try programming an Arduino (but this involves C instead of Python). Both the Raspberry Pi and Arduino have their own Stack Exchange sites. You might get a better response there than here.
{ "domain": "physics.stackexchange", "id": 73955, "tags": "electrical-engineering" }
bag file to pcd files
Question: When I try to convert a bag file to a PCD file using the following command, I get multiple PCD files. Is this correct? How can I obtain a single PCD file from a bag file? rosrun pcl_ros bag_to_pcd <input_file.bag> <topic> <output_directory> Originally posted by noman on ROS Answers with karma: 21 on 2016-05-24 Post score: 1 Original comments Comment by Vignesh_93 on 2021-02-08: can this node be used to convert to point cloud data directly from the "scan" topic which is in laser reading format or does the topic need to be a point cloud data publishing topic ? Answer: The bag_to_pcd tool will use the topic that you choose as input, and turn each sensor_msgs/PointCloud2 message on that topic into its own PCD file. If you want a single PCD file, you need to use a tool like the point_cloud_assembler (http://wiki.ros.org/laser_assembler) to concatenate all of the clouds together into a single cloud. You can then use the bag_to_pcd tool to save that new single cloud. Originally posted by mjcarroll with karma: 6414 on 2016-10-04 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 24724, "tags": "ros, pcl, bag-to-pcd, pcl-ros" }
fail to connect denso hs45452 robot
Question: Hello everyone, I'm working through the tutorial http://wiki.ros.org/denso_robot_ros/Tutorials/How%20to%20control%20an%20RC8%20with%20MoveIt%21 (version indigo) about how to control an RC8 with MoveIt!. After I finished the tutorial, I launched the bringup file, but got this error: [ERROR] [1548245807.887070308]: Failed to change to slave mode. (83500126) And I checked the teach pendant; it says error code (83500126), needing registering (invoke source, PC). But I have followed the tutorial and registered the b-cap slave. Because of this error, I then got the following error: [ERROR] [1548245821.516886560]: MoveItSimpleControllerManager: Action client not connected: hs45452/arm_controller/follow_joint_trajectory Could you help me with this problem? Originally posted by shyreckdc on ROS Answers with karma: 16 on 2019-01-23 Post score: 0 Answer: I found the problem: after registering following the tutorial, the teach pendant should be restarted. Originally posted by shyreckdc with karma: 16 on 2019-01-23 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 32325, "tags": "ros, ros-indigo" }
Python OOP - creating library
Question: I have been learning programming in Python recently and I got a task to program a library in OOP. You can add books to a library, search for a specific book by ISBN or author name, or search for books which cost less than a given price. Everything seems to be okay, but I am new to OOP so I would like to hear some advice on whether it is programmed a "good way" or not. class Library(object): books = [] written_by_author = [] books_under_price = [] def __init__(self, name, author, isbn, price): self.name = name self.author = author self.isbn = isbn self.price = price def addBook(self): Library.books.append((self.name,self.author,self.isbn,self.price)) def searchBookISBN(self,isbn): for book in Library.books: if book[2]==isbn: return book def searchBookAuthor(self,author): for book in Library.books: if book[1]==author: Library.written_by_author.append(book) return Library.written_by_author def searchUnderPrice(self,price): for book in Library.books: if book[3]<price: Library.books_under_price.append(book) return Library.books_under_price book = Library('Geometry', 'Jeff Potter', '0596805888', 22) book.addBook() book = Library('Math', 'George Harr', '0594805888', 15) book.addBook() book = Library('English', 'James Odd', '0596225888', 10) book.addBook() book = Library('Physics','Jeff Potter','0597884512',18) print (book.searchBookISBN('0594805888')) print (book.searchBookAuthor('George Harr')) print (book.searchUnderPrice(20)) Answer: Your design doesn't follow best practice. Your Library class should create a library when you use __init__, rather than a book. And so I'd split your code in half and create two classes: one for books, the other for the library. You shouldn't mutate data on Library. This is because it mutates it on the class, rather than on the instance, and so if you make a new instance, it will be tied to all the other instances. When making the book class, this can be significantly simplified with collections.namedtuple. 
Such as: Book = namedtuple('Book', 'name author ISBN price') And so I'd change your code to: from collections import namedtuple class Library(object): def __init__(self): self.books = [] def addBook(self, book): self.books.append(book) def searchBookISBN(self, ISBN): for book in self.books: if book.ISBN == ISBN: return book def searchBookAuthor(self, author): written_by_author = [] for book in self.books: if book.author == author: written_by_author.append(book) return written_by_author def searchUnderPrice(self, price): books_under_price = [] for book in self.books: if book.price < price: books_under_price.append(book) return books_under_price Book = namedtuple('Book', 'name author ISBN price') library = Library() library.addBook(Book('Geometry', 'Jeff Potter', '0596805888', 22)) library.addBook(Book('Math', 'George Harr', '0594805888', 15)) library.addBook(Book('English', 'James Odd', '0596225888', 10)) library.addBook(Book('Physics', 'Jeff Potter', '0597884512', 18)) print(library.searchBookISBN('0594805888')) print(library.searchBookAuthor('George Harr')) print(library.searchUnderPrice(20)) After this, I'd recommend: You read and follow PEP 8, so that your code is easier to read. You learn list comprehensions Which would further improve your code to: class Library(object): def __init__(self): self.books = [] def add_book(self, book): self.books.append(book) def book_with_ISBN(self, ISBN): for book in self.books: if book.ISBN == ISBN: return book def books_by_author(self, author): return [book for book in self.books if book.author == author] def books_under_price(self, price): return [book for book in self.books if book.price < price]
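The shared-state problem described above ("it will be tied to all the other instances") can be demonstrated in a few lines; the BadLibrary/GoodLibrary names below are made up for this sketch:

```python
# Lists defined at class level are shared by every instance, so two
# "separate" libraries silently see each other's data.

class BadLibrary(object):
    books = []                 # class attribute: one list shared by all instances

    def add_book(self, book):
        BadLibrary.books.append(book)

class GoodLibrary(object):
    def __init__(self):
        self.books = []        # instance attribute: each library owns its list

    def add_book(self, book):
        self.books.append(book)

a, b = BadLibrary(), BadLibrary()
a.add_book('Geometry')
shared = b.books               # b sees a's book even though b never added one

c, d = GoodLibrary(), GoodLibrary()
c.add_book('Geometry')
independent = d.books          # d is unaffected
```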
{ "domain": "codereview.stackexchange", "id": 29671, "tags": "python, object-oriented" }
Does MOND reduce the discrepancy in a quantifiable way?
Question: Which of the following statements accurately describes the impact of Modified Newtonian Dynamics (MOND) on the observed "missing baryonic mass" discrepancy in galaxy clusters? 1. MOND is a theory that reduces the observed missing baryonic mass in galaxy clusters by postulating the existence of a new form of matter called "fuzzy dark matter." 2. MOND is a theory that increases the discrepancy between the observed missing baryonic mass in galaxy clusters and the measured velocity dispersions from a factor of around 10 to a factor of about 20. 3. MOND is a theory that explains the missing baryonic mass in galaxy clusters that was previously considered dark matter by demonstrating that the mass is in the form of neutrinos and axions. 4. MOND is a theory that reduces the discrepancy between the observed missing baryonic mass in galaxy clusters and the measured velocity dispersions from a factor of around 10 to a factor of about 2. 5. MOND is a theory that eliminates the observed missing baryonic mass in galaxy clusters by imposing a new mathematical formulation of gravity that does not require the existence of dark matter. We were arguing about this, and someone said 4 is wrong even though that's what it says on the Wikipedia page for MOND. They are arguing that MOND doesn't really reduce the discrepancy in a quantifiable way, and that 5 is supposedly a better answer. We were hoping folks here could help us figure out whether Wikipedia is wrong here. Answer: Point 1 is obviously wrong. MOND and fuzzy dark matter are different things, since the latter is a type of dark matter and MOND is a modified theory of gravity. Point 2 is also wrong, since MOND reduces the discrepancy, not increases it. Point 3 is a variant of point 1 - neutrinos are not fuzzy dark matter, but axions would qualify. So that leaves points 4 and 5. MOND definitely reduces the need for dark matter, but it doesn't eliminate it entirely. 
On the other hand, a much smaller amount of dark matter is easier to explain in mundane terms - it could, for example, just be black holes, dust, or planets that don't shine. Hence both 4 and 5 are correct; they just describe MOND differently.
{ "domain": "physics.stackexchange", "id": 96445, "tags": "modified-gravity" }
Why do living things go belly up as they die?
Question: I have seen birds, lizards, frogs, fish, etc. lying dead on their backs in various places. Insecticides may cause them to flip over, but I do not believe every upside-down creature died of poison, as stated in this question's answer: Why do cockroaches flip over when they die? That question only covers roaches and insecticides; my question covers all animals not affected by poison that end up belly up. Other than poisoning, is there any other reason, such as genetically programmed behavior or muscle spasms, that causes this? Answer: Limbs are lightweight compared to bodies. If you knock an animal around (and it has no active correction, i.e. it is not alive), it will tend to end up on its back simply because of where its center of gravity is.
{ "domain": "biology.stackexchange", "id": 8512, "tags": "genetics, neuroscience, psychology, death, life" }
Creating a valid dataset for obtaining results
Question: I have created a domain-specific dataset; let's say it relates to posts about python programming. I have taken data from various places specific to this topic to create the positive examples in my dataset, for example python-related subreddits, Stack Exchange posts tagged with python, Twitter posts hashtagged with python, or python-specific sites. The data points taken from these places are considered positive data points. I have then retrieved data points from the same sources but relating to general topics, searched whether they contain the word python, and if they do, discarded them to create the negative examples in my dataset. I have been told that I can use the training set from the dataset as is, but that I need to manually annotate the test set for the results to be valid, otherwise they would be biased. Is this correct? How would they be biased? To be clear, the test set contains different entries than the training set. There are close to 200,000 entries in the test set, which makes manual annotation difficult. I have seen similar methods used in papers I have previously read, without mention of manual annotation. Is this technique valid or do I have to take some extra steps to ensure the validity of the test sets? Answer: There are two potential biases: With this automatic method, you might have a few erroneous labels. For example, it happens regularly here on DataScienceSE that a user tags a question "python" when the question is not actually specific to python at all. The opposite case also occurs: some content may contain python code without mentioning "python" anywhere. The distribution between positive and negative classes is arbitrary. 
Let's assume you use 50% positive / 50% negative: if later you apply your classifier to a new data science website where only 10% of the content is about python, it is likely to predict a lot of false positives, so the true performance on this data will be much lower than on your test set. It's rare to have a perfect dataset, so realistically, in my opinion, the first issue is probably acceptable because the noise in the labels should be very limited. The second issue could be a bit more serious, but this depends on the end application. Keep in mind that a trained model is meant to be applied to the same kind of data as it was trained/tested on.
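The second bias can be quantified with a small sketch. Assuming a classifier with a fixed 90% true-positive rate and 10% false-positive rate (made-up numbers for illustration), its precision drops sharply when the positive class is rarer than it was in the test set:

```python
# Precision depends on class prevalence even when the classifier itself
# (its TPR and FPR) does not change.

def precision(tpr, fpr, prevalence):
    tp = tpr * prevalence              # expected true-positive fraction
    fp = fpr * (1.0 - prevalence)      # expected false-positive fraction
    return tp / (tp + fp)

balanced = precision(tpr=0.9, fpr=0.1, prevalence=0.5)  # 50/50 test set
skewed = precision(tpr=0.9, fpr=0.1, prevalence=0.1)    # 10% python content
```

With these numbers the precision falls from 0.90 on the balanced test set to 0.50 on the skewed site, which is the "much lower true performance" described above.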
{ "domain": "datascience.stackexchange", "id": 8025, "tags": "dataset, text-mining, nlp, text-classification" }
Data visualization with extreme far away points
Question: I want to show points across two groups. However, in both groups there are some points which are far away from most of the other points within the group, as shown below. Any suggestions for this situation? Thank you. Answer: If you want to see the distribution of the data that is hidden in the bottom portion, you can add a histogram, a probability plot, or even a violin plot. Each will show the distribution of the data more clearly than this boxplot does, and you can still see the true values directly. You can also add some jitter to the boxplot so that more of the overlapping points are displayed. Jitter: Violin: Probability Plot with Boxplot:
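As a minimal sketch of the jitter idea (a plotting library such as ggplot2 would normally do this for you), each point's horizontal position gets a small random offset so stacked points separate:

```python
import random

def jitter(x_positions, width=0.2, seed=0):
    """Offset each x position by a uniform random amount in [-width, width]."""
    rng = random.Random(seed)
    return [x + rng.uniform(-width, width) for x in x_positions]

# Two groups whose points all sit on the same two vertical lines:
group_x = [1.0] * 50 + [2.0] * 50
jittered = jitter(group_x)   # points now spread out but stay near their group
```

The vertical (value) axis is left untouched, so the extreme points remain exactly where they are; only the overplotting within each group is relieved.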
{ "domain": "datascience.stackexchange", "id": 7800, "tags": "r, visualization" }
Graphs for distributing rewards
Question: I want to be able to distribute a total reward among a referral network, where the distribution diminishes with the depth of the network but the total sum of distributed rewards equals the initial total reward. I have attempted to use graphs as a way to solve this, with the following assumptions: Give a fixed percentage of the total reward to the direct referrer. Distribute the remaining reward down the referral chain in decreasing amounts. The goal is for Node1 to receive 20% of the total reward off the top, with the remaining 80% distributed among all nodes, with decreasing amounts based on their distance from Node1. I also want to be able to support circular referrals and self-referrals. Code: import networkx as nx import matplotlib.pyplot as plt def distribute_rewards( graph: nx.DiGraph, start_node: str, total_reward: float, direct_referral_ratio: float, decrease_factor: float ) -> None: """ Distributes rewards within a referral network. Args: graph (nx.DiGraph): The referral network graph. start_node (str): The node from which distribution starts. total_reward (float): Total reward amount to distribute. direct_referral_ratio (float): Ratio of the total reward given to the start node. decrease_factor (float): Factor by which the reward decreases at each level. 
""" nx.set_node_attributes(graph, 0, 'reward') direct_reward = total_reward * direct_referral_ratio graph.nodes[start_node]['reward'] = direct_reward remaining_reward = total_reward - direct_reward level_weights: dict[int, float] = {} queue: list[tuple[str, int]] = [(start_node, 0)] visited: set[str] = {start_node} while queue: node, level = queue.pop(0) level_weights.setdefault(level, 0) level_weights[level] += decrease_factor ** level for neighbor in graph.successors(node): if neighbor not in visited: visited.add(neighbor) queue.append((neighbor, level + 1)) total_weight = sum(level_weights.values()) level_rewards = { level: (remaining_reward * weight) / total_weight for level, weight in level_weights.items() } for node in graph.nodes: level = nx.shortest_path_length(graph, source=start_node, target=node) if level in level_rewards: num_nodes_at_level = sum( 1 for n in graph.nodes if nx.shortest_path_length(graph, source=start_node, target=n) == level ) graph.nodes[node]['reward'] += level_rewards[level] / num_nodes_at_level def create_complex_graph(max_levels: int) -> nx.DiGraph: """ Creates a binary tree graph representing a referral network. Args: max_levels (int): The maximum depth of the referral network. Returns: nx.DiGraph: The generated referral network graph. 
""" graph = nx.DiGraph() for level in range(max_levels): for i in range(2**level): parent = f'Node{2**level + i}' children = [f'Node{2**(level + 1) + 2*i}', f'Node{2**(level + 1) + 2*i + 1}'] for child in children: graph.add_edge(parent, child) return graph def visualize_graph(graph: nx.DiGraph): layout = nx.spring_layout(graph, k=0.5, iterations=20) node_sizes = [graph.nodes[node]['reward'] / 10 for node in graph] # Draw nodes and edges nx.draw_networkx_nodes(graph, layout, node_size=node_sizes, node_color='skyblue', edgecolors='black') nx.draw_networkx_edges(graph, layout, arrows=True) # Draw node labels nx.draw_networkx_labels(graph, layout, font_size=8) # Remove axis for a cleaner look and display the graph plt.axis('off') plt.show() def main() -> None: max_levels = 5 total_reward = 3500 direct_referrer = 'Node1' # From the POV that Node1 is the direct referrer. direct_referral_ratio = 0.2 # 20% decrease_factor = 0.5 # 50% graph = create_complex_graph(max_levels) distribute_rewards(graph, direct_referrer, total_reward, direct_referral_ratio, decrease_factor) for node in sorted(graph.nodes, key=lambda x: int(x.lstrip('Node'))): print(f"{node}: Reward = {graph.nodes[node]['reward']:.2f}") visualize_graph(graph) if __name__ == "__main__": main() Edit As requested here is a screenshot of the graph. It was created using create_complex_graph(max_levels=5) and I arbitrarily chose max_level=5 which generates 63 nodes. For context, the nodes on the graph represent people to be rewarded and the edges represent referrals. See below: Edit 2 Given this is a reward/referral network, as a priority I want to accommodate the distribution of rewards up the referral chain from Node1 because the upstream direction identifies who initiated the referral chain and reveals the network of referrals that led to a specific user's inclusion in the system. Distributing a fixed percentage to Node1. Distributing the remaining reward upwards through the referral chain. 
Limiting the distribution by either the remaining reward, a maximum number of referral levels, the absence of further predecessors. import networkx as nx import matplotlib.pyplot as plt def create_simple_graph(): graph = nx.DiGraph() graph.add_edge('Node0', 'Node1') # Node0 refers Node1 return graph def create_complex_graph(): graph = nx.DiGraph() graph.add_edge('Node0', 'Node1') # Node0 refers Node1 (Level 1) graph.add_edge('Node-1', 'Node0') # Node-1 refers Node0 (Level 2) graph.add_edge('Node-2', 'Node-1') # Node-2 refers Node-1 (Level 3) graph.add_edge('Node-3', 'Node-2') graph.add_edge('Node-4', 'Node-3') graph.add_edge('Node-5', 'Node-4') graph.add_edge('Node-6', 'Node-5') graph.add_edge('Node-7', 'Node-6') # Up to Node-7, making it 7 levels upstream. return graph def visualize_graph(graph: nx.DiGraph): pos = nx.spring_layout(graph) labels = nx.get_node_attributes(graph, 'reward') nx.draw( graph, pos, with_labels=True, labels={ node: f"{node}\nReward: {reward:.2f}" for node, reward in labels.items() }, node_size=5000, node_color='skyblue' ) plt.show() def distribute_rewards( graph: nx.DiGraph, start_node: str, total_reward: float, start_node_ratio: float, upward_referral_ratio: float, max_referral_levels: int ) -> None: nx.set_node_attributes(graph, 0, 'reward') graph.nodes[start_node]['reward'] = total_reward * start_node_ratio remaining_reward = total_reward - graph.nodes[start_node]['reward'] # Start with the assumption that all directly and indirectly referred nodes need some reward # We will first focus on distributing rewards to the start node and its direct and indirect referrers # Initialize a queue to process nodes level by level (breadth-first) queue = [(start_node, 0)] visited = set() while queue: current_node, level = queue.pop(0) if level > 0: # Calculate and distribute reward reward_for_node = (remaining_reward * upward_referral_ratio) / (2 ** (level - 1)) graph.nodes[current_node]['reward'] += reward_for_node remaining_reward -= reward_for_node 
if level < max_referral_levels: for predecessor in graph.predecessors(current_node): if predecessor not in visited: queue.append((predecessor, level + 1)) visited.add(predecessor) def main(): graph = create_complex_graph() distribute_rewards( graph, 'Node1', # Start Point 1000, # Total Reward 0.33, # 33% of total reward goes to the start node 0.1, # 10% of the remaining reward distributed upwards per level 7 # Maximum levels to go up in the referral chain ) for node in graph.nodes: print(f"{node}: Reward = ${graph.nodes[node]['reward']:.2f}") visualize_graph(graph) if __name__ == "__main__": main() See the graph below, which at the moment is a tree structure. I plan to extend this solution to support self-referrals (Node1 referring themselves) and reverse-referrals, so I think keeping it as a graph is probably a sane thing to do. Answer: The bounty says you're "looking for an answer from a reputable source", but it's unclear what you mean by that. Is there a specific question you want a reputable answer to? The algorithm you've written is breadth-first search (your queue is FIFO); it's canonical and typical and fine. From a theoretical CS perspective my only quibble would be that, in theory, you should be able to use the one search for all of your graph traversal. Instead you're following up with nx.shortest_path_length (and then doing that again in a nested loop). Since you're building your own BFS process, it's probably possible to re-write it to do everything you want, but I'm pretty sure we're going to find a networkx tool that will simplify the whole problem. ... and yes: bfs_layers will serve. I was hoping for some functional-style traversal process we could use to do everything at once; it's probably there, I just didn't find it. With a single call to bfs_layers, you'll get a data structure that can replace your whole while queue:... and all your calls to nx.shortest_path_length. 
There are some other aspects of your code that I would have written differently, but IDK how much time you want to spend polishing this. A detail I didn't understand until I started testing: The root node gets their "direct" amount plus their exponential-proportionate share of the remainder. That actually answers several questions so I assume it's on purpose. Consider calling the "direct" share a "bonus", to make this more clear. from dataclasses import dataclass from typing import Sequence @dataclass(frozen=True) class WeightedLevel: rate: float nodes: Sequence[str] def distribute_rewards( graph: nx.DiGraph, start_node: str, total_reward: float, direct_referral_ratio: float, decrease_factor: float ) -> None: """ Distributes rewards within a referral network. Args: graph: The referral network graph. start_node: The node from which distribution starts. total_reward: Total reward amount to distribute. direct_referral_ratio: Ratio of the total reward given to the start node. decrease_factor: Factor by which the reward decreases at each level. """ nx.set_node_attributes(graph, 0, 'reward') direct_reward = total_reward * direct_referral_ratio remaining_reward = total_reward - direct_reward visited: set[str] = set() def dedupe(layer: list[str]) -> list[str]: # there's probably a better way deduped = [n for n in layer if n not in visited] visited.update(deduped) return deduped levels = [WeightedLevel(decrease_factor ** i, nodes) for (i, nodes) in enumerate(map(dedupe, nx.bfs_layers(graph, start_node))) if nodes] weight_value = remaining_reward / sum(wl.rate * len(wl.nodes) for wl in levels) for wl in levels: reward = weight_value * wl.rate for node in wl.nodes: graph.nodes[node]['reward'] = reward graph.nodes[start_node]['reward'] += direct_reward
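The level-weighting scheme itself can be checked without networkx. This sketch (a simplified stand-in for distribute_rewards, with made-up per-level node counts) shows that normalizing the per-node weights makes the payouts sum back exactly to the amount being distributed:

```python
# Each BFS level gets weight decrease_factor**level per node; normalizing by
# the total weight guarantees the payouts sum to the distributed amount.

def level_rewards(nodes_per_level, amount, decrease_factor):
    """nodes_per_level[i] is the node count at BFS level i.
    Returns the per-node reward for each level."""
    weights = [decrease_factor ** lvl for lvl in range(len(nodes_per_level))]
    total_weight = sum(w * n for w, n in zip(weights, nodes_per_level))
    return [amount * w / total_weight for w in weights]

total_reward, direct_ratio, factor = 3500.0, 0.2, 0.5
direct = total_reward * direct_ratio              # 20% off the top for Node1
per_node = level_rewards([1, 2, 4], total_reward - direct, factor)
distributed = sum(r * n for r, n in zip(per_node, [1, 2, 4]))
```

Here each deeper level pays half as much per node as the one above it, and the 2800.0 remaining after Node1's 700.0 cut is fully accounted for.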
{ "domain": "codereview.stackexchange", "id": 45583, "tags": "python, graph" }
Efficiently filter a large (100gb+) csv file (v2)
Question: EDIT: This question is followed up by this question. I'm in the process of filtering some very(!) large files (100gb+); I can't download files with a lower granularity. This is a follow-up to this question. The problem is as follows: I need to filter large files that look like the following (3b+ rows). TIC, Date, Time, Bid, Offer AAPL, 20090901, 09:45, 145, 145.5 AAPL, 20090902, 09:45, 145, 145.5 AAPL, 20090903, 09:45, 145, 145.5 I filter based on TICKER+DATE combinations found in an external file. I have, on average, ~1200 dates of interest per firm for ~700 firms. The large file contains all dates for the firms of interest, from which I want to extract only a few dates of interest. The big files are split by month (2013-01, 2013-02, etc.). The external file with the TICKER+DATE combinations looks like this: AAPL, 20090902 AAPL, 20090903 A few changes were made since the previous post: I used the CSV module, as was suggested. I write the rows to be retained to disk after every 5m rows. I iterate over the files using a try/except statement. I'm currently at 6 minutes of processing time for 30 million rows (1% of the file); I tested a few files and it works properly. However, with about 3 billion rows per file, that puts it at ~10 hours for one 120gb file. Seeing as I have about twelve files, I'm very curious whether I can get significant performance improvements by doing things differently. Any tips are greatly appreciated. import os import datetime import csv import re ROOT_DIR = "H:/ROOT_DIR/" SOURCE_FILES = os.path.join(ROOT_DIR, "10. Intradayfiles (source)/") EXPORT_DIR = os.path.join(ROOT_DIR, "11. CSV Export (step 1 Extract relevant firmdates)/") DATES_FILE = os.path.join(ROOT_DIR, "10. 
Dates of interest/firm_date_of_interest.csv") # Build the original date dict # For example: # d['AAPL'] is a list with ['20140901', '20140902', '20140901'] with open(DATES_FILE, "r") as csvfile: d = {} reader = csv.reader(csvfile) reader.next() for line in reader: firm = line[1] date = line[2] if firm in d.keys(): d[firm].append(date) else: d[firm] = [date] def main(): for root, dir, files in os.walk(SOURCE_FILES): num_files = len(files) for i, file in enumerate(files): print('File ' + str(i+1) + '/' + str(num_files) + '; ' + file) basename = os.path.splitext(file)[0] filepath = os.path.join(root, file) # Annotate files with 'DONE' after succesful processing: skip those if re.search("DONE", basename): continue start = datetime.datetime.now() rows_to_keep = [] # Read the file, append only rows for which the dates occurs in the dictionary for that firm. with open(filepath, 'rb') as csvfile: startfile = datetime.datetime.now() reader = csv.reader(csvfile) saved = 0 for i, row in enumerate(reader): # Every 5 million rows, I save what we've extracted so far. if i % 5000000 == 0: if rows_to_keep: with open(os.path.join(EXPORT_DIR, basename+' EXTRACT' + str(saved) + '.csv'), 'wb') as csvfile: writer = csv.writer(csvfile, quoting=csv.QUOTE_NONNUMERIC) for k, line in enumerate(rows_to_keep): writer.writerow(line) saved += 1 rows_to_keep = [] file_elapsed = datetime.datetime.now() - startfile print("Took me " + str(file_elapsed.seconds) + ' seconds... 
for ' + str(i) + ' rows..') # See if row[1] (the date) is in the dict, based on row[0] (the ticker) try: if row[1] in d[row[0]]: rows_to_keep.append(row) except KeyError: continue except IndexError: continue os.rename(os.path.join(root, file), os.path.join(root, os.path.splitext(file)[0]+'- DONE.csv')) elapsed = datetime.datetime.now() - start print("Took me " + str(elapsed.seconds) + ' seconds...') return rows_to_keep if __name__ == "__main__": main() Answer: You can make the reading of the file a generator function: def getLines(filename, d): with open(filename, "rb") as csvfile: datareader = csv.reader(csvfile) for row in datareader: try: if row[1] in d[row[0]]: yield row except KeyError: continue except IndexError: continue This extracts the actual filter to a different function. I would also suggest not accumulating the list of values to write out but instead use the generator to create chunks directly.
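Building on that generator (renamed matching_rows in this sketch), the chunking can be done with itertools.islice so each chunk is written out as soon as it is full instead of accumulating 5 million rows in a list. This is a sketch with in-memory StringIO stand-ins for the real files; note it also stores the dates of interest as sets, so the membership test is O(1) instead of scanning a list:

```python
import csv
import io
from itertools import islice

def matching_rows(csvfile, dates_by_ticker):
    """Yield only rows whose (ticker, date) pair is of interest."""
    for row in csv.reader(csvfile):
        try:
            if row[1] in dates_by_ticker[row[0]]:
                yield row
        except (KeyError, IndexError):
            continue

def chunks(iterable, size):
    """Yield lists of up to `size` items from any iterable."""
    iterator = iter(iterable)
    while True:
        chunk = list(islice(iterator, size))
        if not chunk:
            return
        yield chunk

dates = {'AAPL': {'20090902'}}   # sets give O(1) membership tests
source = io.StringIO(
    "AAPL,20090901,09:45,145,145.5\n"
    "AAPL,20090902,09:45,145,145.5\n"
    "MSFT,20090902,09:45,30,30.5\n"
)
# In the real script, each chunk would be written to its own EXTRACT file here.
kept_chunks = list(chunks(matching_rows(source, dates), 2))
```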
{ "domain": "codereview.stackexchange", "id": 13418, "tags": "python, csv" }
Can an object be infinitely small?
Question: I read somewhere that the Earth would have to be smaller than 1 cm to become a black hole, according to its Schwarzschild radius. Since the Big Bang came from a singularity, I am wondering: is there any minimum volume for anything? Answer: Infinity is a mathematical term, very useful, but the history of physics has shown us that when we make mathematical extrapolations that lead to infinities of one sort or another, a different mathematical model will eliminate those infinities (enter quantum mechanics). In thermodynamics, black-body radiation leads to the ultraviolet catastrophe, and quantum mechanics saves the day. In classical electromagnetism, a point-like electron would tend to an infinite potential at (0,0,0), as the potential goes with 1/r. Quantum electrodynamics saves the day. That is because quantum mechanics has inherent probabilistic indeterminacies when sizes become of the order of h (the Planck constant). Even though elementary particles are postulated as point particles, they are not classical particles; the wave/particle duality saves the day, so the minimum volume would be of dimensions compatible with h in the variables examined and the measurement methods used. Once gravity is quantized, the set will be complete, taking care of minimum black-hole volumes too, in a similar way.
{ "domain": "physics.stackexchange", "id": 44325, "tags": "black-holes, spacetime, quantum-gravity" }
small template metaprogramming list library
Question: So, time to explore the scary depths of template metaprogramming (well, scary for me, anyways). This library basically provides 2 different lists, a list of types and a list of sizes. Both lists support: Contains: check whether an element is in the list IndexOf: retrieve the index of an element in the list. static_asserts that the element is in the list. Size: retrieve the length of a list. Rename: Instantiate another template with the elements in the list. Filter: Return a list containing only the elements matching a predicate. Additionally, the list of sizes supports a Get operation: retrieve the size at an index. Any suggestions for improvements are welcome! mpl_types.h #pragma once namespace MPL { namespace Types { namespace impl { struct type_list_end{}; template<typename T, typename... Ts> struct type_list { using current = T; using next = type_list<Ts...>; }; template<typename T> struct type_list<T> { using current = T; using next = type_list_end; }; template<size_t Count, typename... Ts> struct type_list_builder_helper { using type = type_list<Ts...>; }; template<typename... Ts> struct type_list_builder_helper<0u, Ts...> { using type = type_list_end; }; template<typename... 
Ts> struct type_list_builder { using type = typename type_list_builder_helper<sizeof...(Ts), Ts...>::type; }; template<typename TypeList, typename T> struct type_list_contains; // forward declaration template<bool Same, typename TypeList, typename T> struct type_list_contains_helper { static const constexpr bool value = type_list_contains<typename TypeList::next, T>::value; }; template<typename TypeList, typename T> struct type_list_contains_helper<true, TypeList, T> { static const constexpr bool value = true; }; template<typename TypeList, typename T> struct type_list_contains { static const constexpr bool value = typename type_list_contains_helper<std::is_same<typename TypeList::current, T>::value, TypeList, T>::value; }; template<typename T> struct type_list_contains<type_list_end, T> { static const constexpr bool value = false; }; template<size_t Index, typename TypeList, typename T> struct type_list_index_of; template<bool Same, size_t Index, typename TypeList, typename T> struct type_list_index_of_helper { static const constexpr size_t value = type_list_index_of<Index + 1, typename TypeList::next, T>::value; }; template<size_t Index, typename TypeList, typename T> struct type_list_index_of_helper<true, Index, TypeList, T> { static const constexpr size_t value = Index; }; template<size_t Index, typename TypeList, typename T> struct type_list_index_of { static const constexpr size_t value = type_list_index_of_helper<std::is_same<typename TypeList::current, T>::value, Index, TypeList, T>::value; }; template<size_t Size, typename TypeList> struct type_list_size { static const constexpr size_t value = type_list_size<Size + 1, typename TypeList::next>::value; }; template<size_t Size> struct type_list_size<Size, type_list_end> { static const constexpr size_t value = Size; }; template<template<typename...> typename Target, typename TypeList> struct type_list_rename; template<template<typename...> typename Target, typename... 
Ts, template<typename...> typename TypeList>
struct type_list_rename<Target, TypeList<Ts...>> {
    using type = Target<Ts...>;
};

template<template<typename...> typename Target>
struct type_list_rename<Target, type_list_end> {
    using type = Target<>;
};

template<typename TypeList, template<typename> typename Pred, typename... Ts>
struct type_list_filter;

template<bool Same, typename TypeList, template<typename> typename Pred, typename... Ts>
struct type_list_filter_helper {
    using type = typename type_list_filter<typename TypeList::next, Pred, Ts...>::type;
};

template<typename TypeList, template<typename> typename Pred, typename... Ts>
struct type_list_filter_helper<true, TypeList, Pred, Ts...> {
    using type = typename type_list_filter<typename TypeList::next, Pred, Ts..., typename TypeList::current>::type;
};

template<typename TypeList, template<typename> typename Pred, typename... Ts>
struct type_list_filter {
    using type = typename type_list_filter_helper<Pred<typename TypeList::current>::value, TypeList, Pred, Ts...>::type;
};

template<template<typename> typename Pred, typename... Ts>
struct type_list_filter<type_list_end, Pred, Ts...> {
    using type = typename type_list_builder<Ts...>::type;
};

}

template<typename... Ts>
using List = typename impl::type_list_builder<Ts...>::type;

template<typename TypeList, typename T>
static constexpr bool Contains() {
    return impl::type_list_contains<TypeList, T>::value;
}

template<typename TypeList, typename T>
static constexpr size_t IndexOf() noexcept {
    static_assert(Contains<TypeList, T>(), "TypeList does not contain T");
    return impl::type_list_index_of<0u, TypeList, T>::value;
}

template<typename TypeList>
static constexpr size_t Size() noexcept {
    return impl::type_list_size<0u, TypeList>::value;
}

template<template<typename...> typename Target, typename TypeList>
using Rename = typename impl::type_list_rename<Target, TypeList>::type;

template<typename TypeList, template<typename> typename Pred>
using Filter = typename impl::type_list_filter<TypeList, Pred>::type;

}
}

mpl_sizes.h

#pragma once

namespace MPL {
namespace Sizes {
namespace impl {

struct size_list_end{};

template<size_t I, size_t... Is>
struct size_list {
    static const constexpr size_t current = I;
    using next = size_list<Is...>;
};

template<size_t I>
struct size_list<I> {
    static const constexpr size_t current = I;
    using next = size_list_end;
};

template<size_t Count, size_t... Is>
struct size_list_builder_helper {
    using type = size_list<Is...>;
};

template<size_t... Is>
struct size_list_builder_helper<0u, Is...> {
    using type = size_list_end;
};

template<size_t... Is>
struct size_list_builder {
    using type = typename size_list_builder_helper<sizeof...(Is), Is...>::type;
};

template<typename SizeList, size_t Value>
struct size_list_contains;

template<bool Same, typename SizeList, size_t Value>
struct size_list_contains_helper {
    static const constexpr bool value = size_list_contains<typename SizeList::next, Value>::value;
};

template<typename SizeList, size_t Value>
struct size_list_contains_helper<true, SizeList, Value> {
    static const constexpr bool value = true;
};

template<typename SizeList, size_t Value>
struct size_list_contains {
    static const constexpr bool value = size_list_contains_helper<SizeList::current == Value, SizeList, Value>::value;
};

template<size_t Value>
struct size_list_contains<size_list_end, Value> {
    static const constexpr bool value = false;
};

template<size_t Index, typename SizeList, size_t Value>
struct size_list_index_of;

template<bool Same, size_t Index, typename SizeList, size_t Value>
struct size_list_index_of_helper {
    static const constexpr size_t value = size_list_index_of<Index + 1, typename SizeList::next, Value>::value;
};

template<size_t Index, typename SizeList, size_t Value>
struct size_list_index_of_helper<true, Index, SizeList, Value> {
    static const constexpr size_t value = Index;
};

template<size_t Index, typename SizeList, size_t Value>
struct size_list_index_of {
    static const constexpr size_t value = size_list_index_of_helper<SizeList::current == Value, Index, SizeList, Value>::value;
};

template<size_t Index, size_t Value>
struct size_list_index_of<Index, size_list_end, Value>{};

template<size_t Size, typename SizeList>
struct size_list_size {
    static const constexpr size_t value = size_list_size<Size + 1, typename SizeList::next>::value;
};

template<size_t Size>
struct size_list_size<Size, size_list_end> {
    static const constexpr size_t value = Size;
};

template<size_t CurrentIndex, typename SizeList, size_t Index>
struct size_list_get;

template<bool Same, size_t CurrentIndex, typename SizeList, size_t Index>
struct size_list_get_helper {
    static const constexpr size_t value = size_list_get<CurrentIndex + 1, typename SizeList::next, Index>::value;
};

template<size_t CurrentIndex, typename SizeList, size_t Index>
struct size_list_get_helper<true, CurrentIndex, SizeList, Index> {
    static const constexpr size_t value = SizeList::current;
};

template<size_t CurrentIndex, typename SizeList, size_t Index>
struct size_list_get {
    static const constexpr size_t value = size_list_get_helper<CurrentIndex == Index, CurrentIndex, SizeList, Index>::value;
};

template<template<size_t...> typename Target, typename SizeList, size_t... Is>
struct size_list_rename {
    using type = typename size_list_rename<Target, typename SizeList::next, Is..., SizeList::current>::type;
};

template<template<size_t...> typename Target, size_t... Is>
struct size_list_rename<Target, size_list_end, Is...> {
    using type = Target<Is...>;
};

template<typename SizeList, template<size_t> typename Pred, size_t... Is>
struct size_list_filter;

template<bool Match, typename SizeList, template<size_t> typename Pred, size_t... Is>
struct size_list_filter_helper {
    using type = typename size_list_filter<typename SizeList::next, Pred, Is...>::type;
};

template<typename SizeList, template<size_t> typename Pred, size_t... Is>
struct size_list_filter_helper<true, SizeList, Pred, Is...> {
    using type = typename size_list_filter<typename SizeList::next, Pred, Is..., SizeList::current>::type;
};

template<typename SizeList, template<size_t> typename Pred, size_t... Is>
struct size_list_filter {
    using type = typename size_list_filter_helper<Pred<SizeList::current>::value, SizeList, Pred, Is...>::type;
};

template<template<size_t> typename Pred, size_t... Is>
struct size_list_filter<size_list_end, Pred, Is...> {
    using type = typename size_list_builder<Is...>::type;
};

}

template<size_t... Is>
using List = typename impl::size_list_builder<Is...>::type;

template<typename SizeList, size_t Value>
static constexpr bool Contains() noexcept {
    return impl::size_list_contains<SizeList, Value>::value;
}

template<typename SizeList, size_t Value>
static constexpr size_t IndexOf() noexcept {
    static_assert(Contains<SizeList, Value>(), "SizeList does not contain Value");
    return impl::size_list_index_of<0u, SizeList, Value>::value;
}

template<typename SizeList>
static constexpr size_t Size() noexcept {
    return impl::size_list_size<0u, SizeList>::value;
}

template<typename SizeList, size_t Index>
static constexpr size_t Get() noexcept {
    static_assert(Index < Size<SizeList>(), "Index out of range");
    return impl::size_list_get<0u, SizeList, Index>::value;
}

template<template<size_t...> typename Target, typename SizeList>
using Rename = typename impl::size_list_rename<Target, SizeList>::type;

template<typename SizeList, template<size_t> typename Pred>
using Filter = typename impl::size_list_filter<SizeList, Pred>::type;

}
}

examples

#include "mpl_types.h"
#include "mpl_sizes.h"
#include <tuple>

struct A {};
struct B {};
struct C {};
struct D {};

using TypeList1 = MPL::Types::List<A, B, C>;

// TypeList1 contains A, B and C
static_assert(MPL::Types::Contains<TypeList1, A>() &&
              MPL::Types::Contains<TypeList1, B>() &&
              MPL::Types::Contains<TypeList1, C>(), "");

// ... but not D
static_assert(!MPL::Types::Contains<TypeList1, D>(), "");

using TupleOfList = MPL::Types::Rename<std::tuple, TypeList1>;

// TupleOfList is exactly the same as std::tuple<A, B, C>
static_assert(std::is_same<TupleOfList, std::tuple<A, B, C>>::value, "");

// this predicate matches all types but B
template<typename T>
using isNotB = std::integral_constant<bool, !std::is_same<T, B>::value>;

// filter TypeList1 with this predicate
using TypeList2 = MPL::Types::Filter<TypeList1, isNotB>;

// TypeList2 should now contain (A, C)
static_assert(!MPL::Types::Contains<TypeList2, B>(), "");
static_assert(MPL::Types::IndexOf<TypeList2, A>() == 0u, "");
static_assert(MPL::Types::IndexOf<TypeList2, C>() == 1u, "");
static_assert(MPL::Types::Size<TypeList2>() == 2u, "");

template<typename... Ts>
using Indices = MPL::Sizes::List<MPL::Types::IndexOf<TypeList1, Ts>()...>;

using IndicesInTuple = MPL::Types::Rename<Indices, TypeList2>;
// IndicesInTuple contains (0, 2)

template<size_t... Indices>
struct TupleExtractor {
    template<typename... Ts>
    static auto extract(std::tuple<Ts...> t) {
        return std::make_tuple(std::get<Indices>(t)...);
    }
};

int main() {
    TupleOfList t;
    B bValue = std::get<MPL::Types::IndexOf<TypeList1, B>()>(t);
    using MyExtractor = MPL::Sizes::Rename<TupleExtractor, IndicesInTuple>;
    std::tuple<A, C> newTuple = MyExtractor::extract(t);
}

Answer:

#pragma once
namespace MPL {
...

You don't #include <type_traits>, but you do use std::is_same. Add the #include:

#pragma once
#include <type_traits>
namespace MPL {
...

struct type_list_end{};

template<typename T, typename... Ts>
struct type_list {
    using current = T;
    using next = type_list<Ts...>;
};

template<typename T>
struct type_list<T> {
    using current = T;
    using next = type_list_end;
};

It's a bit odd to define a separate type to indicate an empty sequence. I would expect the empty sequence to be type_list<>:

// We just let our default case be the empty sequence.
// That saves us a bit of code. If you don't want to do so,
// you can specialize the template for empty arguments.
template<typename... Ts>
struct type_list {};

template<typename T, typename... Ts>
struct type_list<T, Ts...> {
    using current = T;
    using next = type_list<Ts...>;
};

This means that (almost) everywhere you had to have a special case for an empty sequence, you now don't. For example, you can completely remove this:

template<template<typename...> typename Target>
struct type_list_rename<Target, type_list_end> {
    using type = Target<>;
};

There would also be no reason to have type_list_builder, so you could remove it altogether:

template<typename... Ts>
using List = impl::type_list<Ts...>;

One thing about template metaprogramming: instantiating templates / SFINAE is very expensive. If we can avoid instantiating templates as much as is reasonable, we can improve compile times. You can greatly simplify your implementation of Contains:

template<typename TypeList, typename T>
struct type_list_contains;

template<typename... Ts, typename T>
struct type_list_contains<type_list<Ts...>, T> {
    // You don't need both const and constexpr on variables
    static constexpr bool value = any<std::is_same<Ts, T>::value...>::value;
};

But then you'd need an implementation of any, such as:

template<bool...>
struct bool_list {};
// If you are using C++14, you may want to use
// template<bool... Bs>
// using bool_list = std::integer_sequence<bool, Bs...>;

template<bool... Bs>
using all = std::is_same<
    bool_list<true, Bs...>,
    bool_list<Bs..., true>
>;
// C++17:
// constexpr bool all = (Bs && ...);

template<bool... Bs>
using any = std::integral_constant<bool,
    !all<(!Bs)...>::value
>;
// C++17:
// constexpr bool any = (Bs || ...);

Your implementation of Size is doing way too much work. There's no need to recurse at all, you can simply do:

template<typename TypeList>
struct type_list_size;

template<typename... Ts>
struct type_list_size<type_list<Ts...>> {
    static constexpr size_t value = sizeof...(Ts);
};

Rename is more commonly known as "apply". You take a meta-function and an argument list, and apply the function to the arguments.

For your size list:

template<size_t I, size_t... Is>
struct size_list {
    static const constexpr size_t current = I;
    using next = size_list<Is...>;
};

This is arbitrarily restricted to only take size_t, when you can easily extend it to take any type T:

template<typename T, T first, T... rest>
struct value_list {
    static constexpr T current = first;
    using next = value_list<T, rest...>;
};

On that matter, it's a bit annoying that we have to rewrite the code for values as well as types. However, we could instead wrap the values in a type and reuse our type list:

template<typename T, T... values>
using value_list = type_list<std::integral_constant<T, values>...>;
{ "domain": "codereview.stackexchange", "id": 26955, "tags": "c++, c++11, template-meta-programming" }
Connection between asymptotic behavior of scalar field and scaling dimension in $AdS_4$
Question: In Gubser's famous paper on breaking Abelian gauge symmetry near a black hole horizon, he talks about how to connect the asymptotic behavior of the scalar field $\psi$ to the scaling dimension $\Delta$ of the dual operator. Solving the equation of motion for $\psi$ (Eqn. 9 in the text), $$\psi''+\frac{-1+(8r-4)k+4(4r^3-1)/L^2}{(r-1)(-1+4kr+4r(r^2+r+1)/L^2)}\psi'+\frac{m_{eff}^2}{(r-1)(-1+4kr+4r(r^2+r+1)/L^2)}\psi=0$$ he finds that $$\psi \sim \frac{A_\psi}{r^{3-\Delta}}+\frac{B_{\psi}}{r^\Delta}$$ where $A_\psi$ and $B_\psi$ are constants. I'm a little confused how he gets this expansion; i.e., how he gets this specific $r$ dependence. A similar calculation is done in "Exact Gravity Dual of a Gapless Superconductor", by Koutsoumbas et al., where an exact form of the hair is given in terms of the greatly simplified MTZ solution: $$\psi(r)=-\sqrt{\frac{3}{4\pi G}}\frac{r_0}{r+r_0}$$ The asymptotic solution is given in equation 5.12: $$\psi\sim \frac{\psi^1}{r}+\frac{\psi^2}{r^2}+...$$ If these two expansions are equal, then $\Delta=2$. This agrees with Gubser's result (below Eqn. 17), but I am not sure whether this is intentional. Ultimately, I have three interconnected questions that can be summed up as follows: 1) How, exactly, does the conformal dimension come out of Gubser's calculation? Is it connected to $k$? 2) Are the asymptotic expansions performed by Gubser and Koutsoumbas equivalent? 3) What is the physical significance of having $\Delta=2$ in both cases? Any explanation or clarifying references would be appreciated. EDIT: Let me clarify the first question.
Taking the asymptotic limits of the above expressions, the differential equation for $\psi$ can be simplified to $$ \psi''+\frac{4}{r}\psi'+\frac{1}{4}m_{eff}^2\left(\frac{L}{r}\right)^4\psi\approx 0$$ From this, we can then solve for $\psi$ and take a further expansion to get $$A+B\frac{1}{L^2 m_{eff}^2 r^2}(L^4m_{eff}^4\alpha-\beta \sqrt{-L^2m_{eff}^2})+...$$ and so on, where $A$, $B$, $\alpha$, and $\beta$ are constants. Now I know that we can relate the mass to the conformal dimension by $L^2m^2=\Delta(\Delta -3)$ in AdS$_4$, but my confusion with Gubser's calculation is the following: 1a) Why does he get an expansion in terms of $r$ to the power of $\Delta$? Shouldn't it be in integer powers of $r$ (like Koutsoumbas' calculation), with the conformal dimension multiplying each term? Answer: I will be doing a similar example that is much simpler, for illustrative purposes. The following example has been analyzed in various places in the literature; I will give the references at the end. Assume a five-dimensional AdS spacetime in the following parameterization $$ds^2=\frac{1}{x_0^2} (\eta_{\mu \nu} dx^{\mu} dx^{\nu} + dx^2_0)$$ In this parameterization, the conformal boundary of the space is reached as $x_0 \rightarrow 0$. Another frequent choice is the one corresponding to the change of variables $x_0 \rightarrow \frac{1}{r}$. We want to study a massive scalar whose dynamics is governed by the action $$ S = \int d^5x \sqrt{-g}\, (g^{AB} \partial_A \phi \partial_B \phi + m^2 \phi^2 ) $$ where $\phi$ is the scalar field under consideration and the capital letters are indices in the bulk of the theory. The field, of course, can depend on any of the coordinates, so in the above we abbreviate the formally written $\phi(x_0,x_{\mu})$ by $\phi$.
From standard techniques, it is easy to show that the equations of motion are $$\frac{1}{\sqrt{-g}} \partial_{A} (\sqrt{-g}\, g^{AB} \partial_B \phi) - m^2 \phi = 0,$$ which in these coordinates reads $$\partial_{x_0} \left( \frac{1}{x_0^3} \partial_{x_0} \phi \right) + \partial_{\mu} \left( \frac{1}{x_0^3} \partial^{\mu} \phi \right) = \frac{m^2}{x_0^5} \phi.$$ The important thing to understand is that, from the above equation, the $x_0$ dependence will yield the relation to the conformal dimension associated to the boundary operator. Focusing on the $x_0$ part of the above differential equation leads to power-like solutions. In other words, assume an ansatz $\phi = x_0^{\Delta}$ and obtain $$x_0^{\Delta} (-m^2+\Delta(\Delta-4)) = 0$$ and a little cute Mma "hack" for the above:

x0^5 D[1/x0^3 D[f[x0], x0], x0] - m^2 f[x0] /. f -> (#^\[CapitalDelta] &) // Factor

From this you obtain the infamous relation between the bulk AdS mass of the field and the conformal dimension of the operator. It is a very straightforward generalization to obtain the equivalent for a $(d+1)$-dimensional AdS spacetime. Now one can start to think about what kind of values the dimension can take and what that means for the operator. I am skipping the discussion here, but you can find details in all the references at the end of the answer. A next step of the analysis would be to decompose the scalar field (separate variables) by performing a Fourier decomposition. That is $$ \phi = e^{i ~ k^{\mu} ~ x_{\mu}} f(x_0) $$ A brief comment: The difference that I see between the expansions of Gubser and Koutsoumbas is that it seems that the latter author has specified the scaling dimension of the scalar operator. I have not studied the papers, but I am taking your word that we are dealing with the same gravity construction in both works. I also don't see anything wrong with Gubser's expression. He has integer powers.
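The mass-dimension relation just derived can be checked with a few lines of Python. This is only a sketch (the function name and the sample masses are my own, not from any of the papers); it solves $\Delta(\Delta-d)=m^2$ for a general boundary dimension $d$, with $d=4$ for the five-dimensional example above and $d=3$ for the AdS$_4$ relation $L^2m^2=\Delta(\Delta-3)$ from the question (in units $L=1$):

```python
import math

def dims(m2, d=4):
    """Roots of Delta * (Delta - d) = m^2 for AdS_{d+1}.

    The discriminant is real whenever m2 >= -d^2/4,
    i.e. above the Breitenlohner-Freedman bound."""
    disc = math.sqrt(d * d + 4.0 * m2)
    return ((d - disc) / 2.0, (d + disc) / 2.0)

# The two roots are exactly the powers that appear in the boundary
# expansion psi ~ A / r^(d - Delta) + B / r^Delta.
# For AdS_4 with L^2 m^2 = -2, the roots are Delta = 1 and Delta = 2,
# matching psi ~ psi^1 / r + psi^2 / r^2 in the question.
lo, hi = dims(-2.0, d=3)
```

The larger root is conventionally taken as the conformal dimension of the dual operator, which is how $\Delta=2$ emerges in the AdS$_4$ case discussed above.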
Regarding the physical significance / special meaning of that particular value of the conformal dimension, I have no idea. Maybe it is related to superconductors and their properties. Maybe they wanted a particular relevant operator of the theory (???) (see page 47 of the first reference for some discussion of (ir)relevant and marginal operators). A common practice here is to not include links to PDFs but rather to abstract pages, so I am choosing to present the references in the following way, as I cannot find a link to an abstract page for the first one. A place where you can find a neat, concise analysis is the first result after you google search "alberto zaffaroni lectures ads/cft". There are many formal analyses. The SUGRA book by Freedman and Van Proeyen would be such a place, as would the famous review by D'Hoker and Freedman, but pretty much all lecture notes on AdS/CFT contain that example and discussion. For more applied matters and discussion you might want to have a look at the book by Ammon and Erdmenger.
{ "domain": "physics.stackexchange", "id": 67328, "tags": "black-holes, superconductivity, ads-cft, holographic-principle" }
IBM Q experience - error code 520
Question: I have been running a variational circuit, optimizing the parameters on the "melbourne" device. The system launches multiple jobs in parallel, and only the jobs that my credit allows are accepted. The remainder get queued with the following message: "FAILURE: Can not get job id, Resubmit the qobj to get job id.Error: 403 Client Error: Forbidden for url: https://api.quantum-computing.ibm.com/api/Network/ibm-q/Groups/open/Projects/main/Jobs? access_token=.... Your credits to run jobs are not enough, Error code: 3458." This is normal, and it continues until the program runs to completion. Unfortunately, I have encountered a new error that actually halts the program and throws the error shown below (520 Server Error). My questions: Why is this happening? What can I do against it? Thanks. FAILURE: Can not get job id, Resubmit the qobj to get job id.Error: 403 Client Error: Forbidden for url: https://api.quantum-computing.ibm.com/api/Network/ibm-q/Groups/open/Projects/main/Jobs?access_token=.... Your credits to run jobs are not enough, Error code: 3458.
Traceback (most recent call last):
  File "file.py", line 307, in <module>
    QSVMsetup(featuremap)
  File "file.py", line 284, in QSVMsetup
    training_result = svm.train(df_train_test_x_Q, df_train_test_y_Q, quantum_instance)
  File "vqc.py", line 437, in train
    gradient_fn=grad_fn # func for computing gradient
  File "/home/user/.local/lib/python3.6/site-packages/qiskit/aqua/algorithms/adaptive/vq_algorithm.py", line 118, in find_minimum
    gradient_function=gradient_fn)
  File "/home/user/.local/lib/python3.6/site-packages/qiskit/aqua/components/optimizers/spsa.py", line 131, in optimize
    max_trials=self._max_trials, **self._options)
  File "/home/user/.local/lib/python3.6/site-packages/qiskit/aqua/components/optimizers/spsa.py", line 182, in _optimization
    cost_minus = obj_fun(theta_minus)
  File "/home/user/ibm/vqc_mod_v10.py", line 476, in _cost_function_wrapper
    predicted_probs, predicted_labels = self._get_prediction(self._batches[batch_index], theta)
  File "/home/user/ibm/vqc_mod_v10.py", line 348, in _get_prediction
    results = self._quantum_instance.execute(list(circuits.values()))
  File "/home/user/.local/lib/python3.6/site-packages/qiskit/aqua/quantum_instance.py", line 312, in execute
    self._skip_qobj_validation, self._job_callback)
  File "/home/user/.local/lib/python3.6/site-packages/qiskit/aqua/utils/run_circuits.py", line 343, in run_qobj
    logger.info("Backend status: {}".format(backend.status()))
  File "/home/user/.local/lib/python3.6/site-packages/qiskit/providers/ibmq/ibmqbackend.py", line 112, in status
    api_status = self._api.backend_status(self.name())
  File "/home/user/.local/lib/python3.6/site-packages/qiskit/providers/ibmq/api_v2/clients/account.py", line 64, in backend_status
    return self.client_api.backend(backend_name).status()
  File "/home/user/.local/lib/python3.6/site-packages/qiskit/providers/ibmq/api_v2/rest/backend.py", line 58, in status
    response = self.session.get(url).json()
  File "/home/user/.local/lib/python3.6/site-packages/requests/sessions.py", line 546, in get
    return self.request('GET', url, **kwargs)
  File "/home/user/.local/lib/python3.6/site-packages/qiskit/providers/ibmq/api_v2/session.py", line 166, in request
    raise RequestsApiError(ex, message) from None
qiskit.providers.ibmq.api_v2.exceptions.RequestsApiError: 520 Server Error: Origin Error for url: https://api.quantum-computing.ibm.com/api/Network/ibm-q/Groups/open/Projects/main/devices/ibmq_16_melbourne/queue/status?access_token=...

Answer: This is likely because the Melbourne device is currently offline for upgrades; it should be back up in around 2 weeks. If you join the Slack workspace linked from the Qiskit website and then join the #ibm-q-systems channel, you can get updates about all the devices and when they will be taken offline.
{ "domain": "quantumcomputing.stackexchange", "id": 1124, "tags": "qiskit, ibm-q-experience" }
Incompatibility Between Relativity and Quantum Mechanics
Question: Why does gravity distort space and time while the electromagnetic, strong, and weak forces do not? Does this have to do with why quantum mechanics and relativity are incompatible? Answer: It would be more precise to say that gravity is the manifestation of the effect of curved space-time on moving bodies, and that it is mass that curves the space-time, so Prof. Rennie is correct about this; but there are differences of opinion, at least, about the other aspects. It is not at all clear that mass is a kind of charge analogous to electric charge, although some researchers are trying to make this idea work and unify gravity with the Standard Model or QFT. Be that as it may, what is clear is that gravity or curvature is different from electromagnetism or charm etc., for one thing, because gravity is not a force. Einstein, Schroedinger, and other pioneers in GR were quite explicit about this. See the "gravity is not a force" mantra, https://physics.stackexchange.com/a/18324/6432, for a discussion of this. So there are major differences between gravity and the (other) fundamental forces, and this may well be the reason why gravity has not yet been successfully quantised. But there are even more incompatibilities between the whole spirit of GR and the spirit of QM. J.S. Bell was quite concerned about the seemingly fundamental incompatibilities between relativity and quantum theory, too. For my part, I would point out that in QM the wave functions live on configuration space, which for, say, two particles is six-dimensional; QM also treats other dynamical variables such as spin as being on an equal footing, which makes the space even larger. Also, QM treats momentum as just as valid a basis for coordinates as position, and this, too, is alien to the spirit of relativity, which treats the actual four-dimensional pseudo-Riemannian manifold as basic.
Precisely to overcome this incompatibility, Quantum Field Theory replaced these wave functions over configuration space with operator-valued functions on space-time. But although this kinda works to overcome the incompatibility of special relativity with QM, it makes the foundations of QFT much murkier (the role of probabilities, for instance the Born rule) and introduces infinities. Thus, although it might be a way to reconcile QM and relativity theory, it is still more of an unfinished project, and because of the unsatisfactory foundations of QFT (compared to the clear foundations of QM), one can still suspect there is a missing idea needed to really reconcile the two, or even that somebody has to budge and concede something or there will be no treaty...
{ "domain": "physics.stackexchange", "id": 2328, "tags": "quantum-mechanics, general-relativity, gravity" }
Magnitude of the cross product of two bra-kets?
Question: From the mathematical perspective, the cross product and its magnitude satisfy: $$ a\times b=|a|\,|b| \sin{\theta}\,\hat n, \qquad |a\times b|=|a|\,|b|\sin\theta, $$ where $\theta$ is the angle between $a$ and $b$ in the plane containing them, and $\hat n$ is the unit vector perpendicular to them. Does this apply to the cross product of two bra-kets? I became curious from equation 10 on Dr. Berry's paper, where the Berry phase acquired by a state is the double integral of the following term: $$ \langle \nabla n | m\rangle \times \langle m | \nabla n\rangle. $$ So, does the following make sense? $$ | \langle \nabla n | m\rangle \times \langle m | \nabla n\rangle| = | \langle \nabla n | m\rangle| | \langle m | \nabla n\rangle| \cdot\sin{\theta}, $$ where $$ \cos{(\theta)}=\langle \nabla n | m\rangle \cdot\langle m | \nabla n\rangle. $$ To me, it doesn't make sense intuitively because cross products are usually given in tensorial notation in physics (for example: Expressing the magnitude of a cross product in indicial notation). Why is the application of the magnitude above incorrect, or when is it applicable?
Both $v$ and $v^*$ are ordinary vectors with 3 complex components each. The fact that their components were constructed using bras and kets does not affect the meaning of the cross product $v^*\times v$. The only unconventional feature of $v^*\times v$ is the fact that the components of the vectors are complex numbers, and part of the question is whether or not a familiar identity for the magnitude of the cross-product generalizes to this case. In particular, the relationship between $|v^*\times v|$ and $v^*\cdot v$ is questioned. To address this, write $v=v_R+iv_I$ where $v_R$ and $v_I$ are the real and imaginary parts of $v$. Then $$ v^*\times v = 2i\,v_R\times v_I \hskip2cm v^*\cdot v = v_R\cdot v_R + v_I\cdot v_I. $$ This shows that the quantity $v^*\times v$ depends on the angle between $v_R$ and $v_I$, but the quantity $v^*\cdot v$ does not. Therefore, the value of $v^*\cdot v$ cannot determine the value of $|v^*\times v|$. This shows that the relationship questioned in the OP cannot hold in general. As a check, consider the case $v=(1,z,0)$ with $z=\exp(i\phi)$. Then $$ |v^*\times v|=2|\sin\phi| \hskip2cm v^*\cdot v=2 $$ for all $\phi$. This confirms that $v^*\cdot v$ does not determine the value of $|v^*\times v|$. Reference: [1] Berry (1984), "Quantal phase factors accompanying adiabatic changes," https://michaelberryphysics.files.wordpress.com/2013/07/berry120.pdf
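The closing check can also be reproduced numerically. Here is a small sketch in plain Python (the helper names are mine) that verifies $|v^*\times v|=2|\sin\phi|$ and $v^*\cdot v=2$ for $v=(1,e^{i\phi},0)$:

```python
import cmath
import math

def cross(a, b):
    """Cross product of two 3-vectors with complex components."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    """Plain (non-conjugating) dot product, as used in the answer."""
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    """Euclidean magnitude of a complex 3-vector."""
    return math.sqrt(sum(abs(x) ** 2 for x in v))

phi = 1.2
z = cmath.exp(1j * phi)
v = (1, z, 0)
vc = (1, z.conjugate(), 0)   # complex conjugate v*

# |v* x v| = 2 |sin(phi)| varies with phi, while v* . v = 2 does not,
# so the dot product cannot determine the magnitude of the cross product.
assert abs(norm(cross(vc, v)) - 2 * abs(math.sin(phi))) < 1e-12
assert abs(dot(vc, v) - 2) < 1e-12
```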
{ "domain": "physics.stackexchange", "id": 56602, "tags": "quantum-mechanics, hilbert-space, vectors, berry-pancharatnam-phase" }
Coefficient of friction between inclined plane and ground
Question: Apply a vertical force to an inclined plane. What coefficient of friction is required between the bottom of the plane and the ground to keep the system in equilibrium? Referencing the image, the vertical force $F$ should subject the plane to some amount of force in the horizontal direction. To maintain equilibrium, the coefficient of friction $µ$ must be large enough to satisfy $0=F_x-µN=F_x-µF$. I am having trouble identifying $F_x$. If it were a sub-component force of $F'_x$ and $F'_y$ then the forces would cancel out. Ultimately, my evidence supporting the existence of $F_x$ is that an object sliding down the plane would move horizontally and should induce complementary motion in the plane. I am slightly ashamed to admit that I am not a student, but someone who shouldn't need to ask this question! I was hoping for an explicit response, as this is not a homework question. I have read the relevant force analysis of power screws, which yields the following equation when simplified by ignoring friction and using a square thread. The question being answered in this analysis is: what torque is required to raise a load $W$ using a power screw? $τ=R\cdot F_{reaction}=R\cdot W\cdot\frac{\sin{a}}{\cos{a}}$ This should mean that $F_x$ in the inclined plane problem is $F_x=F\cdot\frac{\sin{a}}{\cos{a}}$. I can't figure out how to get to this answer and my reference does not show the steps. Answer: Assuming that the sloping face of the plane is frictionless, any applied force $P$ must be normal to the surface, at an angle of $a$ with the vertical. (This is your force $F_y'$.) You are told that the vertical component of $P$ has value $F$. The horizontal component of $P$ is the horizontal force $F_x$ which you seek. From the triangle of forces, $$\frac{F_x}{F}=\tan a$$ Where you went wrong is when you drew a triangle to resolve $F$ into components. Probably you assumed that the applied force could not have any horizontal component because you were not told about it.
But there must be one to ensure that there is no applied force parallel to the surface.
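The triangle of forces can be sketched numerically (the angle and force values below are arbitrary choices, not from the question):

```python
import math

a = math.radians(30.0)   # slope angle, arbitrary choice
F = 10.0                 # given vertical component of the applied force

# The applied force P is normal to the frictionless face,
# i.e. it makes angle a with the vertical, so:
P = F / math.cos(a)      # magnitude of the normal push
Fx = P * math.sin(a)     # its horizontal component

# This is exactly F * tan(a), the power-screw result quoted in the question
assert math.isclose(Fx, F * math.tan(a))

# Minimum friction coefficient for equilibrium, using the question's
# balance 0 = Fx - mu * N with N = F (wedge weight neglected):
mu_min = Fx / F          # equals tan(a)
```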
{ "domain": "physics.stackexchange", "id": 64536, "tags": "newtonian-mechanics, forces, friction, vectors, free-body-diagram" }
How do you build roslisp_support on fuerte?
Question: I'm quite new to ROS, so my apologies if some of my assumptions or terminology are mistaken. I have run through the beginner tutorials and built and gotten to run some toy examples using Python. But to make progress on my real work I need to get Roslisp running. I'm using the NooTrix prebuilt virtual machine containing ROS Fuerte on Ubuntu 12.04.2 LTS, running in VirtualBox 4.2.16 under Mac OS X 10.8.4. I am limited to Fuerte, as my ultimate objective is to integrate my work with that of others who are using Fuerte. It appears that to use Roslisp I need to install the roslisp_support stack, as without it there is no SBCL for me to launch and get started. Searching about, I can find no pre-built roslisp_support, so I've checked out the sources from https://code.ros.org/svn/ros/stacks/roslisp_support/trunk into a subdirectory under ~/ros_workspace. I've cd'd there and run rosmake without any arguments. This fails while trying to build the sbcl package with the error

make-host-1.sh: 31: make-host-1.sh: sbcl: not found

I note that line 31 of make-host-1.sh appears to be

$SBCL_XC_HOST < make-host-1.lisp || exit 1

I may be completely mistaken, but am guessing that may imply that something, I don't know what, upstream in the build process is setting the environment variable SBCL_XC_HOST to 'sbcl', but that that executable, if it exists at all, is not in my PATH at that point. Am I correct in believing I need to build roslisp_support from source in order to use Roslisp? If so, does anyone have any advice on how to get it built successfully? Originally posted by dfm on ROS Answers with karma: 26 on 2013-07-30 Post score: 0 Answer: I think I've figured this one out. Before you can build and install SBCL, you have to install SBCL. :-) That's not quite as silly as it sounds. The following recipe worked for me.
Start with sudo apt-get install sbcl This installs a recent version of SBCL, not necessarily the one Roslisp wants. However, with it available, you can then successfully do a rosmake of the roslisp_support sources, which will use the newer SBCL to bootstrap its build of the, presumably older, version of SBCL that Roslisp wants. I'm guessing most folks who try to build roslisp_support never encounter this as a problem 'cause they already have a version of SBCL on their machines -- mostly it's just Lisp hackers that are going to want to use Roslisp, I'm guessing. But I stumbled over it since I was using a nearly pristine VM. Anyway, installing SBCL before installing SBCL seems to have done the trick for me. Originally posted by dfm with karma: 26 on 2013-07-31 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 15104, "tags": "ros, roslisp" }
1st law of thermodynamics MCQ question
Question: Given the following question: could someone explain how A is correct and D is wrong? I'm a little confused. Since it's a closed cycle, shouldn't there be no energy lost? Answer: A and D are ambiguous. Your interpretation, that "energy lost or gained" means the overall change in internal energy, does indeed lead to A being wrong and D being right. I suppose that another interpretation might be that the gas loses energy in the form of heat, equal to the work done on it over the cycle, leading to A being right and D being wrong. Not your fault.
{ "domain": "physics.stackexchange", "id": 48498, "tags": "thermodynamics, ideal-gas" }
Approximating strength and hardness of alloy
Question: I do not have any formal education in metallurgy, but I am trying to make a game mechanic based on it. I need to be able to alloy together different metals and alloyable materials such as carbon in different percentages, and get an approximation of the strength and hardness of the resulting alloy. If anyone has any ideas about what I would need to do to calculate such a thing, I would be very appreciative. Answer: Look at the free resource provided at MatWeb. Under the Physical Properties tab, you can choose up to 3 of them from a rather extensive list, including hardness. For example, I chose "Carbon" and "Hardness, Rockwell R," and 25 results were returned. Under the Alloy Composition tab, you can choose up to 3 elements and percentages for your alloy, and see if you get a hit. I chose "Nonferrous Metal" and carbon (minimum 5%, maximum 75%) and silicon (minimum 5%, maximum 75%), and got one hit. I believe for your purposes, you can assemble a list of materials by hardness and strength (I'm not a metallurgist either, so you might have to do some research on which of the hardness and strength metrics you'll need to choose) and/or by selecting by composition (again, you might need to research which 2- and 3-element combinations make sense and which ones don't).
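For a game mechanic (as opposed to real metallurgy), one crude starting point is a mass-fraction-weighted average of each component's properties, the so-called "rule of mixtures". The sketch below is purely illustrative: the struct names and all the numbers are made up, and real alloys deviate strongly from a linear mix, so treat it as a placeholder heuristic:

```rust
// Hypothetical sketch: approximate alloy hardness/strength as a
// mass-fraction-weighted average of the components' properties.
// All values are in arbitrary game units, not real material data.
struct Component {
    hardness: f64, // arbitrary game units
    strength: f64, // arbitrary game units
    fraction: f64, // mass fraction; the fractions should sum to 1.0
}

fn alloy_properties(components: &[Component]) -> (f64, f64) {
    // Normalize by the total fraction so slightly-off inputs still work.
    let total: f64 = components.iter().map(|c| c.fraction).sum();
    let hardness: f64 = components.iter().map(|c| c.hardness * c.fraction).sum::<f64>() / total;
    let strength: f64 = components.iter().map(|c| c.strength * c.fraction).sum::<f64>() / total;
    (hardness, strength)
}

fn main() {
    // Made-up numbers for a 90% "iron" / 10% "carbon" game alloy.
    let steel = [
        Component { hardness: 4.0, strength: 5.0, fraction: 0.9 },
        Component { hardness: 10.0, strength: 2.0, fraction: 0.1 },
    ];
    let (h, s) = alloy_properties(&steel);
    println!("hardness {:.2}, strength {:.2}", h, s); // hardness 4.60, strength 4.70
}
```

A natural refinement, in the spirit of the answer, is to replace the made-up component values with numbers looked up from a real database such as MatWeb.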
{ "domain": "chemistry.stackexchange", "id": 3985, "tags": "metallurgy" }
Intcode computer in Rust
Question: This is my first attempt to learn Rust by applying it to a problem that I assume is suitable for the language. It's the Intcode computer from https://adventofcode.com/2019. I've implemented all the features of the computer, and the result is the following:

use crate::ComputeResult::{CanContinue, Halt, WaitingForInput};
use std::sync::mpsc;
use std::sync::mpsc::{Receiver, Sender};
use std::thread;

fn main() {
    let input = "1,0,0,3,1,1,2,3,1,3,4,3,1,5,0,3,2,6,1,19,1,19,5,23,2,9,23,27,1,5,27,31,1,5,31,35,1,35,13,39,1,39,9,43,1,5,43,47,1,47,6,51,1,51,13,55,1,55,9,59,1,59,13,63,2,63,13,67,1,67,10,71,1,71,6,75,2,10,75,79,2,10,79,83,1,5,83,87,2,6,87,91,1,91,6,95,1,95,13,99,2,99,13,103,1,103,9,107,1,10,107,111,2,111,13,115,1,10,115,119,1,10,119,123,2,13,123,127,2,6,127,131,1,13,131,135,1,135,2,139,1,139,6,0,99,2,0,14,0";
    println!("Hello, world, {:?}", str_to_intcode(input));
}

fn str_to_intcode(string: &str) -> Vec<i64> {
    string
        .split_terminator(",")
        .map(|s| s.parse().unwrap())
        .collect()
}

struct State {
    instruction_pointer: u32,
    intcode: Vec<i64>,
    input: Vec<i64>,
    output: Vec<i64>,
    relative_base: i64,
}

enum ComputeResult {
    Halt,
    CanContinue,
    WaitingForInput,
}

//todo turn into an enumeration instead of using u8 for the parameter modes?
fn parameter_modes(opcode: i64) -> (u8, u8, u8, u8) {
    let a = opcode / 10000;
    let b = (opcode - a * 10000) / 1000;
    let c = (opcode - a * 10000 - b * 1000) / 100;
    let d = opcode - a * 10000 - b * 1000 - c * 100;
    (a as u8, b as u8, c as u8, d as u8)
}

fn state_from_string(string: &str) -> State {
    State {
        instruction_pointer: 0,
        intcode: str_to_intcode(string),
        input: vec![],
        output: vec![],
        relative_base: 0,
    }
}

//todo how to deal with the situation when the state is invalid or the parameter mode isn't supported?
fn get_value(parameter_mode: u8, pointer: u32, state: &State) -> i64 {
    //position mode
    if parameter_mode == 0 {
        let at_index = state.intcode[pointer as usize];
        state.intcode[at_index as usize]
    }
    //immediate mode
    else if parameter_mode == 1 {
        state.intcode[pointer as usize]
    } else if parameter_mode == 2 {
        let at_index = state.intcode[pointer as usize] + state.relative_base as i64;
        state.intcode[at_index as usize]
    } else {
        panic!("parameter mode {} not supported", parameter_mode)
    }
}

fn extend_memory(memory_index: u32, state: &mut State) {
    if memory_index >= state.intcode.len() as u32 {
        state.intcode.resize((memory_index + 1) as usize, 0);
    }
}

fn get_memory_address(parameter_mode: u8, pointer: u32, state: &State) -> i64 {
    //position mode
    if parameter_mode == 0 {
        state.intcode[pointer as usize]
    }
    //immediate mode
    else if parameter_mode == 1 {
        panic!("writing to memory will never be in immediate mode")
    }
    //relative mode
    else if parameter_mode == 2 {
        state.intcode[pointer as usize] + state.relative_base as i64
    } else {
        panic!("parameter mode {} not supported", parameter_mode)
    }
}

fn five_amplifiers_in_sequence(intcode: &str, phase_setting: Vec<i64>) -> i64 {
    computer(intcode, vec![0, phase_setting[0]])
        //todo refactor pop for something immutable?
.and_then(|mut o| computer(intcode, vec![o.pop().unwrap(), phase_setting[1]])) .and_then(|mut o| computer(intcode, vec![o.pop().unwrap(), phase_setting[2]])) .and_then(|mut o| computer(intcode, vec![o.pop().unwrap(), phase_setting[3]])) .and_then(|mut o| computer(intcode, vec![o.pop().unwrap(), phase_setting[4]])) .ok() .unwrap() .pop() .unwrap() } fn computer(intcode: &str, input: Vec<i64>) -> Result<Vec<i64>, &str> { let mut state = state_from_string(intcode); state.input = input; loop { match compute(&mut state) { Ok(r) => match r { Halt | WaitingForInput => break Ok(state.output), CanContinue => continue, }, //todo refactor the nested match and simplify the error mapping Err(_) => break Err("bam"), } } } fn pop_and_send(state: &mut State, rx: &Sender<i64>) -> i64 { let mut last = 0; loop { //todo for future use-cases, this might not be desired behaviour, replace with drain match state.output.pop() { None => break last, Some(v) => { last = v; rx.send(v) } }; } } fn five_amplifiers_in_a_feedback_loop( intcode: &'static str, phase_setting: Vec<i32>, ) -> Option<i64> { assert_eq!( phase_setting.len(), 5, "phase sequence of length five expected, while {} provided", phase_setting.len() ); let (tx_a, rx_a): (Sender<i64>, Receiver<i64>) = mpsc::channel(); let (tx_b, rx_b): (Sender<i64>, Receiver<i64>) = mpsc::channel(); let (tx_c, rx_c): (Sender<i64>, Receiver<i64>) = mpsc::channel(); let (tx_d, rx_d): (Sender<i64>, Receiver<i64>) = mpsc::channel(); let (tx_e, rx_e): (Sender<i64>, Receiver<i64>) = mpsc::channel(); let lambda = move |name: &str, tx: Sender<i64>, rx: Receiver<i64>| -> i64 { let mut state = state_from_string(intcode); loop { match compute(&mut state) { Ok(r) => match r { Halt => { break pop_and_send(&mut state, &tx); } WaitingForInput => { pop_and_send(&mut state, &tx); match rx.recv() { Ok(v) => { state.input.push(v); continue; } Err(e) => panic!("{} error: {}", name, e), } } CanContinue => continue, }, //todo refactor the nested match and simplify the 
error mapping Err(_) => panic!("{} ",), } } }; tx_a.send(phase_setting[0] as i64) .and_then(|_| tx_a.send(0)) .and_then(|_| tx_b.send(phase_setting[1] as i64)) .and_then(|_| tx_c.send(phase_setting[2] as i64)) .and_then(|_| tx_d.send(phase_setting[3] as i64)) .and_then(|_| tx_e.send(phase_setting[4] as i64)) .unwrap(); let _a = thread::spawn(move || lambda("A", tx_b, rx_a)); let _b = thread::spawn(move || lambda("B", tx_c, rx_b)); let _c = thread::spawn(move || lambda("C", tx_d, rx_c)); let _d = thread::spawn(move || lambda("D", tx_e, rx_d)); let e = thread::spawn(move || lambda("E", tx_a, rx_e)); e.join().ok() } fn compute(state: &mut State) -> Result<ComputeResult, String> { let offset = state.instruction_pointer; //todo is this defensive programming a good idea? assert!( offset < state.intcode.len() as u32, "offset {} out of bounds, intcode length {}", offset, state.intcode.len() ); assert!(state.intcode.len() > 0, "no intcode to process"); let (a, b, c, opcode) = parameter_modes(state.intcode[offset as usize]); //add if opcode == 1 { let memory_address = get_memory_address(a, offset + 3, state); extend_memory(memory_address as u32, state); let first_parameter = get_value(c, offset + 1, state); let second_parameter = get_value(b, offset + 2, state); state.intcode[memory_address as usize] = first_parameter + second_parameter; state.instruction_pointer += 4; Ok(CanContinue) } //multiply else if opcode == 2 { let memory_address = get_memory_address(a, offset + 3, state); extend_memory(memory_address as u32, state); let first_parameter = get_value(c, offset + 1, state); let second_parameter = get_value(b, offset + 2, state); state.intcode[memory_address as usize] = first_parameter * second_parameter; state.instruction_pointer += 4; Ok(CanContinue) } //input else if opcode == 3 { let memory_address = get_memory_address(c, offset + 1, state); //attempt to read from the input match state.input.pop() { Some(v) => { extend_memory(memory_address as u32, state); 
state.intcode[memory_address as usize] = v as i64; state.instruction_pointer += 2; Ok(CanContinue) } None => Ok(WaitingForInput), } } //output else if opcode == 4 { let value_to_output = get_value(c, offset + 1, state); state.output.push(value_to_output); state.instruction_pointer += 2; Ok(CanContinue) } //jump it true else if opcode == 5 { let first_parameter = get_value(c, offset + 1, state); let second_parameter = get_value(b, offset + 2, state); if first_parameter != 0 { state.instruction_pointer = second_parameter as u32; } else { state.instruction_pointer += 3; } Ok(CanContinue) } //jump it false else if opcode == 6 { let first_parameter = get_value(c, offset + 1, state); let second_parameter = get_value(b, offset + 2, state); if first_parameter == 0 { state.instruction_pointer = second_parameter as u32; } else { state.instruction_pointer += 3; } Ok(CanContinue) } //less than //todo refactor because the only difference in the logic for opcode 7 and 8 is '<' vs. '==', lambda or something? 
else if opcode == 7 { let memory_address = get_memory_address(a, offset + 3, state); extend_memory(memory_address as u32, state); let first_parameter = get_value(c, offset + 1, state); let second_parameter = get_value(b, offset + 2, state); let value = if first_parameter < second_parameter { 1 } else { 0 }; state.intcode[memory_address as usize] = value; state.instruction_pointer += 4; Ok(CanContinue) } //equals else if opcode == 8 { let memory_address = get_memory_address(a, offset + 3, state); extend_memory(memory_address as u32, state); let first_parameter = get_value(c, offset + 1, state); let second_parameter = get_value(b, offset + 2, state); let value = if first_parameter == second_parameter { 1 } else { 0 }; state.intcode[memory_address as usize] = value; state.instruction_pointer += 4; Ok(CanContinue) } //adjust relative base else if opcode == 9 { let first_parameter = get_value(c, offset + 1, state); state.relative_base += first_parameter; state.instruction_pointer += 2; Ok(CanContinue) } else if opcode == 99 { Ok(Halt) } else { let error = format!("{} {}", "Unknown opcode", opcode); Err(error) } } #[cfg(test)] mod tests { use crate::{ computer, five_amplifiers_in_a_feedback_loop, five_amplifiers_in_sequence, str_to_intcode, }; use permutohedron::Heap; #[test] fn can_parse_intcode() { assert_eq!(vec![1, 0, 0, 0, 99], str_to_intcode("1,0,0,0,99")); } #[test] fn input_output() { assert_output("3,0,4,0,99", Some(55), vec![55]) } #[test] fn parameter_modes() { assert_output("1002,4,3,4,33", None, vec![]) } fn input_day5() -> &'static str { 
"3,225,1,225,6,6,1100,1,238,225,104,0,1,192,154,224,101,-161,224,224,4,224,102,8,223,223,101,5,224,224,1,223,224,223,1001,157,48,224,1001,224,-61,224,4,224,102,8,223,223,101,2,224,224,1,223,224,223,1102,15,28,225,1002,162,75,224,1001,224,-600,224,4,224,1002,223,8,223,1001,224,1,224,1,224,223,223,102,32,57,224,1001,224,-480,224,4,224,102,8,223,223,101,1,224,224,1,224,223,223,1101,6,23,225,1102,15,70,224,1001,224,-1050,224,4,224,1002,223,8,223,101,5,224,224,1,224,223,223,101,53,196,224,1001,224,-63,224,4,224,102,8,223,223,1001,224,3,224,1,224,223,223,1101,64,94,225,1102,13,23,225,1101,41,8,225,2,105,187,224,1001,224,-60,224,4,224,1002,223,8,223,101,6,224,224,1,224,223,223,1101,10,23,225,1101,16,67,225,1101,58,10,225,1101,25,34,224,1001,224,-59,224,4,224,1002,223,8,223,1001,224,3,224,1,223,224,223,4,223,99,0,0,0,677,0,0,0,0,0,0,0,0,0,0,0,1105,0,99999,1105,227,247,1105,1,99999,1005,227,99999,1005,0,256,1105,1,99999,1106,227,99999,1106,0,265,1105,1,99999,1006,0,99999,1006,227,274,1105,1,99999,1105,1,280,1105,1,99999,1,225,225,225,1101,294,0,0,105,1,0,1105,1,99999,1106,0,300,1105,1,99999,1,225,225,225,1101,314,0,0,106,0,0,1105,1,99999,1108,226,226,224,102,2,223,223,1005,224,329,101,1,223,223,107,226,226,224,1002,223,2,223,1005,224,344,1001,223,1,223,107,677,226,224,102,2,223,223,1005,224,359,101,1,223,223,7,677,226,224,102,2,223,223,1005,224,374,101,1,223,223,108,226,226,224,102,2,223,223,1006,224,389,101,1,223,223,1007,677,677,224,102,2,223,223,1005,224,404,101,1,223,223,7,226,677,224,102,2,223,223,1006,224,419,101,1,223,223,1107,226,677,224,1002,223,2,223,1005,224,434,1001,223,1,223,1108,226,677,224,102,2,223,223,1005,224,449,101,1,223,223,108,226,677,224,102,2,223,223,1005,224,464,1001,223,1,223,8,226,677,224,1002,223,2,223,1005,224,479,1001,223,1,223,1007,226,226,224,102,2,223,223,1006,224,494,101,1,223,223,1008,226,677,224,102,2,223,223,1006,224,509,101,1,223,223,1107,677,226,224,1002,223,2,223,1006,224,524,1001,223,1,223,108,677,677,224,1002,223,2,223,1005,224,539,1
001,223,1,223,1107,226,226,224,1002,223,2,223,1006,224,554,1001,223,1,223,7,226,226,224,1002,223,2,223,1006,224,569,1001,223,1,223,8,677,226,224,102,2,223,223,1006,224,584,101,1,223,223,1008,677,677,224,102,2,223,223,1005,224,599,101,1,223,223,1007,226,677,224,1002,223,2,223,1006,224,614,1001,223,1,223,8,677,677,224,1002,223,2,223,1005,224,629,101,1,223,223,107,677,677,224,102,2,223,223,1005,224,644,101,1,223,223,1108,677,226,224,102,2,223,223,1005,224,659,101,1,223,223,1008,226,226,224,102,2,223,223,1006,224,674,1001,223,1,223,4,223,99,226" } #[test] fn day5_part_one() { assert_output( input_day5(), Some(1), vec![0, 0, 0, 0, 0, 0, 0, 0, 0, 11049715], ) } #[test] fn day5_part_two() { assert_output(input_day5(), Some(5), vec![2140710]) } fn input_day7() -> &'static str { "3,8,1001,8,10,8,105,1,0,0,21,42,67,84,97,118,199,280,361,442,99999,3,9,101,4,9,9,102,5,9,9,101,2,9,9,1002,9,2,9,4,9,99,3,9,101,5,9,9,102,5,9,9,1001,9,5,9,102,3,9,9,1001,9,2,9,4,9,99,3,9,1001,9,5,9,1002,9,2,9,1001,9,5,9,4,9,99,3,9,1001,9,5,9,1002,9,3,9,4,9,99,3,9,102,4,9,9,101,4,9,9,102,2,9,9,101,3,9,9,4,9,99,3,9,102,2,9,9,4,9,3,9,1002,9,2,9,4,9,3,9,1001,9,2,9,4,9,3,9,102,2,9,9,4,9,3,9,102,2,9,9,4,9,3,9,1001,9,2,9,4,9,3,9,1002,9,2,9,4,9,3,9,102,2,9,9,4,9,3,9,1001,9,2,9,4,9,3,9,101,2,9,9,4,9,99,3,9,1001,9,1,9,4,9,3,9,101,2,9,9,4,9,3,9,1001,9,2,9,4,9,3,9,1002,9,2,9,4,9,3,9,101,2,9,9,4,9,3,9,1002,9,2,9,4,9,3,9,102,2,9,9,4,9,3,9,1002,9,2,9,4,9,3,9,101,1,9,9,4,9,3,9,101,2,9,9,4,9,99,3,9,101,1,9,9,4,9,3,9,1001,9,1,9,4,9,3,9,1002,9,2,9,4,9,3,9,1002,9,2,9,4,9,3,9,1002,9,2,9,4,9,3,9,1001,9,2,9,4,9,3,9,102,2,9,9,4,9,3,9,102,2,9,9,4,9,3,9,101,2,9,9,4,9,3,9,1001,9,2,9,4,9,99,3,9,102,2,9,9,4,9,3,9,102,2,9,9,4,9,3,9,1001,9,2,9,4,9,3,9,102,2,9,9,4,9,3,9,1001,9,2,9,4,9,3,9,102,2,9,9,4,9,3,9,102,2,9,9,4,9,3,9,101,1,9,9,4,9,3,9,1001,9,2,9,4,9,3,9,1002,9,2,9,4,9,99,3,9,101,1,9,9,4,9,3,9,101,1,9,9,4,9,3,9,102,2,9,9,4,9,3,9,1001,9,2,9,4,9,3,9,1001,9,2,9,4,9,3,9,1002,9,2,9,4,9,3,9,101,1,9,9,4,9,3,9,102,2,9,9,4,9,3,9,1001,
9,1,9,4,9,3,9,1001,9,2,9,4,9,99" } #[test] fn day7_examples() { assert_eq!( five_amplifiers_in_sequence( "3,15,3,16,1002,16,10,16,1,16,15,15,4,15,99,0,0", vec![4, 3, 2, 1, 0] ), 43210 ); assert_eq!( five_amplifiers_in_sequence( "3,23,3,24,1002,24,10,24,1002,23,-1,23,101,5,23,23,1,24,23,23,4,23,99,0,0", vec![0, 1, 2, 3, 4] ), 54321 ); assert_eq!( five_amplifiers_in_sequence( "3,31,3,32,1002,32,10,32,1001,31,-2,31,1007,31,0,33,1002,33,7,33,1,33,31,31,1,32,31,31,4,31,99,0,0,0", vec![1, 0, 4, 3, 2] ), 65210 ); assert_eq!( five_amplifiers_in_a_feedback_loop("3,26,1001,26,-4,26,3,27,1002,27,2,27,1,27,26,27,4,27,1001,28,-1,28,1005,28,6,99,0,0,5", vec![9, 8, 7, 6, 5]), Some(139629729)); assert_eq!( five_amplifiers_in_a_feedback_loop("3,52,1001,52,-5,52,3,53,1,52,56,54,1007,54,5,55,1005,55,26,1001,54,-5,54,1105,1,12,1,53,54,53,1008,54,0,55,1001,55,1,55,2,53,55,53,4,53,1001,56,-1,56,1005,56,6,99,0,0,0,0,10", vec![9, 7, 8, 5, 6]), Some(18216)) } #[test] fn day7_part_two() { let mut data = vec![5, 6, 7, 8, 9]; let heap = Heap::new(&mut data); let mut permutations = Vec::new(); for data in heap { permutations.push(data.clone()); } let mut res: Vec<i64> = permutations .into_iter() .map(|phase_setting| { five_amplifiers_in_a_feedback_loop(input_day7(), phase_setting).unwrap() }) .collect(); res.sort(); assert_eq!(*res.last().unwrap(), 70602018) } fn input_day9() -> &'static str { 
"1102,34463338,34463338,63,1007,63,34463338,63,1005,63,53,1102,3,1,1000,109,988,209,12,9,1000,209,6,209,3,203,0,1008,1000,1,63,1005,63,65,1008,1000,2,63,1005,63,904,1008,1000,0,63,1005,63,58,4,25,104,0,99,4,0,104,0,99,4,17,104,0,99,0,0,1102,1,30,1010,1102,1,38,1008,1102,1,0,1020,1102,22,1,1007,1102,26,1,1015,1102,31,1,1013,1102,1,27,1014,1101,0,23,1012,1101,0,37,1006,1102,735,1,1028,1102,1,24,1009,1102,1,28,1019,1102,20,1,1017,1101,34,0,1001,1101,259,0,1026,1101,0,33,1018,1102,1,901,1024,1101,21,0,1016,1101,36,0,1011,1102,730,1,1029,1101,1,0,1021,1102,1,509,1022,1102,39,1,1005,1101,35,0,1000,1102,1,506,1023,1101,0,892,1025,1101,256,0,1027,1101,25,0,1002,1102,1,29,1004,1102,32,1,1003,109,9,1202,-3,1,63,1008,63,39,63,1005,63,205,1001,64,1,64,1106,0,207,4,187,1002,64,2,64,109,-2,1208,-4,35,63,1005,63,227,1001,64,1,64,1105,1,229,4,213,1002,64,2,64,109,5,1206,8,243,4,235,1106,0,247,1001,64,1,64,1002,64,2,64,109,14,2106,0,1,1105,1,265,4,253,1001,64,1,64,1002,64,2,64,109,-25,1201,4,0,63,1008,63,40,63,1005,63,285,1106,0,291,4,271,1001,64,1,64,1002,64,2,64,109,14,2107,37,-7,63,1005,63,313,4,297,1001,64,1,64,1106,0,313,1002,64,2,64,109,-7,21101,40,0,5,1008,1013,37,63,1005,63,333,1105,1,339,4,319,1001,64,1,64,1002,64,2,64,109,-7,1207,0,33,63,1005,63,355,1106,0,361,4,345,1001,64,1,64,1002,64,2,64,109,7,21102,41,1,9,1008,1017,41,63,1005,63,387,4,367,1001,64,1,64,1106,0,387,1002,64,2,64,109,-1,21102,42,1,10,1008,1017,43,63,1005,63,411,1001,64,1,64,1106,0,413,4,393,1002,64,2,64,109,-5,21101,43,0,8,1008,1010,43,63,1005,63,435,4,419,1106,0,439,1001,64,1,64,1002,64,2,64,109,16,1206,3,455,1001,64,1,64,1106,0,457,4,445,1002,64,2,64,109,-8,21107,44,45,7,1005,1017,479,4,463,1001,64,1,64,1106,0,479,1002,64,2,64,109,6,1205,5,497,4,485,1001,64,1,64,1106,0,497,1002,64,2,64,109,1,2105,1,6,1105,1,515,4,503,1001,64,1,64,1002,64,2,64,109,-10,2108,36,-1,63,1005,63,535,1001,64,1,64,1105,1,537,4,521,1002,64,2,64,109,-12,2101,0,6,63,1008,63,32,63,1005,63,561,1001,64,1,64,1105,1,563,4,543,1002,64,2,6
4,109,25,21108,45,46,-2,1005,1018,583,1001,64,1,64,1105,1,585,4,569,1002,64,2,64,109,-23,2108,34,4,63,1005,63,607,4,591,1001,64,1,64,1106,0,607,1002,64,2,64,109,3,1202,7,1,63,1008,63,22,63,1005,63,633,4,613,1001,64,1,64,1106,0,633,1002,64,2,64,109,12,21108,46,46,3,1005,1015,651,4,639,1106,0,655,1001,64,1,64,1002,64,2,64,109,-5,2102,1,-1,63,1008,63,35,63,1005,63,679,1001,64,1,64,1105,1,681,4,661,1002,64,2,64,109,13,21107,47,46,-7,1005,1013,701,1001,64,1,64,1105,1,703,4,687,1002,64,2,64,109,-2,1205,2,715,1106,0,721,4,709,1001,64,1,64,1002,64,2,64,109,17,2106,0,-7,4,727,1105,1,739,1001,64,1,64,1002,64,2,64,109,-23,2107,38,-6,63,1005,63,759,1001,64,1,64,1106,0,761,4,745,1002,64,2,64,109,-3,1207,-4,40,63,1005,63,779,4,767,1105,1,783,1001,64,1,64,1002,64,2,64,109,-8,2101,0,-1,63,1008,63,35,63,1005,63,809,4,789,1001,64,1,64,1105,1,809,1002,64,2,64,109,-6,2102,1,8,63,1008,63,32,63,1005,63,835,4,815,1001,64,1,64,1106,0,835,1002,64,2,64,109,6,1201,5,0,63,1008,63,37,63,1005,63,857,4,841,1106,0,861,1001,64,1,64,1002,64,2,64,109,2,1208,0,32,63,1005,63,883,4,867,1001,64,1,64,1106,0,883,1002,64,2,64,109,23,2105,1,-2,4,889,1001,64,1,64,1106,0,901,4,64,99,21102,27,1,1,21101,0,915,0,1106,0,922,21201,1,55337,1,204,1,99,109,3,1207,-2,3,63,1005,63,964,21201,-2,-1,1,21101,0,942,0,1105,1,922,21202,1,1,-1,21201,-2,-3,1,21102,957,1,0,1105,1,922,22201,1,-1,-2,1106,0,968,21201,-2,0,-2,109,-3,2105,1,0" } #[test] fn relative_base() { assert_output( "109,1,204,-1,1001,100,1,100,1008,100,16,101,1006,101,0,99", None, vec![ 109, 1, 204, -1, 1001, 100, 1, 100, 1008, 100, 16, 101, 1006, 101, 0, 99, ], ) } #[test] fn large_numbers() { assert_output("104,1125899906842624,99", None, vec![1125899906842624]); assert_output( "1102,34915192,34915192,7,4,7,99,0", None, vec![1219070632396864], ) } #[test] fn day9_part_one() { assert_output(input_day9(), Some(1), vec![3765554916]) } fn assert_output(intcode: &str, input: Option<i64>, expected_output: Vec<i64>) { assert_eq!( computer(intcode, 
input.map_or(vec![], |v| vec![v])).unwrap(), expected_output ) } #[test] fn day9_part_two() { assert_output(input_day9(), Some(2), vec![76642]) } }

I'd appreciate any input on how to improve the code and make it more Rusty. I also have a few specific questions:

Is it a common practice to validate function input in Rust, like I did in five_amplifiers_in_a_feedback_loop for example?

The pop_and_send function does two things instead of one: it not only empties a vector, but it sends the last value to a channel. What would be a better way to break this down?

In five_amplifiers_in_sequence I'm "chaining" five computers in sequence and pass on the output from one computer to the next one. Is there a way to further abstract this, maybe my choice for State can be improved?

In five_amplifiers_in_a_feedback_loop I attempted multithreading in Rust. Is this the way to do it?

Thanks!

Answer: That is quite a lot of code, and fun to review. Let's start with the big picture.

Code organization

The general organization of the code can be improved. In its current form, the code is scattered in a bunch of functions, some of which are hard to understand without context. The order of functions is also a bit puzzling. I would envision an interface along the lines of: (naming and other details here are quite arbitrary)

pub mod Intcode
    pub struct Program
        pub fn parse_from — corresponding to str_to_intcode
    pub struct Computer { instruction_pointer: usize, ... }
        pub fn new — constructs a computer (with settings, if any)
        pub fn compute — takes a program, executes it, and returns the result
        private helper functions
    pub fn execute — combines all steps in one for convenience

and other necessary items. Now, instead of passing States everywhere, the functions become methods and associated functions of the Computer struct. I would probably start the development with something like this, and gradually refine it as I complete the code.
Error handling

Instead of &str and String, it is preferable to define proper error types and embed the error message in their Display implementations. See "Why do many exception messages not contain useful details?" I recommend using the anyhow crate to handle errors. It removes a lot of boilerplate and simplifies the propagation of errors.

Naming

The naming throughout your code also leaves room for improvement; I'll touch on this later.

Types

Personally, I would use type aliases to clarify the meaning of types like i64, u8, etc.:

type Word = i64;
type ParameterMode = u8;

For memory indexes, usize seems to be a better fit than u32, as you are spamming as usize everywhere.

Details

Now let's go through the code and pay attention to the details.

use crate::ComputeResult::{CanContinue, Halt, WaitingForInput};

Glob-imports (or variations thereof), especially global ones, are generally considered bad form (with some exceptions).

use std::sync::mpsc;
use std::sync::mpsc::{Receiver, Sender};
use std::thread;

These use declarations can be condensed:

use std::{
    sync::mpsc::{self, Receiver, Sender},
    thread,
};

The self keyword resolves to the current module in a path — in this case, std::sync::mpsc.

fn main() {
    let input = /* ... */;
    println!("Hello, world, {:?}", str_to_intcode(input));
}

The main function is supposed to be the entry point to a binary. In your code, it looks like the remnant of debugging. Simply remove it if your code is intended as a library.

fn str_to_intcode(string: &str) -> Vec<i64> {
    string
        .split_terminator(",")
        .map(|s| s.parse().unwrap())
        .collect()
}

Seeing as you used split_terminator instead of the regular split here, are you sure that the last number, which isn't followed by a comma, does not count as part of the program? If so, it might make sense to explicitly indicate this in the code.
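To make the error-handling suggestion above concrete without pulling in the anyhow crate, a stdlib-only sketch of a dedicated error type could look like this. The type and variant names here are illustrative, not taken from the reviewed code:

```rust
use std::fmt;

// Illustrative replacement for `Result<_, String>`-style errors:
// each variant carries the context needed for a useful message.
#[derive(Debug)]
enum IntcodeError {
    UnknownOpcode(i64),
    OutOfBounds { offset: usize, len: usize },
}

impl fmt::Display for IntcodeError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            IntcodeError::UnknownOpcode(op) => write!(f, "unknown opcode {}", op),
            IntcodeError::OutOfBounds { offset, len } => {
                write!(f, "offset {} out of bounds (program length {})", offset, len)
            }
        }
    }
}

// With Debug + Display in place, the type plugs into the standard
// error machinery, so it composes with `?` and Box<dyn Error>.
impl std::error::Error for IntcodeError {}

fn main() {
    let e = IntcodeError::UnknownOpcode(42);
    println!("{}", e); // prints "unknown opcode 42"
}
```

With such a type, compute could return Result<ComputeResult, IntcodeError> and the string formatting moves out of the interpreter loop into the Display impl.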
struct State { instruction_pointer: u32, intcode: Vec<i64>, input: Vec<i64>, output: Vec<i64>, relative_base: i64, } As I mentioned before, it makes sense to expand this into a full-fledged Computer struct. enum ComputeResult { Halt, CanContinue, WaitingForInput, } Status might be a better name. Also, perhaps Halt / Success / Blocked? //todo turn into an enumeration instead of using u8 for the parameter modes? I agree. fn extend_memory(memory_index: u32, state: &mut State) { if memory_index >= state.intcode.len() as u32 { state.intcode.resize((memory_index + 1) as usize, 0); } } Following the precedent set by the standard library, reserve may be more descriptive than extend. fn five_amplifiers_in_sequence(intcode: &str, phase_setting: Vec<i64>) -> i64 { computer(intcode, vec![0, phase_setting[0]]) //todo refactor pop for something immutable? .and_then(|mut o| { computer(intcode, vec![o.pop().unwrap(), phase_setting[1]]) }) .and_then(|mut o| { computer(intcode, vec![o.pop().unwrap(), phase_setting[2]]) }) .and_then(|mut o| { computer(intcode, vec![o.pop().unwrap(), phase_setting[3]]) }) .and_then(|mut o| { computer(intcode, vec![o.pop().unwrap(), phase_setting[4]]) }) .ok() .unwrap() .pop() .unwrap() } Phew ... are you sure you don't want a loop here? I assume there is a logic to the number 5 here. Consider using a const to signify. fn computer(intcode: &str, input: Vec<i64>) -> Result<Vec<i64>, &str> { let mut state = state_from_string(intcode); state.input = input; loop { match compute(&mut state) { Ok(r) => match r { Halt | WaitingForInput => break Ok(state.output), CanContinue => continue, }, //todo refactor the nested match and simplify the error mapping Err(_) => break Err("bam"), } } } The error handling seems sub-optimal. Also, the ? operator is handy. 
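The loop suggested above for five_amplifiers_in_sequence could be a fold over the phase settings. In this sketch, run_stage is a stub standing in for the real computer call, and its arithmetic is made up purely so the example runs on its own:

```rust
// Hypothetical sketch: chaining N amplifier stages with a fold
// instead of five hand-written and_then calls.
fn run_stage(input: i64, phase: i64) -> i64 {
    // Stub: a real implementation would execute the Intcode program
    // with `vec![input, phase]` and pop the output.
    input * 2 + phase
}

fn amplifiers_in_sequence(phase_setting: &[i64]) -> i64 {
    // Thread the signal through the stages, starting from 0.
    phase_setting
        .iter()
        .fold(0, |signal, &phase| run_stage(signal, phase))
}

fn main() {
    // Five stages, each doubling the signal and adding its phase.
    let out = amplifiers_in_sequence(&[1, 2, 3, 4, 5]);
    println!("{}", out); // prints 57
}
```

The same shape works for any number of amplifiers, so the constant 5 stops being baked into the control flow.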
fn pop_and_send(state: &mut State, rx: &Sender<i64>) -> i64 { let mut last = 0; loop { //todo for future use-cases, this might not be desired behaviour, replace with drain match state.output.pop() { None => break last, Some(v) => { last = v; rx.send(v) } }; } } Basically inspect + last. fn five_amplifiers_in_a_feedback_loop( intcode: &'static str, phase_setting: Vec<i32>, ) -> Option<i64> { assert_eq!( phase_setting.len(), 5, "phase sequence of length five expected, while {} provided", phase_setting.len() ); let (tx_a, rx_a): (Sender<i64>, Receiver<i64>) = mpsc::channel(); let (tx_b, rx_b): (Sender<i64>, Receiver<i64>) = mpsc::channel(); let (tx_c, rx_c): (Sender<i64>, Receiver<i64>) = mpsc::channel(); let (tx_d, rx_d): (Sender<i64>, Receiver<i64>) = mpsc::channel(); let (tx_e, rx_e): (Sender<i64>, Receiver<i64>) = mpsc::channel(); let lambda = move |name: &str, tx: Sender<i64>, rx: Receiver<i64>| -> i64 { let mut state = state_from_string(intcode); loop { match compute(&mut state) { Ok(r) => match r { Halt => { break pop_and_send(&mut state, &tx); } WaitingForInput => { pop_and_send(&mut state, &tx); match rx.recv() { Ok(v) => { state.input.push(v); continue; } Err(e) => panic!("{} error: {}", name, e), } } CanContinue => continue, }, //todo refactor the nested match and simplify the error mapping Err(_) => panic!("{} ",), } } }; tx_a.send(phase_setting[0] as i64) .and_then(|_| tx_a.send(0)) .and_then(|_| tx_b.send(phase_setting[1] as i64)) .and_then(|_| tx_c.send(phase_setting[2] as i64)) .and_then(|_| tx_d.send(phase_setting[3] as i64)) .and_then(|_| tx_e.send(phase_setting[4] as i64)) .unwrap(); let _a = thread::spawn(move || lambda("A", tx_b, rx_a)); let _b = thread::spawn(move || lambda("B", tx_c, rx_b)); let _c = thread::spawn(move || lambda("C", tx_d, rx_c)); let _d = thread::spawn(move || lambda("D", tx_e, rx_d)); let e = thread::spawn(move || lambda("E", tx_a, rx_e)); e.join().ok() } Again, consider using loops. 
Most of the matches can also be eliminated using the ? operator or expect.

fn compute(state: &mut State) -> Result<ComputeResult, String> {
    let offset = state.instruction_pointer;
    //todo is this defensive programming a good idea?
    assert!(
        offset < state.intcode.len() as u32,
        "offset {} out of bounds, intcode length {}",
        offset,
        state.intcode.len()
    );
    assert!(state.intcode.len() > 0, "no intcode to process");
    let (a, b, c, opcode) = parameter_modes(state.intcode[offset as usize]);
    // ...
}

Is it a good idea? If you consider this to be the logical place to validate the internal state, then go ahead. With proper encapsulation, however, the check can be safely elided (or changed to debug_assert!) if the code is refactored to maintain class invariants. Also, opcode does not look like a parameter mode.

This entire compute function is too long; break it down into smaller functions. Also, addition and multiplication can be merged by passing an argument that determines the operation to use; the same applies to some other operations.

I didn't go into the details to save space, so feel free to ping me if you find any part of this unclear.
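The suggested merge of addition, multiplication, less-than and equals could be sketched as one helper that takes the combining function as a closure. The addressing-mode and memory-extension details of the real interpreter are deliberately elided here:

```rust
// Sketch: opcodes 1 (add), 2 (multiply), 7 (less-than) and 8 (equals)
// differ only in how the two operands are combined, so a single helper
// with a closure parameter covers all four.
fn binary_op(memory: &mut [i64], a: usize, b: usize, dest: usize, op: impl Fn(i64, i64) -> i64) {
    memory[dest] = op(memory[a], memory[b]);
}

fn main() {
    let mut mem = [3, 4, 0, 0];
    binary_op(&mut mem, 0, 1, 2, |x, y| x + y);          // add
    binary_op(&mut mem, 0, 1, 3, |x, y| (x < y) as i64); // less-than
    println!("{:?}", mem); // prints [3, 4, 7, 1]
}
```

In the actual interpreter, each opcode arm would shrink to a single binary_op call after the shared parameter decoding.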
{ "domain": "codereview.stackexchange", "id": 40662, "tags": "rust" }
Methods of detecting gravitino DM and possible implications for theories of everything
Question: Let me summarize my thoughts about topics in theoretical physics:

Theory of everything: The theory of everything aims to unite all four forces of nature into one single elegant equation.

Supersymmetry: important as, roughly put, it explains the nature of force in terms of matter. The gravitino is the supersymmetric partner of the graviton, which is a hypothetical particle that mediates the gravitational force.

Dark matter candidate: It is theorized that the graviton has no mass while, on the other hand, the gravitino is very massive. Hence some nerds proposed dark matter may be made mainly of gravitinos.

Existence of gravitinos: It is suspected that early in the formation of the Universe, when it was still mostly in a plasma state, gluons (the force carriers of the strong nuclear force) collided with each other and produced gravitinos.

My questions are:

Is there any reliable, direct or indirect, method to detect gravitinos?

If gravitinos were observed, would it mean that supersymmetry was the theory of everything?

Answer: The answer to the first question is no. It is a basic but subtle point... Science is about adopting the simplest explanation of phenomena. This is usually referred to as Occam's razor. With very exotic, high-energy theories like string theory or, here, a gravitino (I think you have a typo!), there is no single clear signature that cannot be explained by some other model in a modern experiment. Future experiments are a different game. The reason is that the energy scales where these phenomena occur are so far from what we may probe experimentally that many different models look the same at these low energies. This is similar to how the modern atom and the older plum-pudding model look the same at lower energies with the blunt experimental techniques used in the past. Many theories of physics reduce to older models in low-energy experiments. For example, at low energy relativistic effects die away and Newtonian physics becomes accurate.
Indeed, gravitino models predict things measurable in modern experiments, but those measurements can be explained in many other ways. We cannot probe the scale where all the different theories make different predictions. If gravitinos were observed, supersymmetry would not thereby be the theory of everything. Supersymmetry is not a theory; it is a symmetry a theory may have. Discovering a gravitino would indicate supersymmetry is a symmetry of nature, lending significant support to string theory as the theory of everything.
{ "domain": "physics.stackexchange", "id": 21112, "tags": "particle-physics, supersymmetry, dark-matter, theory-of-everything" }
What is the domain of momentum operator on $\mathbb{R}$?
Question: Observables in QM are postulated to be self-adjoint operators. Those have to obey $\hat A \vphantom{A}^+ \! = \hat A$, including the equality of their domains. If we work on a finite interval $(a, b)$, an example of such an observable is the momentum operator: $$ \hat p: {\rm D}(\hat p) \to L^2 \big( (a,b) \big) \\[5pt] {\rm D}(\hat p) = \big\{\; f \in W^{1,2} \big( (a,b) \big) \; \big| \; f(a+) = f(b-) \;\big\} \\[5pt] \hat p f = -{\rm i} f' $$ We can easily check that $\hat p$ with this domain is indeed self-adjoint using integration by parts: $$ \big( \hat p f, \; g \big)_{L^2} = {\rm i} \big( f', \; g \big)_{L^2} = \big[ fg \big]_a^b - {\rm i} \big( f, \; g' \big)_{L^2} = \big[ fg \big]_a^b + \big( f, \; -{\rm i}g' \big)_{L^2} $$ Here, $g$ has to be from $W^{1,2}$ in order to have a derivative, and the necessary and sufficient condition for $[fg]_a^b$ to be zero is $g(a+) = g(b-)$, hence ${\rm D}(\hat p^+) = {\rm D}(\hat p)$ and $\hat p$ is self-adjoint. However, this doesn't work on infinite intervals. In $L^2(\mathbb{R})$, functions either don't have a limit at infinity, or it's zero. If we require that $f(-\infty) \to 0, \;\; f(+\infty) \to 0$, it is sufficient for $g$ to be only bounded at infinity and we get ${\rm D}(\hat p^+) \supsetneq {\rm D}(\hat p)$. On the other hand, if we require that $f$ is bounded at infinity, we get that $g$ has to vanish at infinity, therefore ${\rm D}(\hat p^+) \!\subsetneq {\rm D}(\hat p)$. How do I achieve ${\rm D}(\hat p^+) = {\rm D}(\hat p)$ on $L^2(\mathbb{R})$? What is the domain of the momentum operator on $\mathbb{R}$? Answer: As it turns out, this actually does work on the infinite interval. The important observation the question is missing is that all functions $f \in W^{1,2}(\mathbb{R})$ are guaranteed to vanish at infinity – see this proof by Valter Moretti.
This means that all the “different” domains that I considered were actually the same set: $$ \big\{\, f \in W^{1,2}(\mathbb{R}) \;\big|\; f(-\infty) = f(+\infty) = 0 \,\big\} = \big\{\, f \in W^{1,2}(\mathbb{R}) \;\big|\; f \text{ is bounded at } \infty \,\big\} = W^{1,2}(\mathbb{R}) $$ Consequently, the problem of ${\rm D}(\hat p)$ and ${\rm D}(\hat p^+)$ not being equal for different boundary conditions disappears, and the one true domain for the self-adjoint momentum operator is: $$ {\rm D}(\hat p) = W^{1,2}(\mathbb{R}) $$
{ "domain": "physics.stackexchange", "id": 74257, "tags": "quantum-mechanics, hilbert-space, operators, momentum, mathematical-physics" }
How to find the Miller indices for a family of planes?
Question: So I'm a bit confused about this question. It asks for the Miller indices for the "families of planes". Is there a single set of Miller indices for each cubic unit cell which I can use to represent all of the planes for that unit cell? For a) I have: $(1, 0, 0)$, $(-1, 0, 0)$ For b) I have: $(0, 1, 0)$, $(0, -1, 0)$, $(0, 3, 0)$, $(0, -3, 0)$ For c) I have: $(3, 2, 0)$, $(-3, -2, 0)$ and I have no idea how to find the others for this one. Also I noticed that the planes for each cubic unit cell have the same direction. I know that enclosing Miller indices in square brackets represents a direction, but isn't this just a vector, not a representation of a family of planes? Answer: Assume a 3D lattice and denote its reciprocal lattice basis vectors as $\vec{b}_{1,2,3}$. The symbol $\left(h,k,l\right)$ stands for all the planes orthogonal to the vector $h\vec{b}_{1}+k\vec{b}_{2}+l\vec{b}_{3}$ (also written $\left[h,k,l\right]$ as you stated), so in fact there is no difference between $\left(1,0,0\right)$ and $\left(-1,0,0\right)$ for instance.
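To make the last point concrete, here is a small sketch (plain Python; the helper names and the specific index triples are mine, chosen to match the examples in the question). For a cubic lattice the reciprocal basis is orthogonal, so the normal to the $(h,k,l)$ family is simply the vector $(h,k,l)$, and $(h,k,l)$ and $(-h,-k,-l)$ give parallel normals:

```python
# For a cubic lattice the reciprocal basis is orthogonal, so the normal
# to the (h, k, l) family of planes is simply the vector (h, k, l).
def normal(h, k, l):
    return (h, k, l)

def are_parallel(u, v):
    # the cross product vanishes iff the two vectors are (anti)parallel
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return cx == cy == cz == 0

# (1,0,0) and (-1,0,0) describe the same family of planes:
assert are_parallel(normal(1, 0, 0), normal(-1, 0, 0))
# so do (3,2,0) and (-3,-2,0) from part (c):
assert are_parallel(normal(3, 2, 0), normal(-3, -2, 0))
```

This is why a Miller-index triple and its negation are listed as one family rather than two.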
{ "domain": "physics.stackexchange", "id": 45034, "tags": "homework-and-exercises, crystals, x-ray-crystallography" }
When will two pendulums be in phase with each other?
Question: Two pendulums with different frequencies are released at the same time; when will these two pendulums be in phase? From what I know, the period of a pendulum at small displacement is not affected by its amplitude, so I tried to use the period formula $$T=2\pi \sqrt{\frac{l}{g}}$$ and substitute $l$, $g_{1}$ and $g_{2}$, but I am not sure how to proceed. Answer: I've not heard people talk about two oscillators at different frequencies being "in phase." Instead, I will present a solution for the time instants when the two oscillators are at the same phase in their oscillation. WLOG, let $\theta_1(0)=\theta_2(0)=0$. Define $\omega_n\equiv\sqrt{\frac{g_n}{l}}$ for $n\in\{1,2\}$, i.e. the angular frequencies of the two oscillators. Then we have $\theta_1(t)=A_1\sin(\omega_1t)$ and $\theta_2(t)=A_2\sin(\omega_2t)$. To find when the two oscillators coincide in phase, we solve the following equation for $t$: $$\sin(\omega_1t)=\sin(\omega_2t)$$ It's easiest to visualize solutions to this equation by graphing $\sin(x)=\sin(y)$. Solutions occur when $(\omega_1-\omega_2)t=2\pi n$ or $(\omega_1+\omega_2)t=\pi+2\pi n$, for integer $n$. Rearranged for $t$, we have $t=\frac{2\pi n}{\omega_1-\omega_2}$ and $t=\frac{\pi+2\pi n}{\omega_1+\omega_2}$.
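As a quick numerical sanity check of the two solution families, here is a sketch with made-up values for $l$, $g_1$ and $g_2$ (none of these numbers are from the question):

```python
import math

# Hypothetical values: two pendulums of the same length l with
# different effective gravitational accelerations g1 and g2.
l, g1, g2 = 1.0, 9.81, 4.90
w1, w2 = math.sqrt(g1 / l), math.sqrt(g2 / l)

# Times when the two oscillators coincide in phase, from the two
# solution families derived above (n ranges over the integers).
def coincidence_times(n):
    return (2 * math.pi * n / (w1 - w2),              # (w1 - w2) t = 2 pi n
            (math.pi + 2 * math.pi * n) / (w1 + w2))  # (w1 + w2) t = pi + 2 pi n

for n in range(5):
    for t in coincidence_times(n):
        # at each such t, sin(w1 t) equals sin(w2 t) up to rounding error
        assert abs(math.sin(w1 * t) - math.sin(w2 * t)) < 1e-9
```

Every time returned by either family indeed satisfies $\sin(\omega_1 t)=\sin(\omega_2 t)$.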
{ "domain": "physics.stackexchange", "id": 82779, "tags": "homework-and-exercises, newtonian-mechanics, harmonic-oscillator, oscillators" }
Divergence of $\frac{ \hat {\bf r}}{r^2} \equiv \frac{{\bf r}}{r^3}$, what is the 'paradox'?
Question: I just started Griffiths' Introduction to Electrodynamics and I stumbled upon the divergence of $\frac{ \hat r}{r^2} \equiv \frac{{\bf r}}{r^3}$; now from the book, Griffiths says: Now what is the paradox, exactly? Ignoring any physical intuition behind this (a point charge at the origin), how are we supposed to believe that the source of $\vec v$ is concentrated at the origin mathematically? Or are we forced to believe that because there was a contradiction with the divergence theorem? Also, how would the situation differ if $\vec v$ were the same vector function but not for a point charge? Or is that impossible? Answer: Now what is the paradox, exactly? The paradox is that the vector field $\vec{v}$ considered obviously points away from the origin and hence seems to have a non-zero divergence; however, when you actually calculate the divergence, it turns out to be zero. How are we supposed to believe that the source of $\vec v$ is concentrated at the origin mathematically? The most important point to observe is that $\nabla \cdot \vec v = 0$ everywhere except at the origin. The diverging field lines all emanate from the origin. Our calculation cannot account for that point, since $\vec v$ blows up at $r = 0$. Moreover, eq. (1.84) is not even valid for $r = 0$. In other words, $\nabla \cdot \vec v$ is indeterminate at that point. However, if you apply the divergence theorem, you will find $$\int \nabla \cdot \vec v \ \text{d}V = \oint \vec v \cdot \text{d}\vec a = 4 \pi$$ Irrespective of the radius of a sphere centred at the origin, we always obtain the surface integral $4 \pi$. The only conclusion is that this must be contributed by the point $r = 0$. This serves as the motivation for defining the Dirac delta function: a function which vanishes everywhere except at a single point, where it blows up, yet has a finite area under the curve.
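The two halves of the paradox can be checked numerically. The sketch below (plain Python, finite differences; grid sizes and sample points are arbitrary choices of mine) confirms that the divergence of $\vec r/r^3$ vanishes at every point away from the origin, while the flux through a sphere around the origin is $4\pi$ regardless of its radius:

```python
import math

def v(x, y, z):
    # the field v = r_hat / r^2 = vec(r) / r^3
    r = math.sqrt(x * x + y * y + z * z)
    return (x / r**3, y / r**3, z / r**3)

def divergence(x, y, z, h=1e-5):
    # central finite differences of each Cartesian component
    dvx = (v(x + h, y, z)[0] - v(x - h, y, z)[0]) / (2 * h)
    dvy = (v(x, y + h, z)[1] - v(x, y - h, z)[1]) / (2 * h)
    dvz = (v(x, y, z + h)[2] - v(x, y, z - h)[2]) / (2 * h)
    return dvx + dvy + dvz

# The divergence vanishes at every sampled point away from the origin...
for p in [(1, 0, 0), (0.3, -0.7, 2.0), (5, 5, 5)]:
    assert abs(divergence(*p)) < 1e-6

# ...yet the flux through a sphere of any radius R centred on the origin
# is 4*pi: |v| = 1/R^2 there, da = R^2 sin(th) dth dphi, and the phi
# integral contributes a factor 2*pi by symmetry.
R, N = 2.0, 400
flux, dth = 0.0, math.pi / N
for i in range(N):
    th = (i + 0.5) * dth
    flux += (1 / R**2) * R**2 * math.sin(th) * dth * 2 * math.pi
assert abs(flux - 4 * math.pi) < 1e-3
```

The mismatch between "divergence zero everywhere we can compute it" and "total flux $4\pi$" is exactly what the delta function resolves.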
{ "domain": "physics.stackexchange", "id": 59466, "tags": "electrostatics, electric-fields, gauss-law, vector-fields, dirac-delta-distributions" }
Power absorbed by electron in plane electromagnetic wave
Question: How can the power (in watts) absorbed by the electron be calculated, knowing the incident electric field amplitude $ E_0 $, wavelength $ \lambda $, and the electron momentum relaxation time $ \tau $ in the medium? The units seem to check out for $ \frac {|q_e| E_0 \lambda }\tau $ (the result is in watts), but the correct answer is off by several orders of magnitude. What is missing? Answer: I guess you're also doing the Plasmonics course on edX. (Great course by the way, although I find the exercises quite difficult.) I solved the task using the formula for power that was shown in the lecture video: $$ P = <\cfrac{\vec{F}+\vec{F}^*}{2}\cdot \cfrac{\vec{v}+\vec{v}^*}{2}> = \cfrac{1}{2}\Re (\vec{F}^* \cdot \vec{v}) $$ The velocity is the time derivative of the position $\vec{r}$. To get $\vec{r}$, let's solve the equation of motion that accounts for collisions (this was done in the edX course): $$ m_e\ddot{\vec{r}}=-\gamma m_e \dot{\vec{r}} - |e|\vec{E_0}e^{-i\omega t}$$ which yields the solution: $$\vec{r}(t)=\cfrac{|e|}{m_e\, \omega \,(\omega + i\gamma)}\vec{E_0}e^{-i\omega t}$$ $\vec{F}$ is the Lorentz force due to the electric field (the electron's charge is $-|e|$, matching the sign in the equation of motion): $$\vec{F} = -|e| \vec{E_0}e^{-i\omega t}$$ So that gives me: $$ P = \cfrac{1}{2}\Re (\vec{F}^* \cdot \vec{v}) = \cfrac{|e|^2}{2} \Re \left(E_0^2 \cfrac{i\omega}{m_e \omega(\omega + i\gamma)} \right) = \cfrac{E_0^2\,e^2 \gamma}{2 m_e \,(\omega^2 + \gamma^2)} $$ Now we know that $\lambda = \frac{2 \pi c}{\omega}$, from which we can calculate $\omega$ given the wavelength. And following again the definitions from the edX course, $\gamma = \frac{1}{\tau}$. As you can see, the unit is still OK although the expression changed a lot. I would like to know if there is a simpler way but I cannot think of any right now.
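To see why the dimensionally plausible guess $|q_e|E_0\lambda/\tau$ fails, here is a numerical sketch with hypothetical input values (the actual exercise numbers are not given in the question, so $E_0$, $\lambda$ and $\tau$ below are illustrative):

```python
import math

# Hypothetical inputs (illustrative only, not from the exercise).
E0 = 1.0e5        # V/m, field amplitude
lam = 500e-9      # m, wavelength
tau = 1.0e-14     # s, momentum relaxation time

e = 1.602176634e-19     # C, elementary charge
m_e = 9.1093837015e-31  # kg, electron mass
c = 2.99792458e8        # m/s, speed of light

omega = 2 * math.pi * c / lam   # angular frequency from the wavelength
gamma = 1 / tau                 # collision rate

# Absorbed power from the formula derived above.
P = E0**2 * e**2 * gamma / (2 * m_e * (omega**2 + gamma**2))

# Compare with the unit-correct but wrong guess |e| E0 lam / tau:
P_wrong = e * E0 * lam / tau
assert P > 0 and P_wrong / P > 1e3  # off by many orders of magnitude
```

With these values $P$ comes out around $10^{-15}\,$W while the naive guess is around $10^{-6}\,$W, consistent with the "off by several orders of magnitude" observation in the question.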
{ "domain": "physics.stackexchange", "id": 55688, "tags": "electrons, power, absorption, plasmon" }
Rviz broken, no transform from child frame to parent frame
Question: Hello. I have just found that my rviz is broken, with several messages of the form no transform from [child_frame] to [parent_frame]. Sorry, I cannot post screenshots. Any idea how to fix this? Ar. Note: I have done the test with rrbot, the gazebo test package, and my own robot model. Originally posted by Arn-O on ROS Answers with karma: 107 on 2013-09-22 Post score: 0 Answer: I was in fact missing the following package: ros-hydro-robot-state-publisher. I do not understand how it has disappeared. Originally posted by Arn-O with karma: 107 on 2013-09-26 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 15618, "tags": "rviz" }
Water electrolysis - what is happening to an iron anode?
Question: So I made an experiment to find a good electrolyte for water electrolysis. I tried citric acid, which turned out not to produce any gases at the anode; I tried sodium hydroxide, which turned out to be the very best; and I tried sodium carbonate. The $\ce{Na2CO3}$ solution was about 2 g/l. As my electrodes I decided to try some nails, very likely made of iron or some iron alloy. So I hooked up my power source, just two nine-volt batteries in series, and started the electrolysis. The cathode (negative terminal) bubbled vigorously, indicating the production of hydrogen gas. The anode (positive terminal), however, bubbled very little and started decomposing; gray-green flakes started falling off and sinking to the bottom. This decomposition started as soon as the power was supplied. This struck me, because I was expecting oxygen to be liberated at the anode! When I took my anode out, it was at first a dirty green or green-gray, but it quickly turned a rusty brown. So in conclusion, the cathode produced hydrogen with no sign of wear, while the anode made little gas and decomposed into probably some iron compound, which turned into probably iron oxide (I'm judging by the color here) on prolonged contact with air. So, what could it be? I thought it might be iron hydroxide, but that'd be brown. This however was green. Answer: On an anode made of iron, some of the removed electrons may be used oxidizing iron metal to ferrous iron(II) ions, or oxidizing iron(II) ions to iron(III), rather than bubbling off oxygen gas. Pushing more current through the cell is likely to result in gas evolution alongside the iron oxidation, since all three of these processes will run simultaneously. $$\ce{Fe(metal) -> Fe^{2+}(aq) + 2 e^- }$$ $$\ce{Fe^{2+}(aq) -> Fe^{3+}(aq) + e^- }$$ $$\ce{2H_2O(liquid) -> O_2 (gas) + 4H^+ (aq) + 4e^- }$$ Ferrous iron(II) ion solutions are a pale green, but rapidly oxidize in air to vivid red-brown ferric iron(III).
Iron oxides and hydroxides form precipitates and slimes in a variety of colors including black (magnetite), red (iron(III) oxide), and yellow (iron(III) hydroxide).
{ "domain": "chemistry.stackexchange", "id": 10868, "tags": "electrochemistry, water, metal" }
Difference between proper and comoving frames
Question: I'm reading the book "Introduction to Quantum Fields in Classical Backgrounds" by Mukhanov & Winitzki, and there in chapter 8, "The Unruh Effect", they introduce 3 reference frames. Laboratory frame: "is the usual inertial reference frame with the coordinates $(t,x,y,z)$". Proper frame: "is the accelerated system of reference that moves together with the observer; we shall also call it the accelerated frame". Comoving frame: "defined at a time $t_0$ is the inertial frame in which the accelerated observer is instantaneously at rest at $t=t_0$. (Thus the term 'comoving frame' actually refers to a different frame for each $t_0$)". Now, I don't understand why they say that the observer's proper acceleration at time $t=t_0$ is the 3-acceleration measured in the comoving frame at time $t_0$. Could you explain why? Also, I don't completely understand the definition of the comoving frame. Answer: Let's restrict to the case of special relativity, based on the comments under the original post. Let $x^\mu(t) = (t, \mathbf x(t))$ be the path of a timelike particle as measured in some inertial frame. Suppose that at some instant $t_0$, the particle is measured to have zero velocity in this frame; $$ \frac{d \mathbf x}{dt}(t_0) = 0 $$ Then this inertial frame is, by definition, comoving with the particle at time $t_0$. Indulging in a slight abuse of notation, suppose that $t(\tau)$ gives the inertial time as a function of proper time. Then we have $$ \frac{d}{d\tau}\mathbf x(t(\tau)) = \frac{d\mathbf x}{dt}(t(\tau))\frac{dt}{d\tau}(\tau) $$ and $$ \frac{d^2}{d\tau^2}\mathbf x(t(\tau)) = \frac{d^2\mathbf x}{dt^2}(t(\tau))\left[\frac{dt}{d\tau}(\tau)\right]^2 +\frac{d\mathbf x}{dt}(t(\tau))\frac{d^2t}{d\tau^2}(\tau) $$ Evaluate this at $\tau_0$ satisfying $t(\tau_0) = t_0$; in other words, $\tau_0$ is just the proper time at which the frames are comoving.
The second term vanishes by the comoving assumption, moreover the factor $\frac{dt}{d\tau}(\tau_0)$ just equals $1$ because recall that $dt = \gamma d\tau$ and since the particle is momentarily at rest relative to the comoving frame, $\gamma = 1$ at that instant. We therefore get $$ \frac{d^2}{d\tau^2}\mathbf x(t(\tau))\Big|_{\tau = \tau_0} = \frac{d^2\mathbf x}{dt^2}(t_0) $$ The left hand side is the proper acceleration at the "comoving instant" of the particle by definition. The right hand side is the acceleration as measured by the comoving observer, so this is the equality you wanted.
{ "domain": "physics.stackexchange", "id": 7049, "tags": "general-relativity, special-relativity, acceleration, reference-frames" }
Is there a rule of thumb for the initial value of loss function in a CNN?
Question: I notice that most of my successfully trained CNNs have an initial (i.e. first-epoch) loss in the 1-2 range. Typically, these are image classifiers, either for discrete classes (cat, dog, ship, etc.) or for semantic (pixelwise) segmentation. Occasionally, either because of poor design or an architecture misunderstanding on my part, I see an initial loss in the 60s; the other day one started at 200 (and didn't converge). So my question is: is there a heuristic that would allow me to determine a ballpark value for my initial loss? If so, on what aspects of my model is it contingent? Answer: In the Stanford CS231n coursework, Andrej Karpathy suggests the following: Look for correct loss at chance performance. Make sure you're getting the loss you expect when you initialize with small parameters. It's best to first check the data loss alone (so set regularization strength to zero). For example, for CIFAR-10 with a Softmax classifier we would expect the initial loss to be 2.302, because we expect a diffuse probability of 0.1 for each class (since there are 10 classes), and Softmax loss is the negative log probability of the correct class, so: -ln(0.1) = 2.302.
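This heuristic is one line of arithmetic, sketched here (the function name is mine) so the dependence on the number of classes is explicit:

```python
import math

# Expected initial loss for a softmax classifier at chance performance:
# with C classes and a roughly uniform output distribution, the data
# loss is -ln(1/C) = ln(C).
def chance_softmax_loss(num_classes):
    return -math.log(1.0 / num_classes)

# CIFAR-10 (C = 10) gives about 2.302, as quoted above.
assert abs(chance_softmax_loss(10) - 2.302) < 1e-3
# A 1000-class problem starts much higher, around ln(1000) ~ 6.9.
assert abs(chance_softmax_loss(1000) - math.log(1000)) < 1e-12
```

So an initial loss far above $\ln C$ (like the 60s or 200 mentioned in the question) signals a bug, bad initialization, or a forgotten normalization rather than a hard dataset.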
{ "domain": "datascience.stackexchange", "id": 1721, "tags": "machine-learning, deep-learning, loss-function" }
How to find canonical form of three site MPS?
Question: I am trying to implement the iTEBD algorithm for a certain model, where the Hamiltonian acts on three successive sites. This means that my time-evolution operator is a rank-6 tensor, acting on a rank-5 tensor (3 physical indices and 2 bond indices). I would now like to decompose this tensor back into canonical form by performing SVDs, but I am not sure how to go about doing this. In the two-site case, we reshape the tensor into a $(2D, 2D)$ matrix and do an SVD. How do we extend this to the three-site case? Answer: You can do it the same way as before -- you just first have to keep two legs blocked together and do the decomposition with respect to the third leg. Once this is done, you still have a tensor acting jointly on 2 physical spins, which you can decompose with another SVD.
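A minimal numpy sketch of the two-step splitting, assuming illustrative dimensions $d=2$, $D=4$ and a random rank-5 tensor standing in for the result of applying the three-site gate (no truncation of singular values is performed here, which a real iTEBD step would add):

```python
import numpy as np

d, D = 2, 4          # physical and bond dimensions (illustrative values)
rng = np.random.default_rng(0)

# A rank-5 tensor theta with legs (D_left, d1, d2, d3, D_right),
# standing in for three MPS sites after a three-site gate was applied.
theta = rng.standard_normal((D, d, d, d, D))

# Step 1: split off the first physical leg. Group (D_left, d1) as rows
# and (d2, d3, D_right) as columns, then SVD.
m = theta.reshape(D * d, d * d * D)
u, s, vh = np.linalg.svd(m, full_matrices=False)
chi1 = s.size
A1 = u.reshape(D, d, chi1)                       # first site tensor
rest = (np.diag(s) @ vh).reshape(chi1, d, d, D)  # still holds sites 2 and 3

# Step 2: split the remaining two-site tensor exactly as in the
# familiar two-site case.
m2 = rest.reshape(chi1 * d, d * D)
u2, s2, vh2 = np.linalg.svd(m2, full_matrices=False)
chi2 = s2.size
A2 = u2.reshape(chi1, d, chi2)                   # second site tensor
A3 = (np.diag(s2) @ vh2).reshape(chi2, d, D)     # third site tensor

# Contracting A1-A2-A3 reproduces theta.
rebuilt = np.einsum('aib,bjc,ckd->aijkd', A1, A2, A3)
assert np.allclose(rebuilt, theta)
```

In an actual iTEBD update one would truncate `s` and `s2` to the target bond dimension before reshaping, and keep the singular values around to restore the canonical form.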
{ "domain": "physics.stackexchange", "id": 92670, "tags": "computational-physics, tensor-network" }
A find method on a Product class
Question: The following class holds an array with products that are generated from a JSON file. This is what the JSON file looks like [ { "id": "A101", "description": "Screwdriver", "category": "1", "price": "9.75" }, { "id": "A102", "description": "Electric screwdriver", "category": "1", "price": "49.50" }, { "id": "B101", "description": "Basic on-off switch", "category": "2", "price": "4.99" } ] I'm using a find method on a Product class, but I'm in doubt whether storing all generated products in the class itself is a good idea. Should I maybe create a ProductCollection class? Should I build my products[] collection differently than with my method? Should I create a Factory for the product creation from my JSON file? This is my code class Product { private static $products = []; public static function initialize($url) { // Build the product from our JSON file $data = json_decode(file_get_contents($url)); foreach($data as $item) { $product = new Product(); $product->id = $item->id; $product->description = $item->description; $product->category = $item->category; $product->price = $item->price; self::$products[] = $product; } } public static function find($id) { foreach(self::$products as $product) { if($product->id == $id) { return $product; } } return false; } } Thank you. Answer: Look closely at your code. What's the major difference between what you have now and this evil snippet: $PRODUCTS = []; function loadProducts($url) { global $PRODUCTS; $data = json_decode(file_get_contents($url)); foreach ($data as $item) { $PRODUCTS[] = new Product($item);//assuming __construct(stdClass $data) } } function findProduct($id) { global $PRODUCTS; if (isset($PRODUCTS[$id])) { return $PRODUCTS[$id]; } return false; } There isn't much of a difference, is there? This is why statics, in PHP especially, are often referred to as globals in drag.
If the $PRODUCTS array were public in your code, or a true global (as is the case in my horrible snippet), why would I bother calling a method instead of just inlining $product = isset($PRODUCTS[$id]) ? $PRODUCTS[$id] : null;? You're essentially tightly coupling functionality and state, which is never a good idea. You're offering the user no clean way to fetch a certain data-set without that data being available to all parts of the application. A cleaner thing to do would be to have some sort of "fetcher" component: class JsonUrlReader {//horrible name, but it's late and this is what came up first protected $currentData = [];//for buffered reads /** * Allow user to pass URL + the class you want the data to be mapped on to * @param string $url * @param string|null $class */ public function fetchData($url, $class = null) { $data = json_decode(file_get_contents($url)); $items = []; foreach ($data as $item) { //pass data to constructor if class arg is given $item = $class ? new $class($item) : $item; $this->currentData[] = $items[] = $item; } return $items;//return current subset } } You could define interfaces that allow you to index the return array, you could add methods to fetch the $currentData property in full, or perhaps clear it. You could extend this class to fetch the data in a more specific way, or to focus on a known API (e.g. a wrapper for a specific API can have methods that make predefined, known calls and return known objects, while behind it all they just perform curl calls). Another problem you will face in time is testing. As it stands, your code is pretty much un-testable. The only thing you can do is pass files to the initialize method that contain known sets of data, but you're not checking the json_last_error value. At no point are you considering malformed JSON, timeouts or, worse still, invalid arguments (what if I pass null or false?).
You're assuming the caller will pass valid arguments at all times, you're also assuming the file_get_contents call will be successful, and that the contents will be valid JSON. What's more, you're assuming that this valid JSON will be decoded into objects of a particular format. Ask yourself this simple question: if the third party providing the data changes their format, what would make the most sense: you having to edit all the places where you're setting the data transfer objects (e.g. Product), or you having to change the Product object itself (the thing that represents the third-party data)? This is about all I have time for now, will update if I can...
{ "domain": "codereview.stackexchange", "id": 19714, "tags": "php" }
How could the universe be hyperbolic if hyperbolic space isn't symmetrical?
Question: In the 2-D projections of the shape of the universe shown here, we see that the flat universe and the spherical universe are perfectly symmetrical, so any triangle drawn anywhere on them will be the same. However, the hyperbolic universe appears to only be symmetrical on two axes, so any triangle drawn anywhere on it will not necessarily be the same as an identical triangle drawn somewhere else on it. This implies that the hyperbolic universe would have a center and that shapes would have different dimensions at different locations in the universe. Is this the case in this scenario, or is the apparent lack of symmetry an artifact of this being a projection of a 3-D space onto a 2-D image? Or is there something else that I am missing? Answer: It's impossible to draw an accurate picture of a 2D hyperbolic surface, because such a surface cannot be embedded into a 3D Euclidean space; this is known as Hilbert's theorem. The saddle surface in the figure is just an approximation, and serves as an illustration that every point on a hyperbolic surface is a saddle point.
{ "domain": "physics.stackexchange", "id": 72465, "tags": "cosmology, universe, curvature, geometry, visualization" }
Tf tree getting messed up when launching freenect.launch
Question: Just launching the robot description. After launching freenect: roslaunch freenect_launch freenect.launch Originally posted by chrissunny94 on ROS Answers with karma: 142 on 2018-02-28 Post score: 0 Original comments Comment by Humpelstilzchen on 2018-03-01: The launch file has a "publish_tf" argument, try with this argument set to false Comment by chrissunny94 on 2018-03-01: Did the job, thank you! (@humpelstilzchen) Comment by chrissunny94 on 2018-03-01: [camera/camera_nodelet_manager-1] process has died [pid 12709, exit code -6, cmd /opt/ros/kinetic/lib/nodelet/nodelet manager __name:=camera_nodelet_manager __log:=/home/ubuntu/.ros/log/eeb501ca-1d10-11e8-875b-00044b65e5df/camera-camera_nodelet_manager-1.log]. log file: /home/ubuntu/.ros/log/eeb501c Comment by chrissunny94 on 2018-03-01: Now I am getting the above error; the Kinect launches and then crashes. Comment by chrissunny94 on 2018-03-01: I have tried hooking it up to a RPI3, same error. I am now using a Nvidia Jetson TX1, and still the same error persists. Any ideas on how to get around this problem? Comment by chrissunny94 on 2018-03-01: When I am trying to run gmapping, I get the following error: Timed out waiting for transform from base_link to map to become available before running costmap, tf error: Could not find a connection between 'map' and 'base_link' because they are not part of the same tree. Tf has two or more unconnec Comment by Humpelstilzchen on 2018-03-01: That's a new question, also look at the mentioned log file. Answer: Your TF tree is not fully connected. You need to publish the transformation between your base_link and camera_link. Originally posted by Gayan Brahmanage with karma: 929 on 2018-03-02 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by chrissunny94 on 2018-03-06: Had to set the tf on freenect.launch to not publish tf, that was conflicting with my robot tf.
{ "domain": "robotics.stackexchange", "id": 30168, "tags": "navigation, odometry, kinect, ros-kinetic, transform" }
Would order 1AU metric expansion of space begin if the solar system were not inside a galaxy?
Question: In this question I describe the >30 years of laser ranging between lasers on Earth and the retroreflector arrays on the Moon. Amazingly, after comparing this data to simulations of all of the orbital mechanics and tidal effects, the residual is only a few centimeters. If $\text{H}_D$ is about 70 $\text{km}\ \text{s}^{-1} \text{Mpc}^{-1} $ (or about 2.3E-18 $\text{sec}^{-1}$), then with a semi-major axis of 3.84E+08 meters, over 37 years the effect would be of order 1 meter. Since these measurements are consistent with zero at the level of a few centimeters, this is taken as experimental evidence that metric expansion is not taking place locally at the rate seen between galaxies. If I understand correctly, this "suppression" is due to the large amount of mass in our galaxy, even though it is thousands of light years away. So I am wondering - if a similar experiment were done in a similar solar system associated with an isolated star alone in an intergalactic region, what does current theory predict - would metric expansion be detected, or would the mass of the one star be enough to suppress it? Then what if it were just a planet and a moon without the mass of the star, or even two smaller masses? I'm looking for an answer at a level similar to this answer and this answer, where time was taken to note the specific relevant concepts and work from the paper linked in the first answer. Answer: These notes put some numbers on @ACuriousMind's answer: one needs to be looking at length scales of 100 Mpc and greater for the FLRW metric to be a realistic description of reality. That's a staggering distance, with a light-travel time comparable to the whole Mesozoic era, comprising the rise and fall of the dinosaurs! So one cannot expect the scale-factor expansion of spacetime to apply to our Earth-Moon system, simply because the system doesn't fulfill the assumptions that justify the FLRW metric.
Perhaps a more addressable variant of your question would be to ask what difference a positive cosmological constant makes to a metric that does describe the Earth-Moon system. This is a question that can be answered, and it is reasonably straightforward to go through the derivation of the Schwarzschild metric but with a positive cosmological constant. One finds that the metric changes as follows: $$g_{t\,t} =c^2\left( 1-\frac{r_s}{r} -\frac{r^2\,\Lambda}{3}\right)$$ $$g_{r\,r} = \frac{c^2}{g_{t\,t}}$$ and the cosmological constant, if small enough, does not change the basic character of geodesics; it will however shift the radii of stable orbits. These notes sketch how to work through the computation, the radially symmetric system with nonzero $\Lambda$ being Problem 23 in Chapter 23 of Moore, Thomas A., "A General Relativity Workbook". So there is no ongoing spacetime expansion in this system: orbits are just a little bigger than they would be with $\Lambda=0$, and some weakly bound orbits would become unbound with positive $\Lambda$. Therefore, we would not expect the Moon to be drifting away any faster than it already is owing to nonrelativistic effects.
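The back-of-the-envelope estimate in the question — that a naively applied Hubble rate would accumulate an order-1-meter drift over the ranging baseline — checks out numerically (constants are standard; the 37-year baseline is the figure quoted in the question):

```python
import math

# If the Hubble rate H applied locally, the Earth-Moon distance would
# grow by roughly H * a * t over the observation time t.
Mpc = 3.0857e22          # metres per megaparsec
H = 70e3 / Mpc           # 70 km/s/Mpc expressed in 1/s
a = 3.84e8               # m, Earth-Moon semi-major axis
t = 37 * 365.25 * 86400  # s, ~37 years of lunar laser ranging

drift = H * a * t        # metres

assert abs(H - 2.3e-18) / 2.3e-18 < 0.02  # H is about 2.3e-18 per second
assert 0.5 < drift < 2.0                  # of order 1 metre, as stated
```

That the actual residuals are a few centimetres, not a metre, is the observational content of the question; the answer above explains why no such drift is expected in the first place.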
{ "domain": "physics.stackexchange", "id": 32374, "tags": "gravity, spacetime, space-expansion, physical-constants" }
At what point does force stop translating an object and start purely rotating it?
Question: At what point (or distance) from the axis of rotation does a force applied to a rigid body stop translating it and purely rotate it? Can such a point even exist? Does the body always have to translate? This question assumes that the body is in empty space and unattached. Answer: If there is no fulcrum - no fixed pivot - a free body will always translate. Such a point does not exist. If the body is free, it can never purely rotate: if the impulse is applied at the center of mass, the motion is 100% translational, and for a uniform rod the minimum fraction of translational velocity, 25%, is reached when the impulse is applied at one tip (the center-of-mass velocity is then one quarter of the speed of the struck tip). No matter the length or the mass of the rod, there is no point of application for which the linear velocity is zero and the motion is 100% rotational.
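The 25% figure for a uniform rod can be verified with elementary impulse mechanics. This is a sketch (function name and the "translational fraction" bookkeeping are mine) for a free rod of mass $m$ and length $L$ struck transversely by an impulse $J$ at distance $x$ from the centre of mass:

```python
# A free uniform rod of mass m and length L struck by a transverse
# impulse J at distance x from the centre of mass.
def rod_response(x, m=1.0, L=1.0, J=1.0):
    v_cm = J / m            # the centre of mass always translates
    I = m * L**2 / 12       # moment of inertia about the CM
    omega = J * x / I       # angular velocity imparted by the impulse
    return v_cm, omega

# Impulse through the CM: pure translation, no rotation.
v_cm, omega = rod_response(0.0)
assert omega == 0 and v_cm > 0

# Impulse at a tip (x = L/2): the CM still moves, and the fraction
# v_cm / (v_cm + omega * L / 2) of "translational velocity" is 25%.
v_cm, omega = rod_response(0.5)
frac = v_cm / (v_cm + omega * 0.5)
assert abs(frac - 0.25) < 1e-12
```

Since `v_cm = J / m` is nonzero for every application point, no choice of $x$ yields pure rotation, which is the claim in the answer.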
{ "domain": "physics.stackexchange", "id": 20943, "tags": "kinematics, coordinate-systems, rotation, rotational-kinematics" }