concatenating short noise sequences
Question: I'm trying to merge 2 identical audio files. The audio file is nothing more than white noise generated using the MATLAB commands block_len = 4096; Y = wgn(block_len, 1, 5, 120, 'dBm', 'real'); Rather than generating a longer block of data, I would like to make a longer file by concatenating the block of 4096 samples. Therefore I tried a simple datat = [Y;Y;Y;Y;Y]; soundsc(datat) The output file is not smooth enough: there is an audible periodic pattern that reveals the multiple 'Y' segments. I believed this could be due to the imperfect joins at the beginning and end of each Y segment, so I added the lines Y(1) = 0; Y(end) = 0; Concatenating after this change still does not sound smooth. I searched on Stack Overflow and found this link. I tried the code based on the link above, but I still can't get the white noise to sound smooth. What other methods can I try to make the sound as smooth as possible? Answer: @Marcus I took your advice based on the correlation, so I modified the Y segments to generate different white noise at each instance. Here is the MATLAB code for the same.
clc; clear all; close all;
block_len = 4096;
YY = wgn(block_len, 1, 50, 10, 'dBm', 'real');
XY = wgn(block_len, 1, 50, 10, 'dBm', 'real');
ZY = wgn(block_len, 1, 50, 10, 'dBm', 'real');
VY = wgn(block_len, 1, 50, 10, 'dBm', 'real');
S1 = YY/max(YY);
S2 = XY/max(XY);
S3 = ZY/max(ZY);
S4 = VY/max(VY);
% S1 = rand(1000,1);
% S2 = rand(1000,1) + 1;
% cross-fade over the last n elements
n = 256;
nob = 2;
W = linspace(1,0,n)'; %'
S1(end-n+1:end) = S1(end-n+1:end).*W;
S2(1:n) = S2(1:n).*(1-W);
S2(end-n+1:end) = S2(end-n+1:end).*W;
S12 = zeros(size(S1,1) + size(S2,1) + size(S2,1) + size(S2,1) - n, 1);
S12(1:size(S1,1)) = S1;
S12((block_len*2)-n-size(S1,1)+1:(block_len*2)-n) = S12((block_len*2)-n-size(S1,1)+1:(block_len*2)-n) + S2;
startpt = (block_len*3)-(2*n)-size(S1,1)+1;
endpt = (block_len*3)-(2*n);
S12(startpt:endpt) = S12(startpt:endpt) + S3;
startpt = (block_len*4)-(3*n)-size(S1,1)+1;
endpt = (block_len*4)-(3*n);
S12(startpt:endpt) = S12(startpt:endpt) + S4;
soundsc(S12)

Now the periodic pattern is not audible, so it was indeed the correlation that was to blame. @Bob I also changed block_len = 16384 and

startpt = (block_len*3)-(2*n)-size(S1,1)+1;
endpt = (block_len*3)-(2*n);
S12(startpt:endpt) = S12(startpt:endpt) + S2;
startpt = (block_len*4)-(3*n)-size(S1,1)+1;
endpt = (block_len*4)-(3*n);
S12(startpt:endpt) = S12(startpt:endpt) + S2;

at which point the periodic pattern wasn't noticeable anymore. So both your explanations hold true.
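For readers working outside MATLAB, the same overlap-add cross-fade can be sketched in Python. This is illustrative only: the block count, block length, and fade length are arbitrary, and plain lists stand in for vectors.

```python
import random

def crossfade_concat(blocks, n_fade):
    """Concatenate blocks, linearly cross-fading over n_fade samples at each join."""
    out = list(blocks[0])
    for block in blocks[1:]:
        for i in range(n_fade):
            w = i / (n_fade - 1)  # ramp 0 -> 1 across the overlap region
            # old tail fades out while the new head fades in
            out[-n_fade + i] = out[-n_fade + i] * (1 - w) + block[i] * w
        out.extend(block[n_fade:])
    return out

# Independent noise blocks, as the answer recommends (correlated copies of
# one block are what cause the audible periodicity).
blocks = [[random.gauss(0, 1) for _ in range(4096)] for _ in range(4)]
signal = crossfade_concat(blocks, 256)
```

One design note: a linear cross-fade of uncorrelated noise slightly dips the RMS level inside the overlap; if that dip is audible, an equal-power fade (square-root weights) avoids it.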
{ "domain": "dsp.stackexchange", "id": 8045, "tags": "matlab, audio" }
Fine structure in sodium
Question: The energy shifts due to the spin-orbit interaction are given by $$\Delta E_j = \frac{C}{2} (j(j+1)-l(l+1)-s(s+1))$$ If $l$, $s$ and $j$ are for the outer electron in sodium, for example, then for $l>0$ there are only 2 sublevels. That means there are only two values of $j$. Why is this the case? If $j=l+s$, $l+s-1$, ..., $|l-s|$, why aren't there more possible values for higher values of $l$? Answer: As you said, $j$ runs from $l+s$ down to $|l-s|$ in integer steps. Since the electron spin is $s=\frac{1}{2}$, the next value below $l+\frac{1}{2}$ is $l+\frac{1}{2}-1 = l-\frac{1}{2}$, which already equals $|l-s|$, so there are no more possible values of $j$!
{ "domain": "physics.stackexchange", "id": 34684, "tags": "quantum-mechanics, atomic-physics" }
Contradiction between Force and Torque equations
Question: A thin uniform rod of mass $M$, length $L$ and cross-sectional area $A$ is free to rotate about a horizontal axis passing through one of its ends (see figure). What is the value of the shear stress developed at the centre of the rod, immediately after its release from the horizontal position shown in the figure? Firstly, we can find the angular acceleration $\alpha$ of the rod by applying the torque equation about hinge point A as follows: $$ \frac{MgL}{2} = \frac{ML^2}{3} \alpha$$ $$\alpha = \frac{3g}{2L}$$ So the acceleration of the centre of the rod equals $\alpha \cdot \frac{L}{2} = \frac{3g}{4}$; hence the hinge force is $F_H = \frac{Mg}{4}$. Now consider an imaginary cut at the centre of the rod, dividing it into two halves. To account for the effect of one half on the other, we can add a shear force $F$ acting tangentially to the cross-section on each half. Now focusing on the left-most half, the acceleration of its centre of mass should be equal to $\alpha$ times its distance from hinge point A. So, $a_{cm} = {\frac{L}{4}} \cdot {\frac{3g}{2L}} = \frac{3g}{8}$. Keeping in mind that the mass of the left half is $\frac{M}{2}$ and applying the force equation, we get the following: $$\frac{Mg}{2} + F - F_H = \frac{3Mg}{16}$$ $$F - F_H = \frac{-5Mg}{16}$$ $$F = \frac{Mg}{4} - \frac{5Mg}{16} = -\frac{Mg}{16}$$ But if we apply the torque equation about hinge point A for the left half: $$\frac{Mg}{2} \cdot \frac{L}{4} + F \cdot \frac{L}{2} = \frac{\left(\frac{M}{2}\right)\left(\frac{L}{2}\right)^2}{3} \cdot \alpha $$ $$ \frac{MgL}{8} + \frac{FL}{2} = \frac{ML^2}{24} \cdot \frac{3g}{2L}$$ $$ \frac{FL}{2} = \frac{MgL}{16} - \frac{MgL}{8} = -\frac{MgL}{16}$$ $$F = -\frac{Mg}{8}$$ Why is there a contradiction? Answer: Your initial calculations are correct. The pin force is indeed $\tfrac{m g}{4}$, a fact that kind of surprised me the first time I encountered this problem. Your idealization in the second part is where things were missed.
I am using the sketch below, and I am counting positive directions as downwards (same as gravity) and positive angles as clock-wise. Notice each half-bar has mass $m/2$ and mass moment of inertia about its center of mass $ \tfrac{1}{12} \left( \tfrac{m}{2} \right) \left( \tfrac{\ell}{2} \right)^2 = \tfrac{m \ell^2}{96}$ Let's look at the equations of motion for the two half-bars as they are derived from the free body diagrams. $$ \begin{aligned} \tfrac{m}{2} a_G & = \tfrac{m}{2} g - F_C - F_A \\ \tfrac{m \ell^2}{96} \alpha & = -\tau_C - \tfrac{\ell}{4} F_C + \tfrac{\ell}{4} F_A \\ \tfrac{m}{2} a_H & = \tfrac{m}{2} g + F_C \\ \tfrac{m \ell^2}{96} \alpha & = \tau_C - \tfrac{\ell}{4} F_C \\ \end{aligned} $$ And consider the kinematics, where it all acts like a rotating rigid bar, with point accelerations $a_G = \tfrac{\ell}{4} \alpha$ and $a_H = \tfrac{3 \ell}{4} \alpha$ The solution to the above 4×4 system of equations is $$ \begin{aligned} F_A & = \tfrac{m g}{4} & F_C & = \tfrac{m g}{16} \\ \alpha & = \tfrac{3 g}{2 \ell} & \tau_C &= \tfrac{m g \ell}{32} \end{aligned} $$ I think because you did not account for the torque transfer $\tau_C$ between the half-bars, you got $F_C = \tfrac{m g}{8}$ which is incorrect. Note, I used PowerPoint and IguanaTex plugin for the sketches.
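As a sanity check, the four equations of motion above, together with the kinematic constraints, form a linear system that can be solved exactly. Here is an illustrative Python verification using exact rational arithmetic, with $m = g = \ell = 1$ for convenience:

```python
from fractions import Fraction as F

def solve(A, b):
    """Gauss-Jordan elimination in exact rational arithmetic."""
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# Unknowns ordered (F_A, F_C, tau_C, alpha), with m = g = l = 1 and the
# kinematics a_G = alpha/4, a_H = 3*alpha/4 substituted in:
#   (1/2)(alpha/4)   = 1/2 - F_C - F_A          (G half, translation)
#   (1/96) alpha     = -tau_C - F_C/4 + F_A/4   (G half, rotation)
#   (1/2)(3 alpha/4) = 1/2 + F_C                (H half, translation)
#   (1/96) alpha     = tau_C - F_C/4            (H half, rotation)
A = [
    [F(1),     F(1),    F(0),  F(1, 8)],
    [F(-1, 4), F(1, 4), F(1),  F(1, 96)],
    [F(0),     F(-1),   F(0),  F(3, 8)],
    [F(0),     F(1, 4), F(-1), F(1, 96)],
]
b = [F(1, 2), F(0), F(1, 2), F(0)]
F_A, F_C, tau_C, alpha = solve(A, b)
print(F_A, F_C, tau_C, alpha)  # 1/4 1/16 1/32 3/2
```

This reproduces the stated solution $F_A = \tfrac{mg}{4}$, $F_C = \tfrac{mg}{16}$, $\tau_C = \tfrac{mg\ell}{32}$, $\alpha = \tfrac{3g}{2\ell}$.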
{ "domain": "physics.stackexchange", "id": 89625, "tags": "newtonian-mechanics, rotational-dynamics, torque" }
What is the length of the yarn in a ball of yarn?
Question: The image https://commons.wikimedia.org/wiki/File:Ball_of_yarn_10.jpg shows a typical ball of yarn. Such a spherical ball of radius $R$ has a volume $4πR^3/3$. The radius of the yarn is $r$. How long will the yarn be on average? The length will depend on the way the ball is formed. There will be air-filled space inside the ball. Therefore yarn length $L$ is surely smaller than $V/(πr^2)$. But what is its average length for an average ball? How can one determine this average length? Equivalently: how much air is contained in an average ball of yarn? The yarn is assumed to be unstretchable but infinitely flexible. This is a question about statistical geometry. (This is not homework; the question is about the ensemble average over all possible balls of radius $R$.) Is there some method, maybe using random walks, to estimate an expectation value for the yarn length? P.S. A few people continue to add the "homework" tag. This is not homework level. The level is more that of a master thesis. Answer: I agree with you that the question is tricky. To a first order approximation you might consider the packing density of the yarn to be equivalent to the hexagonal close packing of straight lengths of tube (that's the optimal) which is about 90%. However, if the yarn is wound at random you won't achieve that optimum density- sticking with the approximation of straight tubes, it will be as if the tubes are laid across each other at odd angles rather than being packed in an orderly way into a smaller volume. I suspect it might be that crossing two pieces of twine at right angles results in the worst use of space, while wrapping them side by side provides the best use. If you could model the worst case, then perhaps you could find an expectation value for a random winding by supposing it is midway between the two.
{ "domain": "physics.stackexchange", "id": 62624, "tags": "estimation, geometry, string, volume" }
Fibonacci Sequence in BrainF***
Question: I have been trying to write a BrainF*** program that prints the Fibonacci sequence numbers repeatedly. I was wondering whether this is the most efficient way to do it. I basically repeatedly duplicate two cells into a third cell, and shift one cell forward and repeat. +++++[->----[---->+<]>++.-[++++>---<]>.++.---------.+++.[++>---<]>--.++[->+++<]>.+++++++++..---.+++++++.+[-->+++++<]>-.<] Answer: Portability Your first loop immediately subtracts 4 from a cell whose value is 0. It seems like your program is designed to run using the wraparound dialect of Brainfuck, so it would be a good idea to note that in a comment. On the other hand, writing a Fibonacci sequence generator using the 8-bit dialect isn't that useful, because you would overflow after 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233. However, I'm puzzled by why you would want to do any subtraction at all. After all, the Fibonacci sequence involves addition, not subtraction. Readability There's no need to cram everything on one line. Best practice would be to insert line breaks after . commands and use indentation to show off the loop structure. Add some spaces as well, to let your code breathe. Suggested solution ++++++++++ [-> +++>+++++++>++++++++++>+++++++++++>++++++++++++ <<<<< ] >> +++ . < ++ . >>> ++ . < +++++ . > ++++ . > + . <<<< . >>> . < - . --- . << . >> + . > ----- . . --- . <<< + .
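If you want to experiment with variants of either program, a tiny interpreter is handy. The sketch below assumes the 8-bit wraparound dialect discussed above, and it ignores non-command characters, so the whitespace-formatted suggested solution runs unchanged (this is an illustrative toy, not a reference implementation):

```python
def brainfuck(code, max_steps=1_000_000):
    """Run a Brainfuck program with 8-bit wraparound cells; return its output."""
    # Precompute matching bracket positions.
    stack, match = [], {}
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            match[i], match[j] = j, i
    tape = [0] * 30000
    ptr = pc = steps = 0
    out = []
    while pc < len(code) and steps < max_steps:
        c = code[pc]
        if c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256   # wraparound cells
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '.':
            out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0:
            pc = match[pc]
        elif c == ']' and tape[ptr] != 0:
            pc = match[pc]
        # any other character (spaces, newlines, comments) is ignored
        pc += 1
        steps += 1
    return ''.join(out)

print(brainfuck("++++++[>++++++++<-]>+."))  # prints "1"
```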
{ "domain": "codereview.stackexchange", "id": 19316, "tags": "fibonacci-sequence, brainfuck" }
What is the advantage of MISO technology in communications?
Question: What is the advantage of MISO (Multiple Input Single Output) technology in communications? I have been searching on the internet, but the results mostly just give the model and the full name. For example, what are its applications? Is it used in 5G or IoT? Answer: Multiple input single output refers to the scenario where you have a transmitter with multiple antennas and a receiver with a single antenna. That is, the channel sees multiple inputs from the multiple transmit antennas and a single output at the single receiver antenna. That being said, there are many benefits of multiple antennas: Wireless channels undergo what is called fading and can sometimes experience a so-called "deep fade", in which the channel strength drops low enough that reliable communication is no longer possible. Say we have 10 antennas at the transmitter and a single antenna at the receiver; then there are 10 wireless channels, one for each transmit antenna. Intuitively, the probability of a deep-fade event should now be lower, since all 10 channels would have to experience a deep fade at the same time. This is referred to as transmit diversity. With multiple antenna elements we can perform what is called beamforming. Beamforming allows us to amplify signals we send in certain directions and even attenuate signals in other directions (pointing a null). Many beamforming techniques have been developed, including adaptive algorithms which perform beam pointing "on-the-fly" to try to maximize SINR, for example. Multiple transmit antennas are necessary for certain space-time block codes (STBC), which boil down to transmitting specially modified versions of the same signal during the different time slots of transmission. A common and simple STBC is the Alamouti scheme.
These are just the first few that came to mind, but multiple-antenna systems are an active research area, and with 5G pushing massive MIMO there are many groups doing work in this area.
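The transmit-diversity point is easy to check numerically. The following is an illustrative Monte Carlo sketch (Rayleigh fading, with an arbitrary deep-fade threshold and an antenna count chosen just for the example):

```python
import math
import random

random.seed(0)

def channel_gain():
    # Rayleigh fading: the channel power |h|^2 is exponentially distributed
    # (unit mean here).
    return random.expovariate(1.0)

THRESH = 0.5        # arbitrary "deep fade" power threshold
N_TX = 4            # transmit antennas (channels seen by the receiver)
TRIALS = 200_000

# Probability a single channel is in a deep fade.
single = sum(channel_gain() < THRESH for _ in range(TRIALS)) / TRIALS

# Probability that ALL N_TX independent channels fade simultaneously.
all_faded = sum(
    all(channel_gain() < THRESH for _ in range(N_TX)) for _ in range(TRIALS)
) / TRIALS

p = 1 - math.exp(-THRESH)   # analytic single-channel deep-fade probability
print(single, p)            # both ~ 0.39
print(all_faded, p ** N_TX) # both ~ 0.024: diversity makes outage much rarer
```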
{ "domain": "dsp.stackexchange", "id": 7064, "tags": "digital-communications" }
Correlate an array of categorical features to binary outcome
Question: I have a data set that looks like this: target,items 1,[i1,i3] 1,[i4,i5,i9] 0,[i1] ... The variable target is 0-1 outcome. The feature "items" is a set of items (variable length). Each item is a categorical variable (one of: i1, i2, .., i_N). There's no order/relationship between the items. A business example would be "set of products in a cart, outcome whether the customer abandons cart". The size of data is approx. 1,000,000 by 5,000 (I have ~1 million examples, and N is approximately 5,000) I want to do the following analysis. I want to find the items that influence (or lead to) target = 1. I don't have extra features to add. What is the type of statistical analysis or machine learning modelling technique that I should use? Answer: How large is N? Can you reshape your data into something like: target i1 i2 i3 i4 i5 ... i9 ... iN 1 1 0 1 0 0 ... 0 ... 0 1 0 0 0 1 1 ... 1 ... 0 0 1 0 0 0 0 ... 0 ... 0 Once you have fit everything into a data frame, you can use any two-class supervised classification algorithm to build your model. There is no "best" model in general, but try a few to see which one works the best with your data. Sorry for posting as an answer; comment doesn't allow preformatted text.
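The reshaping the answer suggests can be sketched in a few lines of plain Python (hypothetical item names; for the real 1,000,000 × 5,000 case you would want a sparse representation, e.g. scikit-learn's MultiLabelBinarizer with sparse output):

```python
# Toy version of the data from the question: (target, basket-of-items) rows.
rows = [
    (1, ["i1", "i3"]),
    (1, ["i4", "i5", "i9"]),
    (0, ["i1"]),
]

# Collect the item vocabulary, then one-hot encode each basket.
items = sorted({it for _, basket in rows for it in basket})
X = [[int(it in basket) for it in items] for _, basket in rows]
y = [target for target, _ in rows]

print(items)  # ['i1', 'i3', 'i4', 'i5', 'i9']
print(X)      # [[1, 1, 0, 0, 0], [0, 0, 1, 1, 1], [1, 0, 0, 0, 0]]
```

X and y can then be fed to any two-class classifier; one whose coefficients are interpretable (e.g. logistic regression) directly indicates which items push the prediction toward target = 1.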
{ "domain": "datascience.stackexchange", "id": 3285, "tags": "data-mining, data-analysis" }
Describe statement "Exactly k out of n variables should be true" in 2-SAT in time polynomial to n and k?
Question: I have a list of $n$ variables, exactly $k$ of which should be true. Is it possible to encode this as a 2-SAT problem in time polynomial in $n$ and $k$? Answer: If $x,y,z$ are three satisfying assignments of a 2SAT formula $\phi$, then the bitwise majority of $x,y,z$ is also a satisfying assignment of $\phi$. To verify this, it suffices to check that this works for every clause of width 2. If $n \geq k+2$, we can consider the following three assignments: $x_i = y_i = z_i = 1$ for $i = 1,\ldots,k-1$. $x_k = 1$, and all unspecified indices are $0$. $y_{k+1} = 1$, and all unspecified indices are $0$. $z_{k+2} = 1$, and all unspecified indices are $0$. The majority of $x,y,z$ has only $k-1$ many $1$'s, and so when $n \geq k+2$, you cannot express your predicate in 2SAT at all. This still holds even if you allow extension variables, that is, if you allow formulas of the form $\phi(x,t)$ such that $\phi(x,t)$ holds for some $t$ iff exactly $k$ of the $n$ variables in $x$ are true. If $n = k+1$ and $k \geq 2$, we can consider the following three assignments: $x_i = y_i = z_i = 1$ for $i = 1,\ldots,k-2$. $x_{k-1} = x_k = 1$ and $x_{k+1} = 0$. $y_{k-1} = y_{k+1} = 1$ and $y_k = 0$. $z_{k-1} = 0$ and $z_k = z_{k+1} = 1$. The majority of $x,y,z$ has $k+1$ many $1$'s, and so again the predicate is inexpressible. If $n = k$ then you can use $x_1 \land \cdots \land x_n$, or $(x_1 \lor t) \land (x_1 \lor \lnot t) \land \cdots$ if you want the width to be exactly $2$. If $(n,k) = (1,0)$ then you can use $\lnot x_1$ (or $(\lnot x_1 \lor t) \land (\lnot x_1 \lor \lnot t)$ if you want the width to be strictly $2$), and if $(n,k) = (2,1)$ then you can use $(x_1 \lor x_2) \land (\lnot x_1 \lor \lnot x_2)$.
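The majority-closure property this argument relies on can be verified exhaustively for width-2 clauses; here is an illustrative Python check:

```python
from itertools import product

def maj(x, y, z):
    """Bitwise majority of three 0/1 assignments."""
    return tuple(int(a + b + c >= 2) for a, b, c in zip(x, y, z))

def satisfies(assignment, clause):
    # clause: tuple of literals (index, wanted_value); width <= 2.
    return any(assignment[i] == v for i, v in clause)

# For every width-2 clause over a pair of variables and every triple of
# satisfying assignments, the bitwise majority also satisfies the clause.
ok = True
for vi, vj in product([0, 1], repeat=2):
    clause = ((0, vi), (1, vj))
    sat = [a for a in product([0, 1], repeat=2) if satisfies(a, clause)]
    for x, y, z in product(sat, repeat=3):
        ok = ok and satisfies(maj(x, y, z), clause)
print(ok)  # True
```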
{ "domain": "cs.stackexchange", "id": 19762, "tags": "2-sat" }
Why do we use wool for insulation?
Question: I'm really struggling to understand the science behind wool, and why it's a good insulator. I'm doing an investigation about heat transfer, and the topic I chose is insulators. Answer: The best insulator is air; more precisely, it must be trapped air (not allowed to convect). Wool traps the air, much like fibreglass and other home insulation materials.
{ "domain": "physics.stackexchange", "id": 51834, "tags": "thermodynamics, energy, everyday-life, material-science" }
How can the waves interfere in X-ray diffraction?
Question: This is the popular diagram illustrating Bragg's law for X-ray diffraction: But what I can't seem to understand is the claim that the reflected waves interfere constructively to create peaks. I don't see how the two reflected rays can superimpose at all, because they don't share the same path. They seem to be separate and far from each other, so how can they interfere constructively or destructively? Or is the detector just receiving individual waves and merely registering a high intensity if all of those waves arrive in the same phase? It seems to me that these two reflected waves can never "fully" superimpose on each other, at least not in the reflected direction, owing to the fact that after reflection the waves are rather spherical and not planar. Answer: The lines in the diagram represent the direction of travel of the wave and indicate that the waves scatter off atoms in different locations. As a preliminary remark, note that the extent of a plane wave is all of space. A plane wave is a function that has a value everywhere, not just along a single line. But we can draw a line to indicate the direction of the wave vector $\vec k$. The wave is said to be "plane" because it has the same value everywhere along a plane perpendicular to the wave vector. Note also that the lines in the figure presented in the question are drawn in a way to help illustrate the phase difference. The waves do not literally travel only along the lines. The lines indicate the direction of travel and indicate that the total path length for one of the interfering waves is larger than the other.
Generally, if some atom is at position $\vec R_1$ and for example a plane wave $e^{i\vec k \cdot \vec r}$ reflects off of the atom, the outgoing wave is proportional to: $$ \frac{1}{|\vec r - \vec R_1|}e^{ik|\vec r - \vec R_1|}e^{i\vec k\cdot \vec R_1} $$ If the same incident plane wave reflects off a different atom at a different location $\vec R_2$ the outgoing wave is proportional to: $$ \frac{1}{|\vec r - \vec R_2|}e^{ik|\vec r - \vec R_2|}e^{i\vec k \cdot \vec R_2} $$ There is a phase difference between the waves because the initial plane wave has to travel different distances to get to the different atoms. In a solid the locations $\vec R_i$ are regularly distributed and cause diffraction patterns. However the expression for the total diffracted wave can be written down for any collection of scatterers at locations $\vec R_i$ as: $$ \sum_i \frac{1}{|\vec r - \vec R_i|}e^{ik|\vec r - \vec R_i|}e^{i\vec k \cdot \vec R_i}\;, $$ where the first factor is the outgoing spherical wave (from each scatterer) and the last factor is due to the incoming plane wave having to travel different distances to each scatterer. We also use the approximation: $$ |\vec r - \vec R_i| \approx r - \frac{\vec r}{|r|}\cdot\vec R_i\;, $$ in the exponential of the outgoing spherical wave, which we will see is valid when the final observation location $r$ is very large compared to the distance between scatterers. And we use the approximation: $$ |\vec r - \vec R_i| \approx r\;, $$ in the denominator of the outgoing spherical wave. For the case of two scatterers we use the above approximation and find the total wave amplitude at the observation point $\vec r$ to be: $$ \frac{e^{ikr}}{r}\left( e^{-ik\hat r\cdot \vec R_1}e^{i\vec k \cdot \vec R_1} + e^{-ik\hat r\cdot \vec R_2}e^{i\vec k \cdot \vec R_2} \right) $$ And the total phase difference is: $$ (\vec k - k\hat r)\cdot(\vec R_1 - \vec R_2) = \frac{2\pi}{\lambda}2d\sin(\theta) $$
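The final expression can be probed numerically. This illustrative Python snippet (arbitrary $\lambda$ and $d$) sums the two scattered waves and confirms that the intensity peaks exactly when the Bragg condition $2d\sin\theta = n\lambda$ holds, and vanishes when the phase difference is $\pi$:

```python
import cmath
import math

lam, d = 1.0, 2.0   # arbitrary wavelength and plane spacing

def intensity(theta):
    # Phase difference between the two scattered waves:
    # dphi = (2 pi / lambda) * 2 d sin(theta)
    dphi = (2 * math.pi / lam) * 2 * d * math.sin(theta)
    return abs(1 + cmath.exp(1j * dphi)) ** 2

bragg = math.asin(lam / (2 * d))   # first-order Bragg angle (n = 1)
print(intensity(bragg))                      # ~ 4.0, fully constructive
print(intensity(math.asin(lam / (4 * d))))   # dphi = pi -> ~ 0, destructive
```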
{ "domain": "physics.stackexchange", "id": 90840, "tags": "diffraction, crystals, x-rays, x-ray-crystallography, braggs-law" }
Error in display of Pointcloud in /base_link frame
Question: I tried to view a PointCloud2 message in rviz in the /base_link frame, so I wrote a static transform publisher which published a tf base_link->camera_link. The PointCloud2 can be viewed successfully when the fixed frame is camera_depth_frame, but when I select base_link as the fixed frame, the display stops and the PointCloud2 message in rviz turns red. What could possibly be wrong? I am supplying the required tf, so why does the PointCloud2 message not transform to the /base_link frame? P.S. 1. During the above process the target frame is always filled with the option of fixed frame. 2. The source of all the data is a bag file from a Kinect. 3. Also, I run roslaunch openni_launch openni.launch load_driver:=false. EDIT: Added the screenshot Originally posted by Karan on ROS Answers with karma: 263 on 2013-01-19 Post score: 3 Original comments Comment by Ben_S on 2013-01-19: Sounds like a problem with your transforms (what does the red text say?). Please run rosrun tf view_frames and upload the output somewhere. Comment by Karan on 2013-01-19: The error says "Message removed because it is too old". Could this be because I didn't record the tf messages at the time I recorded the bag file from the Kinect? But I also ran rosparam set use_sim_time true Comment by Ben_S on 2013-01-19: Your tf-tree looks fine. Try disabling use_sim_time. Or if you leave it enabled, make sure that you are adding the --clock parameter to rosbag play Comment by Karan on 2013-01-20: Hey, the problem turned out to be that I didn't record tf messages at the time I recorded the Kinect data. So later when I played the data using the rosbag --clock parameter, I also had to play the tf_broadcaster with the --clock parameter. With this the error disappeared! Answer: I had a similar issue; I had to use the following param rosparam set /use_sim_time true along with the --clock by Ben_S Originally posted by jimbo with karma: 36 on 2013-05-28 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 12494, "tags": "ros, rviz, base-link, pointcloud" }
Spacetimes with "celestial Riemann surface" other than the sphere
Question: In the standard study of asymptotically flat spacetimes one defines null infinity by demanding that topologically ${\cal I}^\pm \simeq \mathbb{R}\times S^2$ (c.f. Definition 1 of this review by Ashtekar). The reason is clearly that this is what happens in flat spacetime, and intuitively we want to define things so that asymptotically flat spacetimes look like Minkowski spacetime at infinity. Nevertheless, I imagine that from a mathematical perspective nothing would stop us from adapting the definition so that we have a larger class of Lorentzian manifolds, defined exactly like asymptotically flat spacetimes, with the only difference that now ${\cal I}^\pm \simeq \mathbb{R}\times \Sigma$ where $\Sigma$ is some arbitrary Riemann surface. In this case we would endow $\Sigma$ with a metric $\gamma_{AB}$ which would take the place of the usual round metric on $S^2$. In particular, in the usual retarded coordinates $(u,r,x^A)$ we would have, near ${\cal I}^+$, $$ds^2=-du^2+2dudr+r^2\gamma_{AB}dx^Adx^B+\text{corrections...}$$ My question here: has this class of spacetimes been studied? Are they physically reasonable for any $\Sigma$, for just a subset of all Riemann surfaces, or are they simply not physically reasonable at all? Answer: Here are examples of solutions that have toroidal null infinities: Schmidt, B.G., Vacuum space-times with toroidal null infinities, Class. Quantum Grav., 13, 2811–2816, (1996), doi:10.1088/0264-9381/13/10/017, free pdf. Hübner, P., More about vacuum spacetimes with toroidal null infinities, Class. Quantum Grav., 15, L21–L25, (1998), doi:10.1088/0264-9381/15/3/002, arXiv:gr-qc/9708042. These are constructed from an analogue of the Schwarzschild metric with planar symmetry (metric A3 in the classification of Ehlers & Kundt, from the 1962 volume Gravitation: An Introduction to Current Research, edited by L. Witten), also known as Taub's plane symmetric solution: $$ ds^2 = -\frac1R\,dT^2+R\,dR^2+R^2(dx^2+dy^2).
$$ By imposing periodicity in both the $x$ and $y$ directions and suitably "deforming" the metric, one can obtain a family of vacuum asymptotically A3 solutions with null infinities having topology $\mathbb{R}\times T^2$. Note that the A3 metric is not very physical, since it can be seen as the gravitational field of a negative-mass naked singularity; see e.g. this paper for a discussion of its properties. One use for such spacetimes is as a test for numerical relativity codes, since we can check the results of numerical evolution against the exact solutions. Also, the celestial sphere of ordinary asymptotically flat spacetimes can be modified by the action of finite superrotation transformations, which introduce conical defects into the celestial-sphere geometry: Adjei, E., Donnelly, W., Py, V., & Speranza, A. J. (2020). Cosmic footballs from superrotations. Classical and Quantum Gravity, 37(7), 075020, doi:10.1088/1361-6382/ab74f6, arXiv:1910.05435. Another viewpoint on finite superrotations is that they correspond to cosmic strings in the bulk: Strominger, A., & Zhiboedov, A. (2017). Superrotations and black hole pair creation. Classical and Quantum Gravity, 34(6), 064002, doi:10.1088/1361-6382/aa5b5f, arXiv:1610.00639.
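As a quick sanity check that the A3/Taub metric above really is a vacuum solution, one can verify that its Ricci tensor vanishes. The following illustrative pure-Python computation hard-codes the Christoffel symbols (coordinates ordered (T, R, x, y)) and takes the R-derivatives by central differences:

```python
def christoffel(r):
    """Nonzero Christoffels of ds^2 = -dT^2/R + R dR^2 + R^2 (dx^2 + dy^2)."""
    G = [[[0.0] * 4 for _ in range(4)] for _ in range(4)]
    G[0][0][1] = G[0][1][0] = -1 / (2 * r)    # Gamma^T_{TR}
    G[1][0][0] = -1 / (2 * r**3)              # Gamma^R_{TT}
    G[1][1][1] = 1 / (2 * r)                  # Gamma^R_{RR}
    G[1][2][2] = G[1][3][3] = -1.0            # Gamma^R_{xx}, Gamma^R_{yy}
    G[2][1][2] = G[2][2][1] = 1 / r           # Gamma^x_{Rx}
    G[3][1][3] = G[3][3][1] = 1 / r           # Gamma^y_{Ry}
    return G

def ricci(r, h=1e-5):
    """Ricci tensor R_{bd} = d_a G^a_{bd} - d_d G^a_{ba} + G^a_{ae} G^e_{bd} - G^a_{de} G^e_{ba}."""
    G = christoffel(r)
    Gp, Gm = christoffel(r + h), christoffel(r - h)
    dG = [[[(Gp[a][b][c] - Gm[a][b][c]) / (2 * h) for c in range(4)]
           for b in range(4)] for a in range(4)]
    R = [[0.0] * 4 for _ in range(4)]
    for b in range(4):
        for d in range(4):
            s = 0.0
            for a in range(4):
                if a == 1:          # the metric depends on R only
                    s += dG[a][b][d]
                if d == 1:
                    s -= dG[a][b][a]
                for e in range(4):
                    s += G[a][a][e] * G[e][b][d] - G[a][d][e] * G[e][b][a]
            R[b][d] = s
    return R

# All components vanish (up to finite-difference noise): vacuum, as claimed.
print(max(abs(v) for row in ricci(2.0) for v in row))
```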
{ "domain": "physics.stackexchange", "id": 73802, "tags": "general-relativity, spacetime, differential-geometry, mathematical-physics, topology" }
Does there exist any relation between metallic property and oxidation potential?
Question: I was revising the periodic table and thinking about how its different properties connect. Most of them I could connect, but I could not connect metallic character and oxidation potential. Answer: The short answer is yes. The reason for this is that metals exhibit metallic properties because their electrons are relatively loosely bound to the atoms. A metallic bond is one in which the valence electrons are free to move throughout the lattice structure of the metal. Here's a schematic diagram from wiki commons: The idea is that the positively charged metallic "cores" (which include the nucleus and the more tightly bound inner core electrons) are sort of suspended in a crystalline structure within a "sea" of mobile electrons. This model explains most of the metallic properties we observe, from heat and electrical conductance to malleability and ductility. Now think about oxidation - in an oxidation-reduction (redox) reaction, one element gains electrons (it is reduced) and the other loses electrons (it is oxidized). Here's an animation, also from wikipedia: So, for an element to be metallic, its valence electrons need to be relatively "loose" - it can't take too much energy to detach them. For an element to be oxidized, something else has to "want" the electrons more - the energy cost of removing the electrons has to be lower than the gain from adding them to the oxidizing agent. As a result there is a very strong correlation between the metallic nature of elements and their relative ease of oxidation. There is also a very strong correlation between both of these properties and the trends in ionization energy, electron affinity, and electronegativity. Of course, in chemistry, things are never that simple. There are lots of details, and it is tough to make general statements that are always true. For example, you can say that oftentimes the more "metallic" an element is, the more easily it will be oxidized.
However, when you compare elements using a standard electrode potential table (which gives us a way of measuring relative ease of oxidation) you find that it is easier to oxidize calcium ($-3.8 \space \rm{V}$) than it is to oxidize potassium ($-2.9 \space \rm{V}$), even though potassium is closer to the "metallic side" of the periodic table, it has a lower first ionization energy, and it is less electronegative. This illustrates a limitation of thinking of elements in terms of metals and non-metals: it is really a shorthand way of describing all of the things that affect how tightly those valence electrons are bound to the atom, in comparison to how badly other elements "want" to remove them. With that amount of generalization, we lose too much detail to be able to make accurate predictions in every situation.
{ "domain": "chemistry.stackexchange", "id": 1683, "tags": "periodic-table, periodic-trends" }
What is the universal speed limit relative to?
Question: If all speeds are relative, then what "governing" force is that speed limit relative to? Is there some sort of fixed or absolute grid with locations everything is compared to? Does this also mean that we have a "universal speed"? This being from the combined speed from the earth spinning, us orbiting the sun, the sun orbiting the galaxy, etc. If we do, then that would mean we could, in theory, utilize time dilations between an external observer (like a probe) and earth to determine which direction we're currently moving? Answer: Suppose we play a racing game. I scatter a little bit of dust around space, then you come by me in your spaceship at some speed $v$. Let's start with $v = c/2$, just so we're not contentious. Right as you pass, I fire a really bright laser pulse in the direction you're going. You're racing the laser light. The dust means that you see reflections of it, so you can measure, in your coordinates, where you think it is. Here's the basic problem: I (stationary relative to the dust) measure this light as moving away from me at speed $c$. You, moving relative to me at speed $v$, also see it moving away from you at speed $c$. The better your instruments, the better you will find that the light is moving away from you at speed $c$. Now let's speed you up a bit more. By this point I am far away, so don't count on me to help you: instead, you drop a little marker in space and then accelerate until that marker is moving backwards at speed $c/2$ relative to you. How fast is the laser pulse moving away from you? Still at speed $c$. So you drop another marker and you accelerate to speed $c/2$ relative to that. Still, the light is moving away from you at speed $c$. You cannot win. That is what the present theory says. Since I will always see the same events as you see, I will never see you outrun that light pulse. So from my perspective, nothing you can do short of magical teleportation will enable you to outrun the light pulse. 
Let's say that you can measure the speed of me moving away from you -- or maybe you just measure the speed of the dust. It does not go at speed $(-1/2)c$, then at speed $(-1)c$, then at speed $(-3/2)c$ relative to your spaceship. That is not how velocity addition works in relativity. Rather, it goes at speeds $(-1/2)c,~(-4/5)c, (-13/14)c$. The dust never goes faster than light either. You'll be heart-warmed to know that there are no paradoxes. We can prove that the mathematics is 100% consistent. The reason that this all happens is that when you start moving relative to me, we start to disagree on the time that the "present" is at far-off places. These disagreements start to add up pretty quickly as you start to go a significant fraction of the speed of light, and so that when you get close to it we both see each other's clocks as moving slowly, and we both see each other's spaceships appear to become shorter in the direction they're traveling. This gives you the answer to your second paragraph, too: because there is no "absolute frame of reference" that the speed of light is relative to (everyone sees light move at speed $c$), no, we can't determine our motion relative to that absolute frame of reference. (But, we know something a little similar: we have been able to determine our motion relative to the cosmic microwave background, which is actually pretty significant if you think about it; it basically says "we know how we're moving relative to our local part of the Big Bang.")
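The quoted sequence of dust speeds follows from the relativistic velocity-addition law $w = (u + v)/(1 + uv/c^2)$. A quick Python check, in units where $c = 1$:

```python
def add_velocity(u, v, c=1.0):
    """Relativistic velocity addition."""
    return (u + v) / (1 + u * v / c**2)

# Each boost makes the dust recede another c/2 relative to the ship,
# yet the composed speed never reaches -c:
w = 0.0
speeds = []
for _ in range(3):
    w = add_velocity(w, -0.5)
    speeds.append(w)
print(speeds)  # [-0.5, -0.8, -0.9285714...], i.e. -1/2, -4/5, -13/14
```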
{ "domain": "physics.stackexchange", "id": 20229, "tags": "special-relativity, speed-of-light, reference-frames, observers, inertial-frames" }
Solving Schrödinger equation by neural networks - trial function explanation
Question: I'm reading this paper about solving Schrödinger equation using the combination of genetic algorithm and neural networks. But one part confuses me - the author defines his trial function, i.e. the function that quantifies the error in approximating both the wavefunction $\psi(x)$ and the corresponding energy $E$, like this: $$R = \frac{\langle \psi | \hat{H} - E | \psi \rangle}{\langle \psi | \psi \rangle}$$ I don't really understand what's meant by this formula. I suppose that it is related to the expectation value of the Hamiltonian, but I don't understand it in depth. What does it mean to subtract a number from an operator? And what does $\left< \psi | \psi \right>$ in the denominator mean (I know that for "precise" $\psi$ it equals 1)? Would you, please, explain it to me in more detail? Answer: What does it mean to subtract a number from an operator? I agree that it can seem weird to do this. For example, if we represent our operators as matrices, how do we subtract a number from the matrix? To get around this, you can instead think of $\hat H - E$ as $\hat H - E\hat I$, where $\hat I$ is just the identity operator. Or what you can do is break up the inner product: $$\langle\psi|\hat H-E|\psi\rangle=\langle\psi|\hat H|\psi\rangle-\langle\psi|E|\psi\rangle$$ And what does $\langle\psi|\psi\rangle$ in the denominator mean? If our state is normalized, then this is equal to $1$. But we don't have to work with normalized states. If we choose not to work with normalized states, we must include this in the denominator. Why we need to is explained below: It looks like the quantity they want to calculate is just the expectation value of how far off the energy $E$ is from the actual mean value of the Hamiltonian $\hat H$. Therefore, we would want to compute the value $\langle\hat H-E\rangle$, i.e., the expected value of a measurement of $H-E$.
If we are not working with normalized states, then we need to include the term in the denominator to normalize everything. Therefore we end up with $$R=\frac{\langle\psi|\hat H-E|\psi\rangle}{\langle\psi|\psi\rangle}$$ Or if you want to do some extra math: $$\frac{\langle\psi|\hat H-E|\psi\rangle}{\langle\psi|\psi\rangle}=\frac{\langle\psi|\hat H|\psi\rangle}{\langle\psi|\psi\rangle}-\frac{\langle\psi|E|\psi\rangle}{\langle\psi|\psi\rangle}=\langle\hat H\rangle-E$$ So it is just the expectation value of the Hamiltonian subtracted by the value $E$. The closer this value is to $0$, the more confident we are in using $E$ to approximate the "actual energy" $\langle\hat H\rangle$.
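The identity $R = \langle\hat H\rangle - E$ is easy to verify numerically for any unnormalized trial state. A small sketch with a made-up $3\times 3$ real symmetric matrix standing in for $\hat H$:

```python
# Toy "Hamiltonian" (any Hermitian matrix works; this one is made up)
H = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 4.0]]
psi = [1.0, 2.0, -1.0]  # deliberately unnormalized trial state
E = 2.5                 # candidate energy

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

norm2 = dot(psi, psi)                        # <psi|psi> = 6, not 1
expect_H = dot(psi, matvec(H, psi)) / norm2  # <H> = 3.0
R = (dot(psi, matvec(H, psi)) - E * norm2) / norm2

print(R, expect_H - E)  # both 0.5: R really is <H> - E
```

Dropping the denominator would make $R$ depend on the arbitrary scale of $\psi$, which is exactly why it must be there for unnormalized states.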
{ "domain": "physics.stackexchange", "id": 53443, "tags": "wavefunction, schroedinger-equation, computational-physics, hamiltonian" }
Emergence of rotational symmetry on 2D square lattice
Question: On page 74 of David Tong's Statistical Field Theory lecture notes, it is said that $(\partial_1\phi)^2 + (\partial_2\phi)^2 $ respects both $D_8$ (which includes the discrete four-fold rotation symmetry, such as $x_1 \rightarrow x_2$ etc., as well as reflection symmetry $x_1 \rightarrow -x_1$ etc.) and $SO(2)$. And that the lowest-dimensional term that preserves $D_8$ but not $SO(2)$ is $$\phi \partial_1^4 \phi + \phi\partial_2^4\phi.$$ I was just wondering why this is the case? How can we tell if a term respects $SO(2)$? And what about $$\phi\partial_1^2\phi + \phi\partial_2^2\phi~? $$ Answer: Funny you ask, I was just reading this part in Tong's notes yesterday. As Cryo already mentioned, $\vec{\nabla}$ is a vector. The point is that for something to have the full rotational symmetry it needs to be a dot product. So $$(\partial_1 \phi)^2 + (\partial_2 \phi)^2 = (\vec{\nabla} \phi) \cdot (\vec{\nabla} \phi). $$ The other term you mentioned, $\phi\partial_1^2\phi + \phi\partial_2^2\phi $, is actually not at all different from the above expression, up to a boundary term. But first notice, $$\phi\partial_1^2\phi + \phi\partial_2^2\phi = \phi \vec{\nabla} \cdot \vec{\nabla} \phi $$ so again a scalar. The two are actually related since, by integrating by parts, $$\int d^dx\ (\vec{\nabla} \phi) \cdot (\vec{\nabla} \phi) = -\int d^dx\ \phi \vec{\nabla} \cdot \vec{\nabla} \phi \ + BT, $$ where BT indicates some boundary term. The term with the derivatives to the fourth power cannot be written as a dot product of any vectors; you can try $\phi ( \vec{\nabla} \cdot \vec{\nabla} \ \vec{\nabla} \cdot \vec{\nabla} ) \phi$, but it doesn't work. Yet that term is invariant under a subgroup of rotations, and the reflections. Not sure if this answers your question.
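One can check the claim numerically: take an arbitrary anisotropic test field, rotate it by a generic angle (not a multiple of $\pi/2$), and compare each candidate term before and after. The dot-product term transforms as a scalar under $SO(2)$; the $\partial_1^4 + \partial_2^4$ term does not. A finite-difference sketch, where the test field, angle and evaluation point are all arbitrary choices:

```python
import math

def phi(x, y):
    return math.exp(-(x*x + 2*y*y))  # arbitrary anisotropic test field

def rot(x, y, t):
    return (math.cos(t)*x - math.sin(t)*y, math.sin(t)*x + math.cos(t)*y)

def dx(f, x, y, h=1e-4):
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def dy(f, x, y, h=1e-4):
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def dx4(f, x, y, h=0.05):  # central 5-point stencil for the 4th derivative
    return (f(x - 2*h, y) - 4*f(x - h, y) + 6*f(x, y)
            - 4*f(x + h, y) + f(x + 2*h, y)) / h**4

def dy4(f, x, y, h=0.05):
    return (f(x, y - 2*h) - 4*f(x, y - h) + 6*f(x, y)
            - 4*f(x, y + h) + f(x, y + 2*h)) / h**4

def grad_sq(f, x, y):
    return dx(f, x, y)**2 + dy(f, x, y)**2        # (d1 phi)^2 + (d2 phi)^2

def quartic(f, x, y):
    return f(x, y) * (dx4(f, x, y) + dy4(f, x, y))  # phi d1^4 phi + phi d2^4 phi

t = 0.3                                    # generic angle, not in D8
phi_rot = lambda x, y: phi(*rot(x, y, t))  # actively rotated field
x0, y0 = 0.7, 0.4
xr, yr = rot(x0, y0, t)

d_grad = grad_sq(phi_rot, x0, y0) - grad_sq(phi, xr, yr)
d_quart = quartic(phi_rot, x0, y0) - quartic(phi, xr, yr)
print(d_grad)   # ~0: the gradient term is an SO(2) scalar
print(d_quart)  # O(1): the quartic term changes under a generic rotation
```

For angles that are multiples of $\pi/2$ (the rotations inside $D_8$), the second difference vanishes too.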
{ "domain": "physics.stackexchange", "id": 59963, "tags": "symmetry, field-theory, group-theory, rotation, lattice-model" }
How can I get 50% of examples in the training set and 50% in the test set for each class when splitting data?
Question: I have a dataset of 200 examples with 10 classes. I would like to split the dataset into a training set (50%) and a test set (50%). For each class, I have 20 examples. Hence, I would like to get for each class: 10 training examples and 10 test examples. Here are my classes : classes=['BenchPress', 'ApplyLipstick', 'BabyCrawling', 'BandMarching', 'Archery', 'Basketball', 'ApplyEyeMakeup', 'BalanceBeam', 'BaseballPitch', 'BasketballDunk'] I tried the following : from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(final_data, true_label, test_size=0.50, random_state=42) However it returns a 50% training set and a 50% test set, without respecting the proportions of each class (I would like to get 10 examples in the test set and 10 examples in the training set for each class). Here is the resulting split : Answer: From sklearn version 0.17 onwards, train_test_split gives you stratified results by using the stratify parameter. Example code: from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(final_data, true_label, test_size=0.50, random_state=42, stratify=true_label) From the documentation about the parameter stratify: stratify: array-like or None (default is None) If not None, data is split in a stratified fashion, using this as the labels array. New in version 0.17: stratify splitting Hope this helps!
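Under the hood, a stratified split just splits each class separately and pools the results. A minimal stand-alone sketch of that idea (not sklearn's actual implementation), using the 10-classes-of-20-examples setup from the question:

```python
import random
from collections import Counter, defaultdict

def stratified_split(y, test_size=0.5, seed=42):
    """Return (train_idx, test_idx) keeping class proportions in both halves."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, label in enumerate(y):
        by_class[label].append(idx)
    train_idx, test_idx = [], []
    for label, idxs in by_class.items():
        rng.shuffle(idxs)                      # shuffle within each class
        n_test = round(len(idxs) * test_size)  # take test_size of THIS class
        test_idx.extend(idxs[:n_test])
        train_idx.extend(idxs[n_test:])
    return train_idx, test_idx

# 10 classes x 20 examples each, as in the question
y = [c for c in range(10) for _ in range(20)]
train_idx, test_idx = stratified_split(y)

print(Counter(y[i] for i in train_idx))  # 10 examples per class
print(Counter(y[i] for i in test_idx))   # 10 examples per class
```

A plain random 50/50 split would give roughly, but not exactly, 10 per class; splitting per class makes the counts exact.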
{ "domain": "datascience.stackexchange", "id": 2264, "tags": "python, scikit-learn, dataset" }
Make a map of a room using odometry
Question: I have installed the latest version of ROS Lunar on an Ubuntu 16.04.5 LTS. I need to construct a closed room with an obstacle inside, put a robot in it and make a map using odometry. How can I do it? I don't know how to make a map with a robot. Do I need to make it move around the room avoiding the walls and the obstacle? Maybe there is a controller that does that, or maybe I need to program it. After making the map, I have to use it to move the robot to a goal. Originally posted by Elric on ROS Answers with karma: 71 on 2018-09-08 Post score: 0 Answer: Odometry is a proprioceptive measure, i.e. it gives you information about the state of the robot, not about the external world. SLAM algorithms (Simultaneous Localization and Mapping) are what you need, but they always rely on exteroceptive sensors (lidar, camera, sonar...). If you don't know what's surrounding your robot, it's almost impossible to build a map. From the odometry you can only build the path the robot has followed so far. Moreover, obstacle avoidance also becomes problematic. If you need to map an environment using standard SLAM techniques, there are examples in ROS. http://wiki.ros.org/turtlebot_navigation/Tutorials/Build%20a%20map%20with%20SLAM http://wiki.ros.org/nav2d_tutorials?distro=lunar On the other hand, if what you need is really to build a map using ONLY odometry, the only solution is to cover the entire environment with the robot. Just make an algorithm to move the robot from wall to wall using a "snake" pattern. Note that the robot will not be able to assess where obstacles are until it bumps into them. So you will need at least a bumper or another additional sensor which tells you that the wheels are spinning but the robot is not moving forward (because it's against a wall). In this case you could look for "wall following" algorithms and "environment coverage" ones. But I strongly suggest thinking about whether you really need to map using odometry.
Originally posted by alsora with karma: 1322 on 2018-09-08 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Elric on 2018-09-08: Thanks a lot. It's an exercise that I have to do in my robotics subject. Comment by Elric on 2019-01-22: Do I need to use the Navigation Stack? I want to use Gazebo also. Comment by alsora on 2019-01-22: The navigation stack is a "collection" of standard nodes used for navigation. You should look at it to get an idea of which components you need. Then you can either use the included nodes or other similar ones. There is no dependency between using Gazebo and the navigation stack. Comment by Elric on 2019-04-07: I'm going to teleop the robot with the keyboard to make the map. I had thought to use gmapping to store the map but it seems that it needs a laser and I can't use one. Any suggestion about how and where to store the map? Thanks.
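The wall-to-wall "snake" coverage with bump-only sensing suggested in the answer can be sketched in a toy grid world. This is an illustrative toy (assumed perfect odometry, 4-connected motion, one made-up obstacle), not a ROS node; note how the naive sweep skips the cells "shadowed" behind the obstacle, which is exactly why odometry-only mapping is fragile:

```python
W, H = 8, 5                      # room size in grid cells
obstacles = {(4, 2)}             # unknown to the robot beforehand
visited, bumped = set(), set()
x, y, dx = 0, 0, 1               # start in a corner, sweeping right
visited.add((x, y))

while y < H:
    nx = x + dx
    if 0 <= nx < W and (nx, y) not in obstacles:
        x = nx                   # odometry tells us we advanced one cell
        visited.add((x, y))
    else:
        if 0 <= nx < W:
            bumped.add((nx, y))  # wheels spin but the pose doesn't change
        y += 1                   # blocked: shift one row up...
        dx = -dx                 # ...and sweep back the other way
        if y < H and (x, y) not in obstacles:
            visited.add((x, y))

# 32 of the 39 free cells get covered; the obstacle is discovered at (4, 2),
# but everything behind it in its row (and the rest of that sweep) is missed.
print(len(visited), sorted(bumped))
```

A real coverage planner would detour around the discovered obstacle and revisit the skipped cells; this sketch only shows why a bumper (or slip detection) is the minimum extra sensing needed.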
{ "domain": "robotics.stackexchange", "id": 31743, "tags": "ros-lunar" }
MLP on Iris Data not working but it does fine on MNIST - Keras
Question: So I'm a little bit baffled. I have just started working with the Keras framework for Python (which is awesome by the way!). However, just trying a few simple tests of neural networks has got me a bit confused. I initially tried to classify the Iris data as it is a small, quick and simple dataset. However, when I constructed a neural network for it (4 input dimensions, 8-node hidden layer, 3-node output layer for one-hot classification of the 3 classes), it didn't produce any predictive power at all (100 samples in the training set, a separate 50 in the test set. Both had been shuffled so as to include a good distribution of classes in each). Now I thought I was doing something wrong, but I thought I'd give the network a quick test on the MNIST dataset just in case. So I used basically the exact same network (other than changing the input dimensions to 784, hidden nodes to 30, and output nodes to 10 for the one-hot encoded 0-9 output). And this worked perfectly! With a 97% accuracy rate and a 5% loss. So now I'm not sure why the Iris dataset isn't playing ball, does anyone have any clues? I've tried changing the number of hidden layer nodes and also normalized the X input. Here's an output I've produced when I've manually run some of the test set through the trained Iris model. As you can see, it's basically producing a uniform random guess. Target: [ 1. 0. 0.] | Predicted: [[ 0.44635904 0.43874186 0.45729554]] Target: [ 0. 0. 1.] | Predicted: [[ 0.44618103 0.43869928 0.45735642]] Target: [ 0. 1. 0.] | Predicted: [[ 0.44612524 0.43863046 0.45729461]] Target: [ 0. 0. 1.] | Predicted: [[ 0.44617626 0.43870446 0.45736298]] Target: [ 0. 0. 1.]
| Predicted: [[ 0.44613886 0.43865535 0.45731983]] And here's my full code for the Iris MLP (the MNIST one is essentially the same) import numpy as np import random from keras.models import Sequential from keras.layers import Dense, Activation from sklearn.datasets import load_iris from sklearn import preprocessing # Model layers definition # 4 inputs -> 8-node hidden layer (sigmoid) -> 3 outputs (sigmoid) model = Sequential() model.add(Dense(8, input_dim=4, init='uniform', activation='sigmoid')) model.add(Dense(3, init='uniform', activation='sigmoid')) model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) # Iris has 150 samples, each sample has 4 dimensions iris = load_iris() iris_xy = zip(iris.data, iris.target) random.shuffle(iris_xy) # Iris data is sequential in its labels iris_x, iris_y = zip(*iris_xy) iris_x = preprocessing.normalize(np.array(iris_x)) iris_y = np.array(iris_y) # Encode decimal numbers to array iris_y_enc = np.zeros(shape=(len(iris_y),3)) for i, y in enumerate(iris_y): iris_y_enc[i][y] = 1 train_data = np.array(iris_x[:100]) # 100 samples for training test_data = np.array(iris_x[100:]) # 50 samples for testing train_targets = np.array(iris_y_enc[:100]) test_targets = np.array(iris_y_enc[100:]) model.fit(train_data, train_targets, nb_epoch=10) #score = model.evaluate(test_data, test_targets) for test in zip(test_data, test_targets): prediction = model.predict(np.array(test[0:1])) print "Target:", test[1], " | Predicted:", prediction Answer: By default sklearn.preprocessing.normalize normalizes samples, not features. Replace sklearn.preprocessing.normalize with sklearn.preprocessing.scale. This will center and scale (to unit variance) every feature. Also give it more than 10 epochs. Here are learning curves (log loss) for 5000 epochs: This should end up with an accuracy of about 96%.
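The difference between the two preprocessors is easy to see without sklearn: normalize (with its defaults) rescales each row (sample) to unit L2 norm, while scale standardizes each column (feature) to zero mean and unit variance. A stdlib-only sketch of the two operations on a few made-up rows standing in for iris samples:

```python
rows = [
    [5.1, 3.5, 1.4, 0.2],  # setosa-like measurements (made up for illustration)
    [7.0, 3.2, 4.7, 1.4],  # versicolor-like
    [6.3, 3.3, 6.0, 2.5],  # virginica-like
]

def normalize_rows(rows):
    """Per-SAMPLE scaling, like sklearn.preprocessing.normalize's default."""
    out = []
    for r in rows:
        n = sum(v * v for v in r) ** 0.5
        out.append([v / n for v in r])
    return out

def scale_columns(rows):
    """Per-FEATURE standardization, like sklearn.preprocessing.scale."""
    cols = list(zip(*rows))
    scaled_cols = []
    for c in cols:
        mu = sum(c) / len(c)
        sd = (sum((v - mu) ** 2 for v in c) / len(c)) ** 0.5
        scaled_cols.append([(v - mu) / sd if sd else 0.0 for v in c])
    return [list(r) for r in zip(*scaled_cols)]

normed = normalize_rows(rows)
scaled = scale_columns(rows)
print(normed[0])  # every row now has unit length
print(scaled[0])  # every column now has mean 0 and std 1
```

Row-wise normalization entangles each sample's features with one another, discarding exactly the per-feature magnitude information the network needs to separate the classes.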
{ "domain": "datascience.stackexchange", "id": 1009, "tags": "python, neural-network, keras" }
When exactly is a connect callback of a publisher called?
Question: Hi all, I would like to know when precisely the connect_cb is being invoked, specified in Publisher ros::NodeHandle::advertise(const std::string & topic, uint32_t queue_size, const SubscriberStatusCallback & connect_cb, const SubscriberStatusCallback & disconnect_cb = SubscriberStatusCallback(), const VoidConstPtr & tracked_object = VoidConstPtr(), bool latch = false ) (see http://docs.ros.org/melodic/api/roscpp/html/classros_1_1NodeHandle.html#ae4711ef282892176ba145d02f8f45f8d). More specifically, I know that if I do a NodeHandle::advertise(), the underlying (TCP) connection is not established directly, i.e., upon returning from advertise(), subscribers may not already be connected even if they are already "there" (= known to the master) since this clearly also takes time. So, publishing messages right after the advertise() can cause messages to not be received by all "relevant nodes" at that time. (IIRC, this issue has been discussed frequently here on ros.answers.org and one nasty workaround is a delay after the advertise.) Assume I know about the existence of another node that will eventually/finally subscribe to the advertised topic. (Please, I don't want to discuss here that this may not be the idea of a decoupled communication concept, i.e., what pub/sub was designed for.) TL;DR: Is it safe to assume that once the above mentioned connect_cb is triggered, the underlying TCP connection must have been established so that publishing messages using the returned ros::Publisher object will definitely deliver the message to the node denoted by ros::SingleSubscriberPublisher::getSubscriberName() (= argument of the callback)? I was already grep'ping through ROS' source code but was not able to find the location where the callbacks are being invoked. Thank you very much! Originally posted by CodeFinder on ROS Answers with karma: 100 on 2019-10-21 Post score: 0 Answer: TL;DR: the connect_cb is called AFTER the underlying TCP connection was created.
So once the callback is triggered, a publisher knows for sure that messages being published will be received by the node denoted by the callback-provided ros::SingleSubscriberPublisher object. :-) I was following a similar path through the ROS code but was unsure if the peerConnect() function really triggers the callback. After a 2nd thought/look, also due to your efforts, it seems that it "just" adds them to the callback queue so that some idle spinning thread will pick it up later. And it seems (according to your findings) that this is done after the connection is established. Indeed, the onConnectionHeaderReceived() callback is invoked upon tcprosAcceptConnection(), which seems like evidence for my assumption. Also according to my further investigations, this cannot fail or cause the just created TCP connection to fail anymore. (As a sidenote, the ros::SingleSubscriberPublisher object provided to the callback also underpins the fact since one can send messages to the other node that just subscribed.) I know that the connect_cb is invoked once for every new subscription. It's exactly what I need. :-) Now, regarding your question wrt. the underlying application architecture that requires this kind of callback: I am working on / doing research on a local planning algorithm that entirely prevents collisions given some reasonable constraints. This requires that every robot knows about all others in the system. I am using a topic to allow robots to discover others but, once discovered, I require somewhat (rather tightly) coupled communication between the robots in order to guarantee safety (collision-free motions). It is somewhat in contrast to all these DWA, VFH, etc. planners out there. ;-) And the decoupled nature of pub/sub made it difficult to send messages while ensuring that a designated receiver is actually receiving it. (Does this answer your question?) Ah and yes, sure: state-based conditions are always better (= more precise) than time-based ones.
That's why I am intending to use these callbacks. The problem with Publisher::getNumSubscribers() is that some (e.g.) rostopic echo ... also counts as a subscriber and somewhat "distorts" the actual number of subscribers. :-) Thanks again for your super fast and detailed reply! Originally posted by CodeFinder with karma: 100 on 2019-10-23 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by gvdhoorn on 2019-10-23: Also according to my further investigations, this cannot fail or cause the just created TCP connection to fail anymore. (As a sidenote, the ros::SingleSubscriberPublisher object provided to the callback also underpins the fact since one can send messages to the other node that just subscribed.) it can certainly fail. This is all based on TCP/IP, so if the remote side goes off-line, becomes unresponsive or for some other reason does not keep up its end of the connection, all subsequent writes (and reads) will fail. So pedantically I would say the answer to your question should be "no" (we cannot be certain there is a connection at all). But for practical purposes you can probably assume that there is one after this callback is invoked. Comment by gvdhoorn on 2019-10-23: Your post is also not really an answer btw. More a further clarification of your question combined with responses to my answer + comment. Comment by CodeFinder on 2019-10-23: Sure, connections can always fail. But this should cause the disconnect_cb to be called (may be another topic on its own). Pedantically, no, okay. But since my disconnect_cb will catch the case when the connection fails, it shouldn't be an issue. But generally, this is a problem of all coupled systems (relying on others causes somewhat "dangerous" dependencies but IMHO, this is inevitable anyway.) I was asking whether the connect_cb is called after the connection was established.
One phase of establishing a TCP connection is listening for it, and finally (if there is one) accepting it. So, tcprosAcceptConnection() is the correct location to conclude that it's true, which answers my question (disregarding the fact that it can fail at any time anyway). Accepting my answer was just to let others know that they will find this information in this thread (and it's worth reading). If you prefer to not mark my post as an answer (?), I am okay with that, too...
{ "domain": "robotics.stackexchange", "id": 33919, "tags": "ros, ros-melodic, topic, publisher" }
How to calculate the width of the dark fringes?
Question: I am talking about the single slit diffraction experiment. The width of the central bright fringe is twice that of the other bright fringes. It can be calculated easily as follows. \begin{align} \text{width of other bright fringes} = \frac{\text{wave length}\times\text{distance between screen and slit}}{\text{width of the slit}} \end{align} Question From the intensity plot, it is clear that the width of the dark fringes is zero. But when we look at the spectrum, the width is not exactly zero. Could you tell me how to find the width of the dark fringes? I think intensities below some percentage of the maximum should be considered dark, right? What percentage is usually adopted by physicists? Answer: This is actually somewhat more involved. You can compute the intensity/irradiance of the Fraunhofer diffraction pattern exactly; for a circular aperture the result is the so-called Airy pattern: $I(\theta)=I_0\cdot \left[ \frac{2 \cdot J_1(k \cdot a \cdot \sin \theta)}{k \cdot a \cdot \sin \theta} \right]^2$, where $\theta$ is the observation angle, $k$ is the wavenumber and $a$ is the radius of your aperture. This gives you the intensity graph you have shown in your question. I am referencing the Wikipedia article here. In order to evaluate it you need to be able to compute Bessel functions of the first kind, and these are available in most programming languages, such as Python and Fortran, as well as software like Matlab. The angles where the intensity minima occur are the zeros of the Bessel function $J_1(x)$. Starting from there you can e.g. compute the angle $\theta$ at which the first intensity minimum occurs: $\sin(\theta) \approx 1.22\,\frac{\lambda}{d}$, where $\lambda$ is your wavelength and $d = 2a$ the diameter of your aperture. More zeros can be found on this Mathematica website. Here is the list of the first five zeros of the Bessel function of the first kind for reference: 3.8317 7.0156 10.1735 13.3237 16.4706 ... So e.g.
for the first intensity minimum you would have to solve $k \cdot a \cdot \sin(\theta) = 3.8317$ for $\theta$, which is: $\theta = \arcsin \left( \frac{3.8317}{k\cdot a} \right)$ This also means that the intensity is zero at specific angles and not over a continuum. It only appears so due to the contrast of the image you have chosen.
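If scipy's scipy.special.j1 isn't at hand, the first zero of $J_1$ can be computed from its power series plus bisection. The wavelength and aperture radius below are made-up example values:

```python
import math

def j1(x, terms=40):
    """Bessel function J1 via its power series (accurate for |x| < ~20)."""
    return sum((-1)**m / (math.factorial(m) * math.factorial(m + 1))
               * (x / 2)**(2*m + 1) for m in range(terms))

def bisect(f, a, b, tol=1e-12):
    """Root of f in [a, b], assuming f changes sign on the interval."""
    fa = f(a)
    while b - a > tol:
        mid = 0.5 * (a + b)
        if fa * f(mid) <= 0:
            b = mid
        else:
            a, fa = mid, f(mid)
    return 0.5 * (a + b)

x1 = bisect(j1, 3.0, 5.0)  # first nonzero root of J1
print(round(x1, 4))        # 3.8317

# Angle of the first dark ring for example values (633 nm light, 0.5 mm radius):
lam, a = 633e-9, 0.5e-3
k = 2 * math.pi / lam
theta = math.asin(x1 / (k * a))
print(theta)               # ~7.72e-4 rad, i.e. about 1.22 * lam / (2 * a)
```

The remaining zeros (7.0156, 10.1735, ...) come out of the same bisection by bracketing successive sign changes of $J_1$.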
{ "domain": "physics.stackexchange", "id": 66504, "tags": "optics, diffraction" }
Palindromes with stacks
Question: I'm creating a simple function to verify whether or not a string is a palindrome, and must use stacks in solving it. bool isPalindrome(string word) { // #1 determine length int n = word.size(); // #3 make sure we have an even string if (n % 2 != 0) word.erase(0, 1); // #2 load the stack stack<char> s; for (int i = 0; i < n / 2; i++) { s.push(word[i]); word.erase(i, 1); } // #4 if our stack is empty, then return true if (s.empty()) return true; // #5 compare the first & last letters for a valid palindrome /*if (s.top() == word[0]) { s.pop(); word.erase(0, 1); } else return false;*/ while (n != 0) { if (s.top() == word[0]) { s.pop(); word.erase(0, 1); // return true only when our stack is empty if (s.empty()) return true; } else return false; } } Is there a better way to do this? Are there any evident issues with the current state of the program? Answer: First, a palindrome can have an odd number of characters: Even-character palindrome: 1221 Odd-character palindrome: 13531 You only support even-character palindromes, which is bad, but even worse, you do not return an error about invalid input for an odd-digit palindrome, you remove the first digit. Second, to help prevent errors, you should always use braces around one-liner if and loop statements: if (n % 2 != 0) { word.erase(0, 1); } Third, don't leave dead code in there. You have a chunk of code commented out, so that should be removed. Fourth, your comments are in the order of #1, #3, #2, ... Either the comments should be fixed or the code should be re-ordered. Fifth, there is a mistake right here: for (int i = 0; i < n / 2; i++) { s.push(word[i]); word.erase(i, 1); } Sixth, it appears that you are using namespace std; somewhere in your code as you did not prefix stack or string. 
This is not a good idea because namespaces can contain functions with the same name, so you can have problems with your code when you start using multiple namespaces, or even if you have a function in the same file that has the same name as a function in the namespace. You can read more about it here. Returning to the fifth point's loop: with the word "qweewq", you first remove "q", so the word is "weewq". Next, i == 1, so you remove "e", so the word is "wewq", and so on. You should change this to: for (int i = 0; i < n / 2; i++) { s.push(word[0]); word.erase(0, 1); } This is a working solution: bool isPalindrome(std::string word) { int wordSize = word.size(); // #1 load the stack std::stack<char> s; for (int i = 0; i < wordSize / 2; i++) { s.push(word[0]); word.erase(0, 1); } // #2 make sure word size is same as stack size (handle odd-character palindromes) if (word.size() > s.size()) { word.erase(0, 1); } // #3 if our stack is empty, then return true if (s.size() == 0) { return true; } // #4 validate palindrome while (s.size() > 0) { if (s.top() == word[0]) { s.pop(); word.erase(0, 1); // return true only when our stack is empty if (s.size() == 0 || word.size() == 0) { return true; } } else { return false; } } } Improvements on this solution are given here.
{ "domain": "codereview.stackexchange", "id": 12327, "tags": "c++, strings, stack, palindrome" }
What processes generate entropy as heat flows across a temperature gradient
Question: Suppose we have a source at high temperature $T_\mathrm h$ and a sink at lower temperature $T_\mathrm l$. If $Q_\mathrm h$ amount of heat flows from source to sink, then the change in entropy of the source is $\frac{-Q_\mathrm h}{T_\mathrm h}$ and the change in entropy of the sink is $\frac{+Q_\mathrm h}{T_\mathrm l}$. I saw a similar question here: https://physics.stackexchange.com/questions/358142/entropy-generation-during-heat-transfer-processes. And while I understand that entropy is being generated in the partition, my question is how heat conduction across a finite temperature gradient generates entropy. Answer: For steady state heat conduction in the slab between the source and sink, we have $$\frac{d}{d x}\left(k\frac{d T}{d x}\right)=0$$where k is the thermal conductivity. If we divide this equation by the absolute temperature T, we obtain$$\frac{1}{T}\frac{d}{d x}\left(k\frac{d T}{d x}\right)=0$$Next, making use of the product rule for differentiation, we have:$$\frac{1}{T}\frac{d}{d x}\left(k\frac{d T}{d x}\right)=\frac{d}{d x}\left(\frac{k}{T}\frac{dT}{dx}\right)+\frac{k}{T^2}\left(\frac{dT}{dx}\right)^2=0$$Assuming that the source is at x = 0 and the sink is at x = L, if we integrate this equation between x = 0 and x = L, we obtain: $$-\frac{q}{T_C}+\frac{q}{T_H}+\int_0^L{\frac{q^2}{kT^2}dx}=0\tag{1}$$ where the heat flux q is given by $$q=-k\frac{dT}{dx}$$If we rearrange Eqn. 1 slightly, we obtain:$$\frac{q}{T_C}=\frac{q}{T_H}+\int_0^L{\frac{q^2}{kT^2}dx}\tag{2}$$The quantity $\frac{q}{T_H}$ is the rate at which entropy enters the slab per unit area, and the quantity $\frac{q}{T_C}$ is the rate at which entropy exits the slab per unit area of slab. Eqn. 2 tells us that the rate at which entropy exits the slab is equal to the rate at which entropy enters the slab plus the rate at which entropy is generated within the slab. According to Eqn. 2, the rate of entropy generation within the slab per unit volume is $q^2/(kT^2)$.
This entropy generation rate is, of course, positive definite.
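For constant $k$ the steady-state profile is linear in $x$, and the entropy balance of Eqn. 2 can be checked numerically (the temperatures, slab thickness, and conductivity below are arbitrary example values):

```python
# Check: q/T_C == q/T_H + integral of q^2/(k T^2) dx for a linear profile
T_H, T_C = 400.0, 300.0   # hot and cold face temperatures, K
L, k = 0.1, 50.0          # slab thickness (m) and conductivity (W/m/K)

q = k * (T_H - T_C) / L   # steady-state heat flux, W/m^2

# midpoint-rule integration of the volumetric generation term q^2/(k T^2)
N = 100_000
dx = L / N
generated = 0.0
for i in range(N):
    xm = (i + 0.5) * dx
    T = T_H + (T_C - T_H) * xm / L   # linear temperature profile
    generated += q**2 / (k * T**2) * dx

print(q / T_C)              # entropy flux out of the slab
print(q / T_H + generated)  # entropy flux in + generation: the same number
```

The generated term is strictly positive whenever $q \neq 0$, which is the "positive definite" statement above in numerical form.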
{ "domain": "chemistry.stackexchange", "id": 17168, "tags": "physical-chemistry, thermodynamics, entropy" }
Plots of prior and posterior distributions for a model
Question: I have an exercise that asks me to plot the prior and posterior distributions in the Poisson-Gamma model. I did it (I think it's correct), inspired by the answer to this question https://stats.stackexchange.com/questions/70661/how-does-the-beta-prior-affect-the-posterior-under-a-binomial-likelihood Is there anything that can improve the code? colors = c("red","blue","green","orange","purple") n = 10 lambda = .2 x = rpois(n,lambda) grid = seq(0,7,.01) alpha = c(.5,5,1,2,2) beta = c(.5,1,3,2,5) plot(grid,grid,type="n",xlim=c(0,5),ylim=c(0,4),xlab="",ylab="Prior Density", main="Prior Distributions", las=1) for(i in 1:length(alpha)){ prior = dgamma(grid,shape=alpha[i],rate=1/beta[i]) lines(grid,prior,col=colors[i],lwd=2) } legend("topleft", legend=c("Gamma(0.5,0.5)", "Gamma(5,1)", "Gamma(1,3)", "Gamma(2,2)", "Gamma(2,5)"), lwd=rep(2,5), col=colors, bty="n", ncol=3) for(i in 1:length(alpha)){ dev.new() plot(grid,grid,type="n",xlim=c(0,5),ylim=c(0,10),xlab="",ylab="Density",xaxs="i",yaxs="i", main="Prior and Posterior Distribution") alpha.star = alpha[i] + sum(x) beta.star = beta[i] + n prior = dgamma(grid,shape=alpha[i],rate=1/beta[i]) post = dgamma(grid,shape=alpha.star,rate=beta.star) lines(grid,post,lwd=2) lines(grid,prior,col=colors[i],lwd=2) legend("topright",c("Prior","Posterior"),col=c(colors[i],"black"),lwd=2) } Answer: There were a couple of errors in the code as originally posted. The code was OK as far as base R goes. First thing I did was run styler and then lintr on your code; these two things help clean up the coding style in your scripts. That does things like this: colors = c("red","blue","green","orange","purple") # changed to (spaces / idiomatic assignment): colors <- c("red", "blue", "green", "orange", "purple") Then I changed your 1:length(alpha) to seq_along(alpha). The latter is a bit safer since the former can fail with empty input. Then I replaced your 5 separate prior/posterior plots with a single plot that contains 5 panels.
This makes it easier to compare the appropriateness of the different priors. To do this, I removed your dev.new()s, added a call to par(mfrow = c(number_of_rows, number_of_columns)) and obviously, tidied this up afterwards (returning to a 1x1 grid) par(mfrow = c(2, ceiling(length(alpha) / 2))) for (i in seq_along(alpha)) { # removed dev.new() ... plotting code ... ) } par(mfrow = c(1, 1)) Then I cleaned up your experimental data and your prior-parameters / plotting parameters; they were all winding around each other. I also renamed your alpha / beta vectors - in R, these correspond to the shape and rate parameters that are passed into dgamma: # ---- experimental data num_observations <- 10 lambda <- .2 x <- rpois(num_observations, lambda) # ---- prior parameters # assumed 'beta' was a rate parameter # - this, since there was confusion in the parameterisation of dgamma(): # - early section used rate = 1 / beta[i]; # - later section used rate = beta[i]; and # - definition of beta_star = beta[i] + n; implied beta was definitely a rate shape <- c(.5, 5, 1, 2, 2) rate <- c(.5, 1, 3, 2, 5) # ---- plotting parameters colors <- c("red", "blue", "green", "orange", "purple") # ---- search parameters grid <- seq(0, 2, .01) Then I made a function to do your prior-comparison stuff (the first set of plots). Any parameters that were to be passed through to plot were passed in using the ... argument. # ---- comparison of the prior distributions plot_priors <- function(grid, shapes, rates, colors, legend_text, lwd = 2, ...) { plot(grid, grid, type = "n", ...) 
for (i in seq_along(shapes)) { prior <- dgamma(grid, shape = shapes[i], rate = rates[i]) lines(grid, prior, col = colors[i], lwd = lwd) } legend( "topleft", legend = legend_text, lwd = lwd, col = colors, bty = "n", ncol = 2 ) } This can be called like: plot_priors( grid, shape, rate, colors, legend_text = paste0("Gamma(", c("0.5,0.5", "5,1", "1,3", "2,2", "2,5"), ")"), xlim = c(0, 1), ylim = c(0, 4), xlab = "", ylab = "Prior Density", main = "Prior Distributions", las = 1 ) It's useful to split your computations away from your plotting code - so I extracted the code you used to compute the posterior params: compute_posterior_parameters <- function(observations, prior_shape, prior_rate) { list( shape = prior_shape + sum(observations), rate = prior_rate + length(observations) ) } Then I pulled the plotting code for your prior/posterior comparisons into a function (similarly to the above) plot_prior_post_comparison <- function( observations, grid, shapes, rates, colors, lwd = 2, ...) { # make a grid for plotting par(mfrow = c(2, ceiling(length(shapes) / 2))) for (i in seq_along(shapes)) { # details of the prior and post distributions posterior_params <- compute_posterior_parameters( observations, prior_shape = shapes[i], prior_rate = rates[i] ) prior <- dgamma( grid, shape = shapes[i], rate = rates[i] ) post <- dgamma( grid, shape = posterior_params$shape, rate = posterior_params$rate ) # plotting code plot(grid, grid, type = "n", ...) lines(grid, post, lwd = lwd) lines(grid, prior, col = colors[i], lwd = lwd) legend("topright", c("Prior", "Posterior"), col = c(colors[i], "black"), lwd = lwd ) } # revert the plotting grid back to 1x1 par(mfrow = c(1, 1)) } Note that the par() calls, which change the plotting grid, are all nested inside the function, so any subsequent plots should be unaffected.
Then I called that function: # ---- prior/posterior comparison plot_prior_post_comparison( observations = x, grid = grid, shapes = shape, rates = rate, colors = colors, xlim = c(0, 1), ylim = c(0, 10), xlab = "", ylab = "Density", xaxs = "i", yaxs = "i", main = "Prior and Posterior Distribution" ) Then I put all the functions at the start and all the calls at the end of a script: The full code: # ---- comparison of the prior distributions plot_priors <- function(grid, shapes, rates, colors, legend_text, lwd = 2, ...) { plot(grid, grid, type = "n", ...) for (i in seq_along(shapes)) { prior <- dgamma(grid, shape = shapes[i], rate = rates[i]) lines(grid, prior, col = colors[i], lwd = lwd) } legend( "topleft", legend = legend_text, lwd = lwd, col = colors, bty = "n", ncol = 2 ) } # ---- prior:posterior analysis compute_posterior_parameters <- function(observations, prior_shape, prior_rate) { list( shape = prior_shape + sum(observations), rate = prior_rate + length(observations) ) } plot_prior_post_comparison <- function( observations, grid, shapes, rates, colors, lwd = 2, ...) { # make a grid for plotting par(mfrow = c(2, ceiling(length(shapes) / 2))) for (i in seq_along(shapes)) { # details of the prior and post distributions posterior_params <- compute_posterior_parameters( observations, prior_shape = shapes[i], prior_rate = rates[i] ) prior <- dgamma( grid, shape = shapes[i], rate = rates[i] ) post <- dgamma( grid, shape = posterior_params$shape, rate = posterior_params$rate ) # plotting code plot(grid, grid, type = "n", ...)
lines(grid, post, lwd = lwd) lines(grid, prior, col = colors[i], lwd = lwd) legend("topright", c("Prior", "Posterior"), col = c(colors[i], "black"), lwd = lwd ) } # revert the plotting grid back to 1x1 par(mfrow = c(1, 1)) } # ---- # ---- experimental data num_observations <- 10 lambda <- .2 x <- rpois(num_observations, lambda) # ---- prior parameters # assumed 'beta' was a rate parameter # - this, since there was confusion in the parameterisation of dgamma(): # - early section used rate = 1 / beta[i]; # - later section used rate = beta[i]; and # - definition of beta_star = beta[i] + n; implied beta was definitely a rate shape <- c(.5, 5, 1, 2, 2) rate <- c(.5, 1, 3, 2, 5) # ---- plotting parameters colors <- c("red", "blue", "green", "orange", "purple") # ---- search parameters grid <- seq(0, 2, .01) # ---- comparison of priors plot_priors( grid, shape, rate, colors, legend_text = paste0("Gamma(", c("0.5,0.5", "5,1", "1,3", "2,2", "2,5"), ")"), xlim = c(0, 1), ylim = c(0, 4), xlab = "", ylab = "Prior Density", main = "Prior Distributions", las = 1 ) # ---- prior/posterior comparison plot_prior_post_comparison( observations = x, grid = grid, shapes = shape, rates = rate, colors = colors, xlim = c(0, 1), ylim = c(0, 10), xlab = "", ylab = "Density", xaxs = "i", yaxs = "i", main = "Prior and Posterior Distribution" ) I don't think it's perfect - the plotting and calculation steps are still pretty tied together - but it should be easier to add extra pairs of rate/shape values into your prior distributions, for example. One place the code could be further improved is by passing in a data.frame where each row contains the shape/rate/colour values and any annotations for a given prior. The functions contain too many arguments at the moment, and this would fix that (and guarantee that there is a shape for each rate).
{ "domain": "codereview.stackexchange", "id": 33343, "tags": "beginner, statistics, r, data-visualization" }
Setting GAZEBO_PLUGIN_PATH crashes Gazebo
Question: Hi all, So I have a model of a WAM Arm. I wrote a plugin that moves its joints through a list of joint positions. When I: don't have the GAZEBO_PLUGIN_PATH set cd into the directory of the model then run Gazebo I can insert the model from the "insert" menu. It then executes the joint movements. If I try running Gazebo from outside that directory, I can still insert the model, but it won't go through the movements. Then, if I try setting the GAZEBO_PLUGIN_PATH to the model directory, inserting the model results in an errorless crash. Any thoughts? Originally posted by Ben B on Gazebo Answers with karma: 175 on 2013-04-30 Post score: 0 Original comments Comment by scpeters on 2013-05-01: You may also need to set the LD_LIBRARY_PATH to include the model directory as well. Can you try that? Comment by Ben B on 2013-05-01: I still have both problems: The plugin isn't found when I launch Gazebo from outside of the model/plugin directory. Also, Gazebo still crashed when I set the GAZEBO_PLUGIN_PATH Comment by scpeters on 2013-05-01: Can you get a backtrace of the crash using gdb? gdb gzserver in one terminal, gzclient in another, and then insert the model. If it crashes, then bt in the gdb terminal. Comment by Ben B on 2013-05-02: Thanks for the help, scpeters. In the end, the problem was my fault. See below. Answer: In case this happens to anyone else: The plugin was designed to read in a list of joint coordinates from a file. The file was placed in the same directory as the SDF file. So: When I ran Gazebo from the directory containing the model & plugin, it found the plugin and found the joint file. No problem. When I ran Gazebo from outside of the directory without the GAZEBO_PLUGIN_PATH set, it wouldn't find the plugin, and it handled that gracefully by inserting the model, not running the plugin and giving me a "plugin not found" message. 
When I ran Gazebo from outside of the directory with the GAZEBO_PLUGIN_PATH set, it found the plugin, but the plugin couldn't find the joint file and segfaulted. This brought Gazebo down with it. Solution: In the plugin code: Use an absolute file path instead of a file path that was relative to the plugin. Or, in the plugin code, handle missing joint files more gracefully. Originally posted by Ben B with karma: 175 on 2013-05-02 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by scpeters on 2013-05-03: Can you clarify which environment variables you set? I don't think GAZEBO_MODEL_PLUGIN is right. Comment by scpeters on 2013-05-03: Also, if you want to store additional files in your model directory, you can find them using gazebo::common::SystemPaths::FindFileURI() with the uri "model://model_name/path/to/file" Comment by Ben B on 2013-05-03: Right you are :) I meant GAZEBO_PLUGIN_PATH. Thanks for the tip on FindFileURI! Comment by Ben B on 2013-05-03: Actually, I'm a bit confused how to use this. SystemPaths is a class, right? So I try to instantiate an object inside the load function of my plugin : common::SystemPaths sys_path_obj; and then I get the error: /usr/include/gazebo-1.7/gazebo/common/SystemPaths.hh:60:16: error: ‘gazebo::common::SystemPaths::SystemPaths()’ is private. Is that telling me the constructor is private? Comment by scpeters on 2013-05-03: You don't need to instantiate that class; you should be able to just call the gazebo::common::SystemPaths::FindFileURI(), as long as you have the header included.
Comment by Ben B on 2013-05-03: Doing this: std::string pathy = gazebo::common::SystemPaths::FindFileURI("model://WAM Model/forGazebo"); gives /home/benjaminblumer/.../set_joints.cc: In member function ‘virtual void gazebo::SetJoints::Load(gazebo::physics::ModelPtr, sdf::ElementPtr)’: /home/benjaminblumer/.../plugins/set_joints.cc:49:99: error: cannot call member function ‘std::string gazebo::common::SystemPaths::FindFileURI(const string&)’ without object Comment by Ben B on 2013-05-03: And I'm including the common/common.hh header. Comment by scpeters on 2013-05-03: My apologies, based on ModelDatabase.cc:396, you should use gazebo::common::SystemPaths::Instance()->FindFileURI(). Comment by Ben B on 2013-05-03: Ah. That works great. Thanks, SC.
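The "handle missing joint files more gracefully" solution above can be sketched in plain C++, independent of the Gazebo API (the JointFileResult type and function name here are invented for illustration): check that the stream actually opened before parsing, and return a failure flag instead of reading through an invalid stream.

```cpp
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Result of trying to load a list of joint positions from a file.
// 'ok' stays false when the file is missing or unreadable, so the
// caller can skip the trajectory instead of crashing the simulator.
struct JointFileResult {
    bool ok = false;
    std::vector<double> positions;
};

JointFileResult loadJointFile(const std::string &path) {
    JointFileResult result;
    std::ifstream in(path);
    if (!in.is_open()) {
        // Graceful degradation: report the problem and return empty.
        std::cerr << "joint file not found: " << path << "\n";
        return result;
    }
    double value = 0.0;
    while (in >> value) {
        result.positions.push_back(value);
    }
    result.ok = true;
    return result;
}
```

In the real plugin the path would come from gazebo::common::SystemPaths::Instance()->FindFileURI("model://...") as suggested in the comments; the point is only that a failed open must be detected and reported rather than allowed to segfault Gazebo.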
{ "domain": "robotics.stackexchange", "id": 3253, "tags": "gazebo" }
Automatic function call on scope exit
Question: I need to add logging functionality to existing code, based on the result of an operation. I was thinking of creating a class that, when constructed, receives a condition function and a function for execution. Based on the result of the condition, the code will be executed. Example: class auto_logger{ public: auto_logger(const std::function<bool()>& cond, const std::function<void()>& func): _cond(cond), _func(func) {} virtual ~auto_logger() { try { if (_cond()) { _func(); } } catch (...) { } } private: std::function<bool()> _cond; std::function<void()> _func; }; I will use this class like this: int existing_func() { int res = FAILURE; // this is the only additional code auto_logger logger([&](){ return res == SUCCESS; }, [](){ print_to_log(); }); do_some_stuff_that_may_throw(); res = perform_operation(); return res; } I would be happy to get some comments on this code. Are there any problems I am not seeing in this solution? Are there any better solutions? Answer: Echoing the comments from the question, the usual design for these sorts of things, often called a scope guard, is to have a function that is called when it no longer needs to be run (frequently called dismiss or release). int existing_func() { int res = FAILURE; auto_logger logger([](){ print_to_log(); }); do_some_stuff_that_may_throw(); res = perform_operation(); dismiss_logger(logger, res); return res; } void dismiss_logger(auto_logger& logger, int result) { if (result != SUCCESS) { logger.dont_log(); } } virtual ~auto_logger() Is the expectation that there will be derived classes destroyed via base pointers or via base references? If not, a non-virtual destructor is a common way to signal that a class is not designed to be inherited from. } catch (...) { Note that Visual C++ (at least) can be configured via the /EHa switch (this is usually a misguided configuration, IMHO) to catch structured exceptions (things like access violations/segmentation faults) in a catch (...) handler.
A common technique to hedge against this configuration is to catch std::exception& under the assumption that all sensible exceptions will derive from it. Anything else being caught (e.g., an int, a structured exception) is likely indicative of a bug in the program so egregious that termination is the only safe recourse, as there's likely some corrupt state somewhere that prevents any further cleanup. This assumption may or may not be valid. Say this code needs to work in an environment where it is normal and okay to throw and catch things that don't derive from std::exception. (E.g., a framework that has its own base exception class is being used.) If Visual C++ isn't being used or /EHa will never be used, this recommendation may come off as paranoid. :-)
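The dismiss/release design described at the top of the answer can be sketched as a small self-contained scope guard (an illustrative class, not the questioner's auto_logger): the stored callable runs in the destructor unless dismiss() was called first.

```cpp
#include <functional>
#include <utility>

// Minimal scope guard: runs the stored action on scope exit
// unless dismiss() was called before destruction.
class scope_guard {
public:
    explicit scope_guard(std::function<void()> func)
        : func_(std::move(func)) {}

    // Non-copyable: a guard owns exactly one pending action.
    scope_guard(const scope_guard&) = delete;
    scope_guard& operator=(const scope_guard&) = delete;

    ~scope_guard() {
        if (active_) {
            func_();  // a throwing action here would need the catch discussed above
        }
    }

    // Call on the success path to cancel the pending action.
    void dismiss() { active_ = false; }

private:
    std::function<void()> func_;
    bool active_ = true;
};
```

Once dismiss() has been called the action is skipped; otherwise it runs on any scope exit, including stack unwinding after an exception, which is exactly the behaviour the logging use case needs.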
{ "domain": "codereview.stackexchange", "id": 5021, "tags": "c++, c++11" }
What would be likely to completely stop a subatomic particle assuming it was possible?
Question: Suppose that completely stopping a subatomic particle, such as an electron, could happen under certain conditions. What would be likely ways to get an electron to be perfectly still, or even just stop orbiting the nucleus and collapse into it by electromagnetic forces? What would likely be required, below absolute-zero temperatures? Negative energy? Or could a 0 energy rest state not exist in any form, of any possible universe imaginable? Let's say there was a magnetic field of a certain shape that we could postulate that is so intensely strong that if we put an electron in the center of it, it could not move at all in any direction. Would the energy requirement of the field be infinite? What would be the particle's recourse under this condition? Further, suppose it were possible and one could trap an electron and stop all motion completely. What would this do to Heisenberg's Uncertainty Principle and/or Quantum Mechanics, because its position and momentum (0) would both be known? If it can be done, is Quantum Mechanics no longer an accurate model of reality under these conditions? Could we say QM is an accurate model under most conditions, except where it is possible to measure both position and momentum of a particle with zero uncertainty? Clarification: Please assume, confined in a thought experiment, that it IS possible to stop a particle so that it has 0 fixed energy. This may mean Quantum Mechanics is false, and it may also mean that under certain conditions the uncertainty commutation is 0. ASSUMING that it could physically be done, what would be likely to do it, and what would be the implications on the rest of physics? Bonus Points Now here's the step I'm really after - can anyone tell me why a model in which particles can be stopped is so obviously not the reality we live in?
Consider: the 'corrected' model is QM everywhere else (so all its predictions hold in the 'normal' regions of the universe), but particles can be COMPLETELY stopped {{inside black holes, between supermagnetic fields, or insert other extremely difficult/rare conditions here}}. How do we know it's the case that because the uncertainty principle has lived up to testing under earth-accessible conditions, that it holds up under ALL conditions, everywhere, for all times? Answer: Your question is interesting, and gets specifically to the kinds of questions that quantum mechanics was intended to answer in the first place. It helps to understand the motivation behind the original Bohr model of the atom, and how that led to QM in the first place. The problem Bohr was trying to address can be paraphrased as, "If an electron orbits a nucleus like a planet, why doesn't it gradually lose energy and spiral into the nucleus?" The answer came when Bohr realized that the orbital momentum was quantized, effectively meaning that since the electron had mass then by the relationship $p=mv$, the velocity was also quantized (note: this simple expression is more complicated when relativity is included, but the discussion can continue without including it). These quantized values of momentum/velocity are what one would call eigenvalues, or observables in quantum mechanics. Since the electron can only change orbits by giving off specific quantities of energy instead of giving off energy continuously, it remained in a stable orbit relative to the nucleus, thus preventing it from spiraling in. What is important to understand is the idea of the potential well. In the Bohr model, the electrons closest to the nucleus have higher velocity than the electrons further away. In other words, they have greater kinetic energy ($K.E. = \frac{1}{2} mass \times velocity^2$), but they have less potential energy since they are closer to the nucleus (obeying the relationship $P.E. = mass \times distance \times gravity$). However, the total energy ($K.E. + P.E.$) associated with orbits closer to the nucleus is less than those further away. So in order for the electron to move closer to the nucleus, energy must be given up. This is accomplished by the emission of a photon. Alternatively, if one wants to cause an electron to move into a more distant orbit, then one must add energy through use of a photon. It is in contemplating how to determine the orbit of the electron that the uncertainty principle first became apparent. The only probe that we have available to determine the position of an electron in its orbit is a photon, and the photon must be of sufficient energy in order for it to be small enough to give a meaningful result; however, if we use a photon small enough (in terms of wavelength), it will have enough energy to shift the electron into a different orbit, and then we would have to start the process all over again. A free electron has sufficient energy to escape the nucleus. In other words, it has acquired sufficient energy to fill its potential energy deficit. If there were only one nucleus in the universe, the potential deficit would only be eliminated when the electron was at infinite distance from the nucleus. In that situation, the implication is that the electron would also have zero velocity. This situation is obviously unrealistic: first, there is more than one nucleus in the universe, and second, to verify that a particle has zero velocity at infinite distance is clearly an impossible task. If we move past the Bohr model and into more modern quantum mechanics, the question then is whether there are eigenstates that have eigenvalues for momentum of a particle that are equal to zero. It is important to review some basic facts about matrix operations and linear algebra. If 0 is an eigenvalue of a matrix A, then the equation A x = λ x = 0 x = 0 must have nonzero solutions, which are the eigenvectors associated with λ = 0.
But if A is square and A x = 0 has nonzero solutions, then A must be singular, that is, det A must be 0. This observation establishes the following fact: Zero is an eigenvalue of a matrix if and only if the matrix is singular. This means that the matrix in question is not of full rank. In QFT this has a very specific interpretation. The annihilation operator has the power to destroy the vacuum state and map it to zero. This situation is understood to be associated with the free field vacuum state with no particles. This state is necessary because it allows us to find vacuum state solutions for the associated quantum mechanical system. By means of analogy, we can see that the solution to the ground state problem is the solution to the homogeneous part of a differential equation. The Schrödinger equation is a linear, homogeneous equation which governs evolution of the wave function of a particle. The solutions of the Schrödinger equation can be used to understand particle motion. The exact position and momentum of a particle can only be known if h (Planck's constant) approaches zero; however, in quantum mechanics, Planck's constant is fundamental to the theory, so this cannot occur in the single particle case. Because of this, momentum and position uncertainty establish an inverse relation to each other, and if the uncertainty of momentum is zero, then the uncertainty in position is infinite. For these reasons it is not possible to talk about a particle "stopping" or being "stopped" in any meaningful or non-contrived sense.
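The inverse relation invoked in the last paragraph is the Heisenberg uncertainty relation; writing it out makes the conclusion explicit: $$\Delta x \, \Delta p \geq \frac{\hbar}{2} \quad\Rightarrow\quad \Delta x \geq \frac{\hbar}{2\,\Delta p} \to \infty \;\text{ as }\; \Delta p \to 0\:,$$ so a state with exactly zero momentum uncertainty would have to be completely delocalised over all of space, which is the formal reason a "stopped" particle is not a normalisable state of the theory.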
{ "domain": "physics.stackexchange", "id": 55023, "tags": "quantum-mechanics, heisenberg-uncertainty-principle, subatomic" }
How to solve 'Parameter '~moveit_controller_manager' not specified'?
Question: When I run: roslaunch sam_moveit_wizard_generated move_group.launch It says: [ INFO] [1400647657.748106413, 748.461000000]: Using planning request adapter 'Fix Start State Path Constraints' [FATAL] [1400647657.809613293, 748.476000000]: Parameter '~moveit_controller_manager' not specified. This is needed to identify the plugin to use for interacting with controllers. No paths can be executed. [ INFO] [1400647657.826129681, 748.479000000]: Trajectory execution is managing controllers and I try to run: sam@sam:~/code/groovy_overlay/src/sam_moveit_learning/bin$ rosrun sam_moveit_learning pose right_arm 0.7 -0.2 0.7 0 0 0 [ INFO] [1400647661.271172191, 749.458000000]: Ready to take MoveGroup commands for group right_arm. [ INFO] [1400647661.271335019, 749.458000000]: Move to : x=0.700000, y=-0.200000, z=0.700000, roll=0.000000, pitch=0.000000, yaw=0.000000 [ INFO] [1400647662.016181981, 749.640000000]: ABORTED: Solution found but controller failed during execution ^Csam@sam:~/code/groovy_overlay/src/sam_moveit_learning/bin$ I also got this error: [ INFO] [1400647661.483935586, 749.503000000]: Path simplification took 0.016063 seconds [ERROR] [1400647662.013314708, 749.640000000]: Unable to identify any set of controllers that can actuate the specified joints: [ r_elbow_flex_joint r_forearm_roll_joint r_shoulder_lift_joint r_shoulder_pan_joint r_upper_arm_roll_joint r_wrist_flex_joint r_wrist_roll_joint ] [ERROR] [1400647662.013418353, 749.640000000]: Apparently trajectory initialization failed I tried to fix it by installing the plugins package, but it did not work: sam@sam:~/code/groovy_overlay/src/sam_moveit_learning/bin$ sudo apt-get install ros-groovy-pr2-moveit-plugins [sudo] password for sam: Reading package lists... Done Building dependency tree Reading state information... Done ros-groovy-pr2-moveit-plugins is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 168 not upgraded. sam@sam:~/code/groovy_overlay/src/sam_moveit_learning/bin$ How can I solve it?
Thank you~ Originally posted by sam on ROS Answers with karma: 2570 on 2014-05-20 Post score: 2 Answer: Did you create the MoveIt config by yourself for your robot, Sam, using the setup assistant for instance or by hand? If yes, using the setup assistant is not enough to use MoveIt; you need to provide MoveIt with a way to control your robot, through a controller manager. By default, the setup assistant creates a fake_controller_manager. See the default fake_controller_manager.launch.xml file in the launch/ directory. You may create your own controller manager, or check if it's not already in the package moveit_robots. This controller manager is launched in trajectory_execution.launch. See http://moveit.ros.org/wiki/Executing_Trajectories_with_MoveIt!#The_MoveIt.21_Controller_Manager_Plugin Originally posted by courrier with karma: 454 on 2014-09-17 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by zweistein on 2015-10-08: The link does not work any more and I have the same issue, but I used the MoveIt assistant and I am using the dynamixel_motor controller manager. I feel that I missed something to set up in the launch files, but I can't figure out what. Any ideas?
{ "domain": "robotics.stackexchange", "id": 18011, "tags": "moveit" }
How is postselection used in quantum tomography?
Question: I refer to this paper but reproduce a simplified version of their argument. Apologies if I have misrepresented the argument of the paper! Alice has a classical description of a quantum state $\rho$. Alice and Bob both agree on a two outcome observable $M$. Now, the goal for Bob is to come up with a classical description of a state $\sigma$ that gives the right measurement statistics i.e. $Tr(M\rho) \approx Tr(M\sigma)$. The way this is done is that Bob has a guess state, say the maximally mixed state, $I$. Alice then tells him the value of $Tr(M\rho)$. Bob then measures the maximally mixed state repeatedly (or he runs a classical simulation of this with many copies of the maximally mixed state) and "postselects" the outcomes where he obtains $Tr(M\rho) \approx Tr(MI)$. In this way, he obtains a state that reproduces the measurement statistics of the true state. What is the meaning of postselection in this context? How does Bob go from $I$ to $\sigma$ in this procedure? Answer: In principle, Bob here just has to guess the $2\times 2$ matrix $\sigma$. If he starts with any parametric state $\sigma(\alpha,\beta)$ with $\alpha,\beta\in\mathbb{C}$ and measures the outcome Tr$(M\sigma)$ with the post measurement state $\sigma '=M\sigma M^\dagger/\text{Tr}(M\sigma M^\dagger)$, he receives a number and he has to tune $\alpha,\beta$ to come close to the value Tr$(M\rho)$. This tuning is done on the basis of postselection by which it is implied that he selects the state $\sigma$ close to $\rho$ on the constraint of minimising the relative entropy: \begin{equation} \text{Tr}\left(\rho\ln\rho-\rho\ln\sigma\right) \end{equation} so as to move closer to the state $\rho$, which can be found out in terms of $\alpha,\beta$. In such problems usually one starts with a parametric state in one variable and optimises over it.
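The statistical content of the postselection step described in the question can be written out explicitly. If the two-outcome measurement $\{M, I-M\}$ is repeated $N$ times on copies of a candidate state $\sigma$, the observed frequency of the $M$ outcome estimates the Born probability, $$\hat{p} = \frac{n_M}{N} \longrightarrow \mathrm{Tr}(M\sigma) \quad (N\to\infty)\:,$$ and Bob postselects, i.e. keeps, those candidates for which $\hat{p}$ is close to the value $\mathrm{Tr}(M\rho)$ announced by Alice.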
{ "domain": "quantumcomputing.stackexchange", "id": 528, "tags": "quantum-state, measurement, postselection, state-tomography" }
Java 8 CompletableFuture - fan out implementation
Question: I was wondering what is the best way to implement a fan out type of functionality with Java 8 Completable future. I recently rewrote a function that had a bunch of old Future instances and then calling get in a loop, blocking on each one, to a somewhat cleaner variant using CompletableFuture. However I am seeing about 2x drop in performance so I am assuming something is not quite right in the way I'm using the new API. The code looks something like this: if (!clinet.login()) { throw new LoginException("There was a login error"); } CompletableFuture<List<String>> smths = CompletableFuture .supplyAsync(client::getSmth); CompletableFuture<List<Data>> smths2 = smths.thenApply(client::getInformation) .thenApplyAsync((list) -> list.stream().map(obj -> mapper.map(obj, Data.class)).collect(toList())); List<CompletableFuture<Map<String, AnotherData>>> waitGroup = new ArrayList<>(); waitGroup.add(notablesFuture.thenComposeAsync(clientb::getIvPercentileM12M)); waitGroup.add(notablesFuture.thenComposeAsync(clientb::getIvPercentileM6M)); waitGroup.add(notablesFuture.thenComposeAsync(clientb::getIvPercentile2M6M)); waitGroup.add(notablesFuture.thenComposeAsync(clientb::getIvPercentile2M12M)); waitGroup.add(notablesFuture.thenComposeAsync(clientb::getIvPercentile2M24M)); waitGroup.add(notablesFuture.thenComposeAsync(clientb::getHvPercentileM6M)); waitGroup.add(notablesFuture.thenComposeAsync(clientb::getHvPercentile2M6M)); waitGroup.add(notablesFuture.thenComposeAsync(clientb::getHvPercentileM12M)); waitGroup.add(notablesFuture.thenComposeAsync(clientb::getHvPercentile2M12M)); waitGroup.add(notablesFuture.thenComposeAsync(clientb::getHvPercentile2M24M)); CompletableFuture .allOf(waitGroup.toArray(new CompletableFuture[waitGroup.size()])); List<Data> data = smths2.join(); Map<String, Set<AnotherData>> volPercent = waitGroup.stream() .map(CompletableFuture::join) .flatMap((e) -> e.entrySet().stream()) .collect(groupingBy(Map.Entry::getKey, mapping(Map.Entry::getValue, 
toSet()))); data.forEach((d) -> { Set<AnotherData> asdasd = volPercent.get(d.getSymbol()); if (asdasd != null) { d.add(asdasd); } }); return stocks; client::getInformation is a blocking network call returning a List, all the clientb.* are doing is something like: return CompletableFuture.supplyAsync(() -> blockingNetworkCall(params, symbols) .entrySet().stream() .collect(Collectors.toMap(Map.Entry::getKey, value -> new Data(value.getValue(), TimePeriod.M1, TimePeriod.Y1)))); The original code looked something like this: List<String> symbols = client.block().get(); Future<Map<String, Data>> smth = client.block2(symbols); Future<Map<String, Double>> ivM6MResultsFuture = clientB.getIvdataM6M(symbols); Future<Map<String, Double>> ivM12MResultsFuture = clientB.getIvdataM12M(symbols); Future<Map<String, Double>> iv2M6MResultsFuture = clientB.getIvdata2M6M(symbols); Future<Map<String, Double>> iv2M12MResultsFuture = clientB.getIvdata2M12M(symbols); Future<Map<String, Double>> iv2M24MResultsFuture = clientB.getIvdata2M24M(symbols); Future<Map<String, Double>> hvM6MResultsFuture = clientB.getHvdataM6M(symbols); Future<Map<String, Double>> hvM12MResultsFuture = clientB.getHvdataM12M(symbols); Future<Map<String, Double>> hv2M6MResultsFuture = clientB.getHvdata2M6M(symbols); Future<Map<String, Double>> hv2M12MResultsFuture = clientB.getHvdata2M12M(symbols); Future<Map<String, Double>> hv2M24MResultsFuture = clientB.getHvdata2M24M(symbols); Map<String, Data> doughResults = smth.get(); Map<String, Double> ivM6MResults = ivM6MResultsFuture.get(); Map<String, Double> ivM12MResults = ivM12MResultsFuture.get(); Map<String, Double> iv2M6MResults = iv2M6MResultsFuture.get(); Map<String, Double> iv2M12MResults = iv2M12MResultsFuture.get(); Map<String, Double> iv2M24MResults = iv2M24MResultsFuture.get(); Map<String, Double> hvM6MResults = hvM6MResultsFuture.get(); Map<String, Double> hvM12MResults = hvM12MResultsFuture.get(); Map<String, Double> hv2M6MResults = 
hv2M6MResultsFuture.get(); Map<String, Double> hv2M12MResults = hv2M12MResultsFuture.get(); Map<String, Double> hv2M24MResults = hv2M24MResultsFuture.get(); with a big for loop to map all the futures together and aggregate a result. Hopefully it's clear from the code what it's doing, but essentially: 1. I make one network call that gets a list. 2. Based on this list I call an internal service that generates some objects. 3. Based on the first list I spawn a bunch of tasks to fetch various data. 4. I populate the objects generated in 2 with the items in 3 - essentially 2 and 3 can run concurrently since they are not dependent. Two main problems: 1. Do you see any problems with my CompletableFuture usage, and any room to improve the implementation based on the outlined criteria? Currently it's about 2x slower than the regular blocking .get() old Futures given as reference. 2. I am a little bit annoyed by the way joining is done, by having to call .allOf() with a void result; is there a better way to do that in the API that I am missing? As a sidenote, I realize I'm doing a bit more work in the Java 8 variant with a bunch of streams and mapping happening, but the time difference is from 22sec in the old to 45secs in the new, and total items is about 200, so the majority is actually spent in networking and waiting and not the stream operations.
Answer: CompletableFuture<List<String>> smths = CompletableFuture .supplyAsync(client::getSmth); CompletableFuture<List<Data>> smths2 = smths.thenApply(client::getInformation) .thenApplyAsync((list) -> list.stream().map(obj -> mapper.map(obj, Data.class)) .collect(toList())); Since smths is only used in the immediate line, you can consider combining both together: CompletableFuture<List<Data>> smths2 = CompletableFuture .supplyAsync(client::getSmth).thenApply(client::getInformation) .thenApplyAsync(list -> list.stream().map(v -> mapper.map(v, Data.class)) .collect(toList())); On the same note, further down you are processing your smths2 by the following: List<Data> data = smths2.join(); // Map something... data.forEach((d) -> { Set<AnotherData> asdasd = volPercent.get(d.getSymbol()); if (asdasd != null) { d.add(asdasd); } }); That can probably be done in a more functional way: // Map something... List<Data> data = smths2.join(); data.forEach(v -> Optional.ofNullable(volPercent.get(v.getSymbol())) .ifPresent(/* add this to v? */)); Your code actually wouldn't work as a List<Data> can't add() a Set<AnotherData>, possibly only an addAll() if Data is the parent class of AnotherData. Please review this part. CompletableFuture .allOf(waitGroup.toArray(new CompletableFuture[waitGroup.size()])); The return value from the above doesn't seem to be used at all, copy-and-paste error? Looks like this is probably OK. I'd like to review the chunk that is waitGroup.add(...), but since it isn't clear what notablesFuture is, and unless one assumes it's the equivalent of doing a List<String> symbols = client.block().get() based on your original code... oh. Now, it looks like notablesFuture is actually smths, and that changes the answer significantly... Take #2 First, you probably can use better method names than just getIvPercentileM12M, getHvPercentile2M24M etc.
Second, instead of manually creating the List of CompletableFutures, you can probably Stream it too using a small helper method to use method references in-place: private static <T, U> Function<T, U> ref(Function<T, U> ref) { return ref; } private static Map<String, Set<Double>> getMap(Client client, CompletableFuture<List<String>> notablesFuture) { return Stream.of( ref(client::getIvPercentileM12M), ref(client::getIvPercentileM6M), ref(client::getIvPercentile2M6M), ref(client::getIvPercentile2M12M), ref(client::getIvPercentile2M24M), ref(client::getHvPercentileM6M), ref(client::getHvPercentile2M6M), ref(client::getHvPercentileM12M), ref(client::getHvPercentile2M12M), ref(client::getHvPercentile2M24M)) .map(notablesFuture::thenComposeAsync).map(CompletableFuture::join) .flatMap(e -> e.entrySet().stream()) .collect(Collectors.groupingBy(Map.Entry::getKey, Collectors.mapping(Map.Entry::getValue, Collectors.toSet()))); } Your code in your current method then becomes: CompletableFuture<List<String>> notablesFuture = CompletableFuture .supplyAsync(client::getSmth); Map<String, Set<Double>> volPercent = getMap(clientb, notablesFuture); List<Data> data = notablesFuture.thenApply( client::getInformation).thenApplyAsync( list -> list.stream().map(v -> mapper.map(v, Data.class)) .collect(Collectors.toList())).join(); // use volPercent and data Alternatively, you can consider making use of the CompletableFuture semantics completely (pun unintended): // modify getMap's return value as such: private static CompletableFuture<Map<String, Set<Double>>> getMap(Client client, CompletableFuture<List<String>> notablesFuture) { return CompletableFuture.completedFuture(Stream.of( ref(client::getIvPercentileM12M), /* remaining method references... */) .map(notablesFuture::thenComposeAsync).map(CompletableFuture::join) /* remaining flatMap() and collect() steps... 
*/ ); } And the code in your current method can then be modified into: CompletableFuture<List<String>> notablesFuture = CompletableFuture .supplyAsync(client::getSmth); List<Double> result = getMap(clientb, notablesFuture).thenCombineAsync( notablesFuture.thenApply(client::getInformation).thenApplyAsync( list -> list.stream().map(v -> mapper.map(v, Data.class)) .collect(Collectors.toList())), lookup()).join(); The lookup() method processes your Map and List results together in the solution you require, and since, as mentioned above, a Set<A> can't be add()-ed to a List<B>, this will have to be open-ended... With that said, here's a sample implementation for reference: private static BiFunction<Map<String, Set<Double>>, List<Data>, List<Double>> lookup() { return (map, list) -> list.stream().map(v -> map.get(v.getSymbol())) .filter(Objects::nonNull).flatMap(Set::stream) .collect(Collectors.toList()); }
{ "domain": "codereview.stackexchange", "id": 14126, "tags": "java, asynchronous, lambda, promise" }
Is $\frac{\partial}{\partial \Phi(y)} \Phi (x) = \delta(x-y)$ correct?
Question: As stated in the heading: Is $\frac{\partial}{\partial \Phi(y)} \Phi (x) = \delta(x-y)$ correct? Here $\Phi(x)$ denotes a scalar field. And if yes, why? Any reference where I can read about this would be great. Answer: It is not. The correct identity is $$\frac{\delta}{\delta \Phi(y)} \Phi (x) = \delta(x-y)$$ where the derivative is the functional derivative. If $F : D(F)\ni \Phi \mapsto F(\Phi)\in \mathbb C$ is a function from a space of functions $D(F)$ to $\mathbb C$, the functional derivative of $F$, if it exists, is the distribution $\frac{\delta F}{\delta \Phi}$ acting on smooth compact support functions $g$ such that: $$\left\langle \frac{\delta F}{\delta \Phi}, g \right\rangle := \frac{d}{d\alpha}\biggr\rvert_{\alpha=0} F(\Phi + \alpha g)\:.$$ In the considered case the functional $F$ is the one associating the generic $\Phi$ with its value at the given point $x$ in its domain: $$F : \Phi \mapsto \Phi(x)\:.$$ In other words: $$F(\Phi):= \int \Phi(y) \delta(y-x) \:dy$$ hence, $$\frac{d}{d\alpha}\biggr\rvert_{\alpha=0} F(\Phi+\alpha g) = \int \delta(y-x) g(y)\:dy$$ which can be re-written as $$ \frac{\delta F}{\delta \Phi} = \delta_x$$ or, adopting the notation of physicists: $$\frac{\delta}{\delta \Phi(y)} \Phi (x) = \delta(x-y)\:.$$ By specifying the structure of the domain $D(F)$ more precisely, one can define the functional derivative as a so-called Gateaux derivative.
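For intuition, the defining limit can also be checked numerically: discretize the field on a grid, replace the functional derivative by an ordinary partial derivative divided by the grid spacing, and the result is a Kronecker-delta spike of height $1/\Delta y$ at $y = x$, the discrete stand-in for $\delta(x-y)$. A small sketch (the grid, spacing, and helper name below are mine, chosen purely for illustration):

```python
def functional_derivative(F, phi, dy, eps=1e-6):
    """Finite-difference functional derivative on a grid:
    delta F / delta phi(y_j)  ~  (F(phi + eps*e_j) - F(phi)) / (eps * dy)."""
    base = F(phi)
    out = []
    for j in range(len(phi)):
        bumped = list(phi)
        bumped[j] += eps          # bump the field at one grid point only
        out.append((F(bumped) - base) / (eps * dy))
    return out

dy = 0.1                          # grid spacing
phi = [0.0] * 11                  # a discretized field
x0 = 5                            # fixed evaluation point x
F = lambda p: p[x0]               # the functional F(phi) = phi(x)

d = functional_derivative(F, phi, dy)
# d is zero everywhere except a spike of height 1/dy at y = x0:
# the discrete version of delta(x - y)
```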
{ "domain": "physics.stackexchange", "id": 13436, "tags": "field-theory, functional-derivatives" }
roslaunch: is neither a launch file in package
Question: Hi, I'm using Kinetic and I'm trying to launch others' code and the examples, but every roslaunch shows this message: [*.launch] is neither a launch file in package [nao_description] nor is [nao_description] a launch file name The traceback for the exception was written to the log file I used source /home/myuser/catkin_ws/devel/setup.bash and ran roslaunch again, but the problem persists. Can someone help, please? Originally posted by synapsido on ROS Answers with karma: 31 on 2020-08-25 Post score: 0 Original comments Comment by billy on 2020-08-25: I get that error when I try roslaunch in the wrong folder, or misspell the file name. If you're not in the folder of the launch file, you have to use the exact path in the call. Try it first from within the folder containing the launch file. Answer: Things to check Is the package compiled with catkin_make or catkin build and was that successful? Can you find it in echo $ROS_PACKAGE_PATH Can you find it in rospack list | grep <package name> Originally posted by Loy with karma: 141 on 2021-03-23 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by dvogureckiy99 on 2023-02-18: if I can't find the package, but catkin build went well, what can be the problem? (but maybe I ran catkin build in a different package).
{ "domain": "robotics.stackexchange", "id": 39071, "tags": "ros, roslaunch, ros-kinetic, nao, robot" }
Understanding anti aliasing filter
Question: I'm working on a project for the university about the anti-aliasing filter. I have a small audio file, which has a sampling rate of 22 kHz. Now when I downsample it 8 times, the file sounds weird. So far so good. Now I should design an anti-aliasing filter to avoid that. But how do I choose the cut-off frequency based on the spectrum of the 8-times downsampled signal? Any help would be appreciated. Answer: From your statement, I understand you are performing discrete-time downsampling of a sequence $x[n]$ that was previously converted to digital without aliasing. So to prevent any aliasing in the downsampled sequence $y[n]$, you should first apply an anti-aliasing lowpass filter to the input sequence $x[n]$. The cutoff frequency of the lowpass filter is given by the downsampling ratio $D$ and is: $$ \omega_c = \frac{\pi}{D}$$ radians per sample. The gain of the lowpass filter is one. The following MATLAB/Octave code designs a linear phase, type-I, FIR lowpass filter of odd length $L = 2K+1$ with a group delay of $K$ samples: D = 8; % downsampling factor K = 256; % group delay of FIR linear phase filter. h = fir1(2*K, 1/D); % FIR, LPF, Type-I, L = 2K+1, wc = pi/D
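If MATLAB/Octave is not at hand, the same kind of type-I linear-phase design can be sketched in plain Python with a windowed sinc. The helper below is my own illustrative code, not a library routine; it mirrors what fir1 does with its default Hamming window:

```python
import math

def lowpass_fir(num_taps, cutoff):
    """Windowed-sinc FIR lowpass. cutoff is normalized so 1.0 = Nyquist,
    i.e. cutoff = 1/D puts the edge at pi/D radians per sample."""
    assert num_taps % 2 == 1, "odd length -> type-I linear phase"
    K = (num_taps - 1) // 2            # group delay in samples
    h = []
    for n in range(num_taps):
        m = n - K
        # ideal lowpass impulse response sin(pi*cutoff*m) / (pi*m)
        ideal = cutoff if m == 0 else math.sin(math.pi * cutoff * m) / (math.pi * m)
        # Hamming window, as fir1 uses by default
        w = 0.54 - 0.46 * math.cos(2.0 * math.pi * n / (num_taps - 1))
        h.append(ideal * w)
    s = sum(h)
    return [c / s for c in h]          # normalize for unity gain at DC

D = 8
h = lowpass_fir(2 * 256 + 1, 1.0 / D)  # L = 2K+1 = 513 taps, wc = pi/8
```

The returned taps are symmetric (linear phase), have unity DC gain, and strongly attenuate everything near the new Nyquist edge, which is what prevents the audible aliasing after 8:1 decimation.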
{ "domain": "dsp.stackexchange", "id": 7131, "tags": "audio, anti-aliasing-filter" }
How channel state information is calculated from Sounding Packet
Question: What I understand is that, for the Explicit Transmit Beamforming scenario, the Transmitter sends an NDP packet (aka HT Sounding) which consists of OFDM training symbols in the packet preamble (HT-LTF) which are randomly generated +/-1 for each subcarrier (say 56). I do not have much knowledge on this, but if anyone can help me to understand how these training symbols are processed at the receiver end to evaluate CSI, that would be helpful. Answer: According to pages 5-7 in this, the Channel State Information (CSI) contains 2 measurements per matrix element. The CSI essentially tells the Access Point (AP) two things: where the client is (bearing) and how far away on that bearing it is. However, because of multipath propagation, the "bearing" might not correspond to the true bearing of the client but to the direction from which the signal is "louder". This means that the signal could be bouncing off a wall and then finding the client, rather than traveling to it in a straight line. So, the task we have now is to calculate these two numbers. We assume that the channel identification pseudorandom sequences in the preamble are transmitted at the same time from the AP: To estimate the $A$, simply sum the "strength" of the received signal at each antenna. To estimate the $T$, you need to estimate the time differences between the times the signal was received. At this point, you need to decide on the ordering of the elements. You can estimate both using a matched filter whose $h$ is the pseudorandom sequence. The matched filter will return a relatively high output when it detects the pseudorandom sequence at its input. So, imagine a bank of matched filters behind each receiver. Each one of the antennas receives the AP signal at slightly different times and amplitudes. Each one of the matched filters produces a slightly different amplitude (the $A$ component) when it "sees" the pseudorandom sequence at its input.
Consequently, because of the difference in arrival times, each one of these high outputs will occur at slightly different times. To estimate $T$, take one of your antennas as a reference element and measure the time that passes between two high outputs at each matched filter. You might also find it useful to read up on the fundamentals of the following techniques: The Matched Filter and its applications in Radar and Sonar Beamforming Which sooner or later will lead you to Adaptive Beamforming which is what you are dealing with here. Phased Arrays Which, again, is what you are dealing with here. Pseudorandom binary sequences (PRBS) Which is what your header sequence is right here PRB Sequences with applications to channel characterisation In brief, by examining the distortions that a PRBS undergoes when transmitted through a "channel" (whatever this might be), you get an estimate of the channel's condition or state, which you can then use to adapt some aspect of transmission (forward error control sequence, beamforming, etc). Characterisation of an acoustic communication channel using PRBS Transmitter identification using PRBS And more... Hope this helps.
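The matched-filter idea above is easy to sketch numerically. The toy Python illustration below is not actual 802.11 processing: the sequence length, delays, amplitudes, and helper names are all invented for the demo. A known +/-1 training sequence is buried in noise at two "antennas", and the lag of the correlation peak recovers each arrival time; their difference gives the $T$ component relative to the reference antenna:

```python
import random

def matched_filter_delay(rx, prbs):
    """Slide the known +/-1 training sequence over rx; the lag with the
    largest correlation is the estimated arrival time (in samples)."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(rx) - len(prbs) + 1):
        val = sum(rx[lag + i] * prbs[i] for i in range(len(prbs)))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

rng = random.Random(1)
# stand-in for an HT-LTF-style training sequence: random +/-1 chips
prbs = [rng.choice((-1.0, 1.0)) for _ in range(63)]

def received(delay, amp, noise_std, n=200):
    """One antenna's view: an attenuated, delayed copy of the sequence in noise."""
    rx = [rng.gauss(0.0, noise_std) for _ in range(n)]
    for i, chip in enumerate(prbs):
        rx[delay + i] += amp * chip
    return rx

# two antennas see the same transmission with different delay and amplitude
t_ref = matched_filter_delay(received(40, 1.0, 0.05), prbs)
t_other = matched_filter_delay(received(47, 0.9, 0.05), prbs)
delta_t = t_other - t_ref   # relative arrival-time difference
```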
{ "domain": "dsp.stackexchange", "id": 6050, "tags": "signal-analysis, digital-communications, estimation, ofdm" }
Joint with three degrees of freedom URDF
Question: I am currently trying to attach a caster to my robot. Therefore I need a continuous joint with three degrees of freedom. This is my URDF file. As you can see I tried to stack two continuous joints after each other. Unfortunately, when I try to manipulate them in Rviz I can only see changes in the second joint (base_to_caster2). Does anybody know how to implement revolute joints with more than one degree of freedom properly? Originally posted by Kaonashi on ROS Answers with karma: 38 on 2017-05-17 Post score: 0 Answer: I was able to solve my problem with a workaround by setting mu1, mu2 to zero and slip1, slip2 to one for the caster. This means, that there will be no friction between the ground and the caster. Anyway, I am still interested in my initial question. How do I implement a joint with two rotational degrees of freedom? I think this will be relevant for me in the future, since I want to experiment as well with a robotic arm in ROS. Originally posted by Kaonashi with karma: 38 on 2017-05-18 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by tnajjar on 2022-02-19: 5 years later I'm still interested in this if someone has an answer
{ "domain": "robotics.stackexchange", "id": 27923, "tags": "ros, joint, urdf" }
How to create a catkin ws and build packages with catkin build
Question: Dear All, I tried to create a catkin ws using catkin init then enter catkin build and I got the following error: emeric@emeric-desktop:~/catkin_planning_ws$ ls src hector_navigation waypoint_ctl emeric@emeric-desktop:~/catkin_planning_ws$ catkin build --------------------------------------------------------------------------- Profile: default Extending: [env] /home/emeric/catkin_wsp/devel:/opt/ros/kinetic Workspace: /home/emeric --------------------------------------------------------------------------- Source Space: [exists] /home/emeric/src Log Space: [missing] /home/emeric/logs Build Space: [exists] /home/emeric/build Devel Space: [exists] /home/emeric/devel Install Space: [unused] /home/emeric/install DESTDIR: [unused] None --------------------------------------------------------------------------- Devel Space Layout: linked Install Space Layout: None ---------------------------------------------------------- --------------------------------------------------------------------------- Additional CMake Args: DCMAKE_BUILT_TYPE=Release Additional Make Args: None Additional catkin Make Args: None Internal Make Job Server: True Cache Job Environments: False --------------------------------------------------------------------------- Whitelisted Packages: None Blacklisted Packages: None --------------------------------------------------------------------------- Workspace configuration appears valid. 
----------------------------------------------------------- --------------------------------------------------------------------------- Traceback (most recent call last): File "/usr/bin/catkin", line 9, in <module> load_entry_point('catkin-tools==0.4.4', 'console_scripts', 'catkin')() File "/usr/lib/python2.7/dist-packages/catkin_tools/commands/catkin.py", line 267, in main catkin_main(sysargs) File "/usr/lib/python2.7/dist-packages/catkin_tools/commands/catkin.py", line 262, in catkin_main sys.exit(args.main(args) or 0) File "/usr/lib/python2.7/dist-packages/catkin_tools/verbs/catkin_build/cli.py", line 420, in main summarize_build=opts.summarize # Can be True, False, or None File "/usr/lib/python2.7/dist-packages/catkin_tools/verbs/catkin_build/build.py", line 283, in build_isolated_workspace workspace_packages = find_packages(context.source_space_abs, exclude_subspaces=True, warnings=[]) File "/usr/lib/python2.7/dist-packages/catkin_pkg/packages.py", line 86, in find_packages packages = find_packages_allowing_duplicates(basepath, exclude_paths=exclude_paths, exclude_subspaces=exclude_subspaces, warnings=warnings) File "/usr/lib/python2.7/dist-packages/catkin_pkg/packages.py", line 146, in find_packages_allowing_duplicates xml, filename=filename, warnings=warnings) File "/usr/lib/python2.7/dist-packages/catkin_pkg/package.py", line 509, in parse_package_string raise InvalidPackage('The manifest must contain a single "package" root tag') catkin_pkg.package.InvalidPackage: The manifest must contain a single "package" root tag Besides, I do not understand why the workspace is set to /home/emeric and not /home/emeric/catkin_planning_ws, because the build and devel directories are created in /home/emeric. Can you tell me what I am missing here?
Thank you very much Originally posted by fabriceN on ROS Answers with karma: 59 on 2019-02-04 Post score: 0 Answer: I'm not sure if I got your problem, but you can init a catkin workspace like this: First, create your workspace folder with its source folder inside: $ mkdir -p my_workspace/src After that, go to the my_workspace folder and run: $ catkin_make And that's it. Now just go to the src folder and run: $ catkin_create_pkg pkg_name roscpp rospy etc Originally posted by Teo Cardoso with karma: 378 on 2019-02-04 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by fabriceN on 2019-02-04: @Teo Cardoso, thank you for your answer, but actually what I do not understand is even if I am using catkin_make, when I look at the messages, I have a workspace : /home/emeric and not /home/emeric/catkin_planning_ws (in my case) - so I have a build and devel space created on my home directory...
{ "domain": "robotics.stackexchange", "id": 32407, "tags": "catkin, ros-kinetic" }
How does energy distribute through a lattice?
Question: The text that I am reading (Concepts in Thermal Physics by Blundell) gives an example of a simulation in which energy distributes throughout a lattice. The lattice contains 400 atoms and the initial configuration assigns 1 quantum of energy to each atom. Then it chooses one atom at random, removes a quantum of energy from that atom and places it on a second, randomly chosen atom. Here is a diagram from the text: As this process repeats for many iterations, we obtain the Boltzmann distribution. The problem I have with this is that energy can move from one atom to a distant atom without going through all of the atoms in between. Why is this allowed? Answer: It is just a conceptual model, don't read too much physics into it. If you like, you can think that between one step and the next the quantum of energy travels along the board and finally settles in another position. I also want to point out that the effect shown can also be obtained by allowing the quanta to move only to adjacent squares. The following is the result of a quick simulation I ran, allowing the quanta to move only to adjacent squares. And the final distribution is
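The exchange rule from the text is simple to reproduce. Here is a minimal Python sketch (the function name and step count are my own choices; whether an empty donor is skipped or redrawn does not change the equilibrium). After many moves the histogram decays roughly geometrically, the Boltzmann-like result the book shows: with a mean of 1 quantum per atom, roughly half the atoms end up with zero quanta, a quarter with one, and so on.

```python
import random

def exchange_quanta(n_atoms=400, quanta_per_atom=1, steps=200_000, seed=0):
    """Blundell-style toy model: pick a random donor; if it has energy,
    move one quantum to a random (possibly distant) acceptor."""
    rng = random.Random(seed)
    energy = [quanta_per_atom] * n_atoms
    for _ in range(steps):
        donor = rng.randrange(n_atoms)
        if energy[donor] == 0:
            continue                       # empty atom: skip this attempt
        acceptor = rng.randrange(n_atoms)  # may equal donor; harmless no-op
        energy[donor] -= 1
        energy[acceptor] += 1
    return energy

energy = exchange_quanta()
counts = {}
for e in energy:
    counts[e] = counts.get(e, 0) + 1
# at equilibrium the histogram is approximately P(n) ~ (1/2)^(n+1)
```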
{ "domain": "physics.stackexchange", "id": 82012, "tags": "thermodynamics, statistical-mechanics" }
Is PCl5 really trigonal bipyramidal in shape?
Question: We know that the structure of $\ce{CH4}$ is tetrahedral, and not planar tetragonal, as a tetrahedron enables maximum distribution of $\ce{H}$-atoms in space, as they repulse each other. This is proved by the fact that each $\ce{H-C-H}$ bond angle is equal in magnitude. In the case of $\ce{PCl5}$, why should the shape be trigonal bipyramidal? Obviously the top and bottom $\ce{Cl}$-atoms, above and below the plane, subtend $90^\circ$ with each of the $3$ planar $\ce{Cl}$-atoms; whilst the planar ones subtend $120^\circ$ between each other. This means all bond angles aren't equal. Why so? Shouldn't there be a regular distribution of $\ce{Cl}$-atoms in space? In my opinion, the XY-plane distribution of $\ce{Cl}$-atoms should be such that each $\ce{Cl-P}$ bond is $72^\circ$ apart from the other $\ce{Cl-P}$ bond (the atoms would not be in the same plane at all, but we should see that the orthogonal projection of each on the XY-plane subtends equal angles, i.e. $72^\circ$). Same for the YZ-plane. Hence there is a regular distribution of atoms along all planes. Now it is to be calculated what the ultimate structure results in. Sorry, I don't have the necessary tools/knowledge to represent my hypothesis. Else I could've just modelled this figure and shown a representative model. Answer: The trigonal bipyramidal shape of $\ce{PCl5}$ is indeed the lowest energy conformation. I have had a very hard time figuring out what your description of the structure meant and I am still not convinced I have actually understood correctly. However, when I tried to model it in my head, I either arrived at a variety of different angles, or at least two of them were equal to $72^\circ$, or both of these cases. The only structure $\ce{ML5}$ where all $\ce{L-M-L}$ bond angles are equal is a regular pentagon, i.e. all ligands are in one plane, with $\angle(\ce{L-M-L})=72^\circ$, e.g. $\ce{[XeF5]-}$.
Another possibility for (almost all) equal bond angles is a square pyramid, where $\angle(\ce{L-M-L})=90^\circ$, e.g. $\ce{ClF5}$. Note, however, that there are also other angles, so not all bond angles are equal, $\angle(\ce{L-M-L})=180^\circ$. There is no better solution to arrange 5 points 'uniformly' on a sphere than the trigonal bipyramid. This is also known as the Thomson Problem, which is addressed in the linked question on Mathematics.se, provided by orthocresol: 5 Points uniformly placed on a sphere. Another point of view would be to ask the question whether there is a way to put 5 points on the surface of the sphere so that they are indistinguishable, which is also answered there. Spoiler warning: there isn't. You can dive deeper into the matter. For example, in $\mathbb{R}^n$ there can only be $n+1$ points equidistant from each other. In our perceived reality $n=3$, which makes the tetrahedron the most complex molecule with this property. See on mathematics.se: Proof maximum number of equidistant points for a given dimension (duplicate), Prove that $n+2$ points in $\mathbb{R}^n$ cannot all be at a unit distance from each other (duplicate), $n$ points can be equidistant from each other only in dimensions $\ge n-1$? (Which are all duplicates, but still have answers attached to them.) And then there are only five Platonic solids, none of which has five vertices. Furthermore, I would like to add to all of that another word of caution: Molecules are flexible. The trigonal bipyramidal arrangement of ligands just happens to be the low energy conformation for a variety of reasons (considering the necessary and useful approximations, e.g. the Born-Oppenheimer approximation). The molecule (and other analogues) are special though, as they undergo pseudorotation according to the Berry Mechanism (see Wikipedia).
Side note: We know that the structure of $\ce{CH4}$ is tetrahedral, and not planar tetragonal, as a tetrahedron enables maximum distribution of $\ce{H}$-atoms in space, as they repulse from each other. This is proved by the fact that each $\ce{H-C-H}$ bond angle is equal in magnitude. This is only half the truth. It is certainly true that protons repel each other, more importantly though is that electrons repel each other, too. The most important part of which is also built into the very simple VSEPR theory. While this theory has its uses, there is a lot of criticism attached to it and one should be aware of that. I shall stress again at this point that VSEPR theory is unrelated to orbital hybridisation; there are quite a few textbooks which do make this post-hoc rationalisation, which can be misleading and dangerous.
{ "domain": "chemistry.stackexchange", "id": 12814, "tags": "molecular-structure" }
Calculating the black body radiation
Question: A black body is one that absorbs all radiation that falls on it. The radiation that such a body emits, when at thermal equilibrium, is called "Black body radiation". But when doing calculations about the radiation from these bodies, I don't understand some steps in these calculations: When modelling the black body as a small hole in a cavity with walls, why do we assume that the light inside the cavity must be standing waves with nodes at the walls. How does the 1-dimensional condition for standing waves: $n_x = \frac{2L}{\lambda}$ become the 3-dimensional counterpart $n = \sqrt{n_x^2 + n_y^2 + n_z^2} = \frac{2L}{\lambda}$ When finding the number of states $g(\varepsilon)$ with energy $\varepsilon,\;$ $g(\varepsilon)\,d\varepsilon = 2 \frac{1}{8} 4\pi n^{2}\, dn = \pi n^{2}\, dn$, can we write just $\pi n^2$? Shouldn't we account for the fact that it is only lattice points we are considering; something like: $\pi n^2 \times \%\,$of lattice points? Answer: Ok, let's go, one topic at a time: We model the blackbody radiation as the radiation escaping through a small hole in the wall of a metal object maintained at temperature T (the hole connects the interior cavity to the outside). Because the walls of the cavity are made of a conducting material (metal), the electric field vanishes at the boundaries. This, together with the fact that the incident and emitted waves are interfering with each other inside the cavity, generates the standing wave pattern of the cavity radiation. We can see this in 2 ways. Firstly, look at the two formulas you've written. The second one is clearly a generalization of the first, if you understand $n$ as a vector (after all, it is closely related to the wave vector, isn't it?). Besides that, this can be shown by decomposing radiation of a certain wavelength $\lambda$ into three components along the three mutually perpendicular directions of a cubic cavity.
Because in each direction we must have a standing wave, in 3D we should have a standing wave as well. Writing the distance between the nodes of the 3D wave in terms of the distance between the nodes of the components of the wave in each direction will get you to the second formula. Be careful. It looks to me that both the equation you wrote and the concept you're getting might be mistaken. First of all, with this calculation, you are simply counting up the number of allowed frequencies in the frequency interval $f$ to $f+df$, which is equal to the number of lattice points in the $(n_x,n_y,n_z)$-space between $n$ and $n+dn$, where the $n$ interval corresponds to the $f$ interval via the second formula you wrote in item 2. We haven't accounted for the energy associated with said waves just yet. To count the number of points in this volume of the $(n_x,n_y,n_z)$-space, because the density of lattice points is 1, we just compute the corresponding volume: $(\frac{1}{8})4\pi n^2 dn$ (your factor of $2$ comes from the two possible independent polarizations of light waves, and the factor of $\frac{1}{8}$ comes from the fact that we are only interested in the positive octant of the $(n_x,n_y,n_z)$-space). We are interested in an interval of frequencies, so the computation that you suggested seems to me to have no physical meaning. Hope it helped. Edit: I talked to my professor and we concluded that, in fact, the material does not need to be made out of metal. Of course, metal guarantees that the electric field vanishes at the walls, but even if the cavity is not made of metal, the tangential component of the electric field must vanish (which is what originates the standing wave pattern). If that did not happen, two things could occur: it could make the electric charges of the walls move, violating the thermal equilibrium of the material and the radiation.
In a more restricted case, it would polarize the material, which would violate the isotropy of the radiation inside the cavity.
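Item 2 of the question can also be written out explicitly: nodes at the walls of a cubic cavity of side $L$ force each wave-vector component to satisfy $k_i = \pi n_i/L$, and the magnitude of the wave vector is $k = 2\pi/\lambda$, so

```latex
k^2 = k_x^2 + k_y^2 + k_z^2
\;\Longrightarrow\;
\left(\frac{2\pi}{\lambda}\right)^2
  = \frac{\pi^2}{L^2}\left(n_x^2 + n_y^2 + n_z^2\right)
\;\Longrightarrow\;
n \equiv \sqrt{n_x^2 + n_y^2 + n_z^2} = \frac{2L}{\lambda}\:.
```

which reduces to the 1-D condition $n_x = 2L/\lambda$ when $n_y = n_z = 0$.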
{ "domain": "physics.stackexchange", "id": 48097, "tags": "thermal-radiation" }
How to interpret Seaborn's regplot() when using x_bins?
Question: I'm walking through a course on Seaborn and I'm having trouble understanding how exactly the x_bins parameter in Seaborn's seaborn.regplot() function works. The documentation has the below: Bin the x variable into discrete bins and then estimate the central tendency and a confidence interval. This binning only influences how the scatterplot is drawn; the regression is still fit to the original data. This parameter is interpreted either as the number of evenly-sized (not necessary spaced) bins or the positions of the bin centers. So I understand that when I mention, say, x_bins = 5, seaborn will bin the X-axis such that there is an equal (as much as possible) number of observations in each bin. And if I mention something like sns.regplot(data=mpg, x="weight", y="mpg", x_bins=np.arange(2000, 5500, 250), order=2) (as given in the docs), the numbers are taken to be the bin centres. My question: what exactly are the bin edges then? For example: In this figure from the docs, what are the first and second bins? Is it 1500 to some number between 1500 to 2,250? In this figure (from a DataCamp course), produced through the code below (by specifying the number of bins), is the first bin 0.0 to ~0.25, and the second bin ~0.25 to somewhere under ~0.4? I'm having trouble understanding the creation of bins and where exactly the point representing the mean is made. 
Answer: The code for calculating the bins can be found in the bin_predictor method of the _RegressionPlotter class: def bin_predictor(self, bins): """Discretize a predictor by assigning value to closest bin.""" x = np.asarray(self.x) if np.isscalar(bins): percentiles = np.linspace(0, 100, bins + 2)[1:-1] bins = np.percentile(x, percentiles) else: bins = np.ravel(bins) dist = np.abs(np.subtract.outer(x, bins)) x_binned = bins[np.argmin(dist, axis=1)].ravel() return x_binned, bins It determines the center of the bins using np.linspace and np.percentile, after which the distance for each point to the bin centers are calculated. The points are then assigned to the bins with the closest distance.
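To answer the "where are the edges?" part concretely: since each point is assigned to the nearest center, the effective bin edges are the midpoints between adjacent centers, and the first and last bins are open-ended (they absorb everything below and above). The sketch below is my own plain-Python mimic of the scalar-bins path, not seaborn's actual code:

```python
def bin_centers(x, n_bins):
    """Centers at evenly spaced inner percentiles, mirroring
    np.percentile(x, np.linspace(0, 100, bins + 2)[1:-1])."""
    xs = sorted(x)
    centers = []
    for k in range(1, n_bins + 1):
        p = 100.0 * k / (n_bins + 1)
        idx = p / 100.0 * (len(xs) - 1)      # linear-interpolation percentile
        lo = int(idx)
        hi = min(lo + 1, len(xs) - 1)
        centers.append(xs[lo] + (idx - lo) * (xs[hi] - xs[lo]))
    return centers

def assign(x, centers):
    """Nearest-center assignment: the implied edges are the midpoints
    between adjacent centers; the outermost bins are open-ended."""
    return [min(centers, key=lambda c: abs(v - c)) for v in x]

x = list(range(101))            # 0..100, uniform
centers = bin_centers(x, 4)     # inner percentiles 20, 40, 60, 80
binned = assign(x, centers)
```

So for uniform data 0..100 with x_bins=4 the centers land near 20, 40, 60, 80, and e.g. a point at 29 falls in the first bin while a point at 31 falls in the second, because the edge between those two bins sits at the midpoint 30.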
{ "domain": "datascience.stackexchange", "id": 12122, "tags": "seaborn" }
Create/Verify signed license files
Question: The purpose of the class is to provide an easy way for create and verify a signed license file (or key). I am interested in any improvments / thoughts about the class. Usage: var signer = new LengthSigner(); // creation var properties = new Dictionary<string, string>(); properties.Add("Key1", "Value1"); properties.Add("Key2", "Value2"); var licenseFile = new LicenseFile("Me", "My Company", properties); licenseFile.Sign(new LengthSigner()); var content = licenseFile.Serialize(); Console.WriteLine(content); // actual content: // ExcsNAEsVjMTFCI0BjhUMxgSMgk8TwEOKG97JHVtCXNDTHsJPEkBehMYVGVadwEyLyhJYU8wXlVDTndhBA9uNREvCA== // unconfused content: // 15.07.2016 00:00:00 // Me // My Company // Key1:Value1 // Key2:Value2 // 63 // verification var deserializedLicenseFile = LicenseFile.Deserialize(content); deserializedLicenseFile.Verify(new LengthSigner()); // true LicenseFile.cs public class LicenseFile { public LicenseFile(string licensee, string company, IDictionary<string, string> properties) : this(licensee, company, DateTime.UtcNow.Date, properties, null) { } private LicenseFile(string licensee, string company, DateTime issueDate, IDictionary<string, string> properties, string signature) { if (string.IsNullOrWhiteSpace(licensee)) throw new ArgumentException("licensee"); IssueDate = issueDate; Licensee = licensee; Company = string.IsNullOrWhiteSpace(company) ? "Private License" : company; Signature = signature; var dict = properties ?? 
new Dictionary<string, string>(); if (dict.Keys.Any(key => key.Contains(":"))) throw new FormatException("Character ':' is not allowed as property key."); Properties = new ReadOnlyDictionary<string, string>(dict); } private static byte[] ConfusingBytes = new byte[] { 34, 34, 2, 4, 54, 2, 100, 3 }; public DateTime IssueDate { get; private set; } public string Licensee { get; private set; } public string Company { get; private set; } private string Signature { get; set; } public IDictionary<string, string> Properties { get; private set; } public static LicenseFile Deserialize(string content) { var unconfused = Unconfuse(content); var lines = (unconfused ?? "").Split(new[] { Environment.NewLine }, StringSplitOptions.RemoveEmptyEntries); if (lines.Length < 4) ThrowInvalidFormatException(); return ReadLicenseFile(lines); } public string Serialize() { var sb = new StringBuilder(); WriteLicenseProperties(sb); WriteSignature(sb); return Confuse(sb.ToString()); } private static LicenseFile ReadLicenseFile(string[] lines) { try { var issueDate = DateTime.Parse(lines[0]); var licensee = lines[1]; var company = lines[2]; var signature = lines.Last(); var properties = new Dictionary<string, string>(); foreach (var line in lines.Skip(3).Take(lines.Length - 4)) { var pair = GetKeyValuePair(line); properties.Add(pair.Key, pair.Value); } return new LicenseFile(licensee, company, issueDate, properties, signature); } catch (Exception ex) { //Log.LogError("Error while deserializing LicenseFile.", ex); ThrowInvalidFormatException(); return null; } } public void Verify(ISigner signer) { var sb = new StringBuilder(); WriteLicenseProperties(sb); if (!signer.Verify(sb.ToString(), Signature)) ThrowInvalidSignatureException(); } public void Sign(ISigner signer) { var sb = new StringBuilder(); WriteLicenseProperties(sb); Signature = signer.Sign(sb.ToString()); } private void WriteSignature(StringBuilder sb) { if (string.IsNullOrEmpty(Signature)) ThrowNotSignedException(); 
sb.AppendLine(Signature); } private void WriteLicenseProperties(StringBuilder sb) { sb.AppendLine(IssueDate.ToString()); sb.AppendLine(Licensee); sb.AppendLine(Company); foreach (var property in Properties) sb.AppendLine(property.Key + ":" + property.Value); } private static KeyValuePair<string, string> GetKeyValuePair(string line) { var index = line.IndexOf(':'); if (index < 0) ThrowInvalidFormatException(); var key = line.Substring(0, index); var value = line.Substring(index + 1); return new KeyValuePair<string, string>(key, value); } private static string Confuse(string input) { var bytes = Encoding.UTF8.GetBytes(input); for (int i = 0; i < bytes.Length; i++) bytes[i] ^= ConfusingBytes[i % ConfusingBytes.Length]; return Convert.ToBase64String(bytes); } private static string Unconfuse(string input) { var bytes = Convert.FromBase64String(input); for (int i = 0; i < bytes.Length; i++) bytes[i] ^= ConfusingBytes[i % ConfusingBytes.Length]; return Encoding.UTF8.GetString(bytes); } // - Throw helper private static void ThrowInvalidFormatException() { var msg = "License file has not a valid format."; //Log.LogError(msg); throw new LicenseFileException(msg); } private static void ThrowNotSignedException() { var msg = "License file is not signed."; //Log.LogError(msg); throw new LicenseFileException(msg); } private static void ThrowInvalidSignatureException() { var msg = "Signature of license file is not valid."; //Log.LogError(msg); throw new LicenseFileException(msg); } } ISigner.cs public interface ISigner { string Sign(string content); bool Verify(string content, string signature); } LicenseFileException.cs internal class LicenseFileException : Exception { public LicenseFileException(string message) : base(message) { } } LengthSigner.cs Just for testing purposes public class LengthSigner : ISigner { public string Sign(string content) { return content.Length.ToString(); } public bool Verify(string content, string signature) { return content.Length == 
int.Parse(signature); } } Answer: I think the LicenseFile class has too many responsibilities. It stores the license data, it encrypts/decrypts the license, it serializes/deserializes it, it writes/reads it into/from a file. I suggest refactoring it like this... The LicenseEncryption class would take care of the confusing part: public class LicenseEncryption { private static byte[] ConfusingBytes = new byte[] { 34, 34, 2, 4, 54, 2, 100, 3 }; public static string EncryptLicense(string license) { var bytes = Encoding.UTF8.GetBytes(license); for (int i = 0; i < bytes.Length; i++) { bytes[i] ^= ConfusingBytes[i % ConfusingBytes.Length]; } return Convert.ToBase64String(bytes); } private static string DecryptLicense(string input) { var bytes = Convert.FromBase64String(input); for (int i = 0; i < bytes.Length; i++) bytes[i] ^= ConfusingBytes[i % ConfusingBytes.Length]; return Encoding.UTF8.GetString(bytes); } } The License class would only store the data and know how to turn it into a string and maybe deserialize it: public class License { public License(string license) { // decrypt, parse/deserialize etc. } public License(string licensee, string company, DateTime issueDate, IDictionary<string, string> properties, string signature) { // ...
} public DateTime IssueDate { get; private set; } public string Licensee { get; private set; } public string Company { get; private set; } private string Signature { get; set; } public IDictionary<string, string> Properties { get; private set; } public void Verify(ISigner signer) { // verify the signer } public void Sign(ISigner signer) { // sign } private static KeyValuePair<string, string> GetKeyValuePair(string line) { var index = line.IndexOf(':'); if (index < 0) ThrowInvalidFormatException(); var key = line.Substring(0, index); var value = line.Substring(index + 1); return new KeyValuePair<string, string>(key, value); } public override string ToString() { var licenseBuilder = new StringBuilder(); AddProperties(licenseBuilder, this); AddSignature(licenseBuilder, this); return LicenseEncryption.EncryptLicense(licenseBuilder.ToString()); } private static void AddSignature(StringBuilder licenseBuilder, License license) { if (string.IsNullOrEmpty(license.Signature)) ThrowNotSignedException(); licenseBuilder.AppendLine(license.Signature); } private static void AddProperties(StringBuilder licenseBuilder, License license) { licenseBuilder.AppendLine(license.IssueDate.ToString()); licenseBuilder.AppendLine(license.Licensee); licenseBuilder.AppendLine(license.Company); foreach (var property in license.Properties) { licenseBuilder.AppendLine(property.Key + ":" + property.Value); } } private static void ThrowInvalidFormatException() { var msg = "License file has not a valid format."; //Log.LogError(msg); throw new LicenseFileException(msg); } private static void ThrowNotSignedException() { var msg = "License file is not signed."; //Log.LogError(msg); throw new LicenseFileException(msg); } private static void ThrowInvalidSignatureException() { var msg = "Signature of license file is not valid."; //Log.LogError(msg); throw new LicenseFileException(msg); } } Lastly, the LicenseFile would only know how to read license data from a file or write it into one: public
class LicenseFile { public static void Save(License license, string fileName) { // write the license into a file } public static License From(string fileName) { // read the file contents, create license... } } You could also add the Save/From methods to the License class itself but still keep the writing/reading logic in a specialized unit.
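The XOR-with-repeating-key "confusing" step that both versions share is symmetric: applying the same XOR twice restores the input, with a Base64 step around it. A quick Python sketch of the round trip (key bytes copied from the review; note this only obfuscates, it provides no real security):

```python
import base64

# Key bytes mirroring the ConfusingBytes array from the review
KEY = bytes([34, 34, 2, 4, 54, 2, 100, 3])

def confuse(text: str) -> str:
    """XOR each UTF-8 byte with the repeating key, then Base64-encode."""
    data = bytearray(text.encode("utf-8"))
    for i in range(len(data)):
        data[i] ^= KEY[i % len(KEY)]
    return base64.b64encode(bytes(data)).decode("ascii")

def unconfuse(blob: str) -> str:
    """Invert confuse(): Base64-decode, then XOR with the same key."""
    data = bytearray(base64.b64decode(blob))
    for i in range(len(data)):
        data[i] ^= KEY[i % len(KEY)]
    return bytes(data).decode("utf-8")
```

Because XOR is its own inverse, a single helper could serve both directions; the two names just mirror the Encrypt/Decrypt pair in the review.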
{ "domain": "codereview.stackexchange", "id": 21111, "tags": "c#, security" }
What is the benefit of training an ML model with an AWS SageMaker Estimator?
Question: It looks like there are different routes to deploying an ML model on SageMaker. You can: (1) pre-train a model, create a deployment archive, then deploy; or (2) create an estimator, train the model on SageMaker with a script, then deploy. My question is: are there benefits of taking the second approach? To me, it seems like writing a training script would require a bit of trial and error and perhaps some extra work to package it all up neatly. Why not just train a model running cells sequentially in a jupyter notebook where I can track each step, and then go with the first approach? Does anyone have experience and can compare/contrast these approaches? Answer: The first approach works best when you are training a small model or one that doesn't take much compute time, but when training a large model it's always preferable to train via the second approach. The reasons are as follows: When training a large model you might require distributed training, either data parallel or model parallel. With the second approach, if the training stops abruptly the job gets stopped and you are no longer charged, which is not the case when you run via a notebook instance. So that extra work pays off: it helps you train faster and saves on your bill!
{ "domain": "datascience.stackexchange", "id": 9765, "tags": "machine-learning, aws, sagemaker" }
Minimum finger search tree complexity
Question: Suppose I have an AVL tree with a pointer to the minimal element. I'd like to conduct a search for some key x, which is the $k$-th smallest key in the entire tree. I can do this by "climbing" up the tree's left branch, comparing x with the current key. As long as (x > curr.key) && (x > curr.parent.key) I keep climbing up, but once the second condition is violated, I slide down to the current right-subtree, and from there on it's just a standard BST search. The claim is that the worst case complexity is always $O(\log k)$, for any $k$. But I can't convince myself this is accurate: if x is larger than the tree's root's key (the median, or equivalently $k > {n\over 2}$), that implies I must have traversed the entire left branch, which for a balanced tree is $O(\log n)$ - and only then I can find x in depth of $O(\log k)$. Am I looking at this the wrong way? Answer: Yes, you might be looking at it the wrong way. If $x$ is larger than the tree's root's key (the median, or equivalently $k>n/2$), that implies I must have traversed the entire left branch, which for a balanced tree is $O(\log n)$ - and only then I can find $x$ in depth of $O(\log k)$. First, let us be accurate. The root's key may not be the median, although it sits roughly in the middle. Let $r$ be the root node. Let $n$ be the number of nodes in the whole tree and $n_l$ be the number of nodes in the left subtree of $r$. Let $h_l$ be the height of that subtree and $h$ be the height of the tree, so $h\le h_l+2$ since an AVL tree is height balanced. Suppose $x$ is larger than the root's key. That means $$n_l < k \le n.$$ We also have, according to the properties of an AVL tree, $$\log_2 (n+1)\le h \le h_l+2 \le c\log_2(n_l+2)+b +2\lt c\log_2(k+2)+b +2$$ where $c\approx 1.44$, $b\approx -0.328$. So, for large enough $k$, $$\log_2(n)\le2\log_2(k)$$ So if you have traversed the entire (height of the) left branch in order to find the $k$-th smallest element, you will use $O(\log n) = O(\log k)$ steps.
In fact, if we only look at the subtree whose root is the highest node that we will climb to in search of the $k$-th smallest element, we have just proved that the worst-case complexity is always $O(\log k)$.
{ "domain": "cs.stackexchange", "id": 12799, "tags": "time-complexity, balanced-search-trees" }
Heat rejected by a refrigerator?
Question: What is the amount of heat rejected by the condenser of a vapour compression refrigeration system, typically those found in households? Something in the nature of a 25W compressor, for example Answer: If you have a 25 W compressor, then the net heat flux into the environment will be 25 W. This is made up of the heat extracted from the inside of the refrigerator, plus the heat of losses in the compressor, minus the heat that travels from the environment back to the inside from the fridge (which is why the compressor needs to keep running). If we concentrate just on the compressor as a heat pump, we can make some simple assumptions about the COP (coefficient of performance) of the compressor to come up with an estimate. Assuming the inside of the fridge is at 5°C and the environment at 22°C, the compressor is pumping against a 17°C gradient. A perfect Carnot engine operating between 278 K and 295 K would have an efficiency of $$\eta = 1 - \frac{T_l}{T_h} = 5.7\%$$ Conversely, a perfect heat pump can pump more heat than the energy you put into it (that's why they are sometimes used for heating homes, when a suitable source of nearby "heat" is available, e.g. groundwater). The ratio of heat moved vs work done is $\frac{1}{\eta} = 17.4$. That number is the ratio between the heat rejected and the power consumed - so for an "ideal" refrigerator compressor operating between the limits given, the heat rejected would be about 430 W. Now a refrigerator pump is rarely as efficient as you would like - certainly it doesn't get close to the efficiency of the Carnot cycle. There's a good paper on the measurement of refrigerator efficiency that probably has more information than you ever wanted to know about the subject. It includes measurements of the thermal insulation of a typical refrigerator cabinet (1.21 W/K for the freezer, 0.88 W/K for the refrigerated compartment). 
From Table 3 of that publication, I find a COP of about 1.7 - suggesting that for our example 1.7*25 W = 42 W of heat is extracted, and 42W+25W = 67 W is rejected. You can compare these numbers to the ones from the article, and find they are comparable - although the refrigerator in their example had a given energy consumption of 28 kWh/month, which suggests about 39 W average power used. Still, that's the same ballpark as 25 W. Afterthought - reading my solution again, I would not normally make an estimate like this and quote a number that seems to be precise to two figures. It would be more reasonable to say "roughly three times the power use is rejected". 1+1.7=2.7 which is approximately 3. Probably a better approach if you are estimating and making tons of assumptions.
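For what it's worth, the arithmetic in this estimate is easy to reproduce; the sketch below just re-derives the numbers quoted in the answer (the ideal Carnot limit plus the measured COP of about 1.7):

```python
# Estimate from the answer: refrigerator operating between 5 degC and 22 degC.
T_cold = 278.0  # K, inside the fridge
T_hot = 295.0   # K, room

eta_carnot = 1 - T_cold / T_hot        # Carnot *engine* efficiency, ~5.7 %
cop_ideal = 1 / eta_carnot             # heat rejected per unit work, ~17.4

P_compressor = 25.0                    # W of electrical input
Q_rejected_ideal = P_compressor * cop_ideal   # ~430 W for the ideal case

# With the measured COP of ~1.7 from the cited paper:
cop_real = 1.7
Q_extracted = cop_real * P_compressor         # ~42 W pulled from the cabinet
Q_rejected = Q_extracted + P_compressor       # ~67 W dumped into the room
```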
{ "domain": "physics.stackexchange", "id": 27867, "tags": "thermodynamics" }
Sinewave in DC powersupply
Question: Today in our physics class we talked about the Hall effect, and we used a Lab Quest with a sensor to measure the magnetic flux induced by a coil. While adjusting the current to gather some values I noticed a steady lowering in amperage, and after replacing the power supply nothing changed. Some time passed and I found an option to plot the data over time, and the result was interesting. The graph showed an alternating wave-like relation, like a mixture of a sine and a rectangular wave. It looked like a normal sine, but at its peak it stayed there for a full period, made a full leap, and stayed again... I'm going to school in Germany and our AC power line frequency is about 50 Hz, and the wave had a duration of about 50 seconds, so I can imagine there could be a relation. So my question is: what could possibly generate such an effect, and which device is broken? Answer: DieKautz, there is at least one issue with your data. Your AC current is 50 Hz. That means that there is one positive peak and one negative peak per cycle. Assuming that you want to plot an accurate representation of these cycles, you need to sample the signal at a frequency that is at least twice as fast as what you want to plot. Since a positive (or negative) peak occurs over a time frame of 1/100 of a second, you need to sample the signal at a frequency of 200 Hz or higher, and the higher the better. I've used a Lab Quest in my high school physics labs while teaching students. I know that the sampling frequency can be adjusted, but I don't remember the fastest setting. Check the device, and attempt to sample at the fastest possible rate. If you can't sample at 200 Hz or higher, you will not get good data. If you sample at too low of a sample rate, you will run into a problem known as aliasing, which you commonly see when auto commercials show a car's wheels spinning "backwards" as it travels down the road.
If this is occurring, the signal you see will not be a true representation of what is happening. For more info on aliasing, see https://en.wikipedia.org/wiki/Aliasing
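The wheel effect has a simple numerical analogue: a 50 Hz mains cosine sampled below the Nyquist rate is indistinguishable from a much slower one. The sketch below uses a made-up 60 Hz sample rate, which folds 50 Hz down to a 10 Hz alias:

```python
import math

f_signal = 50.0   # Hz, the mains signal
f_alias = 10.0    # Hz: |50 - 60|, what an undersampled recording shows
fs_low = 60.0     # Hz sample rate, well below the Nyquist rate of 100 Hz

# Sampling a 50 Hz cosine at 60 Hz yields samples identical to a 10 Hz cosine:
n = range(60)
samples_true = [math.cos(2 * math.pi * f_signal * k / fs_low) for k in n]
samples_alias = [math.cos(2 * math.pi * f_alias * k / fs_low) for k in n]
```

Every sample agrees, so no amount of post-processing can tell the two frequencies apart once the data is recorded at 60 Hz.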
{ "domain": "physics.stackexchange", "id": 52538, "tags": "waves, experimental-physics, electric-circuits, electric-current" }
Electric quadrupole - tensor identity
Question: In classical electrodynamics, we introduce the electric quadrupole moment $$D^{ij}\equiv\int y^i y^j \rho \mathrm{d}^3y$$ and its reduced (trace-less) version $$\mathcal{D}^{ij}\equiv D^{ij} - \frac{1}{3}\delta^{ij}D^{kk}, \;\;\;\; \mathcal{D^{ii}}=0.$$ How could I prove that $$\dddot{D}^{ij}\dddot{D}^{ij}-\frac{1}{3}\dddot{D}^{ii} \dddot{D}^{jj} = \dddot{\mathcal{D}}^{ij}\dddot{\mathcal{D}}^{ij}?$$ I tried to compute $$\dot{D}^{ij}=\int y^i y^j \dot{\rho}\mathrm{d}^3y=\int y^i y^j\partial_k j^k\mathrm{d}^3y=\int\partial_k\left(y^i y^j\right)j^k \mathrm{d}^3y=\int\left(y^j j^i + y^i j^j\right)\mathrm{d}^3y,$$ using the continuity equation, integrating by parts and since we can assume that $j^i$ has compact spatial support. But I'm not sure this is a smart way to proceed. Answer: I'm really sorry if I've missed something, and hence oversimplified the problem. Setting $X=\dddot{D}$ and $\mathcal{X}=\dddot{\mathcal{D}}$ just to avoid having to write all the dots, both $X$ and $\mathcal{X}$ are symmetric tensors, and in matrix form we can write (just by differentiating your starting equation) $$\mathcal{X}=X-\frac{1}{3} \text{Tr}(X) \mathbb{1}$$ where $\mathbb{1}$ is the unit matrix. So $$ \mathcal{X}^2 =X^2 - \frac{2}{3} \text{Tr}(X)\, X + \frac{1}{9}[\text{Tr}(X)]^2 \mathbb{1} $$ and hence taking the trace $$ \text{Tr} (\mathcal{X}^2) =\text{Tr}(X^2) - \frac{1}{3} [\text{Tr}(X)]^2 $$ which is your target equation.
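The identity is easy to spot-check numerically; here is a small stdlib-only sketch that builds a random symmetric 3x3 tensor (standing in for the third time derivative of $D^{ij}$) and compares both sides:

```python
import random

random.seed(0)
# Any symmetric 3x3 tensor stands in for the third time derivative of D^{ij}.
A = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
X = [[A[i][j] + A[j][i] for j in range(3)] for i in range(3)]

trace = sum(X[i][i] for i in range(3))
# Trace-free part, as in the answer: X_tf^{ij} = X^{ij} - (1/3) delta^{ij} X^{kk}
X_tf = [[X[i][j] - (trace / 3 if i == j else 0.0) for j in range(3)]
        for i in range(3)]

# lhs = X^{ij} X^{ij} - (X^{ii})^2 / 3;  rhs = X_tf^{ij} X_tf^{ij}
lhs = sum(X[i][j] * X[i][j] for i in range(3) for j in range(3)) - trace ** 2 / 3
rhs = sum(X_tf[i][j] * X_tf[i][j] for i in range(3) for j in range(3))
```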
{ "domain": "physics.stackexchange", "id": 54341, "tags": "tensor-calculus, classical-electrodynamics, multipole-expansion" }
Bridge specific topic or except bridge topic with ros2 ros1_bridge
Question: Following the tutorial of ros1_bridge I can bridge the ROS 1 topics to ROS 2, and it worked very well. I want to use this feature to communicate between multiple robots through DDS. But there is a problem with bridging big data like image_raw: when I use --bridge-all-topics it causes network traffic jams. Is it possible to bridge only specific topics, or to exclude some topics that I don't want bridged to ROS 2? Thank you Originally posted by NEET on ROS Answers with karma: 13 on 2019-08-01 Post score: 1 Original comments Comment by Dirk Thomas on 2019-08-01: Which bridge executable are you currently using? There are different ones for different use cases: dynamic_bridge, parameter_bridge, static_bridge, ... Comment by NEET on 2019-08-01: @Dirk Thomas I use dynamic_bridge. I have no idea about other methods... Answer: If you dig into the ros1_bridge source code you will find the answer. dynamic_bridge has the following available flags that are of interest in this case: --bridge-all-topics: Bridge all topics in both directions, whether or not there is a matching subscriber. --bridge-all-1to2-topics: Bridge all ROS 1 topics to ROS 2, whether or not there is a matching subscriber. --bridge-all-2to1-topics: Bridge all ROS 2 topics to ROS 1, whether or not there is a matching subscriber. Regarding your question about bridging specific topics, you should look at the parameter_bridge: it bridges all topics listed in a ROS 1 parameter. The parameter needs to be an array and each item needs to be a dictionary with the following keys: topic: the name of the topic to bridge type: the type of the topic to bridge queue_size: the queue size to use (default: 100). There is also static_bridge, which gives an example implementation of bridging one topic. Originally posted by pavel92 with karma: 1655 on 2019-08-02 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by NEET on 2019-08-02: Thank you for your reply.
And yes, I found the solution after I reviewed the source code of ros1_bridge. I'm sorry I didn't think to look into the ros1_bridge source myself first. Thank you so much!
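As a concrete illustration of the parameter_bridge format described in the answer above, the topic list is loaded onto the ROS 1 parameter server before launching the bridge. The topic names below are placeholders and the exact parameter layout follows the ros1_bridge README; verify it against your release:

```yaml
# bridge.yaml -- load with `rosparam load bridge.yaml`, then run parameter_bridge
topics:
  - topic: /camera/image_raw        # hypothetical large topic: keep its queue small
    type: sensor_msgs/msg/Image
    queue_size: 1
  - topic: /odom
    type: nav_msgs/msg/Odometry
    queue_size: 100
```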
{ "domain": "robotics.stackexchange", "id": 33571, "tags": "ros2" }
Does the gravity of the planets affect the orbit of other planets in our solar system?
Question: When one planet passes near another during its trip around the sun, is their gravitational pull strong enough to noticeably disrupt each other's orbits? Answer: It does - although the term 'disrupt' may be a bit too strong to describe the effect; personally, I think 'influence' would fit better. An interesting consequence of such interactions is something called orbital resonance; after long periods of time - and remember that the current estimate for our planet's existence is 4.54 billion years - the ebb and flow of tiny gravitational pulls cause nearby celestial bodies to develop an interlocked behavior. It's a double-edged sword, though; it may destabilize a system, or lock it into stability. Quoting the Wikipedia entry, Orbital resonances greatly enhance the mutual gravitational influence of the bodies, i.e., their ability to alter or constrain each other's orbits. Another gravity-related effect (although, as pointed out by Dieudonné, present in our solar system only between bodies that have very close orbits, like the Earth-Moon and Sun-Mercury systems) is known as Tidal locking, or captured rotation. More about orbital resonance in this ASP Conference Series paper: Renu Malhotra, Orbital Resonances and Chaos in the Solar System.
{ "domain": "astronomy.stackexchange", "id": 312, "tags": "orbit, solar-system, gravity, planet" }
Finding acceleration of center of mass in cart pole problem
Question: In this link about finding the equations of motion of the cart pole problem, there is an equation for the acceleration of the center of mass of the pole. Screenshots of it below. I don't understand why there are two terms involving the angular motion - $\varepsilon \times r_p$ and $\omega \times (\omega \times r_p)$. If I'm right, the first comes from the angular acceleration, and the second is the acceleration of a point in circular motion. I guess I'm incorrect somewhere, but I don't understand why they put in two angular terms - aren't they copies of each other? Answer: When you transfer velocity from one point to another in a rigid body you end up with an equation like $$ \boldsymbol{v}_P = \boldsymbol{v}_C + \boldsymbol{\omega} \times \boldsymbol{r}_P $$ Acceleration is just the time derivative of the above with $$ \begin{aligned} \tfrac{\rm d}{{\rm d}t} \boldsymbol{v}_P &= \boldsymbol{a}_P \\ \tfrac{\rm d}{{\rm d}t}\boldsymbol{v}_C & = \boldsymbol{a}_C \\ \tfrac{\rm d}{{\rm d}t} \boldsymbol{\omega} &= \boldsymbol{\epsilon} \\ \tfrac{\rm d}{{\rm d}t} \boldsymbol{r}_P &= \boldsymbol{\omega} \times \boldsymbol{r}_P \end{aligned} $$ The last part is because $\boldsymbol{r}_P$ is a fixed vector riding along with the rigid body. The transformation of acceleration is thus $$ \boldsymbol{a}_P = \boldsymbol{a}_C + (\tfrac{\rm d}{{\rm d}t}\boldsymbol{\omega} )\times \boldsymbol{r}_P + \boldsymbol{\omega} \times (\tfrac{\rm d}{{\rm d}t} \boldsymbol{r}_P) $$ $$\boldsymbol{a}_P = \boldsymbol{a}_C + \boldsymbol{\epsilon} \times \boldsymbol{r}_P + \boldsymbol{\omega} \times ( \boldsymbol{\omega} \times \boldsymbol{r}_P) $$
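One way to convince yourself that both terms belong is to differentiate the position of P numerically and compare against the formula. The planar sketch below uses complex numbers for 2D vectors (so $\epsilon\times r$ becomes multiplication by $i\alpha$ and $\omega\times(\omega\times r)$ becomes $-\omega^2 r$); all motion parameters are arbitrary:

```python
import cmath

# Hypothetical planar motion: the body spins about z while C accelerates.
w0, alpha = 0.7, 0.3          # initial angular rate and angular acceleration
a_C = 0.2 - 0.1j              # constant acceleration of the reference point C
r0 = 1.0 + 0.5j               # body-fixed offset of P from C

def x_P(t):
    theta = w0 * t + 0.5 * alpha * t ** 2
    return 0.5 * a_C * t ** 2 + r0 * cmath.exp(1j * theta)  # C starts at rest

t = 1.3
theta = w0 * t + 0.5 * alpha * t ** 2
r_P = r0 * cmath.exp(1j * theta)   # current offset of P, rotated with the body
w = w0 + alpha * t                 # omega(t)

# a_P = a_C + eps x r_P + w x (w x r_P); in 2D the Euler term rotates r_P by
# 90 degrees (multiply by 1j*alpha) and the centripetal term is -w^2 * r_P.
a_formula = a_C + 1j * alpha * r_P - w ** 2 * r_P

# Second-order central difference of the position of P:
h = 1e-5
a_numeric = (x_P(t + h) - 2 * x_P(t) + x_P(t - h)) / h ** 2
```

Dropping either angular term makes the two accelerations disagree, which shows they are independent contributions, not duplicates.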
{ "domain": "physics.stackexchange", "id": 69490, "tags": "newtonian-mechanics, rotational-dynamics, reference-frames, vectors" }
Velocity of Sound with an Increase in Pressure
Question: Why does the velocity of sound not increase with an increase in pressure? Let me take an example. Suppose there exists a gas at some pressure. With an increase in pressure, it would turn into a liquid. Sound travels faster in a liquid, so won't this contradict the fact that the velocity of sound does not depend upon pressure? Thanks a lot in advance. Answer: Pressure and density are proportional in a gas. The speed of sound depends on pressure and density in opposite ways, so the effect cancels out and the speed depends only on the temperature and the kind of gas. This does not carry over to the speed of sound in liquids, which have very high densities compared to gases.
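For an ideal gas the cancellation can be made explicit: $c=\sqrt{\gamma p/\rho}$, but $\rho = pM/(RT)$, so the pressure drops out and $c=\sqrt{\gamma RT/M}$. A quick sketch with textbook values for air:

```python
import math

gamma = 1.4        # adiabatic index of air
M = 0.02896        # kg/mol, molar mass of air
R = 8.314          # J/(mol K), gas constant
T = 293.0          # K

def c_sound(p):
    rho = p * M / (R * T)              # density grows in step with pressure...
    return math.sqrt(gamma * p / rho)  # ...so the speed does not depend on p

c1 = c_sound(101_325.0)    # 1 atm
c2 = c_sound(1_013_250.0)  # 10 atm, same temperature: identical result, ~343 m/s
```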
{ "domain": "physics.stackexchange", "id": 72225, "tags": "pressure, acoustics, velocity, states-of-matter" }
Local and Global storage with multithreading pools + locking threads
Question: I am having difficulty answering the following questions relating to the use of threading. Question 1 relates to the possibility of local storage per thread and a global storage accessible to all threads. Consider the following scenario: a program creates a series of threads, each with its own local storage, kind of like a mutex system except that no thread can access another's memory, and a square matrix stored in global storage which is accessible to all threads. When the space complexity of the following program is calculated, is the space complexity (# of elements in the square matrix)² + (local storage of finishing thread) or (# of elements in the square matrix)² + (local storage of all threads at the time that the finishing thread completed)? Question 2 relates to timing threads to go at a very precise rate. Consider the following scenario: a program creates a series of threads that continually add a random set of elements to their local storage. The program finishes and returns the thread that received the smallest number of elements in its random set. If one thread were to go faster than another, the program would be incorrect. Is there a way to "lock" the threads to go at the same speed? Any help on answering these questions would be appreciated. Please comment if additional info is required. Answer: I will answer Question 1. As @DavidRicherby said, you should post the other question separately. The space complexity is the maximum amount of space required while the algorithm is running. So it certainly would be more than the (number of elements in matrix) + (local storage of just finishing thread). It might be the total storage of all the threads at the time the program finished but only if (a) you haven't already exited any of the threads, and (b) none of the threads freed up a bunch of memory just before the program finished. You also need to be careful to not overcount.
As described here there are a lot of optimizations that allow you to reuse space, so you can't just sum up all the memory allocated by every thread that ever existed. You really need to figure out the point in time during your program execution when the most memory is in use.
{ "domain": "cs.stackexchange", "id": 2856, "tags": "space-complexity, threads" }
Simplification of a multi-index Boolean expression towards computation in fewer steps
Question: Let $x_{ij} \in \{0,1\}$, $1 \leq i \leq M$ (typically, $M = 2000$), $1 \leq j \leq N$ (typically, $N = 10$), be Boolean variables. If possible at all, I would like to simplify the following expression $$ \bigvee_{i_1,\dots,i_N} \left(\bigwedge_{k=1}^N x_{i_k k} \right), $$ where the join operation is taken over all indices $(i_1,\dots,i_N) \in \{1,\dots,M \}^N$, so that it can be computed faster instead of looping through all multi-indices. Apologies if this is a well-known problem (with or without solution). This is very much outside of my field of expertise, so I am hoping someone around here could give some feedback on it. Feel free to edit the tags as you see more appropriate. Thanks! Answer: It is equivalent to $$\bigwedge_{k=1}^{N} \bigvee_{i_k=1}^{M} x_{i_k k}.$$ This is a much simpler expression: $NM$ terms instead of $M^N$ terms, if you expand everything out.
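This is just the distributive law applied $N$ times: each factor $k$ of the conjunction can choose its witness index $i_k$ independently. For small $M$ and $N$ the equivalence can be checked exhaustively:

```python
from itertools import product
import random

def equivalent(M, N, trials=50, seed=1):
    """Compare the M^N-term expression with the N*M-term one on random inputs."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = [[rng.random() < 0.5 for _ in range(N)] for _ in range(M)]
        # Original: OR over all tuples (i_1,...,i_N) of AND_k x[i_k][k]
        big = any(all(x[idx[k]][k] for k in range(N))
                  for idx in product(range(M), repeat=N))
        # Simplified: AND_k OR_i x[i][k]
        small = all(any(x[i][k] for i in range(M)) for k in range(N))
        if big != small:
            return False
    return True
```

With the typical sizes in the question (M = 2000, N = 10) the simplified form needs 20,000 evaluations instead of 2000^10.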
{ "domain": "cs.stackexchange", "id": 15032, "tags": "algorithms, logic, boolean-algebra, performance" }
Graph implementation for solving graph problems (java)
Question: I've recently started solving some competitive programming graph questions. I started out using an adjacency list as HashMap<Integer,ArrayList<Integer>> list, but in some problems this doesn't work, so I decided to implement a Graph class instead of this HashMap. I would like a review of efficiency and general code quality. Please let me know if this follows a standard adjacency list implementation, and what its best and worst-case scenarios are. Node class class Node { public int id; public boolean visited; public List<Node> adjecent; Node(int id) { this.id =id; adjecent = new ArrayList<>(); } } Graph class Graph { private List<Node> nodes; Graph() { nodes = new ArrayList<>(); } boolean check(int id) { boolean flag = false; for(Node temp:nodes) { if(temp.id == id) flag =true; } return flag; } Node getNode(int id) { for(Node temp:nodes) { if(temp.id == id) return temp; } return null; } void addEdge(int src,int dest) { Node s = check(src) ? getNode(src):new Node(src); Node d = check(dest) ? getNode(dest):new Node(dest); s.adjecent.add(d); d.adjecent.add(s); if(!check(src)) nodes.add(s); if(!check(dest)) nodes.add(d); } void print() { for(Node temp : nodes) { System.out.print(temp.id+" -> "); temp.adjecent.forEach(e->System.out.print(e.id+" ")); System.out.println(); } } void dfs(Node root) { if(root == null) return; System.out.print(root.id+" "); root.visited = true; for(Node t : root.adjecent) { if(!t.visited) dfs(t); } } void refresh() { for(Node temp : nodes) { temp.visited =false; } } void bfs(Node root) { Queue<Node> q =new LinkedList<>(); root.visited =true; q.add(root); while(!q.isEmpty()) { Node temp = q.poll(); System.out.print(temp.id+" "); for(Node n:temp.adjecent) { if(!n.visited) { n.visited=true; q.add(n); } } } } public static void main(String[] args) { Graph g = new Graph(); /* * 0 * / | \ * 1 2 3 * / \ \ \ * 4 5 6 7 */ g.addEdge(0, 1); g.addEdge(0, 2); g.addEdge(0, 3); g.addEdge(1, 4); g.addEdge(1, 5); g.addEdge(2, 6); g.addEdge(3, 7); g.print();
System.out.println(g.nodes.size()); g.dfs(g.getNode(0)); System.out.println(); g.refresh(); g.bfs(g.getNode(0)); } } Any suggestions to make this code leverage to better efficiency is appreciated. Answer: Don't store method flow flags as state in classes. This is a breach in object-oriented design. class Node { public boolean visited; // .. other } Search methods like dfs should use a map of some sort to store which nodes are visited. By storing this flag incorrectly as state, you'll get in trouble when multiple threads search the graph concurrently.
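To illustrate the reviewer's point in a language-neutral way, here is a Python sketch of the same example graph where "visited" lives in a set local to each traversal instead of on the nodes: traversals no longer need a refresh(), can be run repeatedly, and don't interfere with each other:

```python
from collections import defaultdict

def dfs(adjacency, start):
    """Iterative DFS keeping 'visited' local to the call, not in the nodes."""
    visited, order, stack = set(), [], [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # Push neighbours in reverse so lower-numbered ones come out first,
        # matching the recursive visit order of the reviewed Java code.
        stack.extend(reversed(adjacency[node]))
    return order

# Same undirected example graph as in the question's main()
graph = defaultdict(list)
for a, b in [(0, 1), (0, 2), (0, 3), (1, 4), (1, 5), (2, 6), (3, 7)]:
    graph[a].append(b)
    graph[b].append(a)
```

Calling `dfs(graph, 0)` twice in a row gives the same order both times, with no shared mutable state between the calls.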
{ "domain": "codereview.stackexchange", "id": 35952, "tags": "java, graph" }
A reader and its associated writer for disjoint value access
Question: While extending the functionality of my polymorphic_callable<> type, I designed a way to obtain a result through a reader type that is set by a writer type; this is very similar to what std::future<> and std::promise<> do, but in a non-multi-threaded way. Since that other question is already very long, I've decided to post this question for a review that focuses on these simple classes. Reader reader<> semantics: It can only read from its contained value. It is a move-only type that can only be created by a writer<> instance's create_reader() member function. On destruction, its associated writer<>'s writing pointer is set to nullptr. reader.h #ifndef READER_H #define READER_H #include <utility> #include <type_traits> template<class> class writer; template<class T> class reader { public: using value_type = T; /** * @brief sets the associated writer's pointer to nullptr */ ~reader() noexcept( std::is_nothrow_destructible<value_type>::value ) { writer_->value_ = nullptr; } /** * @brief takes ownership of the contained value of the argument reader as * if by move construction; associates the writer associated with the argument * reader with this reader. */ reader( reader&& rhs ) noexcept( std::is_nothrow_move_constructible<value_type>::value ) : value_{ std::move( rhs.value_ ) } , writer_{ rhs.writer_ } { rhs.writer_->value_ = &value_; } /** * @brief takes ownership of the contained value of the argument reader as * if by move assignment; associates the writer associated with the argument * reader with this reader. 
*/ reader& operator=( reader&& rhs ) noexcept( std::is_nothrow_move_assignable<value_type>::value ) { writer_ = rhs.writer_; value_ = std::move( rhs.value_ ); rhs.writer_->value_ = &value_; } reader( reader const& ) = delete; reader& operator=( reader const& ) = delete; /** * @brief access the contained value */ value_type const& value() const noexcept { return value_; } private: friend class writer<value_type>; /** * @brief constructs a reader and associates it with the parameter writer */ reader( writer<value_type>* w ) noexcept( std::is_nothrow_default_constructible<value_type>::value ) : value_{} , writer_{ w } { w->value_ = &value_; } value_type value_; writer<value_type>* writer_; }; template<> class reader<void> { public: using value_type = void; }; #endif // READER_H Writer writer<> semantics: It is a friend class of the reader<> type. It holds a pointer to the value inside its associated reader<> instance. It can write to its associated reader; if its associated reader is not set, then writing does nothing. This is because a writer<> instance can outlive its associated reader<> instance. It can be queried to know whether it has an associated reader or not with the has_reader() member function. writer.h #ifndef WRITER_H #define WRITER_H #include "reader.h" #include <utility> #include <type_traits> template<class T> class writer { public: using value_type = T; ~writer() = default; /** * @brief constructs a writer with no associated reader. */ constexpr writer() noexcept : value_{ nullptr } {} writer( writer&& rhs ) = default; writer( writer const& ) = default; writer& operator=( writer&& ) = default; writer& operator=( writer const& ) = default; /** * @brief sets the value associated with this writer; if there is no associated * reader, the function does nothing. */ template<class... Args> void set_value( Args&&... args ) noexcept( std::is_nothrow_constructible<value_type, Args&&...>::value ) { if ( value_ ) { *value_ = value_type{ std::forward<Args>( args )...
}; } } /** * @brief creates a new reader and associates it with this writer. */ reader<value_type> create_reader() noexcept { return reader<value_type>{ this }; } /** * @brief indicates whether the writer has an associated reader or not. * @return true if the writer has an associated reader, false otherwise. */ bool has_reader() const noexcept { return static_cast<bool>( value_ ); } private: friend class reader<value_type>; value_type* value_; }; template<> class writer<void> { public: using value_type = void; private: friend class reader<value_type>; }; #endif // WRITER_H While the true use case of these types is found here, here is a usage sample: #include <iostream> int main() { writer<int> w; reader<int> r{ w.create_reader() }; w.set_value( 3 ); std::cout << r.value() << '\n'; } Additional usage sample: struct updated_t { updated_t( writer<int>& wint, writer<std::string>& wstring ) : rint{ wint.create_reader() } , rstring{ wstring.create_reader() } {} reader<int> rint; reader<std::string> rstring; }; int main() { writer<int> wint; writer<std::string> wstring; updated_t updated{ wint, wstring }; while ( true ) { /* write with wint, wstring */ /* rint and rstring get updated where ever they may be */ wint.set_value( 3 ); std::cout << updated.rint.value() << '\n'; wstring.set_value( "hello world\n" ); std::cout << updated.rstring.value() << '\n'; break; } } I'm interested in a review on design and any other features that are missing or that could be added/improved. Answer: Ease of use: You could add an implicit conversion to bool for the writer<> class, so it would be less cumbersome to use. On the other hand, those who are not familiar with the internals might be surprised during their first use. Since the classes have explicit names, you could provide operator(), since it is explicit that invoking it on a reader<> object will perform a read, and invoking it with a parameter on a writer<> object will perform a write. Matter to think about: The code uses circular references.
I know that it greatly simplifies the implementation, but there is probably a better solution. I think the problem points toward creating a raw memory class, which would be supplied to the polymorphic_callable<>. std::optional<> would be a great idea too: when the std::optional<> object evaluates to true, it means that the function has been called and finished execution properly. You could provide your own until C++17, since it is pretty trivial to write one.
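For readers unfamiliar with the pattern, the protocol is easier to see stripped of the C++ lifetime machinery. A rough Python sketch (names mirror the review; the value lives in the reader, the writer writes through a back-pointer, and the destructor becomes an explicit close()):

```python
class Reader:
    """Owns the stored value; created only via Writer.create_reader()."""
    def __init__(self, writer):
        self._value = None
        self._writer = writer
        writer._slot = self          # writer now writes through this back-pointer

    @property
    def value(self):
        return self._value

    def close(self):
        """Stands in for the C++ destructor: detach from the writer."""
        self._writer._slot = None

class Writer:
    def __init__(self):
        self._slot = None            # no associated reader yet

    def create_reader(self):
        return Reader(self)

    def has_reader(self):
        return self._slot is not None

    def set_value(self, v):
        if self._slot is not None:   # writing with no reader is a no-op
            self._slot._value = v

w = Writer()
w.set_value(1)                       # silently ignored: no reader yet
r = w.create_reader()
w.set_value(3)                       # lands in r
result = r.value
r.close()
still_attached = w.has_reader()      # False after the reader detaches
```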
{ "domain": "codereview.stackexchange", "id": 21853, "tags": "c++, c++14" }
Is there a binding affinity metric for interactions not in equilibria?
Question: I am investigating the strength of binding of a small peptide to a protein by isolating the bound version and subjecting it to collisions with gas molecules (CID mass spectrometry) to dissociate the complex. When I plot the collision energy against the proportion of bound and unbound proteins I get a curve that looks like a Kd curve. However, as the interaction isn't in equilibria (i.e. it is in the gas phase) calculating the Kd value wouldn't be correct. So is there another value I could calculate to describe the strength of the interaction? Answer: Bobthejoe was pointing in the right direction kcat and Km - Michaelis Menton parameters are measured to look at the relative rates of an enzyme in non equilibrium state. Basically kcat is 'how fast does the enzyme work with little or no product in solution'. Km is 'how strongly does the enzyme bind the substrates that it uses. http://www.pearsonhighered.com/mathews/ch11/c11kkkk.htm
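To make the kcat/Km suggestion concrete, the Michaelis-Menten rate law gives a strength-of-interaction readout without assuming the overall reaction is at equilibrium. All numbers below are purely illustrative, not measured values:

```python
def rate(S, kcat=100.0, Km=5e-6, E=1e-9):
    """Michaelis-Menten rate v = kcat * [E] * [S] / (Km + [S]).

    Units: concentrations in M, kcat in 1/s, so v is in M/s.
    A smaller Km means tighter effective binding of the substrate.
    """
    return kcat * E * S / (Km + S)

v_max = 100.0 * 1e-9   # kcat * [E], the saturating rate
v_half = rate(5e-6)    # at [S] = Km the rate is exactly half of Vmax
```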
{ "domain": "biology.stackexchange", "id": 206, "tags": "biochemistry, proteins, dissociation-constant" }
Is incineration of a solid fuel complete or incomplete?
Question: Suppose I have some solid material like biomass and incinerate it at 1000 degrees Celsius for 15 minutes in an oxidizing atmosphere within an incineration oven. As an output it gives me ash. Is the incineration complete or incomplete? For example, we distinguish incomplete combustion (lack of oxygen) from complete combustion (oxygen-enriched medium). What about this incineration? Answer: This depends entirely on the conditions. The combustion will be complete if there is enough oxygen to make the mixture stoichiometric or oxygen-rich, and it burns for long enough at a high-enough temperature. However, the precise meaning of each of those three "enough"s will change with the material, so it is impossible to give a definitive answer for a non-specific case. As a general rule, the darker the ash produced by the combustion, the less complete the combustion process, as a darker color is typically caused by remaining charcoal (unburned hydrocarbons). If you start with biomass, it will consist of a lot of hydrogen (which forms water and evaporates), carbon (which, if completely combusted, forms gaseous carbon dioxide), nitrogen (which forms gaseous N$_2$ or NO$_x$) as well as a sundry mix of other elements, which will cool down into mineral oxides in the ash. What that will look like will depend on what those added elements were in the initial biomass.
{ "domain": "physics.stackexchange", "id": 75476, "tags": "thermodynamics, material-science, physical-chemistry, combustion" }
How to build a BAM header file with htslib in C++?
Question: I'd like to use C++ to generate a new BAM file programmatically. This is an example of how to use htslib to generate a new BCF file on the fly. https://github.com/samtools/htslib/blob/develop/test/test-bcf-translate.c The two key functions are: bcf_hdr_append bcf_hdr_add_sample Now when I navigate to the header file for SAM/BAM functions: https://github.com/samtools/htslib/blob/develop/htslib/sam.h The only "append" function is bam_aux_append. But that's for optional fields. My questions: Given a bam_hdr_t header (potentially empty), how to add a new chromosome reference? (For example, add a new chrAB to the BAM file) Given a non-empty bam_hdr_t * header, how to remove a chromosome? (For example, remove chr4 from the BAM header) My sample code: // Empty BAM header, nothing inside bam_hdr_t * hdr = bam_hdr_init(); // I'd like to add a new synthetic `chrAB` into the header, how? Answer: You're just looking in the wrong header file. The BCF stuff will be in vcf.h (you'll see the two functions you mentioned there). With BAM files one doesn't typically need to add/remove header entries, which is why there are no convenience functions for them. To add a header entry: increment hdr->n_targets, then realloc hdr->target_name and hdr->target_len. You likely want to append the name/length to the end of hdr->target_name and hdr->target_len, respectively. I don't think you need to update the dictionary or plain text for most uses (or at least I've never needed to). Removing a chromosome would follow the reverse instructions (you'd need to shift everything in the name and length arrays if you remove entries in the middle). WARNING: doing this is prone to error! I would strongly caution you to be very careful when doing this. Please remember that the chromosome that entries are mapped to isn't actually stored in the alignment entries themselves. Rather, they simply have an index into hdr->target_name, which saves space.
Consequently, if you start adding/removing headers to/from a BAM file with alignments then you're going to have to modify all of the alignments to keep the correct alignment:chromosome associations. Otherwise you'll end up with completely useless results (this is the same caveat as with samtools reheader).
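A hedged sketch of the append steps described above. To keep it self-contained it uses a mock struct that mirrors only the relevant bam_hdr_t fields; the real struct lives in htslib's sam.h and has more members (l_text, text, the dictionary, ...), so treat this as an illustration rather than htslib code:

```cpp
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Mock of the relevant bam_hdr_t fields only, so the sketch compiles alone.
struct mock_bam_hdr {
    std::int32_t   n_targets   = 0;
    char**         target_name = nullptr;
    std::uint32_t* target_len  = nullptr;
};

// Append one reference sequence, following the steps from the answer:
// grow both arrays, store a copy of the name and the length, bump n_targets.
// (realloc/malloc error handling elided for brevity.)
void append_target(mock_bam_hdr* hdr, const char* name, std::uint32_t len) {
    const std::int32_t n = hdr->n_targets + 1;
    hdr->target_name = static_cast<char**>(
        std::realloc(hdr->target_name, n * sizeof(char*)));
    hdr->target_len = static_cast<std::uint32_t*>(
        std::realloc(hdr->target_len, n * sizeof(std::uint32_t)));

    char* copy = static_cast<char*>(std::malloc(std::strlen(name) + 1));
    std::strcpy(copy, name);
    hdr->target_name[n - 1] = copy;
    hdr->target_len[n - 1] = len;
    hdr->n_targets = n;
}
```

As the answer warns, with a real header any alignments already written still carry integer tid indices into target_name, so growing or shrinking these arrays mid-file means rewriting the alignments too.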
{ "domain": "bioinformatics.stackexchange", "id": 229, "tags": "samtools, c++" }
Is overusing gravitational slingshots a real concern?
Question: I was thinking of the trading of kinetic energy during a gravitational slingshot maneuver and wondered if the kinetic energy lost during that process has any noticeable impact on the orbit of the planet. Since the planet we are performing this maneuver on loses kinetic energy, is it realistically possible to do enough slingshots that we noticeably change the orbit of the planet? And what would be the consequences of changing the orbit? Answer: Typically, if a satellite of mass $m$ increases its orbital velocity by $x$ meters per second from the slingshot maneuver, the planet of mass $M$ will decrease its orbital velocity by $x(\frac{m}{M})$ meters per second in order for angular momenta around the sun to be conserved. Venus has a mass of $4.867\times10^{24}$ kg. If the satellite has a mass of 1000 kg and increases its orbital velocity by 20 000 meters per second from a slingshot maneuver around Venus then Venus will reduce its orbital velocity by $20000\frac{1000}{4.867\times 10^{24}}\approx4.1\times10^{-18}$ meters per second. The orbit of Venus will shift slightly towards the sun.
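The worked numbers above can be checked with a short script (a sketch; the mass of Venus and the 20 km/s boost are the values quoted in the answer):

```python
# Momentum conservation: delta_v_planet = delta_v_sat * (m_sat / M_planet).
M_venus = 4.867e24   # kg, as quoted above
m_sat = 1000.0       # kg
dv_sat = 20000.0     # m/s gained by the satellite

dv_venus = dv_sat * (m_sat / M_venus)
print(f"Venus slows by {dv_venus:.2e} m/s")  # ~4.1e-18 m/s, matching the answer
```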
{ "domain": "physics.stackexchange", "id": 98208, "tags": "energy-conservation, orbital-motion, estimation, celestial-mechanics, order-of-magnitude" }
Is probabilistic machine learning just the mathematical background of machine learning?
Question: I wanted to begin with machine learning. I went through the contents of the course on ML by Andrew Ng and found that, though the course was based on mathematics, it wasn't heavy on probability or statistics. But in many university curricula the book "Probabilistic Machine Learning: An Introduction" by K. Murphy is being used, and it seems to be full of probability, statistics, etc. I wanted to know: does the ML used in industry, like neural networks or other ML techniques, have a statistical or probabilistic foundation that practitioners often don't bother with, or is it a new emerging field whose mathematical basis is not from probability or statistics? I really want to know whether the ML in academia is really the foundation of the ML used in industry, or is it ML in its old form? And finally, are the contents of the book "Probabilistic Machine Learning: An Introduction" relevant to the ML being used today? I am aware that these are quite broad and open-ended questions from someone entering the field; I would be happy with informative answers or links. Answer: Knowledge of probability and statistics is absolutely important. It might feel like you can do quite a bit without it and can get things to work using the high-level ML libraries that we have today. However, if you are a serious practitioner you will figure out that beyond a point your knowledge of how these techniques work and are tuned is very hollow, and that will automatically lead you to the more mathematically inclined books that give you the foundational knowledge.
{ "domain": "datascience.stackexchange", "id": 9516, "tags": "machine-learning" }
Is C. elegans always observed with precisely 302 neurons? Are there ever individual viable exceptions?
Question: This answer mentions that the C. elegans hermaphrodite has exactly 302 distinct neurons. This has made it a very effective model for a variety of types of biological research, including neurology and cell differentiation. It is also currently the only organism with a completely mapped connectome. But the word "always" made me wonder - has a viable specimen ever been verified to naturally have a number of neurons other than 302? Not as a result of an experiment, just naturally? Answer: According to the highly respected WORMATLAS: A Database of Behavioral and Structural Anatomy of Caenorhabditis elegans, the number is invariable in this animal, one of the most studied in the world. There are 302 neurons in the nervous system of C. elegans; this number is invariant between animals. Each neuron has a unique combination of properties, such as morphology, connectivity and position, so that every neuron may be given a unique label. Groups of neurons that differ from each other only in position have been assigned to classes. There are 118 classes that have been made using these criteria, the class sizes ranging from 1 to 13. Thus C. elegans has a rich variety of neuron types in spite of having only a small total complement of neurons. (Emphasis mine) From the above, you might guess that the number of synapses, however, is not. The full list of synapses for hermaphrodite (including larval stages) and adult male are currently being reviewed and revised for the Wormwiring Project. All data comes from re/analysis of the sections for the hermaphrodite N2U, N2T, N2W and JSE animals, and male N2Y and n930 animals. The total counts of both electrical and chemical synapses are likely to be substantially higher than what was reported in the Mind of a Worm. Would I be surprised if someone found a different number in a particular specimen? No more so than when people are born with four kidneys, a parasitized twin, etc.
Edited to Add: An article, "Mutations that affect neural cell lineages and cell fates during the development of the nematode Caenorhabditis elegans", has identified mutations with more or fewer neurons: Specifically, unc-83 and unc-84 mutations affect certain precursor cells that generate both neural and nonneural descendants; lin-22 and lin-26 mutants lead to the generation of supernumerary neural cells with a concomitant loss of nonneural cells; lin-4, lin-14, lin-28, and lin-29 mutants perturb global aspects of developmental timing, altering the time of appearance (or preventing the appearance) of both neural and nonneural cells... However, access to the paper is restricted, and I don't know if these mutations were induced (most likely they were). HT to @canadianer for the link that led to my link.
{ "domain": "biology.stackexchange", "id": 6840, "tags": "neuroscience, neurophysiology, anatomy, neuroanatomy" }
Should I keep common stop-words when preprocessing for word embedding?
Question: If I want to construct a word embedding by predicting a target word given context words, is it better to remove stop words or keep them? the quick brown fox jumped over the lazy dog or quick brown fox jumped lazy dog As a human, I feel like keeping the stop words makes it easier to understand even though they are superfluous. So what about for a Neural Network? Answer: In general, stop-words can be omitted since they do not contain any useful information about the content of your sentence or document. The intuition behind that is that stop-words are the most common words in a language and occur in every document independent of the context. Therefore they contain no valuable information which could hint at the content of the document.
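A minimal sketch of stop-word filtering for the example sentence above (the stop-word list here is a tiny illustrative subset, not a standard list such as the ones shipped with NLTK or spaCy):

```python
# Drop the most common function words before building training pairs.
STOP_WORDS = {"the", "a", "an", "over", "of", "in", "on"}

def remove_stop_words(sentence):
    return [w for w in sentence.lower().split() if w not in STOP_WORDS]

tokens = remove_stop_words("The quick brown fox jumped over the lazy dog")
print(tokens)  # ['quick', 'brown', 'fox', 'jumped', 'lazy', 'dog']
```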
{ "domain": "datascience.stackexchange", "id": 8194, "tags": "word-embeddings, word2vec, nlp" }
Measuring Amplitude of sound wave FFT
Question: Please, pardon my ignorance. This may be a simple question, but I am still unable to explain what is happening. I am using a turtlebot to listen to sound emitted from a stationary sound source. My experiment setup is this: One stationary sound source emits a tone of 500 Hz. A turtlebot robot located about 3.5m away listens and measures the amplitude of the sound source (used FFT and filtered out 500Hz) at a rate of 100Hz while moving linearly toward the sound source at a velocity of 0.1m/s. The result of the measurements is shown below. NB: X axis is distance in metres, Y axis is amplitude measured by the turtlebot. My question is this: 1. Why is there a noticeable dip in the amplitude as the robot moves closer to the sound source? 2. Please can you help me with links to resources that can help me explain this behaviour? Answer: These are interesting results. Your comment "The X axis represents the distance travelled in metres (from 3.5m away to 0.5m away from the sound source). " seems to contradict "With this explanation, '0' represents 0.5m from sound source, while '3.0' represents 3.5m from sound source." The latter seems to be the one that fits your data better. What is more pronounced are the fluctuations you are getting. Taking the speed of sound at 343 meters per second: $$ \frac{343~\text{meters}/\text{second}}{500~\text{cycles}/\text{second}} = 0.686~\text{meters/cycle} $$ With standing waves, the nodes are at half the wavelength, which corresponds closely to the size of your fluctuations in the graph. So as your robot is moving through the room, it looks like you are measuring the standing wave nodes.
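The arithmetic above can be checked directly (a sketch using the quoted speed of sound):

```python
# Wavelength of the 500 Hz tone, and the half-wavelength node spacing
# of the standing-wave pattern the answer describes.
speed_of_sound = 343.0  # m/s
freq = 500.0            # Hz

wavelength = speed_of_sound / freq   # 0.686 m per cycle
node_spacing = wavelength / 2.0      # nodes sit half a wavelength apart
print(wavelength, node_spacing)      # 0.686 0.343
```

So fluctuations spaced roughly 0.34 m apart along the robot's path are what a standing-wave explanation predicts.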
{ "domain": "dsp.stackexchange", "id": 6357, "tags": "fft, sound, amplitude, doppler" }
Node.js building a JSON restful web api structure
Question: I am building a JSON restful web api in Node.js and so far I only got the main structure, however I am not 100% sure I am doing this correct from scratch, I still have to get used to the fact that Node.js is event driven. Directory structure: 1. main.js 2. modules a. router.js b. config.js c. routes.js 1. main.js - contains the main entry point - node main.js 2. modules - folder that contains all the modules. 2a. router.js - contains "singleton" class module that will handle routing. var Router = function(routes){ this.routes = routes; Router.route = function(){ console.log('routing from singleton router...'); }; return Router; }; module.exports = Router; 2b. config.js - contains the global configuration. exports.config = { port: 9090 }; 2c. routes.js - contains the global routing schema, the router module will get the module injected (from main.js) exports.routes = { 'POST /users': { 'controller': 'users', 'action': 'create' }, 'GET /users': { 'controller': 'users', 'action': 'getSelf' }, 'GET /users/:uid/posts/:pid': { 'controller': 'users', 'action': 'getPostsFromUser' } }; This is the structure so far, I am planning to extend it to this directory structure: 1. main.js 2. modules a. router.js b. config.js c. routes.js d. loader.js (model loader, connection pool etc.) e. validator.js (validates input) d. controller.js (main controller, how to inherit from this? through inheritance?) 3. controllers - contains controllers 4. models - contains models How to handle connection pooling in the structure? Would it be an idea to create a "registry" class, I come from a PHP background and that is how I would share objects, instead of dependency injection. 
sample registry.js var Registry = function(){ this.data = {}; Registry.push = function(key, value){ return this.data[key] = value; }; Registry.pull = function(key){ return this.data[key]; } return Registry; }; module.exports = Registry; As I said, I am kinda new to Node.js and I want to make sure I get it right from scratch. If you have any suggestions, comments, or tips, I really appreciate your help and time. Answer: Sorry if I'm not answering your specific question, but are you asking about overall style, or something else? There's no "right/wrong way" for many things in node. For example, I would have used http://expressjs.com/ for this: var express = require('express') var app = express.createServer() app.use(express.bodyParser()) app.get('/users', function(req, resp, next) { ... }) app.get('/users/:uid/posts/:pid', function(req, resp, next) { ... }) app.post('/users', function(req, resp, next) { ... }) app.listen(9090)
{ "domain": "codereview.stackexchange", "id": 2158, "tags": "javascript, node.js" }
Is the digital representation of acoustic waves directly proportional to displacement (or pressure)?
Question: In most audio editing software (like Audacity) the waveform can be viewed. Zoomed in, these look like actual waveforms. Out of curiosity I downloaded some frequency sweep test files (which are intended to test the linearity of associated devices). In all of these files the amplitude was constant, only the frequency was increasing. But shouldn't the amplitude be decreasing as the inverse square of frequency for constant intensity (and energy)? Or are these somehow universally equalized? If they are, is there any way of viewing the actual waveform? Or am I missing something? Answer: An ideal loudspeaker generates pressure fluctuations proportional to the voltage and an ideal microphone generates voltages proportional to pressure fluctuations. In practice (for non-ideal transducers), there will be frequency-dependent phase differences and a frequency-dependent amplitude as well. The energy stored in an acoustic wavetrain in air is equivalent to the work needed to compress the air, which is proportional to $(\Delta p)^2$. The local pressure deviation in an acoustic wave traveling in the $z$ direction is proportional to $\partial u/\partial z$, for a displacement $u$. You can derive that the power must be $\propto u_{\mathrm{max}}^2 \omega^2$. A reason for not specifying a sound wave by its displacement is that this property only applies to a wave in free space. Near a surface, such as the surface of a transducer, the displacement tends to be close to zero. Your statement: shouldn't the amplitude be decreasing as the inverse square of frequency for constant intensity (and energy) also applies to a mass-spring system, where the resonance frequency is $\omega_0=\sqrt{k/m}$, where $k$ is the spring stiffness and $m$ the mass. In such a system, the stored energy is $U=\frac12 \omega_0^2 m A^2$ for amplitude $A$, but the only way to increase $\omega_0$ at fixed mass is by increasing the stiffness of the spring.
In the case of acoustic waves propagating in free space, the stiffness of the air is fixed and there is no resonance.
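The $\propto u_{\mathrm{max}}^2 \omega^2$ step can be spelled out with the standard linear-acoustics plane-wave result (a sketch; $\rho$ is the air density and $c$ the speed of sound, neither of which appears explicitly above): $$ u(z,t)=u_{\mathrm{max}}\sin(\omega t-kz), \qquad v=\frac{\partial u}{\partial t}=\omega u_{\mathrm{max}}\cos(\omega t-kz), $$ $$ \Delta p_{\max}=\rho c\,\omega u_{\mathrm{max}}, \qquad I=\langle \Delta p\, v\rangle=\tfrac{1}{2}\rho c\,\omega^{2}u_{\mathrm{max}}^{2}. $$ So at constant intensity the displacement amplitude falls off as $1/\omega$ while the pressure amplitude $\Delta p_{\max}$ stays constant, which is why a constant-amplitude sweep file, being a recording of pressure-proportional voltages, is consistent with constant intensity.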
{ "domain": "physics.stackexchange", "id": 69901, "tags": "acoustics, electronics" }
Monkey-patching Jasmine's it() function to log errors
Question: The introduction of async native support in Jasmine doesn't log the errors (i.e., specific line number where the error occurs) to the console. So, to get around this behavior and to make the errors appear in the console, I came up with the following solution to patch Jasmine. I have the following function which I implemented to monkey-patch Jasmine. And I would like to reuse the same function for other functions like beforeEach, beforeAll and so on. function patchJasmineForAsyncError(method: string) { let oldFn = window[method]; if (method === 'it' || method === 'fit') { window[method] = function (desc: string, fn: Function) { oldFn(desc, (done) => { let p = fn(); if (p instanceof Promise) { p.then(() => done()).catch(e => { console.error(e); done.fail(e); }); } }); } } else { window[method] = function (axn: Function) { oldFn((done) => { let p = axn(); if (p instanceof Promise) { p.then(() => done()).catch(e => { console.error(e); done.fail(e); }); } }); } } } As you can see the second part of the function (the else block) has quite a lot of duplication. I was wondering if there is an elegant way to minimize this and implement the function. Note: the it and fit functions have two arguments, desc and fn, but beforeAll, beforeEach, etc. have only one argument, axn. Answer: Just factor it out into a closure function patchJasmineForAsyncError(method: string) { let oldFn = window[method]; function create_logger(fn: Function) { return (done) => { let p = fn(); if (p instanceof Promise) { p.then(done).catch(e => { console.error(e); done.fail(e); }); } } } if (method === 'it' || method === 'fit') { window[method] = function(desc: string, fn: Function) { oldFn(desc, create_logger(fn)); } } else { window[method] = function (axn: Function) { oldFn(create_logger(axn)); } } } We can clean this up a bit more by getting rid of some magic. I also got rid of the () => done() because that's the same as done.
function patchJasmineForAsyncError(method: string) { let oldFn = window[method]; function create_logger(fn: Function) { return (done) => { let p = fn(); if (p instanceof Promise) { p.then(done).catch(e => { console.error(e); done.fail(e); }); } } } let allowed_methods = ['it', 'fit']; if (allowed_methods.includes(method)) { window[method] = function(desc: string, fn: Function) { oldFn(desc, create_logger(fn)); } } else { window[method] = function (axn: Function) { oldFn(create_logger(axn)); } } } We can also reduce some of the duplication in assigning to window[method] if we're okay with adding a slight slow-down to the function, which should be fine. function patchJasmineForAsyncError(method: string) { let oldFn = window[method]; function create_logger(fn: Function) { return (done) => { let p = fn(); if (p instanceof Promise) { p.then(done) .catch(e => { console.error(e); done.fail(e); }); } } } let allowed_methods = ['it', 'fit']; window[method] = function() { if (allowed_methods.includes(method)) { oldFn(arguments[0], create_logger(arguments[1])); } else { oldFn(create_logger(arguments[0])); } } }
{ "domain": "codereview.stackexchange", "id": 29725, "tags": "error-handling, logging, promise, typescript, jasmine" }
ROS MoveIt! move_group.launch
Question: When I try to run the original move_group.launch file generated by MoveIt!, I get the following error: auto-starting new master process[master]: started with pid [25531] ROS_MASTER_URI=http://localhost:11311 setting /run_id to aea97446-1118-11e5-a1c4-14feb5ae1f20 process[rosout-1]: started with pid [25544] started core service [/rosout] process[move_group-2]: started with pid [25561] [ERROR] [1434123343.429734522]: Robot model not loaded terminate called after throwing an instance of 'ros::InvalidNameException' what(): Character [_] is not valid as the first character in Graph Resource Name [_planning/shape_transform_cache_lookup_wait_time]. Valid characters are a-z, A-Z, / and in some cases ~. [move_group-2] process has died [pid 25561, exit code -6, cmd /home/***/moveit/devel/lib/moveit_ros_move_group/move_group __name:=move_group __log:=/home/***/.ros/log/aea97446-1118-11e5-a1c4-14feb5ae1f20/move_group-2.log]. log file: /home/***/.ros/log/aea97446-1118-11e5-a1c4-14feb5ae1f20/move_group-2*.log Google has no idea, and I have no idea what the problem could be :) <include file="$(find torobot_moveit_final)/launch/planning_context.launch" /> is included at the beginning of the launch file, which loads the robot model. Originally posted by TkrA on ROS Answers with karma: 45 on 2015-06-12 Post score: 0 Original comments Comment by dornhege on 2015-06-12: This seems to be the problem: [ERROR] [1434123343.429734522]: Robot model not loaded Comment by TkrA on 2015-06-12: I just edited the post; I have the include at the beginning of the launch file. It is strange, because I didn't edit the file; it was generated by MoveIt! and still isn't working. Comment by dornhege on 2015-06-12: It seems like some prefix like your robot name or something is missing.
Answer: move_group.launch does not load the URDF itself, it uses planning_context.launch for that (from move_group.launch): <include file="$(find your_moveit_config)/launch/planning_context.launch" /> But, planning_context.launch only loads the URDF if load_robot_description is true: <param if="$(arg load_robot_description)" name="$(arg robot_description)" textfile="/path/to/your.urdf"/> and by default, load_robot_description is false, so planning_context.launch doesn't load it. If you look in demo.launch, you'll see this: <!-- Load the URDF, SRDF and other .yaml configuration files on the param server --> <include file="$(find your_moveit_config)/launch/planning_context.launch"> <arg name="load_robot_description" value="true" /> </include> and finally: <!-- Run the main MoveIt executable [..] --> <include file="$(find your_moveit_config)/launch/move_group.launch"> ... </include> So you'll have to set load_robot_description to true before you start move_group.launch. Whether you do that using rosparam set .. or by using the structure of demo.launch is up to you. Originally posted by gvdhoorn with karma: 86574 on 2015-06-13 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by TkrA on 2015-06-13: I just edited the move_group.launch file: <include file="$(find torobot_moveit_final)/launch/planning_context.launch"> <arg name="load_robot_description" value="true" /> </include> Comment by TkrA on 2015-06-13: But now it gives the following error message: [FATAL] [1434187128.984235619]: Parameter '~moveit_controller_manager' not specified. This is needed to identify the plugin to use for interacting with controllers. No paths can be executed. Comment by gvdhoorn on 2015-06-13: Look at what demo.launch does with move_group.launch. There is much more going on than simply setting load_robot_description.
{ "domain": "robotics.stackexchange", "id": 21906, "tags": "ros, moveit" }
Charge conductivity by Non-Equilibrium Green Function Method
Question: I am reading the calculation of charge conductivity by the Non-Equilibrium Green Function (NEGF) method in the following paper. Van-Nam Do, Non-equilibrium Green function method: theory and application in simulation of nanometer electronic devices, Adv. Nat. Sci.: Nanosci. Nanotechnol. 5 (2014) 033001 (21pp) According to it, the charge conductivity is computed by the following formula. $$I=\frac{e_{0}}{\hbar}\int\frac{dE}{2\pi}T(E)[n_L(E)-n_R(E)]$$ Here, $T(E)$ is the transmission coefficient; $n_L(E)$ and $n_R(E)$ are the occupation numbers of carriers at the energy value $E$ in the left and right leads. These occupation numbers are usually computed from the Fermi-Dirac distribution function. If the left and right leads are the same material, then $n_L(E)$ and $n_R(E)$ should have the same value. This means $[n_L(E)-n_R(E)]$ should be zero and the total charge current would be zero as well. Taking the Cu-benzene-Cu nanowire as an example, there would be no charge current flowing through it if the conductivity is computed by this formula. Obviously, this is not right. Would anyone please tell me what is wrong with my understanding of this formula? Taking the Cu-benzene-Cu nanowire as an example, would anyone please tell me how to compute $n_L(E)$ and $n_R(E)$ to make sure the final charge conductivity of the nanowire is not zero? Thank you in advance. Answer: Conductance vs. conductivity Let me first point out that for nanodevices we do not speak of conductivity, but of conductance. Conductivity is a local property, which implies averaging over a physically small volume. Such an averaging is impossible/meaningless for nanodevices. Importantly, some basic phenomena, like conductance quantization, could not be discussed with such averaging. Landauer-Büttiker formalism The equation given in the OP is grounded in the Landauer formalism (often referred to as the Landauer-Büttiker formalism, Büttiker having generalized the formalism to more than two terminals).
The formalism applies to non-interacting particles, so there is not much point in using Green's functions, although there exist generalizations to the interacting case (notably, the approach by Jauho, Meir and Wingreen, based on Keldysh Green's functions). Fermi distributions To drive a current through a nanostructure we need to apply an electric field. In the case of nanostructures we usually think of applying the electric field as a difference in the chemical potentials of the reservoirs feeding and collecting the electrons to/from the structure (this is what Landauer and Büttiker have taught us). In other words, the distribution functions $n_{L,R}(E)$ are different not only due to the properties of the material, but because the materials are held at different chemical potentials: $$ n_{L,R}=\frac{1}{e^{\beta(E-\mu_{L,R})}+1}, \text{ where the potential difference is } \mu_L-\mu_R=eV $$ I suggest several standard references: Introduction to Mesoscopic Physics by Joe Imry Quantum Kinetics in Transport and Optics of Semiconductors by Jauho and Haug Electronic Transport in Mesoscopic Systems by Supriyo Datta The original articles by Landauer, Büttiker, Jauho, and Meir & Wingreen are also quite readable and highly recommended.
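The point about the bias can be made concrete with a numeric sketch of the quoted formula, using a toy transmission $T(E)=1$ and leads held at different chemical potentials (all device numbers here are illustrative, not for a real Cu-benzene-Cu wire):

```python
import numpy as np

# I = (e0/hbar) * Integral dE/(2*pi) * T(E) * [n_L(E) - n_R(E)],
# with mu_L - mu_R = e0*V providing the nonzero difference n_L - n_R.
e0 = 1.602176634e-19     # C
hbar = 1.054571817e-34   # J s
kT = 0.025 * e0          # roughly room temperature, in joules

def fermi(E, mu):
    return 1.0 / (np.exp((E - mu) / kT) + 1.0)

E = np.linspace(-0.5 * e0, 0.5 * e0, 20001)   # energy grid (J)
dE = E[1] - E[0]
T = np.ones_like(E)                            # toy transmission, T(E) = 1
mu_L, mu_R = 0.05 * e0, -0.05 * e0             # a 0.1 V bias

integrand = T * (fermi(E, mu_L) - fermi(E, mu_R))
current = (e0 / hbar) * np.sum(integrand) * dE / (2.0 * np.pi)
# For T(E) = 1 this reduces to the (per-spin) conductance quantum times V,
# e0**2/h * 0.1 V, on the order of 4e-6 A.
print(current)
```

With mu_L == mu_R the integrand vanishes identically and so does the current, which is exactly the scenario the question describes: identical leads at the same potential carry no net current.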
{ "domain": "physics.stackexchange", "id": 95111, "tags": "quantum-mechanics, nanoscience, quantum-transport" }
Meaning of shake-off in nuclear physics
Question: What does shake-off mean in nuclear physics? The word appears, for example, in the title of this article: "Shake-off in the ¹⁶⁴Er neutrinoless double-electron capture and the dark matter puzzle" Answer: "Shake-off" refers to an electron being ejected from an atom, i.e. ionization. A related term, "shake-up", refers to an electron being moved to a higher energy state within the atom, i.e. excitation. This usage is fairly common, and "shake-off" can be traced back to Robert Millikan, who in his 1917 book "The Electron" states on page 138: We must conclude, then, that an atom has so loose a structure that another atom, if endowed with enough speed, can shoot straight through it without doing anything more than, in some instances, to shake off from that atom an electron or two.
{ "domain": "physics.stackexchange", "id": 95045, "tags": "nuclear-physics, terminology, definition" }
Can someone guide me on how to use MoveIt for path planning with manipulators?
Question: I am looking into path planning for manipulators to avoid obstacles using the potential field method. I am new to ROS, so I hope the community will help me to complete my work. Originally posted by alexi on ROS Answers with karma: 38 on 2015-10-22 Post score: 1 Answer: Hi! If you are looking for tutorials on using MoveIt!, you should check the tutorials on the official MoveIt! website. If what you are trying to do is research how path planning works in MoveIt!, you should know that MoveIt! uses OMPL as the default path planning library, so you should search for documentation about that software. Anyway, if you are new to ROS and want to learn to use it, what you should do first is the basic tutorials from the wiki, to start learning the basic concepts that are the key to understanding the ROS system. I hope this helps! Originally posted by donmrsir with karma: 363 on 2015-11-03 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 22821, "tags": "moveit" }
Blowing your own sail?
Question: How is this possible? Even if the gif is fake, the Mythbusters did it, and with a large sail it really moves forward. What is the explanation? Answer: The concept of blowing your own sail really does have to do with conservation of momentum. In that very episode of Mythbusters you speak of, the sail was removed, the fan was spun around and the ship/boat was propelled forward much faster than with the fan facing into the sail (i.e. figure (1) is much faster than figure (2)). The reason is really quite simple and can be explained by throwing a ball off a boat. Suppose you are on a boat carrying a ball with total mass $m_{ball}+m_{boat}$ where $m_{boat}$ also takes into consideration your mass. Now if you throw the ball off the boat at velocity $v_{ball}$ then you and the boat will have momentum $m_{boat}v_{boat}=-m_{ball}v_{ball}$. This is analogous to figure (1). Now consider the case of figure (2). In this case, I throw the ball at the sail, and it bounces off the sail and into the water behind me. Because the process is inelastic, the ball now leaves the boat with $v'_{ball} < v_{ball}$. Therefore my momentum is now $m_{boat}v_{boat} = -m_{ball}v'_{ball}$. Now just replace the ball with air molecules and the analogy is complete. Therefore it will always be more efficient to spin the fan around and blow in the opposite direction while forgetting the sail. Hope this helps!
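The ball-throwing comparison can be put into numbers (an illustrative sketch; masses and speeds are made up, not data from the episode):

```python
# Momentum conservation for both cases described above.
m_ball, m_boat = 1.0, 100.0   # kg

# Figure (1): no sail, throw the ball straight off the back at 10 m/s.
v_ball = -10.0
v_boat_no_sail = -m_ball * v_ball / m_boat        # boat recoils forward

# Figure (2): throw it at the sail; the inelastic bounce sends it back
# slower, say at 4 m/s, so less momentum is carried away from the boat.
v_ball_after_sail = -4.0
v_boat_with_sail = -m_ball * v_ball_after_sail / m_boat

print(v_boat_no_sail, v_boat_with_sail)  # 0.1 0.04, so case (1) wins
```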
{ "domain": "physics.stackexchange", "id": 71107, "tags": "classical-mechanics, popular-science" }
Repulsion of electric charges
Question: If I take a vacuum tube and put electrons in it, and put a negative ion strip or something like that outside the tube, will the electrons inside the tube start moving due to the repulsion of charges of the same sign? Note: Inside the tube there are no other charges, just electrons, and outside there are negative ions; so will the electrons inside move? Answer: (Moving this from my comment to an answer) Yes, the electric field simply penetrates the glass wall and charges (the electrons) placed in that field will feel a force and move. The glass does not really interact with the charges on either side, so you might as well remove it completely (theoretically).
{ "domain": "physics.stackexchange", "id": 20119, "tags": "particle-physics, electricity, electrons, vacuum, ions" }
Simple (base) Twitch IRC bot
Question: I've created the beginnings of a Twitch IRC bot. As of current, it only has one command (!echo), though the infrastructure is (hopefully) there to library this code and build an actual bot on top of this base code. This is my first time making a system with this much threaded and i/o operations in it, so I'd especially appreciate comments around those parts of the code. Other key points I'm looking at are: potential for further "good" abstraction of the Twitch IRC API; routines potentially located in the wrong class/package; places to use the Java8 Streams; extensibility of the command handler. And of course, naming and documentation improvements. Uses JetBrains Nullability Annotations. You can view the full bot on GitHub. You can view the Twitch IRC API on GitHub. cad97.twitchapi.Constants package cad97.twitchapi; final class Constants { static final String HOST = "irc.chat.twitch.tv"; static final int PORT = 6667; static final double RATE_LIMIT = 20d / 30d; static final double MOD_RATE_LIMIT = 100d / 30d; static final double INVERSE_RATE_LIMIT = 1d / RATE_LIMIT; static final double INVERSE_MOD_RATE_LIMIT = 1d / MOD_RATE_LIMIT; private Constants() { throw new UnsupportedOperationException("No Constants instance for you!"); } } cad97.twitchapi.TwitchIRCSocket package cad97.twitchapi; import org.jetbrains.annotations.NotNull; import java.io.*; import java.net.Socket; import java.util.Optional; import java.util.function.Consumer; import java.util.function.Supplier; import java.util.regex.Matcher; import java.util.regex.Pattern; public class TwitchIRCSocket implements Closeable { private final @NotNull Socket socket; private final @NotNull BufferedReader reader; private final @NotNull PrintWriter writer; private TwitchIRCSocket() throws IOException { socket = new Socket(Constants.HOST, Constants.PORT); reader = new BufferedReader(new InputStreamReader(socket.getInputStream())); writer = new PrintWriter(new OutputStreamWriter(socket.getOutputStream())); } /** 
* A twitch connection over IRC with given methods to handle input and output. * * I/O is done raw to the Twitch IRC server. * * @param supplier A supplier of Strings to send messages. * Should block until a message is available to be sent. * * @param consumer A consumer of Strings to handle received messages. * Called whenever a message is received from the server. * Exception: PING PONG is handled for you. * * @param mod If the bot is connecting as a mod. * Messages rate-limited to 20 messages / 30 seconds if false. * Messages rate-limited to 100 messages / 30 seconds if true. * * @throws IOException if an I/O connection occurs during connection to the server. */ public TwitchIRCSocket(Supplier<String> supplier, Consumer<String> consumer, boolean mod) throws IOException { this(); setReceiver(consumer); setSender(supplier); } private Thread receiver; private void setReceiver(@NotNull Consumer<@NotNull String> consumer) { System.out.println("Set receiver to " + consumer); if (receiver != null) receiver.interrupt(); receiver = new Thread(() -> { try { while (!Thread.currentThread().isInterrupted()) { receiveMessageInto(consumer); } } catch (IOException e) { System.err.println("Error during socket read"); e.printStackTrace(); } }, "Message Receiver"); receiver.setDaemon(true); receiver.start(); System.out.println("Message Receiver thread started"); } private void receiveMessageInto(@NotNull Consumer<@NotNull String> consumer) throws IOException { String message = reader.readLine(); System.out.println(message); if (message == null) throw new IOException("Socket closed"); // PING PONG needs to be handled here for quick response times if (message.startsWith("PING ")) { sendMessageFrom(()->"PONG " + message.substring(5)); } else { consumer.accept(message); } } private Thread sender; private void setSender(@NotNull Supplier<@NotNull String> supplier) { System.out.println("Set sender to " + supplier); if (sender != null) sender.interrupt(); sender = new Thread(() -> { try { 
while (!Thread.currentThread().isInterrupted()) { sendMessageFrom(supplier); } } catch (IOException e) { System.err.println("Error during socket write"); e.printStackTrace(); } }, "Message Sender"); sender.setDaemon(true); sender.start(); System.out.println("Message Sender thread started"); } private void sendMessageFrom(@NotNull Supplier<@NotNull String> supplier) throws IOException { String message = supplier.get(); if (message.endsWith("\n")) { System.out.print(">>>" + message); writer.print(message); } else { System.out.println(">>>" + message); writer.println(message); } writer.flush(); } /** * {@inheritDoc} */ @Override public void close() throws IOException { socket.close(); reader.close(); writer.close(); if (receiver != null) receiver.interrupt(); if (sender != null) sender.interrupt(); } private final static Pattern PRIVMSG = Pattern.compile("^:(\\w+)!\\1@\\1\\.tmi\\.twitch\\.tv PRIVMSG #(\\w+) :(.*)$"); public static @NotNull Optional<@NotNull TwitchMessage> convertRawMessage(@NotNull String message) { Matcher match = PRIVMSG.matcher(message); if (!match.matches()) { return Optional.empty(); } else { return Optional.of(new TwitchMessage(match.group(1), match.group(3))); } } } cad97.twitchapi.TwitchMessage package cad97.twitchapi; import org.jetbrains.annotations.NotNull; public class TwitchMessage { public final @NotNull String user; public final @NotNull String message; TwitchMessage(@NotNull String user, @NotNull String message) { this.user = user; this.message = message; } } cad97.twitchbot.Main package cad97.twitchbot; import cad97.twitchapi.TwitchIRCSocket; import javafx.application.Application; import javafx.fxml.FXMLLoader; import javafx.scene.Parent; import javafx.scene.Scene; import javafx.stage.Stage; import org.jetbrains.annotations.NotNull; public class Main extends Application { public static void main(String[] args) { launch(args); } @Override public void start(@NotNull Stage primaryStage) throws Exception { FXMLLoader fxmlLoader = new 
FXMLLoader(); Parent root = fxmlLoader.load(getClass().getResourceAsStream("../../display.fxml")); final Controller controller = fxmlLoader.getController(); primaryStage.setTitle("Twitch Bot"); primaryStage.setScene(new Scene(root, 300, 275)); primaryStage.show(); Bot bot = new Bot( SensitiveConstants.NICKNAME, SensitiveConstants.CHANNEL, SensitiveConstants.OAUTH_TOKEN, controller::display ); bot.registerCommand("!echo ", (message) -> message.message.substring(6)); Thread botThread = new Thread(bot); botThread.setDaemon(true); botThread.start(); TwitchIRCSocket ircSocket = new TwitchIRCSocket(bot::provideResponse, bot::receiveMessage, true); System.out.println("Main finished"); } } cad97.twitchbot.Controller package cad97.twitchbot; import javafx.animation.AnimationTimer; import javafx.fxml.FXML; import javafx.scene.control.TextArea; import javafx.scene.control.TextField; import org.jetbrains.annotations.NotNull; import java.util.ArrayList; import java.util.List; import java.util.concurrent.BlockingQueue; import java.util.concurrent.LinkedBlockingQueue; public class Controller { @FXML TextArea textArea; public Controller() { new AnimationTimer() { @Override public void handle(long now) { if (!queue.isEmpty()) { List<String> list = new ArrayList<>(); queue.drainTo(list); for (String s : list) { textArea.appendText(s); if (!s.endsWith("\n")) { textArea.appendText("\n"); } } } } }.start(); } private @NotNull BlockingQueue<@NotNull String> queue = new LinkedBlockingQueue<>(); boolean display(@NotNull String s) { return queue.offer(s); } } cad97.twitchbot.bot package cad97.twitchbot; import cad97.twitchapi.TwitchMessage; import org.jetbrains.annotations.Contract; import org.jetbrains.annotations.NotNull; import java.util.HashMap; import java.util.Optional; import java.util.concurrent.ArrayBlockingQueue; import java.util.concurrent.BlockingQueue; import java.util.function.Consumer; import java.util.function.Function; import static 
cad97.twitchapi.TwitchIRCSocket.convertRawMessage; class Bot implements Runnable { private final @NotNull String nickname; private final @NotNull String channel; private final @NotNull String oauthToken; private final @NotNull Consumer<String> display; Bot(@NotNull String nickname, @NotNull String channel, @NotNull String oauthToken, @NotNull Consumer<String> display) { this.nickname = nickname; this.channel = channel; this.oauthToken = oauthToken; this.display = display; } private @NotNull HashMap<@NotNull String, @NotNull Function<TwitchMessage, String>> commands = new HashMap<>(); void registerCommand(@NotNull String command, @NotNull Function<TwitchMessage, String> response) { commands.put(command, response); } /** * {@inheritDoc} */ public void run() { responses.add(String.format("PASS %1s", oauthToken)); responses.add(String.format("NICK %1s", nickname)); responses.add(String.format("JOIN #%1s", channel)); while (!Thread.currentThread().isInterrupted()) { try { TwitchMessage message = messages.take(); commands.keySet().stream() .filter(message.message::startsWith) .map(commands::get) .map(command -> command.apply(message)) .map(this::formatMsg) .forEach(responses::offer); } catch (InterruptedException e) { System.err.println("Error on bot thread"); e.printStackTrace(); } } } private @NotNull BlockingQueue<String> responses = new ArrayBlockingQueue<>(10); private @NotNull BlockingQueue<TwitchMessage> messages = new ArrayBlockingQueue<>(10); void receiveMessage(@NotNull String message) { Optional<TwitchMessage> twitchMessage = convertRawMessage(message); if (twitchMessage.isPresent()) { TwitchMessage tm = twitchMessage.get(); System.out.println(String.format("[%1s] %2s", tm.user, tm.message)); display.accept(String.format("[%1s] %2s", tm.user, tm.message)); messages.offer(tm); } else { display.accept(message); } } @Contract("null -> null; !null -> !null") private String formatMsg(String msg) { if (msg == null) return null; return String.format("PRIVMSG #%1s 
:%2s", channel, msg); } String provideResponse() { try { return responses.take(); } catch (InterruptedException e) { e.printStackTrace(); return formatMsg("An internal error has occurred."); } } } cad97.twitchbot.SensitiveConstants package cad97.twitchbot; final class SensitiveConstants { static final String NICKNAME = "REDACTED"; static final String CHANNEL = "REDACTED"; static final String OAUTH_TOKEN = "oauth:REDACTED"; } Answer: The annotations are nice, if a bit verbose. There are also some more (unpopular) annotations (like Lombok) that can remove even more boilerplate, but I guess that depends on personal taste and the contributors to a project. I don't see much point with package-private visibility to be honest, similarly the private constructor in Constants is not helping the reader much. The code doesn't look very testable, I'd suggest to move the inline anonymous classes into their own top-level ones and if possible to pass in objects in the constructor instead of constructing them there so you could instead pass in mock objects. The comment for the mod parameter in TwitchIRCSocket is oddly specific and likely to be out of sync with the code very quickly. It's probably enough to link to the corresponding fields in the constants class or wherever the values are coming from (a future configuration file for example). Splitting the two constructors like that looks super weird. I'd rather have a separate method to initialise the fields, or rather put it all in one constructor. Use a logging framework sooner rather than later. System.out and friends will get quite old very quickly. I'd suggest to move the fields together at the start or end of the class so it's tidier. If there are too many fields that's also an indicator that the class is growing too big. In convertRawMessage the variable should be named matcher because it's a ... Matcher object. 
In close I'd catch and discard (or perhaps accumulate) the exceptions from the individual close calls so that as many of them run as possible. Can't say anything about the UI part. The separation between the "raw" interface and a separate, more customisable bot part is nice, and so is the message-passing between the different actors. With a little less repetition, and possibly some further splitting up, I think it's on the right track.
{ "domain": "codereview.stackexchange", "id": 21462, "tags": "java, javafx, chat" }
What is the integral trick for a Coulomb potential?
Question: I recall reading in Feynman and Hibbs about an integration trick that can be used to solve certain types of integrals that involve the Coulomb potential. What was the trick? Answer: The trick is, multiply the integrand by $\exp(-\epsilon r)$, solve the integral, then let $\epsilon\rightarrow0$. For example, consider the following momentum transfer function where $\breve p=p_a-p_b$. $$ v(\breve p)=\frac{4\pi\hbar}{\breve p} \int_0^\infty \sin\left(\frac{\breve pr}{\hbar}\right)V(r)r\,dr $$ For the Coulomb potential $V(r)=-Ze^2/r$ we have $$ v(\breve p)=-\frac{4\pi\hbar Ze^2}{\breve p} \int_0^\infty \sin\left(\frac{\breve pr}{\hbar}\right)\,dr $$ The integral diverges so use the trick. $$ v(\breve p)=-\frac{4\pi\hbar Ze^2}{\breve p} \int_0^\infty \sin\left(\frac{\breve pr}{\hbar}\right)\exp(-\epsilon r)\,dr $$ Solve the integral. $$ v(\breve p)=-\frac{4\pi\hbar Ze^2}{\breve p} \;\frac{\breve p/\hbar}{(\breve p/\hbar)^2+\epsilon^2} $$ Let $\epsilon\rightarrow0$. $$ v(\breve p)=-\frac{4\pi\hbar^2Ze^2}{\breve p^2} $$
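The regularization step is easy to check numerically. The sketch below (plain Python, trapezoidal rule, with assumed values $k=\breve p/\hbar=2$ and $\epsilon=0.1$) confirms that $\int_0^\infty \sin(kr)e^{-\epsilon r}\,dr = k/(k^2+\epsilon^2)$:

```python
import math

def trapz_integral(k, eps, R=200.0, n=400_000):
    # Trapezoidal rule for ∫_0^R sin(k r) exp(-eps r) dr;
    # the exp(-eps r) damping makes the tail beyond R negligible.
    h = R / n
    total = 0.5 * (math.sin(0.0) + math.sin(k * R) * math.exp(-eps * R))
    for i in range(1, n):
        r = i * h
        total += math.sin(k * r) * math.exp(-eps * r)
    return total * h

k, eps = 2.0, 0.1                    # assumed values for the check
numeric = trapz_integral(k, eps)
closed_form = k / (k**2 + eps**2)
print(numeric, closed_form)          # agree to several decimal places
print(1.0 / k)                       # the eps -> 0 limit: k/k^2 = 1/k
```

As $\epsilon\rightarrow 0$ the closed form tends to $1/k$, which is exactly the factor used in the final step of the answer.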
{ "domain": "physics.stackexchange", "id": 97463, "tags": "quantum-mechanics" }
Helmholtz decomposition in the plane
Question: Prove or disprove the following proposition: For any smooth plane vector field $\mathbf{H}=\left(H_x,H_y\right)$, there exist scalar potentials $\phi$, $\psi$ such that $H_x=\frac{\partial \phi }{\partial x}+\frac{\partial \psi }{\partial y}$ $H_y=\frac{\partial \phi }{\partial y}-\frac{\partial \psi }{\partial x}$ Answer: Your proof is right (and I voted it up accordingly). But this is a result that's worth proving a few different ways, because the different ways lead to different insights. So I'll give some alternative proofs. Proof 2 (yours is proof 1): Taking linear combinations of the equations we're trying to solve, we get an equivalent pair of equations. $$ H_x+iH_y=\left({\partial\over\partial x}+i{\partial\over\partial y}\right)(\phi-i\psi) $$ $$ H_x-iH_y=\left({\partial\over\partial x}-i{\partial\over\partial y}\right)(\phi+i\psi). $$ If we define some shorthand, $$ \partial_\pm={\partial\over\partial x}\pm i{\partial\over\partial y}, $$ $$ f_{\pm}=\phi\pm i\psi, $$ $$ H_{\pm}=H_x\pm iH_y, $$ we can write these as $$ H_+=\partial_+f_-, $$ $$ H_-=\partial_-f_+. $$ Now we can think of these as two uncoupled equations for the two unknowns $f_\pm$. Once we've solved for them, we can get $\phi,\psi$ from them. So we have to prove that each of these two equations has a solution. To solve for $f_-$, we start by finding a solution $g$ to $$ \nabla^2g=H_+. $$ (This is Poisson's equation, so it has a solution.) Then set $f_-=\partial_-g$. Because of the identity $$ \nabla^2=\partial_+\partial_-, $$ we have $$\partial_+f_-=\partial_+\partial_-g=\nabla^2g=H_+, $$ which is what we wanted. A similar construction works for $f_+$. In fact, every step of the argument for $f_-$ is just the complex conjugate of the corresponding step for $f_+$ (as long as the $H$'s are real). That's the way it has to be for $\phi,\psi$ to end up real. I have a personal reason I like this argument.
I work a lot with maps of linear polarization, which are spin-2 fields rather than spin-1 (vector) fields. The equivalent of the Helmholtz theorem for spin-2 fields is called the $E$-$B$ decomposition (in the cosmology literature anyway). The above argument generalizes in a nice way to spin-2 (and presumably higher spins). Proof 3: Intuitively, it seems like we ought to be able to get the 2-D result from the 3-D Helmholtz theorem, and it turns out that we can. Here's one way to think about it. Extend the vector field ${\bf H}$ to be a function of $(x,y,z)$: $$ {\bf H}_{3D}(x,y,z)={\bf H}(x,y). $$ For simplicity, imagine that $z$ extends only over a finite interval, say 0 to $2\pi$, and has periodic boundary conditions (i.e., points with $z=0$ are identified with those with $z=2\pi$). It's not necessary to do this, but it makes things cleaner. Helmholtz's theorem says that there are functions $\phi,{\bf G}$ such that $$ {\bf H}_{3D}=\nabla\phi+\nabla\times{\bf G}. $$ If we knew that $\phi,{\bf G}$ were independent of $z$ and that ${\bf G}$ pointed in the $z$ direction, we'd be done. But we don't (yet) know that. Here's the trick. Both $\phi$ and ${\bf G}$ are nice, smooth functions, so they can be expanded in Fourier series in $z$: $$ \phi(x,y,z)=\sum_{n=-\infty}^\infty \phi_n(x,y)e^{inz}, $$ and similarly for ${\bf G}$. Substituting these into the previous equation, we get $$ {\bf H}_{3D}=\sum_n\left(\nabla(\phi_ne^{inz})+\nabla\times({\bf G}_ne^{inz})\right). $$ Each term on the right has $z$-dependence of the form $e^{inz}$ (even after writing out the derivatives). But the left side is independent of $z$. By the uniqueness of Fourier series, it follows that all the terms with $n\ne 0$ on the right vanish! So $$ {\bf H}_{3D}=\nabla\phi_0+\nabla\times{\bf G}_0. $$ We know that $\phi_0,{\bf G}_0$ depend only on $(x,y)$, not $z$. Setting $\phi(x,y)=\phi_0(x,y)$ and $\psi(x,y)=G_z(x,y)$ gives the desired result. 
This one could almost certainly be phrased more compactly in more formal mathematical language, probably involving symmetry groups. The basic idea is that the problem we're solving is invariant under translations in $z$, and the operations we're performing (in some sense) "commute" with these translations. That means it's possible to find a solution that respects that symmetry. "Proof" 4: The mathematicians won't like this one, but I think it's a nice way to think about it anyway. Take 2-D fourier transforms of everything in sight: $$ {\bf H}(x,y)=\int \tilde{\bf H}(k_x,k_y)e^{i{\bf k}\cdot{\bf r}}d^2k, $$ and similarly for $\phi,\psi$. When you do, the equations you're trying to solve decouple -- that is, each value of ${\bf k}$ can be solved independently: $$ \tilde{\bf H}({\bf k})=i{\bf k}\tilde{\phi}({\bf k})+i\hat{\bf z}\times{\bf k}\tilde\psi({\bf k}). $$ (There may be a sign error in the above, but it's "morally" true.) This equation can be solved algebraically for each ${\bf k}$. In fact, the solution has a nice physical meaning: decompose $\tilde{\bf H}$ into components parallel and perpendicular to ${\bf k}$. $\tilde\phi$ is the parallel component, and $\tilde\psi$ is the perpendicular component. I like this one because it too generalizes to the spin-2 case nicely, and provides better intuition than anything else I can think of about the "meaning" of the $E$-$B$ decomposition for spin-2 fields. The reason I say the mathematicians won't like it is because not everything has a Fourier transform, and the convergence of Fourier transforms isn't normal pointwise convergence. But for physics applications, it's generally a fine way to think about things.
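"Proof" 4 translates directly into a numerical check. The sketch below (assuming NumPy is available) works on a doubly periodic grid: it builds $\mathbf{H}$ from known potentials $\phi$, $\psi$, then recovers them mode by mode from the algebraic relations in Fourier space:

```python
import numpy as np

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')

# Known potentials (zero mean, so the k = 0 mode is unambiguous).
phi = np.sin(X) * np.cos(2 * Y)
psi = np.cos(3 * X) * np.sin(Y)

k = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)  # integer wavenumbers
kx, ky = k[:, None], k[None, :]

def ddx(f):
    # spectral x-derivative on the periodic grid
    return np.real(np.fft.ifft2(1j * kx * np.fft.fft2(f)))

def ddy(f):
    return np.real(np.fft.ifft2(1j * ky * np.fft.fft2(f)))

# H_x = d(phi)/dx + d(psi)/dy,  H_y = d(phi)/dy - d(psi)/dx
Hx = ddx(phi) + ddy(psi)
Hy = ddy(phi) - ddx(psi)

# Invert algebraically per mode: H~x = i kx phi~ + i ky psi~,
#                                H~y = i ky phi~ - i kx psi~.
k2 = kx**2 + ky**2
k2[0, 0] = 1.0  # avoid 0/0; the mean mode carries no gradient information
Hxh, Hyh = np.fft.fft2(Hx), np.fft.fft2(Hy)
phi_rec = np.real(np.fft.ifft2(-1j * (kx * Hxh + ky * Hyh) / k2))
psi_rec = np.real(np.fft.ifft2(-1j * (ky * Hxh - kx * Hyh) / k2))

print(np.max(np.abs(phi_rec - phi)), np.max(np.abs(psi_rec - psi)))
```

The recovered potentials match the originals to machine precision, up to the undetermined constant (the $k=0$ mode), which is zero here by construction.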
{ "domain": "physics.stackexchange", "id": 1084, "tags": "mathematical-physics, mathematics, vector-fields" }
Is it possible for one of the reactants to deplete before reaching equilibrium
Question: Consider the following reaction where $\ce{A}$ & $\ce{B}$ reacts to produce $\ce{C}$ & $\ce{D}$. $$\ce{ A(g) + B(g) <=> C(g) + D(g)}$$ Is it possible for $\ce{A}$ or/and $\ce{B}$ to run out before equilibrium at some given starting moles of $\ce{A}$,$\ce{B}$,$\ce{C}$ & $\ce{D}$. Answer: No, that is not possible. Consider the definition of equilibrium from the IUPAC gold book: chemical equilibrium Reversible processes [processes which may be made to proceed in the forward or reverse direction by the (infinitesimal) change of one variable], ultimately reach a point where the rates in both directions are identical, so that the system gives the appearance of having a static composition at which the Gibbs energy, $G$, is a minimum. At equilibrium the sum of the chemical potentials of the reactants equals that of the products, so that: \begin{align} \Delta G_\mathrm{r} &= \Delta G_\mathrm{r}^\circ + R T \ln K &&= 0\\ \Delta G_\mathrm{r}^\circ &= − R T \ln K\\ \end{align} The equilibrium constant, $K$, is given by the mass-law effect. The important thing is that the reaction proceeds in both directions. It means, that as soon as some of the product is formed, it will react back towards the reactants. The equilibrium is reached when the rates of both of the directions are identical. From the formula in the definition, you can also see, that the location of the equilibrium, i.e. the concentration of all species or $K$, is only determined by the difference in the Gibbs energy between products and reactants. In other words, a system that forms an equilibrium will always form an equilibrium, independent of the concentration of the starting materials.
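The key formula $\Delta G_\mathrm{r}^\circ = -RT\ln K$ means $K = \exp(-\Delta G_\mathrm{r}^\circ/RT)$ is finite and nonzero for any finite $\Delta G_\mathrm{r}^\circ$, so every species retains a nonzero concentration at equilibrium. A quick sketch (the $\Delta G^\circ$ values are made-up illustrations):

```python
import math

R = 8.314    # J/(mol K)
T = 298.15   # K

def K_from_dG(dG0_joules):
    # K = exp(-dG0 / RT): finite for any finite dG0, never 0 or infinity,
    # so no species can be driven to exactly zero concentration.
    return math.exp(-dG0_joules / (R * T))

print(K_from_dG(-20_000))  # strongly product-favoured, K in the thousands
print(K_from_dG(+20_000))  # reactant-favoured, yet K > 0: products still form
```

A reaction would need $\Delta G_\mathrm{r}^\circ \to \pm\infty$ for $K$ to become $\infty$ or $0$, which is the mathematical restatement of the answer's point.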
{ "domain": "chemistry.stackexchange", "id": 5548, "tags": "equilibrium" }
Mie scattering intensity and spherical particles on
Question: I am running an experiment where I shine infrared light on (almost) spherical particles on the micron scale (PM2.5 - PM10). I then look at the (90 deg) scattering properties to try and size the particles. I am looking for the theory of Mie scattering and its intensity, specifically how the scattered intensity depends on the surface area of the particles. Looking for suggestions of good literature on this, or just simple equations. Answer: If the spherical approximation is good enough, you should be able to convert the surface areas into radii. I mention that because in my work with light scattering, all the equations are usually written in terms of the radius of the scatterers. It also looks like all the Mie scattering tables are written in terms of the radius of the scatterers as well. The best I can do in terms of actual formulas is that the scattering is proportional to $1/a^2$, times an intensity factor that you have to look up in a table. $a$ is the radius of the scatterer. (If it seems weird that scattering could go down as radius increases, see below for an explanation.) Knowing that you're only looking at one angle eliminates the angular portion of the intensity factor, but it's a complicated function of the size of the scatterer due to resonance effects when $a/\lambda \approx 1$. I found two tables of Mie coefficients. The first only has values for $n=1.40$. The second has more indexes of refraction, but might have access restrictions (it initially told me I had access thanks to my university library). Section 10.4 of Jackson derives some equations for the scattering of electromagnetic radiation from spherical particles, beginning from Maxwell's equations. He eventually leaves off without discussing the full problem. That might be a useful starting point for the theory. Wikipedia just pointed me to an English translation of the original paper by Mie, which I didn't know existed. 
I haven't yet had a chance to read it, so I don't know how useful it is. Everyone seems to refer back to Kerker as the first textbook that contains a full treatment of the Mie problem, but it's difficult to find, and very expensive. I would consider it only if you can get it through your university's library. I have a copy of the Dover edition of van de Hulst, which focuses specifically on the light scattering problem. It appears that there is a Dover edition of Stratton, now, as well. Why does the scattering intensity go down as the particle's radius increases? The first caveat is that the scattering isn't just proportional to $1/a^2$; the intensity factor is also a function of particle radius. I'm only marginally familiar with the general Mie problem, so I don't fully know all the complications that introduces. The other issue, and one that I am comfortable with, can be explained with reference to the Rayleigh scattering problem. In the experiments I'm used to, we plot $1/I$ on the y-axis, and $\sin^2(\theta/2)$ on the x-axis ($I$ is the scattering intensity, and $\theta$ is the scattering angle). Under a set of approximations, that gives a straight line. The y-intercept is proportional to 1 over the molecular weight of the particle. The slope is proportional to the square of the radius of the particle. So for a given molecular weight, increasing the particle's radius will increase the slope of that line. So for a given angle, you have to increase $1/I$, which means that $I$ must go down. I think the underlying explanation for that is that the density of the particle decreases, because you have increased the volume while keeping the same mass.
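One quick preliminary worth doing for the particle range in the question: the size parameter $x = 2\pi a/\lambda$ tells you which scattering regime applies ($x\ll 1$: Rayleigh; $x\sim 1$–$100$: full Mie theory; $x\gg 100$: geometric optics). A sketch with an assumed near-IR wavelength of 1.5 µm (swap in the actual source wavelength):

```python
import math

wavelength = 1.5e-6           # m; assumed near-IR wavelength, not from the post
diameters = (2.5e-6, 10e-6)   # PM2.5 and PM10 particle diameters
xs = [2 * math.pi * (d / 2) / wavelength for d in diameters]
print(xs)  # roughly 5.2 and 20.9: squarely in the Mie regime
```

Both values land in the resonance region, so the Rayleigh approximation is not usable here and the full Mie intensity factors (from the tables mentioned above) are needed.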
{ "domain": "physics.stackexchange", "id": 7382, "tags": "scattering" }
Great Attractor's gravity vs Universe Expansion
Question: I would like to know if the trajectory of our galaxy has been calculated, because it is usually said that the cosmos is emptying due to the expansion of the universe, but at the same time there is a region of space that we are traveling towards along with many other galaxies in the Laniakea supercluster. Space is expanding in all directions, faster and faster, but given the proximity of the Great Attractor, will the Milky Way end up joining a sea of galaxies, or will it be left alone with Andromeda and its satellite galaxies? Answer: There is a recent paper (Shaya, E., Tully, R. B., Pomarède, D., & Peel, A. (2022). Galaxy flows within 8,000 km/s from Numerical Action methods. arXiv preprint arXiv:2201.12315) that addresses this and has some nice animations. In particular, this video shows an extrapolation of galactic cluster trajectories 10 billion years into the future. Note that most parts show locations in co-moving coordinates that expand with the universe (much more convenient for examination), but about a minute in there is a demonstration of how the distances actually scale when the expansion is plotted. In particular, it turns out that while the Local Group is moving towards the Virgo cluster and the core of Laniakea with the Great Attractor, the speed is not enough to catch up with them. The end result is that we will be isolated.
{ "domain": "astronomy.stackexchange", "id": 6215, "tags": "observational-astronomy, gravity, astrophysics, milky-way" }
Using SLAM map in a custom node
Question: I have followed the SLAM Gmapping tutorial. I have used rviz to set the goal for the robot. I want to write a program in which the robot moves autonomously. Therefore, I want to ask: how can I set the current pose and the navigation goal in code (a cpp node) without using rviz? And also, how can I use the learned (and saved) map in my code (cpp node)? Originally posted by Safeer on ROS Answers with karma: 66 on 2014-05-07 Post score: 0 Answer: The goal can be set by using the actionlib server provided by the move_base node. There are tutorials on how to do this. Originally posted by Hendrik Wiese with karma: 1145 on 2014-05-08 This answer was ACCEPTED on the original site Post score: 2
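For reference, a hedged sketch of the usual ROS1 setup the answer is pointing at. It assumes the standard navigation stack is running (map_server serving the saved map, amcl doing localization, move_base planning); the frame name and goal coordinates are placeholders, and the same calls exist in C++ via `actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction>`:

```python
#!/usr/bin/env python
# Sketch: send a navigation goal to move_base via actionlib (ROS1).
# Assumes map_server is serving your saved map and amcl is localizing;
# the frame_id and goal coordinates below are placeholders.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('autonomous_goal_sender')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0   # placeholder goal pose
goal.target_pose.pose.orientation.w = 1.0
client.send_goal(goal)
client.wait_for_result()
print(client.get_state())
```

On the other two questions: the saved map is typically not loaded inside your own node at all; you run map_server with the map's YAML file and let amcl localize against it. The current pose estimate can be set by publishing a geometry_msgs/PoseWithCovarianceStamped on the /initialpose topic, which is exactly what rviz's "2D Pose Estimate" tool does.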
{ "domain": "robotics.stackexchange", "id": 17869, "tags": "navigation, rviz, turtlebot, asus-xtion-pro-live, gmapping" }
SMARTS string for alkyl group excluding alcohols
Question: I'm quite new to the usage of SMARTS strings. I would like to find all alkyl groups (but no $\ce{CH3}$) in a compound, here e.g. cyclohexanol (C1CCC(CC1)O). My SMARTS string is: [CX4;H0,H1,H2] However, this matches every carbon. I would like to write my SMARTS string so that C:3 is not matched, because it is bound to something other than a carbon (here: oxygen). How is this possible with SMARTS? Answer: To find a carbon with four connections that is not -CH3 and is only connected to carbon: As in your SMILES there are only implicit hydrogens, you do not need to count Hs. Count the explicit bonds and exclude the carbon with one explicit bond to avoid -CH3. [CX4;D2,D3,D4] To find the carbons that are connected to something other than carbon: C[!C] Now you can use recursive SMARTS, and set the second pattern to NOT. [CX4;D2,D3,D4;!$(C[!C])] Now only carbons that match both patterns would be found. CX4 C with total connections of 4 D<n> number of explicit bonds $() recursive SMARTS ! NOT ; AND , OR In cyclohexanol it will find the five ring carbons not bound to the oxygen (image in the original post), and here is a dummy molecule for testing it. CCC(F)C=CCC1CCC(O)C(C(C)(C)C)C1
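To check the pattern programmatically, a short sketch with RDKit (assuming it is installed; the answer itself does not name a toolkit):

```python
# Sketch: test the answer's SMARTS against cyclohexanol with RDKit.
from rdkit import Chem

mol = Chem.MolFromSmiles('C1CCC(CC1)O')              # cyclohexanol
patt = Chem.MolFromSmarts('[CX4;D2,D3,D4;!$(C[!C])]')
matches = mol.GetSubstructMatches(patt)
print(matches)  # five single-atom matches: every ring carbon except C-OH
```

The carbon bearing the hydroxyl is excluded by the `!$(C[!C])` part, and any -CH3 (degree 1) would be excluded by `D2,D3,D4`.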
{ "domain": "chemistry.stackexchange", "id": 14962, "tags": "computational-chemistry, cheminformatics" }
Black body radiation
Question: I have a few questions related to the emission of electromagnetic radiation by black bodies. Consider the following image: On the above image I have drawn the rays of light that are emitted by black bodies, assuming that they have only 8 points of emission (they are marked with red dots). Which of these two images shows the real situation? Does a single point emit the radiation in all directions or only in one direction? If my intuition is not wrong and the image on the right is correct, consider the next image, where I assume that there is only one point of emission on the object: Is the intensity of radiation the same in every direction? Does the radiation in each direction have the same intensity-wavelength distribution (presented in the image below)? Answer: This is a difficult concept to talk about without using a few proper definitions. Unfortunately these radiometric definitions all sound very similar and have similar meanings but important differences. Intensity is the rate of energy transfer per unit area. Intensity does not have direction and thus we cannot even talk about radiation having 'different' intensities in different directions by definition. The radiant intensity, however, is the amount of energy transfer per unit solid angle. The radiance is the amount of energy transfer per unit area per unit solid angle. Lambert's cosine law states that the radiant intensity of an ideally diffuse emitter (e.g. a perfect black body) is proportional to $\cos\theta$, where $\theta$ is the angle from surface normal. However, the apparent surface area of a flat object is also reduced by a factor of $\cos\theta$ when viewed from an angle $\theta$ from normal. Thus the radiance does not vary with angle. When you use the term intensity in your question, you probably mean radiance, which is the most natural term to talk about. It is the same as talking about how bright an object appears.
To quote wikipedia: Radiance is useful because it indicates how much of the power emitted by an emitting or reflecting surface will be received by an optical system looking at the surface from some angle of view. In summary: Is the radiance the same in every direction? Yes. Does the radiance in each direction have the same spectral distribution? Yes.
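The cancellation of the two cosine factors can be made concrete in a few lines (the peak intensity and patch area are arbitrary illustrative numbers):

```python
import math

# Lambert's cosine law: radiant intensity falls as cos(theta), but the
# projected (apparent) area of the emitting patch falls as cos(theta) too,
# so the radiance (intensity per apparent area) is angle-independent.
I0, A = 3.7, 2.0  # arbitrary normal-direction radiant intensity and area
radiances = []
for theta in (0.0, 0.3, 0.7, 1.2):          # viewing angles in radians
    radiant_intensity = I0 * math.cos(theta)
    apparent_area = A * math.cos(theta)
    radiances.append(radiant_intensity / apparent_area)
print(radiances)  # every entry equals I0 / A
```

This is why a black-body surface looks equally bright from every viewing angle, even though it radiates less total power toward grazing directions.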
{ "domain": "physics.stackexchange", "id": 8211, "tags": "electromagnetic-radiation, thermal-radiation" }
How is it that a cistern can collect rainfall, but a well cannot, even though both of them are holes?
Question: I imagined that a regular downpour or flood of rain would fall into the well directly, so I did a search on Google and was surprised to find this answer. But the article didn't make sense other than that the water would seep through the ground and in ten years, it would be at the water table, where it might, perhaps, refill the well. Still, I'm not sure what sets a cistern apart at being able to collect rainwater, since the gravity should be able to pull rain down. If there is a better SE site for this question, feel free to move it. Answer: Without directing runoff rainwater into an open well it can only catch the rain that lands on its exposed surface area. This could only add a few inches at most in a single rainfall. A cistern directs runoff from a larger area into it, so it is fed by a much larger area than just the surface area of the tank.
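The arithmetic makes the contrast concrete. The numbers below are illustrative assumptions, not measurements:

```python
# Water collected = catchment area x rainfall depth.
rain_depth = 0.02        # m of rain (a 20 mm downpour), assumed
well_mouth_area = 1.0    # m^2: an open well only catches what lands on it
roof_area = 100.0        # m^2: a cistern is fed runoff from a whole roof

well_litres = well_mouth_area * rain_depth * 1000
cistern_litres = roof_area * rain_depth * 1000
print(well_litres, cistern_litres)  # about 20 L vs 2000 L
```

The same storm deposits two orders of magnitude more water into the cistern simply because its effective catchment area is the roof, not the opening of the tank.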
{ "domain": "physics.stackexchange", "id": 81607, "tags": "water, flow" }
Snippets Manager
Question: This basically manages a set of snippets, saving and loading it. snippets.cpp #include "Features/snippets.h" #include <iostream> Snippets::Snippets(QObject* parent) : QObject(parent) , m_Snippets(nullptr) { bool success; LoadSnippets(success); } void Snippets::LoadSnippets(bool& success) { success = false; QFile file(SNIPPETS_FILE); QMap<QString, QString> data = QMap<QString, QString>(); if (file.open(QIODevice::ReadOnly)) { QDataStream in(&file); in >> data; file.close(); } if (data.isEmpty()) { data.insert(tr("API Help"), tr( "# Full API\n" "# ---------------------------\n" "# get method's have no parameters and others have one\n" "#\n" "# get_input - get input textbox's text\n" "# set_input - set input textbox's text\n" "# get_output - get output textbox's text\n" "# set_output - get output textbox's text\n" "# get_code - get code textbox's text\n" "# set_code - set code textbox's text\n" "# write_output- append to output box\n" "# get_apppath - get exe path\n\n" "# API Help/Code Sample\n" "# ---------------------------\n" "\n" "# get text from input box\n" "# parameters - none\n" "txt = get_input()\n" "\n" "# change output box's text\n" "# parameters - string\n" "set_output(\"\")\n" "\n" "# append to output box\n" "# does not add a new line\n" "# parameters - string\n" "write_output(\"Hi You,\\n\")\n" "\n" "# get_apppath() -> get exe path\n" "print (\"PyRun.exe is at :\", get_apppath())\n\n")); } m_Snippets = new QMap<QString, QString>(data); success = true; } void Snippets::SaveSnippets(bool& success) { success = false; if (m_Snippets != nullptr) { QFile file(SNIPPETS_FILE); if (!file.open(QIODevice::WriteOnly)) { return; } QDataStream out(&file); out << *m_Snippets; file.close(); success = true; } } void Snippets::AddSnippet(const QString& name, const QString& code, bool& success) { success = false; if (m_Snippets != nullptr) { success = true; m_Snippets->insert(name, code); } } void Snippets::RemoveSnippet(const QString& name, bool& success) { 
success = false; if (m_Snippets != nullptr && m_Snippets->contains(name)) { success = (m_Snippets->remove(name) > 0); } } QString Snippets::GetSnippet(const QString& name, bool& success) { success = false; if (m_Snippets != nullptr && m_Snippets->contains(name)) { success = true; return m_Snippets->value(name); } return QString(); } QList<QString> Snippets::GetKeys(bool& success) { success = false; if (m_Snippets != nullptr) { success = true; return m_Snippets->keys(); } return QList<QString>(); } Snippets::~Snippets() { if (m_Snippets != nullptr) { bool success; SaveSnippets(success); // save on the destructor if (!success) { std::cerr << "Writing Snippets to Database on save failed" << std::endl; } delete m_Snippets; } } snippets.h #ifndef SNIPPETS_H #define SNIPPETS_H #include <QObject> #include <QMap> #include <QList> #include <QIODevice> #include <QFile> #include <QApplication> #define SNIPPETS_FILE QApplication::applicationDirPath() + "/snippets.dat" class Snippets : public QObject { Q_OBJECT public: explicit Snippets(QObject* parent = 0); QString GetSnippet(const QString& name, bool& success); void RemoveSnippet(const QString& name, bool& success); void AddSnippet(const QString& name, const QString& code, bool& success); void SaveSnippets(bool& success); void LoadSnippets(bool& success); ~Snippets(); QList<QString> GetKeys(bool& success); signals: public slots: private: QMap<QString, QString>* m_Snippets; }; #endif // SNIPPETS_H What to review: Best practices in C++, C++11, Qt5, and anything else Versions/settings: Qt5.3.2 mingw482_32 QMAKE_CXXFLAGS += -std=c++11 Answer: Not being a frequent Qt user, I don't have much to say. But there are a few things:
I'm not saying that macros don't have any use whatsoever, but C++ offers better alternatives for most cases. A simple function (possibly inline) would be the cleanest solution for this specific case: inline QString GetSnippetsFileName() { return QApplication::applicationDirPath() + "/snippets.dat"; } Perhaps you could use a smart pointer for m_Snippets to make your code more exception safe and free yourself from the burden of deallocating the object by hand. But then again, is it really necessary to dynamically allocate m_Snippets? Why not declare it by value? Avoid a dynamic memory allocation where an instance declared by value will do. The way you are returning the result of functions in SaveSnippets(), LoadSnippets() and others, by passing a bool& success to the function, is quite unusual. You should be returning that boolean as the function's return value instead. It would be a lot more conventional. Also, you could consider throwing an exception in some cases. I don't think all of the #includes in the header file are necessary. QIODevice and QFile are probably only required by the class implementation in the .cpp file. Don't import those dependencies to all users of the class if they are not needed for someone including snippets.h.
{ "domain": "codereview.stackexchange", "id": 10807, "tags": "c++, beginner, c++11, qt" }
Wall visible after light source removed
Question: Let's say you have a lamp pointed at a wall, in a dark room with no other light source. If you turned off that lamp, would the wall still be visible for any amount of time? Answer: Visible needs a definition. As the velocity of light is practically 300,000,000 meters per second and the distance to the wall maybe 1 meter, it will be visible on the order of (1/3)*10^-8 seconds, considering the light source will have some spread. Not visible for the human eye except as an afterimage on the retina, if the light is strong enough. The energy will turn into thermal energy of the wall. If the wall were painted with phosphorescent paint or a photoluminescent paint then it might be visible from minutes to hours.
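The answer's estimate is easy to reproduce (the 1 m distance is the assumed value from the answer):

```python
c = 3.0e8     # speed of light in m/s (rounded)
d_wall = 1.0  # lamp-to-wall distance in m (assumed)

one_way_s = d_wall / c         # last light still in flight toward the wall
round_trip_s = 2 * d_wall / c  # last reflected light back to your eye

print(one_way_s, round_trip_s)
```

Roughly 3.3 nanoseconds one way and 6.7 nanoseconds for the round trip, far below anything the eye could register.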
{ "domain": "physics.stackexchange", "id": 23474, "tags": "visible-light" }
Quickly applying gravity force between bodies
Question: I have a function for applying gravity forces between every possible pair of bodies on my game. It is the most used function, and can run more than 100k times per frame so every minor improvement on performance will make a HUGE difference here. I replaced some divisions by multiplications and the general FPS increased from 5fps to 20fps with ~1000 bodies. This is how performance is affected by this function. Precision can be decreased if it increases performance considerably. Just make sure you make it clear that a change will decrease precision in your review. applyGravityBetween(bodyA, bodyB, collisionCallback) { var distX = bodyB.x - bodyA.x, distY = bodyB.y - bodyA.y, distSqr = distX * distX + distY * distY, forceA, forceB, dist; if (distSqr > (bodyA.radius + bodyB.radius) * (bodyA.radius + bodyB.radius)) { // ALERT: Radical actions were taken here to make faster code. Division was avoided at max. dist = 1 / Math.sqrt(distSqr); // Dividing one by the distance allows us to multiply instead of dividing later when setting actual velocities, which is more performant. forceA = bodyB.mass * dist * dist; forceB = bodyA.mass * dist * dist; // Instead of dividing by `distSqr` we can multiply by `dist` twice. bodyA.vx += forceA * distX * dist; bodyA.vy += forceA * distY * dist; bodyB.vx -= forceB * distX * dist; bodyB.vy -= forceB * distY * dist; } else if (typeof collisionCallback === "function" && bodyA.collidable && bodyB.collidable) collisionCallback(bodyA, bodyB); } EDIT: Here's a benchmark for each separate piece from this function, so you can focus on what you're going to improve: var declarations took 3.199ms on average to run. Collision check took 3.342ms. Math.sqrt() took 3.122ms. This question is also related to this one, so if you're interested you can go there too and... Review. Answer: Forward note: To benchmark I simply commented out //self.addCollision(bodyA, bodyB); in step(). Referencing your github code. 
To optimize just this method without looking at anything else we can do the following. This gives a physics workload at ~22ms compared with the original ~27ms on my system. applyGravityBetween(bodyA, bodyB, collisionCallback) { var dx = bodyB.x - bodyA.x, dy = bodyB.y - bodyA.y, r = (bodyA.radius + bodyB.radius); if ( ( ( dx > r || -dx > r ) || ( dy > r || -dy > r ) ) || ( dx * dx + dy * dy > r * r ) ) { r = Math.sqrt(dx * dx + dy * dy); if( bodyB.mass == bodyA.mass ){ r = bodyA.mass / (r * r * r); bodyA.vx += r * dx; bodyB.vx -= r * dx; bodyB.vy -= r * dy; bodyA.vy += r * dy; }else{ r = 1 / (r * r * r); bodyA.vx += bodyB.mass * r * dx; bodyB.vx -= bodyA.mass * r * dx; bodyB.vy -= bodyA.mass * r * dy; bodyA.vy += bodyB.mass * r * dy; } } else if( typeof collisionCallback === "function") collisionCallback(bodyA, bodyB); } The above code improves by doing the following: a simple bounding box collision check before using a more intensive distance check; removing the setting of variables as much as possible (I found in practice this is worth 2 or 3 multiplications in terms of cost; sometimes it's better to repeat the calculations); and using a comparison of the masses to speed up similar calculations - this is slightly faster where everything starts off with the same mass and could be useful for high N and similar mass simulations, otherwise I would take it out. Areas for improvement: 1. This is not coded up, but it is common in n-body simulations to group and approximate close, similar & far away masses by grouping them together as one. This can really help as N gets larger because it grows really fast. 2. Reduce callbacks. You only asked to optimise the gravity function so this is just an extra. If you put the following directly into your step() method you can instantly double the speed of the simulation. I found a physics time of 8.8 ms on average - incorporating all improvements, that's almost a third of the original time.
var i1, i2, bodyA, bodyB, dx, dy, r; for (i1 = 0; i1 < this.dynamicBodies.length; ++i1) { bodyA = this.dynamicBodies[i1]; for (i2 = i1 + 1; i2 < this.dynamicBodies.length; ++i2) { bodyB = this.dynamicBodies[i2]; dx = bodyB.x - bodyA.x; dy = bodyB.y - bodyA.y; r = (bodyA.radius + bodyB.radius); if ( ( ( dx > r || -dx > r ) || ( dy > r || -dy > r ) ) || ( dx * dx + dy * dy > r * r ) ) { r = Math.sqrt(dx * dx + dy * dy); if( bodyB.mass == bodyA.mass ){ bodyA.vx += (r = bodyA.mass / (r * r * r)) * dx; bodyB.vx -= r * dx; bodyB.vy -= r * dy; bodyA.vy += r * dy; }else{ bodyA.vx += bodyB.mass * (r = 1 / (r * r * r)) * dx; bodyB.vx -= bodyA.mass * r * dx; bodyB.vy -= bodyA.mass * r * dy; bodyA.vy += bodyB.mass * r * dy; } } else { //this.addCollision(bodyA, bodyB); } } }
{ "domain": "codereview.stackexchange", "id": 20515, "tags": "javascript, performance, ecmascript-6, physics" }
Mean field 3d Ising model in Landau theory
Question: I am solving a problem for some course I am following. I am given the Hamiltonian: $$H = 3NJ \langle \sigma\rangle ^2 -6J\langle \sigma\rangle \sum_i\sigma_i,$$ where $i$ sums over all sites ($=N$). This is the Hamiltonian for the 3D mean field Ising model. The average spin per site is $\langle\sigma\rangle\equiv m$ from now on. $J$ is some interaction energy; most references use $J/2$ here, but we had to use $J$, which does not matter that much, of course. I found the canonical partition function as follows: $$Z= \sum_{configs} e^{-\beta H} = e^{-3\beta NJm^2} \sum_{configs} \prod_{i}e^{6\beta Jm \sigma_i} = e^{-3\beta NJm^2} \prod_i 2\cosh(6\beta Jm) = e^{-3\beta JNm^2}(2\cosh(6\beta Jm))^N,$$ where $configs$ denotes the sum over all configurations and $\sigma_i = \pm1$ is used. From this I found the consistency equation for $m$: $$m = \tanh(6\beta Jm),$$ using $h= 6Jm$ and $m = \frac{1}{N\beta} \frac{\partial \ln(Z)}{\partial h}$. From this I derived the critical temperature: $$T_c = \frac{6J}{k_B}.$$ From this I had to derive the Landau parameters $a$ and $b$. I had to do this by expanding the consistency equation around $m\approx 0$ and $T\approx T_c$ using the reduced temperature $t=(T-T_c)/T_c$. For the problem I had to use $t$ and $M = mN$ for the total magnetization. This gives: $$1 \approx \frac{T_c}{T} - \frac{M^2}{3 N^2}(\frac{T_c}{T})^3 + \frac{2M^4}{15 N^4}(\frac{T_c}{T})^5,$$ and using $1+t = \frac{T}{T_c}$ we find: $$1 \approx \frac{1}{1+t} - \frac{M^2}{3N^2}(1+t)^{-3}+ \frac{2M^4}{15N^4}(1+t)^{-5}.$$ We can Taylor expand this for $t \approx 0$ as follows: $$1 = 1 - t + t^2 - \frac{M^2}{3N^2}(1-3t+6t^2) + \frac{2M^4}{15N^4}(1-5t+15t^2).$$ I had to expand this expression to second order in $t$ and second order in $M^2$.
Now I have to find the coefficients $a,b$ in the following, where $\phi$ is the canonical potential, so $\ln(Z)$: $$\phi(T,M) = \phi_0(T) -aJtM^2 -bJM^4.$$ Does anyone have an idea how to connect my previous equation to this one? I am really stuck at this point and I do not see how to connect these two equations. Answer: So the free energy functional, with parameters $m$ and $T$, is given by the logarithm of the partition function: $$ F(m,T) = - \frac{1}{\beta} \log Z = 3 J N m^2 + \frac{N}{2\beta} \log \cosh(6\beta J m)^2 - \frac{N}{\beta} \log 2 \ . $$ It is convenient to work with $\cosh^2$ for the following reason: we have $$ \tanh(6\beta J m)^2 = \frac{\cosh(6\beta J m)^2-1}{\cosh(6\beta J m)^2} $$ and thus, using the consistency equation, $$ \cosh(6\beta J m)^2 = \frac{1}{1- \tanh(6\beta J m)^2} = \frac{1}{1-m^2} \ . $$ Hence we get $$ F(m,T) = 3 J N m^2 + \frac{N}{2\beta} \log \frac{1}{1-m^2} - \frac{N}{\beta} \log 2 $$ Expanding in $m$: $$ F(m,T) = (1 - 6\beta J) \frac{N m^2}{2} + \frac{N m^4}{4 \beta} + \mathcal{O}(m^6) \ .$$ Note that if $t = \frac{T-T_c}{T_c}$, then $$ (1 - 6\beta J) = \frac{T}{T_c} \frac{T-T_c}{T_c} = t + t^2 $$ and $$ \frac{1}{\beta} = T_c + t T_c $$ So $$ F(m,t) = \frac{t N m^2}{2} + \frac{T_c N m^4}{4} + \mathcal{O}(t^2,m^6,m^4 t) $$ Which is a sensible Landau free energy functional, but it is not formulated in terms of the total magnetization $M$, so that remains unclear to me.
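A possible last step, taking the expansion above at face value (a sketch under the answer's conventions with $k_B = 1$, so $T_c = 6J$; the course's normalization may differ): substituting $m = M/N$ gives $$ F(M,t) = \frac{t}{2N} M^2 + \frac{T_c}{4N^3} M^4 + \mathcal{O}(t^2, M^6, M^4 t) \ , $$ and since $\phi = \ln Z = -\beta F$ with $\beta \approx 1/T_c = 1/(6J)$ near the transition, $$ \phi(T,M) \approx \phi_0(T) - \frac{t}{12JN} M^2 - \frac{1}{4N^3} M^4 \ , $$ so matching term by term against $\phi_0(T) - aJtM^2 - bJM^4$ would give $a = \frac{1}{12J^2N}$ and $b = \frac{1}{4JN^3}$.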
{ "domain": "physics.stackexchange", "id": 52800, "tags": "homework-and-exercises, statistical-mechanics, phase-transition" }
How is the distance to a $\gamma \mathrm{-ray}$ burst (GRB) measured in just a few days?
Question: Recently the Fermi Gamma-ray Space Telescope recorded the most energetic Gamma Ray burst (GRB 130427A) yet observed with a peak $\gamma \mathrm{-ray}$ energy of $94\, \mathrm{GeV}$. Various sources have reported that the burst was determined to be $3.6 \times 10^9\, \mathrm{lightyears}$ away. How can a distance measurement like this be made so quickly? It has only been a few days since the GRB. No articles mention any supernova remnant being seen yet or anything else besides just the GRB. I don't see how the photons in a GRB could have absorption or emission spectra that would help and there isn't any mention of the GRB being localized to a particular galaxy. NASA says: The burst subsequently was detected in optical, infrared and radio wavelengths by ground-based observatories, based on the rapid accurate position from Swift. Astronomers quickly learned that the GRB was located about 3.6 billion light-years away, which for these events is relatively close. How are distance measurements like this made so quickly, and what sort of accuracy do they have?
As to how this was done so fast, GRB's (and other interesting transients) are followed by many observatories, which have measures in place to slew to "targets of opportunity" the moment they are reported to get data before they vanish. For GRB's in particular, they are often detected in gamma rays first, but gamma ray "telescopes" are basically blocks of scintillating material sitting in space, and so they have terrible angular resolution. Here, as is often the case, the Swift satellite was triggered to search the general area with its X-ray instrument, getting a somewhat more precise sky position. Then other facilities operating in other EM bands were able to look for the object. This happens largely automatically over the course of minutes to hours. 1 Specifically, the H and K lines were used. These are very common for getting redshifts of galaxies hosting distant transient phenomena. 2 $\mathrm{Mg}~\mathrm{I}$ is atomic; $\mathrm{Mg}~\mathrm{II}$ is singly-ionized. The latter has a nice, recognizable doublet at rest wavelengths of 2796.352 Å and 2803.530 Å, as described in this Astrobites article. By the way, you will often see the doublet referred to with the numbers 2796/2803 rather than 2796/2804, which is not truncation instead of proper rounding, but rather a leftover tradition from back when spectroscopy was reported in air wavelengths rather than vacuum ones.
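The arithmetic behind such a spectroscopic redshift is just $\lambda_{\mathrm{obs}} = \lambda_{\mathrm{rest}}(1+z)$; here is that check for the Mg II doublet at the reported $z = 0.34$:

```python
z = 0.34
mg2_rest = (2796.352, 2803.530)  # Mg II doublet rest wavelengths, Angstroms

# Redshift stretches every wavelength by the same factor (1 + z),
# moving the doublet to where a ground-based spectrograph like GMOS
# can pick it up.
mg2_observed = [w * (1 + z) for w in mg2_rest]
print(mg2_observed)  # roughly 3747 and 3757 Angstroms
```

In practice the fit runs the other way: the doublet is recognized in the observed spectrum by its characteristic separation, and z is read off from the stretch factor.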
{ "domain": "physics.stackexchange", "id": 8148, "tags": "astrophysics, observational-astronomy, supernova, gamma-rays" }
Where is the preferred position to put a simple control loop in a ROS2 Node (C++) for obstacle avoidance
Question: I am working on a turtlebot simulation with ROS2, and I would like the robot to stop when it hits a wall. Where is the proper place to implement simple logic (for example, if distance < 10 then stop)? Right now, I have code in the callback, but this isn't ideal: if the code took a while to run, it could cause sensor messages to be missed. Also, if I wanted to only check for obstacles every 2secs, putting a delay in the callback is also not a good idea. What is the preferred method/where is the preferred place to add this logic, and also how can I delay it so the logic only checks the laser scan every 2 seconds? #include "rclcpp/rclcpp.hpp" #include "sensor_msgs/msg/laser_scan.hpp" #include "geometry_msgs/msg/twist.hpp" class LaserScanReader : public rclcpp::Node { public: LaserScanReader() : Node("laser_scan_reader") { subscription_ = this->create_subscription<sensor_msgs::msg::LaserScan>( "scan", 10, std::bind(&LaserScanReader::topic_callback, this, std::placeholders::_1)); cmd_vel_publisher_ = this->create_publisher<geometry_msgs::msg::Twist>("/cmd_vel", 10); } void publish_cmd_vel(geometry_msgs::msg::Twist cmd_vel) { cmd_vel_publisher_->publish(cmd_vel); } private: void topic_callback(const sensor_msgs::msg::LaserScan::SharedPtr msg) const { int center_index = msg->ranges.size() / 2; RCLCPP_INFO(this->get_logger(), "Center reading: '%f'", msg->ranges[center_index]); if (msg->ranges[center_index] < 10) { // stop robot here/publish 0 to cmd_vel (is there a better place to do this) // how can I check every 2secs (instead of every time callback runs) } } rclcpp::Subscription<sensor_msgs::msg::LaserScan>::SharedPtr subscription_; rclcpp::Publisher<geometry_msgs::msg::Twist>::SharedPtr cmd_vel_publisher_; }; int main(int argc, char *argv[]) { rclcpp::init(argc, argv); rclcpp::spin(std::make_shared<LaserScanReader>()); rclcpp::shutdown(); return 0; } Answer: In general, computations specific to the node are implemented as member functions.
If it gets way too big, you should think about using different threads / callback groups or splitting your tasks up into different nodes or components. Otherwise, there is not really a preferred place to put a computation. It really depends on your software design and your requirements. Executing something every 2s is much simpler than dealing with functions that take too long and should execute faster. For your case, you could add a timer callback as a member function, where you do your computation: void timer_callback() { // do periodic task } And add a timer that calls the function every 2 seconds when the node is spinning: timer_ = this->create_wall_timer(2000ms, std::bind(&LaserScanReader::timer_callback, this)); The message (or just the data) that you need can be stored as a member variable to share it between the topic_callback and timer_callback. Don't forget to declare the variable in the class. Also, I would suggest removing the member function publish_cmd_vel and storing the publisher cmd_vel_publisher_ as a member variable, because you will have less code and it is clearer for other people who read it. Edit: A good example of how to use timer callbacks is the guide on how to write a simple publisher node (C++) in the ROS2 doc.
{ "domain": "robotics.stackexchange", "id": 38774, "tags": "ros2, callback, laserscan, obstacle-avoidance, logic-control" }
Removing numbers from the middle of an array
Question: I have an array like this (make sure you scroll to the right): [0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0] Where it will always be the same length. And there will always be some number of 0's surrounding some number of 1's. I am trying to find a good/efficient/smart way to turn the middle of the array into 0's when there is a padding of 4 1's on each side. The result would be something like: [0,0,0,0,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0] Where you now have a padding of 4 1's on either side. This is how I implemented it: var a = [0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0]; var hitOne = false; var oneCount = 0; var oneStopPoint = a.lastIndexOf(1) - 3; for(var n in a){ if(n == oneStopPoint){ break; } if(a[n] == 1 && !hitOne) { hitOne = true oneCount++; } else if(a[n] == 1 && hitOne) { oneCount++; } if(oneCount > 4){ a[n] = 0; } } console.log(a); http://jsfiddle.net/4gk9L9uv/1/ Answer: Firstly: Don't use for...in on arrays. Secondly, I suppose you can find the 1-boundaries, and add/subtract 4: var left = array.indexOf(1) + 4; var right = array.lastIndexOf(1) - 4; for(; left <= right ; left++) { array[left] = 0; } Not particularly clever, but it does the job. (Note: lastIndexOf is widely supported in modern runtimes, but older ones may not have it) But also take @itsbruce's advice from the comments, and consider another structure for these data.
{ "domain": "codereview.stackexchange", "id": 10944, "tags": "javascript, array" }
Would a human body float in the dense atmosphere of Venus?
Question: To survive high in the atmosphere of Venus, all you would need to wear is a suit that protects you from the sulfuric acid vapors in the air and a supply of breathable air. Assuming (for simplicity) that this gear has the same density as the human body (the air supply would probably be much heavier), could you float on top of denser gas layers below you? I believe the density of the gas would only have to be about as high as the density of water on the surface of the sea. And would that mean that when you pour out a bottle of water, the water would float in the air (given the temperature/pressure ratio is below the boiling point)? Answer: No. According to the NASA Venus fact sheet, the density of the atmosphere at the surface is ~65 kg/m^3. For comparison, water is 1000 kg/m^3. We only just float in water. So if you were there, in a suit that could somehow withstand the heat (464 C), the "air" would feel thick. Maybe a person could strap on some wings and fly ...
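A back-of-the-envelope Archimedes check with the fact-sheet density (the body density is an assumed round number):

```python
rho_atm = 65.0     # Venus surface atmosphere, kg/m^3 (NASA fact sheet)
rho_body = 1000.0  # human body, roughly water density, kg/m^3 (assumed)

# Archimedes: the fraction of your weight cancelled by the displaced
# gas is just the density ratio; you float only if it reaches 1.
support_fraction = rho_atm / rho_body
print(support_fraction)  # 0.065 -> only ~6.5% of your weight is supported
```

The gas would need to be about 15 times denser before a body of water-like density could float in it.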
{ "domain": "astronomy.stackexchange", "id": 872, "tags": "planetary-atmosphere, venus, space-travel" }
[SOLVED] Return a value from a callback function (subscribing to topics for moveit_python_interface)
Question: I'm trying to return a value from a callback function, just like in this post. The objective is to run the MoveIt-Python interface to program robot trajectories from topics defined in the terminal with rostopic pub /<topic> "data: <number>". So, what I've made until know is creating 3 subscribers, one for each coordinate value (x, y, z). I'm running them in a single node, which is created in the group_interface.py. I'm not sure if it's the best way, but I think it should work. The problem is that I need to return the values from the subscriber callbacks to send them to the group interface of MoveIt. After trying different approaches presented in the mentioned post, the solution has been using a class (OOP ROS node) and save the values as member variables to the class. I'm attaching the code for future possible help (note that probably it could be simplified): #!/usr/bin/env python import rospy from std_msgs.msg import Float64 import group_interface class ListenerX: def __init__(self, parent): self.parent = parent self.value = 0 self.subs = rospy.Subscriber("/trajectPos_mtx_1_4_", Float64, self.callback_x) def callback_x(self, msg): # rospy.loginfo(rospy.get_caller_id() + "I heard: x = %s", msg.data) self.value = msg.data self.parent.receive_data("x", msg.data) class ListenerY: def __init__(self, parent): self.parent = parent self.value = 0 self.subs = rospy.Subscriber("/trajectPos_mtx_2_4_", Float64, self.callback_y) def callback_y(self, msg): # rospy.loginfo(rospy.get_caller_id() + "I heard: y = %s", msg.data) self.value = msg.data self.parent.receive_data("y", msg.data) class ListenerZ: def __init__(self, parent): self.parent = parent self.value = 0 self.subs = rospy.Subscriber("/trajectPos_mtx_3_4_", Float64, self.callback_z) def callback_z(self, msg): # rospy.loginfo(rospy.get_caller_id() + "I heard: z = %s", msg.data) self.value = msg.data self.parent.receive_data("z", msg.data) class Listener: def __init__(self): self.interface = 
group_interface.GroupInterface() # this generates the node print("") print("MoveIt group: manipulator") print("Current joint states (radians): {}".format(self.interface.get_joint_state("manipulator"))) print("Current joint states (degrees): {}".format(self.interface.get_joint_state("manipulator", degrees=True))) print("Current cartesian pose: {}".format(self.interface.get_cartesian_pose("manipulator"))) print("") print("Planning group: manipulator") print(" |-- Reaching named pose...") self.interface.reach_named_pose("manipulator", "up") # robotic arm completely up print(" |-- Reaching cartesian pose...") self.pose = self.interface.get_cartesian_pose("manipulator") self.pose.orientation.w = 1.0 self.cont = 0 print("") print('[INFO] Waiting for target position...') self.listener_x = ListenerX(self) self.listener_y = ListenerY(self) self.listener_z = ListenerZ(self) def receive_data(self, id, val): if (id == "x"): self.pose.position.x = self.listener_x.value self.cont += 1 if (id == "y"): self.pose.position.y = self.listener_y.value self.cont += 1 if (id == "z"): self.pose.position.z = self.listener_z.value self.cont += 1 if (self.cont == 3): print("") print('[INFO] Target position already set!') print("") print('[INFO] Reaching target position...') self.interface.reach_cartesian_pose("manipulator", self.pose) print("") print('[INFO] Target position already reached!') print("") print("Current cartesian pose: {}".format(self.interface.get_cartesian_pose("manipulator"))) if __name__ == '__main__': # rospy.init_node('listener_xyz') listen = Listener() rospy.spin() Thanks to the comments. PD: I can't attach the code of group_interface.py because of low karma. Originally posted by jon.aztiria on ROS Answers with karma: 21 on 2022-05-05 Post score: 1 Answer: Not necessarily an answer to your question, but an answer to your problem. Why do you publish the 3 numbers separately? Would it not be easier to use a vector? 
You could use the vector3 message from geometry_msgs and publish the 3 points in one go. This would remove a lot of headache, I feel. If you do want to publish them individually, the answer to the question you posted suggested using a class and saving the values as member variables of the class. Have you tried this? Originally posted by Joe28965 with karma: 1124 on 2022-05-05 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Alex-SSoM on 2022-05-05: Here's a good reference on how to write an OOP ROS node. https://roboticsbackend.com/oop-with-ros-in-python/#The_Python_ROS_program_with_OOP Comment by jon.aztiria on 2022-05-11: Thanks Joe and Alex, your answers were spot on. I've updated the question with my solution based on your approach.
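Stripped of the ROS plumbing, the pattern in the accepted workaround (each callback stores its coordinate on the object, and the pose is dispatched once all three have arrived) can be sketched in plain Python; all names here are illustrative, not from the original node:

```python
class TargetAggregator:
    """Collects x, y and z from separate callbacks, fires once all arrive."""

    def __init__(self, on_complete):
        self._values = {}
        self._on_complete = on_complete

    def receive(self, axis, value):
        # In the ROS node, each topic callback would call this with its axis.
        self._values[axis] = value
        if len(self._values) == 3:
            self._on_complete(self._values["x"],
                              self._values["y"],
                              self._values["z"])
            self._values = {}  # ready for the next target

reached = []
agg = TargetAggregator(lambda x, y, z: reached.append((x, y, z)))
agg.receive("x", 0.1)  # nothing happens yet
agg.receive("y", 0.2)  # still waiting for z
agg.receive("z", 0.3)  # callback fires with the full target
print(reached)  # [(0.1, 0.2, 0.3)]
```

With a single Vector3 subscriber, as the answer suggests, none of this bookkeeping is needed: the whole target arrives in one message.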
{ "domain": "robotics.stackexchange", "id": 37642, "tags": "ros2" }
Conversion of 4-nitrotoluene to 2-bromobenzoic acid
Question: In the above conversion, why doesn't bromine add to the methyl group? Is the first reaction following a free radical or a polar (Friedel-Crafts) mechanism? If it's following a free radical mechanism then Br should add to CH3; if it's F-C substitution, then why is there no catalyst when NO2 is a deactivating group? What would change if I first did the second step (i.e. reduction to the amine) and then bromination? Would it be wrong to do bromination last? Is the intermediate (2-bromo-4-nitrotoluene) the only major product of the first step? Answer: Sorry, but I would like to answer your questions from last to first. (1) Yes, 2-bromo-4-nitrotoluene is the major product, as -CH3 is an ortho-para orienting group and the preference for -CH3 (activating) over -NO2 (deactivating) will take place. (2) -COOH is a deactivating, meta-directing group (due to the presence of >C=O), so Friedel-Crafts bromination would take place at the meta position. (3) Yes, as -NH2 is a better activator than -CH3, bromination would take place ortho with respect to -NH2. (4) I think it is a mistake of your book not to show any catalyst like AlCl3, but it should be Friedel-Crafts bromination, as -Br replaced one of the -H ortho (to -CH3) on the benzene ring. (5) As it is Friedel-Crafts bromination, the Br+ ion will be more easily satisfied (to complete its octet) by the pi bond of benzene than by the sigma C-H bond of -CH3 (since a sigma bond is stronger than a pi bond, the pi bond can be more easily broken to form the metastable sigma complex).
{ "domain": "chemistry.stackexchange", "id": 5434, "tags": "organic-chemistry, reaction-mechanism, synthesis" }
Customer Analysis - How to handle unbalanced data?
Question: I'm taking a data analysis course and decided to work on a customer analysis project. In the data, I have three countries: USA (539 unique users) BRA (385 unique users) TUR (129 unique users) I'm trying to analyze the country that brings in the most income, so I've decided to look at the mean revenue for each country. However, when I do that I get the following result: It's not possible for Turkey to have the highest mean, because it has the lowest number of users and the lowest sum. I think it's showing the highest mean because the denominator when calculating the average is small. How would you go about solving this issue? Should I randomly sample data from each country and then calculate the mean? I'd really appreciate any pointers you could give or direction on how this would be solved in a real scenario. Thank you in advance! :) Answer: You can try stratified sampling. You can try SMOTE analysis. You can try methods like 10-fold cross validation if you want to train an ML model. You can also use bagging, boosting, or random forest algorithms.
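Before resampling anything, it can help to report each country's mean together with its standard error, so the extra uncertainty from Turkey's small user count is visible rather than hidden. A minimal sketch with simulated revenue numbers (the group sizes match the question, the revenue values are made up):

```python
import random
import statistics

random.seed(0)
# Simulated per-user revenue, matching the group sizes in the question.
revenue = {
    "USA": [random.gauss(50, 20) for _ in range(539)],
    "BRA": [random.gauss(45, 20) for _ in range(385)],
    "TUR": [random.gauss(60, 20) for _ in range(129)],
}

for country, values in revenue.items():
    n = len(values)
    mean = statistics.mean(values)
    sem = statistics.stdev(values) / n ** 0.5  # standard error of the mean
    print(f"{country}: n={n} total={sum(values):.0f} "
          f"mean={mean:.1f} +/- {sem:.1f}")
```

A small group can legitimately have the highest mean per user while contributing the lowest total; the error bar tells you how much to trust that mean.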
{ "domain": "datascience.stackexchange", "id": 11667, "tags": "python, data-analysis, market-basket-analysis" }
Is the tension force (instead of normal force) the apparent weight here?
Question: According to this link, An object's weight, henceforth called "actual weight", is the downward force exerted upon it by the earth's gravity. By contrast, an object's apparent weight is the upward force (the normal force, or reaction force), typically transmitted through the ground, that opposes gravity and prevents a supported object from falling. In my picture, the elevator is accelerating upwards with an acceleration $a$. An object is attached to the ceiling of the elevator with a massless string. The actual weight of the object is $mg$. No normal force exists here, so can we say that the apparent weight of the object is the tension force $T$? Answer: . . . . . so can we say that the apparent weight of the object is the tension force $T$? - Yes Imagine that the string was a massless spring balance. The reading on the spring balance, which is the apparent weight of the object, would be the same as that of the tension on the string. Using $F=ma$ with "up" as the positive direction, $T-mg = ma \Rightarrow T = mg+ma$ and that is the apparent weight of the object. If $a=0$ then $T=mg$ which is the weight of the object. If $a=-g$, i.e. the system is in free fall, $T=0$ and so the object appears to be weightless, i.e. the reading on a spring balance would be zero.
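Plugging numbers into $T = m(g+a)$ makes the three cases concrete (the mass and accelerations are assumed values):

```python
g = 9.8   # m/s^2
m = 70.0  # kg, mass of the object (assumed)

# Apparent weight = string tension T = m * (g + a), from T - mg = ma.
for a in (0.0, 2.0, -g):  # at rest, accelerating up, free fall
    tension = m * (g + a)
    print(f"a = {a:+.1f} m/s^2 -> apparent weight T = {tension:.1f} N")
```

At rest the tension equals the actual weight, an upward acceleration makes the object feel heavier, and in free fall the tension (apparent weight) drops to zero.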
{ "domain": "physics.stackexchange", "id": 89422, "tags": "weight" }
Doppler Shift & Light Quanta
Question: I have a couple of (noob) questions regarding Doppler shift and light from a quantum physics perspective: a) Since different observers will see the light at different frequencies depending on their reference frame / velocity, thus resulting in Doppler shift, does that mean that any light emitted exists in an infinite variation / probability of frequencies, and only the "observed" / measured frequencies will materialize? b) If there are an infinite / very large number of observers, would the emitted light (say a very brief burst) run out of observable light? Because if a single photon is emitted, then even if there are 2 detectors, only one will fire. Likewise, if an emitted light burst contains only say 1 million photons, does that mean the 1,000,001st observer (or detector) will not see anything? Thank you for "shedding light on the matter". :) Answer: A) Yes and no. Doppler shift is about apparent changes in frequency. Emphasis on apparent changes. The relative frequency of the light depends on the observer. Frequency is not, however, an inherent quality of the light. You may be conflating frequency and wavelength, a common error. B) Perhaps. If you are firing them sequentially onto a different detector each time, and you cease firing photons (you've fired your quota of 1,000,000), then the 1,000,001st detector will probably not detect a photon. If you fire 1,000,000 photons all at once, or in some other grouping, then it depends on your setup. Depending on the setup you could get all sorts of bizarre interference and data. It's possible under some setups for it to appear that your 1,000,000 photons registered in more than 1,000,000 places. It's also possible that with 1,000,000 photons and 1,000,001 simultaneous observers there will be one that doesn't get a photon to observe.
{ "domain": "physics.stackexchange", "id": 37690, "tags": "quantum-mechanics, visible-light, photons, doppler-effect" }
Define Pressure at A point. Why is it a Scalar?
Question: I have a final exam tomorrow for fluid mechanics and I was just looking over the practice exam questions. They do not provide solutions. But pretty much I have to define pressure at a point and also say why pressure is a scalar instead of a vector. I am thinking pressure at a point is $P=\lim_{\delta A \to 0} \frac{\delta F}{\delta A}$. Please let me know if I am wrong. But I do not know at all why pressure is a scalar instead of a vector. I know it has something to do with $d \mathbf{F}=-Pd \mathbf{A}$ Answer: The pressure is defined as the flow rate of x-momentum in the x-direction, plus the flow rate of y-momentum in the y-direction, plus the flow rate of z-momentum in the z-direction, divided by three. Each component of momentum is conserved, and flows locally from point to point in a fluid. Each component is like a charge, and has its own current, which together make up the tensor of stresses. A fluid does not sustain shear, and this is true whether it is still or moving, by the principle of relativity. This means that if you put fluid between two plates and squeeze, the force per unit area with which you squeeze (the local flow of momentum in the direction perpendicular to the plates) is equal to the force per unit area pushing outward at the edge of the plates. The flow of momentum is the same in all directions. You can measure the pressure by putting a homogeneous solid at the point in the liquid, and noting how much it compresses. You can also do it by noting the very slight change in density of the fluid with pressure. The pressure of fluids is not an exact description of the fluid when there is viscosity, but it is close to perfect, and the viscosity and the pressure are independent ideas which can be treated separately. For the purpose of your class, you can think of the pressure as the amount a tiny box-spring will compress if you place a scale model in the fluid, moving along with the water.
This pressure is the same in all directions in the fluid, despite the answer you got earlier.
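The "flow of each momentum component in its own direction, divided by three" definition can be made concrete with a small numeric sketch (the pressure value is an illustrative assumption). It also shows why pressure is a scalar: the normal stress across a surface is the same for any direction of the surface normal:

```python
import math

# Stress tensor of a fluid at rest under pressure p (illustrative value),
# using the convention that compressive stress is negative: sigma = -p * I.
p = 101325.0  # Pa, roughly atmospheric
sigma = [[-p, 0, 0], [0, -p, 0], [0, 0, -p]]

# Pressure recovered as minus one third of the trace, i.e. the average
# of the three normal (diagonal) stress components.
p_from_trace = -(sigma[0][0] + sigma[1][1] + sigma[2][2]) / 3
print(p_from_trace)

# The normal stress across a surface with ANY unit normal n is the same:
# n . sigma . n = -p for every n, which is why pressure has no direction.
n = [math.sqrt(1 / 3)] * 3  # an arbitrary unit vector
normal_stress = sum(n[i] * sigma[i][j] * n[j]
                    for i in range(3) for j in range(3))
print(-normal_stress)  # same value as p_from_trace
```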
{ "domain": "physics.stackexchange", "id": 50484, "tags": "forces, fluid-dynamics, pressure, definition" }
A flyby of orbiting supermassive black holes
Question: Consider two supermassive black holes of equal mass orbiting about their common centre of mass. Is it the case that a free-fall trajectory along the axis of rotation would be outside of either event horizon at all black hole separation distances > 0 (based on the symmetry of the situation)? To rephrase this, would you be able to navigate a rocket along a path at right angles to the orbital plane and bisecting the line between the two black holes with no ill effects whatsoever even when the black holes were very close to each other? Supplementary question: What is the shape of each of the event horizons prior to coalescence? Note: I’m assuming that tidal forces would be small until the two singularities were extremely close. Note also that the point midway between the two BHs is the L1 Lagrangian point. Answer: You can't have a stable configuration of two orbiting black holes at all separations. Even in the case of a test particle orbiting a single non-spinning black hole, the innermost stable orbit for the test particle is located at $r=6M$, with closer orbits plunging into the hole (circular orbits closer than $6M$ are unstable solutions analogous to balancing a rigid pendulum vertically.) If you have two black holes of appreciable mass, they will radiate away energy, gradually falling in closer and closer to each other, and then at some time before a common horizon is formed (also meaning before the "center of mass" of the system is inside the horizon), the two holes will hit their last stable orbit, and then will very rapidly combine, giving off a large burst of radiation, and an eventual end state of a single black hole. As a final aside, I should note that a spinning black hole's singularity is a ring, not a point, and that the center of this ring IS locally flat. 
Since a binary black hole collision will definitely have net angular momentum (even if just from the orbital angular momenta of the holes), this means that the end state will almost certainly be a spinning hole, so it is not necessary that the "center point" ever not be locally flat.
{ "domain": "physics.stackexchange", "id": 47345, "tags": "general-relativity, black-holes" }
How to dispose of pure potassium
Question: I have about 2 g of 99.5% potassium. I want to get rid of it; right now it's in a sealed glass jar. What is the proper protocol? I was going to combust it in water and wash it down the drain, but the by-product, potassium hydroxide, is caustic and hazardous, so I don't think that's best. Could I just throw it away so it goes to a garbage facility, where the vial probably breaks and it reacts with rain water, and the potassium hydroxide is so small in amount and so buried that it never really has an impact? Answer: In a suggested lab, the RSC recommends using propan-2-ol: If there is any unused potassium remaining at the end of the experiment, remove it with the tweezers and place it in a beaker containing about 100 cm³ of propan-2-ol to dissolve away – it will fizz, giving off bubbles of hydrogen. When the fizzing has stopped, dispose of the resulting alkaline solution down the sink, flushing it away with plenty of water. The acidic solution remaining from the experiment may also be disposed of down the sink, flushing it away with plenty of water. The same is recommended for sodium.
{ "domain": "chemistry.stackexchange", "id": 3872, "tags": "safety" }
Identify This Plant (Flower)
Question: I just bought this plant from flower shop, after it flowered it was very beautiful red flower, can anyone tell me what is this flower? Answer: This is a Gerbera, which belongs to the daisy family (Asteraceae). Compare: There are various species. The domesticated cultivars are mostly a result of a cross between Gerbera jamesonii and another South African species Gerbera viridifolia.[5] The cross is known as Gerbera hybrida. Thousands of cultivars exist. (from Wikipedia)
{ "domain": "biology.stackexchange", "id": 9679, "tags": "species-identification, botany, flowers" }
Energy without temperature
Question: If you know the entropy $S$ of your system, is there a general way to calculate the internal energy $U$ of your system? So the entropy $S$ is the only thing I know about my system. I have no information about the temperature $T$, so I can't simply use the fact that $$T= \left.\frac{\partial U}{\partial S} \right|_V.$$ Will it be possible to tell something about my system, given that I only know its entropy? Answer: No, if $S$ is really the only thing you know about your system then there is no way to know its energy. There is no relationship between the energy and the entropy that doesn't involve some other quantity such as temperature. ...but surely you know something about your system, other than its entropy? I mean, you must know something about what it's made of, how big it is, and so on, otherwise you'd have no reason to be interested in it in the first place. Perhaps by taking some of that into account you might be able to say something useful, though of course this is highly dependent on your particular situation.
{ "domain": "physics.stackexchange", "id": 17269, "tags": "thermodynamics, energy, statistical-mechanics" }
Map unique raw words to a list of code words
Question: Problem Write a function that replaces the words in raw with the words in code_words such that the first occurrence of each word in raw is assigned the first unassigned word in code_words. If the code_words list is too short, raise an error. code_words may contain duplicates, in which case the function should ignore/skip them. Examples:

encoder(["a"], ["1", "2", "3", "4"]) → ["1"]
encoder(["a", "b"], ["1", "2", "3", "4"]) → ["1", "2"]
encoder(["a", "b", "a"], ["1", "1", "2", "3", "4"]) → ["1", "2", "1"]

Solution

def encoder(raw, code_words):
    cw = iter(code_words)
    code_by_raw = {}  # map of raw item to code item
    result = []
    seen = set()  # for ignoring duplicate code_words
    for r in raw:
        if r not in code_by_raw:
            for code in cw:  # cw is iter(code_words), "persistent pointer"
                if code not in seen:
                    seen.add(code)
                    break
            else:  # no break; ran out of code_words
                raise ValueError("not enough code_words")
            code_by_raw[r] = code
        result.append(code_by_raw[r])
    return result

Questions My main concern is the use of cw as a "persistent pointer". Specifically, might people be confused when they see for code in cw? What should be the typical best practices in this case? Might it be better if I used the following instead?

try:
    code = next(cw)
    while code in seen:
        code = next(cw)
except StopIteration:
    raise ValueError("not enough code_words")
else:
    seen.add(code)

Answer: "My main concern is the use of cw as a "persistent pointer". Specifically, might people be confused when they see for code in cw?" No. Instead, you can just remove the line cw = iter(code_words), as long as it's already an iterator. "Persistent pointer" isn't a thing in Python, because all Python knows are names. "What should be the typical best practices in this case?" That would be building a dictionary and using it for the actual translation. You're basically already doing this with your code_by_raw, if a bit more verbose than others might.
The only real difference would be that, in my opinion, it would be better to first establish the translation, and then create the result. Except for your premature result generation, I would say your current function isn't bad. It does what it needs to do, and it does it without stupid actions, but it's not very readable. It's said often: I think you need to factor out a bit of code. Specifically, the bit that handles the fact that your inputs don't have to yield unique values, and how you need to handle duplicates. I would suggest a generator to handle that. This simplifies the main function a ton. (A comment pointed me towards the unique_everseen recipe, which is a slightly broader function. We don't quite need all its functionality, but it might be worth the effort if you need some more flexibility.)

def unique(iterable):
    """
    Generator that "uniquefies" an iterator.

    Subsequent values equal to values already yielded will be ignored.
    """
    past = set()
    for entry in iterable:
        if entry in past:
            continue
        past.add(entry)
        yield entry


def encoder(raw_words, code_words):
    # Create the mapping dictionary:
    code_by_raw = dict(zip(unique(raw_words), unique(code_words)))

    # Check whether we had sufficient code_words; compare against the
    # number of *unique* raw words, since raw_words may contain repeats:
    if len(code_by_raw) < len(set(raw_words)):
        raise ValueError("not enough code_words")

    # Do the translation and return the result
    return [code_by_raw[raw] for raw in raw_words]

I can't completely tell your experience level with Python. For the result creation, I'm using a list comprehension here. "Might it be better if I used the following instead?" It would not be bad functionally to use a structure like that, but it's still ugly (but opinions may differ). It basically does the same as my unique() generator up there.
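As a quick self-check of the dictionary-based approach discussed in the answer, here is a condensed, self-contained sketch (with a duplicate-aware length check) run against the three examples from the question:

```python
def unique(iterable):
    """Yield each value the first time it appears, skipping later repeats."""
    seen = set()
    for entry in iterable:
        if entry not in seen:
            seen.add(entry)
            yield entry

def encoder(raw_words, code_words):
    # Pair the first occurrence of each raw word with the next unused code word.
    code_by_raw = dict(zip(unique(raw_words), unique(code_words)))
    # Compare against the number of unique raw words, since raw_words may repeat.
    if len(code_by_raw) < len(set(raw_words)):
        raise ValueError("not enough code_words")
    return [code_by_raw[raw] for raw in raw_words]

# The three examples from the question:
print(encoder(["a"], ["1", "2", "3", "4"]))                 # ['1']
print(encoder(["a", "b"], ["1", "2", "3", "4"]))            # ['1', '2']
print(encoder(["a", "b", "a"], ["1", "1", "2", "3", "4"]))  # ['1', '2', '1']
```

Running out of code words (e.g. two unique raw words but only one unique code word) raises the ValueError as specified.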
{ "domain": "codereview.stackexchange", "id": 36138, "tags": "python, iterator, iteration, generator" }