Collision of 2 black holes
Question: There are many visible supernovae; about 20 Type Ia supernovae are observed across the Universe each week. Are there collisions of black holes? What is such a collision called? E.g. two black holes come close to each other. There are event horizons for both black holes; will this make such a collision take extremely long in Earth's time? Answer: It's called a black hole merger, or coalescence. Here is a simulation video. Even the formation of the event horizons of the two initial black holes takes "super long" in Earth's time. It's similar with the merger. On the other hand, seen from a distance, we get very close to the completed merger within a short "Earth" time as soon as the merger starts. Both general relativity and quantum theory are incomplete regarding what happens very close to a presumed singularity or at the presumed event horizon; this will remain disputed until a satisfying theory of quantum gravity is found. Mergers of black holes are likely to occur, e.g. when two galaxies collide: the momentum of the central supermassive black holes (SMBHs) is slowed down by consumption of gas, dust and stars, until the SMBHs merge into the central SMBH of the merged galaxy. Here is a galaxy merger simulation, and here a simulation of the coalescence of two black holes within a collapsing star. More on black hole binaries on Wikipedia.
{ "domain": "astronomy.stackexchange", "id": 320, "tags": "black-hole" }
Does polychromatic light follow different laws of physics than monochromatic light?
Question: My understanding is that Young's double-slit experiment only works with monochromatic light. Does it mean that polychromatic light behaves differently than monochromatic light? Or are we simply limited in our measurement tools to detect the same effect with polychromatic light? Assuming there are different degrees of polychromatic and monochromatic light, how will the interference pattern be affected going from a single wavelength to many wavelengths? Answer: The spatial period of the fringes on the screen depends on the wavelength of the light and the distance between the slits. That means a distinct pattern for each distinct wavelength. The different wavelengths do not interact with each other. They simply add up. If you use a "white" light source, as Thomas Young did when he originally performed the experiment, you get a pattern that looks like this. The spacing of the "red" pattern is a bit wider than the spacing of the "blue" pattern because the wavelengths of "red" light are longer than those of "blue" light. That's why the outer edges of the fringes are reddish and the inner edges are blueish. The farther you get from the center fringe, the more the different patterns diverge. Of course, there's more than just a single "red" wavelength and a single "blue" wavelength making up that picture. There is a continuum of different wavelengths, which gives the pattern an overall fuzzy appearance.
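The wavelength dependence of the fringe spacing can be sketched numerically. In the small-angle approximation the bright-fringe spacing is Δy = λL/d; the slit separation and screen distance below are illustrative values, not taken from Young's original setup:

```python
# Fringe spacing in a double-slit pattern: delta_y = wavelength * L / d
# (small-angle approximation). d and L below are assumed example values.

def fringe_spacing(wavelength_m, screen_distance_m, slit_separation_m):
    """Spacing between adjacent bright fringes on the screen."""
    return wavelength_m * screen_distance_m / slit_separation_m

d = 0.25e-3   # slit separation: 0.25 mm (assumed)
L = 1.0       # distance to the screen: 1 m (assumed)

red = fringe_spacing(700e-9, L, d)   # ~700 nm "red" light
blue = fringe_spacing(450e-9, L, d)  # ~450 nm "blue" light

# The red fringes are spaced wider than the blue ones, so the
# two patterns drift apart away from the central fringe.
print(red, blue)
```

With these numbers the red spacing is 2.8 mm versus 1.8 mm for blue, which is why the superposed patterns diverge farther from the center.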
{ "domain": "physics.stackexchange", "id": 68432, "tags": "double-slit-experiment" }
What's wrong with rgbdslam?
Question: I have followed the wiki http://www.ros.org/wiki/rgbdslam and tried the sequence at the bottom of rgbdslam's wiki page. Everything seems OK, but when I type rosmake rgbdslam_freiburg I get this message: [ rosmake ] rosmake starting... [ rosmake ] Packages requested are: ['rgbdslam'] [ rosmake ] Logging to directory /home/lg/.ros/rosmake/rosmake_output-20121023-105508 [ rosmake ] Expanded args ['rgbdslam'] to: ['rgbdslam'] [rosmake-0] Starting >>> bullet [ make clean ] [rosmake-1] Starting >>> geometry_msgs [ make clean ] [rosmake-2] Starting >>> sensor_msgs [ make clean ] [rosmake-1] Finished <<< geometry_msgs No Makefile in package geometry_msgs [rosmake-1] Starting >>> roslang [ make clean ] [rosmake-3] Starting >>> roscpp [ make clean ] [rosmake-1] Finished <<< roslang No Makefile in package roslang [rosmake-1] Starting >>> rosconsole [ make clean ] [rosmake-3] Finished <<< roscpp No Makefile in package roscpp [rosmake-2] Finished <<< sensor_msgs No Makefile in package sensor_msgs [rosmake-0] Finished <<< bullet ROS_NOBUILD in package bullet [rosmake-2] Starting >>> angles [ make clean ] [rosmake-3] Starting >>> rospy [ make clean ] [rosmake-0] Starting >>> rostest [ make clean ] [rosmake-2] Finished <<< angles ROS_NOBUILD in package angles [rosmake-2] Starting >>> roswtf [ make clean ] [rosmake-0] Finished <<< rostest No Makefile in package rostest [rosmake-0] Starting >>> message_filters [ make clean ] [rosmake-3] Finished <<< rospy No Makefile in package rospy [rosmake-3] Starting >>> tf [ make clean ] [rosmake-2] Finished <<< roswtf No Makefile in package roswtf [rosmake-2] Starting >>> std_msgs [ make clean ] [rosmake-0] Finished <<< message_filters No Makefile in package message_filters [rosmake-0] Starting >>> pcl [ make clean ] [rosmake-0] Finished <<< pcl No Makefile in package pcl [rosmake-1] Finished <<< rosconsole No Makefile in package rosconsole [rosmake-1] Starting >>> rosbuild [ make clean ] [rosmake-0] Starting >>> rosbag [ make 
clean ] [rosmake-3] Finished <<< tf ROS_NOBUILD in package tf [rosmake-3] Starting >>> roslib [ make clean ] [rosmake-2] Finished <<< std_msgs No Makefile in package std_msgs [rosmake-0] Finished <<< rosbag No Makefile in package rosbag [rosmake-0] Starting >>> pluginlib [ make clean ] [rosmake-3] Finished <<< roslib No Makefile in package roslib [rosmake-1] Finished <<< rosbuild No Makefile in package rosbuild [rosmake-2] Starting >>> bond [ make clean ] [rosmake-1] Starting >>> smclib [ make clean ] [rosmake-3] Starting >>> bondcpp [ make clean ] [rosmake-0] Finished <<< pluginlib ROS_NOBUILD in package pluginlib [rosmake-0] Starting >>> nodelet [ make clean ] [rosmake-1] Finished <<< smclib ROS_NOBUILD in package smclib [rosmake-3] Finished <<< bondcpp ROS_NOBUILD in package bondcpp [rosmake-1] Starting >>> rosservice [ make clean ] [rosmake-2] Finished <<< bond ROS_NOBUILD in package bond [rosmake-2] Starting >>> dynamic_reconfigure [ make clean ] [rosmake-3] Starting >>> nodelet_topic_tools [ make clean ] [rosmake-0] Finished <<< nodelet ROS_NOBUILD in package nodelet [rosmake-0] Starting >>> common_rosdeps [ make clean ] [rosmake-1] Finished <<< rosservice No Makefile in package rosservice [rosmake-1] Starting >>> pcl_ros [ make clean ] [rosmake-0] Finished <<< common_rosdeps ROS_NOBUILD in package common_rosdeps [rosmake-3] Finished <<< nodelet_topic_tools ROS_NOBUILD in package nodelet_topic_tools [rosmake-0] Starting >>> opencv2 [ make clean ] [rosmake-2] Finished <<< dynamic_reconfigure ROS_NOBUILD in package dynamic_reconfigure [rosmake-3] Starting >>> cv_bridge [ make clean ] [rosmake-2] Starting >>> visualization_msgs [ make clean ] [rosmake-0] Finished <<< opencv2 ROS_NOBUILD in package opencv2 [rosmake-0] Starting >>> rgbdslam [ make clean ] [rosmake-3] Finished <<< cv_bridge ROS_NOBUILD in package cv_bridge [rosmake-1] Finished <<< pcl_ros ROS_NOBUILD in package pcl_ros [rosmake-2] Finished <<< visualization_msgs No Makefile in package 
visualization_msgs [rosmake-0] Finished <<< rgbdslam [PASS] [ 0.25 seconds ] [rosmake-0] Starting >>> bullet [ make ] [rosmake-0] Finished <<< bullet ROS_NOBUILD in package bullet [rosmake-0] Starting >>> geometry_msgs [ make ] [rosmake-0] Finished <<< geometry_msgs No Makefile in package geometry_msgs [rosmake-1] Starting >>> roslang [ make ] [rosmake-1] Finished <<< roslang No Makefile in package roslang [rosmake-1] Starting >>> roscpp [ make ] [rosmake-0] Starting >>> sensor_msgs [ make ] [rosmake-1] Finished <<< roscpp No Makefile in package roscpp [rosmake-1] Starting >>> rosconsole [ make ] [rosmake-0] Finished <<< sensor_msgs No Makefile in package sensor_msgs [rosmake-2] Starting >>> angles [ make ] [rosmake-0] Starting >>> rospy [ make ] [rosmake-1] Finished <<< rosconsole No Makefile in package rosconsole [rosmake-0] Finished <<< rospy No Makefile in package rospy [rosmake-3] Starting >>> rostest [ make ] [rosmake-2] Finished <<< angles ROS_NOBUILD in package angles [rosmake-2] Starting >>> message_filters [ make ] [rosmake-1] Starting >>> roswtf [ make ] [rosmake-0] Starting >>> std_msgs [ make ] [rosmake-3] Finished <<< rostest No Makefile in package rostest [rosmake-2] Finished <<< message_filters No Makefile in package message_filters [rosmake-1] Finished <<< roswtf No Makefile in package roswtf [rosmake-3] Starting >>> rosbag [ make ] [rosmake-0] Finished <<< std_msgs No Makefile in package std_msgs [rosmake-0] Starting >>> pcl [ make ] [rosmake-2] Starting >>> rosbuild [ make ] [rosmake-3] Finished <<< rosbag No Makefile in package rosbag [rosmake-0] Finished <<< pcl No Makefile in package pcl [rosmake-3] Starting >>> roslib [ make ] [rosmake-2] Finished <<< rosbuild No Makefile in package rosbuild [rosmake-1] Starting >>> tf [ make ] [rosmake-2] Starting >>> smclib [ make ] [rosmake-0] Starting >>> rosservice [ make ] [rosmake-3] Finished <<< roslib No Makefile in package roslib [rosmake-1] Finished <<< tf ROS_NOBUILD in package tf [rosmake-1] 
Starting >>> common_rosdeps [ make ] [rosmake-0] Finished <<< rosservice No Makefile in package rosservice [rosmake-2] Finished <<< smclib ROS_NOBUILD in package smclib [rosmake-3] Starting >>> pluginlib [ make ] [rosmake-3] Finished <<< pluginlib ROS_NOBUILD in package pluginlib [rosmake-1] Finished <<< common_rosdeps ROS_NOBUILD in package common_rosdeps [rosmake-0] Starting >>> bond [ make ] [rosmake-2] Starting >>> dynamic_reconfigure [ make ] [rosmake-3] Starting >>> opencv2 [ make ] [rosmake-0] Finished <<< bond ROS_NOBUILD in package bond [rosmake-3] Finished <<< opencv2 ROS_NOBUILD in package opencv2 [rosmake-1] Starting >>> visualization_msgs [ make ] [rosmake-0] Starting >>> bondcpp [ make ] [rosmake-2] Finished <<< dynamic_reconfigure ROS_NOBUILD in package dynamic_reconfigure [rosmake-3] Starting >>> cv_bridge [ make ] [rosmake-3] Finished <<< cv_bridge ROS_NOBUILD in package cv_bridge [rosmake-1] Finished <<< visualization_msgs No Makefile in package visualization_msgs [rosmake-0] Finished <<< bondcpp ROS_NOBUILD in package bondcpp [rosmake-0] Starting >>> nodelet [ make ] [rosmake-0] Finished <<< nodelet ROS_NOBUILD in package nodelet [rosmake-0] Starting >>> nodelet_topic_tools [ make ] [rosmake-0] Finished <<< nodelet_topic_tools ROS_NOBUILD in package nodelet_topic_tools [rosmake-0] Starting >>> pcl_ros [ make ] [rosmake-0] Finished <<< pcl_ros ROS_NOBUILD in package pcl_ros [rosmake-0] Starting >>> rgbdslam [ make ] [ rosmake ] Last 40 linesbdslam: 100.8 sec ] [ 1 Active 30/31 Complete ] {------------------------------------------------------------------------------- [ 41%] Generating src/moc_ros_service_ui.cxx [ 43%] Generating src/moc_qtros.cxx [ 46%] Generating src/moc_openni_listener.cxx [ 48%] Generating src/moc_qt_gui.cxx [ 51%] Generating src/moc_graph_manager.cxx [ 53%] Generating src/moc_glviewer.cxx Scanning dependencies of target rgbdslam make[3]: Leaving directory /home/lg/ros/rgbdslam_freiburg/rgbdslam/build' make[3]: Entering directory 
/home/lg/ros/rgbdslam_freiburg/rgbdslam/build' [ 56%] Building CXX object CMakeFiles/rgbdslam.dir/src/gicp-fallback.o [ 58%] Building CXX object CMakeFiles/rgbdslam.dir/src/main.o [ 61%] Building CXX object CMakeFiles/rgbdslam.dir/src/qtros.o [ 64%] Building CXX object CMakeFiles/rgbdslam.dir/src/openni_listener.o [ 66%] Building CXX object CMakeFiles/rgbdslam.dir/src/qt_gui.o [ 69%] Building CXX object CMakeFiles/rgbdslam.dir/src/node.o [ 71%] Building CXX object CMakeFiles/rgbdslam.dir/src/graph_manager.o [ 74%] Building CXX object CMakeFiles/rgbdslam.dir/src/glviewer.o [ 76%] Building CXX object CMakeFiles/rgbdslam.dir/src/parameter_server.o [ 79%] Building CXX object CMakeFiles/rgbdslam.dir/src/ros_service_ui.o [ 82%] Building CXX object CMakeFiles/rgbdslam.dir/src/misc.o [ 84%] Building CXX object CMakeFiles/rgbdslam.dir/src/sift_gpu_wrapper.o [ 87%] Building CXX object CMakeFiles/rgbdslam.dir/src/moc_qtros.o [ 89%] Building CXX object CMakeFiles/rgbdslam.dir/src/moc_openni_listener.o [ 92%] Building CXX object CMakeFiles/rgbdslam.dir/src/moc_qt_gui.o [ 94%] Building CXX object CMakeFiles/rgbdslam.dir/src/moc_graph_manager.o [ 97%] Building CXX object CMakeFiles/rgbdslam.dir/src/moc_glviewer.o [100%] Building CXX object CMakeFiles/rgbdslam.dir/src/moc_ros_service_ui.o Linking CXX executable ../bin/rgbdslam CMakeFiles/rgbdslam.dir/src/openni_listener.o:openni_listener.cpp:function OpenNIListener::retrieveTransformations(std_msgs::Header_std::allocator<void >, Node*): error: undefined reference to 'clock_gettime' CMakeFiles/rgbdslam.dir/src/openni_listener.o:openni_listener.cpp:function OpenNIListener::retrieveTransformations(std_msgs::Header_std::allocator<void >, Node*): error: undefined reference to 'clock_gettime' CMakeFiles/rgbdslam.dir/src/openni_listener.o:openni_listener.cpp:function OpenNIListener::processNode(Node*): error: undefined reference to 'clock_gettime' CMakeFiles/rgbdslam.dir/src/openni_listener.o:openni_listener.cpp:function 
OpenNIListener::processNode(Node*): error: undefined reference to 'clock_gettime' collect2: ld returned 1 make[3]: *** [../bin/rgbdslam] Error 1 make[3]: Leaving directory /home/lg/ros/rgbdslam_freiburg/rgbdslam/build' make[2]: *** [CMakeFiles/rgbdslam.dir/all] Error 2 make[2]: Leaving directory /home/lg/ros/rgbdslam_freiburg/rgbdslam/build' make[1]: *** [all] Error 2 make[1]: Leaving directory `/home/lg/ros/rgbdslam_freiburg/rgbdslam/build' [ rosmake ] Output from build of package rgbdslam written to: [ rosmake ] /home/lg/.ros/rosmake/rosmake_output-20121023-105508/rgbdslam/build_output.log [rosmake-0] Finished <<< rgbdslam [FAIL] [ 100.79 seconds ] [ rosmake ] Halting due to failure in package rgbdslam. [ rosmake ] Waiting for other threads to complete. [ rosmake ] Results: [ rosmake ] Cleaned 31 packages. [ rosmake ] Built 31 packages with 1 failures. [ rosmake ] Summary output to directory [ rosmake ] /home/lg/.ros/rosmake/rosmake_output-20121023-105508 How can I deal with this error? I use ROS Fuerte, Ubuntu 11.10 64-bit. Thanks a lot. Originally posted by longzhixi123 on ROS Answers with karma: 78 on 2012-10-22 Post score: 0 Original comments Comment by longzhixi123 on 2012-10-22: @Felix Endres @yygyt Comment by Lorenz on 2012-10-22: Do not add people's names to comments just to notify them. If you tag your post correctly, people will see it, and if they have time and know the answer, they will help. Comment by yigit on 2012-10-23: Sorry @longzhixi123, I don't have the answer to that. If I were you, as Felix suggested, I'd google the error. Maybe you would like to try to compile the code here http://goo.gl/MKTYg to see if it works fine. If so, it means librt is fine and you may want to check the CMake file to verify linking with lrt Answer: Add rt to the libraries to be linked. 
Change

SET(LIBS_LINK GL GLU ${G2O_LIBS} ${QT_LIBRARIES} ${QT_QTOPENGL_LIBRARY} ${GLUT_LIBRARY} ${OPENGL_LIBRARY} ${OpenCV_LIBS} -lboost_signals)

in CMakeLists.txt to

SET(LIBS_LINK rt GL GLU ${G2O_LIBS} ${QT_LIBRARIES} ${QT_QTOPENGL_LIBRARY} ${GLUT_LIBRARY} ${OPENGL_LIBRARY} ${OpenCV_LIBS} -lboost_signals)

Originally posted by Vikas with karma: 106 on 2012-11-16 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by longzhixi123 on 2012-11-18: Thanks a lot, I have now built the rgbdslam package OK http://www.ros.org/wiki/rgbdslam
{ "domain": "robotics.stackexchange", "id": 11468, "tags": "slam, navigation" }
Is it also the case that $\langle f(t)\rangle = 0$ for $f(t) = A \cos(\omega t)$? And how does one get that $\langle f(t)\rangle = 0$?
Question: This page discusses time averaging. It says that time averages are often important when considering oscillating waves of the form $f(t) = A \sin(\omega t)$, where $\omega$ is the angular frequency and $A$ is the amplitude. It is then said that the instantaneous value of this wave varies between $-A$ and $A$, but the time average of this wave over one period is $\langle f(t)\rangle = 0$. Is it also the case that $\langle f(t)\rangle = 0$ for $f(t) = A \cos(\omega t)$? And how does one get that $\langle f(t)\rangle = 0$? Answer: Time averages over a finite time span $T$ do depend on $T$. However, as already noticed in another answer, if $T$ coincides with the period the average is zero. Even more important, since $$ \left<f\right>= \frac{1}{T}\int_0^Tf(t)dt $$ provided the integral on the right-hand side of the previous formula is bounded, the average goes to zero when $T \rightarrow \infty$. For instance, in the case of $f(t)=\cos(\omega t)$, $$ \left<f\right>= \frac{1}{T}\int_0^T \cos(\omega t)dt = \frac{\sin(\omega T)}{\omega T} $$ which goes to zero as $T \rightarrow \infty$.
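Both claims in the answer can be sanity-checked numerically; the sketch below uses a simple midpoint-rule integration (an illustrative choice, not the only one) to show that the average over one full period vanishes and that the finite-$T$ average matches $\sin(\omega T)/(\omega T)$:

```python
import math

def time_average_cos(omega, T, steps=100_000):
    """Approximate (1/T) * integral_0^T cos(omega*t) dt by the midpoint rule."""
    dt = T / steps
    total = sum(math.cos(omega * (i + 0.5) * dt) for i in range(steps))
    return total * dt / T

omega = 2 * math.pi  # period = 1

# Over exactly one period the average vanishes:
avg_one_period = time_average_cos(omega, 1.0)

# Over an arbitrary window it follows sin(omega*T)/(omega*T),
# which decays to zero as T grows:
T = 10.3
avg_T = time_average_cos(omega, T)
analytic = math.sin(omega * T) / (omega * T)

print(avg_one_period, avg_T, analytic)
```

The decay like $1/T$ is the reason the infinite-time average is zero regardless of the phase (sine or cosine) of the oscillation.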
{ "domain": "physics.stackexchange", "id": 79535, "tags": "waves, frequency" }
$N$ copies of 1D bosonic harmonic oscillator partition function
Question: I am trying to understand the partition function of $N$ copies of the 1D bosonic harmonic oscillator, $$ Z_N{}^B = q^{\frac{N}{2}} \prod_{n=1}^N \frac{1}{1-q^n}\quad\text{ with }\quad q=e^{-\beta \hbar \omega}.$$ My attempt is as follows. For the bosonic case, the Hilbert space is spanned by states labelled by $N$ integers such that $0\leq k_1 \leq k_2 \leq \cdots \leq k_N$. The energy eigenstates satisfy \begin{align} H |k_1, \cdots, k_N\rangle = \left( \frac{N}{2} + \sum_{n=1}^N k_n \right) | k_1, \cdots, k_N\rangle \end{align} Then the partition function is \begin{align} Z_N{}^B &= \text{tr}(q^H) = q^{\frac{N}{2}} \sum_{k_1=0}^{\infty} \sum_{k_2=k_1}^{\infty}\cdots \sum_{k_N=k_{N-1}}^{\infty} q^{\sum_{n=1}^N k_n} \\ & = q^{\frac{N}{2}} \prod_{n=1}^N \frac{1}{1-q^n} \end{align} What I have trouble with is the step from the first line to the second. I think the first factor $(1+q+q^2+ \cdots)$ covers the case $k_2=k_3=\cdots=k_N=0$, so $\sum_{k_1=0}^{\infty} q^{k_1}=1+q+q^2+\cdots$; but then how do I interpret the second factor via $k_2, \cdots$? Is it $(1+q^2+ \cdots)$? (I found some wrong points in my formula and corrected them; then it makes sense.) Answer: This was meant more as a comment, but turned out to be too long. The key word here is "bosonic": what you wrote down as $Z_N^B$ in your attempt is the partition function for $N$ identical but distinguishable oscillators, while $Z_N^B$ from the paper is the partition function for $N$ indistinguishable oscillators. Which means the degeneracy factors for the energy levels are different. The fastest way to see this is the $N=2$ case. Your attempt gives $Z_2 = q\frac{1}{(1-q)^2}$, whereas the correct result is $Z_2^B = q \frac{1}{1-q^2}\frac{1}{1-q}$, with the different degeneracies compounded in the $\frac{1}{(1-q)^2}$ and $\frac{1}{1-q^2}\frac{1}{1-q}$ factors. 
But look at the actual degeneracies by re-expanding the series: $$ \frac{1}{(1-q)^2} = (1+q+q^2+q^3+\dots)(1+q+q^2+q^3+\dots) = \\ = 1 + 2q + 3q^2 + 4q^3 + \dots $$ while $$ \frac{1}{1-q^2}\frac{1}{1-q} = (1+q^2+q^4+q^6+\dots)(1+q+q^2+q^3+\dots) = \\ = 1 + q + 2q^2 + 2q^3 + 3q^4 + 3q^5 +\dots $$ The identical unit term corresponds to the unique ground state, but all excited states, even the first one, display different degeneracies. Explicitly: 1st excited state: 2 levels for the distinguishable case, $(n_1=1,n_2=0)$ and $(n_1=0,n_2=1)$, 1 level only for the indistinguishable case, $(0,1)$. 2nd excited state: 3 levels for the distinguishable case, $(0,2)$, $(1,1)$, and $(2,0)$, 2 levels only for the indistinguishable case, $(0,2)$ and $(1,1)$, and so on. As for how to retrieve the bosonic partition function, Sec.III pg.663 in Am.J.Phys.71(7), 661(2003) (U Mass link) will give you the general idea.
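The two expansions above can be checked mechanically by multiplying truncated power series. The sketch below builds the coefficients of $1/(1-q)^2$ and of $1/((1-q^2)(1-q))$ and reproduces the degeneracy sequences $1,2,3,4,\dots$ versus $1,1,2,2,3,3,\dots$:

```python
def geometric_series_coeffs(step, n_terms):
    """Coefficients of 1/(1 - q^step) up to q^(n_terms-1): 1 at multiples of step."""
    return [1 if k % step == 0 else 0 for k in range(n_terms)]

def multiply_series(a, b):
    """Product of two power series (coefficient lists), truncated to len(a) terms."""
    n = len(a)
    out = [0] * n
    for i in range(n):
        for j in range(n - i):
            out[i + j] += a[i] * b[j]
    return out

N = 6
# 1/(1-q)^2: two distinguishable oscillators
distinguishable = multiply_series(geometric_series_coeffs(1, N),
                                  geometric_series_coeffs(1, N))
# 1/((1-q^2)(1-q)): two indistinguishable (bosonic) oscillators
bosonic = multiply_series(geometric_series_coeffs(2, N),
                          geometric_series_coeffs(1, N))

print(distinguishable)  # [1, 2, 3, 4, 5, 6]
print(bosonic)          # [1, 1, 2, 2, 3, 3]
```

The coefficient of $q^k$ in the bosonic case counts partitions of $k$ into at most two parts, i.e. the levels $(k_1, k_2)$ with $k_1 \leq k_2$, exactly the degeneracies listed in the answer.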
{ "domain": "physics.stackexchange", "id": 35673, "tags": "quantum-mechanics, homework-and-exercises, statistical-mechanics, partition-function, bosons" }
Is it possible to use a lens to focus sunlight to a point hot enough to fuse hydrogen?
Question: Is it possible to use a lens to focus sunlight to a point hot enough to cause atomic fusion? The point of the diagram below is to create a focused point of light at a point in space that is far from the walls of the chamber and is hot enough to fuse a fuel material. The fused material is very hot at the point of fusion but cools as it mixes with the other unfused fuel. Hopefully enough mixing occurs so that the temperature at the walls of the chamber and piping is low enough to avoid melting them. The fuel material is actively circulated by a pump to a heat exchanger. Most of the material circulates in a loop, but obviously fused material needs to be removed at some point, and new fuel added. I used hydrogen in my diagram, but I will accept answers that use another fuel material if it's more suitable. Answer: No. The hottest temperature you could achieve by focusing sunlight is the temperature of the surface of the sun*. That is a few thousand K, whereas fusion requires temperatures in the millions of K. *Heat flows from a hotter object to a cooler object, so if the focal point ever became hotter than the surface of the sun, heat would flow from the focal point back to the sun.
{ "domain": "physics.stackexchange", "id": 89364, "tags": "fusion" }
How can the accuracy of the dictionary-based approach be measured and improved?
Question: I recently used TextBlob and the NLTK library to do sentiment analysis. I used both dictionary-based and machine-learning-based approaches. It is relatively easy to measure accuracy when we use a machine learning approach: just define a test set. The same goes for improving accuracy: just modify the training set. But what about dictionary-based approaches? How do you measure and improve their accuracy? Answer: Evaluation is always based on the task, not on the method. Since the dictionary-based method gives an output similar to the ML-based approach, you can evaluate it in the same way, using a test set with gold-standard labels (preferably the same test set as the other method, or at least similar in size). Maybe what confuses you is that the dictionary-based method doesn't require training, so there's no need to split the data into training and test sets. Note: the dictionary-based approach is a heuristic method.
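To make the evaluation concrete, here is a minimal sketch. The tiny lexicon and the predict rule are placeholders (not TextBlob's actual internals); the point is only that a dictionary-based classifier is scored against gold labels exactly like an ML one. The unhandled negation in the last example costs accuracy, and such errors are also the handle for improvement (add negation rules, grow the lexicon):

```python
# Hypothetical dictionary-based sentiment classifier (illustrative only).
LEXICON = {"good": 1, "great": 1, "bad": -1, "awful": -1}

def predict(text):
    """Score tokens against the lexicon; non-negative total counts as positive."""
    score = sum(LEXICON.get(tok, 0) for tok in text.lower().split())
    return "pos" if score >= 0 else "neg"

def accuracy(examples):
    """examples: list of (text, gold_label) pairs with gold-standard labels."""
    correct = sum(1 for text, gold in examples if predict(text) == gold)
    return correct / len(examples)

test_set = [
    ("a great movie", "pos"),
    ("an awful plot", "neg"),
    ("good but bad ending", "pos"),
    ("not good at all", "neg"),  # negation is missed by the naive lexicon
]
print(accuracy(test_set))  # 0.75
```

Inspecting the misclassified examples (here, the negated one) tells you which lexicon entries or rules to add, which is the dictionary-based analogue of modifying the training set.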
{ "domain": "datascience.stackexchange", "id": 9696, "tags": "python, sentiment-analysis" }
Custom Sum implementation
Question: A custom sum function is needed that receives an integer input (e.g. 32456) and returns 3+2+4+5+6. Can you think of any other (or better) solution besides this?

public class CustomSum {
    public int sum(int input) {
        int sum = 0;
        while (input >= 1) {
            sum += input % 10;
            input = input / 10;
        }
        return sum;
    }

    public static void main(String[] args) {
        int input = Integer.parseInt(args[0]); // assuming args[0] is an integer
        CustomSum c = new CustomSum();
        int sum = c.sum(input);
        System.out.println("sum: " + sum);
    }
}

In every iteration I extract the rightmost digit with input % 10, and drop it for the next iteration with input / 10. Now consider the last iteration, with a one-digit number: input / 10 would be greater than 0 and less than 1 if it were a float or double rather than an int. So I didn't change that specific condition, although with int it will always be 0. Answer: Your sum(...) function is fine, given the specification, but there is just an issue of input validation... what about negative values? Your current loop condition is input >= 1. This would be better written as input > 0. When you see the >= 1 it makes you wonder if there's a condition that's weird. In this case, there's not. Further, your code excludes all negative values. What about the sum of -123? That makes me think the code should loop on the condition input != 0, but that would sum -123 as -6, not 6. I was initially tempted to suggest that you have the code:

input = Math.abs(input);
int sum = 0;
while (input != 0) {
    sum += input % 10;
    input = input / 10;
}
return sum;

Unfortunately, that has a flaw with Integer.MIN_VALUE (the abs of Integer.MIN_VALUE is... Integer.MIN_VALUE, still a negative number).
The net result is that I considered doing an abs on each digit value inside the loop, but then realized that all the digits will be negative, so the sum will just be a large negative value, and we can take the absolute value of the sum safely, because the sum of the digits will never be large enough to be a problem... Thus, the solution I would recommend is:

public static int sumSumAbs(int input) {
    int sum = 0;
    while (input != 0) {
        sum += input % 10;
        input = input / 10;
    }
    return Math.abs(sum);
}

That code (adjusted to fit into my Eclipse IDE with a static method call and a different name) will work for any input value, of any sign, and ignore the sign in the result.
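For comparison, the same idea in another language needs one adjustment: Python's % follows floor division (so -123 % 10 == 7, unlike Java's -3), which means the "sum signed digits, abs at the end" trick does not carry over directly. Stripping the sign up front is safe there because Python ints are arbitrary precision and have no MIN_VALUE edge case. A sketch:

```python
def digit_sum(n):
    """Sum of decimal digits, ignoring sign.

    Python's % uses floor division (-123 % 10 == 7, unlike Java's -3),
    so instead of summing signed remainders we strip the sign up front.
    Python ints are arbitrary precision, so abs() is always safe here.
    """
    n = abs(n)
    total = 0
    while n != 0:
        total += n % 10
        n //= 10
    return total

print(digit_sum(32456))  # 20
print(digit_sum(-123))   # 6
print(digit_sum(0))      # 0
```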
{ "domain": "codereview.stackexchange", "id": 12601, "tags": "java" }
Tension in a string, at an angle
Question: This was a question on a mechanics exam. Part (i) wants me to assume that the tension in both parts of the string is the same. Even though I got the correct answer (18.9 N) by assuming so, I don't understand how this assumption makes sense. Consider the bit of string under the ring. The net force on it must be zero (light string), so the net vertical component is zero. This is not possible if I assume the tensions to be the same, or at least that's how I see it (see image). Basically, in my mind: if the tensions are equal, the net force on the bit in contact with the ring cannot be zero, since the angles are different. But since the string is light, this does not add up. (Or: the system is at rest, so the net force on that bit of rope must be zero, regardless of whether the string is light or not.) But the question assumes the opposite. I would like to know what part of my argument is flawed. Answer: First, answering your doubt about the tension being the same in both parts of the string. Young's modulus of a material is constant irrespective of shape, length, etc. (at least from the point of view of solving high-school questions); you can study it a bit on the internet if you don't know it already. $$Y={{F\over A}\over {\Delta L\over L}} $$ where $Y$ is the Young's modulus of elasticity of your string. From here, $$F={YA\over L}(\Delta L)$$ As you can see from this equation, the force or the tension for a realistic string depends on its extension and the constants beside it. Moreover, the force equation also depends inversely on the actual length of the string (see $L$ in the denominator), so unless you cut the string where the ring is and then attach both parts above and below the ring, the force equations are not going to change, and the string as a continuous entity will have the same tension everywhere, because $\Delta L$ here accounts for the whole string. 
The purpose of invoking elasticity in the picture is to explain the tension aspect in near-ideal conditions; there is nothing like a massless string, but the above explanation is a close analysis. Coming to the actual question (tension is now the same throughout the string): HORIZONTAL EQUILIBRIUM $$T(\cos 50^\circ+\cos 20^\circ)=X$$ VERTICAL EQUILIBRIUM $$T\sin 50^\circ=T\sin 20^\circ+(0.8)g$$ Upon solving these two, $T=18.66\ \text{N}$ and $X=29.29\ \text{N}$ approx.
{ "domain": "physics.stackexchange", "id": 57041, "tags": "homework-and-exercises, forces, free-body-diagram, string, statics" }
Generating all 8-character numbers that use only the digits 0 to n
Question: I have the following code:

Integer cr = 3;
String y = "\"\r\n\"";
for (Integer i = 0; i < cr; i++) {
    for (Integer j = 0; j < cr; j++) {
        for (Integer k = 0; k < cr; k++) {
            for (Integer l = 0; l < cr; l++) {
                for (Integer m = 0; m < cr; m++) {
                    for (Integer n = 0; n < cr; n++) {
                        for (Integer o = 0; o < cr; o++) {
                            for (Integer p = 0; p < cr; p++) {
                                consolus.append(i.toString() + j.toString() + k.toString() + l.toString() + m.toString() + n.toString() + o.toString() + p.toString() + y);
                            }
                        }
                    }
                }
            }
        }
    }
}

Is there some way I can write this more efficiently? Essentially, the output is every 8-character number built from the digits 0 to cr - 1. This method currently works, but it doesn't seem efficient, and writing to the TextView consolus only occurs after all the for statements complete. Answer: Just for fun I took your code and ran it as is, with timing around the loops: on average 48 milliseconds. I then took the code and corrected the usage of .append and gathered timing: on average 41 milliseconds. And then I changed Integer to int in the for loops and took out the Integer.toString calls, giving on average 21 milliseconds. I am not sure what your timing requirements are, but all of these are "fast". 
Original code with timing public class test { public static void main(String args[]) { new test(); } public test() { StringBuffer consolus = new StringBuffer(); Integer cr = 3; String y = "\"\r\n\""; long start = System.currentTimeMillis(); for (Integer i = 0; i < cr; i++) { for (Integer j = 0; j < cr; j++) { for (Integer k = 0; k < cr; k++) { for (Integer l = 0; l < cr; l++) { for (Integer m = 0; m < cr; m++) { for (Integer n = 0; n < cr; n++) { for (Integer o = 0; o < cr; o++) { for (Integer p = 0; p < cr; p++) { consolus.append(i.toString()+j.toString()+k.toString()+l.toString()+m.toString()+n.toString()+o.toString()+p.toString() + y); } } } } } } } } long stop = System.currentTimeMillis(); System.out.println(stop-start); } } Update code with timing public class test { public static void main(String args[]) { new test(); } public test() { StringBuffer consolus = new StringBuffer(); Integer cr = 3; String y = "\"\r\n\""; long start = System.currentTimeMillis(); for (Integer i = 0; i < cr; i++) { for (Integer j = 0; j < cr; j++) { for (Integer k = 0; k < cr; k++) { for (Integer l = 0; l < cr; l++) { for (Integer m = 0; m < cr; m++) { for (Integer n = 0; n < cr; n++) { for (Integer o = 0; o < cr; o++) { for (Integer p = 0; p < cr; p++) { consolus.append(i.toString()); consolus.append(j.toString()); consolus.append(k.toString()); consolus.append(l.toString()); consolus.append(m.toString()); consolus.append(n.toString()); consolus.append(o.toString()); consolus.append(p.toString()); consolus.append(y); } } } } } } } } long stop = System.currentTimeMillis(); System.out.println(stop-start); } } ReUpdated code with timing public class test { public static void main(String args[]) { new test(); } public test() { StringBuffer consolus = new StringBuffer(); Integer cr = 3; String y = "\"\r\n\""; long start = System.currentTimeMillis(); for (int i = 0; i < cr; i++) { for (int j = 0; j < cr; j++) { for (int k = 0; k < cr; k++) { for (int l = 0; l < cr; l++) { for (int m = 0; 
m < cr; m++) { for (int n = 0; n < cr; n++) { for (int o = 0; o < cr; o++) { for (int p = 0; p < cr; p++) { consolus.append(i); consolus.append(j); consolus.append(k); consolus.append(l); consolus.append(m); consolus.append(n); consolus.append(o); consolus.append(p); consolus.append(y); } } } } } } } } long stop = System.currentTimeMillis(); System.out.println(stop-start); } }
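For reference, the eight nested loops are just an enumeration of all length-8 strings over the digits 0..cr-1, which a cartesian-product iterator expresses directly. The sketch below (not a drop-in replacement for the Android TextView code) produces the same strings in the same order, with the innermost loop variable varying fastest:

```python
import itertools

def all_codes(cr, width=8):
    """Every width-character string over the digits 0..cr-1, in the same
    order as the original nested loops (leftmost digit varies slowest)."""
    digits = "".join(str(d) for d in range(cr))
    return ["".join(combo) for combo in itertools.product(digits, repeat=width)]

codes = all_codes(3)
print(len(codes))                 # 3**8 == 6561
print(codes[0], codes[1], codes[-1])  # 00000000 00000001 22222222
```

Besides being shorter, this also makes the output count explicit (cr**8), which is worth knowing before appending everything into one string buffer.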
{ "domain": "codereview.stackexchange", "id": 3029, "tags": "java, android" }
How can I speed up INDEL calling/correction on BAM files?
Question: The samtools mpileup command has quite a neat feature in that it is able to correct mapping errors associated with misalignment of INDELs. By default, the mpileup command will not work for reads that have more than 250X coverage of the reference genome. While this limit can be increased, very high coverage causes the mpileup program to grind to a halt, so it'd be nice to know if there's some easy way to make that faster. To add a bit more context, I've been doing this with mitochondrial genome reads that were extracted from both Illumina whole-genome sequencing (coverage ~1000X), and from targeted amplicon sequencing done on the IonTorrent (coverage up to ~4000X). I see that @rightskewed has mentioned the downsampling ability of samtools with samtools view -s <float> (see here), which seems like it might work as a solution for this if used prior to the mpileup operation. Answer: I wasn't aware of the samtools subsampling when I had this problem a couple of years ago, so I ended up writing my own digital normalisation method to deal with mapped reads. This method reduces the genome coverage, but preserves reads where coverage is low. Because I was working with IonTorrent reads (which have variable length), I came up with the idea of selecting the longest read that mapped to each location in the genome (assuming such a read existed). This meant that the highly variable coverage for different samples (sometimes as low as 200X, sometimes as high as 4000X) was flattened out to a much more consistent coverage of about 100-200X. Here's the core of the Perl code that I wrote: if(($F[2] ne $seqName) || ($F[3] != $pos) || (length($bestSeq) <= length($F[9]))){ if(length($bestSeq) == length($F[9])){ ## reservoir sampling with a reservoir size of 1 ## See https://en.wikipedia.org/wiki/Reservoir_sampling ## * with probability 1/i, keep the new item instead of the current item $seenCount++; if(!rand($seenCount)){ ## i.e. 
if rand($seenCount) == 0, then continue with replacement next; } } else { $seenCount = 1; } if(($F[2] ne $seqName) || ($F[3] != $pos)){ if($output eq "fastq"){ printSeq($bestID, $bestSeq, $bestQual); } elsif($output eq "sam"){ print($bestLine); } } $seqName = $F[2]; $pos = $F[3]; $bestLine = $line; $bestID = $F[0]; $bestFlags = $F[1]; $bestSeq = $F[9]; $bestQual = $F[10]; The full code (which works as a filter on uncompressed SAM files) can be found here.
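The reservoir trick above is easier to see outside the SAM-filter context. Here is a minimal Python sketch of size-1 reservoir sampling (the function name pick_uniform is made up for illustration; in the Perl above the reservoir is only used to break ties between reads of equal length, but the sampling core is the same):

```python
import random

def pick_uniform(stream, rng=random.Random(0)):
    """Pick one item uniformly at random from a stream in a single pass.

    Size-1 reservoir: the i-th item (1-based) replaces the current choice
    with probability 1/i, so every item is equally likely at the end.
    """
    choice = None
    seen = 0
    for item in stream:
        seen += 1
        # randrange(seen) == 0 happens with probability exactly 1/seen.
        if rng.randrange(seen) == 0:
            choice = item
    return choice
```

This needs O(1) memory and a single pass, which is what lets a filter like the one above pick one read per mapping position without buffering the whole file.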
{ "domain": "bioinformatics.stackexchange", "id": 57, "tags": "sam, samtools, variant-calling, indel" }
Is Trace cyclic with respect to tensor product?
Question: Two alternative expressions for the expectation value of energy are \begin{align} \langle H\rangle = \langle \psi|H|\psi\rangle \end{align} which holds only for pure state, and \begin{align} \langle H\rangle = \text{Tr}(\rho H) \end{align} which holds for mixed states. So that got me wondering, in the case where $\rho = |\psi\rangle\langle \psi|$, a pure state, \begin{align} \langle H\rangle = \text{Tr}(|\psi\rangle\langle \psi|H) = \text{Tr}(\langle \psi|H|\psi\rangle) = \langle \psi|H|\psi\rangle \end{align} I know that trace is cyclic w.r.t matrix product, i.e. $\text{Tr}(ABC) = \text{Tr}(BCA) = \text{Tr}(CAB)$, but does the above mean that it is also cyclic w.r.t tensor product? (Since $|\psi\rangle\langle \psi|$ is really just $|\psi\rangle \otimes \langle \psi|$). It intuitively feels wrong since you're basically ripping Hilbert space in half, but maybe even if it doesn't work in general there may be a restricted set of cases where it does work? What do people think? Answer: $Tr(A\otimes B \otimes C)=Tr(A) Tr(B) Tr(C).$ So sure, it's also equal to $Tr(C\otimes A \otimes B)$, but cyclic isn't the property being used here. Edit: Just caught myself: This only works when A, B, C are square so the trace is well-defined. For you, this is not the case, so I don't see how you can use the tensor product to explain this property. In this case you really have to use the fact that $|\psi \rangle \langle \psi |$ is also matrix multiplication: If $|\psi \rangle$ is $n$ x $1$, then by ordinary matrix multiplication, $|\psi \rangle \langle \psi |$ is ($n$ x $1$)($1$ x $n$) = $n$ x $n$ matrix. I prefer this way of thinking about it, since usually tensor products separate different systems, but here the product is between two objects ($|\psi \rangle $ and $\langle \psi |$ ) from the same system, so it's a bit unorthodox - though mathematically not incorrect - to think of it as a tensor product. 
Then we have $$Tr(| \psi \rangle \langle \psi | H ) = Tr(\langle \psi | H | \psi \rangle )= \langle \psi | H | \psi \rangle$$ For the first equality, I used ordinary cyclicity of the trace: For matrix multiplication, the trace is cyclic for any product for which the matrix multiplication is still defined. Including for "$1$x$1$ matrices" like $\langle \psi | H |\psi \rangle$, whose trace is just themselves.
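Both facts are easy to check numerically. A small pure-Python sketch with a 2-dimensional state and hand-rolled trace, matrix product and Kronecker product (all names here are illustrative):

```python
def mat_mul(A, B):
    """Plain matrix product of nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def kron(A, B):
    """Kronecker (tensor) product of two square matrices."""
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)]
            for i in range(n)]

# A normalized state |psi> in C^2 and a Hermitian H.
psi = [3 / 5, 4j / 5]                       # <psi|psi> = 1
H = [[1, 2 - 1j], [2 + 1j, -1]]             # H equals its conjugate transpose

# |psi><psi| is the (2 x 1)(1 x 2) matrix product: an ordinary 2 x 2 matrix.
rho = [[a * b.conjugate() for b in psi] for a in psi]

# Tr(rho H) vs <psi|H|psi>: equal by ordinary cyclicity of the matrix trace.
lhs = trace(mat_mul(rho, H))
rhs = sum(psi[i].conjugate() * sum(H[i][j] * psi[j] for j in range(2))
          for i in range(2))                # = 0.68 for these numbers

# For square A and B the trace factorizes over the tensor product.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
```

Note that kron only applies to the square case, matching the caveat in the answer: the factorization Tr(A ⊗ B) = Tr(A) Tr(B) needs square factors, while the |psi><psi| identity is just matrix multiplication.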
{ "domain": "physics.stackexchange", "id": 63226, "tags": "quantum-mechanics, hilbert-space, trace" }
Filler Algorithm
Question: I am curious to know whether there is an algorithm that, given an array of decimals and integers and a target integer, returns a descending-order sequence of numbers from the input array whose sum is as close as possible to the target, possibly equal to it, but never greater. Does something like this exist, and does it have a name? Thanks in advance! Answer: There are two famous problems that are similar to what you're looking for. Yours comes closest to the subset sum problem https://en.wikipedia.org/wiki/Subset_sum_problem. Similarly we have the knapsack problem, which revolves around the same idea with the generalization that items also carry values which should be taken into account. https://en.wikipedia.org/wiki/Knapsack_problem Since any integer can be represented as a decimal number, your problem can be transformed into either of these problems. In the case of the knapsack problem you would simply set the value of each item equal to its weight. Both of these problems are NP-complete, which means that no polynomial-time algorithm for the exact optimum is known (at the moment), so in the worst case you essentially resort to brute force, although there are pseudopolynomial dynamic programming approaches. The fact that the output needs to be ordered is unimportant; there are some very efficient sorting algorithms out there.
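The pseudopolynomial dynamic programming approach mentioned in the answer can be sketched as follows for non-negative integers (decimals would have to be scaled to integers first; best_fill is a made-up name):

```python
def best_fill(items, capacity):
    """Return a descending list of items whose sum is as large as possible
    without exceeding `capacity` (0/1 subset-sum style).

    Pseudopolynomial DP: O(len(items) * capacity) time and O(capacity)
    space. Assumes non-negative integer items.
    """
    # parent[s] remembers which item first reached sum s, for backtracking.
    reachable = [True] + [False] * capacity
    parent = [None] * (capacity + 1)
    for idx, item in enumerate(items):
        # Descending loop gives 0/1 semantics: each item is used at most once.
        for s in range(capacity, item - 1, -1):
            if not reachable[s] and reachable[s - item]:
                reachable[s] = True
                parent[s] = (idx, item)
    best = max(s for s in range(capacity + 1) if reachable[s])
    chosen = []
    s = best
    while s > 0:
        idx, item = parent[s]
        chosen.append(item)
        s -= item
    return sorted(chosen, reverse=True)
```

For example, best_fill([11, 7, 5, 3, 2], 17) finds a subset summing exactly to 17. For real inputs you would multiply everything by a power of ten first, at the cost of a proportionally larger DP table.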
{ "domain": "cs.stackexchange", "id": 14550, "tags": "algorithms, data-structures, arrays" }
For QuickFind, which subproblem should we consider?
Question: Suppose we have an array in which we want to find the $i$th smallest element using the QuickFind algorithm (similar to QuickSort): $$ QuickFind\left( A,n,i \right) =\left\{ \begin{array}{l} QuickFind\left( A,k-1,i \right) ,\ i<k\\ QuickFind\left( A,n-k,i-k \right) ,\ i>k\\ x,\ i=k\\ \end{array} \right. $$ Now, we don't know in advance which part is larger: the left part, where elements are $<x$, or the right part, where elements are $>x$. So what we do in this case is take the larger of the two subproblems: $$T(n) = max\{T(k-1), T(n-k)\} \tag{1}\label{1}$$ Problem: why do we take the max of the two subproblems? If $i<k$, then we simply take the left part; otherwise, for $i>k$, we take the part to the right of $x$ at the $k$th position. So why take the max of the two recursions, and what does it even mean? Problem 2: then, to calculate the expected position of $k$, $$E[T(n)] = \sum_{j=1}^{n}(Pr(k=j))E\left[ max\{T(j-1), T(n-j)\}\right] \tag{2}\label{2}$$ Also, adding to problem 2, can you please show how we get \ref{2}? I know that $E[X] = \Sigma_i x_i \times p(x_i)$, so which quantities in \ref{2} correspond to the definition of expectation $E[X]$? From what I see, \ref{2} should be written as if we treated $T(n)$ as a random variable that takes several values based on \ref{1}, so we should have it without $E[\cdot]$ around $ max\{T(j-1), T(n-j)\}$: $$E[T(n)] = \sum_{j=1}^{n}Pr(k \in T(n) = j) \times max\{T(j-1), T(n-j)\} \tag{3}\label{3}$$ What do you think about how I rewrote \ref{2}? Also, based on the definition of expectation, should $Pr(X=x_i)$ correspond to $Pr(k \in T(n) = j)$? That is how I interpreted $Pr(k=j)$. Edit: I am just looking to understand the rule for $T(n)$ above and nothing more. Since QuickFind is given recursively, what is the point of taking the max in $T(n)$ as defined in \ref{1}?
The justification I was given is that we take the max in the recursion for $T(n)$ because we don't know whether our element will be on the left or on the right, so we take the $max$ of the two subproblems. But again, how do we not know whether the $i$th smallest element is on the left or the right, given that we can compare $i$ with the position $k$ of $x$ to decide whether to go left or right? So why the max in $T(n)$? Answer: I assume $T(n)$ represents the run-time for an input of size $n$. Problem 1 You are right that if $i<k$ then we take the left part, and if $k<i$ then the right one. However, creating a recurrence relation with this way of choosing the subproblem size can be problematic - we don't know beforehand which part we will need to choose, so we can't solve the equation. To solve this problem, we consider only giving a bound on $T(n)$ rather than getting the exact solution. So, whether we take the left subproblem or the right one, in either case we perform at most as badly as the larger of the two. Hence, we can write a simpler recurrence relation in the form of $T(n)\le \max\{T(k-1),T(n-k)\}$. This relation might be simpler to solve now, and gives a bound on the run-time of the algorithm. Problem 2 As you have said, $T(n)$ can be thought of as a random variable. Therefore, $\max\{T(j-1),T(n-j)\}$ will also be a random variable, and that should be a big red flag in equation $3$ - since an expected value must always be a real number (or infinite). The expected value is simply never a random variable. You can derive equation $2$ without all the expected values - from the law of total probability. Then, apply the expected value on both sides and use its linearity properties. Now, as we didn't directly use the definition of expectation to get equation $2$, the term $\Pr[k=j]$ should make more sense to you now.
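For reference, the recurrence in the question is just quickselect (with the right branch recursing on rank $i - k$). A Python sketch with a random pivot, which is what makes the expected-time analysis apply:

```python
import random

def quickfind(a, i, rng=random.Random(0)):
    """Return the i-th smallest element of a (1-indexed), quickselect style."""
    a = list(a)
    while True:
        x = a[rng.randrange(len(a))]            # random pivot
        left = [v for v in a if v < x]          # elements < x
        mid = [v for v in a if v == x]          # copies of the pivot
        right = [v for v in a if v > x]         # elements > x
        k_lo = len(left) + 1                    # 1-indexed rank range of x
        k_hi = len(left) + len(mid)
        if i < k_lo:
            a = left                            # recurse on the left part
        elif i > k_hi:
            i -= k_hi                           # shift the rank, go right
            a = right
        else:
            return x                            # i == k: the pivot is it
```

At each step the algorithm descends into exactly one of the two parts; the max in the recurrence is only an analysis device for bounding whichever part it happens to be.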
{ "domain": "cs.stackexchange", "id": 19050, "tags": "algorithms, quicksort" }
Why is point_cloud2.py missing from sensor_msgs in ROS2?
Question: After subscribing to a PointCloud2 message in ROS2, I want to extract the PointCloud2 points from the message. In ROS1 I could do that with: sensor_msgs.point_cloud2.read_points(). However, point_cloud2 seems to be missing from sensor_msgs in ROS2. Here is the basic code I am trying to write: import rclpy from rclpy.node import Node from sensor_msgs.msg import PointCloud2 from sensor_msgs import point_cloud2 class my_node(Node): def __init__(self): super().__init__('my_node') self.subscription = self.create_subscription(PointCloud2,'/points',self.lidar_callback,1) def lidar_callback(self, msg): pts = point_cloud2.read_points(msg, field_names=['x','y','z','intensity']) #Now process pts... The 4th line fails, namely: from sensor_msgs import point_cloud2 Any insights on how to do this in ROS2 Foxy? Originally posted by Morris on ROS Answers with karma: 35 on 2020-07-24 Post score: 0 Answer: https://github.com/ros2/common_interfaces/tree/master/sensor_msgs Not ported yet. https://github.com/ros/common_msgs/tree/noetic-devel/sensor_msgs/src/sensor_msgs looks pretty trivial to port though if you need it. Submit a PR! Originally posted by stevemacenski with karma: 8272 on 2020-07-24 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Morris on 2020-07-24: Ah yes, I will do that, but I'm new to this. How to submit a PR? Thanks! Comment by stevemacenski on 2020-07-24: Through GitHub, you have to understand Git to do so (if you don't know Git, you should really learn, its the basis for modern software development). Comment by Morris on 2020-07-24: Okay, I submitted this as a feature request. Hopefully that is the right thing to do. Thanks. Comment by stevemacenski on 2020-07-24: Sure thing, can you mark this answer as correct then?
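In the meantime, the core of read_points is little more than struct unpacking at the right byte offsets. Below is a standalone sketch, not the real sensor_msgs API: it assumes all requested fields are little-endian float32 (datatype 7, as x/y/z/intensity usually are) and takes the offsets, point_step and raw data that a PointCloud2 message carries:

```python
import struct

def read_xyzi(data, point_step, offsets):
    """Yield one (x, y, z, intensity) tuple per point from
    PointCloud2-style packed bytes.

    `offsets` maps field name -> byte offset within each point record;
    every field is assumed to be a little-endian float32.
    """
    names = ('x', 'y', 'z', 'intensity')
    for base in range(0, len(data), point_step):
        yield tuple(struct.unpack_from('<f', data, base + offsets[n])[0]
                    for n in names)

# Pack two fake points the way a driver would (16 bytes per point):
point_step = 16
offsets = {'x': 0, 'y': 4, 'z': 8, 'intensity': 12}
data = (struct.pack('<4f', 1.0, 2.0, 3.0, 0.5)
        + struct.pack('<4f', -1.0, 0.0, 4.0, 0.25))
points = list(read_xyzi(data, point_step, offsets))
```

With a real message you would pull point_step from msg.point_step, the offsets from msg.fields, and the bytes from msg.data; the ported module linked above does essentially this, plus handling of other datatypes and endianness.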
{ "domain": "robotics.stackexchange", "id": 35322, "tags": "ros2, sensor-msgs" }
Relativity of velocity while riding a bike
Question: I was trying to think of a situation in which an observer would be able to determine whether he is moving or not. Since velocity is a relative quantity I was unable to do so. However, consider a situation in which an observer is sitting on a bicycle moving with constant velocity (he is not pedaling, so he cannot use that to tell whether he is moving or not). In this situation, there is a way he can tell whether he and the bicycle are moving: if the bicycle is moving, it will not fall over, but if it is not moving, it will fall over with the observer, as it normally would. Thus the observer would be able to say whether he is moving or not. Can someone clarify what point I am missing and how velocity is relative in this scenario? Answer: Imagine you are on a planet the size of Earth, but which has a rotational period of 1 month 21 days. To avoid complexity let's assume this planet has no Sun: it's just sitting in space far from any star. You are on the equator, and you are cycling west at 20 miles an hour (someone has built a road around the equator). Do you fall off your bike? Because if you do the maths (and if I've done the maths right) you will discover that you are stationary (relative to the centre of the planet): you are cycling west at just the right speed to overcome the rotation of the planet towards the east. Well, the answer is, of course, no, you don't. Because motion is relative, and the bike can't somehow magically know that it is going at just the right speed to be stationary.
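The "1 month 21 days" figure follows from dividing the equatorial circumference by the cycling speed; a quick check, assuming an Earth-like mean radius:

```python
import math

radius_miles = 3959.0          # mean Earth radius (an assumed round figure)
speed_mph = 20.0               # the cyclist's speed along the equator

circumference = 2 * math.pi * radius_miles
period_hours = circumference / speed_mph
period_days = period_hours / 24

# Cycling west at 20 mph cancels the eastward surface motion exactly
# when the planet's rotational period equals this value.
print(round(period_days, 1))   # 51.8, roughly 1 month 21 days
```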
{ "domain": "physics.stackexchange", "id": 45013, "tags": "newtonian-mechanics, rotational-dynamics, velocity, relative-motion" }
Turtlebot navigation tf error
Question: When running the Turtlebot navigation demo I often get the following error: Waiting on transform from /base_link to /map to become available before running costmap, tf error Which occurs repeatedly until I quit. On about 20% of attempts the navigation starts normally. Any ideas on what might be causing this? One thing which I'm doing differently is using a map with resolution of 0.02, so perhaps it could be something related to the map being larger than expected. tf_monitor output when the problem occurs is as follows: RESULTS: for all Frames Frames: Frame: /base_footprint published by /robot_pose_ekf Average Delay: 0.0555048 Max Delay: 0.0800429 Frame: /base_link published by /robot_state_publisher Average Delay: -0.484329 Max Delay: 0 Frame: /front_wheel_link published by /robot_state_publisher Average Delay: -0.484305 Max Delay: 0 Frame: /gyro_link published by /robot_state_publisher Average Delay: -0.484302 Max Delay: 0 Frame: /kinect_depth_frame published by /robot_state_publisher Average Delay: -0.4843 Max Delay: 0 Frame: /kinect_depth_optical_frame published by /robot_state_publisher Average Delay: -0.484297 Max Delay: 0 Frame: /kinect_link published by /robot_state_publisher Average Delay: -0.484324 Max Delay: 0 Frame: /kinect_rgb_frame published by /robot_state_publisher Average Delay: -0.484294 Max Delay: 0 Frame: /kinect_rgb_optical_frame published by /robot_state_publisher Average Delay: -0.484291 Max Delay: 0 Frame: /laser published by /robot_state_publisher Average Delay: -0.484289 Max Delay: 0 Frame: /left_cliff_sensor_link published by /robot_state_publisher Average Delay: -0.484321 Max Delay: 0 Frame: /left_wheel_link published by /robot_state_publisher Average Delay: -0.950178 Max Delay: 0 Frame: /leftfront_cliff_sensor_link published by /robot_state_publisher Average Delay: -0.484319 Max Delay: 0 Frame: /plate_0_link published by /robot_state_publisher Average Delay: -0.484286 Max Delay: 0 Frame: /plate_1_link published by 
/robot_state_publisher Average Delay: -0.484284 Max Delay: 0 Frame: /plate_2_link published by /robot_state_publisher Average Delay: -0.484282 Max Delay: 0 Frame: /plate_3_link published by /robot_state_publisher Average Delay: -0.484279 Max Delay: 0 Frame: /rear_wheel_link published by /robot_state_publisher Average Delay: -0.484277 Max Delay: 0 Frame: /right_cliff_sensor_link published by /robot_state_publisher Average Delay: -0.484313 Max Delay: 0 Frame: /right_wheel_link published by /robot_state_publisher Average Delay: -0.950172 Max Delay: 0 Frame: /rightfront_cliff_sensor_link published by /robot_state_publisher Average Delay: -0.48431 Max Delay: 0 Frame: /spacer_0_link published by /robot_state_publisher Average Delay: -0.484274 Max Delay: 0 Frame: /spacer_1_link published by /robot_state_publisher Average Delay: -0.484272 Max Delay: 0 Frame: /spacer_2_link published by /robot_state_publisher Average Delay: -0.484269 Max Delay: 0 Frame: /spacer_3_link published by /robot_state_publisher Average Delay: -0.484267 Max Delay: 0 Frame: /standoff_2in_0_link published by /robot_state_publisher Average Delay: -0.484265 Max Delay: 0 Frame: /standoff_2in_1_link published by /robot_state_publisher Average Delay: -0.484262 Max Delay: 0 Frame: /standoff_2in_2_link published by /robot_state_publisher Average Delay: -0.484259 Max Delay: 0 Frame: /standoff_2in_3_link published by /robot_state_publisher Average Delay: -0.484256 Max Delay: 0 Frame: /standoff_2in_4_link published by /robot_state_publisher Average Delay: -0.484254 Max Delay: 0 Frame: /standoff_2in_5_link published by /robot_state_publisher Average Delay: -0.484251 Max Delay: 0 Frame: /standoff_2in_6_link published by /robot_state_publisher Average Delay: -0.484247 Max Delay: 0 Frame: /standoff_2in_7_link published by /robot_state_publisher Average Delay: -0.484243 Max Delay: 0 Frame: /standoff_8in_0_link published by /robot_state_publisher Average Delay: -0.48424 Max Delay: 0 Frame: /standoff_8in_1_link 
published by /robot_state_publisher Average Delay: -0.484237 Max Delay: 0 Frame: /standoff_8in_2_link published by /robot_state_publisher Average Delay: -0.484234 Max Delay: 0 Frame: /standoff_8in_3_link published by /robot_state_publisher Average Delay: -0.484231 Max Delay: 0 Frame: /standoff_kinect_0_link published by /robot_state_publisher Average Delay: -0.484228 Max Delay: 0 Frame: /standoff_kinect_1_link published by /robot_state_publisher Average Delay: -0.484225 Max Delay: 0 Frame: /wall_sensor_link published by /robot_state_publisher Average Delay: -0.484307 Max Delay: 0 All Broadcasters: Node: /robot_pose_ekf 10.051 Hz, Average Delay: 0.0555048 Max Delay: 0.0800429 Node: /robot_state_publisher 31.0555 Hz, Average Delay: -0.499502 Max Delay: 0 And when navigation starts normally: RESULTS: for all Frames Frames: Frame: /base_footprint published by /robot_pose_ekf Average Delay: 0.0562317 Max Delay: 0.0831862 Frame: /base_link published by /robot_state_publisher Average Delay: -0.483662 Max Delay: 0 Frame: /front_wheel_link published by /robot_state_publisher Average Delay: -0.483628 Max Delay: 0 Frame: /gyro_link published by /robot_state_publisher Average Delay: -0.483624 Max Delay: 0 Frame: /kinect_depth_frame published by /robot_state_publisher Average Delay: -0.48362 Max Delay: 0 Frame: /kinect_depth_optical_frame published by /robot_state_publisher Average Delay: -0.483616 Max Delay: 0 Frame: /kinect_link published by /robot_state_publisher Average Delay: -0.483654 Max Delay: 0 Frame: /kinect_rgb_frame published by /robot_state_publisher Average Delay: -0.483612 Max Delay: 0 Frame: /kinect_rgb_optical_frame published by /robot_state_publisher Average Delay: -0.483608 Max Delay: 0 Frame: /laser published by /robot_state_publisher Average Delay: -0.483604 Max Delay: 0 Frame: /left_cliff_sensor_link published by /robot_state_publisher Average Delay: -0.48365 Max Delay: 0 Frame: /left_wheel_link published by /robot_state_publisher Average Delay: -0.948303 
Max Delay: 0 Frame: /leftfront_cliff_sensor_link published by /robot_state_publisher Average Delay: -0.483645 Max Delay: 0 Frame: /odom published by /amcl Average Delay: -0.856934 Max Delay: 0 Frame: /plate_0_link published by /robot_state_publisher Average Delay: -0.4836 Max Delay: 0 Frame: /plate_1_link published by /robot_state_publisher Average Delay: -0.483596 Max Delay: 0 Frame: /plate_2_link published by /robot_state_publisher Average Delay: -0.483593 Max Delay: 0 Frame: /plate_3_link published by /robot_state_publisher Average Delay: -0.483589 Max Delay: 0 Frame: /rear_wheel_link published by /robot_state_publisher Average Delay: -0.483585 Max Delay: 0 Frame: /right_cliff_sensor_link published by /robot_state_publisher Average Delay: -0.483641 Max Delay: 0 Frame: /right_wheel_link published by /robot_state_publisher Average Delay: -0.948296 Max Delay: 0 Frame: /rightfront_cliff_sensor_link published by /robot_state_publisher Average Delay: -0.483636 Max Delay: 0 Frame: /spacer_0_link published by /robot_state_publisher Average Delay: -0.483581 Max Delay: 0 Frame: /spacer_1_link published by /robot_state_publisher Average Delay: -0.483577 Max Delay: 0 Frame: /spacer_2_link published by /robot_state_publisher Average Delay: -0.483573 Max Delay: 0 Frame: /spacer_3_link published by /robot_state_publisher Average Delay: -0.483569 Max Delay: 0 Frame: /standoff_2in_0_link published by /robot_state_publisher Average Delay: -0.483564 Max Delay: 0 Frame: /standoff_2in_1_link published by /robot_state_publisher Average Delay: -0.48356 Max Delay: 0 Frame: /standoff_2in_2_link published by /robot_state_publisher Average Delay: -0.483556 Max Delay: 0 Frame: /standoff_2in_3_link published by /robot_state_publisher Average Delay: -0.48355 Max Delay: 0 Frame: /standoff_2in_4_link published by /robot_state_publisher Average Delay: -0.483546 Max Delay: 0 Frame: /standoff_2in_5_link published by /robot_state_publisher Average Delay: -0.483541 Max Delay: 0 Frame: 
/standoff_2in_6_link published by /robot_state_publisher Average Delay: -0.483536 Max Delay: 0 Frame: /standoff_2in_7_link published by /robot_state_publisher Average Delay: -0.483532 Max Delay: 0 Frame: /standoff_8in_0_link published by /robot_state_publisher Average Delay: -0.483528 Max Delay: 0 Frame: /standoff_8in_1_link published by /robot_state_publisher Average Delay: -0.483523 Max Delay: 0 Frame: /standoff_8in_2_link published by /robot_state_publisher Average Delay: -0.483518 Max Delay: 0 Frame: /standoff_8in_3_link published by /robot_state_publisher Average Delay: -0.483514 Max Delay: 0 Frame: /standoff_kinect_0_link published by /robot_state_publisher Average Delay: -0.483509 Max Delay: 0 Frame: /standoff_kinect_1_link published by /robot_state_publisher Average Delay: -0.483505 Max Delay: 0 Frame: /wall_sensor_link published by /robot_state_publisher Average Delay: -0.483631 Max Delay: 0 All Broadcasters: Node: /amcl 14.3912 Hz, Average Delay: -0.856934 Max Delay: 0 Node: /robot_pose_ekf 10.0275 Hz, Average Delay: 0.0562317 Max Delay: 0.0831862 Node: /robot_state_publisher 31.0251 Hz, Average Delay: -0.498419 Max Delay: 0 The difference between rostopic listings is that when the error occurs many fewer move_base topics are listed. 
/move_base/current_goal /move_base/goal /move_base_simple/goal Compared to the normal condition /move_base/NavfnROS/NavfnROS_costmap/inflated_obstacles /move_base/NavfnROS/NavfnROS_costmap/obstacles /move_base/NavfnROS/NavfnROS_costmap/robot_footprint /move_base/NavfnROS/NavfnROS_costmap/unknown_space /move_base/NavfnROS/plan /move_base/TrajectoryPlannerROS/cost_cloud /move_base/TrajectoryPlannerROS/global_plan /move_base/TrajectoryPlannerROS/local_plan /move_base/TrajectoryPlannerROS/parameter_descriptions /move_base/TrajectoryPlannerROS/parameter_updates /move_base/cancel /move_base/current_goal /move_base/feedback /move_base/global_costmap/inflated_obstacles /move_base/global_costmap/obstacles /move_base/global_costmap/parameter_descriptions /move_base/global_costmap/parameter_updates /move_base/global_costmap/robot_footprint /move_base/global_costmap/unknown_space /move_base/goal /move_base/local_costmap/inflated_obstacles /move_base/local_costmap/obstacles /move_base/local_costmap/parameter_descriptions /move_base/local_costmap/parameter_updates /move_base/local_costmap/robot_footprint /move_base/local_costmap/unknown_space /move_base/parameter_descriptions /move_base/parameter_updates /move_base/result /move_base/status /move_base_simple/goal By inserting a lot of INFOs into move_base.cpp it seems that when the problem occurs move_base gets stuck at this line: planner_costmap_ros_ = new costmap_2d::Costmap2DROS("global_costmap", tf_); Originally posted by JediHamster on ROS Answers with karma: 995 on 2011-11-19 Post score: 1 Original comments Comment by eitan on 2011-11-28: It seems like AMCL is not publishing the transform from map to odom in the case where you see the warning. Are you sure that its getting odometry and laser data and running correctly? Answer: I had the same error and found that minimal.launch (turtlebot) wasn't running. Double check roscore? 
Originally posted by brianpen with karma: 183 on 2012-02-22 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 7349, "tags": "navigation, turtlebot, transform" }
What is the interpretation of this integral equation $\psi(x,t_2)=\int G(x,y)\psi(y,t_1)dy$ of the generalized Huygens principle?
Question: I recently came across the following equation $$\psi(x,t_2)=\int G(x,y)\psi(y,t_1)dy$$ I want to understand its interpretation. Here is what I understand: this equation gives the form of the new wavefront (which I believe is a straight line) $\psi(x,t_2)$ at the new position $x$ and the new time $t_2$ as the integral of the previous small wavelets multiplied by the propagator $G$ over the entire $y$ axis, that is, $\int G(x,y)\psi(y,t_1)dy$ Answer: What you are looking at is the representation of a linear operator as an integral (which is not possible for all of them). More specifically, the operator you are looking at is a time-translation invariant time-evolution operator $U(t_2, t_1)$. This operator evolves your solution at time $t_1$ to your solution at $t_2$, in other words $$ \psi(x, t_2) = \big(U(t_2, t_1) \psi\big)(x, t_1) $$ If such an operator is reasonably nice, it can be represented as an integral (note that the word "operator" typically implies linearity): $$ \psi(x, t_2) = \int dx'\, G(x, x') \psi(x', t_1) $$ The label $y$ was just chosen arbitrarily in your original equation (I have chosen $x'$); it's just a dummy integration variable, you can label it as you want. $x$ need not even be a number in this notation, it could be an element of Euclidean space $\mathbb{R}^n$ and the equations would all look the same. It refers to the same physical space as the variable $x$ of the resulting $\psi(x, t_2)$. $G(\cdot, x')$ can be understood as the wave after an evolution time of $t_2 - t_1$ when the initial condition was $\psi(x, t_1) = \delta(x - x')$. With this we can make your intuitive description of the structure of the solution explicit; here we write $U(t_2, t_1)$ as $U_x$ to make clear that it only acts on functions $x \mapsto f(x)$ in the free variable $x$: $$ \psi(x, t_2) = U_x \psi(x, t_1) = U_x \int dx' \psi(x', t_1) \delta(x - x') = \int dx' \psi(x', t_1) U_x \delta(x - x') = \int dx' \psi(x', t_1) G(x, x'). 
$$ So the solution $\psi(x, t_2)$ is the superposition of the time-evolution of the delta spikes weighted by the initial state. We could pull the operator into the integral here since the operator is linear (and we assume it and our initial data to be "reasonably nice") and $U_x$ does not act on $x'$, so the $\psi(x', t_1)$ are just coefficients. Note that this is not the Huygens principle as usually given: the solution depends not only on the points on the wavefront but on the value of the solution in the entire support of $G$ at an earlier time. (The wave equation $\partial_t^2 \phi = c^2 \Delta \phi$ in odd space dimensions, however, does permit a solution that can be interpreted as strictly following the Huygens principle). In other words: in general, you don't construct wavefronts from wavefronts, but rather wave configurations from wave configurations. And in this sense it is a strongly generalized Huygens principle. Note: Wikipedia has quite a nice discussion of more or less exactly these issues: https://en.wikipedia.org/wiki/Huygens%E2%80%93Fresnel_principle.
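A concrete numerical illustration of such an integral kernel (using the 1-D heat kernel rather than the oscillatory quantum propagator, purely to keep the integral well behaved): evolving a Gaussian initial profile through $\psi(x,t_2)=\int G(x,y)\psi(y,t_1)\,dy$ reproduces the known spreading of the variance, $\sigma^2 \to \sigma^2 + 2D(t_2-t_1)$.

```python
import math

def heat_kernel(x, y, dt, D=1.0):
    """G(x, y): the 1-D heat-equation propagator for a time step dt."""
    return math.exp(-(x - y) ** 2 / (4 * D * dt)) / math.sqrt(4 * math.pi * D * dt)

def evolve(psi, x, dt, ys):
    """psi(x, t2) = integral of G(x, y) psi(y, t1) dy, trapezoid rule."""
    h = ys[1] - ys[0]
    vals = [heat_kernel(x, y, dt) * psi(y) for y in ys]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

SIGMA2 = 0.5                                  # variance of the initial profile

def psi1(y):
    """Normalized Gaussian initial data at time t1."""
    return math.exp(-y * y / (2 * SIGMA2)) / math.sqrt(2 * math.pi * SIGMA2)

ys = [-10 + 0.01 * k for k in range(2001)]    # integration grid on [-10, 10]
dt = 0.25                                     # t2 - t1

# A Gaussian convolved with the heat kernel stays Gaussian, with variance
# SIGMA2 + 2*D*dt = 1.0 here, so psi(0, t2) should equal 1/sqrt(2*pi).
psi_at_0 = evolve(psi1, 0.0, dt, ys)
```

Note that the value at a single point $x$ draws on the initial data over the whole grid, exactly the "wave configurations from wave configurations" point made above.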
{ "domain": "physics.stackexchange", "id": 94030, "tags": "quantum-mechanics, optics, waves, greens-functions, huygens-principle" }
Reading a text file that contains several CSV-like tables
Question: I wrote some extension methods to read CSV-styled text directly into a datatable or dataset or write them to this format. Is it right to use the methods as extensions or should I create a separate class that contains this functionality, and so on? Formats explained: The ReadFromCsv and WriteToCsv will work with a normal CSV style like: Column1;Column2;Column3;... Value11;Value12;Value13;... Value21;Value22;Value23;... The ReadFromSectionedCsv and WriteToSectionedCsv methods use a format like this: [Table1] Column1;Column2;Column3;... Value11;Value12;Value13;... Value21;Value22;Value23;... [Table2] Column1;Column2;Column3;... Value11;Value12;Value13;... Value21;Value22;Value23;... where each table is read to a separate DataTable in the DataSet. using System; using System.Data; using System.IO; using System.Linq; using System.Text; using System.Text.RegularExpressions; namespace CsvExtensions { /// <summary> /// Erweiterungen für die Klassen System.Data.DataTable und System.Data.DataSet /// zum einlesen von an CSV angelehnten Daten direkt in eine Instanz dieser Typen /// </summary> public static class DataCsvExtension { //Trennzeichen der einzelnen Spalten private const char SEPERATOR = ';'; /// <summary> /// Liest die Daten einer CSV Datei ein /// </summary> /// <param name="table">DataTable object</param> /// <param name="filepath">Pfad zur CSV Datei</param> public static void ReadFromCsv(this DataTable table, string filepath) { using (Stream filestream = File.Open(filepath, FileMode.Open)) { table.ReadFromCsv(filestream); } } /// <summary> /// Liest die Daten einer CSV Datei ein /// </summary> /// <param name="table">DataTable object</param> /// <param name="filestream">Stream der CSV Datei</param> public static void ReadFromCsv(this DataTable table, Stream filestream) { table.Clear(); Encoding encoding = Encoding.UTF8; //Encoding.Default; //if (Utf8Checker.IsUtf8(filestream)) // encoding = Encoding.UTF8; StreamReader sr = new StreamReader(filestream, 
encoding); string line = sr.ReadLine(); //empty line is considered the end of the table if (String.IsNullOrEmpty(line)) return; string[] array = line.Split(SEPERATOR); foreach (string value in array) { DataColumn dataColumn = new DataColumn(value.Trim()) { Caption = value.Trim() }; table.Columns.Add(dataColumn); } table.NewRow(); while (sr.Peek() > -1) { line = sr.ReadLine(); if (line == null || (line.Trim() == "" || !line.Contains(SEPERATOR) || String.IsNullOrEmpty(line.Replace(';', ' ').Trim()))) continue; array = line.Split(SEPERATOR); int count = table.Columns.Count; if (array.Length < count) { string[] newArray = new string[count]; for (int s = 0; s<array.Length;s++) { newArray[s] = array[s]; } for( int s = array.Length; s<count;s++) { newArray[s] = ""; } array = newArray; } if (array.Length > table.Columns.Count) { //More Values than Columns found throw new Exception( String.Format( "Fehlerhafte Zeile: Wertanzahl entspricht nicht der Anzahl der Spalten: {0}", line)); } table.Rows.Add(array); } } /// <summary> /// Liest die Daten aus einer sektionierten CSV Datei in das DataSet /// Format der CSV-Datei: /// [Tabellenname1] /// Spalte1;Spalte2;Spalte3 /// Wert11;Wert12;Wert13 /// Wert21;Wert22;Wert23 /// .... /// WertN1;WertN2;WertN3 /// [Tabellenname2] /// Spalte1;Spalte2... /// ... 
/// </summary> /// <param name="dataset">DataSet object</param> /// <param name="filepath">Pfad zur CSV Datei</param> public static void ReadFromSectionedCsv(this DataSet dataset, string filepath) { const string PATTERN = @" ^ # Beginning of the line ((?:\[) # Section Start (?:[ ]*) (?<Section>[^\]^ ]*) # Actual Section text into Section Group (?:[ ]*) (?:\]) # Section End then EOL/EOB (?:[ ;]*) (?:[\r\n]{1,}) (?<Data>[^\[]*) (?:[\r\n]{0,}) )"; dataset.Clear(); using (Stream filestream = File.Open(filepath, FileMode.Open)) { Encoding encoding = Encoding.UTF8; //Encoding.Default; //if (Utf8Checker.IsUtf8(filestream)) // encoding = Encoding.UTF8; string fileContetnt; using (StreamReader sr = new StreamReader(filestream, encoding)) { fileContetnt = sr.ReadToEnd(); } var match = Regex.Matches(fileContetnt, PATTERN, RegexOptions.IgnorePatternWhitespace | RegexOptions.Multiline); foreach (Match m in match) { var sectionmatch = m.Groups["Section"]; var datamatch = m.Groups["Data"]; //Refactor: Is there a Better way to read the Tables of a Section? 
using(MemoryStream stream = new MemoryStream()) using (StreamWriter writer = new StreamWriter(stream)) { writer.Write(datamatch.Value); writer.Flush(); stream.Position = 0; dataset.Tables.Add(sectionmatch.Value).ReadFromCsv(stream); } } } } /// <summary> /// Schreibt die Daten der DataTable als neue CSV Datei /// </summary> /// <param name="table">DataTable object</param> /// <param name="filepath">Pfad zur CSV Datei</param> public static void WriteToCsv(this DataTable table, string filepath) { using (Stream filestream = File.Open(filepath, FileMode.CreateNew)) { table.WriteToCsv(filestream); } } /// <summary> /// Schreibt die Daten der DataTable im CSV Format in den angegebenen Stream /// </summary> /// <param name="table">DataTable object</param> /// <param name="filestream">Stream der CSV Datei</param> public static void WriteToCsv(this DataTable table, Stream filestream) { using (StreamWriter sw = new StreamWriter(filestream, Encoding.UTF8)) { int numberOfColumns = table.Columns.Count; for (int i = 0; i < numberOfColumns; i++) { sw.Write(table.Columns[i]); if (i < numberOfColumns - 1) sw.Write(SEPERATOR); } foreach (DataRow dr in table.Rows) { sw.WriteLine(); for (int i = 0; i < numberOfColumns; i++) { sw.Write(dr[i].ToString()); if (i < numberOfColumns - 1) sw.Write(SEPERATOR); } } sw.Flush(); } } /// <summary> /// Schreibt die Daten des DataSet als neue sektionierte CSV Datei /// Format der CSV-Datei: /// [Tabellenname1] /// Spalte1;Spalte2;Spalte3 /// Wert11;Wert12;Wert13 /// Wert21;Wert22;Wert23 /// .... /// WertN1;WertN2;WertN3 /// [Tabellenname2] /// Spalte1;Spalte2... /// ... 
/// </summary> /// <param name="dataset">DataSet object</param> /// <param name="filepath">Path to the CSV file</param> public static void WriteToSectionedCsv(this DataSet dataset, string filepath) { using (Stream filestream = File.Open(filepath, FileMode.CreateNew)) { using (StreamWriter sw = new StreamWriter(filestream, Encoding.UTF8)) { foreach (DataTable table in dataset.Tables) { sw.WriteLine("[{0}]", table.TableName); int numberOfColumns = table.Columns.Count; for (int i = 0; i < numberOfColumns; i++) { sw.Write(table.Columns[i]); if (i < numberOfColumns - 1) sw.Write(SEPERATOR); } foreach (DataRow dr in table.Rows) { sw.WriteLine(); for (int i = 0; i < numberOfColumns; i++) { sw.Write(dr[i].ToString()); if (i < numberOfColumns - 1) sw.Write(SEPERATOR); } } sw.WriteLine(); } sw.Flush(); } } } } } Questions: How would you handle malformed files? Are all the Streams and usings necessary? How can I organize the code with respect to easy unit testing? What parts of the code would you parametrize? For instance, letting the user select the separating char. Answer: if (array.Length < count) { string[] newArray = new string[count]; for (int s = 0; s<array.Length;s++) { newArray[s] = array[s]; } for( int s = array.Length; s<count;s++) { newArray[s] = ""; } array = newArray; } You can use Array.Resize to simplify this. if (array.Length < count) { var length = array.Length; Array.Resize(ref array, count); for (var i = length; i < array.Length; i++) { array[i] = string.Empty; } } if (line == null || (line.Trim() == "" || !line.Contains(SEPERATOR) || String.IsNullOrEmpty(line.Replace(';', ' ').Trim()))) I think you want to be using SEPERATOR instead of ; here. 
sw.WriteLine("[{0}]", table.TableName); int numberOfColumns = table.Columns.Count; for (int i = 0; i < numberOfColumns; i++) { sw.Write(table.Columns[i]); if (i < numberOfColumns - 1) sw.Write(SEPERATOR); } foreach (DataRow dr in table.Rows) { sw.WriteLine(); for (int i = 0; i < numberOfColumns; i++) { sw.Write(dr[i].ToString()); if (i < numberOfColumns - 1) sw.Write(SEPERATOR); } } sw.WriteLine(); This can be simplified. writer.WriteLine("[{0}]", table.TableName); writer.WriteLine(string.Join(SEPERATOR.ToString(), table.Columns.Cast<DataColumn>())); foreach (DataRow row in table.Rows) { writer.WriteLine(string.Join(SEPERATOR.ToString(), row.ItemArray)); } How can I organize the code with respect to easy unit testing? I would consider making the methods take a TextReader (TextWriter) instead of a Stream. You can then pass a StringReader (StringWriter) from your unit tests, while client code will normally pass a StreamReader (StreamWriter). This will also allow client code to choose the encoding (which they really should be doing) instead of being forced to use UTF-8. Another reason to consider doing this is that someone calling WriteToCsv might want to write to the stream after the call returns. But they will get an exception, since StreamWriter disposes of the underlying stream. For example, we get an ObjectDisposedException when we call WriteByte here: using (var stream = new MemoryStream()) { using (var writer = new StreamWriter(stream)) { } stream.WriteByte(0); } Finally, it makes code re-use a bit easier. For instance, WriteToSectionedCsv can be written in terms of WriteToCsv. 
public static void WriteToCsv(this DataTable table, TextWriter writer) { writer.WriteLine(string.Join(SEPERATOR.ToString(), table.Columns.Cast<DataColumn>())); foreach (DataRow row in table.Rows) { writer.WriteLine(string.Join(SEPERATOR.ToString(), row.ItemArray)); } } public static void WriteToSectionedCsv(this DataSet dataSet, TextWriter writer) { foreach (DataTable table in dataSet.Tables) { writer.WriteLine("[{0}]", table.TableName); table.WriteToCsv(writer); } }
{ "domain": "codereview.stackexchange", "id": 12470, "tags": "c#, csv, file-structure" }
Atoms Inside a Lightning Bolt
Question: What happens to atoms trapped in lightning? Why do electrons not split an atom but can change them inside the bolt? Can atoms travel on a bolt? Answer: Lightning is a large-scale natural spark discharge that occurs within the atmosphere or between the atmosphere and the Earth’s surface. On discharge, a highly electrically conductive plasma channel is created within the air, and when current flows within this channel, it rapidly heats the air up to about 25 000°C. The lightning channel is an example of terrestrial plasma in action. Plasma is the fourth state of matter: solid, liquid, gas, plasma. A plasma can be created by heating a gas or subjecting it to a strong electromagnetic field applied with a laser or microwave generator. This decreases or increases the number of electrons, creating positively or negatively charged particles called ions, and is accompanied by the dissociation of molecular bonds, if present. With the above in mind: What happens to atoms trapped in lightning? A number of neutral atoms, i.e. a positive nucleus and a negative orbital cloud of electrons, remain neutral and are carried by the convection and turbulence induced by the energy of the discharge. A number have one or two electrons dissociated by the scattering energies of the bolt and so the gas has free charges, electrons and ions, following the field. Why do electrons not split an atom but can change them inside the bolt? Electrons are normally bound to the charge of the atom in stable orbitals. The energy needed to split an electron off an atom is not very large, and the lightning bolt has it from the potential difference of cloud/ground. The plasma electrons may ionize other atoms in the turbulence. Can atoms travel on a bolt? Neutral atoms are carried by the eddies induced by the bolt. Ions (atoms that have lost one or more electrons) travel by the attraction of the potential difference between cloud and ground in addition to the convection. 
The free electrons and ions carry the currents of the bolt.
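The answer's claim that "the energy needed to split an electron off an atom is not very large" can be checked with a back-of-envelope comparison. This sketch is not from the answer; the 25 000 °C channel temperature is, while the ionization energies are standard textbook values:

```python
# Compare the thermal energy scale kT in a lightning channel with the
# first ionization energies of air's main constituents (rough sketch).

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def thermal_energy_ev(temp_celsius):
    """Mean thermal energy scale kT, in eV, at the given temperature."""
    return K_BOLTZMANN_EV * (temp_celsius + 273.15)

kT = thermal_energy_ev(25_000)          # channel temperature from the answer
ionization_ev = {"N": 14.5, "O": 13.6}  # first ionization energies, eV

# kT alone (~2.2 eV) is below the ~14 eV needed, but the high-energy tail
# of the thermal distribution plus field-accelerated electrons supply it.
print(f"kT ≈ {kT:.2f} eV vs. N: {ionization_ev['N']} eV")
```

So the bulk thermal energy does not ionize directly; it is the energetic tail and the cloud-to-ground potential difference that do the work, which is consistent with the answer's picture of only "one or two electrons dissociated" per affected atom.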
{ "domain": "physics.stackexchange", "id": 26937, "tags": "electromagnetism, molecules, lightning, subatomic" }
LaserScan gives nan data for close round or sharp-cornered objects
Question: Quite new to this; I was toying with Turtlebot GMapping on ROS Kinetic when I encountered this error. According to the topic description, any distance above 10 m will be referred to as nan. But while simulating, the msg.ranges[300], which refers to the front, gives a nan value, especially if the object is round or has a cylindrical surface. I changed the logic, so that on such occurrence, it keeps rotating. But the RViz map gets severely distorted, as it estimates the value at infinity or very far. How do I fix that? RViz map distortion: https://imgur.com/TXFw3qV Node: https://imgur.com/mpQvbtY Originally posted by Yokai- on ROS Answers with karma: 23 on 2021-07-12 Post score: 0 Original comments Comment by Mike Scheutzow on 2021-07-12: On http://wiki.ros.org/gmapping I see parameters ~maxRange and ~maxUrange. Have you set them to reasonable values? Comment by mgruhler on 2021-07-13: Also, could you please clarify if this is an issue with gmapping or if those are the actual values that get reported by your laserscanner? If yes, which laserscanner are you using (model, simulated)? Comment by Yokai- on 2021-07-13: @Mike Scheutzow, I'm using the gmapping_demo provided by the turtlebot tutorial, which has a max range set to 80 m by default. But the issue is with LaserScan which gives the distance of very close round objects as nan (>10 m). Even if I'm not using the turtlebot_gazebo gmapping_demo.launch file, the LaserScan alone gives faulty results. Comment by Yokai- on 2021-07-13: @mgruhler, I see no issue with GMapping, the LaserScan is providing faulty nan values. As for the laserscanner, the sensor used by the model is "360 Laser Distance Sensor LDS-01", according to https://emanual.robotis.com/docs/en/platform/turtlebot3/features/#features Comment by mgruhler on 2021-07-13: I'm confused. Are you running in simulation (gazebo) or a real robot with a real sensor? Comment by Yokai- on 2021-07-13: @mgruhler , sorry for not understanding earlier. It is a simulation. 
Comment by Mike Scheutzow on 2021-07-13: I have not run this particular simulation, but why are you surprised that a nan clears out the map up to 80 meters? You have told gmapping that the laser data is reliable out to that distance. Also, be aware that you can get nan in the real world because not every surface cleanly reflects the laser beam back to the lidar. Comment by Yokai- on 2021-07-13: @Mike Scheutzow, I have no issues with nan clearing out the map. My issue is that the laserscan gives nan data for objects as close as 1 m from the robot, which is incorrect. This happens especially if the object is cylindrical. Maybe I failed to clarify my question. Sorry for that. You can find the snapshot here: https://imgur.com/TXFw3qV Comment by Mike Scheutzow on 2021-07-13: All lidars I'm familiar with also have a minimum distance. They do not work down to 0 meters. You would need to look at the simulation model to know how yours is configured. Comment by Mike Scheutzow on 2021-07-13: Please describe what you mean by "distortion". Your first image is too small for me to make out the details, but it looks to me like the robot has not moved around that section enough to build up a good map. That cylinder adjacent to the robot is blocking a very big arc of the lidar scan. Answer: All lidars have a minimum range. Objects closer than this distance usually return either 0 or NotANumber (nan). To get optimal results, you must properly configure gmapping for the sensors you are using. Getting a nan sample from a lidar is normal and means "no light bounced back." gmapping will interpret this value as "no obstacles out to maximum range" for that ray. If you move the robot to a new pose, it may then see an object that was previously not visible. Originally posted by Mike Scheutzow with karma: 4903 on 2021-07-13 This answer was ACCEPTED on the original site Post score: 0
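One common workaround, not mentioned in the thread and therefore an assumption on my part, is to preprocess the scan in your own node before it reaches the mapper: replace each nan with a value just past range_max, which consumers that discard out-of-range readings will then drop instead of misreading as a long free ray. A minimal sketch with plain Python floats:

```python
import math

def clean_ranges(ranges, range_max):
    """Replace nan/inf samples with a value just past range_max.

    Consumers that ignore readings beyond range_max will then drop
    these samples instead of treating them as real measurements.
    """
    out_of_range = range_max + 1.0
    return [out_of_range if not math.isfinite(r) else r for r in ranges]

# Hypothetical scan: a valid 1.2 m return, a nan ("no light bounced back"),
# and an inf (some drivers report inf instead of nan for "no return").
scan = [1.2, math.nan, math.inf]
print(clean_ranges(scan, range_max=10.0))  # [1.2, 11.0, 11.0]
```

Whether the mapper then treats the ray as "free to max range" or ignores it depends on its own max-range parameters, so this has to be tuned together with gmapping's ~maxRange/~maxUrange settings mentioned in the comments.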
{ "domain": "robotics.stackexchange", "id": 36695, "tags": "rviz, ros-kinetic" }
Non-linear optics, non-linear polarization reference system?
Question: In Boyd's book about non-linear optics he defines the non-linear polarization for sum frequency generation, under particular symmetries, as $$ \left[\begin{array}{c} P_{x}(2 \omega) \\ P_{y}(2 \omega) \\ P_{z}(2 \omega) \end{array}\right]=2 \epsilon_{0}\left[\begin{array}{llllll} d_{11} & d_{12} & d_{13} & d_{14} & d_{15} & d_{16} \\ d_{21} & d_{22} & d_{23} & d_{24} & d_{25} & d_{26} \\ d_{31} & d_{32} & d_{33} & d_{34} & d_{35} & d_{36} \end{array}\right]\left[\begin{array}{c} E_{x}(\omega)^{2} \\ E_{y}(\omega)^{2} \\ E_{z}(\omega)^{2} \\ 2 E_{y}(\omega) E_{z}(\omega) \\ 2 E_{x}(\omega) E_{z}(\omega) \\ 2 E_{x}(\omega) E_{y}(\omega) \end{array}\right] $$ What is the reference system of the electric field? Suppose that only the coefficient $d_{36}$ is nonzero (for example like in GaAs) and the input electric field is linearly polarized. Changing the reference system we can have zero or nonzero nonlinear polarization. Why? Answer: The reference system is that of the crystal. Have a look for example at this document: "Nonlinear Optics Franz X. Kärtner and Oliver D. Mücke Center for Free-Electron Laser Science, DESY Department of Physics, University of Hamburg" (of December 18, 2016) and therein page 33. It gives a nice argument for figuring out which elements will be zero.
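The point can be made concrete numerically. In the crystal frame, with only $d_{36}$ nonzero, the matrix equation above reduces to $P_z(2\omega) = 2\epsilon_0 d_{36} \cdot 2 E_x E_y$. A field polarized along a crystal axis gives zero, while the same field rotated 45° in the x-y plane does not, because the crystal-frame components of the field change even though the physical field is merely rotated. A sketch with numpy ($\epsilon_0$ set to 1, the $d_{36}$ value arbitrary):

```python
import numpy as np

def p_2w(E, d):
    """P(2w) = 2*eps0 * d @ [Ex^2, Ey^2, Ez^2, 2EyEz, 2ExEz, 2ExEy].

    eps0 is set to 1 here; d is the 3x6 contracted tensor in the crystal
    frame, so E must also be expressed in crystal-axis coordinates.
    """
    Ex, Ey, Ez = E
    quad = np.array([Ex**2, Ey**2, Ez**2,
                     2*Ey*Ez, 2*Ex*Ez, 2*Ex*Ey])
    return 2 * d @ quad

d = np.zeros((3, 6))
d[2, 5] = 1.0   # only d36 nonzero, as in GaAs (zincblende symmetry)

E_axis = np.array([1.0, 0.0, 0.0])             # along crystal x axis
E_45 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)  # 45 degrees in the x-y plane

print(p_2w(E_axis, d))  # [0. 0. 0.]  -> no second harmonic generated
print(p_2w(E_45, d))    # [0. 0. 2.]  -> nonzero P_z(2w)
```

The tensor is fixed to the crystal axes; only the decomposition of E into those axes changes under rotation, which is exactly why the generated polarization can go from zero to nonzero.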
{ "domain": "physics.stackexchange", "id": 99632, "tags": "optics, visible-light, electric-fields, non-linear-systems" }
Parallel algorithm for LU-decomposition
Question: I need to implement LU-decomposition in Kaira. In Kaira the programmer writes the "parallel part" as a diagram similar to Petri nets. So, could you please recommend some parallel algorithms for LU-decomposition which are really easy to understand and implement? Low difficulty of implementation has the highest priority for me, because I'm not very familiar with Kaira and I'm in a bit of a hurry. I looked at fine-grained LU factorization, but I'm curious if some other algorithms are used. Answer: Finally I've found some descriptions of these algorithms here.
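Since the answer only links out, here is a compact sequential Doolittle-style LU factorization for reference. It is a sketch rather than the algorithm from the linked notes, and it skips pivoting; the per-step row/column updates are the independent pieces of work that a fine-grained parallel version distributes:

```python
def lu_decompose(A):
    """Doolittle LU factorization: A = L @ U, L unit lower triangular.

    No pivoting, so it assumes all leading principal minors are nonzero.
    At each step k, computing row k of U and column k of L involves
    independent dot products -- the natural source of parallelism.
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for k in range(n):
        L[k][k] = 1.0
        for j in range(k, n):          # row k of U
            U[k][j] = A[k][j] - sum(L[k][s] * U[s][j] for s in range(k))
        for i in range(k + 1, n):      # column k of L
            L[i][k] = (A[i][k] - sum(L[i][s] * U[s][k] for s in range(k))) / U[k][k]
    return L, U

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_decompose(A)
print(L)  # [[1.0, 0.0], [1.5, 1.0]]
print(U)  # [[4.0, 3.0], [0.0, -1.5]]
```

In a Petri-net-style model, each U[k][j] and L[i][k] computation can be a transition that fires as soon as its inputs from earlier steps are available.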
{ "domain": "cs.stackexchange", "id": 4015, "tags": "parallel-computing, matrices, decomposition" }
Should manifest.xml add include cflags for package automatically?
Question: When creating a package whose header files will be included by others, we have to add the following to the manifest.xml to specify the package's include path. <export> <cpp cflags="-I${prefix}/include" /> </export> Should this be done automatically as it's done for libraries? Originally posted by ubuntuslave on ROS Answers with karma: 347 on 2011-07-19 Post score: 0 Answer: Not all packages export the same location and some don't export anything, thus we do not do it automatically. By convention we usually use the same place, but it's not required. This is especially true for python packages etc. Originally posted by tfoote with karma: 58457 on 2011-07-19 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 6191, "tags": "ros, include, manifest.xml" }
How are propagator and two-point function related?
Question: Assume that we have a QFT with one scalar field $\phi$ with mass $m$ and the Lagrangian $$\begin{aligned} \mathcal{L}_{\mathrm{EFT}, \mathrm{off}}=& \frac{1}{2}\left(\partial_{\mu} \phi\right)^{2}-\frac{1}{2} m^{2} \phi^{2} \\ &-\frac{C_{4}}{4 !} \phi^{4}-\frac{C_{6}}{6 ! M^{2}} \phi^{6}-\frac{\tilde{C}_{6}}{4 ! M^{2}} \phi^{3} \square \phi-\frac{\hat{C}_{6}}{2 M^{2}}(\square \phi)^{2} \end{aligned}.$$ The propagator for $\phi$ in momentum space will then be something like $$\frac{i}{p^2-m^2 + i\epsilon}.$$ The Feynman rule for this propagator is usually represented by a straight line. In some lecture notes (that I'm unfortunately not allowed to share here) we consider all $1PI$ diagrams at tree-level which contribute to the two-point function, i.e. only one diagram, a straight line. The amplitude of this diagram is written down as $$\mathcal{M}_2 = i (p^2-m^2).$$ Question I don't understand why the propagator and amplitude don't coincide. I mean, just looking at the units these two things don't seem to be related, but we still use the same description in terms of Feynman diagrams, which seems weird. Is there a connection? How can I see it? Answer: The main point is that the 2-pt functions for the generator $W_c[J]$ of connected diagrams and the generator $\Gamma[\phi_{\rm cl}]$ of 1PI diagrams are each other's inverse (up to factors of $i$), cf. e.g. this & this related Phys.SE posts. In particular note that for a 1PI diagram the external legs are stripped/amputated. In this process, the free propagator/2pt-function then turns into its own inverse.
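The inverse relationship can be checked directly at tree level from the two expressions quoted in the question (a one-line computation, not from the lecture notes):

```latex
% Tree level: the 1PI two-point function is read off the quadratic action,
% \Gamma^{(2)}(p) = \mathcal{M}_2 = i(p^2 - m^2), while the connected
% two-point function is the free propagator G(p) = i/(p^2 - m^2 + i\epsilon).
\mathcal{M}_2(p)\, G(p)
  \;=\; i\,(p^{2}-m^{2})\cdot\frac{i}{p^{2}-m^{2}+i\epsilon}
  \;\xrightarrow{\;\epsilon\to 0\;}\; i^{2} \;=\; -1 .
% So the two objects are each other's inverse up to a factor of -1 --
% the "factors of i" of the answer. This also resolves the unit puzzle:
% mass^2 times mass^{-2} is dimensionless, as an inverse pair must be.
```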
{ "domain": "physics.stackexchange", "id": 76768, "tags": "quantum-field-theory, feynman-diagrams, greens-functions, correlation-functions, propagator" }
Acquiring the necessary background for research
Question: I'm starting my master's thesis in computer science. I'm interested in computational Origami (algorithms mainly); I've read a little about the subject and I'm worried because I lack some of the knowledge that I think is necessary to do research in that area. But still I'd like to start and want to ask: which is the better approach: learn the necessary background as I go, whenever a need pops up, or learn it first and then start? I have 4 to 5 months to finish my thesis. Answer: In my experience, it is better to learn stuff as you need it. Otherwise, you don't have the motivation to really "get" it, and there is always the risk of getting lost in the forest of interesting stuff out there (that in the end is of no use). Be on the lookout for results that could help you (Google or citeseerx or ..., or even this site, are your friends). Another piece of advice: Start writing right now. Editing your ramblings into something coherent later on is easy, sitting in front of a blank screen with writer's block when the deadline approaches is extremely painful. Write down what you do right now; if it works out it stays for later, if not it gets erased (and you don't get embarrassed by your dumb attempts ;-). That way it doesn't happen to you that you know something is right, but don't remember why... and have to redo lots of work (happened to me!). Make a short (3-4 line) summary of each paper or other reference that looks relevant, add it to your bibliography. Reading the stuff so you are able to extract what is relevant to you helps understanding the stuff, the summaries help in locating material later. And the collection of summaries will be your chapter on state of the art (after cleanup). As a final comment, use LaTeX (it looks much nicer, has very good handling of bibliographies ;-), and perhaps asymptote for figures (it is a bit hard to learn, but has C++-like syntax to manipulate points and define lines, and produces excellent results). 
Put the whole thing under version control (my favorite is git, but use whatever you are comfortable with). That way you'll be able to have an up-to-date backup (preferably on some other machine!), and go back if you really mess up (Murphy's law assures us you will).
{ "domain": "cs.stackexchange", "id": 952, "tags": "research" }
Style-changing handler for an HTML drop-down box
Question: How can I use a loop to compress the amount of JavaScript/jQuery I need to use? I have a function s3episodesChange() linked to a <select> tag. This <select> tag has a few different selectable values that change style properties of various different <button> tags. All it is doing is showing the appropriate button and making sure all other buttons are hidden well before showing a new one. It also makes sure that when you load the page it will show the first button tied to the first select tag's value. Have a look at my code and tell me what steps I could possibly take to try and achieve this using less overall code. function season3episodesChange() { //Episodes: var episode1 = "1 - The Thin White Line"; var episode2 = "2 - Brian Does Hollywood"; var episode3 = "3 - Mr. Griffin Goes to Washington"; var episode4 = "4 - One If by Clam, Two If by Sea"; var episode5 = "5 - And the Wiener Is..."; var episode6 = "6 - Death Lives"; var episode7 = "7 - Lethal Weapons"; -------------------/\--------------------------- //There are 22 total vars; I just cut most of them out. 
var selectseason3episode = document.getElementById('selectseason3episode'); var season3episode1 = document.getElementById('season3episode1'); if(selectseason3episode.value == episode1){ season3episode1.style.display = 'inline-block'; } else { document.getElementById('season1episode1').style.display = 'none'; } if(selectseason3episode.value == episode2){ season3episode1.style.display = 'none'; document.getElementById('season3episode2').style.display = 'inline-block'; } else { document.getElementById('season3episode2').style.display = 'none'; } if(selectseason3episode.value == episode3){ season3episode1.style.display = 'none'; document.getElementById('season3episode3').style.display = 'inline-block'; } else { document.getElementById('season3episode3').style.display = 'none'; } if(selectseason3episode.value == episode4){ season3episode1.style.display = 'none'; document.getElementById('season3episode4').style.display = 'inline-block'; } else { document.getElementById('season3episode4').style.display = 'none'; } if(selectseason3episode.value == episode5){ season3episode1.style.display = 'none'; document.getElementById('season3episode5').style.display = 'inline-block'; } else { document.getElementById('season3episode5').style.display = 'none'; } if(selectseason3episode.value == episode6){ season3episode1.style.display = 'none'; document.getElementById('season3episode6').style.display = 'inline-block'; } else { document.getElementById('season3episode6').style.display = 'none'; } if(selectseason3episode.value == episode7){ season3episode1.style.display = 'none'; document.getElementById('season3episode7').style.display = 'inline-block'; } else { document.getElementById('season3episode7').style.display = 'none'; } if(selectseason3episode.value == episode8){ season3episode1.style.display = 'none'; document.getElementById('season3episode8').style.display = 'inline-block'; } else { document.getElementById('season3episode8').style.display = 'none'; } 
if(selectseason3episode.value == episode9){ season3episode1.style.display = 'none'; document.getElementById('season3episode9').style.display = 'inline-block'; } else { document.getElementById('season3episode9').style.display = 'none'; } if(selectseason3episode.value == episode10){ season3episode1.style.display = 'none'; document.getElementById('season3episode10').style.display = 'inline-block'; } else { document.getElementById('season3episode10').style.display = 'none'; } if(selectseason3episode.value == episode11){ season3episode1.style.display = 'none'; document.getElementById('season3episode11').style.display = 'inline-block'; } else { document.getElementById('season3episode11').style.display = 'none'; } if(selectseason3episode.value == episode12){ season3episode1.style.display = 'none'; document.getElementById('season3episode12').style.display = 'inline-block'; } else { document.getElementById('season3episode12').style.display = 'none'; } if(selectseason3episode.value == episode13){ season3episode1.style.display = 'none'; document.getElementById('season3episode13').style.display = 'inline-block'; } else { document.getElementById('season3episode13').style.display = 'none'; } if(selectseason3episode.value == episode14){ season3episode1.style.display = 'none'; document.getElementById('season3episode14').style.display = 'inline-block'; } else { document.getElementById('season3episode14').style.display = 'none'; } if(selectseason3episode.value == episode15){ season3episode1.style.display = 'none'; document.getElementById('season3episode15').style.display = 'inline-block'; } else { document.getElementById('season3episode15').style.display = 'none'; } if(selectseason3episode.value == episode16){ season3episode1.style.display = 'none'; document.getElementById('season3episode16').style.display = 'inline-block'; } else { document.getElementById('season3episode16').style.display = 'none'; } if(selectseason3episode.value == episode17){ season3episode1.style.display = 
'none'; document.getElementById('season3episode17').style.display = 'inline-block'; } else { document.getElementById('season3episode17').style.display = 'none'; } if(selectseason3episode.value == episode18){ season3episode1.style.display = 'none'; document.getElementById('season3episode18').style.display = 'inline-block'; } else { document.getElementById('season3episode18').style.display = 'none'; } if(selectseason3episode.value == episode19){ season3episode1.style.display = 'none'; document.getElementById('season3episode19').style.display = 'inline-block'; } else { document.getElementById('season3episode19').style.display = 'none'; } if(selectseason3episode.value == episode20){ season3episode1.style.display = 'none'; document.getElementById('season3episode20').style.display = 'inline-block'; } else { document.getElementById('season3episode20').style.display = 'none'; } if(selectseason3episode.value == episode21){ season3episode1.style.display = 'none'; document.getElementById('season3episode21').style.display = 'inline-block'; } else { document.getElementById('season3episode21').style.display = 'none'; } if(selectseason3episode.value == episode22){ season3episode1.style.display = 'none'; document.getElementById('season3episode22').style.display = 'inline-block'; } else { document.getElementById('season3episode22').style.display = 'none'; } } Note: The JavaScript is being loaded on the script tag like so: <select onload="javascript:season1episodesChange()" onchange="javascript:season1episodesChange()"> I have onload so that it loads the season1 episode1 button when the page loads. I haven't actually tested if it even needs to be there, but I'm pretty sure it does. Answer: Instead of having individual variables for each episode, and then iterating over all of them, use an array. 
By using an array, you can use a for loop to iterate over all of them, turning the massive chain of if statements into one loop: function season3episodesChange() { //Episodes: var episodes = [ "1 - The Thin White Line", "2 - Brian Does Hollywood", "3 - Mr. Griffin Goes to Washington", "4 - One If by Clam, Two If by Sea", "5 - And the Wiener Is...", "6 - Death Lives", "7 - Lethal Weapons" ]; var selectseason3episode = document.getElementById('selectseason3episode'); for (var i = 1; i <= episodes.length; i++){ if (selectseason3episode.value == episodes[i - 1]){ document.getElementById('season3episode' + i).style.display = 'inline-block'; } else { document.getElementById('season3episode' + i).style.display = 'none'; } } }
{ "domain": "codereview.stackexchange", "id": 15653, "tags": "javascript, form, event-handling" }
Subscriber declaration in while loop
Question: If I declare a subscriber in the while (ros::ok()) loop, is it allowed to change the topic which I read from on each iteration? Actually it doesn't work: the topics which I read from are always the same, the initial ones: "..Data1","...Data2" What am I doing wrong? string targetI_id="1"; string targetII_id="2"; while (ros::ok()) { // here i change the string at each loop ros::Subscriber vrepVideoSubscriber_trgII = n.subscribe("/vrep... /visionSensorData"+targetII_id,1,newImageTrigger_trgII); ros::Subscriber vrepVideoSubscriber_trgI = n.subscribe("/vrep... /visionSensorData"+targetI_id,1,newImageTrigger_trgI); ..... Originally posted by mateo_7_7 on ROS Answers with karma: 90 on 2013-06-11 Post score: 1 Answer: In principle this is allowed. However, you are probably changing the subscriptions very fast. To successfully get a message through after the subscribe call, the nodes must arrange for a connection to be opened and a message must be sent. This usually takes some time. As soon as one loop cycle ends your subscribers go out of scope and will be destroyed, so depending on the code after ....., this code won't do what you want it to do. Originally posted by dornhege with karma: 31395 on 2013-06-12 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by mateo_7_7 on 2013-06-12: Actually I receive the name of the topic (the string), which I have to read from, from another subscriber (and for sure the updating of this string is performed correctly)....but, as I said, I cannot read from this new topic "/vrep/visionSensorData"+targetII_id because the topic remains: Comment by mateo_7_7 on 2013-06-12: "/vrep/visionSensorData"+NULL even if the string is updated. The updating of this string (apart from the first time, which is performed after 1 loop) is performed very rarely ....how can I do it??? Comment by dornhege on 2013-06-12: You have to give the subscribers some time to connect and receive messages in your code design. 
If you really just want to get something quickly, a service might be better suited.
{ "domain": "robotics.stackexchange", "id": 14530, "tags": "ros, c++" }
Is there a known convolutional net architecture to calculate object masks for images?
Question: I would like to train a convnet to do the following: Input is a set of single channel (from black to tones of grey to white) pictures with a given object, let's say cars. Target is, for every picture in the set, the same picture, but with pixels either black or white. The pixels corresponding to the car object are in white (i.e. intensity 255) and the pixels corresponding to the background are black (i.e. intensity 0). After training I would like to feed the net with pictures of cars and I would like the prediction - the ideal prediction at least - to be a picture with pixels either black or white, where white corresponds to the object and black to the background. I assume that the input layer is a 2D convolutional layer and the output layer is also a 2D convolutional layer, each one with as many neurons as pixels in the pictures. Can anyone please explain what kind of network architecture would accomplish just that? It could be either the architecture (in theory) or implemented in code. I expect to tweak it, but it would be nice not to start from scratch. Answer: I'm surprised nobody has mentioned fully convolutional neural networks (FCNs) for semantic segmentation. They are inspired by the original AlexNet style convnets that ended with one or two densely connected layers and a softmax classifier. But FCNs dispense with the dense layers and stay fully convolutional all the way to the end. Here's the basic principle. Take AlexNet, or VGG or something like that. But instead of using the parameters in the classifier to compute a scalar for each category, use them to compute a whole array (i.e. image) using a 1x1xNUM_CATEGORIES convolution. The output will be NUM_CATEGORIES feature maps, each representing a coarse-grained "heat map" for that category. A map of dogness, a map of catness. It can be sharpened by including information from earlier layers with "skip connections". 
EDIT: Just one further bit of good news: the authors of that paper provide implementations of their nets in Caffe's Model Zoo. Tweak away!
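The "1x1xNUM_CATEGORIES convolution" step is just a per-pixel linear map from feature channels to category scores, which a few lines of numpy make concrete. This is a toy sketch of that one step with made-up sizes, not the architecture from the paper:

```python
import numpy as np

def conv_1x1(features, weights, bias):
    """1x1 convolution: per-pixel linear map, C_in channels -> C_out maps.

    features: (H, W, C_in) feature maps from the convolutional trunk.
    weights:  (C_in, C_out), bias: (C_out,).
    Returns (H, W, C_out) coarse "heat maps", one per category.
    """
    return features @ weights + bias

rng = np.random.default_rng(0)
H, W, C_IN, NUM_CATEGORIES = 8, 8, 16, 2  # toy sizes; e.g. car vs background

features = rng.standard_normal((H, W, C_IN))
weights = rng.standard_normal((C_IN, NUM_CATEGORIES))
bias = np.zeros(NUM_CATEGORIES)

heatmaps = conv_1x1(features, weights, bias)
mask = heatmaps.argmax(axis=-1)   # per-pixel category decision -> binary mask
print(heatmaps.shape, mask.shape)  # (8, 8, 2) (8, 8)
```

In a trained FCN the weights come from repurposing the old classifier parameters, and the coarse maps are then upsampled (and refined with skip connections) back to the input resolution to give the black/white mask the question asks for.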
{ "domain": "datascience.stackexchange", "id": 1054, "tags": "deep-learning, convolutional-neural-network" }
Why are black mosquito nets so much less visible than white ones?
Question: I have just been installing a couple of mosquito nets for windows in our apartment. The material is polyester, fibers are approximately 0.2mm thick, and the hexagonal "holes" approximately 1.5mm in diameter. Just like the packaging says ("buy black ones if you want to have a better view"), the black ones are almost invisible from a little distance (~2 meters, it's a little bit darker, almost in effect like an ND photo filter). From the same distance, the white ones are clearly visible, with good eyes you can also spot the "holes" (the effect looks almost like if you put the black point in photoshop into a much whiter area, making the contrast much less). Why is that so? Is there a physical effect that objectively reduces the "image information" that reaches the human eye? I would appreciate an answer that -- if applicable -- contains a short average human explanation, and an additional one that goes a bit more in-depth into the "hobbyist physics" level. Answer: This is similar to the reasons one-way mirrors work. If you look through a black net then no light is reflected from the net so the eye sees only the light coming from the objects on the far side of the net. The amount of the external light that reaches you is reduced, but the brain is pretty good at reconstructing images from only partial data, so the view looks unchanged. If you look through a white net then the eye receives a mixture of the light reflected from the net and the light from outside transmitted through it. If the room you are in is dark and the outside is bright, then the amount of light reflected from the net is small compared to the transmitted light and you still don't see the net. However if the room is light and the outside dark then the light reflected from the net swamps the light transmitted and you only see the net. In between you'll see both the net and the view.
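The ND-filter analogy in the question can be quantified from the numbers given: with ~0.2 mm fibers and ~1.5 mm holes, only a modest fraction of the light is blocked. A rough open-area estimate, treating the mesh as a square grid for simplicity (an assumption; the real weave is hexagonal):

```python
def open_fraction(hole_mm, fiber_mm):
    """Fraction of light transmitted through a mesh, square-grid model.

    Each cell has pitch (hole + fiber); the open part is hole x hole.
    """
    pitch = hole_mm + fiber_mm
    return (hole_mm / pitch) ** 2

f = open_fraction(hole_mm=1.5, fiber_mm=0.2)
print(f"transmitted ≈ {f:.0%}")  # roughly 78%, about a third of a photo stop

# For a black net the blocked ~22% is absorbed (nothing reflected back),
# so the eye just sees a slightly dimmed scene; for a white net that same
# ~22% reflects room light toward the eye and can swamp the view.
```

This matches the answer's mechanism: the geometry fixes how much light is intercepted, and the net's color decides whether the intercepted fraction disappears (black) or comes back at you (white).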
{ "domain": "physics.stackexchange", "id": 14518, "tags": "visible-light, vision" }
Hokuyo - No map received in RViz
Question: Hi everyone! I am trying to use the package "GMapping" to visualize the map acquired with a Hokuyo laser scanner! I have no odometry, so I simulated a TF with a launch file, which already worked in other applications! <launch> <node pkg="tf" type="static_transform_publisher" name="US6" args="0 7 2 1.5708 0 0 base_link laser 100" /> </launch> When running RViz, in Global Options, the Fixed frame is map; In Grid, the Reference Frame is map; In TF I have the following warnings: No transform from [map] to [base_link] / No transform from [odom] to [base_link] In map, the topic is map and I receive this warning and error message: No map received / No transform from [] to [base_link] What did I do wrong? I notice that if I change the Fixed Frame in Global Options and the one in Grid, the error disappears and some other errors appear... what are the correct options to be selected in Global Options and in Grid? Thank you for your time! Originally posted by anamcarvalho on ROS Answers with karma: 123 on 2014-08-28 Post score: 1 Answer: As bvbdort already mentioned, you need to also publish the odom -> base_link transform. If you don't have an odometry system, you can do it with a static transform publisher in your launch file like this: <launch> <node pkg="tf" type="static_transform_publisher" name="odom_to_base_link" args="0 0 0 0 0 0 odom base_link 100" /> <node pkg="tf" type="static_transform_publisher" name="US6" args="0 7 2 1.5708 0 0 base_link laser 100" /> </launch> That way every transform gmapping expects is set up. Mind that gmapping won't really be able to estimate the position of the laser in the map, since you publish odometry information manually as a static transformation. But at least you should see a map built from your laser. PS: If it still doesn't work that way, please provide info about what gmapping prints out. 
Originally posted by Malefitz with karma: 136 on 2014-09-02 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by anamcarvalho on 2014-09-02: Thank you Malefitz, it works =) Now I have another question, but I'll open a new one, since it isn't related to the transforms!
{ "domain": "robotics.stackexchange", "id": 19225, "tags": "navigation, mapping, rviz, hokuyo, gmapping" }
Massalin's Synthesis Quajects equivalent to ASM generating macros used in Game Oriented Assembly LISP?
Question: Alexia Massalin's Dissertation on Synthesis was a PhD thesis on Operating Systems that contained a concept called 'Quajects' (see Chapter 4). This is some additional commentary on the PhD Thesis. Best I can work out - a Quaject is a construct that generates Assembler customised for the function being used at the time. (Perhaps like a JIT). The project that I've seen that came closest to this was Game Oriented Assembly LISP (GOAL), a framework used in Crash Bandicoot that used ASM-generating LISP macros to speed up the development iteration process and generate the production code. Can we say that the Macros generating ASM in GOAL were quajects? (yes or no question - please explain why if yes, and reasons if no.) Answer: As I see it, a Quaject is a kind of software component in the sense that it has a provided interface (the callentry functions), which contains the functions it offers, and a required interface (the callout functions), which contains the functions it needs to operate. The callback part of the interface is for callback functions, which are functions that are available to call the caller of a function. (In Java you cannot call the caller unless an explicit reference is passed.) I don't think it is correct to say that "a Quaject is a construct that generates Assembler customised for the function being used at the time." For example, in Section 4.1.4 of the thesis it states that Higher-level kernel services are built by composing several basic quajects. So the quajects are the ingredients that are used to build the custom services. They do not build them. GOAL seems to be a variant of Scheme that allows inline assembly language, which I can imagine allows quite some flexibility in the programming. The language is compiled to assembly. The assembly language it includes is not in the shape of any sort of component, as far as I can tell. Based on this, I would say that the answer to your question is no.
{ "domain": "cs.stackexchange", "id": 267, "tags": "operating-systems" }
Does the rotation of the earth dramatically affect airplane flight time?
Question: Say I'm flying from Sydney to Los Angeles (S2LA), and back to Sydney (LA2S). During S2LA, travelling with the rotation of the earth, would the flight time be longer than LA2S on account of Los Angeles turning/moving away from our position? Or, in the opposite direction, would the flight to Sydney be faster since the Earth turns underneath us and moves Sydney closer? === Please ignore jet stream effects and all other variables; this is a control case in an ideal environment. By "dramatically" I suppose I mean a delay of 1 hour or more. Answer: During the flight, you need to get up to use the restroom. There's one 10 rows in front of you, and another 10 rows behind you. Does it take longer to walk to the one that's moving away from you at 600 mph than the one that's moving towards you at 600 mph? No, because you're moving at 600 mph right along with it -- in the ground-based frame of reference. In the frame of reference of the airplane, everything is stationary. Similarly, the airplane is already moving along with the surface of the Earth before it takes off. The rotation of the Earth has no direct significant effect on flight times in either direction. That's to a first-order approximation. As others have already said, since the Earth's surface is (very nearly) spherical and is rotating rather than moving linearly, Coriolis effects can be significant. But prevailing winds (which themselves are caused by Coriolis and other effects) are more significant than any direct Coriolis effect on the airplane.
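To see just how small the direct Coriolis effect is, a back-of-envelope sketch (my own numbers, using standard textbook values rather than anything from the original answer):

```python
# Coriolis acceleration magnitude is a = 2 * Omega * v for motion at speed v
# in a frame rotating at angular velocity Omega.
OMEGA = 7.292e-5   # Earth's angular velocity, rad/s
G = 9.81           # gravitational acceleration, m/s^2

v = 250.0          # typical jet cruise speed, m/s (~900 km/h)
a_coriolis = 2 * OMEGA * v

print(f"Coriolis acceleration: {a_coriolis:.3f} m/s^2 "
      f"({a_coriolis / G:.2%} of g)")
```

A few hundredths of a m/s^2 -- real, and enough to help shape the prevailing winds, but nowhere near enough to produce an hour's difference in flight time by itself.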
{ "domain": "physics.stackexchange", "id": 1828, "tags": "geometry, aircraft, rotation" }
How do I reduce the smell of store-bought liquid soap?
Question: I recently bought a brand of liquid soap I've never used before and it turns out they formulated it with a bit too much of the scent ingredients, to the point where I find it gets uncomfortably close to nauseating during use, and also it persists too long on my hands afterwards, compared to what I find normal. How do I neutralize some of this scent, what can I try to mix in? All I can think of are baking soda or vinegar (but then vinegar also persists a lot and is unpleasant, plus it's acidic and I'm going to mix it with a probably-alkaline soap, and I don't want any crazy reactions and byproducts if possible). Answer: If the scent reaches out and touches you that much, it is likely volatile. Pour it in a pot, add an equal volume of water, and heat on a hot plate outside your dwelling till the smell goes away. Evaporate back down to the original volume and check the ingredients. You may want to add back a small amount of a desirable ingredient that boiled off. Raise the temperature slowly. A vigorous boil will make a real mess.
{ "domain": "chemistry.stackexchange", "id": 11726, "tags": "smell" }
Tkinter stopwatch
Question: As part of a personal project, I wanted to make a stopwatch with Python. I wish to show my friend but at the moment it is a little messy and long. Is there any way to make it more compact or easier to read? #stopwatch from tkinter import* import time root=Tk() root.configure(background=("black")) root.title("stopwatch") root.geometry("1000x800") time_elapsed1=0 time_elapsed2=0 time_elapsed3=0 i=0 j=0 time1=0 def create_label(text,_x,_y): label = Label(root, text=text,fg='white', bg="black",font=("default",10,"bold")) label.place(x=_x,y=_y,width=100,height=45) def start(): start_button.place_forget() stop_button.place(x = 20, y = 300, width=300, height=100) global time_elapsed1,time_elapsed2,time_elapsed3,time1,self_job,time2 time2=int(time.time()) if time2!=time1: time1=time2 if time_elapsed1<59: time_elapsed1+=1 clock_frame.config(text=str(time_elapsed3) + ":" + str(time_elapsed2)+ ":" + str(time_elapsed1)) else: time_elapsed1=0 if time_elapsed2<59: time_elapsed2+=1 clock_frame.config(text=(str(time_elapsed3) + ":" + str(time_elapsed2)+ ":" + str(time_elapsed1))) else: time_elapsed2=0 if time_elapsed3<23: time_elapsed3+=1 clock_frame.config(text=(str(time_elapsed3) + ":" + str(time_elapsed2)+ ":" + str(time_elapsed1))) else: print("you left it on for too long") self_job=root.after(1000,start) def stop(): global self_job if self_job is not None: root.after_cancel(self_job) self_job = None stop_button.place_forget() start_button.place(x = 20, y = 300, width=300, height=100) def clear(): global time_elapsed1,time_elapsed2,time_elapsed3,time1,self_job,time2,label,i,j try: stop() except: start() stop() clock_frame.config(text="0:0:0") time_elapsed1=0 time_elapsed2=0 time_elapsed3=0 time_1=0 time_2=0 i=0 j=0 wig=root.winfo_children() for b in wig: b.place_forget() start_button.place(x = 20, y = 300, width=300, height=100) lap_button.place(x = 660, y = 300, width=300, height=100) reset_button.place(x = 340, y = 300, width=300, height=100) clock_frame.place(x = 
200, y = 50, width=600, height=200) def lap(): global time_elapsed1,time_elapsed2,time_elapsed3,time1,self_job,time2,i,j if i<9: create_label((str(time_elapsed3)+":"+str(time_elapsed2)+ ":" + str(time_elapsed1)),20+(110*i),400+(j*50)) else: j+=1 i=0 create_label((str(time_elapsed3)+":"+str(time_elapsed2)+ ":" + str(time_elapsed1)),20+(110*i),400+(j*50)) i+=1 clock_frame=Label(text="0:0:0",bg="black",fg="blue",font=("default",100,"bold")) start_button=Button(text="START",bg="green",fg="black",command=start,font=("default",50,"bold")) stop_button=Button(text="STOP",bg="red",fg="black",command=stop,font=("default",50,"bold")) lap_button=Button(text="LAP",bg="#4286f4",fg="black",command=lap,font=("default",50,"bold")) reset_button=Button(text="RESET",bg="orange",fg="black",command=clear,font=("default",50,"bold")) start_button.place(x = 20, y = 300, width=300, height=100) lap_button.place(x = 660, y = 300, width=300, height=100) reset_button.place(x = 340, y = 300, width=300, height=100) clock_frame.place(x = 200, y = 50, width=600, height=200) root.mainloop() Answer: Have you learned how to make objects in Python? Typically when you have a GUI, you want to have a class in charge of displaying all the GUI pieces, and it has reference to an object such as a Stopwatch that takes care of all of the logic. 
The Stopwatch class would look something like this: import time class Stopwatch: def __init__(self): self.start_time = 0 self.laps = [] def start(self): # Implement your starting of the timer code here self.start_time = time.time() def lap(self): # Implement your lapping logic lap = time.time() self.laps.append(lap) def stop(self): # Implement your stop timer logic elapsed = time.time() - self.start_time def reset(self): # Implement your watch reset logic here self.start_time = 0 def display_time(self): # Return the time to display on the GUI elapsed = time.time() - self.start_time # Figure out how to break the time into hour:minute:second # The time class might even have convenience functions for this sort of thing, look up the documentation display = elapsed # after you made it look nice return display Then in the GUI code you can make a Stopwatch object and let it take care of the messy work of saving times and doing math with them. The GUI class is just concerned with showing stuff the right way.
Might look something like (minus positioning all the GUI components): # GUI root = Tk() root.configure(background=("black")) root.title("stopwatch") root.geometry("1000x800") stopwatch = Stopwatch() def create_label(text,_x,_y): label = Label(root, text=text,fg='white', bg="black",font=("default",10,"bold")) label.place(x=_x,y=_y,width=100,height=45) def setup(): pass # Create all of the GUI components and build all the visuals def start(): stopwatch.start() # let it do the logical work # do your GUI updates create_label(stopwatch.display_time()) stop_button.place() def stop(): stopwatch.stop() # Logic and math here # Do GUI updates for stop create_label(stopwatch.display_time()) def clear(): stopwatch.reset() # Logic again handled in the Stopwatch class # Clean up GUI components def lap(): # The Stopwatch class can keep a list of all lap times and make your life easier stopwatch.lap() # Next update the GUI create_label(stopwatch.display_time()) # Good form to have all the logic run inside functions #instead of hanging around to be accidentally executed if __name__ == "__main__": setup() root.mainloop() It will clean up your code and make it easier for you to make changes. The sooner you can learn how to use Objects, the easier your life will become, no matter the language! I leave it up to you to Google the best tutorials on Python objects. Finally, if you are confused on where Stopwatch class should go, best practice would be to create a file called "stopwatch.py" and paste the Stopwatch class in that file. Then in your main file that is running the code, import that stopwatch by calling from stopwatch import Stopwatch. As long as the stopwatch.py file is right next to your main file, it should be able to import it.
{ "domain": "codereview.stackexchange", "id": 34000, "tags": "python, timer, tkinter" }
Will blastn remove sequences from a search with low identity?
Question: I'm using this command (excuse the duplicate naming, I know it's bad form): blastn -query cm-seqs/combined_seqs.fna -db combined_seqs.fna -out cm-matched.txt -num_alignments 1 -outfmt 10 to take a set of fasta sequences to blast against another database I built with blast. My query file has 4,364,417 sequences. The file resulting from the above command spat out 4,362,639 results. The difference is about 1800. Is this due to poor alignments for some sequences that didn't pass any of blast's default parameters? I assumed the result would be 1 "best" match in the database for each query sequence. Answer: Yes, if a query sequence has no good alignment, blast will not return a result for it. Your command will indeed output only the best hit for each query sequence, but there is no reason to assume all of your sequences will have a match. Those that don't have a match won't be included in the output. This sounds perfectly normal unless you are blasting the same input file against a DB created from that file. Otherwise, you don't expect all of your sequences to necessarily have a match. Also note that since the default blastn settings are quite permissive (an e-value threshold of 10, for example), don't make the mistake of assuming that a blast hit means anything. The details will always depend on the specific search you are running (I have discussed this a little in my answer here), but in the vast majority of cases, any hit with an e-value > 1 can safely be ignored. In most cases, you are only interested in e-values well under 1, actually. So you probably already have all sorts of spurious results in there anyway. For more details, please edit your question and explain what the query and DB files are.
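If you want to find out which ~1800 queries got no hit, one way is to compare the query IDs against the first column of the -outfmt 10 (CSV) output. A minimal Python sketch (my own illustration; the function works on in-memory lists, and in practice the IDs would come from your FASTA headers and the CSV file):

```python
def missing_queries(query_ids, csv_lines):
    """Return query IDs that produced no blast hit.

    query_ids: iterable of sequence IDs from the query FASTA
    csv_lines: lines of blastn -outfmt 10 output (query ID is the first field)
    """
    hit_ids = {line.split(",")[0] for line in csv_lines if line.strip()}
    return [q for q in query_ids if q not in hit_ids]

# Tiny worked example with made-up IDs and made-up hit rows
queries = ["seq1", "seq2", "seq3"]
hits = ["seq1,subjA,99.0,100,0,0,1,100,1,100,1e-50,185",
        "seq3,subjB,97.5,80,2,0,1,80,5,84,1e-30,120"]
print(missing_queries(queries, hits))  # -> ['seq2']
```

The unmatched IDs are exactly the sequences blast silently dropped for lack of any alignment passing its thresholds.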
{ "domain": "bioinformatics.stackexchange", "id": 534, "tags": "blast" }
Are surface stress-energy tensors in a space-time junction conserved?
Question: Consider two space-times with respective metric $g^{\pm}_{\mu\nu}$ separated by a hypersurface junction. Assuming that the full metric given by $$g_{\mu\nu} = \Theta(\ell) g^{+}_{\mu\nu}+\Theta(-\ell) g^{-}_{\mu\nu}\tag{3.48}$$ is continuous at the junction i.e. at $\ell = 0$, $$[g_{\mu\nu}] = g^{+}_{\mu\nu}-g^{-}_{\mu\nu} = 0.$$ Using the Einstein equations, one can compute the stress-energy tensor due to this metric which is given by (Eric Poisson's A Relativist's Toolkit, Eq. (3.54)) $$T_{\mu\nu} = \Theta(\ell) T^{+}_{\mu\nu}+\Theta(-\ell) T^{-}_{\mu\nu}+ \delta(\ell)S_{\mu\nu},\tag{3.54}$$ where $S_{\mu\nu}$ is the surface stress-energy tensor of a thin shell of matter. I do not see how the above stress-energy tensor satisfies $\nabla_{\mu} T^{\mu\nu} = 0$. We are bound to get derivatives of the $\delta$-function if one computes $\nabla_{\mu} T^{\mu\nu}$. Of course, $S_{\mu\nu} = 0$ is basically a junction condition that allows us to avoid this surface layer, but in case $S_{\mu\nu}\neq 0$, its presence prevents the conservation of $T_{\mu\nu}$. So, how can we still interpret this as something physical (like it is done in Eric Poisson's A Relativist's Toolkit)? Answer: Yes, $$\nabla_{\mu} T^{\mu\nu}~\stackrel{m}{\approx}~0,$$ cf. e.g. my Phys.SE answer here. OP ponders if the covariant derivative $\nabla_{\mu} T^{\mu\nu}$ of the SEM tensor (3.54) could produce contributions proportional to the derivative of the Dirac delta distribution $\delta(\ell)$? Well, let's check. First of all, it is enough to consider the partial derivative $\partial_{\mu} T^{\mu\nu}$. We calculate $$ \partial_{\mu}\delta(\ell)~\stackrel{(3.46)}{=}~\varepsilon n_{\mu}\delta^{\prime}(\ell). $$ Next recall that the surface stress-energy tensor $$S^{\mu\nu}~=~S^{ab}e^{\mu}_ae^{\nu}_b \tag{3.55}$$ is tangent to the hypersurface $$S_{\mu\nu}n^{\nu}~=~0.\tag{above 3.55}$$ Hence, the contribution proportional to $\delta^{\prime}(\ell)$ must vanish, cf.
$$ e^{\mu}_an_{\mu}~=~0.\tag{below 3.7}$$ Alternatively, use that $$ e^{\mu}_a\partial_{\mu}\delta(\ell)~\stackrel{(3.7)}{=}~\frac{\partial \delta(\ell)}{\partial y^a}~=~0, $$ which is zero because $(y^1,y^2,y^3,\ell)$ constitute independent coordinates of spacetime sufficiently close to the hypersurface $\Sigma$. References: Eric Poisson, A Relativist's Toolkit, 2004; Subsections 3.4.2 + 3.7.4 + 3.7.5.
{ "domain": "physics.stackexchange", "id": 85365, "tags": "general-relativity, metric-tensor, curvature, boundary-conditions, stress-energy-momentum-tensor" }
Transfer Matrix for Ising model - Notation Issue
Question: I am having difficulties in understanding the "transfer matrix" in the paper Metastability in the two-dimensional Ising model. They consider a periodic $N \times \infty$ lattice with the energy $$ E = -J \sum_{nn} \sigma \sigma - H \sum \sigma $$ for spins $\sigma = \pm 1$, where the first summation occurs over nearest neighbours. Now they say, "<...> The associated $2^N \times 2^N$ symmetric transfer matrix $L$ is defined as follows. For two column configurations $\vert \mu \rangle = (\sigma_1, \cdots, \sigma_N)$ and $\vert \mu' \rangle = (\sigma_1', \cdots, \sigma_N')$ $$ \langle \mu \vert L \vert \mu' \rangle = \exp \bigg\{ \frac{\nu}{2} \sum_{i=1}^{N} \big( \sigma_i \sigma_{i+1} + \sigma_i' \sigma_{i+1}' \big) + \frac{h}{2} \sum_{i=1}^{N} (\sigma_i + \sigma_i') + \nu \sum_{i=1}^N \sigma_i \sigma_i' \bigg\} $$ where $\nu = J/T$ and $h = H/T$ <...>". I cannot understand this definition, let alone reconcile it with the one I am used to. First of all, the column configuration $\mu$ has $N$ components, while $L$ is a $2^N \times 2^N$ matrix, so I cannot make even basic sense of the left-hand side, what operation it represents. For, say, a one-dimensional lattice model with nearest-neighbours interaction $U_{ij} = U(\sigma_i, \sigma_j)$ the transfer matrix $V$ is a $2^N \times 2^N$ matrix and has components $$ V_{ij} = \exp \big\{ -\frac{U_{ij}}{kT} \big\} $$ What does the first notation mean? Hopefully I will then see how it relates to the latter definition, thanks Answer: Consider, for example, a periodic 1d lattice of $N=3$ spins.
There are eight possible "states," which I choose to number as below: $|--->$ $|--+>$ $|-+->$ $|-++>$ $|+-->$ $|+-+>$ $|++->$ $|+++>$ where, in my notation, |xyz> means the spin on site one is x, site two is y, and site three is z. For example, |+-+> means the spin is up on site 1, down on site 2, and up on site 3. For example, |--+> means the spin is down on site 1, down on site 2, and up on site 3. The energies of these states are: $-3J + 3H$ $J+H$ $J+H$ $J-H$ $J+H$ $J-H$ $J-H$ $-3J-3H$ I could write the energy as an 8x8 matrix: $$ \begin{bmatrix} -3J+3H & 0& 0& 0& 0& 0& 0& 0 \\0 & J+H & 0& 0& 0& 0& 0& 0 \\0 & 0 & J+H & 0& 0& 0& 0& 0 \\0 & 0 & 0 & J-H & 0& 0& 0& 0 \\0& 0& 0& 0& J+H& 0& 0& 0 \\0& 0& 0& 0& 0& J-H& 0& 0 \\0& 0& 0& 0& 0& 0& J-H& 0 \\0& 0& 0& 0& 0& 0& 0& -3J-3H \end{bmatrix} $$ In this case I would write the state $|--->$ as: $$ \begin{bmatrix} 1 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \end{bmatrix} $$ In this case I would write the state $|--+>$ as: $$ \begin{bmatrix} 0 \\1 \\0 \\0 \\0 \\0 \\0 \\0 \end{bmatrix} $$ And so on.
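The definition can also be made concrete by just building the matrix numerically. A small sketch (my own illustration, with ad-hoc values for $\nu$ and $h$) that enumerates the $2^N$ column configurations for $N=3$ and fills in $\langle\mu|L|\mu'\rangle$ from the formula quoted in the question:

```python
import itertools
import math

N = 3
nu, h = 0.4, 0.1  # example values of J/T and H/T

# All 2^N column configurations, each a tuple of +/-1 spins
configs = list(itertools.product([1, -1], repeat=N))

def L_element(mu, mup):
    # nu/2 * sum of intra-column nearest-neighbour terms (periodic in i)
    s1 = 0.5 * nu * sum(mu[i] * mu[(i + 1) % N] + mup[i] * mup[(i + 1) % N]
                        for i in range(N))
    # h/2 * sum of field terms
    s2 = 0.5 * h * sum(mu[i] + mup[i] for i in range(N))
    # nu * sum of inter-column coupling terms
    s3 = nu * sum(mu[i] * mup[i] for i in range(N))
    return math.exp(s1 + s2 + s3)

L = [[L_element(mu, mup) for mup in configs] for mu in configs]

# L is a 2^N x 2^N symmetric matrix, as the paper claims
assert all(abs(L[a][b] - L[b][a]) < 1e-12
           for a in range(2 ** N) for b in range(2 ** N))
```

Each row and column index labels one of the $2^N$ possible column states, which is how a length-$N$ spin configuration indexes a $2^N \times 2^N$ matrix: $|\mu\rangle$ is a basis vector of the $2^N$-dimensional state space, exactly as in the 8-state example above.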
{ "domain": "physics.stackexchange", "id": 88829, "tags": "ising-model" }
Check if the given string has duplicates
Question: Description: Given a string find if it contains duplicate characters. The string contains only ASCII characters. Code: import java.util.Arrays; class Main { /** * Given a string check if it has duplicate characters. * [] -> true * [f, o, o] -> false * [b, a, r] -> true * * @param {String} str * * @return false if string contains duplicate else true */ public static boolean isUnique(String str) { if (str.length() == 0) { return true; } boolean[] seen = new boolean[25]; for (int i = 0; i < str.length(); i++) { int index = Character.toLowerCase(str.charAt(i)) - 'a'; if (seen[index]) { return false; } else { seen[index] = true; } } // invariant return true; } public static void main(String[] args) { System.out.println(isUnique("")); // true System.out.println(isUnique("a")); // true System.out.println(isUnique("foo")); // false System.out.println(isUnique("bar")); // true System.out.println(isUnique("hello")); // false System.out.println(isUnique("Hello")); // false } } The above algorithm runs in O(n); the alternative solution is to sort the string and then check adjacent elements, but that would be O(n log n). I need to understand if I am using Java correctly; any other style-related comments are most welcome. PS: I would like to know if I need to include more tests to cover all cases. Answer: Documentation mismatch The documentation and the functionality do not fit. Your function is supposed to "check if [a string] has duplicates". From the documentation, I expect a function hasDuplicates that returns true if I have duplicate characters and false otherwise. However, you provide isUnique, which does the opposite: returns true if there are no duplicates and false if there are duplicates. The internal documentation fits, but I would say "checks whether it contains no duplicate characters" or "contains unique characters only". Possible bugs You checked your code only on alphabetic strings. What happens on "12345678"? You will index -48 due to '1' - 'a'.
That's a bug waiting to happen. Also, what happens if your string contains a 'z'? What happens if str is null? Maybe str shouldn't be null, but that should get mentioned in the documentation (see above). Also, you should mention that your function is case-insensitive. But why does "Aa" count as a duplicate to begin with? Both cases (non-alpha character and same letters in different cases) should get added to your tests. Algorithm Your seen algorithm is fine if you know exactly how many buckets you need. But you often don't know that. A Map<Character, Boolean> can help here. If you use a TreeMap you end up with the mentioned \$\mathcal O(n \log n)\$, but if you use a HashMap the asymptotic complexity is \$\mathcal O(n)\$ again. Alternatively, use a set. But that's asymptotic complexity. If you don't consider any non-alpha strings (see section above), then you can immediately return false if the string is longer than 26 characters due to the pigeonhole principle.
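The set-based approach suggested above, sketched in Python for brevity (the same shape works in Java with a HashSet), including the pigeonhole shortcut for a known alphabet size:

```python
def is_unique(s, alphabet_size=None):
    """True if s contains no duplicate characters (case-sensitive)."""
    if alphabet_size is not None and len(s) > alphabet_size:
        return False  # pigeonhole principle: more characters than the alphabet has
    seen = set()
    for ch in s:
        if ch in seen:
            return False
        seen.add(ch)
    return True

assert is_unique("") and is_unique("bar") and is_unique("Aa")
assert not is_unique("hello")
assert not is_unique("a" * 27, alphabet_size=26)  # shortcut fires without scanning
```

Note that a set handles any character, digits and 'z' included, so the out-of-bounds issues above disappear, and 'A' and 'a' are no longer conflated.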
{ "domain": "codereview.stackexchange", "id": 29962, "tags": "java, strings, interview-questions" }
Libraries for Voice Activity Detection (Not Speech Recognition)
Question: As a follow-up to my previous question, I was wondering if there are any speech detection libraries in existence. By speech detection I mean passing in an audio buffer and getting back an index of where speech starts and stops. So if I have 10 seconds of audio sampled at 44kHz, I would expect an array of numbers such as: 44000 88000 123000 190334 ... This would indicate for example that speech starts one second in and then finishes at the two second point, etc. What I'm not looking for is speech recognition, which writes out text from spoken words. This unfortunately is what I see a lot of when I google 'speech detection'. It would be great if the library was in C, C++ or even Objective-C as I'm writing an app for the iPhone. Thanks! Answer: In my answer to that question, I had mentioned that Voice Activity Detection is a standard feature for codecs like G.729 and others. You should look for reference encoders and decoders for algorithms that apply this. One such example is - http://www.voiceage.com/openinit_g729.php Another possible source is the Speex codec, which implements VAD. BTW: You should google "Voice Activity Detection" or "Talk Spurt" rather than "Speech Detection".
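If a full codec is overkill, the core idea behind simple VADs can be sketched in a few lines: frame the signal, threshold the per-frame energy, and report the sample indices where active regions start and stop. A minimal, assumption-laden sketch in Python (fixed threshold, no hangover smoothing; real VADs like the one in Speex are considerably more robust):

```python
def detect_activity(samples, frame_len=441, threshold=0.01):
    """Return [(start_index, stop_index), ...] of energetic regions.

    frame_len=441 is 10 ms at 44.1 kHz; threshold is an arbitrary
    mean-square energy cutoff chosen for this illustration.
    """
    active = []
    in_segment = False
    start = 0
    for i in range(0, len(samples), frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(x * x for x in frame) / len(frame)  # mean-square energy
        if energy > threshold and not in_segment:
            in_segment, start = True, i
        elif energy <= threshold and in_segment:
            in_segment = False
            active.append((start, i))
    if in_segment:
        active.append((start, len(samples)))
    return active

# Synthetic check: 1 s of silence, 1 s of "speech", 1 s of silence at 44.1 kHz
sig = [0.0] * 44100 + [0.5] * 44100 + [0.0] * 44100
print(detect_activity(sig))  # -> [(44100, 88200)]
```

Porting this shape to C or Objective-C for the iPhone is straightforward, since it is just a loop over the buffer.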
{ "domain": "dsp.stackexchange", "id": 2335, "tags": "audio, speech" }
What do the colors in false color images represent?
Question: Every kid who first looks into a telescope is shocked to see that everything's black and white. The pretty colors, like those in this picture of the Sleeping Beauty Galaxy (M64), are missing: The person running the telescope will explain to them that the color they see in pictures like those isn't real. They're called "false color images", and the colors usually represent light outside the visual portion of the electromagnetic spectrum. Often you see images where a red color is used for infrared light and purple for ultraviolet. Is this also correct for false color astronomy images? What colors are used for other parts of the spectrum? Is there a standard, or does it vary by the telescope the image was taken from or some other factor? Answer: Part of why you don't see colors in astronomical objects through a telescope is that your eye isn't sensitive to colors when what you are looking at is faint. Your eyes have two types of photoreceptors: rods and cones. Cones detect color, but rods are more sensitive. So, when seeing something faint, you mostly use your rods, and you don't get much color. Try looking at a color photograph in a dimly lit room. As Geoff Gaherty points out, if the objects were much brighter, you would indeed see them in color. However, they still wouldn't necessarily be the same colors you see in the images, because most images are indeed false color. What the false color means really depends on the data in question. What wavelengths an image represents depends on what filter was being used (if any) when the image was taken, and the sensitivity of the detector (eg CCD) being used. So, different images of the same object may look very different. For example, compare this image of the Lagoon Nebula (M8) to this one. Few astronomers use filter sets designed to match the human eye. It is more common for filter sets to be selected based on scientific considerations. 
General-purpose sets of filters in common use do not match the human eye: compare the transmission curves for the Johnson-Cousins UBVRI filters and the SDSS filters to the sensitivity of human cone cells. So, a set of images of an object from a given astronomical telescope may have images at several wavelengths, but these will probably not be exactly those that correspond to red, green, and blue to the human eye. Still, the easiest way for humans to visualise this data is to map these images to the red, green, and blue channels in an image, basically pretending that they are. In addition to simply mapping images through different filters to the RGB channels of an image, more complex approaches are sometimes used. See, for example, this paper (2004PASP..116..133L). So, ultimately, what the colors you see in a false color image actually mean depends both on what data happened to be used to make the image and the method of doing the mapping preferred by whoever constructed the image.
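The basic channel mapping is mechanically simple. A NumPy sketch (my own illustration with an invented, crude min-max stretch; real pipelines use the stretch functions described in the paper cited above):

```python
import numpy as np

def false_color(img_long, img_mid, img_short):
    """Map three single-filter images to the R, G, B channels of one image.

    The longest-wavelength filter goes to red and the shortest to blue --
    the usual chromatic-ordering convention, not a physical colour.
    """
    channels = []
    for img in (img_long, img_mid, img_short):
        img = img.astype(float)
        img = (img - img.min()) / (img.max() - img.min() + 1e-12)  # crude stretch
        channels.append(img)
    return np.stack(channels, axis=-1)

# Fake 4x4 "filter images" standing in for real telescope frames
rng = np.random.default_rng(0)
rgb = false_color(*(rng.random((4, 4)) for _ in range(3)))
print(rgb.shape)  # -> (4, 4, 3)
```

Two people given the same three filter images can still produce very different pictures, because the stretch and the channel assignment are both choices, which is exactly the point about the mapping being up to whoever constructs the image.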
{ "domain": "physics.stackexchange", "id": 2956, "tags": "astronomy, visible-light, astrophotography" }
Can Textures be added to simple objects from my own source?
Question: If I build a simple object such as a cylinder or a box I can add texture from the gazebo.material file e.g.: <script> <uri>file://media/materials/scripts/gazebo.material</uri> <name>Gazebo/CeilingTiled</name> </script> And this is matched with the block from the gazebo.material file: material Gazebo/CeilingTiled { receive_shadows on technique { pass { ambient 0.5 0.5 0.5 1.000000 texture_unit { texture ceiling_tiled.jpg } } } } ... and then I can get a nice textured and properly lit object in Gazebo. But this seems to be limited to just the materials in this particular file - about 20 of them. How can I specify my own .material file to map my own texture onto the object model? Originally posted by techy on Gazebo Answers with karma: 21 on 2013-02-22 Post score: 0 Answer: So there's two ways (that I know of) that you can do this: Within the model directory: create a materials/scripts and materials/textures directory in the directory of the corresponding model and put the texture into materials/textures and the corresponding *.material into materials/scripts ... have a look at your very own brick_box_3x1x3 for example. ;) If you want to specify materials that can be used by multiple models you can create a folder "media" that, again, contains "materials/scripts" and "materials/textures" as sub-directories and then add the directory that your "media" directory is part of to your GAZEBO_RESOURCE_PATH. So for example with the default setup you have a "/usr/local/share/gazebo-1.4/media" directory (compiled from source, debian package installation would be in /usr/share/...) and "/usr/local/share/gazebo-1.4" is in the GAZEBO_RESOURCE_PATH. To add your own materials you simply create a "<my_package>/media/materials/" directory with the desired scripts and textures and add "<my_package>" to the GAZEBO_RESOURCE_PATH. Originally posted by ThomasK with karma: 508 on 2013-02-22 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 3062, "tags": "gazebo-model" }
Eigenvalues of Lagrangian/action operator?
Question: Is it possible to make a prediction for the eigenvalues of the Lagrangian-/action-operator of a quantum system if I know the ones of the Hamiltonian, the position- and momentum-operator? Can I use the formula $$ L = \sum_a \frac{1}{2} ( \dot{q}_a \cdot p_a + p_a\cdot \dot{q}_a )- H $$ to find the eigenvalues, since the operators $H, p, \dot{q}$ do not commute with each other (and so they are not diagonalizable with the same basis)? So the eigenvalues of $L$ are not $T-V$. Which orthonormal basis can we choose for the calculation instead? I know that the Hamilton and Heisenberg equations also hold for this operator version (see Quantum kinematics and dynamics by J. Schwinger). So I can use $$ \dot{q}=\frac{\partial H}{\partial p}$$ or, a bit more generally, in terms of the Heisenberg equations: $$ \dot{q}(t) = {\frac {i}{\hbar }}[H,q(t)]+\left({\frac {\partial q}{\partial t}}\right)_{H}$$ to reformulate the Lagrangian, which then depends only on $q,p$ since $H(q,p)$: $$ L(q(t),p(t)) = \sum_a \frac{1}{2} \Big[{\frac {i}{\hbar }}[H,q_a(t)] \cdot p_a(t) + p_a(t) \cdot {\frac {i}{\hbar }}[H,q_a(t)]\Big]- H(q(t),p(t)) $$ But I also don't know (with this Lagrangian) which eigenvectors I have to choose. Since I'm studying Schwinger's quantum action principle I'm interested in these eigenvalues because they should be minimized by the condition $\delta \hat{S} = 0$ Answer: I will recast your problem, but, full disclosure, I will not attempt to answer open-ended unstated questions on Schwinger's action principle, also see Milton 2005, or the actual mainstream equivalent formulations of Dirac's seminal 1933 path integral paper which Feynman streamlined so magnificently that everyone in the identical question is admonishing you to not even ask. I will narrowly recast your question to a more systematically answerable one.
In the Heisenberg representation, ignoring red-herring explicitly time-dependent Schrödinger-picture operators for simplicity, $$ A(t)=e^{itH/\hbar} A e^{-itH/\hbar}\equiv U A U^\dagger. $$ Applying this to q(t) and p(t), and resolving the commutators for a conventional Hamiltonian $p^2/2 + V(q)$, completes your return calculation to the Lagrangian, $$ L= U (p^2/2 -V(q))U^\dagger ~. $$ Now, the key point that everybody is reminding everybody is that, in general, that parenthesis, and so L, does not commute with H, so it is not time-invariant, unlike H. So you may not commute the U on the left to annihilate its unitary inverse on the right, easily. The expression, in general, is a time-dependent mess that most would sensibly not deign to look at. But, if you must, look at its spectrum at t = 0, just the parenthesis. The (real!) eigenvalue spectrum of the Hermitian operator in the parenthesis, barring freak complications, is gotten by finding the spectrum of a bizarre inverted potential in any picture you like, e.g. the Schroedinger picture. As a formal wisecrack, imagine your original Hamiltonian potential is the inverted oscillator of Yuce, Kilic & Coruh, Phys Scr 74 (2006) 114, also available on the arXiv, that is $V=-\omega^2 q^2$; then your L will be identical to the plain oscillator H of sophomore physics! (Forget about the awful spectrum of your H, addressed in the refs provided.) Conclusion: the spectrum of this L is that of the plain oscillator, $\hbar \omega (n+1/2)$, being totally schematic and unashamedly cavalier about boundary conditions. This is all to dramatize that the spectral problem of L, in principle, is a freak Schroedinger problem that does not, and should not!, get anybody anyplace. (Which is the reason most people recoil from Schwinger's variational principle and map it to Dirac-Feynman.)
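Spelling the wisecrack out (same cavalier normalisation as above): with $H=p^2/2+V(q)$ and $V(q)=-\omega^2 q^2$, $$ L ~=~ U\Big(\frac{p^2}{2}-V(q)\Big)U^\dagger ~=~ U\Big(\frac{p^2}{2}+\omega^2 q^2\Big)U^\dagger , $$ and since unitary conjugation preserves the spectrum, $\mathrm{spec}(L)$ is that of a plain harmonic oscillator: schematically $\hbar\omega'(n+\tfrac{1}{2})$ for the suitably rescaled frequency $\omega'$.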
But, to dispel an apparent technical misconception of yours, the (real) spectrum of a Hermitian operator need not reflect simultaneous eigenstates of the component operators involved in the operator. (So, in Sophomore physics, the eigenstates of the oscillator hamiltonian are neither eigenstates of p nor q, of course.)
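The "formal wisecrack" above can be checked numerically. Below is a sketch (not from the answer; the grid size, box width, and units $\hbar=m=1$ are my own choices) that diagonalizes $-\tfrac{1}{2}\partial_q^2+\omega^2q^2$ by finite differences. Matching $\omega^2q^2=\tfrac{1}{2}\Omega^2q^2$ gives $\Omega=\sqrt{2}\,\omega$, so the exact levels are $\sqrt{2}\,\omega(n+\tfrac12)$: an ordinary oscillator spectrum, as claimed up to the schematic factor the answer is cavalier about.

```python
import numpy as np

# Finite-difference spectrum (hbar = m = 1, omega = 1) of the "freak"
# Schroedinger problem L = p^2/2 + omega^2 q^2.  Matching the quadratic
# coefficient to (1/2)*Omega^2 gives Omega = sqrt(2)*omega, so the
# levels should be sqrt(2)*omega*(n + 1/2).
omega = 1.0
N, half_width = 1200, 8.0
x = np.linspace(-half_width, half_width, N)
dx = x[1] - x[0]

diag = 1.0 / dx**2 + omega**2 * x**2        # kinetic diagonal + potential
off = -0.5 / dx**2 * np.ones(N - 1)          # kinetic off-diagonal
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

levels = np.linalg.eigvalsh(H)[:4]
exact = np.sqrt(2) * omega * (np.arange(4) + 0.5)
print(levels)   # close to [0.707, 2.121, 3.536, 4.950]
```
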
{ "domain": "physics.stackexchange", "id": 36485, "tags": "quantum-mechanics, lagrangian-formalism, operators, hamiltonian, eigenvalue" }
Multiple questions about how to implement practical resampling
Question: I am learning resampling theory, and for the time being I am specifically interested in downsampling. I have a textbook that is not a dsp textbook but has a section on resampling. The way they put it, resampling has three steps: Rebuild the analog signal, LPF it, then resample at the new rate. So the equivalent of: DAC, LPF, ADC. All of this happens in a single calculation. To reconstruct the signal, we multiply the sampled signal by an LPF with frequency response $H(\omega)$, with the cutoff at $\omega_s/2$. The filter h(t) is called the "interpolation filter." $$ x(t)=x_s(t) \ast h(t)=\sum_{n=-\infty}^{\infty}x_nh(t-t_n) $$ We can obtain new samples with: $$ \DeclareMathOperator{\sinc}{sinc} x_m=\sum_{n=-\infty}^{\infty}x_nh(t_m-t_n) $$ If we are downsampling, we want to filter at the bandwidth of our new sampling rate. $\omega_s$ is the new sampling rate. For an ideal LPF, $h(t)=\sinc(\frac{\omega_st}{2})$ and $x_m=\sum_{n=-\infty}^{\infty}x_n\sinc(\frac{\omega_s(t_m-t_n)}{2})$ therefore we are doing sinc interpolation. My book points out that this is impractical for two reasons: All of the above assumes we have an infinite number of samples. It is based on an ideal LPF which is not realizable. The book then goes on to analyze truncated sinc filters with a Hann window applied. Now I have a few questions: The FT of a pulse train is itself a pulse train. Therefore we treat our sampled signal as an infinite number of copies of the original frequency spectrum. However, if we have a limited number of samples, $h(t)=\sum_{n=0}^{N-1}\delta(t-nT_s)$ and $H(\omega)=\sum_{n=0}^{N-1}e^{-j\omega nT_s}$. If the limits of summation were infinite, this would be an infinite pulse train. When they are finite, I'm not so sure. I'm having trouble visualizing what this would look like. Is this just a pulse train from n=0 to n=N-1? If not, what does it look like and how does it affect $X_s(\omega)$?
Is it a reasonable approximation for a large number of samples to estimate $X_s(\omega)$ as an infinite number of copies of $X(\omega)$? And if so, why does my book list this as being an issue with the above resampling theory? For digital filters in general (such as a FIR low pass filter used as an interpolation filter), do they only operate in the range of $\frac{-\omega_s}{2}$ to $\frac{\omega_s}{2}$? Surely a "true" LPF (analog) would cut out the copies of the spectrum and leave us with a scaled version of $X(\omega)$. I'm imagining digital filters as operating in the above range, and therefore affecting all copies of the original spectrum as well. Very similar to the question above, couldn't an analog LPF be used as a DAC? Simply output the samples as high voltage impulse "spikes" through an analog LPF to filter out the copies and you are left with the original spectrum. The main reason I understand LPFs to be nonrealizable is that they are noncausal. I don't really understand why noncausal filters are not possible. For a real-time filter such as an RC low-pass this makes sense, but when the filter is applied in software after collecting all the samples, I see no reason a "previous" sample cannot be used to determine the weighting for a "future" sample. Isn't this a noncausal filter? Answer: We assume here that "down-sampling" means reducing the sample rate by an integer factor Q and that "up-sampling" is increasing the sample rate by an integer factor P. Changing the sample rate by a rational factor $r = P/Q$ is called sample rate conversion and typically implemented by up-sampling by P followed by down-sampling by Q. Down-sampling is quite simple. Assuming your original sample rate was $f_0$ your new sample rate after down-sampling by Q will be $f_q = f_0/Q$. To avoid aliasing, you need a low-pass filter with a cutoff frequency below the new Nyquist frequency i.e. 
$f_c < f_q/2$ Since a real-world low-pass filter will need a non-trivial transition band, the cutoff frequency is typically a good chunk (maybe 10%-20%) below the Nyquist frequency. For example, standard audio sample rates are 44.1kHz or 48 kHz, but anti-aliasing filters start cutting off at 20 kHz. So the process is simple: apply a lowpass filter, then throw away the samples you don't need. Step 2 is why FIR filters are popular for this task. Instead of throwing samples away, you simply don't calculate them in the first place. The way they put it, resampling has three steps: Rebuild the analog signal, LPF it, then resample at the new rate I think that's incorrect or at least misleading. You are not rebuilding an analog signal, you just apply an anti-aliasing filter and discard samples. Question 1: Sampling in time creates a periodically repeated signal in frequency (and vice versa). It doesn't matter if you have a finite or infinite number of input samples. The main consequence of a finite number of samples is aliasing. A finite signal has unlimited bandwidth so there is always some aliasing happening. For digital filters in general (such as an FIR low pass filter used as an interpolation filter), do they only operate in the range of $-\frac{\omega_s}{2}$ to $\frac{\omega_s}{2}$? No. They operate on the entire frequency range but since the filter is time discrete the entire frequency range is periodic. What happens in the interval $[-f_0/2,f_0/2]$ is exactly the same as in, for example, $[16.5f_0,17.5 f_0]$. Typically you only look at one period of the spectrum since that already tells you everything about the entire spectrum. Surely a "true" LPF (analog) would cut out the copies of the spectrum and leave us with a scaled version of $X(\omega)$ Not sure what you mean by "true" LPF. You can't build an LPF with an arbitrarily narrow transition range. Not in digital and sure as heck not in analog.
A DAC is a DAC, an LPF is an LPF: they do completely different things. Simply output the samples as high voltage impulse "spikes" through an analog LPF to filter out the copies and you are left with the original spectrum. That's impractical since voltage peaks would have to be infinitely high and narrow. Instead a real DAC will output a step curve which is a low-pass filtered version of the ideal "spike" curve and apply a small spectral correction in the pass-band. You can certainly follow this with an analog LPF to reduce the mirror spectra but that depends on the application. The main reason I understand LPFs to be nonrealizable is that they are noncausal. Let's be clear about the terminology. LPF means low pass filter and they are used all over the place. What is NOT realizable is a "perfect" LPF, i.e. one that is exactly 1 in the passband, exactly 0 in the stop band and has an infinitely small transition band. This can't be done in practice because the impulse response of such a filter is a $\sin(x)/x$ function which extends infinitely in time in both directions. It's not just non-causal, it's infinitely non-causal. Even if it were causal, you can't implement a filter with an infinite number of coefficients. I see no reason a "previous" sample cannot be used to determine the weighting for a "future" sample. Correct, but this only works if the number of previous samples required is finite. It also adds to the latency. The more samples you need, the longer you need to delay the output. That's a show stopper in many applications.
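The two-step recipe (filter, then discard) can be sketched with a hand-rolled windowed-sinc FIR, which is essentially the truncated-sinc-plus-Hann-window construction the book analyzes. The tap count, cutoff margin, and test frequencies below are my own illustrative choices:

```python
import numpy as np

# Downsample by Q: windowed-sinc anti-aliasing FIR, then keep every Q-th sample.
def downsample(x, Q, ntaps=101):
    fc = 0.5 / Q * 0.8                      # cutoff ~20% below new Nyquist (cycles/sample)
    n = np.arange(ntaps) - (ntaps - 1) / 2
    h = 2 * fc * np.sinc(2 * fc * n)        # ideal LPF impulse response, truncated
    h *= np.hanning(ntaps)                  # Hann window tames the truncation ripple
    h /= h.sum()                            # unity DC gain
    y = np.convolve(x, h, mode="same")
    return y[::Q]

fs, Q = 8000, 4
t = np.arange(fs) / fs
# 100 Hz survives; 1700 Hz sits above the new Nyquist (1000 Hz) and must be
# removed, or it would alias down to 300 Hz after decimation.
x = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 1700 * t)
y = downsample(x, Q)
spec = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), Q / fs)
peak = freqs[np.argmax(spec)]
print(peak)   # ~100 Hz: the would-be alias was filtered out
```
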
{ "domain": "dsp.stackexchange", "id": 11785, "tags": "lowpass-filter, interpolation, resampling" }
Is the fundamental mode the only mode produced in free vibration?
Question: Let's say I strike a 2D circular plate with some impulse. Is the fundamental mode the mode that's always being produced, or is it possible to sustain only higher modes such as a (1,1) mode etc.? Let's also say that the 2D circular plate is being damped; if we only observe the fundamental mode, is it because higher energy modes get damped out more quickly? Usually one is able to solve for the fundamental frequency using the flexural rigidity and density in a fourth order PDE, but do its solutions tell you anything about which mode is preferred? Answer: Of course you can excite any mode by hitting a drum. If you could only ever excite the fundamental, then every drum should sound exactly like a pure sine wave, which is very far from realistic. There are a few factors that control the distribution of energy in the harmonics. Where you hit the drum. A mode with a node at the point you hit the drum is sure to not be excited; a mode with an antinode will be maximally excited. More formally, you can compute this by taking the inner product between the mode's amplitude profile and the impulse. Decay of harmonics. Assuming standard linear damping, all modes decay exponentially, but higher harmonics decay much more quickly. How you hit the drum. Some mallets remain in contact with the drum for a significant time $T$, where $T$ is on the order of milliseconds. That means that harmonics with period $\leq T$ are suppressed.
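The inner-product rule is easy to see in one dimension, where the modes of an ideal string are $\sin(n\pi x/L)$: an impulse at a node of mode $n$ gives that mode zero weight. This is a sketch with my own numbers; a 2D circular plate works the same way, just with Bessel-function mode shapes.

```python
import numpy as np

# Mode n of a string of length L has shape sin(n*pi*x/L); an impulse at
# x0 excites mode n with weight proportional to sin(n*pi*x0/L).
# Striking the midpoint therefore silences every even mode (which has a
# node there) and maximizes the odd ones.
L, x0 = 1.0, 0.5
weights = np.array([np.sin(n * np.pi * x0 / L) for n in range(1, 7)])
print(np.round(weights, 3))   # alternating 1, 0, -1, 0, ... pattern
```
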
{ "domain": "physics.stackexchange", "id": 50266, "tags": "classical-mechanics, waves, acoustics, vibrations" }
Why do we still use pseudo forces?
Question: So I was reading about Newton's laws, and my textbook (Sears and Zemansky's University Physics) gave the classic examples of when we might be tempted to create an additional "centrifugal force", and later went on to say that since an object going around in a circle is actually accelerating, we do not, or rather cannot, include it in our free body diagrams (FBDs). They say that the term "centrifugal force" will no longer be used anywhere in the text and strongly advise the reader against using it too. Now I was swayed by the argument "since it is a non-inertial frame, we cannot avail ourselves of the privilege of using Newton's laws and hence cannot draw FBDs like our "common sense" tells us to." Everything was smooth sailing until I saw people all over the Internet use the concept to explain basic phenomena that I could actually explain without using it. This led me down a rabbit hole, and now I am stuck on this so-called "pseudo force". It's obviously not a force, so why depict it as one? Even the Wikipedia article on the topic says that the concept was used to explain events in general relativity, but it is no longer needed now. I also think it's "anti-physics" to analyse forces when in reality they don't even exist! I would really appreciate it if someone could explain whether we actually need them. If yes, can we not use other methods to analyse motion in a non-inertial frame? P.S.: As is probably clear, I am not a graduate student yet, so I could really do with an explanation that does not go into the nitty-gritty of general relativity. Also, sorry for such a long question. Answer: In practice, in Newtonian mechanics, pseudo-forces are used because they're simple and convenient. At the end of the day, they're only inertial terms that you're "disguising" as forces. You can choose to keep those inertial terms raw, or treat them as forces. It makes no difference mathematically, so it's a matter of opinion.
There is nothing "anti-physics" about them: they provide a mathematical model that correctly predicts the behavior of systems. Also, if you're standing in a bus that slows down, you will feel a force pushing you forward. Even though this force has a special status in mechanics, it's natural to treat it as a force instead of introducing new tools. If your book promotes an alternative way to handle non-inertial frames, keep in mind that: Other people from other backgrounds might not use this alternative way, so you'll have to adapt anyway. That sort of alternative way rarely makes things simpler. So ask yourself if you really find it pleasant to use. It's only in general relativity that discussing the nature of pseudo-forces really brings something interesting to the table.
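A small numerical sketch of the "inertial terms disguised as forces" point: integrate the motion of a force-free particle in a rotating frame using only the centrifugal and Coriolis pseudo-force terms, and check that this reproduces the lab-frame straight line rotated into the moving frame. All numbers below are arbitrary choices of mine.

```python
import numpy as np

# Free particle (no real forces) seen from a frame rotating at angular
# velocity omega about z.  In the rotating frame, its motion is explained
# entirely by the pseudo-forces: centrifugal (omega^2 r) and Coriolis
# (-2 Omega x v).  Integrating those must reproduce the inertial straight
# line, expressed in rotating coordinates.
omega = 1.3
r0, v0 = np.array([1.0, 0.0]), np.array([0.0, 0.5])

def rotate(p, angle):
    """Coordinates of lab-frame point p in a frame rotated by +angle."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([c * p[0] + s * p[1], -s * p[0] + c * p[1]])

# rotating-frame initial velocity: v' = v - Omega x r at t = 0
v = v0 - omega * np.array([-r0[1], r0[0]])
r = r0.copy()

def accel(r, v):                      # pseudo-forces only (per unit mass)
    centrifugal = omega**2 * r
    coriolis = 2 * omega * np.array([v[1], -v[0]])
    return centrifugal + coriolis

T, steps = 2.0, 2000                  # RK4 in the rotating frame
dt = T / steps
for _ in range(steps):
    k1r, k1v = v, accel(r, v)
    k2r, k2v = v + dt/2*k1v, accel(r + dt/2*k1r, v + dt/2*k1v)
    k3r, k3v = v + dt/2*k2v, accel(r + dt/2*k2r, v + dt/2*k2v)
    k4r, k4v = v + dt*k3v, accel(r + dt*k3r, v + dt*k3v)
    r = r + dt/6*(k1r + 2*k2r + 2*k3r + k4r)
    v = v + dt/6*(k1v + 2*k2v + 2*k3v + k4v)

inertial = r0 + v0 * T                 # straight line in the lab frame
expected = rotate(inertial, omega * T) # same point in rotating coordinates
print(r, expected)                     # should agree
```
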
{ "domain": "physics.stackexchange", "id": 94511, "tags": "newtonian-mechanics, reference-frames, inertial-frames, centripetal-force, centrifugal-force" }
Cauchy boundary conditions and Green's functions with Fourier transform
Question: Consider the wave equation \begin{align} (\partial_t^2-\nabla^2)\phi(t,x)=0, \end{align} with Cauchy boundary conditions $\phi(0,x)=f(x)$ and $\dot{\phi}(0,x)=g(x)$. Suppose we perturb the system with an external source $J=J(t,x)$. The solution to this is given by \begin{align} \phi(t,x)=\phi_0+\int_{-\infty}^\infty dt'dx'\,G(t,t';x,x')J(t',x'), \end{align} where $\phi_0$ is the homogeneous solution without the external source that fulfills the Cauchy boundary conditions, and $G(t,t',x,x')$ is the Green's function, compatible with the differential equation \begin{align} (\partial_t^2-\nabla^2)G(t,t';x,x')=\delta(t-t')\delta(x-x'). \end{align} A common approach to solve this is to Fourier expand the Green's function, i.e., to consider \begin{align} G(t,t';x,x')=\int_{-\infty}^\infty \frac{d\omega d^3k}{(2\pi)^4}\tilde{G}(\omega,\vec{k})e^{i\left[\vec{k}\cdot(\vec{x}-\vec{x}\,')-\omega(t-t')\right]}, \end{align} and replace it in the Green's function equation, where we can invert the differential operator in momentum and frequency space. Regarding the wave operator, we can find \begin{align} \tilde{G}(\omega,\vec{k})=\frac{1}{-\omega^2+k^2}, \end{align} where $k=\left|\vec{k}\right|$, and then if we are able to solve the integral \begin{align} G(t,t';x,x')=\int_{-\infty}^\infty \frac{d\omega d^3k}{(2\pi)^4}\frac{e^{i\left[\vec{k}\cdot(\vec{x}-\vec{x}\,')-\omega(t-t')\right]}}{k^2-\omega^2}, \end{align} we are "done"; we have obtained the Green's function, and then if we integrate it with the external source we can obtain the total solution of the system, in principle. However, regarding this method, how does the Green's function, or the total solution, "feel" the Cauchy boundary conditions in this example? Moreover, where can I find literature about this discussion? Answer: Your problem is not specific to d'Alembert's PDE, but can be traced back to ODEs, even the simple harmonic oscillator.
Say you want to solve: $$ \frac{d^2G}{dt^2}+G = \delta $$ Going to Fourier transform: $$ \tilde G(\omega)=\int dt\,e^{i\omega t}G(t) \\ G(t)=\int \frac{d\omega}{2\pi}e^{-i\omega t}\tilde G(\omega) \\ (-\omega^2+1)\tilde G = 1 $$ You want to divide by $1-\omega^2$. However, when $\omega=\pm1$ this is zero, so the "value" there is undefined. Using the theory of distributions, you can prove that $\tilde G$ is necessarily: $$ \tilde G = PV\frac{1}{1-\omega^2}+C_+\delta(\omega-1)+C_-\delta(\omega+1) $$ with $PV$ denoting the Cauchy principal value and $C_\pm$ being constants. The converse is easy to check, using the fact that: $$ f(t)\delta(t) = f(0)\delta(t) $$ Going back to time: $$ G = \frac{\sin |t|}{2} +C_+e^{-it}+C_-e^{it} $$ from which you see that the $C_\pm$ are specified by the initial conditions $G(t_0),\frac{dG}{dt}(t_0)$. Back to your case, the same situation happens when you divide by $k^2-\omega^2$. From: $$ (-\omega^2+k^2)\tilde G = 1 $$ you deduce: $$ \tilde G = PV\frac{1}{-\omega^2+k^2}+C_+(k)\delta(\omega-|k|)+C_-(k)\delta(\omega+|k|) $$ where now the $C_\pm$ are functions defined on $\mathbb{R}^3$. They can be determined by the Fourier transforms of $G,\partial_tG$ at $t_0$, which will give the unique solution to the Cauchy problem. In classical physics, you are often interested in the causal solutions, i.e. $G$ supported on positive times (in field theory, you are more interested in the Feynman propagator, which you obtain by Wick rotation or time ordering). It turns out, using Jordan's lemma and the residue theorem, that this amounts to taking the limit $\epsilon\to 0^+$ of: $$ \tilde G = \frac{1}{k^2-(\omega-i\epsilon)^2} $$ and more generally making the substitution $\omega\to \omega-i\epsilon$. Hope this helps. Answer to comment a) Actually the prescription retarded/advanced/Feynman will determine the $C_\pm$. Once again, there is an ambiguity in defining $\frac{1}{k^2-\omega^2}$ which these prescriptions help resolve.
If you want to interpret it as the Cauchy $PV$, then you need to add the appropriate $C_\pm$. However, in this case, it is easier to think directly in terms of contour integration since you are not interested in solving the Cauchy problem. Indeed, writing explicitly what the prescriptions mean for the $C_\pm$ is needlessly complicated. b) It depends. The problem arises when you divide by zero, in which case you need additional information to uniquely define your Green's function (boundary condition, causality ...). However, this is not always necessary. Take for example the operator $-\Delta+m^2$ with $m$ real in arbitrary dimensions. Then its Green's function is (in Fourier): $$ \tilde G = \frac{1}{k^2+m^2} $$ which makes sense no matter what, since the denominator never vanishes. Adding boundary conditions would overdetermine it. An important caveat is that when you are taking Fourier transforms, you are restricting your attention to tempered solutions. In general, you could have more solutions, thus requiring further restrictions. Take for example a modification of the previous simple case: $$ -\frac{d^2G}{dt^2}+G = \delta $$ By Fourier transform, you obtain the unique tempered Green's function $$ \tilde G = \frac{1}{1+\omega^2} \\ G = \frac{e^{-|t|}}{2} $$ However, if you are looking for distribution solutions in general, you could get for example: $$ G = \frac{e^{-|t|}}{2}+C_+e^t+C_-e^{-t} $$ with $C_\pm$ arbitrary constants, which can be resolved by Cauchy boundary conditions. Note that when the coefficients are not zero, $G$ is not tempered, so the usual Fourier transform is ill defined. In physics, you usually sweep such technical assumptions (temperedness ...) under the rug. To really guarantee uniqueness you'll need to do some rigorous math. Check out any course on distributions for more information.
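The 1D Green's function derived above, $G(t)=\sin|t|/2$, can be checked numerically: it must satisfy the homogeneous equation away from $t=0$ and carry a unit jump in $G'$ across $t=0$, which is what reproduces the delta. A sketch with my own step sizes:

```python
import numpy as np

# Check that G(t) = sin|t|/2 is a Green's function of d^2/dt^2 + 1:
# it satisfies G'' + G = 0 away from t = 0, and its first derivative
# jumps by exactly 1 across t = 0 (the weight of the delta source).
def G(t):
    return np.sin(np.abs(t)) / 2

t = np.linspace(0.1, 5, 500)          # stay away from the kink at t = 0
h = 1e-4
Gpp = (G(t + h) - 2 * G(t) + G(t - h)) / h**2
residual = np.max(np.abs(Gpp + G(t)))
print(residual)                        # ~0: homogeneous equation holds

jump = (G(h) - G(0)) / h - (G(0) - G(-h)) / h   # G'(0+) - G'(0-)
print(jump)                            # -> 1
```
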
{ "domain": "physics.stackexchange", "id": 92250, "tags": "fourier-transform, boundary-conditions, greens-functions" }
How to get time from equation of linear uniformly accelerated motion?
Question: I have had a problem solving this equation for time (from Linear Uniformly Accelerated Motion (LUAM)): $$ s= v_0t + \tfrac{1}{2}at^2 $$ I'd appreciate it if someone could provide a step-by-step solution. Answer: Simple. You see that it's a quadratic equation in $t$: $$ 0= -s + v_0t + \tfrac{1}{2}at^2 $$ Solve for $t$: $$ t=\frac{-v_{0}\pm \sqrt{v_{0}^2-4(\frac{1}{2}a)(-s)}}{a} $$ The fundamental theorem of algebra tells you that you should get exactly two solutions to the equation. Choose the $t$ that is most likely to make sense.
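As a sanity check, the formula can be evaluated numerically. The numbers $s=10$, $v_0=2$, $a=3$ are my own example, and this assumes $a\neq0$ and a non-negative discriminant:

```python
import math

# Solve s = v0*t + (1/2)*a*t^2 for t and keep the physically sensible root.
def time_to_cover(s, v0, a):
    disc = v0**2 + 2 * a * s          # b^2 - 4ac with a_quad = a/2, c = -s
    t1 = (-v0 + math.sqrt(disc)) / a
    t2 = (-v0 - math.sqrt(disc)) / a
    return max(t1, t2)                # the non-negative solution for s, a > 0

t = time_to_cover(s=10, v0=2, a=3)
print(t)                              # 2.0: check 2*2 + 0.5*3*2**2 = 10
```
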
{ "domain": "physics.stackexchange", "id": 14092, "tags": "homework-and-exercises, kinematics, time, acceleration" }
Which among phenol and 1,2-dihydroxybenzene has the higher boiling point?
Question: My attempt Based on symmetry: I think that, looking at symmetry, phenol has a higher boiling point than 1,2-dihydroxybenzene because the two -OH groups projecting out from the benzene ring of 1,2-dihydroxybenzene will not allow close packing of them, which results in a lower boiling point. Based on hydrogen bonding: In this sense phenol would have a lower boiling point because 1,2-dihydroxybenzene would have more inter-molecular hydrogen bonding than phenol. How do I know which factor wins? Answer: Your structures (grabbed from Wikimedia Commons) Phenol: Catechol (1,2-dihydroxybenzene) The three main influences on boiling point are: Shape - molecules with a more spherical shape have lower boiling points than molecules that are more cylindrical, all else as equal as possible. Spheres have the lowest surface area to volume ratio of any of the solids. This is where symmetry comes in. More symmetric molecules are more spherical. The shape/symmetry argument does not work here since the two compounds are not isomers. The issue of close packing is relevant to the melting points of solids, since in a solid, most particles do not have translational motion - they are locked in a crystal lattice. In a liquid, such random translational motion is common. In the liquid state, molecules of most compounds are so disorderly arranged that it does not matter how well they pack as a solid. In your case, both molecules are rather like flat disks with a little bit of polarity sticking off one side. Their shapes are not so different as far as surface area to volume goes. Boiling point will be dominated by mass and polarity. Molecular mass - bigger molecules require more energy to vaporize because they have more mass, and $KE=\frac{1}{2}mv^2$, all else as equal as possible. Mass difference applies in your case because the two molecules are similar enough in shape and polarity, except one has an extra oxygen in its formula.
Polarity - polarity controls the types and number of intermolecular forces, including hydrogen bonding. The more and stronger intermolecular forces between two molecules, the higher the boiling point, all else as equal as possible. Since catechol has two hydroxy groups and phenol has one, you have your strong argument for increased intermolecular forces. The other easy thing to do is use the internet to look up the boiling points, and then explain the difference using the three factors. Once you know what the boiling points are, it is easy to choose the structural features that support the difference. Phenol - $181.7 \ ^\circ\text{C}$ Catechol - $245.5\ ^\circ\text{C}$
{ "domain": "chemistry.stackexchange", "id": 656, "tags": "organic-chemistry, boiling-point" }
Potassium permanganate (VII) + glucose + sodium hydroxide =?
Question: I have seen this experiment many times but still can't explain why it happens. The experiment: first, prepare two solutions. The first one is made by dissolving potassium permanganate crystals in distilled water to form a potassium permanganate(VII) solution, while the second one is made by putting sodium hydroxide and glucose (sugar) into distilled water. Secondly, I pour the second solution into the first one. My observation is that the purple solution will slowly turn blue, then slowly turn green and lastly turn yellowish orange. So, what really happens in the solution? Answer: This is a redox reaction in which the permanganate ion is reduced and the glucose is oxidised. Potassium permanganate is usually used in acid solution and under these conditions is a very powerful oxidising agent in which the manganese is reduced from the oxidation state of +7 to +2 with a colour change from purple to colourless (actually extremely pale pink). In alkaline solution manganese is only reduced from +7 to +6, changing colour from purple to green. (The blue colour you mention is due to a mixture of purple and green). The green manganate(VI) ion is unstable and slowly disproportionates to manganate(VII) (purple) and manganese(IV) oxide, which is brown. The manganate(VII) goes on to react as before until the brown manganese(IV) oxide is all that remains. In alkaline solution this tends to form a colloidal suspension which (if fairly dilute) can appear orange. The oxidation of the glucose forms no coloured products.
{ "domain": "chemistry.stackexchange", "id": 2233, "tags": "redox" }
Will current be induced in the conducting loop in this scenario?
Question: The question is simple. We know the induced emf in a conducting loop due to a changing flux is given by $$ E = -\frac{d\Phi}{dt} $$ My question is: if the flux is changing only in a small part of the loop, will an emf still be induced in the loop? Example: Answer: In Faraday's equation: $$ E = -\frac{d\Phi}{dt} \tag{1} $$ the flux $\Phi$ is the total magnetic flux passing through the loop. That is, there is some magnetic field $B$ passing through the loop and we integrate this field across the area of the loop to get the flux $\Phi$. The magnetic field through the loop $B$ does not have to be constant across the loop. Indeed it can vary across the loop in any fashion you want and that makes no difference. All that matters in equation (1) is the total flux $\Phi$ through the loop and not how that flux is distributed across the loop. So the answer is that yes, in the diagram you have drawn, an EMF will be induced even when the field $B$ is non-zero only in part of the loop. As long as the total flux $\Phi$ is changing with time an EMF will be induced.
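A quick numerical illustration of the point: let the field thread only a small patch of the loop's area and be zero elsewhere; the EMF still equals $-d\Phi/dt$ computed from the total flux. All parameter values below are arbitrary choices of mine.

```python
import numpy as np

# B(t) = B0*sin(w*t) threads only a small patch of area A_patch inside a
# larger loop (B is zero over the rest of the loop).  Faraday's law cares
# only about the total flux, so the EMF is -d(Phi)/dt = -A_patch*B0*w*cos(w*t).
B0, w, A_patch = 0.2, 50.0, 1e-3       # tesla, rad/s, m^2

def flux(t):
    return A_patch * B0 * np.sin(w * t)  # total flux: field is zero elsewhere

t, h = 0.01, 1e-7
emf_numeric = -(flux(t + h) - flux(t - h)) / (2 * h)   # -dPhi/dt numerically
emf_exact = -A_patch * B0 * w * np.cos(w * t)
print(emf_numeric, emf_exact)            # agree
```
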
{ "domain": "physics.stackexchange", "id": 71505, "tags": "electromagnetism, electromagnetic-induction" }
[roscpp] unsubscribe within a callback and resubscribe in while(ros::ok)
Question: Hi there, as someone new to ros and c++, I have a question to ask about unsubscribing and resubscribing. I am working on a multi-quadrotor path planning project and need to define a squad leader every time the team splits into new squads. In that case, the squad member will subscribe to its leader for further commanding. But I am having trouble with how to unsubscribe from the old leader's command topic and resubscribe to a new leader's. Below is the part of the code that does this little function: ros::Subscriber splitcmd_sub; // get split command from the leader and determine leader(s) for squad(s) void getsplitcmd(const geometry_msgs::Point::ConstPtr& msg) { //...some code to find out whom to follow squad_leader_no = ...; } int main(int argc, char **argv) { ros::init(argc, argv, "mycontrol_4"); ros::NodeHandle n; ros::Rate cycle(8); while(ros::ok()){ // ...some other code here if(squad_leader_no == 1){splitcmd_sub = n.subscribe("/uav1/split_cmd", 1000, getsplitcmd);} if(squad_leader_no == 2){splitcmd_sub = n.subscribe("/uav2/split_cmd", 1000, getsplitcmd);} if(squad_leader_no == 3){splitcmd_sub = n.subscribe("/uav3/split_cmd", 1000, getsplitcmd);} if(squad_leader_no == 4){splitcmd_sub = n.subscribe("/uav4/split_cmd", 1000, getsplitcmd);} // ...some code here ros::spinOnce(); cycle.sleep(); } return 0; } After numerous tests, I got the subscriber to resubscribe only once. At other times, it just seems not to work at all (no disconnection and reconnection). Also, I have tried splitcmd_sub::shutdown() within the callback, but it just did not work. I have searched and read a lot of tutorials and questions by other people online, and it seems that I will have to use boost::bind if I want to pass parameters from the callback to while() in main every time it receives new messages. Could anybody tell me what to do if I want to unsubscribe from a topic and resubscribe to another one at the correct times, please? Thanks a lot!
Originally posted by PJ on ROS Answers with karma: 30 on 2014-08-14 Post score: 0 Original comments Comment by PJ on 2014-08-14: to be more clear, the publisher split_cmd has been defined and publishing in the same .cpp file Comment by bvbdort on 2014-08-14: try using if-else to ensure in one loop only one call back and splitcmd_sub::shutdown() inside callback. Comment by PJ on 2014-08-14: I have tried those, but it did not work Comment by PJ on 2014-08-15: It was my own mistake that caused the problem, and sorry about that! Anyway, thanks a lot for your time! Answer: Thanks a lot for your time, t.pimentel!!! I have found where I went wrong, and it was a careless mistake. The comparison I made between squad_leader_no and an integer value was in a different format. Sorry about wasting your time. :/ Originally posted by PJ with karma: 30 on 2014-08-15 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 19064, "tags": "ros" }
How does a Y connection in a 3 phase motor create 230V without a neutral/ground
Question: I know that a 230/400V 3-phase motor requires 230V on each coil. No more and no less. So on a 3*230V grid it would be wired in Delta, on a 3*400V it would be wired in Y. I went to evening school to learn about this and my course on the subject has several diagrams on this, most resembling these 2: We also put this in practice and wired the motors and tested errors etc. But thinking about it now I noticed that we never grounded the center point of these connections. How can a Y-config at 3*400V produce 230V on the coil? Is there a mathematical way of calculating this? It's confusing since there's 3 voltages that are all averaged at 400V but measured at any given moment they're 3 very different voltages connected to each other. I have however also seen images online of delta/Y connections that do ground the center point: And here it totally makes sense that the coil gets 230V because we know that the L-N voltage is the L-L voltage divided by sqrt(3). I feel like I'm missing something here. Answer: Your understanding so far is correct. If you draw your phasor diagrams to scale you can solve this from basic trigonometry. Figure 1. Phasor diagram for delta and star connections You should, when you have finished, see that the phase to phase voltages are $ \sqrt 3 $ times the phase to star voltages. But thinking about it now I noticed that we never grounded the center point of these connections. How can a Y-config at 3*400V produce 230V on the coil? Now realise that without a neutral connection the star / wye point will only be in the centre if all three phases are balanced. If one phase is more highly loaded then the star point will be pulled towards that phase. The reason that we have a neutral star point at all is that the currents into this point all sum to zero at every instant. Is there a mathematical way of calculating this?
It's confusing since there's 3 voltages that are all averaged at 400V but measured at any given moment they're 3 very different voltages connected to each other. I tend to think graphically and my illustration below may help. Figure 2. (1) The red phase current is the geometric sum of the black and blue. (2) When black and blue are equal and opposite the red must be zero. (3) When black falls to zero the other two must sum to zero. I hope that helps.
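The phasor arithmetic can be checked directly with complex numbers: three 230 V phasors 120° apart give a line-to-line voltage of $230\sqrt3\approx398$ V, and a balanced set sums to zero, which is why the unloaded star point sits at the center even without a neutral.

```python
import cmath

# Star connection with 230 V per phase: the line-to-line voltage is the
# difference of two phasors 120 degrees apart, which is sqrt(3) times larger.
Vphase = 230.0
Va = cmath.rect(Vphase, 0)
Vb = cmath.rect(Vphase, -2 * cmath.pi / 3)
Vc = cmath.rect(Vphase, 2 * cmath.pi / 3)

print(abs(Va - Vb))        # ~398.4 V = 230*sqrt(3)
print(abs(Va + Vb + Vc))   # ~0: balanced phasors sum to zero at the star point
```
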
{ "domain": "engineering.stackexchange", "id": 1669, "tags": "electrical-engineering, motors" }
AC current and voltage in an Inductor
Question: This is the simulation of an RL circuit with an AC source. The AC source is connected to the circuit at the moment when the source voltage has zero phase angle, with a 5 volt peak. It can be seen from the simulation that the peak voltage of the first half cycle (which is the transient, I know) is less than the peak voltage of the negative half cycle and the rest of the cycles. I want to know whether the fact that the peak voltage of the positive half cycle is less than the peak voltage of the rest of the cycles is due to a simulation error, or whether there is another reason. Also, when the value of the resistor is increased, the transient current and voltage die off more quickly. Why does increasing the resistance make the transient die off more quickly? In fact, why does resistance make the transients die off at all? I would really appreciate detailed answers to these questions. Answer: First, the simulation tells the truth. The transient behavior at the beginning is not an error. As @BobD already suggested, do not begin with looking at the simulation result. Instead, you should choose the mathematical way to get an understanding of where this result comes from. I assume you already know that the voltage across a resistor $R$ is $V_R(t)=RI(t)$ and the voltage across an inductor $L$ is $V_L(t)=L\frac{dI(t)}{dt}$. So for your electric circuit you have the differential equation for the current $I(t)$ $$V_\text{peak}\sin(\omega t)=RI(t)+L\frac{dI(t)}{dt}$$ together with the start condition $I(0)=0$. I will not solve this differential equation for you here. You should work through the math by yourself. Then you will find that the solution $I(t)$ has two components: a transient component proportional to $e^{-Rt/L}$ (i.e. it dies off after $t\gg\frac{L}{R}$), and an oscillating component proportional to $\cos(\omega t)$ and $\sin(\omega t)$. Finally, from the current $I(t)$ you can easily calculate the voltages $V_L(t)$ and $V_R(t)$.
You will find that they, too, have transient and oscillating components.
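Both observations can be checked by integrating the differential equation above numerically — a minimal sketch, where the component values R = 1 Ω, L = 10 mH and f = 50 Hz are arbitrary illustrative choices (only the 5 V peak comes from the question):

```python
import math

def simulate_rl(V_peak=5.0, R=1.0, L=0.01, f=50.0, t_end=0.2, dt=1e-6):
    """Forward-Euler integration of L dI/dt + R I = V_peak sin(w t), I(0) = 0."""
    w = 2.0 * math.pi * f
    I, t, history = 0.0, 0.0, []
    for _ in range(int(t_end / dt)):
        I += dt * (V_peak * math.sin(w * t) - R * I) / L
        t += dt
        history.append((t, I))
    return history

hist = simulate_rl()

# Phasor analysis predicts the steady-state current amplitude V_peak/|R + jwL|:
w = 2.0 * math.pi * 50.0
I_ss = 5.0 / math.hypot(1.0, w * 0.01)

# After ~15 time constants (L/R = 10 ms) the e^{-Rt/L} transient is gone and
# the cycle peaks settle to the phasor prediction.
late_peak = max(abs(I) for t, I in hist if t > 0.15)
```

Since the transient decays as $e^{-Rt/L}$, increasing $R$ shrinks the time constant $L/R$, which is exactly why a larger resistor makes the transient die off faster.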
{ "domain": "physics.stackexchange", "id": 67175, "tags": "electromagnetism, electric-circuits, inductance" }
Is $g(\nabla_{e_0}X,)$ equal to $\nabla_{e_0}g(X,)$?
Question: Let $g$ be a pseudo-Riemannian metric, and let $e_0$ and $X$ be vector fields with $g(e_0,e_0)=-1$. From the compatibility condition of the metric, for another vector field $Y$ we have $$\nabla_{e_0}g(X,Y)= g(\nabla_{e_0}X,Y)+g(X,\nabla_{e_0}Y)$$ Now put $e_0=e^{\mu}_0 \frac{\partial}{\partial x^\mu}$, and since $g(X,)=X^ag_{ab}dx^b$, \begin{multline} \nabla_{e_0}g(X,)= \nabla_{e_0}\left(X^ag_{ab}dx^b\right)=e^{\mu}_0\nabla_\mu\left(X^ag_{ab}dx^b\right)=e^{\mu}_0 \left[ \frac{\partial}{\partial x^\mu}\left(X^ag_{ab}\right)dx^b+X^ag_{ab}\nabla_\mu dx^b\ \right] = e^{\mu}_0 \left[ \frac{\partial}{\partial x^\mu}\left(X^ag_{ab}\right)-X^ag_{ac} \Gamma^c_{\mu b } \ \right]dx^b \end{multline} Now $$\nabla_{e_0} X=e^{\mu}_0 \left[ \frac{\partial}{\partial x^\mu}X^c+X^a \Gamma^c_{\mu a } \ \right]\frac{\partial}{\partial x^c}$$ so $$g(\nabla_{e_0}X,)= e^{\mu}_0 \left[g_{c b} \frac{\partial}{\partial x^\mu}X^c+g_{c b}X^a \Gamma^c_{\mu a } \ \right]dx^b$$ Now in the book The Many Faces of Maxwell, Dirac and Einstein Equations they claim that $$\nabla_{e_0}g(X,)= g(\nabla_{e_0}X,)$$ Why is the last expression true? Answer: This follows immediately from metricity. In local coordinates, $$ \nabla_e (g_{ab} X^{a})= e^c \nabla_{c}(g_{ab} X^{a}) = g_{ab} e^c \nabla_c X^a = g_{ab} \nabla_e X^a\ . $$ In coordinate-free notation you can see that $\nabla_{e}$ acting on nothing is still zero, so the $g(X,\nabla_{e})$ term vanishes. Response to edit: this doesn't change the answer posted above - metricity, $\nabla_{a} g_{bc} =0$, immediately implies what I've written above.
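Spelling out the answer's claim with the two coordinate expressions derived in the question: expanding $\frac{\partial}{\partial x^\mu}\left(X^ag_{ab}\right)=g_{ab}\frac{\partial X^a}{\partial x^\mu}+X^a\frac{\partial g_{ab}}{\partial x^\mu}$ and subtracting the second expression from the first gives $$\nabla_{e_0}g(X,)-g(\nabla_{e_0}X,)= e^{\mu}_0\, X^a\left[\frac{\partial g_{ab}}{\partial x^\mu}-\Gamma^c_{\mu a}g_{cb}-\Gamma^c_{\mu b}g_{ac}\right]dx^b = e^{\mu}_0\, X^a\left(\nabla_\mu g_{ab}\right)dx^b,$$ which vanishes precisely because of metricity, $\nabla_\mu g_{ab}=0$.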
{ "domain": "physics.stackexchange", "id": 78196, "tags": "general-relativity, differential-geometry" }
If a flow is unsteady in the Eulerian description, is it also unsteady in the Lagrangian description?
Question: Is the flow steady/unsteady in both of the approaches, or how does that relation go? Answer: No. Remember that a Lagrangian description tracks the motion of a specific particle in time, while the Eulerian description tracks the motion that happens at a specific point in space. Think about a bucket with a hole in the bottom that is being fed with water on top and leaking water at the bottom. At some point the water level in the bucket will stop changing (steady state), but all the particles of water will still be moving from the entrance of the bucket to the exit of the bucket. The specific spatial region of the bucket will be unchanging with time (steady in an Eulerian sense), but the water particles are still crossing the boundaries of the bucket (unsteady in a Lagrangian sense). You can see this easily from the definition of the material derivative: $$\frac{df(x,t)}{dt}_{X=constant} = \frac{\partial{f(x,t)}}{\partial{t}}+ \frac{\partial{x_{i}(X,t)}}{\partial{t}} \frac{\partial{f(x,t)}}{\partial{x_i}} $$ $X$ (capital x) represents a specific particle; $x$ (lower case x) represents a specific point in space, and $f$ is a function. You can see that you can have $\frac{\partial{f(x,t)}}{\partial{t}}=0$, that is, steady Eulerian flow, with $\frac{\partial{x_{i}(X,t)}}{\partial{t}} >0$, that is, "Lagrangian unsteady flow" (the correct term is simply that the individual particles of fluid are moving). In general you can only have steady Lagrangian motion if $\frac{\partial{x_{i}(X,t)}}{\partial{t}} =0$, which means that no particles are moving and the system is static as a whole. EDIT: notice that the sense of "static" I employed here is with respect to motion, not overall change in time. You can still change some property of a specific region without motion happening. For instance, take a fluid, pour it into a bottle until it is full, and then start heating the bottle. 
The temperature of the fluid inside the bottle will change, but no motion is happening, so $\frac{\partial{f(x,t)}}{\partial{t}} >0$ but $\frac{\partial{x_{i}(X,t)}}{\partial{t}} =0$.
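The bucket example can be checked numerically with a steady one-dimensional field — a sketch where the field $f(x)=\sin x$ and the particle velocity $c=2$ are arbitrary illustrative choices:

```python
import math

c = 2.0                           # constant particle velocity (illustrative)
f = lambda x, t: math.sin(x)      # steady Eulerian field: no explicit t dependence

def eulerian_rate(x, t, h=1e-6):
    """Central-difference estimate of df/dt at a fixed point in space."""
    return (f(x, t + h) - f(x, t - h)) / (2 * h)

def material_rate(X, t, h=1e-6):
    """Central-difference estimate of df/dt following the particle x(t) = X + c t."""
    return (f(X + c * (t + h), t + h) - f(X + c * (t - h), t - h)) / (2 * h)
```

At the point the particle occupies at $t=1$, the Eulerian rate vanishes (steady field), yet the rate seen by the moving particle is $c\,f'(x)=c\cos(X+ct)\neq 0$ — steady in the Eulerian sense, unsteady in the Lagrangian sense.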
{ "domain": "physics.stackexchange", "id": 85998, "tags": "fluid-dynamics, coordinate-systems, flow" }
Does The Big Bang Require An Infinitesimal Point, Or Is Another Shape Possible?
Question: Einstein's spacetime has four dimensions. If the size of one of these dimensions is zero, then the four-dimensional 'volume' - or whatever the analogue of 3D volume is called in 4D - would be zero. Is that enough to explain the observations that led to the Big Bang theory, rather than an infinitesimal 'point'? Answer: It is a common misconception in science popularization that "the universe was contracted to a point at the big bang". It is ok to think about it this way to get a picture, but it's not really what the math tells us was happening in the beginning of the universe. Without going into too many details, the universe in Einstein's theory of general relativity is described by two things: a manifold and a metric tensor. The manifold You can think of the manifold as every point in spacetime; it looks like a 4-dimensional sheet that can bend, stretch, etc. For example, if we actually lived in a 2-dimensional space + 1 dimension of time, the manifold would just be an infinitely big cube! Each slice of the cube would be all of space, at different times. The metric tensor The metric tensor is a grid over that manifold: if you give me any two points in the manifold, I can use the metric tensor to figure out the physical distance between the two points. It's basically a ruler for each point in space. For example, if Alice is standing at some point in space, and she sees Bob at some other point in space, she can use the metric tensor to calculate the distance between them and conclude that they are, say, 5 meters apart. If they don't move and she repeats the calculation at some later time, she might find that they are actually 6 meters apart. The notion of an expanding universe has to do with this "grid" expanding with time, not with the manifold itself expanding with time, since neither Alice nor Bob actually moved from their respective points in spacetime; it's just that the physical distance between them got bigger. So what's the big bang then? 
The big bang is this grid collapsing to zero. In other words, it is a point in time when the grid breaks down and tells you that the distance between any two points is zero. That doesn't mean that the universe is a point, since I can still speak of different points in the manifold; it's just that their distance goes to zero. An immediate consequence of this is that the big bang actually happened at every point in space, not in just one tiny point. Quantum disclaimer Bear in mind that the very fact that our best physical theories tell us the grid broke down, with every point infinitely close to every other point, means that our theories are probably wrong there! At very small distances we should be using quantum mechanics, and the theory of general relativity is not compatible with quantum physics, so we need a full quantum description of general relativity to know what really happened at the big bang! I hope this helps!
{ "domain": "physics.stackexchange", "id": 96458, "tags": "spacetime, space-expansion, big-bang, singularities" }
Time complexity of optimal algorithm for solving this problem
Question: Given an $N\times N$ matrix $M$ whose elements are integers, find the largest element that occurs in every row of the matrix. I tried using a hash table, as follows: the idea is to use a hash table (call it $ht_1$) to keep track of the number of occurrences of each element. To handle the case where an element repeats within the same row, we use another hash table (call it $ht_2$). Before incrementing a count in $ht_1$, we check whether this element has already occurred in the current row; only if it has not do we increment $ht_1$. Once the whole matrix has been processed as just described, make one more pass over it, this time keeping track of the maximum element that has exactly $N$ occurrences according to $ht_1$. So overall this runs in $O(N^2)$ expected time. But because of the use of hash tables, the worst-case time complexity is still $O(N^4)$. Can I do better than this? That is, is there any way to solve it in $O(N^2)$ worst-case time? Note: there is no other constraint on this. Answer: $O(n^2 \log n)$ is easy: create a sorted array of the items in the first row. Look up the items of the second row in this array using binary search, and remove the items that were not present. Same with the third row, etc. The largest remaining item is the largest one present in every row. The main effort is $n^2$ lookups in an array of size $n$. Using a hash table probably makes it $O(n^2)$, especially if you make the hash table, say, size $3n$ (which adds only a constant factor). Not guaranteed, but likely faster than the $O(n^2 \log n)$ array approach.
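The hash-table variant mentioned in the answer can be sketched in a few lines of Python using sets (expected $O(N^2)$ overall, since every matrix entry is hashed a constant number of times):

```python
def largest_in_every_row(matrix):
    """Return the largest value that appears in every row, or None if none does."""
    common = set(matrix[0])        # candidate values after the first row
    for row in matrix[1:]:
        common &= set(row)         # keep only values also present in this row
        if not common:
            return None            # early exit: no common value can remain
    return max(common)
```

Intersecting row by row mirrors the sorted-array approach, with hashing in place of binary search.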
{ "domain": "cs.stackexchange", "id": 20597, "tags": "algorithms" }
Short text-based adventure game
Question: I'm totally new to coding and tried myself on a short text based adventure game. I got the feeling that the functions are the same ones all the time and the whole game is a bit monotonous... I'd be very thankful if anybody has some improvements or additional ideas! import random import time def nameFunction(): #hello what's your name? name = input("Hello! What's your name, adventurer?\n") time.sleep(2) print ("Hi " + name + "...!") time.sleep(3) def storyFunction(): #the story until hard part #half story is told print ("A few miles away,") time.sleep(3) print ("behind a huge wall out of undestroyable iron and concrete,") time.sleep(4) print ("is the most dangerous and frightening place you'll ever see.") time.sleep(5) print ("Once, many, many years ago the town had the euphonious name Therondia.") time.sleep(5) print ("The people living there where called 'the happy people', because of the prevailing satisfied and sunshiny atmosphere.") time.sleep(5) print ("Inhabitants of the sorrounding areas, envied people living in this town for the friendliness \nbetween the citizens and the completely safe neighborhoods, nothing bad had ever happened in Therondia.") time.sleep(6) print ("Everybody thought of it as the perfect place to spend their life.") time.sleep(5) print ("...") time.sleep(5) print ("But then it happened...") time.sleep(5) print ("... and everything changed.") def readyFunction(): # ready to join? ready = input("Are you brave enough to join us exploring this haunted town? Then say 'yes'! But if you are too scared, you can say no...\n") if ready == "yes" or ready == "Yes" or ready == "yes!": print ("You're coming with us? That is amazing, we need every help we can get! But let us tell you the whole story so that you know in what you engage.") time.sleep(4) storyFunction() if ready == "no" or ready == "No": print ("You're not coming with us? What a petty... 
See you next time!") else: print ("That's not what I asked for...") readyFunction() #Problem: if the correct input is entered afterwards, it still goes back into the loop to are you ready # def missionFunctionAS(): #missionFunction after school print ("Well... Where should we search now?") time.sleep(3) schoolhomechurch = input ("What do you think? Should we search for her at school (1), at her families home (2) or at church (3)?\nJust answer '1', '2' or '3'!\n") time.sleep(3) if schoolhomechurch == "1": print("- you go to the school -") time.sleep(3) print("Hmmmmm... Let's split up, so if she's here, we can find her faster. But be careful that her friends do not see you!") schoolyesno = random.randint(1,2) if schoolyesno == 1: time.sleep(4) print ("HERE SHE IS!!!") time.sleep(3) print (".....") time.sleep(3) print ("Now that we know what happened to her, we can go home and carry on research for couring sickened people.") time.sleep(3) print ("And when we find a remedy, we'll come back and try to bring this town the peace it once held.") time.sleep(5) print ("... to be continued,extended and improved...") if schoolyesno == 2: time.sleep(4) print ("I think there is nobody around... Let's search somewhere else...") missionFunctionAS() if schoolhomechurch == "2": print ("- you go to the families home -") time.sleep(3) print ("Hmmmmm... Let's split up, so if she's here, we can find her faster. But be careful that her parents do not see you!") homeyesno = random.randint(1,2) if homeyesno == 1: time.sleep(4) print ("HERE SHE IS!!!") time.sleep(3) print (".....") time.sleep(3) print ("Now that we know what happened to her, we can go home and carry on research for couring sickened people.") time.sleep(3) print ("And when we find a remedy, we'll come back and try to bring this town the peace it once held.") time.sleep(5) print ("... to be continued,extended and improved...") if homeyesno == 2: time.sleep(4) print ("I think there is nobody around... 
Let's search somewhere else...") missionFunctionAS() if schoolhomechurch == "3": print ("- you go to the church -") time.sleep(3) print ("*whispering* Wow... this is an amazing building!... But pretty scary as well...") time.sleep(3) print ("LOOK! There she is... At the confessional...") time.sleep(3) print (" ......... ") time.sleep (3) print ("Now that we know what happened to her, we can go home and carry on research for couring sickened people.") time.sleep(3) print ("And when we find a remedy, we'll come back and try to bring this town the peace it once held.") time.sleep(5) print ("... to be continued,extended and improved...") def rightpathFunction(): #which path to get into the town? -> 1. stop to be kicked out of the game!! time.sleep(4) print ("The only way to get into the city is to climb the wall with our special equipment.") time.sleep(4) print ("There is still one problem we would have to deal with if we made it to the other side of it:") time.sleep(4) print ("If anyone sees us, they will fight us. We have to be prepared for a battle.") time.sleep(4) print ("We are five people, they will be max. 10. We can overcome them if we hit 5 of them quickly. Let's go!") time.sleep(4) seen = random.randint(1, 2) if seen == 1: print ("Puhhh... We did it without drawing any attention towards us. Good Job!") missionFunction() if seen == 2: print ("Ohhhh sh***!! Okay guys we have to fight!") time.sleep(4) print ("~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~") print (" Fighting... 
") print ("YOU MUST HIT ABOVE 5 OF YOUR ENEMIES TO WIN THIS FIGHT") print (" IF THE ZOMBIES HIT MORE OFTEN THAN YOU, YOU WILL DIE" ) print ("~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~") time.sleep(4) humans = int(random.randint(3, 10)) zombies = int(random.randint(1, 5)) print ("You hit ", humans, "times.") time.sleep(2) print ("The Zombies hit ", zombies, "times.") time.sleep(2) if zombies > humans: print ("The Zombies have dealt more damage than you!") time.sleep(3) print ("You can't make it out alive... You die a heroic death....") time.sleep(3) playAgain = input ("Do you want to try to climb the wall again? God seems to give you another chance...") if playAgain == "yes" or playAgain == "Yes" or playAgain == "yes!": rightpathFunction() else: print ("Okay! See you next time!") else: print ("You won the fight!") time.sleep(3) print ("That was amazing! But we have no to time to rest on our success, let's go on.") missionFunction() #def schoolFunction(): #print ("Hmmmmm... Let's split up, so if she's here, we can find her faster.") #time.sleep(3) #print ("If somebody finds her, he contacts the rest of us via the walkie talkie!") #->choose random string from List #schoolyesno = ["shhhht....shtshht... (out of the walkie talkie) I found her! Come to room 143.", "shhhht....shtshht... (out of the walkie talkie) I think there is nobody around... Let's search somewhere else..."] #print (random.choice(schoolyesno)) #if schoolyesno == "shhhht....shtshht... (out of the walkie talkie) I found her! Come to room 143.": #room143Function() #if schoolyesno == "shhhht....shtshht... (out of the walkie talkie) I think there is nobody around... Let's search somewhere else...": #missionFunction() #schoolyesno = random.randint(1, 2) #if schoolyesno == 1: #print ("shhhht....shtshht... (out of the walkie talkie) I found her! Come to room 143.") #room143Function() #if schoolyesno == 2: #print ("shhhht....shtshht... (out of the walkie talkie) I think there is nobody around... 
Let's search somewhere else...") def missionFunction(): #where to go and if the girl is found print ("We need to find the girl. She is the key to everything that happended here. If we can find out what turned her to what ever she might be now,\nwe might find the cause and can save our families... ") time.sleep(4) print ("We assume the girl to be around either her school, her home or the church of the town. As you brang us good luck last time, you should decide again.") time.sleep(3) schoolhomechurch = input ("What do you think? Should we search for her at school (1), at her families home (2) or at church (3)?\nJust answer '1', '2' or '3'!\n") time.sleep(3) if schoolhomechurch == "1": print("- you go to the school -") time.sleep(3) print("Hmmmmm... Let's split up, so if she's here, we can find her faster. But be careful that her friends do not see you!") schoolyesno = random.randint(1,2) if schoolyesno == 1: time.sleep(4) print ("HERE SHE IS!!!") time.sleep(3) print (".....") time.sleep(3) print ("Now that we know what happened to her, we can go home and carry on research for couring sickened people.") time.sleep(3) print ("And when we find a remedy, we'll come back and try to bring this town the peace it once held.") time.sleep(5) print ("... to be continued,extended and improved...") if schoolyesno == 2: time.sleep(4) print ("I think there is nobody around... Let's search somewhere else...") missionFunctionAS() if schoolhomechurch == "2": print ("- you go to the families home -") time.sleep(3) print ("Hmmmmm... Let's split up, so if she's here, we can find her faster. 
But be careful that her parents do not see you!") homeyesno = random.randint(1,2) if homeyesno == 1: time.sleep(4) print ("HERE SHE IS!!!") time.sleep(3) print (".....") time.sleep(3) print ("Now that we know what happened to her, we can go home and carry on research for couring sickened people.") time.sleep(3) print ("And when we find a remedy, we'll come back and try to bring this town the peace it once held.") time.sleep(5) print ("... to be continued,extended and improved...") if homeyesno == 2: time.sleep(4) print ("I think there is nobody around... Let's search somewhere else...") missionFunctionAS() if schoolhomechurch == "3": print ("- you go to the church -") time.sleep(3) print ("*whispering* Wow... this is an amazing building!... But pretty scary as well...") time.sleep(3) print ("LOOK! There she is... At the confessional...") time.sleep(3) print (" ......... ") time.sleep (3) print ("Now that we know what happened to her, we can go home and carry on research for couring sickened people.") time.sleep(3) print ("And when we find a remedy, we'll come back and try to bring this town the peace it once held.") time.sleep(5) print ("... to be continued,extended and improved...") def rightpathFunction(): #which path to get into the town? -> 1. stop to be kicked out of the game!! time.sleep(4) print ("There are two ways to get into the city: We could climb the wall (1) or take the secret tunnel (2).\nBoth have their risks.") time.sleep(4) print ("If we climb the wall and somebody sees us we will be dead by the time we arrive on the other side.\nIn the tunnel there are poisonous plants that randomly eject deadly gas that kills immediatly when inhaled.") time.sleep(4) chosenPath = input ("Which path should we try? 1 or 2 ? \n") time.sleep(3) print ("Okay... Let's take it! There's no wrong or right decision, it's just good or bad luck for us right now...") time.sleep(3) correctPath = random.randint(1,2) if str(chosenPath) == str(correctPath): print ("YES! 
You are amazing! We're inside! You will make all the decisions from now.") missionFunction() if str(chosenPath) != str(correctPath): print ("Oh....") time.sleep(3) print ("Ooooooooh NO !!!") time.sleep(3) print ("We...won't make it.") time.sleep(3) print ("----------------------------game over------------------------------") time.sleep(3) playAgain = input ("Do you want to choose again?") if playAgain == "yes" or playAgain == "Yes" or playAgain == "yes!": rightpathFunction() else: print ("Okay! See you next time!") def stillinFunction(): stillin = input ("Are you still willing to come with us?\n") if stillin == "yes" or stillin == "Yes" or stillin == "yes!": print ("Wow, I'm impressed. We can really need somebody brave like you. Which path shall we take?") rightpathFunction() if stillin == "no" or stillin == "No" or stillin == "no!": print ("Okay... We totally understand your choice. Goodbye!") if stillin != "yes" and stillin != "Yes" and stillin != "yes!" and stillin != "no" and stillin != "No": stillinFunction() #still in after hearing the whole story #still in after hearing the whole story? def wholestoryFunction(): #beeing told the whole story wholestory = input("Do you still want to hear the whole story? You might change your mind after hearing it...\n") if wholestory == "yes" or wholestory == "Yes" or wholestory == "yes!": print ("So you seem to be one of the fearless ones...") time.sleep(3) print ("Fine. 
Here's the end of the tragedy.") time.sleep(3) print ("One day the little daughter of the mayor got missing.") time.sleep(3) print ("Nobody could imagine what could've happened to her, the girl was a good kid who woul've never run away.") time.sleep(4) print ("Moreover there was no ransom demand or anything similar that could've indicated a kidnap.") time.sleep(4) print ("The whole town was searching for her for 3 whole months.") time.sleep(3) print ("But then one day, all of a sudden, she came home to the front door as if nothing had ever happened.") time.sleep(5) print ("She was completly unharmed and at first showed no signs of mental problems,\nexcept that she couldn't remember anyhting from the last quarter of the year.") time.sleep(6) print ("The little girl went back to school just one day after beeing back home and everything seemed normal until the some kids\nin the class of the little girl got sick.") time.sleep(6) print ("From one day to another, they lost their ability to speak.") time.sleep(4) print ("In addition, they had anxiety attacks and always their face was contorted with pain. No doctor knew what to do...") time.sleep(4) print ("At first many children fell ill, but when the first parent fell sick, everyone knew, that it was a matter of time when the whole town would be infected.") time.sleep(4) print ("Panic broke out. Parents needed to decide - stay with their infected children or leave before it was too late.") time.sleep(5) print ("Many people left. But some stayed...") time.sleep(3) print ("And they still 'live' there. 
But they became something similar to zombies.") time.sleep(4) print ("Until now, they couldn't leave because of the wall the fleeing inhabitants built around their hometown,\nbut it's getting ramsackle...") time.sleep(5) print ("We need to get in and find the cause for the virus or whatever it may be, before they break out and infect all the adjoining villages.") stillinFunction() if wholestory == "no" or wholestory == "No" or wholestory == "no!": print ("I understand, let's just get in there and try to get out alive. You can choose which path we take to go there.") rightpathFunction() if wholestory != "yes" and wholestory != "Yes" and wholestory != "yes!" and wholestory != "no" and wholestory != "No": wholestoryFunction() #else: #wholestoryFunction() def mainFunction():#mainFunction nameFunction() readyFunction() wholestoryFunction() Answer: DRY Principle Gurkan Cetin has mentioned this. This code is a lot simpler than yours. A lot cleaner as well.

from time import sleep

time_per_word = 0.4

def display(text):
    print(text)
    sleep(len(text.split()) * time_per_word)

story = ["A few miles away,",
         ...
         "... and everything changed."]

def story_mode():
    for x in story:
        display(x)

Even better is to store all the story lines in a text file and read them simply by using story = open("story.txt").readlines() Then you can supply each line to the display function as shown. You can even have multiple stories at multiple times and load each with the same function. Do not make functions out of the program flow. Instead, make functions out of reusable parts of the program. Learn and Use Simple Data Structures A tree is a perfect fit for this general (question/answer/conditional next question) type of program. Read about trees here. Implement a tree (Python does not have a built-in tree type). Each node carries a question and responses to the question along with the parent's response. Then take all the children from a node and calculate the next question/node.
def get_next_node(present_question, answer_choices, children_of_node):
    user_answer = get_input(present_question, answer_choices)
    for node in children_of_node:
        if node.parents_response == user_answer:
            return node

def get_input(input_prompt, input_options):
    lower_input_options = [input_option.lower() for input_option in input_options]
    user_input = input(input_prompt).lower()
    while user_input not in lower_input_options:
        user_input = input(input_prompt).lower()
    return user_input

Load your tree into memory from a text file as well. These short reusable functions make your life a whole lot easier. Even if you have 10 answers to a given question giving 10 possible follow-ups, it will still work, and the story can span many nodes, giving you thousands of different storylines. The ready function is recursive; that is why you are having problems. Do not write a recursive function when you don't need one and don't understand it. Read about recursion here. Recursion is hard to implement correctly even for very experienced programmers, so keep that in mind.
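For illustration, a minimal Node class of the kind the get_next_node sketch assumes might look like this (the attribute layout and sample prompts are my own, not taken from the game):

```python
class Node:
    def __init__(self, question, answer_choices, parents_response=None):
        self.question = question                  # prompt shown at this node
        self.answer_choices = answer_choices      # valid replies at this node
        self.parents_response = parents_response  # parent's answer that leads here
        self.children = []

def next_node(node, answer):
    """Walk to the child whose parents_response matches the given answer."""
    for child in node.children:
        if child.parents_response == answer:
            return child

root = Node("Are you brave enough to join us?", ["yes", "no"])
root.children = [
    Node("Which path should we try? 1 or 2?", ["1", "2"], parents_response="yes"),
    Node("Okay! See you next time!", [], parents_response="no"),
]
```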
{ "domain": "codereview.stackexchange", "id": 31865, "tags": "python, beginner, adventure-game" }
How to safely ship 5 litre container of liquid chemicals into glove box?
Question: I need to transfer a 5 litre container of oxygen-sensitive liquid chemicals, within an inert atmosphere, into smaller 100 ml bottles. I'm going to do so using a vacuum glove box purged with nitrogen. I have very little experience using a glove box, but as far as I understand, you'd first pull a vacuum on the main chamber a few times to remove any oxygen, moisture and contaminants, then purge it with nitrogen. Afterwards, place the item you wish to bring into the glove box inside the antechamber, pull a vacuum within the antechamber a few times as before, then purge with nitrogen to match the atmosphere of the main chamber, then open the antechamber latch from within the main chamber to bring in your item. Proceed with the operation. The obvious issue here is that a vacuum would boil the liquid chemicals if they are not effectively sealed. The whole point of this operation is to only open the 5 litre container once inside an inert atmosphere, but I can't bring it into that atmosphere without risking the safety hazard of placing it under a vacuum prior to purging. The only idea I have is to simply not run a vacuum for the antechamber and just purge it with nitrogen, but that of course would defeat the purpose of working in an effective inert atmosphere. The unopened container is sealed with a standard commercial heat-induction-sealed liner, like you'd find on unopened liquid medicines, and with a child-safe cap. Any recommendations as to how I can ship this safely into the glove box? Answer: Check if there is no better way (e.g., cannulation on a Schlenk line), especially if the solvent of your solution is flammable. If you know the density of the solution (even an approximate value), you may infer the volume transferred by monitoring the mass of each smaller flask, empty vs. filled. As an example, the steel barrels of predried solvents on a solvent system (like this) are likewise put on a (e.g., bathroom) balance to check how much they still contain. 
If it still has to be the glove box, check with the person responsible for the glove box whether the bottle is small enough to be carried safely on the lock's table. Also consider the weight of such a flask; the table might not slide as easily as it does for a small $\pu{100 mL}$ flask on the tray. If the flask gets stuck or (beware) tips over, the constrained access to the lock from inside the chamber may be a source of trouble. If possible, let him/her transfer the bottle for you (especially at this scale). Check that the containers (both the $\pu{5 L}$ one and the $\pu{100 mL}$ ones to be used subsequently) are in good mechanical condition. And be understanding if he/she refuses this operation.
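As a sketch of the gravimetric bookkeeping suggested in the answer (the numbers are illustrative, not measured values):

```python
def transferred_volume_ml(mass_empty_g, mass_filled_g, density_g_per_ml):
    """Volume inferred from the mass gained by the receiving flask."""
    return (mass_filled_g - mass_empty_g) / density_g_per_ml

# A flask that gains 87.3 g of a 0.873 g/mL solution received 100 mL.
volume = transferred_volume_ml(52.0, 139.3, 0.873)
```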
{ "domain": "chemistry.stackexchange", "id": 16076, "tags": "inorganic-chemistry, everyday-chemistry" }
Does the Standard Model confirm that mass assigned by the Higgs mechanism creates a gravitational field?
Question: I am not comparing passive gravitational mass with rest inertial mass. Is there any evidence in the Standard Model that active gravitational mass is essentially the mass assigned by the Higgs mechanism? Answer: I will answer your deleted question, which is relevant. Read this link to get a framework of where the SM stands. Look at table 1 and you will see that in the microcosm of quarks and leptons the gravitational interaction is so weak that it is completely irrelevant, and certainly its effect on the values used in the Standard Model cannot be measured with our present experimental accuracies. You ask: Is there any evidence in the Standard Model that active gravitational mass is essentially the mass assigned by the Higgs mechanism? The Standard Model is mainly descriptive: it is a method to mathematically tie together a very large number of experimental data, economically, for the three stronger forces: strong, electromagnetic, and weak. Because the strength of the coupling of the gravitational interaction is very much smaller than the coupling of these three forces (Table 1 in the link), there is no measurable, predictable effect. In any case, as the quark and lepton masses are parameters in the SM, not predicted values, any tiny effect will be absorbed into the definitions. The Standard Model is not a theory of everything, but must be embedded in a theory of everything, because it is really a shorthand for all the data we have up to now on quarks and leptons. A theory of everything will of course incorporate the gravitational force, and theorists are working on it, with string theories as the prime candidate.
{ "domain": "physics.stackexchange", "id": 3951, "tags": "mass, standard-model, higgs" }
Spectrum of FSK signal
Question: I have implemented a simple V.23-like FSK modem in C here. The peculiarity of the chosen modulation is such that 0's and 1's are sent as tones of two different frequencies (2100 Hz and 1300 Hz respectively) and the duration of each symbol is 1/1200th of a second, which is between one and two full periods of the symbol tone frequency. The band-pass filter that I used in the receiver is from about 875 Hz to about 2350 Hz. This range was determined empirically. The question is, how do you calculate this frequency range for a signal like that from the tone frequencies and symbol duration? EDIT: A similarity with amplitude modulation has been suggested, where the modulated signal falls into the band from Fcarrier - Message Bandwidth to Fcarrier + Message Bandwidth Hz. If I try to apply this logic directly to my case, then I should expect the bandwidth of my FSK signal to be the union of:
F1 - bit rate to F1 + bit rate
F0 - bit rate to F0 + bit rate
Or, if I plug in the numbers, the union of:
1300-1200=100 to 1300+1200=2500
2100-1200=900 to 2100+1200=3300
Or, simply, from 100 to 3300 Hz. If I look at the spectrum of my FSK signal, however, it looks like it's roughly contained in the band from 2100-1200=900 to 1300+1200=2500 Hz instead of from 1300-1200=100 to 2100+1200=3300 Hz. Can this empirical result be explained and proven? EDIT2: Here's the spectrum as I'm seeing it in Audacity: Answer: With Frequency Shift Keying, the modulation (digital data) takes up bandwidth, so you can't just keep only the frequencies of the mark and space tones. A firm lower bound on how little bandwidth you can use is the distance between the mark and space frequencies, plus half the baud rate on either side. So for 1200 baud with frequencies of 1300 hertz and 2100 hertz, the absolute minimum bandwidth is (1300-(1200/2)) [700 hertz] to (2100+(1200/2)) [2700 hertz] which is a bandwidth of 2 kHz. 
People have tried to filter it tighter but if the reception still provides the correct data, it is only because of chance. Usually there is also some pulse shaping in the FSK signal before modulation to make the filter's job easier.
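That bound can be written as a short sketch (Python; function name is my own):

```python
def fsk_min_band(f_space, f_mark, baud):
    """Firm lower bound on FSK bandwidth: from the lower tone minus half
    the baud rate up to the upper tone plus half the baud rate."""
    lo = min(f_space, f_mark) - baud / 2.0
    hi = max(f_space, f_mark) + baud / 2.0
    return lo, hi, hi - lo

# V.23-like parameters from the question: 1300/2100 Hz tones at 1200 baud
low, high, bw = fsk_min_band(1300, 2100, 1200)
print(low, high, bw)  # 700.0 2700.0 2000.0
```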
{ "domain": "dsp.stackexchange", "id": 421, "tags": "frequency-spectrum, modulation, fsk" }
What do rs id, allele coded 0 and allele coded 1 mean?
Question: So, for a project I've been working on (different story), I've been looking at the HapMap Project, and their free online files. In their README file, they talk about how for each legend file for each chromosome/region, there is an rs id, an allele coded 0, an allele coded 1, and a base pair position. Now it's fairly obvious after staring at this for a while that base pair position means where each nucleotide is located along the genetic sequence... is this correct? And what do rs id and the other terms mean? Any help would be greatly appreciated! Here's the link to the README file; in that same directory are the files about the participants, only including SNPs (snips!). HapMap Project README file link HapMap Project sequence data link (phase 2) Answer: rs id is the reference SNP cluster ID, see here. It's basically a unique identifier. This table is taken from your link: rs position 0 1 rs11089130 14431347 C G rs738829 14432618 A G rs915674 14433624 A G The allele codes are the 3rd and 4th columns. An SNP is a site where a different base is found in different versions of the same gene (different versions of genes are alleles). For a given SNP the different alleles are referred to as the 0 or the 1 allele. So in the table the first SNP, rs11089130, has two alleles: allele 0 has a C at the SNP position (14431347) whereas allele 1 has a G at that position. The allele code does not imply any biological significance. I'm not sure what would happen if there were three alleles at an SNP, but presumably there would then also be an allele coded as 2. Edit: Allele 0 is the residue from the reference genome. Allele 1 is the residue being studied, the SNP.
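For illustration, a small Python sketch that parses rows like the table above into a lookup keyed by rs id (the whitespace-separated column layout is assumed from the README excerpt):

```python
def parse_legend(lines):
    """Parse HapMap-style legend rows into {rs_id: (position, allele0, allele1)}.
    Assumes whitespace-separated columns: rs, position, allele 0, allele 1."""
    rows = {}
    for line in lines:
        rs, pos, a0, a1 = line.split()
        rows[rs] = (int(pos), a0, a1)
    return rows

legend = parse_legend([
    "rs11089130 14431347 C G",
    "rs738829 14432618 A G",
    "rs915674 14433624 A G",
])
print(legend["rs11089130"])  # (14431347, 'C', 'G')
```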
{ "domain": "biology.stackexchange", "id": 1107, "tags": "human-biology, genetics, bioinformatics, human-genetics, terminology" }
How much oxygen would be consumed on a 1 cm squared surface which is on fire?
Question: I'm trying to figure out how much oxygen the Human Torch produces when he is on fire. I figure if I knew how much oxygen on average (per second?) is consumed by a 1 cm squared surface which is producing flame (rapid combustion) I would then be able to take the average surface area of a human and figure out his oxygen burn rate. Answer: Let's suppose that the Human Torch produces heat by burning methane (maybe he eats a lot of chilli), and suppose he produces a total heat output of 10,000W - I pulled this figure out of the air so feel free to modify it up and down. The enthalpy of combustion of methane is 882kJ/mol, so, to generate 10,000W, he needs to burn 0.011 moles of methane per second. The equation for the combustion of methane is: $$CH_4 + 2O_2 \to CO_2 + 2H_2O$$ so one mole of methane requires 2 moles of oxygen to burn. That means the Human Torch consumes 0.022 moles or 0.7 grams of oxygen per second. The area of skin per human is about 2m$^2$ or 20,000cm$^2$ so the Human Torch consumes about 1.1 x 10$^{-6}$ moles or 3.6 x 10$^{-5}$ grams of oxygen per cm$^2$ per second. Later: Let's revisit that power output of 10kW that I guessed at the start of the calculation. Maybe it would be better to ask what the Human Torch's surface temperature is, and use this to calculate the power. Assume the Human Torch is a black body. This probably isn't a good approximation at ambient temperatures, but is probably OK when he's really hot. The power output of a black body is given by the Stefan–Boltzmann law: $$j = \sigma T^4$$ where $\sigma$ is about 5.67 x 10$^{-8}$ Js$^{-1}$m$^{-2}$K$^{-4}$. So what temperature would my guess of 10kW correspond to? Taking the area as 2m$^2$ we get: $$T = \left(\frac{5000}{\sigma}\right)^{1/4} = 545K = 272^\circ C$$ so not that hot really. Good if you want to make a cup of tea, but not great for burning through steel. Suppose the Human Torch is really going for it and burns as hot as the surface of the Sun - 6000K to keep it a round number.
The power is just: $$j = 2 \times \sigma \times 6000^4 = 1.5 \times 10^8W$$ Using the working above he now consumes 340 moles or 10.9kg of oxygen per second or about 0.55g per cm$^2$ per second. So you wouldn't want to be in the same room as him. Not only would you be roasted, you'd be suffocated too!
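Putting the answer's numbers into a short script (a Python sketch using the constants above; the exact results differ slightly from the answer's, which rounded the power to 1.5 x 10^8 W):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
DH_CH4 = 882e3    # enthalpy of combustion of methane, J/mol (answer's figure)
AREA = 2.0        # approximate human skin area, m^2
M_O2 = 32.0       # molar mass of O2, g/mol

def o2_grams_per_second(T):
    """Grams of O2 consumed per second to sustain black-body radiation at
    temperature T (kelvin), assuming methane combustion (2 mol O2 per mol CH4)."""
    power = SIGMA * T**4 * AREA   # total radiated power, W
    ch4_rate = power / DH_CH4     # mol CH4 burned per second
    return 2.0 * ch4_rate * M_O2  # g O2 per second

# The 10 kW guess corresponds to roughly 545 K...
print(round(SIGMA * 545**4 * AREA))      # close to 10000 W
# ...and at 6000 K the burn rate is on the order of 10 kg of O2 per second
print(round(o2_grams_per_second(6000)))
```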
{ "domain": "physics.stackexchange", "id": 2683, "tags": "thermodynamics" }
Priority Queue in C#
Question: I develop a small game using C# and need A* in it. For that, I need a PriorityQueue, which .NET does not have. I wanted to make my own one for practice. Here is it, please comment on performance and usability: public class PriorityQueue<T> : IEnumerable { List<T> items; List<int> priorities; public PriorityQueue() { items = new List<T>(); priorities = new List<int>(); } public IEnumerator GetEnumerator() { return items.GetEnumerator(); } public int Count { get { return items.Count; } } /// <returns>Index of new element</returns> public int Enqueue(T item, int priority) { for (int i = 0; i < priorities.Count; i++) //go through all elements... { if (priorities[i] > priority) //...as long as they have a lower priority. low priority = low index { items.Insert(i, item); priorities.Insert(i, priority); return i; } } items.Add(item); priorities.Add(priority); return items.Count - 1; } public T Dequeue() { T item = items[0]; priorities.RemoveAt(0); items.RemoveAt(0); return item; } public T Peek() { return items[0]; } public int PeekPriority() { return priorities[0]; } } I tested it with... PriorityQueue<String> pQ = new PriorityQueue<string>(); for (int i = 0; i < 100; i++) { int prio = Provider.Rnd.Next(0, 1000); pQ.Enqueue(prio.ToString(), prio); if (pQ.Count > 0 && Provider.Rnd.Next(0, 2) == 0) pQ.Dequeue(); } while (pQ.Count > 0) { Console.WriteLine(pQ.Dequeue()); } Answer: There are other data structures for priority queues. You might consider implementing the queue as a binary heap instead, which gives you a run-time complexity of O(1) for accessing the "largest" (or "smallest", depending on the comparison) element, and O(log n) for insertion and removal (of the largest).
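For illustration, here is a minimal binary-heap version in Python rather than C# (the tie-breaking counter, which keeps equal-priority items FIFO, is my own design choice, not part of the original code). The same structure carries over to C# with a list-backed heap.

```python
import heapq
import itertools

class HeapPriorityQueue:
    """Min-heap priority queue: O(log n) enqueue/dequeue, O(1) peek.
    The counter breaks ties so equal-priority items dequeue in FIFO order."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def enqueue(self, item, priority):
        heapq.heappush(self._heap, (priority, next(self._counter), item))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

    def peek(self):
        return self._heap[0][2]

    def __len__(self):
        return len(self._heap)

pq = HeapPriorityQueue()
pq.enqueue("walk", 3)
pq.enqueue("run", 1)
pq.enqueue("jog", 2)
print(pq.dequeue(), pq.dequeue(), pq.dequeue())  # run jog walk
```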
{ "domain": "codereview.stackexchange", "id": 5572, "tags": "c#, queue" }
If the candela is a base SI unit, why isn't the sone an SI unit at all?
Question: Related: Why is the candela a base unit of the SI? In the answers given in the previous question, the candela is included because lighting is important for humans. By the same argument, hearing is also important for humans, so there should also be an SI unit for subjective loudness of sound. So why is none of the subjective loudness units, such as sone, phon, or some similar units, included in SI units? Answer: In addition to the greater variance of the human ear compared to the human eye, I think there is at least one other reason sones (or whatever) aren't SI units and it boils down to this: while all candles of the same color and brightness look the same, not all strings of the same pitch and loudness sound the same. While loudness corresponds to brightness and pitch corresponds to color, sound has a third quality, let's call it "timbre", that doesn't really have an analogue in light. The closest, for our purposes, is color mixing but here the differences between optic perception and acoustic perception cause a serious divergence. For optic perception, only three colors of light get their own receptor, and even these have some serious overlap (so much so that calling them "red", "green" and "blue" is more than a little disingenuous). As a result, color mixing is simply how most colors are perceived. For acoustic perception, every (or near enough) pitch of sound gets its own receptor, none of this "oh, look, the green and red cones were both set off, but there's more red than green, must be orange" nonsense. No; each pitch gets its own hair cell. So what happens if more than one hair cell is set off by one note? This is (our definition of) timbre. The lowest-perceived or "fundamental" pitch is the nominal pitch, but the higher "harmonic" pitches tell you what kind of sound it is. This is what distinguishes pianos from harps, flutes from clarinets, plain vowels from nasal vowels from rhotic vowels, and content purrs from disgruntled growls. 
Pitch and loudness interact analogously to the way color and brightness interact (the threshold of hearing, ie, the quietest sound a human can perceive, depends on pitch), but timbre also interacts with both. A sound that carries multiple distinct pitches will sound louder than a sound with only one pitch at the same amplitude; the "fuzzier" and "noisier" the note, the louder it sounds, up to a point, although "pure" tones can also be quite piercing. And of course "noisier" sounds can't really be said to even have a pitch. This is where the difference between color-mixing and timbre really shines (pun very much intended). Because all colors except the reddest of reds and the bluest of blues are perceived as mixed colors, even if they were originally pure, color mixing can be smoothed away to give a fairly consistent luminosity function, even for white light, the "noisiest" of light. (Heck, the candela used to be defined by the brightness of a certain black body, which by definition emits light that is as mixed as possible.) But pure tones are perceptually distinct from mixed sounds, and timbre is, if anything, more exaggerated than frequency-mixing. A proper "sonosity function" would have to take into account pitch, loudness and timbre and all the messiness that comes with it; the fact that "noises" don't actually have a pitch, that many pitches played at once tend to be louder than if each were played separately, that timbre can't really be properly ordered, even in the way color can in two or three (or more) dimensions.
{ "domain": "physics.stackexchange", "id": 49410, "tags": "acoustics, notation, si-units, metrology" }
Do the minimum spanning trees of a weighted graph have the same number of edges with a given weight?
Question: If a weighted graph $G$ has two different minimum spanning trees $T_1 = (V_1, E_1)$ and $T_2 = (V_2, E_2)$, then is it true that for any edge $e$ in $E_1$, the number of edges in $E_1$ with the same weight as $e$ (including $e$ itself) is the same as the number of edges in $E_2$ with the same weight as $e$? If the statement is true, then how can we prove it? Answer: Claim: Yes, that statement is true. Proof Sketch: Let $T_1,T_2$ be two minimal spanning trees with edge-weight multisets $W_1,W_2$. Assume $W_1 \neq W_2$ and denote their symmetric difference with $W = W_1 \mathop{\Delta} W_2$. Choose edge $e \in T_1 \mathop{\Delta} T_2$ with $w(e) = \min W$, that is $e$ is an edge that occurs in only one of the trees and has minimum disagreeing weight. Such an edge, that is in particular $e \in T_1 \mathop{\Delta} T_2$, always exists: clearly, not all edges of weight $\min W$ can be in both trees, otherwise $\min W \notin W$. W.l.o.g. let $e \in T_1$ and assume $T_1$ has more edges of weight $\min W$ than $T_2$. Now consider all edges in $T_2$ that are also in the cut $C_{T_1}(e)$ that is induced by $e$ in $T_1$. If there is an edge $e'$ in there that has the same weight as $e$, update $T_1$ by using $e'$ instead of $e$; note that the new tree is still a minimal spanning tree with the same edge-weight multiset as $T_1$. We iterate this argument, shrinking $W$ by two elements and thereby removing one edge from the set of candidates for $e$ in every step. Therefore, we get after finitely many steps to a setting where all edges in $T_2 \cap C_{T_1}(e)$ (where $T_1$ is the updated version) have weights other than $w(e)$. 
Now we can always choose $e' \in C_{T_1}(e) \cap T_2$ such that we can swap $e$ and $e'$¹, that is we can create a new spanning tree $\qquad \displaystyle T_3 = \begin{cases} (T_1 \setminus \{e\}) \cup \{e'\}, &w(e') \lt w(e) \\[.5em] (T_2 \setminus \{e'\}) \cup \{e\}, &w(e') \gt w(e) \end{cases}$ which has smaller weight than $T_1$ and $T_2$; this contradicts the choice of $T_1,T_2$ as minimal spanning trees. Therefore, $W_1 = W_2$. ¹ The nodes incident to $e$ are connected in $T_2$ by a path $P$; $e'$ is the unique edge in $P \cap C_{T_1}(e)$.
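The claim can be sanity-checked by brute force on a small graph (a Python sketch; the example graph is my own and deliberately has two distinct MSTs):

```python
from itertools import combinations

def spanning_trees(n, edges):
    """Yield all spanning trees (as edge subsets) of a graph on nodes 0..n-1.
    Brute force: try every (n-1)-edge subset and keep the acyclic ones,
    detected with a tiny union-find."""
    for subset in combinations(edges, n - 1):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        ok = True
        for u, v, _ in subset:
            ru, rv = find(u), find(v)
            if ru == rv:
                ok = False  # cycle -> not a tree
                break
            parent[ru] = rv
        if ok:
            yield subset

# Two weight-2 edges to choose between => two distinct MSTs
edges = [(0, 1, 1), (1, 2, 2), (0, 2, 2), (2, 3, 1)]
trees = list(spanning_trees(4, edges))
best = min(sum(w for _, _, w in t) for t in trees)
msts = [t for t in trees if sum(w for _, _, w in t) == best]
multisets = {tuple(sorted(w for _, _, w in t)) for t in msts}
print(len(msts), multisets)  # both MSTs share the weight multiset (1, 1, 2)
```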
{ "domain": "cs.stackexchange", "id": 4417, "tags": "graphs, spanning-trees, weighted-graphs" }
Olympiad physics 1996 problem
Question: I don't understand the official solution of the first problem of the 1996 International Physics Olympiad. They give this circuit: Each black box is a resistor of resistance $1\Omega$. They then claim that the following circuit is equivalent: I do not see the equivalence. Why are these two equivalent and what principle(s) is(are) used to understand the equivalence? Answer: Start with the initial diagram, but let's color code everything: Now move some wires around, without actually changing the connectivity: Finally, rotate the left and right blocks while again not changing the connectivity:
{ "domain": "physics.stackexchange", "id": 19484, "tags": "homework-and-exercises, electric-circuits, electrical-resistance" }
Why does this condition ensure that the residue of the propagator is 1?
Question: The corrected propagator is given by $$\Delta'(q)=\frac{1}{q^2+m^2-\Pi^*(q^2)-i\epsilon}$$ ($\Pi^*$ is the sum of all irreducible one-particle amplitudes) I get that the residue of the original propagator around the pole $q^2=-m^2$ is $$\frac{1}{2\pi i}\oint_{\text{around }q^2=-m^2} \frac{dq^2}{q^2+m^2-i\epsilon}=\lim_{q^2\rightarrow -m^2}\frac{q^2+m^2}{q^2+m^2}=1$$ and that the corrected propagator must have the same residue $$\frac{1}{2\pi i}\oint \Delta'(q)dq^2=1$$ So how does the condition $$\left[\frac{d\Pi^*(q^2)}{dq^2}\right]_{q^2=-m^2}=0$$ ensure the second integral above? EDIT: Devouring complex analysis literature. Have already edited some things that weren't quite right. For anyone interested, I'm using Weinberg Vol 1 and this in section 10.3, ~p. 430. Answer: The original propagator has a pole at $q^2=-m^2$, the mass shell. For $m=\sqrt{-q^2}$ be the true mass of the particle, we have $\Pi^*(-m^2)=0$ and require that the residue of the modified propagator is unity around $q^2=-m^2$. Recall that for a meromorphic function $f(z)$ we have $$\oint f(z)dz=2\pi i\sum_k\operatorname{Res}_{z_k}(f)$$ Thus $$\oint \Delta'(q^2)dq^2=2\pi i\operatorname{Res}_{-m^2}(\Delta')$$ Note that the pole in $\Delta'$ is simple. Thus we have $$\operatorname{Res}_{-m^2}(\Delta')=\lim_{q^2\rightarrow-m^2}(q^2+m^2)\Delta'(q^2)=1$$ Inserting the definition of $\Delta'$, we get $$\lim_{q^2\rightarrow-m^2}\frac{(q^2+m^2)}{q^2+m^2-\Pi^*-i\epsilon}=1$$ which is $0/0$ on the left, i.e. indeterminate. Using L'Hopital's rule, we find $$\left.\frac{d\Pi^*}{dq^2}\right|_{-m^2}=0$$ as was to be shown.
{ "domain": "physics.stackexchange", "id": 19937, "tags": "quantum-mechanics, homework-and-exercises, quantum-field-theory" }
Why are triangles drawn like so when working with gravity on an inclined plane?
Question: This is my first year as a physics student, and I've never learned about vectors past a basic level, so this is confusing me. When we have gravity on an inclined plane, we separate it into two components, which I understand. However, consider the image below, and there's a box at point A. When separating the gravity components, you draw a triangle AGC (sorry, the G and D are on top of each other and difficult to distinguish). AG becomes the force of gravity in the y-direction, and GC becomes the force of gravity in the x-direction. Then you do the trig functions from there. However, when I tried this myself, I drew triangle ADF instead and tried the trig functions from there. It didn't work. I'm having trouble understanding why you can't compute the trig functions from ADF. The only partial solution I came up with was that the force of gravity in the y-direction can't be the hypotenuse, as the force of gravity in the y-direction is always less than the force of gravity. But I think I'm missing something more. Answer: You can decompose the forces along AD and DF rather than AG and GC, but they won't be the relevant forces you're looking for. What makes this confusing is we often work entirely in magnitudes, whereas fundamentally we are manipulating vectors, and we only get away with this with good choices of decompositions. Suppose the downward force of gravity is $\vec{f}$ with magnitude $f$ and direction along AC. The "right" method will say there is a normal force $\vec{f}_{\!\perp}$ in the direction of AG and a parallel force $\vec{f}_{\!\Vert}$ in the direction of GC, with $\vec{f} = \vec{f}_{\!\perp} + \vec{f}_{\!\Vert}$. The formulas for the magnitudes are $$ f_{\!\perp} = f \cos\beta, \qquad f_{\!\Vert} = f \sin\beta. $$ Moreover, because AG and GC are orthogonal, we know $\vec{f}_{\!\Vert}$ cannot have any effect on pushing into the inclined plane; all such effects are captured by $\vec{f}_{\!\perp}$.
Now consider the "wrong" decomposition of $\vec{f} = \vec{f}_1 + \vec{f}_2$, with $\vec{f}_1$ along AD and $\vec{f}_2$ along DF. We can get these magnitudes too: $$ f_1 = f \sec\beta, \qquad f_2 = f \tan\beta. $$ The problem is, $\vec{f}_1$ doesn't fully capture the normal force, because $\vec{f}_2$ contributes to this as well. For an extreme example, imagine a mass sitting on horizontal ground with weight $\vec{f}$ directed downward. We could write $\vec{f} = \vec{f}_1 + \vec{f}_2$ with both $\vec{f}_1$ and $\vec{f}_2$ also pointing downward. We cannot just look at $\vec{f}_1$ and neglect $\vec{f}_2$ when considering the weight of the mass on the ground. Another way of looking at things is that we are silently taking dot products. The real, unambiguous definition of the normal force of the mass on the block is the dot product of its weight vector with the unit normal vector to the surface, $\vec{f}_{\!\perp} = (\vec{f} \cdot \hat{n}) \hat{n}$ (give or take a sign). When computing dot products, you can only ignore components of $\vec{f}$ that are orthogonal to $\hat{n}$; if you do a decomposition where neither component is orthogonal, you have to include both terms. In equations, this is the difference between $$ \vec{f} \cdot \hat{n} = \vec{f}_{\!\perp} \cdot \hat{n} + \vec{f}_{\!\Vert} \cdot \hat{n} = (f_{\!\perp}) (1) \cos0^\circ + (f_{\!\Vert}) (1) \cos90^\circ = f_{\!\perp}. $$ and $$ \vec{f} \cdot \hat{n} = \vec{f}_1 \cdot \hat{n} + \vec{f}_2 \cdot \hat{n} = (f_1) (1) \cos0^\circ + (f_2) (1) \cos69.91^\circ. $$
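As a numeric sanity check (a Python sketch; the incline angle and weight are arbitrary example values, and the two "wrong" directions are chosen so the magnitudes come out as $f\sec\beta$ and $f\tan\beta$, matching the answer): neither piece of the non-orthogonal decomposition alone gives the normal force, but the sum of their dot products with the unit normal does.

```python
import math

beta = math.radians(30.0)              # incline angle (arbitrary example)
f = 10.0                               # weight magnitude
g = (0.0, -f)                          # gravity, straight down
n = (-math.sin(beta), math.cos(beta))  # unit normal to the incline surface

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

# "Right" decomposition: normal component has magnitude f*cos(beta)
f_perp = abs(dot(g, n))

# "Wrong" decomposition: f1 along the inward normal with magnitude
# f*sec(beta), f2 horizontal with magnitude f*tan(beta); f1 + f2 = g
f1 = (f * math.tan(beta), -f)
f2 = (-f * math.tan(beta), 0.0)

# Neither piece alone gives the normal force, but their sum does:
print(round(f_perp, 4), round(abs(dot(f1, n) + dot(f2, n)), 4))
```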
{ "domain": "physics.stackexchange", "id": 25680, "tags": "homework-and-exercises, newtonian-mechanics, forces, newtonian-gravity, vectors" }
Apply same format function to each python print() parameter
Question: I have a python print statement and inside the string I wish to print are four digits. I would like to apply the same formatting function to each param. I am not familiar with the latest and greatest features of python PEPs. Is there a slick way to do this? Code statement = "Splitting up the {} file into {} chunks, with the filesystem block size of {}, causing {} extra space to be used" print(statement.format( sizeof_fmt(input_file_size), sizeof_fmt(chunk_size), sizeof_fmt(block_size), sizeof_fmt(surrendered_space))) Format Function def sizeof_fmt(num, suffix='B'): for unit in ['','Ki','Mi','Gi','Ti','Pi','Ei','Zi']: if abs(num) < 1024.0: return "%3.1f%s%s" % (num, unit, suffix) num /= 1024.0 return "%.1f%s%s" % (num, 'Yi', suffix) Answer: Time will tell if your question is considered a worthy Code Review question, but till then I'd like to give a short review of your code nevertheless. Format function You could reduce the code duplication in the format function and make use of .format or f-strings (from Python 3.6 onwards). def sizeof_fmt_rev(num, suffix='B'): for unit in ['', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi']: if abs(num) < 1024.0: break num /= 1024.0 else: # this part is only executed if the loop was not left with a break unit = 'Yi' return f"{num:.1f}{unit}{suffix}" This uses for ... else, one of the less well-known features of Python and only has a single line where the format expression has to be written. I see a chance to build something using math.log instead of that loop, but I will leave that as an exercise to you. You can even build something that works without a loop, but at least the version I came up with (found below) is actually slower than the original implementation.
def sizeof_fmt_rev_log(num, suffix='B'): exponent = min(int(math.log(abs(num), 1024)), 8) num /= 1024**exponent unit = ('', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi', 'Yi')[exponent] return f"{num:.1f}{unit}{suffix}" I used for i in range(10): num = 3.8 * 1024**i print(sizeof_fmt_rev(num)) assert sizeof_fmt(num) == sizeof_fmt_rev(num) assert sizeof_fmt(-num) == sizeof_fmt_rev(-num) to test the revised version. Code As @AJNeufeld mentions in his comment, you could use map to save yourself some typing print( statement.format(*map(sizeof_fmt, (input_file_size, chunk_size, block_size, surrendered_space))) ) which is functionally equivalent to using a list comprehension: print( statement.format(*[ sizeof_fmt(i) for i in (input_file_size, chunk_size, block_size, surrendered_space) ]) ) Both build upon a technique called tuple unpacking, but as you can see it can also be used with lists, other sequences, and maybe also iterables (if it is a generator, it will be consumed - thanks @Graipher, who confirmed it/pointed it out in a comment).
{ "domain": "codereview.stackexchange", "id": 34969, "tags": "python, python-3.x, formatting" }
How can sex ratios remain Fisherian (1:1) in species where only the dominant male gets to mate
Question: In certain species only the dominant male gets to mate (or is given strong preference), and yet the sex ratio remains 1:1. (I'm thinking in particular of gorillas). How does this happen? It doesn't seem like Fisher's argument should apply in this case. Answer: Fisher's principle applies to such cases as much as it does to species where only pairs mate. Consider a species where a successful male has exclusive mating with a harem of 20 females, and for each such male, 19 other males are not able to mate. A female has 100% chance of mating, and a male has a 5% (1 in 20) chance of mating. Assume a female has two offspring. In this scenario, an equal sex ratio would mean having a female offspring would lead to an expected number of grand-offspring of 2 (100% chance of 2 offspring from that female). Having a male offspring would also lead to an expected number of 2 grand-offspring (95% times 0 offspring plus 5% times 20 harem females each having 2 offspring). What would happen if the ratio of births was off from 1 to 1, say 5 females were born for each male? Then the expected number of grand-offspring for a female is still 2, but for a male it is 25% (chance of taking a harem of 20 females against the 3 other males born alongside those 20 females) x 40 (20 harem matings x 2 offspring) = 10. Hence with a lopsided sex ratio in this situation, having a male is much more valuable genetically than having a female. If a mutant arose which produced more male offspring in this imbalanced situation, it would have success until the population sex ratio became close to 1:1.
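The arithmetic above can be sketched as follows (Python; harem size and litter size are the answer's example values, and the function name is my own):

```python
def expected_grandoffspring(females_per_male, harem=20, litter=2):
    """Expected grand-offspring from one female vs one male offspring, in a
    species where a male either wins a whole harem or never mates.
    females_per_male is the sex ratio at birth."""
    female_ev = litter                          # every female mates
    males_per_harem = harem / females_per_male  # males born alongside each harem
    p_win = 1 / males_per_harem                 # one of them takes the harem
    male_ev = p_win * harem * litter
    return female_ev, male_ev

print(expected_grandoffspring(1))  # (2, 2.0): equal payoffs at a 1:1 ratio
print(expected_grandoffspring(5))  # (2, 10.0): males far more valuable
```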
{ "domain": "biology.stackexchange", "id": 1240, "tags": "evolution, selection, sex-ratio" }
Animal UV vision
Question: It is reasonably well known that many species, such as bees and some types of birds, can see into the ultraviolet (UV). How is the structure of their eye different to humans to allow this? Also, how are they shielded from some of the harmful effects of ocular UV exposure? Answer: Firstly, most UV-perceiving organisms only perceive near UV (~300-400nm), which is less damaging. There are different opsins, which get activated by different wavelengths. Like other light, UV is perceived by opsins sensitive to UV wavelengths [ref]. I am not sure about this, but opsin sensitivity to UV must be high so that low irradiation is sufficient for perception and excess is filtered off. A UV filter mechanism is reported for the human and squirrel eye; it mostly uses kynurenine derivatives. Maybe a similar mechanism is present in other organisms too.
{ "domain": "biology.stackexchange", "id": 8891, "tags": "vision" }
understanding complex fft results
Question: I use this for complex FFT. Output expected $fft[3].real= 32$ (peak at 3rd bin) $fft[61].real= 32$ (peak at negative frequency pair of 3rd bin) All other values negligibly small The input is $ y.real = \sin (2\pi*3i/64)$ where $i = 0 \to 63$ $ y.imaginary = \sin (0*3i/64)$ where $i = 0 \to 63$ (all zero) The output I got $fft[3].imaginary = -32$ (peak at 3rd bin) $fft[61].imaginary = 32$ (peak at negative frequency pair of 3rd bin) All other values negligibly small This is the first time I am working with complex-input FFT. Can somebody explain to me why I am getting peaks in the imaginary part and not in the real part of the FFT? As per my understanding, doing a real FFT is nothing but using one half of the input as the real part input and the other half as the imaginary part input to the complex radix-2 FFT algorithm [I remember this from the John G Proakis textbook]. But I don't understand why this pseudo-complex signal produces this kind of output. Also, can someone explain to me the phase information of this pseudo-complex wave? Answer: Your expected output can be achieved when the input is as follows: real part = cos(2*pi*[0:63]*3/64) imag part = 0 So I think you have a couple of issues: You are missing a factor 1/64 in your description of the input above. You should use cos() instead of sin() for the real part of the input. Try fixing issue 1 in your post, and try fixing issue 2 in your code.
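A quick check of this (a pure-Python sketch using a naive O(N²) DFT, so it runs without any FFT library; with numpy one would use np.fft.fft instead): a sine input gives purely imaginary peaks of ∓jN/2, a cosine input gives purely real peaks of N/2.

```python
import cmath

def dft(x):
    """Naive DFT, enough to inspect a 64-point spectrum."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

N, k0 = 64, 3
sin_in = [cmath.sin(2 * cmath.pi * k0 * t / N) for t in range(N)]
cos_in = [cmath.cos(2 * cmath.pi * k0 * t / N) for t in range(N)]

S, C = dft(sin_in), dft(cos_in)
# sin input: imaginary peaks, -j*N/2 at bin 3 and +j*N/2 at bin 61
print(round(S[3].imag), round(S[61].imag))  # -32 32
# cos input: real peaks, N/2 at bins 3 and 61
print(round(C[3].real), round(C[61].real))  # 32 32
```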
{ "domain": "dsp.stackexchange", "id": 1604, "tags": "fft, fourier-transform, c" }
What is synaptic clearance?
Question: Please explain what the term synaptic clearance means. For example, what would dopamine synaptic clearance be? It is important to me in the context of dopamine signaling variation due to differences in synaptic clearance level. Some gene alleles are associated with greater synaptic clearance in dopamine pathways. This is used to explain dopamine level variation and, through it, differences in the activity of some brain areas (for example the Nucleus Accumbens) among people with different genotypes. For more information see for example. Also, reduced synaptic clearance is related to greater dopamine signaling levels. So I was interested in this term from the perspective of brain activity in areas with dopamine receptors. Answer: "Synaptic clearance" refers to the clearing of a neurotransmitter from a synaptic cleft. A synapse is a place where one neuron can stimulate another neuron. The tiny gap between the neurons is called the synaptic cleft. The presynaptic (stimulating) neuron releases a neurotransmitter (such as dopamine) into the cleft and some of the neurotransmitter is recognized by receptors in the membrane of the postsynaptic (stimulated) neuron. In order for the synapse to stimulate the postsynaptic neuron a second time, the concentration of the neurotransmitter in the cleft must be reduced to its original levels. This reduction is called clearance, and can occur in several ways (mainly dependent on the type of neurotransmitter). The neurotransmitter may be transported back into the presynaptic neuron for reuse; this is called reuptake. The neurotransmitter may also be degraded (broken down chemically) into an inactive form. To a lesser extent, neurotransmitter can also be absorbed by astrocytes or simply diffuse away out of the cleft.
{ "domain": "biology.stackexchange", "id": 5272, "tags": "neuroscience, neurophysiology, terminology, neurotransmitter, synapses" }
ros msg definitions, difference between std_msgs/TYPE and type
Question: Hello, What is the difference between using std_msgs/Float32 someVariable and float32 someVariable in msg files? I noticed the former must be accessed by my_data.someVariable.data whereas the latter can be accessed by my_data.someVariable. I am sending these messages over rosserial, so what differences are there when using std_msgs/Type in a msg file? Best, C. Originally posted by wintermute on ROS Answers with karma: 180 on 2019-07-17 Post score: 0 Answer: What is the difference between using std_msgs/Float32 someVariable and float32 someVariable in msg files? The former (std_msgs/Float32) is a message, the latter (float32) is a type. A message can be published. A bare type cannot. That is the main difference. The messages in std_msgs are essentially bare types wrapped in a message, such that they can be published and received. I noticed the former must be accessed by my_data.someVariable.data whereas the latter can be accessed by my_data.someVariable. As the messages in std_msgs are wrappers and not the types themselves, any field that is declared to be of type std_msgs/Float32 essentially embeds a message inside another message. As the base message std_msgs/Float32 already wraps a plain float32, to access the actual data, you first have to go through all the wrappers and finally get to the data field that contains the wrapped data type. For normal usage in custom messages, it doesn't make sense to use the wrapped version of these plain types, as the message you are creating already wraps them in a message (that's the whole point of creating the message). I would go so far as to say: using any message from std_msgs as the type of a field in a custom message is essentially wrong: use the plain type it wraps (so just use float32 instead of std_msgs/Float32 for your field). Finally: even though it's convenient they are there, the messages in std_msgs should only be used very sparingly if at all. 
Publishing a std_msgs/Float32 carries no semantics whatsoever. As a publication, it could be the temperature of your coffee. As a subscription, it could be the desired throttle value of your car. Both of these can be encoded with a float32. But connecting the two (ie: coffee_temperature -> engine_throttle) makes no sense. By using generic messages, there is no way to prevent setting up such connections. If you'd use two different message types with proper semantics, say ThrottleCommand and Temperature: it becomes impossible to easily connect them (ie: by remapping fi), and it's immediately clear that one of the float32s actually encodes a temperature and the other the desired throttle value and that these represent entirely different things Originally posted by gvdhoorn with karma: 86574 on 2019-07-17 This answer was ACCEPTED on the original site Post score: 1
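A minimal sketch of the two msg-file variants (the field name is from the question; the file names are hypothetical):

```
# WithWrapper.msg -- embeds the std_msgs/Float32 message
std_msgs/Float32 someVariable
# accessed in Python as: my_data.someVariable.data

# WithPlainType.msg -- uses the bare type (preferred)
float32 someVariable
# accessed in Python as: my_data.someVariable
```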
{ "domain": "robotics.stackexchange", "id": 33457, "tags": "ros, ros-melodic, std-msgs, msg, rosserial" }
Work with thermodynamical equilibrium condition
Question: A thermodynamic system being in thermodynamic equilibrium is characterized by the property that for every thermodynamic potential $F$ which describes the system, its differential $dF$ is zero. Let us consider for example the internal energy $U(S, V, N_i)$ for now. If the system is in thermodynamic equilibrium then $$dU= \frac{\partial U}{\partial S}dS + \frac{\partial U}{\partial V}dV + \sum_i^n \frac{\partial U}{\partial N_i}dN_i = TdS - pdV + \sum_i^n \mu_i dN_i = 0 $$ Note that $\frac{\partial U}{\partial S} =T, \frac{\partial U}{\partial V} =-p, \frac{\partial U}{\partial N_i} =\mu_i$. Question: How does this condition $dU=0$ help "in practice" when one works with concrete systems and wants to find out at which $(S_0, V_0, (N_i)_0)$ the system has its "equilibrium"? When I try to apply it I obtain something nonsensical and I want to understand which mistake I make here. Back to our condition: $dU=0$ implies that $\frac{\partial U}{\partial S} =T, \frac{\partial U}{\partial V} =-p, \frac{\partial U}{\partial N_i} =\mu_i$ should all be zero, because $S, V$ and $N_i$ are independent variables, and therefore so are the differentials $dS, dV$ and $dN_i$. So I obtain $n+2$ conditions $\frac{\partial U}{\partial S} =0, \frac{\partial U}{\partial V} =0, \frac{\partial U}{\partial N_i} =0$ But this does not make any sense to me, simply because it would imply that if the state is an equilibrium state, then its $T, p$ and $\mu_i$ are always all zero. But certainly there are thermodynamical systems which are in equilibrium but whose $T, p, \mu_i$ are not zero. I'm confused now; what am I doing wrong? Could anybody explain to me how to "read" and "work" with the condition $dU=0$ correctly? Sorry if my question is too easy for people with elementary knowledge of this topic, but even after a long search I found no answer. Answer: You have been misled by an incomplete statement of the equilibrium condition. 
The precise statement is that for a system at thermodynamic equilibrium under the condition of a fixed set of thermodynamic variables, the corresponding thermodynamic potential is minimum with respect to any additional variable representing a possible internal constraint. In practice, this means that if you have an isolated system characterized by fixed values of entropy, volume, and number of molecules, its internal energy $U(S,V,N)$ at equilibrium is minimum with respect to any other variable different from those determining the thermodynamic state. For example, if the system is in a container and a fixed, impenetrable and insulating wall is separating subsystem $1$ from subsystem $2$, this is equivalent to have two separate subsystems with energies $U_1(S_1,V_1,N_1)$ and $U_2(S_2,V_2,N_2)$. If the constraint on thermal insulation is relaxed, and heat can flow between the two subsystems varying $S_1$ and $S_2$ but without entropy production, $S_1+S_2=S$, then only one additional independent variable, say $S_1$, represents the constraint. The vanishing of the first order variation of the total energy with respect to $S_1$ $$ \frac{\partial{U(S,V,N,S_1)}}{\partial{S_1}}=0, $$ at fixed $S,V,N$, provides the condition for thermal equilibrium after removal of the constraint, i.e. the equality of the temperatures of the two subsystems.
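To make the removal-of-constraint step fully explicit, here is the calculation worked out in the notation above (with $V_1, N_1, V_2, N_2$ held fixed): since $S_2 = S - S_1$, we have $\partial S_2/\partial S_1 = -1$, so

```latex
% U(S_1) = U_1(S_1, V_1, N_1) + U_2(S - S_1, V_2, N_2) at fixed S, V, N
\frac{\partial U}{\partial S_1}
  = \frac{\partial U_1}{\partial S_1}
  + \frac{\partial U_2}{\partial S_2}\,\frac{\partial S_2}{\partial S_1}
  = T_1 - T_2 = 0
  \quad\Longrightarrow\quad T_1 = T_2 .
```

Note that the derivative is with respect to the internal variable $S_1$, not the fixed state variables $S, V, N$; this is why equilibrium forces the temperatures to be equal rather than zero.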
{ "domain": "physics.stackexchange", "id": 76209, "tags": "thermodynamics, work, equilibrium" }
The parking machine implementation
Question: I am interested in the topic of finite automata, so I started out by implementing a parking machine from this video: Computers Without Memory - Computerphile, to understand the concepts. I didn't do much error checking - I have made a minimal working version. Short description of the task, but for better understanding watch the video: You have a parking machine which is implemented as a finite automaton. The parking lot costs 25 pence. The machine takes only 5, 10, 20 pence coins. Thus, any combination of these coins can be used to pay for parking. After the sum of coins has reached 25 pence, the ticket is issued. The machine gobbles up all extra money above the 25 pence, like 20 + 20 pence = the ticket only, no change is given. Questions: What do you think about my approach? Is it optimal, extendable, suitable for real world applications, or does this technique have limiting factors so that another should be used? How would you implement this machine? It would be good to see other solutions. It is also possible to do this with an explicit state transition table; maybe I will do that as well. My target is to implement something more serious like a regex engine in the future, but I should read some books/articles first.
#!/usr/bin/python3

def get_coin(current_sum, *functions):
    print(f"Current sum = {current_sum} pence")
    coin = int(input("Insert coin\n"))
    if coin == 5:
        functions[0]()
    elif coin == 10:
        functions[1]()
    elif coin == 20:
        functions[2]()
    else:
        error(current_sum, functions)

def error(current_sum, functions):
    print("Error: only 5, 10, 20 pence coins are allowed\n")
    get_coin(current_sum, *functions)

def zero_pence():
    get_coin("0", five_pence, ten_pence, twenty_pence)

def five_pence():
    get_coin("5", ten_pence, fifteen_pence, twenty_five_pence)

def ten_pence():
    get_coin("10", fifteen_pence, twenty_pence, give_ticket)

def fifteen_pence():
    get_coin("15", twenty_pence, twenty_five_pence, give_ticket)

def twenty_pence():
    get_coin("20", twenty_five_pence, give_ticket, give_ticket)

def twenty_five_pence():
    input("Current sum = 25 pence, press the 'return' button to pay")
    give_ticket()

def give_ticket():
    print("""\nTake your ticket:
    Date: 1 November 2019
    Start time: \t20:21
    End time: \t22:21\n
    """)

def parking_machine():
    prompt = """\n\tThe parking machine.
    Information:
    1. The machine takes 5, 10, 20 coins.
    2. The machine doesn't give change.
    3. The parking costs 25 pence.
    Press the 's' button to start inserting coins.
    """
    prompt = '#' * 80 + prompt + '#' * 80 + '\n'
    while True:
        button = input(prompt)
        if button == 's':
            zero_pence()

parking_machine()

Answer: Not a finite state machine. The state machine in the posted code has infinitely many states. This is caused by the recursive calls of get_coin and error. Consider for example these states:
zero-pence with nothing entered yet
zero-pence, after zero-pence and an invalid input
zero-pence, after zero-pence and an invalid input twice
...
And so on. You can see these are different states, because the content of the stack is different. Theoretically there are infinitely many zero-pence, five-pence, etc., states. Practically there are finitely many, because after enough invalid inputs, the stack will eventually overflow.
There should be precisely one zero-pence state, one five-pence state, and so on. This is easy to fix by replacing the error function with a loop inside get_coin. There are too many states. Even after making the states finite, there will still be more states than required by the exercise. Why? Again, because of the stack. For example, there are two ways to reach 10 pence: 0 -> 5 -> 5 and 0 -> 10. At this point, the machine should be in the 10 pence state, but these two states are not the same, because the stack stores two different histories. If you want truly identical ten-pence, fifteen-pence, etc., states, you have to eliminate history, you have to eliminate the stack, you have to eliminate function calls. This is easy to fix by implementing state transitions in a loop, instead of through function calls. Don't use varargs if you don't need it. The get_coin function takes a variable number of functions as arguments. But in fact it will only ever use 3, and in fact it's only ever called with 3. Therefore, it would be more natural to give those parameters dedicated, descriptive names. Why convert input to int? The get_coin function converts input to int, but doesn't actually perform any numeric operations with it. Use a more portable shebang. Not all systems have Python 3 installed in /usr/bin/python3. A more robust, portable shebang would be: #!/usr/bin/env python3
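A minimal sketch of the table-driven loop suggested above (my own illustration, not the reviewer's code; TRANSITIONS and run_machine are invented names, and the interactive I/O is replaced by a coin list so the logic is testable):

```python
# Explicit state transition table: the state is the number of pence
# inserted so far.  Sums above 25 collapse to 25, because the machine
# swallows extra money and gives no change.
TRANSITIONS = {
    0:  {5: 5,  10: 10, 20: 20},
    5:  {5: 10, 10: 15, 20: 25},
    10: {5: 15, 10: 20, 20: 25},
    15: {5: 20, 10: 25, 20: 25},
    20: {5: 25, 10: 25, 20: 25},
}

def run_machine(coins):
    """Feed a sequence of coins; return True if a ticket is issued."""
    state = 0
    for coin in coins:
        if state == 25:          # accepting state already reached
            break
        if coin not in (5, 10, 20):
            continue             # invalid coin: stay in the same state
        state = TRANSITIONS[state][coin]
    return state == 25

print(run_machine([20, 20]))     # True: the extra 15 pence is gobbled up
print(run_machine([5, 10]))      # False: only 15 pence so far
```

There are exactly six states (0, 5, 10, 15, 20, 25), no recursion, and no stack growth, which is what makes this a genuine finite automaton.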
{ "domain": "codereview.stackexchange", "id": 36731, "tags": "python, python-3.x, state-machine" }
Fibre bundles and space-time
Question: I'm having some trouble understanding the concept for this, more than likely due to my lacking mathematical background. I am currently reading Roger Penrose's The Road to Reality, page 394 specifically if anyone has a direct reference. Spacetime ${\cal N}$ is described as being bundled with base space $\mathbb{E}^1$, time, and fibre $\mathbb{E}^3$. Now unless I've completely missed something, points in space are not equivalent at different times as described earlier in the text; instead each point in space for some particle under motion is on a different fibre $\mathbb{E}^3$. That's all well and good, now here's where my question lies. The structure of ${\cal N}$ is equivalent to the Galilean case ${\cal G}$ by a "sliding" of the $\mathbb{E}^3$ fibres. What exactly is meant by a sliding of these fibres, both mathematically and conceptually? Answer: I think you have misunderstood the text slightly. In figure 17.7, figure (a) shows a general Newton-Cartan spacetime with random gravitational fields. The trajectories of the freely moving particle worldlines are curves, and there is no global transformation that can simultaneously make them all straight. Figure (b) shows the special case where the gravitational field is uniform throughout all of space (though it can vary in time). That means at any instant in time every object in the associated bundle has the same acceleration ${\bf g}(t)$ (for some value of ${\bf g}(t)$). So we can use a coordinate transformation and switch to coordinates that are accelerating with a proper acceleration of $-{\bf g}(t)$. In these coordinates objects are not accelerating, so the space is equivalent to the Galilean space shown in figure (c). So when Penrose talks about sliding the bundles he means we use a coordinate transformation (that is time-dependent, so it's different for each fibre) to cancel out the acceleration.
But note again that it is only for the special case where the gravitational acceleration is independent of position in the space $E^3$. This is obviously physically unreasonable. You might be interested to read up about Rindler coordinates. In the relativistic world this is the coordinate transformation that eliminates the acceleration when the acceleration is independent of time.
{ "domain": "physics.stackexchange", "id": 22073, "tags": "general-relativity, differential-geometry, spacetime" }
Putting It Together: Where Do IDEs Come In With ROS Development?
Question: Hi, I went through the tutorials and I feel like I have a good understanding for ROS. I want to develop my own nodes and packages now. I installed QTCreator and I've been able to load/compile the beginner tutorials project. However, now I'm trying to put this all together. If one wants to create arbitrarily complex classes/nodes/packages/meta-packages... Do you always have to first make your package with catkin via terminal, and then open the project in QTCreator (or whatever your IDE is)? Do you always have to manually edit your CMakeList files to include the correct dependencies and whatnot? What happens when I want to debug a particular node that, for example, subscribes to some arbitrary topic? How can I possibly create that debugging environment from within QTCreator? Do I need to start with $roscore in terminal and then somehow get the debug session from QTCreator to talk to the master? What happens when I start stacking all these dependencies between nodes and I want to debug them together? I suppose I'm having difficulty seeing the "big picture". Where does the IDE come into play? Thanks for your help. Originally posted by trianta2 on ROS Answers with karma: 293 on 2013-10-31 Post score: 5 Original comments Comment by lindzey on 2013-10-31: I'm a happy emacs user, but am interested to see the discussion here. I assume you've already found This wiki page, and this question with a broader focus on IDEs, including some who love QTCreator. Comment by lindzey on 2013-10-31: Sorry - one more comment - if you're a ROS newbie, it might make sense to start with emacs or vim since there's lots of information out there assuming you're using terminal + text editor, and when you get more comfortable, figure out how to set up your IDE of choice (and document it on the wiki!) Answer: Just like with any code development, IDEs are not necessary, but they are a very powerful tool and help you save time. 
I've been using QTCreator for some time and there's no way I will go back to things like hunting down the file which contains compilation errors (double click on the error to go to that line), not having code autocompletion, and manual debugging via GDB. Linux purists may scoff at me, but this stuff lets me write better code by not wasting my time on the things that should be easy anyway. Now, to answer your specific questions: Theoretically, no, but catkin makes it easier. You do need to have the usual ROS files like CMakeLists.txt and manifest.xml in place - you can create these by hand or copy/paste and edit from a different project. However, the catkin buildsystem creates many more directories than rosbuild, so you're better off creating your package via catkin first. Yes. This is how you tell the buildsystem what your compilables and dependencies are, and where to look for them. Otherwise, the compiler/linker won't know what to do. 3 & 4. I do it the way you described with a combination of ROS_WARN/ROS_DEBUG printouts. I find this much easier than dealing with gdb directly, and it is very similar to, though not as powerful as, Visual Studio's debugger. I think it would be pretty hard to debug multiple nodes simultaneously. What you would do is simulate/replay your inputs and then debug one node at a time. See this and this answer for a way to use QTCreator. In general, you would run everything but the node you are trying to debug from the command line. Having set up QTCreator (in Projects->Run set your executable and command line parameters appropriately), you can then simply set your breakpoints and click on the debug button.
Edit: To answer about multiple mains - go to Projects in QtCreator and click on Run (note: I guess it's important to mention that QtCreator has to be started from a terminal in which the ROS environment is setup, I have the usual ROS setup in my ~/.bashrc file and then launch via qtcreator CMakeLists.txt&) QtCreator is smart enough to pick up the executables in your project (I believe it's doing it by reading the CMakeLists file). You can pick the exec you want to run from the list or you can add a path to any other exec. Here, I want to run prosilica_node from a specific directory: Now, if you have some other parameters to set: here in the arguments field I am remapping 2 topics. In addition, I am forcing this node to run in the /robot namespace by setting the ROS_NAMESPACE variable to robot Just open one of the existing stacks in QtCreator and look around in the Projects view. You should be able to change all of these yourself. Hope this helps. Originally posted by autonomy with karma: 435 on 2013-11-01 This answer was ACCEPTED on the original site Post score: 5 Original comments Comment by trianta2 on 2013-11-01: Hi Autonomy, Since you can have multiple nodes in a package, and each node is a main(), how do you tell QtCreator which main() to actually execute when you run? Comment by trianta2 on 2013-11-01: Also, is it possible to have rosmaster up and running, and then having the debug session from within QtCreator interact with rosmaster? How would one go about doing that? Comment by autonomy on 2013-11-01: I'm not sure I understand your second point - you mean roscore, and subscribing/publishing, correct? If so, then running your debug session from within QtCreator is no different than running it from the command line. Comment by trianta2 on 2013-11-01: I have been having difficulty getting my talker (from the beginner tutorials) to register with rosmaster. I have my terminal (in QtCreator options) set to "/usr/bin/xterm -e" is this a problem? 
Comment by trianta2 on 2013-11-01: I ran QtCreator from terminal (so ROS has now been sourced), then set my terminal within QtCreator to "gnome-terminal -e", and now I can run both my listener and talker from QtCreator! I can even interact with them via separate terminal! Comment by autonomy on 2013-11-01: Glad you got it working. Out of curiosity, where is the terminal setting? I usually source the ROS setup.sh file in my ~/.bashrc and also export ROS_PACKAGE_PATH in there, so all my terminals work. Comment by trianta2 on 2013-11-01: I found the terminal setting in Tools (the top main bar that keeps auto-hiding annoyingly), Options, Environment.. under the General tab you will see the Terminal field. Comment by trianta2 on 2013-11-01: By the way, thank you for your thorough answer in the edit above. Feel free to look at my other question ;) http://answers.ros.org/question/96921/design-question-where-does-the-business-logic-go/
{ "domain": "robotics.stackexchange", "id": 16023, "tags": "ros, ide, qtcreator" }
Why does a pair of complex-conjugate zeros provide a nulling filter? (FIR filter case)
Question: Consider an FIR filter having a pair of complex-conjugate zeros that lie on the unit circle, with zeros of the form: $$ z_1 = e^{j\omega_i}\qquad\text{and}\qquad z_2 = e^{-j\omega_i} $$ and transfer function: $$ H(z) = \left(1 - z_1z^{-1}\right)\left(1 - z_2z^{-1}\right) = 1 - 2\cos\left(\omega_i\right)z^{-1} + z^{-2} $$ It has the following pole-zero map, and magnitude & frequency response graph: But what is the intuitive idea behind that? And how will the locations of the zeros affect the result? Thank you! Answer: The unit circle on the z-plane represents the frequency axis, similar to the imaginary axis $j\Omega$ on the s-plane for the Laplace Transform in the continuous time case. So the frequency response of the system is given by $H(z)$ when $z= e^{j\omega}$ with $\omega$ going from $0$ to $2\pi$, representing the normalized fractional radian frequency (which is the continuous time radian frequency $2\pi f$ divided by the sampling rate $f_s$). That said, any zero on the unit circle will create a null in the frequency response. With the OP's case of complex conjugate zeros (resulting in a real response), two nulls would result as shown. The location of the zero, if on the unit circle, is the fractional radian frequency where $H(z) = 0$, thus called a "zero". If the zero is not on the unit circle, the null will not reach zero, but it will be deeper the closer the zero is to the unit circle for that frequency. This may be clearer from the plot below showing the frequency response for a 2 point moving average filter, which has a zero at $z= -1$. The frequency response is $H(z)$ as $z$ sweeps over the unit circle, thus giving a numerator magnitude as the difference between $z$ at any point on the unit circle and the zero location: $z-q_z$ (or the product of multiple such magnitudes if there is more than one zero), and a denominator magnitude given by the same for the pole locations: $z-q_p$. In this case the pole is at the origin, so $|z-q_p|=1$ for all $z=e^{j\omega}$.
What also should now be clear is how the resulting phase response is formed, since the net phase will be the difference between the phase of the numerator and the phase of the denominator (phases subtract in the division of complex numbers). This type of nulling filter (zero-only) is not very effective given the gradual roll-off in frequency. To achieve very sharp nulls, place a pole very close to the zero; the closer the pole, the sharper the response! Given that all poles must be inside the unit circle for a stable causal linear time invariant system, the magnitude of the pole would therefore be less than but close to 1. This IIR approach is further detailed here: Transfer function of second order notch filter Also this is an excellent write-up by Richard Lyons on linear phase nulling (or notch) filters that do provide a sharp notch with an FIR approach. This could similarly be translated to provide a notch at any frequency: https://www.dsprelated.com/showarticle/58.php
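A quick numeric check of the zero-placement argument above (a sketch; the choice $\omega_0 = \pi/4$ for the null frequency is arbitrary):

```python
import cmath
import math

def notch_response(omega0, omega):
    """H(z) = 1 - 2*cos(omega0)*z^-1 + z^-2 evaluated on the unit
    circle at z = e^{j*omega}; the zero pair sits at e^{+/- j*omega0}."""
    z = cmath.exp(1j * omega)
    return 1 - 2 * math.cos(omega0) * z**-1 + z**-2

omega0 = math.pi / 4
print(abs(notch_response(omega0, omega0)))   # ~0: exact null at the zero frequency
print(abs(notch_response(omega0, math.pi)))  # ~3.41: large response far from the null
```

At $\omega = \omega_0$ the factor $(1 - e^{j\omega_0}z^{-1})$ vanishes identically, so the magnitude is zero to machine precision; elsewhere the response rolls off only gradually, which is exactly the weakness the pole-near-zero IIR trick fixes.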
{ "domain": "dsp.stackexchange", "id": 8946, "tags": "filters, discrete-signals, signal-analysis, filter-design, finite-impulse-response" }
How has the nucleon structure been identified experimentally?
Question: It is known that nucleons (proton, neutron) are composed of partons (quarks, etc.). How was this identified experimentally? In particular, how was it identified that nucleons comprise more than one constituent? Answer: Matt Strassler goes into detail with LHC data here: http://profmattstrassler.com/articles-and-posts/largehadroncolliderfaq/whats-a-proton-anyway/checking-whats-inside-a-proton/
{ "domain": "physics.stackexchange", "id": 2577, "tags": "experimental-physics, particle-physics" }
Why electronegativity instead of electropositivity
Question: When I learnt about polarity, I always came across the term electronegativity and always used the electronegativity chart. However, when I studied further, I encountered the word electropositivity. So I've been wondering why we use electronegativity more often than electropositivity. And why do we have an electronegativity chart instead of an electropositivity chart? Answer: The first thing that should be highlighted here is that electropositivity is simply the opposite of electronegativity; either of the two can be used interchangeably with the necessary modifications to the sentence. It is true, though, that electronegativity is more commonly used than electropositivity. For example, the Wikipedia article on the subject is electronegativity, and mentions electropositivity as its opposite. I believe there are two reasons for that, related to each other. First: Consistency. It's very important in the scientific community, for reasons that range from facilitating students' understanding to the writing of papers, that standards are used, and that everyone is talking about the same thing. It's easier if everyone is talking about the same thing, even if the two terms are complete opposites. It just makes us think faster. Why choose electronegativity then? Second: How we explain chemistry. It's unnecessary to explain why and how the study of the electrons' "behaviour" in atoms and molecules is important to chemistry. Usually, when explaining a phenomenon, we talk about where the electrons are "going to". We say a reaction occurs because an electron is taken by an atom, or maybe because it's donated by one. We explain the polarity of the H-O bond by saying that the oxygen will attract the electrons of the covalent bond a lot more than hydrogen will, and not (usually) that the hydrogen doesn't attract the electrons as much as oxygen does.
Again, it's obvious that you can explain everything by means of electropositivity, but, in my experience, we usually explain things by saying which atom/group has the property that makes electrons go to them with higher value, rather than saying which atom/group has the property that makes electrons leave them with higher value.
{ "domain": "chemistry.stackexchange", "id": 9677, "tags": "polarity, electronegativity" }
How to calculate the increase in temperature due to a drop?
Question: Question- Calculate the rise in temperature in celcius in a bucket of water after it is dropped from 50 m where acceleration due to gravity is 10. I know that I need to find the amount of energy absorbed and then find out the increase in temperature using the specific heat of water but cant do it as the mass isn't mentioned. This is a question in the book of my coaching center. I am a seventh grader. Thanks. Answer: Well I hate answering my own question but here goes- V^2 = u^2 + 2as => V^2 = 1000 Assuming mass 1kg, Kinetic energy = 1/2 × mass × velocity^2 => 500 joules => 1200 cal Hence increase in temp = 0.12 ○ Celcius
{ "domain": "physics.stackexchange", "id": 25845, "tags": "homework-and-exercises, thermodynamics, gravity, energy, temperature" }
What is escape velocity? In reality, how can something no longer be under the gravitational influence of something else?
Question: Isn't G a continuous function and although you leave the immediate vicinity of the earth with an escape velocity won't it always exert a force, however small it may be. Won't that force eventually pull the object back to the earth (assuming the absence of other objects) Answer: It's easiest to think of this problem in terms of energy. At launch the object of mass m has certain amount of kinetic energy corresponding to its velocity v \begin{equation} E_k = \frac{mv^2}{2} \end{equation} and certain amount of gravitational potential energy corresponding to its separation r from the central body of mass M \begin{equation} E_p = -\frac{GMm}{r} \end{equation} As the object moves away from the central body, it slows down under the influence of the gravitational field, so its kinetic energy depletes while its gravitational potential energy increases since it is now further away from the central body. Kinetic energy is transformed into gravitational potential energy and the total mechanical energy is conserved. The critical realization is that the amount of kinetic energy ΔEk the object loses as it moves a unit of distance further away from the central body becomes smaller as r grows. This is due to the fact that decrease in kinetic energy ΔEk is equal to the increase in gravitational potential energy ΔEp and Ep's growth slows down considerably as r increases as can easily be seen on the chart of Ep against r. At infinity Ep reaches 0, which means that as r makes infinite change, Ep grows only by a finite value. This fact explains why it is possible to depart to infinity using a finite amount of kinetic energy. If an object's initial kinetic energy is smaller than that finite amount then it will fall back. If it is equal or larger then it will depart to infinity. Let's consider some concrete numbers. Imagine a 1kg spacecraft 10,000km away from the center of the Earth speeding away with velocity 8.929km/s. 
Its kinetic energy is 39.8MJ and its gravitational potential energy is -39.8MJ. When the object reaches 11,000km distance from Earth's center, its velocity will have decreased to 8.513km/s due to gravitational pull of the Earth and its kinetic energy has dropped to 36.2MJ. At the same time its separation from the center of the Earth grew to 11,000km so its gravitational potential energy is now -36.2MJ. Thus on the first 1,000km the drop in kinetic energy (and increase in gravitational potential energy) was 3.6MJ. As the object continues on to 12,000km its velocity drops to 8.151km/s and its kinetic energy to 33.2MJ. At the same time its gravitational potential energy increases to -33.2MJ. This time, the drop in kinetic energy (and corresponding increase in gravitational potential energy) is only 3MJ. For each subsequent 1,000km the drop in kinetic energy becomes smaller and smaller. If you continue this way summing up all infinite decreases in kinetic energy that the body will experience as it departs from Earth, you'll find that the total kinetic energy dropped by 39.8MJ to 0J while the gravitational potential energy increased from -39.8MJ to 0J. At the same time though, distance increased by ∞km.
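These numbers can be reproduced from energy conservation alone (a sketch; GM is Earth's standard gravitational parameter, and the function name is my own):

```python
import math

GM = 3.986e14   # Earth's gravitational parameter, m^3/s^2
m = 1.0         # kg, as in the example above

def speed_at(r, r0, v0):
    """Speed at radius r, given speed v0 at radius r0, from
    conservation of E = m*v^2/2 - GM*m/r."""
    e_total = 0.5 * m * v0**2 - GM * m / r0
    return math.sqrt(2 * (e_total + GM * m / r) / m)

v0, r0 = 8929.0, 1.0e7                 # 8.929 km/s at 10,000 km
print(speed_at(1.1e7, r0, v0))         # ~8513.5 m/s at 11,000 km
print(speed_at(1.2e7, r0, v0))         # ~8151.1 m/s at 12,000 km
```

The total energy here is only about +3.5 kJ, i.e. barely above the escape threshold of zero, which is why the speed decays toward zero as $r \to \infty$ instead of the object falling back.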
{ "domain": "physics.stackexchange", "id": 2030, "tags": "newtonian-mechanics, newtonian-gravity" }
Orange 3 Heatmap clustering under the hood
Question: I have recently used the heatmap widget in Orange 3. All the documentation says is "Clustering (clusters data by similarity)". Is this using hierarchical or k-means or some other type of clustering? On that note, is there a way to look at the code being run by all the widgets to see what's going on under the hood? It would be nice if, after you finish the workflow, you would get a file with the script run to perform the analysis. Answer: It appears the widget uses hierarchical clustering. I guess the metric is Euclidean distance by default and there doesn't seem to be a way to specify another one (except by using the Distances widget and connecting it into the Distance Map widget). I don't think it is possible to export the widget's workflow as pure code, but you can look at what the widget does in the source code (seems pretty low-level, though). What you can do, however, is select subsets of data (which can be saved with the Save Data widget) for further analysis if that's of any help.
{ "domain": "datascience.stackexchange", "id": 661, "tags": "orange" }
Small and large extra dimension(s) of the physical space
Question: I'm trying to make sense of small and large extra dimension(s) of physical space in a simple intuitive example. Consider a two dimensional manifold like $\mathbb{R}^2$ and suppose we are trying to add a small and a large extra dimension. Do we mean by small extra dimension in this case something like $(0,1) \times \mathbb{R}$ (the flat case) or $S^1 \times \mathbb{R}$ (the curved case)? Do we mean by large extra dimension something like $\mathbb{R^2} \times \mathbb{R}=\mathbb{R}^3$? Do we mean, in the case of our three dimensional space, that we basically have our physical three dimensional space as a base space, with a total space built by adding a fiber, thus creating a fiber bundle, or even more generally an arbitrary total space? Does the extra dimension need to be real, or can we even consider complex manifolds when adding an extra dimension to the physical space, for example $\mathbb{C} \times \mathbb{R^3}$ or (Riemann surface) $\times \mathbb{R^3}$? Answer: The words large and small are determined by a metric tensor/characteristic length scale. Small dimensions can typically not be detected by current experiments. Large dimensions are typically of cosmological size; however, see e.g. the ADD model. The topology of extra dimensions need not be compact: e.g. the metric tensor could in principle contain a warp factor that makes the extra dimension effectively small. The full spacetime is not necessarily a total space of a fiber bundle. Extra dimensions may have a complex structure, such as e.g. a Calabi-Yau manifold.
{ "domain": "physics.stackexchange", "id": 93424, "tags": "spacetime, dimensional-analysis, topology, spacetime-dimensions, compactification" }
Ampere's Law and Permanent Magnets
Question: I'm studying electrical engineering and am just learning Ampere's Law. I'm ok with a simple conductor carrying a current fully enclosed by the integration path: However, say for example there was a permanent magnet in the vicinity of the conductor, so there is more flux present than just that from the current; wouldn't Ampere's law effectively say there was more current present than there actually is? Answer: What is called Ampere's Law is the integral form of the 1st Maxwell equation: $$\text{curl} \textbf{H} = \textbf{J} \tag{1}\label{1}$$ Using Stokes's theorem for the vector differential operator $\text{curl}$, Maxwell's 1st differential law is shown to be the same as the integral form, i.e., Ampere's law: $$\oint_\mathcal{L} \textbf{H}\cdot \textbf{dl} = \int_\mathcal{A} \textbf{J}\cdot \textbf{da} \tag{2}\label{2}$$ Here $\mathcal{L}$ is any simple closed loop in 3-space that is a boundary to some simple smooth surface $\mathcal{A}$, whose elementary line element $\textbf{dl}$ or surface element $\textbf{da}$ is assigned a vector that is proportional to the loop's tangent or the surface's outward normal, respectively. The equivalence of the differential and integral forms, $\eqref{1}$ and $\eqref{2}$, is to be understood as being analogous to the fundamental theorem of calculus: differentiation and integration are dual operations of each other, where one describes a local, the other a global, property of the function (field). In both cases you have to write down all the fields including all the sources. What this means is that you must decompose the current as $\textbf{J}=\textbf{J}_c+\textbf{J}_b$. Here $\textbf{J}_c$ and $\textbf{J}_b$ denote the "true" flowing conduction current density and the "bound" current density within the permanent magnets, respectively.
Now if you break it down that way, $$\oint_\mathcal{L} \textbf{H}\cdot \textbf{dl} = \int_\mathcal{A} \textbf{J}_c\cdot \textbf{da} +\int_\mathcal{A} \textbf{J}_b\cdot \textbf{da} \tag{3}\label{3}$$ you can see that if you select an integrating loop $\mathcal{L}$ bounding a surface $\mathcal{A}$ through which only true current flows, i.e., $\textbf{J}_b = 0$, you only have the contribution of the true currents in the integral, $\oint_\mathcal{L} \textbf{H}\cdot \textbf{dl} = \int_\mathcal{A} \textbf{J}_c\cdot \textbf{da}$. This does not mean that $\textbf{H}$ is the same with or without the bound currents; instead, when the magnet is brought in, the field changes so that the contour integral stays the same as long as its surface does not cross the permanent magnet that is the source of the bound currents. It can be shown that the bound currents are related to the magnetic polarization of the permanent magnets as $\textbf{J}_b=\text {curl}\textbf{M}$ and $\textbf{B}=\mu _0 (\textbf{H}+\textbf{M})$
{ "domain": "physics.stackexchange", "id": 39585, "tags": "electromagnetism, magnetic-fields" }
about venturi/bernuolli- what amount of pressure do I need to lift x amount of water?
Question: I'm trying to build a venturi pump/tee in which I want air to create a suction which lifts a liquid (like they do in paint sprayers), let's say water. My question is basically what is the minimal PSI of air I need to have in my tube in order to lift 1 kg of water 1 meter up. I have no idea how to calculate this and would really appreciate your help. Thank you very much! Answer: The most important parameter in a Venturi is the flow velocity, not the pressure at the inlet. And the mass of the water doesn't come into it - it's the pressure difference (height difference) you need to worry about. In your case, 1 m height difference of water requires a pressure difference of $10^4$ Pa (0.1 atm, since 10 m of water is 1 atm). If you are using air in your venturi, $\Delta P = \frac12 \rho v^2$. You can find the velocity by solving: $$v^2 = \frac{2\Delta P}{\rho}$$ $$v = 128 m/s$$ That seems pretty fast - but if you use a constricted nozzle, it is not so hard to achieve. For example, if you start with a tube that is 1 cm diameter and you constrict it to 2 mm diameter, the change in area is 25x, so you need the inlet air to flow at approximately 5 m/s.
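Both calculations can be spelled out as follows (a sketch; the sea-level air density of 1.225 kg/m^3 and g = 10 m/s^2, matching the round numbers above, are assumptions):

```python
import math

rho_water = 1000.0   # kg/m^3
g = 10.0             # m/s^2, rounded as in the answer (10 m of water ~ 1 atm)
lift = 1.0           # m of water to lift

delta_p = rho_water * g * lift       # 1e4 Pa pressure difference needed

rho_air = 1.225                      # kg/m^3, sea-level air (assumed)
v_throat = math.sqrt(2 * delta_p / rho_air)   # ~128 m/s at the constriction

# A 1 cm -> 2 mm nozzle shrinks the cross-sectional area by (10/2)^2 = 25x,
# so by continuity the inlet only needs v_throat / 25 ~ 5 m/s.
v_inlet = v_throat / 25

print(round(v_throat), round(v_inlet, 1))
```

Note the suction depends on the dynamic pressure $\frac12\rho v^2$ of the air at the throat, not on the gauge pressure (PSI) supplied at the inlet, which is the point the answer is making.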
{ "domain": "physics.stackexchange", "id": 21604, "tags": "bernoulli-equation" }
Sorting on non-linear topology
Question: Disclaimer. What I'm going to ask about below may seem to be "Topological sorting". To my understanding, it is not. The latter runs in linear time, while I'm looking for a modification of the regular sort, which is at $N\log N$. Remark. I'm doing the sort in the reverse order, to use the analogy with heights and masses. Let us reformulate the sorting problem in the following way. Let us introduce two sets: A set of sites (positions) $s_a,\,a=1..N$. Those are assigned certain 'heights' $h[s_a]$, which in the case of usual sorting can be simply $h[s_a]=a$. A set of objects (numbers) $x_i,\,i=1..N$ which are assigned numerical values $m[x_i]$ (according to which the sorting will be performed). In the case of usual sorting, $m[x_i]$ are the numbers to be sorted. The problem of sorting can be viewed as assigning to each object $x_i$ a site $s_a$ in such a way that for all assignments $\{x_i\to s_a; x_j\to s_b\}$: $$ h[s_a] > h[s_b] \Longrightarrow m[x_i]\leq m[x_j] $$ The physical visualization of this problem is a vertical stack of massive balls. Sorting them is equivalent to arranging them in such a way that heavier ones have lower height. I first wanted to generalize the problem to "higher dimensions", i.e. to assume that the balls can stay in a $2$-dimensional grid, in a bucket, etc. However, I realized that from the computational point of view these settings are equivalent. In other words, whether we have a $2d$ grid with $nm$ balls in each row or a box with $n\times m$ balls in each layer makes no difference. So, I decided to generalize the problem to an arbitrary topology (which can be reduced to the $2$-dimensional problem actually). A set of sites (positions) $s_a,\,a=1..N$. Those are assigned certain 'heights' $h[s_a]$. Repetitions are allowed: $h[s_a]=h[s_b]$ for $a\neq b$ is OK. A set of objects (numbers) $x_i,\,i=1..N$ which are assigned numerical values $m[x_i]$. 
Again, we want to assign to each object a site in accordance with the same requirement as above. Clearly, the brute-force solution for the most general set of 'heights' and 'masses' runs in $N \log N$. One simply sorts the objects according to their 'mass', and then fills the sites, starting from the 'highest' or the 'lowest' one. Question. Has any research been done on particular cases? Clearly, if all the 'heights' are equal, the problem is trivial (which, I guess, means that it is solved in linear time). The first thing coming to my mind would be considering the case of "levels of the same length", $$ h[s_a]=\lceil a/r\rceil,\,r\in\mathbb{N} $$ I would actually expect that this can be done faster than $N\log N$ (I'd assume that $r$ should enter the answer). Answer: If there are $k$ distinct heights, you can sort in $O(n \log k)$ time using a heap or balanced binary tree. Even better, you can sort with expected time $O(n + k \log k)$ if you use a hashtable. This leads to a solution to your problem with the corresponding complexity.
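The $O(n + k \log k)$ idea from the answer can be sketched in a few lines (names and test data are illustrative, not from the question): count duplicates with a hash table, sort only the $k$ distinct keys, then expand the counts back out.

```python
from collections import Counter

# Sketch of the O(n + k log k) approach: when the values take only k
# distinct levels, sorting the k keys dominates only for large k.
def sort_few_distinct(values):
    counts = Counter(values)          # O(n) expected, via hashing
    out = []
    for v in sorted(counts):          # O(k log k): sort distinct keys only
        out.extend([v] * counts[v])   # O(n) total expansion
    return out

masses = [3, 1, 3, 2, 1, 3, 2, 1]     # k = 3 distinct values
print(sort_few_distinct(masses))      # [1, 1, 1, 2, 2, 3, 3, 3]
```

For the "levels of the same length" case, the same ordering of objects then fills the sites level by level.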
{ "domain": "cs.stackexchange", "id": 13286, "tags": "graphs, sorting, search-algorithms" }
Doppler broadening of nuclear cross section: Peak reduction
Question: What is the cause of the reduction in peak microscopic cross section with increasing temperature, shown here for a nuclear resonance? The nuclear properties of the target material do not actually change with motion, so why does the peak reduction of the target's microscopic cross section ("effective area") suggest that they do? I understand the need to broaden with respect to relative energy of neutron-target, but not to simultaneously reduce the absolute values of the curve itself. Taking it to an extreme like T-->infinity, I can see that if Doppler broadening were forced to be anchored to the peak, and the curve were merely expanded outward in energy, then the neutron would be considered always to be within the resonance - and, in fact, at the peak energy - which is the opposite of what is intended. Still, when considering just one neutron at a time - say, in a Monte Carlo simulation - if the target's motion is sampled independently from a temperature dependent distribution (e.g. Maxwellian) it would seem appropriate to use zero Kelvin cross sections. Is this true? Answer: It's important to understand first that free-atom cross sections are not temperature dependent. That is, cross sections are the same for the same relative speed between a neutron and a target atom. This means that, for the case of a Monte Carlo simulation of a single particle, if the relative speed between the particle and the target is determined explicitly, then it is appropriate to sample the 0K cross sections. This is distinct from when we perform a calculation - perhaps deterministic transport, or Monte Carlo without determining the target speed by sampling from a distribution - in the laboratory frame of reference, where the projectile (neutron) has a fixed speed, but the target has a distribution of speeds (e.g. Maxwellian). In this case, we have to average the cross section weighted by the relative speed (energy) distribution surrounding the projectile energy. 
This averaging reduces the peak while (approximately) preserving the area under the curve, which is why the reaction rate per neutron is preserved.
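The peak reduction can be illustrated with made-up numbers (not real nuclear data): take a Lorentzian as a stand-in for a 0 K resonance and average it over a Gaussian spread of relative energies, mimicking the thermal motion of the target atoms. The peak drops while the area under the curve is nearly unchanged:

```python
import numpy as np

# Toy illustration only: Lorentzian "resonance" averaged over a Gaussian
# spread of relative energies (the averaging kernel plays the role of the
# thermal velocity distribution; its width grows with temperature).
E = np.linspace(-10.0, 10.0, 4001)          # energy offset from resonance (arb.)
dE = E[1] - E[0]
gamma = 0.5                                 # resonance half-width (arb.)
sigma0 = 1.0 / (1.0 + (E / gamma) ** 2)     # 0 K cross section

width = 1.5                                 # Doppler width (arb.)
kernel = np.exp(-0.5 * (E / width) ** 2)
kernel /= kernel.sum()                      # normalized averaging weights

sigma_T = np.convolve(sigma0, kernel, mode="same")   # broadened cross section

peak0, peakT = sigma0.max(), sigma_T.max()
area0, areaT = sigma0.sum() * dE, sigma_T.sum() * dE
print(peak0, round(peakT, 3))            # the peak drops well below 1.0 ...
print(round(area0, 2), round(areaT, 2))  # ... while the area is nearly unchanged
```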
{ "domain": "physics.stackexchange", "id": 69678, "tags": "nuclear-physics, nuclear-engineering" }
What angle - for a strut - provides the greatest vertical strength/support for a cantilever?
Question: I want to affix a cantilever to a wall. I will support the other end of the cantilever with a strut made of wood that attaches to some point on the wall below the cantilever, as shown in this sketch: At what angle will the strut provide the greatest vertical strength/support for the free end of the cantilever? Answer: Assumptions: The angle between the wall and the strut is $\theta$. $a$ is the depth of the table top. $P$ is the weight on the table top, applied at the edge furthest from the wall. The strut will fail when it buckles, which implies $F_{\text{max}}=\frac{\pi^2EI}{L^2}$, where $L$, $E$ and $I$ are the length, the elastic modulus, and the moment of area, respectively, of the strut. Analysis: The axial force on the strut will be $F=\frac{P}{\cos\theta}$. The length of the strut will be $L=\frac{a}{\sin\theta}$. Combining both equations with the equation for buckling we have: $(EI)_{\text{required}}=\frac{Pa^2}{\pi^2\sin^2\theta \cos\theta}$. $EI$ is the stiffness of the strut. The most efficient strut will be one for which $(EI)_{\text{required}}$ is minimized. The lowest $(EI)_{\text{required}}$ occurs when $\sin^2\theta \cos\theta$ is maximized, and that is when $\theta=\sin^{-1}\sqrt{\frac{2}{3}}$, so the most efficient angle is $\theta\approx54.7^{\circ}$
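The optimum can be cross-checked numerically: minimizing the required $EI$ is the same as maximizing $f(\theta)=\sin^2\theta\cos\theta$, and a brute-force scan lands on the closed-form value $\sin^{-1}\sqrt{2/3}\approx 54.74^{\circ}$.

```python
import math

# Grid search for the theta maximizing sin^2(theta)*cos(theta),
# i.e. minimizing the required strut stiffness EI.
best_theta, best_f = 0.0, 0.0
for i in range(1, 9000):
    th = math.radians(i * 0.01)          # 0.01-degree grid over (0, 90)
    f = math.sin(th) ** 2 * math.cos(th)
    if f > best_f:
        best_theta, best_f = th, f

print(round(math.degrees(best_theta), 2))                  # ~54.74 degrees
print(round(math.degrees(math.asin(math.sqrt(2 / 3))), 2)) # closed form, same
```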
{ "domain": "engineering.stackexchange", "id": 5324, "tags": "structures, statics, stresses, wood" }
Generic function for loading a function from a DLL library
Question: I am trying to write a generic function for loading a function from a DLL library. I am in no way an expert on DLLs, which is why I ask. We first discussed it in my Stack Overflow question to discover the most obvious errors, and there were some crucial ones. The current version of my code follows: function LoadFunctionFromLibrary(const LibraryName, FunctionName: string; out FunctionPointer: Pointer): Boolean; var LibraryHandle: THandle; begin Result := False; FunctionPointer := nil; LibraryHandle := Winapi.Windows.LoadLibrary(PChar(LibraryName)); if LibraryHandle = 0 then Exit; FunctionPointer := Winapi.Windows.GetProcAddress(LibraryHandle, PChar(FunctionName)); if Assigned(FunctionPointer) then Result := True; end; This version of the function has been successfully tested on my EnableInput function: function EnableInput(const Enable: Boolean): Boolean; var BlockInput: function(Block: BOOL): BOOL; stdcall; begin Result := LoadFunctionFromLibrary('User32.dll', 'BlockInput', @BlockInput) and BlockInput(not Enable); end; Answer: Just like Dangph, I wonder why you need this. There are other, better approaches suggested in the previous comments. Even if this is useful, your solution is not great: 1. You repeatedly load the library and the function. 2. There is no way for you to free the loaded libraries.
Here is a better way to do it: type TFunctionLoader = class private FLibraries: TStrings; // This stores the library handles and names FFunctions: TStrings; // This stores the function pointers and names public constructor Create; destructor Destroy; override; function LoadFunction(const LibraryName, FunctionName: string; out FunctionPointer: Pointer): Boolean; end; { TFunctionLoader } constructor TFunctionLoader.Create; begin FLibraries := TStringList.Create; FFunctions := TStringList.Create; end; destructor TFunctionLoader.Destroy; var i: Integer; begin for i := 0 to FLibraries.Count - 1 do // free all opened libraries Windows.FreeLibrary(THandle(FLibraries.Objects[i])); FLibraries.Free; // free other objects FFunctions.Free; end; function TFunctionLoader.LoadFunction(const LibraryName, FunctionName: string; out FunctionPointer: Pointer): Boolean; var i: Integer; LibraryHandle: THandle; begin i := FFunctions.IndexOf(FunctionName); // Is the function already loaded? if i >= 0 then begin // Yes, just return the stored pointer FunctionPointer := Pointer(FFunctions.Objects[i]); Exit(True); end; i := FLibraries.IndexOf(LibraryName); // No, test if the library is already loaded if i < 0 then begin // No, load it and store in FLibraries LibraryHandle := Windows.LoadLibrary(PChar(LibraryName)); if LibraryHandle = 0 then Exit(False); // Failed, quit i := FLibraries.AddObject(LibraryName, TObject(LibraryHandle)); end; // Load the function from the library FunctionPointer := Windows.GetProcAddress(THandle(FLibraries.Objects[i]), PChar(FunctionName)); Result := Assigned(FunctionPointer); // succeeded? if Result then // Add the function to FFunctions FFunctions.AddObject(FunctionName, TObject(FunctionPointer)); end; With above you can define var FunctionLoader: TFunctionLoader; somewhere and add this to the end of the unit initialization FunctionLoader := TFunctionLoader.Create; finalization FunctionLoader.Free; end. 
Use it as Result := FunctionLoader.LoadFunction('User32.dll', 'BlockInput', @BlockInput) and BlockInput(not Enable); All functions and libraries will be loaded only once and will be automatically freed.
{ "domain": "codereview.stackexchange", "id": 31360, "tags": "functional-programming, library, delphi" }
How to set a model's position using /gazebo/set_model_state service in python
Question: I want to set a model's position to origin after each measure loop, so that I can automatically make several measurements under the same conditions. The /gazebo/set_model_state service enables us to transport the model to a set position. How do I use it in a Python node? Originally posted by kumpakri on Gazebo Answers with karma: 755 on 2019-03-20 Post score: 1 Answer: The code below shows a node written in Python that will set the my_robot model's position to the origin of the map. The service requires a message of type gazebo_msgs::ModelState. import rospy import rospkg from gazebo_msgs.msg import ModelState from gazebo_msgs.srv import SetModelState def main(): rospy.init_node('set_pose') state_msg = ModelState() state_msg.model_name = 'my_robot' state_msg.pose.position.x = 0 state_msg.pose.position.y = 0 state_msg.pose.position.z = 0.3 state_msg.pose.orientation.x = 0 state_msg.pose.orientation.y = 0 state_msg.pose.orientation.z = 0 state_msg.pose.orientation.w = 1 rospy.wait_for_service('/gazebo/set_model_state') try: set_state = rospy.ServiceProxy('/gazebo/set_model_state', SetModelState) resp = set_state( state_msg ) except rospy.ServiceException, e: print "Service call failed: %s" % e if __name__ == '__main__': try: main() except rospy.ROSInterruptException: pass Originally posted by kumpakri with karma: 755 on 2019-03-20 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 4392, "tags": "python, ros-kinetic, gazebo-7" }
Drawing phasor diagrams when $x$ is the sum of two cosine terms
Question: If $$x= A\cos(w t) + A\cos(W t),$$ how do I draw a phasor diagram when $t=2$? Do I treat each cosine term as a vector, and then do vector addition? I know I have to differentiate to get the velocity/acceleration parts, but I'm just unsure about dealing with the sum of the two cosine terms. $w=4\pi$ and $W=5\pi$, by the way. Answer: The only thing you have to ensure when using phasors is that the real (or imaginary, depending on the convention used) part of the phasor reduces to the original entity, i.e. $$Re[\phi]=x$$ And since the real-part operator is linear, you can easily check that the real part of the sum of the individual phasors reproduces your x, as you correctly speculated: $$\phi' = Ae^{iwt}+Ae^{iWt}$$ Now you can omit a common oscillation by dividing by, say, $Ae^{iwt}$ to get $$\phi = 1+e^{i(W-w)t}$$
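The check that the real part of the phasor sum reproduces $x$ can be done directly with the question's numbers (taking $A=1$ for illustration). At $t=2$, both $wt=8\pi$ and $Wt=10\pi$ are whole turns, so the two phasors point the same way and $x=2A$:

```python
import cmath
import math

# Illustration with A = 1 (assumed), w = 4*pi, W = 5*pi, t = 2.
A = 1.0
w, W = 4 * math.pi, 5 * math.pi
t = 2.0

phasor = A * cmath.exp(1j * w * t) + A * cmath.exp(1j * W * t)
x_direct = A * math.cos(w * t) + A * math.cos(W * t)
print(round(phasor.real, 6), round(x_direct, 6))   # both 2.0: Re[phi'] = x

# Factoring out the common oscillation exp(i*w*t) leaves the relative phasor
relative = 1 + cmath.exp(1j * (W - w) * t)
print(round(relative.real, 6))   # 2.0: at t = 2 the two terms are in phase
```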
{ "domain": "physics.stackexchange", "id": 17624, "tags": "homework-and-exercises, waves" }
TCP Socket Wrapper
Question: I'm trying to build a simple server software for training purpose, most likely a IRC server, but I'm not there yet. I'm currently implementing a TCP socket class, to ease the use of the C socket API. It uses the addrinfo struct and getaddrinfo() (which add some constraints). TcpSocket.hpp #ifndef TCPSOCKET_H #define TCPSOCKET_H #include <netdb.h> #include <string> #include <memory> namespace Socket { /** * TcpSocket class implementation to facilitate the use of sockets */ class TcpSocket { public: TcpSocket(); TcpSocket(int family, int flags); TcpSocket(int socket, addrinfo info, bool connected, bool bound); virtual ~TcpSocket(); //Avoiding copy TcpSocket(const TcpSocket &socket) = delete; TcpSocket &operator=(const TcpSocket &socket) = delete; void bind(int port); void connect(std::string adress, int port); void listen(int maxQueue); std::shared_ptr<TcpSocket> accept(); void send(const char *data, unsigned int length, int flags); /** * Receive data (blocking) *@return true if socket is still open, false otherwise */ bool receive(char* msg, int len, int flags); void close(); private: void setInfo(int port); void setInfo(std::string adress, int port); void openSocket(addrinfo *info); addrinfo * mInfo; int mSock = -1; bool mSockCreated = false; bool mBound = false; bool mConnected = false; bool mClosed = false; }; } #endif TcpSocket.cpp (Please don't mind the DEBUG function, this is going away) #include "TcpSocket.hpp" #include <sys/types.h> #include <sys/socket.h> #include <netdb.h> #include <errno.h> #include <unistd.h> #include <cstring> #include <string> #include "exceptions.hpp" //DEBUG #include <iostream> #define DEBUG_ACT true namespace Socket { void DEBUG(std::string message) { if(DEBUG_ACT) std::cout << "DEBUG : " << message << std::endl; } // Public : TcpSocket::TcpSocket() { mInfo = new addrinfo; memset(mInfo, 0, sizeof *mInfo); mInfo->ai_family = AF_UNSPEC; mInfo->ai_socktype = SOCK_STREAM; //Can't create socket now. 
} TcpSocket::TcpSocket(int family, int flags) { mInfo = new addrinfo; memset(mInfo, 0, sizeof *mInfo); mInfo->ai_family = family; mInfo->ai_socktype = SOCK_STREAM; mInfo->ai_flags = flags; if(family == AF_UNSPEC) //Can't create socket now return; mSock = socket(mInfo->ai_family, mInfo->ai_socktype, 0); if(mSock == -1) { SocketCreationException except(std::string(strerror(errno))); throw except; } //Socket succesfully "opened". mSockCreated = true; } TcpSocket::TcpSocket(int socket, addrinfo info, bool bound, bool connected) : mSock(socket), mBound(bound), mConnected(connected) { mInfo = new addrinfo(info); } TcpSocket::~TcpSocket() { if(!mClosed) close(); freeaddrinfo(mInfo); } void TcpSocket::bind(int port) { if(mBound && mConnected) throw SocketBindingException("Already bound"); setInfo(port); addrinfo * result; for(result = mInfo; result != NULL; result = mInfo->ai_next) { if(!mSockCreated) { try { openSocket(result); } catch(SocketCreationException &e) { continue; } } //Socket sucessfully opened from here if( ::bind(mSock, result->ai_addr, result->ai_addrlen) == 0) { mBound = true; return; } } //Couldn't bind, throw throw SocketBindingException("Can't bind to port"); } void TcpSocket::connect(std::string address, int port) { if(mConnected) throw SocketConnectException("Already connected"); setInfo(address, htons(port)); addrinfo * result; for(result = mInfo; result != NULL; result = mInfo->ai_next) { if(!mSockCreated) { try { openSocket(result); } catch(SocketCreationException &e) { continue; } } //Socket sucessfully opened from here if( ::connect(mSock, result->ai_addr, result->ai_addrlen) == 0) { mConnected = true; return; } } //Couldn't connect, throw throw SocketConnectException("Can't connect to host"); } void TcpSocket::listen(int maxQueue) { if( ::listen(mSock, maxQueue) != 0) throw SocketListenException(std::string(strerror(errno))); DEBUG("Listening..."); } std::shared_ptr<TcpSocket> TcpSocket::accept() { DEBUG("Starting to accept"); union { sockaddr 
addr; sockaddr_in in; sockaddr_in6 in6; sockaddr_storage s; } address; socklen_t addressSize = sizeof (sockaddr_storage); int newSock; if( (newSock = ::accept(mSock, (sockaddr*)&address.s, &addressSize)) == -1) throw SocketAcceptException(std::string(strerror(errno))); DEBUG("1 client accepted"); addrinfo info; memset(&info, 0, sizeof info); if(address.s.ss_family == AF_INET) { info.ai_family = AF_INET; info.ai_addr = new sockaddr(address.addr); } else { info.ai_family = AF_INET6; info.ai_addr = new sockaddr(address.addr); } return std::shared_ptr<TcpSocket>(new TcpSocket(newSock, info, true, false)); } void TcpSocket::send(const char *data, unsigned int length, int flags) { const char * buff = data; int status = 0; int total_sent = 0; int left_to_send = length; while(total_sent < length) { status = ::send(mSock, buff + total_sent, left_to_send, flags); if(status == -1) { throw SocketSendException(std::string(strerror(errno))); } else { total_sent += status; left_to_send -= status; } } } bool TcpSocket::receive(char* msg, int len, int flags) { int status; if( (status = ::recv(mSock, msg, len, flags)) == -1) throw SocketReceiveException(std::string(strerror(errno))); else if(status == 0) return false; return true; } void TcpSocket::close() { if( ::close(mSock) == -1) throw SocketCloseException(std::string(strerror(errno))); else mClosed = true; } // Private : void TcpSocket::setInfo(int port) { setInfo("null", port); } void TcpSocket::setInfo(std::string address, int port) { const char *charAddress; if(address == "null") charAddress = NULL; else charAddress = address.c_str(); addrinfo hints = *mInfo; int status; if( (status = getaddrinfo(charAddress, std::to_string(port).c_str(), &hints, &mInfo)) != 0) { delete charAddress; throw SocketException("getaddrinfo returned non-zero : " + std::string(gai_strerror(status))); } delete charAddress; } void TcpSocket::openSocket(addrinfo *info) { mSock = socket(info->ai_family, info->ai_socktype, info->ai_protocol); if(mSock == 
-1) { SocketCreationException except(std::string(strerror(errno))); throw except; } } } main.cpp (usage example, you don't need to review this file) #include "TcpSocket.hpp" #include <iostream> #include <exception> #include <cstring> #include <memory> using namespace Socket; using Socket_p = std::shared_ptr<TcpSocket>; int main(int argc, char *argv[]) { Socket_p sock(new TcpSocket); Socket_p client; try { sock->bind(1170); sock->listen(5); client = sock->accept(); } catch( std::exception &e) { std::cout << e.what() << std::endl; } //Welcoming the new user. client->send("Welcome !\n\f", 15, 0); //Closing the listening socket, we want nobody else. sock->close(); char data[512]; memset(&data, 0, 512); while( client->receive(data, sizeof data, 0) ) { client->send(data, sizeof data, 0); memset(&data, 0, 512); } client->close(); return 0; } What is good is that it's all-purpose, and it seems to function well. However, the use of so many booleans looks a bit sloppy to me, but it seemed like the best way to avoid common problems. I'm also wondering whether this all-purposeness is that good, because it seems like this class can't be extended or specialized. What do you think of my approach? I'm also interested in any security concerns you may have, as this is the first time I have worked with the C standard library. Answer: The code is generally clear, readable and easy to understand, which is great. However, you can still improve a few things. Generally speaking, you don't want to manage memory manually like you're currently doing for mInfo. There is an easy way to make it managed effortlessly. Create the following type: struct addrinfo_delete { void operator()(addrinfo* ptr) const { freeaddrinfo(ptr); } }; Now, instead of storing an addrinfo* field in your TcpSocket class, you can make it store an std::unique_ptr<addrinfo, addrinfo_delete> field instead and let it manage the memory by itself. Call it addrinfo_ptr and you will probably want to use it in some other places where you need an addrinfo*.
Moreover, having an std::unique_ptr pointer member won't make you lose anything feature-wise since you already explicitly made your class uncopyable in the first place. When possible, use std::make_shared<Foo>(bar); instead of std::shared_ptr<Foo> foo(new Foo{bar});. It will prevent the new/delete visual mismatch and it can also save some reference counting. Therefore, instead of defining a Socket_p alias, I would be as explicit as possible and use std::shared_ptr<TcpSocket> directly. @Zeks' advice is really good: you can pack your boolean flags into an std::bitset<4> to make your class more compact. You shouldn't have to write client->close(); at the end of your main since your class already does that in its destructor. Let the destructor do its job and it will be easier for everyone. It is good practice to always fully qualify every component from the standard library. For example, use std::memset instead of memset. Not only can it avoid some name clashes, but it will also make it easier to search for std:: if you need to know which components from the standard library you're using. Whenever you use control flow statements like if, try to always use curly braces, even when you have a single statement following them. It will make it easier to add more statements (debug statements, for example) and avoid problems of the Apple goto fail kind. Another good practice is to always use nullptr instead of NULL. While it doesn't matter most of the time, when it does you're glad you used nullptr everywhere.
{ "domain": "codereview.stackexchange", "id": 15081, "tags": "c++, c++11, socket, server, tcp" }
How to download Sequencing data on Windows using SRA toolkit?
Question: I have downloaded and installed the SRA toolkit, but there seems to be no online material on how to download SRR files using the SRA toolkit on Windows. All online materials are Ubuntu- or Linux-based. Is there some obvious resource that I am missing? Answer: I found a solution to this problem. After having installed the SRA toolkit: prefetch -v SRR9....1 This downloads the SRA file. Then, to extract it as FASTQ: fasterq-dump SRR9....1
{ "domain": "bioinformatics.stackexchange", "id": 2627, "tags": "sratoolkit, sra" }
Find drag force on link of rotating chain
Question: Given a closed chain with a total length of 1.2m rotating at 1'800 rpm and a total mass of 0.4kg, what is the drag force pulling on one chain link? I originally thought that since no link size was given I needed to assume the link sizes to be infinitely small. Thanks to the answers below I now know this won't work. Yet I am still baffled as to how I could calculate the drag force without it; sure, I could give a function of the drag force that depends on the link size, but looking at the parameters given I think I should be able to calculate the actual force. Here is a video of a very similar experiment to the one we conducted and are asked to describe now: http://www.univie.ac.at/elearnphysik/video/PhysikI/rotKette_648x480.flv I am glad for any hints and explanations. Edit: Rewrote question to match exactly the problem description Answer: I will assume in this answer that "drag" means tension. You are asked to find the tension in the chain as it is rotating. This is independent of the link size, so long as the links are not a significant fraction of the circumference. If you have a hoop of mass density per unit length $\rho$ and circumference C (so that $\rho C = M$ where M is the total mass), rotating with angular velocity $\omega$, the centripetal force on a segment of length l is its mass times the angular velocity squared times the radius $R = C/2\pi$, or $$ F_c = \rho l \omega^2 {C\over 2\pi} $$ If the chain is at tension T, the segment subtends an angle $l/R = 2\pi l/C$, so its two endpoints pull inward with a net force of $$ {2\pi T l\over C} $$ Setting the two forces equal, the l drops out (as it must) and gives the tension: $$ T = \rho \omega^2 \left({C\over 2\pi}\right)^2 = M \omega^2 {C\over 4\pi^2} $$ Equivalently, $T = (M/C)\,v^2$ with $v = \omega C/2\pi$ the speed of the chain. For 1800 rpm we have $\omega = 2\pi\cdot 30\ \mathrm{s^{-1}} \approx 188.5\ \mathrm{s^{-1}}$; with $M=0.4\ \mathrm{kg}$ and $C = 1.2\ \mathrm{m}$ this is about 432 N.
{ "domain": "physics.stackexchange", "id": 3994, "tags": "homework-and-exercises, classical-mechanics, newtonian-mechanics, forces, centripetal-force" }
Is there a public record of planetary disks apart from ours?
Question: Our planetary disk is not aligned with the rest of the galaxy. Are there publicly available records of the orientations of any other planetary disks? I'm especially interested in nearby stars - but I'd expect those to be easier for us to measure? Answer: A useful resource for protoplanetary discs is the recent DSHARP survey using the ALMA telescope. Its spatial resolution is good enough that the orientation (i.e. position angle and inclination) can be derived from the images. See here.
{ "domain": "astronomy.stackexchange", "id": 4912, "tags": "exoplanet" }
Finding all possible pairs of square numbers in an array
Question: I am writing a program that allows me to find all possible pairs of square numbers including duplicates. We can also assume the array elements to be positive integers only, e.g. an array of {5,25,3,25,4,2,25} will return [5,25],[5,25],[2,4],[5,25] since 25 is the square of 5. Currently, I am using a nested for loop to find the squares. I'm just wondering if there is a better way to do this? import java.lang.Math.*; public static void main(String args[]) { int arr[] = {5,25,3,25,4,2,25}; String s = ""; for(int i =0; i < arr.length;i++) { for(int j = 0;j < arr.length;j++) { if(Math.sqrt(arr[i]) == arr[j]) { s += arr[j] + "," + arr[i] + " "; } } } System.out.println(s); } Answer: Avoid string addition String addition is not good for building up strings from many pieces inside of loops. You should use StringBuilder instead. StringBuilder sb = new StringBuilder(); // ... omitted ... sb.append(arr[j]).append(',').append(arr[i]).append(' '); // ... omitted ... String s = sb.toString(); System.out.println(s); Avoid square-roots Checking \$\sqrt x == y\$ is ... dangerous. The result of the square root is a floating-point number, and may not exactly equal your integer value. If you have a negative number in your list, Math.sqrt() silently returns NaN, so the valid pair { -5, 25 } would be missed. Testing x == y*y is safer, as long as there is no danger of y*y overflowing. Avoid repeated calculations for(int i =0; i < arr.length;i++) { for(int j = 0;j < arr.length;j++) { if(Math.sqrt(arr[i]) == arr[j]) { ... In the inner loop, i is constant. Yet you are computing Math.sqrt(arr[i]) every time through the loop. The value is not changing, so you could compute it once, outside of the inner loop. for(int i =0; i < arr.length;i++) { double sqrt_arr_i = Math.sqrt(arr[i]); for(int j = 0;j < arr.length;j++) { if(sqrt_arr_i == arr[j]) { ... 
Pairs need distinct indices If your input contains a single 0 or 1, it will mistakenly report that it has found a pair, since 0 == 0*0 and 1 == 1*1. You can protect against this by adding i != j && to your test. If the input contains two 0's (or two 1's), your algorithm will emit 4 pairs: [first,first], [first,second], [second,first], and [second,second]. Adding the i != j guard will eliminate the first and last of those pairs, but it will still declare two pairs: [first,second], [second,first] since first² = second and first = second² would both be true. You'd have to weigh in on whether these count as two distinct pairs or not. Formatting Consistent and liberal use of white space is always recommended. Add white space after every semicolon inside of for(...), on both sides of operators (i = 0 not i =0), and after every comma in {5,25,3,25,4,2,25}. With the above recommendations, your function body would become: int arr[] = { 5, 25, 3, 25, 4, 2, 25 }; StringBuilder sb = new StringBuilder(); for(int i = 0; i < arr.length; i++) { for(int j = 0; j < arr.length; j++) { if(arr[i] == arr[j] * arr[j]) { sb.append(arr[j]).append(',').append(arr[i]).append(" "); } } } String s = sb.toString(); System.out.println(s); Additional considerations You have a trailing space in your resulting string. There are several tricks you can use to remove it. However, an interesting alternative is to use StringJoiner: StringJoiner sj = new StringJoiner(" "); // ... omitted ... sj.add(arr[j] + "," + arr[i]); // ... omitted ... String s = sj.toString(); System.out.println(s); When StringJoiner adds the second and successive strings, it automatically adds the delimiter specified in the constructor.
{ "domain": "codereview.stackexchange", "id": 35313, "tags": "java, algorithm, array" }