"Early" and "late" nomenclature regarding O, B & K, M stars
Question: The Wikipedia article Stellar Classification has a subsection "Early" and "late" nomenclature. It says: Stars are often referred to as early or late types. "Early" is a synonym for hotter, while "late" is a synonym for cooler. My lecture notes say: O and B stars are sometimes referred to as 'early-type', while K and M are 'late-type'. If I look at an HR diagram, I see that main-sequence O, B stars are hotter than main-sequence K, M stars. So they are burning their fuel at a greater rate than K, M stars, and they run out of it early. So the stars from the early universe which we are still able to observe now are K, M stars, rather than O, B, which presumably formed rather *late*. Yet this is not consistent with the terminology above, so something is obviously wrong here. What is wrong with my argument that the words early and late should be used the other way round? Answer: What's wrong with your argument is that you are using modern theories of stellar evolution. The terminology was introduced in the context of the theory that stars are powered by the Kelvin–Helmholtz mechanism (heat generated via contraction due to gravity). In this model, stars would start out hot and gradually cool down as they evolved; hence hot stars are called "early-type" and cool stars are called "late-type". This theory was shown to be inadequate because it can only power the Sun for tens of millions of years, while geological evidence indicated that the Earth had been around for far longer than this. The resolution was the discovery of nuclear fusion. However, by this point the terminology of "early-type" and "late-type" stars had become established, and it survives to this day. (Incidentally, the terms "early-type" and "late-type" are also used for galaxies, where once again they do not correspond to modern theories of how these objects actually evolve.)
{ "domain": "astronomy.stackexchange", "id": 4988, "tags": "stellar-evolution, stellar-classification, hr-diagram" }
WRF/Chem: Editing NetCDF (wrfchemi files) for own emissions
Question: Does anybody here have experience with editing the wrfchemi outputs from the prep_chem utility, so that I can overwrite the file with my own emissions? I am knowledgeable in Python, but I am not well oriented with the manipulation of NetCDF files. What I want to do, basically, is use my own emissions inside the WRF/Chem model. I have a shapefile of gridded emissions, with each grid cell having a value for the emissions. I want to convert this to a format readable by WRF/Chem; however, I think editing the wrfchemi files produced by prep_chem would be easier? (Tell me if not.) I have read the WRF/Chem emissions guide, but it does not really expound on the details, especially the following:

1. How to deal with the speciation (my shapefile has only PM10 and PM2.5), and where exactly to place this in the new (edited) wrfchemi emissions file. There are many PM species there (like PM2.5 nucleation mode, PM, etc.). I am not sure what weights to put on these.
2. My emissions are not time dependent; what do I do about this when I edit the wrfchemi file? (The wrfchemi files I have now are also not time dependent.)

Hoping someone could help out with this. Thanks!

Answer: You can try the R package eixport, with wrf_put:

    # Read the emissions array
    CO <- wrf_get(file = "Path_to_WRFCHEMI", name = "E_CO")
    # Change the values; here you should use your data
    CO[] = rnorm(length(CO))
    # Inject your emissions into the wrfchemi
    wrf_put(file = "Path_to_WRFCHEMI", name = "E_CO", POL = CO)

How to deal with the speciation (my shapefile has only PM10 and PM2.5), and where exactly to place this in the new (edited) wrfchemi emissions file. There are many PM species there (like PM2.5 nucleation mode, PM, etc.). I am not sure what weights to put on these.

You need to know the speciation for your locality. At my department at Uni. of São Paulo, we use the following speciation of PM2.5 (g/h/km^2): e_so4i = 0.0077, e_so4j = 0.0623, e_no3i = 0.00247, e_no3j = 0.01053, e_pm2.5i = 0.1, e_pm2.5j = 0.3, e_orgi = 0.0304, e_orgj = 0.1296, e_eci = 0.056, e_ecj = 0.024, h2o = 0.277. Check this.

My emissions are not time dependent; what do I do about this when I edit the wrfchemi file? (The wrfchemi files I have now are also not time dependent.)

You need to first edit namelist.wps and namelist.input to the desired length of time. Then, after running ./real.exe, you will have the wrfinput_d0x files. At this stage you can use wrf_create, which creates a wrfchemi file filled with zeros. Read the manual to see whether you want two files (0-12z and 12-0z) or one file covering all hours. Then you can use wrf_put; just follow the example and read the manual. You might also try the R package EmissV.

References:
Ibarra-Espinosa et al. (2018). eixport: An R package to export emissions to atmospheric models. Journal of Open Source Software, 3(24), 607, https://doi.org/10.21105/joss.00607
Schuch et al. (2018). EmissV: an R package to create vehicular and other emissions for air quality models. Journal of Open Source Software, 3(30), 662, https://doi.org/10.21105/joss.00662
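Since the question mentions Python, the speciation split above can also be applied there before anything is written into the wrfchemi file. The sketch below only distributes one total PM2.5 value across the species listed in this answer; the weights are copied verbatim from the list above (they are specific to São Paulo, so substitute weights valid for your region), and the function name is illustrative, not a WRF/Chem API:

```python
# PM2.5 speciation weights quoted above (Uni. of São Paulo values).
# Assumption: these are illustrative; use weights measured for your region.
SPECIATION = {
    "e_so4i": 0.0077,  "e_so4j": 0.0623,
    "e_no3i": 0.00247, "e_no3j": 0.01053,
    "e_pm25i": 0.1,    "e_pm25j": 0.3,
    "e_orgi": 0.0304,  "e_orgj": 0.1296,
    "e_eci": 0.056,    "e_ecj": 0.024,
    "h2o": 0.277,
}

def speciate_pm25(total_pm25, weights=SPECIATION):
    """Split one gridded total-PM2.5 emission value across PM species."""
    scale = sum(weights.values())  # this particular set already sums to 1.0
    return {species: total_pm25 * w / scale for species, w in weights.items()}
```

Each returned value would then be written into the matching variable of the wrfchemi file, e.g. with eixport's wrf_put on the R side, or a NetCDF library on the Python side.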
{ "domain": "earthscience.stackexchange", "id": 1982, "tags": "weather-forecasting, wrf, netcdf, wrf-chem, emissions" }
Why don't we study spin-3/2 fields?
Question: I only did one QFT course, so there may be something obvious I'm missing. Studying quantum fields, some of the easiest examples are the bosonic scalar $\phi$ and vector $A_\mu$ fields, and the fermionic spinor $\psi$ field. The book by Peskin and Schroeder also mentions the (probably) bosonic tensor $g_{\mu\nu}$ for gravity. Are there theories regarding higher-order fermionic fields, for example with spin 3/2? If not, why not? If yes, how would one build such a theory? Answer: If not, why not? Because we have not yet observed a fundamental particle with spin $3/2$. If yes, how would one build such a theory? The Rarita–Schwinger equation.
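For concreteness, one common textbook form of the free massive Rarita–Schwinger system (a sketch; conventions and the choice of auxiliary constraints vary between references) is a Dirac-type equation for a vector-spinor field $\psi_\mu$:

$$(i\gamma^\mu \partial_\mu - m)\,\psi_\nu = 0, \qquad \gamma^\nu \psi_\nu = 0,$$

where the constraint projects out the unwanted spin-1/2 components of the vector-spinor, leaving only the genuine spin-3/2 degrees of freedom.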
{ "domain": "physics.stackexchange", "id": 98703, "tags": "quantum-field-theory, field-theory, fermions" }
Connection between Mongodb and UR5 Nodes?
Question: Hi all, recently I have been working with the Universal Robot stack. When I tried to test the demo.launch file present in the ur5_moveit_config package, everything works fine, but then I tried to check the inter-connectivity between the different nodes by typing:

    $ rosrun rqt_graph rqt_graph

This command opens the rqt window and displays the inter-connectivity between all the nodes. However, the mongodb_wrapper node does not seem to be connected to any of the other nodes. I would like to understand how one can utilize this database. Anyone who has worked on this with other robots is also requested to share the details. As I understand it, this db is used to store planning information; please correct me if I am wrong. Thanks and regards, Murali Originally posted by MKI on ROS Answers with karma: 246 on 2013-11-27 Post score: 0 Answer: Maybe the mongodb_wrapper is used through service calls and not topics (which would make sense in my opinion...). These will not be visible in rqt_graph. Originally posted by Ben_S with karma: 2510 on 2013-11-27 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by MKI on 2013-11-27: Thank you for replying @Ben_S. Would it be possible to access mongodb like other databases? I want to understand the relationship between 'planning' a pose and how it is 'stored and retrieved'.
{ "domain": "robotics.stackexchange", "id": 16284, "tags": "ros" }
The effect of pH and solubility on coagulation in water treatment
Question: In water treatment, a specific pH range must be met in order for the process of coagulation to occur properly. Sources have stated this is due to the pH affecting the solubility of the coagulant, but why is solubility significant? Answer: To start, I assume you are speaking of metal salt coagulants such as aluminum sulfate (alum) and ferric chloride, as these are the most commonly used. As you have stated, their solubility is affected by pH, the simplest representation of the reaction for alum being: $$\ce{Al^3+ + 3H2O <=> Al(OH)3 (s) + 3H+}$$ Small particles in the water such as clay or bacteria are too small to settle out on their own. The purpose of coagulation is to either cause the smaller particles to aggregate and settle or to trap them during precipitation. Coagulation occurs via three main mechanisms: (1) sweep, where the metal salts precipitate (i.e., solubility) and capture particles/contaminants like a net; (2) adsorption/charge neutralization, where the positive charges on the Al and Fe cations neutralize the negatively charged particles (e.g., clay, bacteria), causing them to aggregate and settle out; (3) adsorption and bridging, where a polymer coagulant can be used to "bridge" together like-charged particles, causing them to aggregate and settle out. All are affected by solubility, but sweep is the mechanism that uses precipitation of the metal salts to remove particulates. The mechanism you will want to choose will depend on the source water and the coagulant. The mechanisms are controlled by pH and solubility. Below are the solubility diagrams for alum and ferric chloride, with solubility represented on the left y-axis, dose on the right y-axis, and pH on the x-axis. As you will see, the shaded areas are the regions for achieving optimal coagulation, depending on the mechanism you desire. For example, if your laboratory tests showed that sweep coagulation was the best for your source water and you chose alum, then you would want to be at a pH between ~7 and 8 and an alum dose of ~15 to 60 mg/L (the "optimal sweep" circle). This pH and dose would result in the desired solubility behavior to achieve optimum sweep conditions. Figure 9-11 from MWH Water Treatment: Principles and Design, 3rd Ed (Crittenden)
{ "domain": "chemistry.stackexchange", "id": 10399, "tags": "water, solubility, ph, water-treatment" }
OSX indigo opencv linking problem (cv_bridge, image_view)
Question: I'm trying to install indigo from source on a mac recently upgraded to yosemite (10.10). I've had a few issues but managed to build everything up to cv_bridge. cv_bridge has a linking error with opencv:

    Linking CXX shared library /.../devel_isolated/cv_bridge/lib/python2.7/site-packages/cv_bridge/boost/cv_bridge_boost.so
    Undefined symbols for architecture x86_64:
      "cv::Exception::Exception(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int)", referenced from:
          NumpyAllocator::allocate(int, int const*, int, int*&, unsigned char*&, unsigned char*&, unsigned long*) in module_opencv2.cpp.o
    ld: symbol(s) not found for architecture x86_64
    clang: error: linker command failed with exit code 1 (use -v to see invocation)
    make[2]: *** [/.../devel_isolated/cv_bridge/lib/python2.7/site-packages/cv_bridge/boost/cv_bridge_boost.so] Error 1
    make[1]: *** [src/CMakeFiles/cv_bridge_boost.dir/all] Error 2
    make: *** [all] Error 2

Based on some older answers, it looked like this was a libc++ vs libstdc++ issue (http://answers.ros.org/question/95056/building-rosconsole-osx-109/), so after adding:

    SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -stdlib=libstdc++")
    SET(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -stdlib=libstdc++")

cv_bridge compiled fine, as did everything up to image_view. image_view also had an opencv linking error:

    Linking CXX executable /.../image_view/lib/image_view/disparity_view
    Undefined symbols for architecture x86_64:
      "cv::imwrite(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, cv::_InputArray const&, std::__1::vector<int, std::__1::allocator<int> > const&)", referenced from:
      ... many more opencv missing symbols ...
Making the same change to the CMakeLists for image_view, however, causes a linking error against the ROS core libraries:

    Linking CXX executable /.../image_view/lib/image_view/disparity_view
    Undefined symbols for architecture x86_64:
      "ros::init(int&, char**, std::string const&, unsigned int)", referenced from:
          _main in disparity_view.cpp.o

I must be missing something here, as it seems other people are having success with indigo on OSX. Also, just to be clear, I have opencv 2.4.5 from homebrew-science. Any help is appreciated. Thanks! Originally posted by Mike Shomin on ROS Answers with karma: 43 on 2015-03-26 Post score: 0 Answer: I uninstalled opencv2 and used brew install opencv --devel to get opencv 3. It got rid of the errors for that library... I am having trouble with the stereo_image_proc library though. Originally posted by Mr. CEO with karma: 143 on 2015-03-26 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Mike Shomin on 2015-03-26: Thanks for the tip. Trying opencv 3 now. Seems to fix things, but also brings a new problem. image_view now complains about a missing function: /.../image_nodelet.cpp:145:35: error: use of undeclared identifier 'cvGetWindowHandle' Comment by Mr. CEO on 2015-03-26: I fixed that... Let me see what I did Comment by Mr. CEO on 2015-03-26: This is a copy of my file. https://gist.github.com/automagically/c844457eeb739759fbca Comment by Mike Shomin on 2015-03-26: Gotcha, looks like you just skipped that bit if opencv3. After a little opencv 3 name change in rqt_image_view, I am now up to image_proc. Did you happen to run into a missing need for libopencv_contrib.dylib? Comment by Mr. CEO on 2015-03-26: I don't think I did.... Comment by Mr. CEO on 2015-03-26: I assume you are doing the full install Comment by Mr. CEO on 2015-03-26: I have decided to do the barebones install and add the libraries I need. It is what I should have done in the first place. If you need help I will still attempt to help you.
Comment by Mike Shomin on 2015-03-26: I'm trying for full... but I'm having to skip some things. I really need rviz though. Through hacking, I've gotten there. Now I'm having OGRE linking problems. Tried to install ogre 1.9 through brew, but having a problem. Comment by Mr. CEO on 2015-03-27: Paste the error and I will try to help (I won't be able to answer for a couple of hours)
{ "domain": "robotics.stackexchange", "id": 21263, "tags": "opencv, ros-indigo, osx" }
Why Runge-Kutta for Quaternion integration in Kalman filter?
Question: I'm reading up on Kalman filtering at the moment. In particular, I'm interested in using the "extended" and "unscented" variants for IMU sensor fusion and calibration. In A comparison of unscented and extended Kalman filtering for estimating quaternion motion, quaternions are used to represent 3d rotation. I understand unit quaternions can be used to represent a 3d rotation. They are suitable for representing absolute attitude (a rotation from a universal reference), relative rotation, or angular velocity (a rotation representing a rate per second, or some other fixed time period). However, this paper discusses using Runge-Kutta integration, specifically RK4. It uses RK4 with the quaternions but doesn't seem to furnish details of what this involves or why it is necessary. Here's the part of the paper that mentions it:

    Given the state vector at step k − 1, we first perform the prediction step by finding the a priori state estimate xˆ−k by integrating equation 1 [f = dq/dt = qω/2] through time by ∆t (i.e., 1.0 divided by the current sampling rate) using a 4th Order Runge-Kutta scheme.

I've encountered Runge-Kutta before for integrating positions in kinematics. I don't really understand how or why it would be needed here. My naive approach would be to simply multiply the existing attitude q by the angular velocity ω to get the expected new q; I don't see why numerical integration is necessary here. Perhaps it's to "scale" the unit-time ω to the change that occurs in ∆t, but surely that can be done very simply by directly manipulating ω (raising it to the fractional power ∆t)? Anyone know? Answer: I think the confusion comes from the authors not parameterizing things clearly. Furthermore, by switching to geometric algebra rather than quaternions, some additional confusion can be cleared up. The main difference between normal vector algebra and geometric algebra is that we can multiply vectors.
So if $e_x$, $e_y$, and $e_z$ are our (orthonormal) basis vectors, we also have $e_x e_y$, which is not a vector but a bivector, which can be thought of as an oriented plane element. (By "pure vector quaternion" they mean a bivector; you are correct that that means a $0$ $w$ part, and it's reasonable to say it has a $0$ "real" part.) A key property is that the basis vectors anti-commute with each other, i.e. $e_x e_y = -e_y e_x$. This leads to $(e_x e_y)^2 = -1$. In general, any unit bivector squares to $-1$, which means we can apply Euler's formula: $$R = e^{\theta B} = \cos\theta + B\sin\theta$$ where $B$ is a unit bivector and we call $R$ a rotor. A unit quaternion is just a 3D rotor. (Complex numbers are just the even sub-algebra of the 2D geometric algebra. Here we are looking at quaternions, the even sub-algebra of 3D GA.) If we want a rotation by $\theta$ in the plane of a unit bivector $B$, we use the rotor $R = e^{\frac{\theta}{2}B}$. (The half comes in because we rotate a vector $v$ via $Rv\widetilde{R}$; see the link above.) Of course, we want to allow both the plane and the angle to vary, so we define the (non-unit) bivector $\Theta(t) = \theta(t)B(t)$, so if $$R(t) = e^{\frac{1}{2}\Theta(t)}$$ then $$\dot{R}(t) = \frac{1}{2}\Omega(t)R(t)$$ where $\Omega = \dot{\Theta}$. This is their $f = q\omega/2$; their $q$ is our $R$ and their $\omega$ is our $\Omega$. Now if we have $R(t_0)$ and we want $R(t_0 + \Delta{}t)$, we need to integrate $\dot{R}$ from $t_0$ to $t_0 + \Delta{}t$. The scheme you describe is roughly analogous to doing forward Euler integration, which essentially assumes their $q$ and $\omega$ (our $R$ and $\Omega$) are constant over $\Delta{}t$. RK4 is just a better integration method. I doubt there is any special reason they chose RK4 as opposed to other integration methods; it's just the typical default choice.
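To make the integration step concrete, here is a minimal Python sketch of RK4 applied to the quaternion kinematic equation $\dot q = \tfrac{1}{2} q \otimes \omega$ (helper names are illustrative and a [w, x, y, z] convention with body-frame angular velocity is assumed; this is the idea behind the paper's prediction step, not code from the paper):

```python
import math

def quat_mul(p, q):
    """Hamilton product of two quaternions in [w, x, y, z] order."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return [pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw]

def qdot(q, w):
    """Kinematic equation dq/dt = q * (0, w) / 2 (body-frame rate w)."""
    d = quat_mul(q, [0.0] + list(w))
    return [0.5 * c for c in d]

def rk4_step(q, w, dt):
    """One classic RK4 step, assuming w is constant over dt."""
    def add(a, b, s):
        return [ai + s * bi for ai, bi in zip(a, b)]
    k1 = qdot(q, w)
    k2 = qdot(add(q, k1, dt / 2), w)
    k3 = qdot(add(q, k2, dt / 2), w)
    k4 = qdot(add(q, k3, dt), w)
    qn = [qi + dt / 6 * (a + 2*b + 2*c + d)
          for qi, a, b, c, d in zip(q, k1, k2, k3, k4)]
    n = math.sqrt(sum(c * c for c in qn))
    return [c / n for c in qn]  # renormalize to stay on the unit sphere
```

As a sanity check: integrating a constant 1 rad/s rotation about z for 1 s (1000 steps of dt = 0.001) from the identity converges to the exact half-angle quaternion [cos(0.5), 0, 0, sin(0.5)].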
{ "domain": "dsp.stackexchange", "id": 3930, "tags": "kalman-filters" }
Weak contribution to nuclear binding
Question: Does the weak nuclear force play a role (positive or negative) in nuclear binding? Normally you only see discussions about weak decay and flavour-changing physics, but is there a contribution to nuclear binding when a proton and neutron exchange a $W^\pm$ and thus exchange places? Or do $Z$ exchanges / neutral currents contribute? Is the strength of this interaction so small that it's completely ignorable when compared to the nuclear binding due to residual strong interactions? Answer: A back-of-the-envelope calculation suffices, meaning all factors of $2, 3, \pi$ etc. have been ignored. The residual strong force is mainly due to pion exchange and can be modeled by this plus a short-range repulsion due to exchange of $\omega$ mesons. The Yukawa potential from pion exchange is $e^{-m_\pi r}/r$. The weak interactions arise from $W^\pm$ and $Z$ exchange and will give rise to a similar potential with $m_\pi$ replaced by $m_Z$ (in the spirit of the calculation I take $m_W \sim m_Z$). At typical nuclear densities of $0.16/(\rm fermi)^3$, nucleons are separated by distances which are within a factor of two of $1/m_\pi$. Thus the ratio of the Yukawa potential due to weak exchange to the Yukawa potential due to pion exchange at these densities is roughly $e^{-m_W/m_\pi} \sim e^{-640}$. I've ignored the fact that there are different coupling constants in front of the two potentials, but this difference is irrelevant compared to the factor of $e^{-640}$. So yeah, you can ignore the weak contribution to the binding energy.
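Spelling the estimate out (same symbols as above, order-one factors dropped):

$$\frac{V_{\text{weak}}(r)}{V_\pi(r)} \sim \frac{e^{-m_W r}/r}{e^{-m_\pi r}/r} = e^{-(m_W - m_\pi)\,r} \approx e^{-m_W/m_\pi} \quad \text{at } r \sim 1/m_\pi,$$

and since $m_W/m_\pi \sim 80\,\text{GeV}/140\,\text{MeV}$ is of order several hundred, the suppression is the quoted $e^{-\mathcal{O}(600)}$.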
{ "domain": "physics.stackexchange", "id": 25016, "tags": "quantum-field-theory, nuclear-physics, standard-model, binding-energy, weak-interaction" }
Model bounces in reaction to teleop_twist_keyboard input
Question: I have my robot model spawned into a Gazebo world and I am attempting to drive it around using teleop. When I push i to initiate movement, the robot flips up on its rear wheels and almost flips over. Increasing the speed also creates unpredictable reactions. Stopping the robot also creates what I can only describe as a "bouncing effect". I inserted a screenshot below: The image above shows the robot flipping and bouncing after I pushed the stop (k) button. It's even worse when I try getting the robot to turn left or right. I'm looking for suggestions on what could be causing this behaviour and how I can fix it, please.

Update: URDF code added below as requested (with the exception of the ouster_description package lidar, in the interest of preserving space):

    <?xml version="1.0"?>
    <robot xmlns:xacro="http://www.ros.org/wiki/xacro" name="SweepBOT">
      <!-- body -->
      <xacro:property name="chasis_length" value="1.0"/>
      <xacro:property name="chasis_heigth" value="0.45"/>
      <xacro:property name="chasis_width" value="0.5"/>
      <link name="chasis">
        <inertial>
          <origin xyz="0.0 0.0 0.0" rpy="0.0 0.0 0.0"/>
          <mass value="1.0"/>
          <inertia ixx="0.4" ixy="0.0" ixz="0.0" iyy="0.4" iyz="0.0" izz="0.2"/>
        </inertial>
        <visual name="">
          <origin xyz="0.0 0.0 0.0" rpy="0.0 0.0 0.0"/>
          <geometry>
            <box size="${chasis_length} ${chasis_heigth} ${chasis_width}"/>
          </geometry>
          <material name="">
            <color rgba="0.0 1.0 0.0 1.0"/>
          </material>
        </visual>
        <collision>
          <origin xyz="0.0 0.0 0.0" rpy="0.0 0.0 0.0"/>
          <geometry>
            <box size="${chasis_length} ${chasis_heigth} ${chasis_width}"/>
          </geometry>
        </collision>
      </link>
      <gazebo reference='chasis'>
        <material>Gazebo/Orange</material>
      </gazebo>
      <!-- WHEELS -->
      <xacro:property name="wheel_radius" value="0.1"/>
      <xacro:property name="wheel_length" value="0.1"/>
      <xacro:property name="wheel_rpy" value="1.57075 0 0"/>
      <xacro:macro name="wheel" params="side position flip alt">
        <xacro:property name="wheel_xyz" value="${alt * 0.3} ${flip * 0.2} -0.25"/>
        <link name="wheel_${side}_${position}">
          <inertial>
            <mass value="1.0"/>
            <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/>
          </inertial>
          <visual name="">
            <origin xyz="0 0 0" rpy="${wheel_rpy}"/>
            <geometry>
              <cylinder radius="${wheel_radius}" length="${wheel_length}"/>
            </geometry>
            <material name="">
              <color rgba="1.0 0.0 0.0 1.0"/>
              <texture filename=""/>
            </material>
          </visual>
          <collision>
            <origin xyz="0 0 0" rpy="${wheel_rpy}"/>
            <geometry>
              <cylinder radius="${wheel_radius}" length="${wheel_length}"/>
            </geometry>
          </collision>
        </link>
        <gazebo reference='"wheel_${side}_${position}'>
          <material>Gazebo/Red</material>
        </gazebo>
        <joint name="body_wheel_${side}_${position}_joint" type="continuous">
          <parent link="chasis"/>
          <child link="wheel_${side}_${position}"/>
          <limit lower="0.0" upper="0.0" effort="0.0" velocity="0.0"/>
          <origin xyz="${alt * 0.3} ${flip * 0.2} -0.25" rpy="0.0 0.0 0.0"/>
          <axis xyz="0.0 1.0 0.0"/>
        </joint>
      </xacro:macro>
      <xacro:wheel side="left" position="fore" flip='1' alt='1'/>
      <xacro:wheel side="right" position="fore" flip='-1' alt='1'/>
      <xacro:wheel side="left" position="aft" flip='1' alt='-1'/>
      <xacro:wheel side="right" position="aft" flip='-1' alt='-1'/>
      <!-- LIDAR -->
      <!-- os1_sensor_mount_joint -->
      <xacro:include filename="$(find ouster_description)/urdf/OS1-64.urdf.xacro"/>
      <OS1-64 parent="chasis" name="os1_sensor" hz="10" samples="220">
        <origin xyz="0 0 0.3" rpy="0 0 0" />
      </OS1-64>
      <!-- MULTI-CAMERA -->
      <xacro:macro name="sweeper_cams" params="">
        <gazebo reference="left_camera_frame">
          <sensor type="multicamera" name="stereo_camera">
            <update_rate>30.0</update_rate>
            <camera name="left_camera">
              <horizontal_fov>1.3962634</horizontal_fov>
              <image>
                <width>800</width>
                <height>800</height>
                <format>R8G8B8</format>
              </image>
              <clip>
                <near>0.02</near>
                <far>300</far>
              </clip>
              <noise>
                <type>gaussian</type>
                <mean>0.0</mean>
                <stddev>0.007</stddev>
              </noise>
            </camera>
            <camera name="right_camera">
              <pose>0 -0.07 0 0 0 0</pose>
              <horizontal_fov>1.3962634</horizontal_fov>
              <image>
                <width>800</width>
                <height>800</height>
                <format>R8G8B8</format>
              </image>
              <clip>
                <near>0.02</near>
                <far>300</far>
              </clip>
              <noise>
                <type>gaussian</type>
                <mean>0.0</mean>
                <stddev>0.007</stddev>
              </noise>
            </camera>
            <plugin name="stereo_camera_controller" filename="libgazebo_ros_multicamera.so">
              <alwaysOn>true</alwaysOn>
              <updateRate>0.0</updateRate>
              <cameraName>multisense_sl/camera</cameraName>
              <imageTopicName>image_raw</imageTopicName>
              <cameraInfoTopicName>camera_info</cameraInfoTopicName>
              <frameName>right_camera</frameName>
              <!--<rightFrameName>right_camera_optical_frame</rightFrameName>-->
              <hackBaseline>0.07</hackBaseline>
              <distortionK1>0.0</distortionK1>
              <distortionK2>0.0</distortionK2>
              <distortionK3>0.0</distortionK3>
              <distortionT1>0.0</distortionT1>
              <distortionT2>0.0</distortionT2>
            </plugin>
          </sensor>
        </gazebo>
        <!-- -->
        <link name="left_camera">
          <inertial>
            <origin xyz="0.0 0.0 0.0" rpy="0.0 0.0 0.0"/>
            <mass value="0.0"/>
            <inertia ixx="0.0" ixy="0.0" ixz="0.0" iyy="0.0" iyz="0.0" izz="0.0"/>
          </inertial>
          <visual name="">
            <origin xyz="0.0 0.0 0.0" rpy="0.0 0.0 0.0"/>
            <geometry>
              <box size="0.1 0.1 0.1"/>
            </geometry>
            <material name="">
              <color rgba="0.0 0.0 1.0 1.0"/>
              <texture filename=""/>
            </material>
          </visual>
          <collision>
            <origin xyz="0.0 0.0 0.0" rpy="0.0 0.0 0.0"/>
            <geometry>
              <box size="0.0 0.0 0.0"/>
            </geometry>
          </collision>
        </link>
        <joint name="left_camera_joint" type="fixed">
          <origin xyz="0.45 -${chasis_width / 1.82} 0.0" rpy="0.0 0.0 0.0"/>
          <parent link="chasis"/>
          <child link="left_camera"/>
        </joint>
        <link name="right_camera">
          <inertial>
            <origin xyz="0.0 0.0 0.0" rpy="0.0 0.0 0.0"/>
            <mass value="0.0"/>
            <inertia ixx="0.0" ixy="0.0" ixz="0.0" iyy="0.0" iyz="0.0" izz="0.0"/>
          </inertial>
          <visual name="">
            <origin xyz="0.0 0.0 0.0" rpy="0.0 0.0 0.0"/>
            <geometry>
              <box size="0.1 0.1 0.1"/>
            </geometry>
            <material name="">
              <color rgba="0.0 0.0 1.0 1.0"/>
              <texture filename=""/>
            </material>
          </visual>
          <collision>
            <origin xyz="0.0 0.0 0.0" rpy="0.0 0.0 0.0"/>
            <geometry>
              <box size="0.0 0.0 0.0"/>
            </geometry>
          </collision>
        </link>
        <joint name="right_camera_joint" type="fixed">
          <origin xyz="0.45 ${chasis_width / 1.82} 0.0" rpy="0.0 0.0 0.0"/>
          <parent link="chasis"/>
          <child link="right_camera"/>
        </joint>
      </xacro:macro>
      <xacro:sweeper_cams/>
    </robot>

Originally posted by sisko on ROS Answers with karma: 247 on 2020-12-18 Post score: 1 Original comments Comment by tryan on 2020-12-21: Are the wheels spinning the way you expect for forward, backward, and turning motions? You may check by making your robot "float" with a fixed joint between world and base_link, so physics doesn't get in the way, as suggested in this Gazebo answer. If that works, then the issue may be your physical parameters (inertia, friction, etc.). Either way, you should post your URDF file(s) for further troubleshooting. Comment by sisko on 2020-12-21: @tryan: Hello again :-) I added my urdf code as you requested. Regarding the spinning movements of the wheels, I am certain they are moving in the right and required directions. I can get the model to move without issue if I reduce the speed to almost zero. That is particularly important when the robot is starting from standstill. Increasing the speed tends to have the "galloping horse" effect, where the robot lifts up on its rear wheels, comes down on all four wheels, and then picks up speed. Comment by tryan on 2020-12-21: Haha, hi, @sisko! I remember the tumbling wheels now :) Looking a little closer at the URDF, it seems the wheels have significantly more inertia than the chassis (and too much for their mass value). The Husky's URDF provides an example of some real values: ixx="0.02467" ixy="0" ixz="0" iyy="0.04411" iyz="0" izz="0.02467". You can use your mass value and the formula for a solid cylinder from Wikipedia's List of Moments of Inertia:

    izz = 1/2 * m * r^2
    ixx = iyy = 1/12 * m * (3 * r^2 + h^2)

where m is the mass, r is the radius, and h is the height (wheel width).
You could also increase your chassis' mass (the Husky's is ~46 kg), and its inertia values should reflect its mass (see the formula linked above). Comment by sisko on 2020-12-23: @tryan: Please excuse my ignorance because I am NOT a maths guy. Do you take a different formula from that Wiki page depending on the shape you wish to calculate for? Am I correct in applying the formula you provided me to both the chassis and the wheels? And lastly, can you recommend a video or article to help me understand the formulae? Comment by tryan on 2020-12-25: I submitted an answer in response as it allows more characters. Answer: Reposting as an answer for the character limit. Looking a little closer at the URDF, it seems the wheels have significantly more inertia than the chassis (and too much for their mass value). The Husky's URDF provides an example of some real values: ixx="0.02467" ixy="0" ixz="0" iyy="0.04411" iyz="0" izz="0.02467". You can use your mass value and the formula for a solid cylinder from Wikipedia's List of Moments of Inertia:

    izz = 1/2 * m * r^2
    ixx = iyy = 1/12 * m * (3 * r^2 + h^2)

where m is the mass, r is the radius, and h is the height (wheel width). You could also increase your chassis' mass (the Husky's is ~46 kg), and its inertia values should reflect its mass (see the formula linked above).

Update

Do you take a different formula from that Wiki page depending on the shape you wish to calculate for? Am I correct in applying the formula you provided me to both the chassis and the wheels?

Yes, each shape has a specific formula. The one I posted is only for solid cylinders, like wheels, so no, you can't really use it for the chassis, too. The values may not be extremely far off, but it's better to use the correct formula for a given shape. Your chassis is a solid cuboid, which has

    izz = 1/12 * m * (x^2 + y^2)
    ixx = 1/12 * m * (y^2 + z^2)
    iyy = 1/12 * m * (x^2 + z^2)

with the other values as 0. I should have noted above that the unmentioned inertia values are 0 also.

Can you recommend a video or article to help me understand the formulae?

Wikipedia's Moment of Inertia article is as good a place to start as any for a quick overview. Here's a Khan Academy video lesson on the topic that may be more helpful for understanding. Also, most general physics textbooks cover it. Originally posted by tryan with karma: 1421 on 2020-12-25 This answer was ACCEPTED on the original site Post score: 1
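The two formulas in this answer can be wrapped in a small Python helper to generate URDF-ready numbers (a sketch; the function names are illustrative, and the example below uses the wheel mass 1 kg, radius 0.1 m, and length 0.1 m from the question's URDF):

```python
def cylinder_inertia(m, r, h):
    """Principal moments of a solid cylinder (axis along z); off-diagonals are 0."""
    izz = 0.5 * m * r**2
    ixx = iyy = (1.0 / 12.0) * m * (3 * r**2 + h**2)
    return ixx, iyy, izz

def box_inertia(m, x, y, z):
    """Principal moments of a solid cuboid with side lengths x, y, z."""
    ixx = (1.0 / 12.0) * m * (y**2 + z**2)
    iyy = (1.0 / 12.0) * m * (x**2 + z**2)
    izz = (1.0 / 12.0) * m * (x**2 + y**2)
    return ixx, iyy, izz
```

For the question's wheels (m = 1, r = 0.1, h = 0.1) this gives izz = 0.005 and ixx = iyy ≈ 0.00333, several orders of magnitude below the 1.0 values in the posted URDF, which is consistent with the diagnosis above.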
{ "domain": "robotics.stackexchange", "id": 35893, "tags": "ros-melodic, xacro" }
Time-Evolution Operators (Sakurai)
Question: For the infinitesimal time-evolution operator, [Sakurai] has the following equation [2.1.21]: $$ \mathcal U\left( t_0 + dt, t_0 \right) = 1 - \frac{iHdt}{\hbar},$$ where $H$ is the Hamiltonian. Now they derive the Schrödinger equation for the time-evolution operator [2.1.25] as follows: The composition property they're referring to is [2.1.12] $$\mathcal U\left( t_2, t_0\right) = \mathcal U\left( t_2, t_1 \right)\mathcal U\left( t_1, t_0\right).$$ Does anybody see why $\mathcal U\left( t+dt, t_0 \right) = 1-\frac{iHdt}{\hbar},$ as it indirectly says in their Eq. [2.1.23]? According to my very first formula, this would hold only if the first argument were $t_0 + dt$ and not $t+dt$. [Sakurai] J.J. Sakurai, Jim Napolitano, "Modern Quantum Mechanics", 2nd Edition, Pearson Education Answer: Does anybody see why $\mathcal U\left( t+dt, t_0 \right) = 1-\frac{iHdt}{\hbar}$, as it indirectly says in their Eq. [2.1.23]? That's not what the equation says. The second argument of the first time-evolution operator in the middle part of the equation is $t$, not $t_0$. They are using the formula for the infinitesimal time-evolution operator given in (2.1.21) to say $$\mathcal U\left( t + dt, t \right) = 1 - \frac{iHdt}{\hbar}$$ Note that there is still a $\mathcal U(t,t_0)$ on the right in (2.1.23).
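Written out, the composition step the answer describes is (same notation as above, combining (2.1.12) with (2.1.21)):

$$\mathcal U(t+dt, t_0) = \mathcal U(t+dt, t)\,\mathcal U(t, t_0) = \left(1 - \frac{iH\,dt}{\hbar}\right)\mathcal U(t, t_0),$$

so that

$$\mathcal U(t+dt, t_0) - \mathcal U(t, t_0) = -\frac{i}{\hbar}\,H\,dt\;\mathcal U(t, t_0) \quad\Longrightarrow\quad i\hbar\,\frac{\partial}{\partial t}\,\mathcal U(t, t_0) = H\,\mathcal U(t, t_0),$$

which is Sakurai's (2.1.25).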
{ "domain": "physics.stackexchange", "id": 74098, "tags": "quantum-mechanics, schroedinger-equation, time-evolution" }
TCP Socket Server
Question: I've only been coding C# a few weeks and was just hoping for a little constructive criticism of a socket server I've been working on: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Net.Sockets; using System.Net; namespace NetworkCommunication { public class TCPSocketServer : IDisposable { private int portNumber; private int connectionsLimit; private Socket connectionSocket; private List<StateObject> connectedClients = new List<StateObject>(); public event SocketConnectedHandler ClientConnected; public delegate void SocketConnectedHandler(TCPSocketServer socketServer, SocketConnectArgs e); public event SocketMessageReceivedHandler MessageReceived; public delegate void SocketMessageReceivedHandler(TCPSocketServer socketServer, SocketMessageReceivedArgs e); public event SocketClosedHandler ClientDisconnected; public delegate void SocketClosedHandler(TCPSocketServer socketServer, SocketEventArgs e); #region Constructors public TCPSocketServer(int PortNumber) : this(PortNumber, 0) { } public TCPSocketServer(int PortNumber, int ConnectionsLimit) { this.portNumber = PortNumber; this.connectionsLimit = ConnectionsLimit; startListening(); } #endregion #region Send Messages public void SendMessage(string MessageToSend, int clientID) { try { byte[] byData = System.Text.Encoding.UTF8.GetBytes(MessageToSend + "\0"); foreach (StateObject client in connectedClients) { if (clientID == client.id) { // Send message on correct client if (client.socket.Connected) { client.socket.Send(byData); } break; } } } catch (SocketException) { } } public void SendMessage(byte[] MessageToSend, int clientID) { try { foreach (StateObject client in connectedClients) { if (clientID == client.id) { // Send message on correct client if (client.socket.Connected) { client.socket.Send(MessageToSend); } break; } } } catch (SocketException) { } } #endregion #region Connection and Listening private void startListening() { try { // Create listening 
socket connectionSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); connectionSocket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true); IPEndPoint ipLocal = new IPEndPoint(IPAddress.Any, this.portNumber); // Bind to local IP Address connectionSocket.Bind(ipLocal); // Start Listening connectionSocket.Listen(1000); // Creat callback to handle client connections connectionSocket.BeginAccept(new AsyncCallback(onClientConnect), null); } catch (SocketException) { } } private void onClientConnect(IAsyncResult asyn) { try { // Create a new StateObject to hold the connected client StateObject connectedClient = new StateObject(); connectedClient.socket = connectionSocket.EndAccept(asyn); if (connectedClients.Count == 0) { connectedClient.id = 1; } else { connectedClient.id = connectedClients[connectedClients.Count - 1].id + 1; } connectedClients.Add(connectedClient); // Check against limit if (connectedClients.Count > connectionsLimit) { // No connection event is sent so close socket silently closeSocketSilent(connectedClient.id); return; } // Dispatch Event if (ClientConnected != null) { SocketConnectArgs args = new SocketConnectArgs(); args.ConnectedIP = IPAddress.Parse(((IPEndPoint)connectedClient.socket.RemoteEndPoint).Address.ToString()); args.clientID = connectedClient.id; ClientConnected(this, args); } // Release connectionSocket to keep listening if limit is not reached connectionSocket.BeginAccept(new AsyncCallback(onClientConnect), null); // Allow connected client to receive data and designate a callback method connectedClient.socket.BeginReceive(connectedClient.buffer, 0, StateObject.BufferSize, 0, new AsyncCallback(onReceivedClientData), connectedClient); } catch (SocketException) { } catch (ObjectDisposedException) { } } private void onReceivedClientData(IAsyncResult asyn) { String content = String.Empty; // Receive stateobject of the client that sent data StateObject dataSender = 
(StateObject)asyn.AsyncState; try { // Complete aysnc receive method and read data length int bytesRead = dataSender.socket.EndReceive(asyn); if (bytesRead > 0) { // More data could be sent so append data received so far dataSender.sb.Append(Encoding.UTF8.GetString(dataSender.buffer, 0, bytesRead)); content = dataSender.sb.ToString(); if ((content.Length > 0) || (content.IndexOf("") > -1)) { String formattedMessage = String.Empty; formattedMessage += content.Replace("\0", ""); // Dispatch Event if (MessageReceived != null) { SocketMessageReceivedArgs args = new SocketMessageReceivedArgs(); args.MessageContent = formattedMessage; args.clientID = dataSender.id; MessageReceived(this, args); } dataSender.sb.Length = 0; } try { dataSender.socket.BeginReceive(dataSender.buffer, 0, StateObject.BufferSize, 0, new AsyncCallback(this.onReceivedClientData), dataSender); } catch (SocketException) { } } else { closeSocket(dataSender.id); } } catch (SocketException) { } catch (ObjectDisposedException) { } } #endregion #region Socket Closing public void closeSocket(int SocketID) { foreach (StateObject client in connectedClients.ToList()) { try { if (SocketID == client.id) { client.socket.Close(); client.socket.Dispose(); // Dispatch Event if (ClientDisconnected != null) { SocketEventArgs args = new SocketEventArgs(); args.clientID = client.id; ClientDisconnected(this, args); } connectedClients.Remove(client); break; } } catch (SocketException) { } } } // This does not dispatch an event, this task is to be used when rejecting connections past the limit. // No connection event is sent so no disconnection event should be sent. 
private void closeSocketSilent(int SocketID) { foreach (StateObject client in connectedClients.ToList()) { if (SocketID == client.id) { try { client.socket.Close(); client.socket.Dispose(); connectedClients.Remove(client); break; } catch (SocketException) { } } } } public void closeAllSockets() { foreach (StateObject client in connectedClients.ToList()) { closeSocket(client.id); } } #endregion public void Dispose() { this.ClientConnected = null; this.ClientDisconnected = null; this.MessageReceived = null; connectionSocket.Close(); } } } Here's a new updated version with the help I was given kindly below. I've added error handling and moved some parts around: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Net.Sockets; using System.Net; namespace NetworkCommunication { public class TCPSocketServer : IDisposable { private const int MaxLengthOfPendingConnectionsQueue = 1000; private int portNumber; private int connectionsLimit; private Socket connectionSocket; private Dictionary<int, StateObject> connectedClients = new Dictionary<int, StateObject>(); public event SocketConnectedHandler ClientConnected; public delegate void SocketConnectedHandler(TCPSocketServer socketServer, SocketConnectArgs e); public event SocketMessageReceivedHandler MessageReceived; public delegate void SocketMessageReceivedHandler(TCPSocketServer socketServer, SocketMessageReceivedArgs e); public event SocketClosedHandler ClientDisconnected; public delegate void SocketClosedHandler(TCPSocketServer socketServer, SocketEventArgs e); #region Constructor public TCPSocketServer(int PortNumber, int ConnectionsLimit = 0) { // Validate Port Number if (PortNumber > 0 && PortNumber < 65536) { this.portNumber = PortNumber; } else { throw new InvalidPortNumberException("Ports number must be in the 1-65535 range. 
Note: 256 and bellow are normally reserved."); } this.connectionsLimit = ConnectionsLimit; startListening(); } #endregion private StateObject GetClient(int clientId) { StateObject client; if (!connectedClients.TryGetValue(clientId, out client)) { return null; } return client; } #region Send Messages public void SendMessage(string MessageToSend, int clientID) { byte[] data = System.Text.Encoding.UTF8.GetBytes(MessageToSend + "\0"); SendMessage(data, clientID); } public void SendMessage(byte[] MessageToSend, int clientID) { StateObject client = GetClient(clientID); if (client != null) { try { if (client.socket.Connected) { client.socket.Send(MessageToSend); } } catch (SocketException) { // Close socket closeSocket(clientID); } } } #endregion #region Connection and Listening private void startListening() { try { // Create listening socket connectionSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); connectionSocket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true); IPEndPoint ipLocal = new IPEndPoint(IPAddress.Any, this.portNumber); // Bind to local IP Address connectionSocket.Bind(ipLocal); // Start Listening connectionSocket.Listen(MaxLengthOfPendingConnectionsQueue); // Create callback to handle client connections connectionSocket.BeginAccept(new AsyncCallback(onClientConnect), null); } catch (SocketException) { throw new SocketCannotListenException("Cannot listen on this socket. Fatal Error"); } } private void onClientConnect(IAsyncResult asyn) { // Create a new StateObject to hold the connected client StateObject connectedClient = new StateObject(); try { connectedClient.socket = connectionSocket.EndAccept(asyn); connectedClient.id = !connectedClients.Any() ? 
1 : connectedClients.Keys.Max() + 1; connectedClients.Add(connectedClient.id, connectedClient); // Check against limit if (connectedClients.Count > connectionsLimit && connectionsLimit != 0) { // No connection event is sent so close connection quietly closeSocket(connectedClient.id, true); return; } // Allow connected client to receive data and designate a callback method connectedClient.socket.BeginReceive(connectedClient.buffer, 0, StateObject.BufferSize, 0, new AsyncCallback(onReceivedClientData), connectedClient); } catch (Exception) { closeSocket(connectedClient.id, true); } finally { // Performed here to not get any exceptions on the main socket caught up in client connection errors. ReleaseConnectionSocket(); } // Dispatch Event at the end as any errors in socket dispatch silent disconnections if (ClientConnected != null) { SocketConnectArgs args = new SocketConnectArgs() { ConnectedIP = IPAddress.Parse(((IPEndPoint)connectedClient.socket.RemoteEndPoint).Address.ToString()), clientID = connectedClient.id }; ClientConnected(this, args); } } private void ReleaseConnectionSocket() { try { // Release connectionSocket to keep listening connectionSocket.BeginAccept(new AsyncCallback(onClientConnect), null); } catch (SocketException) { throw new SocketCannotListenException("Cannot listen on the main socket. 
Fatal Error"); } } private void onReceivedClientData(IAsyncResult asyn) { // Receive stateobject of the client that sent data StateObject dataSender = (StateObject)asyn.AsyncState; try { // Complete aysnc receive method and read data length int bytesRead = dataSender.socket.EndReceive(asyn); if (bytesRead > 0) { // More data could be sent so append data received so far dataSender.sb.Append(Encoding.UTF8.GetString(dataSender.buffer, 0, bytesRead)); String content = dataSender.sb.ToString(); if (!string.IsNullOrEmpty(content)) { String formattedMessage = content.Replace("\0", ""); // Dispatch Event if (MessageReceived != null) { SocketMessageReceivedArgs args = new SocketMessageReceivedArgs() { MessageContent = formattedMessage, clientID = dataSender.id }; MessageReceived(this, args); } dataSender.sb.Clear(); } try { dataSender.socket.BeginReceive(dataSender.buffer, 0, StateObject.BufferSize, 0, new AsyncCallback(this.onReceivedClientData), dataSender); } catch (SocketException) { closeSocket(dataSender.id); } } else { closeSocket(dataSender.id); } } catch (SocketException ex) { // Socket closed at other end if (ex.ErrorCode == 10054) { closeSocket(dataSender.id); } else { closeSocket(dataSender.id); } } } #endregion #region Socket Closing public void closeSocket(int SocketID) { closeSocket(SocketID, false); } // This does not dispatch an event, this task is to be used when rejecting connections past the limit. // No connection event is sent so no disconnection event should be sent. private void closeSocket(int SocketId, bool silent) { StateObject client = GetClient(SocketId); if (client == null) { return; } try { client.socket.Close(); client.socket.Dispose(); if (!silent) { // Dispatch event if (ClientDisconnected != null) { SocketEventArgs args = new SocketEventArgs() { clientID = client.id }; ClientDisconnected(this, args); } } } catch (SocketException) { // Socket is being removed anyway. 
} finally { connectedClients.Remove(client.id); } } public void closeAllSockets() { var keys = connectedClients.Keys; foreach (int key in keys) { var client = connectedClients[key]; closeSocket(client.id); } } #endregion public void Dispose() { this.ClientConnected = null; this.ClientDisconnected = null; this.MessageReceived = null; connectionSocket.Close(); } } } Answer: I will not comment on the actual TCP functionality itself. I am not competent for that. Stopping broadcast - on purpose? When you are broadcasting to the clients in SendMessage(), it seems like you stop broadcasting if you get problems reaching one of the clients. Is this intended? I would move the try/catch inside the foreach, and around the if. Then, in the catch, I would either explicitly break; out of the foreach or leave a comment like // Don't care, ignore and proceed. Empty try-catches As @ChaosPandion suggested, I would not leave empty try-catches. I would either handle them, try to refactor the code as not to throw them, or leave a comment with a quick explanation. Succinct code If you are using C#4.0, I would write the default values like this: #region Constructors // This behaves the same as the other version with two constructors. public TCPSocketServer(int PortNumber, int ConnectionsLimit = 0) { Parameters - case You use camelCase for local variables, which agrees to what I think is the de facto convention. It seems you use PascalCase for parameters. Shouldn't that be camelCase instead, same as local variables? Hungarian notation I would avoid Hungarian notation. I have no real suggestion for byte[] byData though. Maybe just data? encodedData? Code duplication Code duplication is recipe for disaster. You will update one of the code sections and not the other, and so on... 
So I would suggest that the first SendMessage just reuses the second: public void SendMessage(string MessageToSend, int clientID) { byte[] data = System.Text.Encoding.UTF8.GetBytes(MessageToSend + "\0"); SendMessage(data, clientID); } Magic numbers I would extract this number to a constant somewhere, and name it appropriately. MaxPendingConnectionsBacklog? (Note that Listen's parameter is the length of the pending-connections queue, not a timeout.) // Start Listening connectionSocket.Listen(1000); Black magic People got dunked for much less than this. :) content.IndexOf("") > -1 Does it do anything, considering that the string has Length > 0? Context: content = dataSender.sb.ToString(); if ((content.Length > 0) || (content.IndexOf("") > -1)) Succinct code String formattedMessage = String.Empty; formattedMessage += content.Replace("\0", ""); could be String formattedMessage = content.Replace("\0", ""); Var declarations closer to usage String content = String.Empty; could move next to content = dataSender.sb.ToString();, and even be integrated into that line: string content = dataSender.sb.ToString(); Use StringBuilder better // instead of dataSender.sb.Length = 0; dataSender.sb.Clear(); useless .ToList() foreach (StateObject client in connectedClients.ToList()) Linq makes code succinct foreach (StateObject client in connectedClients.ToList()) { if (SocketID == client.id) { can be replaced with StateObject client = connectedClients.FirstOrDefault( conClient => conClient.id == SocketID ); if( client != null ) { Object constructor // Parentheses are optional, if empty. SocketMessageReceivedArgs args = new SocketMessageReceivedArgs() { MessageContent = formattedMessage, clientID = dataSender.id }; And same for other similar property assignments immediately after the respective constructor. 
My proposal of cleaner code using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Sockets; using System.Text; namespace NetworkCommunication { class StateObject { } class SocketConnectArgs { } class SocketMessageReceivedArgs { } class SocketEventArgs { } public class TCPSocketServer : IDisposable { // Or whatever the name. const int MaxLengthOfPendingConnectionsQueue = 1000; private int portNumber; private int connectionsLimit; private Socket connectionSocket; private Dictionary<int, StateObject> connectedClients = new Dictionary<int, StateObject>(); public event SocketConnectedHandler ClientConnected; public delegate void SocketConnectedHandler(TCPSocketServer socketServer, SocketConnectArgs e); public event SocketMessageReceivedHandler MessageReceived; public delegate void SocketMessageReceivedHandler(TCPSocketServer socketServer, SocketMessageReceivedArgs e); public event SocketClosedHandler ClientDisconnected; public delegate void SocketClosedHandler(TCPSocketServer socketServer, SocketEventArgs e); #region Constructors public TCPSocketServer(int portNumber, int connectionsLimit = 0) { this.portNumber = portNumber; this.connectionsLimit = connectionsLimit; startListening(); } #endregion private StateObject GetClient(int clientId) { StateObject client; if(!connectedClients.TryGetValue(clientId, out client)) { return null; } return client; } #region Send Messages public void SendMessage(string messageToSend, int clientID) { byte[] data = System.Text.Encoding.UTF8.GetBytes(messageToSend + "\0"); SendMessage(data, clientID); } public void SendMessage(byte[] messageToSend, int clientID) { StateObject client = GetClient(clientID); if (client != null) { try { if (client.socket.Connected) { client.socket.Send(messageToSend); } } catch (SocketException) { // TODO: sending failed; disconnect from client, or? 
} } } #endregion #region Connection and Listening private void startListening() { try { // Create listening socket connectionSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); connectionSocket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true); IPEndPoint ipLocal = new IPEndPoint(IPAddress.Any, this.portNumber); // Bind to local IP Address connectionSocket.Bind(ipLocal); // Start Listening connectionSocket.Listen(MaxLengthOfPendingConnectionsQueue); // Creat callback to handle client connections connectionSocket.BeginAccept(new AsyncCallback(onClientConnect), null); } catch (SocketException) { // TODO: if we fail to start listening, is it even ok to continue? // Consider that some of the bootstrapping actions might not even have been done. // Thus execution will likely crash on next step. } } private void onClientConnect(IAsyncResult asyn) { try { // Create a new StateObject to hold the connected client StateObject connectedClient = new StateObject() { socket = connectionSocket.EndAccept(asyn), id = !connectedClients.Any() ? 1 : connectedClients.Keys.Max() + 1 }; connectedClients.Add(connectedClient); // TODO: consider if we can instead do this at the beginning of the method. 
// Check against limit if (connectedClients.Count > connectionsLimit) { // No connection event is sent so close socket silently closeSocket(connectedClient.id, true); return; } // Dispatch Event if (ClientConnected != null) { SocketConnectArgs args = new SocketConnectArgs() { ConnectedIP = IPAddress.Parse(((IPEndPoint)connectedClient.socket.RemoteEndPoint).Address.ToString()), clientID = connectedClient.id }; ClientConnected(this, args); } // Release connectionSocket to keep listening if limit is not reached connectionSocket.BeginAccept(new AsyncCallback(onClientConnect), null); // Allow connected client to receive data and designate a callback method connectedClient.socket.BeginReceive(connectedClient.buffer, 0, StateObject.BufferSize, 0, new AsyncCallback(onReceivedClientData), connectedClient); } catch (SocketException) { // TODO: should we closeSocketSilent()? Or? } catch (ObjectDisposedException) { // TODO: should we closeSocketSilent()? Or? } } private void onReceivedClientData(IAsyncResult asyn) { // Receive stateobject of the client that sent data StateObject dataSender = (StateObject)asyn.AsyncState; try { // Complete aysnc receive method and read data length int bytesRead = dataSender.socket.EndReceive(asyn); if (bytesRead > 0) { // More data could be sent so append data received so far dataSender.sb.Append(Encoding.UTF8.GetString(dataSender.buffer, 0, bytesRead)); if ( dataSender.sb.Length != 0 && MessageReceived != null ) { // TODO: is it possible that multiple messages are in the sb? // Consider whether it's necessary to replace with newline. dataSender.sb.Replace("\0", null); // Removes them. 
// Dispatch Event SocketMessageReceivedArgs args = new SocketMessageReceivedArgs(); args.MessageContent = dataSender.sb.ToString(); args.clientID = dataSender.id; MessageReceived(this, args); dataSender.sb.Clear(); } try { dataSender.socket.BeginReceive(dataSender.buffer, 0, StateObject.BufferSize, 0, new AsyncCallback(this.onReceivedClientData), dataSender); } catch (SocketException) { } } else { closeSocket(dataSender.id); } } catch (SocketException) { // TODO: should we closeSocketSilent()? Or? } catch (ObjectDisposedException) { // TODO: should we closeSocketSilent()? Or? } } #endregion #region Socket Closing public void closeSocket(int socketID) { closeSocket(socketID, false); } /// <param name="silent">Whether to skip dispatching the disconnection event. Used to cancel the bootstrapping of the client-server connection.</param> private void closeSocket(int socketID, bool silent) { StateObject client = GetClient(socketID); if (client == null) { return; } try { client.socket.Close(); client.socket.Dispose(); if(!silent) { // Dispatch Event if (ClientDisconnected != null) { SocketEventArgs args = new SocketEventArgs(); args.clientID = client.id; ClientDisconnected(this, args); } } // Moved to finnaly block: connectedClients.Remove(client.id); } catch (SocketException) { // Don't care. Or? } finally { connectedClients.Remove(client.id); } } public void closeAllSockets() { var keys = connectedClients.Keys; foreach( int key in keys ) { var client = connectedClients[key]; closeSocket(client.id); } } #endregion public void Dispose() { ClientConnected = null; ClientDisconnected = null; MessageReceived = null; connectionSocket.Close(); } } } PS: you owe me "a beer". :) (This was fun to do, though!) edit: camelCase for parameters; reduced code duplication; Dictionary<int, StateObject> instead of List; more object constructors. edit2: StringBuilder methods as suggested by @pstrjds.
{ "domain": "codereview.stackexchange", "id": 832, "tags": "c#, networking, socket, tcp" }
Relationship between two eigenfunctions of the time-independent Schrödinger Equation in one dimension?
Question: What is the relationship between two eigenfunctions of the time-independent Schrödinger Equation (in one spatial dimension) if they both have the same eigenvalue? Answer: For a hamiltonian of the form $$\hat H = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2}+V(x)$$ in one single spatial dimension, all the energy eigenvalues are non-degenerate, under suitable regularity conditions for $V(x)$. This means that if two eigenfunctions share the same eigenvalue, they must be equal or, at most, differ by a phase. For a proof of this fact, and what sort of horribleness you must introduce into the potential to break this behaviour, try "Can degenerate bound states occur in one dimensional quantum mechanics?" Sayan Kar and Rajesh R. Parwani. Europhys. Lett. 80 no. 3 (2007), p. 30004; arXiv:0706.1135. It's important to note that this is strictly a one-dimensional result, and fails to hold as soon as a second degree of freedom - be it spin or a second spatial dimension - is present; examples for that are trivial to construct.
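A sketch of the standard argument (an editorial addition, assuming bound states with $\psi \to 0$ as $x \to \pm\infty$): if $\psi_1$ and $\psi_2$ both satisfy $-\frac{\hbar^2}{2m}\psi'' + V\psi = E\psi$ with the same $E$, then

```latex
\frac{d}{dx}\bigl(\psi_1\psi_2' - \psi_2\psi_1'\bigr)
  = \psi_1\psi_2'' - \psi_2\psi_1''
  = \frac{2m}{\hbar^2}(V-E)\,\psi_1\psi_2 - \frac{2m}{\hbar^2}(V-E)\,\psi_2\psi_1
  = 0,
```

so the Wronskian $W = \psi_1\psi_2' - \psi_2\psi_1'$ is constant. Evaluating it at infinity, where both wavefunctions vanish, gives $W \equiv 0$; dividing by $\psi_1\psi_2$ then yields $(\ln\psi_2)' = (\ln\psi_1)'$, i.e. $\psi_2 = c\,\psi_1$ for some constant $c$.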
{ "domain": "physics.stackexchange", "id": 9883, "tags": "quantum-mechanics, schroedinger-equation, eigenvalue" }
What explains this puzzling radio signature from a fireball?
Question: September 21, 2017 a bright fireball over Holland was reported by hundreds of people. Several all-sky cameras recorded it as well as a special radio beacon set up to capture radio signatures of meteors - the BRAMS beacon at Dourbes, Belgium. I'm puzzled by what I see in the radio spectrogram from one of the BRAMS stations at Ophain, Belgium : The orange blob in the center is the radio signature of this fireball. The horizontal line is the beacon signal (0 Hz), time runs horizontally, frequency runs vertically, positive above and negative below the beacon line. What is displayed are Doppler shifts. The track for this fireball has been computed since and relative to the radio receiver at Ophain and radio beacon at Dourbes it is receding from both. However, the Doppler blob is above the beacon line which is only possible when the fireball recedes from Ophain but approaches the beacon, approaches both, or approaches Ophain but recedes from the beacon - NOT when it recedes from both! The only way I can make sense out of this is to assume that the ionized gas trail from the fireball reflecting the radiowaves was NOT receding from the receiver and transmitter beacon - only the fireball was?! Or are there any other explanations? Answer: I checked with the people who run the BRAMS project and they confirmed my reasoning, and added another possible explanation. Even if the fireball moves away from the receiver AND transmitter, the ionization trail it leaves behind can expand and be blown by mesospheric winds towards the receiver and/or transmitter. It's this trail that mostly reflects the radio waves. And theoretically it's still possible but unlikely that the observed Doppler signal was coming from another meteor trail unrelated to this particular fireball, but coinciding with the exact time frame the fireball appeared in.
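For a forward-scatter setup like BRAMS, the Doppler shift of the reflected signal depends on the rate of change of the total path transmitter → trail → receiver, roughly $f_D \approx -\frac{1}{\lambda}\frac{d}{dt}(R_{tx} + R_{rx})$. A small sketch (the geometry and velocities are invented for illustration, not the actual Dourbes–Ophain data) shows how the sign flips when the reflecting trail drifts toward the stations even while the fireball itself recedes:

```python
# Bistatic Doppler: f_D = -(1/wavelength) * d(R_tx + R_rx)/dt
# Numbers below are invented for illustration only.

WAVELENGTH = 6.0  # m, roughly the ~50 MHz BRAMS beacon

def bistatic_doppler(range_rate_tx, range_rate_rx, wavelength=WAVELENGTH):
    """Doppler shift (Hz) given range rates (m/s) to transmitter and receiver.
    Positive range rate = moving away from that station."""
    return -(range_rate_tx + range_rate_rx) / wavelength

# Fireball receding from both stations: negative Doppler, the naive expectation.
f_fireball = bistatic_doppler(+800.0, +500.0)

# Ionization trail blown slowly toward both stations: positive Doppler.
f_trail = bistatic_doppler(-30.0, -20.0)
```

The trail, not the fireball head, dominates the reflection, which is why the observed blob can sit above the beacon line.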
{ "domain": "astronomy.stackexchange", "id": 2538, "tags": "meteor, radio" }
Energy lost through phosphorescence?
Question: I have done a lot of research on phosphorescence and luminescence, and I believe that UV markers rely on the effect of phosphorescence i.e. they take in UV radiation and give back radiation in the visible light spectrum (purple), which has a lower frequency/energy. This is what I saw on Wikipedia Fluorescence is the emission of light by a substance ... In most cases, the emitted light has a longer wavelength, and therefore lower energy, than the absorbed radiation Where does the energy go? It surely cannot vanish! Tell me if this effect applies for phosphorescence as well BTW: Try to keep the language of the answer simple (I'm 14) Answer: For simplicity let's consider a diatomic molecule, although the same ideas apply to larger ones. For reference, there's a nice diagram covering both fluorescence and phosphorescence on Wikipedia. Typically, at room temperature, the molecule will be in its electronic ground state. It will be vibrating as well: for a diatomic molecule this vibration will stretch and compress the bond between the atoms. One can plot the potential energy as a function of atomic separation: the molecule will oscillate about the minimum potential energy (which defines the bond length). This is properly described by quantum mechanics: there will be a discrete set of energy levels, and most molecules will lie in the lowest one: the vibrational ground state. The gap between vibrational energy levels is typically much smaller than the electronic energy difference between the two electronic states. To a first approximation (the Born-Oppenheimer approximation) the electronic and vibrational energies can be treated separately, and simply added together. When the molecule absorbs a photon of UV or visible light, it goes up to an excited electronic state. This has a different potential energy curve, with a different bond length (typically, longer) and a different set of vibrational levels. 
However, the electronic excitation happens rapidly, compared with the motion of the nuclei. It is represented as a vertical line on the diagram. So the molecule finds itself in a vibrationally excited level of the excited electronic state. What happens next is the dissipation of the vibrational energy, usually by collisions with other molecules in the liquid (assuming this is in a liquid). The molecule cascades down the vibrational energy levels, until it reaches the vibrational ground state (of the excited electronic state). This is where (some of) the energy goes: dissipated as heat into the surrounding liquid. No photons are involved in this process. Fluorescence occurs by emission of a photon, and the molecule returns to the ground electronic state. However, again, because of the mismatch between the equilibrium bond lengths in the two electronic states, and the fact that the transition happens "vertically" on the diagram, the molecule returns to an excited vibrational ground state. Once again, the extra energy is dissipated as heat into the surrounding liquid, as the system returns to its vibrational ground state. So, the photon emitted in fluorescence has a lower energy than the absorbed photon, because some of the energy is converted before, and after, the transition, into heat. Something similar happens in phosphorescence, except that, before emission, there is an extra electronic transition happening in the excited molecule (which does not involve any photons). This is called intersystem crossing. But otherwise, the explanation (of where the energy went) is the same.
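The energy bookkeeping can be made concrete with photon energies $E = hc/\lambda$: the emitted photon carries less energy than the absorbed one, and the difference is exactly the heat dumped into the surroundings. (The wavelengths below are illustrative choices, not from the question.)

```python
# Photon energy bookkeeping for fluorescence/phosphorescence.
# E = h*c/lambda; absorbed minus emitted energy ends up as heat.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Photon energy in electronvolts for a wavelength given in nanometres."""
    return H * C / (wavelength_nm * 1e-9) / EV

e_absorbed = photon_energy_ev(365)  # UV photon (e.g. a typical UV lamp line)
e_emitted = photon_energy_ev(450)   # visible (blue-violet) photon
heat = e_absorbed - e_emitted       # dissipated via vibrational relaxation
```

The ~0.6 eV difference here never "vanishes": it is handed to neighbouring molecules, collision by collision, as the excited molecule cascades down the vibrational levels.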
{ "domain": "physics.stackexchange", "id": 51263, "tags": "visible-light, electromagnetic-radiation, energy-conservation" }
Can you sum moments about a point not on the object?
Question: Do you have to sum moments about points on an object, or can moments be summed about points off the object as well? For example, This is what someone did in my statics class to help find the reactions on part of a frame. H is not on any other part of the frame. Can you do this? Are there any restrictions to this technique? Answer: You can sum the moments around any point and get the correct results. Summing moments and summing forces are actually mathematically equivalent. You just have to integrate everything properly (basically summing over every particle). Some problems are easier to solve with moments, others are easier to solve with momentum. What's happening here is no more strange than trying to do math on a doughnut, where the CG is actually hanging out in empty space in the middle of the hole. (which, frankly, is pretty weird, but the math works out!) The one thing you have to remember is that the moment of inertia will be different if you calculate moments around a different point. In statics, you won't have to worry about this, because the sum of the moments is always zero. In dynamics, this is a bit more of a pest.
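A quick numerical check of this (with a made-up 2-D force system): when the forces sum to zero, the net moment comes out the same about any point, on or off the body.

```python
# 2-D statics check: for a force system with zero resultant, the net moment
# is independent of the point it is taken about. Numbers are invented.

def net_moment(forces, about):
    """Sum of z-moments of (point, force) pairs about an arbitrary point."""
    ax, ay = about
    total = 0.0
    for (px, py), (fx, fy) in forces:
        rx, ry = px - ax, py - ay
        total += rx * fy - ry * fx   # z-component of r x F
    return total

# A couple: equal and opposite vertical forces, offset by 1 m (resultant = 0).
system = [((0.0, 0.0), (0.0, 10.0)),
          ((1.0, 0.0), (0.0, -10.0))]

m_origin = net_moment(system, (0.0, 0.0))
m_offbody = net_moment(system, (5.0, -3.0))   # a point "H" off the object
```

Both calls return the same moment, so setting the sum to zero about an off-body point like H is just as valid an equilibrium equation.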
{ "domain": "physics.stackexchange", "id": 63247, "tags": "forces, reference-frames, rotational-dynamics, torque" }
How to define signs for angular velocity, acceleration and torques?
Question: I get confused about how to define the signs of angular velocity, acceleration and torques in cases like the following. We have a disk with radius $r$ and center of mass at point $CM$, shifted $d$ from the disk center. The disk stands on a surface without slipping. If we slightly push the top of the disk, it starts small oscillations around the equilibrium point. As I understand it, the rotating torque is $( d m g \sin \theta - r F_{fr})$, where $F_{fr}$ is the friction force (as there is no slipping between the disk and the surface). According to the second law for rotation: $$ d m g \sin \theta - r F_{fr} = I \dot{\omega}$$ and for small $\theta$: $$ d m g \theta - r F_{fr} = I \ddot{\theta}$$ $F_{fr}$ creates a linear acceleration of the disk to the left side, and the second law for the disk's translation is: $$F_{fr} = m \dot{v}$$ As there is no slipping: $ v = r \omega$. Eventually, for oscillations, I would expect to get an equation that looks like: $\ddot {\theta} + z\ \theta = 0$ Now comes the confusing part: what happens with the signs here... I understand that as a result of the rotation $\theta$ decreases. So $\omega$ is negative. Right? Does it mean that $ v = - r \omega$? And if the resulting torque decreases $\theta$, does it mean that the right equation is: $$ - (d m g \theta - F_{fr} r) = I \ddot{\theta}$$ and thus $$ - (d m g \theta - ( - r \ddot{\theta} m r)) = I \ddot{\theta}$$ Is this the right way of thinking? What rules should we use for cases like this? Thanks. Answer: I have added a few labels and a set of axes to your diagram. It is the arrow that you have assigned to the angle $\theta$ which determines the positive direction for $\theta$ - clockwise - and in terms of an angular rotation, using the right-hand Cartesian system, one might write the rotation as $\theta\, \hat z$ where $\hat z$ is the unit vector into the screen. 
The torque due to the frictional force about the centre $B$ is $F_{\rm fr}\,r\,\hat z$, and that due to the weight of the mass $m$ is $-mgd\sin \theta \, \hat z$. The equation of motion is $F_{\rm fr}\,r\,\hat z-mgd\sin \theta \, \hat z = I \dot \omega \,\hat z \Rightarrow F_{\rm fr}\,r-mgd\sin \theta = I \dot \omega =I \ddot \theta$. About the point of contact $A$, $\vec v = \vec \omega \times \vec r =(\omega \,\hat z) \times (r \,\hat y) = - \omega \,r\,\hat x = v\,\hat x$, which is what you might expect, in that moving to the left results in a decrease in the angle $\theta$, i.e. the component of the angular velocity $\omega$ is negative when the component of the translational velocity $v$ is positive.
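The cross-product sign convention above is easy to sanity-check numerically; this is an illustrative sketch assuming the axes described in the answer ($\hat z$ into the screen, contact point below the centre along $-\hat y$ from it):

```python
import numpy as np

# Numeric check of the sign convention: with theta positive clockwise
# (z into the screen), a positive omega about +z acting on the vector
# from the contact point A to the centre B (along +y) gives v = -omega*r.
omega, r = 2.0, 0.5
w_vec = np.array([0.0, 0.0, omega])   # angular velocity along +z
r_vec = np.array([0.0, r, 0.0])       # from contact point A toward centre B
v_vec = np.cross(w_vec, r_vec)

print(v_vec)  # [-1.  0.  0.]  ->  v = -omega * r along x
```

So a positive $\omega$ produces a negative $v$, exactly the $v = -\omega r$ relation derived above.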
{ "domain": "physics.stackexchange", "id": 54671, "tags": "homework-and-exercises, rotational-dynamics, conventions, torque, angular-velocity" }
In the cross-entropy method, should I select state-action pairs by their immediate reward or by the episode reward?
Question: I am trying to understand the code mechanics when selecting the elite states and elite actions. It appears clear to me that they are those that appear in the episodes with rewards bigger than the threshold. My question is: should I select state-action pairs by their immediate reward or by the episode reward? I am applying the method to a craft environment interesting to me, and I have been studying an example applying OpenAI's Gym taxi environment, but I do not fully understand the code. Answer: Should I select state-action pairs by their immediate reward, or should I select them by the episode reward? By the return (sum of all rewards) from the whole episode. A lot of decisions made in "good" episodes do not lead to immediate rewards, but instead transition towards states where better rewards are possible. In retrospect, you do not know whether any single action was a good choice, but with the cross-entropy method (CEM) you rely on the fact that on average the better episodes will contain more good decisions than the worse episodes, so you train the policy neural network on the (state, action) pairs from the elite episodes as if all the decisions were correct. This will not be true, but will hopefully be true more often than by chance, so the policy should improve. This can be a noisy, high-variance approach with any RL method. CEM is one of the most sensitive to noise and variance. However, the taxi environment is deterministic, and that makes using CEM more reasonable.
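To make the episode-return selection concrete, here is a minimal illustrative sketch (function and variable names are mine, not from any library) of picking elite state-action pairs by whole-episode return rather than per-step reward:

```python
import numpy as np

# Elite selection in the cross-entropy method (CEM): episodes are scored
# by their *return* (sum of all rewards), never by per-step rewards.
def select_elites(episodes, percentile=70):
    """episodes: list of (states, actions, rewards) tuples."""
    returns = [sum(rewards) for _, _, rewards in episodes]
    threshold = np.percentile(returns, percentile)
    elite_states, elite_actions = [], []
    for (states, actions, rewards), ret in zip(episodes, returns):
        if ret >= threshold:          # keep the whole episode, good or bad steps alike
            elite_states.extend(states)
            elite_actions.extend(actions)
    return elite_states, elite_actions

episodes = [
    (['s0', 's1'], ['a0', 'a1'], [0, 1]),   # return 1
    (['s2'], ['a2'], [5]),                  # return 5 -> elite
    (['s3', 's4'], ['a3', 'a4'], [1, 1]),   # return 2
]
states, actions = select_elites(episodes, percentile=70)
print(states, actions)  # ['s2'] ['a2']
```

The policy network would then be trained on these (state, action) pairs as supervised targets.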
{ "domain": "ai.stackexchange", "id": 2928, "tags": "reinforcement-learning, rewards, monte-carlo-methods, return" }
A practice program to manipulate a database table using dependency injection
Question: I am learning dependency injection and trying to use this pattern in practice. I am trying to write a simple program where the user can write something to a database, delete a row and clear all rows from the db. So is this a correct realization of dependency injection?

    import os
    from enum import Enum

    import pymysql


    class Properties(Enum):
        ID = 0
        NAME = 1


    class ConsoleManager:
        def print_line(self) -> None:
            print('-----------------\n')
            return None

        def print_data(self, data) -> None:
            for row in data:
                print(f"{row[Properties.ID.value]}\t|\t{row[Properties.NAME.value]}")
            return None


    class DatabaseManager:
        def __init__(self, connection) -> None:
            self.connection = connection
            self.cursor = connection.cursor()
            return None

        def get_count_abobas(self) -> int:
            self.cursor.execute('SELECT COUNT(*) FROM abobas')
            count = self.cursor.fetchone()[0]
            self.connection.commit()
            return count

        def insert_id_and_name(self, count, name) -> None:
            sql = f'INSERT INTO abobas (id, abobas.aboba_name) VALUES ({count + 1}, \'{name}\')'
            self.cursor.execute(sql)
            self.connection.commit()
            return None

        def select_and_get_all_rows(self) -> tuple:
            sql = "SELECT * FROM abobas ORDER BY id"
            self.cursor.execute(sql)
            result = self.cursor.fetchall()
            self.connection.commit()
            return result

        def delete_by_name(self, name) -> None:
            sql = f"DELETE FROM abobas WHERE aboba_name = '{name}'"
            self.cursor.execute(sql)
            self.connection.commit()
            return None

        def clear_table(self) -> None:
            sql = 'DELETE FROM abobas'
            self.cursor.execute(sql)
            self.connection.commit()
            return None

        def update_id(self) -> None:
            count = self.get_count_abobas()
            for i in range(1, count + 1):
                sql = f'UPDATE abobas SET id = {i} WHERE id > {i - 1} LIMIT 1'
                self.cursor.execute(sql)
            return None


    class ConsoleService:
        def __init__(self, console_manager, database_manager) -> None:
            self.console_manager = console_manager
            self.database_manager = database_manager

        @staticmethod
        def console_output(func):
            def wrap(*args):
                args[0].console_manager.print_line()
                func(*args)
                args[0].console_manager.print_line()
            return wrap

        @console_output
        def write_to_db(self) -> None:
            name = input("Enter name: ")
            count = self.database_manager.get_count_abobas()
            self.database_manager.insert_id_and_name(count, name)
            print("Row successfully added to the table.")

        @console_output
        def read_from_file(self) -> None:
            print('id\t|\tname')
            self.console_manager.print_line()
            result = self.database_manager.select_and_get_all_rows()
            self.console_manager.print_data(result)

        @console_output
        def delete_by_name(self) -> None:
            name = input('Enter the name to delete: ')
            self.database_manager.delete_by_name(name)
            self.database_manager.update_id()
            print(f"An element with name {name} was successfully deleted.")

        @console_output
        def clear_all(self) -> None:
            self.database_manager.clear_table()
            print('Table is now clear.')


    def main():
        connection = pymysql.connect(
            host='localhost',
            user='sqluser',
            password='password',
            database='abobadb'
        )
        console_manager = ConsoleManager()
        database_manager = DatabaseManager(connection)
        console_service = ConsoleService(console_manager, database_manager)
        while True:
            os.system('cls||clear')
            console_service.read_from_file()
            print('Choose the option:\n\t1. Write to db.\n\t2. Delete by name.\n\t3. Clear all elements.')
            choice = int(input("Enter your choice: "))
            if choice == 1:
                console_service.write_to_db()
            elif choice == 2:
                console_service.delete_by_name()
            elif choice == 3:
                console_service.clear_all()
            else:
                break
        connection.close()
        return None


    if __name__ == '__main__':
        main()

Also, you can check my code and give me some advice in order to make my code clear and readable. I have tried to implement dependency injection using the classes DatabaseManager, ConsoleManager and ConsoleService. Answer: Congrats, you nailed dependency injection! Although for a case where it is really useful you would need several implementations of e.g. ConsoleManager that you can choose from, or several instances of DatabaseManager that use the same dependency.
Otherwise it may be hard to see the actual benefits of DI. As for the code itself:

MySQL databases come with an auto-generated id, so you're essentially making a second one. Not saying that it's bad, especially for an exercise like this, just keep that feature in mind when designing a schema for a real database.

There is no need to return None, as it's returned implicitly (and no, an explicit return doesn't make the code more readable, if that was the concern).

Nitpick: the method name print_line is associated with "printing the text and then a \n character (a new line)", so something like draw_horizontal_line would fit better.

print_data will print something like

    9 | Lupa
    10 | Pupa

if strings in the first column have different widths. As you can see, the vertical lines are not aligned. Python has some built-in methods of dealing with this.

get_count_abobas doesn't need to commit, as it's not changing anything. update_id, on the other hand, does!

Overall very clean code, keep it up!
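For the alignment point, a small sketch (illustrative data, not from the reviewed program) of one such built-in option: format specifiers with a computed field width.

```python
# Format specifiers with a field width keep columns aligned
# regardless of how many digits the id has.
rows = [(9, 'Lupa'), (10, 'Pupa')]
width = max(len(str(i)) for i, _ in rows)
lines = [f"{i:<{width}} | {name}" for i, name in rows]
print('\n'.join(lines))
# 9  | Lupa
# 10 | Pupa
```

`str.ljust`/`str.rjust` would work equally well; the key idea is computing the column width from the data first.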
{ "domain": "codereview.stackexchange", "id": 44807, "tags": "python, python-3.x, dependency-injection, crud" }
Guidelines on meshes and image textures
Question: Hello all, are there general guidelines for images used as textures for collada files? I have a faint recollection that these images are supposed to be square and maybe also have dimensions that are powers of 2 (e.g. 64x64, 512x512, etc.).. I also know that small is better from a memory/speed concern.. but what are the practical guidelines that should be followed? Originally posted by SL Remy on Gazebo Answers with karma: 319 on 2013-11-07 Post score: 0 Answer: You are correct on all accounts. The textures should be square and powers of two. Smaller is also more efficient. Originally posted by nkoenig with karma: 7676 on 2013-11-13 This answer was ACCEPTED on the original site Post score: 0
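As a quick aside, the power-of-two constraint is easy to check programmatically; this is an illustrative sketch (my own helper, not Gazebo API code) using the single-set-bit property of powers of two:

```python
# A positive integer n is a power of two iff it has exactly one set bit,
# i.e. n & (n - 1) == 0. Combined with the squareness check from the answer:
def is_valid_texture_size(width, height):
    def pow2(n):
        return n > 0 and (n & (n - 1)) == 0
    return width == height and pow2(width)

print(is_valid_texture_size(512, 512))  # True
print(is_valid_texture_size(640, 480))  # False: not square, not powers of two
```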
{ "domain": "robotics.stackexchange", "id": 3506, "tags": "gazebo" }
How to get the vectors of memory cells & the last output of an LSTM in Keras?
Question: In this research paper the following paragraph appears: The state of every LSTM model is stored in two fixed-size vectors of real numbers called the memory cells and the last output. Since our LSTM model is trained to predict user’s behavior, elements of these vectors are the natural candidates for the user-dependent features. They can be extended by the resulting predictions (answers to the questions). This way 218 new features are obtained from the memory cells (100) and the last output (100) of the second LSTM layer and from the final output (18). I am aware of getting the weights of each layer. But how do I get these two vectors? Answer: Self-answering after reading this article. Actually, the paragraph is saying that $218$ new features are obtained from:

Memory cells or memory units of the 2nd LSTM layer: this gives a vector of $100$ cell states (one cell state per memory cell). Note that the dimension of the vector is equal to the number of hidden memory cells.

Last output of the 2nd LSTM layer: this gives a list of $100$ cell outputs (the cell output of each memory cell/unit).

Final output: the final output comprises an $18$-dimensional vector. The dimensions are $2$ for (a), $6$ for (b), $4$ for (c), $3$ for (d), $2$ for (e), $1$ for (f).

From the same article, this can be achieved in Keras using the return_state=True parameter.
    from keras.models import Model
    from keras.layers import Input
    from keras.layers import LSTM
    from numpy import array

    # define model
    inputs1 = Input(shape=(3, 1))
    lstm1, state_h, state_c = LSTM(2, return_sequences=True, return_state=True)(inputs1)
    model = Model(inputs=inputs1, outputs=[lstm1, state_h, state_c])

    # define input data
    data = array([0.1, 0.2, 0.3]).reshape((1, 3, 1))

    # make and show prediction
    print(model.predict(data))

This outputs:

    [array([[[-0.00559822, -0.0127107 ],
             [-0.01547669, -0.03272599],
             [-0.02800457, -0.0555565 ]]], dtype=float32),
     array([[-0.02800457, -0.0555565 ]], dtype=float32),
     array([[-0.06466588, -0.12567174]], dtype=float32)]
{ "domain": "datascience.stackexchange", "id": 6627, "tags": "deep-learning, keras, lstm" }
Range of difference equation coefficients in practical FIR/IIR filters
Question: In the design of practical FIR and IIR filters using difference equations, what is the range of coefficients that is employed for practical filters? How many orders might be used? What might the range of coefficients be? Answer: IIR filters are typically implemented as cascaded second-order sections. For the filter to be stable, the poles must be inside the unit circle; that means the denominator coefficients for each section satisfy $a_0 = 1$, $|a_1| < 2$, and $|a_2| < 1$. If the IIR filter is minimum phase, the same holds for the numerator coefficients as well, except for an overall gain, which can be all over the place. IIR filter order varies, but it's generally low. Orders higher than 20 are pretty rare. For FIR filters I don't think there is anything "typical". A simple differentiator has an order of 2, while a room impulse response can have an order of hundreds of thousands. You can always scale any filter to any order of magnitude you want.
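A quick numerical sketch (my own, using numpy) of the stability condition for one second-order section: compute the poles of the denominator $[1, a_1, a_2]$ and check they lie inside the unit circle.

```python
import numpy as np

# Stability check for a biquad denominator [1, a1, a2]:
# the section is stable iff both poles have magnitude < 1.
def section_is_stable(a1, a2):
    poles = np.roots([1.0, a1, a2])
    return bool(np.all(np.abs(poles) < 1.0))

print(section_is_stable(-1.8, 0.9))   # True: complex poles with |z| = sqrt(0.9)
print(section_is_stable(-2.1, 1.1))   # False: real poles at 1.0 and 1.1
```

Note that $|a_1| < 2$, $|a_2| < 1$ are necessary conditions; the pole-magnitude check above is the definitive one.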
{ "domain": "dsp.stackexchange", "id": 10872, "tags": "filter-design, digital-filters" }
Particular Solution of Multiple-Degree-of-Freedom Systems
Question: I was going through some vibration examples on how to solve multiple-degree-of-freedom systems, and I have noticed that they usually assume a particular solution with just one trigonometric function (sine or cosine depending on the excitation force, and when there is none, they just assume sine) instead of the regular pair $$x_p = A \cos{\omega t} + B \sin{\omega t}$$ I was wondering why they make this assumption and when it is allowed (2 questions). Answer: Although it is not explicitly stated, since you are considering a particular solution, the problem assumes an external excitation $F$ of some sort. If I understood the question correctly, the reasons why a particular solution with just one trigonometric function is assumed are:

1. through the Fourier theorem, any time series can be approximated by a summation of many sines/cosines;

2. using sine or cosine is not a problem because of the identity $\cos(\phi) = \sin\left(\frac{\pi}{2} + \phi\right)$, and equally $\sin(\phi) = \cos\left( \phi-\frac{\pi}{2} \right)$, so choosing sine or cosine is not a problem;

3. the Laplace transform (which is a very useful tool in those problems) is very simple;

4. the sum $A\cdot \cos(\omega t) + B\cdot \sin (\omega t)$ can be written as $A\cdot \sin(\omega t +\frac{\pi}{2}) + B\cdot \sin (\omega t)$, and therefore you have the sum of two sines at different phases ($0$ and $\frac{\pi}{2}$), which is equal to a sine wave of the same frequency at a different phase.

So, the main reason - IMHO - is no. 3, because it provides a simple solution (some might giggle at this statement) through the Laplace transform, and that makes it an ideal candidate for textbooks and in-class paradigms.
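The amplitude-phase combination mentioned in the answer can be made explicit (a standard trigonometric identity, not quoted from the original answer):

```latex
A\cos(\omega t) + B\sin(\omega t) = C\sin(\omega t + \varphi),
\qquad C = \sqrt{A^{2} + B^{2}},
\qquad \tan\varphi = \frac{A}{B}
```

so assuming a single sine (or cosine) with an unknown amplitude and phase loses no generality for a harmonic excitation at a single frequency.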
{ "domain": "engineering.stackexchange", "id": 4585, "tags": "vibration" }
Rotation Angle in Pose Orientation
Question: I have a 2D image on which I conduct an algorithm to find its rotation, and I get it in radians. No problem until here. Now that I want to fill in the pose object with what I collected from the 2D vision, I get stuck at where to insert the rotation. I do know I cannot fill in everything, and I don't intend to, either. I only need the position (I have it already) and the rotation.

    obj.pose.position.x = v['pose'][0]  # x
    obj.pose.position.y = v['pose'][1]  # y
    obj.pose.position.z = v['pose'][2]  # z
    # which one below is the one I should use to pass the rotation angle?
    obj.pose.orientation.x = v['pose'][3]
    obj.pose.orientation.y = v['pose'][4]
    obj.pose.orientation.z = v['pose'][5]
    obj.pose.orientation.w = v['pose'][6]

Any thoughts?

EDIT: So I came up with the following after the useful insights given by the commentators.

    self.some_srv = rospy.Service("/bla/request_poses", PartArray, self.some_service_srv)

    def some_service_srv(self, req):
        # some irrelevant stuff happening here
        # this is where the actual thing shall happen
        obj.pose.position.x = v['pose'][0]
        obj.pose.position.y = v['pose'][1]
        obj.pose.position.z = v['pose'][2]
        rotation_angle = v['pose'][3]  # extract the rotation angle
        zaxis = (0, 0, 1)  # rotation is around the Z axis in the image
        quaternion = quaternion_about_axis(rotation_angle, zaxis)  # use rotation angle + axis to get the quaternion
        obj.pose.orientation.x = quaternion[0]
        obj.pose.orientation.y = quaternion[1]
        obj.pose.orientation.z = quaternion[2]  # this is the rotation we are talking about
        obj.pose.orientation.w = quaternion[3]
        list_detected.part_array.append(obj)
        return list_detected

This runs, however I always get 0 values for all of the quaternions, which is somehow nonsense because the rotation angle is definitely non-zero (I print its value).
    position:
      x: 274.0
      y: 250.0
      z: 602.0
    orientation:
      x: 0.0
      y: 0.0
      z: 0.0
      w: 0.0

I print the quaternion list separately, and there is a non-zero value inside, which seems fine, but I don't know why the published value gets zero.

    [0.000000e+00 0.000000e+00 3.061617e-17 1.000000e+00]

I also printed the value of the rotation (in radians), and here are some values of it:

    0.980580687945
    6.12323399574e-17
    -0.707106781187
    0.294085862655
    -1.0
    0.56419054038
    -0.0114934175734
    6.12323399574e-17
    6.12323399574e-17
    -1.0
    0.707106781187
    6.12323399574e-17

And this is how I get the angle using OpenCV:

    # helper function to find out which side of the rectangle is longer, then add the angle appropriately
    def getAngle(rect):
        angle = rect[2]
        width, height = rect[1]
        if (width < height):
            return math.cos(math.radians(angle + 180))
        else:
            return math.cos(math.radians(angle + 90))

    ...
    # Find contours:
    (im, contours, hierarchy) = cv2.findContours(im, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cnt = contours[0]
    # get the rotated rectangle to get the rotation
    rect = cv2.minAreaRect(cnt)
    rotation = getAngle(rect)  # in radians
    ...

Originally posted by Jägermeister on ROS Answers with karma: 81 on 2019-02-04 Post score: 0 Original comments Comment by kosmastsk on 2019-02-04: Considering that the image is on the xy plane, I would say that the rotation is on the orientation.z. Think of it as turning the image around the z axis, but keeping the other axes stable. If the image is not on one axis, it may be more complicated though. Comment by PeteBlackerThe3rd on 2019-02-04: It is indeed more complicated than that, in all cases I'm afraid. See my answer below. Comment by PeteBlackerThe3rd on 2019-02-05: Even if your rotation was zero, the quaternion would contain positive values. The quaternion values you printed out are very small (of order 10^-17), so they are being rounded to zero. There appears to be a problem with the quaternion_about_axis function, since it's returning an invalid rotation quaternion.
Comment by Jägermeister on 2019-02-05: Is the function problematic, or the way I use it? I don't get it. Is there a bug in ROS, seriously? Comment by PeteBlackerThe3rd on 2019-02-05: The function definitely works; can you show us more of your code? It's probably something about how you're getting the values into the message. Comment by Jägermeister on 2019-02-05: I edited my question, but there isn't much to write really, I mean, I wrote all that is relevant. It's just a service which is called, and it should return the pose (position + quaternion). Position values are filled in, no problem, but the quaternion values are always zero for some reason. Comment by PeteBlackerThe3rd on 2019-02-05: Can you print the value and type of v['pose'][3]? If this is not numeric, then it may explain this behaviour. I've just checked the syntax and I don't think you're doing anything obviously wrong. The function really should be returning a 4D unit vector. Comment by PeteBlackerThe3rd on 2019-02-05: One of the values you're printing out, 6.12323399574e-17, is exactly twice the value in the quaternion of 3.061617e-17. I suspect there is some other code you haven't put in your question which is causing the problem. Can you post all your code which is producing the problem?
Comment by Jägermeister on 2019-02-06: Normally I couldn't do this due to confidentiality issues, but now I found that I can give a time-based link, so perhaps you can look at it, and if you find the problem, we could reshape the question without exposing the code. Here for a short amount of time. Comment by PeteBlackerThe3rd on 2019-02-06: There are still no ROS publishers in that code. But I think you need to carefully debug through the code, because the orientation values are not being copied at some point. The quaternion_about_axis function calculates them correctly, but the values are not making it into the pose message you show. Comment by Jägermeister on 2019-02-06: I traced the values until the end of the function and they are fine, but then it's all about the ROS service, which returns them as zero. It's really weird; I am about to call it a bug, almost. Never seen such a thing before, tbh, but I am not the ROS master either. Anyway, thank you for the effort. Answer: The quaternion for a given Z-axis (yaw) rotation is given by:

    w = cos(theta / 2)
    x = 0
    y = 0
    z = sin(theta / 2)

where theta is the yaw angle in radians. If X (roll) or Y (pitch) axis rotations are involved too, the formula is a little more tedious, and can be found on Wikipedia. The tf library has very helpful functions for constructing quaternions from Euler angles. For yaw quaternions, there is tf::createQuaternionFromYaw(double yaw), which takes a single argument, the yaw angle in radians, and returns an equivalent tf::Quaternion object. There is also tf::createQuaternionMsgFromYaw(double yaw) (notice the 'Msg'), which is similar but returns a geometry_msgs::Quaternion ROS message. For all three axes, there is tf::createQuaternionFromRPY(double roll, double pitch, double yaw) and tf::createQuaternionMsgFromRollPitchYaw(double roll, double pitch, double yaw). Of course, this is all available in C++ by including <tf/tf.h>.
In Python, it looks like you can convert Euler angles into a quaternion using transformations.py (doc_link). I haven't used it myself before though. Originally posted by robustify with karma: 956 on 2019-02-04 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Jägermeister on 2019-02-05: I am not using C++, but the Python equivalent of what you are saying should be quaternion_about_axis(angle, axis), right? All I have is the angle around the Z axis, so this should be it. Edited the question. Comment by VictorLamoine on 2019-02-05: Also make sure your quaternions are always normalized, otherwise most operations you do on them will yield undefined behavior. Comment by Jägermeister on 2019-02-05: Thanks for the warning. Do you have any normalization example I can take a look at? Comment by robustify on 2019-02-05: The quaternion should never be all zeros, as it should always be a unit-length vector. I would first try manually computing w and z according to the formula in my answer and make sure that you at least get the result that way, to rule out any weirdness. Comment by Jägermeister on 2019-02-05: It's not all zeros though: [0.000000e+00 0.000000e+00 3.061617e-17 1.000000e+00] does this also look weird? Comment by VictorLamoine on 2019-02-05: This looks fine, 3.0e-17 is a very small number, close enough to zero. The norm of this quaternion is 1, so you are good. Comment by robustify on 2019-02-05: Oh, didn't notice that w is 1! Yes, that is a valid quaternion, but it seems like the angle you passed into it is very close to zero. A quaternion with w = 1 and the rest zero represents no rotation.
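As a sanity check of the yaw-only formula quoted in the answer, here is a plain-Python sketch (no ROS needed; the helper name is mine):

```python
import math

# Yaw-only quaternion per the answer: w = cos(theta/2), z = sin(theta/2),
# x = y = 0. Returned in (x, y, z, w) order, as tf's Python helpers use.
def quaternion_from_yaw(theta):
    return (0.0, 0.0, math.sin(theta / 2.0), math.cos(theta / 2.0))

x, y, z, w = quaternion_from_yaw(math.pi / 2.0)   # 90 degree yaw
print(z, w)                    # both ~0.7071
print(x*x + y*y + z*z + w*w)   # ~1.0, a valid unit quaternion
```

Note that a yaw of exactly zero gives (0, 0, 0, 1), i.e. w = 1, matching the "no rotation" quaternion discussed in the comments.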
{ "domain": "robotics.stackexchange", "id": 32409, "tags": "ros, pose, ros-melodic" }
How is heat transferred to a thermometer?
Question: Quick question. I can't seem to find a satisfactory answer online. How does a thermometer measure the average kinetic energy of atmospheric air? I assume that the energy is transferred by molecular collisions, and this somehow raises the temperature of the alcohol by doing work on the thermometer. Is this correct? Somehow a thermometer acts as a speedometer, right? Answer: Leaving aside your last analogy about the speedometer (which I don't find useful, but it might work for you), I would add that in a sealed thermometer, thermal equilibrium between the external medium and the alcohol is mostly reached by exchange of electromagnetic radiation (photons). But heating or cooling of the glass molecules by the atmospheric gas, and then from the glass to the alcohol, also plays a role, albeit a minimal one in most circumstances.
{ "domain": "physics.stackexchange", "id": 22894, "tags": "thermodynamics, temperature" }
Why is $( \alpha_i r_i) (\alpha_j r_j ) = \frac{1}{2} \{ \alpha_i , \alpha_j\}r_i r_j$?
Question: Where $\alpha_i= \left( \begin{matrix} 0 & \sigma_i \\ \sigma_i & 0 \end{matrix} \right)$. To me it should just be $( \alpha_i r_i) (\alpha_j r_j ) = \alpha_i \alpha_j r_i r_j$, but it is not. Why the difference? Answer: Since $r_ir_j = r_jr_i$ we have $$ (\alpha_i r_i) (\alpha_jr_j) = \alpha_i\alpha_j r_ir_j\\ = \frac 12 \alpha_i\alpha_j r_ir_j + \frac 12 \alpha_i\alpha_j r_jr_i\\ = \frac 12 \alpha_i\alpha_j r_ir_j +\frac 12 \alpha_j\alpha_i r_ir_j\quad \text{relabeling $i\leftrightarrow j$ in second term} \\ = \frac 12 \{\alpha_i,\alpha_j\} r_ir_j $$
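As a numerical sanity check (my own sketch, building the $\alpha_i$ matrices from the Pauli matrices as in the question), one can verify that symmetrizing the matrix product changes nothing once it is contracted with the symmetric factor $r_i r_j$:

```python
import numpy as np

# Dirac alpha matrices built from the Pauli matrices sigma_i.
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
Z = np.zeros((2, 2), dtype=complex)
alpha = [np.block([[Z, si], [si, Z]]) for si in s]

rng = np.random.default_rng(0)
r = rng.standard_normal(3)   # r_i r_j is symmetric in i and j

lhs = sum(alpha[i] @ alpha[j] * r[i] * r[j]
          for i in range(3) for j in range(3))
rhs = sum(0.5 * (alpha[i] @ alpha[j] + alpha[j] @ alpha[i]) * r[i] * r[j]
          for i in range(3) for j in range(3))
print(np.allclose(lhs, rhs))  # True: the antisymmetric part cancels
```

The commutator part of $\alpha_i\alpha_j$ is antisymmetric in $(i,j)$ and so vanishes when summed against the symmetric $r_i r_j$, which is exactly the relabeling step in the derivation.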
{ "domain": "physics.stackexchange", "id": 93250, "tags": "homework-and-exercises, dirac-equation, dirac-matrices, clifford-algebra" }
What will the mass of the new galaxy be?
Question: When the Milky Way galaxy collides with the Andromeda galaxy, what will the mass and volume of the new galaxy be? Answer: The mass will be slightly less than the combined masses of the two galaxies, since some of the stars will be hurled away. Since the disks of the galaxies are at an angle to each other, the volume would be (roughly) the combined volume of the two galaxies as they collide. Eventually, the volume will decrease (I'm guessing to between 60 and 70%) as the stars adapt to their new environment and the central massive black holes combine.
{ "domain": "astronomy.stackexchange", "id": 806, "tags": "galaxy, milky-way, galactic-dynamics, m31" }
Help understanding the equation for Spin Energy in Zero Field
Question: I’m working with a system that experiences spin frustration, and for me this is the first time dealing with such a phenomenon. To grasp the concept, I’ve decided to go back to the basics and read the classic book on Molecular Magnetism by Professor Olivier Kahn. In chapter 10.1, Kahn works out the formula for the relative energies in zero field: $S'$ varies by an integer from $0$ to $2S_{a}$, and for every $S'$ value, $S$ varies by an integer from $|S'-S_{b}|$ to $S'+S_{b}$. My main interest is chapter 10.1.1, Copper(II) trinuclear species, where Kahn states that we have $S_a = S_b = \frac{1}{2}$ and that the relative energies of the states can be deduced from equation 10.1.6 (the book says 10.6, but that was probably a typo). $$ E(1/2,1) = 0$$ $$ E(3/2,1) = -3J/2$$ $$ E(1/2,0) = -J + J’$$ Substituting the values for $S$ and $S'$, I’m failing to get the same results of $0$, $-3J/2$ and $-J + J'$, and I’m getting, instead, $0.625J - 1.0J'$, $-0.875J - 1.0J'$, $-0.375J$. You can see how I’m working to get these strange results here in this Jupyter Notebook. Maybe someone with more experience could guide me on how to get the same results as the book? Thanks in advance. Answer: I have found the answer and I'm leaving it here in case someone needs it in the future. Professor Kahn has arbitrarily selected $E(1/2,1)$ as the energy zero and subtracted $E(1/2,1)$ from all the energies.
{ "domain": "chemistry.stackexchange", "id": 10848, "tags": "inorganic-chemistry, quantum-chemistry, computational-chemistry, magnetism, spin" }
Dynamic model of a robotic arm
Question: I have a question regarding the dynamic model of a robotic manipulator. It is commonly written as follows: $$ \tau = M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) + J^\intercal F_{ext} $$ From what I have seen, the equation $\tau_{ext}=J^\intercal F_{ext}$ only holds true at very low velocities, so authors usually remove the velocity and acceleration terms and write: $$ \tau = G(q) + J^\intercal F_{ext} $$ So my question is, what happens when our task is highly dynamic, with high velocities and accelerations? Answer: The first equation is always true, so at high velocities and accelerations you use the first equation. See this. At low velocities and accelerations (when $\dot{q}$ and $\ddot{q}$ are small), the first two terms are negligible, and thus you use the last equation. $\tau_{ext} = J^\intercal F_{ext}$ is also always true; it is the contribution of the external forces. See this. There are control laws for when you want to exert a desired force $F_d$ on the environment while the end effector does not move (thus $\dot{q}=0$ and $\ddot{q}=0$): $$\tau_{control} = J^\intercal F_{d}$$ and this can be even more precise if you add the gravity term. This leads to your last equation. See this.
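A toy single-joint sketch (all numerical values assumed for illustration) shows why the $M(q)\ddot q$ term cannot be dropped in dynamic tasks:

```python
import numpy as np

# Toy 1-DOF pendulum-joint version of
#   tau = M(q) qdd + C(q, qd) qd + G(q) + J^T F_ext
m, l, g = 1.0, 0.5, 9.81           # assumed mass, link length, gravity

def torque(q, qd, qdd, f_ext):
    M = m * l**2                    # inertia about the joint
    C = 0.0                         # no Coriolis/centrifugal term with one joint
    G = m * g * l * np.cos(q)       # gravity torque
    J = l * np.array([-np.sin(q), np.cos(q)])  # planar end-effector Jacobian
    return M * qdd + C * qd + G + J @ f_ext

# At rest with no contact force, only the gravity term survives:
print(torque(0.0, 0.0, 0.0, np.array([0.0, 0.0])))   # 4.905
# With a large joint acceleration, M(q) qdd dominates and cannot be dropped:
print(torque(0.0, 0.0, 50.0, np.array([0.0, 0.0])))  # 17.405
```

The first case reproduces the simplified model $\tau = G(q) + J^\intercal F_{ext}$; the second shows the error incurred by dropping the inertial term in a fast motion.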
{ "domain": "robotics.stackexchange", "id": 39056, "tags": "robotic-arm, dynamics" }
Interpretation of Hubble Diagram
Question: According to my professor's notes, this is the Hubble diagram. Unfortunately, I do not know what the y-axis is referring to. Is it the absolute luminosity? Answer: The quantity $m-M$ is the difference between the apparent magnitude and the absolute magnitude, and is referred to as the distance modulus, $\mu$. The relationship between $\mu$ and distance, $d$ (in parsecs), is logarithmic, i.e. $$\mu=5\log d-5+\text{corrections}$$ where the correcting terms account for observational effects. In cosmological cases, we substitute $D_L$, the so-called luminosity distance, for $d$, where $D_L=d(1+z)$, for redshift $z$. Therefore, $$ \begin{align} \mu&=5\log D_L+\text{constants}\\ &=5\log(d(1+z))+\text{constants}\\ &=5\log d+5\log(1+z)+\text{constants} \end{align}$$ We should therefore see a linear relationship between $\mu$ and $\log(1+z)$ - which we do.
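A small sketch of the distance-modulus relation (assuming $d$ in parsecs, so $\mu = 0$ at $10\,\mathrm{pc}$; the helper name is mine):

```python
import math

# mu = m - M = 5 log10(d / 10 pc) = 5 log10(d_pc) - 5,
# with the luminosity distance D_L = d (1 + z) for cosmological sources.
def distance_modulus(d_pc, z=0.0):
    d_lum = d_pc * (1.0 + z)
    return 5.0 * math.log10(d_lum) - 5.0

print(distance_modulus(10.0))    # 0.0 at 10 pc
print(distance_modulus(1.0e6))   # 25.0 at 1 Mpc
```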
{ "domain": "physics.stackexchange", "id": 43781, "tags": "astrophysics, doppler-effect, redshift" }
Torque Required to Drive Capstan and Bow
Question: I am trying to find the torque exerted on the shaft of a capstan and bow drive. In particular, the system I'm looking at has two wire rope pulleys and two idler pulleys, and is used to drive a carriage as seen below: In this case the drive pulley consists of two separate pulleys tensioned against one another, with the ends of two wire ropes terminated in the pulleys. Naively, I would assume that the force $F$ is equal to the torque $T$ divided by the radius $r$ of the drive pulley. Are there any additional effects due to multiple wraps on the pulley (one side reels out while the other side reels in, keeping the total angle of wraps equal throughout the motion)? Does the capstan effect play a large part in this mechanism? From what I understand, the capstan effect should cause less tension at the wire rope stops embedded in the drive pulley, but not affect $F$. Thanks! Answer: In theory, the force is just the torque divided by the radius. In practice, wrapping the rope a few times around the pulley will rob a little torque from it, but it may even be unnoticeable. At the same time, friction between rope and pulley will increase, thus allowing you to transfer more force to the rope before it starts slipping over the pulley. It also helps to tension the rope to reduce slipping (much like a derailleur on mountain bikes), but again it'll cost you a little torque. So only consider it if the rope keeps slipping.
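For the capstan-effect part of the question, the Euler-Eytelwein relation quantifies how quickly the tension decays along the wrapped rope; this sketch assumes an arbitrary friction coefficient for illustration:

```python
import math

# Capstan equation: T_hold = T_load * exp(-mu * phi), so the tension at the
# rope stop embedded in the pulley is only a small fraction of the working
# tension after a few wraps. mu is an assumed rope-on-pulley value.
mu = 0.3
for wraps in (1, 2, 3):
    phi = 2.0 * math.pi * wraps         # total wrap angle in radians
    print(wraps, math.exp(-mu * phi))   # remaining tension fraction
```

With one full wrap the embedded termination already sees only ~15% of the working tension, which is consistent with the intuition in the question that the capstan effect unloads the rope stops without changing $F$.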
{ "domain": "engineering.stackexchange", "id": 1920, "tags": "torque, pulleys, power-transmission" }
Other units of resistance
Question: Resistance is $\frac{V}{I}$, and we get the unit $\Omega$. Another unit is $\frac{W}{A^2}$. How do you derive that unit? Answer: Watts are a unit of power, $P=IV$. Considering the units: $$ \frac{[W]}{[A]^2} = \frac{[A][V]}{[A]^2} = \frac{[V]}{[A]} $$ Therefore the units are the same. In general, you can determine the units of anything by considering the equations that determine it and substituting all the variables for their units.
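The substitution argument can even be mechanized; here is a tiny sketch (my own representation) tracking SI base-unit exponents as tuples over (kg, m, s, A):

```python
# Each unit is a tuple of base-unit exponents (kg, m, s, A);
# multiplying units adds exponents, dividing subtracts them.
V = (1, 2, -3, -1)   # volt = kg m^2 s^-3 A^-1
W = (1, 2, -3, 0)    # watt = kg m^2 s^-3
A = (0, 0, 0, 1)     # ampere

def mul(u, v):
    return tuple(a + b for a, b in zip(u, v))

def div(u, v):
    return tuple(a - b for a, b in zip(u, v))

print(div(V, A))          # (1, 2, -3, -2): the ohm in base units
print(div(W, mul(A, A)))  # the same tuple, so W/A^2 = V/A
```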
{ "domain": "physics.stackexchange", "id": 13446, "tags": "homework-and-exercises, electricity, units, electrical-resistance" }
How to use QFT operation in Q#?
Question: I see the QFT operation in the document given by Microsoft, but I don't know how to call it. operation QFT (qs : Microsoft.Quantum.Arithmetic.BigEndian) : Unit is Adj + Ctl Does this method need to be rewritten? How to set the parameters here? Can I have a brief example? Answer: BigEndian is a data type that is effectively just a wrapper for an array of qubits. If you want to apply QFT to a qubit array register, you need to convert it to BigEndian first: QFT(BigEndian(register));
{ "domain": "quantumcomputing.stackexchange", "id": 2556, "tags": "programming, q#, quantum-fourier-transform" }
Linked list implementation in Clojure
Question: Background Linked lists are a well-known data structure, so I won't waste too much detail on them. Suffice to say that a linked list consists of "cells". Each cell contains some kind of value and a reference to the next cell (except the last one, which contains an empty reference). I implemented a linked list in Clojure as an exercise. The code (ns linked-list.core) (defn create-linked-list [] {}) (defn add-to-linked-list [{:keys [value next-node] :as linked-list} new-value] (let [new-node {:value new-value :next-node nil}] (if (= {} linked-list) new-node (let [new-tail (if (nil? next-node) new-node (add-to-linked-list next-node new-value))] (assoc-in linked-list [:next-node] new-tail))))) (defn contains-linked-list? [{:keys [value next-node] :as linked-list} query-value] (if (empty? linked-list) false (or (= value query-value) (recur next-node query-value)))) (defn get-nth-linked-list [{:keys [value next-node] :as linked-list} n] (if (< n 1) value (recur next-node (dec n)))) (defn without-element-linked-list [linked-list n] (loop [{:keys [value next-node] :as act-node} linked-list counter 0 linked-list-accum (create-linked-list)] (let [new-linked-list (if (= counter n) linked-list-accum (add-to-linked-list linked-list-accum value))] (if (nil? 
next-node) new-linked-list (recur next-node (inc counter) new-linked-list))))) The tests (ns linked-list.core-test (:require [clojure.test :refer :all] [linked-list.core :refer :all])) (deftest create-linked-list-test (testing "create linked list" (is (= {} (create-linked-list))))) (def empty-linked-list (create-linked-list)) (deftest add-to-linked-list-test (testing "add to linked list" (is (= {:value 10 :next-node nil} (add-to-linked-list empty-linked-list 10))) (is (= {:value 10 :next-node {:value 20 :next-node nil}} (add-to-linked-list (add-to-linked-list empty-linked-list 10) 20))) (is (= {:value 10 :next-node {:value 20 :next-node {:value 30 :next-node nil}}} (add-to-linked-list (add-to-linked-list (add-to-linked-list empty-linked-list 10) 20) 30))))) (def one-element-linked-list (add-to-linked-list empty-linked-list 10)) (def two-element-linked-list (add-to-linked-list one-element-linked-list 20)) (deftest contains-linked-list-test (let [one-element-linked-list (add-to-linked-list empty-linked-list 10) two-element-linked-list (add-to-linked-list one-element-linked-list 20)] (testing "does a linked list contain a value" (is (false? (contains-linked-list? empty-linked-list 10))) (is (true? (contains-linked-list? one-element-linked-list 10))) (is (true? (contains-linked-list? two-element-linked-list 20))) (is (false? (contains-linked-list? two-element-linked-list 30)))))) (deftest get-nth-linked-list-test (testing "get the nth element of linked list" (is (nil? 
(get-nth-linked-list empty-linked-list 0))) (is (= 10 (get-nth-linked-list one-element-linked-list 0))) (is (= 20 (get-nth-linked-list two-element-linked-list 1))))) (def three-element-linked-list (add-to-linked-list two-element-linked-list 30)) (deftest without-element-linked-list-test (testing "remove the nth element of linked list" (is (= empty-linked-list (without-element-linked-list empty-linked-list 0))) (is (= empty-linked-list (without-element-linked-list one-element-linked-list 0))) (is (= {:value 20 :next-node nil} (without-element-linked-list two-element-linked-list 0))) (is (= one-element-linked-list (without-element-linked-list two-element-linked-list 1))) (is (= {:value 10 :next-node {:value 30 :next-node nil}} (without-element-linked-list three-element-linked-list 1))))) Review objectives What do you think about the chosen data structure (nested dictionaries)? Would there be a way to implement this in Clojure, in a more efficient way? Is there a way to rewrite add-to-linked-list (or maybe create-linked-list) so that it is not necessary to check the specific case for empty list, when adding a new element? Is it possible to make without-element-linked-list less complicated? Are there any places where good Clojure practices could be improved? Can you think of any (edge) cases not covered by the unit tests? GitHub The source for this question is available here. Answer: What do you think about the chosen data structure (nested dictionaries)? Would there be a way to implement this in Clojure, in a more efficient way? Conceptually it works. I wouldn't use plain maps here though. You know every node in the list should contain only the keys :value and :next-node. In a case like this, I'd use a record instead: (defrecord Node [value, next-node]) (defn new-node [value] (->Node value nil)) This comes with slight performance benefits, and just generally makes more sense. 
If you needed to create an object in Java, and you knew it only needed two specific fields, would you represent that object as a Map, or as a class? I would say a class. If you don't need an object with a variable number of fields, don't represent it as something that has a variable number of fields. Yes, records are basically just maps, but I would consider them more appropriate here because it's explicit what keys they should contain. This raises the problem of what an empty list would be. nil works here, and should go well with recursion (more on that in a bit). Is there a way to rewrite add-to-linked-list (or maybe create-linked-list) so that it is not necessary to check the specific case for empty list, when adding a new element? You'll always need to handle the base-case check in some way, so I can't see a way of taking it out completely. You could use multimethods to "pattern match" like you'd do in Haskell, but that would add a lot of bulk and not a lot of clarity. The entire function can be significantly neatened up through other means though: (defn add-to-linked-list2 [node new-value] ; An empty list is now nil, which is falsey (if node ; Using update frees us from needing to destructure the node (update node :next-node #(add-to-linked-list2 % new-value)) ; This can also be written as (update node :next-node add-to-linked-list2 new-value) ; thanks to update's overloads, which is even nicer (new-node new-value))) The major changes: Because an empty list is now nil, you don't have to worry about an empty list being {}, but an empty :next-node being nil. This greatly simplifies the recursing line. Instead of checking if the list is equal to an empty map, we can just check its truthiness directly. I'm using my new-node here. I know a new node will have a next-node of nil, so I might as well remove that fact from this code and let a separate function handle that.
Of course, the major problem with both our implementations is that they can't be written using recur, meaning they're susceptible to Stack Overflows. I tried writing a solution in terms of loop, but it got super messy, and really, the use case doesn't make sense anyway. It's unreasonable to iterate the entire list just to append an element. If you're going to use a linked list, just prepend at the head, which entirely does away with the need for iteration on addition. Look at Haskell and how well its standard linked list goes with recursion. Every operation is done at the head, and that simplifies (and speeds up) additions. Is it possible to make without-element-linked-list less complicated? Firstly, your loop accumulators are written in a super confusing way. At least separate each binding with commas, or ideally, place each on its own line: (loop [{:keys [value next-node] :as act-node} linked-list counter 0 linked-list-accum (create-linked-list)] Going against what I said earlier about using unoptimized recursion, the best method I could find is unoptimized: (defn without-element-linked-list2 [{:keys [next-node] :as node} n] (cond ; We hit the end of the list without finding it. Throw an error? ; Apparently n was out of bounds. (nil? node) nil ; We found the element to delete. Just "skip" over it. (zero? n) next-node ; Very similar to the add method. We're part way through the list, so keep iterating. :else (update node :next-node without-element-linked-list2 (dec n)))) I decided to just decrement n as we iterate to avoid a second counter. I understand that you're probably doing this purely as an exercise, but the fact that getting an optimized solution is so difficult should be a sign that something's wrong. In the ~2 years that I've been writing Clojure, I've never written my own basic structure. In most other languages, it's a very common exercise.
In Clojure though, the lower-level implementation of the structures used is best tucked away somewhere, and it's much more common to just use the basic structures of the language. If you need linked list behavior, use plain lists. If you need array behavior, use a vector. Sometimes a more custom structure is needed, but tend to reach for native structures first, since they're almost always sufficient, and already have an extensive API. Are there any places where good Clojure practices could be improved? In contains-linked-list?, you make good use of or. It can still be improved on though: (defn contains-linked-list?2 [{:keys [value next-node] :as node} query-value] (when node (or (= value query-value) (recur next-node query-value)))) when is great. It neatens up code nicely. If you ever find yourself conditionally returning a falsey value, consider just negating the condition, and using when to implicitly return nil. Or, instead of negating the condition like I did here, you could use when-not.
{ "domain": "codereview.stackexchange", "id": 29841, "tags": "unit-testing, linked-list, clojure" }
Determine the smallest $3$ numbers in a set using comparisons
Question: Given a set of $n$ distinct numbers, we would like to determine the smallest $3$ numbers in this set using comparisons. The elements can be determined using $n+O(\log n)$ comparisons. This was the answer given in a multiple choice question. Now, we can easily do this using $3n\in O(n)$ comparisons. However, I have been thinking about it, and I can't do it in $n+O(\log n)$ comparisons. They might be thinking about making a heap and using heapify, which takes $O(n)$ time. However, in heapify the comparisons are $O(n)$ but not like $n+O(\log n)$, because every step in the heapify needs finding the minimum among $3$ numbers, which takes $3$ comparisons, so making a heap and finding the three minimums should be $3n$ comparisons. So, can anyone tell me how to find three minimums in $n+O(\log n)$ comparisons, or am I messing up somewhere in the heapify logic? Answer: I can show you how to get the two smallest elements in $n + O( \log n)$ comparisons; the extension to the three smallest elements should not be difficult. Build the following tree. Start with your array, then for each pair of neighboring elements (non-overlapping), promote the smaller one level up. Do the same for the new level, until the tree has a root. This tree is built using less than $n$ comparisons ($n / 2 + n / 4 + \dots$). The smallest element is obviously the root. The second smallest element must be one of the siblings of the nodes that are equal to the root along the path down the tree to the array level. There are at most $\log n$ such nodes. You should also convince yourself of the bound when the number of nodes on a level is not even.
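The answer's tournament can be sketched directly; this version finds the two smallest while counting comparisons, and stays within $n + \lceil\log_2 n\rceil$ (extending it to the third smallest follows the same pattern). The function name and structure are my own illustration:

```python
def two_smallest(xs):
    """Tournament: n - 1 comparisons build the tree; the second smallest must
    have lost directly to the winner, so at most ceil(log2 n) - 1 more finish.
    Assumes len(xs) >= 2 and distinct values."""
    comparisons = 0
    # Each entry: (value, list of values that lost directly to it)
    level = [(x, []) for x in xs]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            (a, la), (b, lb) = level[i], level[i + 1]
            comparisons += 1
            nxt.append((a, la + [b]) if a < b else (b, lb + [a]))
        if len(level) % 2:              # odd one out gets a bye
            nxt.append(level[-1])
        level = nxt
    winner, losers = level[0]
    second = losers[0]
    for x in losers[1:]:                # scan the winner's direct opponents
        comparisons += 1
        if x < second:
            second = x
    return winner, second, comparisons
```

For n = 1000 this needs at most 999 + 9 comparisons, versus roughly 2n for the naive two-pass scan.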
{ "domain": "cs.stackexchange", "id": 7804, "tags": "algorithms, complexity-theory, sorting" }
Premier League data scraper
Question: I have started a project where I am scraping JSON-data from an API. How the scraping is done right now is done in a very repetitive way where the keys of interest are specified and scraped. The data-structure I use is a nested dict to store all the data for each function. So the steps of each function is straightforward, make a request, iterate through all the data points of interest, store in a dictionary and then write the JSON-file. I'm looking to see if there is a more efficient way of parsing JSON-data, if I should consider creating functions that handle smaller tasks and if the data-structure of choice is appropriate. The end-game of it all is to create dashboards and analytics so an important function is to be able to link between the datasets, which is supposed to be handled by the different id's for teams, games, fixtures, arenas and so forth. Thank you for taking your time to read, below you will find the full-code. Many thanks, import requests import json from pprint import pprint from tqdm import tqdm class Premier_league: def __init__(self): self.base_url = 'https://footballapi.pulselive.com/football' def get_competion_id(self): competitions = {} #Store all competitions league = {} #Store info for each competion url = self.base_url + '/competitions' params = ( ('pageSize', '100'),#adds ?pageSize=100 to url ) response = requests.get(url, params = params).json() # request to obtain the id values and corresponding competition all_comps = response["content"] #loop to get all info for all competitions for comp in all_comps: league[comp["id"]] = comp["description"] # creating a stat dict for the player competitions[league[comp["id"]]] = {"info":{}} competitions[league[comp["id"]]]["info"]["abbreviation"] = comp["abbreviation"] competitions[league[comp["id"]]]['info']['id'] = comp['id'] f = open("competitions.json","w") # pretty prints and writes the same to the json file f.write(json.dumps(competitions,indent=4, sort_keys=False)) f.close() def 
get_clubs(self): clubs = {} #Store all clubs team = {} #Store info for each team url = self.base_url + '/clubs' page = 0 #starting value of page while True: params = ( ('pageSize', '100'), ('page', str(page))#adds ?pageSize=100 to url ) response = requests.get(url, params = params).json() # request to obtain the team info all_clubs = response["content"] #loop to get all info for all competitions for club in all_clubs: clubs[club['name']]= club['teams'][0]['id'] #Unessesary code below, might be of use, produces complex dict-structure #team[club["id"]] = club["name"] #clubs[team[club["id"]]] = {"info":{}} #clubs[team[club["id"]]]['info']['name'] = club["name"] #clubs[team[club["id"]]]['info']["id"]= club['teams'][0]['id'] page += 1 if page == response["pageInfo"]["numPages"]: break f = open("clubs.json","w") # pretty prints and writes the same to the json file f.write(json.dumps(clubs,indent=4, sort_keys=False)) f.close() def get_fixtures(self,compSeasons): fixtures_unplayed = {} #Store info for not played fixtures games_unplayed = {} #Store info for not played games fixtures_played = {} #Store all clubs games_played = {} #Store info for each team url = self.base_url + '/fixtures' page = 0 #starting value of page while True: params = ( ('pageSize', '100'), #adds ?pageSize=100 to url ('page', str(page)), ('compSeasons', str(compSeasons)), ) response = requests.get(url, params = params).json() # request to obtain the team info all_games = response["content"] #loop to get info for each game for game in tqdm(all_games): if game['status'] == 'U': games_unplayed[game["id"]] = game['id'] fixtures_unplayed[games_unplayed[game["id"]]] = {"match":{}} fixtures_unplayed[games_unplayed[game["id"]]]['match'] = game['id'] fixtures_unplayed[games_unplayed[game["id"]]]['kickoff'] = game['fixtureType'] fixtures_unplayed[games_unplayed[game["id"]]]['preli_date'] = game['provisionalKickoff']['label'] fixtures_unplayed[games_unplayed[game["id"]]]['scientific_date'] = 
game['provisionalKickoff']['millis'] fixtures_unplayed[games_unplayed[game["id"]]]['home_team'] = game['teams'][0]['team']['name'] fixtures_unplayed[games_unplayed[game["id"]]]['home_team_id'] = game['teams'][0]['team']['club']['id'] fixtures_unplayed[games_unplayed[game["id"]]]['home_team_abbr'] = game['teams'][0]['team']['club']['abbr'] fixtures_unplayed[games_unplayed[game["id"]]]['away_team'] = game['teams'][1]['team']['name'] fixtures_unplayed[games_unplayed[game["id"]]]['away_team_id'] = game['teams'][1]['team']['club']['id'] fixtures_unplayed[games_unplayed[game["id"]]]['away_team_abbr'] = game['teams'][1]['team']['club']['abbr'] fixtures_unplayed[games_unplayed[game["id"]]]['grounds'] = game['ground']['name'] fixtures_unplayed[games_unplayed[game["id"]]]['grounds_id'] = game['ground']['id'] fixtures_unplayed[games_unplayed[game["id"]]]['gameweek'] = game['gameweek']['gameweek'] fixtures_unplayed[games_unplayed[game["id"]]]['status'] = game['status'] for game in tqdm(all_games): if game['status'] == 'C': games_played[game["id"]] = game['id'] fixtures_played[games_played[game["id"]]] = {"match":{}} fixtures_played[games_played[game["id"]]]['match'] = game['id'] fixtures_played[games_played[game["id"]]]['kickoff'] = game['fixtureType'] fixtures_played[games_played[game["id"]]]['preli_date'] = game['provisionalKickoff']['label'] fixtures_played[games_played[game["id"]]]['scientific_date'] = game['provisionalKickoff']['millis'] fixtures_played[games_played[game["id"]]]['home_team'] = game['teams'][0]['team']['name'] fixtures_played[games_played[game["id"]]]['home_team_id'] = game['teams'][0]['team']['club']['id'] fixtures_played[games_played[game["id"]]]['home_team_abbr'] = game['teams'][0]['team']['club']['abbr'] fixtures_played[games_played[game["id"]]]['home_team_score'] = game['teams'][0]['score'] fixtures_played[games_played[game["id"]]]['away_team'] = game['teams'][1]['team']['name'] fixtures_played[games_played[game["id"]]]['away_team_id'] = 
game['teams'][1]['team']['club']['id'] fixtures_played[games_played[game["id"]]]['away_team_abbr'] = game['teams'][1]['team']['club']['abbr'] fixtures_played[games_played[game["id"]]]['away_team_score'] = game['teams'][1]['score'] fixtures_played[games_played[game["id"]]]['grounds'] = game['ground']['name'] fixtures_played[games_played[game["id"]]]['grounds_id'] = game['ground']['id'] fixtures_played[games_played[game["id"]]]['gameweek'] = game['gameweek']['gameweek'] fixtures_played[games_played[game["id"]]]['outcome'] = game['outcome'] fixtures_played[games_played[game["id"]]]['extraTime'] = game['extraTime'] fixtures_played[games_played[game["id"]]]['shootout'] = game['shootout'] fixtures_played[games_played[game["id"]]]['played_time'] = game['clock']['secs'] fixtures_played[games_played[game["id"]]]['played_time_label'] = game['clock']['label'] fixtures_played[games_played[game["id"]]]['status'] = game['status'] page +=1 if page == response["pageInfo"]["numPages"]: break fixtures = dict(fixtures_unplayed) fixtures.update(fixtures_played) with open("unplayed_fixtures.json","w") as f: f.write(json.dumps(fixtures_unplayed,indent=4, sort_keys=True)) with open("played_fixtures.json","w") as f: f.write(json.dumps(fixtures_played,indent=4, sort_keys=True)) with open("fixtures.json","w") as f: f.write(json.dumps(fixtures,indent=4, sort_keys=True)) if __name__ == "__main__": prem = Premier_league() prem.get_fixtures(274) Answer: There's a couple small things I usually do differently that I'd like to point out: f = open("competitions.json","w") # pretty prints and writes the same to the json file f.write(json.dumps(competitions,indent=4, sort_keys=False)) f.close() Can be replaced with: with open("competitions.json","w") as f: # pretty prints and writes the same to the json file f.write(json.dumps(competitions,indent=4, sort_keys=False)) Which prevents leaving files open by accident. 
You also do: page = 0 #starting value of page while True: # stuff page += 1 if page == response["pageInfo"]["numPages"]: break Which can be replaced by: for page in range(response["pageInfo"]["numPages"]): The assignment of dictionaries can also be done nicer imo. Instead of: games_unplayed[game["id"]] = game['id'] fixtures_unplayed[games_unplayed[game["id"]]] = {"match": {}} fixtures_unplayed[games_unplayed[game["id"]]]['match'] = game['id'] fixtures_unplayed[games_unplayed[game["id"]]]['kickoff'] = game['fixtureType'] fixtures_unplayed[games_unplayed[game["id"]]]['preli_date'] = game['provisionalKickoff']['label'] fixtures_unplayed[games_unplayed[game["id"]]]['scientific_date'] = game['provisionalKickoff']['millis'] fixtures_unplayed[games_unplayed[game["id"]]]['home_team'] = game['teams'][0]['team']['name'] fixtures_unplayed[games_unplayed[game["id"]]]['home_team_id'] = game['teams'][0]['team']['club']['id'] fixtures_unplayed[games_unplayed[game["id"]]]['home_team_abbr'] = game['teams'][0]['team']['club']['abbr'] fixtures_unplayed[games_unplayed[game["id"]]]['away_team'] = game['teams'][1]['team']['name'] fixtures_unplayed[games_unplayed[game["id"]]]['away_team_id'] = game['teams'][1]['team']['club']['id'] fixtures_unplayed[games_unplayed[game["id"]]]['away_team_abbr'] = game['teams'][1]['team']['club']['abbr'] fixtures_unplayed[games_unplayed[game["id"]]]['grounds'] = game['ground']['name'] fixtures_unplayed[games_unplayed[game["id"]]]['grounds_id'] = game['ground']['id'] fixtures_unplayed[games_unplayed[game["id"]]]['gameweek'] = game['gameweek']['gameweek'] fixtures_unplayed[games_unplayed[game["id"]]]['status'] = game['status'] Use: game_id = game['id'] index = games_unplayed[game_id] fixtures_unplayed[index] = \ {'match': game_id, 'kickoff': game['fixtureType'], 'preli_date': game['provisionalKickoff']['label'], 'scientific_date': game['provisionalKickoff']['millis'], 'home_team': game['teams'][0]['team']['name'], 'home_team_id':
game['teams'][0]['team']['club']['id'], 'home_team_abbr': game['teams'][0]['team']['club']['abbr'], 'away_team': game['teams'][1]['team']['name'], 'away_team_id': game['teams'][1]['team']['club']['id'], 'away_team_abbr': game['teams'][1]['team']['club']['abbr'], 'grounds': game['ground']['name'], 'grounds_id': game['ground']['id'], 'gameweek': game['gameweek']['gameweek'], 'status': game['status']} Lastly, not that important, but I don't like to hardcode values: def __init__(self): self.base_url = 'https://footballapi.pulselive.com/football' Could also be: def __init__(self, base_url = 'https://footballapi.pulselive.com/football'): self.base_url = base_url
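One step further than the review goes (my addition, not the reviewer's): `json.dump` serializes straight into an open file object, so the `dumps`-then-`write` pair collapses into a single call:

```python
import json

# Hypothetical sample data standing in for the scraped competitions dict
competitions = {"Premier League": {"info": {"abbreviation": "PL", "id": 1}}}

# json.dump writes directly to the file handle; no intermediate string needed
with open("competitions.json", "w") as f:
    json.dump(competitions, f, indent=4, sort_keys=False)
```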
{ "domain": "codereview.stackexchange", "id": 37330, "tags": "python, web-scraping, hash-map" }
reflection - physical significance
Question: As far as qualitative physical analysis goes, I treat reflection as occurring due to the following fact: a wave carries energy with it while propagating, and when it meets a hard (reflecting) surface through which it cannot pass, it reflects (the feasible method to conserve the energy easily). I find certain flaws in the above reasoning: energy can be conserved in other ways too (as heat, etc.), and I suspect there are more vital reasons, mine being a less important one. So what are the real qualitative reasons behind reflection? Most books describe it as "bouncing back of light", which I find somewhat unsatisfactory. Isn't that a rather mechanistic view? Answer: You're right, it's a lot more complicated than that. Griffiths' book on E&M goes much deeper into the process (mostly in the 2nd half of the book). It has to do with complex E&M processes, based on Maxwell's equations, but Wikipedia has a nice summary here. Basically, it's a hard scatter, or absorption and re-emission of light. It's not easy to describe because the physics is very mathematically intense (though certain gauge theories can simplify it). What's really interesting is the physics of where it scatters or absorbs (preferred depth for reflection/refraction). Griffiths covers that really nicely. It's amazing to see how quickly light reflects from certain materials, etc. I hope these resources are accessible to you at your local library or school!
{ "domain": "physics.stackexchange", "id": 43576, "tags": "reflection" }
State of the art trajectory for fine precision robots
Question: Which kind of trajectory would one use for a fine-precision robot? I know trapezoidal and cubic trajectories, but errors get very high when stopping at every configuration and when speeding up to the highest possible velocity. How is this done in practice, especially when not much deviation from a straight path is wanted? In my case I try to get the end-effector to follow a straight line with a three-joint rotational robot. Answer: For very high-precision applications such as finishing or milling by CNC machines, jerk-bounded trajectories (that is, trajectories comprising polynomials of degree 3 or higher) are often used. If you search on Google Scholar using the term "jerk bounded", you can find loads of methods to plan such trajectories. High-order polynomial trajectories (or splines) are not always necessary, however. For stiff articulated manipulators (such as position-controlled robots), they are usually stiff enough that using second-order (parabolic) polynomial trajectories is perfectly fine. The effect of (theoretically) unbounded jerk is often negligible. In your case, you mentioned that the errors got high when stopping and speeding up. I suspect that this might not come from the trajectory generation method you used. If the robot you are using is custom-built, you might want to Do kinematics calibration to make sure that the locations of joints and the tool are actually where you think they are; and Check how well the motors can track your input commands. Otherwise, you may check if this issue is related in any way to singularities.
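For reference, the simplest member of that degree-3 family is the point-to-point cubic, which starts and stops with zero velocity and has constant (hence bounded) jerk. A quick sketch, not tied to any particular robot or library:

```python
def cubic_trajectory(q0, qf, T):
    """Cubic q(t) with q(0) = q0, q(T) = qf and zero velocity at both ends.
    The boundary conditions fix the coefficients:
        q(t) = q0 + a2*t^2 + a3*t^3,  a2 = 3*(qf - q0)/T^2,  a3 = -2*(qf - q0)/T^3
    """
    dq = qf - q0
    a2 = 3.0 * dq / T ** 2
    a3 = -2.0 * dq / T ** 3

    def q(t):      # position
        return q0 + a2 * t ** 2 + a3 * t ** 3

    def qdot(t):   # velocity
        return 2.0 * a2 * t + 3.0 * a3 * t ** 2

    return q, qdot

# One joint moving from 0 to 1 rad in 2 s
q, qdot = cubic_trajectory(0.0, 1.0, 2.0)
```

To keep the end-effector on a straight line, the same profile is typically applied to a scalar path parameter in Cartesian space, with inverse kinematics mapping each sampled pose back to the three joints.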
{ "domain": "robotics.stackexchange", "id": 1545, "tags": "actuator, joint, errors" }
Can these things be done at home on a hobbyist budget?
Question: I've toyed with getting into microscopy as a hobby for fun and potentially practical reasons. Can anyone tell me whether or not these are doable at home in 2023 with a \$500 to \$1000 budget? Identifying random molds (on surfaces, on foods) Seeing what's in dust? (mites, mold?) Looking for and identifying parasites in animals or soil samples (worms in feces, fleas or fungi on skin or hooves) Looking for metal shavings in engine oils (maybe not appropriate for this forum, worth a shot anyway) Quick googling led me to something like the Swift SW380T with a camera for around \$500 + some equipment for slides. Being able to record is particularly attractive. Is that reasonable? Is it overkill? Underkill? Should I just grab something used off fb marketplace for a fraction of the cost? Answer: Yes, and no as to bacterial and fungal samples. For these you need a conventional light microscope that transmits the light through the sample. These are the common ones when people consider microscopes. You can also get reflected light microscopes (and even hybrid versions for hobbyists), that are needed if you are looking at solid objects such as fleas. These are sometimes called dissection microscopes and generally have lower magnifications (10-200x) only. I don't see why you wouldn't be able to see metal shavings in engine oil, with a conventional microscope, though getting enough light through the sample might be a problem there. Don't get oil on the lenses though and be wary of scratching your lens with any metal shavings if you do. The key with microscopes is quality of the lenses (not surprisingly) - the better the lens, the fewer aberrations you get in the image, both in colour distortion and spherical (shape) aberration. Mid-range microscopes, price-wise, tend not to be the greatest at lens production, but are certainly suitable for general use - and you may also be able to buy better lenses and swap them in.
In this group I would include manufacturers such as Swift and Celestron. The top-end microscope makers such as Olympus and Zeiss are typically much more expensive, but worth it, if you can find a good one second-hand. For bacteria you need 1000x (100x objective with 10x eyepiece) to see any details, but they are visible at 400x, if you know what you are doing. However, you need a lot of bacteria to do this, as I explain below. Generally for ID purposes and even just to visualize bacteria, you would need to stain the sample. This involves a bunch of chemicals (iodine, potassium hydroxide etc.) and stains (Giemsa, Safranin, Toluidine blue, Malachite green etc.) that won't be available to the home hobbyist, though there might be alternatives to these that home hobbyists have worked out and/or are available (e.g. crystal violet, part of the Gram stain, is gentian violet used to treat skin infections sometimes, so chemists/pharmacies have it). Food dyes work in some situations. You also generally need a pure isolate to look at, though you can do mixed flora growths off medium that might work. Just taking something like skin and looking for bacteria or fungi won't work as they aren't abundant enough to see easily. In a proper lab, bacteria and fungi are grown on a solid medium (agar plate - you can think of this as a jelly with nutrients) and scraped off. Generally not done straight from a sample as this is difficult and has lots of other things in there that interfere with the microscopy. That's not to say you can't do it, just it is more difficult! Having said that, you can look at all sorts of things under a light microscope that don't require high magnifications or any expertise to look at - pond water being the classic. This will be full of small protozoa (amoeba, paramecium, rotifers etc) and algae that look great and are really fun to ID at 100-400x magnifications.
Cheek cells (use the round end of a toothpick to gently scrape the inside of your cheek, fix gently over a flame, stain with crystal violet) look great, and you can see a bunch of internal structures with 400x magnification. Cross sections of leaves (hold between two thin pieces of polystyrene or cork and slice gently with a new single-edge razor blade or craft knife) will show the internal structure of the leaves - veins (xylem, phloem), cells etc. You can also use clear nail polish to paint on the surface of a leaf (try the underside), then peel off and look at under the microscope - this should let you see the pores (called stomata) in detail; they look like pairs of lips usually. The fine tissue skin (not the brown bit, it's a very thin wet translucent bit) of an onion also looks pretty nice. Moss leaves are also fun to look at. Along with mosses - take some dry moss, let it sit in water for 30 min or so and then squeeze out - you'll hopefully find tardigrades. Edited to add: With respect to parasites in faeces; this requires a bit of expertise to get good at. There is a lot of matter in faeces and parasites are generally low abundance. Unless you know a host is infected and are willing to mix faeces with water, filter and do a bunch of screening, you might not find any actual parasites, though you might see things that look, to the untrained eye, like parasites but are really just debris. You also run a significant chance of infecting yourself with something, be it parasitic, bacterial or viral.
{ "domain": "biology.stackexchange", "id": 12206, "tags": "mycology, microscopy, parasitology" }
How to obtain a count of the classes of a categorical var within a certain time interval for a time-stamped data?
Question: I have a dataframe with several categorical variables. But for simplicity let's assume there is only 1 categorical variable with 3 classes. I want to obtain the counts of these classes within certain time intervals, say 15 mins. To make it easier to understand what I am looking for, here is a toy example and the output I am looking for. _time AN 0 2019-04-09 16:00:00.050 a 1 2019-04-09 16:00:00.050 a 2 2019-04-09 16:00:00.050 b 3 2019-04-09 16:00:00.050 a 4 2019-04-09 16:00:00.050 b 5 2019-04-09 16:02:38.992 a 6 2019-04-09 16:06:41.884 c 7 2019-04-09 16:15:00.051 a 8 2019-04-09 16:15:00.051 b 9 2019-04-09 16:15:00.051 a The output that I am looking for is below: _time AN 2019-04-09 16:00:00 a 4 b 2 c 1 2019-04-09 16:15:00 a 2 b 1 So for this toy example, the timeline has two 15-min intervals. For the first one, 'a' appears 4 times, 'b' appears 2 times and 'c' appears 1 time. In the 2nd 15-min interval, 'a' appears 2 times, 'b' appears only 1 time and 'c' doesn't appear at all. I obtained this result by running this code: c.resample('15T',on = '_time').agg({'AN':'value_counts'}) However, when I am running this same code on the entire dataframe, df, I am getting the following error: ValueError: operands could not be broadcast together with shape (10267,) (4315,) I am not sure why I am getting this. Is there a different method to obtain the same result? Or is there any suggestion to fix this error? Thanks in advance. Answer: import pandas as pd data = pd.read_csv("/content/da_time.txt",header=None) # File has your example data data.columns=['num','date_time','var'] # Named columns data.date_time = pd.to_datetime(data.date_time) # To Pandas datetime data['date_time_copy'] = data.date_time # Created a copy of datetime column # floor the minute to a multiple of 15, zero out seconds and microseconds data['date_time_copy'] = data.date_time_copy.apply(lambda x:x.replace(minute=x.minute//15*15,second=0,microsecond=0)) data.groupby(by=['date_time_copy','var']).count()['num'] Output
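An alternative that reproduces the toy example without writing the data to a file or rounding timestamps by hand: group on a 15-minute `pd.Grouper` plus the categorical column (column names taken from the question; this sidesteps mixing `resample` with `value_counts`, which may be what trips the broadcast error):

```python
import pandas as pd

# The question's toy data
df = pd.DataFrame({
    "_time": pd.to_datetime([
        "2019-04-09 16:00:00.050", "2019-04-09 16:00:00.050",
        "2019-04-09 16:00:00.050", "2019-04-09 16:00:00.050",
        "2019-04-09 16:00:00.050", "2019-04-09 16:02:38.992",
        "2019-04-09 16:06:41.884", "2019-04-09 16:15:00.051",
        "2019-04-09 16:15:00.051", "2019-04-09 16:15:00.051",
    ]),
    "AN": ["a", "a", "b", "a", "b", "a", "c", "a", "b", "a"],
})

# Bin rows into 15-minute intervals, then count each class inside every bin
counts = df.groupby([pd.Grouper(key="_time", freq="15min"), "AN"]).size()
```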
{ "domain": "datascience.stackexchange", "id": 8264, "tags": "pandas, python-3.x" }
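A sketch of an alternative that sidesteps the `resample`/`agg` broadcasting issue entirely: group on a 15-minute `pd.Grouper` bucket together with the category column and count rows. The dataframe here is rebuilt from the toy data in the question, not the asker's real `df`:

```python
import pandas as pd

# Rebuild the toy data from the question (not the asker's real dataframe)
df = pd.DataFrame({
    "_time": pd.to_datetime(
        ["2019-04-09 16:00:00.050"] * 5
        + ["2019-04-09 16:02:38.992", "2019-04-09 16:06:41.884"]
        + ["2019-04-09 16:15:00.051"] * 3
    ),
    "AN": list("aabab") + ["a", "c"] + list("aba"),
})

# Group on the 15-minute bucket AND the category, then count rows;
# no value_counts inside agg, so nothing to broadcast
counts = df.groupby([pd.Grouper(key="_time", freq="15min"), "AN"]).size()
print(counts)
```

This returns a Series with a (interval start, class) MultiIndex, matching the desired output shape.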
How are the parameters of a force field obtained for the bond angle HCH?
Question: In my head, I can't see how I could vary the HCH bond angle of a molecule, without also varying other bond lengths, angles and dihedrals, to build an energy vs. HCH bond angle curve. If I vary a single HCH angle of a simple methane molecule, for example, I would automatically be varying the other HCH angles of that same molecule, which would affect the construction of the actual energy vs. HCH bond angle curve. I may be talking nonsense. Answer: You don't necessarily need a potential energy curve to fit a force field. I answered a related question about fitting a force field from quantum calculations. Let's assume you have a harmonic angle term. You could either do this in terms of the angle bending or the distance between the end atoms, but let's take the angle for our example: $E_{angle} = k (\theta - \theta_0)^2$ The first parameter you need is the optimal angle $\theta_0$, which you can get from a geometry optimization, experiment, etc. Your question stems from the second parameter, $k$ - the force constant for whatever this angle type happens to be (e.g. H-C-H in methane for your example). There are a few ways to get the force constant, but it comes from the second derivative, right? So you can get that from a Hessian or vibrational calculation. My favorite discussions of this center around Badger's rule for angle force constants, e.g. "Maximally diagonal force constants in dependent angle-bending coordinates. II. Implications for the design of empirical force fields" J. Am. Chem. Soc. 1990, 112, 12, 4710-4723 Unfortunately, most such force fields are defined in well-determined sets of internal coordinates, whereas empirical potentials use larger sets of dependent coordinates. This paper illustrates a unique "localized" representation of the angle-deformation potential in dependent coordinates which is exactly diagonal for in-plane bending at trigonal-planar centers and is nearly diagonal for angle bending at tetracoordinate centers.
Modern force field fitting methods typically use scripts that minimize the differences between an in-development parameter set and a set of Hessians and/or experimental data. As I mentioned in the other question, there's a huge pile of such papers, many with code to derive force fields from quantum chemical data. QubeKit J. Chem. Inf. Model. 2019, 59, 4, 1366-1381 - code at GitHub QuickFF J. Comput. Chem. 2015, 36, 1015-1027 - code at GitHub ForceBalance J. Chem. Theory Comput. 2013, 9, 1, 452-460 and J. Phys. Chem. Lett. 2014, 5, 11, 1885-1891 - code at GitHub ForceFit J. Comput. Chem. 2010, 31, 2307-2316 Parfit J. Chem. Inf. Model. 2017, 57, 3, 391-396 - code at GitHub
{ "domain": "chemistry.stackexchange", "id": 13358, "tags": "computational-chemistry" }
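A minimal numerical sketch of the "force constant from the second derivative" point, with made-up values for $k$ and $\theta_0$ (not a real parameterization): for the harmonic term in the answer, a finite-difference Hessian element at the minimum recovers the force constant, since $d^2E/d\theta^2 = 2k$ with this convention.

```python
import numpy as np

# Harmonic angle term from the answer: E = k * (theta - theta0)**2
def angle_energy(theta, k, theta0):
    return k * (theta - theta0) ** 2

k, theta0 = 0.05, np.deg2rad(109.5)   # illustrative numbers only

# Second derivative at the minimum via central differences;
# with this convention d2E/dtheta2 = 2k, so a Hessian element pins down k
h = 1e-4
d2E = (angle_energy(theta0 + h, k, theta0)
       - 2 * angle_energy(theta0, k, theta0)
       + angle_energy(theta0 - h, k, theta0)) / h**2
k_recovered = d2E / 2
```

In practice the Hessian comes from a quantum vibrational calculation rather than from the model itself, but the bookkeeping is the same.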
Gauge fermions versus gauge bosons
Question: Why are all the interaction particles of a gauge theory bosons? Are fermionic gauge particle fields somehow forbidden by the theory? Answer: The reason that the gauge particle must be a spin 1 gauge boson is that there aren't any renormalizable alternatives. To see this consider the Dirac Lagrangian: \begin{equation} \bar{\psi} i \gamma ^\mu \partial _\mu \psi \end{equation} This term is not gauge invariant under the transformation $ \psi \rightarrow e ^{ i T ^a \theta ^a (x) } \psi $, because the derivative spoils the desired transformation of $ \partial _\mu \psi $. To fix this we must add a contribution that transforms in the same way as the derivative, i.e., transforms as a vector. In other words we modify the derivative such that \begin{equation} D _\mu \psi \rightarrow e ^{ i T ^a \theta ^a (x) } D _\mu \psi \end{equation} The question is what to add to $ D _\mu $. We can potentially add spin $ 0, \frac{1}{2} , 1 , \frac{3}{2} , $ and $ 2 $ particles to fix this. We go case by case. There is no combination of spin zero fields that transforms as a vector without adding derivatives (adding derivatives to fix the derivative covariance would take you in circles), thus we can't have a spin zero gauge boson. Next consider adding a spin $ 1/2 $ gauge field; we could write ($ \psi _a $ is a gauge particle, not $ \psi $), \begin{equation} D _\mu = \partial _\mu + \sum _a T ^a \left( g\bar{\psi} ^a \gamma _\mu \psi ^a + g ' \bar{\psi} ^a \gamma _\mu \gamma ^5 \psi ^a \right) \end{equation} However, this would give an interaction \begin{equation} \sum _a i\left[ \bar{\psi} \gamma ^\mu\psi \right] \left[ \bar{\psi} _a \gamma _\mu \psi _a \right] \end{equation} and similarly for the $ \gamma ^5 $ term. These interactions are non-renormalizable as they involve four fermions. Non-renormalizable interactions arise from effective field theories and are suppressed by the scale at which they arise.
This would make the gauge interactions non-fundamental, instead involving a massive vector particle that has been integrated out. For the integrated-out interaction to be renormalizable it must be between two fermions and a spin $1$ field, which brings us back to the usual case. The spin $1$ field works well and exists in the SM. I'm not sure about the spin $ 3/2 $ field, as I have no experience working with such fields; however, I presume it won't work for similar reasons. I also know that spin $2$ fields must mediate gravitational interactions and thus would give a nonsensical result.
{ "domain": "physics.stackexchange", "id": 27266, "tags": "quantum-field-theory, particle-physics, gauge-theory" }
How to implement sinc interpolation
Question: I'm trying to write my own high quality audio sample rate converter. I barely know anything about signal processing though so I need help. From what I understand I need to sum together normalized sinc functions that touch every sample to find the value at any arbitrary point. I'm guessing this is hard because that would mean more than a billion calculations for a few second long audio file. So do I only use the sinc functions for the samples that are closest to my x value? Is this what is meant by "windowed sinc"? How many samples should I go in each direction away from my x value? Additionally, my DAW has something called "32 point sinc" resampling. What is the "32 point" supposed to mean in this case? Answer: "So do I only use the sinc functions for the samples that are closest to my x value?" Yes, when you are truncating. "Is this what is meant by "windowed sinc"?" Yes. The sinc goes to infinity, calculating that is impractical. "How many samples should I go in each direction away from my x value?" The sinc diminishes the further away you are. At some point additional points become insignificant (no wider than a whisker on a gnat). "Additionally, my DAW has something called "32 point sinc" resampling. What is the "32 point" supposed to mean in this case?" I'm pretty sure it means a 32 point window, centered on your current sample. Sinc interpolation is one form of interpolation. It is the "Fourier compatible" one as it gives limits on a bandwidth basis. Here is another link that may be even more useful: Multi-channel audio upsampling interpolation On re-reading this reference, Olli has already done this analysis, spectacularly.
{ "domain": "dsp.stackexchange", "id": 9128, "tags": "audio, resampling, sinc" }
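A sketch of a windowed-sinc evaluator in Python (not the DAW's actual algorithm; the function name and the Hann window choice are mine). `half_width=16` gives a 32-point window, matching the "32 point sinc" reading above:

```python
import numpy as np

def windowed_sinc_interp(x, fs, t, half_width=16):
    """Evaluate samples x (rate fs) at arbitrary times t using a
    Hann-windowed sinc with half_width taps on each side (32 total)."""
    t = np.atleast_1d(t).astype(float)
    out = np.zeros(len(t))
    for i, ti in enumerate(t):
        n0 = int(np.floor(ti * fs))
        n = np.arange(n0 - half_width + 1, n0 + half_width + 1)
        n = n[(n >= 0) & (n < len(x))]       # truncate at the signal edges
        u = ti * fs - n                      # offset in sample units
        window = 0.5 * (1 + np.cos(np.pi * u / half_width))   # Hann taper
        out[i] = np.sum(x[n] * np.sinc(u) * window)
    return out

# Reconstruct a 100 Hz sine sampled at 8 kHz, halfway between two samples
fs = 8000.0
samples = np.sin(2 * np.pi * 100 * np.arange(1000) / fs)
mid = windowed_sinc_interp(samples, fs, 500.5 / fs)[0]
```

At an exact sample time every sinc term but one is zero, so the original sample is returned unchanged; between samples the truncation error shrinks as the window widens.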
Effective strength of prescription glasses for arbitrary angle
Question: I am trying to understand the optics of prescription glasses. Prescriptions have a spherical and a cylindrical (with associated axis) component specified in dioptres, which are roughly additive. For illustrative purposes let's assume these are -2.0 diop spherical and -1.0 diop cylindrical at 90°. From my understanding, for one axis (0°), only the spherical component is effective (-2.0 diop) since the cylindrical component has constant thickness in this direction. For the other axis (90°) the glasses have the effective strength of -3.0 diop since the cylindrical component is effective. My question is: What is the effective strength for an arbitrary angle? Is this simply a sinusoidal relationship or is it more complicated? Answer: Blessedly, the diopter measurement used for corrective lenses is the same as the definition of curvature used in mathematics: $$D = \frac{1}{R} $$ Where $D$ (denoted $\kappa$ in mathematical parlance) is the curvature measured in diopters and $R$ is the radius of curvature. For reference, a flat surface has $R =\infty$ and $D=0$. Obviously a spherical lens has the same curvature at any angle. And a cylindrical lens will have a curvature that goes from $D\rightarrow [0,1/R]$ as the angle from the cylinder's axis $\theta \rightarrow [0,90^\circ ]$. Curvature can be calculated, and need not be constant over a curve. Let's do this for a green curve across a cylinder's surface like shown: We need to write down the parameterization (denoted $\gamma(\lambda)$) of the green line. It is: $$\gamma(\lambda) = \begin{pmatrix} R \cos{\pi \lambda} \\ R \sin{\pi\lambda} \\ R \tan{\theta}(1-2\lambda) \end{pmatrix} $$ Where $\lambda \rightarrow [0,1]$.
The curvature of a parameterized curve in $\mathbb{R}^3$ is: $$D = \frac{||\gamma' \times \gamma'' ||}{||\gamma' ||^3} $$ where $\gamma' = \frac{d\gamma}{d\lambda}$ and $\gamma''= \frac{d^2\gamma}{d\lambda^2}$, the first and second derivatives respectively, '$\times$' is the cross product, and $||...||$ is the norm (magnitude). I handed this off to $\mathrm{Mathematica}$ at this point, and got the following for $D$: $$D = \frac{1}{R+ \Large{4R \tan{\theta}^2 \over \pi^2 } } $$ So yes, the Diopter value does change for the cylindrical component for various angles (note it no longer depends on $\lambda$, meaning the green curves are constant in curvature given any $\theta$). A plot of this for $R=1$ shows: You mentioned a $D=-1.0$ at $90^\circ$. I plotted this for positive $R$, so simply flip it over for the negative variety.
{ "domain": "physics.stackexchange", "id": 46898, "tags": "optics, geometric-optics, lenses, medical-physics" }
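A quick numerical check of the closed-form result, as a sketch: evaluate the curvature formula directly from the analytic derivatives of the parameterization above (instead of Mathematica) and compare with $D = 1/(R + 4R\tan^2\theta/\pi^2)$.

```python
import numpy as np

def curvature(R, theta, lam):
    """Curvature of gamma(lam) on the cylinder, via ||g' x g''|| / ||g'||^3."""
    g1 = np.array([-R * np.pi * np.sin(np.pi * lam),
                    R * np.pi * np.cos(np.pi * lam),
                   -2 * R * np.tan(theta)])
    g2 = np.array([-R * np.pi ** 2 * np.cos(np.pi * lam),
                   -R * np.pi ** 2 * np.sin(np.pi * lam),
                    0.0])
    return np.linalg.norm(np.cross(g1, g2)) / np.linalg.norm(g1) ** 3

R, theta = 1.0, np.deg2rad(40.0)
closed_form = 1.0 / (R + 4 * R * np.tan(theta) ** 2 / np.pi ** 2)
print(curvature(R, theta, 0.3), closed_form)   # agree for any lam
```

At $\theta = 0$ the formula collapses to $1/R$, as it should for a circle of radius $R$.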
Substitution cipher algorithm performance boost
Question: This algorithm is meant to read a string of numbers on an input, a naive substitution cipher code (A = 1, B = 2, ..., Z = 26) and output the number of ways the code could be interpreted (e.g. 25114 could mean 'BEAN', ‘BEAAD’, ‘YAAD’, ‘YAN’, ‘YKD’ and ‘BEKD’, hence output is 6). It is working properly, but not fast enough, though. Is there a way to improve its performance? I commented it heavily, so it is easy to read. #include <iostream> #include <string> using namespace std; //Those are global, because handled by both functions int ways; //number of ways the code could be interpreted string s; //input //Takes account of all the tuples after the one on position "start" using //recursion void findNextPossible(size_t start) { if(start + 2 <= s.length()){ //Tuple considered string ss = s.substr(start, 2); //Otherwise not interpretable as a letter if(ss <= "26") { ways++; for(size_t i = 0; i <= s.length(); i++) findNextPossible(start+2+i); } } } int main() { size_t pos; bool test, someDeletedInTheMiddle; //Until zero on input while(cin >> s, s.at(0) != '0') { ways = 0; test = true; someDeletedInTheMiddle = false; pos = s.find('0'); //In this while I look for zeros while(pos != string::npos) { //Code with 30,40,... is not valid -> output 0 if(s.at(pos - 1) > '2') { test = false; break; } //If delete some in the middle, output smaller by 1 if((pos >= 2 && pos < s.length()-1) && s.length() > 2) someDeletedInTheMiddle = true; //Don't cosider the tuple with zero in it anymore s.erase(pos-1, 2); //Any other zero? pos = s.find('0', pos); } if(test) { if(!someDeletedInTheMiddle) ways++; //Process the rest for(size_t i = 0; i < s.length(); i++) { findNextPossible(i); } } cout << ways << endl; } return 0; } Answer: You should use memoization on findNextPossible(), otherwise your algorithm has exponential complexity. Also: the optimization obtained by looking for '0's will be no more useful once you use memoization... 
so the resulting code should become much, much smaller. This is a possible implementation: #include <string> #include <map> #include <iostream> #include <cassert> using namespace std; int ways(std::string s) { static map<string,int> cache; map<string,int>::const_iterator f = cache.find(s); if (f!=cache.end()) return f->second; // cout<<"ways("<<s<<")"<<endl; if (s.size()==0) return 1; if (s[0]=='0') return 0; // no way the first digit is 0 if (!isdigit(s[0])) return 0; if (s.size()==1) return 1; // only one possibility for a single digit // here we have at least 2 digits if (!isdigit(s[1])) return 0; int n = 10*(s[0]-'0')+(s[1]-'0'); if (n>26) return cache[s] = ways(s.substr(1)); // cannot merge first two digits. return cache[s] = ways(s.substr(1)) + ways(s.substr(2)); // two possible interpretations } const char *tests[]={"1213","12","214","205",0}; int main() { for (int i = 0; tests[i]; ++i) { cout<<tests[i]<<": "<<ways(tests[i])<<endl; } }
{ "domain": "codereview.stackexchange", "id": 9041, "tags": "c++, optimization, performance, algorithm, dynamic-programming" }
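The same memoization idea can also be flipped into a bottom-up DP that runs in linear time with no substring copies — a sketch in Python rather than C++:

```python
def count_decodings(s):
    """Number of ways to read digit string s as letters 1..26 (bottom-up DP)."""
    n = len(s)
    dp = [0] * (n + 1)
    dp[n] = 1                      # empty suffix: exactly one reading
    for i in range(n - 1, -1, -1):
        if s[i] == '0':
            continue               # no letter starts with 0; dp[i] stays 0
        dp[i] = dp[i + 1]          # take s[i] as a one-digit letter
        if i + 2 <= n and int(s[i:i + 2]) <= 26:
            dp[i] += dp[i + 2]     # take s[i:i+2] as a two-digit letter
    return dp[0]

print(count_decodings("25114"))   # 6, matching the question's example
```

`dp[i]` counts the readings of the suffix starting at `i`, so each position is visited once instead of once per path.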
ROS Indigo/Jade on ARM 64 bit?
Question: Hi all, Tomorrow Nvidia Jetson TX1 will be available in USA, it will be a big step for robotics... Have anyone tried to compile ROS on 64 bit ARM systems? I do not know if all the dependencies can be satisfied... I'm really curious to start testing it. Walter Originally posted by Myzhar on ROS Answers with karma: 541 on 2015-11-15 Post score: 1 Original comments Comment by lanyusea on 2015-11-15: I'm running ROS Indigo on TK1 Comment by Myzhar on 2015-11-16: Jetson TK1 has a 32 bit ARM and I use it without problems since about one year and half. My doubt is about the Jetson TX1, based on the Tegra X1 that is a 64bit SoC. Answer: Most ROS packages should work on ARM64. We already support 32 and 64bit x86 and most things work on armhf. So filling in the matrix is a minimal step forward. You're right that the system dependencies are usually the limiting factor. I would expect at least up through desktop to work. With more arm64 boards coming out soon we expect to have more experience and demand for those system dependencies so hopefully they will all get filled in. Originally posted by tfoote with karma: 58457 on 2015-12-14 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 22990, "tags": "ros, 64bit" }
Is it safe to change system date/time during execution
Question: Reading about how ROS Time is implemented, it looks like it uses system wall-time. http://www.ros.org/wiki/Clock Would it be safe to change system time when ROS nodes are running? We plan to synchronize the time between robots using NTP on startup using Wi-Fi. But if a robot was powered on out of Wi-Fi range, started all its nodes, and then (once in Wi-Fi range) changed its system time using NTP, we are afraid that those nodes would start acting weird. This question is both for the C++ and Python implementations of Time. Originally posted by Victor Lopez on ROS Answers with karma: 651 on 2011-07-26 Post score: 3 Answer: It is not safe, as you already suspected. Timestamps might jump and this might lead to inconsistencies. If you sync with NTP before, there shouldn't be much difference, though. It is recommended to use chrony for your application, although there still is the problem that the times might get out of sync. In a comparably short time that should not matter unless you are synching high-frequency sensor data between different machines. Originally posted by dornhege with karma: 31395 on 2011-07-26 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by tfoote on 2011-07-30: This is why chrony is recommended. It doesn't jump the clock, it skews the rate to synchronize instead of jumping time. Comment by dornhege on 2011-07-27: I think that wouldn't solve the problem. A clock server still has to run on one of the machines and once you have a disconnect the problem is again the sync. You can see one system's clock as a clock server. Comment by Victor Lopez on 2011-07-26: I was thinking about using a Clock Server that publishes time based on monotonic time, therefore a change in system time wouldn't cause timestamp jumps. I'll have to test which harmful effects this has.
{ "domain": "robotics.stackexchange", "id": 6269, "tags": "ros, clock, rostime" }
What's the relationship between "semantic type soundness" and "functional correctness"?
Question: In the Milner Award lecture "The Type Soundness Theorem That You Really Want to Prove (and now you can)" and related Sigplan blog post (with collaborators), Derek Dreyer argues that semantic soundness is an important thing worth proving. In the questions section, Adam Chlipala asked how to trade off complexity between proving soundness of semantic types and functional correctness, and Dreyer unfortunately said "let's take this offline," so I don't know what the answer is. Is the answer "semantic type soundness is something to prove about a programming language, but functional correctness something one proves of a program"? I'm still a bit confused, though, because then someone asked a follow-up question about how semantic type soundness could fail to hold. Dreyer's answer was that if someone were to add a covfefe rule that flipped an arbitrary int to 0, it could trigger someone's assertion that under certain conditions a given int is never 0. So this makes it sound like soundness proofs for semantic types do involve the details of particular programs. In addition to an answer to Adam Chlipala's question, it would be helpful to have some simple reference for what "semantic type soundness" means - the blog post doesn't really define it, and the closest paper I could find was the RustBelt paper which only discusses "semantic type soundness" in the context of a Rust MIR-like language and Iris, but I couldn't find a more general definition. Answer: I'm not an expert, but can give at least a partial answer since I've been reading about this lately. The more recent preprint A Logical Approach to Type Soundness gives a good overview of what this is all about, outside the specific context of Rust. The key distinction is between two relations: syntactic type checking ($\Gamma \vdash e : \tau$), which is the syntax-directed type checking we're usually familiar with, and semantic type checking ($\Gamma \vDash e : \tau$) . 
Roughly speaking, an expression $e$ semantically type checks as type $\tau$ if at run-time it eventually evaluates to a value with type $\tau$. Of course in general that reduces to the halting problem and is just as hard as any other kind of functional correctness proof. I could be misinterpreting, but I think what Adam is asking is "The proof system you've shown here isn't powerful enough to prove all functional correctness properties, so how do you design a proof system that's 'good enough' to prove most of the stuff you need for semantic type checking, but not overly powerful/general?". I suspect the answer is along the lines of "it's a tradeoff, and there are many valid points you could select on the spectrum from very simple to very complex proof system", and so getting into the details of that question requires taking it offline. Re: proving about a program or a language: Yes, semantic type soundness is something you would prove about a language as a whole. Semantic type soundness means that whenever a program semantically type checks, then it is always "safe" to run that program (for some definition of safe). However, whether an individual program semantically type checks is something you prove about a single program, and can sometimes require proving functional correctness of that program. The first few sections of the "Logical Approach" paper above give a nice description of this, so I recommend checking it out if you'd like more details.
{ "domain": "cs.stackexchange", "id": 21893, "tags": "type-theory" }
How to obtain position data from acceleration without forward euler?
Question: I am doing an investigation into the Wilberforce pendulum, and in order to find the position and rotation at any time I have attached my phone to the pendulum to use Phyphox, an app that records the acceleration and angular velocity. I then put the data into Excel and use forward Euler to find the velocity from the acceleration, and the position from the velocity. However, this doesn't really work: the velocity seems to drift downwards a bit, and then the position seems to drift down much, much more. I also took a video of the pendulum, and I compared this to the results to show that it isn't experimental error. Do any of you know how to get the position data from acceleration, without this error? Answer: Since you already have the data you just need a numeric integrator which inherently smooths the data. But if you use forward Euler, then you are biasing the smoothing (averaging) toward previous values, causing a bias in the results. I have been down the road you are going through, and here are my suggestions: Use the gyroscopes to measure rotational velocity. Use trapezoidal integration to calculate angles. In general, use an integration technique that considers forward data equally with backward data in order to avoid bias. Numerically adjust the data by applying a DC offset, or a slope, to hit a target value at the end of the test. This corrects for the numeric drift you see. If you want more accurate integration of the data, I suggest using a cubic spline interpolation to do so.
{ "domain": "physics.stackexchange", "id": 82471, "tags": "classical-mechanics, acceleration, computational-physics, data" }
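A sketch of two of the suggestions above (trapezoidal integration plus a linear drift correction toward a known end value); the function names and the toy signal are mine:

```python
import numpy as np

def integrate_trapezoid(t, y):
    """Cumulative trapezoidal integral of y(t), starting at 0."""
    return np.concatenate(([0.0],
        np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))))

def remove_drift(t, x, x_end=0.0):
    """Subtract a linear ramp so the series ends at a known target value."""
    return x - (x[-1] - x_end) * (t - t[0]) / (t[-1] - t[0])

t = np.linspace(0.0, 10.0, 2001)
accel = np.cos(t)                           # toy 'measured' acceleration
vel = integrate_trapezoid(t, accel)         # should track sin(t)
vel = remove_drift(t, vel, np.sin(t[-1]))   # pin the final value
pos = remove_drift(t, integrate_trapezoid(t, vel), 1.0 - np.cos(t[-1]))
```

With real data the end targets would come from a moment where the pendulum is known to be at rest (or from the video), not from an analytic formula.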
Differences between Church and Scott encoding
Question: I'm kind of new to lambda calculus and I found this Wikipedia article https://en.wikipedia.org/wiki/Mogensen%E2%80%93Scott_encoding The section Comparison to the Church encoding presents a short comparison between Church and Scott encoding Church λx1...xAi.λc1...cN.ci (x1c1...cN)...(xAic1...cN) Scott λx1...xAi.λc1...cN.ci x1...xAi Do you agree with the Church generic encoding description? The page specified that citation is needed for that description. Answer: It's correct as long as all the arguments $x_1 ... x_{A_i}$ are recursive occurrences of the same structure. Otherwise there's no way to say what the Church encoding should be without more information. That is the major difference between the Church and Scott encodings. If we think in terms of types, then for a fixed point type: $T \cong F T$ the Church encoding of $T$ has a type like: $(F r \to r) \to r$ while the Scott encoding has a type like: $(F T \to r) \to r$ So, Scott encodings generally require some other source of recursion/iteration, both at the value level, because it only gives you access to one level of unfolding at a time, and at the type level (if you have one), because the fixed point type $T$ is still present in the type of the Scott encoding. However, this means that Scott encodings are much better than Church encodings for some things. For instance, it's obvious how to write the (clamped) predecessor for Scott encoded naturals $\mathbb{N} = \forall r. r \to (\mathbb{N} \to r) \to r$: pred n = n 0 (\pn -> pn) However, for the Church encoded naturals: $\mathbb{N} = \forall r. r \to (r \to r) \to r$ writing the predecessor function is a challenging exercise (and the resulting function is expensive).
{ "domain": "cs.stackexchange", "id": 12098, "tags": "lambda-calculus" }
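The constant-time predecessor for Scott naturals, transcribed into Python lambdas as an illustration (`to_int` supplies the external recursion the answer says Scott encodings require):

```python
# Scott-encoded naturals: a number is a function of (zero_case, succ_case),
# and succ_case receives the stored predecessor directly.
zero = lambda z, s: z

def succ(n):
    return lambda z, s: s(n)

def pred(n):
    # 'pred n = n 0 (\pn -> pn)' from the answer: just unwrap one layer
    return n(zero, lambda pn: pn)

def to_int(n):
    # The Scott encoding exposes one constructor at a time, so iteration
    # must come from outside -- here, plain Python recursion
    return n(0, lambda pn: 1 + to_int(pn))

three = succ(succ(succ(zero)))
print(to_int(pred(three)))   # 2
```

Contrast with the Church encoding, where the successor case would receive the already-folded result of the predecessor rather than the predecessor itself, which is exactly why Church `pred` is awkward.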
Is the zero acceleration path also the shortest path between two points?
Question: In flat, free, Euclidean space, the shortest path and the zero acceleration path are the same path, which is a straight line. However, in general relativity, is the zero acceleration path also the shortest path between two points? I am assuming that free fall is zero acceleration. Answer: In general relativity, you're dealing with a 4D spacetime, so the "points" in spacetime are events, and the measures that you can make coordinate-independent statements about are intervals instead of distances. The rule that applies is that the world line with the longest possible proper time between two events is a world line that involves zero proper acceleration. Such a world line is called a "time-like geodesic". There's a similar concept for space-like curves. A "space-like geodesic" is a curve with a stationary proper length between two events with a space-like separation. A space-like geodesic is locally straight. For more information, see the Wikipedia article section "Geodesics as curves of stationary interval" And yes, free fall means zero proper acceleration.
{ "domain": "physics.stackexchange", "id": 81375, "tags": "general-relativity, variational-principle, equivalence-principle, geodesics" }
What improvements are needed in this class to fill DropDown using PetaPoco?
Question: Below is the class which I am using to fill multiple DropDowns on ASP.NET Page Load event: public sealed class getBlocks { public getBlocks(DropDownList dropDownName, string districtId) { returnBlocks(dropDownName, districtId); } public void returnBlocks(DropDownList DropDownName, string DistrictId) { var DB = new PetaPoco.Database("cnWebDems"); string Query = "SELECT distinct blockname, blockid FROM hab_master WHERE distid = '" + DistrictId + "' ORDER BY blockname"; var result = DB.Fetch<hab_master>(Query); DropDownName.DataSource = result; DropDownName.DataTextField = "blockname"; DropDownName.DataValueField = "blockid"; DropDownName.DataBind(); DropDownName.Items.Insert(0, "-- Select --"); DB.Dispose(); } Suggest further improvements. Answer: Firstly, data access should be separate from your UI logic. Certainly create a new layer, where you'll manage CRUD operations. Secondly, I'm not really fond of your naming convention of classes. Even though it may be a subjective matter, many people tend to use capitalized names for classes and certainly not using words like "getBlocks". This isn't a name for a class, that's a name for a getter method. Thirdly, you're dealing with your parameters the wrong way, at least in my opinion. I believe it would be more appropriate to have private fields of type DropDownList and String, populate them in the constructor and then just use these private variables (and/or properties, depends on your needs) instead of specifying parameters for the method itself. Seems more OOP to me that way. Otherwise I don't really see a reason why not just create a helper class with a static method you'll call whenever needed, without the need to instantiate the class itself. Another thing to consider - usually it's better to use an using statement instead of manually calling Dispose method. Edit based on the comment: 1.) 
example using private fields public sealed class Blocks { private DropDownList _ddList; private int _districtId; public Blocks(DropDownList dropDownList, int districtId) { _ddList = dropDownList; _districtId = districtId; } public void PopulateDropDownList() { var results = MyDbAccessClass.GetBlocks(_districtId); _ddList.DataSource = results; _ddList.DataTextField = "blockname"; _ddList.DataValueField = "blockid"; _ddList.DataBind(); _ddList.Items.Insert(0, "-- Select --"); } } 2.) example using static method public sealed class MyHelperMethods { public static void PopulateWithBlocks(DropDownList ddList, int districtId) { var results = MyDbAccessClass.GetBlocks(districtId); ddList.DataSource = results; ddList.DataTextField = "blockname"; ddList.DataValueField = "blockid"; ddList.DataBind(); ddList.Items.Insert(0, "-- Select --"); } } Hope you get the idea...
{ "domain": "codereview.stackexchange", "id": 2014, "tags": "c#, .net, asp.net" }
Size of atomic shells
Question: Is the distance of some $n^{th}$ shell ($n$ $\neq$ outermost shell) from the nucleus of an atom different for different elements? If so, then how much is the difference or how could we calculate it? Answer: Yes, the size* of the $n$-th shell will differ between elements. The reason is that the nuclear charge will shrink each shell as it goes up. Additional electrons cannot fully compensate this effect because they do not perfectly shield the core. The difference can be measured (see below) or calculated using quantum chemistry with flexible basis sets. Results from the latter can be validated against X-ray spectroscopy. However, even the Bohr model will yield different radii for the first shell when increasing the nuclear charge. *The definition of the size is not trivial. One can use the van-der-Waals radius, ionic radius or determine the isosphere that contains e.g. 90% of the electron probability density as long as one is consistent about it.
{ "domain": "chemistry.stackexchange", "id": 12733, "tags": "spectroscopy, atomic-radius, atomic-structure, nuclear" }
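Even the simple Bohr-model scaling the answer mentions, $r_n = n^2 a_0 / Z$, shows the same shell shrinking as the nuclear charge grows — a sketch for hydrogen-like ions only (for a neutral atom one would substitute an effective charge $Z_\mathrm{eff}$, e.g. from Slater's rules):

```python
A0 = 0.529177  # Bohr radius in angstroms

def bohr_radius(n, Z):
    """Radius of the n-th Bohr orbit for a hydrogen-like ion of charge Z."""
    return n ** 2 * A0 / Z

# The n = 1 shell for H, He+ and Li2+: same shell, different size
for Z, label in [(1, "H"), (2, "He+"), (3, "Li2+")]:
    print(f"{label}: r1 = {bohr_radius(1, Z):.3f} A")
```

The 1/Z shrinkage is the Bohr-model caricature of the effect the answer describes; real multi-electron shells shrink less because of imperfect screening.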
Redefining the kilogram using Planck's constant instead of the density of water among other examples
Question: The kilogram is in the process of being redefined in terms of Planck's constant so as to eliminate its dependence on a physical artefact. Since the length and temperature units are already precisely defined, why not just calculate the density of some substance, say water, at a particular temperature and use that as a standard for mass? Sounds simpler to me. Answer: Since the length and temperature units are already precisely defined, why not just calculate the density of some substance, say water, at a particular temperature and use that as a standard for mass? Water is a lousy choice. The initial proposal for the French metric system used the mass of a cubic decimeter of water. Measurement issues resulted in this being changed to a prototype-based system in just a few years. Issues with those initial prototypes resulted in the current prototype masses. Issues with those newer prototypes are part of what motivated the physics-based redefinition of the International System (SI). Water is a bad choice, but what about some other substance? The problem with this is that it flies in the face of one of the key goals of the proposed redefinition of the SI, which is to define the base units of time, length, mass, current, and temperature solely in terms of fundamental physical constants. The mole is also being redefined, from the number of atoms in 12 grams of 12C to a specified number. Other key goals are that the changes should represent improvements and that the redefined base units must be consistent with the past. Those latter two have always been goals. Using a fundamental physics-based approach is new, or almost new. The definitions of the second and meter are fundamentally-based. The improvements that these redefinitions enabled were a strong motivator to continue this process to the remaining three physical units, and to the mole as well. 
That said, using a carefully measured quantity of some substance is close to one of the two approaches being used to establish the exact value of Planck's constant. Those two approaches are the Kibble balance (formerly Watt balance), which carefully compares electrical power to mechanical power, and the Avogadro technique, which counts the number of atoms in a carefully measured sphere of nearly pure 28Si. The deadline for measurements by multiple groups using these two techniques to estimate Planck's constant passed on July 1. The requirement by the International Bureau of Weights and Measures (BIPM; the acronym is French) was to have at least three experiments with an uncertainty of 50 parts per billion (ppb) or better and at least one with an uncertainty of 20 ppb or better. Previous failures to meet that goal are the key reason the SI redefinition is not yet in place. That goal has now been met. There are multiple experiments with much better than the requisite 50 ppb uncertainty and three with better than 20 ppb.
{ "domain": "physics.stackexchange", "id": 41878, "tags": "mass, si-units, metrology" }
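The Avogadro-technique bookkeeping reduces to counting atoms from the silicon lattice and multiplying by the atomic mass — a sketch with rounded, illustrative numbers (real determinations correct for surface layers, isotopic impurities, and lattice vacancies):

```python
import math

a = 5.431e-10        # Si lattice parameter in m (8 atoms per cubic unit cell)
d = 0.0936           # sphere diameter in m, roughly that of the ~1 kg spheres
N_A = 6.02214076e23  # the fixed Avogadro number in the revised SI
M_28SI = 27.9769265e-3   # molar mass of 28Si in kg/mol

volume = math.pi * d ** 3 / 6        # sphere volume
atoms = 8 * volume / a ** 3          # count atoms via the lattice
mass = atoms * M_28SI / N_A          # mass from the count
print(f"{atoms:.4e} atoms -> {mass:.4f} kg")
```

With these rounded inputs the result lands near 1 kg, which is the point: once Avogadro's number is fixed, a counted quantity of 28Si realizes the kilogram.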
ROS wiki password reset broken?
Question: When I use the Lost Password page and enter my username and email, I get an error "{u'<redacted@example.com>': (450, '4.1.8 <moin@ros.osuosl.org>: Sender address rejected: Domain not found')}" (redaction my own). How do I reset my password? Originally posted by DanRose on ROS Answers with karma: 274 on 2020-07-29 Post score: 0 Original comments Comment by kscottz on 2020-07-30: Strange, we've been having some issues lately with the wiki. Can you PM your e-mail address so we can take a look? Answer: The from address of the server was misconfigured. It should be able to send the emails now. Originally posted by tfoote with karma: 58457 on 2020-08-01 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 35347, "tags": "ros, ros2, wiki" }
Is the probabilistic nature of QM simply just randomness (does it exclude causality)?
Question: I have read this question: What is the reason that Quantum Mechanics is random? where Puk says in a comment: I see. I would call both "random", with the degree of "randomness" defined by ψ. But yes, just a matter of semantics. On a measurement level, is quantum mechanics a deterministic theory or a probability theory? where Arnold Neumaier says: On the other hand, it is clear from the laws of physics such a computing device can never be built, and the required knowledge can never be collected since an infinite amount of storage is already needed for representing a single transcendental number numerically. Therefore, in fact, one has to be content with approximations, resulting in a probabilistic prediction only. Thus, in practice, physics is probabilistic. So based on these, the probabilistic nature of QM is basically just the same as randomness, we can never build a computer that could incorporate all the needed information, and we need to use approximations. How do we know that certain quantum effects are random? where CR Drost says: In a very superficial sense of the term, one which has to do with excluding the possibility of determinism and therefore asking, "is there a way to understand the system as having an initial state which forced it to come to this conclusion," the answer is a qualified "no". The qualification here is the word "global": using some thought experiments (my favorite is a game called Betrayal) one can prove that there are quantum effects which cannot be understood in terms of classical local information. In a deeper sense randomness is our way of reasoning about information that we do not know. But this one says more. This says that randomness means there is no way for the system to have an initial state which (because of causality) forces the system to evolve to a certain state. And that basically the world is quantum mechanical and there are quantum effects which cannot be understood in a classical sense. 
This would mean that QM is not just simply random, but there are quantum effects that we do not understand and cannot even explain classically. It is not simply random; rather, the underlying nature of the universe is probabilistic, and that is what we can model with mathematics. Is the universe fundamentally deterministic? But my question is about randomness meaning unpredictable, that is in some ways excluding causality. I do believe that QM probability does include causality, that is, it is predictable (to some extent). Question: Is the probabilistic nature of QM simply just randomness (does it exclude causality)? Answer: I would regard probability and randomness as essentially synonymous, but I think that does not answer the real question you are getting at, which has to do with determinacy and indeterminacy. In classical physics, and in classical probability theory, it is assumed that random results arise from unknown quantities or "hidden variables". The universe could be determinate, but it would still be impossible to determine all unknown quantities, and consequently we would only be able to give a probabilistic theory of measurement results. This seems to be Neumaier's point, but it does not explain quantum probability. There are numerous proofs, starting with von Neumann (whose original proof has been tightened up by numerous authors, such as Jauch and Piron, and Stan Gudder), Kochen and Specker, and Bell, who gave two proofs (one of which he did not accept himself although it is perfectly valid and usually accepted as a variant on Kochen-Specker) which demonstrate that quantum predictions cannot be explained by a theory determined by classical hidden variables. Those proofs are rejected by some physicists for much the same reasons that Dingle rejected relativity, i.e., "I can't understand this, so it must be wrong." There is not much point in paying heed to that kind of argument (admittedly QM is far more difficult to understand than relativity). 
The conclusion must be that randomness (and hence probability) in QM is a result of a fundamental indeterminacy in nature.
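The Bell-type results mentioned above can be illustrated numerically. For the singlet state, quantum mechanics predicts the correlation E(x, y) = -cos(x - y) between spin measurements along directions x and y; plugging this into the CHSH combination exceeds the bound of 2 that any local hidden-variable theory must obey. This is a standard textbook computation, sketched here under those assumptions:

```python
import math

# The CHSH combination S = E(a,b) + E(a,b') + E(a',b) - E(a',b').
# Any local hidden-variable model obeys |S| <= 2; quantum mechanics,
# with singlet-state correlations E(x, y) = -cos(x - y), reaches 2*sqrt(2).
def E(x, y):
    return -math.cos(x - y)

a, a2 = 0.0, math.pi / 2           # Alice's two measurement angles
b, b2 = math.pi / 4, -math.pi / 4  # Bob's two measurement angles

S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828..., violating the classical bound of 2
```

The violation is the formal content of the claim that no classical local hidden-variable assignment can reproduce the quantum predictions.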
{ "domain": "physics.stackexchange", "id": 68870, "tags": "quantum-mechanics, quantum-information, probability, causality" }
Why does FQHE need a lower energy state?
Question: There are a lot of papers explaining why Laughlin's wavefunction is energetically favorable, but they seldom explain why a lower energy state could explain the plateau at $\nu=1/3$. I have seen claims in several places like: a lower energy state at $\nu=1/3$ will pin the electron density at $\nu=1/3$. But why is that? And what does it actually mean? When we move $\nu$ away from $1/3$, what happens? Do electrons adjust their distance, or are new particles added? And is this a phase transition? I hope someone familiar with this field could give me some help, thanks! Answer: The Laughlin state alone doesn't explain the plateau. There is a lot more to the story. Firstly, at filling factor 1/3 the many-body ground state of the interacting electron gas is "approximately" the Laughlin wavefunction. By this I mean that the overlap between the Laughlin state and the numerically found ground state (for any realistic interaction like the Coulomb interaction) is very large, i.e. their inner product is quite close to 1. Using the plasma analogy one can show that this state corresponds to uniform electron density. (See Girvin's Les Houches notes for details on the plasma analogy.) Secondly, the transport phenomena are decided by charged excitations in the system. For the filling factors 1/3, 1/5, 1/7, etc. the charged excitations are quasiholes and quasielectrons. While the former has a dip in the density profile at some point Z (say) in the 2D plane, the latter has the opposite feature in its density profile (as opposed to the earlier uniform case). The plasma analogy can again be used to show that these quasiparticles will have fractional e/3 charge in our case. (At least for now let us avoid justifying why they are excited states.) Now let's say we are sitting exactly at 1/3 filling factor and then we add an electron to the system. It will break into 3 quasielectrons which can be separated at no extra energy cost (the idea of fractionalization). Similarly, if some more electrons are added they will produce more quasiparticles. 
Now start thinking in terms of the 'semiclassical percolation picture' that is applied to electrons to explain Integer QHE (Again see Girvin's notes). Instead of electrons we give the same arguments using quasiparticles to explain the plateaus around 1/3 filling factor. The conductivities stop changing when the added quasiparticles are either going into the valleys of the disorder potential or are ending up on shorelines at the two well-separated edges. Let me clarify things a bit more. Think of starting with the 1/3 filling factor ground state. Now let us adiabatically add one flux quantum through a thin solenoid at the origin of space (See Laughlin's Nobel lecture). He shows that in this process e/3 charge flows towards the origin and gets collected there. Thus we have ended up with an exact ground eigenstate of the original Hamiltonian + e/3 charge. So quasiholes are 'charged excitations', not the excited state when sitting at 1/3 filling. In fact the low energy gapped excitations at 1/3 filling are 'neutral collective excitations' (Again see Girvin's notes) and the existence of this gap is necessary for adiabaticity to work fine in the above thought experiment. (In the words of Laughlin the usage of the word quasiparticles here was "unfortunate".) Now if I just move the filling factor a bit in an experiment the new ground state is made up of new "quasiparticles".
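For concreteness, the Laughlin wavefunction discussed above can be written down and evaluated directly. The following is a minimal numerical sketch (units with the magnetic length set to 1, a handful of electrons at arbitrarily chosen positions), checking the fermionic antisymmetry that the odd exponent m = 3 guarantees:

```python
import cmath
from itertools import combinations

# Laughlin's trial wavefunction for filling factor 1/m (here m = 3):
#   psi(z_1..z_N) = prod_{i<j} (z_i - z_j)^m * exp(-sum_k |z_k|^2 / 4)
# with complex coordinates z = x + iy and magnetic length set to 1.
def laughlin(zs, m=3):
    jastrow = 1.0 + 0.0j
    for zi, zj in combinations(zs, 2):
        jastrow *= (zi - zj) ** m
    gaussian = cmath.exp(-sum(abs(z) ** 2 for z in zs) / 4)
    return jastrow * gaussian

zs = [0.5 + 0.2j, -0.3 + 1.0j, 1.1 - 0.4j]  # arbitrary electron positions
psi = laughlin(zs)
swapped = laughlin([zs[1], zs[0], zs[2]])   # exchange two electrons
# Odd m makes the state antisymmetric under exchange, as required for electrons:
print(abs(psi + swapped))  # ~0
```

The Jastrow factor ∏(z_i - z_j)^m also makes the amplitude vanish rapidly as any two electrons approach, which is the qualitative reason the state is energetically favorable for repulsive interactions.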
{ "domain": "physics.stackexchange", "id": 6852, "tags": "condensed-matter, quantum-hall-effect" }
pre-requisite C programming knowledge for ROS?
Question: Hi, I am new to ROS. I want to know if advanced programming concepts in C programming (e.g. inheritance, etc.) are essential to understand ROS? Originally posted by hina on ROS Answers with karma: 1 on 2012-06-21 Post score: 0 Answer: I don't want to sound too critical, but ROS uses C++, not C. The ROS API is strictly object-oriented in nature, so it's essential that you at least understand how to instantiate/use objects. Here's my list of programming concepts that you should understand if you were to use C++ in ROS: Object-Oriented Design: If you're going to code using ROSCpp, it's vital that you at least understand how to instantiate, handle, destroy, and pass objects. This applies to using Python with ROS as well. Interface Programming: Any time that you use ROS's pluginlib features, you will be required to utilize interfaces to create your plugins. In terms of C++, this means inheriting from a base class and implementing virtual functions. So if you want to use pluginlib, you will need to understand inheritance. Inheritance also exists in Python, so it's a good concept to know either way. Real-time Programming: Based on your application, you might need to understand the concepts of real-time programming to achieve your goals. Boost: Boost is an incredibly powerful and helpful library, and ROS already includes it as a dependency, so you should learn how to use it if you're going to use ROSCpp. Originally posted by DimitriProsser with karma: 11163 on 2012-06-22 This answer was ACCEPTED on the original site Post score: 4
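Since the answer notes that inheritance also exists in Python, here is a minimal sketch of the interface-programming idea in Python. The class names are invented for illustration; this is not the actual pluginlib API:

```python
from abc import ABC, abstractmethod

# Interface programming in miniature: a base class declares the contract,
# plugins implement it. Class names here are made up for illustration;
# this is not the real pluginlib API.
class PathPlanner(ABC):
    @abstractmethod
    def plan(self, start, goal):
        """Return a list of waypoints from start to goal."""

class StraightLinePlanner(PathPlanner):
    def plan(self, start, goal):
        return [start, goal]  # trivially connect the two points

def run(planner: PathPlanner, start, goal):
    # Callers depend only on the interface, not the concrete plugin.
    return planner.plan(start, goal)

print(run(StraightLinePlanner(), (0, 0), (3, 4)))  # [(0, 0), (3, 4)]
```

In C++ the same shape appears as a base class with pure virtual functions and derived plugin classes overriding them, which is exactly what pluginlib expects.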
{ "domain": "robotics.stackexchange", "id": 9890, "tags": "ros" }
De Casteljau's Algorithm Tool for Khan Academy Contest
Question: So I wrote this code for a contest going on over at Khan Academy known as "Pixar in a Program". The goal of the contest is to create an entry that uses one of the skills shown in the new "Pixar in a Box" course made by them in their partnership with Disney Pixar. In my entry I used De Casteljau's algorithm to make a tool that allowed for easy editing to find the touching point of a parabola. My entry can be found here: https://www.khanacademy.org/computer-programming/de-casteljaus-algorithm-made-easy-wip/5879067530887168 It is written in their live editor over at Khan Academy using the Processing port to JavaScript known as Processing.JS. Here's the code: var a = [-150, 50], b = [0, -50], c = [150, 50]; var ar = [], br = [], cr = []; // The "r" in these variable names means rounded, this is for grid snapping. var ad = [], bd = [], cd = [], qd = [], rd = [], pd = []; // The "d" in these variable names means data, this is for data output to match the Cartesian coordinate plane snaps. 
var t = 0.25; var q = [(1 - t) * a[0] + t * b[0], (1 - t) * a[1] + t * b[1]]; var r = [(1 - t) * b[0] + t * c[0], (1 - t) * b[1] + t * c[1]]; var p = [(1 - t) * q[0] + t * r[0], (1 - t) * q[1] + t * r[1]]; var qMenu = false; var qMenuHover = false; var rMenu = false; var rMenuHover = false; var pMenu = false; var pMenuHover = false; var settingsMenu = false; var settingsMenuHover = false; var mfp = false; // Stands for "Make Full Parabola" var totalPoints = 3; var selected = false; var x = function(i) { if(i === 0) { return ar[0]; } else if(i === 1) { return br[0]; } else if(i === 2) { return cr[0]; } }; // Used to get the x value of a certain point var y = function(i) { if(i === 0) { return ar[1]; } else if(i === 1) { return br[1]; } else if(i === 2) { return cr[1]; } }; // Used to get the y value of a certain point var setX = function(i, value) { if(i === 0) { a[0] = value; } else if(i === 1) { b[0] = value; } else if(i === 2) { c[0] = value; } }; // Used to set the x value of a certain point var setY = function(i, value) { if(i === 0) { a[1] = value; } else if(i === 1) { b[1] = value; } else if(i === 2) { c[1] = value; } }; // Used to get the y value of a certain point var drawControlPoint = function(cp) { var cpX; var cpY; if(cp === 0) { cpX = ar[0]; cpY = ar[1]; } else if(cp === 1) { cpX = br[0]; cpY = br[1]; } else if(cp === 2) { cpX = cr[0]; cpY = cr[1]; } if(cp === selected) { if(mouseIsPressed) { fill(66, 97, 222); } else { fill(128, 128, 128); } } else { fill(255, 255, 255); } strokeWeight(1); stroke(163, 163, 163); ellipse(cpX, -cpY, 10, 10); noStroke(); }; // Used to draw each control point translate(200, 200); // Translate the program to make the origin at the center of the screen mouseDragged = function() { // See the comments in the mouseMoved function var MouseX = mouseX - 200; var MouseY = -(mouseY - 200); var PMouseX = pmouseX - 200; // If a point is selected, set its x and y to the MouseX and MouseY if (selected !== null) { setX(selected, 
MouseX); setY(selected, MouseY); } // Variable t controller if(mouseX > 200 && mouseX < 400 && mouseY > 0 && mouseY < 40) { t += (MouseX - PMouseX) / 160; } }; mouseMoved = function() { // I have to adjust the mouse coords due to the translation of data var MouseX = mouseX - 200; var MouseY = -(mouseY - 200); // Make MouseY follow the Cartesian coordinate system selected = null; // Find the distance of the mouse to the control point for(var i = 0; i < totalPoints; i++) { if(dist(MouseX, MouseY, x(i), y(i)) < 5) { selected = i; } } if(mouseY > 380) { if(mouseX > 50 && mouseX < 100) { qMenuHover = true; } else if(mouseX > 150 && mouseX < 200) { rMenuHover = true; } else if(mouseX > 250 && mouseX < 300) { pMenuHover = true; } else if(mouseX > 360 && mouseX < 400) { settingsMenuHover = true; } else { qMenuHover = false; rMenuHover = false; pMenuHover = false; settingsMenuHover = false; } } else { qMenuHover = false; rMenuHover = false; pMenuHover = false; settingsMenuHover = false; } }; mouseClicked = function() { // variable = !variable allows me to toggle variables if(mouseY > 380) { if(mouseX > 50 && mouseX < 100) { qMenu = !qMenu; } else if(mouseX > 150 && mouseX < 200) { rMenu = !rMenu; } else if(mouseX > 250 && mouseX < 300) { pMenu = !pMenu; } else if(mouseX > 360 && mouseX < 400) { settingsMenu = !settingsMenu; } } if(settingsMenu && mouseX > 370 && mouseX < 390 && mouseY > 355 && mouseY < 375) { mfp = !mfp; // Toggle Make Full Parabola } }; draw = function() { /** --- CARTESIAN COORDINATE PLANE --- **/ background(120, 228, 255); stroke(0, 0, 0); strokeWeight(0.5); for (var x = -200; x < 200; x += 20) { line(x, -200, x, 200); } for (var y = -200; y < 200; y += 20) { line(-200, y, 200, y); } // Draw a thicker line along the origin lines of the x and y axes strokeWeight(2); line(0, -200, 0, 200); line(-200, 0, 200, 0); /* --- GRID SNAPPING --- */ // This is where we use those "r" variables. 
ar[0] = round(a[0] / 10) * 10; ar[1] = round(a[1] / 10) * 10; br[0] = round(b[0] / 10) * 10; br[1] = round(b[1] / 10) * 10; cr[0] = round(c[0] / 10) * 10; cr[1] = round(c[1] / 10) * 10; // We then set the "d" variables for later output. ad[0] = ar[0] / 20; ad[1] = ar[1] / 20; bd[0] = br[0] / 20; bd[1] = br[1] / 20; cd[0] = cr[0] / 20; cd[1] = cr[1] / 20; qd[0] = q[0] / 20; qd[1] = q[1] / 20; rd[0] = r[0] / 20; rd[1] = r[1] / 20; pd[0] = p[0] / 20; pd[1] = p[1] / 20; /** --- POINTS A-C AND Q,R,P ALGORITHM GENERATION --- **/ strokeWeight(3); stroke(255, 0, 21); line(ar[0], -ar[1], br[0], -br[1]); line(br[0], -br[1], cr[0], -cr[1]); if(!mfp) { strokeWeight(3); q = [(1 - t) * ar[0] + t * br[0], (1 - t) * ar[1] + t * br[1]]; r = [(1 - t) * br[0] + t * cr[0], (1 - t) * br[1] + t * cr[1]]; p = [(1 - t) * q[0] + t * r[0], (1 - t) * q[1] + t * r[1]]; line(q[0], -q[1], r[0], -r[1]); noStroke(); fill(255, 242, 0); // Show that these are not draggable ellipse(q[0], -q[1], 10, 10); ellipse(r[0], -r[1], 10, 10); ellipse(p[0], -p[1], 10, 10); } else { strokeWeight(1); for(var i = 0; i < 1; i += 0.05) { q = [(1 - i) * ar[0] + i * br[0], (1 - i) * ar[1] + i * br[1]]; r = [(1 - i) * br[0] + i * cr[0], (1 - i) * br[1] + i * cr[1]]; p = [(1 - i) * q[0] + i * r[0], (1 - i) * q[1] + i * r[1]]; line(q[0], -q[1], r[0], -r[1]); } } for(var i = 0; i < totalPoints; i++) { drawControlPoint(i); } // Draw control points /** --- VARIABLE 'T' SLIDER --- **/ t = constrain(t, 0, 1); // It's possible to exceed 1 or going the opposite way with 0, so we'll constrain it. 
noStroke(); fill(255, 255, 255, 200); rect(0, -200, 200, 40); stroke(152, 179, 230); strokeWeight(2); line(40, -180, 180, -180); fill(72, 123, 224); ellipse(40 + t * 140, -180, 15, 15); fill(0, 0, 0); textAlign(CENTER, CENTER); textSize(25); text("t", 20, -180); textSize(12); text(t, 40 + t * 140, -166); /** --- ALGORITHM COMPUTATION DISPLAY --- **/ textAlign(CORNER, CORNER); textSize(18); fill(0, 0, 0); text("Q: (" + qd[0].toFixed(2) + ", " + qd[1].toFixed(2) + ")", -180, -180); text("R: (" + rd[0].toFixed(2) + ", " + rd[1].toFixed(2) + ")", -180, -160); text("P: (" + pd[0].toFixed(2) + ", " + pd[1].toFixed(2) + ")", -180, -140); /** --- STEP-BY-STEP ALGORITHM EVALUATIONS WITH MENU --- **/ /* --- MENUS --- */ // The reason I have so many .toFixed(2), it's because in weird situations it will go up to 16 decimal places // There can be rounding errors because of .toFixed not rounding, that's why I compute the final step with the qd/rd/pd rather than manually if(qMenu) { fill(0, 0, 0, 200); noStroke(); rect(-200, -25, 200, 205); fill(255, 255, 255); textSize(11); textAlign(CENTER, CENTER); text("Qx = (1-t) * Ax + t * Bx", -100, -10); text("Qx = (1-" + t.toFixed(2) + ") * " + ad[0].toFixed(2) + " + " + t.toFixed(2) + " * " + bd[0].toFixed(2), -100, 10); text("Qx = (" + (1 - t.toFixed(2)).toFixed(2) + ") * " + ad[0].toFixed(2) + " + " + t.toFixed(2) + " * " + bd[0].toFixed(2), -100, 30); text("Qx = " + ((1 - t.toFixed(2)) * ad[0]).toFixed(2) + " + " + (t.toFixed(2) * bd[0]).toFixed(2), -100, 50); text("Qx = " + qd[0].toFixed(2), -100, 70); text("Qy = (1-t) * Ay + t * By", -100, 90); text("Qy = (1-" + t.toFixed(2) + ") * " + ad[1].toFixed(2) + " + " + t.toFixed(2) + " * " + bd[1].toFixed(2), -100, 110); text("Qy = (" + (1 - t.toFixed(2)).toFixed(2) + ") * " + ad[1].toFixed(2) + " + " + t.toFixed(2) + " * " + bd[1].toFixed(2), -100, 130); text("Qy = " + ((1 - t.toFixed(2)) * ad[1]).toFixed(2) + " + " + (t.toFixed(2) * bd[1]).toFixed(2), -100, 150); text("Qy = " + 
qd[1].toFixed(2), -100, 170); } if(rMenu) { fill(0, 0, 0, 200); noStroke(); rect(-100, -25, 200, 205); fill(255, 255, 255); textSize(11); textAlign(CENTER, CENTER); text("Rx = (1-t) * Bx + t * Cx", 0, -10); text("Rx = (1-" + t.toFixed(2) + ") * " + bd[0].toFixed(2) + " + " + t.toFixed(2) + " * " + cd[0].toFixed(2), 0, 10); text("Rx = (" + (1 - t.toFixed(2)).toFixed(2) + ") * " + bd[0].toFixed(2) + " + " + t.toFixed(2) + " * " + cd[0].toFixed(2), 0, 30); text("Rx = " + ((1 - t.toFixed(2)) * bd[0]).toFixed(2) + " + " + (t.toFixed(2) * cd[0]).toFixed(2), 0, 50); text("Rx = " + rd[0].toFixed(2), 0, 70); text("Ry = (1-t) * By + t * Cy", 0, 90); text("Ry = (1-" + t.toFixed(2) + ") * " + bd[1].toFixed(2) + " + " + t.toFixed(2) + " * " + cd[1].toFixed(2), 0, 110); text("Ry = (" + (1 - t.toFixed(2)).toFixed(2) + ") * " + bd[1].toFixed(2) + " + " + t.toFixed(2) + " * " + cd[1].toFixed(2), 0, 130); text("Ry = " + ((1 - t.toFixed(2)) * bd[1]).toFixed(2) + " + " + (t.toFixed(2) * cd[1]).toFixed(2), 0, 150); text("Ry = " + rd[1].toFixed(2), 0, 170); } if(pMenu) { fill(0, 0, 0, 200); noStroke(); rect(0, -25, 200, 205); fill(255, 255, 255); textSize(11); textAlign(CENTER, CENTER); text("Px = (1-t) * Qx + t * Rx", 100, -10); text("Px = (1-" + t.toFixed(2) + ") * " + qd[0].toFixed(2) + " + " + t.toFixed(2) + " * " + rd[0].toFixed(2), 100, 10); text("Px = (" + (1 - t.toFixed(2)).toFixed(2) + ") * " + qd[0].toFixed(2) + " + " + t.toFixed(2) + " * " + rd[0].toFixed(2), 100, 30); text("Px = " + ((1 - t.toFixed(2)) * qd[0]).toFixed(2) + " + " + (t.toFixed(2) * rd[0]).toFixed(2), 100, 50); text("Px = " + pd[0].toFixed(2), 100, 70); text("Py = (1-t) * Qy + t * Ry", 100, 90); text("Py = (1-" + t.toFixed(2) + ") * " + qd[1].toFixed(2) + " + " + t.toFixed(2) + " * " + rd[1].toFixed(2), 100, 110); text("Py = (" + (1 - t.toFixed(2)).toFixed(2) + ") * " + qd[1].toFixed(2) + " + " + t.toFixed(2) + " * " + rd[1].toFixed(2), 100, 130); text("Py = " + ((1 - t.toFixed(2)) * qd[1]).toFixed(2) + " + " 
+ (t.toFixed(2) * rd[1]).toFixed(2), 100, 150); text("Py = " + pd[1].toFixed(2), 100, 170); } if(settingsMenu) { fill(0, 0, 0, 200); noStroke(); rect(50, 125, 150, 55); fill(186, 186, 186); textSize(12); textAlign(CENTER, CENTER); text("If this is pressed it will\nnullify other data.", 125, 141); fill(255, 255, 255); text("Make Full Parabola", 110, 166); stroke(255, 255, 255); noFill(); rect(170, 156, 20, 20); if(mfp) { stroke(0, 255, 9); strokeWeight(3); line(172, 171, 177, 176); line(177, 176, 187, 156); } noStroke(); } textAlign(CORNER, CORNER); /* --- BOTTOM BAR --- */ fill(0, 0, 0, 200); noStroke(); rect(-200, 180, width, 20); textSize(15); if(qMenuHover) { fill(255, 255, 255, 100); rect(-175, 180, 100, 20); } else if(rMenuHover) { fill(255, 255, 255, 100); rect(-75, 180, 100, 20); } else if(pMenuHover) { fill(255, 255, 255, 100); rect(25, 180, 100, 20); } else if(settingsMenuHover) { fill(255, 255, 255, 100); rect(160, 180, 40, 20); } fill(255, 255, 255); text("Point Q", -150, 195); text("Point R", -50, 195); text("Point P", 50, 195); stroke(255, 255, 255); line(170, 195, 180, 185); line(180, 185, 190, 195); }; Note: I didn't use mouseDragged in totality because it gets flaky if you move your mouse too fast. So instead I set a boolean based off dist(); in mouseDragged, and controlled it elsewhere. Answer: This will be a slightly abstract review. I'm not a big fan of Processing.js - or rather, I don't know it well enough to know if I'm going against the grain. And Khan Academy's editor is annoying me (it reminds me too much of Microsoft's Clippy; always butting in). So I'll be talking mostly about how you could do things in raw JavaScript. Some of it already exists in Processing.js, or is done differently in Processing.js. Anyway, my first point would be that user interface and core logic are too intermingled here. But that's Processing for you - it gets messy. My next point would be to attack this more high-level. 
The core data type you're dealing with is coordinates - x and y. Also known as a point, or a vector. You're storing them as two numbers in an array, which is valid, but with a little more preparation, you can use objects instead, which in turn can make your code more expressive. Right now a lot of your code relies on hard-coded array indices, when what you really mean is x or y, or point a or b. So let's make some points, using a Point constructor (Processing has a PVector constructor you can use instead): function Point(x, y) { this.x = x; this.y = y; } var a = new Point(-150, 50); var b = new Point(0, -50); var c = new Point(150, 50); Those are our control points. Next are the interpolated points Q, R, and P. Here's where object orientation comes in handy. The coordinates of the interpolated points are derived entirely from the coordinates of the control points. So we'll want our InterpolatedPoint constructor to take two points as arguments: function InterpolatedPoint(a, b) { this.a = a; this.b = b; } Now, instead of just assigning some numbers to the interpolated point's x and y, let's make getter methods that'll calculate coordinates on the fly. Note that an endpoint can be either a plain Point or another InterpolatedPoint (as it is for P), so we resolve its coordinates accordingly: InterpolatedPoint.prototype = { getX: function (t) { return lerp(this.a.getX ? this.a.getX(t) : this.a.x, this.b.getX ? this.b.getX(t) : this.b.x, t); }, getY: function (t) { return lerp(this.a.getY ? this.a.getY(t) : this.a.y, this.b.getY ? this.b.getY(t) : this.b.y, t); } }; I'm using lerp here, which is a basic linear interpolation function that's also in Processing.js. It's just (b - a) * t + a. So putting that to use: var q = new InterpolatedPoint(a, b); var r = new InterpolatedPoint(b, c); var p = new InterpolatedPoint(q, r); So now, you can change the coordinates of points a, b, and c, yet as soon as you call p.getX(0.5) or q.getY(0.8) you'll get the right value back. The InterpolatedPoint objects keep references to the points that define them. So you're pretty close to an algebraic definition, and the logic has been encapsulated in objects. 
$$ \vec{q} = (\vec{b}-\vec{a})t + \vec{a} $$ $$ \vec{r} = (\vec{c}-\vec{b})t + \vec{b} $$ $$ \vec{p} = (\vec{r}-\vec{q})t + \vec{q} $$ From here it's a question of drawing the points and lines. I'd suggest moving some of this logic to methods on Point/InterpolatedPoint - i.e. make them draw themselves. Because it was a fun little challenge, I've written an alternative implementation (which also differs from the above a little by defining "real" getters) in plain JS. Note: Since I'm using a built-in slider input, it won't work in IE9 and below.
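The repeated-lerp construction works for any number of control points, not just three. As a rough language-agnostic sketch (Python rather than Processing.js, reusing the tool's control points for illustration):

```python
# De Casteljau is just repeated linear interpolation: lerp adjacent control
# points, then lerp the results, until one point remains. Works for a Bezier
# curve of any degree, not only the quadratic case in the tool.
def lerp(a, b, t):
    return (b - a) * t + a

def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeated lerp."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple(lerp(a, b, t) for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# The tool's quadratic case, A, B, C, evaluated at t = 0.5:
print(de_casteljau([(-150, 50), (0, -50), (150, 50)], 0.5))  # (0.0, 0.0)
```

At t = 0.5 the two intermediate points Q and R are (-75, 0) and (75, 0), and their midpoint (0, 0) lies on the parabola, matching the q, r, p equations above.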
{ "domain": "codereview.stackexchange", "id": 16375, "tags": "javascript, programming-challenge, processing.js" }
Why do we use ΔxΔp≥ℏ instead of ΔxΔp≥ℏ/2 to calculate the minimum uncertainty in momentum?
Question: I came across a question in which we were asked to calculate the minimum uncertainty in the momentum of a particle. The solution showed that the min. uncertainty is found using the formula ΔxΔp≥ℏ instead of ΔxΔp≥ℏ/2. Why do we use this specific formula here? Answer: I think it's just because it's only a factor of 2. Same order of magnitude. Using the uncertainty principle for a question like this is usually just meant to be an estimate/ball park, so when you do the proper calculation you know if your final answer makes sense.
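To see why the factor of 2 is immaterial for an estimate, here is the arithmetic for an arbitrarily chosen confinement length; the 1 Å value below is just an example, not taken from the original question:

```python
# Order-of-magnitude check: for a particle confined to Delta_x ~ 1 angstrom,
# the two conventions differ only by a factor of 2 (Delta_x is an arbitrary
# example value, not from the original problem).
hbar = 1.054571817e-34  # J*s

dx = 1e-10                  # m
dp_loose = hbar / dx        # using  dx * dp >= hbar
dp_tight = hbar / (2 * dx)  # using  dx * dp >= hbar / 2

print(f"{dp_loose:.2e} kg*m/s vs {dp_tight:.2e} kg*m/s")  # same order of magnitude
```

Both come out around 10^-24 kg·m/s, so for a ballpark answer either convention tells the same story.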
{ "domain": "physics.stackexchange", "id": 67374, "tags": "momentum, heisenberg-uncertainty-principle" }
How is InstructGPT a fine-tuned version of GPT-3 while at the same time having fewer parameters than the original GPT-3?
Question: I am reading the paper "Training language models to follow instructions with human feedback". It says: Our labelers provide demonstrations of the desired behavior on the input prompt distribution (see Section 3.2 for details on this distribution). We then fine-tune a pretrained GPT-3 model on this data using supervised learning. The paper also says: On our test set, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having over 100x fewer parameters. I am not able to understand how the aligned model is a fine-tuned version of GPT-3 using supervised learning (and other steps associated with reinforcement learning) and at the same time the aligned model has fewer parameters than the original model. Can someone give me a hint on the subject? Answer: They say “a pretrained GPT-3” model, emphasis on “a” implying one of many, rather than “the”. I believe they simply repeat the process with pretrained GPT-3 models at various parameter scales, comparing the performance to GPT-3 175B throughout to see if there are parameter efficiency gains. They note that even the 1.3B InstructGPT version outperforms the 175B GPT-3 version.
{ "domain": "ai.stackexchange", "id": 3689, "tags": "weights, gpt-3, instruct-gpt" }
What is an agent in Artificial Intelligence?
Question: While studying artificial intelligence, I have often encountered the term "agent" (often autonomous, intelligent). For instance, in fields such as Reinforcement Learning, Multi-Agent Systems, Game Theory, Markov Decision Processes. In an intuitive sense, it is clear to me what an agent is; I was wondering whether in AI it had a rigorous definition, perhaps expressed in mathematical language, and shared by the various AI-related fields. What is an agent in Artificial Intelligence? Answer: The acclaimed book Artificial Intelligence: A Modern Approach (by Stuart Russell and Peter Norvig) gives a definition of an agent: An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. This definition is illustrated by the following figure. This definition (and illustration) of an agent thus does not seem to include the agent as part of the environment, but this is debatable and can be a limitation, given that the environment also includes the agent. According to this definition, humans, robots, and programs are agents. For example, a human is an agent because it possesses sensors (e.g. the eyes) and actuators (e.g. the hands, which, in this case, are also sensors) and it interacts with an environment (the world). A percept (or perception) is composed of all perceptual inputs of the agent. The specific definition of the percept changes depending on the specific agent. For example, in the case of a human, the percept consists of all perceptual inputs from all sense organs of the human (eyes, ears, tongue, skin, and nose). In the case of a robot only equipped with a camera, the percept consists only of the camera frame (at a certain point in time). A percept sequence is a sequence of percepts. An action is anything that has an effect on the environment. For example, in the case of a legged robot, an action can be "move forward". 
An action is chosen by the agent function (which is illustrated by the white box with a black question mark in the figure above), which can also be called the policy. The agent function highly determines the intelligent or intellectual capabilities of the agent and differentiates it from other agents. Therefore, there are different agents depending on the sensors and actuators they possess, but, more importantly, depending on their policy, which highly affects their intellectual characteristics. A possible categorization of agents is:
- rational agents do the "right" thing (where "right", of course, depends on the context)
- simple reflex agents select actions only based on the current percept (thus ignoring previous percepts)
- model-based reflex agents build a model of the world (sometimes called a state) that is used to deal with cases where the current percept is insufficient to take the most appropriate action
- goal-based agents possess some sort of goal information that describes situations that are desirable; for example, in the case of a human, a situation that is desirable is to have food
- utility-based agents associate value with certain actions more than others; for example, if you need immediate energy, chocolate might have more value than some vegetable
- learning agents update their model (for example) based on the experience or interaction with the environment
More details regarding these definitions can be found in section 2 of the book mentioned above (3rd edition). However, note that there are other possible categorizations of agents. 
A reinforcement learning (RL) agent is an agent that interacts with an environment and can learn a policy (a function that determines how the agent behaves) or value (or utility) function (from which the policy can be derived) from this interaction, where the agent takes an action from the current state of the environment, and the environment emits a percept, which, in the case of RL, consists of a reinforcement (or reward) signal and the next state. The goal of the RL agent is to maximize the cumulative reward (or reinforcement) signal. An RL agent can thus be considered a rational, goal, utility-based, and learning agent. It can also be (or not) a simple reflex and model-based agent.
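The sense-act loop described above can be sketched as a minimal simple reflex agent in a toy environment; the world, percepts, and policy below are all invented for illustration:

```python
# A minimal sense-act loop: a simple reflex agent in a toy corridor world.
# The environment, percepts, and policy are all invented for illustration.
def reflex_policy(percept):
    # Simple reflex: act on the current percept only (no memory, no model).
    return "clean" if percept == "dirty" else "move"

def run_episode(cells, steps=10):
    world = list(cells)  # each cell is "dirty" or "clean"
    pos = 0
    for _ in range(steps):
        action = reflex_policy(world[pos])     # agent senses, then acts
        if action == "clean":
            world[pos] = "clean"               # actuator changes the environment
        else:
            pos = min(pos + 1, len(world) - 1) # move down the corridor
    return world

print(run_episode(["dirty", "clean", "dirty"]))  # ['clean', 'clean', 'clean']
```

Swapping `reflex_policy` for a function that also consults stored state, goals, or learned values is exactly what distinguishes the other agent categories in the list above.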
{ "domain": "ai.stackexchange", "id": 1199, "tags": "reinforcement-learning, terminology, definitions, intelligent-agent" }
Finding periodicity of spikes, matlab
Question: In my time series data, I often have spikes with regular periods. Sometimes, I get two or three different periodical spike sequences (two electrical noise sources), some with a period of T1 seconds, some with a period of T2 seconds, and some irregular and at random times. Let's say I have two groups of overlapping periodical spikes in my data, one with a period of 8 seconds and another with 6. I have zeroed the data except for the times where there are spikes. So, my data now looks like a train of spikes. How can I find the periodicity of the two and separate the two groups? Cross-correlation didn't help as many are just single sample spikes. I was thinking of FFT. But I can't see anything in the spectra at those low frequencies, nor when I use FFT on original data. My data has a 500 Hz sampling frequency. I do not want to remove them, just to find the periodicity. Thanks. Answer: My Scipy code that generates data, plots it and performs the fft is listed below. The code generates two "spike trains" at intervals of 6 and 8 seconds. The following plot shows the signal in time, and its fft (sorry, the fft title got mangled). The interesting thing to notice is that the frequencies you are looking for are there. Said frequencies are 1/8 Hz, or a normalized frequency of omega = (1/8)/500 = 0.00025, and 1/6 Hz, or omega = (1/6)/500 = 0.00033, which is where the first two peaks are in the frequency domain. Further out are many more repeating peaks/aliases because these are essentially really extreme, intermittent square waves. You could try low-pass-filtering to isolate the fundamental frequencies and eliminate other peaks, but I think that's probably a lost cause at a sample rate of 500 Hz. Trying to get a 0.2 Hz cutoff filter @ 500 Hz seems a bit much. Just zooming in on the frequency region of interest might be good enough. The next step could be to understand exactly how these spike trains are going to cause repetition/aliasing in the frequency domain and look for a certain pattern. 
Or try to either LPF and downsample a lot and retry the FFT route, calculate individual DFT bins using straight-up DFT math, or use the Goertzel algorithm to calculate DFT bins for suspected frequency locations. Maybe try a Goertzel algorithm from 0.0 Hz to 0.2 Hz with 0.01 spacing - just as an example. Good luck - it doesn't seem like a trivial problem.

from pylab import *

close('all')  # close previous plots

fs = 500

# generate signal
s1 = hstack((1, zeros(fs*6 - 1)))  # 1/6 Hz
s2 = hstack((1, zeros(fs*8 - 1)))  # 1/8 Hz
s1r = tile(s1, (12,))
s2r = tile(s2, (9,))
s = s1r + s2r
print("shape of s is %s" % (str(s.shape)))

sfft = fft(s)  # execute fft
sfreqs = fftshift(fftfreq(len(sfft)))
print("shape of sfft is %s" % (str(sfft.shape)))

# generate plots
figure()
subplot(211)
plot(1./fs * arange(len(s)), s)  # , 'o'
title('signal, s'); xlabel('sec'); ylabel('mag')
subplot(212)
plot(sfreqs, abs(sfft))  # , 'o'
title('fft(s)'); xlabel('$\omega$'); ylabel('linear mag')
axis([0.0, 0.0005, 0.0, 25.])
show()

UPDATE: Thinking about it some more, the best solution probably doesn't involve signal processing. You may want to threshold the samples into zeros and spikes. Then take the interval between spike locations 0 and 1 and look for that interval through the rest of the spikes. If that interval is consistent with all of your spikes, then you've found that series of spikes and should "subtract out" that series from the spikes. Then keep iteratively looking at intervals of spikes. In a noisy, imperfect system, a good implementation of this will probably be tricky, but it's probably a better solution to this specific problem.
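The interval-search approach from the update can be sketched in plain Python. Everything below (the function name, the tolerance in samples, the minimum train length, and the greedy first-match rule) is my own assumption about how to realize the idea, not code from the original answer:

```python
def extract_periodic_trains(spikes, tol=2, min_spikes=4):
    """Greedily peel periodic trains off a sorted list of spike sample
    indices.  Returns ([(period, [spike, ...]), ...], leftover_spikes)."""
    remaining = sorted(spikes)
    trains = []
    while len(remaining) >= min_spikes:
        t0 = remaining[0]
        matched = None
        # Try the interval from the first spike to each later spike as a
        # candidate period, smallest first.
        for t1 in remaining[1:]:
            period = t1 - t0
            if period <= tol:
                continue
            # Collect every spike lying on the grid t0, t0+T, t0+2T, ...
            # (within +/- tol samples of a grid point).
            train = [t for t in remaining
                     if (t - t0) % period <= tol
                     or period - (t - t0) % period <= tol]
            if len(train) >= min_spikes:
                matched = (period, train)
                break
        if matched is None:
            break  # the first spike belongs to no train; a fuller version could drop it and retry
        trains.append(matched)
        drop = set(matched[1])
        remaining = [t for t in remaining if t not in drop]
    return trains, remaining
```

Note that spikes shared by two trains (coincident beats) get assigned to whichever train is peeled off first, so a production version would need a smarter overlap rule.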
{ "domain": "dsp.stackexchange", "id": 754, "tags": "fft, cross-correlation" }
How to programmatically add display to running Rviz instance?
Question: I'm writing a package that is meant to be run after a 3rd-party package starts Rviz. My package simply provides a new display type, and some topics for that display type to subscribe to. Currently, I have to launch the 3rd-party package, then launch my package, then manually add and configure the display type to Rviz. It would be nice if in the launch file for my package I could automatically add and configure the display. Is this at all possible? I don't think Rviz exposes any services for this, or allows you to 'force' it to reload a custom config file, so I don't know how this would be possible. Originally posted by danep on ROS Answers with karma: 197 on 2013-09-05 Post score: 0 Original comments Comment by danep on 2013-09-05: I thought about that, but it was too ugly to seriously consider :) I guess I'll just resort to hacking the other package's config file, and maintaining a vendor branch. It would be nice if there were a Better Way in the future... Answer: The only (hacky) way I see would be to add a 'rosnode kill' on the rviz instance launched by the 3rd party package and an rviz instance with a different config to your own launchfile. Originally posted by dgossow with karma: 1444 on 2013-09-05 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 15434, "tags": "rviz, ros-groovy" }
How to differentiate exponentials of operators?
Question: Suppose we have $$e^{At}e^{Bt}=F(t),$$ where $$A, B$$ - operators that do not commute. Now I need to take the derivative $$dF(t)/dt.$$ In which order do I write the operators? $$dF(t)/dt = Ae^{At}e^{Bt} + e^{At}Be^{Bt}$$ or $$dF(t)/dt = e^{At}Ae^{Bt} + e^{At}e^{Bt}B~?$$ Answer: They are the same, since any operator commutes with its exponential \begin{align} Ae^{tA} &= A\left(1+tA+\frac{1}{2!}t^2A^2+\dots\right) \\ &= A+tA^2+\frac{1}{2!}t^2A^3+\dots \\ &= \left(1+tA+\frac{1}{2!}t^2A^2+\dots\right)A \\ &= e^{tA}A \end{align} (and, in general, any operator $A$ commutes with every function $f(A)$ of it, for similar reasons).
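A quick numerical sanity check of this identity (a sketch using NumPy and a truncated Taylor series for the matrix exponential; the matrix size, seed, and value of t are arbitrary choices of mine):

```python
import numpy as np

def expm_series(M, terms=60):
    """Matrix exponential via its truncated Taylor series
    (adequate for the small, well-scaled matrices used here)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
t = 0.3

eA, eB = expm_series(t * A), expm_series(t * B)

# A commutes with its own exponential ...
assert np.allclose(A @ eA, eA @ A)

# ... so the two orderings of dF/dt agree:
d1 = A @ eA @ eB + eA @ B @ eB
d2 = eA @ A @ eB + eA @ eB @ B
assert np.allclose(d1, d2)

# Sanity check against a numerical derivative of F(t) = e^{At} e^{Bt}
h = 1e-6
F = lambda s: expm_series(s * A) @ expm_series(s * B)
assert np.allclose((F(t + h) - F(t - h)) / (2 * h), d1, atol=1e-4)
```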
{ "domain": "physics.stackexchange", "id": 59010, "tags": "operators, differentiation, commutator" }
If a conductor becomes charged will the surface be neutral?
Question: If an object that is a conductor has a negative or positive charge, will the charge redistribute so that the surface of the object is neutral? If the surface does become neutral, will it be attracted to things? Answer: Actually, it is quite the opposite; usually the charge redistributes itself on the surface, and inside the conductor there is no charge. Think of it this way: if the conductor is charged, then because of mutual repulsion the charge carriers will position themselves as far apart as possible, which means mostly on the surface.
{ "domain": "physics.stackexchange", "id": 35240, "tags": "electrostatics, charge, conductors" }
Changing RPLIDAR laser scanner motor from being always on?
Question: (This is on Ubuntu 12.04 and Hydro, Turtlebot2 with kobuki.) I just got a Robopeak RPLIDAR, and I'm working on mounting it to my turtlebot2. (I wanted a wider angle than the kinect could give me.) As soon as the RPLIDAR is plugged into the USB slot, the motor starts spinning (this is without even having the driver loaded). The motor is enabled by the RS232 DTR signal. When the RPLIDAR ros node starts, it also sets the DTR signal to start the motor spinning. I often have my turtlebot on for hours (days!) at a time, even if I'm not using it. I don't like the idea of the motor spinning for hours (days!). I'm thinking of modifying the RPLIDAR ros node to implement a way to stop/start the motor. What do people think of these ideas?

1. Add a service to the rplidar node, with 3 messages: motor_on, motor_off, get_motor_status. The motor could be controlled from the command line, or programmatically.
2. Add a dynamic parameter to the rplidar node: motor_enable. Could still use the command line or a program to change it.
3. Use a special case of the rplidar publishing rate. If set to 0 Hz, this would turn the motor off; if non-zero, the motor would turn on. (And I know the motor should be given time to warm up.)

thanks for your comments, buddy Originally posted by mrsoft99 on ROS Answers with karma: 78 on 2014-08-26 Post score: 2 Answer: Hi, I have implemented the functionalities you want (the rplidar_node create 2 services "start_motor" and "stop_motor"). I have send a pull request on github, but you can access the code on my fork : https://github.com/negre/rplidar_ros Originally posted by Amaury Negre with karma: 56 on 2015-04-09 This answer was ACCEPTED on the original site Post score: 4
{ "domain": "robotics.stackexchange", "id": 19190, "tags": "ros, rplidar, scanner" }
Find big files on the harddrive
Question: For practicing Python I wrote a script which scans the harddrive for large files and lists them in a report. The script takes the root path, and then all the subfolders are scanned as well. Also, the minimum size for the files needs to be defined. Smaller files are not listed. The code:

find_big_files.py

"""
Find big files on the harddrive and write a report of where the big files are located.
Supply root folder and desired size to find the files.
"""
import os


def size_with_biggest_unit(size: int) -> str:
    """ Turns size in Bytes into KB, MB, GB or TB """
    unit: str = ""
    if size > (1024 ** 4):
        unit = "TB"
        size = int(round(size / 1024 ** 4))
    elif size > (1024 ** 3):
        unit = "GB"
        size = int(round(size / 1024 ** 3))
    elif size > (1024 ** 2):
        unit = "MB"
        size = int(round(size / 1024 ** 2))
    elif size > 1024 ** 1:
        unit = "KB"
        size = int(round(size / 1024))
    return str(size) + unit


def size_in_bytes(size: str) -> int:
    """ Turns size in KB, MB, GB or TB into Byte """
    ret_size: int = 0
    if size.endswith("TB"):
        ret_size = int(size.strip("TB")) * (1024 ** 4)
    elif size.endswith("GB"):
        ret_size = int(size.strip("GB")) * (1024 ** 3)
    elif size.endswith("MB"):
        ret_size = int(size.strip("MB")) * (1024 ** 2)
    elif size.endswith("KB"):
        ret_size = int(size.strip("KB")) * 1024
    elif size.isdigit():
        ret_size = int(size)
    else:
        raise Exception(("Input size should be digit + TB/GB/MB/KB or digit."
                         "Size was: ") + size)
    return ret_size


def sort_by_size(big_files: list) -> list:
    """ Sort dictionary with folder_name, filename and size by filesize
    in decreasing order """
    return sorted(big_files, key=lambda k: k['file_size'], reverse=True)


def write_report_of_big_files(big_files: list):
    """ Write report in same folder as the script is executed.
    Report contains formatted output of found big files """
    with open('big_files_report.txt', 'w') as file:
        for big_file in big_files:
            file.write(size_with_biggest_unit(big_file['file_size']) + '\t' +
                       big_file['filename'] + '\t' +
                       big_file['folder_name'] + '\n')


def find_big_files(root_folder: str, input_size: str):
    """ Checks for all files in root and sub folders whether they
    exceed a certain size """
    size = size_in_bytes(input_size)
    big_files: list = []
    for folder_name, subfolders, filenames in os.walk(root_folder):
        for filename in filenames:
            file_size = os.path.getsize(folder_name + '\\' + filename)
            if file_size > size:
                big_file: dict = {'folder_name': folder_name,
                                  'filename': filename,
                                  'file_size': file_size}
                big_files.append(big_file)
    sorted_big_files = sort_by_size(big_files)
    write_report_of_big_files(sorted_big_files)


find_big_files("E:\\", "100MB")

I checked the code with PyLint and MyPy. PyLint still gives one warning in line 86 (in find_big_files): Unused variable 'subfolders' [W:unused-variable]. It is true I don't use the variable subfolders, but don't I need to supply three variables in the for loop to use folder_name and filename, which I get from os.walk(root_folder)? Also, I wonder if I used type annotations correctly. I used them here for the first time and already found some bugs with MyPy (which are already fixed in the posted code). Other than that: Are there any other smells? Is the code easy to follow? Can anything be done easier? Feel free to comment on anything suspicious you can find. Answer: size_with_biggest_unit should be streamlined. Move the unit names into a list:

unit_names = ["", "KB", "MB", "GB", "TB"]

and iterate downwards:

for exponent in range(4, -1, -1):
    if size > 1024 ** exponent:
        unit_name = unit_names[exponent]
        return str(round(size / 1024 ** exponent)) + unit_name
raise ImpossibleError

Same (almost same) applies to size_in_bytes. Hardcoding \ as a path delimiter seriously impairs the portability. Prefer os.path.sep.
Instead of returning a list, consider turning it into an iterator. Re Unused variable 'subfolders', a pythonic way to tell that the variable is truly unused is to call it _. I don't know if for folder_name, _, filenames in os.walk(root_folder): would pacify PyLint, but it would definitely make the reviewer happier.
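Putting those suggestions together, a complete replacement for the two helpers might look like the following sketch (it keeps the original behavior of returning plain bytes with no unit suffix; I also swapped the bare Exception for a ValueError, which is my own choice, not from the review above):

```python
UNIT_NAMES = ["", "KB", "MB", "GB", "TB"]


def size_with_biggest_unit(size: int) -> str:
    """Turn a size in bytes into a string using the largest fitting unit."""
    for exponent in range(len(UNIT_NAMES) - 1, 0, -1):
        if size > 1024 ** exponent:
            return str(round(size / 1024 ** exponent)) + UNIT_NAMES[exponent]
    return str(size)  # plain bytes: no unit suffix, as in the original


def size_in_bytes(size: str) -> int:
    """Turn a string like '100MB' (or a bare digit string) into bytes."""
    for exponent in range(len(UNIT_NAMES) - 1, 0, -1):
        if size.endswith(UNIT_NAMES[exponent]):
            return int(size[:-2]) * 1024 ** exponent  # all units are 2 chars
    if size.isdigit():
        return int(size)
    raise ValueError("Input size should be digit + TB/GB/MB/KB or digit. "
                     "Size was: " + size)
```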
{ "domain": "codereview.stackexchange", "id": 32839, "tags": "python, beginner, file, file-system" }
Is there a "water wave" analogy for lasers?
Question: I've been trying to understand why light can both be a wave and travel in a straight line, and a lot of the answers seem to include reference to the fact that the wavelength of visible light is extremely small and because of that we "don't see interference coming into play". I'm still really confused about why interference matters, but that made me consider the classic analogy for the interference pattern of light, where there are two slits on a water pond and someone creates some waves that travel through both the slits which ends up creating the pattern. What I want to know is if you could have "laser waves" on the surface of such a pond. Does a "water wave" or "water surface" analogy like this exist for lasers? If it does, what does it look like? If it does exist, how big of a pond would you need before you start seeing these effects? (Ocean size or bigger?) Answer: You can find examples online by searching for "water diffraction", although most images have slits too narrow (compared to the wavelength) to have an obvious beam on the other side. Here's a nice image where you can clearly see the beam: The physics behind this is the same as the physics behind the propagation of laser beams. See also this video (from Chiral Anomaly's comment).
{ "domain": "physics.stackexchange", "id": 82373, "tags": "optics, waves, electromagnetic-radiation, visible-light" }
find_package(catkin) fails in Hydro
Question: I am trying to move some packages made in Groovy to Hydro, which I just finished installing. After struggling to convert my previous workspace, I am trying to start again fresh, so following the tutorial I

source /opt/ros/hydro/setup.bash
mkdir -p ~/iar_ws_hydro/src
cd ~/iar_ws_hydro/src
cd ~/iar_ws_hydro/
catkin_make

However, when I run the catkin_make command in my brand new folder, it seems to recognize the command and start setting up the workspace, but then fails to find the catkin package. Here's the output:

abouchard@linux-z28r:~/iar_ws_hydro> catkin_make
Base path: /home/abouchard/iar_ws_hydro
Source space: /home/abouchard/iar_ws_hydro/src
Build space: /home/abouchard/iar_ws_hydro/build
Devel space: /home/abouchard/iar_ws_hydro/devel
Install space: /home/abouchard/iar_ws_hydro/install
####
#### Running command: "cmake /home/abouchard/iar_ws_hydro/src -DCATKIN_DEVEL_PREFIX=/home/abouchard/iar_ws_hydro/devel -DCMAKE_INSTALL_PREFIX=/home/abouchard/iar_ws_hydro/install" in "/home/abouchard/iar_ws_hydro/build"
####
-- The C compiler identification is GNU 4.7.2
-- The CXX compiler identification is GNU 4.7.2
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
CMake Error at CMakeLists.txt:44 (message):
  find_package(catkin) failed. catkin was neither found in the workspace nor
  in the CMAKE_PREFIX_PATH. One reason may be that no ROS setup.sh was
  sourced before.
-- Configuring incomplete, errors occurred!
Invoking "cmake" failed

I have to confess to being totally lost on this one. Obviously catkin is installed because it recognizes the catkin_make command, and the setup.bash calls setup.sh.
$CMAKE_PREFIX_PATH contains /opt/ros/hydro:/home/abouchard/ros_catkin_ws/install_isolated, and I can confirm that there are a whole mess of catkin executables in /opt/ros/hydro/bin if that helps. Originally posted by teddybouch on ROS Answers with karma: 320 on 2013-11-01 Post score: 1 Answer: The problem was that I was building locally and manually moving the contents of the install_isolated directory to /opt/ros/hydro/, with the result that my paths were not properly configured and packages not properly installed. When I wiped out my installation entirely and rebuilt using the --install-space flag, everything worked fine and didn't give any errors. Originally posted by teddybouch with karma: 320 on 2013-11-01 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 16029, "tags": "ros" }
Iterative tree traversal to turn tree into a dictionary and list
Question: I am trying to iteratively turn a tree into a list. For example:

c1
    c11
        c111
        c112
    c12

The tree above should return:

[
    {'value': 'c1', 'children': [
        {'value': 'c12', 'children': []},
        {'value': 'c11', 'children': [
            {'value': 'c112', 'children': []},
            {'value': 'c111', 'children': []}
        ]}
    ]}
]

Below is my code, and here is a gist for clearer viewing:

class Node():
    def __init__(self, node_list, value):
        self.node_list = node_list
        self.value = value


def reconstruct_iteratively(root):
    stack = list()
    # A tuple that stores the node to be traversed and the layer the node is in
    node_tuple = (root, 0)
    stack.append(node_tuple)
    layer = 0
    pre_layer = 0
    level = dict()
    # This is a postorder traversal
    while len(stack) != 0:
        # This case catches the event that the next node is in the parent
        # layer and this node is not a termination node
        if node_tuple[1] < pre_layer - 1:
            parent_node = stack.pop()
            parent_node_dict = {'value': parent_node[0].value,
                                'children': level.pop(layer, None)}
            pre_layer = layer
            layer -= 1
            if layer in level:
                level[layer].append(parent_node_dict)
            else:
                level[layer] = [parent_node_dict]
            if len(stack) != 0:
                node_tuple = stack[-1]
        # This case catches the event of traversing down to the child
        elif node_tuple[0].node_list is not None:
            for child in node_tuple[0].node_list:
                stack.append((child, layer + 1))
            node_tuple = (node_tuple[0].node_list[-1], layer + 1)
            layer += 1
        # This case catches the event that we are at the termination node
        elif node_tuple[0].node_list is None:
            old_node = stack.pop()
            node_dict = {'value': old_node[0].value, 'children': []}
            node_tuple = stack[-1]
            # Two possible scenarios
            # 1. The next node is in the same layer
            # 2. The next node is in the parent layer
            if node_tuple[1] == layer:
                if layer in level:
                    level[layer].append(node_dict)
                else:
                    level[layer] = [node_dict]
            else:
                if layer in level:
                    level[layer].append(node_dict)
                else:
                    level[layer] = [node_dict]
                parent_node = stack.pop()
                parent_node_dict = {'value': parent_node[0].value,
                                    'children': level.pop(layer, None)}
                pre_layer = layer
                layer -= 1
                if layer in level:
                    level[layer].append(parent_node_dict)
                else:
                    level[layer] = [parent_node_dict]
                if len(stack) != 0:
                    node_tuple = stack[-1]
    return [parent_node_dict]

I am looking for advice on making the code clearer and more efficient. Any comment would be appreciated. Thanks. Answer: Some suggestions:

- Empty lists are False, so while len(stack) != 0 is the same as while stack. None is also False, so you can check for empty lists and None values at the same time.
- You can use dict.setdefault to get a value and set it to a default (in your case an empty list) if it isn't already there.
- You can convert an append loop to extend with a generator expression, or better yet just a zip.
- Your last two elif tests are mutually exclusive, so the last can be an else.
- The if node_tuple[1] == layer: test does the same thing in the first line of both cases. I moved that out of the if test, but if they are supposed to do something different you should fix that yourself.
- pre_layer always has its value subtracted by one, so it is easier to subtract one before defining it.
- You always set node_tuple to stack[-1] if stack is non-empty, so you can move that out of the if test entirely. And if you put it at the beginning of the loop, you can avoid the if test entirely. You can simplify this further by only getting it if you need it.

def reconstruct_iteratively(root):
    # A tuple that stores the node to be traversed and the layer the node is in
    stack = [(root, 0)]
    layer = 0
    pre_layer = -1
    level = dict()
    # This is a postorder traversal
    while stack:
        node_value, node_layer = stack[-1]
        node_list = node_value.node_list
        # This case catches the event that the next node is in the parent
        # layer and this node is not a termination node
        if node_layer < pre_layer:
            parent_node = stack.pop()
            parent_node_dict = {'value': parent_node[0].value,
                                'children': level.pop(layer, None)}
            layer -= 1
            pre_layer = layer
            level.setdefault(layer, []).append(parent_node_dict)
        # This case catches the event of traversing down to the child
        elif node_list:
            stack.extend(zip(node_list, [layer + 1] * len(node_list)))
            layer += 1
        # This case catches the event that we are at the termination node
        else:
            # Two possible scenarios
            # 1. The next node is in the same layer
            # 2. The next node is in the parent layer
            level.setdefault(layer, []).append({'value': node_value.value,
                                                'children': []})
            del stack[-1]
            if stack[-1][1] != layer:
                parent_node_dict = {'value': stack.pop()[0].value,
                                    'children': level.pop(layer, None)}
                layer -= 1
                pre_layer = layer
                level.setdefault(layer, []).append(parent_node_dict)
    return [parent_node_dict]
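For comparison, the same nested dict/list shape can also be built iteratively in far fewer lines by pushing each node onto the stack together with its own, already-linked children list. This is my own sketch, not the reviewer's code; note it preserves the children's original order, whereas the posted traversal emits them reversed:

```python
class Node():
    def __init__(self, node_list, value):
        self.node_list = node_list
        self.value = value


def tree_to_list(root):
    """Iteratively convert a Node tree into the nested dict/list shape."""
    root_dict = {'value': root.value, 'children': []}
    stack = [(root, root_dict)]
    while stack:
        node, node_dict = stack.pop()
        for child in (node.node_list or []):
            child_dict = {'value': child.value, 'children': []}
            node_dict['children'].append(child_dict)  # link into the parent
            stack.append((child, child_dict))         # descend later
    return [root_dict]
```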
{ "domain": "codereview.stackexchange", "id": 13156, "tags": "python, algorithm, tree" }
2s orbital wavefunction has non-zero probability at $r=0$?
Question: The wavefunction for an electron within a hydrogen atom in the $2s$ state has the following wavefunction: $$\psi(r,\phi,\theta)=\psi(r)=\frac{1}{2\sqrt{\pi}}\left(2-\frac{r}{a_0}\right)\frac{e^{-r/2a_0}}{(2a_0)^{3/2}}$$ However, at $r=0$, $$\psi^*\psi\left.\right|_{r=0}=\left(\frac{1}{2\sqrt{\pi}}\left(2-\frac{0}{a_0}\right)\frac{e^{-0/2a_0}}{(2a_0)^{3/2}}\right)^2=\frac{1}{\pi(2a_0)^{3}}$$ I don't understand how this should be possible. My answer doesn't match logic and it doesn't match graphs that I find online. (every graph I see goes to zero at $r=0$) However, this does seem to really be the wavefunction for the 2s state. Where have I gone wrong? Answer: The graph shows the probability of finding the electron between the distances $r$ and $r + dr$. This probability is given by: $$ P = \psi^* \psi dV $$ where $dV$ is the volume element: $$ dV = 4\pi r^2 dr $$ So we get the probability: $$ P(r,r+dr) = \psi^* \psi 4\pi r^2 dr $$ and therefore when $r = 0$ the probability $P = 0$. It isn't that the wavefunction goes to zero at $r = 0$, but that the size of the volume element goes to zero.
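Both halves of the answer can be checked symbolically. In this sketch (using SymPy; the symbols r and a stand for the radius and the Bohr radius a_0), the wavefunction itself is nonzero at r = 0, yet the radial probability density 4*pi*r^2*|psi|^2 vanishes there and still integrates to 1:

```python
import sympy as sp

r, a = sp.symbols('r a', positive=True)

# the 2s wavefunction from the question, with a standing for a_0
psi = (1 / (2 * sp.sqrt(sp.pi))) * (2 - r / a) \
      * sp.exp(-r / (2 * a)) / (2 * a) ** sp.Rational(3, 2)

# radial probability density P(r) = |psi|^2 * 4*pi*r^2
P = psi**2 * 4 * sp.pi * r**2

# the wavefunction itself is nonzero at r = 0 ...
assert psi.subs(r, 0) != 0
# ... but the radial probability density vanishes there, thanks to the r^2
assert P.subs(r, 0) == 0
# and P integrates to 1 over all r, so psi is correctly normalized
assert sp.simplify(sp.integrate(P, (r, 0, sp.oo)) - 1) == 0
```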
{ "domain": "physics.stackexchange", "id": 21025, "tags": "quantum-mechanics, atomic-physics, hydrogen" }
STAR-long parameters for aligning RNA ONT reads to genome
Question: Are there any suggested parameters to align ONT reads to the reference genome using STAR-long? For now, I used the parameters suggested here, but I noticed a weird behaviour. I have RNA reads (D. melanogaster) from R7 and R9 flowcells, separately. I only selected 2D reads in the pass category for analysis. I have, respectively, 113249 reads for R7 and 40318 reads for R9. I aligned those reads and get (only!) 150 uniquely mapped reads for R7 data and 8017 uniquely mapped reads for R9 data. I tried to run the same command again on a different server with a fresh compilation, but the output file is consistent with these 150 reads. However, if I align the same with GMAP, I get 78016 uniquely mapped reads for R7 and 33523 uniquely mapped reads for R9, so I suspect that something went wrong in the alignment run. I am aware that the two mappers behave very differently, STAR-long being more precise and preferring to report mappings of fewer reads but at better loci, and GMAP being overall less precise, trying to map most of the reads but at not-so-good loci. I was wondering if some of you have experience with this and could suggest the best parameters for RNA reads from ONT? Answer: I've had great results using minimap2, particularly when combined with a pre-treatment of Canu for error correction (using minimap2 for the read-to-read mapping):

# correct reads
~/install/canu/canu-1.6/Linux-amd64/bin/canu overlapper=minimap \
    genomeSize=100M minReadLength=100 minOverlapLength=30 -correct \
    -p 4T1_BC06 -d 4T1_BC06 \
    -nanopore-raw workspace/pass/barcode06/fastq_runid_*.fastq

# align reads
~/install/minimap2/minimap2 -p 10 \
    -a ~/db/fasta/mmus/ucsc/mmus_ucsc_all_cdna.idx \
    -x splice <(pv all/4T1_BC06/4T1_BC06.correctedReads.fasta.gz) | \
    samtools sort > 4T1_BC06_all_vs_mmusAll.bam

Update: the most recent nanopore basecaller, combined with the most recent minimap2, seems to now do a decent job with mapping, and pre-correction of reads no longer seems necessary. More recently I've been using LAST to map to the transcriptome, and minimap2 to map to the genome. Minimap2 has the ability to use a homopolymer-compressed genome index, which means that the most common consistent error for nanopore reads (i.e. misjudgement of homopolymer length) will not influence the mapping rate.
{ "domain": "bioinformatics.stackexchange", "id": 381, "tags": "nanopore, rna-alignment, star, gmap" }
Could the randomness of quantum mechanics be the result of unseen factors?
Question: The possibility of randomness in physics doesn't particularly bother me, but contemplating the possibility that quarks might be made up of something even smaller, just in general, leads me to think there are likely (or perhaps certainly?) thousands of particles and forces, perhaps layers and sub-layers of forces, at play that we do not know about. So this got me thinking about quantum mechanics. I'm no physicist, but I do find it interesting to learn and explore the fundamentals of physics, so I'm wondering: Could the randomness found in radioactive decay as described in quantum mechanics be the result of forces and / or particles too weak / small for us to know about yet, resulting in the false appearance of randomness? Or rather, can that be ruled out? Answer: As noted in the comments this is a much studied question. Einstein, Podolsky and Rosen wrote a paper on it, "Can Quantum-Mechanical Description of Reality Be Considered Complete?", published in Physical Review in 1935, and universally known today as the EPR paper. They considered a particular situation, and their paper raised the question of "hidden variables", perhaps similar to the microstates which undergird thermodynamics. Several "hidden variable" theories have been proposed, including one by David Bohm which resurrected de Broglie's "Pilot Wave" model. These are attempts to create a quantum theory which gets rid of the random numbers at the foundations of quantum mechanics. In 1964 Bell analyzed the specific type of situation which appears in the EPR paper, assuming that it met the conditions Einstein et al had stipulated for "physical reality". Using this analysis he then showed that, for certain specific measurements, any such classical hidden-variable theory predicts results that must satisfy a set of inequalities; these are today known as the Bell inequalities. They are classical results.
He then showed that for ordinary quantum mechanics the Bell inequalities are violated for certain settings of the apparatus. This means that no hidden variable theory can replace quantum mechanics if it also meets Einstein's conditions for "physical reality". The EPR abstract reads: "In a complete theory there is an element corresponding to each element of reality. A sufficient condition for the reality of a physical quantity is the possibility of predicting it with certainty, without disturbing the system. In quantum mechanics in the case of two physical quantities described by non-commuting operators, the knowledge of one precludes the knowledge of the other. Then either (1) the description of reality given by the wave function in quantum mechanics is not complete or (2) these two quantities cannot have simultaneous reality. Consideration of the problem of making predictions concerning a system on the basis of measurements made on another system that had previously interacted with it leads to the result that if (1) is false then (2) is also false. One is thus led to conclude that the description of reality as given by a wave function is not complete." In fact, one can run quantum mechanical experiments that routinely violate Bell's inequalities; I'm currently involved in setting one up which will be validated by violating Bell's inequalities. People have been doing this for over 40 years. The main argument against closing this chapter is the various "loopholes" in the experiments. Recently it has been claimed that a single experiment has simultaneously closed all of the loopholes. If that is true, then there are no classical hidden variable theories which can replace regular quantum mechanics unless they are grossly non-local. Einstein certainly would not think that these were an improvement!
{ "domain": "physics.stackexchange", "id": 28821, "tags": "quantum-mechanics, determinism, randomness" }
Circular Buffer in C - Follow Up
Question: This is a follow-up to this post I uploaded a few days back on my alternative account about a circular buffer in C. I'm confident that I fixed most of the warnings and even added an iterator.

circularBuffer.h

#ifndef CIRCULAR_BUFFER_H
#define CIRCULAR_BUFFER_H

#include <stddef.h> // max_align_t in source

struct circularBuffer;

// circularBufferCreate: {{{
// Creates a new circularBuffer in memory and returns a pointer to it.
//
// @param capacity    The capacity of the new circularBuffer.
// @param elementSize The element size of the contained elements.
//                    Expected to be greater than zero.
//
// @return Pointer to the created circularBuffer in memory.
//         NULL on allocation fail or invalid arguments. }}}
struct circularBuffer *circularBufferCreate(size_t capacity, size_t elementSize);

// circularBufferDestroy: {{{
// Destroys the passed circularBuffer struct in memory.
//
// @param buf The circularBuffer, which is to be destroyed. }}}
void circularBufferDestroy(struct circularBuffer *buf);

// circularBufferCardinality: {{{
// Returns the current number of elements stored in the passed circularBuffer.
//
// @param buf Pointer to the circularBuffer to get cardinality from.
//
// @return The number of elements currently stored in the passed circularBuffer. }}}
size_t circularBufferCardinality(const struct circularBuffer *buf);

// circularBufferIsEmpty: {{{
// Checks whether a circularBuffer is empty.
//
// @param buf Pointer to the circularBuffer to check emptiness of (not NULL).
//
// @return Non-zero when empty, zero otherwise. }}}
int circularBufferIsEmpty(const struct circularBuffer *buf);

// circularBufferIsFull: {{{
// Checks whether a circularBuffer is full.
//
// @param buf Pointer to the circularBuffer to check fullness of (not NULL).
//
// @return Non-zero when full, zero otherwise. }}}
int circularBufferIsFull(const struct circularBuffer *buf);

// circularBufferPushHead: {{{
// Pushes the data pointed to by ptr onto the circularBuffer's head.
//
// @param buf CircularBuffer the data is pushed onto.
// @param ptr Data to be pushed. }}}
void circularBufferPushHead(struct circularBuffer *buf, void *ptr);

// circularBufferPopTail: {{{
// Pops current tail element of passed circularBuffer and returns pointer to it.
//
// @param buf circularBuffer, whose tail should be popped (NON-NULL-REQUIRED).
//
// @return Pointer to popped data.
//         This pointer is only valid until the data is overwritten internally.
//         For later use copying is necessary. }}}
char *circularBufferPopTail(struct circularBuffer *buf);

// circularBufferPeekHead: {{{
// Returns pointer to the current head element of the buffer.
//
// @param buf CircularBuffer, on whose head a peek is wanted (NON-NULL-REQUIRED).
//
// @return Pointer to the current head of the passed circular buffer.
//         NULL in the case of an empty buffer.
// }}}
const char *circularBufferPeekHead(const struct circularBuffer *buf);

// circularBufferPeekTail: {{{
// Returns pointer to the current tail element of the buffer.
//
// @param buf CircularBuffer, on whose tail a peek is wanted (NON-NULL-REQUIRED).
//
// @return Pointer to the current tail of the passed circular buffer.
//         NULL in the case of an empty buffer.
// }}}
const char *circularBufferPeekTail(const struct circularBuffer *buf);

struct circularBufferIterator;

// circularBufferIteratorCreate: {{{
// Creates a new circularBufferIterator in memory and returns a pointer to it.
//
// @return Pointer to the created circularBufferIterator in memory.
//         NULL on allocation fail. }}}
struct circularBufferIterator *circularBufferIteratorCreate(void);

// circularBufferIteratorDestroy: {{{
// Destroys the passed circularBufferIterator struct in memory.
//
// @param it Pointer to the struct circularBufferIterator,
//           which is to be destroyed. }}}
void circularBufferIteratorDestroy(struct circularBufferIterator *it);

// circularBufferIteratorPrepare: {{{
// Prepares passed iterator to iterate over the elements of passed buffer.
//
// @param it  CircularBufferIterator, which is about to get prepared.
// @param buf CircularBuffer, which should be iterated over by the iterator.
// }}}
void circularBufferIteratorPrepare(struct circularBufferIterator *it,
                                   const struct circularBuffer *buf);

// circularBufferIteratorNext: {{{
// Fetches the next element from the passed iterator.
//
// @param it CircularBufferIterator, from which the next element should be fetched.
//
// @return Pointer to the fetched data. May get overwritten by buffer at a later point in time.
//         Copying required for use parallel to the buffer.
// }}}
char *circularBufferIteratorNext(struct circularBufferIterator *it);

#endif /* !defined CIRCULAR_BUFFER_H */

circularBuffer.c

#include <string.h>   // memcpy
#include <stdint.h>   // SIZE_MAX
#include <stdlib.h>   // calloc, free
#include <stdalign.h> // alignas

#include "circularBuffer.h"

struct circularBuffer {
    size_t capacity;
    size_t cardinality;
    size_t alignElementSize;
    size_t originalElementSize;
    size_t headOffset;
    size_t tailOffset;
    alignas(max_align_t) char data[];
};

static inline size_t incrementIndex(const struct circularBuffer *buf, size_t index)
{
    // Avoid expensive modulo arithmetic
    return (++index == buf->capacity) ? 0 : index;
}

struct circularBuffer *circularBufferCreate(size_t capacity, size_t elementSize)
{
    if (elementSize == 0) {
        return NULL;
    }

    size_t alignElementSize;
    if (elementSize >= sizeof(max_align_t)) {
        // Least number of blocks of sizeof (max_align_t) where one element fits into.
        alignElementSize = (elementSize + sizeof(max_align_t) - 1)
                           / sizeof(max_align_t) * sizeof(max_align_t);
    } else {
        alignElementSize = elementSize;
        // Find smallest int x >= 0 where (elementSize + x) divides sizeof (max_align_t) evenly.
        while (sizeof(max_align_t) % alignElementSize != 0) {
            alignElementSize++;
        }
    }

    if (SIZE_MAX / alignElementSize <= capacity
        || SIZE_MAX - sizeof(struct circularBuffer) < capacity * elementSize) {
        return NULL;
    }

    // Do I need to take extra care of the initial alignment of buf->data[] at this point?
    // Or is the alignas(max_align_t) enough?
    struct circularBuffer *buf = calloc(1, sizeof(*buf) + alignElementSize * capacity);
    if (!buf) {
        return NULL;
    }

    buf->capacity = capacity;
    buf->cardinality = 0;
    buf->alignElementSize = alignElementSize;
    buf->originalElementSize = elementSize;
    buf->headOffset = 0;
    buf->tailOffset = 0;
    return buf;
}

void circularBufferDestroy(struct circularBuffer *buf)
{
    free(buf);
}

size_t circularBufferCardinality(const struct circularBuffer *buf)
{
    return buf->cardinality;
}

int circularBufferIsEmpty(const struct circularBuffer *buf)
{
    return buf->cardinality == 0;
}

int circularBufferIsFull(const struct circularBuffer *buf)
{
    return buf->cardinality == buf->capacity;
}

void circularBufferPushHead(struct circularBuffer *buf, void *ptr)
{
    if (buf->cardinality != 0) {
        buf->headOffset = incrementIndex(buf, buf->headOffset);
        // Cannot use circularBufferIsFull(buf) at this point,
        // since the cardinality isn't incremented yet.
        // circularBufferIsFull(buf) uses the cardinality
        // to determine fullness.
        if (buf->headOffset == buf->tailOffset) {
            buf->tailOffset = incrementIndex(buf, buf->tailOffset);
        }
    }

    memcpy(buf->data + buf->headOffset * buf->alignElementSize, ptr,
           buf->originalElementSize);

    if (!circularBufferIsFull(buf)) {
        buf->cardinality++;
    }
}

char *circularBufferPopTail(struct circularBuffer *buf)
{
    if (buf->cardinality == 0) {
        return NULL;
    }

    // Store offset of the pointer to be returned.
    size_t tmpOffset = buf->tailOffset;

    // Pop internally.
    buf->tailOffset = incrementIndex(buf, buf->tailOffset);
    buf->cardinality--;

    return buf->data + tmpOffset * buf->alignElementSize;
}

const char *circularBufferPeekHead(const struct circularBuffer *buf)
{
    return buf->cardinality == 0
           ? NULL
           : buf->data + buf->alignElementSize * buf->headOffset;
}

const char *circularBufferPeekTail(const struct circularBuffer *buf)
{
    return buf->cardinality == 0
           ? NULL
           : buf->data + buf->alignElementSize * buf->tailOffset;
}

struct circularBufferIterator {
    struct circularBuffer *buf;
    size_t continuousIndex;
    size_t actualIndex;
};

struct circularBufferIterator *circularBufferIteratorCreate(void)
{
    return calloc(1, sizeof(struct circularBufferIterator));
}

void circularBufferIteratorDestroy(struct circularBufferIterator *it)
{
    free(it);
}

void circularBufferIteratorPrepare(struct circularBufferIterator *it,
                                   const struct circularBuffer *buf)
{
    // Cast away the const; the iterator only reads from the buffer.
    it->buf = (struct circularBuffer *) buf;
    it->continuousIndex = 0;
    it->actualIndex = it->buf->tailOffset;
}

char *circularBufferIteratorNext(struct circularBufferIterator *it)
{
    if (it->continuousIndex == it->buf->cardinality) {
        return NULL;
    }

    it->continuousIndex++;
    size_t tmp = it->actualIndex;
    it->actualIndex = incrementIndex(it->buf, it->actualIndex);
    return it->buf->data + tmp * it->buf->alignElementSize;
}

Notes to possible reviewers: I'm still unsure about the initial padding of the flexible array member. I'm especially hoping for a review on that topic. It is also intended that old data gets overwritten when new data is pushed to the head of an already full buffer - "it's a feature, not a bug".
Here is a really small test: #include <stdio.h> #include "circularBuffer.h" void DEBUG_circularBufferPrintContent(struct circularBufferIterator *it, struct circularBuffer *buf); void DEBUG_circularBufferPrintState(const struct circularBuffer *buf); int main() { struct circularBuffer *buf; struct circularBufferIterator *it; buf = circularBufferCreate(3, sizeof(long)); it = circularBufferIteratorCreate(); if (!buf) { printf("ERROR: No buffer was returned.\n"); return -1; } if (!it) { printf("ERROR: No iterator was returned.\n"); return -1; } printf("--- TEST INITIAL STATE ---\n"); DEBUG_circularBufferPrintState(buf); DEBUG_circularBufferPrintContent(it, buf); printf("\n--- TEST UNDERFLOW ---\n"); circularBufferPopTail(buf); DEBUG_circularBufferPrintState(buf); DEBUG_circularBufferPrintContent(it, buf); printf("\n--- TEST PUSH ---\n"); long l = 0; circularBufferPushHead(buf, &l); DEBUG_circularBufferPrintState(buf); DEBUG_circularBufferPrintContent(it, buf); printf("\n--- TEST POP ---\n"); circularBufferPopTail(buf); DEBUG_circularBufferPrintState(buf); DEBUG_circularBufferPrintContent(it, buf); printf("\n--- TEST FULL ---\n"); for (l = 1; l <= 3; l++) { circularBufferPushHead(buf, &l); } DEBUG_circularBufferPrintState(buf); DEBUG_circularBufferPrintContent(it, buf); printf("\n--- TEST OVERFLOW ---\n"); l = 4; circularBufferPushHead(buf, &l); DEBUG_circularBufferPrintState(buf); DEBUG_circularBufferPrintContent(it, buf); printf("\n--- TEST POP ---\n"); circularBufferPopTail(buf); DEBUG_circularBufferPrintState(buf); DEBUG_circularBufferPrintContent(it, buf); circularBufferIteratorDestroy(it); circularBufferDestroy(buf); return 0; } void DEBUG_circularBufferPrintContent(struct circularBufferIterator *it, struct circularBuffer *buf) { printf("--> DEBUG: "); long *tmp; for (circularBufferIteratorPrepare(it, buf); (tmp = (long*) circularBufferIteratorNext(it)) != NULL; ) { printf("%ld, ", *tmp); } printf("\n"); } void DEBUG_circularBufferPrintState(const struct
circularBuffer *buf) { printf("--> DEBUG: CARDINALITY: %zu, IS_EMPTY: %5s, IS_FULL: %5s\n", circularBufferCardinality(buf), circularBufferIsEmpty(buf) ? "true" : "false", circularBufferIsFull(buf) ? "true" : "false"); } Answer: There is an asymmetry in your APIs, as can for example be seen in void circularBufferPushHead(struct circularBuffer *buf, void *ptr); char *circularBufferPopTail(struct circularBuffer *buf); char *circularBufferIteratorNext(struct circularBufferIterator *it); All functions which add an element to the circular buffer take a void * parameter, but returning the elements is done as a char *. I would recommend using void * in all cases (and apart from changing the function prototypes, no changes are necessary). Since void * can be assigned to any pointer type without an explicit cast, retrieving elements simplifies from (taking your test code as an example) tmp = (long*) circularBufferIteratorNext(it); to tmp = circularBufferIteratorNext(it); Your calculations of alignElementSize seem unnecessary to me (and cause more memory to be allocated than necessary). Here are some quotes of the C11 standard (taken from http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf): 6.2.8 Alignment of objects 2 A fundamental alignment is represented by an alignment less than or equal to the greatest alignment supported by the implementation in all contexts, which is equal to _Alignof (max_align_t). 3 An extended alignment is represented by an alignment greater than _Alignof (max_align_t). It is implementation-defined whether any extended alignments are supported and the contexts in which they are supported. A type having an extended alignment requirement is an over-aligned type. 7.22.3 Memory management functions 1 The order and contiguity of storage allocated by successive calls to the aligned_alloc, calloc, malloc, and realloc functions is unspecified.
The pointer returned if the allocation succeeds is suitably aligned so that it may be assigned to a pointer to any type of object with a fundamental alignment requirement and then used to access such an object or an array of such objects in the space allocated. So – unless you are using "over-aligned types" – the alignment of data in struct circularBuffer { // ... alignas(max_align_t) char data[]; }; is suitable for any type of object if the memory of struct circularBuffer was obtained from malloc() or a related function. It is then not necessary to distinguish between alignElementSize and originalElementSize and the buffer creation simplifies to struct circularBuffer *circularBufferCreate(size_t capacity, size_t elementSize) { if (elementSize == 0) { return NULL; } if ((SIZE_MAX - sizeof(struct circularBuffer)) / elementSize < capacity) { return NULL; } struct circularBuffer *buf = calloc(1, sizeof(struct circularBuffer) + elementSize * capacity); if (!buf) { return NULL; } buf->capacity = capacity; buf->cardinality = 0; buf->elementSize = elementSize; buf->headOffset = 0; buf->tailOffset = 0; return buf; } You can replace calloc() by malloc() if the initialization with zero bytes is not needed. (If you are planning to work with – implementation-defined – "over-aligned" types then the required alignment must be passed as an additional parameter to the creation function, and more work is necessary to ensure that the address of the data field is properly aligned.) The iterator expects that no elements are added to or removed from the circular buffer during the iteration. A possible solution to detect such a programming error would be to add a generation number to struct circularBuffer which is incremented with every modification. Then copy that generation number to struct circularBufferIterator in circularBufferIteratorPrepare() and verify that it is unchanged in circularBufferIteratorNext(). I would add such a check at least if the program is compiled in DEBUG mode.
It suffices to #include <stddef.h> // max_align_t in the implementation file "circularBuffer.c"; it is not needed in the header file "circularBuffer.h".
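As a side note on the overwrite-on-full semantics the poster explicitly wants ("It's a feature not a bug"): a fixed-capacity deque demonstrates that behaviour in a few lines. This is only an illustrative Python sketch of the semantics, not a replacement for the C implementation:

```python
from collections import deque

# Capacity 3, matching the test program's circularBufferCreate(3, ...).
buf = deque(maxlen=3)

for value in [1, 2, 3, 4]:
    buf.append(value)  # push to the head; once full, the oldest
                       # element is silently dropped

print(list(buf))  # oldest-to-newest: [2, 3, 4]
```

Appending to a full deque(maxlen=...) discards the oldest element, which is exactly the head-overwrites-tail behaviour the C buffer implements by advancing tailOffset.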
{ "domain": "codereview.stackexchange", "id": 19271, "tags": "performance, c, circular-list" }
matplotlib graph to plot values and variance
Question: I am really new to the world of matplotlib graphing as well as using those graphs to understand data. I have written a simple python code where I read a .csv file in and then store the values of one column into a variable. Then I plot them similarly to the code below: dev_x= X #storing the values of the column to dev_x plt.plot(dev_x) plt.title('Data') The graph looks like this, which seems quite messy and hard to understand. So, I am asking for some advice on how to make more cohesive graphs. This is what my .csv column looks like. It is just an excerpt; there are many more rows. ['40' '20' '10' '0' '10' '30' '50' '70' '90' '110' '130' '150' '170' '200' '240' '290' '40' '20' '10' '0' '10' '30' '50' '70' '90' '110' '130' '150' '170' '200' '240' '290' '40' '20' '10' '0' '10' '30' '50' '70' '90' '110' At the end of the day I would like a way to display these in a better way so I can also find the variance of this column. Answer: You have currently stored your numbers as strings causing matplotlib to treat your variable as categorical, hence the y-axis is not ordered as expected. Before plotting you should therefore first convert them to numbers like this: x = [float(i.replace(",", ".")) for i in dev_x] You can then use plt.plot(x) once again to plot the values, this should give you the following plot: Edit: Using the csv file you've provided, I am using the following code to read in the data and create the plot: import matplotlib.pyplot as plt import pandas as pd # Read in csv file df = pd.read_csv("DATA.csv") # Set figure size plt.figure(figsize=(15, 5)) # Create plot plt.plot(df["DATA"]) This should give the following plot:
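On the variance question: once the column has been converted to numbers, pandas can compute it directly. A small sketch with a stand-in list of values (the real data would come from pd.read_csv("DATA.csv")["DATA"] as above):

```python
import pandas as pd

# Stand-in for the CSV column; the real series would come from
# pd.read_csv("DATA.csv")["DATA"].
dev_x = ["40", "20", "10", "0", "10", "30"]

series = pd.to_numeric(pd.Series(dev_x))  # strings -> numbers

print(series.var())        # sample variance (ddof=1)
print(series.var(ddof=0))  # population variance
```

Series.var() uses the sample (ddof=1) definition by default; pass ddof=0 if you want the population variance instead.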
{ "domain": "datascience.stackexchange", "id": 6865, "tags": "python, pandas, matplotlib, variance" }
How to compute fundamental frequency from a list of overtones?
Question: Given a list of overtones (F1, F2, F3, etc), how do I compute the fundamental frequency? Can I do something like F2/F1=F1/F0? Is it the correct method to use? Answer: The frequencies of the harmonics are integer multiples of the fundamental frequency $f_0$, i.e. $f_n = (n+1)f_0$. The fundamental frequency $f_0$ is the greatest common divisor of the harmonics $f_n$. If you are sure that there is no other unknown harmonic between two known harmonics, e.g. you know that you have the fourth and the fifth harmonic, then $f_0$ is of course the difference between the two. But if you just have a collection of harmonics and you don't know anything else about them, then you need to determine $f_0$ as the gcd of $f_n$.
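With exact integer harmonics, the gcd approach is a one-liner in Python. The frequencies below are hypothetical; real measured overtones are noisy and would need rounding or a tolerance-aware gcd first:

```python
from math import gcd
from functools import reduce

# Hypothetical overtones: the 3rd, 4th and 5th harmonics of f0 = 110 Hz.
harmonics = [330, 440, 550]

f0 = reduce(gcd, harmonics)  # greatest common divisor of all harmonics
print(f0)  # 110

# If two harmonics are known to be consecutive, their difference works too:
print(440 - 330)  # 110
```

Note that the gcd can overestimate f0 if every available harmonic happens to be an even multiple (e.g. [440, 660, 1100] gives 220, not 110), which is why knowing the harmonic numbers, or having consecutive harmonics, helps.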
{ "domain": "dsp.stackexchange", "id": 887, "tags": "frequency-spectrum, frequency, frequency-domain" }
How many stars would it take to draw a line across the middle of the sky that appeared solid?
Question: I was talking with a friend about how slowly the star field changes (based on the speed that we are moving through the galaxy) and I started to wonder about a star's visible size. They are basically the pixels that make up our sky. This made me wonder, how many stars (let's use the North Star as a reference) would it take (lined up side by side) to draw a line that appeared solid across the middle of the sky, perpendicular to the horizon? Answer: This is more a question of human perception than astronomy. I was going to answer your question with this: "One, it just needs to be close to Earth." But, I decided it wasn't THAT funny. Anyway, stars are essentially point sources as far as our eyes are concerned. Typical visual resolution is about 0.02° or 0.0003 radians. Assuming from horizon to horizon is 180° (or π radians) that calculates out to roughly 10,000 stars. I'd probably increase that by 50% or 100% to be sure. You do understand that you can't line stars up side by side, I hope. There's no need; it's about the angular distance, not absolute distance between them, that matters. They can be light years apart as long as they appear to be within about 1 arcminute of one another; our eyes will see them as a single object - subject to the psychological aspects of keeping color and apparent magnitude roughly uniform as well.
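The answer's arithmetic is easy to check (0.02° is the answer's assumed figure for typical visual resolution):

```python
import math

resolution_deg = 0.02  # assumed naked-eye angular resolution, per the answer
span_deg = 180.0       # horizon to horizon

# 0.02 degrees in radians, roughly the 0.0003 rad quoted above
print(math.radians(resolution_deg))

# Number of just-resolvable point sources across the sky
print(span_deg / resolution_deg)  # 9000.0, i.e. "roughly 10,000"
```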
{ "domain": "astronomy.stackexchange", "id": 1703, "tags": "star" }
When exactly is a callback function executed?
Question: Hello, I know that in general a callback function is executed when its associated topic has published values. What I am wondering is how the commands spin() and spinOnce() in main affect the callback's execution. I noticed that when the spin() command is in practice never reached, as in the code below, the callback is also never executed even though messages are published to the topic: ros::Rate loop_rate(0.1); while(ros::ok()){ ROS_INFO("Loop: %d\n",stop_msg.data); stop_pub.publish(stop_msg); loop_rate.sleep(); } ros::spin(); on the other hand, when I changed the code in main to the following everything worked perfectly: ros::Rate loop_rate(0.1); while(ros::ok()){ ROS_INFO("Loop: %d\n",stop_msg.data); stop_pub.publish(stop_msg); loop_rate.sleep(); ros::spinOnce(); } Could somebody enlighten me? Thank you Originally posted by smarn on ROS Answers with karma: 54 on 2020-03-16 Post score: 0 Answer: I would suggest you read the wiki about Callbacks and Spinning, from the introduction: The end result is that without a little bit of work from the user your subscription, service and other callbacks will never be called. The most common solution is ros::spin() In your first code, ros::spin() is placed after the while loop, so it is never reached, and neither are your callbacks. In your second code you are actually spinning, so the behavior you describe is totally normal. Now you should see some related questions to understand the concept of spinning: #q257361, #q11887. Originally posted by Delb with karma: 3907 on 2020-03-17 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 34599, "tags": "ros, c++, ros-melodic, callback, spin" }
What planets are visible to the naked eye from Mars?
Question: Here on Earth we are blessed with being able to see some other planets, Mars & Venus etc, with the naked eye on a fairly regular basis thanks to the distance between the planets. What about from Mars? What planets would be visible to the naked eye on a regular basis from Mars? Earth would obviously be one of them, as we can see it, but are any other planets close enough to mars at any point to be visible? Answer: Aside from having Earth visible in the night sky instead of Mars, you would expect the same planets to be visible. Venus will appear as a bright star close to the sun - smaller than we see it, but still very bright. Jupiter and Saturn will be easier to see in the night sky, and it should be possible to pick out Jupiter's four major moons with the naked eye. Uranus is going to be interesting. While Mars will get closer to it than Earth will, as @ProfRob points out, the dust in the Martian atmosphere will remove the possibility of seeing Uranus. Neptune will still be invisible to the naked eye.
{ "domain": "physics.stackexchange", "id": 87889, "tags": "astronomy, planets" }
2D physics game engine
Question: I'm writing a 2D game engine, and I just wrote the physics for it to handle collisions between AABBs and circles. It's on GitHub. Some of my worries are that my code isn't OOP, because I have to do a bit of casting. As well as that, the engine doesn't seem to be deterministic, but rather depends on when the update method is called, and I'm not really sure how to fix this (but perhaps that is better asked on Stack Overflow). This is the main body of my physics code: public final class GamePhysics { private GamePhysics() { // cant instantiate this class } public static boolean isColliding(final GameObject a, final GameObject b) { if (a instanceof RectObject && b instanceof RectObject) { return isColliding((RectObject) a, (RectObject) b); } if (a instanceof CircleObject && b instanceof CircleObject) { return isColliding((CircleObject) a, (CircleObject) b); } if (a instanceof RectObject && b instanceof CircleObject) { return isColliding((RectObject) a, (CircleObject) b); } if (a instanceof CircleObject && b instanceof RectObject) { return isColliding((RectObject) b, (CircleObject) a); } throw new UnsupportedOperationException(); } private static boolean isColliding(final RectObject a, final RectObject b) { final float w = 0.5f * (a.width() + b.width()); final float h = 0.5f * (a.height() + b.height()); final float dx = a.center().x - b.center().x; final float dy = a.center().y - b.center().y; return Math.abs(dx) <= w && Math.abs(dy) <= h; } private static boolean isColliding(final CircleObject o1, final CircleObject o2) { final float c = o1.radius + o2.radius; final float b = o1.center.x - o2.center.x; final float a = o1.center.y - o2.center.y; return c * c > b * b + a * a; } private static boolean isColliding(final RectObject a, final CircleObject b) { final float circleDistance_x = Math.abs(b.center().x - (a.min.x + a.width() / 2)); final float circleDistance_y = Math.abs(b.center().y - (a.min.y + a.height() / 2)); if (circleDistance_x > a.width() / 2 + 
b.radius) { return false; } if (circleDistance_y > a.height() / 2 + b.radius) { return false; } if (circleDistance_x <= a.width() / 2) { return true; } if (circleDistance_y <= a.height() / 2) { return true; } final int cornerDistance_sq = (int) Math.pow(circleDistance_x - a.width() / 2, 2) + (int) Math.pow(circleDistance_y - a.height() / 2, 2); return cornerDistance_sq <= (int) Math.pow(b.radius, 2); } private static Vec2D collisionNormal(final RectObject a, final RectObject b) { final float w = 0.5f * (a.width() + b.width()); final float h = 0.5f * (a.height() + b.height()); final float dx = a.center().x - b.center().x; final float dy = a.center().y - b.center().y; if (Math.abs(dx) <= w && Math.abs(dy) <= h) { /* collision! */ final float wy = w * dy; final float hx = h * dx; if (wy > hx) { if (wy > -hx) { /* collision at the top */ return new Vec2D(0, -1); } else { /* on the left */ return new Vec2D(1, 0); } } else { if (wy > -hx) { /* on the right */ return new Vec2D(-1, 0); } else { /* at the bottom */ return new Vec2D(0, 1); } } } throw new IllegalArgumentException("Rectangles must be colliding"); } public static <A extends GameObject, B extends GameObject> void fixCollision(final A a, final B b) { final CollisionManifold<A, B> m = generateManifold(a, b); // Calculate relative velocity final Vec2D rv = b.velocity.minus(a.velocity); // Calculate relative velocity in terms of the normal direction final float velAlongNormal = rv.dotProduct(m.normal); // Calculate restitution final float e = Math.min(a.restitution, b.restitution); // Calculate impulse scalar float j = -(1 + e) * velAlongNormal; j /= a.getInvMass() + b.getInvMass(); // Apply impulse final Vec2D impulse = m.normal.multiply(j); a.velocity = a.velocity.minus(impulse.multiply(a.getInvMass())); b.velocity = b.velocity.plus(impulse.multiply(b.getInvMass())); applyFriction(m, j); positionalCorrection(m); } public static <A extends GameObject, B extends GameObject> void applyFriction(final 
CollisionManifold<A, B> m, final float normalForce) { final A a = m.a; final B b = m.b; // relative velocity final Vec2D rv = b.velocity.minus(a.velocity); // normalized tangent force final Vec2D tangent = rv.minus(m.normal.multiply(m.normal.dotProduct(rv))).unitVector(); // friction magnitude final float jt = -rv.dotProduct(tangent) / (a.getInvMass() + b.getInvMass()); // friction coefficient final float mu = (a.staticFriction + b.staticFriction) / 2; final float dynamicFriction = (a.dynamicFriction + b.dynamicFriction) / 2; // Coulomb's law: force of friction <= force along normal * mu final Vec2D frictionImpulse = Math.abs(jt) < normalForce * mu ? tangent.multiply(jt) : tangent.multiply(-normalForce * dynamicFriction); a.velocity = a.velocity.minus(frictionImpulse.multiply(a.getInvMass())); b.velocity = b.velocity.plus(frictionImpulse.multiply(b.getInvMass())); } @SuppressWarnings("unchecked") public static <A extends GameObject, B extends GameObject> CollisionManifold<A, B> generateManifold(final A a, final B b) { if (a instanceof RectObject && b instanceof RectObject) { return (CollisionManifold<A, B>) generateManifold((RectObject) a, (RectObject) b); } else if (a instanceof CircleObject && b instanceof CircleObject) { return (CollisionManifold<A, B>) generateManifold((CircleObject) a, (CircleObject) b); } else if (a instanceof RectObject && b instanceof CircleObject) { return (CollisionManifold<A, B>) generateManifold((RectObject) a, (CircleObject) b); } else if (a instanceof CircleObject && b instanceof RectObject) { return (CollisionManifold<A, B>) generateManifold((RectObject) b, (CircleObject) a); } else { throw new UnsupportedOperationException(); } } private static CollisionManifold<RectObject, RectObject> generateManifold(final RectObject a, final RectObject b) { final CollisionManifold<RectObject, RectObject> m = new CollisionManifold<>(); m.a = a; m.b = b; final Rectangle2D r = a.toRectangle().createIntersection(b.toRectangle()); m.normal = 
collisionNormal(a, b); // penetration is the min resolving distance m.penetration = (float) Math.min(r.getWidth(), r.getHeight()); return m; } private static CollisionManifold<CircleObject, CircleObject> generateManifold(final CircleObject a, final CircleObject b) { final CollisionManifold<CircleObject, CircleObject> m = new CollisionManifold<>(); m.a = a; m.b = b; // A to B final Vec2D n = b.center.minus(a.center); final float dist = n.length(); if (dist == 0) { // circles are on the same position, choose random but consistent values m.normal = new Vec2D(0, 1); m.penetration = Math.min(a.radius, b.radius); return m; } // don't recalculate dist to normalize m.normal = n.divide(dist); m.penetration = b.radius + a.radius - dist; return m; } private static CollisionManifold<RectObject, CircleObject> generateManifold(final RectObject a, final CircleObject b) { final CollisionManifold<RectObject, CircleObject> m = new CollisionManifold<>(); m.a = a; m.b = b; // Vector from A to B final Vec2D n = b.center.minus(a.center()); // Closest point on A to center of B Vec2D closest = n; // Calculate half extents along each axis final float x_extent = a.width() / 2; final float y_extent = a.height() / 2; // Clamp point to edges of the AABB closest = new Vec2D(clamp(closest.x, -x_extent, x_extent), clamp(closest.y, -y_extent, y_extent)); boolean inside = false; // Circle is inside the AABB, so we need to clamp the circle's center // to the closest edge if (n.equals(closest)) { inside = true; // Find closest axis if (Math.abs(closest.x) > Math.abs(closest.y)) { // Clamp to closest extent closest = new Vec2D(closest.x > 0 ? x_extent : -x_extent, closest.y); } // y axis is shorter else { // Clamp to closest extent closest = new Vec2D(closest.x, closest.y > 0 ? 
y_extent : -y_extent); } } // closest point to center of the circle final Vec2D normal = n.minus(closest); final float d = normal.length(); final float r = b.radius; // Collision normal needs to be flipped to point outside if circle was // inside the AABB m.normal = inside ? normal.unitVector().multiply(-1) : normal.unitVector(); m.penetration = r - d; return m; } private static float clamp(final float n, final float lower, final float upper) { return Math.max(lower, Math.min(n, upper)); } private static <A extends GameObject, B extends GameObject> void positionalCorrection(final CollisionManifold<A, B> m) { final A a = m.a; final B b = m.b; final float percent = .8f; // usually .2 to .8 final float slop = 0.01f; // usually 0.01 to 0.1 final Vec2D correction = m.normal.multiply(Math.max(m.penetration - slop, 0.0f) / (a.getInvMass() + b.getInvMass()) * percent); a.moveRelative(correction.multiply(-1 * a.getInvMass())); b.moveRelative(correction.multiply(b.getInvMass())); } } Answer: You did a very nice job organizing your code. I especially like how you used method overloading for collision detection! In your method collisionNormal, you have this chunk of code at the beginning: final float w = 0.5f * (a.width() + b.width()); final float h = 0.5f * (a.height() + b.height()); final float dx = a.center().x - b.center().x; final float dy = a.center().y - b.center().y; if (Math.abs(dx) <= w && Math.abs(dy) <= h) { Now where have I seen that before... oh right! It looks exactly like isColliding(final RectObject a, final RectObject b)! I don't really see the point of writing that code again exactly in this method. I think it'd be a lot easier to just use the isColliding method that you already wrote. With things as big and as complicated as physics engines, things can get easily confusing. What if you stopped working on this for a while and then came back to it later? It would be difficult to understand some crucial parts of your code.
To aid this, you should supply your methods and classes, and (optionally) fields with JavaDoc so you can provide as much explanation as needed to understand the code fully. For example, I am having trouble understanding all the math being done in applyFriction and fixCollision. With JavaDoc, you could explain what formulas you are following, or what your reasoning is behind whatever you are doing. I'd say you are doing well with keeping your code OO, for what you've written so far. I recommend looking at pre-existing physics engines for reference on how they organize their data. For example, some physics engines describe certain forces in objects. As in, friction would be an object that is instantiated with a "power" which is then added to the game and affects objects in the way that friction would, based on how much "power" was described (some objects are more slippery than others). Also, there might be an object for describing a moving force, which might have properties describing direction, power, etc. Some good examples of game engines with physics engines are Unity3D, Unreal, and ROBLOX Studio. I apologize for not giving a more in-depth review. As mentioned, I had trouble understanding the math and why it was being done.
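For readers who, like the reviewer, find the math hard to follow: the heart of the RectObject-vs-CircleObject test is clamping the circle's centre to the box extents. A stripped-down sketch in Python (not the poster's Java, and omitting the centre-inside-the-box special case handled in generateManifold):

```python
def circle_aabb_colliding(cx, cy, r, box_cx, box_cy, half_w, half_h):
    # Vector from box centre to circle centre, clamped to the half extents,
    # gives the closest point on the AABB to the circle's centre.
    nx = max(-half_w, min(cx - box_cx, half_w))
    ny = max(-half_h, min(cy - box_cy, half_h))
    # Distance from that closest point back to the circle's centre,
    # compared (squared) against the radius.
    dx = (cx - box_cx) - nx
    dy = (cy - box_cy) - ny
    return dx * dx + dy * dy <= r * r

# Box [0,4] x [0,4] (centre (2,2), half extents 2):
print(circle_aabb_colliding(5.0, 5.0, 1.0, 2, 2, 2, 2))  # False: corner is sqrt(2) away
print(circle_aabb_colliding(4.5, 2.0, 1.0, 2, 2, 2, 2))  # True: edge is 0.5 away
```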
{ "domain": "codereview.stackexchange", "id": 14801, "tags": "java, game, simulation, physics" }
Observable universe equals its Schwarzschild radius (event horizon)?
Question: The estimated age of the universe is 14 billion years. The estimated Schwarzschild radius (event horizon) of the observable universe is 14 billion light-years. What are the ramifications? Answer: There's an error in your source, but even if there weren't, it wouldn't mean that the Universe is a black hole (see below): The Schwarzschild radius of the observable Universe is not equal to 13.7 Glyr. Wikipedia cites some random, non-refereed paper that uses the Hubble radius as the radius of the observable Universe, which is too small by a factor of $\gtrsim3$. The Schwarzschild radius of the Universe Although the age of the Universe is indeed ~13.8 Gyr, its radius is much larger than 13.8 Glyr, because it expands. In fact, the radius is $R \simeq 46.3\,\mathrm{Glyr}$. The mean density of the Universe is very close to the critical$^\dagger$ density $\rho_\mathrm{c} \simeq 8.6\times10^{-30}\,\mathrm{g}\,\mathrm{cm}^{-3}$. Hence, the total mass (including "normal", baryonic matter, dark matter, and dark energy) is $$ M = \rho_\mathrm{c}V = \rho_\mathrm{c}\frac{4\pi}{3}R^3 \simeq 3.0\times10^{57}\,\mathrm{g}, $$ and the corresponding Schwarzschild radius is $$ R_\mathrm{S} \equiv \frac{2GM}{c^2} \simeq 475\,\mathrm{Glyr}. $$ The Universe is not a black hole Even worse! you might say. If our Universe is much smaller than its Schwarzschild radius, does that mean we live in a black hole? No, it doesn't. A black hole is a region in space where some mass is squeezed inside its Schwarzschild radius, but the Universe is not "a region in space". There's no "outside the Universe". If anything, you might call it a white hole, the time-reversal of a black hole, in which case you could say that the singularity is not something that everything will fall into in the future, but rather something that everything came from in the past. You may call that singularity Big Bang. The error in the source You will see questions like this many places on the internet. 
As I said above, they all assume the Hubble radius, $R_\mathrm{H} \equiv c/H_0$ for the radius. But this radius is well within our observable Universe, and doesn't really bear any physical significance. In this case, the age in Gyr works out to be exactly equal to the radius in Glyr, by definition. So, what does that tell us? Nothing, really. Except that our Universe is flat, i.e. has $\rho\simeq\rho_\mathrm{c}$, which we already knew and used in the calculation. That is, setting $R = R_\mathrm{H} \equiv c/H_0$, $$ \begin{array}{rcl} R_\mathrm{S} & \equiv & \frac{2GM}{c^2}\\ & = & \frac{2G}{c^2} \rho V \\ & = & \frac{2G}{c^2} \rho \frac{4\pi}{3}R^3 \\ & = & \frac{2G}{c^2} \rho \frac{4\pi}{3} \left(\!\frac{c}{H_0}\!\right)^3 \\ & = & \frac{8\pi G}{3 H_0^2} \rho \frac{c}{H_0} \\ & = & \frac{8\pi G}{3 H_0^2} \rho R, \end{array} $$ so if $R=R_\mathrm{S}$, we have $$ \rho = \frac{3H_0^2}{8\pi G}, $$ which is exactly the expression for the critical density you get from the Friedmann equation. $^\dagger$The density that determines the global geometry of the Universe.
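The numbers in the answer can be reproduced directly (constants rounded to four significant figures; the 46.3 Glyr radius and the critical density are the values quoted above):

```python
import math

G  = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c  = 2.998e8     # speed of light, m/s
ly = 9.461e15    # metres per light-year

rho_c = 8.6e-27       # critical density, kg/m^3 (= 8.6e-30 g/cm^3)
R     = 46.3e9 * ly   # radius of the observable Universe, m

M   = rho_c * 4.0 / 3.0 * math.pi * R**3  # total mass, kg
R_s = 2.0 * G * M / c**2                  # Schwarzschild radius, m

print(M * 1e3)           # ~3.0e57 g, as in the answer
print(R_s / (1e9 * ly))  # ~475 Glyr
```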
{ "domain": "astronomy.stackexchange", "id": 5277, "tags": "black-hole, universe, cosmology" }
Number to word converter (1 to 99 inclusive)
Question: Program It takes a numerical input in range [1, 99] and outputs its word equivalent. Concerns Procedural approach General code style Code #include <iostream> void print1to20(int num) { switch(num) { case 1: std::cout << "one"; break; case 2: std::cout << "two"; break; case 3: std::cout << "three"; break; case 4: std::cout << "four"; break; case 5: std::cout << "five"; break; case 6: std::cout << "six"; break; case 7: std::cout << "seven"; break; case 8: std::cout << "eight"; break; case 9: std::cout << "nine"; break; case 10: std::cout << "ten"; break; case 11: std::cout << "eleven"; break; case 12: std::cout << "twelve"; break; case 13: std::cout << "thirteen"; break; case 14: std::cout << "fourteen"; break; case 15: std::cout << "fifteen"; break; case 16: std::cout << "sixteen"; break; case 17: std::cout << "seventeen"; break; case 18: std::cout << "eighteen"; break; case 19: std::cout << "nineteen"; break; case 20: std::cout << "twenty"; break; default: std::cout << "1 to 20 switch error"; } } void printTens(int num) { switch(num) { case 20: std::cout << "twenty"; break; case 30: std::cout << "thirty"; break; case 40: std::cout << "fourty"; break; case 50: std::cout << "fifty"; break; case 60: std::cout << "sixty"; break; case 70: std::cout << "seventy"; break; case 80: std::cout << "eighty"; break; case 90: std::cout << "ninety"; break; default: std::cout << "tens switch error"; } } int main() { int num; do { std::cout << "Insert a number 1-99: "; std::cin >> num; } while(num < 1 || num > 99); int num_a, num_b; // num = (a)(b); num_a = num - num%10; num_b = num%10; std::cout << "Your number is "; if(num <= 20) print1to20(num); else { printTens(num_a); std::cout << ' '; print1to20(num_b); } return 0; } Answer: The code is broken. If a number is greater than 20 and ends with 0, print1to20(num_b) will not do what it should. Don't mix the logic with the output. I'd change the return type of your functions to std::string. 
It'll make the code more testable and reusable. Don't just print something to std::cout in case of an error. Throw an exception instead. Declare each variable as late as possible: int num_a, num_b; // num = (a)(b); num_a = num - num%10; num_b = num%10; should be int num_a = ...; int num_b = ...; Use more descriptive names. I have no clue what num_a means. What is a? Something like first_digit and second_digit would be better (well, the first one is not exactly a digit, so you can come up with even better names). There's no need to return 0 from the main function explicitly (sure, you can if you want to). Write proper automated tests for your code to reduce the number of bugs (your bug is very likely to get caught by tests). In fact, you can check all possible valid inputs. There are only 99 of them. There's nothing wrong with having stand-alone functions. There's no entity here, so I don't see the point of modeling this problem in terms of objects. A function from an int to its string representation is just a function.
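The review's two main suggestions — return strings instead of printing, and make the logic testable — can be sketched quickly in Python (a hypothetical port for illustration, not the reviewer's C++; the exhaustive check over all valid inputs is exactly the testing strategy the answer recommends):

```python
# Hypothetical Python sketch of the reviewed advice: a pure function that
# returns a string, raises on invalid input, and keeps I/O separate.
ONES = ["", "one", "two", "three", "four", "five", "six", "seven", "eight",
        "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
        "sixteen", "seventeen", "eighteen", "nineteen", "twenty"]
TENS = {20: "twenty", 30: "thirty", 40: "forty", 50: "fifty",
        60: "sixty", 70: "seventy", 80: "eighty", 90: "ninety"}

def number_to_words(n: int) -> str:
    """Convert an integer in [1, 99] to its English word form."""
    if not 1 <= n <= 99:
        raise ValueError("n must be in [1, 99]")
    if n <= 20:
        return ONES[n]
    tens, ones = divmod(n, 10)
    # A number ending in 0 (e.g. 40) no longer hits the original bug:
    # the ones word is only appended when it is non-zero.
    return TENS[tens * 10] if ones == 0 else TENS[tens * 10] + " " + ONES[ones]
```

Because the function returns a value, checking all 99 valid inputs is a one-liner, which is how the bug for multiples of ten above 20 would have been caught.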
{ "domain": "codereview.stackexchange", "id": 27524, "tags": "c++, object-oriented, c++11, numbers-to-words" }
One Ubuntu two ROS System
Question: Hi, just wanna ask, is it possible to have more than one ROS system on one Ubuntu system? For example, one Ubuntu install would have both fuerte and electric. Originally posted by ROS_NOOB_CYBORG on ROS Answers with karma: 31 on 2012-09-27 Post score: 1 Answer: Sure. You can have all ROS distros installed in parallel. You can switch between them by sourcing the different setup.bash files. Just make sure you create separate overlays for different ROS distros. Originally posted by Lorenz with karma: 22731 on 2012-09-27 This answer was ACCEPTED on the original site Post score: 8 Original comments Comment by KruseT on 2012-09-28: Also in this case, you should not source setup.sh in your .bashrc, but decide every time you open a new shell which distro you want to use, and source the corresponding setup.sh. Switching between distros in the same shell can cause errors.
{ "domain": "robotics.stackexchange", "id": 11165, "tags": "ros, ros-diamondback, hector-quadrotor, ros-electric" }
How do you read the regular expression (0^∗10^+)^+?
Question: Give an example of a string in the language of $(0^*10^+)^+$. I've been asked to give an example of a string in this language, but I'm confused about how to read this notation. I'm guessing the acceptable strings are supposed to start with 0, but that's about all I can infer. Answer: As I see in the comments, I think you have a bit of confusion about the syntax and semantics of regular expressions. $^*$ stands for Kleene star, which means "zero or more repetitions" (the string can be empty). $^+$ stands for Kleene plus, which means "one or more repetitions" (the string cannot be empty). So for your example $$(0^∗10^+)^+$$ you should first look inside the brackets: $0^*$ can be empty or $0$, $00$, $000$, $\dots$ $10^+$ can't be empty: it is a single $1$ followed by one or more $0$s, so it can be $10$, $100$, $1000$, $\dots$ Combining them, one block of the form $0^*10^+$ can be, for example, $0010$ or $10$. The bracketed block carries a Kleene plus ($^+$), so a string of the language is one or more such blocks in a row, for example $$0010 \cdot 10 = 001010, \quad 0010 \cdot 10 \cdot 0010 \cdot 10 = 001010001010, \dots$$ So we can take the string $001010001010$ as an example for this regular expression.
{ "domain": "cs.stackexchange", "id": 17448, "tags": "regular-expressions" }
Oxidation Number of the middle carbon in $\ce{C3O2}$ (Carbon Suboxide)
Question: By taking the oxidation state of the terminal oxygen atoms as -2, and the oxidation state of the carbon atoms adjacent to these oxygen atoms as +2, we are left with the oxidation state of the middle carbon atom as 0. However, I researched a bit into the structure of $\ce{C3O2}$ and found out that it is easy to bend (and can be non-planar?). Would this affect the oxidation state of the central carbon atom, or would it remain 0? My thought process initially: The oxygen atoms being more electronegative will pull the electron clouds towards themselves, leaving the adjacent carbon atoms with a positive charge. I then thought that this positive charge will be distributed among the carbon atoms, leaving the middle carbon atom with a non-zero oxidation state. (On reading the comment by Poutnik, I think this thought is wrong since it does not take the total bond-breaking operation into account.) Answer: First off, carbon suboxide is much closer to being linear than formally isoelectronic species such as $\ce{N5^+}$. The energy difference between linear and bent conformations is so small it gets smeared out by quantum-mechanical zero-point energy, and chemists call the structure "quasilinear". If we work out the effective hybridization of $s$ and $p$ orbitals as a function of bond angle, a 160° bond angle, which is considered the most stable configuration (Wikipedia, citing Reference 1), gives $sp^{1.06}$ (the number is the negative of the secant of the bond angle). So even if the premise of a linear geometry is ambiguous, the $sp$-hybridized electronic structure is an accurate electronic model and on that basis, the middle carbon may be assigned oxidation state $0$ (and the other carbons $+2$). Some additional examples of carbon(0) bonded to only two other atoms are given by degruyter.com. Note that the middle carbon comes out with a significantly lower oxidation state than the other two.
This dovetails with the middle carbon being nucleophilic (accepting protons when the carbon suboxide reacts with water or hydrogen chloride) while its neighbors are electrophilic. Cited Reference Brown RD (1993). "Structural Information on Large Amplitude Motions". In Laane J, Dakkouri M, Veken Bv, et al. (eds.). Structures and Conformations of Non-Rigid Molecules. NATO ASI Series. Vol. 410. Springer Netherlands. pp. 99–112. doi:10.1007/978-94-011-2074-6_5. ISBN 9789401049207.
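The hybridization index quoted above — the negative secant of the bond angle — is easy to check numerically. A sketch (the 160° value is the one cited from the structural data; the relation n = −sec θ for two equivalent sp^n hybrids follows from their orthogonality condition 1 + n·cos θ = 0):

```python
import math

def hybridization_index(bond_angle_deg: float) -> float:
    """n in sp^n for two equivalent hybrid orbitals at the given bond angle.

    Orthogonality of the hybrids gives 1 + n*cos(theta) = 0,
    i.e. n = -sec(theta).
    """
    return -1.0 / math.cos(math.radians(bond_angle_deg))
```

Sanity checks: 180° gives pure sp (n = 1), 120° gives sp², the tetrahedral angle gives sp³, and the quasilinear 160° angle of carbon suboxide gives n ≈ 1.06, as stated.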
{ "domain": "chemistry.stackexchange", "id": 17518, "tags": "bond, carbonyl-compounds, oxidation-state" }
Why does a window function help to get a more accurate amplitude at a specific frequency?
Question: In order to calculate the amplitude of a sinusoidal signal at a frequency around 60 Hz (but maybe not exactly 60 Hz), a window function is applied when using the DFT. hk is the coefficient of the window function and Kdc is the sum of these coefficients. Np is the number of samples in one period when the frequency is 60 Hz. (XL1, XL2, XL3 are time-series samples of three-phase electrical quantities. Sqrt(2) is due to some electrical reason.) As far as I know, a window function helps reduce the discontinuity when the frequency deviates a little bit. However, I am really confused why this could help to get a more accurate sinusoidal signal amplitude. I tried to derive it, but after getting the steps below, I don't know how to prove V1 is closer to A than V2. h1 and h2 are two types of window function: h1 corresponds to a rectangular window and h2 represents a triangular window. Answer: I don't think window functions are very useful. The reason you are seeing a drop in magnitude is that some of your "height" is lost to "leakage" when your frequency goes off bin. Window functions try to "minimize" leakage, so that may be the effect you are seeing. Instead, have a look at my answer here: FFT Phase interpretation of input signal with non-integer number of cycles in FFT window Particularly point #3, "The magnitude adjustment for being off bin." If this is an insufficient start for you, let me know in a comment and I can post some further reading for you.
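The "magnitude drop off bin" described in the answer is easy to reproduce with a small numerical experiment. A pure-Python sketch with illustrative parameters (64 samples, unit-amplitude cosine, rectangular window): exactly on a bin the peak DFT magnitude is N/2, while half a bin off it drops by roughly a third — the well-known scalloping loss of the rectangular window that a flatter-topped window would reduce.

```python
import cmath
import math

def peak_dft_magnitude(cycles: float, n: int = 64) -> float:
    """Largest DFT-bin magnitude of a unit-amplitude cosine completing
    `cycles` periods in an n-sample frame (rectangular window, naive
    O(n^2) DFT -- fine for a demonstration)."""
    x = [math.cos(2.0 * math.pi * cycles * t / n) for t in range(n)]
    mags = [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]
    return max(mags)

on_bin = peak_dft_magnitude(8.0)   # exactly 8 cycles/frame -> peak = n/2
off_bin = peak_dft_magnitude(8.5)  # half a bin off -> peak drops (leakage)
```

Here `off_bin` comes out around two thirds of `on_bin`, illustrating the magnitude adjustment for being off bin that the answer points to.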
{ "domain": "dsp.stackexchange", "id": 9223, "tags": "dft, window-functions" }
Updates or creates an entity based on whether a value is present in an Optional
Question: I need to get car info from a 3rd party web service and persist the data in my application DB. If my DB already has the car, I only update property values that may have changed. Otherwise, I create and save the car: @Transactional //Spring Transaction void saveCars() { //get CarInfo from a webservice List<CarInfo> carInfos = getCarInfo(); // now get each Car from my application DB // that correspond to each CarInfo. Set<Car> cars = mySpringCarRepo.findByCarInfoIdIn( carInfos.stream().map(c -> c.getId()) .collect(Collectors.toSet())); for (CarInfo carInfo : carInfos) { // If a car for the carInfo // is already in my application // just update it if some carInfo has changed. // If a car is not there, create and save it to DB. Car car = cars .stream() .filter(c -> c.getCarInfoId().equals(carInfo.getId())) .findAny() .map(c -> updateCar(carInfo, c)) .orElseGet(() -> createCar(carInfo)); mySpringCarRepo.save(car); } } I have the following questions / concerns about this code: Is this an inappropriate use of Optional's map method? I'm not mapping to a new object but rather just returning the same car with some properties potentially updated. The save method is really only required to be called for cars created in the orElseGet. Due to being in a transaction, the updates to the car will be persisted without the save call. It seems redundant and perhaps confusing to call save. Effective Java 3rd edition states that many uses of isPresent can profitably be replaced by other Optional methods. So that has led me to the path of using map & orElseGet and the fact that Java 8 does not have an ifPresentOrElse method on Optional. Answer: Regarding your questions Is this an inappropriate use of Optional's map method? I'm not mapping to a new object but rather just returning the same car with some properties potentially updated. I think it's appropriate because it might return an updated object. Unfortunately (as you said) Java 8 doesn't have ifPresentOrElse.
The save method is really only required to be called for cars created in the orElseGet. You can replace the orElseGet with orElseGet(() -> mySpringCarRepo.save(createCar(carInfo))) and remove the final save to the db. Performance The complexity of the method is O(c*ci) where c is the number of cars in your DB and ci is the number of CarInfo. If you store the Cars in a Map instead of a Set you can improve it to O(ci). void saveCars() { // get CarInfo from a webservice List<CarInfo> carInfos = getCarInfo(); // Get cars from DB that correspond to each CarInfo Map<Long,Car> cars = mySpringCarRepo .findByCarInfoIdIn( getCarInfoIds(carInfos) ) .stream() .collect( Collectors.toMap(Car::getCarInfoId,Function.identity())); for (CarInfo carInfo : carInfos) { // Returns Optional<Car> Optional.ofNullable(cars.get(carInfo.getId())) // Update car if already exists .map(c -> updateCar(carInfo, c)) // Save car otherwise .orElseGet(() -> mySpringCarRepo.save(createCar(carInfo))); } } Or if you don't mind querying the db for each CarInfo, then the method is shorter: void saveCars() { getCarInfo().stream() .forEach(carInfo -> // returns Optional<Car> mySpringCarRepo.findByCarInfoId(carInfo.getId()) // Update car if already exists .map(car -> updateCar(carInfo, car)) // Save car otherwise .orElseGet(() -> mySpringCarRepo.save(createCar(carInfo)))); } Note 1: I haven't fully tested the code; it's just to give you an idea of how to improve it. Note 2: the performance gain might be irrelevant due to the DB and network latency, but with the second approach you should save some memory space. Naming The method name saveCars is not very appropriate; a better name might be updateCars or syncCars.
{ "domain": "codereview.stackexchange", "id": 39032, "tags": "java, functional-programming, optional, transactions" }
Hinged bridge statics problem
Question: For part (a), is the normal force by the hinge on the bridge at an angle or is it horizontal? For part (b), I know how to resolve forces horizontally and vertically, and to take torques about the hinge, but the information is still insufficient for me to figure out what the tension force is. Any help would be much appreciated! Answer: (1) As for (a), the total force of the ground/hinge (i.e. normal force + friction) is generally neither vertical nor horizontal. EDIT: You can obtain the force of the ground/hinge by calculating the force of the rope first, and then requiring that all three forces add to zero. As for (b), you have three forces acting on the beam: the force of the ground, the gravitational force and the force of the rope. Since the problem suggests "considering equilibrium", the torques of these three forces must sum to zero. (2) The force at point B is simply the force of rod BD on rod AC (and vice versa). Effectively, you have three forces acting on rod AC. Note also that the force of rod BD acts along its own direction, because it is a two-force member: it is connected through joints at its two ends and carries no load in between.
{ "domain": "physics.stackexchange", "id": 3538, "tags": "homework-and-exercises, statics" }
10x10 bitmapped square with bits
Question: This program prints out a 10x10 square and uses only bit operations. It works well, but I don't like that global array. Can you tell me if the code is proper or not? #include <iostream> const std::size_t HEIGHT = 10; const std::size_t WIDTH = 10; char graphics[WIDTH/8+1][HEIGHT]; inline void set_bit(std::size_t x, std::size_t y) { graphics[(x) / 8][y] |= (0x80 >> ((x) % 8)); } void print_screen(void) { for (int y = 0; y < HEIGHT; y++) { for (int x = 0; x < WIDTH/8+1; x++) { for (int i = 0x80; i != 0; i = (i >> 1)) { if ((graphics[x][y] & i) != 0) std::cout << "*"; else std::cout << " "; } } std::cout<<std::endl; } } int main() { for(int x = 0; x < WIDTH; x++) { for(int y = 0; y < HEIGHT; y++) { if(x == 0 || y == 0 || x == WIDTH-1 || y == HEIGHT-1) set_bit(x,y); } } print_screen(); return 0; } Answer: That global array is indeed not good. You'll need to pass around an array, but you shouldn't do it with a C-style array. Doing that will cause it to decay to a pointer, which you should avoid in C++. If you have C++11, you could use std::array, which will be set at an initial size. But if you don't have C++11, and also want to adjust the size, use an std::vector. You can also compare the two here. Either way, you'll be able to pass any of them around nicely, and it's something you should be doing in C++ anyway. To match your environment, the following code does not utilize C++11. I'll use std::vector here, but this can be done with other STL storage containers. Here's what a 2D vector would look like: std::vector<std::vector<T> > matrix; // where T is the type This type does look long, and you may not want to type it out each time. To "shorten" it, you can use typedef to create an alias (which is not a new type): typedef std::vector<std::vector<T> > Matrix; With that, you can use this type as such: Matrix matrix; and create the 2D vector of a specific size. However, this is where the syntax gets nasty (especially lengthy).
It's not set to a specific size, so you can just push vectors into it to increase the size. For a fixed size (using your size and data type), you'll have something like this: std::vector<std::vector<char> > matrix(HEIGHT, std::vector<char>(WIDTH)); This can be made shorter by having another typedef to serve as a dimension of the matrix. This will also make it a little clearer what the vector means in this context. typedef std::vector<char> MatrixDim; It is then applied to the Matrix typedef: typedef std::vector<MatrixDim> Matrix; The 2D initialization will then become this: Matrix matrix(HEIGHT, MatrixDim(WIDTH)); Now you can finally use this in main() and pass it to the other functions. Before you do that, you'll need a different loop counter type. With an STL storage container, you should use the container's size_type. With std::vector<char>, specifically, you'll have: std::vector<char>::size_type; You can use yet another typedef for this: typedef MatrixDim::size_type MatrixDimSize; Here's what the functions will look like with the changes (explanations provided). I've also included some additional changes, which are also explained. The entire program with my changes applied produces the same output as yours. set_bit(): inline void set_bit(Matrix& matrix, MatrixDimSize x, MatrixDimSize y) { matrix[(x) / 8][y] |= (0x80 >> ((x) % 8)); } An additional parameter of type Matrix is added. The matrix is passed in by reference and modified within the function. The std::size_t parameters were replaced with the MatrixDimSize type. print_screen(): void print_screen(Matrix const& matrix) { for (MatrixDimSize y = 0; y < HEIGHT; y++) { for (MatrixDimSize x = 0; x < WIDTH/8+1; x++) { for (int i = 0x80; i != 0; i >>= 1) { std::cout << (((matrix[x][y] & i) != 0) ? '*' : ' '); } } std::cout << "\n"; } } A parameter of type Matrix is added. The matrix is passed in by const&, which is necessary as the function displays the matrix but does not modify it.
It's also cheaper to pass it this way as opposed to copying (passing by value). MatrixDimSize is added for the loop counter types. The if/else is replaced with an equivalent ternary statement. A newline is done with "\n" as opposed to std::endl. The latter also flushes the buffer, which is slower. You just need the former. i = (i >> 1) is shortened to i >>= 1. Main(): int main() { Matrix matrix(HEIGHT, MatrixDim(WIDTH)); for (MatrixDimSize x = 0; x < WIDTH; x++) { for (MatrixDimSize y = 0; y < HEIGHT; y++) { if (x == 0 || y == 0 || x == WIDTH-1 || y == HEIGHT-1) { set_bit(matrix, x, y); } } } print_screen(matrix); } Both matrix vector typedefs are applied. MatrixDimSize is added for the loop counter types. The matrix is passed to and modified only by set_bit(). It is passed to print_screen() and is not modified.
{ "domain": "codereview.stackexchange", "id": 6213, "tags": "c++, matrix, bitwise" }
Project Euler 2 - Fibonacci sequence
Question: I lately spend a lot of time on Project Euler programming challenges. I have the following problem which I should solve: Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be: 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ... By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms. I wrote Java code and I submitted it. Implementation First, I wrote a helper method fibonacci(int n) which calculates the n-th term of the Fibonacci sequence and returns it. In the main method I created a list of numbers and an n variable that is incremented in a loop. In the loop I look for all even-valued numbers and add them to the list. Next, I created a result variable, summed all numbers in the list, and finally printed it out to the console. Main.java: package pl.hubot.projecteuler.problem2; import java.util.ArrayList; import java.util.List; public class Main { public static void main(String[] args) { List<Integer> numbers = new ArrayList<>(); int n = 0; while (fibonacci(n) < 4_000_000) { if (fibonacci(n) % 2 == 0) numbers.add(fibonacci(n)); n++; } int result = 0; for (Integer number : numbers) { result += number; } System.out.print(result); } private static int fibonacci(int n) { if (n == 0) return 0; else if (n == 1) return 1; else return fibonacci(n - 1) + fibonacci(n - 2); } } Answer: The recursive definition of Fibonacci is an elegant one, but in practice it's something you would never want to use. Let's say you want to get fibonacci(5). How often does fibonacci get called?
fibonacci(5) = fibonacci(4) + fibonacci(3) = (fibonacci(3) + fibonacci(2)) + (fibonacci(2) + fibonacci(1)) = ((fibonacci(2) + fibonacci(1)) + (fibonacci(1) + fibonacci(0))) + ((fibonacci(1) + fibonacci(0)) + fibonacci(1)) = (((fibonacci(1) + fibonacci(0)) + fibonacci(1)) + (fibonacci(1) + fibonacci(0)) ) + ((fibonacci(1) + fibonacci(0)) + fibonacci(1)) That seems a lot for 0 + 1 + 1 + 2 + 3 + 5. Indeed, the recursive variant without memoization has exponential runtime. You have several options to reduce the time: use an iterative variant (which leads to \$ \mathcal O(n)\$ instead of \$ \mathcal O(2^n)\$ per call of fibonacci), use memoization (which, if done correctly, leads to \$ \mathcal O(n)\$ in worst case and \$ \mathcal O(1)\$ if the number was already calculated). But that begs the question: do we actually need fibonacci? For a programming challenge, it's enough to calculate the sum in main, while you're calculating the numbers iteratively. If you're looking for performance, that would be the fastest variant (that doesn't use the closed form).
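The iterative variant recommended above can be sketched directly; there is no need for a stored list either, since the even terms can be summed as they are generated (O(n) time, O(1) space):

```python
def even_fibonacci_sum(limit: int) -> int:
    """Sum of the even-valued Fibonacci terms not exceeding `limit`,
    using the problem's 1, 2, 3, 5, ... convention."""
    total = 0
    a, b = 1, 2
    while a <= limit:
        if a % 2 == 0:
            total += a
        a, b = b, a + b  # iterative step: no recursion, no memo table
    return total
```

`even_fibonacci_sum(4_000_000)` reproduces the Project Euler result in a few dozen loop iterations instead of the exponential recursion above.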
{ "domain": "codereview.stackexchange", "id": 25502, "tags": "java, programming-challenge" }
What is the relation between zeta and cut-off regularization of the Casimir effect?
Question: In the literature there are at least two methods to derive the Casimir effect: the original one by Casimir himself: take the quantized energy between plates minus the free-space energy, then regularize, e.g. by a cut-off function, and use the Euler-Maclaurin formula; the modern one: take the quantized energy between plates, regularize it by zeta and analytically continue to the physical value. Obviously the results are the same. Two remarks: "(...) This suggests an important physical intuition for zeta renormalization: using the analytic continuation from s=3 to s=0 in some sense corresponds to subtracting the electromagnetic field's inherent contribution to the ground state energy. In Casimir's derivation we removed the infinite contribution of the field by taking the difference of two configurations, in effect subtracting whatever (infinite) energy the field possesses in free space. Here the subtraction may not be as intuitive, but its analytic simplicity makes it a powerful tool for analyzing vacuum energy problems." (https://aphyr.com/data/journals/113/comps.pdf, pg.13) the post http://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/ explains the relation between cut-off and zeta, but still, in the two derivations above we have two different objects to regularize! My question is: how is this difference, i.e. neglecting the free-space energy in the latter approach, explained from the physical point of view? (Perhaps one could elaborate on the first remark.) Answer: Ok, I found the answer in the draft by Kleinert: http://users.physik.fu-berlin.de/~kleinert/b6/psfiles/qft.pdf, pg. 600. The fact that is not mentioned in other sources is this: the free-space energy (aka zero-point energy) that we subtract in the original approach goes to $0$ after the analytic continuation performed in the second (zeta regularization) approach (this can be shown using dimensional regularization).
In other words, the difference between the quantized energy between the plates and the free-space energy should always be taken in order to obtain the Casimir effect (at least if we don't want to use van der Waals forces), and this is physically reasonable. It is just specific to zeta regularization that the free-space energy goes to $0$, and is thus often not even mentioned (!) in other texts.
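The finite number that the analytic continuation produces can be made concrete. In the parallel-plate mode sum the divergent series is traded for ζ(−3), and by the standard formula ζ(−n) = −B_{n+1}/(n+1) this equals 1/120. A sketch that checks this with exact rational arithmetic (the Bernoulli recursion is textbook material; this is an illustration of the zeta values, not Kleinert's derivation):

```python
from fractions import Fraction
from math import comb

def bernoulli(n: int) -> Fraction:
    """Bernoulli number B_n (B_1 = -1/2 convention), via the classic
    recursion sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B[n]

def zeta_at_negative_int(n: int) -> Fraction:
    """zeta(-n) = -B_{n+1}/(n+1): the analytically continued value."""
    return -bernoulli(n + 1) / (n + 1)
```

This gives ζ(−1) = −1/12 and ζ(−3) = 1/120, the latter being exactly the finite remnant that survives in the parallel-plate Casimir energy.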
{ "domain": "physics.stackexchange", "id": 18756, "tags": "regularization, casimir-effect" }
Print permutations in an iterative way
Question: I've written the following code to print all the permutations of a list. For example: \$[1, 2, 3]\$ has these permutations: \$[1,2,3]\$, \$[1,3,2]\$, \$[2,1,3]\$, \$[2,3,1]\$, \$[3,1,2]\$, and \$[3,2,1]\$. def insert_each_position(self, nums, insert_one): res, len_of_nums = [], len(nums) for idx in range(len_of_nums+1): tmp = nums[:] tmp.insert(idx, insert_one) res.append(tmp) return res def permute(self, nums): res, len_of_nums = [[]], len(nums) for idx in range(len_of_nums): cur_num = nums[idx] tmp = [] for sub in res: tmp.extend(self.insert_each_position(sub, cur_num)) res = tmp return res Answer: As suggested by @netme in the comments, unless you need to write the permutation code yourself, it would be much better to use itertools.permutations, as so: import itertools for perm in itertools.permutations([1, 2, 3]): print(perm) That code is more compact, memory-efficient and faster than yours. The itertools module has been deliberately honed to be fast (I think it’s written in C rather than pure Python) and is worth looking into if you haven’t used it before – it’s really useful. Assuming that you're supposed to be writing your own permutation, here are some comments on that code. (Given the self parameter and the self.insert_each_position in the second function, I’m assuming these are class methods. I put them in their own class for testing.) Unless tuple unpacking makes the code cleaner or the variables are related, I'm generally inclined to split variable definitions over multiple lines. So I’d split the variable declarations over two lines in the start of each function. I would then get rid of the len_of_nums variable. It’s much clearer just to use len(nums) directly in the two places where this variable is used. You don’t mention whether you’re using Python 2 or 3. If you’re using 2, you should use xrange() instead of range(), which is slightly faster and more memory efficient. There are no comments or docstrings. 
I had to guess how the function worked. You should explain what the code is trying to achieve, and why you wrote it that way – it will make it easier for me to review, and other people to use. I would also suggest trying to pick better variable names. res and tmp are not particularly descriptive or useful. My concern with your approach is that you gradually build up a list of all the permutations, which is going to be very memory inefficient and slow. Your program began to struggle when I asked it to permute range(10). I think it has exponential growth, but I’m not sure. If you want something fast and you can’t use itertools, I suggest you consider looking at generators for yielding individual permutations. That’s a lot more memory efficient, and will be much faster.
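The generator suggestion in the last paragraph can be sketched as follows (a standalone function rather than the original's method, but using the same insert-into-every-position idea); each permutation is yielded lazily instead of accumulating the full list in memory:

```python
def permute_lazy(nums):
    """Yield each permutation of `nums` one at a time (generator)."""
    if not nums:
        yield []
        return
    first, rest = nums[0], nums[1:]
    for sub in permute_lazy(rest):
        # Insert the first element into every position of each
        # sub-permutation, yielding results as we go.
        for i in range(len(sub) + 1):
            yield sub[:i] + [first] + sub[i:]
```

Iterating with `for p in permute_lazy(nums): ...` keeps only the current chain of sub-permutations alive at any moment, rather than all n! results at once.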
{ "domain": "codereview.stackexchange", "id": 14775, "tags": "python, algorithm, combinatorics, iteration" }
Maxwell and special relativity
Question: The derivation of the speed of light from Maxwell's equations yields an expression which does not contain a term for the velocity of the frame of reference in which the derivation is performed. Is this simply an inconsequential effect of the manner in which Maxwell's equations were formulated, or does it instead demonstrate that the speed of light is frame-independent? I ask this question because I recall from one of the physics books (which I since gave away) in my collection, the author made the point that Maxwell himself could have used that result to demonstrate special relativity before Einstein. Is that assertion correct? Answer: The derivation of the speed of light from Maxwell's equations yields an expression which does not contain a term for the velocity of the frame of reference in which the derivation is performed It really does — if you follow the classical derivation based on Galilean relativity transformations. Maxwell's equations are not invariant under these transformations, which at the time were the most natural to use. Back then it was natural to think that the classical formulation of Maxwell's equations held only in one special frame of reference, and the job of the physicist was to find it. The thought was that this frame of reference was locked to some substance called the aether, in analogy to sound waves and air. Every attempt to actually find such a reference frame failed, and Lorentz then demonstrated that if you apply Maxwell's equations also to your clocks and measuring rods, absolute lengths and times become immaterial. The theory was still formulated with reference to these absolute coordinates, but it became obvious that all clocks and meter sticks deform in just the way that makes finding this absolute reference frame impossible. Only then did it become natural to get rid of it, redefine simultaneity using Maxwell's theory, and thus arrive at the special theory of relativity.
So in principle, Maxwell could have done it, since his equations are invariant under Lorentz transformations, but to make these transformations central to the whole of physics just because of that would have seemed too crazy an idea.
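For reference, the frame-independent number that falls out of Maxwell's equations is c = 1/√(μ₀ε₀); a quick numerical check (using the classical μ₀ = 4π×10⁻⁷ H/m and the CODATA value of ε₀ — purely illustrative, since in modern SI these constants are themselves tied to the defined value of c):

```python
import math

MU_0 = 4.0 * math.pi * 1e-7      # vacuum permeability, H/m
EPSILON_0 = 8.8541878128e-12     # vacuum permittivity, F/m (CODATA value)

# The wave equation derived from Maxwell's equations propagates at
# c = 1/sqrt(mu_0 * eps_0); no velocity of the reference frame appears.
c = 1.0 / math.sqrt(MU_0 * EPSILON_0)
```

The result agrees with the measured speed of light to within the precision of the constants used, with no frame velocity anywhere in the expression.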
{ "domain": "physics.stackexchange", "id": 66853, "tags": "special-relativity, maxwell-equations" }
How do particles get their charge?
Question: How does an electron get its charge? And how can it maintain that charge for very long (infinite) periods of time? And how come a neutron has no charge and a proton does? They are both made of the same type of quarks and they both have no movement. Answer: How does an electron get its charge? This is the elementary particle table. The electron is an elementary particle and its charge is an observable attribute that, together with its other quantum numbers and mass, classifies it as an electron. And how can it maintain that charge for very long (infinite) periods of time? Observations gathered over a century have not shown the decay of an electron, i.e. it losing charge and thus becoming another particle. So it is by construction of Nature. And how come a neutron has no charge and a proton does? They are both made of the same type of quarks and they both have no movement. Look at the quarks on the table. The exact quark content has to be taken, and the charges added up. The proton is up + up + down = +1, and the neutron is down + down + up = 0. So the general answer to How do particles get their charge? is that it depends on whether the particles are elementary or composite. Composite ones get their charge by adding the charges of the elementary ones they are made out of. Elementary particles have been defined by the study of the results of innumerable experiments, over more than a century. A minimal mathematical model called the standard model of particle physics uses them as a basis for describing the underlying quantum mechanical level of nature. This model has been very successful in describing all known interactions and predicting new observations.
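The quark bookkeeping in the answer is exact arithmetic with thirds, which Python's `fractions` module reproduces without rounding (charges in units of the elementary charge e):

```python
from fractions import Fraction

# Quark charges from the standard-model particle table, in units of e.
QUARK_CHARGE = {"up": Fraction(2, 3), "down": Fraction(-1, 3)}

def composite_charge(quarks):
    """Composite particles get their charge by summing their quarks' charges."""
    return sum(QUARK_CHARGE[q] for q in quarks)

proton_charge = composite_charge(["up", "up", "down"])     # 2/3 + 2/3 - 1/3
neutron_charge = composite_charge(["down", "down", "up"])  # -1/3 - 1/3 + 2/3
```

The proton comes out at exactly +1 and the neutron at exactly 0, as stated in the answer.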
{ "domain": "physics.stackexchange", "id": 30920, "tags": "particle-physics, charge, standard-model, elementary-particles" }
Reflection, Transmission, and Plasma Frequency
Question: Does anyone have a good, clear explanation of why and how this works? I don't understand the following. Say you have a piece of metal with a plasma frequency $\omega_p$. This is like a resonant frequency, so I can believe that for frequencies $\omega \approx \omega_p$, we will have mostly absorption from the damping term. Here is what I don't understand: we always say that we will have reflection for $\omega < \omega_p$ and transmission for $\omega > \omega_p$. Why is that? The resonance curve is symmetric, so it seems like the system would do more-or-less the same thing on either side of resonance. Thanks! Answer: If you have a very low frequency, the material behaves as a conductor: the electrons respond instantaneously to the excitation, and therefore the metal becomes a reflector (the boundary condition of "no E field parallel to the surface" is met). If you have a very high frequency, the electrons don't have "time to react" at all - so the amplitude of their response is small (and shifted in phase). The incident EM wave will continue unmolested. As you already said - at resonance, the electrons move just enough to absorb much of the energy of the incident wave. Another way to put it: the resonance amplitude curve looks symmetrical - but the phase curve is not. Phase changes continuously as you go from below-resonance to above-resonance.
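The asymmetry described above drops straight out of the simplest lossless Drude model, ε(ω) = 1 − ωp²/ω²: below the plasma frequency ε is negative, the refractive index is purely imaginary, and the normal-incidence reflectivity is exactly 1; far above ωp, ε → 1 and nearly everything is transmitted. A sketch (no damping term, so the on-resonance absorption is deliberately absent):

```python
import cmath

def drude_reflectivity(omega_over_omega_p: float) -> float:
    """Normal-incidence reflectivity of a lossless Drude metal.

    eps(w) = 1 - (w_p/w)**2, n = sqrt(eps), R = |(1 - n)/(1 + n)|**2.
    """
    w = omega_over_omega_p
    eps = 1.0 - 1.0 / (w * w)
    n = cmath.sqrt(eps)          # purely imaginary for w < w_p
    return abs((1.0 - n) / (1.0 + n)) ** 2
```

Below ωp (e.g. ω = 0.5 ωp) the reflectivity is 1 to machine precision; a decade above ωp it is tiny — the reflector/transmitter split the answer describes.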
{ "domain": "physics.stackexchange", "id": 23058, "tags": "solid-state-physics, metals" }
Why is my base quality in alignment not perfect?
Question: I have a fasta file with some DNA sequences. I would like to simulate next-generation sequencing reads from it. I'm doing it without any base errors or mutations. wgsim -e 0 -r 0 sequence.fa seq_0_1.fq seq_0_2.fq To my knowledge, this is a perfect simulation. Next, I give the paired-end reads to bwa for alignment. bwa mem -M sequence.fa seq_0_1.fq seq_0_2.fq > P0.sam Now, I check the ASCII characters of the base quality (column 11 in the SAM format, the specification is here). head -n 3 P0.sam | cut -f11 IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII According to this page, the ASCII (in order of quality) is: !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ Question: My experiment is supposed to be perfect, so my base-quality characters are expected to appear near the right end of the list (like x, y, z). However, my character I is nowhere near the top of the list. In particular, I'm unable to achieve anything from J to ~. Why? Answer: When fasta reads are aligned, they are by default assigned a Phred score of 40, which in Phred+33 encoding is represented by I. Phred+33 uses ASCII characters from ! to I: !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHI Phred+33 was originally used by Sanger machines but it is now used by all popular platforms such as Illumina, Ion-torrent (also Ion-proton) and Roche 454. Since most platforms now use Phred+33, most aligners also assume it by default. You can specify other quality encoding formats if you want (such as Phred+64 or Solexa). Have a look at the Wikipedia page on the FASTQ format for the different quality encoding formats.
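The Phred+33 encoding described above is just a character offset; a small sketch of the decoding, and of why I means Q40:

```python
def phred33_to_q(ch: str) -> int:
    """Decode a Phred+33 quality character into its numeric quality score."""
    return ord(ch) - 33

def q_to_error_probability(q: int) -> float:
    """A Phred score q corresponds to an error probability of 10**(-q/10)."""
    return 10.0 ** (-q / 10.0)
```

Here ord('I') − 33 = 40, i.e. an assumed per-base error probability of 10⁻⁴ — a fixed placeholder quality for the simulated reads, not a measured one, which is why nothing from J to ~ ever appears.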
{ "domain": "biology.stackexchange", "id": 4646, "tags": "bioinformatics" }
Does Bragg's law take into account atom size? (And should it?)
Question: Bragg's law explains wave diffraction and interaction when electromagnetic waves hit a lattice structure: $$n \lambda = 2 d \sin \theta$$ See picture and details on Wikipedia. I am wondering if the size of the atoms (modelled in the pictures as black dots) matters? In the derivation, the atom size is not mentioned anywhere, nor in the formula, so I guess it doesn't matter. Should it, though? Or are they so negligibly small that this doesn't matter at all? Are there examples of the same lattice made of different molecules, and do they have the exact same diffraction pattern? Answer: The positions of the diffraction peaks do not depend on the size of the atoms. The peak positions are determined by the spacings of the crystal lattice and it does not matter what the atoms are or how big they are. That's why the atom size does not appear in the Bragg formula. However the intensities of the lines, both absolute and relative, depend very strongly on the size of the atoms. X-rays scatter off electrons, so what is actually doing the scattering is the regular variation of electron density in the crystal. The scattering is stronger where the electron density is higher, i.e. near the nucleus, and weaker where electron density is lower, i.e. between atoms. The variation of electron density over an atom is known as the form factor. Relating the form factor to the scattering intensity is a somewhat involved calculation, but it is described in any textbook on X-ray scattering.
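That the peak positions come out of $d$ and $\lambda$ alone can be read straight off Bragg's law. A minimal sketch (the numbers are only examples: roughly the Cu K$\alpha$ wavelength of 1.5406 Å and a $d$-spacing of 3.135 Å):

```python
import math

def bragg_angles(d, wavelength):
    """Diffraction peak angles theta (in degrees) from n*lambda = 2*d*sin(theta),
    for all orders n with |sin(theta)| <= 1.  Atom size never enters."""
    angles = []
    n = 1
    while n * wavelength / (2 * d) <= 1:
        angles.append(math.degrees(math.asin(n * wavelength / (2 * d))))
        n += 1
    return angles

print(bragg_angles(3.135, 1.5406))   # four orders: ~14.2, 29.4, 47.5, 79.4 degrees
```

Swapping a different atom onto the same lattice sites would change the peak intensities through the form factor, but this list of angles would not move.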
{ "domain": "physics.stackexchange", "id": 17022, "tags": "optics, diffraction" }
Why does causality lead to different contours when calculating propagators?
Question: On the Wikipedia page for propagators they mention three types of Green's functions for the Klein-Gordon equation: The retarded propagator (taken when $x^0 > y^0$). The advanced propagator (taken when $x^0 < y^0$). The Feynman propagator. It is not clear to me why $x^0 > y^0$ suggests we take the contour in the upper half plane, why $x^0 < y^0$ implies the contour is to be taken in the lower half plane, and what is the motivation behind the contour in the Feynman propagator and how it is able to handle both cases ($x^0 > y^0$ or $x^0 < y^0$ or $x^0 = y^0$). Why is this the case? Answer: The integral $$G(x,y)=\int \dfrac{d^4p}{(2\pi)^4}\dfrac{e^{-ip(x-y)}}{p^2+m^2}\tag{1}$$ is undefined because the denominator $p^2+m^2$ vanishes when $p$ goes on shell. This corresponds to a whole three-dimensional hyperboloid in the integration domain. This is a special feature of the Lorentzian signature of Minkowski spacetime. Indeed, in Euclidean signature, $p^2+m^2$ can never be zero because $p^2\geq 0$ and then (1) uniquely defines the inverse of the scalar Laplace operator. In Lorentzian signature, we must pick an inverse. To pick an inverse we introduce a prescription that makes (1) well-defined. Each of the prescriptions defines a different function, with different properties. First of all, let us write (1) a bit differently by using the definition of $p^2=-(p^0)^2+|\vec{p}|^2$. In that case $$G(x,y)=\int \dfrac{dp^0d^3\vec{p}}{(2\pi)^4}\dfrac{e^{-ip(x-y)}}{-(p^0)^2+|\vec{p}|^2+m^2}\tag{2}.$$ Now let us imagine that we take the integral over $p^0$ first with $\vec{p}$ held fixed. We can see that the singularity is hit when $(p^0)^2=m^2+|\vec{p}|^2$. This happens at $$p^0 = \pm \sqrt{m^2+|\vec{p}|^2}\tag{3}$$ Since the integral over $p^0$ is over the whole real line, and the two solutions in (3) are both sitting on that line, the integration runs straight through these singularities.
You can circumvent this by defining $G(x,y)$ as $$\int \dfrac{dp^0d^3\vec{p}}{(2\pi)^4}\dfrac{e^{-ip(x-y)}}{-(p^0\pm i\epsilon)^2+|\vec{p}|^2+m^2},\quad \text{or},\quad \int \dfrac{dp^0d^3\vec{p}}{(2\pi)^4}\dfrac{e^{-ip(x-y)}}{-(p^0)^2+|\vec{p}|^2+m^2\pm i\epsilon} \tag{4}.$$ In the first option you get the singularities at $$p^0=\pm i \epsilon + \sqrt{m^2+|\vec{p}|^2},\quad p^0 = \pm i\epsilon -\sqrt{m^2+|\vec{p}|^2}.\tag{5}$$ The singularities are thus either pushed upward or downwards in the imaginary direction. So suppose we take the sign $+$. The singularities are then pushed upwards. You may close the contour up or down. If you close it upwards you enclose the singularities and get a contribution from the associated residues. If you close it downwards you don't get it. In one case you get a non-zero result, in the other you get zero. So what determines the contour choice? Well, it is essentially the exponential. Observe you have $e^{-ip(x-y)}$ on the integrand. This is $e^{ip^0(x^0-y^0)}e^{-i\vec{p}\cdot (\vec{x}-\vec{y})}$. Now since we are doing contour integration, we will have both real and imaginary parts of $p^0$ along the contour, so these exponentials are $$e^{i{\rm Re}(p^0)(x^0-y^0)-{\rm Im}(p^0)(x^0-y^0)}e^{-i\vec{p}\cdot (\vec{x}-\vec{y})}\tag{6}.$$ This is the key to the choice of contour. If you close the contour upwards, along the big semicircle you will have ${\rm Im}(p^0)\to \infty$. So if $(x^0-y^0)<0$ you will have one exponential $e^{-{\rm Im}(p^0)(x^0-y^0)}$ which badly diverges, while if $(x^0-y^0)>0$ you have one exponential $e^{-{\rm Im}(p^0)(x^0-y^0)}$ which goes to zero and eliminates the big semicircle contribution. So you must close the contour upwards when $x^0-y^0>0$. By the same argument, you must close the contour downwards when $x^0-y^0<0$. Now, recall that when you pick the plus sign in (5), the poles are shifted upwards. 
So you only get a non-zero contribution when the contour is closed upwards, which is the case for $x^0-y^0>0$. So for the plus sign in (5) you get something non-zero for $x^0-y^0>0$ and zero for $x^0-y^0<0$. This is the retarded propagator. When you pick the minus sign in (5) the opposite happens. The poles are shifted downwards and you only get a non-zero contribution when the contour is closed downwards, which is the case for $x^0-y^0<0$. So for the minus sign, you get a non-vanishing result for $x^0-y^0<0$ and zero for $x^0-y^0>0$. This is the advanced propagator. Finally, consider the second option in (4). Now the singularities are at $$p^0 = \sqrt{m^2+|\vec{p}|^2\pm i\epsilon},\quad p^0 = - \sqrt{m^2+|\vec{p}|^2\pm i\epsilon}\tag{7}.$$ Since $\epsilon$ will be taken to zero, you can expand the square roots and the singularities lie at $$p^0 = \sqrt{m^2+|\vec{p}|^2}\pm \frac{i\epsilon}{2\sqrt{m^2+|\vec{p}|^2}},\quad p^0 = - \sqrt{m^2+|\vec{p}|^2}\mp \frac{i\epsilon}{2\sqrt{m^2+|\vec{p}|^2}}\tag{8}.$$ Regardless of the choice of sign in the $i\epsilon$ prescription, the important point is that now one pole will be shifted upwards and the other downwards. The contour analysis now shows that in both cases $x^0-y^0>0$ where the contour must be closed upwards and $x^0-y^0<0$ where the contour must be closed downwards you will get contributions. The choice of the sign will determine the time-ordering of the resulting correlator. So, in summary, (1) by itself is ill-defined because the wave operator in Lorentzian signature does not have a unique inverse. To make (1) well-defined you can choose several possible $i\epsilon$ prescriptions as in (4), which are essentially boundary conditions on causal behavior of the inverse. The choice of prescription then picks an inverse of the wave operator. 
For each such choice, you may then evaluate the integral over $p^0$ using contours and you discover that that choice of prescription is tied to some causal behavior of $G(x,y)$ because of its dependence on $x^0-y^0$. This is why causality leads to different contours when calculating propagators.
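The pole bookkeeping above can be checked numerically. The sketch below is illustrative only: it fixes $\omega \equiv \sqrt{m^2+|\vec{p}|^2} = 1$, keeps $\epsilon$ finite, and truncates the $p^0$ axis. With both poles shifted into the upper half plane, the real-axis integral of $e^{ip^0 t}/(\omega^2-(p^0-i\epsilon)^2)$ should vanish for $t<0$ and reproduce the residue sum $2\pi e^{-\epsilon t}\sin(\omega t)/\omega$ for $t>0$, i.e. retarded behaviour:

```python
import numpy as np

def p0_integral(t, omega=1.0, eps=0.05, cutoff=500.0, n=500_001):
    """Riemann-sum approximation of the p0 integral with both poles
    shifted to p0 = +/-omega + i*eps, i.e. into the upper half plane."""
    p0, dp = np.linspace(-cutoff, cutoff, n, retstep=True)
    integrand = np.exp(1j * p0 * t) / (omega**2 - (p0 - 1j * eps) ** 2)
    return np.sum(integrand) * dp

# t < 0: the contour closes downward, encloses no poles -> result near 0
print(abs(p0_integral(-1.0)))
# t > 0: the contour closes upward, picks up both residues
# -> result near 2*pi*exp(-eps*t)*sin(omega*t)/omega
print(p0_integral(+1.0).real)
```

Moving the $i\epsilon$ shift to the other side of the real axis (replacing $p^0 - i\epsilon$ by $p^0 + i\epsilon$) swaps the two cases, giving the advanced propagator instead.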
{ "domain": "physics.stackexchange", "id": 92855, "tags": "quantum-mechanics, quantum-field-theory, causality, greens-functions, propagator" }
GPS Coarse Acquisition PRN Codes
Question: Are the actual 1023 bit sequences for each of the 32 GPS satellites available to consumers? I found the IS-GPS-200 but it doesn't list the entire sequences. Are verified sequences available for download anywhere online? I'm trying to validate my Python C/A PRN generator. Also, what does the last footnote (**** The two-tap coder utilized here is only an example implementation that generates a limited set of valid C/A codes.) mean? Which codes is it valid for? Answer: I have a PRN generator that I have validated with live captured signals. It is available on the MathWorks Exchange site at the address below, and it runs equally well in Octave (Update: I also pasted the core of this in a code block below): https://www.mathworks.com/matlabcentral/fileexchange/14670-gps-c-a-code-generator The two tap coder is as given in the diagram in that spec and copied in the figure below: What they mean by “two tap coder” is the selection and addition of two outputs of the G2 generator in the diagram, and that this is just one way to generate the sequences (a very convenient way). An alternate approach which I have done is to implement the generator with one LFSR. In this case you simply convolve the coefficients for the two generator polynomials. This results in a length 20 shift register and a non-maximum length sequence (since it clearly can be factored), and will generate 1023 different sequences each 1023 long, depending on how you seed it. (So if you load the state of the 20 elements with any 20 consecutive chips from a PRN of interest, it will continue to run the code for that SV). So when they mention it will only generate a limited selection of the Gold Codes, they mean it won’t generate all 1023 possible codes (because with the selection of two taps there are only $10!/(2!\,8!) = 45$ combinations, thanks to RyanC in the comments), and therefore I highly suspect that the selection of which of the 1023 different codes to use was based on the assumption of using this 2-tap generator. The two-tap generator shown won’t generate all possible Gold Codes for the polynomials used, but it will indeed generate all the GPS C/A codes. This is because the Gold Code (which is the GPS C/A code) is formed by adding two maximum length sequences in GF(2) (basically x-or'ing the output). Doing that is identical to what could be done with one generator where the polynomials are multiplied together, and you multiply polynomials by convolving their coefficients. The advantage of doing that with just one long LFSR would be the creation of a flexible code generator capable of creating any maximum length sequence up to $2^{20}-1$ including all the GPS C/A codes and many other compound codes. The interesting point about the Gold Code is that if you delay one of the generators relative to the other by even 1 chip, the result will be a completely new 1023 length sequence. So since there are 1023 possible delays, there are 1023 total possible sequences that can be generated, each 1023 long. A further interesting point about maximum length sequences in general is that if you GF(2) add the sequence to a delayed version of itself, it will create the exact same sequence, just at a completely different delay! Thus you see what is happening in the Gold Code Generator with the "two-tap coder" which is used to select the satellite; in the figure I gave we select taps 2 and 6 resulting in SV1 (which should match the first part of the table that you didn't post). The table you have shows the equivalent delay of the G2 coder for each two tap selection that is made.
Since a login is required for the MathWorks Exchange site, here is a copy of my code with the front-end error-trapping removed (for brevity):

function g=cacode(sv,fs)
% function G=CACODE(SV,FS)
% Generates 1023 length C/A Codes for GPS PRNs 1-37
%
% g:  nx1023 matrix - with each PRN in each row with symbols 1 and 0
% sv: a row or column vector of the SV's to be generated;
%     valid entries are 1 to 37
% fs: optional number of samples per chip (defaults to 1),
%     fractional samples allowed, must be 1 or greater
%
% For multiple samples per chip, function is a zero order hold.
%
% For example to generate the C/A codes for PRN 6 and PRN 12 use:
%    g=cacode([6 12])
% and to generate the C/A codes for PRN 6 and PRN 12 at 5 MHz use:
%    g=cacode([6 12],5/1.023)
%
% Dan Boschen 12-30-2007
% boschen@loglin.com

if nargin<2
    fs=1;    % default to 1 sample per chip
end

% table of C/A Code Tap Selection (sets delay for G2 generator)
tap=[2 6; 3 7; 4 8; 5 9; 1 9; 2 10; 1 8; 2 9; 3 10; 2 3; 3 4; 5 6;
     6 7; 7 8; 8 9; 9 10; 1 4; 2 5; 3 6; 4 7; 5 8; 6 9; 1 3; 4 6;
     5 7; 6 8; 7 9; 8 10; 1 6; 2 7; 3 8; 4 9; 5 10; 4 10; 1 7; 2 8; 4 10];

% G1 LFSR: x^10+x^3+1
s=[0 0 1 0 0 0 0 0 0 1];
n=length(s);
g1=ones(1,n);    % initialization vector for G1
L=2^n-1;

% G2 LFSR: x^10+x^9+x^8+x^6+x^3+x^2+1
t=[0 1 1 0 0 1 0 1 1 1];
q=ones(1,n);     % initialization vector for G2

% generate C/A Code sequences:
tap_sel=tap(sv,:);
for inc=1:L
    g2(:,inc)=mod(sum(q(tap_sel),2),2);
    g(:,inc)=mod(g1(n)+g2(:,inc),2);
    g1=[mod(sum(g1.*s),2) g1(1:n-1)];
    q=[mod(sum(q.*t),2) q(1:n-1)];
end

% upsample to desired rate
if fs~=1
    % fractional upsampling with zero order hold
    index=0;
    for cnt=1/fs:1/fs:L
        index=index+1;
        if ceil(cnt)>L    % traps a floating point error in index
            gfs(:,index)=g(:,L);
        else
            gfs(:,index)=g(:,ceil(cnt));
        end
    end
    g=gfs;
end
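Separately, since the question mentions validating a Python generator, here is a minimal Python sketch of the same two-tap architecture (G1 and G2 as ten-stage LFSRs, all-ones seed, taps 2 and 6 for PRN 1). Per IS-GPS-200, the first ten chips of PRN 1 are 1100100000 (octal 1440), which makes a convenient spot check:

```python
def ca_code(tap1, tap2):
    """Generate one 1023-chip GPS C/A (Gold) code from the two G2 tap
    positions (1-indexed, e.g. (2, 6) for PRN 1)."""
    g1 = [1] * 10   # G1: x^10 + x^3 + 1, seeded all ones
    g2 = [1] * 10   # G2: x^10 + x^9 + x^8 + x^6 + x^3 + x^2 + 1, seeded all ones
    chips = []
    for _ in range(1023):
        # output chip: G1 stage 10 XOR the two selected G2 stages
        chips.append(g1[9] ^ g2[tap1 - 1] ^ g2[tap2 - 1])
        fb1 = g1[2] ^ g1[9]                                   # G1 taps 3, 10
        fb2 = g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]   # G2 taps 2,3,6,8,9,10
        g1 = [fb1] + g1[:9]
        g2 = [fb2] + g2[:9]
    return chips

prn1 = ca_code(2, 6)
print(prn1[:10])   # [1, 1, 0, 0, 1, 0, 0, 0, 0, 0] -> octal 1440, as per IS-GPS-200
```

Checking the first ten chips of each PRN against the octal column of the IS-GPS-200 table is usually enough to validate a generator, since any tap or seed error corrupts the sequence immediately.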
{ "domain": "dsp.stackexchange", "id": 6855, "tags": "signal-analysis, cross-correlation, bpsk, template-matching" }
Previously deleted node/console script names still showing with ros2 run
Question: I initially made an error when editing my setup.py file (missed a comma I think) and when I used tab/auto completion with the ros2 run command, I was getting an incorrectly named node. However, despite editing the setup.py file and using colcon build, I still have the same issue. I initially thought it may have been due to how I added the new node (meant for ArUco tag detection) in the 'console script' section and edited that, but it made no difference. As a result, I now have several options coming up with ros2 run even though they are incorrect. I am using ROS2 Foxy on Ubuntu 20.04.4 LTS; I would really appreciate any help. This is my current setup.py file:

from setuptools import setup

package_name = 'tb3_obj_detection'

setup(
    name=package_name,
    version='0.0.0',
    packages=[package_name],
    data_files=[
        ('share/ament_index/resource_index/packages',
            ['resource/' + package_name]),
        ('share/' + package_name, ['package.xml']),
    ],
    install_requires=['setuptools'],
    zip_safe=True,
    maintainer='___',
    maintainer_email='____@todo.todo',
    description='TODO: Package description',
    license='TODO: License declaration',
    tests_require=['pytest'],
    entry_points={
        'console_scripts': [
            'img_publisher = tb3_obj_detection.camera_pub:main',
            'img_subscriber = tb3_obj_detection.camera_sub:main',
            'hand_detector = tb3_obj_detection.hand_detection:main',
            'hand_detector_tflite = tb3_obj_detection.hand_detection_tflite:main',
            'aruco_detection = tb3_obj_detection.aruco_detection:main',
        ],
    },
)

This is the output I get when trying to use ros2 run after sourcing my workspace (https://ibb.co/L8dwDKW). Originally posted by Reuben on ROS Answers with karma: 3 on 2022-06-13 Post score: 0 Answer: Can you try colcon build --cmake-clean-cache? You can try deleting the build and install folders, then rebuild your workspace and source it again with source ~/dev_ws/install/setup.bash. Also add the result of ros2 wtf/ros2 doctor if the above doesn't help.
Originally posted by ljaniec with karma: 3064 on 2022-06-13 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Reuben on 2022-06-13: Deleting the build & install folders and rebuilding the workspace worked! Thank you ljaniec
{ "domain": "robotics.stackexchange", "id": 37762, "tags": "ros, ros2, setup.py" }