Adding a credentialsId node to SVN location nodes
Question: I'm new to Ruby. Recently I wrote a script to add a credentialsId node to each SVN location node in each config.xml file for all of our Jenkins jobs (we have many). It works, but I'm guessing it could be done more elegantly. How would you fix this to be more succinct, clean, etc.?

require 'nokogiri'

Dir["somepathhere/.jenkins/jobs/*/config.xml"].each do |filename|
  file = File.open(filename, 'r')
  doc = Nokogiri::XML(file)
  file.close
  locations = doc.xpath('/project/scm/locations/hudson.scm.SubversionSCM_-ModuleLocation')
  locations.each do |location|
    crednode = location.at_xpath('credentialsId')
    if crednode
      crednode.content = 'somevalue'
    else
      location.first_element_child.before("<credentialsId>somevalue</credentialsId>\n")
    end
  end
  File.open(filename, 'w') { |file| file.print(doc.to_xml) }
end

Answer: You can take a few shortcuts here and there, but overall it's not bad. My comments:

- You don't need both open and close when you just want to read everything; File.read(filename) will work just fine.
- I'd like to see some constants or command-line arguments used in place of the many hardcoded strings. E.g. tag names and XPaths could be constants, while the project directory's path and credentials string could be provided from the command line. Either one would make the script more maintainable and reusable.
- Don't insert the new credentials tag as a raw string; use doc.create_element instead.
- The if...else isn't strictly necessary; you can just look for and remove any existing credentialsId tags, and then insert your own (though that may mess with the file's whitespace a little).
- The Ruby convention is to use 2 spaces of indentation, not 4 (and not tabs).

My Nokogiri is a bit rusty (no pun intended1), but I imagine this should work too. I've left the project path and the credentials string hard-coded because I don't know if you'll want those from ARGV or something.
require 'nokogiri'

CONFIG_FILE_GLOB = ".jenkins/jobs/*/config.xml"
LOCATION_XPATH = "/project/scm/locations/hudson.scm.SubversionSCM_-ModuleLocation"
CREDENTIALS_TAG = "credentialsId"

Dir["somepathhere/#{CONFIG_FILE_GLOB}"].each do |filename|
  doc = Nokogiri::XML(File.read(filename)) # simplified file read

  # create a credentials element ahead of time
  credentials = doc.create_element(CREDENTIALS_TAG, "somevalue")

  doc.xpath(LOCATION_XPATH).each do |location|
    location.xpath(CREDENTIALS_TAG).remove # remove any existing credentials
    location.add_child(credentials.clone)  # insert a new one
  end

  File.write(filename, doc.to_xml) # slightly simpler file write
end

Note that in order to insert the new element properly, we must always insert a clone. If you just use add_child(credentials), and there are multiple locations, the element will end up in the last of those locations, because add_child will add or re-parent the element. So the first add_child call would add it, and subsequent ones would move it, since it's the same object. Hence the need for clone.

1) because nokogiri means hand saw
{ "domain": "codereview.stackexchange", "id": 9197, "tags": "ruby, xml" }
Potential field in space of a "thick" dielectric spherical shell affected by a uniform electric field
Question: A spherical, linear, homogeneous dielectric shell has an inner radius $a$ and an outer radius $b$ and is under the influence of a uniform electric field; what is the potential in all of its regions? What I have been able to deduce: First, due to the uniform electric field I am assuming that the polarization $\vec P$ is constant across the dielectric. Second, because there is no enclosed charge in $r<a$, the electric field there is zero and thus the potential is some constant on that whole region. Then at large distance $r\gg b$ the potential will simply be a function of the uniform field, which in polar coordinates is $V_{r\gg b}=-E_0 r \cos\vartheta$. I think my biggest problem is that I don't really know how to quantify the polarization $\vec P$, which I suppose I could use to find the surface charge density via $\sigma_P=\vec P\cdot\hat n$ and then integrate. Do I also have to consider the compounding effect in which the electric field induces polarization, which in turn alters the field and hence the polarization itself? It is rather confusing and I don't think I have been able to fully wrap my head around it. Answer: The thing is, you want to solve a boundary value problem here, but you have trouble setting the boundary conditions because you don't know $P$. You need $\sigma$ to give you a boundary condition, and it is determined by $P$. You do know that $\sigma = \epsilon_{0} \chi_{e} \vec E \cdot \hat n$, which is equal to $$\epsilon_{0} \frac{\partial V}{\partial n}\bigg|_{below} - \epsilon_{0} \frac{\partial V}{\partial n}\bigg|_{above}$$ on each of the two surfaces.
Simply exploit this: the boundary conditions now read: ${V_{above}}\big|_{r=a} = {V_{below}}\big|_{r=a}$ and ${V_{above}}\big|_{r=b} = {V_{below}}\big|_{r=b}$. A) $\epsilon_{0} \frac{\partial V}{\partial n}\big|_{below} - \epsilon_{0} \frac{\partial V}{\partial n}\big|_{above}= \sigma$ (for both surfaces; make sure you get the signs and the inputs correct). B) $\sigma =\sigma_{b} =-\epsilon_{0} \chi_{e} \frac{\partial V_{above}}{\partial n}$ (again for both surfaces; make sure you get the signs and inputs correct, and link A and B, which yields another equation). (Also, there are no free charges here, hence the subscript $b$.) Finally, the potential as $r \to \infty$ is $-E_{0} r \cos\theta$. I guess this is enough to solve the problem (intuition tells me you'll have 3 or 4 unknown coefficients in the spherical-coordinates expansion). Good luck!
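To make the route concrete: since the geometry is azimuthally symmetric, the standard approach is to expand the potential in Legendre polynomials in each of the three regions. This ansatz is the usual textbook one, not taken verbatim from the answer:

```latex
% General azimuthally symmetric solutions of Laplace's equation:
V_{r<a}(r,\theta)   = \sum_{l=0}^{\infty} A_l\, r^l\, P_l(\cos\theta) \\
V_{a<r<b}(r,\theta) = \sum_{l=0}^{\infty} \left( B_l\, r^l + \frac{C_l}{r^{l+1}} \right) P_l(\cos\theta) \\
V_{r>b}(r,\theta)   = -E_0\, r\cos\theta + \sum_{l=0}^{\infty} \frac{D_l}{r^{l+1}}\, P_l(\cos\theta)
```

Imposing continuity of $V$ and the normal-derivative jump conditions at $r=a$ and $r=b$ eliminates every multipole except $l=1$, leaving exactly the handful of unknown coefficients ($A_1$, $B_1$, $C_1$, $D_1$) that the answer anticipates.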
{ "domain": "physics.stackexchange", "id": 96452, "tags": "homework-and-exercises, electrostatics, electric-fields, polarization, dielectric" }
Reducing problems to solve easier problems
Question: Is there any instance where a problem $A$ can be reduced to a problem $B$ where $B$ is easier to solve than $A$? I've been learning about NP-hardness recently and it seems that the answer is no. Whenever we show $A \leq_p B$, it's also said that $B$ must therefore be at least as hard as $A$. If this is always the case, then is reducing one problem to another only useful for proving NP-hardness/NP-completeness? Is there no way to leverage reductions to find an easier problem to solve than the one you originally start out with? Answer: Roughly speaking, no. One way to solve $A$ is by using the method for solving $B$ combined with the reduction from $A$ to $B$. So if that reduction is efficient, then solving $A$ can't be too much harder than solving $B$. If I'm pedantic, yes, $B$ might be a little bit easier to solve than $A$. If the reduction is very expensive (takes a long time, or significantly increases the size of the problem instance), then $B$ might be a lot easier to solve than $A$. Normally in the study of NP-completeness we address this by requiring the reduction to run in polynomial time, and we ignore any differences that are at most a polynomial factor.
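To make the "solve A via a reduction to B" idea concrete, here is a toy sketch (the problems and names are illustrative, not from the question): detecting duplicates (problem A) reduces to sorting (problem B), and since the reduction's extra work is only linear, A inherits B's $O(n \log n)$ cost.

```python
def has_duplicates(xs):
    """Solve 'does xs contain a repeated value?' (problem A) by
    reducing it to sorting (problem B): after sorting, any
    duplicate values must sit next to each other."""
    ys = sorted(xs)                                 # invoke the solver for B
    return any(a == b for a, b in zip(ys, ys[1:]))  # O(n) post-processing

print(has_duplicates([3, 1, 4, 1, 5]))  # True
print(has_duplicates([2, 7, 1, 8]))     # False
```

This also illustrates the answer's point from the other direction: because the reduction itself is cheap, problem A can be at most about as hard as problem B.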
{ "domain": "cs.stackexchange", "id": 20676, "tags": "np-complete, reductions, np-hard, np" }
Thread safety and rospy Timer
Question: I'm a little confused by how rospy.Timer works with regards to thread-safety. My understanding of callbacks in rospy in general is they're essentially handled sequentially during calls to Sleep and Spin, meaning they're implicitly threadsafe and I can modify the same data from multiple callbacks within a node without locks or other safeguards. Timers represent a totally separate thread of execution, albeit one that still runs off of a Rate.sleep() loop. Is it safe to work with data that a Timer interacts with, or do I need to start adding locks in any nodes that have both Subscriptions and Timers? Originally posted by etappan on ROS Answers with karma: 53 on 2015-11-11 Post score: 0 Answer: My understanding of callbacks in rospy in general is they're essentially handled sequentially during calls to Sleep and Spin, meaning they're implicitly threadsafe and I can modify the same data from multiple callbacks within a node without locks or other safeguards. Unfortunately, no. Unlike roscpp, rospy creates a new thread for every Subscriber and Timer. In fact, rospy.spin() is just an infinite loop of half-second waits. As such, callbacks in rospy are not threadsafe. You should use locks in any rospy node where multiple callbacks of any kind might interact. Originally posted by Ed Venator with karma: 1185 on 2017-08-10 This answer was ACCEPTED on the original site Post score: 3
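Since rospy gives every Subscriber and Timer its own thread, shared state should be guarded with a lock. A minimal sketch of that pattern, using plain threading to stand in for rospy (the callback names and message content are hypothetical):

```python
import threading

class NodeState:
    """Shared state touched by both a subscriber callback and a timer
    callback. rospy runs each callback in its own thread, so every
    access goes through a lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self._count = 0

    def scan_callback(self, increment):   # would be a rospy Subscriber callback
        with self._lock:
            self._count += increment

    def timer_callback(self):             # would be a rospy.Timer callback
        with self._lock:
            return self._count

state = NodeState()
# Four "callback threads" hammer the shared counter concurrently:
workers = [threading.Thread(
               target=lambda: [state.scan_callback(1) for _ in range(1000)])
           for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(state.timer_callback())  # 4000
```

With the lock, the final count is exactly 4 × 1000 regardless of thread interleaving; without it, read-modify-write races could silently drop increments.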
{ "domain": "robotics.stackexchange", "id": 22973, "tags": "rospy, timer" }
shadow_hand spawn on gazebo electric don't move
Question: Hi, I have a haptic glove, and I'm using it to move the fingers of the Shadow Hand spawned in Gazebo on ROS Electric. At first I was working on Fuerte and there was no problem, everything was good, but I had to go down to Electric for the collision part that I need, and on Fuerte the Gazebo version is broken. The sensor values are published on these topics: sh_mfj3_mixed_position_velocity_controller/command sh_mfj0_mixed_position_velocity_controller/command sh_lfj3_mixed_position_velocity_controller/command sh_lfj0_mixed_position_velocity_controller/command sh_ffj3_mixed_position_velocity_controller/command sh_ffj0_mixed_position_velocity_controller/command and likewise sh_xf#_mixed_position_velocity_controller/command for all five fingers. To spawn the Shadow Hand I use roslaunch sr_hand gazebo_hand.launch EDIT: I'm thinking there is an error, or something is missing here. When I run this on Fuerte, rxgraph shows me the communication node and the publishing on the controller topics, as seen in the image, but on Electric this is happening instead. Here is how the hand looks after some minutes; nothing happens while rostopic pub is running. Log when running roslaunch sr_hand gazebo_hand.launch This code block was moved to the following github gist: https://gist.github.com/answers-se-migration-openrobotics/ff663bbf8666674767dfb29736cdaf8d Output of rosmake shadow_robot: virtualtouch@ubuntu:~/shadow$ rosmake shadow_robot [ rosmake ] rosmake starting... 
[ rosmake ] Packages requested are: ['shadow_robot'] [ rosmake ] Logging to directory /home/virtualtouch/.ros/rosmake/rosmake_output-20130212-115421 [ rosmake ] Expanded args ['shadow_robot'] to: ['sr_hand', 'sr_mechanism_model', 'sr_mechanism_controllers', 'sr_move_arm', 'sr_utilities', 'sr_hardware_interface', 'sr_kinematics', 'sr_movements', 'sr_example', 'sr_tactile_sensors', 'sr_description', 'sr_robot_msgs', 'sr_gazebo_plugins'] [rosmake-0] Starting >>> catkin [ make ] [rosmake-1] Starting >>> hardware_interface [ make ] [rosmake-2] Starting >>> sr_kinematics [ make ] [rosmake-0] Finished <<< catkin ROS_NOBUILD in package catkin No Makefile in package catkin [rosmake-3] Starting >>> sr_move_arm [ make ] [rosmake-0] Starting >>> rospack [ make ] [rosmake-1] Finished <<< hardware_interface ROS_NOBUILD in package hardware_interface [rosmake-1] Starting >>> genmsg [ make ] [rosmake-0] Finished <<< rospack ROS_NOBUILD in package rospack No Makefile in package rospack [rosmake-0] Starting >>> roslib [ make ] [rosmake-1] Finished <<< genmsg ROS_NOBUILD in package genmsg No Makefile in package genmsg [rosmake-1] Starting >>> genlisp [ make ] [rosmake-0] Finished <<< roslib ROS_NOBUILD in package roslib No Makefile in package roslib [rosmake-0] Starting >>> genpy [ make ] [rosmake-1] Finished <<< genlisp ROS_NOBUILD in package genlisp No Makefile in package genlisp [rosmake-1] Starting >>> gencpp [ make ] [rosmake-0] Finished <<< genpy ROS_NOBUILD in package genpy No Makefile in package genpy [rosmake-0] Starting >>> cpp_common [ make ] [rosmake-1] Finished <<< gencpp ROS_NOBUILD in package gencpp No Makefile in package gencpp [rosmake-1] Starting >>> message_generation [ make ] [rosmake-0] Finished <<< cpp_common ROS_NOBUILD in package cpp_common No Makefile in package cpp_common [rosmake-0] Starting >>> rostime [ make ] [rosmake-1] Finished <<< message_generation ROS_NOBUILD in package message_generation No Makefile in package message_generation [rosmake-1] Starting 
>>> rosunit [ make ] [rosmake-1] Finished <<< rosunit ROS_NOBUILD in package rosunit No Makefile in package rosunit [rosmake-1] Starting >>> roslang [ make ] [rosmake-1] Finished <<< roslang ROS_NOBUILD in package roslang No Makefile in package roslang [rosmake-1] Starting >>> xmlrpcpp [ make ] [rosmake-1] Finished <<< xmlrpcpp ROS_NOBUILD in package xmlrpcpp No Makefile in package xmlrpcpp [rosmake-1] Starting >>> rosgraph [ make ] [rosmake-0] Finished <<< rostime ROS_NOBUILD in package rostime 4 Active 13/118 Complete ] No Makefile in package rostime [rosmake-1] Finished <<< rosgraph ROS_NOBUILD in package rosgraph No Makefile in package rosgraph [rosmake-1] Starting >>> rosparam [ make ] [rosmake-1] Finished <<< rosparam ROS_NOBUILD in package rosparam No Makefile in package rosparam [rosmake-1] Starting >>> rosmaster [ make ] [rosmake-1] Finished <<< rosmaster ROS_NOBUILD in package rosmaster No Makefile in package rosmaster [rosmake-1] Starting >>> rosclean [ make ] [rosmake-0] Starting >>> roscpp_traits [ make ] [rosmake-0] Finished <<< roscpp_traits ROS_NOBUILD in package roscpp_traits No Makefile in package roscpp_traits [rosmake-0] Starting >>> roscpp_serialization [ make ] [rosmake-0] Finished <<< roscpp_serialization ROS_NOBUILD in package roscpp_serialization No Makefile in package roscpp_serialization [rosmake-0] Starting >>> message_runtime [ make ] [rosmake-0] Finished <<< message_runtime ROS_NOBUILD in package message_runtime No Makefile in package message_runtime [rosmake-0] Starting >>> std_msgs [ make ] [rosmake-0] Finished <<< std_msgs ROS_NOBUILD in package std_msgs No Makefile in package std_msgs [rosmake-0] Starting >>> rosgraph_msgs [ make ] [rosmake-0] Finished <<< rosgraph_msgs ROS_NOBUILD in package rosgraph_msgs No Makefile in package rosgraph_msgs [rosmake-0] Starting >>> rosconsole [ make ] [rosmake-0] Finished <<< rosconsole ROS_NOBUILD in package rosconsole No Makefile in package rosconsole [rosmake-0] Starting >>> roscpp [ make ] 
[rosmake-1] Finished <<< rosclean ROS_NOBUILD in package rosclean No Makefile in package rosclean [rosmake-0] Finished <<< roscpp ROS_NOBUILD in package roscpp No Makefile in package roscpp [rosmake-0] Starting >>> rospy [ make ] [rosmake-1] Starting >>> std_srvs [ make ] [rosmake-0] Finished <<< rospy ROS_NOBUILD in package rospy No Makefile in package rospy [rosmake-1] Finished <<< std_srvs ROS_NOBUILD in package std_srvs No Makefile in package std_srvs [rosmake-1] Starting >>> geometry_msgs [ make ] [rosmake-0] Starting >>> actionlib_msgs [ make ] [rosmake-0] Finished <<< actionlib_msgs ROS_NOBUILD in package actionlib_msgs No Makefile in package actionlib_msgs [rosmake-1] Finished <<< geometry_msgs ROS_NOBUILD in package geometry_msgs No Makefile in package geometry_msgs [rosmake-0] Starting >>> trajectory_msgs [ make ] [rosmake-0] Finished <<< trajectory_msgs ROS_NOBUILD in package trajectory_msgs No Makefile in package trajectory_msgs [rosmake-0] Starting >>> pr2_controllers_msgs [ make ] [rosmake-0] Finished <<< pr2_controllers_msgs ROS_NOBUILD in package pr2_controllers_msgs [rosmake-0] Starting >>> pr2_mechanism_msgs [ make ] [rosmake-0] Finished <<< pr2_mechanism_msgs ROS_NOBUILD in package pr2_mechanism_msgs No Makefile in package pr2_mechanism_msgs [rosmake-0] Starting >>> rosout [ make ] [rosmake-0] Finished <<< rosout ROS_NOBUILD in package rosout No Makefile in package rosout [rosmake-1] Starting >>> sensor_msgs [ make ] [rosmake-1] Finished <<< sensor_msgs ROS_NOBUILD in package sensor_msgs No Makefile in package sensor_msgs [rosmake-1] Starting >>> sr_robot_msgs [ make ] [rosmake-0] Starting >>> roslaunch [ make ] [rosmake-0] Finished <<< roslaunch ROS_NOBUILD in package roslaunch No Makefile in package roslaunch [rosmake-0] Starting >>> rostest [ make ] [rosmake-0] Finished <<< rostest ROS_NOBUILD in package rostest No Makefile in package rostest [rosmake-0] Starting >>> message_filters [ make ] [rosmake-0] Finished <<< message_filters ROS_NOBUILD 
in package message_filters No Makefile in package message_filters [rosmake-0] Starting >>> angles [ make ] [rosmake-0] Finished <<< angles ROS_NOBUILD in package angles No Makefile in package angles [rosmake-0] Starting >>> tf [ make ] [rosmake-0] Finished <<< tf ROS_NOBUILD in package tf No Makefile in package tf [rosmake-0] Starting >>> diagnostic_msgs [ make ] [rosmake-0] Finished <<< diagnostic_msgs ROS_NOBUILD in package diagnostic_msgs No Makefile in package diagnostic_msgs [rosmake-0] Starting >>> topic_tools [ make ] [rosmake-0] Finished <<< topic_tools ROS_NOBUILD in package topic_tools No Makefile in package topic_tools [rosmake-0] Starting >>> rosbag [ make ] [rosmake-0] Finished <<< rosbag ROS_NOBUILD in package rosbag No Makefile in package rosbag [rosmake-0] Starting >>> rosmsg [ make ] [rosmake-0] Finished <<< rosmsg ROS_NOBUILD in package rosmsg No Makefile in package rosmsg [rosmake-0] Starting >>> rosservice [ make ] [rosmake-0] Finished <<< rosservice ROS_NOBUILD in package rosservice No Makefile in package rosservice [rosmake-0] Starting >>> dynamic_reconfigure [ make ] [rosmake-0] Finished <<< dynamic_reconfigure ROS_NOBUILD in package dynamic_reconfigure No Makefile in package dynamic_reconfigure [rosmake-0] Starting >>> diagnostic_updater [ make ] [rosmake-0] Finished <<< diagnostic_updater ROS_NOBUILD in package diagnostic_updater No Makefile in package diagnostic_updater [rosmake-0] Starting >>> self_test [ make ] [rosmake-0] Finished <<< self_test ROS_NOBUILD in package self_test No Makefile in package self_test [rosmake-0] Starting >>> console_bridge [ make ] [rosmake-0] Finished <<< console_bridge ROS_NOBUILD in package console_bridge No Makefile in package console_bridge [rosmake-0] Starting >>> urdfdom_headers [ make ] [rosmake-0] Finished <<< urdfdom_headers ROS_NOBUILD in package urdfdom_headers No Makefile in package urdfdom_headers [rosmake-0] Starting >>> collada_parser [ make ] [rosmake-0] Finished <<< collada_parser ROS_NOBUILD 
in package collada_parser No Makefile in package collada_parser [rosmake-0] Starting >>> rosconsole_bridge [ make ] [rosmake-0] Finished <<< rosconsole_bridge ROS_NOBUILD in package rosconsole_bridge No Makefile in package rosconsole_bridge [rosmake-0] Starting >>> urdfdom [ make ] [rosmake-0] Finished <<< urdfdom ROS_NOBUILD in package urdfdom No Makefile in package urdfdom [rosmake-0] Starting >>> urdf [ make ] [rosmake-0] Finished <<< urdf ROS_NOBUILD in package urdf No Makefile in package urdf [rosmake-0] Starting >>> gazebo_msgs [ make ] [rosmake-0] Finished <<< gazebo_msgs ROS_NOBUILD in package gazebo_msgs [rosmake-0] Starting >>> pr2_hardware_interface [ make ] [rosmake-0] Finished <<< pr2_hardware_interface ROS_NOBUILD in package pr2_hardware_interface [rosmake-0] Starting >>> sr_hardware_interface [ make ] [ rosmake ] All 21 linesr_kinematics: 2.2 sec ] [ sr_move_... [ 4 Active 55/118 Complete ] {------------------------------------------------------------------------------- mkdir -p bin cd build && cmake -Wdev -DCMAKE_TOOLCHAIN_FILE=/opt/ros/groovy/share/ros/core/rosbuild/rostoolchain.cmake .. [rosbuild] Building package sr_kinematics Failed to invoke /opt/ros/groovy/bin/rospack deps-manifests sr_kinematics [rospack] Error: package/stack 'sr_kinematics' depends on non-existent package 'kinematics_msgs' and rosdep claims that it is not a system dependency. Check the ROS_PACKAGE_PATH or try calling 'rosdep update' CMake Error at /opt/ros/groovy/share/ros/core/rosbuild/public.cmake:129 (message): Failed to invoke rospack to get compile flags for package 'sr_kinematics'. Look above for errors from rospack itself. Aborting. Please fix the broken dependency! Call Stack (most recent call first): /opt/ros/groovy/share/ros/core/rosbuild/public.cmake:203 (rosbuild_invoke_rospack) CMakeLists.txt:12 (rosbuild_init) -- Configuring incomplete, errors occurred! 
-------------------------------------------------------------------------------} [ rosmake ] Output from build of package sr_kinematics written to: [ rosmake ] /home/virtualtouch/.ros/rosmake/rosmake_output-20130212-115421/sr_kinematics/build_output.log [rosmake-2] Finished <<< sr_kinematics [FAIL] [ 2.24 seconds ] [ rosmake ] Halting due to failure in package sr_kinematics. [ rosmake ] Waiting for other threads to complete. [ rosmake ] All 21 linesr_move_arm: 2.3 sec ] [ sr_robot_m... [ 3 Active 55/118 Complete ] {------------------------------------------------------------------------------- mkdir -p bin cd build && cmake -Wdev -DCMAKE_TOOLCHAIN_FILE=/opt/ros/groovy/share/ros/core/rosbuild/rostoolchain.cmake .. [rosbuild] Building package sr_move_arm Failed to invoke /opt/ros/groovy/bin/rospack deps-manifests sr_move_arm [rospack] Error: package/stack 'sr_move_arm' depends on non-existent package 'arm_navigation_msgs' and rosdep claims that it is not a system dependency. Check the ROS_PACKAGE_PATH or try calling 'rosdep update' CMake Error at /opt/ros/groovy/share/ros/core/rosbuild/public.cmake:129 (message): Failed to invoke rospack to get compile flags for package 'sr_move_arm'. Look above for errors from rospack itself. Aborting. Please fix the broken dependency! Call Stack (most recent call first): /opt/ros/groovy/share/ros/core/rosbuild/public.cmake:203 (rosbuild_invoke_rospack) CMakeLists.txt:12 (rosbuild_init) -- Configuring incomplete, errors occurred! -------------------------------------------------------------------------------} [ rosmake ] Output from build of package sr_move_arm written to: [ rosmake ] /home/virtualtouch/.ros/rosmake/rosmake_output-20130212-115421/sr_move_arm/build_output.log [rosmake-3] Finished <<< sr_move_arm [FAIL] [ 2.34 seconds ] [ rosmake ] Halting due to failure in package sr_move_arm. [ rosmake ] Waiting for other threads to complete. 
[rosmake-0] Finished <<< sr_hardware_interface [PASS] [ 9.12 seconds ] [rosmake-1] Finished <<< sr_robot_msgs [PASS] [ 66.42 seconds ] [ rosmake ] Results: [ rosmake ] Built 59 packages with 2 failures. [ rosmake ] Summary output to directory [ rosmake ] /home/virtualtouch/.ros/rosmake/rosmake_output-20130212-115421 Originally posted by monidiaz on ROS Answers with karma: 92 on 2013-02-07 Post score: 0 Answer: Install arm_navigation stack and you should be able to build: sudo apt-get install ros-electric-arm-navigation rosmake shadow_robot Originally posted by Ugo with karma: 1620 on 2013-02-13 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 12788, "tags": "gazebo, ros-electric" }
How to interpret band structures
Question: I'm currently taking a Solid State Physics class, and am currently reading about the quantum mechanical description of solids. I then came across the following figure: It's supposed to be the band structure for aluminium. My question is basically: how do I interpret these band structures? I can't even see why they cut off the region from $X$ to $\Gamma$ at the dotted line in the first figure, and not in the second. Why is the last bit of $X$ to $\Gamma$ that you see in figure 2 not important in figure 1? I understand that $X$, $\Gamma$, $K$, $L$ and $W$ are points of symmetry on the Brillouin zone. But what does it mean for the electron(s)? Only that they have higher energies at some points? I'm just having some trouble understanding what this band structure tells me, and what I'm able to see/do with it. So can anyone maybe explain, maybe even simply, what I actually have to understand when I see this? Thanks in advance. Answer: As you have already stated, the polyhedron in the upper part of the figure is plotted in 3D k-space and visualizes the first Brillouin zone. The dotted line nicely visualizes a closed (one-dimensional) path in k-space that -- by convention -- runs through the special points you already mentioned. Now, this closed path is precisely the horizontal axis in the lower part of the figure (this is why it starts and ends with $\Gamma$), and what is plotted in this figure is the energy dispersion $E(\vec{k})$. As stated in the answer of user D-K, this is done to capture the important features in a 2D plot, because we don't know how to conveniently draw a 4D plot on a sheet of paper ;) .
{ "domain": "physics.stackexchange", "id": 10928, "tags": "quantum-mechanics, solid-state-physics, electronic-band-theory" }
Vertical alignment with CSS
Question: I'm starting up a new project, and I've always had an issue getting vertical alignment right in CSS. Is there anything I could do better than what I've currently come up with? Replacing the header and nav with divs, and changing the rgba values to hex values, this seems to work all the way back to IE7, which I'm happy with.

CSS

html, * {
    -moz-box-sizing:border-box;
    -webkit-box-sizing:border-box;
    -ms-box-sizing:border-box;
    -o-box-sizing:border-box;
    box-sizing:border-box;
}
body {
    margin:0;
    padding:0;
}
nav {
    background-color:rgba(0, 0, 0, 0.1);
}
nav a.tab {
    text-align:center;
    padding:0 15px;
    color:#FFFFFF;
    display:inline-table;
    vertical-align:middle;
    text-decoration:none;
}
nav a.tab > span {
    display:table-cell;
    vertical-align:middle;
}
nav a.tab:hover {
    background-color:rgba(0, 0, 0, 0.1);
    text-decoration:underline;
}
header {
    background-color:#27AE60;
    font-size:1.2em;
    color:#FFF;
    display:table;
    width:100%;
}
header, header h1, header nav, header nav a.tab {
    height:80px;
    vertical-align:middle;
}
header nav {
    display:inline-block;
}
header h1 {
    margin:0 20px;
    display:inline;
    text-shadow:2px 2px 2px #333333;
}

HTML

<header>
    <h1>Projects</h1>
    <nav>
        <a href="/" class="tab active">
            <span>Home</span>
        </a>
        <a href="/projects" class="tab">
            <span>Projects</span>
        </a>
        <a href="/" class="tab">
            <span>Contact</span>
        </a>
    </nav>
</header>

jsFiddle

Answer: Your code is good - both the CSS and the HTML validate at the validators: HTML validator, CSS validator. However, you should wrap your HTML code like this:

<!DOCTYPE html>
<html>
<head>
    <meta charset="UTF-8">
    <title>Page title here...</title>
    <!-- other links here, such as external CSS/JS files -->
</head>
<body>
    <!-- your code here -->
</body>
</html>

I really like how you use the HTML5 header element instead of the old-style <div class="header">. When people write CSS, they often format it slightly differently, like this:

nav a.tab > span {
    display: table-cell;
    vertical-align: middle;
}

It is just easier to read this way. Also, you should give each declaration its own line, not put many on the same line like this:

vertical-align:middle; text-decoration:none;

Otherwise, this looks good to me.
{ "domain": "codereview.stackexchange", "id": 12155, "tags": "html, css" }
Kleppner problem 6.33 Confusion
Question: A cone of height $h$ and base radius $R$ is free to rotate around a fixed vertical axis. It has a thin groove cut in its surface. The cone is set rotating freely with angular speed $\omega_0$, and a small block of mass $m$ is released at the top of the frictionless groove and allowed to slide under gravity. Assume that the block stays in the groove. Take the moment of inertia of the cone around the vertical axis to be $I_0$. (a) What is the angular speed of the cone when the block reaches the bottom? (b) Find the speed of the block in inertial space when it reaches the bottom. In (a) it is clear that the angular momentum of the system about the vertical axis is conserved, because there is no torque along the vertical axis (though there is a torque due to gravity perpendicular to it)! So I use conservation of angular momentum to get the final angular velocity of both the block and the cone. But in (b) a problem arises: by my first line of reasoning, when the block reaches the bottom it has two velocity components, (1) due to the final angular velocity of the combined system and (2) due to gravity, namely $\sqrt{2gh}$ along the groove, so the net velocity should be the combination of these two. On the other hand, a second line of reasoning says that the total energy of the system is conserved, because there is no dissipative force at all! So we can use conservation of energy to get the final velocity. Solving it this way, I get a different result, and in the solution the second way is preferred; so I really cannot understand what is wrong with my first reasoning. Answer: Your assumption that the final velocity along the groove is $\sqrt{2gh}$ is incorrect. If you analyze in the frame of reference of the cone, you'll see that the centrifugal force is also doing some work on the block, along with gravity. Analyzing it in an inertial frame gives the same result (obviously), but in a more roundabout way. 
In any case, you'll have to use calculus to solve it this way, as the centrifugal force is constantly varying. Instead of going through all this, using conservation of energy is the cleanest method. NOTE: In a comment on your post, you said that you couldn't find a force other than gravity that could increase the kinetic energy along the groove. As the cone is rotating, the direction along which you are applying the work-energy theorem, i.e. the 'along the groove' direction, is constantly changing. In fact it is rotating, which means that your frame of reference is non-inertial. This is why you have to consider a pseudo-force, i.e. the centrifugal force, while writing $\Delta K = \int F \cdot ds$.
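The two conservation laws can be checked numerically. Here is a sketch with made-up values for $I_0$, $m$, $R$, $h$, $\omega_0$ (the problem gives no numbers); it also confirms the answer's point that the block's final speed exceeds the naive $\sqrt{2gh}$:

```python
import math

# Illustrative numbers only -- not from the problem statement.
I0, m, R, h, w0, g = 0.02, 0.1, 0.2, 0.5, 10.0, 9.8

# (a) Angular momentum about the vertical axis is conserved:
#     I0 * w0 = (I0 + m * R**2) * w_f
w_f = I0 * w0 / (I0 + m * R**2)

# (b) Energy is conserved (frictionless groove):
#     m*g*h + 0.5*I0*w0**2 = 0.5*I0*w_f**2 + 0.5*m*v**2
v = math.sqrt(2 * g * h + (I0 / m) * (w0**2 - w_f**2))

print(w_f)                       # final angular speed of the cone
print(v, math.sqrt(2 * g * h))   # v exceeds sqrt(2gh), as the answer argues
```

The cone slows down ($\omega_f < \omega_0$), so the energy it gives up goes into the block, which is exactly why the block ends up faster than $\sqrt{2gh}$.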
{ "domain": "physics.stackexchange", "id": 71706, "tags": "homework-and-exercises, newtonian-mechanics, classical-mechanics, solid-mechanics, contact-mechanics" }
Shortest path where weights are computationally expensive to calculate
Question: Suppose we have a function, CalculateEdgeWeight, which is computationally expensive. We want to find the shortest path between two nodes $s$ and $t$ in a simple edge-weighted digraph $G=(V,E)$ where the weight of $ij \in E$ is calculated with CalculateEdgeWeight($ij$). Is there a shortest path algorithm that does not require knowledge of all edge weights by the end of the algorithm, so that the use of CalculateEdgeWeight is minimized? What if we have bounds on the possible weights? Answer: No, you may have to examine all of the edges. Consider a graph where $V=\{s,t,v_1,\ldots,v_n\}$ and $E=\{(s,v_1),\ldots,(s,v_n),(v_1,t),\ldots,(v_n,t)\}$. If $W(s,v_i)=1$ for all $i$, the problem is finding $i$ so that $W(v_i,t)$ is minimal. And we cannot find the minimal $W(v_i,t)$ without examining all of them. Dijkstra's algorithm and $A^*$ use a "greedy" strategy, in which they do not necessarily examine all edges but only those on paths from the source that are at most as long as the shortest path. You could further improve this by using bidirectional (meet-in-the-middle) variants. You could also use $A^*$ with a heuristic that takes any bounds on the edge weights into account. If you have a lower bound on the edge weights, you can first use BFS to compute the minimum-length path, and then use that length times the lower bound to get a lower bound on the actual distance from any vertex to the goal.
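A sketch of the "greedy" strategy the answer mentions: Dijkstra's algorithm that calls CalculateEdgeWeight lazily, only for edges leaving vertices it actually settles, and memoizes every call so no edge is ever evaluated twice. The graph and weight values below are hypothetical stand-ins:

```python
import heapq

def calculate_edge_weight(u, v):
    """Stand-in for the expensive weight function (values are made up)."""
    weights = {("s", "a"): 1, ("s", "b"): 4, ("a", "b"): 1,
               ("a", "t"): 5, ("b", "t"): 1}
    return weights[(u, v)]

def lazy_dijkstra(adj, s, t):
    """Return (shortest distance s->t, number of weight evaluations).
    Weights are computed on demand and cached."""
    cache = {}
    def w(u, v):
        if (u, v) not in cache:
            cache[(u, v)] = calculate_edge_weight(u, v)
        return cache[(u, v)]

    dist, pq, settled = {s: 0}, [(0, s)], set()
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d, len(cache)       # done before touching farther edges
        if u in settled:
            continue
        settled.add(u)
        for v in adj.get(u, []):       # weights evaluated only here
            nd = d + w(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return None, len(cache)

adj = {"s": ["a", "b"], "a": ["b", "t"], "b": ["t"]}
print(lazy_dijkstra(adj, "s", "t"))  # shortest distance is 3
```

On this tiny graph every edge still gets evaluated, but on larger graphs edges beyond the shortest-path "frontier" never trigger a call, which is exactly the saving the answer describes.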
{ "domain": "cs.stackexchange", "id": 4082, "tags": "shortest-path" }
Voltage divider example
Question: I have the following voltage divider circuit: Now, my goal is to find $U_L$. Although I already know the solution ($U_L = 0.5\,\mathrm{V}$), I don't understand how to calculate it. What is the easiest way to solve this? Any help would be greatly appreciated. Answer: What is the easiest way to solve this? The easiest way is unlikely to be the most illuminating way. What I propose is that you first work the problem from basic circuit principles and then explore more powerful solution methods such as, e.g., node-voltage analysis, the Thevenin equivalent circuit, etc. Start from the right-hand side with Ohm's law: $$I_6 = \frac{U_L}{R_6} = \frac{U_L}{5\,\mathrm{k}\Omega}$$ Now, apply Kirchhoff's current law (KCL) to write: $$I_5 = I_6$$ and then Ohm's law to write: $$V_5 = I_5 \cdot R_5 = I_6 \cdot R_5 = \frac{U_L}{5\,\mathrm{k}\Omega}\cdot 5\,\mathrm{k}\Omega = U_L$$ Now, apply Kirchhoff's voltage law (KVL) to write: $$V_4 = V_5 + V_6 = U_L + U_L = 2U_L$$ Repeat this process, 'climbing the ladder' to the left, just as I've started to do, until you finally reach the voltage source; at that point you will have an expression for $U_L$.
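The "climb the ladder" procedure is equivalent to repeated series/parallel reduction. Here is a sketch for a hypothetical three-stage resistor ladder; the actual schematic isn't reproduced here, so the source voltage and R1-R4 are guesses, chosen (along with $R_5 = R_6 = 5\,\mathrm{k}\Omega$, which the answer's algebra implies) so the result comes out to the stated 0.5 V:

```python
def parallel(a, b):
    """Equivalent resistance of two resistors in parallel."""
    return a * b / (a + b)

# Hypothetical values (ohms / volts) -- the real figure isn't shown here.
Us = 4.0
R1, R2, R3, R4, R5, R6 = 5e3, 10e3, 5e3, 10e3, 5e3, 5e3

# Collapse the ladder from the right into equivalent resistances...
Req3 = R5 + R6             # R5 in series with R6
Rp2 = parallel(R4, Req3)   # R4 shunts that branch
Req2 = R3 + Rp2
Rp1 = parallel(R2, Req2)   # R2 shunts the rest
Req1 = R1 + Rp1            # total resistance seen by the source

# ...then walk back down, applying the voltage-divider rule at each stage:
V1 = Us * Rp1 / Req1       # node between R1 and R2
V2 = V1 * Rp2 / Req2       # node between R3 and R4
U_L = V2 * R6 / (R5 + R6)  # voltage across R6
print(U_L)  # 0.5
```

Each `V? = ... * R / R_total` line is one rung of the answer's ladder climb, just run in the opposite direction (source toward load).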
{ "domain": "physics.stackexchange", "id": 16931, "tags": "homework-and-exercises, electric-circuits, electrical-resistance, voltage" }
“Bananagrams” under black light?
Question: There is a game called “Bananagrams” which includes a bunch of pieces with a letter on each. It seems when I shine a black light flashlight on the letters, the “M” letters glow, but no other pieces do. All the pieces appear the same under normal lights (except of course the letters on each piece). Why would only the M’s glow, can someone explain what may be happening here? Answer: The blocks with the M came from a different batch than the others. There is no way to know why without more information. It could have been that a previous batch of M blocks was defective (e.g. incomplete letters due to a defective M die) and had to be replaced, but that is pure speculation and there could be many other explanations.
{ "domain": "physics.stackexchange", "id": 66561, "tags": "electromagnetic-radiation, home-experiment" }
Elastic pendulum: First order ODE
Question: I need to model an elastic pendulum. The spring has a spring constant $k$. Mass, nominal pendulum length and gravitational constant are taken to be one. The model equations are given: $$ \dot y_1 = y_3 $$ $$ \dot y_2 = y_4 $$ $$ \dot y_3 = -y_1\lambda(y_1,y_2) $$ $$ \dot y_4 = -y_2\lambda(y_1,y_2) -1 $$ $$ \lambda (y_1, y_2) = k\,\frac{\sqrt{y^2_1+y^2_2}-1}{\sqrt{y^2_1+y^2_2}} $$ I don't understand where these equations come from. What are $y_1, y_2, y_3, y_4$? Are these the $x,y$ coordinates and their derivatives (velocity and acceleration)? Answer: Looking at the form of the equations we can surmise that $y_1$ and $y_2$ are the horizontal and vertical position of the bob, and $y_3$ and $y_4$ are the horizontal and vertical velocity of the bob. The rate of change of the velocity is given by the elastic force (hence the $\dot{y}_3=-\lambda y_1$ etc. equations). It is extremely unhelpful to create a set of equations like this in which $m=g=1$, so that you completely lose all sense of dimensions - and meaning. If we wrote the equations as follows, you would probably be able to make sense of them: $$v_x = \dot{x}\\ v_y = \dot{y}\\ L = \sqrt{x^2+y^2}\\ \Delta L = \sqrt{x^2+y^2}-L_0\\ m \dot{v_x} = F_x = \left(m g + k \Delta L \right) \sin\theta\\ m \dot{v_y} = F_y = \left(m g + k \Delta L \right) \cos\theta$$ Simplify these for small angles, set $m=g=L_0=1$, and you get your equations. Horrible.
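To see that these first-order equations really do describe a swinging, stretching pendulum, you can integrate them numerically. A self-contained sketch (the spring constant is chosen arbitrarily); a useful sanity check is that the total energy $\tfrac12(y_3^2+y_4^2) + y_2 + \tfrac12 k\,(\sqrt{y_1^2+y_2^2}-1)^2$ stays essentially constant:

```python
import math

K = 10.0  # spring constant k -- the only free parameter, since m = g = L0 = 1

def rhs(y):
    """Right-hand side of the four first-order equations from the question."""
    y1, y2, y3, y4 = y
    r = math.hypot(y1, y2)
    lam = K * (r - 1.0) / r          # lambda(y1, y2)
    return [y3, y4, -y1 * lam, -y2 * lam - 1.0]

def rk4_step(y, h):
    """One classical Runge-Kutta step of size h."""
    k1 = rhs(y)
    k2 = rhs([yi + h/2 * ki for yi, ki in zip(y, k1)])
    k3 = rhs([yi + h/2 * ki for yi, ki in zip(y, k2)])
    k4 = rhs([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h/6 * (a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def energy(y):
    """Kinetic + gravitational + elastic energy (conserved quantity)."""
    y1, y2, y3, y4 = y
    r = math.hypot(y1, y2)
    return 0.5*(y3**2 + y4**2) + y2 + 0.5*K*(r - 1.0)**2

y = [0.8, -0.6, 0.0, 0.0]   # start on the unit circle, at rest
e0 = energy(y)
for _ in range(10000):       # integrate 10 time units with h = 1e-3
    y = rk4_step(y, 1e-3)
drift = abs(energy(y) - e0)  # should be tiny if the model is consistent
```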
{ "domain": "physics.stackexchange", "id": 39102, "tags": "simulations, differential-equations" }
When was the diameter of Titan first measured?
Question: Titan is enshrouded in a thick opaque cloud of methane: with a telescope, you can't see the moon's surface. Because of this, from a distance, you can only see the diameter of the moon + its atmosphere, not the diameter of the moon itself. When was the diameter of Titan first measured, and how? Answer: Discovery of Titan, 1655: unknown diameter. Dollfus, 1970: 4,850$\pm$300km (1). Measured by filar micrometer (2) and diskmeter / double-image micrometer (3). (Apparently a summary of earlier measurements; currently trying to find a print copy.) NASA SP-340, 1974: Summary of the above techniques; proposes settling on a 5,000km diameter until it can be measured by stellar occultation (the same process used to determine the size and shape of Ultima Thule). (4) Elliot, 1975: Measured the diameter by limb darkening / lunar occultation (i.e. by how Titan passed by Saturn). Calculated diameter somewhere between 5,132$\pm$47km and 5,832$\pm$53km. (5) Pioneer 11, 1979: Only 5,800km found in references; appears to still be using the 1975 calculation. (6) Voyager 1, 1980: Same as above. Cassini, 2004: The Cassini probe had radar which was able to "see" right through Titan's atmosphere. Although the first flyby was on July 2, 2004, it appears that the radar was not used until the second of the 45 planned flybys (7,8). This paper from 2009 combined data from all of Cassini's passes at that point. This allowed the authors to build a model of the actual surface over enough of Titan to determine its size (~5150km) and that it is slightly oblate, like the Earth. In terms of actually measuring the surface, this is probably the best answer. (1): Surfaces and interiors of planets and satellites, Dollfus, Audouin, 1970. Page 129 is apparently the specific table. (2): Lunar Occultation of Saturn. I. The Diameters of Tethys, Dione, Rhea, Titan, and Iapetus, Elliot, J.L., 1975.
{ "domain": "astronomy.stackexchange", "id": 4836, "tags": "history, size, titan, observational-astronomy" }
Why can all solutions to the simple harmonic motion equation be written in terms of sines and cosines?
Question: The defining property of SHM (simple harmonic motion) is that the force experienced at any value of displacement from the mean position is directly proportional to it and is directed towards the mean position, i.e. $F=-kx$. From this, $$m\left(\frac{d^2x}{dt^2}\right) +kx=0.$$ Then I read from this site Let us interpret this equation. The second derivative of a function of x plus the function itself (times a constant) is equal to zero. Thus the second derivative of our function must have the same form as the function itself. What readily comes to mind is the sine and cosine function. How can we assume so plainly that it should be sine or cosine only? They do satisfy the equation, but why are they brought into the picture so directly? What I want to ask is: why can the SHM displacement, velocity etc. be expressed in terms of sines and cosines? I know the "SHM is the projection of uniform circular motion" proof, but an algebraic proof would be appreciated. Answer: This follows from the uniqueness theorem for solutions of ordinary differential equations, which states that for a homogeneous linear ordinary differential equation of order $n$, there are at most $n$ linearly independent solutions. The upshot of that is that if you have a second-order ODE (like, say, the one for the harmonic oscillator) and you can construct, through whatever means you can come up with, two linearly independent solutions, then you're guaranteed that any solution of the equation will be a linear combination of your two solutions. Thus, it doesn't matter at all how it is that you come to the proposal of $\sin(\omega t)$ and $\cos(\omega t)$ as prospective solutions: all you need to do is verify that they are solutions, i.e. just plug them into the derivatives and see if the result is identically zero, and check that they're linearly independent. Once you do that, the details of how you built your solutions become completely irrelevant.
Because of this, I (and many others) generally refer to this as the Method of Divine Inspiration: I can just tell you that the solution came to me in a dream, handed over by a flying mass of spaghetti, and $-$ no matter how contrived or elaborate the solution looks $-$ if it passes the two criteria above, the fact that it is the solution is bulletproof, and no further explanation of how it was built is required. If this framework is unclear or unfamiliar, then you should sit down with an introductory textbook on differential equations. There's a substantial bit of background that makes this sort of thing clearer, and which simply doesn't fit within this site's format.
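The two checks described above (plug the candidates into the ODE; verify independence) can be carried out mechanically. A small numerical sketch with an arbitrary $\omega$, approximating the second derivative by central differences; the Wronskian $\sin(\omega t)\cos'(\omega t)-\cos(\omega t)\sin'(\omega t) = -\omega \ne 0$ confirms linear independence:

```python
import math

omega, h = 1.7, 1e-4   # arbitrary frequency; step for central differences

def x(t, A, B):
    """Candidate general solution A*sin(omega t) + B*cos(omega t)."""
    return A * math.sin(omega * t) + B * math.cos(omega * t)

def residual(t, A, B):
    """x'' + omega^2 x, with x'' approximated by central differences."""
    xpp = (x(t + h, A, B) - 2 * x(t, A, B) + x(t - h, A, B)) / h**2
    return xpp + omega**2 * x(t, A, B)

# Plugging the candidates (and an arbitrary combination) into the ODE:
max_residual = max(abs(residual(t, A, B))
                   for t in (0.0, 0.3, 1.1, 2.5)
                   for A, B in ((1, 0), (0, 1), (2.0, -0.5)))

# Linear independence via the Wronskian, evaluated at an arbitrary point:
t0 = 0.4
W = (math.sin(omega * t0) * (-omega * math.sin(omega * t0))
     - math.cos(omega * t0) * (omega * math.cos(omega * t0)))  # equals -omega
```

The residual is zero to numerical precision for every combination tried, and the Wronskian is the nonzero constant $-\omega$, which is exactly what the uniqueness theorem asks for.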
{ "domain": "physics.stackexchange", "id": 56162, "tags": "newtonian-mechanics, waves, harmonic-oscillator, spring, differential-equations" }
How to understand out of bound in the following theoretical context?
Question: Consider the following initialization step of the loop invariant for the merge procedure: Initialization: Prior to the first iteration of the loop, we have $k=p$, so that the subarray $A[p .. k - 1]$ is empty. This empty subarray contains the $k - p = 0$ smallest elements of $L$ and $R$, and since $i = j = 1$, both $L[i]$ and $R[j]$ are the smallest elements of their arrays that have not been copied back into $A$. My doubt with the above statement is that if $k=p$, then the array $A[p..p-1]$ is impossible, and hence the further argument cannot proceed, which didn't happen. Where am I going wrong? Answer: It is often useful to allow arrays of zero length. These are just empty arrays. The length of the subarray $A[i\ldots j]$ is $j-i+1$, so when $j=i-1$, we just get an array of length 0. Why is this useful? Here are some examples: An array $A[1\ldots n]$ can always be partitioned into two arrays $A[1\ldots i]$ and $A[i+1 \ldots n]$ of lengths $i,n-i$. This works even for $i \in \{0,n\}$. We can construct an array $A[1\ldots n]$ inductively using the formula $A[1\ldots i] = A[1\ldots (i-1)] \cdot A[i]$. The base case is the empty array $A[1\ldots 0]$. Having arrays of negative length makes less sense, and could lead to confusion and errors: there is no semantics under which the formula $|A[i\ldots j]| = j-i+1$ would hold for them, since array lengths are non-negative.
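A 0-based rendering of the merge step shows the empty prefix serving as the base case. This is a sketch in the spirit of the CLRS sentinel version, not the book's exact code: before the first iteration the output prefix A[p..k-1] is empty, so the invariant holds vacuously.

```python
def merge(A, p, q, r):
    """Merge the sorted subarrays A[p..q] and A[q+1..r] in place (0-based).

    Before the loop k == p, so the 'output' prefix A[p..k-1] is the empty
    subarray -- it trivially contains the k - p == 0 smallest elements,
    which is exactly the loop invariant's base case.
    """
    L = A[p:q + 1] + [float('inf')]   # sentinels, as in CLRS
    R = A[q + 1:r + 1] + [float('inf')]
    i = j = 0
    for k in range(p, r + 1):
        # invariant: A[p:k] holds the k - p smallest of L + R, in order
        if L[i] <= R[j]:
            A[k] = L[i]; i += 1
        else:
            A[k] = R[j]; j += 1

A = [2, 4, 5, 7, 1, 2, 3, 6]
merge(A, 0, 3, 7)   # -> [1, 2, 2, 3, 4, 5, 6, 7]
```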
{ "domain": "cs.stackexchange", "id": 13099, "tags": "loop-invariants" }
Biopython Phylogenetic Tree replace branch tip labels by sequence logos
Question: Having recently constructed a lot of phylogenetic trees with the module TreeConstruction from the Phylo package from Biopython, I've been asked to replace the branch tip labels by the corresponding sequence logos (which I have in the same folder). I thought that it would be more efficient to make a code to generate the logo-trees automatically, as I would have to make a lot of them. The first idea that I came up with was to see whether the functions used to build the tree had an argument to replace the branch tip labels or to remove them, which I couldn't find. Therefore, I removed the branch tip labels by setting their font size to 0 (following is the code to build the tree):

# Modules to build the tree
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo import draw
from Bio import Phylo, AlignIO
import subprocess
import matplotlib
import matplotlib.pyplot as plt

alignment = AlignIO.read('MotifSeqAligned.fasta', 'fasta')  # reading the alignment file
calculator = DistanceCalculator('ident')
dm = calculator.get_distance(alignment)  # distance matrix
constructor = DistanceTreeConstructor()
tree = constructor.nj(dm)  # build a tree from dm with the neighbour-joining algorithm
Phylo.write(tree, 'TreeToCutOff.nwk', 'newick')

plt.rc('font', size=0)         # default text size -- HERE IS THE SETTING THAT ALLOWS ME TO HIDE THE BRANCH TIP LABELS
plt.rc('axes', titlesize=14)   # fontsize of the axes title
plt.rc('xtick', labelsize=10)  # fontsize of the tick labels
plt.rc('ytick', labelsize=10)  # fontsize of the tick labels
plt.rc('figure', titlesize=18) # fontsize of the figure title

draw(tree, do_show=False)
plt.savefig("TreeToCutOff.svg", format='svg', dpi=1200)

From this code I could get the tree. Since I don't know how to get the y coordinates of the branches to add the logos one by one, I built a column of logos with matplotlib, which I intended to paste onto the tree in Python.
The code to build the column of logos is the following:

import re                          # imports needed by the snippet below
import matplotlib.image as mpimg
import matplotlib.pyplot as plt

# Extract filenames from the newick file
newickFile = open("TreeToCutOff.nwk", 'r').read()
orderedLogos = ["{}.eps".format(i) for i in re.split(r'(\W)', newickFile) if "Profile" in i]

# Initialize the figure
fig = plt.figure()

# Add each image one after the other, in the right order
for i, files in enumerate(orderedLogos):
    img1 = mpimg.imread(files)
    ax1 = fig.add_subplot(len(orderedLogos), 1, 1 + i)
    ax1.imshow(img1)
    ax1.set_xticks([])
    ax1.set_yticks([])
# plt.show()
plt.savefig("RowsOfLogos.svg", format='svg', dpi=1200)
plt.clf()
plt.cla()

Having my tree and the column of logos in .svg or .png, I couldn't find any way to stack them properly. My first idea was to use the library svgutils, which seemed easy to handle, with the following code (taken from the svgutils tutorials):

import svgutils.transform as sg

# Assemble: create a new SVG figure
fig = sg.SVGFigure("14cm", "14cm")

# load figures
fig1 = sg.fromfile('TreeToCutOff.svg')
fig2 = sg.fromfile('RowsOfLogos.svg')

# get the plot objects
plot1 = fig1.getroot()
plot2 = fig2.getroot()
plot2.moveto(280, 100, scale=0.05)

# append plots and labels to the figure
fig.append([plot1, plot2])

But the issue with the output was that the background of the column of logos was white, and thus I was pasting a huge white image with a thin column of logos on the tree. And I couldn't find a way to crop the column of logos with svgutils. I tried the module Image from the PIL package to build a tree of logos from .png files, but couldn't see the tree used as background after pasting the column of logos. There may be a way to do what I'm aiming for with matplotlib (which would be to stack the two .png files and place the logos at a fixed distance each time), but I couldn't work it out.
Does anyone know the best solution to make a tree of logos (as in the following image, which I could only build manually with Inkscape) with Python libraries, allowing the process to be automated without having to adapt the code to the number of branches? Following is a subset of "MotifSeqAligned.fasta" containing the aligned sequences used to build the trees:

>ProfileCluster0.meme
---SSNDTTTCCAGGAAD-
>ProfileCluster1.meme
YBNRD---TTCYYGGAAT-
>ProfileCluster10.meme
---VDKDWTTCTYGGAAT-

With the 3 corresponding logos. The full "MotifSeqAligned.fasta" and all the logos.eps (as I used them instead of .png, which is the format asked by the forum) can be found here. Answer: To expand on my comment from yesterday: you could do this with the ETE Toolkit (I just copied one logo file rather than converting all 26 to png):

from ete3 import Tree, TreeStyle, faces

def mylayout(node):
    if node.is_leaf():
        # ImgFace doesn't seem to work with eps files; you could try other formats
        logo_face = faces.ImgFace(str.split(node.name, '.')[0] + ".png")
        faces.add_face_to_node(logo_face, node, column=0)
    node.img_style["size"] = 0  # remove blue dots from nodes

t = Tree("tree.nwk", format=3)
ts = TreeStyle()
ts.layout_fn = mylayout
ts.show_leaf_name = False  # remove sequence labels
ts.scale = 10000  # rescale branch lengths so they are longer than the width of the logos
# you may need to fiddle with dimensions and scaling to get the look you want
t.render("formatted.png", tree_style=ts, h=3000, w=3000)

If you want all of the logos lined up in a column, add aligned=True to faces.add_face_to_node.
{ "domain": "bioinformatics.stackexchange", "id": 567, "tags": "python, motifs, phylogenetics" }
Chrononic Computing (Time Evolution Systems)
Question: In a recent question about quantum speed-up @DaftWullie says: My research, for example, is very much about "how do we design Hamiltonians $H$ so that their time evolution $e^{-iHt_0}$ creates the operations that we want?", aiming to do everything we can in a language that is "natural" for a given quantum system, rather than having to coerce it into performing a whole weird sequence of quantum gates. This makes me think of chronons, which are a proposed quantum of time. "There are physical limits that prevent the distinction of arbitrarily close successive states in the time evolution of a quantum system. If a discretization is to be introduced in the description of a quantum system, it cannot possess a universal value, since those limitations depend on the characteristics of the particular system under consideration. In other words, the value of the fundamental interval of time has to change a priori from system to system." Introduction of a Quantum of Time ("chronon"), and its Consequences for Quantum Mechanics Is universal chrononic computing possible? Answer: Is universal chrononic computing possible? There is no consensus that chronons even exist. See the first line of this, for example. However time (and space) is quantized in one of the most popular generalizations of quantum mechanics called loop quantum gravity. If loop quantum gravity is an accurate description of the universe (which is not something we will be able to test for a very long time, until we can observe for example, Hawking radiation), then universal quantum computation with chronons would be possible as long as we can find a way to implement a universal set of gates such as {H,CNOT,R($\pi$/4)}. 
It is hard enough to implement a useful number of {H,CNOT,R($\pi$/4)} gates with ordinary quanta that we've been working with for a century (such as spin quanta or atomic energy level quanta or photon quanta), so don't be disappointed if you don't see universal chrononic quantum computers on the market during your lifetime. But it is possible, provided that quanta of time actually do exist, which would be true if loop quantum gravity were to be true.
{ "domain": "quantumcomputing.stackexchange", "id": 277, "tags": "computational-models" }
Display a path using markers
Question: I am trying to display the path followed by my differential drive base using rviz. I tried doing this using a sphere-type marker; however, it changed position with time. Is there any way I can display the entire path using the coordinates from a tf frame with markers? Originally posted by Ratan Sadan on ROS Answers with karma: 13 on 2013-06-29 Post score: 0 Answer: You just need to give a different ns to each marker. I guess you use the following code to specify a marker:

visualization_msgs::Marker marker;
marker.type = visualization_msgs::Marker::SPHERE;
marker.header.stamp = ros::Time::now();
...
marker.ns = "markerName";

You need to change the "markerName", e.g. "marker0", "marker1", "marker2"... Originally posted by yangyangcv with karma: 741 on 2013-06-30 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by dornhege on 2013-07-01: It's probably better to change the id instead of the namespace. There are also specific path displays in rviz.
{ "domain": "robotics.stackexchange", "id": 14755, "tags": "ros" }
Products classification by name
Question: I am a beginner with machine learning, and I'm trying to build a model to classify products by category according to the words present in the product name. My goal is to predict the category of a new product just by observing the categories of existing products. For example, having the following products:

PRODUCT                          CATEGORY
soap bar johnsons green leaves   bath
cookie bauducco lemon 120gr      cookie
nesfit cookie choc and st        cookie
strawberry soap soft             bath
spoon hercules medium            kitchen
soap dish plastic medium         bath
[...]

My first thought is to group the words (tokens) present in each product, indicating the designated category and the occurrence count (to be used as a weight). So, for this sample, I have:

WORD      CATEGORY  COUNT
soap      bath      3
cookie    cookie    2
medium    bath      1
medium    kitchen   1
bar       bath      1
johnsons  bath      1

Having this, I should be able to train a model and use it to classify a new product. For example, a new product "hands liquid soap 120oz" could be classified as bath, because it contains the word soap, which has a strong weight for the bath category. Similarly, the new product "medium hammer" could be classified as bath or kitchen, according to the occurrences of the word medium in the training set. So, my doubts are: Am I taking the correct approach? What is the best algorithm to be used in this case? How can I apply this using Weka? Answer: I think (having worked on a similar problem too) that this problem can be solved in this way:

1. Generate n-grams
2. Create a one-hot encoding matrix
3. Pass it to Naive Bayes or a Random Forest

It would automatically use the word counts (you can apply TF-IDF too), and the weightage will be calculated based on that.
Examples: https://medium.com/data-from-the-trenches/text-classification-the-first-step-toward-nlp-mastery-f5f95d525d73 https://www.ritchieng.com/machine-learning-multinomial-naive-bayes-vectorization/ This is a more detailed one: https://www.analyticsvidhya.com/blog/2018/04/a-comprehensive-guide-to-understand-and-implement-text-classification-in-python/
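Before reaching for a full pipeline, the weighting idea from the question can be made concrete with a tiny multinomial Naive Bayes written from scratch (the linked tutorials use scikit-learn vectorizers, which is the more robust route in practice). The training rows are the sample from the question; Laplace smoothing handles unseen words such as "hands":

```python
from collections import Counter, defaultdict
import math

# The sample rows from the question
train = [
    ("soap bar johnsons green leaves", "bath"),
    ("cookie bauducco lemon 120gr", "cookie"),
    ("nesfit cookie choc and st", "cookie"),
    ("strawberry soap soft", "bath"),
    ("spoon hercules medium", "kitchen"),
    ("soap dish plastic medium", "bath"),
]

word_counts = defaultdict(Counter)   # category -> word -> count (the WORD table)
cat_counts = Counter()
for name, cat in train:
    cat_counts[cat] += 1
    word_counts[cat].update(name.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(name):
    """Multinomial Naive Bayes in log-space with Laplace (add-one) smoothing."""
    total_docs = sum(cat_counts.values())
    best_cat, best_score = None, float('-inf')
    for cat in cat_counts:
        total_words = sum(word_counts[cat].values())
        score = math.log(cat_counts[cat] / total_docs)   # class prior
        for w in name.split():
            # add-one smoothing keeps unseen words ('hands') from zeroing out
            score += math.log((word_counts[cat][w] + 1) / (total_words + len(vocab)))
        if score > best_score:
            best_cat, best_score = cat, score
    return best_cat

label = predict("hands liquid soap 120oz")   # -> 'bath', dominated by 'soap'
```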
{ "domain": "datascience.stackexchange", "id": 4825, "tags": "classification, multiclass-classification" }
Gazebo as a ROS node doesn't publish topics (odom, joint_state and tf) correctly
Question: Hi Everyone, I am running two turtlebots from kobuki on ROS Hydro (robot1 and robot2), and when I checked the rqt_graph (subscriber and publisher graph) I noticed that Gazebo, as a ROS node, doesn't publish or subscribe to the /odom, /joint_states and /tf topics under their corresponding namespaces (robot1/odom and robot2/odom). The Gazebo node just publishes or subscribes using the global names /odom, /joint_states and /tf. I remapped some nodes to work properly with their corresponding namespaces, but I couldn't figure out where to go to remap this "Gazebo node". I installed the turtlebot from source and am using Gazebo 1.9 with ROS Hydro. Please, can somebody help me out with this issue? Thanks a lot Originally posted by Robert1 on ROS Answers with karma: 63 on 2014-04-29 Post score: 0
{ "domain": "robotics.stackexchange", "id": 17814, "tags": "ros, node" }
Minesweeper implementation in C++
Question: I have implemented minesweeper in C++. You can get a box by entering the coordinates -row number, then column; starting from 0- of said box (eg: "0 0"); you can mark a box as a bomb by adding the prefix "mark" (eg: "mark 0 0"); you can unflag a box by adding the prefix "unmark" (eg: "unmark 0 0"). The game ends when all non-bomb boxes have been checked & all bombs have been marked. Any feedback/ideas on how to improve my code would be very much appreciated. main.cpp #include "Game.h" int main() { Game* g = new Game(16, 40); g->playGame(); return 0; } Game.h #ifndef GAME_H #define GAME_H #include "Grid.h" #include <string> class Game { private: Grid* grid; int** closedSet; int len; int foundNum, closedSetSize; public: Game(int n, int b); ~Game(); void printClosedSet(); void playGame(); private: void clearSpace(int n, int m); }; #endif // GAME_H Game.cpp #include "Game.h" #include <queue> #include <cstddef> #include <iomanip> #include <cstring> #include <iostream> Game::Game(int n, int b) : foundNum(0), closedSetSize(0) { grid = new Grid(n, b); len = grid->getSize(); closedSet = new int*[len]; for(int i = 0; i < len; i++) { closedSet[i] = new int[len]; std::memset(closedSet[i], 0, len * sizeof(int)); } } Game::~Game() { delete grid; for(int i = 0; i < len; i++) delete[] closedSet[i]; delete[] closedSet; } void Game::printClosedSet() { std::cout << " "; for(int i = 0; i < len; i++) { std::cout << std::setw(3) << i; } std::cout << std::endl; std::cout << " "; for(int i = 0; i < len; i++) { std::cout << "---"; } std::cout << std::endl; for(int i = 0; i < len; i++) { std::cout << std::setw(3) << i << "|"; for(int j = 0; j < len; j++) { if(closedSet[i][j] == 1) { std::cout << std::setw(3) << grid->getNum(i, j); } else if(closedSet[i][j] == 0) { std::cout << std::setw(3) << "-"; } else std::cout << std::setw(3) << "*"; } std::cout << std::endl; } } void Game::playGame() { std::string s; int n, m; while(true) { printClosedSet(); std::cin >> s; 
if((s.find_first_not_of("0123456789") == std::string::npos) && ((std::stoi(s) <= len-1) && (std::stoi(s) >= 0))) { n = std::stoi(s); s = "placeholder"; while(s.find_first_not_of("0123456789") != std::string::npos) { std::cin >> s; if((s.find_first_not_of("0123456789") == std::string::npos) && ((std::stoi(s) > len-1) || (std::stoi(s) < 0))) s = "placeholder"; } m = std::stoi(s); if(closedSet[n][m] == 0) { if(grid->getNum(n, m) < 0) { grid->printBombs(closedSet); std::cout << "YOU HAVE LOST" << std::endl; break; } clearSpace(n, m); } } else if (s == "mark" || s == "unmark"){ std::string temp = "placeholder"; while(temp.find_first_not_of("0123456789") != std::string::npos) { std::cin >> temp; if((temp.find_first_not_of("0123456789") == std::string::npos) && (std::stoi(temp) > len-1)) temp = "placeholder"; } n = std::stoi(temp); temp = "placeholder"; while(temp.find_first_not_of("0123456789") != std::string::npos) { std::cin >> temp; if((temp.find_first_not_of("0123456789") == std::string::npos) && (std::stoi(temp) > len-1)) temp = "placeholder"; } m = std::stoi(temp); if((s == "mark") && (closedSet[n][m] == 0)) { closedSet[n][m] = 3; if(grid->getNum(n, m) < 0) ++foundNum; } else if((s == "unmark") && (closedSet[n][m] == 3)) { closedSet[n][m] = 0; if(grid->getNum(n, m) < 0) --foundNum; } } else continue; if((grid->getBombNum() == foundNum) && (closedSetSize == len*len - grid->getBombNum())) { printClosedSet(); std::cout << "YOU HAVE WON" << std::endl; break; } } grid->clearGrid(); grid->generateGrid(); for(int i = 0; i < len; i++) std::memset(closedSet[i], 0, len * sizeof(int)); foundNum = 0; closedSetSize = 0; } void Game::clearSpace(int n, int m) { std::queue<std::pair<int, int>> q; if(closedSet[n][m] == 0) ++closedSetSize; closedSet[n][m] = 1; if(grid->getNum(n, m) == 0) q.push(std::make_pair(n, m)); while(!q.empty()) { n = q.front().first; m = q.front().second; closedSet[n][m] = 1; if((n < len-1) && (closedSet[n+1][m] == 0)) { if(grid->getNum(n+1, m) == 0) 
q.push(std::make_pair(n+1, m)); closedSet[n+1][m] = 1; ++closedSetSize; } if((n < len-1) && (m < len-1) && (closedSet[n+1][m+1] == 0)) { if(grid->getNum(n+1, m+1) == 0) q.push(std::make_pair(n+1, m+1)); closedSet[n+1][m+1] = 1; ++closedSetSize; } if((n < len-1) && (m > 0) && (closedSet[n+1][m-1] == 0)) { if(grid->getNum(n+1, m-1) == 0) q.push(std::make_pair(n+1, m-1)); closedSet[n+1][m-1] = 1; ++closedSetSize; } if((m < len-1) && (closedSet[n][m+1] == 0)) { if(grid->getNum(n, m+1) == 0) q.push(std::make_pair(n, m+1)); closedSet[n][m+1] = 1; ++closedSetSize; } if((n > 0) && (closedSet[n-1][m] == 0)) { if(grid->getNum(n-1, m) == 0) q.push(std::make_pair(n-1, m)); closedSet[n-1][m] = 1; ++closedSetSize; } if((n > 0) && (m < len-1) && (closedSet[n-1][m+1] == 0)) { if(grid->getNum(n-1, m+1) == 0) q.push(std::make_pair(n-1, m+1)); closedSet[n-1][m+1] = 1; ++closedSetSize; } if((n > 0) && (m > 0) && (closedSet[n-1][m-1] == 0)) { if(grid->getNum(n-1, m-1) == 0) q.push(std::make_pair(n-1, m-1)); closedSet[n-1][m-1] = 1; ++closedSetSize; } if((m > 0) && (closedSet[n][m-1] == 0)) { if(grid->getNum(n, m-1) == 0) q.push(std::make_pair(n, m-1)); closedSet[n][m-1] = 1; ++closedSetSize; } q.pop(); } } Grid.h #ifndef GRID_H #define GRID_H #include <vector> class Grid { private: int** grid; int len; int bombNum; std::vector<std::pair<int, int>> s; public: Grid(int n, int b); ~Grid(); void printBombs(int** closedSet); void generateGrid(); void clearGrid(); int getNum(int n, int m) {return grid[n][m];} int getSize() {return len;} int getBombNum() {return bombNum;} }; #endif // GRID_H Grid.cpp #include "Grid.h" #include <iostream> #include <cstring> #include <algorithm> #include <chrono> #include <random> #include <iomanip> Grid::Grid(int n, int b) : len(n), bombNum(b) { grid = new int*[len]; s.reserve(len); for(int i = 0; i < len; i++) { for(int j = 0; j < len; j++) s.push_back(std::make_pair(i, j)); grid[i] = new int[len]; std::memset(grid[i], 0, len * sizeof(int)); } generateGrid(); 
} Grid::~Grid() { for(int i = 0; i < len; i++) delete[] grid[i]; delete[] grid; } void Grid::printBombs(int** closedSet) { std::cout << " "; for(int i = 0; i < len; i++) { std::cout << std::setw(3) << i; } std::cout << std::endl; std::cout << " "; for(int i = 0; i < len; i++) { std::cout << "---"; } std::cout << std::endl; for(int i = 0; i < len; i++) { std::cout << std::setw(3) << i << "|"; for(int j = 0; j < len; j++) { if(grid[i][j] < 0) { std::cout << std::setw(3) << "X"; } else if(closedSet[i][j] == 1) { std::cout << std::setw(3) << grid[i][j]; } else std::cout << std::setw(3) << "-"; } std::cout << std::endl; } } void Grid::generateGrid() { shuffle(s.begin(), s.end(), std::default_random_engine(std::chrono::system_clock::now().time_since_epoch().count())); for(int i = 0; i < bombNum; i++) { int randRow = s[i].first; int randCol = s[i].second; grid[randRow][randCol] = -1; if(randRow > 0) { if(grid[randRow-1][randCol] >= 0) ++grid[randRow-1][randCol]; if((randCol > 0) && (grid[randRow-1][randCol-1] >= 0)) ++grid[randRow-1][randCol-1]; if((randCol < len-1) && (grid[randRow-1][randCol+1] >= 0)) ++grid[randRow-1][randCol+1]; } if(randRow < len-1) { if(grid[randRow+1][randCol] >= 0) ++grid[randRow+1][randCol]; if((randCol > 0) && (grid[randRow+1][randCol-1] >= 0)) ++grid[randRow+1][randCol-1]; if((randCol < len-1) && (grid[randRow+1][randCol+1] >= 0)) ++grid[randRow+1][randCol+1]; } if((randCol > 0) && (grid[randRow][randCol-1] >= 0)) ++grid[randRow][randCol-1]; if((randCol < len-1) && (grid[randRow][randCol+1] >= 0)) ++grid[randRow][randCol+1]; } } void Grid::clearGrid() { for(int i = 0; i < len; i++) { for(int j = 0; j < len; j++) grid[i][j] = 0; } } Answer: Unnecessary memory allocations You are using new and delete in many places where it is not necessary to do any memory allocations. 
Starting in main(), you can just write:

Game g(16, 40);
g.playGame();

Apart from slightly less typing, this also prevents the inevitable memory leak when you forget to delete what you just allocated. The same goes for Game::grid; this doesn't have to be a pointer, it can be a regular member variable of Game.

Let STL containers manage memory for you

In those cases where you do need to allocate memory, try to avoid using manual new and delete. In particular, for arrays of which you don't know the size up front, you can use std::vector. It will automatically allocate and deallocate memory for you. If you have a two-dimensional array, you could make a vector of vectors, like so:

class Grid {
    std::vector<std::vector<int>> grid;
    ...
};

Grid::Grid(int n, int b): grid(n, std::vector<int>(n)), ... {
    ...
}

However, that brings me to the following:

Allocate a single array/vector for all elements of a 2D grid

Vectors of vectors, or dynamically allocated arrays of dynamically allocated arrays as you did in your code, are not efficient. To get to a certain element, the CPU would have to follow two pointers, which is inefficient. However, CPUs nowadays are great at multiplications and additions, so it is actually faster to allocate a 1D vector, and calculate yourself how to position 2D elements in it. For example:

class Grid {
    std::vector<int> grid;
};
...
Grid::Grid(int n, int b): grid(n * n), ... {
    ...
}
...
void Grid::printBombs(...) {
    ...
    if (grid[i * len + j] < 0)
    ...
}

To avoid having to write that formula all the time, consider making a member function that takes the row and column position, and returns a reference to the desired grid element:

int& Grid::gridAt(int row, int col) {
    return grid[row * len + col];
}

Then you can just write:

if (gridAt(i, j) < 0)
    ...

Once you have a one-dimensional vector, some other things become easier as well. For example,

Make use of STL algorithms

If you use STL containers, STL algorithms can now be used as well.
For example, to clear the grid you can use std::fill():

void Grid::clearGrid() {
    std::fill(grid.begin(), grid.end(), 0);
}

Think of std::fill() and std::fill_n() as C++'s version of C's memset(), and there's std::copy() and std::copy_n() to replace memcpy(), but the algorithms can do much more than that, saving you the trouble of having to implement them yourself. To get more familiar with them, I recommend you read or watch The World Map of C++ STL Algorithms.

Simplify parsing the input

Parsing the input is a bit complicated in your code, mainly because you either allow coordinates without any prefix, or coordinates with a prefix to be entered. Because you can't be sure if the first thing you read is going to be a number or a prefix, you have to read it as a std::string, check what it is, then use std::stoi() to convert it to a number if necessary. If you would change the way you read the input such that you always have to prefix the coordinates, and use the prefix "reveal" or "get" to reveal what is on a given grid position, you can greatly simplify parsing the input:

while (true) {
    printClosedSet();
    std::string command;
    int row;
    int col;
    std::cin >> command >> row >> col;
    if (command == "reveal") {
        ...
    }
    ...
}

Consider using emplace() instead of push() where possible

Many containers have both push() and emplace() functions. The latter is sometimes more efficient and convenient. In particular, it reserves room for the new element and constructs it in place, and the arguments to emplace() are passed to the constructor of the value type of the container. This allows you to write:

std::queue<std::pair<int, int>> q;
...
q.emplace(n, m);

Note how you no longer need to use std::make_pair().

Create a struct Cell

You are actually keeping track of the state of each location on the board using two grids; one is grid and the other closedSet. It's also not so nice to overload ints to mean different things.
Consider creating a struct Cell that holds all information of a grid cell: struct Cell { int adjacentBombs; bool bomb; bool revealed; bool marked; }; Then you just need one data structure to hold all information about the board: std::vector<Cell> grid; Prefer '\n' instead of std::endl Prefer using '\n' instead of std::endl; the latter is equivalent to the former, but also forces the output to be flushed, which is usually not necessary and has performance implications.
{ "domain": "codereview.stackexchange", "id": 43381, "tags": "c++, object-oriented" }
General second order system and mass spring damper in control theory
Question: I am studying control theory and most textbooks and web resources define a general second order system in the $s$ domain as $$ G(s) = \frac{\omega_n^2}{s^2+2\,\zeta\,\omega_n\,s+\omega_n^2} \, .\tag1$$ However, the mass-spring-damper, which is clearly a second order system, reduces to this equation $$ G(s) = \frac{1}{m\,s^2+b\,s+k}\tag2$$ which further reduces to $$ G(s) = \frac{1/m}{s^2 + (b/m)s + k/m} \, .\tag3$$ The problem is I cannot come to terms with the equivalence between the equation above and the general second order system, which seems to contain the same term in the numerator as the constant coefficient in the denominator. I've tried some books, namely Ogata and Nise, and I even managed to get my hands on an old book by Franklin and Powell. The three of them present the general second order system as the top one. Can someone clarify this for me, and explain why the mass-spring-damper does not correspond to the general equation? Answer: The easiest way for you to understand (and you should do this to convince yourself) is to write each of these two systems as block diagrams using integrators, gain blocks and feedback loops. Note that in the general system there is unity gain (1-in, 1-out) which means that there is unity feedback gain in the outermost feedback loop. For the first system just let $$G(s)=\frac{y(s)}{x(s)} \, ,$$ where $x$ is the input, $y$ the output. For the specific spring mass system you do not have a 1-in, 1-out relationship. You will see from your block diagram for this system that the outermost loop will have a feedback gain of $k$. This makes the input-output gain $\frac{1}{k}$. For the spring-mass-damper system $$G(s)=\frac{x(s)}{F(s)} \, ,$$ where $x$ is displacement and $F$ is force. You can further see that if the input is a step force, then at steady state (using the final value theorem) $$\lim\limits_{s \to 0} s\frac{1}{s}G(s) = \frac{1}{k} \, ,$$ which is just Hooke's law, explaining why you do not have unity gain.
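To make the equivalence explicit (this worked matching is my addition; it follows directly from comparing equations (1) and (3)): identify the denominator coefficients term by term, $$\omega_n^2 = \frac{k}{m} \quad\Rightarrow\quad \omega_n = \sqrt{\frac{k}{m}} \, , \qquad 2\,\zeta\,\omega_n = \frac{b}{m} \quad\Rightarrow\quad \zeta = \frac{b}{2\sqrt{k\,m}} \, ,$$ so that $$G(s) = \frac{1/m}{s^2 + (b/m)s + k/m} = \frac{1}{k}\cdot\frac{\omega_n^2}{s^2 + 2\,\zeta\,\omega_n\,s + \omega_n^2} \, ,$$ since $\frac{1}{k}\,\omega_n^2 = \frac{1}{k}\,\frac{k}{m} = \frac{1}{m}$. The mass-spring-damper is therefore the general second order system multiplied by the static gain $1/k$, in agreement with the final value theorem argument above.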
{ "domain": "physics.stackexchange", "id": 43406, "tags": "newtonian-mechanics, linear-systems" }
Deriving the Schwarzschild solution
Question: I am a CS student and I want to create a graphical simulation of a black hole, so I need to use the Schwarzschild solution to calculate the possible coordinates of a given body every second. As a first try, in 2D space + time. So, my question is: how, using the Schwarzschild solution, can I calculate the cones like in this picture: Answer: It is with some hesitation I answer your question, because unless you understand what you're calculating you risk severely misinterpreting what's going on. Still, if you're a CS student presumably learning GR is not on the agenda so I'll go ahead anyway. All I can say is tread with care! Anyhow, to calculate the cones in the coordinate system of the observer at infinity is easy, because the tangent of the angle is the velocity of light (using units where $c = 1$). Far from the event horizon $c = 1$ and arctan(1) is 45° so you get cones with the sides at 45° to the vertical. The Schwarzschild metric is: $$ ds^2 = -\left(1-\frac{r_s}{r}\right)dt^2 + \frac{dr^2}{\left(1-\frac{r_s}{r}\right)} + r^2 d\Omega^2 $$ where $r_s$ is the radius of the event horizon. The sides of the cones correspond to radially infalling and outgoing light, i.e. $d\Omega = 0$, and because light moves on null geodesics we know $ds = 0$. Therefore the equation simplifies to: $$ \left(1-\frac{r_s}{r}\right)dt^2 = \frac{dr^2}{\left(1-\frac{r_s}{r}\right)} $$ or: $$ \left(\frac{dr}{dt}\right)^2 = \left(1-\frac{r_s}{r}\right)^2 $$ and $dr/dt$ is the coordinate velocity, so: $$ v = \pm\left(1-\frac{r_s}{r}\right) $$ where the $+$ sign gives you the speed of the outgoing light and the $-$ sign gives you the speed of the infalling light. So the angles of the cones at a distance $r$ are just $\arctan(1 - r_s/r)$. As expected, at large distances where $r \gg r_s$ the angle is 45°, and as $r$ approaches $r_s$ the angle goes to zero. However this is only true in the coordinates of the observer far from the black hole.
Observers near the black hole will always measure the angle to be 45° at their location. Response to comment: To understand why the angle is arctan(v), look at this spacetime diagram: We draw these diagrams with time running upwards and distance sideways, so an object moving with constant velocity $v$ traces out a straight line. To work out the angle $\theta$ note that in a time $t$ the object moves a distance $vt$, so $\tan(\theta) = vt/t = v$ and therefore $\theta = \arctan(v)$. If the object is a light ray we call the angle $\theta$ the light cone angle, and the area in between the light ray travelling right and a light ray travelling in the other direction is called the light cone. $d\Omega$ is the angular displacement. For an object moving radially towards or away from the black hole the angle it's moving at doesn't change, so $d\Omega = 0$. $ds$ is called the line element, and it's an invariant in both special and general relativity. For anything moving at the speed of light the length of the line element is always zero.
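For the simulation itself, the cone angles can be tabulated directly from $v = 1 - r_s/r$. A minimal sketch in Python (the function names are mine, and units are chosen so that $c = 1$):

```python
import math

def coordinate_light_speed(r, r_s):
    """Coordinate speed |dr/dt| of radial light in Schwarzschild coordinates,
    in units where c = 1. Only meaningful for r >= r_s."""
    return 1.0 - r_s / r

def cone_half_angle_deg(r, r_s):
    """Half-opening angle of the light cone, as drawn by the observer at infinity."""
    return math.degrees(math.atan(coordinate_light_speed(r, r_s)))

r_s = 1.0
for r in (1.0, 1.5, 3.0, 10.0, 1000.0):
    print(f"r = {r:7.1f}: half-angle = {cone_half_angle_deg(r, r_s):5.2f} deg")
```

Far away the half-angle approaches 45°, and it closes to 0° at the horizon, which is exactly the narrowing of the cones in the picture.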
{ "domain": "physics.stackexchange", "id": 9689, "tags": "black-holes, software, coordinate-systems, visualization" }
Conserved current of quartic interaction QFT ($φ⁴$-Theory)
Question: The Lagrangian of the real massless $φ⁴$-theory is \begin{align} L=\frac{1}{2}\partial_\mu\phi\partial^\mu\phi-\lambda\phi^4 \end{align} Therefore the action integral has the global symmetry \begin{align} x_\mu \rightarrow x'_\mu=e^{-\alpha}x_\mu \\ \phi(x) \rightarrow \phi'(x') = e^{\alpha}\phi(x) \end{align} It is straightforward to calculate the corresponding conserved Noether current \begin{align} J^\mu = \phi\partial^\mu\phi+x^\nu\partial_\nu\phi\partial^\mu\phi-\frac{1}{2}x^\mu\partial_\nu\phi\partial^\nu\phi+\lambda x^\mu\phi^4 \end{align} My problem is that I can't see why this current is conserved when looking at its derivative \begin{align} \partial_\mu J^\mu = \frac{3}{2}\partial_\mu\phi\partial^\mu\phi+\phi\partial_\mu\partial^\mu\phi+x^\nu(\partial_\mu\partial_\nu\phi\partial^\mu\phi+\partial_\nu\phi\partial_\mu\partial^\mu\phi)-\frac{1}{2}(\partial_\mu\partial_\nu\phi\partial^\nu\phi+\partial_\nu\phi\partial_\mu\partial^\nu\phi)+\lambda\phi^4+4\lambda x^\mu\phi^3\partial_\mu\phi \end{align} Usually one would use the Euler-Lagrange equations \begin{align} 4\lambda\phi^3=-\partial_\mu\partial^\mu\phi \end{align} to see that $\partial_\mu J^\mu=0$. But for example the $\phi^4$-term in the derivative above can in no way cancel out, so how can this current be conserved? Answer: You took some derivatives incorrectly, because $ \partial_\mu x^\mu = 4 \neq 1 $.
If I take the derivatives one by one, the correct divergence of the current is $$ \partial_\mu J^\mu = \partial_\mu \phi \partial^\mu \phi + \phi \partial_\mu \partial^\mu \phi + \partial_\mu \phi \partial^\mu \phi + x^\nu \partial_\mu \partial_\nu \phi \partial^\mu \phi + x^\nu \partial_\nu \phi \partial_\mu \partial^\mu \phi - \frac{1}{2}\cdot 4 \partial_\nu \phi \partial^\nu \phi - x^\mu \partial_\mu \partial_\nu \phi \partial^\nu \phi + 4 \lambda \phi^4 + 4 \lambda \phi^3 x^\mu \partial_\mu \phi \\ = \phi \partial_\mu \partial^\mu \phi + x^\nu \partial_\nu \phi \partial_\mu \partial^\mu \phi + 4 \lambda \phi^4 + 4 \lambda \phi^3 x^\mu \partial_\mu \phi = 0$$ Note that the $\partial_\mu \phi \partial^\mu \phi $ and $x^\mu \partial_\mu \partial_\nu \phi \partial^\nu \phi$ terms automatically cancel out, and using the equations of motion (Euler-Lagrange equations), the $4 \lambda \phi^4$ term cancels with $\phi \partial_\mu \partial^\mu \phi$ and the $x^\nu \partial_\nu \phi \partial_\mu \partial^\mu \phi$ term cancels with $4 \lambda \phi^3 x^\mu \partial_\mu \phi$.
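A compact way to organize the same cancellations (this regrouping is my addition, but it just collects the four surviving terms above): the divergence factorizes as $$\partial_\mu J^\mu = \left(\partial_\mu\partial^\mu\phi + 4\lambda\phi^3\right)\left(\phi + x^\nu\partial_\nu\phi\right) \, ,$$ where the first factor vanishes by the Euler-Lagrange equations, and the second factor is precisely the infinitesimal scale variation $\delta\phi = \phi + x^\nu\partial_\nu\phi$ of the field. So the current is conserved exactly on-shell, as it should be.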
{ "domain": "physics.stackexchange", "id": 93865, "tags": "homework-and-exercises, conservation-laws, field-theory, noethers-theorem, scale-invariance" }
Transformation of coordinates between 2 basis for a tensor of rank 2
Question: In the context of general relativity and tensor calculus, I would like to know where my error is in the following derivation: We can start with $u$ and $u'$ curvilinear coordinates: $$\mathbf{e}_{\mathbf{i}}=\frac{\partial \mathbf{M}}{\partial u^i} ; \quad \mathbf{e}_{\mathbf{k}}^{\prime}=\frac{\partial \mathbf{M}}{\partial u^{\prime k}}$$ The transformation follows: $$\mathbf{e}_{\mathbf{k}}^{\prime}=\frac{\partial \mathbf{M}}{\partial u^{\prime k}}=\frac{\partial \mathbf{M}}{\partial u^i} \frac{\partial u^i}{\partial u^{\prime k}}=\mathbf{e}_{\mathbf{i}} \frac{\partial u^i}{\partial u^{\prime k}}$$ So we define: (a) $\mathbf{e}_{\mathbf{i}}=A_i^{\prime k} \mathbf{e}_{\mathbf{k}}^{\prime}$; (b) $\mathbf{e}_{\mathbf{k}}^{\prime}=A_k^i \mathbf{e}_{\mathbf{i}}$ and: (a) $A_i^{\prime k}=\frac{\partial u^{\prime k}}{\partial u^i}$ (b) $A_k^i=\frac{\partial u^i}{\partial u^{\prime k}}$ This way, we can write: $$e'_k.e'_l = g'_{kl} = A_k^i A_l^j e_i.e_j = A_k^i A_l^j g_{ij}$$ Finally, we would have: $$g'_{kl} = A_k^i A_l^j g_{ij}$$ and finally (fixed): $$g'_{kl} = \frac{\partial u^i}{\partial u^{\prime k}}\,\frac{\partial u^j}{\partial u^{\prime l}} g_{ij}$$ Is it correct? Answer: Is it correct? Yes, the equation $$g'_{kl} = \dfrac{\partial u^i}{\partial u^{\prime k}}\,\dfrac{\partial u^j}{\partial u^{\prime l}} g_{ij}$$ is correct. Here is a slightly different derivation using the dual basis: $$\tilde g_{ij}=g(\tilde e_i,\tilde e_j)=g(e_ke^k\tilde e_i,e_le^l\tilde e_j)=e^k\tilde e_ie^l\tilde e_jg(e_k,e_l)=e^k\tilde e_ie^l\tilde e_jg_{kl}$$ To finally obtain your equation, we simply note that $$e^i=\mathrm du{}^i=\frac{\partial u^i}{\partial \tilde u^k}\mathrm d \tilde u^k=\frac{\partial u^i}{\partial \tilde u^k}\tilde e^k$$ and hence $$e^k\tilde e_i=\frac{\partial u^k}{\partial \tilde u^l} \underbrace{\tilde e^l\tilde e_i}_{=\delta^l{}_i}=\frac{\partial u^k}{\partial \tilde u^i}$$
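As a concrete numerical check of $g'_{kl} = \frac{\partial u^i}{\partial u^{\prime k}}\,\frac{\partial u^j}{\partial u^{\prime l}} g_{ij}$ (a sketch of my own, not from the original post): transforming the flat Euclidean metric from Cartesian $u = (x, y)$ to polar $u' = (r, \theta)$ should give $\mathrm{diag}(1, r^2)$.

```python
import math

def polar_metric(r, theta):
    """Transform g_ij = delta_ij from Cartesian u = (x, y) to polar
    u' = (r, theta) via g'_kl = (du^i/du'^k)(du^j/du'^l) g_ij."""
    # Jacobian J[i][k] = du^i / du'^k, with x = r cos(theta), y = r sin(theta)
    J = [[math.cos(theta), -r * math.sin(theta)],   # dx/dr, dx/dtheta
         [math.sin(theta),  r * math.cos(theta)]]   # dy/dr, dy/dtheta
    g = [[1.0, 0.0], [0.0, 1.0]]                    # flat metric in Cartesian coordinates
    return [[sum(J[i][k] * J[j][l] * g[i][j]
                 for i in range(2) for j in range(2))
             for l in range(2)]
            for k in range(2)]

print(polar_metric(2.0, 0.7))  # diag(1, r^2) = [[1, 0], [0, 4]] up to rounding
```

The off-diagonal components vanish and $g'_{\theta\theta} = r^2$, as expected for the polar-coordinate metric.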
{ "domain": "physics.stackexchange", "id": 96170, "tags": "general-relativity, coordinate-systems, tensor-calculus" }
Is human skull size related to brain size?
Question: My dad and I have a disagreement about this. He thinks that if a person had a larger skull they would naturally have a larger brain. I think that he is assuming the evolutionary trend holds on an individual scale and that it is odd to assume that they would be correlated. We are having a hard time finding information on the topic as everything is about skull or brain size when related to intelligence. Sorry if this is not the right place to ask this; please direct me to the right place if I have misjudged. Who is right? Answer: As a disclaimer, first please remember that brain size is only partially related to intelligence, and that whenever this comes up with regards to humans you should remember the long and ongoing history of badly-justified racism on the topic. So let's ignore intelligence, and just focus on brains and skulls. If a person has a bigger skull, then they have a bigger volume inside that skull (neglecting the question of thickness). There's not really much inside the skull besides the brain (average volume ~1350 $cm^3$) and the cerebrospinal fluid (average volume ~125 $cm^3$). There's quite a bit of difference in size between skulls (men's brains are on average ~200 $cm^3$ bigger than women's brains, and individual variation is large as well). Folks with big heads don't have their brains sloshing around in massive amounts of cerebrospinal fluid (and it would be quite dangerous if they did!): that space is filled with brain. Bottom line: in humans, a bigger skull generally means a bigger brain.
{ "domain": "biology.stackexchange", "id": 11610, "tags": "human-anatomy, brain" }
I made a little mastermind game in C# to exercise - do I take the right approach?
Question: I'm very new to C#, and the best way to learn is to do exercises. As I tried to find a challenging exercise, I created this little game called MasterMind. The rules are very simple, and are explained in numerous places (i.e. here). You can find below my version in C# of this game, but written only for the console application, so there's no fancy nice layout using Forms or WPF. I wanted to concentrate on the code. If anyone's interested, can you tell me if I took the right approach? For example, I created a new class called Ball, with just a color field and a method to choose a random color. I probably over-complicated this, but I wanted to work with Classes. Do you have any remarks on naming conventions, other things, ... If anyone has suggestions to code better, I'm all ears :-) You can find the code on my GitHub (https://github.com/TjerkBeke/MasterMind.git), or here beneath... Program.cs internal class Program { public static void Main(string[] args) { // The number of balls you can pick. The higher the number, the more difficult the game. int NumberOfBalls = 4; // The number of chances you get. The lower, the more difficult the game. int NumberOfChances = 12; // This string holds the overview of attempts already made string Outcome = ""; // Take a random order of balls to guess var BallsToGuess = new List<Ball>(); for (int i = 0; i < NumberOfBalls; i++) { BallsToGuess.Add(new()); } // Show the computer's choice. Uncomment this section to cheat ;-) //Console.WriteLine("The computer chose the following balls"); //Console.WriteLine(ConvertBallColorToString(BallsToGuess)); // Show the intro and game rules ShowIntro(); // Loop until you have no more chances left for (int i = 0; i < NumberOfChances; i++) { Console.WriteLine($"Chances left: {NumberOfChances - i}"); // Choose your balls Console.WriteLine("Provide your choice:"); List<Ball> BallsPicked = GetYourBalls(BallsToGuess.Count); // Calculate the exact and non-exact matches.
An exact match gives you a black pin, a non-exact match gives you a white pin int[] ReturnedPins = CalculatePins(BallsToGuess, BallsPicked); // Show the results to the screen, and show the previous results as well. Outcome = GetOutcome(ReturnedPins, BallsPicked, Outcome); Console.Clear(); Console.WriteLine(Outcome); // Check if you won if (ReturnedPins[0] == 4) { Console.WriteLine("Congratulations!! You have guessed the correct answer!"); Console.WriteLine($"You had { NumberOfChances - i } chances left"); return; } } // If you come to here, it means you didn't win :-( Console.WriteLine("Sorry, but you didn't find the correct solution within the needed number of chances."); Console.WriteLine("The computer chose the following balls"); Console.WriteLine(ConvertBallColorToString(BallsToGuess)); } #region Methods /// <summary> /// Shows the different pins, and the chosen ball collection. /// </summary> /// <param name="returnedPins"></param> /// <param name="ballsPicked"></param> private static string GetOutcome(int[] returnedPins, List<Ball> ballsPicked, string formerAttempt) { string ballColors = ConvertBallColorToString(ballsPicked); string pinColors = ConvertPinColorToString(returnedPins); return formerAttempt + (pinColors + ballColors) + "\n"; } /// <summary> /// Takes an array of 2 integers, and transforms them into a string with the number of characters defined in the integers. /// # is the number of black pins. /// | is the number of white pins. /// </summary> /// <param name="ReturnedPins"></param> /// <returns></returns> private static string ConvertPinColorToString(int[] ReturnedPins) { string PinList = ""; string Black = new string('#', ReturnedPins[0]); string White = new string('|', ReturnedPins[1]); PinList = (Black + White).PadRight(4); return ($"'{PinList}'"); } /// <summary> /// Shows the goal of the game and the rules.
/// </summary> private static void ShowIntro() { Console.WriteLine("Welcome to this game of MasterMind."); Console.WriteLine("The goal of the game is to guess what colors and the correct order of colors I have hidden."); Console.WriteLine("You can choose between the colors: BLUE - RED - GREEN - WHITE - PINK - YELLOW"); Console.WriteLine("You provide input by taking the first letters of the colors; so for example for the color yellow, you need to press the Y-key."); Console.WriteLine("After every guess, I will tell you:"); Console.WriteLine("The number of correct colors in the correct place: a black pin"); Console.WriteLine("The number of correct colors not in the correct place: a white pin"); Console.WriteLine("Press a key to start"); Console.ReadLine(); Console.Clear(); } /// <summary> /// Asks for the balls you want to pick, using the readkey function, and creates a List of balls. /// </summary> /// <param name="NumberOfBalls"></param> /// <returns></returns> private static List<Ball> GetYourBalls(int NumberOfBalls) { var yourBalls = new List<Ball>(); for (int i = 0; i < NumberOfBalls; i++) { var yourBallColor = Console.ReadKey().Key; switch (yourBallColor) { case ConsoleKey.W: yourBalls.Add(new(BallColor.White)); break; case ConsoleKey.R: yourBalls.Add(new(BallColor.Red)); break; case ConsoleKey.G: yourBalls.Add(new(BallColor.Green)); break; case ConsoleKey.B: yourBalls.Add(new(BallColor.Blue)); break; case ConsoleKey.P: yourBalls.Add(new(BallColor.Pink)); break; case ConsoleKey.Y: yourBalls.Add(new(BallColor.Yellow)); break; default: yourBalls.Add(new(BallColor.None)); break; } } return yourBalls; } /// <summary> /// Calculates the resulting pins of your guess. An exact match is a black pin, a non-exact match is a white pin. 
/// </summary> /// <param name="ballsToGuess"></param> /// <param name="ballsPicked"></param> /// <returns></returns> private static int[] CalculatePins(List<Ball> ballsToGuess, List<Ball> ballsPicked) { int blackPin = 0; int whitePin = 0; // First of all, create a duplicate of both ball objects, // as we don't want to change the balls of the original list var duplicateBallsToGuess = new List<Ball>(); for (int ballIndex = 0; ballIndex < ballsToGuess.Count; ballIndex++) { duplicateBallsToGuess.Add(new(ballsToGuess[ballIndex].Color)); } var duplicateBallsPicked = new List<Ball>(); for (int ballIndex = 0; ballIndex < ballsPicked.Count; ballIndex++) { duplicateBallsPicked.Add(new(ballsPicked[ballIndex].Color)); } // First check all exact matches for (int i = 0; i < duplicateBallsPicked.Count; i++) { // check if there is an exact match if (duplicateBallsToGuess[i].Equals(duplicateBallsPicked[i])) { // Black pin, as it is found, and on the exact place blackPin++; duplicateBallsToGuess[i].Color = BallColor.None; duplicateBallsPicked[i].Color = BallColor.None; continue; } } // Now check all non-exact matches for (int i = 0; i < duplicateBallsPicked.Count; i++) { if (duplicateBallsPicked[i].Color == BallColor.None) { // Ball is already mapped to an exact match continue; } // Find the index of color to look for; index of -1 means it is not found int index = duplicateBallsToGuess.FindIndex(ball => ball.Color == duplicateBallsPicked[i].Color); if (index == -1) { // Nothing found, so moving to the next continue; } // White pin, as it is found, but not on the exact place duplicateBallsToGuess[index].Color = BallColor.None; whitePin++; } int[] ReturnedPins = { blackPin, whitePin }; return ReturnedPins; } /// <summary> /// Converts a list of object balls into a string, where the colors are the first letters of their color name /// </summary> /// <param name="balls"></param> /// <returns></returns> public static string ConvertBallColorToString(List<Ball> balls) { string ballcolors 
= ""; for (int i = 0; i < balls.Count; i++) { switch (balls[i].Color) { case BallColor.White: ballcolors += "W "; break; case BallColor.Red: ballcolors += "R "; break; case BallColor.Blue: ballcolors += "B "; break; case BallColor.Green: ballcolors += "G "; break; case BallColor.Yellow: ballcolors += "Y "; break; case BallColor.Pink: ballcolors += "P "; break; default: ballcolors += "X "; break; } } return ballcolors.Trim(); } #endregion } Ball.cs internal class Ball { #region Properties /// <summary> /// The color you want the ball to have /// </summary> public BallColor Color { get; set; } #endregion #region Constructors /// <summary> /// Empty Constructor - creates a ball with a random color /// </summary> public Ball() { Color = GetRandomColor(); } /// <summary> /// Constructor - creates a ball with a predefined color /// </summary> /// <param name="Color"></param> public Ball(BallColor color) { Color = color; } #endregion #region Methods /// <summary> /// Picks a random color /// </summary> public BallColor GetRandomColor() { // Convert the color enumeration to a list, as this is easier to work with List<BallColor> ballColors = Enum.GetValues(typeof(BallColor)).Cast<BallColor>().ToList(); // Remove the first value from the color enumeration, as it is not a color ballColors.Remove(BallColor.None); // Pick a random value Random random = new(); return ballColors[random.Next(ballColors.Count)]; } #endregion /// <summary> /// Override the Equals method, as we want to compare ball colors /// </summary> /// <param name="obj"></param> /// <returns></returns> public override bool Equals(object obj) { // If the passed object is null if (obj == null) { return false; } if (!(obj is Ball)) { return false; } return (this.Color == ((Ball)obj).Color); } public override int GetHashCode() { return Color.GetHashCode(); } } enum.cs public enum BallColor { None, White, Red, Blue, Green, Pink, Yellow } Answer: static Members that do not make use of instance members should be static.
For example, Ball.GetRandomColor() does not make any use of instance state or instance methods (in other words, it never uses this), so it should be static: public static BallColor GetRandomColor() This also applies to the Program class as a whole: it is never instantiated, and it only has static members, so it can be static as well: internal static class Program private Members that do not need to be accessed from the outside should be private. For example, Ball.GetRandomColor and Program.ConvertBallColorToString are never accessed from the outside, so they should be private: private static string ConvertBallColorToString(List<Ball> balls) private static BallColor GetRandomColor() Properties To me, GetRandomColor looks more like a property than a method. So, I would make it a property: public static BallColor RandomColor { get { // Convert the color enumeration to a list, as this is easier to work with List<BallColor> ballColors = Enum.GetValues(typeof(BallColor)).Cast<BallColor>().ToList(); // Remove the first value from the color enumeration, as it is not a color ballColors.Remove(BallColor.None); // Pick a random value Random random = new(); return ballColors[random.Next(ballColors.Count)]; } } The callsite also needs to be updated accordingly: Color = RandomColor Note: this is somewhat controversial since properties should not have side-effects, but this property uses a PRNG. I can see both sides of the argument. PRNG management You are creating a new PRNG on every call to GetRandomColor. That is not only wasteful, but can also lead to bugs: the PRNG is seeded by default with a time-based seed. So, if you create PRNGs fast enough after each other, they will have the same seed and thus produce the same sequence of "random" numbers! You should only create the PRNG once and reuse it.
I would simply store it in a private static field: private static readonly Random random = new(); // Pick a random value return ballColors[random.Next(ballColors.Count)]; Expression-bodied members For members whose body consists only of a single expression, there is an alternative syntax available which removes some "cruft". Ball.GetHashCode and the two Ball constructors fall into this category: public Ball() => Color = RandomColor; public Ball(BallColor color) => Color = color; public override int GetHashCode() => Color.GetHashCode(); this constructor initializer You can use a this constructor initializer to delegate from one constructor to another. In this case, we can delegate from the no-args constructor to the one that takes an argument: public Ball() : this(RandomColor) { } For this simple example, there is not much of a difference whether we call the constructor ourselves in the constructor body or if we let the compiler do it. But for more complex hierarchies with more complex initialization, this can make it much clearer to see what is going on. Generic Enum.GetValues<T> You can remove all the typeof and Casting from the conversion of the enum values to a list by using the generic version of Enum.GetValues<T>. Before: List<BallColor> ballColors = Enum.GetValues(typeof(BallColor)).Cast<BallColor>().ToList(); After: List<BallColor> ballColors = Enum.GetValues<BallColor>().ToList(); Simpler way to pick a random color If you move the None value to the end of enum BallColor, then, instead of filtering it out of a list, you can simply ignore the last element. 
Before: // Convert the color enumeration to a list, as this is easier to work with List<BallColor> ballColors = Enum.GetValues<BallColor>().ToList(); // Remove the first value from the color enumeration, as it is not a color ballColors.Remove(BallColor.None); // Pick a random value return ballColors[random.Next(ballColors.Count)]; After: var ballColors = Enum.GetValues<BallColor>(); // Pick a random value return ballColors[random.Next(ballColors.Length - 1)]; record classes For simple data objects like Balls, C# has record classes and record structs. Converting Ball to a record class is relatively simple: Add the record keyword to the class declaration: internal record class Ball Delete GetHashCode and Equals. They are auto-generated for us, based on the properties of the record. (We also get auto-generated ==, != and an implementation of IEquatable, ToString(), and many others.) Positional record class We can even go one step further and make our record a positional record. Positional records declare positional members as part of their record declaration and get additional auto-implemented members for those (for example, constructors). We can make Ball a positional record by adding the Color member as a positional member to the declaration: internal record class Ball(BallColor Color) Now we can simply delete the Ball(BallColor) constructor: it will be autogenerated for us. We can not delete the property declaration, since positional record classes by default generate init-only properties, but we need our property to be settable. According to the rules of records, we also need to initialize our property from the positional member: public BallColor Color { get; set; } = Color; Naming conventions for parameters and local variables Constructor and method parameters as well as local variables should use camelCase naming, not PascalCase. This applies to NumberOfBalls, NumberOfChances, Outcome, BallsToGuess, BallsPicked, ReturnedPins, Black, White, and PinList.
Prefer string.Empty over "" for empty strings You should prefer the string.Empty property over an empty string literal for empty strings, for example here: string outcome = string.Empty; Unnecessary parentheses The parentheses here are unnecessary: return formerAttempt + (pinColors + ballColors) + "\n"; You can just remove them: return formerAttempt + pinColors + ballColors + "\n"; Implicit typing using var This is a controversial topic. I, personally, am a big fan of implicit typing / type inference using var. Whenever the type of a variable is obvious in context, I don't see why it should be spelled out. So, I would write var numberOfBalls = 4; var numberOfChances = 12; var outcome = string.Empty; for (var i = 0; i < numberOfChances; i++) var ballsPicked = GetYourBalls(ballsToGuess.Count); var pinList = string.Empty; for (var i = 0; i < numberOfBalls; i++) var blackPin = 0; var whitePin = 0; for (var ballIndex = 0; ballIndex < ballsToGuess.Count; ballIndex++) for (var ballIndex = 0; ballIndex < ballsPicked.Count; ballIndex++) for (var i = 0; i < duplicateBallsPicked.Count; i++) for (var i = 0; i < duplicateBallsPicked.Count; i++) var index = duplicateBallsToGuess.FindIndex(ball => ball.Color == duplicateBallsPicked[i].Color); var ballcolors = string.Empty; for (var i = 0; i < balls.Count; i++) But others may (and will) disagree. Target-typed new I am also a big fan of target-typed new in cases where the type is obvious from context. For example, here: string black = new('#', returnedPins[0]); string white = new('|', returnedPins[1]); Unnecessary local variable I feel like the local variable here does not add much to readability: int[] returnedPins = { blackPin, whitePin }; return returnedPins; I would inline it: return new int[] { blackPin, whitePin }; Namespaces All types should be in namespaces, and there should be one top-level namespace for your project. (I notice you are actually doing this in your GitHub repository but not in the code you posted here.) 
As of C# 10, file-scoped namespaces allow us to get rid of the annoying extra level of nesting. Just add namespace MasterMind; to the top of every file. Unnecessary early declaration / initialization In Program.ConvertPinColorToString, pinList is declared and initialized at the top of the method, but never used until later, where it is immediately assigned to, thus overwriting the value it was initialized with. This makes the initialization unnecessary, and it can be removed. But even better, we can move the whole declaration of the variable down to where it is first used: var pinList = (black + white).PadRight(4); StringBuilder Repeatedly concatenating strings is inefficient, since a new string is allocated for each concatenation. While I am generally a fan of purity and referential transparency, unfortunately, the implementation of the string datatype is not optimized for this kind of usage. (A Rope-based implementation would be much more efficient in this regard, for example.) That's why we have System.Text.StringBuilder: private static string ConvertBallColorToString(List<Ball> balls) { System.Text.StringBuilder ballcolors = new(8, 8); for (var i = 0; i < balls.Count; i++) { switch (balls[i].Color) { case BallColor.White: ballcolors.Append("W "); break; case BallColor.Red: ballcolors.Append("R "); break; case BallColor.Blue: ballcolors.Append("B "); break; case BallColor.Green: ballcolors.Append("G "); break; case BallColor.Yellow: ballcolors.Append("Y "); break; case BallColor.Pink: ballcolors.Append("P "); break; default: ballcolors.Append("X "); break; } } return ballcolors.ToString().Trim(); } Pattern Matching C# supports Pattern Matching, which can make Program.ConvertBallColorToString and Program.GetYourBalls easier to read: private static List<Ball> GetYourBalls(int numberOfBalls) { var yourBalls = new List<Ball>(); for (var i = 0; i < numberOfBalls; i++) { var yourBallColor = Console.ReadKey().Key; Ball ball = new(yourBallColor switch { ConsoleKey.W =>
BallColor.White, ConsoleKey.R => BallColor.Red, ConsoleKey.G => BallColor.Green, ConsoleKey.B => BallColor.Blue, ConsoleKey.P => BallColor.Pink, ConsoleKey.Y => BallColor.Yellow, _ => BallColor.None, }); yourBalls.Add(ball); } return yourBalls; } private static string ConvertBallColorToString(List<Ball> balls) { System.Text.StringBuilder ballcolors = new(8, 8); for (var i = 0; i < balls.Count; i++) { var colorCode = balls[i].Color switch { BallColor.White => "W ", BallColor.Red => "R ", BallColor.Blue => "B ", BallColor.Green => "G ", BallColor.Yellow => "Y ", BallColor.Pink => "P ", _ => "X ", }; ballcolors.Append(colorCode); } return ballcolors.ToString().Trim(); } Use the enum member names for the color strings The string that is used for the color of the balls is actually mostly the first letter of the corresponding enum member. The only exception is None whose string representation is X. An easy way to make this more regular and remove the special case would be to rename BallColor.None to BallColor.XNone or maybe change the representation to _ and rename to _None. Once we do that, converting from the enum value to the string representation becomes much simpler: private static string ConvertBallColorToString(List<Ball> balls) { System.Text.StringBuilder ballcolors = new(8, 8); for (var i = 0; i < balls.Count; i++) { var colorCode = balls[i].Color.ToString().Substring(0, 1); ballcolors.Append(colorCode + " "); } return ballcolors.ToString().Trim(); } Some OOP Program does a lot of work to convert Balls to strings. This logic should be part of Ball itself.
We can simply provide an override of ToString in Ball:

public override string ToString() => Color.ToString().Substring(0, 1);

and then Program.ConvertBallColorToString becomes just

private static string ConvertBallColorToString(List<Ball> balls) => string.Join(" ", balls);

And now, we are not using the fact that balls is a List anymore, we are only using it as an IEnumerable, so let's change the signature:

private static string ConvertBallColorToString(IEnumerable<Ball> balls) => string.Join(" ", balls);

Tuples

The return value of CalculatePins is somewhat awkward. It is an array of integers, but unlike a normal array where every element has the same "meaning", here the elements mean different things. And the different meanings are only expressed through their index, but there is no "name" attached to the meaning. This would be much better expressed through a Tuple with named elements. Let's change the signature of CalculatePins to return a Tuple:

private static (int blackPins, int whitePins) CalculatePins(List<Ball> ballsToGuess, List<Ball> ballsPicked)

Of course, we also need to change the return value accordingly:

return (blackPin, whitePin);

We need to change the callsite as well. In Main, let's change

int[] returnedPins = CalculatePins(ballsToGuess, ballsPicked);

to

var (blackPins, whitePins) = CalculatePins(ballsToGuess, ballsPicked);

and we also need to change all uses of returnedPins[0] to blackPins and returnedPins[1] to whitePins.

For dealing with ConvertPinColorToString and GetOutcome, we have three options:

1. Leave ConvertPinColorToString and GetOutcome alone and call them with an array, e.g. var pinColors = ConvertPinColorToString(new int[] { blackPins, whitePins }).
2. Change the signature of ConvertPinColorToString and GetOutcome to take a Tuple.
3. Change the signature of ConvertPinColorToString and GetOutcome to take the number of black pins and white pins separately.

I decided to go for option #3.

#regions

I am not a big fan of #regions.
In general, if your code is so complex that you need to break it down into #regions to understand it, then you should rather break it down into methods and objects.

Comments

You have a lot of comments in your code. Personally, I think a lot of comments are a Code Smell: your code already tells you what your code does; if you need comments to tell you what your code does, then your code is too complex. More precisely, well-written, well-factored, readable code should tell you how the code does things. Well-chosen, intention-revealing, semantic names should tell you what the code does. The only reason to have comments is to explain why the code does a certain thing in a certain, non-obvious way. For example, here:

// Show the intro and game rules
ShowIntro();

Do you really need a comment to explain that a method named ShowIntro shows the intro? Especially since the documentation of ShowIntro also says that it shows the intro? Or here:

// If the passed object is null
if (obj == null)

If someone cannot figure out that if (obj == null) means "if the object is null", then why are they reading C# source code in the first place?

Mixing I/O and computation

There are a couple of places where you are mixing I/O (specifically printing) and computations. Specifically, your Main method. You should try to keep those separate. Most of your methods should only perform computations. Only some methods at the boundary of the system should print or ask for input.

OOP

Your code is not very object-oriented. Almost everything is static. For example, we are talking about a game of MasterMind here, but there is no Game object to be found! I would expect, for example, to find at least a Game and Player object, likely a separate object representing GameState, and also some objects for handling input, output, display, and rendering. And note I wrote objects, not classes. OOP is about objects, it is not about classes.
Classes are templates for objects and they are containers for methods, but they are not what OOP is about.

Testing

It is very hard to test your code. It is practically impossible to do any sort of meaningful testing without having to manually play through an entire game. This gets very tedious very quickly, and discourages testing early and often. Ideally, tests should be automatic and fast, which allows them to run every couple of seconds after every change in the code.

Overall design

These last points are all intertwined with each other. Testing the logic in the code is hard because the logic is intermixed with the I/O, and so I can't simply call a method and pass in some fake data to test, e.g., the logic for losing the game; I have to actually play through the entire game and lose. I can't automatically play a game by passing in a "fake" player, because there is no such thing as a "player" in the code. It is also not easy to introduce these changes because most logic is contained in a single Main method. I would start with breaking up the methods. You already have comments in some places that say something like "first do this", "now do that", "do A", "do B", etc. These comments are telling you that you have separate steps there … so make them separate methods. As a next step, I would strictly separate I/O and computation. Make sure that all methods take arguments and return values, so that you can easily test them by simply calling them and checking the return value instead of having to play the game. Then, I would introduce more objects: a game, two players, and whatever else you need. Note that objects represent domain concepts, not real-world physical things. For example, a certain kind of relationship between two objects can itself be an object.
Once you have separated the game logic from the I/O, it should then also be possible to explore building a GUI for the game in WPF or MAUI, or turning the game into a web service that can be played remotely, or turning it into a web app, only by implementing the appropriate interfaces, but without touching the code of your game logic at all.
{ "domain": "codereview.stackexchange", "id": 42647, "tags": "c#, console" }
Is special relativity falsified by the Aharonov-Bohm effect?
Question: The Lorenz gauge is the only Lorentz invariant electrodynamic gauge. If the vector potential has physical meaning, as in the Aharonov-Bohm effect (ABE), then the gauge condition can not be arbitrarily chosen and Lorentz invariance seems gone. How can the integration path in ABE be Lorentz invariant? Answer: This is not the case. The Aharonov-Bohm effect yields an observable of the form $$\oint_{\mathcal C} A,$$ where $\mathcal{C}$ is some circuit. This is however gauge invariant. A way to see this is by noting that it can be written in terms of the field strength $$\int_\Sigma F,$$ via Stokes' theorem. Here we choose a surface $\Sigma$ whose boundary is $\mathcal{C}$. The point however is that in the Aharonov-Bohm effect, the observable corresponds to an experiment that happens on $\mathcal{C}$, not on $\Sigma$. In particular, the unintuitive part of it is that there is no apparent reason why the observable should contain information about $F$ everywhere on $\Sigma$ if the particles never went there. Therefore, if you want to describe physics locally, you must use the potential $A$ instead of the field strength $F$. Gauge invariance is however not lost.
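To spell out the gauge invariance of the loop observable (this is the standard one-line argument, assuming a single-valued gauge function $\alpha$): under $A_\mu \rightarrow A_\mu - \partial_\mu \alpha$, $$\oint_{\mathcal C} (A_\mu - \partial_\mu \alpha)\, dx^\mu = \oint_{\mathcal C} A_\mu\, dx^\mu - \oint_{\mathcal C} d\alpha = \oint_{\mathcal C} A_\mu\, dx^\mu,$$ since the integral of the exact differential $d\alpha$ around a closed loop vanishes. The observable is therefore unchanged even though $A_\mu$ itself is not.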
{ "domain": "physics.stackexchange", "id": 70272, "tags": "electromagnetism, special-relativity, gauge, aharonov-bohm" }
Frequency of gravitational wave detection
Question: You may have heard in the news that the LIGO experiment recently detected a gravitational wave. Though I'm not an astronomer, the paper is a good read and mostly accessible. The detection of the gravitational wave is one thing, but the black hole merger is quite new to me. From the data I could collect in the paper and this site, the source is estimated at 1.3 billion light-years away, and the chirp lasted only a few milliseconds. My question: what is the frequency of events of this size order? Is there any estimate of the density of such events in the universe? Answer: Harry (2009), citing several different sources, stated that, as far as we know, the rate of detectable events will be

40 neutron star mergers per year
30 mergers of 10 M$_\odot$ black holes per year
10 neutron star/black hole mergers per year

This is within a radius of about 200 Mpc. This cannot, however, be used to extrapolate the total rate of such events, because of detection bias: the more massive the objects, the easier they are to detect. The same thing happens with exoplanets, but for different reasons (e.g. planets that are more massive or closer to their stars are easier to detect via transit or by radial velocity methods).
{ "domain": "astronomy.stackexchange", "id": 1422, "tags": "gravitational-waves" }
Torque required to turn a drum/barrel
Question: I need to spec a motor to turn a mixing barrel. The barrel contains loose earth and can be filled to a maximum of 50% of its interior volume. It is a horizontal cylinder, and will rotate about its central axis. What effort/torque is required to start the barrel turning? In my case, the mass of the barrel when filled to its maximum is 50 kg, its length is 1.2 m, and its diameter is 700 mm. Friction in the bearings and the drive is negligible. Answer: Back-of-the-envelope estimate: assume the pile will, on average, have a slope of 45 degrees. Take $\theta = 0$ to be the horizontal on the side where the pile accumulates. Then integrate: $$ \bar\tau = \int_0^R dr \int_{-\frac{3\pi}{4}}^{\frac{\pi}{4}} d\theta\, \rho l (-g) r^2 \cos(\theta) $$ $$ \bar\tau = g \rho l \frac{R^3}{3} \left[ \sin(\theta) \right]_{\theta=-\frac{3\pi}{4}}^{\frac{\pi}{4}} $$ $$ \bar\tau = g \rho l \frac{R^3}{3} \left[\frac{\sqrt{2}}{2} + \frac{\sqrt{2}}{2}\right] $$ $$ \bar\tau = g \rho l \frac{\sqrt{2}}{3} R^3 $$ where $\rho$ represents the mass density of the mixture, $l$ the depth of the drum, and $R$ the radius of the drum. Now $\rho l \frac{\pi R^2}{2} = 50\text{ kg}$, so $$ \bar\tau = g \frac{2\sqrt{2}}{3 \pi} (50\text{ kg}) R $$
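Plugging the question's numbers into the final expression gives a feel for the motor sizing. A quick numeric check in Python (g = 9.81 m/s² is assumed; the 50 kg follows the answer's substitution ρlπR²/2 = 50 kg):

```python
import math

g = 9.81   # m/s^2, standard gravity (assumed; the answer leaves g symbolic)
m = 50.0   # kg, from the answer's substitution rho*l*pi*R^2/2 = 50 kg
R = 0.35   # m, drum radius (700 mm diameter)

# tau = g * (2*sqrt(2)/(3*pi)) * m * R, the final expression above
tau = g * (2 * math.sqrt(2) / (3 * math.pi)) * m * R
print(f"starting torque ~ {tau:.1f} N*m")
```

So the drive needs very roughly 50 N·m at the drum shaft just to hold the pile over, before any margin for friction, starting inertia, or dynamic loads.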
{ "domain": "physics.stackexchange", "id": 1064, "tags": "newtonian-mechanics, torque" }
How can navigation be aborted even if goal is not reached?
Question: Hi, how can I cancel navigation with move_base and stop the robot by sending a message or a service call even if the goal sent to move_base is not reached yet? Regards, Sabrina Originally posted by Sabrina on ROS Answers with karma: 285 on 2011-11-08 Post score: 0 Answer: Or from the command line, $ rostopic pub move_base/cancel actionlib_msgs/GoalID '{}' Originally posted by bhaskara with karma: 1479 on 2011-11-09 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Sabrina on 2011-11-09: Thank you, that was the solution I was looking for.
{ "domain": "robotics.stackexchange", "id": 7237, "tags": "navigation, move-base" }
What is the purpose of erasing a type application to a term-application in parametric polymorphism?
Question: From Types and Programming Languages by Pierce, 23 Polymorphism, 23.7 Erasure and Evaluation Order:

in a full-blown programming language, which may include side-effecting features such as mutable reference cells or exceptions, the type-erasure function needs to be defined a little more delicately than the full erasure function in §23.6. For example, if we extend System F with an exception-raising primitive error (§14.1), then the term

let f = (λX.error) in 0;

evaluates to 0 because λX.error is a syntactic value and the error in its body is never evaluated, while its erasure

let f = error in 0;

raises an exception when evaluated. What this shows is that type abstractions do play a significant semantic role, since they stop evaluation under a call-by-value evaluation strategy and hence can postpone or prevent the evaluation of side-effecting primitives. We can repair this discrepancy by introducing a new form of erasure appropriate for call-by-value evaluation, in which we erase a type abstraction to a term-abstraction:

eraseᵥ(x) = x
eraseᵥ(λx:T₁. t₂) = λx. eraseᵥ(t₂)
eraseᵥ(t₁ t₂) = eraseᵥ(t₁) eraseᵥ(t₂)
eraseᵥ(λX. t₂) = λ_. eraseᵥ(t₂)
eraseᵥ(t₁[T₂]) = eraseᵥ(t₁) dummyᵥ

where dummyᵥ is some arbitrary untyped value, such as unit.

Is the purpose of "erase a type abstraction to a term-abstraction" to prevent the body of a type abstraction from being evaluated?

What is the purpose of erasing a type application to a term-application by adding dummyᵥ?

What is the evaluation rule for an application when the argument is unit? (I can't find it in the section for type Unit.)

Thanks.

Answer: Q1

Is the purpose of "erase a type abstraction to a term-abstraction" to prevent the body of a type abstraction from being evaluated?

Yes.
We can repair this discrepancy by introducing a new form of erasure appropriate for call-by-value evaluation, in which we erase a type abstraction to a term-abstraction

Q2

What is the purpose of erasing a type application to a term-application by adding dummyᵥ?

The erasure introduces a term-abstraction into a type-abstraction:

eraseᵥ(λX. t₂) = λ_. eraseᵥ(t₂)

So the erasure needs a term-application instead of a type-application. An argument of the term-application can be any untyped value because it will be discarded, so for example unit.

Q3

What is the evaluation rule for an application when the argument is unit? (I can't find it in the section for type Unit.)

The unit value unit is a value. Therefore we can use (E-APPABS) for an application.

(λx:T₁₁. t₁₂) v₂ ⟶ [x ↦ v₂] t₁₂    (E-APPABS)
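To make the call-by-value erasure concrete, here is a small Python sketch (the tuple-based AST encoding and the names are mine, not Pierce's). It maps type abstraction to a dummy term-abstraction and type application to application to unit, exactly as the rules above do:

```python
# Terms encoded as tuples:
#   ('var', name) | ('abs', name, body) | ('app', f, a)
#   ('tyabs', body) | ('tyapp', term) | ('unit',)
# Type annotations and type arguments are dropped entirely, so 'abs'
# carries no type and 'tyapp' carries no type argument.

def erase_v(t):
    """Call-by-value erasure: type abstraction -> dummy lambda,
    type application -> application to the dummy value unit."""
    tag = t[0]
    if tag in ('var', 'unit'):
        return t
    if tag == 'abs':
        return ('abs', t[1], erase_v(t[2]))
    if tag == 'app':
        return ('app', erase_v(t[1]), erase_v(t[2]))
    if tag == 'tyabs':                        # λX. t2   ->  λ_. erase(t2)
        return ('abs', '_', erase_v(t[1]))
    if tag == 'tyapp':                        # t1 [T2]  ->  erase(t1) unit
        return ('app', erase_v(t[1]), ('unit',))
    raise ValueError(tag)

# (λX. error) erases to λ_. error, which is a value, so under
# call-by-value the error in its body stays unevaluated.
term = ('tyabs', ('var', 'error'))
print(erase_v(term))  # ('abs', '_', ('var', 'error'))
```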
{ "domain": "cs.stackexchange", "id": 14590, "tags": "programming-languages, type-theory, types-and-programming-languages, polymorphisms" }
Identify a hanging plant with spherical leaves
Question: This plant was found in the "drylands" section of a public conservatory (you can see a potted cactus in the background). It did not have a label, and there was nobody around to ask about it. The plant has hanging stems about a meter long, with spherical leaves about five millimeters in diameter -- the leaves appear to be genuinely spherical, not curled-up flat leaves. Any idea what it is? Answer: It seems that this might be Curio rowleyanus, sometimes called "string-of-pearls". The Wikipedia article indicates it has spherical leaves about 6mm in diameter on trailing stems up to about a meter. It is a succulent and the spherical leaves reduce evaporation for volume contained. It grows in the dry parts of southwest Africa.
{ "domain": "biology.stackexchange", "id": 12386, "tags": "species-identification, botany" }
Why does the freezing point of a sample of gas occur at a certain temperature?
Question: From "North Carolina Measures of Student Learning: NC's Common Exams Chemistry" (Source, Number 20):

The graph below shows a cooling curve for a sample of gas that is uniformly cooled from $155~^\circ\mathrm{C}$. Why does the freezing point of the substance occur at $-20~^\circ\mathrm{C}$?

1. because the latent heat energy is absorbed by the substance as it is converted from a liquid to a solid
2. because the latent heat energy is released into the air as the substance is converted from a liquid to a solid
3. because the average kinetic energy is increasing for the substance as it is converted from a solid to a liquid
4. because the average kinetic energy is decreasing for the substance as it is converted from a solid to a liquid

It's really just a simple question, and I think the answer is 3, because the average kinetic energy increases when the substance is converted from solid to liquid. Any reason why I'm wrong and the answer says it is 2?

Answer: The first thing is that it has to be going from liquid to solid, because the substance is being cooled. Like tap water put in the freezer: liquid to solid. On that note, the average kinetic energy goes down (directly assumed, because that's what temperature is, and it's going down). Which leaves us with two answer choices: latent heat absorbed or latent heat released, where latent heat is the heat absorbed or released by a thermodynamic system at a constant temperature (like a phase change!). If you're gaining energy, particles are going to have more velocity, like those virtual diagrams of gases with fast-moving balls. Since we're going to a solid, losing energy makes sense, because particles can't move as fast and are trapped in a fixed location (like an ice cube, which has a definite cuboid shape). So: liquid to solid, latent heat because the temperature is constant, and losing energy to become a solid. Choice 2 is the answer.
{ "domain": "chemistry.stackexchange", "id": 2557, "tags": "energy, heat, phase" }
ros2_control/asio: topics stop being received
Question: Hello, I am having issues with ros2_control on the Humble distro and Ubuntu 22.04. I have a custom hardware system that communicates with the hardware over TCP/IP. I implemented the communication using asio, running in a thread. I am using a custom controller that basically is the forward_command_controller, without the realtime buffers. It subscribes to a single Float64MultiArray commands topic. For some reason, the controller's subscriber stops receiving messages after a few messages. This results in the commands not being updated and sent to the hardware. I suspect that using asio in a thread inside the ros2_control architecture might be causing problems, but I am not certain. Feel free to ask for more information. Any help would be appreciated! EDIT: I tested with a minimal example with and without asio running in a thread and the result is the same. The topics stop being received after some short time. So the issue seems unrelated to asio. Originally posted by Sam_Prt on ROS Answers with karma: 28 on 2023-01-18 Post score: 0 Answer: This was a bug related to issue #656 in rmw_fastrtps. See the issue I opened on ros2_control/issues/923. The bug is now fixed. Originally posted by Sam_Prt with karma: 28 on 2023-07-11 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 38232, "tags": "ros2" }
Auto change page title while bookmarking a page
Question: Here is what I'm trying. Please review the code:

function quicklyChangePageTitle() {
    var currentTitle = document.title; // remember original title
    document.title = "temp title"; // change to the temporary title
    setTimeout(function() {
        // revert back to original title
        document.title = currentTitle;
    }, 1);
}

document.addEventListener("keydown", function(event) {
    if (event.ctrlKey && event.keyCode == 68) { // Ctrl + D
        quicklyChangePageTitle();
    }
});

Answer: That seems like a very fragile thing to do. If it works at all, you would still only succeed on browsers where the bookmarking keyboard shortcut is Ctrl+D. It probably won't work on a Mac or on a mobile device. It will probably break if the user interface is in a non-English locale with different keyboard shortcuts. It would also have no effect if the user initiates the bookmark creation using the menu rather than a keyboard shortcut. You should probably step back and consider why such a hack is necessary in the first place. Ideally, you should design the page titles such that they already make reasonable bookmark titles.
{ "domain": "codereview.stackexchange", "id": 6516, "tags": "javascript, timer, event-handling" }
Gauge invariance in QED
Question: I could never understand the choice of gauge in QED. Let's say I know that $A_{\mu}$ has 4 components, hence 4 degrees of freedom. For, say, a photon I need only two. Let's say I pick the Lorenz gauge and set $\partial_{\mu} A^{\mu} = 0$. What does it change? I know that it makes the equations of motion symmetric, but how can I see explicitly that I have 3 degrees of freedom now? For a photon, one usually goes further and chooses $A^0 = 0$ and $\nabla \cdot \vec{A} = 0$. Somehow it reduces the number of dof to 2... I can't see all of that. I mean, I do understand that the constraints should reduce the number of dof in the system, but there has to be some systematic approach, like, say, Lagrange multipliers in classical mechanics, not just "I want to do this cuz it looks cool and makes my life easier"=( Answer: In usual theories the number of degrees of freedom of a system can be obtained by looking at the number of variables in relation to the number of equations describing the system. In the case of classical electrodynamics one would be tempted to derive the equation of motion for the photon from the Lagrangian $\mathscr{L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}$ and try to constrain the 4 components of the $A_\mu$ field to two. However, the $A_\mu$ field has the gauge invariance $A_\mu \rightarrow A_\mu - \partial_\mu \alpha$. The function $\alpha$ can be chosen freely in the Lagrangian, and this gauge invariance is responsible for a redundancy in the description of the system; the true number of degrees of freedom remains hidden. To find out the true physical degrees of freedom the system needs to be quantized and the gauge redundancy must be isolated. This is done through the Gupta-Bleuler formalism in QED. The more general procedure is called Faddeev-Popov quantization and is also applicable to non-Abelian theories.
The main point in the quantization procedure is to write the photon field as a Fourier decomposition with annihilation and creation operators $a$ and $a^\dagger$: $$A_\mu =\int \frac{d^4k}{(2\pi)^4}\sum_{\lambda=0}^3(e^{-ikx}a^\dagger(k)\epsilon_\mu(k, \lambda) + e^{ikx}a(k)\epsilon_\mu^*(k, \lambda)) .$$ The former four degrees of freedom of the system are now in the 4 linearly independent polarization vectors $\epsilon(k)$. The Lorenz gauge $\partial_\mu A^\mu =0$ must now be imposed on the quantum level, hence on the Hilbert space, giving $k_\mu \epsilon^\mu= 0$. This restricts the possible polarizations of the photon by eliminating the longitudinal polarization; hence, one degree of freedom is lost. By continuing the procedure and using the massless condition $k^2=0$ one can make another possible polarization decouple from the physical degrees of freedom, leaving the system with only 2 physical transverse polarizations. The process of quantization is highly non-trivial, and so is the counting of degrees of freedom.
{ "domain": "physics.stackexchange", "id": 33514, "tags": "quantum-electrodynamics, gauge-invariance, gauge" }
Choosing a diffraction grating
Question: I am interested in doing some solar spectroscopy on my own for fun. How do I go about choosing the appropriate diffraction grating? Answer: There isn't a lot of engineering here. Just use a grating dense enough to give you a comfortable angle. Less dense grating (fewer lines per mm) means a more shallow angle and narrow spectrum, and that can be harder to use. Super-dense grating might be expensive and fragile. Stay somewhere in the middle - you'll have to do a few trial runs on your own, see what works for you. I made this spectrograph for my kids, works well enough: http://sci-toys.com/scitoys/scitoys/light/spectrograph/spectrograph.html I used the cheap grating that they provide on their own store, 1000 lines / mm, not bad: http://www.scitoyscatalog.com/product/DIFFRACTION.html
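The "comfortable angle" can be quantified with the grating equation d·sin θ = mλ. A rough Python sketch (550 nm is assumed here as a representative mid-visible wavelength):

```python
import math

wavelength = 550e-9  # m, mid-visible green, assumed as a representative wavelength

for lines_per_mm in (300, 600, 1000):
    d = 1e-3 / lines_per_mm             # groove spacing in metres
    s = wavelength / d                  # sin(theta) for first order (m = 1)
    theta = math.degrees(math.asin(s))  # first-order diffraction angle
    print(f"{lines_per_mm} lines/mm -> first-order angle ~ {theta:.1f} deg")
```

For 1000 lines/mm the visible band (roughly 400-700 nm) lands at about 24-44° from the straight-through beam, which is exactly the easy-to-use middle ground described above; at 300 lines/mm the whole spectrum is squeezed below about 12°.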
{ "domain": "astronomy.stackexchange", "id": 339, "tags": "spectroscopy" }
Centrifugal force on body in motion in space
Question: I'm trying to understand the motion of a body in space for simulation purposes, and wondering if my understanding of centrifugal force is correct. Imagine a body in space with no gravitational/EM forces acting on it. The body is a ship that can apply thrust along different axes (X+, X-, Y+, Y-, Z+, Z-). The body starts from rest by applying a thrust force along X-. It achieves a certain velocity (along X+) and turns off the thrust force. It now wants to move along the Y+ direction, hence it applies a thrust in the Y- direction. To an observer sitting at an absolute position (somewhere above the plane of the ship and looking down Z-), will the ship now be moving along a line at 45° to its previous path? Or will there be a centrifugal force acting on the ship, thus throwing the ship outwards along the radius of curvature? And to actually make the desired turn, will the ship have to apply an additional thrust that points to the center of curvature? Is the latter observation the same as a train moving on the earth's surface along a curved path, hence needing an angle of bank to avoid going off course? Answer: The ship can apply thrust along +x, -x, +y, -y, +z & -z. After reaching a velocity $v$ in the +x direction, the thrust is switched off. Now if thrust is switched on along the -y direction, the body will first traverse a curved path during the acceleration, until it makes an angle with the +x direction, attaining some velocity $v'$ in the +y direction when the thrust is switched off. The body will then move along that direction with a constant speed of $(v^2+v'^2)^{1/2}$. If the thrusts are identical, the angle will be $45^o$ and $v=v'$. During the curved motion, objects in the ship will experience a pseudo force (but not a centrifugal force) in the -y direction. If the ship can apply thrust along all directions, then the thrust can be applied in such a way that it is always along the radius of curvature of the path of the ship, radially outward, applying force on the ship radially inward.
In this case objects in the ship will experience a centrifugal force (a pseudo force) radially outwards.
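The coasting state after the second burn can be checked with a couple of lines of Python (the speeds produced by the two burns are made-up illustrative values):

```python
import math

v = 10.0        # m/s, speed gained along +x by the first burn (illustrative)
v_prime = 10.0  # m/s, speed gained along +y by an identical second burn

speed = math.hypot(v, v_prime)                # |final velocity| = sqrt(v^2 + v'^2)
angle = math.degrees(math.atan2(v_prime, v))  # direction measured from the +x axis

print(f"{speed:.2f} m/s at {angle:.0f} degrees to the original path")
```

With identical burns the craft ends up coasting at 45° to its original path, as the answer states; unequal burns simply shift the angle to arctan(v'/v).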
{ "domain": "physics.stackexchange", "id": 27769, "tags": "classical-mechanics, centrifugal-force" }
What are pressures, air-flow rates and ground clearance of some real-life hovercraft?
Question: I'm trying to get a feel for the real-life behavior of hovercraft air cushions and the associated power consumption. To get a feel for relevant numbers, I'd like to know

the actual pressure used to lift said hovercraft
the clearance (average or range) between skirt and water/ground surface
the air loss through that clearance

One could treat the clearance as a simple slit; however, the air flow under a hovercraft is more complex than that, due to the round shape of the skirt. In practical operation the craft will hover around an equilibrium where the pressure is "weight of hovercraft" / "area under skirt" and the clearance is higher or lower depending on airflow (or the airflow is higher or lower depending on the clearance required). An empirical formula for the relationship between pressure, clearance, skirt length and air flow would be even better, but real-life numbers for one situation are OK. Answer: Although I cannot assist you with explicit formulas myself, I would like to make you aware of the book Theory and Design of Air Cushion Craft by Liang Yun and Alan Bliault, which is available (with limitations) on Google Books. In particular, I would suggest the chapters [2] Air cushion theory and [12] Lift system design. Sure, some pages are not included in the preview, but considering the overall quality of the book, I find this a small price to pay. Having had a look at it myself, I also believe it will be of great assistance, whatever expertise you currently possess in the field of air cushion vehicles.
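As a rough feel for the three quantities the question asks about, here is a minimal plenum-chamber sketch in Python. All numbers are illustrative, and the discharge-coefficient model (cushion pressure from weight over area, Bernoulli escape velocity through the gap, flow through the skirt perimeter) is a standard textbook simplification, not taken from the book recommended above:

```python
import math

weight = 5000.0   # N, craft weight (illustrative, roughly a 500 kg craft)
area = 6.0        # m^2, cushion area under the skirt
perimeter = 10.0  # m, skirt perimeter
clearance = 0.01  # m, gap between skirt and ground
rho = 1.225       # kg/m^3, air density at sea level
Cd = 0.6          # discharge coefficient for the gap (typical assumed value)

p = weight / area                          # cushion pressure, Pa
v_escape = math.sqrt(2 * p / rho)          # Bernoulli escape velocity through the gap
Q = Cd * perimeter * clearance * v_escape  # volumetric air loss, m^3/s
lift_power = p * Q                         # ideal fan power to sustain the cushion, W

print(f"pressure ~ {p:.0f} Pa, escape velocity ~ {v_escape:.1f} m/s, "
      f"flow ~ {Q:.2f} m^3/s, ideal power ~ {lift_power:.0f} W")
```

With these made-up numbers the cushion runs at under 1 kPa and loses a couple of cubic metres of air per second, needing on the order of 2 kW of ideal lift power; real figures depend heavily on skirt design, which is where the book's treatment comes in.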
{ "domain": "engineering.stackexchange", "id": 4716, "tags": "fluid-mechanics" }
Lagrangian formalism (demonstration)
Question: My question is about the non-uniqueness of the Lagrangian of a physical system. I intend to demonstrate the following proposition: For a system with $n$ degrees of freedom, described by the Lagrangian $L$, we have that: $$L'=L+\frac{d F(q_1,...,q_n,t)}{d t}$$ also satisfies Lagrange's equations, where $F$ is any differentiable function. I saw the solution to this problem and found that another interesting proposition was needed to complete the demonstration, namely: $$\frac{\partial \dot{F}}{\partial \dot{q}}=\frac{\partial F}{\partial q}$$ Why is this true? How could I say that the relation between the temporal variation of $F$ and the temporal variation of $q$ is equivalent to the relation between $F$ and $q$? Maybe this is a silly question, but I can't get it into my head. I can try examples and see that it's true, but I can't figure out why. Answer: Hint: $\frac{\partial }{\partial \dot{q}}\dot{F}=\frac{\partial}{\partial \dot{q}}\left(\frac{\partial F}{\partial q}\dot{q}+\frac{\partial F}{\partial \dot{q}}\ddot{q}+\frac{\partial F}{\partial t}\right)$ What does $F=F(q,t)$ imply about $\frac{\partial}{\partial \dot{q}} F$?
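The identity can also be checked numerically. A quick sanity check in Python, with an arbitrary concrete choice of F (mine, purely illustrative), comparing the partial of the total time derivative with respect to q-dot against the partial of F with respect to q, via central differences:

```python
import math

# Illustrative choice (not from the question): F(q, t) = q^2 * sin(t)
def F(q, t):
    return q * q * math.sin(t)

def F_dot(q, qdot, t):
    # Total time derivative along a trajectory, dF/dt = (dF/dq)*qdot + dF/dt,
    # written out explicitly for this concrete F
    return 2 * q * math.sin(t) * qdot + q * q * math.cos(t)

q, qdot, t, h = 1.3, 0.7, 0.5, 1e-6

# partial of F_dot with respect to qdot (central difference)
dFdot_dqdot = (F_dot(q, qdot + h, t) - F_dot(q, qdot - h, t)) / (2 * h)
# partial of F with respect to q (central difference)
dF_dq = (F(q + h, t) - F(q - h, t)) / (2 * h)

print(dFdot_dqdot, dF_dq)  # both equal 2*q*sin(t), as the identity predicts
```

The agreement is exactly what the hint is driving at: because F depends only on q and t, the only place a q-dot appears in the total derivative is the (∂F/∂q)·q-dot term.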
{ "domain": "physics.stackexchange", "id": 20982, "tags": "homework-and-exercises, classical-mechanics, lagrangian-formalism, differentiation" }
Work done as change of potential, how total derivative is converted to partial derivative
Question: I am reading Goldstein's Classical Mechanics, 3rd edition. In Chapter 1 it says: If the work done in moving from point 1 to 2, denoted by $W_{12}$, is independent of the path, it should be possible to express it as a change in a quantity that depends only on the positions of the end points. This quantity may be designated as $-V$, so that for a differential length we have the relation $$ \begin{equation} \label{eq:1} \mathbf{F}\cdot\text d\mathbf{s} = -\text dV \tag{1} \end{equation} $$ or $$ F_{\mathrm{s}}=-\frac{\partial V}{\partial s} \tag{2} $$ I am not sure how (2) comes from (1). How does the dot product go away and a partial derivative get introduced? Also, it goes on to say that if the force applied on the particle is given by the gradient of a scalar function that depends both on position and time, the work done when it travels a distance $\text ds$ is $$\mathbf{F} \cdot \text d \mathbf{s}=-\frac{\partial V}{\partial s} \text ds \tag{3}$$ I am clear as to how (1) comes about, but how does that change to (3) if $V$ is a function of both position and time? Answer: We know that $F_s=\vec{F}\cdot \vec{u}_s$; moreover $\vec{F}\cdot \vec{\mathrm{d}s}=\vec{F}\cdot(\mathrm{d}s\, \vec{u}_s)=(\vec{F}\cdot \vec{u}_s)\mathrm{d}s=F_s\mathrm{d}s$, and according to your third equation $\vec{F}\cdot\vec{\mathrm{d}s}=-\dfrac{\partial V}{\partial s}\mathrm{d}s$. So at the end we get $F_s\mathrm{d}s=-\dfrac{\partial V}{\partial s}\mathrm{d}s$ and so $F_s=-\dfrac{\partial V}{\partial s}$. Now if $V$ is time dependent, we have $F_s=-\dfrac{\partial V}{\partial s}+\dfrac{\partial V}{\partial t}\dfrac{dt}{ds}=-\dfrac{\partial V}{\partial s}+\dfrac{\partial V}{\partial t}\dfrac{1}{v_s}$ (because $dV=\frac{\partial V}{\partial s}\mathrm{d}s+\frac{\partial V}{\partial t}\mathrm{d}t$ and $\vec{F}\cdot\vec{\mathrm{d}s}=-\frac{\partial V}{\partial s}\mathrm{d}s$).
{ "domain": "physics.stackexchange", "id": 63248, "tags": "newtonian-mechanics, forces, classical-mechanics, work, differentiation" }
Isn't the uncertainty principle just a non-fundamental limitation of our current technology that could be removed in a more advanced civilization?
Question: From what I understand, the uncertainty principle states that there is a fundamental natural limit to how accurately we can measure velocity and momentum at the same time. It's not a limit on equipment but just a natural phenomenon. However, isn't this just an observational limit? There is a definite velocity and momentum, we just don't know it. As in, we can only know so much about the universe, but the universe still has definite characteristics. Considering this, how do a wide range of quantum mechanical phenomena work? For example, quantum tunneling: it's based on the fact that the position of the object is indefinite. But the position is definite, we just don't know it definitely. Or the famous double-slit experiment? The creation of more light spots due to the uncertainty of the photons' positions? What I am basically asking is: why is a limit on the observer affecting the phenomenon he is observing? Isn't that equivalent to saying that because we haven't seen Star X, it doesn't exist? It's limiting the definition of the universe to the limits of our observation! Answer: Manishearth's answer is correct, and this is just a minor extension of it. Manishearth correctly points out that the problem is your statement: There is a definite velocity and momentum, we just don't know it. Your statement is the hidden variables idea, and courtesy of Bell's theorem we currently believe that hidden variables are impossible. Take the example of a hydrogen atom, and ask what the position of the electron is. The problem is that properties like position are properties of particles. It doesn't make sense to ask what the position is unless there is a particle at that position. But the electron is not a particle. The question of what an electron really is may entertain philosophers, but for our purposes it's an excitation in a quantum field and as such doesn't have a position. If you interact with the electron, e.g.
by firing another particle at it, you will find that the interaction between the particle and electron happens at a well-defined position. We tend to think of that as the position of the electron, but really it isn't: it's the position of the interaction. The uncertainty principle applies because it's not possible for an interaction, like our example of a colliding particle, to simultaneously measure the position and momentum exactly. So you're sort of correct when you say it's an observational limit, but it's a fundamental one.
{ "domain": "physics.stackexchange", "id": 57703, "tags": "quantum-mechanics, measurement-problem, heisenberg-uncertainty-principle, observers" }
When does $S\cdot v=\mathrm{const}$ hold?
Question: In fluid dynamics, what are the limitations on the use of the volume flow rate conservation law? $$ S\cdot v=\mathrm{const}$$ Does it still hold when there are external forces acting on the system, e.g. gravity, or when there is a source adding fluid to the system? I'll make an example where I do get confused. Consider the motion of the fluid in the picture. It comes out of a tube of constant section $S$. Now the continuity equation says $S v = \mathrm{const}$, but the velocity here cannot be constant in the tube, from the Bernoulli equation, since the fluid changes in altitude. Furthermore it would not make sense to have constant velocity, because that would mean that the fluid, when it is released from the bucket, "already knows" how high it is. So this makes me think that the continuity equation is not valid in this case, but what is the reason for this? Is it because of gravity? In which other situations is the continuity equation not valid? Answer: The situation where the equation holds is when the fluid is incompressible. In your example where you have a fluid flowing in a tube under gravity, you can imagine two situations. One is where there is no dissipative interaction between the fluid and the tube, and one where there is. In the situation with no dissipation, the fluid should accelerate as it flows down the tube. This seems paradoxical because $v$ (the speed of the fluid) is not a constant while $S$ (the cross-sectional area of the pipe) is, so $Sv$ is not constant. The resolution here is that the fluid is not really acting incompressibly. To see what is really happening, imagine water flowing down a rain gutter downspout. The bottom of one is shown here. At the top, the downspout is completely filled with water, but at the bottom, only a fraction of the cross-sectional area is occupied by water. It is the product of the velocity times the cross-sectional area actually occupied by water that remains constant here.
Now you can imagine another case, where you pour syrup down this downspout. What might happen in this case is that the syrup is so viscous that it moves very slowly and always occupies the entire cross-section. Here we have that the velocity of the syrup is constant over the length of the downspout, so there is no contradiction. The velocity is constant because the force of gravity is balanced by the drag forces from the walls of the downspout.
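The free-stream case can be put in numbers with a short sketch (a back-of-envelope calculation, assuming an ideal, coherent, incompressible jet with no drag, so the numbers are illustrative): Bernoulli gives the speed after a height drop $h$ below the outlet, and continuity applied to the wetted cross-section tells you how much of the tube the water actually occupies.

```python
import math

def stream_area(A0, v0, h, g=9.81):
    """Cross-section occupied by a free-falling stream a height drop h below
    the outlet. Bernoulli gives v^2 = v0^2 + 2*g*h, and continuity applied
    to the wetted area gives A*v = A0*v0 (ideal jet, no drag)."""
    v = math.sqrt(v0**2 + 2 * g * h)
    return A0 * v0 / v, v

# Water leaving a 10 cm^2 outlet at 1 m/s, evaluated 0.5 m further down:
A, v = stream_area(10e-4, 1.0, 0.5)
```

Half a metre below the outlet the stream is already more than three times faster, so it only fills about a third of the cross-section; the product $A v$, taken over the wetted area, stays exactly constant.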
{ "domain": "physics.stackexchange", "id": 30980, "tags": "fluid-dynamics, conservation-laws, flow" }
C++20 std::array implementation
Question: I'd like to have a review on my C-style array wrapper. I based this on std::array implementation. I hope you can leave some feedback! array.ixx module; #include <cstddef> #include <stdexcept> #include <utility> #include <type_traits> #include <compare> export module array; export namespace stl { template<class T, std::size_t N> struct array { using value_type = T; using size_type = std::size_t; using reference = value_type&; using const_reference = const value_type&; using pointer = value_type*; using const_pointer = const value_type*; using iterator = value_type*; using const_iterator = const value_type*; T _items[N ? N : 1]; constexpr reference at(size_type pos); constexpr const_reference at(size_type pos) const; constexpr reference operator[](size_type pos); constexpr const_reference operator[](size_type pos) const; constexpr reference front(); constexpr const_reference front() const; constexpr reference back(); constexpr const_reference back() const; constexpr pointer data() noexcept; constexpr const_pointer data() const noexcept; constexpr iterator begin() noexcept; constexpr iterator end() noexcept; constexpr const_iterator begin() const noexcept; constexpr const_iterator end() const noexcept; [[nodiscard]] constexpr bool empty() const noexcept; constexpr size_type size() const noexcept; constexpr size_type max_size() const noexcept; constexpr void fill(value_type value); constexpr void swap(array& other) noexcept(std::is_nothrow_swappable_v<T>); }; template<std::size_t I, class T, std::size_t N> constexpr T& get(array<T, N>& a) noexcept; template<std::size_t I, class T, std::size_t N> constexpr const T& get(const array<T, N>& a) noexcept; template<std::size_t I, class T, std::size_t N> constexpr T&& get(array<T, N>&& a) noexcept; template<std::size_t I, class T, std::size_t N> constexpr const T&& get(const array<T, N>&& a) noexcept; template<class T, std::size_t N> constexpr bool operator==(const array<T, N>& lhs, const array<T, N>& rhs); 
template<class T, std::size_t N> constexpr auto operator<=>(const array<T, N>& lhs, const array<T, N>& rhs); } template<class T, std::size_t N> constexpr T& stl::array<T, N>::at(size_type pos) { /** * @brief: Returns a reference to the element at specified location pos, with bounds checking. * If pos is not within the range of the container, an exception of type std::out_of_range is thrown. * * @param: pos - position of the element to return. * * @return: Reference to the requested element. * * @excep: std::out_of_range if !(pos < size()). * * @complex: O(1). */ return !(pos < N) ? throw std::out_of_range("Out of range") : _items[pos]; } template<class T, std::size_t N> constexpr const T& stl::array<T, N>::at(size_type pos) const { /** * @brief: Returns a reference to the element at specified location pos, with bounds checking. * If pos is not within the range of the container, an exception of type std::out_of_range is thrown. * * @param: pos - position of the element to return. * * @return: Reference to the requested element. * * @excep: std::out_of_range if !(pos < size()). * * @complex: O(1). */ return !(pos < size()) ? throw std::out_of_range("Out of range") : _items[pos]; } template<class T, std::size_t N> constexpr T& stl::array<T, N>::operator[](size_type pos) { /** * @brief: Returns a reference to the element at specified location pos. No bounds checking is performed. * * @param: pos - position of the element to return. * * @return: Reference to the requested element. * * @excep: None; * * @complex: O(1). */ return _items[pos]; } template<class T, std::size_t N> constexpr const T& stl::array<T, N>::operator[](size_type pos) const { /** * @brief: Returns a reference to the element at specified location pos. No bounds checking is performed. * * @param: pos - position of the element to return. * * @return: Reference to the requested element. * * @excep: None; * * @complex: O(1). 
*/ return _items[pos]; } template<class T, std::size_t N> constexpr T& stl::array<T, N>::front() { /** * @brief: Returns a reference to the first element in the container. * Calling front on an empty container is undefined. * * @param: None. * * @return: Reference to the first element. * * @excep: None; * * @complex: O(1). */ return *_items; } template<class T, std::size_t N> constexpr const T& stl::array<T, N>::front() const { /** * @brief: Returns a reference to the first element in the container. * Calling front on an empty container is undefined. * * @param: None. * * @return: Reference to the first element. * * @excep: None; * * @complex: O(1). */ return *_items; } template<class T, std::size_t N> constexpr T& stl::array<T, N>::back() { /** * @brief: Returns a reference to the last element in the container. * Calling back on an empty container causes undefined behavior. * * @param: None. * * @return: Reference to the last element. * * @excep: None; * * @complex: O(1). */ return *(_items + N); } template<class T, std::size_t N> constexpr const T& stl::array<T, N>::back() const { /** * @brief: Returns a reference to the last element in the container. * Calling back on an empty container causes undefined behavior. * * @param: None. * * @return: Reference to the last element. * * @excep: None; * * @complex: O(1). */ return *(_items + N); } template<class T, std::size_t N> constexpr T* stl::array<T, N>::data() noexcept { /** * @brief: Returns pointer to the underlying array serving as element storage. * * @param: None. * * @return: Pointer to the underlying element storage. For non-empty containers, * the returned pointer compares equal to the address of the first element. * * @excep: None; * * @complex: O(1). */ return _items; } template<class T, std::size_t N> constexpr const T* stl::array<T, N>::data() const noexcept { /** * @brief: Returns pointer to the underlying array serving as element storage. * * @param: None. 
* * @return: Pointer to the underlying element storage. For non-empty containers, * the returned pointer compares equal to the address of the first element. * * @excep: None; * * @complex: O(1). */ return _items; } template<class T, std::size_t N> constexpr T* stl::array<T, N>::begin() noexcept { /** * @brief: Returns an iterator to the first element of the array. * If the array is empty, the returned iterator will be equal to end(). * * @param: None. * * @return: Iterator to the first element. * * @excep: None; * * @complex: O(1). */ return _items; } template<class T, std::size_t N> constexpr T* stl::array<T, N>::end() noexcept { /** * @brief: Returns an iterator to the element following the last element of the array. * This element acts as a placeholder; attempting to access it results in undefined behavior. * * @param: None. * * @return: Iterator to the element following the last element. * * @excep: None; * * @complex: O(1). */ return _items + N; } template<class T, std::size_t N> constexpr const T* stl::array<T, N>::begin() const noexcept { /** * @brief: Returns an iterator to the first element of the array. * If the array is empty, the returned iterator will be equal to end(). * * @param: None. * * @return: Iterator to the first element. * * @excep: None; * * @complex: O(1). */ return _items; } template<class T, std::size_t N> constexpr const T* stl::array<T, N>::end() const noexcept { /** * @brief: Returns an iterator to the element following the last element of the array. * This element acts as a placeholder; attempting to access it results in undefined behavior. * * @param: None. * * @return: Iterator to the element following the last element. * * @excep: None; * * @complex: O(1). */ return _items + N; } template<class T, std::size_t N> constexpr bool stl::array<T, N>::empty() const noexcept { /** * @brief: Checks if the container has no elements, i.e. whether begin() == end(). * * @param: None. * * @return: true if the container is empty, false otherwise. 
* * @excep: None; * * @complex: O(1). */ return begin() == end(); } template<class T, std::size_t N> constexpr std::size_t stl::array<T, N>::size() const noexcept { /** * @brief: Returns the number of elements in the container. * * @param: None. * * @return: The number of elements in the container. * * @excep: None; * * @complex: O(1). */ return N; } template<class T, std::size_t N> constexpr std::size_t stl::array<T, N>::max_size() const noexcept { /** * @brief: Returns the maximum number of elements the container is able to * hold due to system or library implementation limitations. * * @param: None. * * @return: Maximum number of elements. * * @excep: None; * * @complex: O(1). */ return N; } template<class T, std::size_t N> constexpr void stl::array<T, N>::fill(value_type value) { /** * @brief: Assigns the given value value to all elements in the container. * * @param: value - the value to assign to the elements * * @return: None. * * @excep: None; * * @complex: O(n). */ for (auto& i : _items) { i = value; } } template<class T, std::size_t N> constexpr void stl::array<T, N>::swap(array& other) noexcept(std::is_nothrow_swappable_v<T>) { /** * @brief: Exchanges the contents of the container with those of other. * * @param: other - container to exchange the contents with * * @return: None. * * @excep: None; * * @complex: O(n). */ for (std::size_t i = 0; i < size(); i++) { std::swap(_items[i], other[i]); } } template<std::size_t I, class T, std::size_t N> constexpr T& stl::get(array<T, N>& a) noexcept { /** * @brief: Extracts the Ith element element from the array. * I must be an integer value in range [0, N). * * @param: array - whose contents to extract * * @return: A reference to the Ith element of a. * * @excep: None; * * @complex: O(1). */ static_assert(I < a.size()); return a[I]; } template<std::size_t I, class T, std::size_t N> constexpr T&& stl::get(array<T, N>&& a) noexcept { /** * @brief: Extracts the Ith element element from the array. 
* I must be an integer value in range [0, N). * * @param: array - whose contents to extract * * @return: A reference to the Ith element of a. * * @excep: None; * * @complex: O(1). */ static_assert(I < a.size()); return a[I]; } template<std::size_t I, class T, std::size_t N> constexpr const T& stl::get(const array<T, N>& a) noexcept { /** * @brief: Extracts the Ith element element from the array. * I must be an integer value in range [0, N). * * @param: array - whose contents to extract * * @return: A reference to the Ith element of a. * * @excep: None; * * @complex: O(1). */ static_assert(I < a.size()); return a[I]; } template<std::size_t I, class T, std::size_t N> constexpr const T&& stl::get(const array<T, N>&& a) noexcept { /** * @brief: Extracts the Ith element element from the array. * I must be an integer value in range [0, N). * * @param: array - whose contents to extract * * @return: A reference to the Ith element of a. * * @excep: None; * * @complex: O(1). */ static_assert(I < a.size()); return a[I]; } template<class T, std::size_t N> constexpr bool stl::operator==(const array<T, N>& lhs, const array<T, N>& rhs) { /** * @brief: Checks if the contents of lhs and rhs are equal, that is, they have the same number of elements * and each element in lhs compares equal with the element in rhs at the same position. * * @param: lhs, rhs - arrays whose contents to compare. * * @return: true if the contents of the arrays are equal, false otherwise. * * @excep: None; * * @complex: O(n). */ std::equal(lhs.begin(), lhs.end(), rhs.begin()); } template<class T, std::size_t N> constexpr auto stl::operator<=>(const array<T, N>& lhs, const array<T, N>& rhs) { /** * @brief: The comparison is performed as if by calling std::lexicographical_compare_three_way on two arrays * with a function object performing synthesized three-way comparison * * @param: lhs, rhs - arrays whose contents to compare. * * @return: lhs.size() <=> rhs.size(). * * @excep: None; * * @complex: O(1). 
*/ return lhs.size() <=> rhs.size(); } Answer: template<class T, std::size_t N> constexpr const T& stl::array<T, N>::back() const { return *(_items + N); } bug: Shouldn't this be accessing element (N - 1)? (Same issue with the non-const version). Otherwise everything looks pretty good. It's just nitpicking below: T _items[N ? N : 1]; I think this works (allows zero-sized array with no compiler error, still ensures that begin() == end() because we use N to calculate them). But some comments to explain it would be nice. template<class T, std::size_t N> constexpr T& stl::array<T, N>::at(size_type pos) { return !(pos < N) ? throw std::out_of_range("Out of range") : _items[pos]; } Just style, but I think it's a bit clearer to put the throw outside of the return statement (we don't return anything if we throw). template<class T, std::size_t N> constexpr T& stl::array<T, N>::at(size_type pos) { if (!(pos < N)) throw std::out_of_range("Out of range"); return _items[pos]; } template<class T, std::size_t N> constexpr void stl::array<T, N>::swap(array& other) noexcept(std::is_nothrow_swappable_v<T>) { for (std::size_t i = 0; i < size(); i++) { std::swap(_items[i], other[i]); } } Could use the size_type typedef for the loop index. There is an array version of std::swap which calls std::swap_ranges internally, so I think we can just do: std::swap(_items, other._items). Don't forget to implement reverse iterators! If you want to go for completeness, there's also make_array, to_array, tuple_size, tuple_element and the deduction guide. See cppreference.
{ "domain": "codereview.stackexchange", "id": 42284, "tags": "c++, array, c++20" }
If an Asteroid was to strike the Earth, would it affect the Earth's rotation?
Question: If an asteroid were to strike the Earth, would it noticeably affect the Earth's rotation, and if so, how large would this asteroid have to be? Answer: To have a noticeable effect the impactor needs to be BIG. Most questions about "what would happen if ... hits" can be answered by the "Earth impact effects program" (Impact: Earth!) Here are calculations for a 100km stony asteroid... A brute like this would have a good chance of wiping out most complex life on the planet. There has been nothing like this in the last 4 billion years (or so)... It could cause the length of the day to change by "up to 2.42 seconds" As gerrit said, it would be the last of our worries.
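A rough sanity check of that "up to 2.42 seconds" figure: compare the angular momentum a 100 km impactor can deliver with Earth's spin angular momentum. The impact speed and density below are assumed typical values, not taken from the calculator, so treat this as an order-of-magnitude sketch.

```python
import math

# Earth's spin angular momentum
M_e, R_e = 5.97e24, 6.371e6          # mass (kg) and radius (m)
I_e = 0.33 * M_e * R_e**2            # moment of inertia; measured factor ~0.33
omega = 2 * math.pi / 86164          # spin rate from the sidereal day (rad/s)
L_earth = I_e * omega

# 100 km stony impactor, assumed values for density and impact speed
rho, d, v = 3000.0, 100e3, 17e3      # kg/m^3, m, m/s
m = rho * (math.pi / 6) * d**3       # mass of a sphere of diameter d

# Best case for changing the spin: tangential hit right at the equator
L_hit = m * v * R_e

frac = L_hit / L_earth               # upper bound on fractional spin change
delta_day = frac * 86400             # corresponding change in day length (s)
```

With these assumptions the fractional change comes out around $3\times10^{-5}$, i.e. a couple of seconds of day length, which is consistent in order of magnitude with the calculator's 2.42 s figure.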
{ "domain": "astronomy.stackexchange", "id": 6665, "tags": "earth, asteroids, impact" }
Why do we have profiles with fraction millimeter dimensions?
Question: We have standard profiles such as this type of tube with an outer diameter of 60.3mm, used for example in railing. Why is it 60.3mm rather than exactly 60mm? In what application does the extra 0.3mm make a significant difference? I've also seen profiles ending in .7mm. These sound inconvenient to produce compared to full millimeters. Answer: Circular hollow sections were standardised a long time ago, and in inches, for historical reasons. This should be obvious if you look at a complete list and convert the units. The question is better asked the other way around: why should we change the standard diameter from 60.3mm to 60.0mm, which would mean an additional cost for the manufacturers, when the 0.3mm difference doesn't make a significant difference anyway?
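The conversion makes the inch origin visible at a glance. The sizes below are a handful of common circular-hollow-section outer diameters picked as illustrations; verify them against whatever standard you actually use.

```python
# Common CHS outer diameters (mm) and the nominal imperial sizes they
# come from (illustrative values; check your local standard's tables).
sizes_mm_to_in = {
    48.3: 1.9,        # 1.9 in
    60.3: 2 + 3 / 8,  # 2 3/8 in  <- the railing tube in the question
    88.9: 3.5,        # 3 1/2 in
    114.3: 4.5,       # 4 1/2 in
}

for mm, inches in sizes_mm_to_in.items():
    # Each metric size is just the inch size times 25.4, rounded to 0.1 mm:
    assert abs(inches * 25.4 - mm) < 0.05, (mm, inches)
```

So 60.3 mm is simply 2 3/8 inches (60.325 mm) rounded to the nearest tenth of a millimetre.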
{ "domain": "engineering.stackexchange", "id": 4107, "tags": "civil-engineering, steel" }
Why is QED renormalizable?
Question: My understanding of renormalizability is that a theory is renormalizable if the divergences in its amplitudes can be cancelled out by finitely many terms. I see that by adding the counterterm (in the MS-bar scheme) $$L_{ct}=-\frac{g^2}{12\pi^2}\left(\frac{2}{\epsilon}-\gamma+\ln4\pi\right),$$ the one-loop divergence of QED can be made finite. However, I do not see how this makes QED renormalizable. Surely as we work with diagrams with more loops, we will get more counterterms - given that we can have diagrams with arbitrarily many loops, do we not need an infinite number of counterterms to cancel these out? Answer: QED has only a finite number of irreducible divergent diagrams. The main notion of divergence of a diagram is power-counting: The term every diagram represents has the form of a fraction like $$ \frac{\int\mathrm{d}^n p_1\dots\int\mathrm{d}^n p_m}{p_1^{i_1}\dots p_k^{i_k}}$$ and you can compute the difference between the momentum power in the numerator and denominator and call it $D$. Heuristically the diagram diverges like $\Lambda^D$ in a momentum scale $\Lambda$ if $D > 0$, like $\ln(\Lambda)$ if $D=0$, and is finite if $D < 0$. This can fail - the diagram can be divergent for $D < 0$ - if it contains a smaller divergent subdiagram. If you work out the general structure of $D$ for the diagrams of QED, you should be able to convince yourself that QED has only a finite number of divergent one-particle irreducible diagrams. That cancelling the irreducible diagrams is enough to cancel iteratively the divergences in all higher-order diagrams containing them in arbitrary combinations to all orders is a non-trivial statement sometimes called the BPHZ theorem, whose technical meaning - though not by this name - is explained by the Scholarpedia article on BPHZ renormalization.
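Working this power counting out explicitly for QED (a standard result in $d=4$, quoted here for reference rather than derived) gives a $D$ that depends only on the external lines of the diagram, not on its number of loops:

```latex
% Superficial degree of divergence of a QED diagram in 4 dimensions,
% with E_gamma external photon lines and E_psi external fermion lines:
D \;=\; 4 \;-\; E_\gamma \;-\; \frac{3}{2}\, E_\psi
```

Since $D \ge 0$ holds only for a handful of small $(E_\gamma, E_\psi)$ combinations (essentially the photon self-energy, the electron self-energy, and the vertex), adding loops never enlarges the list of superficially divergent amplitude types, which is why finitely many counterterm structures suffice.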
{ "domain": "physics.stackexchange", "id": 70664, "tags": "quantum-field-theory, quantum-electrodynamics, renormalization, feynman-diagrams, dimensional-analysis" }
Callback rate is faster than the received message
Question: Hello people. I have a node which subscribes to two topics with different frequencies and using two different callback functions I publish data to two different topics. My code is structured like this: rospy.init_node('face_finder', anonymous=True) marker_publisher = rospy.Publisher('box_visual', Marker, queue_size=100) rospy.sleep(1) path_pub = rospy.Publisher('/path', Path, queue_size=100) rospy.sleep(1) def get_odom(msg): ... path_pub.publish(path) def callback(data): ... marker_publisher.publish(faces) marker_publisher.publish(faces_caption) marker_publisher.publish(est_pose) def face_finder(): rospy.Subscriber("/odom", Odometry, get_odom, queue_size = 1) rospy.Subscriber("/line_segments", LineSegmentList, callback, queue_size = 1) rospy.spin() if __name__ == '__main__': face_finder() I receive data from /odom topic with 20 Hz, the get_odom is executed and I publish to /path with 20 Hz (that makes sense). I receive data from /line_segments with 40 Hz, callback is executed, but then I publish to /box_visual with an unstable frequency (120 - 160 Hz). I'm not using any rospy.Rate() command to set a fixed rate, so I expect that callback function will be called every time I get a new message from /line_segments, right? That means a publishing rate around 40 Hz. Why don't I see that here, and what am I possibly missing? The two callbacks use data from the same global variables, but they don't change the content of the same global variables (I don't know if this somehow affects my problem) I use Ubuntu 16.04 LTS on dual boot, ROS kinetic and python. Thank you in advance! Originally posted by Spyros on ROS Answers with karma: 51 on 2020-12-04 Post score: 0 Answer: You have (at least) three publish calls in the callback callback: marker_publisher.publish(faces) marker_publisher.publish(faces_caption) marker_publisher.publish(est_pose) This means, for every message you get, you publish (at least) three.
This would lead to "bursts" of three messages on the box_visual topic. Then silence for roughly 1/40 s, and another three messages. As the frequency obtained by rostopic hz (which I assume you got the numbers from) is more an "average" frequency, i.e. number_of_messages/time, this could, depending on some timing issues, lead to the "unstable" frequency of around 120 Hz. To explicitly state this: Your callback is firing at the expected 40 Hz and not running faster than the received messages; you publish multiple times to the same topic from within the callback. If you want the outgoing topic to have approximately the same frequency, you should publish just one message. If you need to publish multiple markers, consider using a MarkerArray. Originally posted by mgruhler with karma: 12390 on 2020-12-04 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Spyros on 2020-12-04: I didn't think that that would be the problem. I'm trying to switch now to MarkerArray to see if the rate is fixed, but rviz crashes after a while. I think I need to empty/reset the array after each callback execution. How can I do that? (maybe that's a subject for a different question). With using just markers I could just send a DELETE marker with the same nm and id. Comment by mgruhler on 2020-12-07: As a MarkerArray is just that, an Array of Markers, you can do it exactly this way. Or you could use DELETEALL... I think this should do the trick as well...
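Following the accepted suggestion and the follow-up comments, a sketch of the MarkerArray approach might look like this. The make_* helpers are hypothetical stand-ins for however the original node builds its faces, caption, and pose markers; they are not from the question.

```python
import rospy
from visualization_msgs.msg import Marker, MarkerArray

marker_pub = rospy.Publisher('box_visual', MarkerArray, queue_size=100)

def callback(data):
    # Build a fresh array every call so nothing accumulates between callbacks.
    arr = MarkerArray()
    for i, marker in enumerate([make_faces(data),       # hypothetical helpers
                                make_caption(data),
                                make_est_pose(data)]):
        marker.id = i            # stable ids: RViz replaces instead of piling up
        arr.markers.append(marker)
    marker_pub.publish(arr)      # exactly one outgoing message per incoming one
```

With a single publish per callback, rostopic hz on box_visual should track the 40 Hz input rate, and reusing the same ids each cycle avoids the unbounded marker growth that can make RViz struggle.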
{ "domain": "robotics.stackexchange", "id": 35832, "tags": "ros, python2.7, ros-kinetic, callbacks" }
Why do blood vessels in the eye not obstruct vision?
Question: As light enters the eye, it reaches the photoreceptors at the "base" of the retina, which then pass that signal to the bipolar and ganglionic neurons -- the latter of which send the signal outside of the eye via their axons (collectively forming the optic nerve). The exit point of the optic nerve is sometimes referred to as the "blind spot" because there are no photoreceptors present there and therefore no sensory information is gathered. Now, I know photoreceptors exist everywhere else along the retina, so it's not surprising that we perceive vision from the otherwise broadly distributed photoreceptors. However, my question: why do the blood vessels associated with the superficial vascular plexus (which exist between incoming light and the rest of the retina) not obstruct our vision? More broadly, I guess of interest is: why none of the vascular plexuses (or cell structures of the bipolar and ganglionic neurons for that matter) obstruct our vision despite existing between the photoreceptors and incoming light? Sources: LEFT: Figure 1 from Zhongjie et al (2020) ; RIGHT: Figure 5 from Selvam et al (2018) Fu, Z., Sun, Y., Cakir, B., Tomita, Y., Huang, S., Wang, Z., Liu, C.H., S Cho, S., Britton, W., S Kern, T. and Antonetti, D.A., 2020. Targeting neurovascular interaction in retinal disorders. International journal of molecular sciences, 21(4), p1503 Selvam, S., Kumar, T. and Fruttiger, M., 2018. Retinal vasculature development in health and disease. Progress in retinal and eye research, 63, pp.1-19. Answer: Avoid the fovea Figure 2 from the same paper shows the distribution relative to the fovea: As you can see, it's pretty much devoid of this superficial vasculature, so anything you are directly focusing on, say, text you read on a computer screen (or even a book!) is not impacted. 
Receptive fields might be bigger than you think Receptive field sizes for retinal ganglion cells in the primate retina are about 50-300 um, depending on eccentricity (distance from the fovea). Capillaries are going to be around the size of a red blood cell in diameter, so about 10 um; it seems like by the time you get to the far periphery, these vessels are mostly going to be quite small relative to receptive field size, and they are even a bit small in the vicinity of the fovea. Tissue isn't that opaque I'm mostly focusing on the RBCs themselves, because RBCs have a bit of pigment in them, but otherwise, tissue is overall quite transparent. If you've ever looked at an unstained tissue section less than 100 microns thick, you know that it doesn't look like much at all. If you've lost track of one in any volume of water, good luck finding it. Just as RGCs sitting on the "wrong side" of the inverted vertebrate eye isn't much of a problem, this thickness of tissue just doesn't seem to be that big of a problem either, and it doesn't seem that any affordances for this issue have evolved outside of the fovea in primates (whereas you can see in the figure above that there is a clear exclusion of these vessels from the fovea). We perceive with our brain, not our eyes The general idea of predictive coding models of the brain is that you have some generative model of the world that is constantly making predictions, and sensory organs merely provide evidence to update those models which is propagated as an error signal with respect to the original model; if everything is static and as predicted, nothing needs to be propagated in the brain to alter the perception. Much of what you think you are "seeing" at a given moment you aren't seeing at that moment at all, but merely "remembering" what you saw previously, and having not seen any evidence to the contrary, continue to "see" it there.
When a person looks at an object, they do not typically look at one spot, but quickly saccade around to scan different parts of it and form a complete model of the object. It will escape attention until it moves or changes in some way. These blood vessels are going to be quite static, and not provide much of a changing visual image, so there's nothing there for the brain to be interested in.
{ "domain": "biology.stackexchange", "id": 11890, "tags": "human-anatomy, vision, human-eye, electrophysiology, senses" }
Difference between rotational and translational KE for a point particle on disk spinning around a stationary axis
Question: My textbook states that a particle spinning around a stationary axis has rotational kinetic energy only. For a rolling object, KE consists of both translational and rotational kinetic energy, which means that they describe different quantities of motion. For a single particle in a disk spinning around a stationary axis, why doesn't the particle have an expression for translational kinetic energy? Answer: I think you (or the textbook author) are mixing up the ideas of a point particle and a rigid body. A point particle only has translational KE. Since the radius of a point particle is zero by definition, there is no way to tell (in Newtonian mechanics) if it is rotating or not. For a rigid body which is not a point, the total KE is the sum of the KEs of every point particle in the body. However, you can split the KE into two parts: (1) the KE of a point particle with the total mass of the body, located at the center of mass of the body, and (2) the translational KE of the particles moving relative to the center of mass. So the rotational KE of a rigid body is really part of the translational KE of all the point particles in the body. If the center of mass of the rigid body is not moving, the rotational KE of the body is the total translational KE of all the point particles, since part (1) defined above is zero.
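The split described in the answer is König's theorem; writing it out makes the bookkeeping explicit:

```latex
% Total KE = KE of the center of mass + KE of motion relative to it.
% u_i is the speed of particle i relative to the center of mass; for a
% rigid body rotating at angular speed omega about an axis through the
% CM, u_i = omega * r_i and the sum collapses to the familiar (1/2) I omega^2:
K \;=\; \frac{1}{2} M v_{\mathrm{cm}}^{2} \;+\; \sum_i \frac{1}{2} m_i u_i^{2}
  \;=\; \frac{1}{2} M v_{\mathrm{cm}}^{2} \;+\; \frac{1}{2} I_{\mathrm{cm}}\,\omega^{2}
```

For the spinning disk, $v_{\mathrm{cm}} = 0$, so only the second term survives: each particle in the disk still carries ordinary translational KE $\frac{1}{2} m_i u_i^2$, and "rotational KE" is just the name for their sum.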
{ "domain": "physics.stackexchange", "id": 79074, "tags": "newtonian-mechanics, energy, rotational-kinematics" }
HD TV Signal reception at night
Question: Why is on air HD TV signal reception better at night? During the daytime the reception of HD TV broadcasts is markedly lower than at night with the same antenna position. Does the sun have components in its radiation that cause this effect? Answer: Yes, the solar wind is part of the explanation. One of the ways radio waves propagate is by bouncing off of the ionosphere. (See for example this Wikipedia article on skywave.) The layer they bounce off of changes in height due to the Earth's changing orientation with respect to the solar wind. It is lower during the day and higher at night, which results in longer transmission of radio signals.
{ "domain": "physics.stackexchange", "id": 36835, "tags": "electromagnetic-radiation, radio, antennas" }
Project Euler #52 - Permuted multiples
Question: It can be seen that the number, 125874, and its double, 251748, contain exactly the same digits, but in a different order. Find the smallest positive integer, x, such that 2x, 3x, 4x, 5x, and 6x, contain the same digits. The following code works on SpiderMonkey 1.8.5, but didn't work on my nodejs. It takes about two seconds, and gives the correct answer. My two major concerns are my style and my choice of functions. Personally I think str_sorted is horrible: int -> str -> array -> str. // Like Python's range. function range(start, stop, step){ if (typeof(stop)==='undefined'){ stop = start; start = 0; } if (typeof(step)==='undefined') step = 1; if (stop === null){ while (true){ yield start; start += step; } }else{ for (number = start; number < stop; number += step){ yield number; } } } function str_sorted(num){ num = num.toString(); arr = Array(); for (index in num){ arr.push(num[index]); } return arr.sort().join(''); } function permuted(num){ tmp = str_sorted(num); return tmp == str_sorted(num * 2) && tmp == str_sorted(num * 3) && tmp == str_sorted(num * 4) && tmp == str_sorted(num * 5) && tmp == str_sorted(num * 6); } function find(iterable, fn){ for (num in iterable){ if (fn(num)){ return num; } } } print(find(range(1, null), permuted)); The way that I run this code is via the command line. $ js p52.js 142857 $ This runs the SpiderMonkey 1.8.5 interpreter. I got it from the Arch repository at some point. Answer: Overall, your code looks pretty decent. The indentation is spot-on, the variable names make sense, the code is clear and easy to read. But there are some big issues with your code. The first thing that popped in my eyes was this: if (typeof(stop)==='undefined'){ Please, don't use typeof to check if it is defined or not. You have a very interesting object called arguments. Try this instead: if (arguments.length < 2){ A few lines below, you have this: if (stop === null){ What if I pass undefined or false?
You need to account for those, since they are valid values. And yes, it is possible to pass undefined to a function, in one of these ways: range(1, undefined, 3); //simply pass undefined range.call({}, 1, undefined, 3); //mostly the same range.apply({}, [1,,3]); //it may happen! //... more similar variants ... You need to take care of those. This is one of a few examples. You have the following function: function str_sorted(num){ num = num.toString(); arr = Array(); for (index in num){ arr.push(num[index]); } return arr.sort().join(''); } Let's focus on the following bit: arr = Array(); There are 2 wrong things here: This is a local variable without var You are using the Array constructor as a function Either use [] or use new Array() The fix is simple: function str_sorted(num){ var arr = []; num = num.toString(); for (index in num){ arr.push(num[index]); } return arr.sort().join(''); } But you still have something there that's bothering me: num = num.toString(); for (index in num){ arr.push(num[index]); } Why are you converting it to a string and then iterating it? Is it to sort the digits in the string? If that's the case, here's the whole function: function str_sorted(num){ return num.toString().split('').sort().join(''); } Using string.split('') will split a string by each character, making an array of characters. Let's check your permuted() function. It also suffers from a tiny flaw: function permuted(num){ tmp = str_sorted(num); return tmp == str_sorted(num * 2) && tmp == str_sorted(num * 3) && tmp == str_sorted(num * 4) && tmp == str_sorted(num * 5) && tmp == str_sorted(num * 6); } I have no idea what you are trying to do, but here is the mistake: tmp = str_sorted(num); There, you forgot a var. Change it to this: var tmp = str_sorted(num); Let's take a look at the find function.
This one isn't too bad: function find(iterable, fn){ for (num in iterable){ if (fn(num)){ return num; } } } Just more of the same: for (num in iterable){ You forgot a var there. Change it to this: for (var num in iterable){ I'm not sure if this is part of the code to be reviewed or not. But let's analyze the last line: print(find(range(1, null), permuted)); Please, don't use print. It isn't even a standard! It's just supported in some consoles (like Chrome). Use console.log() instead. For now, I will leave the review as is. If I find any more issues, in the future, I will check on them.
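Putting the review's fixes together, here is a hedged sketch of the whole program in portable JavaScript (the legacy SpiderMonkey generator is replaced by a plain search loop, which is my own substitution; the camelCase names are mine):

```javascript
// Sorted-digits helper, using the split/sort/join one-liner from the review
function strSorted(num) {
    return num.toString().split('').sort().join('');
}

// True when 2x, 3x, ..., 6x are all digit permutations of x
function permuted(num) {
    var tmp = strSorted(num); // note the var
    for (var k = 2; k <= 6; k++) {
        if (tmp !== strSorted(num * k)) {
            return false;
        }
    }
    return true;
}

// A plain unbounded loop stands in for find(range(1, null), ...)
function findFirst(fn) {
    for (var n = 1; ; n++) {
        if (fn(n)) {
            return n;
        }
    }
}

console.log(findFirst(permuted)); // prints 142857
```

This should run quickly on Node.js as well, since almost every candidate already fails the 2x comparison.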
{ "domain": "codereview.stackexchange", "id": 14658, "tags": "javascript, beginner, programming-challenge" }
Metapopulation structure - book recommendations
Question: What book would you recommend for studying the dynamics of metapopulations, the structure of metapopulations, and evolution in structured metapopulations? I am not looking for an introduction but for a book that offers good and extensive mathematical formulations. Answer: Probably the best source to start would be Ilkka Hanski's work; you can find a full list here: http://www.helsinki.fi/science/metapop/People/IlkkaPub2.htm. The seminal work would be "Ecology, Genetics and Evolution of Metapopulations". It gives a strong mathematical treatment.
{ "domain": "biology.stackexchange", "id": 2162, "tags": "evolution, ecology, population-dynamics, book-recommendation, population-genetics" }
Why did Kolmogorov publish Karatsuba's algorithm?
Question: Karatsuba's algorithm for fast multiplication was first published in A. Karatsuba and Yu. Ofman (1962), "Multiplication of Many-Digital Numbers by Automatic Computers", Proceedings of the USSR Academy of Sciences 145: 293–294. According to Karatsuba (1995, "The complexity of computations", Proc. Steklov Institute of Mathematics 211: 169–183), this paper was actually written by Kolmogorov (and possibly Ofman) without Karatsuba's knowledge. By modern standards this seems a strange and grave breach of ethics. Why would Kolmogorov have done this? What did he gain? Answer: This paper, in Russian, Gricenko, S. A., Karatsuba, E. A., Korolyov, M. A., Rezvyakova, I. S., Tolev, D. I., & Changa, M. E. (2012). Scientific contributions of A. A. Karatsuba / Научные достижения Анатолия Алексеевича Карацубы. Современные проблемы математики, 16(0), 7-30. states the following (items 1—3). Karatsuba presented his algorithm at a seminar led by Kolmogorov. Kolmogorov prepared an article that had two results of his students, Karatsuba and Ofman. One of the results was Karatsuba's algorithm, the other was an unrelated result of Ofman. The article clearly attributed the results. It stated that the multiplication algorithm is due to Karatsuba and the other result is due to Ofman. We can only guess why Kolmogorov did that. I am afraid that the only person who could answer the question why Kolmogorov published the paper without Karatsuba's permission or knowledge was Kolmogorov himself. Perhaps he thought that it was a good way to publish the results of his students. Note that the article correctly attributed all results. The article of Karatsuba and Ofman was published in the Proceedings of the USSR Academy of Sciences; it is my understanding that it had to be submitted/presented by a member of the USSR Academy of Sciences. Here is the relevant quote from the paper of Gricenko et al (in Russian): Этот результат был доложен Анатолием Карацубой на семинаре А. Н. 
Колмогорова в МГУ в 1960 г., после чего семинар был Колмогоровым закрыт. Первая статья с описанием этого метода [2] была подготовлена самим Колмогоровым. Там он представил два разных и не связанных друг с другом результата двух своих учеников, и хотя в статье Колмогоров четко отметил, что одна теорема (не связанная с быстрым умножением) принадлежит Ю. Офману, а другая теорема (с первым в истории быстрым умножением) принадлежит А. Карацубе, эта публикация под именами двух авторов надолго сбила с толку читателей, которые полагали, что оба автора внесли вклад в создание быстрого умножения, и даже называли этот метод двумя именами. English translation: This result was presented by Anatoly Karatsuba in A. N. Kolmogorov's seminar at the Moscow State University in 1960, after which the seminar was closed by Kolmogorov. The first article with the description of this algorithm [2] was prepared by Kolmogorov himself. In it he presented two different results from his two students that were unrelated to each other, and although Kolmogorov clearly noted in the article that one theorem (unrelated to fast multiplication) belonged to Y. Ofman, and the other theorem (with the first fast multiplication algorithm in history) belonged to A. Karatsuba, this publication under the names of the two authors for a long time confused readers, who supposed that both authors had a stake in the invention of fast multiplication, and even referred to the algorithm using both names.
{ "domain": "cstheory.stackexchange", "id": 3487, "tags": "ho.history-overview" }
Weights on a lever
Question: I had a very simple question that I just wanted to make sure of. I remember from my A Level physics that the further something is away from the fulcrum, the larger the torque (the weight multiplied by the distance from the fulcrum). However I don't remember ever covering anything that had both forces on the same side of the fulcrum, and in opposite directions. To give some more context, I've been asked to design some way to add weight to a hydraulic cylinder, so that the hydraulic pumps can be properly tested before being used. Here is a crudely drawn picture of my idea, far from the engineering drawing that my tutor would want me to do. Am I right in thinking that, not including the weight of the bar itself, if the cylinder pushes up halfway between the weight and the pivot, it would see twice the force? So a 200 kg weight on the end would test the pump as though it had 400 kg on it? Are there any corrections I would have to do based on the angle to get the actual vertical force? I'm sure this question is a lot simpler than the usual things posted here but I'd appreciate the help. Answer: Am I right in thinking that, not including the weight of the bar itself, if the cylinder pushes up halfway between the weight and the pivot, it would see twice the force? So a 200 kg weight on the end would test the pump as though it had 400 kg on it? Based on your diagram, this is all correct. Are there any corrections I would have to do based on the angle to get the actual vertical force? Assuming the bar is close to horizontal, then no. If the angle of the bar were not negligible, there would be a component of the weight of the end mass that would end up being supported axially through the bar and (presumably) through the pivot point and whatever that may be attached to.
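The torque balance is easy to sanity-check numerically. Here is a small sketch (the numbers mirror the question; the variable names are mine):

```python
# Static torque balance about the pivot: F_cyl * (d/2) = m * g * d
g = 9.81          # m/s^2
mass = 200.0      # kg, hung at distance d from the pivot
d = 1.0           # m; only the ratio of the lever arms matters

weight_torque = mass * g * d                   # N*m from the hanging mass
cylinder_arm = d / 2                           # cylinder acts halfway along the bar
cylinder_force = weight_torque / cylinder_arm  # N required for balance

print(cylinder_force / g)  # equivalent mass the pump "sees": 400.0 kg
```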
{ "domain": "physics.stackexchange", "id": 40360, "tags": "newtonian-mechanics, forces" }
How to understand Hawking's interpretation of the quantization of the field?
Question: In Hawking's paper "Particle Creation by Black Holes" he says the following: The operator $\phi$ can be expressed as $$\phi=\sum_i f_i a_i+\bar{f}_ia_i^\dagger.$$ The solutions $\{f_i\}$ of the wave equation $f_{i;ab}g^{ab}=0$ can be chosen so that on past null infinity $\mathscr{I}^-$ they form a complete family satisfying the orthonormality condition (1.2) where the surface $S$ is $\mathscr{I}^-$ and so that they contain only positive frequencies with respect to the canonical affine parameter on $\mathscr{I}^-$. The operators $a_i$ and $a_i^\dagger$ have the natural interpretation as the annihilation and creation operators for ingoing particles i.e. for particles at past null infinity $\mathscr{I}^-$. Now I'm quite probably missing something extremely basic here. But why can the coefficients of the modes that are positive frequency with respect to the canonical affine parameter on $\mathscr{I}^-$ be interpreted as creation and annihilation operators of particles on $\mathscr{I}^-$? I do know that the basic point of QFT is indeed: (1) pick a set of modes which are complete in the KG inner product and positive/negative frequency with respect to some timelike Killing vector field and (2) expand the field in these modes; upon quantization, the coefficients become creation and annihilation operators in a Fock space giving a "particle" interpretation. But here we still have a few issues: The modes are not positive frequency with respect to a timelike Killing field, but rather with respect to a parameter which is actually a null coordinate. In that case, how does one justify that the coefficients become creation and annihilation operators upon quantizing? Still, I can't see why we can interpret the resulting creation and annihilation operators as creating and annihilating particles on $\mathscr{I}^-$. Why on $\mathscr{I}^-$? How do we justify this? Answer: All that is a bit rough, and more modern views on this subject exist (see e.g.
our book https://www.springer.com/it/book/9783319643427 for a recent book on these ideas; there is a free version of the book in the archives). However, I guess you are considering a spacetime containing a black hole. Close to past null infinity, spacetime is assumed to be similar to Minkowski spacetime (the black hole forms later). It is possible to fix the null parameter on that null surface such that it coincides with (extends) the standard Killing time in the considered spacetime far from the event horizon. In this picture, that is nothing but Minkowski time. Actually there is a conformal transformation involved in the procedure that is singular at null infinity, and everything works out when dealing with massless particles (see the quoted book). So, you may assume you are dealing with the standard quantization procedure in Minkowski spacetime as long as you stay close to past null infinity. Actually these modes cannot be complete, and information must be added concerning the past event horizon for instance. This route leads in particular to the so-called Unruh state (see the quoted book).
{ "domain": "physics.stackexchange", "id": 50862, "tags": "general-relativity, black-holes, antimatter, hawking-radiation, qft-in-curved-spacetime" }
Why does the time/space tradeoff exist
Question: I'd like to know why, when choosing how to optimize an algorithm, there almost always (always?) exists a time/space tradeoff. Definition: https://simple.wikipedia.org/wiki/Space-time_tradeoff. My favorite example is a Karnaugh map for simplifying boolean logic. It abstracts boolean logic away with just clever geometry. Is there some sort of law from some mathematical theory to explain why this exists? Intuitively it seems to make sense, but the math would be nice. Is this some property of information/computation? Is it guaranteed for any computation that can be performed? What about computations theoretically possible with infinite time: is a reciprocal guaranteed to exist that instead uses infinite space (or a middle ground that makes the time axis not go to infinity anymore)? Answer: It doesn't. You're biased toward results you find interesting. Suppose we measure a particular algorithm's space $s$ and time $t$ complexity, and then improve the algorithm. One of the following things can and does happen: We reduce $s$ and leave $t$ unchanged. We reduce $t$ and leave $s$ unchanged. We reduce both $s$ and $t$. We reduce $s$ and $t$ increases. WOW! We reduce $t$ and $s$ increases. WOW! Now 1, 2 and 3 happen all the time. Why are you not asking for a "mathematical principle" for them? Because they are boring. Since we strictly improved the algorithm it must mean the previous algorithm was suboptimal, and is immediately forgotten. Only in cases 4 and 5 do we suddenly have two algorithms which do not have a clear winner, and only then do you start comparing them and wondering about a "mathematical principle" behind this difference. What's happening is that given two objectives, you get a plot of possible solutions: Anything above and to the right of the critical red line is immediately forgotten as non-interesting, as there are algorithms that are strictly better in both $s$ and $t$.
Your mind only looks at the red line (also known as the Pareto frontier), ignoring the rest of the plot.
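The "critical red line" is the Pareto frontier of the $(s, t)$ pairs, and a tiny sketch with made-up costs shows how dominated algorithms drop out of sight:

```python
# Illustrative sketch with made-up (space, time) costs: algorithms strictly
# worse on both axes are dominated and drop off the Pareto frontier.
algorithms = {
    "A": (10, 50),   # low space, high time
    "B": (20, 20),
    "C": (40, 25),   # dominated by B: more space AND more time
    "D": (60, 5),    # high space, low time
}

def pareto_frontier(points):
    frontier = {}
    for name, (s, t) in points.items():
        dominated = any(
            s2 <= s and t2 <= t and (s2, t2) != (s, t)
            for s2, t2 in points.values()
        )
        if not dominated:
            frontier[name] = (s, t)
    return frontier

print(sorted(pareto_frontier(algorithms)))  # ['A', 'B', 'D']: C is forgotten
```

A, B and D survive because each trades space against time in a way the others cannot beat on both axes at once.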
{ "domain": "cs.stackexchange", "id": 9838, "tags": "computability, optimization, information-theory" }
Concise HTTP server
Question: Here's my code for a concise node.js HTTP server. It does work as intended, but I'd like to ask if there's any glitch or anything to be improved. var httpServer = function(dir) { var component = require('http') .createServer(function(req, res) { var fs = require('fs'); var path = require("path"); var url = require('url'); var mimeTypes = { "html": "text/html", "jpeg": "image/jpeg", "jpg": "image/jpeg", "png": "image/png", "js": "text/javascript", "css": "text/css" }; var uri = url.parse(req.url) .pathname; var filename = path.join(dir, unescape(uri)); var indexFilename = path.join(dir, unescape('index.html')); var stats; console.log(filename); try { stats = fs.lstatSync(filename); // throws if path doesn't exist } catch (e) { res.writeHead(404, { 'Content-Type': 'text/plain' }); res.write('404 Not Found\n'); res.end(); return; } if (stats.isFile()) { // path exists, is a file var mimeType = mimeTypes[path.extname(filename) .split(".")[1]]; res.writeHead(200, { 'Content-Type': mimeType }); var fileStream = fs.createReadStream(filename) .pipe(res); } else if (stats.isDirectory()) { // path exists, is a directory res.writeHead(200, { 'Content-Type': "text/html" }); var fileStream = fs.createReadStream(indexFilename) .pipe(res); } else { // Symbolic link, other? // TODO: follow symlinks? security?
res.writeHead(500, { 'Content-Type': 'text/plain' }); res.write('500 Internal server error\n'); res.end(); } }); return component; }; var port = 9999; var dir = 'www'; var HTTPserver = httpServer(require('path') .join(__dirname, dir)) .listen(port, function() { console.log('HTTP listening ' + port); }); Answer: simpler var port = 9999; var directory = 'www'; var http = require('http'); var url = require('url'); var path = require("path"); var fs = require('fs'); var mimeTypes = { "html": "text/html", "js": "text/javascript", "css": "text/css", "jpeg": "image/jpeg", "jpg": "image/jpeg", "png": "image/png", "gif": "image/gif", "svg": "image/svg" // more }; var request = function(req, res) { var uri = url.parse(req.url).pathname; var dir = path.join(__dirname, directory); var filepath = path.join(dir, unescape(uri)); var indexfilepath = path.join(dir, unescape('index.html')); console.info('filepath', filepath); var f = function(err, stats) { if (stats === undefined) // path does not exist: 404 { res.writeHead(404, { 'Content-Type': 'text/plain' }); res.write('404 Not Found\n'); res.end(); return; } else if (stats.isFile()) // path exists, is a file { var mimeType = mimeTypes[path.extname(filepath).split(".")[1]]; res .writeHead(200, { 'Content-Type': mimeType }); var fileStream = fs .createReadStream(filepath) .pipe(res); return; } else if (stats.isDirectory()) // path exists, is a directory { res .writeHead(200, { 'Content-Type': "text/html" }); var fileStream = fs .createReadStream(indexfilepath) .pipe(res); return; } else { // Symbolic link, other? // TODO: follow symlinks? security? // note: writeHead is not chainable here, so keep these as statements res.writeHead(500, { 'Content-Type': 'text/plain' }); res.write('500 Internal server error\n'); res.end(); return; } }; fs.stat(filepath, f); return; }; var serverUp = function() { console.info('HTTP server listening', port); return; }; var component = http .createServer(request) .listen(port, serverUp);
{ "domain": "codereview.stackexchange", "id": 11886, "tags": "javascript, node.js, http, server" }
Improving performance hacker rank Median
Question: I'm trying to compete on HackerRank and my answer got accepted, but the times are not so good. I have a friend who sent the answer in C# too but somehow made it a lot faster. I'm wondering what I can do to improve it. using System; using System.Collections.Generic; using System.IO; using System.Text; class Solution { static void Main(String[] args) { int N; StringBuilder st= new StringBuilder(); N = Convert.ToInt32(Console.ReadLine()); int[] x = new int[N]; List<double> a= new List<double>(); string[] s = new string[N]; for(int i=0; i<N ;i++){ string tmp = Console.ReadLine(); string[] split = tmp.Split(new Char[] {' ', '\t', '\n'}); s[i] = split[0].Trim(); x[i] = Convert.ToInt32(split[1].Trim()); bool r=true; if(s[i]=="r"){ int index= a.BinarySearch(x[i]); if(index>=0){ a.RemoveAt(a.LastIndexOf(x[i])); } else{ r=false; } }else{ var index = a.BinarySearch(x[i]); if (index < 0) index = ~index; a.Insert(index,x[i]); } if(!r || a.Count==0){ st.AppendLine("Wrong!"); } else{ st.AppendLine(calcularModa(a).ToString()); } } Console.WriteLine(st.ToString()); } static double calcularModa(List<double> a){ int i= a.Count/2; if(a.Count % 2 !=0){ return a[i]; }else{ return ((a[i - 1] + a[i]))/2; } } } Answer: Besides the points made by Jesse, I'd like to comment on a couple of things that might improve your code. First, since all that you are parsing is a string, you don't need to call Convert.ToInt32(str), just int.Parse(str). That's what's really going on anyway; you're just adding a few extra method calls. But where I see you're losing most of your time is in the following: int index= a.BinarySearch(x[i]); if(index>=0){ a.RemoveAt(a.LastIndexOf(x[i])); //... } Notice that you are looking for the index with Binary Search, O(log n), and then you are looking for the index again, but this time in O(n). Instead, you should: int index= a.BinarySearch(x[i]); if(index>=0){ a.RemoveAt(index); //... }
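The reviewer's point is to reuse the index that the binary search already found instead of re-scanning the list. Here is a hedged Python sketch of the same running-median bookkeeping (bisect plays the role of List<T>.BinarySearch; the names are mine):

```python
# Sketch of the review's suggestion: the index from the O(log n) binary
# search is reused directly, avoiding the O(n) LastIndexOf re-scan.
import bisect

def insert_sorted(a, x):
    bisect.insort(a, x)              # binary search, then insert at that index

def remove_sorted(a, x):
    i = bisect.bisect_left(a, x)     # O(log n), like List<T>.BinarySearch
    if i < len(a) and a[i] == x:
        a.pop(i)                     # reuse i: no second linear search
        return True
    return False                     # the "Wrong!" case in the challenge

def median(a):
    i = len(a) // 2
    return a[i] if len(a) % 2 else (a[i - 1] + a[i]) / 2

a = []
for v in (1, 2, 3):
    insert_sorted(a, v)
print(median(a))        # 2
remove_sorted(a, 1)
print(median(a))        # 2.5
```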
{ "domain": "codereview.stackexchange", "id": 3715, "tags": "c#, programming-challenge, median" }
Conway's Game Of Life in Java
Question: I've written a simple demo for Conway's Game Of Life for self-practice. How can I improve the code, especially the processLife() method which traverses and updates the next state of every element in the matrix. The complete code is down below; For those whose eyes are tired of looking at code here: Gist Link public class GameOfLifeDemo { private static final int HEIGHT = 10; private static final long PERIOD = 120*1; public static void main(String[] args) throws InterruptedException { boolean [][] matrix = new boolean[HEIGHT][HEIGHT]; // generateRandom(matrix); // random values matrix // testGlider(matrix); // test for Glider testTumbler(matrix); // test for Tumbler while(true) { Thread.sleep(PERIOD); printMatrix(matrix); processLife(matrix); System.out.println("-----------------------------------------------------"); } } private static void processLife(boolean[][] matrix) { boolean[][] tempMatrix = new boolean[matrix.length][matrix[0].length]; copyMatrix(matrix, tempMatrix); // sweep the matrix for(int i = 0; i < HEIGHT; i++) { for(int j = 0; j < HEIGHT; j++) { // count alive neighboors int countAlive = 0; for(int k = i-1; k <= i+1; k++) { for(int t = j-1; t <= j+1; t++) { if((k == i && t == j) || (t < 0 || t >= HEIGHT) || (k < 0 || k >= HEIGHT) ) continue; else { if(matrix[k][t]) { countAlive++; } } } } handleRules(tempMatrix, i, j, countAlive); } } copyMatrix(tempMatrix, matrix); } // rules // if cell have neighboors smaller than 1, die of loneliness // if cell have neighboors greater than 4, die of overpopulation // if only cell have 3 or 4 neighboors, live private static void handleRules(boolean[][] matrix, int i, int j, int countAlive) { if(countAlive <= 1 || countAlive >= 4) matrix[i][j] = false; else if(countAlive == 3 || countAlive == 4) matrix[i][j] = true; } private static void copyMatrix(boolean[][] src, boolean[][] dst) { for(int i = 0; i < HEIGHT; i++) System.arraycopy(src[i], 0, dst[i], 0, HEIGHT); } private static void 
printMatrix(boolean[][] matrix) { for(int i = 0; i < matrix.length; i++) { for(int j = 0; j < matrix[i].length; j++) { if(matrix[i][j] == true) { System.out.print('X'); } else { System.out.print(' '); } } System.out.println(); } } private static void generateRandom(boolean[][] matrix) { for(int i = 0; i < HEIGHT; i++) for(int j = 0; j < HEIGHT; j++) matrix[i][j] = Math.random() < 0.5; } /* * Test Method for Generating a Glider * * X * X * XXX * */ private static void testGlider(boolean[][] matrix) { matrix[0][1] = true; matrix[1][2] = true; matrix[2][0] = true; matrix[2][1] = true; matrix[2][2] = true; } /* * Test Method for Generating a Tumbler * * XX XX * XX XX * X X * X X X X * X X X X * XX XX * */ private static void testTumbler(boolean[][] matrix) { matrix[0][2] = true; matrix[0][3] = true; matrix[0][5] = true; matrix[0][6] = true; matrix[1][2] = true; matrix[1][3] = true; matrix[1][5] = true; matrix[1][6] = true; matrix[2][3] = true; matrix[2][5] = true; matrix[3][1] = true; matrix[3][3] = true; matrix[3][5] = true; matrix[3][7] = true; matrix[4][1] = true; matrix[4][3] = true; matrix[4][5] = true; matrix[4][7] = true; matrix[5][1] = true; matrix[5][2] = true; matrix[5][6] = true; matrix[5][7] = true; } } Answer: Everything is static. You could be using Object Oriented Programming, but you are simply using static functions and variables in the main function. Better would be to have a GameOfLife class which you instantiate (preferably to the initial state of the board). Something like this: class GameOfLife { private boolean[][] board; public GameOfLife(boolean[][] board) { this.board = copyOfBoard(board); } } And then that class could have a advanceGeneration() function and similar things. Don't use boolean[][] for this. It's less clear what you are doing here. Instead, use an enum: public enum CellType { DEAD, ALIVE } Granted, the boolean[][] initializes to entirely false, whereas this CellType[][] doesn't, but I feel like the readability is worth it. 
At the very least, define some constants: public static final boolean ALIVE = true; public static final boolean DEAD = false; private static final long PERIOD = 120*1; I have no idea what this is by the name. What's the *1 for anyway? What is this the period of? Or is this referring to the punctuation mark? while(true) { Thread.sleep(PERIOD); printMatrix(matrix); processLife(matrix); System.out.println("-----------------------------------------------------"); } Oh you are using it as a delay between states of the board. In that case, it should be named something better. However, you are working too hard here. You could just use Java's built in Timer class: Timer timer = new Timer(); timer.scheduleAtFixedRate(new TimerTask() { // Code goes here }, 0, TIME_BETWEEN_FRAMES); That uses java.util.Timer, but if you want to go for a Swing timer (aka you want to use a GUI), then you could do this: // This is a javax.swing.timer // this::functionToCall is really just any ActionListener, but I personally like lambda functions Timer timer = new Timer(TIME_BETWEEN_FRAMES, this::functionToCall); // You could do: // Timer timer = new Timer(TIME_BETWEEN_FRAMES, (e) -> { // // Code goes here // }); timer.start(); private static void handleRules(boolean[][] matrix, int i, int j, int countAlive) { if(countAlive <= 1 || countAlive >= 4) matrix[i][j] = false; else if(countAlive == 3 || countAlive == 4) matrix[i][j] = true; } This modifies the result. You almost always want your functions to act like mathematical functions rather than procedures. Additionally, Wikipedia lists the CGoL rules as follows: Any live cell with fewer than two live neighbours dies, as if caused by under-population. Any live cell with two or three live neighbours lives on to the next generation. Any live cell with more than three live neighbours dies, as if by over-population. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction. 
Rewrite this like so: private static boolean computeValueForNeighbours(boolean currentValue, int countAlive) { if (currentValue) { if (countAlive < 2) return false; if (countAlive == 2 || countAlive == 3) return true; if (countAlive > 3) return false; } if (countAlive == 3) return true; return false; } Note that this may also have different behaviour than your function. if(countAlive <= 1 || countAlive >= 4) matrix[i][j] = false; else if(countAlive == 3 || countAlive == 4) That last countAlive == 4 will never be true, since it would have matched the previous if statement if it was. private static void processLife(boolean[][] matrix) { boolean[][] tempMatrix = new boolean[matrix.length][matrix[0].length]; copyMatrix(matrix, tempMatrix); You copy the matrix to a temporary matrix and back every time. This is very inefficient. Although we tend to like immutability, I would avoid it in this case. There are two things bad here: You don't need to make a copy there and back. Just do something like boolean[][] oldState = copyOf(matrix); // Now store the new state into the `matrix`, using the oldState to obtain the information you need You don't need to copy the entire matrix. For an n x n matrix, copying takes O(n^2) extra space. You can get by with O(n) space, or copying only a single row. The idea is to modify the matrix in place, using a single-dimensional array to hold the old values of the row you are working on. Then, you simply use that array when you get the old values for computing the new line. You will need another variable as well. 
Something like this (doesn't completely work and I don't recommend this code style; this is just to get the idea out there): boolean[] row = new boolean[matrix[0].length]; for (int y = 0; y < matrix.length; y++) { boolean previousX = false; for (int x = 0; x < matrix[y].length; x++) { int countAlive = 0; if (previousX) countAlive++; if (row[x]) countAlive++; if (row[x + 1]) countAlive++; if (row[x - 1]) countAlive++; if (matrix[y][x + 1]) countAlive++; if (matrix[y + 1][x - 1]) countAlive++; if (matrix[y + 1][x]) countAlive++; if (matrix[y + 1][x + 1]) countAlive++; previousX = row[x]; row[x] = matrix[y][x]; matrix[y][x] = computeValueForNeighbours(matrix[y][x], countAlive); } } That looks very awkward because row stores part of one row and part of the next row. The if statements do go by the top, middle, and bottom row of the nearby cells. Please improve the readability of this when you actually code it.
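For reference, here is a self-contained, bounds-checked two-array version of one generation step (a sketch of my own; the class name is hypothetical, and the rule function condenses the computeValueForNeighbours logic quoted in this review):

```java
// Minimal two-array Game of Life step with explicit bounds checks.
public class GameOfLifeStep {
    // Condensed form of the standard rules: survive on 2 or 3, born on 3.
    static boolean computeValueForNeighbours(boolean alive, int countAlive) {
        if (alive) return countAlive == 2 || countAlive == 3;
        return countAlive == 3;
    }

    static boolean[][] nextGeneration(boolean[][] board) {
        int h = board.length, w = board[0].length;
        boolean[][] next = new boolean[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int count = 0;
                // Count the up-to-8 live neighbours, skipping off-board cells
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        if (dy == 0 && dx == 0) continue;
                        int ny = y + dy, nx = x + dx;
                        if (ny >= 0 && ny < h && nx >= 0 && nx < w && board[ny][nx]) {
                            count++;
                        }
                    }
                }
                next[y][x] = computeValueForNeighbours(board[y][x], count);
            }
        }
        return next;
    }

    public static void main(String[] args) {
        // A vertical blinker should turn horizontal after one step
        boolean[][] blinker = {
            {false, true, false},
            {false, true, false},
            {false, true, false},
        };
        boolean[][] next = nextGeneration(blinker);
        System.out.println(next[1][0] + " " + next[1][1] + " " + next[1][2]);
    }
}
```

It allocates a fresh array each step, so it trades the O(n) space saving discussed above for clarity.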
{ "domain": "codereview.stackexchange", "id": 19335, "tags": "java, game-of-life" }
What percentage of human body's cells are contained in blood?
Question: What percentage of the human body's cells are contained in blood vs. the rest of the body? Answer: The human body has about $3.72 × 10^{13}$ cells according to Bianconi et al. 2013, although this is disputed. It also has about 5 liters of blood, of which approximately 40-45% is erythrocytes. Let's assume 2.5 liters = 2500 $cm^3$ of red blood cells (RBC). Assuming a mean value of 5 million RBC per $mm^3$, there should be in total $5 \cdot 10^6 \cdot 10^3 \cdot 2500 = 1.25 \cdot 10^{13}$ RBC. By leaving white blood cells and thrombocytes out of the equation entirely, roughly ${1.25 \over 3.72} \cdot 100 \approx 34\%$ of the body's cells are contained in the blood (approximately).
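Re-running the back-of-the-envelope arithmetic with the answer's own inputs (note that the product works out to $1.25 \cdot 10^{13}$ RBC, i.e. roughly a third of all cells by count):

```python
# Back-of-the-envelope red blood cell count (inputs as stated in the answer)
total_cells = 3.72e13     # Bianconi et al. 2013
rbc_per_mm3 = 5.0e6       # assumed mean red-cell concentration
rbc_volume_cm3 = 2500.0   # assumed 2.5 L of packed red cells
mm3_per_cm3 = 1.0e3

total_rbc = rbc_per_mm3 * mm3_per_cm3 * rbc_volume_cm3
percentage = 100.0 * total_rbc / total_cells

print(total_rbc)     # 1.25e13 red blood cells
print(percentage)    # ~33.6 % of all cells
```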
{ "domain": "biology.stackexchange", "id": 3515, "tags": "human-biology" }
Autocorrelation function between received and training signal used for channel estimation
Question: I have read a paper that performs channel estimation of a wireless channel as follows. A training sequence of length $N$, let's call it $a_N(i)$ for $i\in[0:N-1]$, is repeated twice and then sent out over the channel. Assume the transmit sequence formed of these two sequences is denoted by $s_{CE}(t)$. The received signal (assuming a channel of length $T_{CH}$) is $$r_{CE}(t) = \sum_{t'=0}^{T_{CH}-1}h(t')s_{CE}(t-t')+n(t)$$ The authors claim that if one takes the autocorrelation between a sequence $a_N(i)$ and the received signal then we can mathematically write the autocorrelation as $$R(t)= \frac{1}{N}\sum_{d=0}^{N-1}r_{CE}(t+d\underbrace{-N+1}_{????}) a_{N}^*(d) $$ My question is why there is a need for the term I have underbraced, $-N+1$. I thought that the autocorrelation is in general expressed as $$R(t)= \frac{1}{N}\sum_{d=0}^{N-1}r_{CE}(t+d) a_{N}^*(d) $$ Thanks, looking forward to your views! Update: The following is the reference Link. In particular, (25) and (26) are my main concerns. Answer: I think it's just a trick which makes $R(t)=0$ when $t<0$. First, $-N+1$ is just a time offset and does not affect the essence of the function. $r_{CE}(t)$ is bounded to $[0, 2N+T_{CH}-2]$, and $a_N(t)$ to $[0, N-1]$. In the form you gave, $R(t)$ is bounded to $[-N+1, 2N+T_{CH}-2]$. Maybe the author wants the non-zero values of $R(t)$ to also start at 0, so he just shifts it right by $N-1$, which causes the $-N+1$ term in the equation. --------- Why $R(t)$ is bounded like that ------------- This is just an explanation to give some intuition, not a formal proof. Consider two functions $f(t)$ and $g(t)$, with $f(t)=0, t \notin [a, b]$ and $g(t)=0, t\notin [c, d]$. Now we want to calculate the sum of products, i.e., $S=\sum_t f(t)g(t)$. For $S$ to be non-zero, $[a,b]$ and $[c,d]$ must overlap, right? Thus we get $a \le d$ and $c\le b$.
Getting back to $R(t)=\sum _d r_{CE}(t+d)a^*(d)$ (note that here the summation variable is $d$): $a^*(d)$ is bounded to $[0, N-1]$ and $r_{CE}(t+d)$ to $[-t, L-1-t]$, in which $L=2N+T_{CH}-1$ is the length of $r_{CE}(d)$. Here we get $a=0, b=N-1, c=-t, d=L-1-t$. Using $a \le d$ and $c \le b$, we get $-(N-1) \le t \le L-1$. As to why $r_{CE}$ is bounded like that, you can follow the same process to understand it.
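The lag bookkeeping is easy to verify with a toy numerical check (hypothetical numbers; the sketch takes a noiseless channel, so $T_{CH}=1$ and $L=2N$, and the non-zero lags should span exactly $[-(N-1), L-1]$):

```python
# Toy check of the lag bounds derived in the answer.
N = 4
a = [1.0, -1.0, 1.0, 1.0]   # toy training sequence a_N
r = a + a                   # noiseless received signal: two repeats, length 2N

def R(t):
    """R(t) = (1/N) * sum_d r(t + d) * a(d), treating r as zero off-support."""
    total = 0.0
    for d in range(N):
        if 0 <= t + d < len(r):
            total += r[t + d] * a[d]
    return total / N

lags = [t for t in range(-10, 20) if R(t) != 0.0]
print(min(lags), max(lags))   # -(N-1) and L-1, here -3 and 7
print(R(0), R(N))             # perfect alignment at lags 0 and N: both 1.0
```

Shifting the argument by $N-1$, as the paper's $-N+1$ term does, slides this support so that it starts at 0.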
{ "domain": "dsp.stackexchange", "id": 2782, "tags": "digital-communications" }
Calculating the Potential from the E-Field
Question: I find that often times I'll be tripped up by questioning whether or not I can do something mathematically, and be unable to come up with a satisfying answer. This is, unfortunately, one of those times. I'm told: A uniform electric field, $\vec{E} = E_0\hat{x}$. What is the potential, expressed using cylindrical coordinates, $V(s,\phi,z)$? My first course of action is: We know... $$|r| = \sqrt{x^2 + y^2 + z^2} = \sqrt{x^2} = x = E_0$$ $$\tan^{-1}{\frac{y}{x}} = \theta = 0$$ $$z = z = 0$$ So the electric field only has a component in the $\hat{r}$ direction. Now, we know that $\vec{E} = -\nabla V(r, \phi, z) = - \frac{\partial V}{\partial r}\hat{r} - \frac{1}{r}\frac{\partial V}{\partial \phi}\hat{\phi} - \frac{\partial V}{\partial z}\hat{z}$ So, I think "Oh. I just have to integrate." ... but over what? Do I integrate three times, once w.r.t. radius, then phi, then z? I'm pretty sure that won't give me the right answer. If I decide to express $\nabla V$ in terms of cartesian coordinates, I get $- \nabla V(x,y,z) = E_0 \hat{x}$ ... but the question still remains. I feel like this is definitely the easy part of the problem, and I can often do the more complicated parts; it's just small things like this that often throw me off. How would I go about extracting the potential from either of those equations? I know I have to integrate, but... where? Answer: It seems to me that you have more of a conceptual issue than a mathematical one. To hopefully remedy this, let me remind you of a couple of facts.
1. Given an electric field $\mathbf E$, an electric potential $V$ for $\mathbf E$ is any scalar function $V$ for which \begin{align} \mathbf E = -\nabla V \end{align}
2. It follows that if $V$ is such a potential, then we can integrate both sides along a curve $C$ to obtain \begin{align} \int_C\mathbf E\cdot d\boldsymbol \ell = -\int_C \nabla V\cdot d\boldsymbol \ell \end{align}
3. If $C$ is a curve with endpoints $\mathbf a$ and $\mathbf b$, then the gradient theorem tells us that the right-hand side can be evaluated in terms of the values of $V$ at these endpoints alone; \begin{align} -\int_C \nabla V\cdot d\boldsymbol \ell = V(\mathbf a) - V(\mathbf b) \end{align}
4. We now have the freedom to choose the value of the potential at some reference point $\mathbf b = \mathbf x_0$ (this freedom comes from the fact that in step 1, the condition that the field be the gradient of the potential does not uniquely specify the potential); combining steps 2 and 3 then allows us to compute the value of the potential at any other point $\mathbf a = \mathbf x$. In other words, combining these remarks with steps 2 and 3 we obtain \begin{align} V(\mathbf x) = V(\mathbf x_0) + \int_C \mathbf E\cdot d\boldsymbol \ell \end{align} where $C$ is any path from $\mathbf x$ to $\mathbf x_0$.
In short, the electric potential is computed by choosing its value at a certain reference point, and then performing a line integral along any path to another point at which you want to determine its value. In this way, you can obtain the functional form of $V$ at any point $\mathbf x$ you like.
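As a quick sanity check of what this procedure gives for the uniform field in the question: choosing $V=0$ at the origin and integrating along a straight path yields $V=-E_0 x=-E_0\, s\cos\phi$. A sketch with sympy, verifying that its cylindrical gradient reproduces $\vec E = E_0\hat x$:

```python
import sympy as sp

s, phi, z, E0 = sp.symbols('s phi z E_0', positive=True)

# Candidate potential from the line integral with V(origin) = 0:
V = -E0 * s * sp.cos(phi)

# Components of E = -grad V in cylindrical coordinates (s, phi, z):
E_s   = -sp.diff(V, s)
E_phi = -sp.diff(V, phi) / s
E_z   = -sp.diff(V, z)

# Convert back to Cartesian components using
# x-hat = cos(phi) s-hat - sin(phi) phi-hat, y-hat = sin(phi) s-hat + cos(phi) phi-hat:
Ex = sp.simplify(E_s * sp.cos(phi) - E_phi * sp.sin(phi))
Ey = sp.simplify(E_s * sp.sin(phi) + E_phi * sp.cos(phi))

assert sp.simplify(Ex - E0) == 0   # x-component is E_0
assert sp.simplify(Ey) == 0        # no y-component
assert E_z == 0                    # no z-component
```

Any additive constant in $V$ would pass the same check, reflecting the freedom of reference point discussed in step 4.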
{ "domain": "physics.stackexchange", "id": 12401, "tags": "electrostatics, electric-fields, potential, voltage, integration" }
Modelling deadlock in cyclic pipelines in Haskell with coroutine libraries
Question: I'm modelling two processes which have been put into a cyclic pipeline - the output of each feeds into the input of the other - so as to work out when they've deadlocked (i.e., neither can progress since they're each waiting on output from the other). I've used the conduit package for this, as I couldn't see an easy way to do it using pipes, and streaming doesn't really look suited to this sort of task. It looks like monad-coroutine would be another possibility for this, but I didn't investigate it further. Define the problem as follows: We have 2 processes executing a program made up of two sorts of instruction: "send" and "receive". Each process has an incoming mailbox and can send to an outgoing mailbox, each of which is the end of a FIFO queue of unbounded length. Let's assume the items of data they're sending and receiving are Ints. The instruction send n will send the int n to the process's outgoing mailbox (from which it can be retrieved by whatever program is monitoring the other end of the queue). receive will try and retrieve an int from the process's incoming mailbox. If there's nothing in the mailbox, the process will block until there is. Processes execute each instruction in turn, and if there are no more instructions, they exit. Now, assume we join 2 processes A and B "head to toe" in a cycle: process A receives data from process B's outbox, and sends data to process B's inbox, and vice versa. Given a particular program P, we wish to simulate their execution to find out (a) whether the 2 processes deadlock, when given program P, and (b) how many items of data each process sends before the 2 of them either deadlock or come to the end of their instructions. I think this problem is of interest because pre-emptive concurrency doesn't seem like a good way of solving it -- most concurrency libraries do their best to help you avoid deadlock, not model processes that have got into deadlock. 
(But perhaps I'm wrong, and this is easily modelled with a standard concurrency library - I'd be keen to hear.) Also it gave me a good reason to look at some of the streaming data packages (conduit, pipes and streaming), all of which I believe are modelled around the idea of processes that can "yield" data "downstream", or "await" it from "upstream", which is exactly what this problem requires. Here's my code: (NB: contains possible spoilers for the Advent of Code 2017, day 18 problem, Part 2 - but this is not relevant to my question, which is about modelling deadlock with coroutines.)

-- requires packages:
--   microlens-platform
--   mtl
--   conduit
--   conduit-combinators
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE FlexibleContexts #-}

import Conduit ( (.|), ConduitM, yield, await, fuseBoth
               , yieldMany, runConduitPure )
import Data.Conduit.Lift (execStateC)
import Data.Conduit.List (consume)
import Lens.Micro.Platform
import Control.Monad.State (MonadState)
import Control.Monad (when, unless)

data Instr = Send Int
           | Recv
           -- ... stub: imagine further operations here, e.g.
           -- acting on a store, conditional jumps to other instructions, etc.
  deriving Show

-- | state of a program
data ProgState = ProgState
  { _program   :: [Instr] -- ^ remaining instructions to run
  , _outputCnt :: Int     -- ^ how many times we've done a "Send"
  } deriving Show

-- programs initially haven't sent anything
mkProgState :: [Instr] -> ProgState
mkProgState instrs = ProgState instrs 0

makeLenses ''ProgState

-- | perform one operation, using 'yield' and 'await'
-- to "send" and "receive" values.
-- return a Bool - whether we can continue, or are
-- blocked on a receive and should abort.
applyOp :: MonadState ProgState m => Instr -> ConduitM Int Int m Bool
applyOp instr =
  case instr of
    Send n -> do
      yield n
      outputCnt += 1
      return True
    Recv -> do
      valM <- await
      case valM of
        Nothing   -> return False
        Just _val -> -- stub: ..do something with received vals
          return True

-- Given initial state:
-- Execute instructions in sequence until either
-- (a) there are no more left, or
-- (b) we're blocked while receiving
-- and return the new state.
runLoop :: Monad m => ProgState -> ConduitM Int Int m ProgState
runLoop state = execStateC state loop
  where
    loop :: MonadState ProgState m => ConduitM Int Int m ()
    loop = do
      prog <- use program
      unless (null prog) $ do -- still more instructions
        let instr = head prog
        canContinue <- applyOp instr
        when canContinue $ do
          program %= tail -- step forward 1 instruction
          loop

-- | put 2 program processes in sequence, one feeding the other.
-- In addition to program states, takes input to program A,
-- and returns output from program B.
pipe :: [Int] -> (ProgState, ProgState) -> ((ProgState, ProgState), [Int])
pipe input (stateA, stateB) =
  let (=.|=) = fuseBoth -- join 2 conduits in sequence,
                        -- and return results from both as a tuple
      -- get the side effect result of both programs A and B,
      -- also what B emits, as a list (using 'consume')
      conduit = yieldMany input .| runLoop stateA =.|= runLoop stateB =.|= consume
  in runConduitPure conduit

-- simulate the effect of joining our pipeline to its own
-- start, creating a cycle - and keep running until
-- the processes finish or are deadlocked (i.e.,
-- produce no output because neither can continue)
runCycle :: ProgState -> ProgState -> (ProgState, ProgState)
runCycle = loop []
  where
    loop input stateA stateB =
      let ((stateA', stateB'), output) = pipe input (stateA, stateB)
      in if null output
           then (stateA', stateB')
           else loop output stateA' stateB'

-- Give 2 processes a program to run that is guaranteed
-- to result in them deadlocking, when joined in a cyclic
-- pipeline.
-- count how many items each outputs before deadlock happens.
test :: (Int, Int)
test =
  let instrs = [ Send 1
               , Send 2
               , Recv
               , Recv
               , Recv
               ]
      (stateA, stateB) = runCycle (mkProgState instrs) (mkProgState instrs)
  in (stateA ^. outputCnt, stateB ^. outputCnt)

main :: IO ()
main = do
  let (aCount, bCount) = test
  putStrLn $ "program A emitted " ++ show aCount ++ " items"
  putStrLn $ "program B emitted " ++ show bCount ++ " items"

My questions are: Can you see any opportunities for improvement here, especially simpler ways of modelling the problem? Could pipes or streaming be used instead? I couldn't see an obvious way to do so. Or would some other package be better - should I have tried monad-coroutine? Or machines, perhaps? (I saw from its description that it might be relevant, but haven't investigated further -- ...there's only so many hours in the day.) I assume modelling this with any sort of pre-emptive multitasking library, like Control.Concurrent, would be tricky and pointless. (Since concurrency libraries tend to be about avoiding deadlock, rather than letting it happen and letting you inspect threads' current state.)

Answer: Here's your simpler way of modelling the problem: Each Send Int eliminates the other program's next Recv until neither sends.

import Data.List (delete)

data Instr = Send Int | Recv deriving Eq

execute :: [Instr] -> [Instr] -> (Bool, [Int], [Int]) -- Success, outputs
execute (Send i:x) y = (\(a,b,c) -> (a,i:b,c)) $ execute x (delete Recv y)
execute x (Send i:y) = (\(a,b,c) -> (a,b,i:c)) $ execute (delete Recv x) y
execute [] []        = (True, [], [])
execute _ _          = (False, [], [])
{ "domain": "codereview.stackexchange", "id": 29121, "tags": "haskell, concurrency, generator" }
What is the meaning of 'moderately exponential' running time?
Question: I was reading a research paper where, for hypergraphs of bounded rank $k$, the authors give a moderately exponential algorithm. The runtime of the algorithm is $e^{\mathcal{O}(k^2\sqrt n)} \cdot \mathrm{poly}(n)$. Here $k$ is the rank of the hypergraph, and $\mathrm{poly}(n)$ means polynomial in the variable $n$. What is the meaning of moderately exponential running time? I have seen this link, but did not understand much. Answer: Moderately exponential is not a widely accepted term in general theoretical computer science, though it might have a widely accepted meaning in the area to which the paper you are citing belongs. From context, it seems that the meaning here is $\exp O(n^\alpha)$ for $\alpha < 1$; an exponential running time would be $\exp O(n)$ or worse.
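A tiny numeric illustration of why $\exp O(n^\alpha)$ with $\alpha<1$ sits strictly between polynomial and fully exponential growth (the specific exponents and constants here are arbitrary, chosen just to make the comparison visible):

```python
import math

# "Moderately exponential": exp(O(n**alpha)) with alpha < 1, e.g. exp(sqrt(n)).
# Comparing logarithms avoids overflow for large n.
for n in (10**2, 10**4, 10**8):
    log_moderate = math.sqrt(n)   # log of exp(sqrt(n)), i.e. alpha = 1/2
    log_full     = n              # log of exp(n)
    assert log_moderate < log_full

# ...yet still super-polynomial: sqrt(n) eventually dominates c*log(n)
# (the log of n**c) for any fixed c, e.g. c = 100 at n = 10**8:
assert math.sqrt(10**8) > 100 * math.log(10**8)
```

So such a bound is sub-exponential but super-polynomial, which is what makes it "moderate".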
{ "domain": "cs.stackexchange", "id": 9659, "tags": "algorithms, terminology, time-complexity" }
Digital Downconversion of Signal
Question: DSP newbie here... Why are we using the ratio of the center frequency of the signal to the sampling frequency of the signal spectrum when we downconvert the signal? I have a spectrum with carriers which reaches from $-f_{s}/2 \dots f_{s}/2$ ($f_{s}$ is the sampling frequency) (see picture). The downconversion/shifting of the spectrum is $$e^{-j2\pi(LO/f_s)n}$$ Answer: For a centre frequency of $F_C$, a modulated signal $v(t)$ can be written as $$ r(t) = v_I(t) \sqrt{2} \cos 2\pi F_C t - v_Q(t) \sqrt{2}\sin 2\pi F_C t $$ Assuming a direct RF sampling at a rate of $F_S=1/T_S$ before downconversion, we plug $t=nT_S$ above. $$ r(nT_S) = v_I(nT_S) \sqrt{2} \cos 2\pi F_C nT_S - v_Q(nT_S) \sqrt{2}\sin 2\pi F_C nT_S $$ The expression $F_C nT_S$ above becomes $\frac{F_C}{F_S}n$ to yield $$ r(nT_S) = v_I(nT_S) \sqrt{2} \cos 2\pi \frac{F_C }{F_S} n - v_Q(nT_S) \sqrt{2}\sin 2\pi \frac{F_C }{F_S} n $$ Perhaps this is the ratio of the centre frequency to the sample rate you are asking about. EDIT: Downconversion can be embedded in a downsampling operation as follows. In a conventional operation, we will have an LO downconverting the signal to baseband, filter it and then downsample it according to the symbol rate, as shown below. Here, the filter is operating at a higher rate, which is unnecessary when we have to throw $M-1$ out of every $M$ samples anyway. This is a hint for what we need to do. First, we interchange the operations of downconversion and filtering, so our filter needs to be implemented in passband now. If the lowpass filter impulse response is $h[n]$, the passband filter response is $$ g[n] = h[n] e^{j2\pi \frac{F_C}{F_S} n}$$ The interchange of downconversion and filtering is shown below. Next, we slide the downsampling operation past the LO whose frequency gets multiplied by $M$, i.e., $F_C/F_S \cdot M$.
We assume that $$F_C = \frac{F_S}{M}$$ Then our LO frequency after sliding it past the downsampler becomes $$\frac{F_C}{F_S}M = \frac{F_S}{M}\frac{M}{F_S}=1$$ which implies that no downconversion is actually required! However, remember that the filter is still operating at a higher rate. So we slide the downsampler past the filter now and implement it as a polyphase filter. The original spectrum is shown below. This operation changes the spectrum such that all spectral replicas from $-F_S/2$ to $F_S/2$ end up at baseband overlapped on each other. The interesting part is that, at the input of the polyphase arms, the replicas possess a unique phase profile across the whole $360^{\circ}$ of the frequency IQ spectrum. A rough sketch is below. This is only true in an exact manner for sinusoidal signals, so the rest of the task for a signal with bandwidth is performed by the filter. When implemented as a polyphase filter, this filter operates with the delays matching those of the spectral replicas shown above. A final summation at the end cancels everything out, leaving the desired spectrum. This also implies that if the desired spectrum is at $F_S/M$ and we want to employ the initial lowpass filter $h[n]$ instead of the bandpass filter $g[n]$, a corresponding rotation is required after each polyphase arm to line everything up before the final cancelation.
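A toy numerical sketch of the $F_C = F_S/M$ special case described above (arbitrary values; a single real carrier with a slow envelope and no other spectral content, so plain decimation stands in for the full polyphase filter):

```python
import numpy as np

Fs = 1.0          # normalized sample rate
M = 4
Fc = Fs / M       # carrier placed exactly at Fs / M
n = np.arange(256)

m = np.cos(2 * np.pi * 0.01 * n)            # slow "message" envelope
s = m * np.cos(2 * np.pi * (Fc / Fs) * n)   # sampled passband signal at Fc

# Keeping every M-th sample: cos(2*pi*(1/M)*(M*k)) = cos(2*pi*k) = 1,
# so the carrier aliases to DC and only the envelope samples remain --
# no explicit LO multiplication by e^{-j 2*pi*(Fc/Fs) n} is needed.
baseband = s[::M]
assert np.allclose(baseband, m[::M])
```

In a real receiver the polyphase filter described in the answer is what keeps the other spectral replicas from aliasing on top of the desired one; this toy signal has nothing else in band, which is why bare decimation suffices here.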
{ "domain": "dsp.stackexchange", "id": 6548, "tags": "digital-communications" }
How to derive the number of independent parameters in the $\chi$ matrix from the Choi matrix?
Question: In the section on Quantum process tomography, Page 391, Chapter 8, Quantum Computation and Quantum Information by Nielsen and Chuang, it is given that: In general, $\chi$ will contain $d^4-d^2$ independent real parameters, because a general linear map of $d$ by $d$ complex matrices to $d$ by $d$ matrices is described by $d^4$ independent parameters, but there are $d^2$ additional constraints due to the fact that $\rho$ remains Hermitian with trace one; that is, the completeness relation $\sum_i E_i^\dagger E_i=I$ is satisfied, giving $d^2$ real constraints. A post, Why does the $\chi$ matrix have $d^4-d^2$ independent parameters?, has been asked on this, which contains an answer by @DaftWullie, which makes use of the Choi matrix, where it seems like $\chi$ is used to represent the Choi matrix, not the $\chi$ matrix. $\chi$ is a Hermitian $d^2\times d^2$ matrix, which must have $d^2$ real (independent) diagonal terms, and the number of terms below the diagonal is $\dfrac{(d^2-1)d^2}{2}$, each complex, giving a total of $2\times\dfrac{(d^2-1)d^2}{2}=d^4-d^2$ real terms. Therefore, the total number of independent real parameters, considering only that $\chi$ is Hermitian, is $(d^4-d^2)+d^2=d^4$. The Choi matrix is given by $\sigma=(I_R\otimes\mathcal{E})(|\alpha\rangle\langle\alpha |)$ where $|\alpha\rangle=\dfrac{1}{\sqrt{d}}\sum_i |i_R\rangle\otimes|i_Q\rangle$ is a maximally entangled state of the systems $R$ and $Q$. $$ \sigma=(I_R\otimes\mathcal{E})(|\alpha\rangle\langle\alpha |)=\dfrac{1}{d}\sum_{i,j}|i_R\rangle\langle j_R|\otimes\mathcal{E}(|i_Q\rangle\langle j_Q|) $$ which can be interpreted as the block matrix with $\frac{1}{d}\mathcal{E}(|i_Q\rangle\langle j_Q|)$ as the $(i,j)^{th}$ block.
The $\chi$ matrix is defined by setting $E_i=\sum_m e_{im}\tilde{E}_m$, with $\{\tilde{E}_m\}$ being any orthonormal basis for the set of operators on the state space, such that $$ \mathcal{E}(\rho)=\sum_i E_i\rho E_i^\dagger=\sum_{m,n}\chi_{mn}\tilde{E}_m\rho\tilde{E}_n^\dagger $$ where the $(m,n)^{th}$ element of $\chi$ is $\chi_{mn}=\sum_i e_{im}e_{in}^*$ such that $$ \chi=\sum_{m,n}\chi_{mn}|m\rangle\langle n| $$ $$ \sigma=(I_R\otimes\mathcal{E})(|\alpha\rangle\langle\alpha |)=\sum_{m,n}\chi_{mn}(I\otimes \tilde{E}_m)|\alpha\rangle\langle\alpha |(I\otimes \tilde{E}_n^\dagger)=\sum_{m,n}\chi_{mn}|\tilde{E}_m\rangle\langle\tilde{E}_n| $$ where \begin{align}|\tilde{E}_m\rangle&=(I\otimes \tilde{E}_m)|\alpha\rangle\\ &=(I\otimes \tilde{E}_m)\dfrac{1}{\sqrt{d}}\sum_i |i_R\rangle\otimes|i_Q\rangle\\ &=\dfrac{1}{\sqrt{d}}\sum_i |i_R\rangle\otimes\tilde{E}_m|i_Q\rangle \end{align} and therefore $$ \chi_{mn}=\langle\tilde{E}_m|\sigma|\tilde{E}_n\rangle $$ Now, the Choi matrix can be written as, \begin{align} \sigma&=\sum_{m}(I\otimes {E}_m)(\dfrac{1}{d}\sum_{i,j}|i_R\rangle\langle j_R|\otimes|i_Q\rangle\langle j_Q|)(I\otimes {E}_m^\dagger)\\ &=\sum_{i,j}|i\rangle\langle j|\otimes\dfrac{1}{d}\sum_m{E}_m|i\rangle\langle j|{E}_m^\dagger\\ \end{align} Therefore, the $(i,j)^{th}$ block of the Choi matrix $\sigma$ is $\dfrac{1}{d}\sum_m{E}_m|i\rangle\langle j|{E}_m^\dagger$. 
The $(k,l)^{th}$ term of the $(i,j)^{th}$ block of the Choi matrix is $$ \sigma_{ij,kl}=\langle k|(\dfrac{1}{d}\sum_m{E}_m|i\rangle\langle j|{E}_m^\dagger)|l\rangle\\ =\dfrac{1}{d}\sum_m\langle k|{E}_m|i\rangle\langle j|{E}_m^\dagger|l\rangle\\ $$ We are free to choose $\tilde{E}_m=\sqrt{d}|t\rangle\langle q|$, with $m=qd+t$; this choice makes the Choi matrix equal to the $\chi$ matrix, and therefore $$ \boxed{\chi_{ij,kl}=\dfrac{1}{d}\sum_m\langle k|{E}_m|i\rangle\langle j|{E}_m^\dagger|l\rangle} $$ Applying the trace-preserving condition, $\sum_m E_m^\dagger E_m=I\implies\sum_m \langle k|E_m^\dagger E_m|i\rangle=\delta_{ik}$. This much is clear! Then it says: This directly influences the $\chi$: $$ \sum_j\chi_{ij,kj}=\delta_{ik} $$ There are (complex) constraints for all $d^2$ values of $i,k$. However, since everything is Hermitian, again this corresponds to $d^2$ real constraints. Thus, the total freedom remaining is $d^4-d^2$. How do we obtain $\sum_j\chi_{ij,kj}=\delta_{ik}$? How does imposing the trace-preserving condition $\sum_m E_m^\dagger E_m=I$ on the Choi matrix yield the number of constraints on the $\chi$ matrix? My Attempt: Thanks @JSdJ for the hint.
$\chi_{mn}=\langle\tilde{E}_m|\sigma|\tilde{E}_n\rangle\implies \sigma=\sum_{m,n}\chi_{mn}|\tilde{E}_m\rangle\langle\tilde{E}_n|$ Therefore, in order for $\sigma=\chi$, we need $|\tilde{E}_m\rangle=|m\rangle$, where $m:0\to d^2-1$. $(I\otimes\tilde{E}_m)|\alpha\rangle=|\tilde{E}_m\rangle=\dfrac{1}{\sqrt{d}}\sum_i |i\rangle\otimes\tilde{E}_m|i\rangle=\dfrac{1}{\sqrt{d}}[|0\rangle\otimes\tilde{E}_m|0\rangle+|1\rangle\otimes\tilde{E}_m|1\rangle+\cdots+|d-1\rangle\otimes\tilde{E}_m|d-1\rangle]$ Let's consider the case when $d=2$ (so $m:0\to d^2-1=3$). Here $\tilde{E}_m|i\rangle$ is the $i^{th}$ column of $\tilde{E}_m$, i.e., $|i\rangle\otimes\tilde{E}_m|i\rangle$ is a column vector with the $i^{th}$ column of $\tilde{E}_m$ as its $i^{th}$ block, all other elements being zero. So, $\sum_i |i\rangle\otimes\tilde{E}_m|i\rangle$ is the column vector with the columns of $\tilde{E}_m$ stacked on top of each other in order. We can divide $|m\rangle$ into $d$ blocks of dimension $d$ by taking $m=qd+t$, where $q,t:0\to d-1$. So, for $|\tilde{E}_m\rangle=|m\rangle$ we need to choose $\tilde{E}_m=\sqrt{d}|t\rangle\langle q|$, such that $\sigma=\sum_{m,n}\chi_{mn}|\tilde{E}_m\rangle\langle\tilde{E}_n|=\sum_{m,n}\chi_{mn}|m\rangle\langle n|=\chi$ $\therefore \chi_{ij,kl}=\dfrac{1}{d}\sum_m\langle k|{E}_m|i\rangle\langle j|{E}_m^\dagger|l\rangle$ Applying the trace-preserving condition, $\sum_m E_m^\dagger E_m=I\implies\sum_m \langle k|E_m^\dagger E_m|i\rangle=\delta_{ik}$, how do we reach\begin{align} \sum_j\chi_{ij,kj}&=\delta_{ik} \end{align} Answer: Counting the number of free parameters of Choi operators The number of independent parameters in the $\chi$ matrix is identical to the number of independent parameters in the Choi matrix, or in the channel itself, because these are all isomorphic representations of the same objects.
A map $\Phi$ acting on (the set of linear operators defined on) a Hilbert space $H$ is a channel (meaning it's completely positive and trace-preserving) iff its Choi representation, defined as $J(\Phi)\equiv\sum_{ij}\Phi(|i\rangle\!\langle j|)\otimes|i\rangle\!\langle j|$, is positive semidefinite and such that $\operatorname{tr}_1(J(\Phi))=I$. The set of Hermitian operators on a Hilbert space $H$ is a vector space of (real) dimension $\operatorname{dim}(H)^2$. The set of positive semidefinite operators on a Hilbert space $H$ is a (compact, convex) subset of this space, and thus has the same number of independent parameters. The set of Choi operators $J(\Phi)$ of channels $\Phi$ is a subset of the set of positive semidefinite operators on $H\otimes H$. It thus has $\dim(H\otimes H)^2=\dim(H)^4$ independent real parameters. However, the trace-preserving condition removes some of these degrees of freedom. The trace-preserving condition, which reads $\operatorname{tr}_1(J(\Phi))=I$, amounts explicitly to the $\dim(H)^2$ conditions: $$\sum_i J(\Phi)_{ij,ik}=\delta_{jk}, \qquad \forall j,k=1,...,\dim(H). $$ You can verify that these conditions are linearly independent, and thus remove $\dim(H)^2$ free parameters. You conclude that the set of Choi operators is a subset of $\operatorname{Lin}(H\otimes H)$ with $\dim(H)^4-\dim(H)^2$ free parameters. Reason directly from the $\chi$ representation You can also just reason directly in terms of the $\chi$ matrix, because $\chi$ and Choi are related via $\chi_{mn}=\langle \tilde E_m|J(\Phi)|\tilde E_n\rangle$, $J(\Phi)=\sum_{mn} \chi_{mn} |\tilde E_m\rangle\!\langle \tilde E_n|$ for some choice of orthonormal operatorial basis $\{\tilde E_m\}$. I think it's worth showing where some of the relations given in this answer come from though, because the notation can be tricky here. Namely, what exactly the kets and bras in $J(\Phi)=\sum_{mn} \chi_{mn} |\tilde E_m\rangle\!\langle \tilde E_n|$ mean.
I'll somewhat follow the notation in the linked answer, so that the relation between channel/map $\Phi$ and its Choi representation reads $$\Phi(\rho) = \sum_{m,n} \chi_{m,n} P_m \rho P_n^\dagger.$$ Each $P_m$ is here an operator acting on $H$, and $\{P_m\}$ is a basis for the space of such operators. The Choi representation derived from this decomposition is $$J(\Phi) = \sum_{ij} \Phi(E_{ij})\otimes E_{ij} \equiv \sum \chi_{m,n} (P_m E_{ij} P_n^\dagger)\otimes E_{ij}.$$ Consider the action of $J(\Phi)$ on some simple vector $|\alpha\rangle\otimes|\beta\rangle$, which gives $$J(\Phi)(|\alpha\rangle\otimes|\beta\rangle)= \sum_{m,n,i} \chi_{m,n} (P_m E_{i,\beta} P_n^\dagger |\alpha\rangle)\otimes |i\rangle \\ = \sum \chi_{m,n} (P_n^\dagger)_{\beta,\alpha} (P_m |i\rangle)\otimes |i\rangle = \sum \chi_{m,n} (\bar P_n)_{\alpha\beta} (P_m)_{ji} (|j\rangle\otimes |i\rangle).$$ In other words, the action of $J(\Phi)$ on a generic vector $v\in H\otimes H$ can be written as $$J(\Phi)v=\sum_{m,n} \chi_{m,n}\langle \operatorname{vec}(P_n), v \rangle \operatorname{vec}(P_m),$$ where $\operatorname{vec}(P)$ denotes the vectorisation of an operator $P$. Explicitly, if $P:H\to H$, then $\operatorname{vec}(P)\in H\otimes H$ with $\operatorname{vec}(P)_{ij}=P_{ij}$. The above relation is what we mean when writing $J(\Phi)$ as a sum over ket-bras of a basis of operators. So, armed with this knowledge, how do we compute $\operatorname{tr}_1(J(\Phi))$ from the expression with the $\chi$ matrix, which is what we want to do to derive what trace-preserving translates into for $\chi$? 
At the cost of being pedantic, we can start observing that $\operatorname{tr}_1(J(\Phi))$ means explicitly $$\operatorname{tr}_1(J(\Phi)) = \sum_{\alpha} (\langle\alpha|\otimes I)J(\Phi)(|\alpha\rangle\otimes I) \equiv \sum_{\alpha\beta\gamma} (\langle\alpha|\otimes\langle\gamma|)J(\Phi)(|\alpha\rangle\otimes|\beta\rangle) \, |\gamma\rangle\!\langle \beta|,$$ and using the expression for $J(\Phi)$ in terms of $\chi_{m,n}$ we get $$(\langle\alpha|\otimes\langle\gamma|)J(\Phi)(|\alpha\rangle\otimes|\beta\rangle) = \sum_{m,n} \chi_{m,n} (\bar P_n)_{\alpha,\beta} (P_m)_{\alpha,\gamma},$$ and therefore $$\operatorname{tr}_1(J(\Phi)) = \sum_{m,n} \chi_{m,n} P_n^\dagger P_m,$$ hence the trace-preserving condition becomes $\sum_{m,n} \chi_{m,n} P_n^\dagger P_m = I$.
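The counting can also be checked numerically. Here is a sketch (numpy; the dimension, number of Kraus operators, and seed are arbitrary choices) that builds a random trace-preserving channel from Kraus operators and verifies, on its Choi matrix in the convention $J(\Phi)=\sum_{ij}\Phi(|i\rangle\!\langle j|)\otimes|i\rangle\!\langle j|$ used above, both Hermiticity (giving $d^4$ real parameters) and the $d^2$ trace-preserving constraints $\sum_i J(\Phi)_{ij,ik}=\delta_{jk}$:

```python
import numpy as np

d, k = 3, 4                      # arbitrary dimension and number of Kraus ops
rng = np.random.default_rng(0)

# Random trace-preserving channel: stack a random (k*d x d) complex matrix and
# orthonormalize its columns, so the d x d blocks K_m satisfy sum_m K_m^† K_m = I.
A = rng.normal(size=(k * d, d)) + 1j * rng.normal(size=(k * d, d))
Q, _ = np.linalg.qr(A)           # Q has orthonormal columns: Q^† Q = I_d
kraus = [Q[m * d:(m + 1) * d, :] for m in range(k)]

# Choi matrix J(Phi) = sum_{ij} Phi(|i><j|) ⊗ |i><j|
J = np.zeros((d * d, d * d), dtype=complex)
for i in range(d):
    for j in range(d):
        Eij = np.zeros((d, d)); Eij[i, j] = 1.0
        J += np.kron(sum(K @ Eij @ K.conj().T for K in kraus), Eij)

# Hermiticity: d^4 real parameters; trace preservation: d^2 real constraints,
# leaving d^4 - d^2 free parameters (here 81 - 9 = 72).
tr1 = np.einsum('aiaj->ij', J.reshape(d, d, d, d))   # partial trace over factor 1
assert np.allclose(J, J.conj().T)                    # J is Hermitian
assert np.allclose(tr1, np.eye(d))                   # sum_i J_{ij,ik} = delta_jk
```

Since tr$_1(J)$ is itself Hermitian, the $d^2$ complex-looking conditions indeed amount to $d^2$ real constraints, matching the count in the answer.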
{ "domain": "quantumcomputing.stackexchange", "id": 4318, "tags": "quantum-operation, nielsen-and-chuang, kraus-representation, quantum-process-tomography" }
Packaging ROS with homebrew
Question: Are there plans or have there been attempts to package ROS as homebrew formulas in a similar way to the debs on Ubuntu? Edit: For future reference: With pointers from William a proof of concept has been created, but there are a few issues that need to be addressed: https://github.com/ros-infrastructure/bloom/issues/254#issuecomment-37401069 Originally posted by demmeln on ROS Answers with karma: 4306 on 2014-02-18 Post score: 4 Answer: I have looked into doing it for a long time now, but I have never had the time to set everything up. For now, all the time I can afford to spend on it is to keep it building from source on OS X. From my perspective, the main challenges to getting this working are as follows:

1. Generating Homebrew formulae for packages using bloom
2. Setting up a CI system on OS X which has a clean starting point for each build
3. Building bottles, pushing them to some storage (like download.ros.org/bottles), and updating the formula which uses the bottle

I think that generating formulae for packages is the only sustainable way to approach this problem. There are nearly 800 packages in Hydro and 200+ in desktop-full. Packaging them by hand is error-prone and unmaintainable. So coming up with a bloom generator for the formulae would be crucial. I started that work, but didn't get very far at all: https://github.com/wjwwood/bloom_homebrew The CI system for OS X has also been a challenge. I spent some time trying to set up a disk image with COW for OS X which would behave like chroot/pbuilder for linux but ran into a lot of issues. The other thing I tried was using a VM and snapshots through a Python API, but that required Parallels, which is not free and was slow. The Homebrew guys just clear /usr/local with git clean -fdx between builds for their "homebrew brewbot" system (https://www.kickstarter.com/projects/homebrew/brew-test-bot).
I believe we (OSRF) are setting up a similar setup using jenkins on our build farms: http://build.osrfoundation.org/view/SDFormat/job/sdformat-any-devel-homebrew-amd64/ If that is successful, I might try to use it for packaging ROS. Finally, for bottle support (which would be awesome), we would need to also build each formula with the --build-bottle option, create the bottle, upload it somewhere (like download.ros.org/bottles) and then update the formula again, probably pushing the formula to https://github.com/ros/homebrew-hydro/. So I have looked into it before, but as always progress on this is subject to my free time, which is in pretty low supply at the moment. Originally posted by William with karma: 17335 on 2014-02-19 This answer was ACCEPTED on the original site Post score: 5 Original comments Comment by demmeln on 2014-02-19: As always thanks for the comprehensive answer. At least the current state of affairs is out there now. I don't think I will manage to do it just now, but maybe me or otgera can work towards it in the future. I fully agree that automatic bottle creation is a must. I wonder if we can get support Comment by demmeln on 2014-02-19: from the homebrew guys for things like CI and bottle creation. However they might not be happy about 800+ incoming formulae. In any case publishing at least the core packages of a base install would be a gigantic leap for OSX support. Comment by demmeln on 2014-03-09: Upstream ticket to track progress on this: https://github.com/ros-infrastructure/bloom/issues/254
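To make the "bloom generator for formulae" idea concrete, the generated output might look roughly like the sketch below. This is a hypothetical, hand-written illustration only, not the output of any existing generator: the class name, URL, dependency names, and checksum are invented placeholders, and the bottle block shows where the CI-built binary would later be referenced.

```ruby
require "formula"

# Hypothetical auto-generated formula for a single ROS package.
class RosHydroRoscpp < Formula
  homepage "http://wiki.ros.org/roscpp"
  url "https://github.com/ros-gbp/ros_comm-release/archive/release/hydro/roscpp.tar.gz" # invented URL
  sha1 "0000000000000000000000000000000000000000" # placeholder checksum

  depends_on "cmake" => :build
  depends_on "ros-hydro-catkin" # inter-package deps would mirror the package.xml

  # Added after the CI build uploads a bottle:
  # bottle do
  #   root_url "http://download.ros.org/bottles"
  #   sha1 "..." => :mavericks
  # end

  def install
    system "cmake", ".", *std_cmake_args
    system "make", "install"
  end
end
```

One formula of this shape per package, emitted mechanically from each package's metadata, is what would make the ~800-package scale tractable.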
{ "domain": "robotics.stackexchange", "id": 16998, "tags": "ros, homebrew, bloom-release, osx" }
A possible simplification of the Time-Independent Schrodinger equation?
Question: The Time Independent Schrodinger Equation is as follows: $$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2}+V(x)\psi(x)=E\cdot\psi(x)$$ Due to the fact that $E=E_{kinetic}+E_{potential}$ and the fact that $V$ is the potential energy, the equation could be simplified to $$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2}=(E-V(x))\cdot\psi(x)=\frac{1}{2}mv^2\cdot\psi(x)$$ which can be rearranged to get $$-\frac{\hbar^2}{p^2}\frac{d^2\psi}{dx^2} = \psi(x)$$ After applying the de Broglie relationship, we have that $$-\left(\frac{\lambda}{2\pi}\right)^2\frac{d^2\psi}{dx^2}=\psi(x)$$ which can be simplified to $$\frac{d^2\psi}{dx^2} + k^2\psi(x) = 0$$ This is the differential equation for a system with zero potential energy, and it would be absurd to think that all systems behave as such. As such, what is the mistake in this simplification of the Schrodinger Equation, or if there is none, then is there any confirmation in the literature of this form? Answer: Thanks to @QMechanic and @JahanClaes for clearing up this question in the comments. As $$k=\frac{\sqrt{2m(E-V)}}{\hbar} = \frac{\sqrt{\hat{p}^2}}{\hbar}=-i\frac{\partial}{\partial x},$$ we have that $$k^2\psi=-\frac{\partial^2 \psi}{\partial x^2}$$ so when substituted into the derived equation we obtain an identity, and thus the derived equation is equivalent to the Schrodinger Equation. The confusion arose from considering momentum and kinetic energy as artifacts of classical mechanics, but it is resolved by considering momentum in the context of quantum mechanics, where $\hat{p}=-i\hbar \frac{\partial}{\partial x}.$
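One case where the rearrangement is a literal (non-operator) identity is a constant potential $V(x)=V_0$, where $k$ really is just a number. A sketch with sympy, checking that the plane wave $\psi=e^{ikx}$ with $k=\sqrt{2m(E-V_0)}/\hbar$ satisfies both the original TISE and the "simplified" form (this is my own check for the constant-$V$ case, not part of the answer above):

```python
import sympy as sp

x, m, hbar, E, V0 = sp.symbols('x m hbar E V_0', positive=True)
k = sp.sqrt(2*m*(E - V0)) / hbar      # assumes E > V0 (classically allowed region)
psi = sp.exp(sp.I * k * x)

# Original TISE with constant potential V(x) = V0:
tise = -hbar**2/(2*m) * sp.diff(psi, x, 2) + V0*psi - E*psi
assert sp.simplify(tise) == 0

# The derived form psi'' + k^2 psi = 0:
assert sp.simplify(sp.diff(psi, x, 2) + k**2 * psi) == 0
```

For a non-constant $V(x)$, $k$ would depend on $x$ and the algebraic rearrangement no longer goes through; that is exactly where the operator reading $\hat p = -i\hbar\,\partial/\partial x$ from the answer takes over.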
{ "domain": "physics.stackexchange", "id": 43658, "tags": "quantum-mechanics, schroedinger-equation" }
Should number of classes be the same in few shot learning train and test?
Question: I used to believe that in k-way n-shot few-shot learning, k and n (the number of classes and the number of samples from each class, respectively) must be the same in the train and test phases. But now I have come across a git repository for few-shot learning that uses different numbers in the train and test phases:

parser.add_argument('--dataset')
parser.add_argument('--distance', default='l2')
parser.add_argument('--n-train', default=1, type=int)
parser.add_argument('--n-test', default=1, type=int)
parser.add_argument('--k-train', default=60, type=int)
parser.add_argument('--k-test', default=5, type=int)
parser.add_argument('--q-train', default=5, type=int)
parser.add_argument('--q-test', default=1, type=int)

Are we allowed to do so? Answer: In few-shot learning, the number of classes does not have to be the same in the training and inference stages. Generally speaking, the number of classes in training is bigger than that in inference. The most crucial setting in few-shot learning is that the classes in the inference phase must not be present in the training phase. In other words, the intersection of the classes in training and the classes in inference is the empty set.
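A minimal sketch of episodic sampling that mirrors the repository's defaults (60-way training episodes, 5-way test episodes) while enforcing the one constraint that matters, disjoint class pools. The class/item names and pool sizes here are invented stand-ins; a real loader would return image tensors:

```python
import random

def sample_episode(class_pool, k, n, q):
    """Draw a k-way episode: n support and q query items per sampled class."""
    classes = random.sample(class_pool, k)
    support = {c: [f"{c}/img{i}" for i in range(n)] for c in classes}
    query   = {c: [f"{c}/img{n + i}" for i in range(q)] for c in classes}
    return support, query

# Disjoint class pools -- the setting that must hold:
train_classes = [f"train_class_{i}" for i in range(64)]
test_classes  = [f"test_class_{i}"  for i in range(20)]
assert not set(train_classes) & set(test_classes)

# k-train=60, n-train=1, q-train=5 vs. k-test=5, n-test=1, q-test=1:
train_support, train_query = sample_episode(train_classes, k=60, n=1, q=5)
test_support,  test_query  = sample_episode(test_classes,  k=5,  n=1, q=1)
assert len(train_support) == 60 and len(test_support) == 5
```

Training with a larger k than at test time makes each training episode harder, which is the usual motivation for defaults like these.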
{ "domain": "datascience.stackexchange", "id": 10483, "tags": "deep-learning, one-shot-learning, few-shot-learning" }
Picard CollectGcBiasMetrics ignoring certain chromosomes/sequences
Question: What's the easiest way to run Picard GCBias ignoring certain chromosomes/sequences in the reference? Looking at the CollectGcBiasMetrics, it seems there isn't a bed file option that can be passed.

java -jar ~/picard-tools-1.134/picard.jar CollectGcBiasMetrics --help

USAGE: CollectGcBiasMetrics [options]
Documentation: http://broadinstitute.github.io/picard/command-line-overview.html#CollectGcBiasMetrics
Tool to collect information about GC bias in the reads in a given BAM file. Computes the number of windows (of size specified by WINDOW_SIZE) in the genome at each GC% and counts the number of read starts in each GC bin. What is output and plotted is the "normalized coverage" in each bin - i.e. the number of reads per window normalized to the average number of reads per window across the whole genome.
Version: 1.134(a7a08c474e4d99346eec7a9956a8fe71943b5d80_1434033355)

Options:
--help -h Displays options specific to this tool.
--stdhelp -H Displays options specific to this tool AND options common to all Picard command line tools.
--version Displays program version.
CHART_OUTPUT=File CHART=File The PDF file to render the chart to. Required.
SUMMARY_OUTPUT=File S=File The text file to write summary metrics to. Default value: null.
WINDOW_SIZE=Integer The size of windows on the genome that are used to bin reads. Default value: 100. This option can be set to 'null' to clear the default value.
MINIMUM_GENOME_FRACTION=Double For summary metrics, exclude GC windows that include less than this fraction of the genome. Default value: 1.0E-5. This option can be set to 'null' to clear the default value.
IS_BISULFITE_SEQUENCED=Boolean BS=Boolean Whether the SAM or BAM file consists of bisulfite sequenced reads. Default value: false. This option can be set to 'null' to clear the default value. Possible values: {true, false}
METRIC_ACCUMULATION_LEVEL=MetricAccumulationLevel LEVEL=MetricAccumulationLevel The level(s) at which to accumulate metrics.
Default value: [ALL_READS]. This option can be set to 'null' to clear the default value. Possible values: {ALL_READS, SAMPLE, LIBRARY, READ_GROUP} This option may be specified 0 or more times. This option can be set to 'null' to clear the default list. INPUT=File I=File Input SAM or BAM file. Required. OUTPUT=File O=File File to write the output to. Required. ASSUME_SORTED=Boolean AS=Boolean If true (default), then the sort order in the header file will be ignored. Default value: true. This option can be set to 'null' to clear the default value. Possible values: {true, false} STOP_AFTER=Long Stop after processing N reads, mainly for debugging. Default value: 0. This option can be set to 'null' to clear the default value. Given this, and wanting to calculate GC bias using Picard (not other software), what's the best option?

Answer: The proper solution is to use a different tool (some of this you could do with computeGCBias in deepTools). But since you don't want to do that, you'll have to manually remove unwanted chromosomes from the BAM file(s):

$ cat foo.awk
BEGIN{split(excludes, excludeList, " ")}
{
    exclude=0
    if($1 == "@SQ") {
        for(ex in excludeList) {
            if(excludeList[ex] == substr($2, 4)) { exclude=1; break }
        }
    } else {
        for(ex in excludeList) {
            if(excludeList[ex] == $3) { exclude=1; break }
        }
    }
    if(!exclude) print
}
$ excludes="chr1 chr2"
$ samtools view -h alignments.bam | awk -v excludes="$excludes" -f foo.awk | samtools view -bo alignments.filtered.bam

The awk script is stripping unwanted chromosomes from the header and the body. You can then run the result through Picard.
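The same filtering can be sketched in Python over the text (SAM) stream. This is a hypothetical filter_sam helper for illustration only; the samtools pipeline above is still what handles the BAM compression on either end:

```python
def filter_sam(lines, excludes):
    """Drop @SQ header lines and alignment records for excluded chromosomes.

    `lines` are SAM-format text lines (e.g. from `samtools view -h`);
    `excludes` is a set of reference names like {"chr1", "chr2"}.
    Mirrors the awk script above."""
    kept = []
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        if fields[0] == "@SQ":
            # Header line: the reference name sits in the SN: tag, e.g. "SN:chr1".
            if fields[1][3:] in excludes:
                continue
        elif not fields[0].startswith("@"):
            # Alignment record: column 3 (RNAME) holds the reference name.
            if fields[2] in excludes:
                continue
        kept.append(line)
    return kept
```

You would decompress with `samtools view -h`, pass the lines through this function, and re-compress the result, exactly as the shell pipeline does.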
{ "domain": "bioinformatics.stackexchange", "id": 393, "tags": "sam, picard" }
Error starting Turtlebot 2
Question: Hey, I am currently using a Turtlebot 2 with ROS groovy. When I start the script minimal.launch in the package turtlebot_bringup on the robot's laptop, I get the following error : nodelet: /usr/include/eigen3/Eigen/src/Core/DenseStorage.h:69: Eigen::internal::plain_array<T, Size, MatrixOrArrayOptions, 16>::plain_array() [with T = float, int Size = 4, int MatrixOrArrayOptions = 0]: Assertion `(reinterpret_cast<size_t>(array) & 0xf) == 0 && "this assertion is explained here: " "http://eigen.tuxfamily.org/dox-devel/TopicUnalignedArrayAssert.html" " **** READ THIS WEB PAGE !!! ****"' failed. [FATAL] [1368451090.542690660]: Service call failed! [FATAL] [1368451090.544932863]: Service call failed! [FATAL] [1368451090.544932853]: Service call failed! [mobile_base_nodelet_manager-5] process has died [pid 26924, exit code -6, cmd /opt/ros/groovy/lib/nodelet/nodelet manager __name:=mobile_base_nodelet_manager __log:=/home/turtlebot/.ros/log/687078bc-bbcf-11e2-97f0-446d57c545ae/mobile_base_nodelet_manager-5.log]. log file: /home/turtlebot/.ros/log/687078bc-bbcf-11e2-97f0-446d57c545ae/mobile_base_nodelet_manager-5*.log [bumper2pointcloud-9] process has died [pid 27236, exit code 255, cmd /opt/ros/groovy/lib/nodelet/nodelet load kobuki_bumper2pc/Bumper2PcNodelet mobile_base_nodelet_manager bumper2pointcloud/pointcloud:=mobile_base/sensors/bumper_pointcloud bumper2pointcloud/cliff_events:=mobile_base/events/cliff bumper2pointcloud/bumper_events:=mobile_base/events/bumper __name:=bumper2pointcloud __log:=/home/turtlebot/.ros/log/687078bc-bbcf-11e2-97f0-446d57c545ae/bumper2pointcloud-9.log]. It seems that the problem come from the bumper2pc node, because if I comment the include tag corresponding to the bumper2pc I don't get the error... Moreover if I connect the Kobuki base to my desktop computer, I can start the script and everything works perfectly. All packages are up to date on both computers. How can I solve this problem ? 
Caroline Originally posted by CarolineQ on ROS Answers with karma: 395 on 2013-05-13 Post score: 1 Original comments Comment by Daniel Stonier on 2013-05-13: What model/make is your laptop Caroline? Given that your pc is fine with the same software, maybe that can point us toward actually being able to reproduce the problem. Comment by CarolineQ on 2013-05-13: @Daniel : Thanks for your help, the model of my laptop is : Asus Eee PC 1025C. Comment by jorge on 2013-05-14: Another question; do you have 32 or 64 bits OS installed? Comment by CarolineQ on 2013-05-14: @jorge : On the robot's laptop (Asus) I have a 32 bits OS installed, could this be a problem ? Thanks for help. Comment by Daniel Stonier on 2013-05-14: Yes, I think we have narrowed down the problem. Eigen will align long, float and double variables differently on 32/64 bit, so what may work on one, might not on the other. We'll try and reproduce here. Comment by Daniel Stonier on 2013-05-14: Follow the bug report on github. Comment by CarolineQ on 2013-05-14: I will follow the bug report. Thanks ! Answer: We fixed it and released just now. New version should be available soon on shadow-fix repo. Thanks for your inputs! Originally posted by jorge with karma: 2284 on 2013-05-21 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 14153, "tags": "turtlebot" }
Possible and probable source of Curiosity Rover's $\small\sf{CH_4}$ detection on Mars
Question: Last week, NASA announced that the Curiosity rover measured 21 ppbv of $\small\sf{CH_4}$ on Mars. This week, NASA's Curiosity Mars rover found a surprising result: the largest amount of methane ever measured during the mission — about 21 parts per billion units by volume (ppbv). However, apparently, Mars' atmosphere is methane-free. ESA satellites are not measuring it: "We did not detect any methane over a range of latitudes in both hemispheres, obtaining an upper limit for methane of about 0.05 parts per billion by volume, which is 10 to 100 times lower than previously reported positive detections" (Korablev et al., 2019). What is the possible and probable source of the methane the rover is measuring? - Korablev, O. et al. (2019): "No detection of methane on Mars from early ExoMars Trace Gas Orbiter observations." Nature 568, pages 517–520 Answer: The possible source of methane is biological, and that is what everyone is hoping for, but the more likely source is geological, produced by chemical reactions between rock and water deep underground and issuing through fissures to the surface. We are dealing with very small amounts here which are difficult to detect and measure with precision. Once out in the open, they are dispersed by breezes and decomposed by sunlight, so that could explain why they come and go. It has been suggested that methane abundance varies with the seasons, but I can't see any mechanism for that, apart from clathrates. Clathrates are a sort of ice composed of water and methane, and are abundant on Earth in the ocean and in Arctic tundra. When heated, clathrates decompose and release methane. Another suggestion is that the Curiosity rover is incontinently leaking methane wherever it goes and contaminating the samples it analyses, but I can't see any explanation for that either. I think methane has been detected on Mars, but we will have to keep an open mind on the source until more data has been gathered. 
My own opinion is that there has never been life on Mars, but I agree that we can't rule out the possibility that 4 billion years ago conditions were sufficiently Earth-like that Mars evolved its own primeval soup and life got started in the same way as it did on Earth, but has since been eradicated or driven underground.
{ "domain": "earthscience.stackexchange", "id": 1808, "tags": "geochemistry, planetary-science, mars, methane" }
Applying ideal gas law to every day problems
Question: Three classical examples of seeing the ideal gas law in action:

- a crumpled water bottle in a cold car
- a crumpled water bottle after your flight lands
- a fridge door that is harder to open a second time, right after the first.

I'm afraid I'm having trouble understanding all three, and I believe it stems from a single reason: the derivation of the ideal gas law is for a closed container subject to the conditions of temperature, pressure, volume, and number of moles in THAT closed container. The explanations I see regarding these everyday problems are either vague in regard to which quantities in the gas law are considered as being held constant, or vague in regard to describing the conditions IN the container; they always tend to talk about conditions outside the container. What am I missing here? Can someone elaborate, specifically telling me which variables are considered constant in each case, and which are fluctuating and why?

Answer:

1. Crumpled water bottle in cold car

System: air inside the closed bottle.

- Amount of gas is constant. (Bottle is closed.)
- Pressure is constant. (Atmospheric pressure on the outside of the bottle doesn't change, so at equilibrium the pressure inside the bottle must equal the external atmospheric pressure.)
- Temperature decreases. (You moved your bottle from room temperature to the colder car, so the air in the bottle will reach equilibrium with the environment at a lower temperature.)
- Therefore, volume decreases. (Your bottle is not rigid, so it crumples to decrease the volume in response to the other conditions.)

2. Crumpled water bottle after plane lands

System: air inside the closed bottle.

- Amount of gas is constant. (Bottle is closed.)
- Temperature is constant. (Your plane cabin is heated for your comfort.)
- Pressure increases. (Although the cabin is pressurised, it isn't pressurised to the atmospheric pressure at ground level, and it isn't completely airtight. When the plane lands, it goes from the lower-pressure sky level to the higher-pressure ground level. So the external pressure on the bottle increases, and at equilibrium the internal pressure must match that.)
- Therefore, volume decreases. (Your bottle is not rigid, so it crumples to decrease the volume in response to the other conditions.)

3. Fridge door harder to open second time after first

System: air in the fridge. After closing the door and powering on your new fridge, after some time you find it harder to open the door.

- Amount of gas is constant. (Up to the moment you open the fridge, the fridge is closed.)
- Volume is constant. (Fridges don't crumple.)
- Temperature decreases. (Your new fridge works.)
- Therefore, pressure inside the fridge decreases.

As a result, the external pressure is greater than the internal pressure, which means there is a force on the fridge door acting inwards due to the pressure difference. So you will find it harder to open the door.
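Putting numbers on the first case makes the bookkeeping explicit. This is a toy sketch; the bottle size and temperatures are invented for illustration:

```python
R = 8.314  # gas constant, J/(mol*K)

def ideal_gas_solve(n=None, P=None, V=None, T=None):
    """Solve PV = nRT for whichever single argument is left as None.
    SI units throughout: P in Pa, V in m^3, T in K, n in mol."""
    if V is None:
        return n * R * T / P
    if P is None:
        return n * R * T / V
    if T is None:
        return P * V / (n * R)
    return P * V / (R * T)  # n is None

# Case 1: closed bottle (n fixed) at constant atmospheric pressure,
# moved from a 25 C room into a 0 C car, so the volume must shrink.
P_atm = 101325.0
n = ideal_gas_solve(P=P_atm, V=0.5e-3, T=298.15)   # moles of air in a 0.5 L bottle
V_cold = ideal_gas_solve(n=n, P=P_atm, T=273.15)
print(f"V_cold = {V_cold * 1e3:.3f} L")  # → V_cold = 0.458 L, about 8% smaller
```

At constant n and P this reduces to V2 = V1 * T2/T1, which is exactly the "temperature decreases, therefore volume decreases" step in the answer.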
{ "domain": "physics.stackexchange", "id": 55795, "tags": "ideal-gas, gas, states-of-matter" }
DNA preparation for sequencing
Question: In the shotgun sequencing method, or related methods, DNA is broken up into random fragments. Fragments of about 3 kb in size are inserted into plasmids by the enzyme ligase, and the plasmids are then transformed into E. coli using an electroporator. As I understand it, the goal is to amplify the fragments to be sequenced. Here is my question: after the E. coli grow up, we have many different types of miscellaneous DNA strands of the same size in a colony. How do we purify a single type of fragment for sequencing? For example, say I have 20,000 different DNA strands of the same 3 kb size after breaking the DNA into random fragments:

-- AAAAAAAAAAAAAAAAAAAA...AAAAAAAAAAAAAAAAATTTTTTT -- (3kb fragment of chromosome 1)
-- GGGGXXXXXAAAAAAAAAAA...AAAAAAAAAAAAAAAAAAAAAAAA -- (3kb fragment of chromosome 2)
-- 19998 DNA fragment left ............................. -- (3kb fragment of random chromosome)

So after amplifying them I have:

1000000x of -- AAAAAAAAAAAAAAAAAAAA...AAAAAAAAAAAAAAAAATTTTTTT --
1000000x of -- GGGGXXXXXAAAAAAAAAAA...AAAAAAAAAAAAAAAAAAAAAAAA --
1000000x of -- 19998 DNA fragment left........................ --

These DNA strands are mixed together in a colony. We cannot sequence them together as a mixture, so how do we get only one for sequencing:

1000000x of -- AAAAAAAAAAAAAAAAAAAA...AAAAAAAAAAAAAAAAATTTTTTT --

Answer: I hope I understood your question correctly. I think what you are missing is that after you break up the DNA, you dilute the concentration so that each bacterium will incorporate ~1 DNA molecule. Then, you dilute the bacteria on a plate and spread them so that each single bacterium is far from the other bacteria. Finally, after they grow, you get one genetically identical colony per DNA fragment.
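The dilution step can be made quantitative. If plasmid uptake is roughly Poisson-distributed (a modelling assumption on my part, not something stated in the answer), then at an average of about one molecule per cell:

```python
import math

def poisson_pmf(k, lam):
    """P(a cell takes up exactly k DNA molecules), assuming Poisson uptake."""
    return math.exp(-lam) * lam**k / math.factorial(k)

mean_uptake = 1.0  # ~1 molecule per bacterium after dilution
p_one = poisson_pmf(1, mean_uptake)
p_none = poisson_pmf(0, mean_uptake)
print(f"exactly one: {p_one:.3f}, none: {p_none:.3f}, two or more: {1 - p_one - p_none:.3f}")
# → exactly one: 0.368, none: 0.368, two or more: 0.264
```

So most transformed cells carry zero or one fragment, and antibiotic selection then removes the cells that took up nothing, which is why each surviving colony amplifies a single fragment.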
{ "domain": "biology.stackexchange", "id": 1198, "tags": "molecular-biology, dna-sequencing" }
What are these tiny creatures swimming around my aquarium?
Question: I was checking out my aquarium today, and decided to scoop out a water sample and take a look at it more closely. What I discovered were numerous, very, very tiny white worms (?) swimming around in the water. The worms hover around 1 mm or smaller, almost invisible to the naked eye, with some exceptional specimens reaching nearly 2 mm in length. They swim very fluently, not by wiggling or waving their bodies around, but simply, effortlessly gliding through the water. I checked them out under a microscope, and recorded some videos. I apologize in advance for the lackluster quality, as I'm not working with expensive equipment here. It took me hours to locate the worms under the microscope, made no easier by how surprisingly fast they are. Here's a picture: When I first saw this, I wasn't sure I was looking at the right thing. As you can see in this video, it appears to stay rooted at the tail, while wiggling its head (?) around in various directions. Its most notable characteristic is the two ball-like structures on the front of its body. I have a feeling that those structures will be key here. This next part was very poorly recorded, but boy did it surprise me when I saw it. Eventually, the worm stretched forward, its head seemingly morphing into three parts (video): Finally, here's one more clip showing off how fast they can be when they want to (I swear it's not even sped up!). I wasn't able to get it back under the microscope at this point. Does anyone know what these are, or at least what kinds of worms they might be related to (with respect to the ball-like structures on its head)? For a little extra background, I found these in my native tank sourced with creatures caught in southern Alberta. The tank contains some longnose dace, which have been a source of concern due to heavy flashing (scratching) ever since I caught them. It makes me wonder if these worms are related, such as some kind of parasite. 
I'm happy to have them if they're some harmless detritus worm-- any members of the cleanup crew are welcome-- but if they're something to be worried about, I'd prefer to know. Answer: Hard to tell because of the poor picture/video quality, but almost immediately the body shape, distinct "face", size and behavior made me think of a small invertebrate animal called a rotifer (Phylum Rotifera). Rotifer eating protists. Photo Credit: Jacqueline Ronson, 2016 There are about 2000 species occurring worldwide within this primarily freshwater phylum, each of which is fairly simple anatomically. Their name (which means "wheel bearer") comes from the two crowns of quick-moving cilia they possess on their heads: Source: Gifer These cilia attempt to push food posteriorly toward the rotifer's back-set "mouth" (which is actually a modified pharynx called a mastax: Photo Credit: Jean-Marie Cavanihac Rotifers are small (always < 2 mm) and so eat small planktonic and protist prey. From UC Berkeley: As rotifers are microscopic animals, their diet must consist of matter small enough to fit through their tiny mouths during filter feeding. Rotifers are primarily omnivorous, but some species have been known to be cannibalistic. The diet of rotifers most commonly consists of dead or decomposing organic materials, as well as unicellular algae and other phytoplankton that are primary producers in aquatic communities. As for body shape: Although all Rotifer species are bilateral animals, they vary quite a bit in body shape. Box-like shaped rotifers are referred to as being loricate, while more flexible worm-shaped species are considered to be illoricate. Photo Credit: Juan Carlos Fonseca Mata Regarding sometimes staying "rooted at the tail" as the OP observed: Some rotifers are found usually free-swimming, while others "glue" down their posterior "toes" to anchor themselves to some substrate or debris. 
Again from UC Berkeley: The final region of the rotifer body is the foot; this foot ends in a "toe" containing a cement gland with which the rotifer may attach itself to objects in the water and sift food at its leisure. Source: Starr et al. 2013 (Cengage Learning) See here for further "fun facts!"
{ "domain": "biology.stackexchange", "id": 9603, "tags": "species-identification, zoology, invertebrates, limnology" }
Adaptive Digital Filter Block Diagram Question
Question: I'm currently attempting to study up on adaptive digital filters. My book presents the diagram I've included below and I'm having trouble understanding conceptually what it's indicating. The problem deals with noise cancelation. The idea is that someone is driving and makes a phone call. The x(k) is their voice input. There's a reference mic at v(k) which picks up road noise. I know that the ultimate goal is to filter road noise from our transmitted voice signal. The desired output is obviously: $$ d(k)=x(k)+v(k) $$ The error in this case is: $$ e(k)=x(k)+v(k)-y(k) $$ Taking a quote from my book, it states If the speech x(k) and the additive road noise v(k) are uncorrelated with one another, then the minimum possible value for $e^2(k)$ occurs when y(k) = v(k), which corresponds to the road noise being removed completely from the transmitted speech signal e(k). I don't understand how e(k) is our output of the system though. It seems to me that if we minimize our error, then it approaches zero. This means that $d(k)-y(k) = e(k)=0$ Consequently if our output is the error and we've minimized it, it seems like we're outputting 0 not a transmitted signal e(k) with the road noise removed!? I guess I'm asking why our desired output d(k) isn't our output....why is the error the output? Can somebody help me understand this conceptually? Thank you for your help! Please let me know if I need to clarify anything. Answer: Judging from the figure, the situation is slightly different from your explanation in the question. The noise $v(k)$ is the actual noise in the signal, not the noise picked up by the reference microphone. So the noisy signal is $d(k)=x(k)+v(k)$. If you knew $v(k)$ you could simply subtract it from $x(k)$ without the need to use an adaptive filter. What you have is another noise signal $r(k)$, which is a filtered version of the noise $v(k)$. This unknown filter is depicted by the "black box" in the figure. 
It is filtered because the transfer function from the noise source (road, tires, etc.) to the reference microphone is different from the transfer function to the microphone recording the speech. What the adaptive filter is trying to do is estimate the noise in the speech signal from the reference noise, i.e. it tries to invert the unknown filter in the black box. This can be achieved by minimizing the power of the error signal $e(k)$. The reason is that if you assume that speech and noise are uncorrelated, the output of the adaptive filter can only reduce the noise component in $d(k)$, not the speech component. So the power of $e(k)$ becomes a minimum if the output of the adaptive filter $y(k)$ equals $v(k)$. You don't need to worry that the speech signal is removed because $y(k)$ cannot model the speech signal at all, because noise and speech are assumed to be uncorrelated. So ideally the error signal $e(k)$ contains only clean speech. Note that due to noise and speech being uncorrelated, the power of $e(k)$ can never become zero. Neither can the error signal itself become zero (for all $k$).
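A minimal NumPy sketch of this setup may help. Everything here (the toy signals, the "black box" filter coefficients, the filter length, and the step size) is invented for illustration; the adaptation rule is plain LMS:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Toy stand-ins: x(k) is the "speech"; r(k) is what the reference mic hears;
# r reaches the speech mic through a different acoustic path (the answer's
# "black box"), producing the noise v(k) that actually corrupts the speech.
k = np.arange(n)
x = np.sin(2 * np.pi * 0.01 * k)                 # "speech"
r = rng.standard_normal(n)                       # reference noise
black_box = np.array([0.6, -0.3, 0.1])           # unknown path, in practice
v = np.convolve(r, black_box)[:n]                # noise at the speech mic
d = x + v                                        # noisy recording d(k)

# LMS adaptive filter: estimate v(k) from r(k); the output is e(k) = d(k) - y(k).
taps, mu = 5, 0.01
w = np.zeros(taps)
e = np.zeros(n)
for i in range(taps, n):
    u = r[i - taps + 1:i + 1][::-1]              # newest reference sample first
    y = w @ u                                    # y(k), the noise estimate
    e[i] = d[i] - y
    w += mu * e[i] * u                           # LMS weight update

# Once converged, y(k) ≈ v(k), so e(k) ≈ x(k): the "error" IS the cleaned speech.
resid = e[n // 2:] - x[n // 2:]
print(np.var(v), np.var(resid))                  # noise power before vs. after
```

Note how the residual noise power in e(k) is far below the original noise power, while the speech component passes through untouched, because the adaptive filter can only model what is correlated with the reference.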
{ "domain": "dsp.stackexchange", "id": 2554, "tags": "noise, adaptive-filters" }
CFG of all regular expressions over a binary alphabet
Question: I'm working on creating a rather difficult CFG and I am getting stuck on finishing it. The CFG in question is meant to contain all valid regular expressions using the alphabet {0, 1, (, ), *, +, e} (e for epsilon). Some examples I know should be accepted are things like:

e
0
01
1010
0*1*0*1*
0*11+(10)*+(e+1*0*)
((100*)*(10*)*)*

While things such as these would be rejected:

ee
)(e+1*)*
(10)*++(

et cetera. I've been building up case by case, and I have this rather ugly-looking CFG that prevents most incorrect cases, but it does not come close to getting all the correct ones:

S → (N) | M+M | N | (N)*
M → 0N | 1N | 0N | 1N | (N) | (N)* | M+M | e
N → 0N | 1N | 0N | 1N | ɛ

Apologies if this has been asked before; I tried searching everywhere here and on Google and I was not able to find someone else trying to create the same or similar CFG, but if this is a repeat I'd appreciate being pointed to the original!! If helpful, I've been using this tool to test my CFG: https://web.stanford.edu/class/archive/cs/cs103/cs103.1156/tools/cfg/

Answer: You are quite close to the solution. We will use a few variables, each corresponding (intuitively) to some other "thing". Specifically, we will use the variables $S,E,A,B$. $S$ is the starting variable. $E$ is a variable that will produce a valid regular expression (it's called $E$ as short for "expression"). $A$ will be some valid string over the alphabet $\{0, 1\}$, and $B$ will be a non-empty string over that same alphabet. The CFG will now be:

$S\rightarrow E$
$E \rightarrow (E)(E) \space | \space E+E \space|\space E^* \space | \space (E) \space|\space A$
$A\rightarrow B\space |\space e$
$B \rightarrow 0B \space | \space 1B \space | \space 0 \space | \space 1$

I hope this CFG is what you are looking for! (I don't know if it is, since you didn't state the definition of the syntax of a regular expression using this alphabet, so I have only tried to go by the examples.)
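A candidate grammar like this can be sanity-checked against the examples with a brute-force membership test. The sketch below encodes the answer's grammar exactly as written; every production strictly shortens the string, so the memoized recursion terminates, though it is only practical for short inputs:

```python
from functools import lru_cache

BINARY = set("01")

def matches_B(s):
    # B -> 0B | 1B | 0 | 1 : any non-empty binary string
    return len(s) > 0 and set(s) <= BINARY

def matches_A(s):
    # A -> B | e
    return s == "e" or matches_B(s)

@lru_cache(maxsize=None)
def matches_E(s):
    """Brute-force test of whether E derives s, trying each production."""
    if matches_A(s):                                               # E -> A
        return True
    if s.endswith("*") and matches_E(s[:-1]):                      # E -> E*
        return True
    if len(s) >= 2 and s[0] == "(" and s[-1] == ")" and matches_E(s[1:-1]):
        return True                                                # E -> (E)
    for i in range(1, len(s) - 1):                                 # E -> E+E
        if s[i] == "+" and matches_E(s[:i]) and matches_E(s[i + 1:]):
            return True
    for i in range(1, len(s)):                                     # E -> (E)(E)
        left, right = s[:i], s[i:]
        if (len(left) >= 2 and left[0] == "(" and left[-1] == ")"
                and matches_E(left[1:-1])
                and len(right) >= 2 and right[0] == "(" and right[-1] == ")"
                and matches_E(right[1:-1])):
            return True
    return False

print([matches_E(t) for t in ["e", "01", "(10)*", "(0)(1)", "ee", ")("]])
# → [True, True, True, True, False, False]
```

Running the full example lists through such a checker is a quick way to spot strings a proposed grammar fails to derive.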
{ "domain": "cs.stackexchange", "id": 17848, "tags": "context-free, regular-expressions" }
Is electric field inside a conductor really always zero?
Question: The standard explanation in textbooks goes that in the presence of an electric field (e.g. an external electric field) the free electrons inside the conductor will keep moving until electrostatic equilibrium is reached, and that they will create a field which will cancel out the external field (e.g. electrons move to one side, leaving positive charge on the other side, thus creating an electric field that cancels out the external field). But what if the external field is so strong that there are just not enough free electrons to cancel out the external field? What if there is nothing left to be moved? E.g. I have electric field lines going from left to right, all free electrons move to the left surface, there is nothing left to move, but it's still not enough to cancel out the external field. Will there still remain an electric field inside the conductor then? If not, what will happen? Answer: The "nothing left to move" scenario is impossible with a metal: the electrostatic potential needed to deplete a metal is far beyond what is sufficient to destroy any experiment. It is, however, possible in a semiconductor, where the density of charge carriers is low enough. The silicon technology that is everywhere these days is based on manipulating depletion regions in silicon crystals. So the answer to your question "what will happen" is that you make something like a diode or transistor, depending on the geometry and the applied potential. It's not as simple as applying a potential and sweeping the charge out: you need a way, with doping, an insulated gate, or a Schottky barrier, to prevent replenishment of the charge via whatever electrodes are applying the potential.
{ "domain": "physics.stackexchange", "id": 97331, "tags": "electrostatics, electric-fields, conductors" }
Does plasmolysis kill plant cells?
Question: I know when a plant cell becomes fully plasmolysed that the protoplast shrinks away and eventually is no longer joined to the cell wall. Is this irreversible damage? Can the cells survive it and continue to function if they are put back into a favourable solution? If so, how does the protoplast rejoin to the cell wall? Answer: No. Plasmolysis is not necessarily fatal for the cell. Plant cells normally recover from this condition when water is available. (But cell death can take place under excessive or prolonged lack of water.) It is an incomplete truth that during plasmolysis, plant cells get detached from the cell wall. Actually the plasmodesmatal connections remain there, and a portion of the cortical (outermost) layer of cytoplasm, mainly containing "cortical endoplasmic reticulum", a group of endoplasmic reticulum spanning close to the cell wall, remains there. Now, when plasmolysis takes place, the cell membrane shifts apart from the cell wall; but in the space in between, fine cytoplasmic threads called Hechtian strands form. So the connection is retained. (As well, at some portions of the cell wall, the bulk protoplast remains in contact.) Here are photos showing Hechtian strands, taken from the book Plant Cell Biology- structure and function, by Gunning and Steer, Jones and Bartlett publishers, Copyright 1996; Image-1 (Book image 16-C) Hechtian strands of stretched plasma membrane extended from the plasmolysed protoplast to the cell wall (K. Hecht, 1912, was the first to study them in detail). This confocal micrograph shows strands in a 16 μm-deep zone. Some strands extend from plasmodesmata (arrows), where the membrane passes through the cell wall, but many attach to other sites. In 1931, another early investigator, J. Plowe, pushed plasmolysed protoplasts along inside their cell walls, observing that existing strands stretched and broke, and that new strands could form. 
In today's terminology we would say that new strands arise when free binding sites on the plasma membrane contact and bind their ligands in the cell wall. Reference: Plant Cell Biology- structure and function, by Gunning and Steer, Jones and Bartlett publishers, Copyright 1996 Update: An interesting piece of information: there is one method of plant protoplast isolation, without enzymic digestion of the wall, which uses plasmolysis. Since the protoplast (and cell membrane) gets shifted away from part of the cell wall, it becomes easy to rupture the wall without rupturing the cell membrane. (Discussed on this website.) The protoplast remains alive after plasmolysis; otherwise we wouldn't be able to use this method for protoplast culture, cell fusion, etc.
{ "domain": "biology.stackexchange", "id": 6291, "tags": "cell-biology, plant-physiology" }
Signature verification - machine learning
Question: I have to do a project on signature verification. My goal is a program that takes two images as input (one is a genuine signature of a selected subject, and the other is the one that I want to test). I have a dataset to train the algorithm with 3000 signatures: half of them are genuine and the other half are forgeries, and they are taken from 150 subjects. My question concerns the comparison between the two images I use as input (the genuine one and the one I want to test). How can I deal with it? Answer: I think a Siamese Network is the solution you are looking for. Arxiv has a paper which precisely addresses your problem.
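For a sense of the Siamese idea, here is a minimal NumPy sketch. The image size, the embedding (a single tanh layer standing in for a real convolutional branch), and the margin are all invented for illustration; a real system would learn the shared weights on the 3000-signature, 150-subject dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(img, W):
    """Shared embedding: BOTH inputs go through the SAME weights W.
    That weight sharing is what makes the network 'Siamese'."""
    return np.tanh(W @ img.ravel())

def contrastive_loss(z1, z2, same, margin=1.0):
    """Pull genuine pairs together; push forgery pairs at least `margin` apart."""
    d = np.linalg.norm(z1 - z2)
    return d**2 if same else max(0.0, margin - d)**2

# Toy 8x8 "signature" images and an untrained random projection.
W = rng.standard_normal((16, 64)) * 0.1
genuine = rng.standard_normal((8, 8))
forgery = rng.standard_normal((8, 8))

loss_same = contrastive_loss(embed(genuine, W), embed(genuine, W), same=True)
loss_diff = contrastive_loss(embed(genuine, W), embed(forgery, W), same=False)
print(loss_same)  # 0.0: identical inputs embed identically, zero distance
```

At verification time you embed the reference signature and the test signature, compute their distance, and threshold it, so the network compares pairs rather than classifying single images.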
{ "domain": "datascience.stackexchange", "id": 4247, "tags": "machine-learning, classification, image-classification" }
Breaking bonds with energy
Question: I'm learning about breaking molecular bonds currently, and I'm wondering what the methods are to actually use the amount of bond energy to break the bond. What I'm saying is, knowing the bond energy, how can you actually use this information to break the bond? Do you heat it up by the equivalent amount in kJ/mol, or something similar to that? Answer: Yes, the energy listed as needed to break bonds is usually supplied in the form of heat. I don't believe there is another common way to do it, but don't quote me on it. I haven't learned about lasers and other such technologies :). Keep in mind that breaking bonds takes in energy, while forming bonds releases energy. Often, you will just need to start a reaction with some outside energy source (a Bunsen burner, for example) and it will be able to continue on its own, because the new bonds it forms release more energy than it costs to break the old ones. If you look at the formation of magnesium oxide, all you really have to do is light a small end of a piece of magnesium ribbon, and the whole thing burns because it releases energy as it reacts. I hope that helps!
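To make the energy bookkeeping concrete, here is a small back-of-the-envelope sketch. The kJ/mol values are typical textbook average bond energies I have supplied for illustration, not figures from the answer:

```python
# Average bond energies in kJ/mol (typical textbook values).
bond_energy = {"C-H": 413, "O=O": 498, "C=O": 799, "O-H": 463}

# Combustion of methane: CH4 + 2 O2 -> CO2 + 2 H2O
absorbed = 4 * bond_energy["C-H"] + 2 * bond_energy["O=O"]   # bonds broken
released = 2 * bond_energy["C=O"] + 4 * bond_energy["O-H"]   # bonds formed
dH = absorbed - released
print(f"ΔH ≈ {dH} kJ/mol")
# → ΔH ≈ -802 kJ/mol: forming the new bonds releases more than breaking absorbs
```

The negative sign is why a single spark is enough: once started, the reaction pays for its own bond breaking and keeps going.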
{ "domain": "chemistry.stackexchange", "id": 5080, "tags": "bond" }
Why don't electric workers get electrocuted when only touching one wire?
Question: I know that when electricians work on the poles on the streets, if they only touch one wire at a time they will be fine. However, from my understanding, the negative wire is connected to a large negative terminal, and the positive wire to a large positive terminal. Now for example, if you touch the positive wire, wouldn't all of the electrons get stripped away from your body? Another way to look at it is that current always flows from a higher to a lower potential. If the potential in the wire is 100V (I have no idea what it is) and your potential is 0, wouldn't current flow through you? Answer: AC or DC, you only get electrocuted if current passes through your body. (Current passing through any part of your body can be dangerous, and possibly cause an electrical burn, but current passing across your heart is the one that's really dangerous.) Touching just one wire at a time gives the current nowhere much to go. You are right to think that some electrons can get stripped from your body when you touch a bare wire. But not many. Once they've gone, unless your body gets new electrons from somewhere else, the current stops. If you're standing in a pool of water, or touching a metal pole, or another wire that can conduct lots of electrons from somewhere else, you're fried. So how many electrons get conducted away from the human if it has no other source of electrons? In this case, the human acts as a capacitor. Now, Wikipedia tells me that one standard for this approximates a human as having capacitance $C=100\,\mathrm{pF}$. This is pretty tiny. (For comparison, the capacitors you might see if you open up a computer or other electronics can easily have capacitance billions of times larger.) If the voltage is $V=100\,000\,\mathrm{V}$ (which is really quite high), the charge that would be transferred is $Q = C\, V = 10^{-5}\,\mathrm{C} \approx 10^{14}$ electrons. You probably have $10\,000$ times more electrons in a single eyelash, so that's not many. 
The danger there is that at such high voltages, a lot of things become conductive that aren't normally. P.S. Most electrical wires that you see in a city (though not those for trams, or the really big high-voltage lines) are actually insulated. (I worked as an electrician through college, and have touched many such wires.) So the birds aren't frequently touching anything dangerous to begin with. But even if they were, they'd only be in danger if they had somewhere to get new electrons. There are plenty of places (especially around those big transformers you see) where you can find exposed power, though, so be careful.
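The charge estimate in the answer is easy to check numerically (the voltage is the same assumed figure as above):

```python
C = 100e-12           # human body capacitance, ~100 pF (value quoted in the answer)
V = 100_000.0         # assumed line voltage, volts
e_charge = 1.602e-19  # elementary charge, coulombs

Q = C * V             # charge needed to bring the body up to potential V
n_electrons = Q / e_charge
print(f"Q = {Q:.0e} C  ->  {n_electrons:.1e} electrons")
# → Q = 1e-05 C  ->  6.2e+13 electrons (~10^14, as stated)
```

Scaled against the answer's eyelash comparison, this really is a negligible number of electrons, which is why the one-time charging current is harmless.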
{ "domain": "physics.stackexchange", "id": 7660, "tags": "electricity, electric-circuits, everyday-life" }
Why do atomic clocks only use caesium?
Question: Modern atomic clocks only use caesium atoms as oscillators. Why don't we use other atoms for this role?

Answer: "Because that is how the second is defined" is nice - but that immediately leads us to the question "why did Cesium become the standard"? To answer that we have to look at the principle of an atomic clock: you look at the frequency of the hyperfine transition - a splitting of energy levels caused by the magnetic field of the nucleus. For this to work you need:

- an atom that can easily be vaporized at a low temperature (in solids, the Pauli exclusion principle causes line broadening; in hot gases, Doppler broadening comes into play)
- an atom with a magnetic field (for the electron - field interaction): odd number of protons/neutrons
- an atom with just one stable isotope (so you don't have to purify it, and don't get multiple lines)
- a high frequency for the transition (more accurate measurement in shorter time)

When you put all the possible candidate elements against this list, you find that Cs-133 is your top candidate. Which made it the preferred element; then the standard; and now, pretty much the only one used. I found much of this information at http://www.thenakedscientists.com/forum/index.php?topic=12732.0
{ "domain": "physics.stackexchange", "id": 23235, "tags": "time, metrology, atomic-clocks" }
The alignment of fingers in our hand
Question: I noticed that in my hand the index and middle finger are aligned in one direction and the next two fingers somewhat in the opposite direction. My question is, why are our fingers aligned in that way? Moreover, is it of any evolutionary significance? For clarity, I am posting a picture of my hand. Answer: This is normal, and the effect is accentuated dramatically if you pretend to be holding an imaginary round object (somewhat smaller than a baseball, for instance). The fingertips will all approximately point towards the very end of the thumb. (See the upper right image; if the object were smaller, the effect would be even more visible.) This orientation of the fingers makes the hand and fingers extremely versatile, being able to grasp and manipulate objects of a wide variety of shapes. It also allows fine manipulation with thumb and fingers. Touching each fingertip to the tip of the thumb quickly shows the same effect - the fingertips approximately align with the thumb. If they didn't align this way, we would lose some of our very fine (as in requiring great precision, not as in quality) motor skills. I'm not going to hypothesize on why the hand evolved the way it did; it's very efficient and that seems reason enough.
{ "domain": "biology.stackexchange", "id": 5342, "tags": "human-anatomy, morphology" }
In the FLRW metric, how do I convert from $k$ real to $k= -1,0,1$?
Question: In my notes I wrote that the FLRW metric is reported as $$ ds^2 = -dt^2 + a(t)^2( d\, \chi^2 +f_k(\chi)^2 d\,\Omega^2 )$$ with $d\,\Omega^2= d\theta^2 + \sin^2{\theta}\, d\phi^2$ and $$f_k(\chi)= \begin{cases} k^{-\frac{1}{2}} \sin{ \sqrt{k}\chi},& k>0,\quad \text{closed universe}\\ \chi, & k=0,\quad \text{flat universe}\\ (-k) ^ {-\frac{1}{2}} \sinh{\sqrt{-k}\chi},& k<0,\quad \text{open universe} \end{cases} $$ but in the book it is instead reported as $$ f_k(\chi)= \begin{cases} \sin{\chi},& k=1,\quad \text{closed universe}\\ \chi, & k=0,\quad \text{flat universe}\\ \sinh{\chi},& k=-1,\quad \text{open universe} \end{cases} $$ What's the transformation that converts the first notation into the second one? Or did I make an error? I cannot simply use $ \chi' = \sqrt{k} \chi$ since there is the factor $k$ outside. Answer: The full metric is $$ ds^2 = - dt^2 + a(t)^2 [ d\chi^2 + \tfrac{1}{k} \sin^2 (\sqrt{k} \chi)\, d \Omega^2 ] $$ For $k \neq 0$, we rescale $\chi \to \frac{1}{\sqrt{|k|}}\chi$ and redefine $a(t) \to \sqrt{|k|}\, a(t)$. Then the metric is $$ ds^2 = - dt^2 + a(t)^2 [ d\chi^2 + \sin^2 \chi\, d \Omega^2 ] , $$ which is the original metric but with $|k|=1$.
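The answer's rescaling can be checked symbolically. Below is a quick sympy sketch of the $k>0$ branch (variable names are mine); the radial term works the same way, since $a^2\,d\chi^2 \to |k|\,a^2 \cdot \tfrac{1}{k}\,d\chi'^2 = a^2\,d\chi'^2$.

```python
import sympy as sp

chi, k, t = sp.symbols('chi k t', positive=True)
a = sp.Function('a')(t)

# f_k(chi) for k > 0 in the "dimensionful" convention
f_k = sp.sin(sp.sqrt(k) * chi) / sp.sqrt(k)

# Angular coefficient of the metric before rescaling: a(t)^2 * f_k(chi)^2
before = a**2 * f_k**2

# Rescale chi -> chi/sqrt(k) and redefine a -> sqrt(k)*a
after = before.subs(chi, chi / sp.sqrt(k)).subs(a, sp.sqrt(k) * a)

# The result is the |k| = 1 form: a(t)^2 * sin^2(chi)
assert sp.simplify(after - a**2 * sp.sin(chi)**2) == 0
```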
{ "domain": "physics.stackexchange", "id": 94695, "tags": "general-relativity, cosmology, space-expansion" }
Why can we use Gauss' law to compute electric field?
Question: For simplicity I'm considering only the sphere case. In the Gauss' Law formulation we have some field $E$ introduced by charges $Q$ inside some sphere, then we compute flux and integrate, and we get the result $Q/\epsilon_0$. Right. But this formulation doesn't take into account any possible outer charges, because $E$ used in this law comes only from $Q$. I suppose that any outer field effects will cancel out, because flux coming from it must go through surfaces with opposite normals. But we didn't make any proof of that, as far as I have seen in Gauss' Law proofs. So, when we measure the electric field inside the sphere with uniformly distributed $Q$, we make some assumptions on the uniformity of $E$ inside this sphere and we consider a hollow Gaussian sphere with radius smaller than the radius of the outer sphere. And then we use $E$ to compute the total flux going through the inner sphere and conclude that it must be zero. Right. But why can we use this $E$? We don't know that there aren't any effects from outer charges' fields. So why can we use Gauss' Law? Answer: The electric field used in Gauss's law is not just the electric field from the interior charge distribution. It may contain contributions from exterior sources. Usually, we do assume that there are no exterior sources, which is what allows us to impose symmetry considerations. Why are we able to do this? Because Maxwell's equations are linear, and as such, the sum of two solutions to Maxwell's equations is itself a solution. Hence, you can consider the exterior and interior charge densities separately. You can calculate the fields separately and then add them up at the end. Here's a mathematical way of looking at it. You have Gauss's law in integral form: $$\int_V \frac{\rho}{\epsilon_0} \, dV = \oint_S E \cdot dS$$ Linearity of Maxwell's equations means that you can split up $\rho = \rho_\text{int} + \rho_{\text{ext}}$ and the same for $E$. The two parts must separately obey this equation.
So that means we have $$\begin{align*} \int_V \frac{\rho_\text{int}}{\epsilon_0} \, dV &= \oint_S E_\text{int} \cdot dS \\ \int_V \frac{\rho_\text{ext}}{\epsilon_0} \, dV &= \oint_S E_\text{ext} \cdot dS \end{align*}$$ But $\rho_{\text{ext}} = 0$ everywhere in $V$, so the second RHS integral must be zero. When we do Gauss's law problems, we implicitly look at the $E_\text{int}$, the field generated by charges internal to the volume. Any external field and charges can be separated out in this way, so that the symmetry of the field generated by internal charges is still manifest, allowing us to compute the field magnitude thanks to that symmetry. It's true that, strictly speaking, we can't draw any conclusions about the total $E$-field without some information about exterior charges or fields, but by linearity, we know that our field from the internal charges will never be affected by such information.
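The linearity argument can also be checked numerically: the flux of a point charge's field through a closed surface is $4\pi q$ (in units where $1/4\pi\epsilon_0 = 1$) when the charge is inside, and $0$ when it is outside, so exterior charges drop out of the surface integral. A rough numerical sketch (the units and grid choices are mine):

```python
import numpy as np

def flux_through_unit_sphere(charge_pos, q=1.0, n=400):
    """Numerically integrate E . dA over the unit sphere for a point
    charge q at charge_pos, in Gaussian-like units where E = q r_hat / r^2.
    Expected result: 4*pi*q if the charge is inside, 0 if it is outside."""
    theta = (np.arange(n) + 0.5) * np.pi / n
    phi = (np.arange(2 * n) + 0.5) * 2 * np.pi / (2 * n)
    T, P = np.meshgrid(theta, phi, indexing='ij')
    # Points on the unit sphere; the outward normal equals the position vector
    normal = np.stack([np.sin(T) * np.cos(P),
                       np.sin(T) * np.sin(P),
                       np.cos(T)], axis=-1)
    d = normal - np.asarray(charge_pos)      # vector from charge to surface point
    E = q * d / np.linalg.norm(d, axis=-1, keepdims=True) ** 3
    integrand = np.sum(E * normal, axis=-1) * np.sin(T)
    dA = (np.pi / n) * (2 * np.pi / (2 * n))
    return np.sum(integrand) * dA

inside = flux_through_unit_sphere([0.2, 0.1, -0.3])   # charge inside the sphere
outside = flux_through_unit_sphere([0.0, 0.0, 2.5])   # charge outside the sphere

assert abs(inside - 4 * np.pi) < 1e-2
assert abs(outside) < 1e-2
```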
{ "domain": "physics.stackexchange", "id": 6563, "tags": "electrostatics, electric-fields, charge, gauss-law" }
periodic boundary conditions for vortex in a square lattice
Question: I am trying to follow this paper and track the dynamics of vortex motion on a discrete (square) lattice. The idea is to simulate the time evolution of the Gross-Pitaevskii (GP) equation, which reads (in rescaled units) $$i \partial_t \psi = - \nabla^2 \psi + |\psi|^2 \psi.$$ As initial condition one takes a wavefunction of the form of Eq. (10) in the above-mentioned paper, $$\psi(z) = \prod_{z_+ \in [Z_+]} \frac{(z-z_+)}{|z-z_+|} \prod_{z_- \in [Z_-]} \frac{(z^*-z_-^*)}{|z-z_-|}$$ where $z=x+iy$ are complex coordinates. This wavefunction describes a set of vortices (V) with coordinates $z_+$ in $Z_+$ and a set of anti-vortices (AV) with coordinates $z_-$ in $Z_-$. This is equivalent to using a wavefunction of the type $e^{i \phi}$ (polar coords.) for vortices and $e^{-i \phi}$ for anti-vortices, with $\phi$ the polar angle. One can calculate the velocity as the gradient of the phase of the complex wavefunction. An example of the associated velocity field, for a V at (x,y)=(0,2.5) and an AV at (0,-2.5), is shown below. This should correspond to Fig. 1 in the paper. However, there are several things I do not understand. First, why do the authors say that they cannot look at the dynamics of just one vortex, but must include an anti-vortex in the initial conditions to satisfy the periodic boundary conditions? Second, apart from the initial V-AV pair, they also seem to include "a few mirror images with respect to the boundary". However, they do not specify how many nor where they are positioned. (I assume that the mirror image of a V is an AV and vice-versa; please correct me if I'm wrong.) Lastly, they mention "evolving the system in imaginary time" in order to "cool it down" (although no temperature is included in the model!) and then "turning on a superflow" and continuing the evolution in real time.
I thought all that was needed was the discretization of the GP equation and following its evolution given the initial wavefunction and periodic boundary conditions ($\psi(\text{left margin})=\psi(\text{right margin})$ and $\psi(\text{top margin})=\psi(\text{bottom margin})$). Edit Following the suggestion of @BebopButUnsteady, I repeated the 'unit' of my system, the V-AV pair, until obtaining the following tiling pattern (vortices are circles, AVs are squares). Let us first look at the real (left) and imaginary (right) parts of the wavefunction for the initial V-AV pair: One can see that, especially in the imaginary part, the values on the boundaries are not equal. Now we start adding copies of the "unit cell" along the x axis, and plot a cut through the imaginary part of the wf: It appears that, as we increase the number of copies, the values at the ends get closer and closer to 0, which would be the ideal periodic case. Of course this is just numerics; I have no formal proof yet that this procedure actually converges. Answer: You cannot have a nonzero total vorticity with periodic boundary conditions, since if you take a path around all of your vortices, it will have a non-zero circulation. But you have periodic bc, so you can continuously deform that path to a point, and a point has zero circulation. Mirror images are not quite the same as in electrostatics. We want periodic boundary conditions. To get periodic boundary conditions you imagine tiling the plane with your system. This will trivially be periodic and so you can just take a tile, and work with that. So you want the mirror image of a vortex to be a vortex. (In electrostatics you usually use mirror images not to enforce periodic boundary conditions, but to enforce constant voltage. That's why you flip the sign of the mirror charge. Here we want periodic b.c. If you flipped the sign of the vortices I believe you would get *anti*periodic boundary conditions, but don't quote me on that.)
This evolving in imaginary time is presumably an "annealing" type of operation. You are free to run the GP equation on any initial condition you want. However, to cleanly see the interaction of the vortices, we want the vortices to be in their ground state. Otherwise when we turn on the time evolution they will get rid of their excess energy by shedding waves and other junk. One way to get to the ground state is to evolve your equation in "imaginary time". Your usual time evolution is $\exp(i\hat{H}t)$. If you plug in $t = i\tau$ you get $\exp(-\hat{H}\tau)$. Applying this to a state exponentially suppresses the higher-energy components, so you get rid of the high energy stuff. This is related to finite temperature (just replace $\tau$ with $\beta$ and you have the partition function), but for your purposes you can just consider it a convenient mathematical trick. Note that since you're annealing anyway, the specific details of what state you start in might not be so important, since you will end up in the same place anyway (hopefully). Finally, they are a little thin on the details, so if you plan to use this work, you should just send the authors an email asking for details.
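The imaginary-time trick is easy to see in a toy model: expand the state in energy eigenstates, multiply by $e^{-E_n\tau}$, and renormalize; every component except the ground state is exponentially suppressed. A minimal numpy sketch (the diagonal Hamiltonian and step size are illustrative, not the GP equation itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: H is diagonal in its eigenbasis with energies E_n = n.
# Any Hamiltonian works the same way once expanded in its eigenstates.
E = np.arange(10, dtype=float)

# Random initial state with a nonzero ground-state component
c = rng.normal(size=10) + 1j * rng.normal(size=10)
c[0] += 1.0
c /= np.linalg.norm(c)

def imaginary_time_step(c, dtau):
    """One step of exp(-H*dtau) followed by renormalization."""
    c = c * np.exp(-E * dtau)
    return c / np.linalg.norm(c)

for _ in range(200):
    c = imaginary_time_step(c, 0.1)

# The state has collapsed onto the ground state (n = 0)
assert abs(abs(c[0]) - 1.0) < 1e-6
```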
{ "domain": "physics.stackexchange", "id": 8592, "tags": "fluid-dynamics, condensed-matter, boundary-conditions, bose-einstein-condensate, superfluidity" }
How to reduce frequency resolution for high sampling rate and lot of samples?
Question: I'm kinda new in this field, and I have the following situation. There is a 1 s long audio signal, with Fs = 96000 Hz, and I have 96000 samples accordingly. If I do a single FFT on the entire signal, I get 1 Hz frequency resolution and 48000 points, resulting in a very "fat" logarithmic spectrum, like in the picture below. How can I reduce the resolution and thin out my plot without reducing the sampling rate? Thanks in advance. Answer: A simple way is to split the data into smaller N-sample segments (pick N to suit your desired frequency resolution), then average the power (or amplitude) of all the FFT spectra: $$ \hat X = \frac{1}{M} \sum^{M-1}_{m=0} X_m X_m^* $$ Where $X_m, m \in \left\{0:M-1\right\}$ is your series of $M$ shorter FFT results (one per segment) and $\hat X$ is the averaged power spectrum. If I remember correctly, for noise signals, the variance is reduced by $\frac{1}{M}$.
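A minimal numpy sketch of the segment-averaging idea (this is essentially Bartlett's method; the segment length of 1024 is just an example):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 96_000
x = rng.normal(size=fs)            # 1 s of white noise at 96 kHz

def averaged_power_spectrum(x, seg_len):
    """Bartlett averaging: split x into non-overlapping seg_len-sample
    segments, FFT each one, and average the power |X|^2 across segments.
    Frequency resolution becomes fs/seg_len instead of fs/len(x)."""
    n_seg = len(x) // seg_len
    segs = x[:n_seg * seg_len].reshape(n_seg, seg_len)
    X = np.fft.rfft(segs, axis=1)
    return (np.abs(X) ** 2).mean(axis=0)

full = np.abs(np.fft.rfft(x)) ** 2          # 1 Hz bins, very noisy
avg = averaged_power_spectrum(x, 1024)      # 93.75 Hz bins, much smoother

# Averaging M segments reduces the per-bin variance roughly by 1/M
M = len(x) // 1024
rel_scatter_full = full[1:].std() / full[1:].mean()
rel_scatter_avg = avg[1:].std() / avg[1:].mean()
assert rel_scatter_avg < rel_scatter_full / np.sqrt(M) * 3
```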
{ "domain": "dsp.stackexchange", "id": 5779, "tags": "fft, frequency-spectrum" }
Hawking Radiation from the WKB Approximation
Question: Reading this paper which is itself an exposition of Parikh and Wilczek's paper, I get to a point where I fail to be able to follow the calculation. Now this is undoubtedly because my calculational skills have been affected by decades of atrophy, so I wonder if someone can help. The paper computes the tunneling transmission coefficient for a particle (part of a particle/antiparticle pair) created just inside the horizon. The WKB transmission coefficient is given by $$T = \exp\left(-\frac{2}{\hbar}\, \mathrm{Im}(S)\right)$$ where "$\mathrm{Im}$" is the imaginary part, and the action, $S$, is evaluated over the classically forbidden region. Using Painleve-Gullstrand coordinates for the black hole, the paper derives, fairly straightforwardly, $$\mathrm{Im}(S) = \mathrm{Im} \int_{2M}^{2(M-\omega)} dr \int_{0}^{\omega}d\omega' \frac{1}{1-\sqrt{\frac{2(M-\omega')}{r}}}$$ (eq 41). $\omega$ is the energy carried out by the tunneling particle. Now the next step is where I get stuck. The following equation (42) suggests that the r integration has been performed to get from (41)->(42). Here's what happens when I try to do it: Presumably the contour integral being referred to is in the complexified r variable. The way the "r" appears isn't very nice, so we make a substitution $z=r^{\frac{1}{2}}$ (so the limits become $\sqrt{2M}$ and $\sqrt{2(M-\omega)}$), giving $$\mathrm{Im} \int_{\sqrt{2M}}^{\sqrt{2(M-\omega)}}dz\,\frac{2z^2}{z-\sqrt{2(M-\omega')}}.$$ The talk of deforming the contour "in the E plane" suggests we make the energy slightly imaginary - add $i\delta$ to $\omega'$ where $\delta$ is small. The expression $\sqrt{2(M-\omega'+i\delta)}$ can then be expanded in powers of delta, leaving $\sqrt{2(M-\omega')}+i\delta$ (after the expansion I've redefined delta to take out the constant factors - this doesn't matter because we're going to contour-integrate round the pole anyway, so moving the pole up and down a bit makes no difference). So the z-plane looks like the first diagram here, where the large X is the displaced pole.
We ultimately want to evaluate the integral between the two points on the real axis. Well, the one thing I can do is integrate round the green contour shown in the second figure. The answer is just $2\pi i$ times the residue at the simple pole, i.e. in this case $$2\pi i\cdot\lim_{z \to \sqrt{2(M-\omega')}+i\delta}\left(z-\sqrt{2(M-\omega')}-i\delta\right)\cdot \frac{2z^2}{z-\sqrt{2(M-\omega')}-i\delta} $$ $$=2\pi i\cdot2\cdot(\sqrt{2(M-\omega')}+i\delta)^2$$ $$=8\pi i \cdot(M-\omega')$$ approx. This looks similar to what I want, namely $4\pi i \cdot(M-\omega')$ in order to get equation (42) (apart from a factor of 2). However (1) The next step would be to relate the closed contour integral to the integral along the real axis. Unfortunately, this relies on the integrand vanishing for large $|z|$ in the positive half plane, but this appears not to be the case here (2) And anyway we want the integral between $\sqrt{2(M-\omega)}$ and $\sqrt{2M}$, so how would I deal with the integral along other parts on the real axis (i.e. outside of the classically forbidden region)? Any hints would be welcome (or an alternative way of computing the tunneling transmission coefficient). Answer: The (finite) imaginary part comes solely from the singularity at $r = 2(M-\omega^\prime)$ in the integration over $r$; away from this singularity the integral is finite and real. Performing a change of variables $r = 2(M-\omega^\prime) + u$, and since only the singularity contributes to the imaginary part, we can approximate the integrand by its singular part in the vicinity of $u = 0$: $\frac{1}{1-\sqrt{\frac{2(M-\omega^\prime) }{r}}} \approx 4(M-\omega^\prime) \frac{1}{u}$ Thus, the integral over $r$ is: $I= 4(M-\omega^\prime)\, \mathrm{Im}\int_{-2\omega^\prime}^{(\omega-\omega^\prime)} \frac{du}{u}$. Using the relation $\frac{1}{u - i0} = \mathrm{P.V.}\left(\frac{1}{u}\right) + \pi i\, \delta(u)$ (P.V. denotes the Cauchy principal value), we get: $I= 4\pi(M-\omega^\prime)$
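The $\mathrm{P.V.} + i\pi\delta$ relation used at the end can be verified numerically: for a small imaginary displacement of the pole, the imaginary part of $\int du/(u-i\epsilon)$ over any interval containing $u=0$ tends to $\pi$. A quick scipy sketch (the interval endpoints are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

# Sokhotski-Plemelj check: for small eps > 0,
#   Im \int_{-a}^{b} du / (u - i*eps)  ->  pi,
# i.e. the pole contributes the "i*pi*delta(u)" piece.
def im_part(a, b, eps):
    integrand = lambda u: eps / (u**2 + eps**2)   # Im[1/(u - i*eps)]
    # points=[0.0] tells quad where the sharp peak sits
    val, _ = quad(integrand, -a, b, points=[0.0])
    return val

# Analytic value is arctan(b/eps) + arctan(a/eps) = pi - O(eps)
assert abs(im_part(2.0, 1.0, 1e-3) - np.pi) < 1e-2
```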
{ "domain": "physics.stackexchange", "id": 5494, "tags": "quantum-mechanics, hawking-radiation, approximations, quantum-tunneling, semiclassical" }
Class project: user-editing page for a haiku website
Question: I am making a website using just HTML and CSS for a classroom project (we are in the 2nd week of boot-camp.) I have my site made, and want to get others' opinion on the code. I am mostly looking to have the site scale properly on different browsers, but other input is appreciated as well. body { font-family: 'Anton', sans-serif; margin: auto; } .top { background-color: #F25266; border: #666561 solid 1em; } h1 { color:#F4EF6B; text-decoration: underline; text-align: center; } .middle { background-color:#ECECEC; border-left: #666561 1em solid; border-right: #666561 1em solid; } .pic { display: block; margin: auto; width: 40%; } form > label{ color: #2E90BF; column-count: 1 } form > input { background-color: #2E90BF; border-color: #666561; width: 14.2%; column-count: 1 } #button_one { font-family: 'Anton', sans-serif; color:#FFF; background: #2E90BF; border-color: #000; height: 2em; width: 100%; font-size: 1em; cursor: pointer; margin-top: 1em; } #button_one:active { background-color: #2E90BF; transform: translateY(4px); } .bottom { background-color:#F25266; border: #666561 solid 1em; } ul { list-style: none; column-count: 10; } a:link { color: #6DBF2E; } a:visited { color: #6DBF2E; } a:hover { color: yellow; } <!DOCTYPE html> <html> <head> <title>Edit User</title> <!-- Google Fonts --> <link href="https://fonts.googleapis.com/css?family=Anton" rel="stylesheet"> <!-- Link to CSS Sheet --> <link type="text/css" rel="stylesheet" href="assets/stylesheets/style.css" /> </head> <body background> <div class="top"> <h1 style>User Information</h1> </div> <div class="middle"> <img src="https://i.pinimg.com/736x/57/7b/3c/577b3c221b6ebdc218c347d1260a102e--japanese-haiku-poetry-lessons.jpg" alt="Haiku Example Pic" class="pic"/> <form> <label>E-Mail:</label> <input> <label>Password: </label> <input> <label>Confirm Password: </label> <input> <label>First Name: </label> <input> <label>Last Name: </label> <input> </br> <button type="submit" id="button_one">Submit</button> 
</form> </div> <div class="bottom"> <ul> <li><a href="form_create_haiku.html"><span>Create Haiku</span></a></li> <li><a href="form_edit_haiku.html">Edit Haiku</a></li> <li><a href="form_edit_user.html">Edit User</a></li> <li><a href="form_signup_user.html">Signup User</a></li> <li><a href="haiku_list.html">List</a></li> <li><a href="haiku_one.html">One</a></li> <li><a href="haiku_rules.html">Rules</a></li> <li><a href="index.html">Index</a></li> <li><a href="user_list.html">User List</a></li> <li><a href="user_one.html">User One</a></li> </ul> </div> </body> </html> Answer: About the HTML It’s typically a good idea to add a meta-charset element, which gives the character encoding of the page (e.g., "UTF-8"). This should be the first element in the head element: <head> <meta charset="utf-8" /> <!-- … --> </head> The body element can’t have a background attribute (it’s obsolete). You should associate each label with its input. This can be done with the for attribute or by nesting the input inside the label. You can easily test if it works correctly: click the label, and the corresponding input should get the focus. You have the closing tag </br> without an opening tag -- but the br element is a void element with no closing tag to begin with, so you probably want to use <br> or <br /> instead. It doesn’t seem to be correct to use br in this context anyway: br must only be used for meaningful line-breaks. If you just need some vertical space, use CSS instead. The list seems to be navigation. If that’s the case, you could use the nav element: <nav class="bottom"> <ul> <!-- … --> </ul> </nav>
{ "domain": "codereview.stackexchange", "id": 27316, "tags": "beginner, html, css" }
PointCloud2 orientation Problem? [I need some help!]
Question: Hello All, I am having some problems with the orientation of the PointCloud2 image published. I can tell it's not a problem with the TF because the axes look correct in Rviz. Do you guys know where else I forgot to change the orientation? Thank you in advance! The problem is, as you can see from the image below: the TF transform is correct, but the PointCloud2 looks like it has some roll/pitch/yaw error; the clouds should be on the ground plane, not floating in mid-air. Originally posted by Gazer on ROS Answers with karma: 146 on 2013-07-03 Post score: 0 Original comments Comment by thebyohazard on 2013-07-03: You didn't say what the problem was... Answer: Hi Gazer, Just a guess, but I suspect your problem is actually due to the TF being wrong. Check out the REP here: http://www.ros.org/reps/rep-0103.html For robots it's conventional to put X+ as the 'forward' direction, but there's a lot of camera stuff that uses Z+ as the forward direction instead. Cheers, Gavin Originally posted by Gav with karma: 478 on 2013-07-03 This answer was ACCEPTED on the original site Post score: 0
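If the mismatch is the body-frame vs. optical-frame convention Gav mentions, the fixed rotation between the two can be sanity-checked with a few lines of numpy (a sketch; the frame names and the specific camera setup are my assumptions, not from the original post):

```python
import numpy as np

# REP 103 body frame: x forward, y left, z up.
# Typical camera "optical" frame: z forward, x right, y down.
# Columns of R are the optical axes expressed in the body frame:
#   optical x (right)   -> body -y
#   optical y (down)    -> body -z
#   optical z (forward) -> body +x
R_body_from_optical = np.array([[ 0.0,  0.0, 1.0],
                                [-1.0,  0.0, 0.0],
                                [ 0.0, -1.0, 0.0]])

# A point 1 m straight ahead of the camera (optical z) should land on body x
p_optical = np.array([0.0, 0.0, 1.0])
p_body = R_body_from_optical @ p_optical
assert np.allclose(p_body, [1.0, 0.0, 0.0])

# Sanity check: this is a proper rotation (determinant +1)
assert np.isclose(np.linalg.det(R_body_from_optical), 1.0)
```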
{ "domain": "robotics.stackexchange", "id": 14805, "tags": "ros" }
Providing an intuitive description of scalar and vector quantities in physics
Question: Often the standard introduction to the concept of scalars and vectors in physics is something along the lines of: A scalar is a quantity that is completely described by a single number (it has no directional dependence). A vector is a quantity that requires both a magnitude (a number) and a direction to be specified in order to completely describe it. This is then followed up by examples of the two, such as mass, charge and distance are all examples of scalar quantities, whereas velocity, displacement and force are all examples of vectors. I feel what is often missed out is a physically intuitive explanation of why certain quantities are vector quantities whilst others are scalars. I have thought up a few examples and I'm hoping that people can provide feedback, and/or suggest some intuitive examples/explanations. An example of a scalar quantity is temperature. Indeed, the temperature at a point is completely specified by a single number. It is rotationally invariant, in the sense that facing northwards, or south-eastwards (or indeed any other direction) at the same point does not affect the temperature at that point. Therefore it has no directional dependence and is a scalar. Another example would be distance. A distance of $n$ meters measured from one point to another remains $n$ meters, regardless of which direction it is measured in, and hence it is a scalar quantity. An example of a vector quantity would be an object's velocity. In order to determine the motion of an object it is clearly not enough to simply provide the speed at which it is travelling. The object will travel in a particular direction, and so two objects travelling at the same speed, but in different directions, will end up at completely different locations. Hence, in order to completely specify the motion of an object one must use a vector quantity - the object's velocity.
Finally, the force acting on an object is also a vector quantity, since it acts in a particular direction on the body. Whether the force acts from the top, bottom or sides of the body will have different effects on the body; hence its action is clearly directionally dependent and must be described by a vector (two forces with the same magnitude, but acting in different directions on the same object, will have different effects on the object). Hopefully what I've written is coherent. I'm hoping to convey this information to a person with a fairly minimal background in physics, so any feedback about it would be much appreciated. Edit To clear up ambiguity: I am not asking whether I have understood the concept correctly or not. This is more a question about how one should give an elementary introduction to vectors and scalars. I am not currently in education and so don't have a lecturer or fellow students to ask about this. I have been asked by someone to explain it to their teenage son, and I wanted to hear the thoughts of others (most likely more capable than I) on how to teach this concept. Answer: Think of it in the context of classical mechanics, or more intuitively your everyday environment. For example, take a cup of coffee on a table. It has a certain mass. This is just a pure number; we don't think of it as having 'direction' - what would it mean for mass to have direction? This is just a physically intuitive idea of a scalar quantity. Now push the coffee along the table; you're pushing it in a certain direction with a force of a certain magnitude; this is just a physically intuitive idea of a vector quantity.
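The force example above can be made concrete with a short numerical sketch (numpy; the mass and force components are illustrative):

```python
import numpy as np

# Two forces with the same magnitude but different directions:
f1 = np.array([3.0, 4.0])     # pushes mostly "up"
f2 = np.array([5.0, 0.0])     # pushes purely "right"

# Their magnitudes (scalars) are identical...
assert np.isclose(np.linalg.norm(f1), np.linalg.norm(f2))

# ...but applied to the same 2 kg object for 1 s they produce
# different velocities (v = F t / m), so magnitude alone is not
# enough to describe a force.
m, t = 2.0, 1.0
v1, v2 = f1 * t / m, f2 * t / m
assert not np.allclose(v1, v2)
```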
{ "domain": "physics.stackexchange", "id": 32673, "tags": "vectors, education, tensor-calculus, covariance, invariants" }
ROS2: Specify RTI Connext as rmw implementation on NVidia Jetson Nano not work
Question: I'm trying to build my package with rmw_connext_cpp. It works fine on my PC (64-bit, Ubuntu 18.04). Here is what I did: vim .bashrc then add the line export RMW_IMPLEMENTATION=rmw_connext_cpp However, when I try to do the same thing on the Nvidia Jetson Nano, the system shows the following error: -- Found ament_cmake: 0.7.3 (/opt/ros/dashing/share/ament_cmake/cmake) -- Using PYTHON_EXECUTABLE: /usr/bin/python3 -- Found rclcpp: 0.7.7 (/opt/ros/dashing/share/rclcpp/cmake) -- Found rosidl_adapter: 0.7.5 (/opt/ros/dashing/share/rosidl_adapter/cmake) -- Found rmw_implementation_cmake: 0.7.2 (/opt/ros/dashing/share/rmw_implementation_cmake/cmake) CMake Error at /opt/ros/dashing/share/rmw_implementation/cmake/rmw_implementation-extras.cmake:47 (message): The RMW implementation has been specified as 'rmw_connext_cpp' through the environment variable 'RMW_IMPLEMENTATION', however it is not in the list of supported rmw implementations, which was specified when the 'rmw_implementation' package was built. Thanks!!! BTW, I had already installed rmw_implementation, rmw_connext_cpp, and rosidl_typesupport_connext_cpp. Originally posted by PowerfulMing on ROS Answers with karma: 40 on 2019-09-03 Post score: 0 Answer: To be able to use rmw_connext_cpp, you need to install Connext on your system. However, RTI doesn't provide Connext builds for arm64 platforms. rmw_connext_cpp is not declaring itself as supported, because it is detecting that Connext is not available on your system. More info: It is necessary to install alternative DDS implementations before you use them: https://index.ros.org/doc/ros2/Installation/Dashing/Linux-Install-Binary/#install-additional-dds-implementations-optional The debian packages for RTI Connext are available under a non-commercial license on amd64 but not on arm64, which is why you're seeing different behavior. Originally posted by tfoote with karma: 58457 on 2019-09-05 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 33726, "tags": "ros2" }
Do mobile robots use variable transmissions?
Question: I've built a simple robot that can move. I'm using brushless motors. I've realized that at slow speeds the motors pull a lot of current. I didn't understand the power / current / efficiency charts when I started. I see it in the charts now: the efficiency in electric motors is at high RPM, and the low-speed torque is awesome but comes at the price of high current. For mobile robots, do people generally just provide high-current batteries for slow speeds, ignore slow speeds, or do they build robots with variable transmissions instead of fixed gearboxes? When I look around I just see designs with fixed gearboxes but I feel like I'm missing something. If I look beyond hobby-type projects I see talk about CVTs and the Inception drive, but I'm wondering whether anyone at the home / hobbyist level does anything like that. Answer: For a simple answer: no, most mobile robots do not use variable transmissions. I believe it is far easier to just design a motor and gearbox combination that works well in your narrow range of desired speeds, than deal with the extra mechanical complexity and cost of a variable transmission. The long answer is that it depends on what you consider a mobile robot. Let's take a quick survey of the range of "mobile robots". This can be a huge list, but I will try to span the space from very small to very large. Plum Geek Ringo Robot. Brushed motors, no transmission at all! Parallax Boe-Bot. Brushed motors, no variable transmission. iRobot Roomba/Create. Brushed motors, no variable transmission. FLIR Packbot. Brushless motors, no variable transmission. DARPA LAGR Program Robot. Electric wheelchair motors, no variable transmission. Stanley. Diesel engine with variable transmission. Tesla Model 3, brushless electric motor with variable transmission. So it looks like variable transmissions are really only needed when you have gas engines. Or very high performance electric motors.
{ "domain": "robotics.stackexchange", "id": 2172, "tags": "mobile-robot, brushless-motor" }
What is a term of the type $\bot\rightarrow A$?
Question: The sentence $\bot\rightarrow A$ is provable in intuitionistic logic for any type $A$. The proof is trivial: \begin{align} \bot&\vdash\bot \\ \hline \bot&\vdash A \\ \hline &\vdash\bot\rightarrow A \end{align} It means there has to be a lambda term with this type (the type has to be inhabited). What is it? Answer: There are several ways of writing such a term, depending on how we write the proof terms for the elimination rule for $\bot$, which is $$\frac{\quad\bot\quad}{A}$$ The corresponding rule in $\lambda$-calculus is $$\frac{\Gamma \vdash e : \bot}{\Gamma \vdash \mathtt{absurd}_A(e) : A}.$$ (We call $\mathtt{absurd}_A$ an eliminator.) Thus, the term of type $\bot \to A$ you seek is simply $$\lambda x : \bot \,.\, \mathtt{absurd}_A(x).$$ Another way of writing the same thing is with a $\mathtt{match}$ statement. To see how this works, let us first consider a $\mathtt{match}$ statement for disjunctions. The elimination rule for $\lor$ is $$\frac{A \lor B \qquad {\begin{matrix}A \\ \vdots \\ C\end{matrix}} \qquad {\begin{matrix}B \\ \vdots \\ C\end{matrix}}}{C}$$ The corresponding term constructor for $\lambda$-calculus can be written as $$\frac{\Gamma \vdash e_1 : A+B \qquad \Gamma, x : A \vdash e_2 : C \qquad \Gamma, y : B \vdash e_3 : C}{\Gamma \vdash (\mathtt{match}\;e_1\;\mathtt{with}\;\mathtt{inl}(x) \to e_2 \mid \mathtt{inr}(y) \to e_3 \; \mathtt{end}) : C}$$ In general, the $\mathtt{match}$ statement which eliminates an $n$-fold sum $A_1 + A_2 + \cdots + A_n$ has $n$ cases. The empty type $\bot$ is like a nullary sum, so it corresponds to a $\mathtt{match}$ statement with zero cases: $$\frac{\Gamma \vdash e : \bot}{\Gamma \vdash (\mathtt{match}\;e\;\mathtt{with}\;\mathtt{end}) : C}$$ Indeed, this is how you can do it in Coq: Definition ex_falso_quodlibet (A : Set) (x : Empty_set) : A := match x with end.
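For comparison with the Coq version, the same zero-case match can be written in Lean 4 (a sketch; here `Empty` plays the role of $\bot$, and `nomatch` is the match with zero cases):

```lean
-- The eliminator for the empty type: a match with no cases.
def exFalso {A : Type} : Empty → A :=
  fun x => nomatch x
```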
{ "domain": "cstheory.stackexchange", "id": 2948, "tags": "lo.logic, lambda-calculus, typed-lambda-calculus" }
Unitarity gauge in Higgs mechanism in P&S's QFT
Question: To my understanding, after spontaneous symmetry breaking, if we parametrize the Higgs field as $$ \phi(x)= \frac{1}{\sqrt{2}} \left(\begin{array}{c} 0 \\ v+h(x) \end{array}\right)e^{i\pi(x)/v}, $$ we have already used up the $SU(2)$ gauge freedom (to put all of the v.e.v. into one component), where $\pi(x)$ is the Goldstone field. Then we can take unitarity gauge to eliminate the Goldstone field, which leaves $$ \phi(x)= \frac{1}{\sqrt{2}} \left(\begin{array}{c} 0 \\ v+h(x) \end{array}\right)$$ and in this step, we make use of a $U(1)$ transformation. So up to this step, we have used up all gauge freedom. However, I am confused about P&S's eq. (20.110): $$ \phi(x)= U(x)\frac{1}{\sqrt{2}} \left(\begin{array}{c} 0 \\ v+h(x) \end{array}\right) \tag{20.110}$$ where $U(x)$ is a general $SU(2)$ gauge transformation according to the book. So why do we still have $SU(2)$ gauge freedom here? What's the book's logic?
{ "domain": "physics.stackexchange", "id": 91898, "tags": "standard-model, higgs, gauge-invariance, gauge" }
How to use a parameter file with a rosrun
Question: Hi, I have a node like laser_filter, amcl, or move_base with many parameters. Using a launch file allows me to store the parameters within YAML files like:

<node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen">
    <rosparam file="$(find cfg_pkg)/cfg/common_costmap.yaml" command="load" ns="global_costmap" />
    <rosparam file="$(find cfg_pkg)/cfg/common_costmap.yaml" command="load" ns="local_costmap" />
    <rosparam file="$(find cfg_pkg)/cfg/global_costmap.yaml" command="load" />
    <rosparam file="$(find cfg_pkg)/cfg/local_costmap.yaml" command="load" />
    <rosparam file="$(find cfg_pkg)/cfg/base_local_planner.yaml" command="load" />
</node>

But I would like to start the node directly. What are the options for loading parameter files? Using the underscore prefix, as with normal parameters, does not work. I am looking for something like:

rosrun move_base move_base _load:=?????

Greetings

Originally posted by Markus Bader on ROS Answers with karma: 847 on 2014-02-04 Post score: 0

Original comments

Comment by dornhege on 2014-02-05: Actually one simple trick, depending on what you want to do, is: run the launch file once (that loads all parameters correctly like you set it up) and then just run the node. The parameters from the launch file stay on the parameter server.

Answer: rosrun itself doesn't load parameters. rosparam is the tool for that. Basically the command is the same as in the launch file; you'd only need to take care of pushing things into the node's namespace, e.g.

<rosparam command="load" file="$(find cfg_pkg)/cfg/common_costmap.yaml" ns="global_costmap" />

->

rosparam load $(rospack find cfg_pkg)/cfg/common_costmap.yaml my_node_name/global_costmap

Originally posted by dornhege with karma: 31395 on 2014-02-05 This answer was ACCEPTED on the original site Post score: 2

Original comments

Comment by Markus Bader on 2014-02-07: Thanks, I will try that
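A minimal sketch of the rosparam-then-rosrun workflow for the move_base setup from the question (the paths and node name are illustrative; `echo` is used as a dry run so the sketch runs without a ROS install, so remove the `echo`s on a real system):

```shell
# Load each YAML into the node's namespace, then start the node.
# Mirrors the launch file: common_costmap.yaml goes into both the
# global_costmap and local_costmap sub-namespaces.
load_and_run() {
  cfg="$1"    # e.g. "$(rospack find cfg_pkg)/cfg"
  node="$2"   # e.g. "move_base"
  echo rosparam load "$cfg/common_costmap.yaml" "$node/global_costmap"
  echo rosparam load "$cfg/common_costmap.yaml" "$node/local_costmap"
  echo rosparam load "$cfg/global_costmap.yaml" "$node"
  echo rosparam load "$cfg/local_costmap.yaml" "$node"
  echo rosparam load "$cfg/base_local_planner.yaml" "$node"
  echo rosrun move_base move_base __name:="$node"
}

load_and_run "/path/to/cfg_pkg/cfg" move_base
```

This keeps the same parameter layout the launch file produced, while letting you start the node with plain rosrun afterwards.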
{ "domain": "robotics.stackexchange", "id": 16888, "tags": "ros, load" }
View programming pattern in C++ without raw pointers
Question: I'm trying to achieve, for lack of a better term, a "view" pattern in C++. It's probably most comparable to "views" in the database world, where one can make a query and then aggregate the data or otherwise interact with it without actually duplicating the data. I've tried to reduce it down to a simple example, where the Data is any integer and the View is the opposite of that integer (x, -x). This example is pretty contrived, but when we're talking about matrices or other complex forms of data, I could see this pattern becoming quite useful, especially with respect to generic programming. The code compiles and, fundamentally, works. Unfortunately, there are alternative implementations of this pattern, and that's why I'm asking for a code review. In the long term, are there any reasons that I should be concerned about using this design pattern in my code? Perhaps with respect to:

- Avoiding and identifying bugs
- Class flexibility
- Code readability

Some alternative implementations I can think of:

- Using raw pointers instead of const references for the Data class
- Using inheritance and implementing the "view" functions within the Data class

There are some important things to note about this code as well. The View class cannot be copied or moved, meaning that its lifespan is identical to that of the Data class. There is also the flexibility of being able to mutate the View class when Data::view() is called, for example, setting a scalar. The two classes are mutual friends, but the Data only really has a use for the constructor of the View.
Data.h

#ifndef DATA_H
#define DATA_H

#include "View.h"

class Data {
public:
    Data(const int value) : view_(*this), value_(value) {}

    const View& view() const { return view_; }
    int value() const { return value_; }

private:
    friend View;

    const int value_;
    const View view_;
};

#endif // DATA_H

View.h

#ifndef VIEW_H
#define VIEW_H

class Data;

class View {
public:
    View() = delete;
    View(const View&) = delete;
    View(View&&) = delete;
    View operator=(const View&) = delete;
    View operator=(View&&) = delete;

    int value() const;

private:
    friend class Data;

    const Data& data_;

    View(const Data& data) : data_(data) {}
};

#endif // VIEW_H

View.cpp

#include "View.h"
#include "Data.h"

int View::value() const { return -data_.value_; }

Example.cpp

#include "Data.h"
#include <iostream>

int main() {
    Data data{ 5 };
    const View& view = data.view();

    std::cout << data.value() << std::endl;
    std::cout << view.value() << std::endl;
}

Answer: Lesser item first:

class View {
public:
    View() = delete;
    View(const View&) = delete;
    View(View&&) = delete;
    View operator=(const View&) = delete;
    View operator=(View&&) = delete;
    ...
private:
    ...
    Data& data_;
    View(const Data& data) : data_(data) {}
};

You have provided a non-default constructor. This means the default constructor is not implicitly declared. You also have a data member that is of a reference type. Even if you hadn't provided a non-default constructor, the default constructor would have been implicitly defined as deleted. Bottom line: You don't need to say View() = delete. There's also no need to define all four of the copy and move constructors and assignment operators as deleted. The very nice capability of defining a special member function as deleted did not exist in older versions of C++. One instead had to use the ugly trick of declaring that special member function as private but never providing an implementation.
You are obviously using C++11 (or perhaps even C++14), so you have this nice capability available (and you are using it). You should also understand the rules by which the compiler implicitly declares and defines special member functions that you have not explicitly declared:

- The move constructor and move assignment operator are not implicitly declared if the code explicitly declares any one of a copy constructor, a copy assignment operator, or the destructor. Declaring any one of those C++03 special member functions for a class essentially turns off move semantics for that class. This is essential in making C++11 backward compatible with C++03 code.
- The move constructor is not implicitly declared if the code explicitly declares a move assignment operator. The same applies to the move assignment operator if a move constructor is explicitly declared.
- The copy constructor and copy assignment operator are implicitly declared and defined as deleted if the code explicitly declares a move constructor or a move assignment operator.

What this means is that defining the move constructor (or the move assignment operator) as deleted implicitly knocks out the other three. All you need is

class View {
public:
    View(View&&) = delete;
    ...
private:
    ...
    Data& data_;
    View(const Data& data) : data_(data) {}
};

Since the non-default constructor is private, specifying View() = delete is okay for readability reasons. On the other hand, when I see a slew of XXX = delete definitions where only one is needed, it is a sign to me that the author of the code doesn't know the language and that I should poke deeply into that code during review.

Regarding how you are using views

There are multiple schools of thought on what "views" are and how they should be implemented. One is that the thing being viewed should know all about the view. Another is that the thing being viewed should know nothing at all about the view. Both can be correct. I'll give two concrete examples to illustrate. One example is JSON.
There are seven types of JSON values: an object, an array, a string, a number, and the three special values true, false, and null. The two boolean values can obviously be collapsed to one type, and a numerical value can be expanded to two types (floating point or integral). It helps very much to have different views on a value that implement the details. Most reasonable C++ implementations of JSON do exactly that. Since a specific JSON value can be of one and only one of those types, most implementations contain an enum value that indicates the type and a union of views, only one of which is active (the view corresponding to the enum value). The other example is very large multidimensional arrays of numerical data. There are a number of different packages that address this problem, and in a variety of ways. Very few use standard library containers. A std::vector<std::vector<std::vector< ... std::vector<double>...>>> is a phenomenally bad idea. Sometimes these arrays are so very large they will not fit into the memory of the largest of computers. Suppose you have found such an array of interest, representing years or even decades of data collected around the globe, but you are only interested in a couple of days' worth of data over a tiny portion of the Earth. What you want is a view on that huge array that easily lets you slice and dice that huge array down to a very small size. In this case, the underlying huge array does not need to know that the view even exists. On the other hand, the view needs to know all about that huge array. You have obviously taken the first approach. Whether that is the correct approach depends very much on what you are trying to view and why you are trying to view things that way.
{ "domain": "codereview.stackexchange", "id": 19310, "tags": "c++, design-patterns, memory-management, pointers, reference" }