Extracting watermark from a video using Python
Question: I'm extracting a few frames from a video: comparing the similarity or equality of pixel color (from the first two images), saving the result to a new image, then comparing the new image (the conjunction of the first two) with the next image, and so on. Can you review my code for efficiency and best coding practices?

Code

    import sys
    import os
    import numpy as np
    from PIL import Image, ImageDraw

    def main(obr1,obr2):
        img1= Image.open("%s" %(obr1))
        img2= Image.open("%s" %(obr2))
        im1 = img1.convert("RGBA")
        im2 = img2.convert("RGBA")
        pix1 = im1.load()
        pix2 = im2.load()
        im = Image.new("RGBA", (im1.width, im1.height), (0, 0, 0, 0))
        draw = ImageDraw.Draw(im)
        x = 0
        y = 0
        while y != im1.height-1 or x != im1.width-1:
            if pix1[x,y] == pix2[x,y]:
                draw.point((x,y),fill=pix1[x,y])
            else:
                p1 = np.array([(pix1[x,y][0]),(pix1[x,y][1]),(pix1[x,y][2])])
                p2 = np.array([(pix2[x,y][0]),(pix1[x,y][1]),(pix1[x,y][2])])
                squared_dist = np.sum(p1**2 + p2**2, axis=0)
                dist = np.sqrt(squared_dist)
                if dist < 200 and pix1[x,y] !=(0,0,0,0) and pix2[x,y] != (0,0,0,0):
                    color = (round(pix1[x,y][0]+pix2[x,y][0]/2),
                             round(pix1[x,y][1]+pix2[x,y][1]/2),
                             round(pix1[x,y][2]+pix2[x,y][2]/2),
                             round(pix1[x,y][3]+pix2[x,y][3]/2))
                    #color=pix1[x,y]
                    draw.point((x,y),fill=color)
                else:
                    draw.point((x,y),fill=(0,0,0,0))
            if x == im1.width-1:
                x=0
                y=y+1
            else:
                x=x+1
        im.save('test%s.png' %(z), 'PNG')
        print("Zapisano obraz test%s.png" %(z))

    imglist = sys.argv[1:]
    z=0
    while imglist != []:
        exists = os.path.isfile("./test%s.png" % (z-1))
        if exists:
            obr1="test%s.png" % (z-1)
            obr2=imglist.pop()
            print("Porównywanie obraza %s i %s" % (obr1,obr2))
            main(obr1,obr2)
            print("Analiza skończona")
            z=z+1
        else:
            obr1=imglist.pop()
            obr2=imglist.pop()
            print("Porównywanie obraza %s i %s" % (obr1,obr2))
            main(obr1,obr2)
            print("Analiza skończona")
            z=z+1

Answer:

Best practices

A collection of general best practices for Python code can be found in the infamous Style Guide for Python Code (also called PEP8).
While your code looks quite reasonable, there are two major points from the Style Guide I would like to point out to you. First, add documentation to your functions (and maybe choose a more descriptive name than main, more on that shortly). Future-you will be very grateful for this. Second, always use a single space before and after = when assigning to a variable (no space if used for keyword arguments in a function call! Relevant section of PEP8 here). The code is also generally easier to read if you add a space after a comma, like so: main(obr1, obr2) instead of main(obr1,obr2). Another thing that I would consider a Python best practice is to wrap code that is to be executed in a "scripty" manner in an if __name__ == "__main__": clause (also see the official documentation on that topic). That would allow you to reuse/import the function currently named main into other modules without running the while loop. Therefore, I would like to suggest the following coarse code-level structure:

    # imports would go here
    ...

    def compare_images(filename1, filename2):
        """Compare two images and store the comparison to file"""
        # function logic would go here

    def main():
        """Process arguments from command line"""
        imglist = sys.argv[1:]
        z = 0
        while imglist != []:
            # ...

    if __name__ == "__main__":
        main()

I would also recommend giving some of the variables more descriptive names (what do obr1 and obr2 stand for?). Also keep in mind that most of the people reading your code (including me) do not speak your mother tongue, so it's always nice to translate console output to English before posting it here.

Efficiency

.load() should probably not be necessary as per the documentation (this assumes you're actually using the Pillow fork and not the old and crusty PIL). The most striking point in terms of efficiency is that Python is often terribly slow at loops. So the easiest way to gain performance is to get rid of them. But how? NumPy to the rescue!
NumPy does all those pesky loops in C and is therefore orders of magnitude faster than looping over array data in Python "by hand". So what you would generally do to benefit from this is get your image data as a NumPy array (see this SO answer for a hint) and then work on those NumPy arrays with array operations, like masking. I will try to convey what I mean by that in a short example, maybe I can fully adapt it to your example later.

    im1_np = ...  # get data as numpy array, see SO post
    im2_np = ...  # get data as numpy array, see SO post

    result = np.zeros_like(im1_np)  # same dtype and shape as input
    # boolean mask, True where pixels match exactly in all channels
    matching_pixels = np.all(im1_np == im2_np, axis=2)
    result[matching_pixels] = im1_np[matching_pixels]  # this is your if clause

As you can see, there are no "manual" loops involved, everything is done by NumPy in the background. Now to the else path. First, I think there might be some errors here, feel free to comment if I'm wrong. What (I think) you basically want to do is to compute the difference between corresponding pixels and set them to a certain color if they are below a given threshold. Mathematically this would be expressed similar to this: $$ \sqrt{(r_1-r_2)^2 + (g_1-g_2)^2 + (b_1-b_2)^2} < 200 $$ Your code does the following at the moment: $$ \sqrt{r_1^2 + r_2^2 + g_1^2+g_2^2 + b_1^2+b_2^2} < 200 $$ When working from my definition above, the code becomes as follows:

    dist_mask = np.sum((im1_np - im2_np) ** 2, axis=2) < threshold ** 2
    # remove pixels already set in the if clause
    dist_mask = np.logical_and(dist_mask, np.logical_not(matching_pixels))
    # remove all-zero pixels
    dist_mask = np.logical_and(dist_mask, np.sum(im1_np, axis=2) > 0)
    dist_mask = np.logical_and(dist_mask, np.sum(im2_np, axis=2) > 0)
    # set color in result image as mean of both source pixels
    result[dist_mask] = (im1_np[dist_mask] + im2_np[dist_mask]) / 2.
I leave threshold as a variable since I'm not sure your original computation works the way you expect, or that the threshold you chose is meaningful. (Note: you can simply leave out the sqrt if you square the threshold value, as done above.) Apart from that, the code is a relatively strict transformation of your original conditions; it's just that instead of looping over the images pixel by pixel, everything is done in array operations. Under the assumption that you actually want to assign the average pixel value of both source images, this can be optimized further, since the if condition of exact pixel equality is a subset of distance < threshold. This would save you a mask computation (matching_pixels would not be needed anymore) and the negation/and operation with dist_mask. In case of exact equality, summing both values and dividing them by two should leave you with the original value (warning: watch out for quirks with floating-point values and/or range-limited integer values). To be fully compatible with your original code you would then have to go back to PIL to store the image to disk. This should also be described in the SO post linked above.

Other things

You are sometimes using string formatting in a weird way. If you just want to make sure that a variable is a string, pass it to str(...) instead of using string formatting. If you really need string formatting, such as where you create the output filename, it is often recommended to use .format(...) (Python 2, Python 3) or f-strings (Python 3) to format string output. There is a nice blog post here that compares all the ways of doing string formatting in Python I mentioned.
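To make the masking approach concrete, here is a self-contained sketch of both rules applied to RGBA images held as NumPy float arrays. The function name, the float-array convention, and the default threshold are my own choices for illustration, not part of the original code:

```python
import numpy as np

def compare_images_np(im1, im2, threshold=200.0):
    """Combine two RGBA images given as H x W x 4 float arrays:
    average pixels whose RGB distance is below the threshold,
    zero out everything else (including all-zero source pixels)."""
    result = np.zeros_like(im1)
    # squared Euclidean distance over the RGB channels; comparing
    # against threshold**2 avoids computing any square roots
    sq_dist = np.sum((im1[..., :3] - im2[..., :3]) ** 2, axis=2)
    close = sq_dist < threshold ** 2
    # ignore pixels that are all-zero in either source image
    nonzero = (np.sum(im1, axis=2) > 0) & (np.sum(im2, axis=2) > 0)
    mask = close & nonzero
    result[mask] = (im1[mask] + im2[mask]) / 2.0
    return result
```

Converting between Pillow and NumPy can then be done with np.asarray(img) and Image.fromarray(arr.astype(np.uint8)).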
{ "domain": "codereview.stackexchange", "id": 34221, "tags": "python, performance, beginner, image" }
install navigation form sources
Question: Hello, I have installed Groovy ros_comm from source. Now I need the navigation package, so I downloaded the navigation source code from Browse Software (http://www.ros.org/browse/list.php). At the same time, I downloaded all the packages that navigation depends on and put them inside navigation. I compiled navigation using rosmake, and it shows the following error:

/home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/include/robot_pose_ekf/nonlinearanalyticconditionalgaussianodo.h:22:59: could not find pdf/analyticconditionalgaussian_additivenoise.h

In fact, that file does exist at bfl/src/. I have downloaded bfl and put it in the navigation package.

PS: I cannot install navigation through sudo apt-get install navigation.

Please help me: how can I install navigation from source? What causes this error, and how do I solve it? Thank you very much.

[ rosmake ] rosmake starting...
[ rosmake ] No package specified. Building stack ['navigation-groovy-devel']
[ rosmake ] Packages requested are: ['navigation-groovy-devel']
[ rosmake ] Logging to directory /home/sxl/.ros/rosmake/rosmake_output-20141127-143344
[ rosmake ] Expanded args ['navigation-groovy-devel'] to: ['bondcpp', 'bond', 'smclib', 'bondpy', 'test_bond', 'bfl', 'voxel_grid', 'flann', 'pcl_ros', 'nav_core', 'nodelet', 'move_base_msgs', 'pcl', 'dwa_local_planner', 'clear_costmap_recovery', 'move_slow_and_clear', 'base_local_planner', 'rotate_recovery', 'amcl', 'map_server', 'fake_localization', 'robot_pose_ekf', 'move_base', 'laser_geometry', 'nodelet_topic_tools', 'test_nodelet_topic_tools', 'test_nodelet', 'navfn', 'tf_conversions', 'kdl_conversions', 'eigen_conversions', 'tf', 'costmap_2d', 'carrot_planner', 'pcl_msgs']
[rosmake-0] Starting >>> catkin [ make ]
[rosmake-1] Starting >>> bondcpp [ make ]
[rosmake-1] Finished <<< bondcpp ROS_NOBUILD in package bondcpp  No Makefile in package bondcpp
[rosmake-2] Starting >>> eigen_conversions [ make ]
[rosmake-1] Starting >>> kdl_conversions [ make ]
………………
[rosmake-3] Starting >>> std_srvs [ make ]
[rosmake-3] Finished <<< std_srvs ROS_NOBUILD in package std_srvs  No Makefile in package std_srvs
[rosmake-3] Starting >>> amcl [ make ]
[ rosmake ] Last 40 lines [ robot_pose_ekf: 4.3 sec ] [ costmap_2d: 4.2 sec ] [ actionli... [ 4 Active 69/82 Complete ]
{-------------------------------------------------------------------------------
    vals = rospack.get_depends(package, implicit=True)
  File "/usr/lib/python2.7/dist-packages/rospkg/rospack.py", line 227, in get_depends
    s.update(self.get_depends(p, implicit))
  File "/usr/lib/python2.7/dist-packages/rospkg/rospack.py", line 227, in get_depends
    s.update(self.get_depends(p, implicit))
  File "/usr/lib/python2.7/dist-packages/rospkg/rospack.py", line 227, in get_depends
    s.update(self.get_depends(p, implicit))
  File "/usr/lib/python2.7/dist-packages/rospkg/rospack.py", line 227, in get_depends
    s.update(self.get_depends(p, implicit))
  File "/usr/lib/python2.7/dist-packages/rospkg/rospack.py", line 227, in get_depends
    s.update(self.get_depends(p, implicit))
  File "/usr/lib/python2.7/dist-packages/rospkg/rospack.py", line 221, in get_depends
    names = [p.name for p in self.get_manifest(name).depends]
  File "/usr/lib/python2.7/dist-packages/rospkg/rospack.py", line 159, in get_manifest
    return self._load_manifest(name)
  File "/usr/lib/python2.7/dist-packages/rospkg/rospack.py", line 198, in _load_manifest
    retval = self._manifests[name] = parse_manifest_file(self.get_path(name), self._manifest_name, rospack=self)
  File "/usr/lib/python2.7/dist-packages/rospkg/rospack.py", line 190, in get_path
    raise ResourceNotFound(name, ros_paths=self._ros_paths)
rospkg.common.ResourceNotFound: cmake_modules
ROS path [0]=/home/sxl/ros_catkin_ws1/install_isolated/share/ros
ROS path [1]=/home/sxl/ros_catkin_ws1/src/navigation-groovy-devel
ROS path [2]=/home/sxl/ros_catkin_ws1/src/slam_gmapping
ROS path [3]=/home/sxl/ros_catkin_ws1/src/navigation-1.8.3
ROS path [4]=/home/sxl/ros_catkin_ws1/src/common_rosdeps
ROS path [5]=/home/sxl/ros_catkin_ws1/src/pluginlib
ROS path [6]=/home/sxl/ros_catkin_ws1/src/visualization_msgs
ROS path [7]=/home/sxl/actionlib
ROS path [8]=/home/sxl/ros_catkin_ws1/install_isolated/share
ROS path [9]=/home/sxl/ros_catkin_ws1/install_isolated/stacks
CMake Error at /home/sxl/ros_catkin_ws1/install_isolated/share/dynamic_reconfigure/cmake/cfgbuild.cmake:78 (string):
  string sub-command REPLACE requires at least four arguments.
Call Stack (most recent call first):
  /home/sxl/ros_catkin_ws1/install_isolated/share/dynamic_reconfigure/cmake/cfgbuild.cmake:99 (gencfg_cpp)
  CMakeLists.txt:21 (include)
-- Configuring incomplete, errors occurred!
-------------------------------------------------------------------------------}
[ rosmake ] Output from build of package costmap_2d written to:
[ rosmake ] /home/sxl/.ros/rosmake/rosmake_output-20141127-143344/costmap_2d/build_output.log
[rosmake-2] Finished <<< costmap_2d [FAIL] [ 4.22 seconds ]
[ rosmake ] Halting due to failure in package costmap_2d.
[ rosmake ] Waiting for other threads to complete.
[rosmake-3] Finished <<< amcl [PASS] [ 3.36 seconds ]
[ rosmake ] Last 40 lines [ robot_pose_ekf: 7.3 sec ] [ actionlib: 4.3 sec ] [ 2 Active 70/82 Complete ]
{-------------------------------------------------------------------------------
[ 57%] Built target ROSBUILD_genmsg_cpp
make[3]: Entering directory `/home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/build'
[ 57%] Built target rospack_gensrv
make[3]: Entering directory `/home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/build'
make[3]: Leaving directory `/home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/build'
make[3]: Leaving directory `/home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/build'
[ 57%] Built target rospack_genmsg
make[3]: Entering directory `/home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/build'
make[3]: Entering directory `/home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/build'
[ 57%] Built target rospack_gensrv_all
make[3]: Leaving directory `/home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/build'
make[3]: Leaving directory `/home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/build'
[ 57%] Built target rosbuild_precompile
[ 57%] make[3]: Entering directory `/home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/build'
Built target rospack_genmsg_all
make[3]: Leaving directory `/home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/build'
make[3]: Entering directory `/home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/build'
[ 71%] [ 85%] [100%] Building CXX object CMakeFiles/robot_pose_ekf.dir/src/nonlinearanalyticconditionalgaussianodo.cpp.o
Building CXX object CMakeFiles/robot_pose_ekf.dir/src/odom_estimation_node.cpp.o
Building CXX object CMakeFiles/robot_pose_ekf.dir/src/odom_estimation.cpp.o
In file included from /home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/src/nonlinearanalyticconditionalgaussianodo.cpp:18:0:
/home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/include/robot_pose_ekf/nonlinearanalyticconditionalgaussianodo.h:22:59: fatal error: pdf/analyticconditionalgaussian_additivenoise.h: No such file or directory
compilation terminated.
In file included from /home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/src/odom_estimation.cpp:37:0:
/home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/include/robot_pose_ekf/odom_estimation.h:40:41: fatal error: filter/extendedkalmanfilter.h: No such file or directory
compilation terminated.
make[3]: *** [CMakeFiles/robot_pose_ekf.dir/src/odom_estimation.cpp.o] Error 1
make[3]: *** Waiting for unfinished jobs....
make[3]: *** [CMakeFiles/robot_pose_ekf.dir/src/nonlinearanalyticconditionalgaussianodo.cpp.o] Error 1
In file included from /home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/include/robot_pose_ekf/odom_estimation_node.h:44:0,
                 from /home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/src/odom_estimation_node.cpp:37:
/home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/include/robot_pose_ekf/odom_estimation.h:40:41: fatal error: filter/extendedkalmanfilter.h: No such file or directory
compilation terminated.
make[3]: *** [CMakeFiles/robot_pose_ekf.dir/src/odom_estimation_node.cpp.o] Error 1
make[3]: Leaving directory `/home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/build'
make[2]: *** [CMakeFiles/robot_pose_ekf.dir/all] Error 2
make[2]: Leaving directory `/home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/build'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/sxl/ros_catkin_ws1/src/navigation-groovy-devel/robot_pose_ekf/build'
-------------------------------------------------------------------------------}
[ rosmake ] Output from build of package robot_pose_ekf written to:
[ rosmake ] /home/sxl/.ros/rosmake/rosmake_output-20141127-143344/robot_pose_ekf/build_output.log
[rosmake-1] Finished <<< robot_pose_ekf [FAIL] [ 7.39 seconds ]
[ rosmake ] Halting due to failure in package robot_pose_ekf.
[ rosmake ] Waiting for other threads to complete.
[rosmake-0] Finished <<< actionlib [PASS] [ 7.03 seconds ]
[ rosmake ] Results:
[ rosmake ] Built 73 packages with 2 failures.
[ rosmake ] Summary output to directory

Originally posted by Alice63 on ROS Answers with karma: 63 on 2014-11-27 Post score: 0

Answer: You're doing so many things wrong.. I don't even.. You don't state which version of ROS you're using or which OS you're using. Don't use rosbuild in a catkin workspace. At this point, almost all of ROS is using catkin; there shouldn't be any need to use rosbuild at all. Why are you checking out ros_comm from source? That's a pretty serious modification to ROS, and you don't give any reason for actually needing to do so. Downgrading the version of ros_comm that you're using to one from an older version of ROS is highly likely to introduce bugs and incompatibilities. Groovy is no longer supported; you should probably be using a newer version of ROS. As stated on the installation page, ROS packages are prefixed with ros- and the distro name. Try:

sudo apt-get install ros-hydro-navigation

Or, if you're still using Groovy:

sudo apt-get install ros-groovy-navigation

Originally posted by ahendrix with karma: 47576 on 2014-11-28 This answer was ACCEPTED on the original site Post score: 2

Original comments

Comment by Alice63 on 2014-12-01: @ahendrix Because I have installed ros-groovy ros_comm on Ubuntu 12.04 on armel. You once told me that it no longer supports armel, so I installed ros_comm from source. But now I need the navigation package; I have downloaded navigation and the relevant packages.
Comment by Alice63 on 2014-12-01: @ahendrix When I compile navigation, it says the bfl files are missing. In fact, I have downloaded the bfl package and put it in the navigation packages.

Comment by ahendrix on 2014-12-01: You REALLY need to mention things like your ROS version, OS and your architecture in the question; otherwise I assume that you're using a standard installation of a recent version of ROS on Ubuntu x86.

Comment by ahendrix on 2014-12-01: navigation is a dry (rosbuild) package on groovy, and bfl is a wet (catkin) package. You should have separate catkin and rosbuild workspaces. navigation and other dry packages should be in the rosbuild workspace, and bfl and other wet packages should be in the catkin workspace.

Comment by ahendrix on 2014-12-01: You should build and source your wet (catkin) workspace before attempting to build your dry (rosbuild) workspace.
{ "domain": "robotics.stackexchange", "id": 20184, "tags": "ros, navigation, source" }
Why is aurora borealis circular in shape when viewed from space?
Question: These are some false color images of the northern lights captured by NASA's satellite Dynamics Explorer-1. Image credits: Dynamics Explorer - 1 (DE-1) Spin-Scan Auroral Imaging (SAI) Photo Gallery It's interesting that what we see as longitudinal bands from Earth are part of larger rings when viewed from space. I thought that the aurora is formed when charged particles from the sun ionize the earth's atmosphere. I cannot understand why that can happen only in this circular ring. Are there any simple explanations for this phenomenon? Answer: Yes, the field is stronger at the north pole, but it also has very little surface area perpendicular to the sun. The majority of the north pole field lines perpendicular to the sun are looped around in the other direction far away from the earth where the field density is much lower. Consider electrons traveling toward the earth with only one velocity. As they get closer to the earth the magnetic field gets stronger. Eventually the electrons will spin around the field lines of sufficient strength and travel along just that field line. Since that field line intersects the surface of the earth at just one geomagnetic latitude, the electrons will only interact at that latitude forming a band. In the below picture, the field density is eventually strong enough to deflect the particles along the field lines. Before that point, it is interacting with the field lines from the magnetic north pole. Now the electrons do not immediately interact with the atmosphere, but they instead get reflected back and forth between the poles and drift eastward (west for positive ions) while doing so, causing the night sky to be illuminated just as well as the day side when the electrons eventually interact with the atmosphere. That should explain why the aurora is not as prevalent at the north pole and why it is visible as a ring around the day and night side.
{ "domain": "physics.stackexchange", "id": 97423, "tags": "electromagnetic-radiation, earth, plasma-physics, geomagnetism, solar-wind" }
Phase Information at Higher Frequencies in Continuous Wavelet Transform
Question: I'm using the code I found here to compute the wavelet transform of a sine wave with a constant frequency.

    #!/usr/bin/python2
    from pylab import *
    import matplotlib.pyplot as plt
    import numpy as np
    import scipy

    x = np.linspace(0, 10, 65536)
    y = np.sin(2 * pi * 60 * x)

    N = len(y)
    Y = np.fft.fft(y)

    J = 128
    scales = np.asarray([2 ** (i * 0.1) for i in range(J)])

    morletft = np.zeros((J, N))
    for i in range(J):
        morletft[i][:N/2] = sqrt(2 * pi * scales[i]) * exp(-(scales[i] * 2 * pi * scipy.array(range(N/2)) / N - 2) ** 2 / 2.0)

    U = empty((J, N), dtype=complex128)
    for i in range(J):
        U[i] = np.fft.ifft(Y * morletft[i])

    plt.imshow(abs(U[:, scipy.arange(0, N, 1)]), interpolation='none', aspect='auto')
    plt.title("Sine Wave")
    plt.xlabel("Translation")
    plt.ylabel("Scale")
    plt.show()

The result looks alright. What I'm really interested in is the phase, which I'm extracting from the above code using:

    imshow(np.unwrap(np.angle(U)), aspect='auto')

and it looks like this: Why is the phase information present at frequencies higher than (or scales lower than; note the inverted y-axis) that of the signal?

Answer: As you can see, there is energy at frequencies higher than your original signal's frequency. This is probably because every level of the wavelet transform covers a band of frequencies, so the wavelet can show energy at higher frequencies. The phase exists only as long as there is energy; it is directly connected to it. You can check that the phase data is meaningful only where there is non-zero energy in the spectrum.
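A quick numerical way to see the answer's point (the values below are my own illustrative choices): the phase of a complex coefficient is only trustworthy where its magnitude is non-negligible, so a common trick is to mask the phase plot by the energy before displaying it.

```python
import numpy as np

# toy "CWT coefficients": two with real energy, two with essentially none
coeffs = np.array([1 + 1j, 0.5 - 0.5j, 1e-12 + 1e-12j, 0 + 0j])

magnitude = np.abs(coeffs)
phase = np.angle(coeffs)

# keep phase only where the energy is above a small fraction of the peak
meaningful = magnitude > 1e-6 * magnitude.max()
masked_phase = np.where(meaningful, phase, np.nan)
```

Applying the same mask to np.angle(U) before calling imshow would suppress the meaningless phase at scales where the Morlet filter passes almost no energy.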
{ "domain": "dsp.stackexchange", "id": 4876, "tags": "frequency-spectrum, wavelet, python" }
Complexity of multiplying bivariate polynomials of degree n
Question: Let $P(X,Y)$ and $Q(X,Y)$ be two bivariate polynomials of degree at most $n$. Using $O(n^2)$ FFTs, we can compute the product $PQ$ in time $O(n^3\log n)$. Q: Is there a faster algorithm to compute $PQ$? Answer: It suffices to describe how to evaluate $P(\omega^u,\omega^v)$ at the roots of unity. Suppose $P(X,Y)=\sum_{i,j} a_{i,j} X^i Y^j$. Let $$F_{b,c}(X,Y) = \sum_{i,j} a_{2i+b,2j+c} X^i Y^j$$ where the sum is over all $i,j$ with $i\le n/2, j \le n/2$. Then $$P(X,Y) = F_{0,0}(X^2, Y^2) + Y F_{0,1}(X^2, Y^2) + X F_{1,0}(X^2, Y^2) + XY F_{1,1}(X^2, Y^2).$$ Therefore you can evaluate $P(\omega^u,\omega^v)$ at the $n^2 $ roots of unity by evaluating each $F_{b,c}(\omega^{2u},\omega^{2v})$ at the $(n/2)^2$ roots of unity. This leads to a recursive algorithm whose running time is given by $$T(n) = 4 T(n/2) + O(n^2),$$ which has the solution $T(n) = O(n^2 \log n)$. This immediately leads to an algorithm to compute the product $P(X,Y) Q(X,Y)$, analogous to the standard algorithm for (univariate) polynomial multiplication with DFTs, by evaluating at the roots of unity, multiplying pointwise, and then applying the inverse DFT. I'll let you fill in the details from here.
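The evaluate–multiply–interpolate pipeline the answer describes can be sketched directly with NumPy's 2D FFT (the function name is mine; for exact integer coefficients you would round the result, or use a number-theoretic transform instead of floating-point FFTs):

```python
import numpy as np

def poly2_mul(P, Q):
    """Multiply two bivariate polynomials given as 2D coefficient
    arrays, with P[i, j] = coefficient of X^i Y^j."""
    # the product has degree deg(P) + deg(Q) in each variable
    shape = (P.shape[0] + Q.shape[0] - 1, P.shape[1] + Q.shape[1] - 1)
    # evaluate both polynomials on a 2D grid of roots of unity
    Pf = np.fft.fft2(P, s=shape)
    Qf = np.fft.fft2(Q, s=shape)
    # multiply pointwise, then interpolate back with the inverse DFT
    return np.fft.ifft2(Pf * Qf).real
```

For example, squaring $X + Y$ this way recovers the coefficients of $X^2 + 2XY + Y^2$.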
{ "domain": "cs.stackexchange", "id": 20483, "tags": "polynomials, multiplication" }
Transform RGB image to *look like* Infrared
Question: Context: I'm trying to improve a pose estimation model so that it works better when my camera is in infrared mode. Unfortunately I only have RGB images to train on. I realize that you can't convert RGB to IR directly, but my hypothesis is that converting the RGB images to look more like IR, and then training on a dataset of combined RGB and IR images, will lead to better performance. Are there any libraries that have tried to implement a function like this? I'm essentially looking for a function that does something like this ("IR effect"): http://funny.pho.to/infrared/ Answer: Without much additional info, I'd presume the red channel of your camera has the highest correlation with the infrared spectrum. Since you only have RGB and no knowledge of how it was calculated from the color sensor pixels: take the R channel; it's as good as it gets. The "IR effect" is for artistic purposes only. I doubt it has much to do with what you'd see in an actual IR picture. Notice that I also think you haven't tried to properly understand the physics of your problem: while I'd presume that a visible-light photograph is highly correlated with a near-IR photo, that's very likely not the case for mid- and long-wavelength IR, and certainly not for far infrared. So, define your requirements better. And: ML really depends on having data. A lot of it. You might start with wrong data, but as DNNs really don't lend themselves to understanding of the weights, it's very questionable that anything you do with "emulated" data transfers well to "real" IR data.
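A minimal sketch of the "take the red channel" suggestion (the function name and the H x W x 3 array convention are my own; replicating R into all three channels keeps the shape compatible with an RGB-input model):

```python
import numpy as np

def rgb_to_ir_like(rgb):
    """Approximate an IR-style image by keeping only the red channel.
    rgb: H x W x 3 array; returns an H x W x 3 grayscale-from-red array."""
    red = rgb[..., 0]
    # stack the red channel three times so the output still looks like RGB
    return np.stack([red, red, red], axis=-1)
```

This is only a crude near-IR proxy, in line with the caveats above about mid-, long-wavelength, and far infrared.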
{ "domain": "dsp.stackexchange", "id": 10240, "tags": "image-processing, transform" }
Add one to a very large integer
Question: I was asked to implement this in an interview a while back. I didn't pass the interview, most likely because it took me far too long. However, I'm interested to know how well implemented the actual code is. The task was to:

Add one to a positive number represented as an array of integers.

The question is the same as this one. However, I have done it in JavaScript and my implementation is different (recursion rather than loops), so I hope they are sufficiently different.

    var addToArray = function(n){
        var result = [];

        function carryOne(n) {
            if (n.length === 0) {
                result.push(1);
            } else {
                var length = n.length-1;
                if (n[length] < 9) {
                    n[length] += 1;
                    result.push(n);
                } else {
                    result.push(0);
                    carryOne(n.slice(0, length));
                }
            }
        }

        carryOne(n);
        result.reverse();
        var flatten = Array.prototype.concat.apply([], result);
        console.log(flatten);
    }

I've just noticed I have missed the semicolons on the closing braces of the function definitions, but I'm keeping it like that for honesty. In general it works, but I think the need to reverse and flatten the array at the end is quite inelegant, so I suspect there is a better solution.

Answer: I agree with the other answers, it's a bit over-engineered. That's usually caused by a lack of knowledge of the language's native API. Not to worry; everyone has encountered this situation at least once in their job-interview days. Mine was "In any language, reverse a string in as few lines of code as possible". Only after my interview did I discover that there was strrev. The first problem I see is that your iteration of the original array is in one direction, while adding digits into result is in another direction. This causes you to end up doing an unnecessary reverse. Instead of push, why not unshift, which adds to the beginning of the array instead of the end? That way, both operations are in the same direction. The next one was looking for the array of integers that you mentioned.
It was only after I saw n.length that I realized n was an array. Make your variables verbose; name them so everyone knows what they're for. Then I'm not really sure what the purpose of the last concat is. Isn't result already an array? Anyway, I took a shot at your problem. There is actually a reduceRight in JS which is perfect for this. It's the same as reduce, only it starts from the end of the array.

    var number = [1, 9, 9, 9];

    function addOne(array) {
        // I wouldn't want to mutate the original array, so I slice
        var arrayCopy = array.slice();

        // `reduceRight` acts like `reduce`, except it starts from the end.
        // It's also like `forEach` except it allows you to carry a value.
        arrayCopy.reduceRight(function(carry, current, index) {
            // Add normally
            var sum = current + carry;

            // Update our carry
            carry = (sum / 10) | 0;

            // Update the current digit in the array
            arrayCopy[index] = sum % 10;

            // If we're at the last digit and we have a carry, add it in
            if (index === 0 && carry !== 0) arrayCopy.unshift(carry);

            // Return the carry for the next operation
            return carry;
        }, 1); // Here's our "add one"

        return arrayCopy;
    }

    // You can optionally add .join('') to get a string instead.
    var result = addOne(number);
    document.write(JSON.stringify(result));
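For comparison, a plain backwards for-loop (my own variant, not from the answer above) does the same job with an early exit once the carry dies out:

```javascript
function addOneLoop(digits) {
  var out = digits.slice(); // keep the caller's array intact
  var carry = 1;            // the "+1" we are adding
  for (var i = out.length - 1; i >= 0 && carry > 0; i--) {
    var sum = out[i] + carry;
    out[i] = sum % 10;       // current digit
    carry = (sum / 10) | 0;  // integer carry for the next digit
  }
  if (carry > 0) out.unshift(carry); // number grew by one digit
  return out;
}

console.log(addOneLoop([1, 9, 9, 9])); // [2, 0, 0, 0]
console.log(addOneLoop([9, 9, 9]));    // [1, 0, 0, 0]
```

The early exit means that for a number like [1, 2, 3] only the last digit is touched.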
{ "domain": "codereview.stackexchange", "id": 16301, "tags": "javascript, algorithm, recursion, interview-questions, integer" }
Set log level for each node in ROS CPP
Question: I want to set the log level for each node in ROS CPP. The node is defined as follows: ros::init(argc, argv, "my_ros_node"); What I am looking for is the CPP version of following command: rospy.init_node('my_ros_node', log_level=rospy.DEBUG) I am using ROS Indigo on Ubuntu 14.04 LTS. As always thank you very much. Originally posted by ravijoshi on ROS Answers with karma: 1744 on 2017-11-18 Post score: 1 Original comments Comment by clyde on 2017-11-19: You can set it per package. See: http://wiki.ros.org/rosconsole#Configuration Comment by ravijoshi on 2017-11-19: @clyde: Thank you very much. I am assuming that there must be a way to do it in CPP. It is possible to do in Python, as I posted above. However I am looking for CPP version. Thanks again. Answer: http://wiki.ros.org/rosconsole#Changing_Logger_Levels & #q43397 Originally posted by lucasw with karma: 8729 on 2017-11-19 This answer was ACCEPTED on the original site Post score: 2
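Following the rosconsole wiki page linked in the answer, the C++ equivalent sets the logger level programmatically after ros::init. This is a sketch, not a tested build: it needs a ROS Indigo environment and a catkin package to compile, and the node name matches the question's example:

```cpp
#include <ros/ros.h>
#include <ros/console.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "my_ros_node");

  // Set this node's default logger to DEBUG;
  // notifyLoggerLevelsChanged() makes the change take effect immediately.
  if (ros::console::set_logger_level(ROSCONSOLE_DEFAULT_NAME,
                                     ros::console::levels::Debug))
  {
    ros::console::notifyLoggerLevelsChanged();
  }

  ROS_DEBUG("Debug output is now visible");
  ros::spin();
  return 0;
}
```

This mirrors rospy.init_node('my_ros_node', log_level=rospy.DEBUG) from the question.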
{ "domain": "robotics.stackexchange", "id": 29401, "tags": "roscpp" }
What does Quantum Circuit Wires and Separated mean?
Question: From Quantum Circuits, there are two statements that are not clear.

Quantum Circuits

Quantum circuits are collections of quantum gates interconnected by quantum wires. The actual structure of a quantum circuit, the number and the types of gates, as well as the interconnection scheme are dictated by the unitary transformation, U, carried out by the circuit. Though in our description of quantum circuits we use the concepts of input and output registers of qubits, we should be aware that physically, the input and the output of a quantum circuit are not separated as their classical counterparts are; this convention allows us to describe the effect of the unitary transformation carried out by the circuit in a more coherent fashion. In all descriptions of quantum circuits, in addition to gates, we see quantum wires that move qubits and allow us to compose more complex circuits from simpler ones that, in turn, are composed of quantum gates. We compose components by connecting the output of one to the input of another; we also compose operations when the results of an operation are used as input to another. The composition does not affect the quantum states. The quantum wires do not perform any transformations in a computational sense; sometimes we can view them as transformations carried out by the identity operator $I$.

What does "the input and the output of a quantum circuit are not separated as their classical counterparts are" mean? In classical circuits we have transistors acting as gates; aren't those connected too?

What does "we see quantum wires that move qubits and allow us to compose more complex circuits from simpler ones that, in turn, are composed of quantum gates" mean? What are these wires physically in a quantum computer? I understand that we read quantum circuits from left to right and that they are unitary transformations. Are the wires just a way to show that this flow is from left to right?
Answer: For most qubit modalities quantum wires are best thought of as time while quantum gates are best thought of as electromagnetic pulses applied to individual qubits. This is in contrast to classical gates, where classical wires are etchings of metal inside a substrate while classical gates are, as you say, transistors arranged in series and/or in parallel. Take, for example, a NOT gate (which negates a bit). Classically this is instantiated with an etching of an input wire $a$, forked to go into a PMOS transistor in series with an NMOS transistor, with the output wire $\bar a$ at that point of connection. We can separately probe the input wire $a$ independently from the output wire $\bar a$. But this is in contrast with a Pauli $X$ gate (which similarly negates a qubit). To apply the $X$ gate to a qubit $|\psi\rangle$ at a particular depth of your circuit, you wait until that point in time and then apply an electromagnetic pulse (e.g., a laser) that takes your qubit to $X|\psi\rangle$. But the input wire is just the time up to the $X$ gate, and it can't be probed separately from the output wire. I also really like the analogy that quantum circuits are akin to musical writing. Time marches left to right; the single notes and chords correspond to single-qubit gates and multi-qubit gates, respectively, and there's no feedback or fan-out/fan-in. Whereas a classical circuit has such fan-in and fan-out and often utilizes feedback (for example, to build flip-flops), a quantum circuit cannot effectuate the same kind of feedback.
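To make the gate-versus-wire distinction concrete, here is a toy numerical sketch (mine, not from the answer): the "wire" is just the identity acting while time passes, whereas the $X$ gate actually transforms the state.

```python
# Toy numerical sketch (mine, not from the answer): a qubit as a
# 2-component vector, a "wire" as the identity (time passing, nothing
# applied), and the Pauli X gate as the pulse that negates the qubit.

def apply(gate, state):
    """Multiply a 2x2 gate by a 2-component state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

I = [[1, 0], [0, 1]]    # the "wire": the state just rides along in time
X = [[0, 1], [1, 0]]    # the gate: an applied pulse that flips the qubit

psi = [1, 0]                  # |0>
after_wire = apply(I, psi)    # still |0> -- no transformation happened
after_gate = apply(X, psi)    # |1> -- the pulse did something
```

Applying $X$ twice returns the original state, which is the circuit-diagram statement $X^2 = I$.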
{ "domain": "quantumcomputing.stackexchange", "id": 5243, "tags": "quantum-gate, quantum-circuit, classical-computing" }
Why is the ionization energy for Hydrogen non-zero?
Question: There are no other electrons to collide with, repel, and kick hydrogen's single electron far away from the nucleus. And that single electron is tightly attracted to the nucleus by the electrostatic energy between them. So it seems to me that hydrogen should not require any ionization energy. But when I checked the sources, hydrogen's ionization energy is relatively high. Hydrogen even makes $NaH$, which doesn't stay around for long and readily decomposes back into $Na$ metal and hydrogen gas. So in this case why can't we say that hydrogen does not require ionization energy? Answer: The ionization energy (IE) of an atom or molecule describes the minimum amount of energy required to remove an electron (to infinity) from the atom or molecule in the gaseous state. With ionization energy, an electron is not "kicked out" by other electrons, but rather it is "the energy required for the electron to 'climb out' and leave the atom." Since the electron "is drawn inwards by positive electrostatic potential," it would make sense to infer that the more "tightly drawn" the electron is to the nucleus, the more energy it would require to "climb out." More of this can be found on the Wikipedia page.
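The "relatively high" value can be made quantitative with the Bohr model (a standard textbook result, not taken from the answer): the ground-state binding energy of hydrogen's single electron is the energy needed to "climb out".

```python
# Bohr-model ground-state energy of hydrogen (standard result, not from
# the answer): E = m_e * e^4 / (8 * eps0^2 * h^2), the ionization energy.

m_e  = 9.1093837e-31     # electron mass, kg
q_e  = 1.60217663e-19    # elementary charge, C
eps0 = 8.8541878e-12     # vacuum permittivity, F/m
h    = 6.62607015e-34    # Planck constant, J*s
N_A  = 6.02214076e23     # Avogadro constant, 1/mol

E_joule = m_e * q_e**4 / (8 * eps0**2 * h**2)   # ionization energy per atom, J
E_eV = E_joule / q_e                            # ~13.6 eV per atom
E_kJ_per_mol = E_joule * N_A / 1000             # ~1312 kJ/mol
```

So even with nothing else around, pulling the electron from the ground state to infinity costs about 13.6 eV per atom, far from zero.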
{ "domain": "chemistry.stackexchange", "id": 1850, "tags": "hydrogen, covalent-compounds, ionization-energy, electrostatic-energy" }
Thermodynamics: Piston spring assembly
Question: We have a piston assembly (filled with gas) connected to a spring. The top of the piston is open to the atmosphere. The gas is reversibly heated to 100C. This is an example problem in my thermodynamics handbook (Koretsky, Ex. 2.9). The process is reversible and the work is then given by $$ W = -\int^{V_2}_{V_1}PdV $$ The displacement of the spring can be written in terms of the change in volume $$ x = \frac{V-V_1}{A} = \frac{\Delta V}{A} $$ A force balance on the piston yields $$ P_{air}A = P_{ext}A + kx $$ $$ P_{air} = P_{ext} + \frac{kx}{A^2}$$ Plugging these equations into the first equation: $$ W = -\int^{V_2}_{V_1}PdV = -\int^{V_2}_{V_1}P_{ext}dV -\int^{\Delta V = V_2-V_1} _{0}\frac{k \Delta V}{A^2}d({\Delta V} ) $$ $$ W = -P_{ext}(\Delta V) - \frac{k \Delta V^2}{2A^2} $$ Applying the ideal gas law $$ \frac{P_1V_1}{T_1} = \frac{P_2V_2}{T_2} = \frac{V_2}{T_2}(P_{ext} + \frac{kx}{A^2})$$ and solving this equation will give $V_2$, and the work can be found. EDIT: The change in internal energy is given by $$ \Delta u = \int^{T_2}_{T_1}C_v dT = \int^{T_2}_{T_1}(C_p - R) dT = R\int^{T_2}_{T_1}[(A-1) + BT + DT^{-2}]dT $$ $$ \Delta u = R\left[(A-1)T+\frac{B}{2}T^2 - \frac{D}{T}\right]\Big|_{T_1}^{T_2} $$ With the parameters for the heat capacity of air from the tables in the book, the internal energy change can be found. The total heat transfer is then $Q = \Delta u - W$. My question: How would I model the transient behaviour of the system? The spring's displacement over time, as well as the pressure change over time? EDIT: Fixed an integration error in the 6th formula. Answer: If the piston oscillates, then the process can't be reversible. The kinetic energy would certainly be dissipated over time by viscous stresses (an irreversible effect) until the system attained a new steady state. And what happened to the changes in internal energy U of the gas as it is expanded or compressed? That is certainly omitted from these analyses.
There is nothing in the problem statement that says that the reversible expansion is carried out isothermally. And what if the piston mass is negligible? Since no one responded to my comments regarding the original post, it is difficult to say more at this time. THIS IS AN EDIT TO THE RESPONSE, ONCE MORE INFORMATION WAS MADE AVAILABLE. The force balance on the piston is: $$PA=P_{atm}A+kx$$where x is taken to be zero at time zero. The change in volume of the gas is given by:$$V-V_0=Ax$$So, combining these equations gives: $$P=P_{atm}+\frac{k}{A^2}(V-V_0)$$ The rate at which work is being done on the surroundings is thus $$\dot{W}=\left[P_{atm}+\frac{k}{A^2}(V-V_0)\right]\frac{dV}{dt}$$The rate of change of internal energy of the gas is given by: $$\frac{dU}{dt}=nC_v\frac{dT}{dt}$$So, from the first law of thermodynamics, $$nC_v\frac{dT}{dt}=\dot{Q}-\left[P_{atm}+\frac{k}{A^2}(V-V_0)\right]\frac{dV}{dt}$$If we integrate this with respect to time, we get: $$nC_v(T-T_0)=\int_0^t{\dot{Q}dt}-P_{atm}(V-V_0)-\frac{k}{A^2}\frac{(V-V_0)^2}{2}\tag{1}$$ where $$T=\frac{PV}{nR}=\frac{\left[P_{atm}+\frac{k}{A^2}(V-V_0)\right]V}{nR}$$and $$T_0=\frac{P_{atm}V_0}{nR}$$ So, $$T-T_0=\frac{P_{atm}(V-V_0)}{nR}+\frac{\left[\frac{k}{A^2}(V-V_0)\right]V}{nR}$$ If I substitute this result for the temperature difference into Eqn. 1 to obtain an equation for the volume solely in terms of the cumulative heat added, I obtain: $$\gamma \left[P_{atm}(V-V_0)+\frac{k}{A^2}\frac{(V-V_0)^2}{2}\right]+\frac{k}{A^2}\frac{V^2-V_0^2}{2}=(\gamma -1)Q$$where Q is the cumulative amount of heat added through time t.
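As a quick numerical sanity check on the quasi-static work expression (a sketch with arbitrary made-up values for the pressure, spring constant, piston area, and volumes — none of these numbers come from the problem), one can integrate $P(V) = P_{atm} + k(V - V_0)/A^2$ directly and compare with the closed form $W = -P_{atm}\,\Delta V - k\,\Delta V^2/(2A^2)$:

```python
# Sketch with arbitrary illustrative numbers (not from the problem):
# verify W = -P_atm*dV - k*dV^2/(2*A^2) by numerically integrating -P dV.

P_atm = 1.0e5    # Pa
k     = 5.0e4    # N/m
A     = 0.01     # m^2
V0    = 1.0e-3   # m^3
V2    = 1.5e-3   # m^3

def P(V):
    return P_atm + k * (V - V0) / A**2

# trapezoidal integration of -∫ P dV from V0 to V2
n = 10000
dV = (V2 - V0) / n
W_num = -sum(0.5 * (P(V0 + i * dV) + P(V0 + (i + 1) * dV)) * dV
             for i in range(n))

dV_tot = V2 - V0
W_closed = -P_atm * dV_tot - k * dV_tot**2 / (2 * A**2)
```

Since $P(V)$ is linear in $V$, the trapezoid rule reproduces the closed form essentially exactly; the same numerical setup extends naturally to a transient model once a dynamic equation for $V(t)$ is chosen.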
{ "domain": "physics.stackexchange", "id": 33005, "tags": "homework-and-exercises, thermodynamics, differential-equations" }
single_to_multi_fast5 does not collect all the single files if the input folder contains mixed types of fast5 files
Question: I have a dataset that contains thousands of mixed multiple and single fast5 files in a non-homogeneous folder structure. The reads and fast5 files are coming from different labs and some labs gave single fast5 files and some multi fast5 files, without clear mappings. My final goal is to convert all the fast5 files to multi fast5 files. There are commands in ont_fast5_api, which is an interface to the HDF5 files of Oxford Nanopore. Using this API we can convert single fast5 files and multi fast5 files to each other. This is the link to the API's GitHub page: https://github.com/nanoporetech/ont_fast5_api So I made a test mixed folder to check if I can use the API without missing reads as a result of conversion. The test folder structure is similar to my original folder structure: mixed Subfolder_A single_fast5 x 4000 Subfolder_B multiple_fast5 (contains 4000 single reads fast5) single_fast5 x 2 multiple_fast5_1 (contains 4000 single reads fast5) multiple_fast5_2 (contains 4000 single reads fast5) multiple_fast5_3 (contains 100 single reads fast5) Here is a screenshot of the structure with a few files from each folder: and I created two other folders outside of the mixed folder as follows: single multi Here, my question is about the single_to_multi_fast5 and multi_to_single_fast5 commands. test_path=mixed multi_path=multi single_path=single multi_to_single_fast5 -i $test_path/ -s $single_path/ --recursive In $single_path, I expect to see all of the reads in multi files converted to single files (12100 reads in 12100 single fast5 files), but I see 8100 single files. So I have 4000 missing reads. Now I convert single-read fast5 files to multi: single_to_multi_fast5 -i $test_path/ -s $multi_path/ --filename_base $output_name --batch_size 1000 --recursive In the $multi_path folder, I expect to see that the generated multi files contain all my single-read files (4000 + 2), but I see 3006 in the mapping summary. So again I missed 996 single reads in my multi files.
Since I made a small subsample of data, I am sure that the files are not overwritten. Because my final goal is converting everything to multi I was going to do the following pipeline: orig_path=original_folder intermediate_path=intermediate final_path=final_multi # converts/collects all multi files in the original folder multi_to_single_fast5 -i $orig_path/ -s $intermediate_path/ --recursive single_to_multi_fast5 -i $intermediate_path/ -s $final_path/ --filename_base $output_name --batch_size 1000 --recursive # converts/collects all single files in the original folder single_to_multi_fast5 -i $orig_path/ -s $final_path/ --filename_base $output_name --batch_size 1000 --recursive However because it seems I get missing reads during the process, I cannot use this pipeline. At the same time, my original dataset is super huge, I cannot check if individual files are single or multi. It would take ages (I guess). Is there a solution for solving this problem besides checking every file? Maybe my pipeline is not quite suitable for what I want to do. Answer: It looks like you're creating too much additional work for yourself. What I'm trying to lead to with my comments is that the file names should help in working out if a file represents single reads or multiple reads, in which case the single reads can be shifted into a separate folder. Based on the file names that I would expect, a search for files containing '_read_[0-9]' should be sufficient to fish out the single fast5 files: find . -name '*_read_[0-9]*' > single_reads.txt mkdir -p single_reads cat single_reads.txt | while read fname; do mv -i ${fname} single_reads; done [That single_reads.txt file could be split into multiple files if directory entries get too large.] After which you should have a single folder that contains thousands of single-read fast5 files, and other folders that contain multi-fast5 files, removing the problem of needing to deal with mixed file types.
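If the file names turn out not to be reliable, another option is to peek at each file's top-level HDF5 groups. This sketch assumes the usual fast5 layout (worth verifying on your own data): multi-read files hold one `read_<id>` group per read, while single-read files have groups like `Raw` and `UniqueGlobalKey`. With h5py the keys would come from `list(h5py.File(path, "r").keys())`; the decision logic itself is just:

```python
# Sketch of a name-independent check: classify a fast5 file from its
# top-level HDF5 group names. Assumes the common convention (verify on
# your own data): multi-read files contain one "read_<id>" group per
# read; single-read files do not.

def is_multi_read(top_level_keys):
    return any(key.startswith("read_") for key in top_level_keys)

# examples of what the two layouts usually look like
single_keys = ["Raw", "UniqueGlobalKey", "Analyses"]   # typical single-read file
multi_keys  = ["read_0a1b2c3d", "read_4e5f6a7b"]       # typical multi-read file

assert not is_multi_read(single_keys)
assert is_multi_read(multi_keys)
```

Walking the tree once with this check lets you move single-read files into their own folder (as in the find/mv pipeline above) without trusting the file names at all.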
{ "domain": "bioinformatics.stackexchange", "id": 2625, "tags": "nanopore, fast5" }
Magnetic field loops do not knot or link
Question: The magnetic field is composed of closed loops (assuming there is no magnetic monopole). How does one prove any two magnetic loops do not knot to form a link? Answer: You don't. Take a set of short permanent magnets. Chain them together. Make a knot out of the chain, and connect the ends. Or form two chains. Make them into linked closed loops.
{ "domain": "physics.stackexchange", "id": 74602, "tags": "electromagnetism, magnetic-fields, topology" }
openni_node.launch no longer in openni_camera?
Question: I'm not sure if this is a bug or a planned change, but up until not too long ago, I'd swear I was always able to fire up a connection to my Kinect with the command: $ roslaunch openni_camera openni_node.launch Now in the latest Electric RC-1 Debian packages, the file openni_node.launch no longer exists. Not sure if it disappeared earlier than RC-1. Has anyone else run into this or am I having a senior moment? --patrick UPDATE: Thanks to Mike's answer below, I tried launching from the new location: $ roslaunch openni_launch openni.launch And amid the INFO messages, I get a number of ERROR messages as listed below. Also, I get a blank image in image_view when I try: $ rosrun image_view image_view image:=/camera/rgb/image_color Here now is the output from openni.launch: [ERROR] [1314123870.086794930]: Tried to advertise a service that is already advertised in this node [/camera/depth_registered/image_rect_raw/compressed/set_parameters] [ERROR] [1314123870.096387560]: Tried to advertise a service that is already advertised in this node [/camera/depth_registered/image_rect_raw/theora/set_parameters] [ INFO] [1314123874.255698250]: Number devices connected: 1 [ INFO] [1314123874.256049202]: 1. 
device on bus 001:20 is a Xbox NUI Camera (2ae) from Microsoft (45e) with serial id 'B00362222624036B' [ INFO] [1314123874.257676854]: Searching for device with index = 1 [ INFO] [1314123874.334089426]: Opened 'Xbox NUI Camera' on bus 1:20 with serial number 'B00362222624036B' [ INFO] [1314123874.370155878]: rgb_frame_id = '/camera_rgb_optical_frame' [ INFO] [1314123874.370254144]: depth_frame_id = '/camera_depth_optical_frame' [ INFO] [1314123874.375764411]: using default calibration URL [ INFO] [1314123874.375919878]: camera calibration URL: file:///home/patrick/.ros/camera_info/rgb_B00362222624036B.yaml [ERROR] [1314123874.376112291]: Unable to open camera calibration file [/home/patrick/.ros/camera_info/rgb_B00362222624036B.yaml] [ WARN] [1314123874.376192120]: Camera calibration file /home/patrick/.ros/camera_info/rgb_B00362222624036B.yaml not found. [ INFO] [1314123874.378165834]: using default calibration URL [ INFO] [1314123874.378270805]: camera calibration URL: file:///home/patrick/.ros/camera_info/depth_B00362222624036B.yaml [ERROR] [1314123874.378358246]: Unable to open camera calibration file [/home/patrick/.ros/camera_info/depth_B00362222624036B.yaml] [ WARN] [1314123874.378610024]: Camera calibration file /home/patrick/.ros/camera_info/depth_B00362222624036B.yaml not found. [ WARN] [1314123874.380757992]: Using default parameters for RGB camera calibration. [ WARN] [1314123874.380868691]: Using default parameters for IR camera calibration. Originally posted by Pi Robot on ROS Answers with karma: 4046 on 2011-08-23 Post score: 1 Answer: Launch files have moved to the "openni_launch" package. Originally posted by fergs with karma: 13902 on 2011-08-23 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Pi Robot on 2011-08-23: Got it--thanks! 
Comment by fergs on 2011-08-23: You'll also have to dynamic reconfigure to turn on depth_registration, as it no longer does so automatically (you can also set a rosparam parameter) Comment by Pi Robot on 2011-08-23: Thanks Fergs,--however, launching the new openni.launch file gives me errors and I get a blank RGB image using image_view. I have updated my question with the error output. Comment by sven_007 on 2012-02-16: I have the same problems as above.Have you solved them? Would you please tell me how to get it?and how to dynamic reconfigure to turn on depth_registration?Thanks very much!
{ "domain": "robotics.stackexchange", "id": 6493, "tags": "kinect, openni-camera" }
What is difference between Taylor series and Legendre polynomial?
Question: As I understand it, a Taylor series is used to analyze $f(x)$: if I keep many orders, I can approximate $f(x)$. From this point of view, what is the difference with Legendre polynomials? Answer: Taylor series approximation of a function involves matching derivatives of the function and the polynomial at one point. All information to construct the Taylor series is obtained from the function and its derivatives at that point. So you get a smooth approximation around that chosen point. In the case of approximation by Legendre polynomials, you approximate a function in an interval in a least squares sense. The function values in this interval as well as its derivatives will generally be different from the corresponding values of the approximating polynomial, but the mean squared error in the chosen interval attains a minimum for a given degree of the polynomial.
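The contrast can be seen numerically (a sketch, using $e^x$ on $[-1, 1]$ as an arbitrary example): the degree-2 Taylor polynomial is exact at $0$ but drifts badly at the endpoints, while the degree-2 Legendre projection spreads the error over the whole interval and ends up with a smaller worst-case error.

```python
import math

# Sketch: degree-2 Taylor (about 0) vs. degree-2 Legendre least-squares
# approximation of f(x) = e^x on [-1, 1] (f chosen arbitrarily).

def P(n, x):
    """Legendre polynomials P0, P1, P2 on [-1, 1]."""
    return [1.0, x, 0.5 * (3 * x * x - 1)][n]

f = math.exp

def legendre_coeff(n, samples=4001):
    """c_n = (2n+1)/2 * integral of f*P_n over [-1, 1] (trapezoid rule)."""
    xs = [-1 + 2 * i / (samples - 1) for i in range(samples)]
    ys = [f(x) * P(n, x) for x in xs]
    integral = sum(0.5 * (ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i])
                   for i in range(samples - 1))
    return (2 * n + 1) / 2 * integral

c = [legendre_coeff(n) for n in range(3)]

def legendre2(x):
    return sum(c[n] * P(n, x) for n in range(3))

def taylor2(x):
    return 1 + x + x * x / 2   # Taylor series of e^x at 0, degree 2

grid = [-1 + 2 * i / 400 for i in range(401)]
max_err_leg = max(abs(f(x) - legendre2(x)) for x in grid)
max_err_tay = max(abs(f(x) - taylor2(x)) for x in grid)
# Taylor is exact at 0 but worst at the endpoints; Legendre spreads the error
```

On this example the Taylor worst-case error is about $e - 2.5 \approx 0.22$ at $x = 1$, while the Legendre projection stays under about $0.1$ everywhere, exactly the local-versus-interval trade-off described in the answer.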
{ "domain": "dsp.stackexchange", "id": 2092, "tags": "image-processing" }
Question about oersted's experiment
Question: So I was studying my physics notes on the magnetic field produced by a straight current-carrying wire. To demonstrate it, there's Oersted's experiment. It goes like this: Insert a thick copper wire between 2 points, X and Y, in a circuit. The wire should be perpendicular to the plane of the paper. Place a compass horizontally next to the wire. Switch on the current. The compass needle shows a deflection. My question is: what does placing the wire XY perpendicular to the plane of the paper mean? And why would you do that? Please help. Answer: My question is - what does placing the wire XY perpendicular to the plane of paper mean? It means you poke the wire through the paper. If you were to shake some iron filings on to the paper they'd line up along the magnetic field lines like little compass needles. The magnetic field lines can be drawn like this: By the way, despite what you might be told by people who can't answer your questions, there are people who understand magnetism and how it works. Don't listen to people who tell you physics can't supply the answers. It can. That's why we do physics. We do physics to understand the world, not to make predictions. Don't forget that Maxwell wrote On Physical Lines of Force. The force is real, and so are those iron filings and the pretty patterns: Whilst there are no "lines of force" per se in the space around the wire, they do map out something real - a magnetic field. Electromagnetic field interactions result in linear and rotational forces. A uniform magnetic field is a place where the linear forces cancel, but the rotational forces do not. Hence when you throw an electron through the middle of a solenoid, it follows a helical path. Note though that a typical magnetic field is not uniform, and so we see linear forces too. Hence two wires like the above attract one another. But if you reverse the current in one of the wires, they repel.
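As a side note on magnitudes (the standard textbook formula, not derived in the answer): the field that deflects the compass circles the wire, with strength $B = \mu_0 I / (2\pi r)$ at distance $r$ from a long straight wire.

```python
import math

# Standard formula (not derived in the answer): magnitude of the magnetic
# field at distance r from a long straight wire carrying current I.

mu0 = 4 * math.pi * 1e-7        # vacuum permeability, T*m/A

def B_wire(I, r):
    return mu0 * I / (2 * math.pi * r)

# 1 A at 1 m gives exactly 2e-7 T -- far below Earth's ~5e-5 T field,
# which is why the compass has to sit close to the wire (or the current
# must be large) for the deflection to be visible.
```

This also explains the circular iron-filing patterns: the field has the same magnitude at a fixed $r$, all the way around the wire.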
{ "domain": "physics.stackexchange", "id": 34420, "tags": "electromagnetism" }
What is the scientific name of this lovely orange black flying beetle?
Question: Found on Rangoon creeper (Combretum indicum) Answer: This is a blister beetle from the species Mylabris pustulata. Here is another photo for comparison: Source: http://www.knowyourinsects.org/Coleoptera1.html And, just for fun, the specimen in my link and yours, side by side: It's worth mentioning that Mylabris has been confused with the Genus Hycleus, and the synonymy is complex.
{ "domain": "biology.stackexchange", "id": 7494, "tags": "species-identification, entomology" }
Order of operations when a real audio signal becomes complex
Question: I have a real audio signal $x[n]$, and I'd like to apply a frequency shift by $f$ and then envelope it by $\cos(\omega n + \phi)$. If I wanted to do the frequency shift in the ideal sense, from posts here like this one, I think I'd first construct an analytic signal using the Hilbert transform of $x$, $$x_\mathrm{a} \triangleq x[n] + j \hat{x}[n]$$ where $\hat{x}[n]$ is the Hilbert Transform of $x[n]$. $$ \hat{x}[n] \triangleq \mathscr{H}\Big\{ x[n] \Big\} = (h*x)[n] $$ and $$ h[n] = \begin{cases} \frac{\big(1 - (-1)^n\big)}{\pi n} \quad & n \ne 0 \\ \\ 0 & n = 0 \end{cases}$$ and then shift the frequency $$\tilde{x}[n] = e^{-2\pi j n \frac{f}{f_{s}}} \left(x[n] + j \hat{x}[n]\right)$$ Right so far? Now I have a complex audio signal, and so I'm not sure how to apply the envelope. An envelope on a real signal would just be $\cos(\omega n + \phi) x[n]$. But since it's now complex, do I take the real part of $\tilde{x}[n]$ and then scale by $\cos(\omega n + \phi)$? Or do I need to stay in the complex domain and make the envelope analytic before applying it, and then I take the real part? I'd guess the former. But if it's the latter, or they end up being the same, I would be confused why an analytic signal for the envelope $\exp(j \omega n)$ just resembles another frequency shift, when I'm trying to apply an envelope not a frequency shift. Answer: It kind of depends on why you're modulating with a cosine. What you're doing is the same shape as a frequency-shifter combined with a ring-modulator, so I'm going to write an answer assuming that's your goal. The short answer is: you still want the real-valued cosine, and it doesn't matter whether you do the cosine-modulation or the complex-sinusoid-modulation first. It does matter whether you modulate before or after the Hilbert filter, and that can affect what happens to low frequencies. 
Details Here's the spectrum of a real signal, which includes negative frequencies: An analytic/Hilbert filter (ideally infinite, approximated with finite FIR or IIR in practice) removes the negative frequencies: Modulating (multiplying) by a complex sinusoid shifts the spectrum upwards: I think the clearest way to think about this is with the convolution theorem: multiplying (modulating) in the time-domain convolves in the frequency-domain. A complex sinusoid's spectrum is an offset spike, so convolution just shifts everything to the right. The spectrum of the cosine signal is similar, but you have two spikes (after all, a cosine can be expressed as the sum of two complex exponentials): The result of that is two copies of the spectrum which get added together. This gives the signature sound of a ring-modulator: every frequency peak in the input becomes doubled. If you Hilbert-filtered this cosine, you'd lose that doubling and be left with a one-sided (Bode) frequency-shift. Order of operations It doesn't matter whether you modulate with the complex sinusoid or the cosine first. The end result is the same - you end up with two shifted copies of the spectrum (from the cosine) combined with a horizontal shift (from the frequency-shift): So actually, you could cut out the middle-man and (instead of using a cosine) modulate by these two complex sinusoids, and add together the results. This is equivalent, but you explicitly control the location of these spikes, instead of their centre & spacing. However, it does matter whether you do these modulations before or after the Hilbert. It also makes a difference whether you're shifting up or down, in terms of whether you get any reflections or negative frequencies in your result: Any negative frequencies here will be reflected back into positive frequencies when you take the real result. It's up to you to decide which of these look right. 
If you're using "sum of two complex-modulated copies" (instead of the cosine-modulator) then you could even make different choices for both sides, depending whether they're up or down. If you're modulating with a cosine and then a frequency-shift, there's an additional option of modulating right at the end (on the real output), or doing the cosine before the Hilbert and the complex-shift afterwards. The diagrams would get a bit cluttered so I've skipped them, but you should be able to figure out the results yourself.
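The whole chain — analytic signal via a one-sided spectrum, complex frequency shift, then taking the real part — can be sketched in a few lines (a toy DFT-based version on a short block, mine rather than the answer's, and not production code): shifting a cosine at bin $k$ by $\Delta$ bins and taking the real part lands exactly on a cosine at bin $k + \Delta$.

```python
import cmath, math

# Toy sketch of the chain discussed above (slow O(N^2) DFT, fine for N = 64):
# build the analytic signal by one-siding the spectrum, apply a complex
# frequency shift, and take the real part.

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * m * n / N) for n in range(N))
            for m in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[m] * cmath.exp(2j * math.pi * m * n / N) for m in range(N)) / N
            for n in range(N)]

def analytic(x):
    """Zero the negative-frequency bins and double the positive ones."""
    N = len(x)
    X = dft(x)
    Z = [0j] * N
    Z[0], Z[N // 2] = X[0], X[N // 2]     # DC and Nyquist bins stay as-is
    for m in range(1, N // 2):
        Z[m] = 2 * X[m]
    return idft(Z)

N, k, shift = 64, 8, 4
x = [math.cos(2 * math.pi * k * n / N) for n in range(N)]
z = analytic(x)                                   # ~ exp(2j*pi*k*n/N)
y = [(z[n] * cmath.exp(2j * math.pi * shift * n / N)).real for n in range(N)]
expected = [math.cos(2 * math.pi * (k + shift) * n / N) for n in range(N)]
```

A cosine-modulation (ring-mod) step would multiply `y` by a real cosine either before or after the shift; per the argument above, the order does not change the result.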
{ "domain": "dsp.stackexchange", "id": 11576, "tags": "audio-processing, hilbert-transform" }
The factor $3$ in the definition of the quadrupole moment tensor
Question: I can find two different ways of writing the quadrupole moment tensor $$Q = \int \mathrm{d}^3r \rho(r) \left(3 r\otimes r - |r|^2I\right)$$ or $$Q = \int \mathrm{d}^3r \rho(r) \left(r\otimes r - \frac{|r|^2}{3}I\right).$$ I am confused. Which one is it? Answer: There is no reason that physicists have to agree on how to normalize a quadrupole moment. Both conventions are in use, and equations involving $Q$ differ depending on the choice, so ultimately everything you actually measure, such as an electrostatic force, is the same regardless of the convention. In my experience, the second choice is in more common use. Sometimes $Q$ isn’t defined as trace-free, so the second term is missing. So that’s another choice. Sometimes people prefer to define a quadrupole moment $Q_{lm}$ based on spherical harmonics rather than a cartesian one. Yet another choice!
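The factor-of-3 difference is easy to see on a discrete example (a sketch of mine, not from the answer): for the classic linear quadrupole — charges $+q$ at $z = \pm d$ and $-2q$ at the origin — both conventions give a traceless tensor, and the first is exactly 3 times the second, component by component.

```python
# Sketch: both quadrupole conventions evaluated for point charges.
# Convention 1 carries the factor 3 inside, so Q1 = 3 * Q2 everywhere.

charges = [(+1.0, (0.0, 0.0, +1.0)),   # +q at z = +d (with q = d = 1)
           (+1.0, (0.0, 0.0, -1.0)),   # +q at z = -d
           (-2.0, (0.0, 0.0,  0.0))]   # -2q at the origin

def quad(charges, factor3):
    Q = [[0.0] * 3 for _ in range(3)]
    for q, r in charges:
        r2 = sum(c * c for c in r)
        for a in range(3):
            for b in range(3):
                delta = 1.0 if a == b else 0.0
                if factor3:
                    Q[a][b] += q * (3 * r[a] * r[b] - delta * r2)   # convention 1
                else:
                    Q[a][b] += q * (r[a] * r[b] - delta * r2 / 3)   # convention 2
    return Q

Q1 = quad(charges, True)
Q2 = quad(charges, False)
# Q1[2][2] == 4 (i.e. 4*q*d^2), trace(Q1) == 0, and Q1 == 3*Q2 everywhere
```

Any formula written for one convention (e.g. the quadrupole term of the potential) absorbs the factor 3 in its prefactor, which is why measurable quantities come out the same either way.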
{ "domain": "physics.stackexchange", "id": 74099, "tags": "electromagnetism, multipole-expansion" }
ROS2 parameters file and launch argument override
Question: I am using ROS2 foxy. I would like to launch a node with a set of default parameters in a .yaml file and add support for launch argument to override such parameters. Default parameters: /imu: ros__parameters: spi_dev: "/dev/spidev0.0" output_rate: 200.0 # Hz # more params here I tried the following but does not work, what am I missing? I fear that the parameters passed from launch arguments are not in the node namespace but global, thus are not picked up. import os from ament_index_python.packages import get_package_share_directory from launch import LaunchDescription from launch_ros.actions import Node from launch.actions import DeclareLaunchArgument from launch.substitutions import LaunchConfiguration package_name = 'my_sensors' param_name = 'my_default_params.yaml' def generate_launch_description(): rate_arg = DeclareLaunchArgument( name='rate', default_value="100", description="the sensor output data rate") device_arg = DeclareLaunchArgument( name='device', default_value="/dev/spidev0.0", description="the imu spi device") arguments = [ rate_arg, device_arg, ] default_parameters = os.path.join( get_package_share_directory(package_name), 'config', param_name ) override_parameters = { 'output_rate': LaunchConfiguration('rate'), 'spi_dev': LaunchConfiguration('device'), } imu = Node( package=package_name, executable='imu_node', name='imu', parameters=[default_parameters, override_parameters], output='screen', emulate_tty=True ) return LaunchDescription([ *arguments, imu ]) Thank you so much for any help. Answer: I solved the issue by declaring the node in the global namespace /: imu = Node( package=package_name, executable='imu_node', name='imu', namespace='/', # <<<<<<<<<<<<<<<< parameters=[default_parameters, override_parameters], output='screen', emulate_tty=True ) In this way, ros parameters are resolved correctly without any modification or namespace prefix on parameters declaration.
{ "domain": "robotics.stackexchange", "id": 38831, "tags": "ros2, foxy, launch" }
HHL algorithm, how to decide n qubits to prepare for expressing eigenvalue of A?
Question: I am trying to understand the HHL algorithm for solving linear systems of equations (Harrow, Hassidim, Lloyd; presented in arXiv:0811.3171 and explained on page 17 of arXiv:1804.03719). By reading some papers, I think I got rough idea but there are many things I still do not understand. Let me ask some. When applying Quantum Phase Estimation, in page 49 of the same article, it says "Prepare $n$ register qubits in $|0\rangle^{\bigotimes n}$ state with the vector $|b\rangle$", so that, by applying QPE to $|b\rangle |0\rangle^{\bigotimes n}$, we can get $\sum_j \beta_j |u_j\rangle |\lambda_j\rangle$. And $|\lambda_j\rangle$ is the $j^{th}$ eigenvalue of matrix $A$ and $0 < \lambda_j < 1$, and $\left|u_j\right>$ is the corresponding eigenvector. I also understand $|\lambda_j\rangle$ is the binary representation for fraction of $j^{th}$ eigenvalue of $A$. (i.e. $\left|01\right>$ for $\left|\lambda\right>=1/4$) My questions are, Q1: How to decide $n$, how many qubits to prepare? I assume it is related to the precision of expressing the eigenvalue, but not sure. Q2: What to do if $\lambda_j$ of $A$ is $≤ 0$ or $≥ 1$? Answer: The number $n$ decides the size of the register to be used for phase estimation, which in turn determines the accuracy. If you knew your eigenvalues (for a unitary) were a subset of the ${2^n}^{th}$ roots of unity, $e^{2\pi i m/2^n}$, then using $n$ bits is guaranteed to give you the exact answer. Assuming you don't have exactly this guarantee, then you can think of the phase estimation as returning the best $n$-bit approximation of those values, i.e. $\phi/(2\pi)$, where the eigenvalue is $e^{i\phi}$, would be approximated to within $1/2^{n+1}$ (with caveats about the probability of this happening, which we can lower-bound by $4/\pi^2$, and can improve further by using a slightly larger $n$). 
If memory serves, what you want to do to prepare $A$ is two things (I'm leaving out the connection between implementing a non-unitary $A$ and the unitaries required for phase estimation, since you didn't ask): Add enough $\mathbb{I}$ so that $A^{(1)}=A^{(0)}+B\mathbb{I}$ is non-negative (i.e. all $\lambda_i\geq 0$) Rescale to $A=\epsilon A^{(1)}$ so that the maximum eigenvalue is less than 1. These operations don't change the eigenvectors, and change the eigenvalues by known amounts that you can compensate for later, $\lambda_i=\epsilon(\lambda_i^{(0)}+B)$. You might worry that this rescaling requires you to know the very information that you're trying to calculate. However, it's easy to make at least some crude estimates of the limits of the spectrum via, for example, Gershgorin's Circle Theorem. Actually, you could probably get away with just a rescaling (and no $\mathbb{I}$) if you ensure all eigenvalues are in the range $-1/2$ to $1/2$, due to the periodicity of the Quantum Fourier Transform, but making use of it gives you the maximum opportunity to spread all the eigenvalues out as much as possible, and hence to get as accurate an estimate on them as possible.
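The two preprocessing steps can be sketched numerically (illustrative code; the safety margin `delta` is my arbitrary choice, not from the answer): bound the spectrum with Gershgorin's circle theorem, shift by $B\,\mathbb{I}$ so everything is positive, then scale by $\epsilon$ so everything sits below 1.

```python
# Sketch: shift and rescale a real symmetric A so its eigenvalues land
# strictly inside (0, 1), using Gershgorin bounds on the spectrum.
# delta is an arbitrary safety margin.

def gershgorin_bounds(A):
    n = len(A)
    lo = min(A[i][i] - sum(abs(A[i][j]) for j in range(n) if j != i) for i in range(n))
    hi = max(A[i][i] + sum(abs(A[i][j]) for j in range(n) if j != i) for i in range(n))
    return lo, hi

def shift_and_scale(A, delta=0.05):
    lo, hi = gershgorin_bounds(A)
    B = delta - lo                    # eigenvalues of A + B*I are >= delta
    eps = (1 - delta) / (hi + B)      # eigenvalues of eps*(A + B*I) are <= 1 - delta
    n = len(A)
    scaled = [[eps * (A[i][j] + (B if i == j else 0.0)) for j in range(n)]
              for i in range(n)]
    return scaled, B, eps

A = [[0.0, 2.0],
     [2.0, 1.0]]                      # eigenvalues (1 +/- sqrt(17))/2, one negative
A_scaled, B, eps = shift_and_scale(A)

# 2x2 eigenvalues of the rescaled matrix, via the characteristic polynomial
tr   = A_scaled[0][0] + A_scaled[1][1]
det  = A_scaled[0][0] * A_scaled[1][1] - A_scaled[0][1] * A_scaled[1][0]
disc = (tr * tr - 4 * det) ** 0.5
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2   # both now strictly inside (0, 1)
# the original eigenvalues are recovered afterwards as lam/eps - B
```

Since the transformation is affine with known $B$ and $\epsilon$, the HHL post-processing can undo it exactly: $\lambda^{(0)} = \lambda/\epsilon - B$.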
{ "domain": "quantumcomputing.stackexchange", "id": 143, "tags": "quantum-algorithms" }
Fokker-Planck: uniqueness and convergence to stationary distribution
Question: Consider the Langevin equation ($N$-dimensional) with a nonlinear drift term that is expressible as the gradient of a function $U(\vec{x})$. Namely, consider the stochastic process described by the set of equations: $\frac{\partial x_n}{\partial t} = -\frac{\partial}{\partial x_n} U(\vec{x}) + \sqrt{2c}\, \eta_n\,.$ The problem can be reformulated in terms of the probability distribution $P(\vec{x},t)$, through the following Fokker-Planck equation: $\frac{\partial P(\vec{x},t)}{\partial t} = \sum_{i=1}^N \frac{\partial}{\partial x_i} \bigg( \frac{\partial U(\vec{x})}{\partial x_i}\, P(\vec{x},t) + c\, \frac{\partial P(\vec{x},t)}{\partial x_i} \bigg)$ The equation above admits the following stationary solution: $P^s(\vec{x}) = \mathcal{N} e^{\frac{-U(\vec{x})}{c}}$ Is there a simple way to convince yourself that, in this case, any initial distribution always converges to the above $P^s(\vec{x})$? Answer: Okay, so to give some detail to Roger Vadim's comment (a sketch, for full detail cf. Risken... ;) ): $$ \partial_t P = L P = \sum_i \partial_i \left(\partial_i U \, P + c\, \partial_i P\right) $$ (I don't follow the convention of letting differential operators acting on everything to their right) has a negative semidefinite operator on the right-hand side. I.e. all eigenvalues of $L$ have non-positive real part. And the zero eigenvalue corresponds to the equilibrium/stationary solution. For a solution decomposed into eigenfunctions of $L$, $$ P = \sum_i c_i f_i\,, $$ with time-dependent coefficients $c_i$ and eigenvalues $\lambda_i$, i.e. $$ L f_i = \lambda_i f_i\,, $$ we find that $$ c_i\left(t\right) = c_{i,0} \text{e}^{\lambda_i t}\,. $$ Now, since all the other eigenvalues have negative real part, those exponentials decay at large times, and only the coefficient of the stationary/equilibrium solution survives, since it corresponds to the eigenvalue zero.
To see that eigenvalues of $L$ have negative (rather, non-positive) real part, we need the fact that $\Delta$ is a negative semidefinite operator. You see this easily through its Fourier transform $-k^2$. More generally, in case of a non-isotropic diffusion term we would instead need a positive (semi-) definite matrix to form the operator $\sum_{i,j}\partial_i \left(D_{ij} \partial_j P\right)$. Another nice bit of intuition - nothing directly to do with the above - is: If the drift term vanishes, you see easily that the solution approaches the constant equilibrium solution. The Laplace operator on the right sort of smoothens out bumps in the solution: If you have a maximum of $P$ somewhere rising above the stationary solution, because it is a maximum there is $\Delta P < 0$ at this point and thus $P$ at this point is decreasing in time.
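To complement the eigenfunction argument, a quick numerical check (my sketch, with the arbitrary choice $U(x) = x^2/2$ and $c = 0.5$, using the drift $-\partial U$ convention for which $e^{-U/c}$ is stationary): simulate the Langevin dynamics with Euler–Maruyama from a far-from-stationary start and watch the sample statistics relax to those of the Gaussian $N(0, c)$.

```python
import math, random

# Numerical check: Euler-Maruyama for dx = -U'(x) dt + sqrt(2c) dW with
# U = x^2/2, whose stationary density is N(0, c). All walkers start at
# x = 3, far from stationarity, and relax to mean ~0 and variance ~c.

random.seed(0)
c, dt, steps, n_walkers = 0.5, 0.01, 500, 5000
noise = math.sqrt(2 * c * dt)

xs = [3.0] * n_walkers
for _ in range(steps):
    xs = [x - x * dt + noise * random.gauss(0.0, 1.0) for x in xs]

mean = sum(xs) / n_walkers
var = sum((x - mean) ** 2 for x in xs) / n_walkers
# mean -> 0 and var -> c = 0.5, independent of the initial condition
```

Repeating with different starting points (or a different $U$ with a single well) gives the same endpoint, which is the convergence statement made above, seen at the level of sample paths instead of eigenfunctions.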
{ "domain": "physics.stackexchange", "id": 88825, "tags": "probability, stochastic-processes, asymptotics" }
C++ implementation of a Stack with dynamic C-style array
Question: I implemented a Stack in C++ using a dynamic C-style array. I tried sticking to the most important functions a Stack has to have to be usable. This is meant to only be for integers. I appreciate any suggestions on how to improve this. Here is my code: Stack.h #ifndef STACK_H #define STACK_H #include <cstdint> #include <algorithm> class Stack { public: Stack(int32_t u_size = 10) : m_current_ind{ -1 }, m_size{ 0 }, m_capacity{ u_size }{ // constructor m_stack_arr = new int32_t[u_size]; }; ~Stack() { // destructor delete[] m_stack_arr; }; Stack(const Stack& other) : m_current_ind{ -1 }, m_size{ 0 }, m_capacity{ 0 }{ // copy constructor for (int32_t i = 0; i < other.size(); i++) { push(other.m_stack_arr[i]); } } Stack(Stack&& other) : m_current_ind{ -1 }, m_size{ 0 }, m_capacity{ 0 }{ // move constructor m_stack_arr = std::exchange(other.m_stack_arr, nullptr); } Stack& operator=(const Stack& other) { // copy assignment operator if (this != &other) { clear(); for (int32_t i = 0; i < other.size(); i++) { push(other.m_stack_arr[i]); } } return *this; } Stack& operator=(Stack&& other) { // move assignment operator if (this != &other) { m_stack_arr = std::exchange(other.m_stack_arr, nullptr); m_size = other.m_size; m_capacity = other.m_capacity; m_current_ind = other.m_current_ind; } return *this; } void clear(int32_t new_capacity = 10); // clear stack and initialise new empty array bool is_empty() const; // check if stack is empty int32_t pop(); // delete value on top and return it int32_t peek() const; // return value on top of the stack void push(int32_t u_value); // add value to stack int32_t size() const; // return size private: void extend(); // double capacity int32_t m_current_ind; int32_t m_size; int32_t m_capacity; int32_t* m_stack_arr; }; #endif Stack.cpp #include "Stack.h" #include <stdexcept> void Stack::clear(int32_t new_capacity) { delete[] m_stack_arr; m_stack_arr = new int32_t[new_capacity]; m_capacity = new_capacity; m_current_ind = -1; m_size = 
0; } bool Stack::is_empty() const { return !m_size; } int32_t Stack::pop() { if (!is_empty()) { m_size--; return m_stack_arr[m_current_ind--]; } throw std::out_of_range("Stack is already empty."); } int32_t Stack::peek() const { if (!is_empty()) { return m_stack_arr[m_current_ind]; } throw std::out_of_range("Stack is empty."); } void Stack::push(int32_t u_value) { m_size++; m_current_ind++; if (m_size > m_capacity) { extend(); } m_stack_arr[m_current_ind] = u_value; } int32_t Stack::size() const { return m_size; } void Stack::extend() { int32_t* temp = new int32_t[m_capacity * 2]; m_capacity *= 2; std::copy(m_stack_arr, m_stack_arr + m_size, temp); delete[] m_stack_arr; m_stack_arr = temp; } ``` Answer: Prefer std::vector to dynamic allocation I understand this project was meant for practice, I have done a similar project, who wouldn't want to mess around with memory allocation, but doing so leads to errors when the programmer is not careful or chooses to ignore certain things. Using std::vector gives you the following for free Copy constructor Copy assignment Move constructor Move assignment This enables you to focus more on the business logic of the problem. Stack class does not cater for default constructed objects Right now, your class assumes that users do not have default constructible objects. This is okay because you are holding Integral values, but you might change it in the future. The following int *p = new int{} allocates and construct an int in memory, this is okay for int because it has a default constructor. Your class might hold objects that are very expensive to construct and as such, users might need to just have a stack that can hold their default constructible objects and construct them when they are needed. 
This give rise to placement new, see more about placement new here CODE REVIEW In Stack constructor, you declared the following Stack(int32_t u_size = 10) This is okay, you want a default stack to have 10 elements, A better approach is to declare a default stack to hold 0 elements, this is the behavior of std::vector. The stack can grow when necessary. If the user wants to explicitly request for space, declare a reserve method that allocates space. You might also consider a shrink_to_fit method, well am going overboard for a simple stack class. You forgot to allocate memory in your copy constructor. This is dangerous, you are reading into memory that was never allocated. The result is undefined, it might work today and crash tomorrow. A correct approach is this Stack(const Stack& other) : m_current_ind{ -1 }, m_size{ 0 }, m_capacity{ other.m_capacity } { // copy constructor m_stack_arr = new int32_t[other.m_size]; for (int32_t i = 0; i < other.size(); i++) { push(other.m_stack_arr[i]); } } Now this is better, we explicitly created m_stack_arr. Move constructor does not work. On testing your move constructor, I resulted to a segmentation fault. m_size was not given the appropriate size, the same case with m_capacity. using a swap method, your move constructor can be defined as follow void swap(Stack& lhs, Stack& rhs) { using std::swap; swap(lhs.m_current_ind, rhs.m_current_ind); swap(lhs.m_size, rhs.m_size); swap(lhs.m_capacity, rhs.m_capacity); swap(lhs.m_stack_arr, rhs.m_stack_arr); } This method just swap the internal representation of objects. Using this method, move constructor and move assignment becomes easy Stack(Stack&& other) noexcept { // move constructor swap(*this, other); other.m_stack_arr = nullptr; } We set the value of other.m_stack_arr to a nullptr to make it a cheap resource for the compiler to destroy. Move assigment is also similar, a self move is not bad and no need for a check here. 
Stack& operator=(Stack&& other) noexcept { swap(*this, other); other.m_current_ind = -1; other.m_size = 0; other.m_stack_arr = nullptr; return *this; } Both methods are marked noexcept because we are guaranteed that move assignment and move constructor would not throw an exception, this gives room for compiler to optimize some certain things. In clear method, you delete m_stack_arr before allocating a new one, what if new throws an exception, you have lost m_stack_arr. void Stack::clear(int32_t new_capacity) { int32_t* temp_arr; = new int32_t[new_capacity]; delete[] m_stack_arr; std::copy(m_stack_arr, m_stack_arr + m_size, temp_arr); //.. Your code goes in here } Here we created our memory and store it temp_arr, if that doesn't throw, we delete our old array and copy the new one. A more optimized version would be to use memcpy to copy the bytes from memory, but that is a different topic entirely.
{ "domain": "codereview.stackexchange", "id": 40184, "tags": "c++, reinventing-the-wheel, stack" }
Resultant velocity in rolling motion
Question: Why is the resultant velocity of a particle inside a body undergoing rolling without slipping always perpendicular to the line segment connecting it and the instantaneous axis of rotation? $P_2V_2$ is the resultant velocity. $P_2V_2=\omega R^2+V_{CM}$ $P_0$ is the instantaneous axis of rotation. Rephrased question: why is $P_0P_2$ always perpendicular to $P_2V_2$ for all $P_2$ situated inside the circle? Answer: If ${\bf v}$ is the velocity of some material point belonging to the disk, and ${\bf v}_0$ is the velocity of the point of contact, then, since they are material points belonging to the same rigid body, they obey ${\bf v} - {\bf v}_0 = \boldsymbol{\omega} \times ({\bf r} - {\bf r}_0)$, where ${\bf r}$ and ${\bf r}_0$ are position vectors to the respective points and $\boldsymbol{\omega}$ is the angular velocity of the rigid body. The no slip/penetration condition provides ${\bf v}_0 = {\bf 0}$. Therefore, the velocity of a material point is given by ${\bf v} = \boldsymbol{\omega} \times ({\bf r} - {\bf r}_0)$. Therefore, ${\bf v}$ is perpendicular to ${\bf r} - {\bf r}_0$ for all ${\bf r}$ belonging to the rigid body.
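The perpendicularity claim can be checked numerically. A hedged sketch (the rolling setup and numbers are my own, not from the answer): a disk of radius $R$ rolls without slipping toward $+x$, so the contact point $P_0$ is the instantaneous axis of rotation; for random material points, ${\bf v} = {\bf v}_{cm} + \boldsymbol{\omega} \times ({\bf r} - {\bf r}_{center})$ comes out perpendicular to the segment from $P_0$ to the point.

```python
import numpy as np

# Disk of radius R rolling without slipping toward +x; contact point at the
# origin is the instantaneous axis of rotation P0. For rolling toward +x the
# disk spins clockwise, omega = (0, 0, -v/R).
rng = np.random.default_rng(1)
R, v = 1.0, 2.0
v_cm = np.array([v, 0.0, 0.0])
omega = np.array([0.0, 0.0, -v / R])
center = np.array([0.0, R, 0.0])
contact = np.array([0.0, 0.0, 0.0])   # P0

worst = 0.0
for _ in range(1000):
    # random material point inside the disk (in the z = 0 plane)
    rad = R * np.sqrt(rng.random())
    ang = 2 * np.pi * rng.random()
    r = center + np.array([rad * np.cos(ang), rad * np.sin(ang), 0.0])
    vel = v_cm + np.cross(omega, r - center)
    worst = max(worst, abs(np.dot(vel, r - contact)))

print(worst)  # effectively zero: perpendicular for every sampled point
```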
{ "domain": "physics.stackexchange", "id": 83218, "tags": "newtonian-mechanics, kinematics, rotational-kinematics" }
If a language is not Turing reducible to two languages, may it still be Turing reducible to their "union"?
Question: Consider a language $L$ that is undecidable relative to $L_1$ and is also undecidable relative to $L_2$. Suppose, however, that there is a "multi"-oracle Turing machine $M$ that can query both the $L_1$ oracle as well as the $L_2$ oracle such that $M$ decides $L$. In other words, $L$ requires both the $L_1$ and $L_2$ oracles to be decided, but neither one alone suffices to decide $L$. I would like to come up with such languages $L$, $L_1$, and $L_2$. At first, I was hoping that the oracles for the rejecting and the accepting problems for Turing machines together could be queried to decide the halting problem (but neither oracle alone could be used decide it), but that turns out to be a dead end. Now I'm wondering if I can approach the problem by trying to come up with two languages such that neither is Turing-reducible to the other, but I'm struggling to find examples. Answer: A classical theorem of Sacks states that if $L$ is not computable, then it is almost surely not computable relative to a random oracle. In other words, if $O$ is a random oracle, then the probability that $L$ is computable given $O$ is zero. Now take your favorite uncomputable $L$, and choose $L_1,L_2$ at random among all languages such that $L = L_1 \Delta L_2$ (here $\Delta$ is symmetric difference). Individually, $L_1,L_2$ are random oracles, and so almost surely $L$ is not computable given just $L_1$ or just $L_2$. However, it is clearly computable given both. Another solution is to take any two incomparable Turing degrees $L_1,L_2$ and $L = \{ 0x : x \in L_1 \} \cup \{ 1x : x \in L_2 \}$. The proof then follows practically by definition.
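The symmetric-difference construction can be toy-illustrated with finite sets standing in for languages (the sets and numbers are mine; real languages are infinite, and this only shows the "both oracles together suffice" direction, not that each oracle alone fails — that part is the random-oracle argument).

```python
# Membership in L = L1 (symmetric difference) L2 is decided with one query
# to each "oracle" and a XOR of the answers.
L1 = {1, 2, 5, 8}
L2 = {2, 3, 5, 9}
L = L1 ^ L2                      # symmetric difference

def decide_L(x, oracle1, oracle2):
    """x is in L iff exactly one of the two oracles says yes."""
    return oracle1(x) != oracle2(x)

ok = all(decide_L(x, L1.__contains__, L2.__contains__) == (x in L)
         for x in range(12))
print(sorted(L), ok)  # [1, 3, 8, 9] True
```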
{ "domain": "cs.stackexchange", "id": 8476, "tags": "turing-machines, undecidability, oracle-machines" }
Hide / Show panels depending on the value of a dropdown
Question: According to the value of a dropdown menu I have to show/hide two panels. For the 'derive' and 'reject' options I should show the comment panel, and just for the 'derive' option I should also show a derivation panel. There are other options for which neither panel should be shown. This works, but how could I refactor the code to make it shorter and more efficient without losing clarity? $("#formTabs").on( "change", "select.actions",function(){ var parentContainer = $(this).closest("fieldset"); if(this.value == "reject" || this.value == "derive"){ parentContainer.find(".commentPanel").show(); } // other 3 options where the panel should not show up. else { parentContainer.find(".commentPanel").hide(); } // Just for the derive option, a derivation panel must be shown. if(this.value == "derive"){ parentContainer.find(".derivationPanel").show(); } else { parentContainer.find(".derivationPanel").hide(); } }); Answer: The only "efficiency" improvement I see is caching the result of the this.value == "derive" check to avoid evaluating the string comparison twice. The effect of this is close to nothing; on the other hand, it's good to reduce the number of duplicated literals. ".commentPanel" and ".derivationPanel" are also duplicated literals, so it would be good to eliminate those too. Lastly, I would recommend some minor style improvements: Don't put a space between ( "change" Put a space after the comma in "select.actions",function Put a space between ){ in function(){, everywhere Putting it together: $("#formTabs").on("change", "select.actions", function() { var parentContainer = $(this).closest("fieldset"); var commentPanel = parentContainer.find(".commentPanel"); var derivationPanel = parentContainer.find(".derivationPanel"); var isDerive = this.value == "derive"; if (this.value == "reject" || isDerive) { commentPanel.show(); } else { // other 3 options where the panel should not show up. commentPanel.hide(); } // Just for the derive option, a derivation panel must be shown. 
if (isDerive) { derivationPanel.show(); } else { derivationPanel.hide(); } }); Note that by extracting the commentPanel, derivationPanel, isDerive variables you don't lose any efficiency at all, as all the right-hand sides would be evaluated in the code no matter what. This rewritten version is better, because of the string literals now appear only once, which eliminates typos, and if you need to change something, you can do it at a single place instead of two as in the original.
{ "domain": "codereview.stackexchange", "id": 10318, "tags": "javascript, optimization, jquery" }
An implementation of maximum subarray finding algorithm in STL style
Question: I am reading Stroustrup now and I was very impressed with how flexible the STL library is thanks to iterators and generic programming. Along the way, I am also reading Cormen's book and decided to try to implement an algorithm for searching for the maximum subarray in the style of STL. I wrote in C++11. I would be very grateful to hear your opinion about this code. template<typename For> // Requires Forward_iterator<For>() auto find_maximum_subarray (For begin, For end) { typename std::iterator_traits<For>::value_type max_sum_subarr = *begin; typename std::iterator_traits<For>::value_type max_sum = *begin; For subarr_begin = begin; For left = begin; For right = ++begin; for (; begin != end; ++begin) { if (max_sum_subarr > 0) max_sum_subarr += *begin; else { max_sum_subarr = *begin; subarr_begin = begin; } if (max_sum_subarr > max_sum) { left = subarr_begin; right = begin; ++right; max_sum = max_sum_subarr; } } return make_tuple (left, right, max_sum); } Answer: Do not rely on the client code to #include necessary headers. The client has no idea which headers your file requires. Spell them out explicity: #include <iterator> #include <tuple> Are you sure you are using c++11? I am getting error: 'auto' return without trailing return type; deduced return types are a C++14 extension In general, naked auto returns in an interface is a dubious idea. Again, think of the client. The client should not analyze your template to deduce what it actually returns. It is OK to have them in the helper functions not exposed to the client. It took me a while to figure out why your code works correctly. The side effect of ++ in For right = ++begin; is very easy to miss.
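A hedged Python cross-check of the additive case (my own reference implementation, not part of the review): Kadane's algorithm returning the same half-open range [left, right) and maximum sum that the C++ template computes with add.

```python
# Kadane's algorithm: O(n) maximum subarray, returning (left, right, max_sum)
# with the subarray given as the half-open index range [left, right).
def max_subarray(xs):
    best_sum = cur_sum = xs[0]
    best = (0, 1)
    start = 0
    for i in range(1, len(xs)):
        if cur_sum > 0:
            cur_sum += xs[i]      # extend the current run
        else:
            cur_sum, start = xs[i], i   # restart the run at i
        if cur_sum > best_sum:
            best_sum, best = cur_sum, (start, i + 1)
    left, right = best
    return left, right, best_sum

xs = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
print(max_subarray(xs))  # (3, 7, 6): the subarray [4, -1, 2, 1]
```

Like the C++ version, an all-negative input yields the single largest element rather than an empty range.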
{ "domain": "codereview.stackexchange", "id": 38074, "tags": "c++, algorithm, c++11, stl" }
Am I correctly showing a hydride shift on the alkyl halide?
Question: I did this quiz and I forgot the hydride shift. With the hydride shift, would these be the right answers? Answer: When you solvolyze the starting bromide a secondary carbocation would be generated. Since a secondary carbocation is not particularly stable, the molecule will explore other pathways such as hydrogen shift, alkyl shift, etc., in order to produce a more stable carbocation. The figure below explores these options for your starting bromide. Realize that although I've drawn a discrete secondary carbocation that then undergoes rearrangement, in reality cation generation and alkyl or hydrogen shift may occur in a more or less concerted fashion. In this case there are 3 options (actually there are 4 options, but the fourth one would produce a primary carbocation, so I left it out - can you find it?), a, b and c. Pathways a and c produce secondary carbocations; however pathway b, involving hydrogen shift, produces a more stable tertiary carbocation. Capture of the tertiary carbocation from pathway b by solvent should produce the expected product.
{ "domain": "chemistry.stackexchange", "id": 2151, "tags": "organic-chemistry, reaction-mechanism" }
Issues in RSA setup
Question: Suppose we have public key: $$n = 1015, \quad e = 3$$ and private key: $$d = 635, \quad p = 35, \quad q = 29, \quad \phi(n) = 952$$ For $m = 100$, we have $$c = m^e \bmod n = 100^3 \bmod 1015 = 225.$$ To decipher this, let us take $$c^d \bmod n$$ which is $$225^{635} \bmod 1015$$ which equals $$680$$ But $680 \neq 100$, so this means that RSA incorrectly decrypted it, right? Why does this happen? Answer: Your public key is not a legal RSA public key. In RSA, $n$ must be a product of two primes, but 35 is not a prime. Therefore, things don't work right: for instance, you got the wrong value of $\phi(n)$.
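A hedged demonstration built around the question's numbers. The key setup is broken twice over: 35 is not prime ($1015 = 5 \cdot 7 \cdot 29$), and even with the true factorization, $e = 3$ is not invertible modulo the correct $\phi$. The repaired key pair (p = 29, q = 31, e = 11) is my own toy example, not from the post.

```python
from math import gcd

n, e, m = 1015, 3, 100
c = pow(m, e, n)
print(c, pow(c, 635, n))            # 225 680 -> decryption fails, as observed

# phi computed over the actual prime factors 5, 7, 29 of n:
phi_true = (5 - 1) * (7 - 1) * (29 - 1)   # 672
print(gcd(e, phi_true))             # 3: e = 3 has no inverse mod 672 at all

# A valid toy key: two actual primes and an e coprime to phi.
p, q, e2 = 29, 31, 11
n2, phi2 = p * q, (p - 1) * (q - 1)
d2 = pow(e2, -1, phi2)              # modular inverse (Python 3.8+)
c2 = pow(m, e2, n2)
print(pow(c2, d2, n2))              # 100: the round-trip now works
```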
{ "domain": "cs.stackexchange", "id": 5688, "tags": "cryptography, encryption, modular-arithmetic" }
How to test a ros package
Question: My goal is to check if a ROS package has been installed from source correctly and can work without any error. So I'm looking for a common way to do such a test. I've downloaded the source code of the ROS Base and some other ROS packages from github, and I can see that there is a directory named test in most packages, such as src/geometry/tf/test/, src/actionlib/test/. But I don't know how to use these test directories. So is there some common way to run the tests for each package? If there is not, does it mean that I have to read all of the source code and write some test cases by myself to test ROS packages? Originally posted by bear234 on ROS Answers with karma: 71 on 2018-04-16 Post score: 0 Answer: You can run all tests by calling catkin_make run_tests. See the catkin documentation for more info. Obviously, this just runs all defined/implemented tests and cannot guarantee that it "can work without any error" (though you should be pretty safe with the core packages). Note that you need to call catkin_test_results as specified on the link above to actually get the results of all tests that have run. Originally posted by mgruhler with karma: 12390 on 2018-04-16 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 30642, "tags": "ros, rostest" }
shock-resistant electromechanical linear actuator?
Question: I was wondering if there is a shock-load-resistant electromechanical actuator that can withstand powerful impacts? Is there an electromechanical counterpart for pneumatic cylinders that is as shock-resistant as hydraulic or pneumatic actuators? And if not, is there some kind of way to make them withstand impacts via external parts?
{ "domain": "engineering.stackexchange", "id": 2406, "tags": "robotics, actuator, linear-motors" }
Without the Michelson-Morley experiment, is there any other reason to think speed of light is the universal speed limit?
Question: If the Michelson-Morley experiment hadn't been conducted, are there any other reasons to think, from the experimental evidence available at that time, that Einstein could think of the Special Theory of Relativity? Is there any other way to think why the speed of light is the ultimate speed limit? Answer: A lot of people find it somewhat surprising, but Einstein's initial formulation of special relativity was in a paper, On the electrodynamics of moving bodies, that makes very little reference to the Michelson-Morley result; instead, it is largely based on the symmetry of electromagnetic analyses in different frames of reference. From a more modern perspective, there is a strong theoretical case to be made that special relativity is, at the very least, a strong contender for the description of reality. These are beautifully summed up in Nothing but Relativity (doi), but the argument is that under some rather weak assumptions, which are essentially the homogeneity and isotropy of space, and the homogeneity of time, plus some weak linearity assumptions, you are essentially reduced to either galilean relativity, or special relativity with some (as yet undetermined) universal speed limit $c$, with no other options. To get to reality, you need to supplement this theoretical framework with experiment - there's no other way around it. The Michelson-Morley experiment is, of course, the simplest piece of evidence to put in that slot, but in the intervening century we have made plenty of other experiments that fit the bill. From a purely mechanical perspective, the LHC routinely produces $7\:\mathrm{TeV}$ protons, which would be moving at about $120c$ in Newtonian mechanics: it is very clear that $c$ is a universal speed limit, because we try to accelerate things faster and faster, but (regardless of how much kinetic energy they hold) they never go past $c$. 
If you want something from further back, this is precisely the reason we developed the isochronous cyclotron in the late 1930s and then switched to synchrotrons back in the 1950s - cyclotrons require particles to keep in sync with the driving voltage, but if they approach the speed of light they can no longer go fast enough to keep up. We have upwards of eighty years of history of being able to mechanically push things to relativistic regimes. If you wish for an answer inscribed within "experimental physics as of 1888, minus the Michelson-Morley result" then, as I said, the symmetry properties of electromagnetism (which are directly compatible with SR as derived from $v\ll c$ experiments, but require aether theories to make sense in galilean relativity) were plenty to convince Einstein that SR was the right choice. Edit: As pointed out in a comment, Einstein's original paper does make some reference to Michelson-Morley(-type) experiments, in his second paragraph: Examples [like the reciprocal electrodynamic action of a magnet and a conductor], together with the unsuccessful attempts to discover any motion of the earth relatively to the “light medium,” suggest that the phenomena of electrodynamics as well as of mechanics possess no properties corresponding to the idea of absolute rest. However, apart from this small nod, he makes no substantive references to the aether or its equivalents: the paper starts with the relativity postulates (based on the constancy of the speed of light), uses those to construct special relativity (as pertains transformations between moving frames, and so on), and then builds his case for it on the transformation properties of the equations of electromagnetism: these provide the deeper fundamental insight that underlies the symmetry of analysis of electromagnetic situations performed on different moving frames of reference.
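A hedged back-of-the-envelope check of the $\sim 120c$ figure (the constants are my own, rounded): a proton with 7 TeV of kinetic energy would move at over $100c$ if $E_k = \frac{1}{2}mv^2$ held, while relativistically its speed stays just below $c$.

```python
import math

m_p_c2 = 938.272e6   # proton rest energy in eV
E_k = 7e12           # LHC-scale kinetic energy in eV

# Newtonian: E_k = (1/2) m v^2  ->  v/c = sqrt(2 E_k / (m c^2))
v_newton = math.sqrt(2 * E_k / m_p_c2)

# Relativistic: total energy = gamma m c^2 = E_k + m c^2
gamma = 1 + E_k / m_p_c2
v_rel = math.sqrt(1 - 1 / gamma**2)   # v/c, always below 1

print(v_newton, v_rel)  # roughly 122 vs. just under 1
```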
{ "domain": "physics.stackexchange", "id": 36219, "tags": "electromagnetism, special-relativity, speed-of-light, inertial-frames, lorentz-symmetry" }
If a connected graph has a bridge then it has a cut vertex
Question: Is it true that if a connected graph has a bridge then it has a cut vertex? In my point of view, I don't think it is true that a graph having a cut edge will definitely have a cut vertex. Consider a graph with a single edge $uv$; if we remove this edge, the graph will get disconnected, but if we remove $u$, the graph will remain connected, as a graph with a single vertex is still considered connected. Answer: An edge $e=uv$ is a bridge in $G$ if there is no path between $u$ and $v$ in $G-e$. In general, $u$ and $v$ are cut vertices, but there are some special cases you must treat with care. (1) If $G \simeq K_2$, then depending on the definition of connectivity, $G-u \simeq K_1$ might or might not be considered connected*. (2) There is another case in which you would not consider $u$ to be a cut vertex; An edge $uv$ is called a pendant edge if $\deg(u) = 1$. In this case, if $uv$ is a pendant edge and $\deg(v)>1$, you would not typically call $u$ a cut vertex. Postlude All that being said, definitions in graphs are hard (case in point: Diestel). When we try to make definitions very general, special cases sometimes become absurd. Note 1 * To address David Richerby's objection, $K_1$ is considered disconnected when deriving connectivity from k-connectivity, in which you require the graph to have more than $k$ vertices. By that definition ("a 1-connected graph is called connected"), $K_1$ is disconnected by having too few vertices. 2 To answer Thinker's follow-up question; since you wanted to know what to answer to a hypothetical exam question, I would do it this way: Let $e = uv$ be a bridge. We have three cases: $\deg(u), \deg(v) > 1$. The statement is true. $\deg(u) = 1, \deg(v) > 1$. The statement is true, $e$ is a pendant. $\deg(u) = \deg(v) = 1$. The statement is true if and only if $K_1$ is considered disconnected.
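A hedged pure-Python illustration (the graph and helper functions are my own): in the path a - b - c every edge is a bridge. Removing the interior vertex b disconnects the graph (b is a cut vertex), while removing the degree-1 endpoint a — the pendant case discussed above — does not.

```python
def connected(vertices, edges):
    """Graph-search connectivity test; a graph with <= 1 vertex counts as connected."""
    vs = list(vertices)
    if len(vs) <= 1:
        return True
    adj = {v: set() for v in vs}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    seen, stack = {vs[0]}, [vs[0]]
    while stack:
        for nb in adj[stack.pop()] - seen:
            seen.add(nb)
            stack.append(nb)
    return len(seen) == len(vs)

def remove_vertex(vertices, edges, v):
    """Removing a vertex also removes its incident edges."""
    return vertices - {v}, {e for e in edges if v not in e}

V = {"a", "b", "c"}
E = {("a", "b"), ("b", "c")}

assert not connected(V, E - {("a", "b")})        # the edge ab is a bridge
assert not connected(*remove_vertex(V, E, "b"))  # b is a cut vertex
assert connected(*remove_vertex(V, E, "a"))      # pendant endpoint: not a cut vertex
print("pendant endpoints need not be cut vertices")
```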
{ "domain": "cs.stackexchange", "id": 11563, "tags": "graphs" }
Relationship between magnitudes of forward and reverse kinetic rate constants
Question: Consider the reversible unimolecular reaction: $$\ce{A <=>[k_1][k_2] B}$$ We know that the forward reaction is often considerably more thermodynamically favourable than the reverse reaction, and therefore the relationship $k_1 \gg k_2$ holds between the rate constants. The rate constants are in the same units, and so it is possible to write this relationship. A similar example would be a reaction that is bimolecular on both sides, $\ce{A + B <=> C + D}$, etc. However, consider this reversible reaction: $$\ce{A + B <=>[k_1][k_2] C}$$ Assume the reaction proceeds at rate $k_1 [\ce{A}][\ce{B}]$ in the forward direction and $k_2 [\ce{C}]$ in reverse. If the overall reaction rate has units $\mathrm{M\,s^{-1}}$, then $k_1$ necessarily has units $\mathrm{M^{-1}\,s^{-1}}$ and $k_2$ has units $\mathrm{s^{-1}}$. My question is: For this second case, we can no longer impose that $k_1 \gg k_2$, due to the difference in units, but can we say anything about their relationship? Does setting $k_1$ to some value constrain the choice of $k_2$ in any way? Answer: A chemical reaction may be a very complicated thing, involving many different paths and elementary reactions. Basically it always leads to the formation of an equilibrium, where the lowest possible energy will be the most favoured. Under certain conditions more than one state will be populated and hence there will be enough potential energy to interconvert molecules in certain states. We usually refer to that as a dynamic equilibrium. Changes under equilibrium conditions will most likely be elementary reactions (or an infinitely complicated chain of these). The population of the various states seems to be static at equilibrium, since the overall change of free enthalpy is zero. In a dynamic picture that only means that the forward and backward reactions cancel each other. In that sense they are the same reaction in different directions. Therefore they can be described as one trajectory on the potential energy surface. 
(If there is a chemical reaction involving many different elementary reactions and the equilibrium still occurs, then the mathematical description will be much more complicated as all elementary reactions will be correlated. However, the main conclusions should stay the same.) \begin{aligned} \Delta G_r &= \Delta G_r^\circ + \mathcal{R}T\cdot \ln K = 0\\ \Delta G_r^\circ &= - \mathcal{R}T \cdot \ln K \end{aligned} The equilibrium constant may be derived from the mass-action law and should be defined as a product of activities (for the standard state). For $\ce{A + B <=> C}$ this will lead to $$K^\circ= \frac{a(\ce{C})}{a(\ce{A})\cdot a(\ce{B})}$$ The activities are proportional to concentrations (in first approximation) and are unitless ($c^\circ$ being the standard concentration $1 \:\mathrm{mol/L}$) $$a=\gamma\frac{c}{c^{\circ}}$$ For reasonable dilutions ($c\to0\:\mathrm{mol/L}$) one can assume that the activity coefficient becomes one, $\gamma\approx1$, and therefore rewrite the equilibrium constant with concentrations. $$K^\circ= \frac{c(\ce{C})/c^\circ}{c(\ce{A})\cdot c(\ce{B})/(c^\circ)^2}$$ At equilibrium the forward and backward reaction are coupled and therefore have to reflect the equilibrium constant. $$K^\circ= \frac{k_f}{k_b} = \frac{c_{\text{eq}}(\ce{C})/c^\circ}{c_{\text{eq}}(\ce{A})\cdot c_{\text{eq}}(\ce{B})/(c^\circ)^2}$$ As one can see, the constants also need to have the same unit. As they are at equilibrium, they are the same reaction with different populated states. If you are moving away from the equilibrium, things will start to get a little bit more messy. One reaction will be faster than the other and the rates will not be comparable any more.
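A hedged numerical sketch of the coupling (the rate constants and starting concentrations are my own picks, not from the answer): integrating the rate equations for $\ce{A + B <=> C}$ to equilibrium, the concentration ratio $[\ce{C}]/([\ce{A}][\ce{B}])$ lands on $K = k_1/k_2$. Since $K$ carries units of $\mathrm{M^{-1}}$ here, fixing $k_1$ constrains only the ratio $k_1/k_2$ through $K$, not the individual magnitudes.

```python
# Forward-Euler integration of d[C]/dt = k1[A][B] - k2[C] (and the mirrored
# equations for A and B) until the net rate vanishes.
k1, k2 = 2.0, 0.5            # units 1/(M s) and 1/s  ->  K = k1/k2 = 4.0 1/M
A, B, C = 1.0, 0.8, 0.0      # initial concentrations in M
dt, steps = 1e-4, 200_000    # integrate to t = 20 s

for _ in range(steps):
    r = k1 * A * B - k2 * C  # net forward rate in M/s
    A -= r * dt
    B -= r * dt
    C += r * dt

print(C / (A * B))           # converges to k1/k2 = 4.0
```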
{ "domain": "chemistry.stackexchange", "id": 1292, "tags": "physical-chemistry, kinetics, theoretical-chemistry" }
Why do mainstream speech models no longer require a personalized training step?
Question: Back in the Windows XP era, when setting up the Windows OS-built-in speech/dictation, I had to speak out a bunch of programmed-in text samples to the speech-to-text engine to personalize my voice profile. Today, with networked speech-to-text engines like Siri or Cortana, I can just start dictating. The quality of the speech-to-text conversion seems equivalent, though my memory may be faulty on that aspect. Have speech models advanced past the need for any personalization of the training data? Or, do they just do the personalization under the covers now, without an explicit training wizard? Or, do they not do training, even though it would still be beneficial (e.g. because it's inconvenient)? Answer: Have speech models advanced past the need for any personalization of the training data? There were two aspects which improved accuracy significantly: Deep learning and neural networks greatly improved the accuracy. The amount of training data that major companies use has grown over the years by orders of magnitude. Companies have collected so much data that the effect of adaptation has decreased. Or, do they just do the personalization under the covers now, without an explicit training wizard? There is a small adaptation usually going on, but it is very marginal in effect. It basically matches your voice against some baseline voices, produces a vector of similarities, and then this vector is used in real time to adjust the neural network input (so-called i-vector adaptation). This kind of adaptation is pretty fast; you can adapt from 2-3 seconds of speech. For technical details you can read https://www.microsoft.com/en-us/research/uploads/prod/2018/04/ICASSP2018_CortanaAdapt.pdf Or, do they not do training, even though it would still be beneficial (e.g. because it's inconvenient)? There are some cases where adaptation would be beneficial, but again there are multiple aspects here: It works well without adaptation. Neural network recognition does not actually fit well with adaptation. 
You need many GPU nodes to train a big neural network, and it is very hard to adjust it afterwards. You can adjust a small layer with adaptation data, but the effect is usually small just because a neural network is a pretty tightly coupled thing and you can't simply modify one bit of it without retraining. Like I said above, the amount of training data is so huge that your custom data is probably already represented in the training set, so adaptation will not help much. Adaptation can also harm. Imagine your speech had an unusual crack or a beep from the background or something like music, and the system adapted to it. Then it will actually decode your normal clean speech with less accuracy than an unadapted system. Adaptation is not very convenient for users. Why do you need to adapt when you can simply start using the system? So system design moved to "it just works", and that is a good direction.
{ "domain": "cs.stackexchange", "id": 12974, "tags": "algorithms, machine-learning, speech-recognition" }
Quaternion of a 3D vector
Question: My drone is to reach position x,y, z and orient itself along vector3D (ax, by, cz) wrt origin. In order for me to visualize this on rviz I must represent them in quaternion form. I understand that quaternions and vectors don't represent the same thing, but if I take my arbitrary axis as x-axis (vector [1,0,0]) and calculate quaternions between the two vectors, I get this result: As you can see here, the poses aren't all normal to the faces of the cube (my vectors are!) on rviz. What must I do to simply get the vector representation in quaternion form to visualize them? Originally posted by swethmandava on ROS Answers with karma: 102 on 2016-03-13 Post score: 0 Answer: def pose_from_vector3D(waypoint): #http://lolengine.net/blog/2013/09/18/beautiful-maths-quaternion-from-vectors pose= Pose() pose.position.x = waypoint[0] pose.position.y = waypoint[1] pose.position.z = waypoint[2] #calculating the half-way vector. u = [1,0,0] norm = linalg.norm(waypoint[3:]) v = asarray(waypoint[3:])/norm if (array_equal(u, v)): pose.orientation.w = 1 pose.orientation.x = 0 pose.orientation.y = 0 pose.orientation.z = 0 elif (array_equal(u, negative(v))): pose.orientation.w = 0 pose.orientation.x = 0 pose.orientation.y = 0 pose.orientation.z = 1 else: half = [u[0]+v[0], u[1]+v[1], u[2]+v[2]] pose.orientation.w = dot(u, half) temp = cross(u, half) pose.orientation.x = temp[0] pose.orientation.y = temp[1] pose.orientation.z = temp[2] norm = math.sqrt(pose.orientation.x*pose.orientation.x + pose.orientation.y*pose.orientation.y + pose.orientation.z*pose.orientation.z + pose.orientation.w*pose.orientation.w) if norm == 0: norm = 1 pose.orientation.x /= norm pose.orientation.y /= norm pose.orientation.z /= norm pose.orientation.w /= norm return pose I realized my mistake, I didn't normalize the quaternion. This code works! Originally posted by swethmandava with karma: 102 on 2016-04-13 This answer was ACCEPTED on the original site Post score: 0
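A hedged, self-contained rewrite of the same half-way-vector idea with explicit imports and a rotation check (the function names and test vectors are mine, and the ROS Pose packaging is dropped so the math stands alone): the resulting quaternion, applied to [1, 0, 0], must return the normalized target direction.

```python
import numpy as np

def quat_from_direction(v):
    """Quaternion (x, y, z, w) rotating [1, 0, 0] onto the direction of v,
    via the half-way vector h = u + v_hat: q ~ (u x h, u . h), normalized."""
    u = np.array([1.0, 0.0, 0.0])
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    if np.allclose(v, u):
        return np.array([0.0, 0.0, 0.0, 1.0])   # identity rotation
    if np.allclose(v, -u):
        return np.array([0.0, 0.0, 1.0, 0.0])   # 180 degrees about z
    half = u + v
    q = np.concatenate([np.cross(u, half), [np.dot(u, half)]])
    return q / np.linalg.norm(q)

def rotate(q, p):
    """Rotate vector p by unit quaternion q = (x, y, z, w)."""
    qv, w = np.asarray(q[:3]), q[3]
    return p + 2 * np.cross(qv, np.cross(qv, p) + w * p)

target = np.array([0.0, 0.0, 1.0])
q = quat_from_direction(target)
print(rotate(q, np.array([1.0, 0.0, 0.0])))  # approximately [0, 0, 1]
```

The missing normalization mentioned in the answer is exactly the final division by the quaternion norm: without it, markers in rviz get an invalid (non-unit) orientation.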
{ "domain": "robotics.stackexchange", "id": 24091, "tags": "ros, rviz, quaternion, markers" }
Rob Pike's Golang presentation exercise of day 1
Question: I'm following this presentation. At the very end there's an exercise about solving Fibonacci that says that, instead of addition, you should make the operation settable by a function. Is the following a good solution? How close might it be to what Rob Pike would have written?

package main

import "fmt"

func add(a, b int) int { return a + b }

func subtract(a, b int) int { return a - b }

func power(a, b int) int { return a ^ b }

// fib returns a function that returns
// successive Fibonacci numbers.
func fib(op func(int, int) int) func() int {
    n0, n1 := 0, 1
    return func() int {
        n0, n1 = n1, op(n0, n1)
        return n0
    }
}

func main() {
    f := fib(add)
    // Function calls are evaluated left-to-right.
    fmt.Println(f(), f(), f(), f(), f(), f(), f())
    f = fib(subtract)
    // Function calls are evaluated left-to-right.
    fmt.Println(f(), f(), f(), f(), f(), f(), f())
    f = fib(power)
    // Function calls are evaluated left-to-right.
    fmt.Println(f(), f(), f(), f(), f(), f(), f())
}

Answer: This seems like a good solution except for one thing:

func power(a, b int) int { return a ^ b }

The ^ operator doesn't do what you think it does. It's a bitwise xor operator. There is no power operator in Go; there's only math.Pow for float64s. On a side note, I personally would create a type for operators:

type Op func(int, int) int

It's easier to type in function definitions and allows you to extend it with methods like String().
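The same closure pattern translates almost line-for-line to Python; this sketch (not part of the original exercise) makes it easy to check what each operation produces:

```python
def fib(op):
    """Closure-based Fibonacci with a pluggable binary operation,
    mirroring the Go version above."""
    n0, n1 = 0, 1
    def step():
        nonlocal n0, n1
        n0, n1 = n1, op(n0, n1)
        return n0
    return step

f = fib(lambda a, b: a + b)
first = [f() for _ in range(7)]   # 1, 1, 2, 3, 5, 8, 13
```

Swapping in `lambda a, b: a ^ b` reproduces the pitfall from the answer: it is bitwise xor, not exponentiation.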
{ "domain": "codereview.stackexchange", "id": 16242, "tags": "go, fibonacci-sequence" }
2D Multi Scale Dot Enhancement Filter based on Gaussian Filter and Hessian Matrix
Question: I'm trying to implement an algorithm that enhances dot- and line-like structures based on the Hessian matrix; the algorithm uses a Gaussian filter with different scales before calculating the Hessian matrix for every pixel and then the eigenvalues. I want to use the algorithm to enhance pulmonary nodules (dot-like structures) to extract them for later classification. The Gaussian filter is used to reduce noise and preserve objects with specific scales (diameters in my case). Here is the algorithm that I'm trying to implement:

A = imread('sliceX.png');
scales = [3, 5, 7, 11, 17]; % the scales in number of pixels
% correspond to diameters of nodules in mm [1, 1.6, 2.4, 3.8, 6]
Ecircle = zeros(512, 512, length(scales)); % Ecircle will store the
% results of each enhancement scale.
for sn = 1:length(scales) % sn for scale number
    % Smooth the original 2D image with a 2D Gaussian function of scale
    % Sigma_s.
    B = imgaussfilt(A, scales(sn));
    [gx, gy] = gradient(double(B));
    [gxx, gxy] = gradient(gx);
    [gxy, gyy] = gradient(gy); % it's normally [gyx, gyy] but since
                               % gyx = gxy it's ok
    % loop through the image to calculate the eigenvalues for every
    % pixel and based on that we choose the value of each scale
    % enhancement filter
    for x = 1:512
        for y = 1:512
            % construct the 2x2 Hessian matrix for every pixel
            h = [gxx(x, y), gxy(x, y); gxy(x, y), gyy(x, y)];
            e = eig(h); % returns a vector with the two eigenvalues
                        % lambda1 and lambda2, where lambda1 = e(1)
                        % and lambda2 = e(2)
            lambda1 = e(1);
            lambda2 = e(2);
            if abs(lambda1) < abs(lambda2)
                temp = lambda1;
                lambda1 = lambda2;
                lambda2 = temp;
            end
            if lambda1 < 0 && lambda2 < 0
                Ecircle(x, y, sn) = (abs(lambda2)^2)/abs(lambda1);
            end
        end
    end
    % multiply each enhancement scale by (sigma^2) as mentioned in the
    % article
    Ecircle(:, :, sn) = Ecircle(:, :, sn) * (scales(sn)^2);
end
I = max(Ecircle, [], 3);

This implementation is for enhancing dot-like structures (nodules), and can enhance line structures (vessels in my case) by just changing the
if condition. The problem is I'm not getting the results that I should get. The result of the developer of the method in the paper: And this is my result: Obviously the nodule in the image is greatly enhanced, but there is a lot of noise in the output image, plus all the vessel joints are enhanced too (small and big). I think the problem is with the Gaussian smoothing filter imgaussfilt(), and precisely with sigma, or scales(sn) in this example. I've read that sigma should be in the same units as x and y, i.e. number of pixels, so I've transformed the diameters (that I got from the experimental results of the original article) to numbers of pixels using the PixelSpacing attribute from the original DICOM file metadata. The diameters are [1, 1.6, 2.4, 3.8, 6] to cover approximately all the diameters of possible nodules. Where did I go wrong? If the problem is with the value of sigma, how can I specify sigma, and how can I use the imgaussfilt() function correctly? Note: here is the image used in the example: input image Answer: Questioner's answer... Sigma has the same units as x and y, i.e. number of pixels. In multi-scale filtering, the size of the filter must change when sigma changes. Obtain the number of pixels per millimeter, or vice versa. (I did this using the pixel-spacing property included in the DICOM metadata; in MATLAB you can do this as info = dicominfo('image.dcm'); and spacing = info.PixelSpacing;.) What I was doing wrong was not changing the filter size when I changed sigma at the different scales. So, the solution to my problem is this: the size of the Gaussian kernel should be 2 times, or preferably 3 times, the value of sigma on either side of the origin, as Mr. Cris suggested. Gaussian filtering with Matlab's Image Processing Toolbox cutoff = ceil(3*sigma); In MATLAB there are two options, fspecial() or imgaussfilt(); fspecial() is no longer recommended in newer versions of MATLAB and the latter is the recommended one.
h = fspecial('gaussian', 2*cutoff+1, sigma);
B = conv2(A, h, 'same');

or...

B = imgaussfilt(A, sigma, 'FilterSize', 2*cutoff+1);

Another thing I did that corrected my results was converting my input image to type double using the im2double() function. Here is the result:
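For comparison, here is a rough NumPy/SciPy transcription of the same multi-scale loop (a sketch under the assumptions above; `scipy.ndimage.gaussian_filter` grows its kernel with sigma automatically, with a default truncation of 4 sigma, which is essentially the fix the answer arrives at, and the closed-form 2x2 eigenvalues replace the per-pixel `eig` call):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dot_enhance(img, sigmas):
    """Multi-scale dot enhancement via Hessian eigenvalues, following
    the scheme above (per-scale normalisation by sigma**2)."""
    img = np.asarray(img, dtype=float)
    out = np.zeros(img.shape + (len(sigmas),))
    for k, s in enumerate(sigmas):
        b = gaussian_filter(img, s)          # kernel size scales with sigma
        gx, gy = np.gradient(b)
        gxx, gxy = np.gradient(gx)
        _, gyy = np.gradient(gy)             # gyx == gxy for smooth images
        # Eigenvalues of [[gxx, gxy], [gxy, gyy]] in closed form.
        tr = gxx + gyy
        det = gxx * gyy - gxy ** 2
        disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
        l1, l2 = tr / 2 + disc, tr / 2 - disc
        # Sort so that |l1| >= |l2|, as in the MATLAB loop.
        swap = np.abs(l1) < np.abs(l2)
        l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
        with np.errstate(divide="ignore", invalid="ignore"):
            resp = np.where((l1 < 0) & (l2 < 0), l2 ** 2 / np.abs(l1), 0.0)
        out[..., k] = resp * s ** 2
    return out.max(axis=-1)
```

On a synthetic bright blob, the response peaks at the blob centre and is zero on the flat background.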
{ "domain": "dsp.stackexchange", "id": 7755, "tags": "image-processing, matlab, filter-design, gaussian, multi-scale-analysis" }
Using Seurat to compare mutant vs. wt
Question: I am interested in using Seurat to compare wild type vs mutant. I don't know how to use the package. How can I test whether mutant mice, which have a deleted gene, cluster together? Answer: Single-cell analysis to compare samples is a long and difficult process. There is very good documentation for 10x Genomics cellranger, the DropSeq pipeline and the Seurat R package. These tools all have GitHub repositories and the authors are very responsive if you encounter issues. Depending on the technology used to generate the data, you'll need to use either cellranger or DropSeq to process the FASTQ files. These are designed to account for the different experimental designs. Still, it is important to ensure that your samples have been processed in the same way to reduce batch effects. This question is too open-ended to cover all of the details needed to perform the analysis, but here are some of the main things to consider and the general process for performing such an analysis. You will need competence using the command line (shell) and programming in R (or Python) to perform single-cell analysis. I recommend performing this analysis on a remote server as some steps are memory intensive. The general overview to compare samples is: Demultiplex the reads for different indices. You need to do this even if you've only put one sample per lane (to remove the index from the reads). cellranger mkfastq or Illumina's bcl2fastq will do this. Obtain a reference genome (FASTA) and gene annotation (GTF) for the species you are working with. You can prepare a reference transcriptome with cellranger mkgtf and cellranger mkref. Run cellranger count or the DropSeq pipeline on each sample separately. These will both perform STAR (splice-aware) alignment of paired-end RNA-Seq reads and count UMIs for each cell barcode and gene. Cells will be identified and filtered by an automatic UMI threshold (this can be changed by forcing the expected number of cells).
This will return a gene-barcode matrix. You need to aggregate the results of cellranger runs for different samples with cellranger aggr. This will perform downsampling by default to normalise the number of reads between samples. It will also perform graph-based (Louvain) clustering on the combined samples and return HTML and Cloupe summary data to explore, in addition to a combined gene-barcode matrix for all samples. Single-cell RNA-Seq experiments are subject to batch effects. The CCA method from Seurat and the MNN method from scran are both available in the respective R packages to account for this (with different approaches). Batch effect correction needs to be performed before downstream analysis to ensure comparisons are valid. These methods require an overlap between the samples to estimate technical errors and account for them. You need to bear in mind that neither of these methods works unless some of the cells detected in each sample are expected to be the same cell type, and biological replicates of each sample have been performed. Once you have performed this correction, then standard Seurat workflows such as those shown in the tutorials can be performed, which will identify clusters. You can use these to identify clusters specific to each genotype. Bear in mind that some overlap should be expected, and if there isn't one, batch effect correction will over-correct for this.
{ "domain": "bioinformatics.stackexchange", "id": 782, "tags": "r, scrnaseq, seurat, single-cell, cellranger" }
Coercing a list of nodes into the most probable tree
Question: Suppose that we have an RTF document which contains sections and sub-sections. The sections and subsections all have headings that are visually marked up (e.g., bold and italic), but the document structure is not made explicit (i.e., we have a linear flow of text). From the section headings, we wish to automatically determine the most likely document structure. For titles on the same sectioning level, we know that they probably have the same kind of markup, and they probably increment in numbering, but we don't know exactly what they should look like (e.g., bold/italic; arabic/roman; how deep subsectioning goes), and their relatedness may be fuzzy (the author might forget a number in a sequence, for instance 1. First section, Second section, 3. Third section). To make things more explicit, we can assume that we have a feature vector fs with a fixed number of features, which combines linearly into a fitness function f(fs) = w₁f₁ + w₂f₂ + … + wₙfₙ, given a weight vector ws. The fitness function is arbitrary; this is just to make the problem explicit. So from list G: We wish to create a tree G' such that: G' maintains the preorder relations of G G' maximizes some fitness function f(G'), where f calculates how alike nodes are that are children of the same parent (same markup; incrementing numbering). (root not displayed) My question: does this problem reduce to another well-known problem? This problem reminds me of a lot of stuff I've seen before, from finding the best path through a DAG to hierarchical clustering, except nothing seems to hit the sweet spot in terms of describing or solving the problem. I guess it's closest to the problem of finding the minimum spanning tree, except calculating the spanning tree score is not as straightforward. I have thought of my own solution, but I was surprised that I could find no resources that deal with this problem exactly.
My solution would be a dynamic programming algorithm that attaches a score to each possible tree as a linear function. (Meaning we can cache subtrees without re-calculating everything.) We can learn the constants of our function using some expectation maximization algorithm based on existing document structures. Answer: In the end, I ended up solving my particular problem using Stochastic Context-Free Grammars. There may be some additional cost incurred for production, or otherwise probability operations that you can't express in the CFG. The probabilistic Earley algorithm (Stolcke, 1995) allows us to intervene in the parsing process somewhat by producing callbacks on scanning a token (or predicting / completing). I have arbitrarily chosen my rule probabilities, but you can train them using the inside-outside algorithm, although that will probably mess up the soundness of any probability-tinkering callbacks. Practically, this question led me to create a Probabilistic Earley Parser for Javascript and for Java. These libraries only allow the user to multiply the scan probability given a token at a position in the sentence, but it is easy to add other callbacks with more information. Please create an issue on GitHub if you need this. Also, these libraries do not support training a grammar on a test set (inside-outside), because I did not have time to implement this. For an exposition of my use case, consider the chapter in my master's thesis Automatic Assignment of Section Structure to Texts of Dutch Court Judgments, Inferring a Section Hierarchy.
{ "domain": "cs.stackexchange", "id": 8070, "tags": "algorithms, graphs, optimization, trees" }
How can I learn about how cloning works in nature (twins)?
Question: I would like to learn about how the process of cloning works in nature: how twins are "created" after fertilization, and what genetic changes can occur in each of the twins. Where should I start? What are some interesting books on the topic, as well as recent research papers (or just the most important research topics that I could use to start looking for more information)? Answer: To learn about the process of how cloning works, you need to appreciate the underlying principles. A general bio textbook should suffice (the most popular one is Alberts, where you should focus on chapter 21, but maybe read other chapters for background). Nature Scitable is also a good source. I'd discourage reading research papers at this stage: primary research papers are often very specialized, and reviews are aimed at other researchers in the field; that is, unless you find one that is focused on communicating science to the public.
{ "domain": "biology.stackexchange", "id": 1084, "tags": "cloning" }
Node on RosAndroid is not receiving images from PC
Question: Hey guys, I have created a node on Android to subscribe to a topic for images from a PC. The PC publishes images continuously and the node on the Android phone subscribes for images. I have set the right IP for both ROS_MASTER_URI and ROS_IP as suggested here, http://answers.ros.org/question/72145/android-tutorial-pubsub-doest-subscribe-from-a-pc-node/ Still I am unable to achieve it. So please let me know the solution, if anybody has one. Originally posted by stark on ROS Answers with karma: 1 on 2015-01-05 Post score: 0 Answer: Somehow network setup with ROS Android seems to confuse many users. We'll need some more information: real Android device or simulator; version of ROS, ROS Android and Android itself; network setup (LAN, WAN, etc.). Also, just to be clear: are you trying to subscribe on your Android device to topics published by your PC, or is the PC trying to subscribe to topics published by your Android device? Edit: assumptions: you're using a physical Android device (not the simulator); both your ROS PC and the Android device are on the same network (a simple home LAN); you have checked that the Android device and the ROS PC have working IP connectivity; you have checked the Android application and that it is able to connect to ROS masters other than its own. Make sure your ROS PC has ROS_IP set to its own IP, and that ROS_MASTER_URI=http://$ROS_IP:11311/. In most cases, a home LAN doesn't have proper DNS for the PC and/or the Android device, resulting in strange behaviour. Setting ROS_IP should work around that. On the Android device, make sure to use the ROS_MASTER_URI with the IP address of the master (PC), not its hostname. Once the Android node is connected, use roswtf, rosnode and rostopic to diagnose your graph. Originally posted by gvdhoorn with karma: 86574 on 2015-01-07 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by stark on 2015-01-07: I am trying to subscribe on android device.
PC publishes images continuously.. Comment by gvdhoorn on 2015-01-07: Could you update your original question with the other information? Comment by stark on 2015-01-07: All your assumptions are correct. I am able to run the android_pubsub tutorial successfully, but I am facing this problem with the image_transport tutorial. I will now try to diagnose the graph..
{ "domain": "robotics.stackexchange", "id": 20484, "tags": "ros, ros-master-uri, ros-ip, android" }
Kinetic energy of system consisting of rod and rolling cylinder
Question: Suppose we have a cylinder with radius $r$ and mass $m_1$ rolling (without slipping and forward in the image below) on a table, with a rod hanging from a point that's fixed to the periphery of our cylinder. The rod has length $l$ and mass $m_2$. How can we then find the kinetic energy of this system? I'll try to present an image so that it becomes clearer for you. If we let our generalized coordinates be $\alpha$ and $\beta$ respectively, we can uniquely define the configuration of this system since the cylinder is rolling WITHOUT slipping. Notice that the point of contact between the cylinder and the floor is momentarily at rest, meaning we can find the energy of the cylinder as $$ \frac{1}{2}(\frac{1}{2}m_1r^2 + m_1 r^2) \dot{\alpha}^2 = \frac{3}{4}m_1 r^2 \dot{\alpha}^2$$ I'm sure of this step. Now comes the hard part: finding the kinetic energy of the rod. We know that rigid body motion can be split up into both rotational and translational motion. We see that the point that's at instantaneous rest, and about which we have pure rotation, is the point on the periphery. Meaning the rod has the rotational energy $$\frac{1}{2} (\frac{1}{3}m_2l^2) \dot{\beta}^2$$ However, I'm extremely unsure of whether the angular velocity is correct here. I don't really know how to think in this scenario, and in problems like these in general, where several rigid bodies are connected and their energies have to be taken into consideration. I'd be glad if anyone could explain the details of how to think in this problem, and then how to go on in more general problems like these. Please, I'm not asking for a solution. Only an explanation of how to think about the rotational vector at the point of contact between the rod and cylinder. Thanks.
Answer: When setting up the kinetic energy in terms of generalized coordinates, it is in many instances easiest to write $$ T = T_\text{tr,cm} + T_\text{rot,cm} $$ where $T_\text{tr,cm}$ is the translational kinetic energy of the center of mass (treating it as a point mass) and $T_\text{rot,cm}$ is the rotational kinetic energy as measured about the center of mass. One can typically find $T_\text{tr,cm}$ by deriving the Cartesian coordinates $(x_{cm}, y_{cm})$ of the center of mass of the object in terms of the generalized coordinates, and then calculating $$ T_\text{tr,cm} = \frac{1}{2} m \left( \dot{x}^2_{cm} + \dot{y}^2_{cm}\right). $$ In the case you've presented, it might be helpful to find the Cartesian coordinates of the attachment point as an intermediate step.
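Following the answer's suggestion, the intermediate step can be sketched in coordinates (a sketch only; it assumes the attachment point sits at the top of the cylinder at $\alpha = 0$ and that $\beta$ is measured from the downward vertical, so the exact signs depend on the chosen conventions):

```latex
% Attachment point P on the rim (rolling without slipping: centre at x = r\alpha):
x_P = r\alpha + r\sin\alpha, \qquad y_P = r + r\cos\alpha
% Rod centre of mass, with \beta the rod's angle from the downward vertical:
x_{cm} = x_P + \tfrac{l}{2}\sin\beta, \qquad y_{cm} = y_P - \tfrac{l}{2}\cos\beta
% Kinetic energy of the rod (translation of the cm plus rotation about the cm):
T_\text{rod} = \tfrac{1}{2} m_2 \left(\dot{x}_{cm}^2 + \dot{y}_{cm}^2\right)
             + \tfrac{1}{2}\left(\tfrac{1}{12} m_2 l^2\right)\dot{\beta}^2
```

Note the moment of inertia here is $\tfrac{1}{12} m_2 l^2$ (about the centre of mass), not the $\tfrac{1}{3} m_2 l^2$ (about an end) used in the question's attempt; that difference is exactly what the cm decomposition buys you.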
{ "domain": "physics.stackexchange", "id": 88225, "tags": "homework-and-exercises, energy, lagrangian-formalism, rotational-dynamics" }
Binary (Unary) Recommendation System with Biased Views
Question: I would like to create a content recommendation system based on binary click data that also takes views into account. What content a user has been exposed to, and therefore has the chance to click on, is currently biased by a rule-based system that is not always documented. I do have view data (whether a user saw the content on their screen, regardless of whether it was clicked), and am wondering how to take this into account with a traditional matrix factorization recommendation system such as this item-item approach, or if there are other, better options. Any suggestions for implementation in Python are a bonus! Answer: Item-item collaborative filtering can be applied to the unary data. This resource is good for learning item-item collaborative filtering on unary data. In your case, you just have positives: clicks. From here, you can proceed in two ways: Binary classification: For binary classification, you need to define "negatives". Usually implicit feedback or unary data does not have true negatives. So, in order to define your negatives, you can do a couple of things: Negative sampling: For each positive, you can sample a negative randomly. A view and no click as a negative: If the content was shown to the user and the user chose not to click on it, that counts as a negative. But it carries the selection bias of your rule-based system, which is already in place. Learning-to-rank: Learning-to-rank-based approaches such as BPR-MF perform well on unary data. This library is well documented for BPR-MF and works just with unary data. Learning from multi-channel feedback: If you want to learn from both views and clicks, this work comes to mind.
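To make the learning-to-rank option concrete, here is a minimal BPR-MF training loop in plain NumPy (a sketch only, not the API of the linked library; negatives are drawn by the random-sampling strategy described above):

```python
import numpy as np

def bpr_mf(clicks, n_users, n_items, k=8, lr=0.05, reg=0.01, iters=20000, seed=0):
    """Minimal BPR matrix factorisation on unary click data:
    positives are clicks, negatives are sampled uniformly at random."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    by_user = {}
    for u, i in clicks:
        by_user.setdefault(u, []).append(i)
    users = list(by_user)
    for _ in range(iters):
        u = users[rng.integers(len(users))]
        pos = by_user[u]
        i = pos[rng.integers(len(pos))]        # observed positive
        j = int(rng.integers(n_items))         # sampled negative candidate
        if j in pos:
            continue                           # skip accidental positives
        uu = U[u].copy()
        x = uu @ (V[i] - V[j])                 # score difference
        g = 1.0 / (1.0 + np.exp(x))            # gradient of -log sigmoid(x)
        U[u] += lr * (g * (V[i] - V[j]) - reg * uu)
        V[i] += lr * (g * uu - reg * V[i])
        V[j] += lr * (-g * uu - reg * V[j])
    return U, V
```

On a toy dataset where two user groups click two disjoint item groups, the learned scores rank each user's own item group above the other.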
{ "domain": "datascience.stackexchange", "id": 7673, "tags": "python, recommender-system, binary" }
Seeing light travelling at the speed of light
Question: Imagine there are two cars travelling "straight" at the speed of light*, $A$ and $B$. $B$ is following directly behind $A$. Suddenly, $B$ switches on its headlights. Will $A$ be able to see this light? My answer is no, since $A_v = B_v = c$ (the light will always stay stationary relative to $B$). This will probably lead to it gathering up, and intensifying. *I realize this is impossible, but it's a question my Grade 9 [Honours] teacher asked, so we don't need to get into Relativity, $m = \frac{m_0}{\sqrt{1 - (v / c)^2}}$, cough cough. (I think.) Answer: I can think of three ways to answer this: It can't happen. It really can't happen. See #1. Okay, that's probably enough ;-) Since you say we don't need to consider special relativity, suppose that the universe actually obeys Galilean relativity. That's the technical term for the intuitive way to think about motion, where velocities are measured with respect to some absolute rest frame, and there's nothing special about the speed of light or any other speed. If that were the case, then yes, the light beam would never catch up to car A. The energy contained in the light would presumably pile up in the headlight where it was emitted at first, but afterwards perhaps it would spread out sideways, or would be reabsorbed by the headlight as heat. We don't really have a good answer, because that's not the way the universe works - in fact, there's a lot of physics, both experimental and theoretical, that has been done to prove that it can't work that way. No matter how you try to resolve the problem, at some point you will run into a contradiction. The best thing you could probably do would be to draw a parallel to some sort of wave that travels with respect to some fixed reference frame, at a speed much less than that of light. Sound, for instance.
Sound waves travel with a certain speed with respect to the air, which defines a single absolute reference frame, and their speed is much less than that of light, so there are no special relativistic effects to worry about. Your headlight scenario would then be roughly equivalent to an airplane traveling at the speed of sound. What happens in that case is that the airplane creates a sonic boom, a shock wave which results from the energy in the emitted sound waves piling up at the airplane and eventually being forced to spread out sideways. So one might guess that in your hypothetical situation, the headlights of car B would create a light shock wave that would spread out perpendicular to the direction of motion. This actually can happen in certain physical situations, namely when something is traveling through a transparent material that slows down the speed of light. This means that light itself travels at a slower speed, but not that the "universal speed limit" is any different. The effect is called Cherenkov radiation and it does indeed work out much like a sonic boom would.
{ "domain": "physics.stackexchange", "id": 91814, "tags": "speed-of-light, velocity" }
Why are neural networks so data hungry?
Question: Stephen Wolfram published an interesting long post on machine learning this week. He illustrates a function approximation application with the following target function, piecewise flat with three regions. I understand one can describe such a function with five parameters: the three constant levels (initially low, high in the middle and mid on the right) and the two discontinuity points. As a network architecture, the following picture is given. If my count is right, there are 19 weights (4+12+3 arrows) and 8 biases (count of all neurons but the input one, 4+3+1), totalling 27 parameters. The activation function is said to be ReLU for all neurons. With this framing, we have 27 parameters in the model to estimate a 5-parameter function. The following image illustrates how the model fits the function as the number of examples grows, from 10 thousand examples to 10 million examples. The magnitude of data required is much higher than the complexity of the target function and the approximating network. How should this (dis)proportion of data to problem parameters be understood? Answer: One example or sample is basically a pair of data $(x_i, y_i)$ with $x_i$ randomly picked from the x-axis and $y_i$ from the piecewise function. As you can see, it doesn't provide a lot of information for calculating the weighting factors. With 10,000,000 random samples, one may capture the turning points. But if the data is not randomly sampled, I would guess the number of samples can be less. The process is different from calculating model parameters from the piecewise-function parameters, as the neural net doesn't know it is a piecewise function a priori. The function it expects is a general function, which can be very complex.
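The weight and bias count in the question can be checked mechanically (a sketch assuming the 1 → 4 → 3 → 1 fully connected architecture described):

```python
# Layer widths of the dense network described above: input 1, hidden 4 and 3, output 1.
sizes = [1, 4, 3, 1]
weights = sum(a * b for a, b in zip(sizes, sizes[1:]))  # 4 + 12 + 3 arrows
biases = sum(sizes[1:])                                 # one bias per non-input neuron
total = weights + biases                                # 27 parameters in all
```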
{ "domain": "ai.stackexchange", "id": 3688, "tags": "neural-networks" }
How can we prove that "extract almost minimum" operation in a priority queue cannot be done in o(log n)?
Question: Suppose we want to create a priority queue with 2 operations: insert and extract almost min. The extract almost min operation randomly selects either the minimum or the second minimum item from the structure, removes that item from the structure, and outputs it. How can we prove that both of these operations cannot be done in $o(\log n)$? My approach: if both operations could be done in $o(\log n)$, I could try to derive a contradiction with the lower bound for comparison-based sorting. To sort a list we would need $n$ calls of extract almost min, each taking $o(\log n)$ time, which would be $o(n\log n)$ plus the swaps needed to fix the elements that came out as the second minimum. Would that be enough to establish the contradiction? Or is there a way to sort in $O(n)$, or to prove this? Answer: What you have thought is not a bad start. However, more arguments are needed to establish the proposition. Given $n$ items, we can insert all of them into the priority queue and then extract almost min $n$ times to obtain a list of all items. Unfortunately, the obtained list is not sorted unless the minimum is extracted every time.

A mix of heapsort and insertion sort

Here is the adjusted procedure, such that the list obtained right before the next extraction is always sorted.

1. Insert all $n$ items into a priority queue $pq$.
2. Initialize an empty list $answer$.
3. While $pq$ is not empty:
   3.1. Extract almost min to get an "almost minimum" $\alpha$.
   3.2. Append $\alpha$ to $answer$.
   3.3. While $\alpha$ is smaller than the item right before it, swap the two.
4. Return $answer$, which is a sorted list of the given $n$ items.

The outer step 3 is just insertion sort, but with the next item provided by the "extract almost min" operation on $pq$. It is much faster than usual insertion sort, since an item will not be moved any more after it has been moved backwards once as "the item right before" some "$\alpha$". This fact will be proved below in detail.
The number of swaps and comparisons at step 3.3

Suppose we executed the procedure. Consider an item $\beta$ in $answer$ that was swapped with some $\alpha$ at some moment when step 3.3 was executed. That means $\alpha$ was extracted later than $\beta$ and $\alpha$ is smaller than $\beta$. Consider the moment when $\beta$ was output by "extract almost min" on $pq$. Every item in $pq$ other than the minimum was not smaller than $\beta$. Hence, $\alpha$ must be that minimum. So the same $\beta$ can be swapped with one $\alpha$ only. The swap between the same pair of $\alpha$ and $\beta$ cannot happen more than once, since $\alpha$ always moves to the front and $\beta$ always moves to the back. Hence at most $n-1$ swaps are made at step 3.3 during the whole execution of the procedure. For each execution of the while loop of step 3.3, every comparison is followed by a swap except the last comparison. So the number of comparisons at step 3.3 during the whole execution of the procedure is at most $n-1$ more than the number of swaps, i.e., at most $2(n-1)$.

Conclusion

Towards a contradiction, assume both insertion and extraction can be done in $o(\log n)$ time. The running time of the procedure above is $$o(n\log n)+o(n\log n)+ O(n)+ O(n) + O(n)=o(n\log n).$$ Note that both big $O$-notation and little $o$-notation are used. However, it is known that there is no comparison sort that uses $o(n\log n)$ comparisons. This contradiction means it is not true that both insertion and extraction can be done in $o(\log n)$.

An exercise

Prove the same proposition if "almost min" is defined as any one of the $m$ minimum items for some constant $m$ instead.
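The hybrid procedure is easy to simulate in Python, with a binary heap standing in for the priority queue and a coin flip deciding whether the minimum or the second minimum is extracted (a sketch for checking correctness and the swap bound; `heapq` gives $O(\log n)$ operations, so this illustrates the procedure rather than the lower-bound argument):

```python
import heapq
import random

def almost_min_sort(items, seed=0):
    """Sort via an adversarially random 'extract almost min' plus the
    insertion-sort repair of step 3.3; returns (sorted list, swap count)."""
    rng = random.Random(seed)
    pq = list(items)
    heapq.heapify(pq)
    answer, swaps = [], 0
    while pq:
        if len(pq) >= 2 and rng.random() < 0.5:
            # Extract the second minimum: pop the min, pop the next, push back.
            first = heapq.heappop(pq)
            a = heapq.heappop(pq)
            heapq.heappush(pq, first)
        else:
            a = heapq.heappop(pq)
        answer.append(a)
        i = len(answer) - 1
        while i > 0 and answer[i] < answer[i - 1]:   # step 3.3 repair
            answer[i], answer[i - 1] = answer[i - 1], answer[i]
            swaps += 1
            i -= 1
    return answer, swaps
```

Running it on distinct items always yields a sorted list with at most $n-1$ repair swaps, matching the bound proved above.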
{ "domain": "cs.stackexchange", "id": 20462, "tags": "algorithms, data-structures, algorithm-analysis, runtime-analysis, priority-queues" }
Chaining multiple predicates
Question: I have some filters that chain around 30 Expression<Func<T, bool>> using LINQ to Entities. Currently this is how I am managing them ...

//project filter
Expression<Func<Project, bool>> projectFilter = FilterEnabled();
projectFilter = projectFilter.And(GetProjectByOrganization())
    .And(GetProjectByProductLine())
    .And(GetProjectByProjectType())
...
//subproject filter
Expression<Func<SubProject, bool>> subProjectFilter = FilterEnabled();
subProjectFilter = subProjectFilter
    .And(...)
...
//activity filter
Expression<Func<Activity, bool>> activityFilter = FilterEnabled();
activityFilter = activityFilter
    .And(...)
...

The problem is the .And(Expression<Func<T, bool>>) extension method goes on for another 30 lines or so. How can I manage this another way instead of having to add .And 30+ times for each filter criterion? Each filter method looks something like this.

public Expression<Func<Project, bool>> GetProjectByProjectId()
{
    return prj => FilterCriteria.ProjectId == null || prj.ProjectID == FilterCriteria.ProjectId.Value;
}

This is also my predicate builder class where I created the .And()

public static class PredicateBuilder
{
    public static Expression<Func<T, bool>> And<T>(this Expression<Func<T, bool>> a, Expression<Func<T, bool>> b)
    {
        ParameterExpression p = a.Parameters[0];
        SubstExpressionVisitor visitor = new SubstExpressionVisitor();
        visitor.subst[b.Parameters[0]] = p;
        Expression body = Expression.And(a.Body, visitor.Visit(b.Body));
        return Expression.Lambda<Func<T, bool>>(body, p);
    }

    public static Expression<Func<T, bool>> Or<T>(this Expression<Func<T, bool>> a, Expression<Func<T, bool>> b)
    {
        ParameterExpression p = a.Parameters[0];
        SubstExpressionVisitor visitor = new SubstExpressionVisitor();
        visitor.subst[b.Parameters[0]] = p;
        Expression body = Expression.Or(a.Body, visitor.Visit(b.Body));
        return Expression.Lambda<Func<T, bool>>(body, p);
    }
}

internal class SubstExpressionVisitor : System.Linq.Expressions.ExpressionVisitor
{
    public
Dictionary<Expression, Expression> subst = new Dictionary<Expression, Expression>(); protected override Expression VisitParameter(ParameterExpression node) { Expression newValue; if (subst.TryGetValue(node, out newValue)) { return newValue; } return node; } } Answer: To answer the direct question, the obvious way is to have an All method which takes a params Expression<Func<T, bool>>[] expressions argument. You'll still have a 30-line call, but you won't have And on each line. There are potential side-benefits: It should reduce the amount of wrapping an expression just to unwrap it immediately. If you decide that you want AndAlso instead of And (is there any reason for not using it?) then it's much simpler to change. I would also be tempted to favour a refactor which allows null expressions. If instead of public Expression<Func<Project, bool>> GetProjectByProjectId() { return prj => FilterCriteria.ProjectId == null || prj.ProjectID == FilterCriteria.ProjectId.Value; } you have public Expression<Func<Project, bool>> GetProjectByProjectId() { return FilterCriteria.ProjectId == null ? null : (prj => prj.ProjectID == FilterCriteria.ProjectId.Value); } (with whatever type coercion is necessary to make that compile - I haven't tested it) then you can filter out null expressions in All. This both simplifies the expression constructed and the SQL generated from it, which might improve performance and might also make debugging it easier. Finally, two comments about SubstExpressionVisitor: public Dictionary<Expression, Expression> subst = new Dictionary<Expression, Expression>(); On the basis of coding to the interface rather than the implementation, the type should be IDictionary<Expression, Expression>. I personally would also prefer to hide the field and take the substitutions as arguments to the constructor rather than by modifying a public field after construction, but that's a subjective issue of style. 
If making that change, the type can become IReadOnlyDictionary<Expression, Expression> instead. protected override Expression VisitParameter(ParameterExpression node) { Expression newValue; if (subst.TryGetValue(node, out newValue)) { return newValue; } return node; } Too many people use Contains and then [key] rather than TryGetValue, so thumbs up for getting that right. I would observe, though, that foo.TryGetValue(key, out val) ? val : defaultVal is a common enough pattern that you could consider factoring it out as an extension method to IDictionary<K,V> and IReadOnlyDictionary<K,V>.
{ "domain": "codereview.stackexchange", "id": 26184, "tags": "c#, linq" }
The speed of sound is proportional to the square root of absolute temperature. What happens at extremely high temperatures?
Question: The speed cannot increase unboundedly of course, so what happens? Answer: It's a bit misleading to simply say the speed of sound is proportional to $\sqrt{T}$ because life is a bit more complicated than that. You've probably seen http://en.wikipedia.org/wiki/Speed_of_sound and this does indeed say in the introduction that the speed of sound is a function of the square root of the absolute temperature. However if you read on you'll find: $$c = \sqrt{\frac{P}{\rho}}$$ but for an ideal gas the ratio P/$\rho$ is roughly proportional to temperature hence you get $c \propto \sqrt{T}$. Imagine doing an experiment in the lab at atmospheric pressure where you raise the temperature and see what effect it has on the speed of sound. In this experiment the pressure, $P$, is constant at one atmosphere, so as you increase the temperature the density falls and the speed of sound does indeed increase. But because the density is falling the mean free path of the air molecules increases, and at some temperature the mean free path becomes comparable with the wavelength of sound. When this happens the air will no longer conduct sound so the speed of sound ceases to be physically meaningful. You could do a different experiment where you put a known amount of gas into a container of constant volume and then increase the temperature. In this experiment the density is constant, so as you increase the temperature the pressure increases and the speed of sound increases. In this experiment the mean free path is roughly constant so you don't run into the problem with the first experiment. However as the temperature rises the gas molecules will dissociate and then ionise to form a plasma. The speed of sound is then given by http://en.wikipedia.org/wiki/Speed_of_sound#Speed_in_plasma. You can keep increasing the temperature and the speed will indeed carry on increasing until it runs into a relativistic limit.
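The ideal-gas relation in the answer is easy to check numerically. A small sketch (the adiabatic index γ ≈ 1.4 and molar mass ≈ 0.029 kg/mol for dry air are assumptions of this example, not values given in the answer):

```python
import math

def speed_of_sound(T, gamma=1.4, molar_mass=0.029):
    """Speed of sound in an ideal gas at absolute temperature T (kelvin).

    c = sqrt(gamma * R * T / M), which is the sqrt(P/rho) relation with
    the adiabatic factor gamma included.
    """
    R = 8.314  # J/(mol K), universal gas constant
    return math.sqrt(gamma * R * T / molar_mass)

# Doubling the absolute temperature scales c by sqrt(2):
c_300 = speed_of_sound(300.0)
c_600 = speed_of_sound(600.0)
print(round(c_300))   # roughly 347 m/s for air near room temperature
print(c_600 / c_300)  # ~= sqrt(2)
```

This is of course only the ideal-gas regime; as the answer explains, the formula stops being meaningful once the mean free path grows to the sound wavelength or the gas dissociates.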
{ "domain": "physics.stackexchange", "id": 3501, "tags": "thermodynamics, waves, temperature, acoustics, speed" }
Semantic segmentation data and model compile in Keras
Question: I have multi-label data for semantic segmentation. For semantic segmentation for one class I get a high accuracy but I can't do it for multi-class segmentation. I have 6 class labels, so my Y train matrix has shape [78, 480, 480, 6] ('channels last'), where 78 is the number of images, 480,480 the image size, and 6 the number of masks, and my X train matrix is [78, 480, 480, 1]. The last lines of my CNN model: l = Conv2D(filters=64, kernel_size=(1,1), activation='relu')(l) output_layer = Conv2D(filters=6, kernel_size=(1,1), activation='sigmoid')(l) model = Model(input_layer, output_layer) model.compile(optimizer=Adam(2e-4), loss='categorical_crossentropy', metrics=['accuracy']) I don't know what I have done wrong that my multi-label semantic segmentation model doesn't have proper results. Image and masks: Answer: Two things stand out. The first is that you are using sigmoid as activation, which is used for binary classification (it just squashes values between 0 and 1). The other is that your learning rate is 5x lower than the default value. Also, because you are classifying images, I think the title is misleading; semantics is associated with "meaning", i.e. with language.
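The sigmoid point can be illustrated without Keras. With categorical_crossentropy and mutually exclusive masks, each pixel's class scores should form a probability distribution over the 6 classes; softmax guarantees this while an element-wise sigmoid does not (a NumPy sketch; the shapes and logits here are made up, and softmax as the replacement is the standard choice for this loss, not something stated in the answer):

```python
import numpy as np

np.random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Fake per-pixel logits for 6 classes: (height, width, classes)
logits = np.random.randn(4, 4, 6)

sig = sigmoid(logits)
soft = softmax(logits)

# Softmax scores sum to 1 over the class axis at every pixel,
# as categorical_crossentropy expects; sigmoid scores do not.
print(np.allclose(soft.sum(axis=-1), 1.0))  # True
print(np.allclose(sig.sum(axis=-1), 1.0))   # False (in general)
```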
{ "domain": "datascience.stackexchange", "id": 3124, "tags": "deep-learning, keras, optimization, loss-function, cnn" }
Please help me with this doubt from spherical waves
Question: How do you calculate the phase difference for spherical waves? How do you say whether they are in phase or out of phase? For sinusoidal waves we can easily say whether they are in phase or out of phase just by looking at them, but how do you do the same for spherical waves? Answer: In spherical coordinates, you can write a spherical wave as $F(x,t) = \frac{A}{r} \exp (ikr - i\omega t)$, where $r$ is the distance from the source. To say that two spherical waves are in phase at a certain point $(x,t)$ in space and time is to say that at this particular point, the argument inside the exponential is the same for both of them, up to a $2 \pi$ phase factor. Imagine you have two sources that emit spherical waves of wavelength $\lambda$; the sources are located at points $A$ and $B$, respectively. Let us denote by $\phi_0$ the constant phase shift between the two sources (which is to say, in the case of sinusoidal waves, one of them is $\cos(\omega t)$ and the other is $\cos (\omega t + \phi_0)$). Let us look at a point $M$ in space. We call $D_A$ the distance between $M$ and $A$ and $D_B$ the distance between $M$ and $B$. The two spherical waves are in phase at point $M$ under the condition that $\frac{2 \pi}{\lambda} D_A = \frac{2 \pi}{\lambda} D_B + \phi_0$ This formula comes from the statement I made earlier: looking at a certain point $M$ at time $t$, the argument of the two waves must be the same for both of them, up to a $2 \pi$ phase factor.
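The condition can be tested numerically: pick a point M, compute the two path lengths, and check whether the phase difference (2π/λ)(D_A − D_B) − φ₀ is a multiple of 2π. A sketch (the geometry and wavelength below are made up for illustration):

```python
import math

def phase_difference(M, A, B, wavelength, phi0=0.0):
    """Phase difference at point M between spherical waves from sources A and B."""
    d_a = math.dist(M, A)
    d_b = math.dist(M, B)
    return 2 * math.pi / wavelength * (d_a - d_b) - phi0

def in_phase(M, A, B, wavelength, phi0=0.0, tol=1e-9):
    """True when the phase difference is a multiple of 2*pi."""
    delta = phase_difference(M, A, B, wavelength, phi0)
    return abs(delta - 2 * math.pi * round(delta / (2 * math.pi))) < tol

A, B = (0.0, 0.0), (2.0, 0.0)
# Any point on the perpendicular bisector has D_A == D_B, so with
# phi0 = 0 the waves are in phase there:
print(in_phase((1.0, 5.0), A, B, wavelength=0.5))   # True
# Half a wavelength of extra path puts them exactly out of phase:
print(in_phase((1.125, 0.0), A, B, wavelength=0.5))  # False
```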
{ "domain": "physics.stackexchange", "id": 28106, "tags": "waves, spherical-harmonics" }
Complexity of matrix diagonalization
Question: I'm probably missing a trivial answer, but somehow I can't find it. Given symmetric matrix $A \in \mathbb R^{n \times n}$, what's the complexity of diagonalizing the matrix, i.e. finding diagonal $\Lambda = diag(\lambda_1, \ldots, \lambda_n)$ and orthogonal $Q \in \mathbb R^{n \times n}$ such that $$\|A - Q^{-1} \Lambda Q\| < \epsilon \|A\|?$$ Here I use the operator norm. You can assume that the largest eigenvalue of $A$ is bounded by constants from both above and below. You can assume that all eigenvalues are different (it can also be achieved by perturbing $A$). Basically, I have $f(x) = x^\top A x$, and I need to guarantee that the function doesn't excessively change after diagonalization. Eigenvalues can be approximated efficiently using e.g. shifted QR algorithm. For the eigenspaces, The complexity of the matrix eigenproblem claims to find them, but I'm confused about what their result is: the main theorem only talks about approximating eigenvalues. What exactly they mean by "associated eigenspaces" is unclear to me: both from the definition point of view (since the eigenvalues are only found approximately, they almost surely don't have the eigenspaces) and from the approximation point of view. Other papers also don't seem to show a concrete answer (again, maybe I misinterpret the results). Answer: Reducing to a tridiagonal matrix takes $O(n^3)$ independent of $\epsilon$. I believe the fastest algorithm after that is divide and conquer, which I believe is $O(n^2 \log(1/\epsilon))$, for a total complexity of $O(n^3 + n^2 \log(1/\epsilon))$. However, it’s possible I have the dependence on $\epsilon$ wrong here.
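The dense O(n³) path discussed in the answer is what numerical libraries implement (LAPACK's symmetric eigensolvers: tridiagonal reduction, then e.g. divide and conquer). A quick NumPy sketch of the quantity in the question, ‖A − QΛQ⁻¹‖/‖A‖ in the operator (spectral) norm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric matrix
n = 50
M = rng.standard_normal((n, n))
A = (M + M.T) / 2

# eigh is the symmetric/Hermitian eigensolver; columns of Q are eigenvectors
eigenvalues, Q = np.linalg.eigh(A)
Lam = np.diag(eigenvalues)

# Q is orthogonal, so Q^{-1} = Q^T and A ~= Q Lam Q^T
reconstruction = Q @ Lam @ Q.T

# Relative error in the operator norm: ord=2 gives the spectral norm
rel_err = np.linalg.norm(A - reconstruction, ord=2) / np.linalg.norm(A, ord=2)
print(rel_err)  # near machine precision, far below any practical epsilon
```

In floating point the backward error is already at machine-precision level, so the ε-dependence in the question only becomes visible in exact/arbitrary-precision models of computation.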
{ "domain": "cstheory.stackexchange", "id": 5483, "tags": "linear-algebra, algebraic-complexity" }
Java Trie Implementation
Question: I am creating a Trie class in Java, and am wondering what else can be done to make it even better. I am hoping to add concurrency to speed querying up. public class Trie { private HashMap<Character, HashMap> root; private final Character END_CHARACTER = '$'; public Trie() { initializeRoot(); } public Trie(String s) { initializeRoot(); add(s); } public Trie(Collection<String> collection) { initializeRoot(); for (String s : collection) { add(s); } } private void initializeRoot() { root = new HashMap<>(); } public void add(String s) { HashMap<Character, HashMap> node = root; for (int i = 0; i < s.length(); i++) { Character character = s.charAt(i); if (!node.containsKey(character)) { node.put(character, new HashMap<Character, HashMap>()); } node = node.get(character); } node.put(END_CHARACTER, new HashMap<>()); } public boolean contains(String s) { HashMap<Character, HashMap> node = root; for (int i = 0; i < s.length(); i++) { Character character = s.charAt(i); if (node.containsKey(character)) { node = node.get(character); } else { return false; } } return node.containsKey(END_CHARACTER); } } Answer: Initializing private HashMap<Character, HashMap> root; at declaration will simplify your code public class Trie { private HashMap<Character, HashMap> root = new HashMap<>(); private final Character END_CHARACTER = '$'; public Trie() {} public Trie(String s) { add(s); } public Trie(Collection<String> collection) { for (String s : collection) { add(s); } } public void add(String s) { HashMap<Character, HashMap> node = root; for (int i = 0; i < s.length(); i++) { Character character = s.charAt(i); if (!node.containsKey(character)) { node.put(character, new HashMap<Character, HashMap>()); } node = node.get(character); } node.put(END_CHARACTER, new HashMap<>()); } public boolean contains(String s) { HashMap<Character, HashMap> node = root; for (int i = 0; i < s.length(); i++) { Character character = s.charAt(i); if (node.containsKey(character)) { node = 
node.get(character); } else { return false; } } return node.containsKey(END_CHARACTER); } } From whats-wrong-with-overridable-method-calls-in-constructors Simply put, this is wrong because it unnecessarily opens up possibilities to MANY bugs. When the @Override is invoked, the state of the object may be inconsistent and/or incomplete. This having in mind we will refactor the class, to add a private internalAdd() method which we call from the constructor and the public add() methods public class Trie { private HashMap<Character, HashMap> root = new HashMap<>(); private final Character END_CHARACTER = '$'; public Trie() {} public Trie(String s) { internalAdd(s); } public Trie(Collection<String> collection) { for (String s : collection) { internalAdd(s); } } public void add(String s) { internalAdd(s); } private void internalAdd(String s) { HashMap<Character, HashMap> node = root; for (int i = 0; i < s.length(); i++) { Character character = s.charAt(i); if (!node.containsKey(character)) { node.put(character, new HashMap<Character, HashMap>()); } node = node.get(character); } node.put(END_CHARACTER, new HashMap<>()); } public boolean contains(String s) { HashMap<Character, HashMap> node = root; for (int i = 0; i < s.length(); i++) { Character character = s.charAt(i); if (node.containsKey(character)) { node = node.get(character); } else { return false; } } return node.containsKey(END_CHARACTER); } } but we can do better.. let us take a look at internalAdd() private void internalAdd(String s) { HashMap<Character, HashMap> node = root; for (int i = 0; i < s.length(); i++) { Character character = s.charAt(i); if (!node.containsKey(character)) { node.put(character, new HashMap<Character, HashMap>()); } node = node.get(character); } node.put(END_CHARACTER, new HashMap<>()); } If node.isEmpty() we won't need to check if (!node.containsKey(character)) anymore. Also if (!node.containsKey(character)) evaluates one time to true, we won't need to check this anymore. 
Let us add a new method: private void internalAdd(String s, HashMap<Character, HashMap> node) { for (int i = 0; i < s.length(); i++) { Character character = s.charAt(i); node.put(character, new HashMap<>()); node = node.get(character); } node.put(END_CHARACTER, new HashMap<>()); } and call it (note that the helper places the end-of-word marker itself and the caller returns after delegating; with a plain break the marker would land on the branch node instead of the last letter's node, and contains() would never find the word) private void internalAdd(String s) { HashMap<Character, HashMap> node = root; for (int i = 0; i < s.length(); i++) { Character character = s.charAt(i); if (node.isEmpty() || !node.containsKey(character)) { internalAdd(s.substring(i), node); return; } node = node.get(character); } node.put(END_CHARACTER, new HashMap<>()); } But wait, we can do even better, as node.isEmpty() should also be used in the contains() method public boolean contains(String s) { HashMap<Character, HashMap> node = root; for (int i = 0; i < s.length(); i++) { if(node.isEmpty()){ return false; } Character character = s.charAt(i); if (node.containsKey(character)) { node = node.get(character); } else { return false; } } return node.containsKey(END_CHARACTER); } Finished? No, we still can do better. As the passed String parameters won't be changed, let us make them final; the same is true for the root field.
Also, as janos has answered, use interface types instead of the implementation. Putting it all together: public class Trie { private final Map<Character, HashMap> root = new HashMap<>(); private final Character END_CHARACTER = '$'; public Trie() { } public Trie(final String s) { internalAdd(s); } public Trie(final Collection<String> collection) { for (String s : collection) { internalAdd(s); } } public void add(final String s) { internalAdd(s); } private void internalAdd(final String s) { Map<Character, HashMap> node = root; for (int i = 0; i < s.length(); i++) { Character character = s.charAt(i); if (node.isEmpty() || !node.containsKey(character)) { internalAdd(s.substring(i), node); return; } node = node.get(character); } node.put(END_CHARACTER, new HashMap<>()); } private void internalAdd(final String s, Map<Character, HashMap> node) { for (int i = 0; i < s.length(); i++) { Character character = s.charAt(i); node.put(character, new HashMap<>()); node = node.get(character); } node.put(END_CHARACTER, new HashMap<>()); } public boolean contains(final String s) { Map<Character, HashMap> node = root; for (int i = 0; i < s.length(); i++) { if (node.isEmpty()) { return false; } Character character = s.charAt(i); if (node.containsKey(character)) { node = node.get(character); } else { return false; } } return node.containsKey(END_CHARACTER); } }
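For completeness, here is a compact, self-contained variant of the same trie idea — a sketch, not the reviewer's code — using computeIfAbsent and a small Node subclass instead of raw HashMap types, together with the prefix-vs-word distinction that END_CHARACTER exists for:

```java
import java.util.HashMap;
import java.util.List;

public class TrieDemo {
    // A node maps each character to the next node; '$' marks end of word.
    static class Node extends HashMap<Character, Node> {}

    static class Trie {
        private static final char END = '$';
        private final Node root = new Node();

        void add(String s) {
            Node node = root;
            for (int i = 0; i < s.length(); i++) {
                // Create the child on demand and descend into it
                node = node.computeIfAbsent(s.charAt(i), c -> new Node());
            }
            node.put(END, new Node()); // marker goes on the LAST letter's node
        }

        boolean contains(String s) {
            Node node = root;
            for (int i = 0; i < s.length(); i++) {
                node = node.get(s.charAt(i));
                if (node == null) {
                    return false;
                }
            }
            return node.containsKey(END);
        }
    }

    public static void main(String[] args) {
        Trie trie = new Trie();
        for (String word : List.of("cat", "car", "dog")) {
            trie.add(word);
        }
        System.out.println(trie.contains("cat")); // true
        System.out.println(trie.contains("ca"));  // false: a prefix, not a word
        System.out.println(trie.contains("cow")); // false
    }
}
```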
{ "domain": "codereview.stackexchange", "id": 8953, "tags": "java, algorithm, trie" }
Asking for Docking programme
Question: I'm trying to do my M.Sc. research and I have to do docking to save some money, as I cannot try all the compounds that I'm working on. What is the best Protein-Ligand Docking programme? And does anyone have any tutorials for such programmes? Answer: It's case specific. Docking small compounds well is problematic and you need to consider whether: a single conformer is okay; rigid sidechains are okay; implicit water is okay; a non-polarisable model is okay; a non-covalent interaction is okay; and whether you are okay with a score that is a rough ΔΔGibbs approximation of the interaction (see this SO answer) or you want something more like a K_d. Do note that for any experiment you do to have any validity you need to have controls. Do you know what can and cannot bind already? It is imperative that you score these too. Free I am assuming that you want to use a free program. Virtual compound screens (VCS) are a big business in pharma. Therefore, a lot of really good software is not free, such as ICM-Dock, which outscores AutoDock (discussed below) in most tests. AutoDock: simple cases AutoDock Vina is easy to use and has a PyMOL plugin (I believe), but the peptide, in both its sidechains and its backbone, is rigid, which is not good. The ligand rotamers are generated for AutoDock at runtime, but there is a hardcoded low limit; it technically isn't a rigid ligand, but nearly. It is the best-scoring free rigid peptide docking program. There are lots of tutorials on using AutoDock, such as the official one or on YouTube. As shown by this Bioinformatics Stack Exchange question, Autodock 4 (different score fxn) has several steps, each important. Rosetta: very complicated cases I prefer the harder-to-use Rosetta ligand_dock, which uses conformers (if the ligand is parameterised correctly, say they were generated via Open Babel conformers) and allows sidechains to repack. Water Both tools mentioned use implicit waters, which works badly for highly hydrophobic ligands.
That is, the waters are simulated as a homogeneous field that affects the ligand and protein. But some solutions are possible: in the case of AutoDock a WaterDock variant exists —although this is not polarisable water. For more advanced cases there are MD methods such as dynamic undocking, which are completely non-trivial. All these methods mentioned use classical force-field calculations (Amber, Charmm, Talaris etc. are different models), but in MD noise (think of it as environmental heat) is present in the calculations, so the ligand comes unbound (and is actually pulled out). You can also get better results by refining your bound ligand with QM-MD (e.g. with Gaussian). Choices... This is a lot of options. So a first question is: are your ligands man-made drugs/fragments (i.e. obey Lipinski's rule of 5) or metabolites? Generally the former are rigid and very hydrophobic (greasy) (and require better force fields), while the latter are flexible and hydrophilic. Minor considerations Also to consider is the entranceway to your protein, as the Michaelis constant is dictated in some enzymes by the substrate going in, not by its binding (e.g. P450s). If the ligands have a shared backbone that is fixed there are some tricks you can do to cut your compute time and increase the reliability. If the ligands are bound covalently not all methods are possible; Rosetta ligand_dock can handle this, but it is a bit fiddly to parameterize. RDKit skills One thing also to consider is your programming skills. Can you use RDKit? In most cases downloading the sdf off PubChem and running OBabel or similar to generate conformers is not possible and you have to be familiar with a python package called RDKit, which is really powerful but non-trivial to use. Additionally, docking scores can be strongly augmented with ligand-only values (such as logP, TPSA, molar refractivity, molecular weight, QED, Bertz topology etc. etc.) that are generated from RDKit. NB.
The scoring function in AutoDock Vina (not 4 though, as that is an Amber FF) partially takes these ligand-only properties into account. Ligand_dock details Ligand_dock is an older application of the Rosetta suite and the current philosophy is to use a specific Rosetta script; however, it works fine and is actually quicker to implement. Here are some links: manual of ligand_dock tutorial for ligand parameterisation PubChem for your SDF files online tools for small molecule manipulation example of ligand creation for docking with RDKit
{ "domain": "bioinformatics.stackexchange", "id": 1165, "tags": "docking" }
If rabies always kills its hosts, must there be some animal that is an asymptomatic carrier?
Question: I searched a lot on the Internet, but I am not clear about the spread of the rabies virus. As I understand, rabies kills any animal it infects—cats, dogs, foxes, bats, humans. If it kills all of its hosts, then where does it stay? So, if all rabies-infected animals die, but in order to transmit the virus, every rabid animal has to bite some other animal or have rabies without symptoms, must there be some animal that is an asymptomatic carrier? Answer: This is an interesting question, and it's been subject to a fair amount of research. From an epidemiological perspective, most rabies outbreaks have been studied in dogs. Among domestic dogs, the R0, or basic reproduction number, of rabies is usually quite low—estimated to be around 1.2 in rural sub-Saharan Africa, and <2 in most historically observed cases [1] (though a particularly bad epidemic in Osaka had an R0 of ~2.42 [2]). This implies that, among dogs, it's quite possible for rabies to spread faster than it kills its hosts, but can be entirely eliminated in dogs by mass vaccinations (see [1] and [2]). But your question is still valid—if, say, the virus infects all the dogs in an area, and they all die, shouldn't rabies disappear from that area? There's some debate as to where the virus keeps hanging out. For one, there is some circumstantial evidence [3] that the fatality rate in dogs is not 100%, but actually closer to 85%. However, this doesn't necessarily explain the reemergence of epidemics, since for the virus to spread, it needs to get to the dog's saliva, by which point the dog is likely exhibiting lethal symptoms [4][5]. Dogs sometimes being asymptomatic carriers is a tempting explanation, but remains a "highly speculative" possibility [6]. To answer your question, let's look at a place where rabies outbreaks can cause severe economic losses—the livestock farms of South America and sub-Saharan Africa. 
Vampire bats, the hematophagic suckers, can bite over half of the animals in at-risk zones [7], which house some 70 million head of cattle. Since bats are very well-known sources of infectious disease—in fact, they're known to be hosts of 10 out of 11 recognized Lyssavirus species, including the rabies virus [8]—they've been a tempting explanation since 1911, when a bat-borne rabies outbreak in Brazil was first diagnosed [9]. [9] continues to say this: An idea that vampire bats may be asymptomatic rabies carriers, shedding the virus in their saliva for months, was popular during initial studies of vampire bat rabies (16). However, in a well-documented experimental study by Moreno and Baer (17), the disease in vampire bats was similar to rabies observed in other mammals. The bats that developed signs of disease and excreted the virus via saliva soon died, whereas those that survived the inoculation without clinical signs never excreted the virus or had it in the brain as demonstrated upon euthanasia. More recently, the asymptomatic excretion of RABV in the saliva of experimentally infected vampire bats, which survived the challenge during at least 2 years of observation, was documented again (18). Clearly, this phenomenon requires additional investigation. This is all I've been able to find so far. If I had to bet, I'd put my money on bats being the asymptomatic carrier that you're looking for, with a solid $n=14$ paper to back it up [13]. However, when it comes to established scientific consensus—well, I'm not sure there is one.
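One practical consequence of the R0 figures quoted above is worth spelling out. In the standard SIR picture — an assumption layered on top of the answer, not something it states — an epidemic can be stopped by immunizing a fraction 1 − 1/R0 of the population, which is why the dog vaccination campaigns in [1] and [2] work at modest coverage:

```python
def critical_vaccination_fraction(r0):
    """Herd-immunity threshold 1 - 1/R0 from the basic SIR model."""
    if r0 <= 1:
        return 0.0  # with R0 <= 1 the epidemic dies out on its own
    return 1.0 - 1.0 / r0

# R0 values quoted in the answer:
for r0 in (1.2, 2.0, 2.42):
    print(r0, round(critical_vaccination_fraction(r0), 3))
# R0 = 1.2  -> ~17% coverage needed
# R0 = 2.42 -> ~59% coverage needed
```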
{ "domain": "biology.stackexchange", "id": 10443, "tags": "evolution, virology, epidemiology" }
Find a palindrome using Ruby
Question: I am preparing for an upcoming exam and have the following practice question.. Write a method to determine if a word is a palindrome, without using the reverse method. Most of you probably know what a palindrome is, but for those that don't.. a palindrome is a word which reads the same backward or forward. Anyway, here's my answer to the question. I've written my own reverse method as directed. How would you refactor this? def reverse(word_arr) reverse = [] index = word_arr.length until index == 0 do reverse << word_arr[index - 1] index -= 1 end reverse end def is_palindrome?(word) word_arr = word.downcase.gsub(/ /,'').split('') true if word_arr == reverse(word_arr) end p is_palindrome?('Anna') p is_palindrome?('Joe') p is_palindrome?('Go dog') Answer: There are some different ways to look at what a palindrome is, and all of them can translate directly to code. [For simplicity's sake, I will assume that the word always gets passed as an Array of downcased single-letter Strings, e.g. like palindrome?('Anna'.downcase.chars)] "A palindrome is the same forwards and backwards": def palindrome?(word) word == reverse(word) end This is of course the one you were thinking about, and the one the exam author wants to prohibit. Obviously, one way to "work around" this restriction is to implement the reversal method yourself. Again, there are several different ways of thinking about reversing an array: "Appending the first element to the reversed rest": def reverse(ary) return [] if ary.empty? reverse(ary.drop(1)) + [ary.first] end "Prepending the last element to the reversed rest": def reverse(ary) return [] if ary.empty? [ary.last] + reverse(ary[0...-1]) end "The first and last letters are the same and the rest is a palindrome": def palindrome?(word) return true if word.empty? word.first == word.last && palindrome?(word[1...-1]) end My guess is that this, or something like this, is actually what the exam author was thinking about. 
However, by explicitly forbidding reverse, they focus the student's mind on "How can I overcome not having reverse" instead of "how could I interpret a palindrome differently". IOW: it's a bad exam question (if my guess is right). Of course, there are even more ways to think about palindromes (and reversal), and there are some interesting optimizations given in other answers and comments.
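Putting the answer's last interpretation together into one runnable method — with the question's downcasing and space-stripping added back in, since the answer assumed that preprocessing had already been done:

```ruby
# "The first and last letters are the same and the rest is a palindrome",
# applied to a raw string as in the original question.
def palindrome?(word)
  chars = word.downcase.gsub(/\s/, '').chars
  check = lambda do |cs|
    return true if cs.length <= 1   # empty or single letter: trivially a palindrome
    cs.first == cs.last && check.call(cs[1...-1])
  end
  check.call(chars)
end

puts palindrome?('Anna')   # true
puts palindrome?('Go dog') # true
puts palindrome?('Joe')    # false
```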
{ "domain": "codereview.stackexchange", "id": 19551, "tags": "ruby, palindrome" }
The meaning of $*$ in regular expressions
Question: I'm designing a Turing machine that decides a language denoted by a regular expression. Let's say this expression is $a^*bbc^*$. Does this machine accept the string $bb$ since $a^*$ and $c^*$ can have zero instances or more? Answer: Yes, the word $bb$ is in the language generated by the regular expression $a^*bbc^*$, because, as you say, $a^*$ and $c^*$ also generate the empty word. So if you are building a Turing machine that accepts this language, it should also accept the string $bb$.
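The claim is easy to sanity-check with any regex engine, e.g. Python's re (using fullmatch so the whole string must be generated by the expression):

```python
import re

pattern = re.compile(r'a*bbc*')

# a* and c* each match zero or more characters, so 'bb' alone is accepted:
print(pattern.fullmatch('bb') is not None)       # True
print(pattern.fullmatch('aabbccc') is not None)  # True
print(pattern.fullmatch('abc') is not None)      # False: only one b
```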
{ "domain": "cs.stackexchange", "id": 3481, "tags": "turing-machines, regular-expressions" }
ReactJS link with conditional CSS classes
Question: I'm starting to learn React/NextJS and find the way to write conditional CSS very ugly compared to VueJS. I'm curious if someone could look over this code snippet and guide me in the right direction towards a cleaner way of designing my React code. <Link key={link.text} href={link.to}> <a className={`border-transparent border-b-2 hover:border-blue-ninja ${ activeClass === link.to ? 'active' : '' }`}> {link.text} </a> </Link> Is this best practice? Answer: You could use a conditional className utility. There are several solutions in the ecosystem, but I would recommend clsx. Your example would then be: <Link key={link.text} href={link.to}> <a className={clsx('border-transparent border-b-2 hover:border-blue-ninja', { active: activeClass === link.to } )}> {link.text} </a> </Link>
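If you'd rather not add a dependency, the subset of clsx used here is only a few lines to write yourself — a sketch of the idea, not the real package, which also handles arrays, numbers, and nested inputs:

```javascript
// Joins string arguments and the truthy keys of object arguments.
function classNames(...args) {
  const classes = [];
  for (const arg of args) {
    if (!arg) continue; // skip '', null, undefined, false
    if (typeof arg === 'string') {
      classes.push(arg);
    } else if (typeof arg === 'object') {
      for (const [name, enabled] of Object.entries(arg)) {
        if (enabled) classes.push(name);
      }
    }
  }
  return classes.join(' ');
}

console.log(classNames('border-transparent border-b-2', { active: true }));
// -> 'border-transparent border-b-2 active'
console.log(classNames('border-b-2', { active: false }));
// -> 'border-b-2'
```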
{ "domain": "codereview.stackexchange", "id": 41672, "tags": "beginner, css, react.js, jsx" }
References to learn about Majorana Zero Modes
Question: I'm a masters student currently trying to learn about Majorana zero modes in condensed matter physics. But so far the references I have checked have not been very useful for learning. I even read Kitaev's paper Unpaired Majorana fermions in quantum wires. https://arxiv.org/abs/cond-mat/0010440 But I'm having a slightly hard time going through this. Does anyone know more didactic references? Up to now these two below have been nice: Topological superconducting phases in one dimension. Felix von Oppen, Yang Peng, Falko Pientka Majorana Qubits. Fabian Hassler https://arxiv.org/abs/1404.0897 Answer: I stumbled across this Master's thesis by Henrik Roising 'Topological Superconductivity and Majorana Fermions' and found it immensely useful and well written: https://www.duo.uio.no/handle/10852/51429 I also really liked this PhD thesis by Stefan Rex 'Electric and magnetic signatures of boundary states in topological insulators and superconductors': https://brage.bibsys.no/xmlui/bitstream/handle/11250/2460720/PhD_Stefan%20Rex.pdf?sequence=1 I referred to the above two a lot when learning about Kitaev's chain/ Majorana modes. Some other good ones come from Eddy Ardonne's students: http://staff.fysik.su.se/~ardonne/events.html I liked Christian Spånslätt's and Nikolaos Palaiodimopoulos's Licentiate and Master's theses respectively (because those are the only two I read). I don't know any of these people; I just google a lot.
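As a complement to reading: the central result of Kitaev's paper can be reproduced numerically in a few lines by diagonalizing the Bogoliubov-de Gennes (BdG) matrix of a finite open chain. At the sweet spot μ = 0, t = Δ the spectrum contains a pair of numerically exact zero modes (the unpaired edge Majoranas), which are absent in the trivial phase |μ| > 2t. A sketch — the sign and normalization conventions below are one standard choice and may differ from those in the references above:

```python
import numpy as np

def kitaev_bdg_spectrum(n_sites, mu, t, delta):
    """Eigenvalues of the BdG matrix of an open Kitaev chain.

    H_BdG = [[h, D], [-D*, -h^T]] with h the hopping/chemical-potential
    block and D the antisymmetric pairing block.
    """
    h = np.zeros((n_sites, n_sites))
    d = np.zeros((n_sites, n_sites))
    np.fill_diagonal(h, -mu)
    for j in range(n_sites - 1):
        h[j, j + 1] = h[j + 1, j] = -t   # nearest-neighbour hopping
        d[j, j + 1] = delta              # p-wave pairing, antisymmetric
        d[j + 1, j] = -delta
    bdg = np.block([[h, d], [-d.conj(), -h.T]]) / 2  # 1/2 from particle-hole doubling
    return np.linalg.eigvalsh(bdg)

# Topological sweet spot: two eigenvalues pinned at zero energy
sweet = kitaev_bdg_spectrum(20, mu=0.0, t=1.0, delta=1.0)
print(np.abs(sweet).min())  # ~1e-16: the Majorana zero modes

# Trivial phase (|mu| > 2t): gapped, no zero modes
trivial = kitaev_bdg_spectrum(20, mu=3.0, t=1.0, delta=1.0)
print(np.abs(trivial).min())  # of order 0.5: gapped
```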
{ "domain": "physics.stackexchange", "id": 54595, "tags": "condensed-matter, reference-frames, education, majorana-fermions" }
Counting items in categories
Question: I'm an experienced programmer who also has a little experience with functional programming, although mostly theoretical (as in hobby-level reading and very minor projects). I recently decided to work through the new book "Beginning Haskell" to get up to speed with Haskell and try to crank out something worthy of a github repo. Early in the book you are asked to implement a Client ADT including some supportive data types for Person and Gender. I did it like this and I'm not liking it at all: data Gender = Male | Female | Unknown | None deriving (Show, Eq) data Person = Person { fName :: String, lName :: String, gender :: Gender } deriving (Show, Eq) data Client = GovOrg { code :: String, name :: String } | Individual { code :: String, person :: Person } | Company { code :: String, name :: String, contact :: Person } deriving (Show, Eq) After this you are asked to write a function that takes [Client] and returns the count for the different genders, e.g. "out of the 20 clients listed, we have 5 male, 5 female, 10 unknown". Let's call it countGenders. Below is the code I've ended up with, and I feel that it's a complete mess and lacks any feeling of clarity or elegance. I think my OOP-brain is forcing me into bad designs. I'll walk you through the lines below. Once again I am not at all happy with this. First we have a result data type. Could be just an (Int, Int, Int), but I feel like the location in the tuple isn't a natural fit with what the data should represent (as opposed to Point Int Int): data GenderCount = GenderCount { male :: Int, female :: Int, unknown :: Int }
countGenders :: [Client] -> GenderCount countGenders cs = GenderCount (sum m) (sum f) (sum u) where (m, f, u) = unzip3 $ map genderCount cs Then we have genderCount, which maps a Client to a result tuple (Male, Female, Unknown): genderCount :: Client -> (Int, Int, Int) genderCount c = case cGender c of Male -> (1, 0, 0) Female -> (0, 1, 0) Unknown -> (0, 0, 1) None -> (0, 0, 0) Last we have cGender, which gets the gender from a client (using RecordWildCards extension): cGender :: Client -> Gender cGender GovOrg {} = None cGender Individual { .. } = gender person cGender Company { .. } = gender contact As you can see this is all a mess. But I'm having trouble thinking of how to refine it. I feel like my data model is bloated, which bleeds over into too many functions to do something that should be really simple. I would appreciate any feedback on the data model, the functions or anything else. Here is the entire code if you want to look it over in full context: http://pastebin.com/Vymcd6Bj Answer: This isn't that much of a mess at all. Let's clean it up, then get a little fancy. Right off the bat it seems a little funky to have a None Gender, as it appears you're not actually trying to account for agender individuals but instead providing for the failure to produce a Gender for a particular Client. Operations that might fail in Haskell usually signal so by returning a Maybe value, so let's drop None from the definition and see what needs changing. data Gender = Male | Female | Unknown deriving (Show, Eq) genderCount and cGender depend on None, let's start with the latter. c doesn't mean much as a prefix to me, so I'm going to call this function clientGender, but that's a stylistic preference. Changing this function to use Maybe is straightforward. clientGender :: Client -> Maybe Gender clientGender GovOrg {} = Nothing clientGender Individual {..} = Just (gender person) clientGender Company {..} = Just (gender contact) Now let's turn to genderCount. 
The first thing to notice is that the function is a tiny wrapped up case statement. This is usually a code smell to me that hints at a missed opportunity to break functions into smaller, more independent pieces. In this case genderCount has nothing to do with Clients, so let's rip that part out! genderCount :: Gender -> (Int, Int, Int) -- Counting Genders, not Clients! genderCount Male = (1, 0, 0) genderCount Female = (0, 1, 0) genderCount Unknown = (0, 0, 1) This is still unsatisfactory, right? There's that convention-typed tuple, and we've got a perfectly good GenderCount datatype lying around, so let's use it. genderCount :: Gender -> GenderCount genderCount Male = GenderCount 1 0 0 genderCount Female = GenderCount 0 1 0 genderCount Unknown = GenderCount 0 0 1 It's a small change, but users of GenderCount can rely on the field names rather than a tuple ordering. And now countGenders, the piece that glues it all together. Our type is still correct, which is awesome! No changes there. The implementation though is doing a few different things we'll need to adjust. In an informational sense it's determining the genders of all of the clients, then accumulating a count of each Gender. What it looks like though is some weird tuple math to produce a GenderCount value from nowhere! We can rewrite it given our new implementations to be a little prettier, but first we're going to need a way to add two GenderCounts together. addGenderCounts :: GenderCount -> GenderCount -> GenderCount addGenderCounts (GenderCount m f u) (GenderCount n g v) = GenderCount (m + n) (f + g) (u + v) A lot of repeated instances of GenderCount in there, but that's not so bad given we can use this as a combinator. Now we can put our countGenders function together. countGenders :: [Client] -> GenderCount countGenders = foldr (addGenderCounts . genderCount) (GenderCount 0 0 0) . mapMaybe clientGender This works! 
I've imported mapMaybe from Data.Maybe here to account for our clientGender function sometimes returning Nothing (mapMaybe drops all the Nothings and returns a list of the Just values). We use a right fold to accumulate our GenderCount values, and a starting value of GenderCount 0 0 0 for our accumulator. There are a few ways to go from here to clean things up further. You could get rid of the GenderCount 0 0 0 value by using foldr1, at the cost of adding another composition with map into the mix. If you have sharp eyes and a working knowledge of the Typeclassopedia you'll note a striking similarity between the way we use GenderCount with a right fold, and a Monoid. If you don't have a working knowledge of the Typeclassopedia, our motivation is that Monoids allow us to specify an identity element and an associative reduction operation, revealing some higher level abstractions (and functions) we can use to wire our code together. Let's make a Monoid. instance Monoid GenderCount where mempty = GenderCount 0 0 0 mappend = addGenderCounts -- Laws: -- mempty <> x = x -- x <> mempty = x -- x <> (y <> z) = (x <> y) <> z I won't prove the Monoid laws, but you should be able to see that they are trivial given the properties of addition and our definition of GenderCount. Let's try one more pass at countGenders now. countGenders :: [Client] -> GenderCount countGenders = foldMap genderCount . mapMaybe clientGender Nice!
{ "domain": "codereview.stackexchange", "id": 7422, "tags": "beginner, haskell, functional-programming" }
What is the species of these mushrooms?
Question: The mushrooms are gilled with a light-brown cap. The stem is widened to the base. What species is it? Is it considered edible? They are found in mixed forest in Moscow, Russia. They grow in the ground quite separated from each other. Answer: This is Clitocybe nebularis, an edible species.
{ "domain": "biology.stackexchange", "id": 1635, "tags": "taxonomy, species-identification, mycology" }
Info about interfacing Fanuc R-2000iC/165F with ROS
Question: Hi, I already got the answers through direct contact, but as requested, it is worth sharing them here (thanks @gvdhoorn!). I wanted to know if a package is already available for the R-2000iC/165F and also the R-2000iC/210F, or alternatively how to build it myself. For the 165F version, it is now available in fanuc_experimental. The 210F version should have the same Xacro as the 165F version, but needs to have joint limits changed (and inertia if needed). I will PR it as soon as I have tested and confirmed it working. The process needed to build a URDF from scratch:

- try to get a SolidWorks model; it will be infinitely more efficient to convert
- don't use SolidWorks or any other tool, it's not needed and will complicate the work
- if you can get a SolidWorks model, use any SolidWorks viewer and export the parts of the links to individual STL files (binary STL)
- use a mesh editing tool to transform their origins to where the joint origins are (you can do this using the diagrams shown in the 'basic specifications' section of any operating manual of your robot, which show the lengths of the joints)
- use existing support package xacro files as a template for the structure
- if it is a variant, do not create a separate package; variants of a series/model go into a single robot support package

Hope this helps others. Originally posted by dq18 on ROS Answers with karma: 15 on 2021-01-28 Post score: 0 Answer: You may be interested to know that I pushed a new support package for the R-2000iC/165F. See ros-industrial/fanuc_experimental/fanuc_r2000ic_support. Edit: ros-industrial/fanuc_experimental#59 may be a nice PR to look at, it adds support for another variant of the R-2000iC: the 270F. Originally posted by gvdhoorn with karma: 86574 on 2021-01-28 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by gvdhoorn on 2021-01-28: "For the 165F version, it is now available in fanuc_experimental" Oh, I only now see you already noticed that. 
Comment by dq18 on 2021-01-31: Yes, thanks for that. I will mark this topic as answered
{ "domain": "robotics.stackexchange", "id": 36017, "tags": "ros, fanuc" }
Two classes to avoid confusion when handling degrees and radians
Question: Experimenting with operator overloading for the first time. Based on my reading, it appears to be a bit of a minefield. Have I fallen into any traps? namespace Units { class Degree; // Forward declare Degree so it can be passed by reference into Radian. /// \class Radian /// Encapsulates a double, and used to represent a radian. class Radian { public: /// Constructor. /// \param radian The angle (radians). Radian(double radian) : angle_(radian) {} /// Constructor. /// \param degree The angle (degrees). Radian(const Degree& degree); /// Function Call Operator. /// \return <double> The angle value (radians). operator double() const { return angle_; } private: double angle_; }; /// \class Degree /// Encapsulates a degree, and used to represent a degree. class Degree { public: /// Constructor. /// \param degree The angle value (degrees). Degree(double degree) : angle_(degree) {} /// Constructor. /// \param radian The angle (radians). Degree(const Radian& radian); /// Function Call Operator. /// \return <double> The angle (degrees). operator double() const { return angle_; } private: double angle_; }; /// Constructor. /// Constructor defined outside of class declaration, to allow Degree class to be defined before it. /// \param degree The angle (degrees). Radian::Radian(const Degree& degree) : angle_(degree * SoC::Maths::Trigonometry::DegToRad) {} /// Constructor. /// Constructor defined outside of class declaration, to allow Radian class to be defined before it. /// \param radian The angle (radians). Degree::Degree(const Radian& radian) : angle_(radian * SoC::Maths::Trigonometry::RadToDeg) {} } // namespace Units #endif // UNITS_H EDIT and initialised in the way: const Units::Degree latitude = 48.8566; Answer: Design While it might seem that the current design is working fine, I think there are two distinct issues lurking beneath the surface. 
Issue #1: operator double From what I can tell, the intention behind these unit classes seems to be the prevention of unit mismatches. Providing an access function to the contained value is not a bad idea, but using the implicit conversion operator for doing so is probably not the wisest choice here. Consider the following snippet: auto angle = Degrees{ 180.0 }; auto sine = std::sin(angle); It compiles easily enough, but will then fail silently at runtime. It likely won't even crash the application, but quietly produce values different to those expected. Of course, this is a contrived case ("std::sin is not really in the scope of this library (yet)!"), but nonetheless it shows a problem: As it is, there is hardly any prevention of accidental unit mismatches. Adding the keyword explicit to operator double() would prevent these accidental conversions (including the std::sin call above, since explicit conversion operators do not take part in implicit argument conversions), at the cost of more verbose call sites. If this were the only issue, it would be easy to fix with a getter function with a descriptive name (e.g. getDegrees). Issue #2: Extensibility Let's say that in the future, you (as the library developer) want to add another representation for angles (e.g. gons/gradians). Sounds simple, right? And at that point, it is. Adding one more class according to the given scheme, four more converting constructors, and it is done. Someone familiar with the SOLID principles might already spot a code smell in the last sentence: four more converting constructors, two of which have to be added to the existing and otherwise rather independent classes Degrees and Radians, thus violating the Open-Closed part of SOLID. After that comes another user A of the library and wants to add his own custom angle representation RepA. And then comes user B with RepB. And suddenly, we're having twenty converting constructors just for those five classes. 
And each additional representation is going to add a lot more: For $N$ representations, we need $N \cdot (N - 1)$ converting constructors to cover all combinations. And that is assuming independent developers add converting constructors for each other's implementations. Otherwise, operator double will again lurk in the shadows, allowing for code to compile that really should not. class Gradians { public: Gradians(double); Gradians(const Degrees&); Gradians(const Radians&); operator double() const; // ... }; class Turns { public: Turns(double); Turns(const Degrees&); Turns(const Radians&); operator double() const; // ... }; Now stuff like auto a = Gradians(300.0); auto b = Turns(a); will actually compile, but produce wrong results (b == 300.0 instead of b == 0.75). How can we solve this conundrum? A first step would be to separate the value from its representation(s) by choosing one internal representation which can be converted on demand: class Angle { public: static Angle fromDegrees(double degrees); static Angle fromRadians(double radians); double radians() const; double degrees() const; private: Angle(double radians) : radians_{ radians } {} double radians_; }; Angle Angle::fromDegrees(double degrees) { auto radians = degrees * SoC::Maths::Trigonometry::DegToRad; return Angle{ radians }; } Angle Angle::fromRadians(double radians) { return Angle{ radians }; } double Angle::degrees() const { return radians_ * SoC::Maths::Trigonometry::RadToDeg; } double Angle::radians() const { return radians_; } As you can see, I chose radians for my internal representation (mostly because that's what the trigonometric functions of the standard library expect). For adding a new representation, we now only need to add one factory function (fromXyz(...)) and one getter function (xyz()). While this is a lot cleaner (and takes care of some issues), SOLID devotees will not fail to notice that the violation of the Open-Closed principle hasn't been fixed yet, just moved. 
To address this, we could introduce a hierarchy of derived classes, but that seems like overkill for this problem. Another easy solution would be to use templates: struct Degrees { static double toRadians(double degrees) { return degrees * SoC::Maths::Trigonometry::DegToRad; } static double fromRadians(double radians) { return radians * SoC::Maths::Trigonometry::RadToDeg; } }; struct Radians { static double toRadians(double radians) { return radians; } static double fromRadians(double radians) { return radians; } }; class Angle { public: template<typename Representation> static Angle from(double value) { return Angle{ Representation::toRadians(value) }; } template<typename Representation> double as() const { return Representation::fromRadians(radians_); } private: Angle(double radians) : radians_{ radians } {} double radians_; }; // Usage auto angle = Angle::from<Degrees>(180.0); auto sine = std::sin(angle.as<Radians>()); Of course, this is far from done yet: Operators for addition, subtraction (angles), multiplication and/or division (scalars) could be overloaded for this Angle class. For demonstration purposes I didn't mark the member functions above noexcept or constexpr. This should likely be amended. Helper functions like sin, cos, tan and similar could be provided for this Angle class. For the template version: The templates could be restricted to only accept types with correct signatures for fromRadians and toRadians. Implementation Aside from the design considerations mentioned above, I can add these points for the general implementation: Consider marking converting constructors and conversion operators as explicit. Very likely sizeof(Degree) == sizeof(double), so there probably won't be a benefit for taking a const Degree& parameter over just Degree. I'd suggest checking the precision of the constants DegToRad and RadToDeg, especially if calculated on your own. 
If the precision on these constants is poor, there might be small numeric errors that accumulate over multiple conversions to and fro. A comment reads /// Function Call Operator: Actually, no, this is a conversion operator. A function call operator would look like this: double operator()() const. Generally, the comments don't tell me much about anything. Unless there is a hard requirement for them (in which case they should be improved) I'd suggest removing them. In their current form, they are at best visual clutter, and confusing at worst.
{ "domain": "codereview.stackexchange", "id": 36191, "tags": "c++, unit-conversion, overloading" }
When an electron absorbs a photon doesn't that change its mass?
Question: When an electron absorbs a photon it leaps to a higher energy level, but what exactly happens when an electron absorbs a photon? By the mass-energy equivalence, doesn't that change the electron's mass and thus alter it? Answer: These days we treat the mass of a particle as invariant (rest mass) to avoid confusion, but as the electron gains kinetic energy, its energy goes from being $mc^2$ to $\gamma\, m c^2$ where $$\gamma =\frac{1}{ \sqrt{1-\frac{v^2}{c^2}}}$$ is a relativistic factor arising from the electron's speed $v$. If you tried to further accelerate the electron, you would find that it behaves as though its mass were $\gamma m \gt m $, but this is an artifact of applying Newtonian expectations to relativistic effects. Note that you can expand $E = \gamma m c^2$ as a series, and if $v \ll c$ then you only need to keep the first few terms: $$E = m c^2 + \frac{1}{2} m v^2 + \dots$$
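The last step is the binomial expansion of $\gamma$ in powers of $v^2/c^2$:

$$\gamma = \left(1 - \frac{v^2}{c^2}\right)^{-1/2} = 1 + \frac{1}{2}\frac{v^2}{c^2} + \frac{3}{8}\frac{v^4}{c^4} + \dots$$

$$E = \gamma m c^2 = m c^2 + \frac{1}{2} m v^2 + \frac{3}{8}\frac{m v^4}{c^2} + \dots$$

The first term is the rest energy and the second is the familiar Newtonian kinetic energy; the remaining terms are relativistic corrections suppressed by further powers of $v^2/c^2$.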
{ "domain": "physics.stackexchange", "id": 10745, "tags": "quantum-mechanics, electrons, mass-energy, absorption" }
SLAM Gmapping map to odom remapping error
Question: Hello, I am trying to perform slam_gmapping for my project, and using filtered odometry obtained using ekf. However, the tf_tree shows that map->odom (this is for slam_gmapping), and second side of the tree has map->/odometry/filtered (this is ekf odometry). I am attaching my file which classifies tfs, and the tf diagram I got while running the system. I have remapped odom->/odometry/filtered in the slam_gmapping section of the code, but seems like it is not being activated. Currently, no map is being received. Can anyone tell me how to fix it so I can map the environment using the filtered odometry from ekf for gmapping? This is the tf when the code runs first time: C:\fakepath\tf.png This is the tf I get after waiting few minutes and finding the tf: C:\fakepath\frames.png Code: <launch> <arg name="frame_id" default="/base_link" /> <arg name="rgb_topic" default="/zed/rgb/image_rect_color" /> <arg name="depth_topic" default="/zed/depth/depth_registered" /> <arg name="camera_info_topic" default="/zed/rgb/camera_info" /> <arg name="imu_topic" default="/imu/data" /> <arg name="imu_ignore_acc" default="true" /> <arg name="imu_remove_gravitational_acceleration" default="false" /> <arg name="wheelchair" default="/odom"/> <arg name="rtabmap_args" default=""/> <!--Wheelchair Launch--> <node pkg="roboteq_driver" type="driver_node" name="roboteq_driver" output="screen"> <!-- enable broadcast of odom tf --> <param name="pub_odom_tf" value="true" /> <!-- specify odom frame --> <param name="odom_frame" value="odom" /> <!-- specify base frame --> <param name="base_frame" value="wheelchair_base" /> <!-- specify cmd_vel topic --> <param name="cmdvel_topic" value="cmd_vel" /> <!-- specify port for roboteq controller --> <param name="port" value="/dev/ttyACM0" /> <!-- specify baud for roboteq controller --> <param name="baud" value="115200" /> <!-- specify whether to use open-loop motor speed control (as opposed to closed-loop)--> <param name="open-loop" value="false" /> 
<!-- specify robot wheel circumference in meters --> <param name="wheel_circumference" value="0.3429" /> <!-- specify robot track width in meters --> <param name="track_wdith" value="0.5715" /> <!-- specify pulse-per-revolution for motor encoders --> <param name="encoder_ppr" value="900" /> <!-- specify counts-per-revolution for motor encoders (ppr*4 for quadrature encoders) --> <param name="encoder_cpr" value="3600" /> </node> <!--Zed Camera Launch--> <group ns="zed"> <node name="zed_wrapper_node" pkg="zed_wrapper" type="zed_wrapper_node" /> </group> <!-- Depth_image to Laser_Scan --> <node pkg="depthimage_to_laserscan" type="depthimage_to_laserscan" name="depthimage_to_laserscan"> <param name="scan_height" value="100"/> <param name="output_frame_id" value="/base_frame"/> <param name="range_min" value="0.3"/> <param name="range_max" value="60"/> <remap from="image" to="/zed/depth/image_rect_color"/> <remap from="scan" to ="/scan"/> </node> <!-- Laser Scan Match --> <node pkg="laser_scan_matcher" type="laser_scan_matcher_node" name="laser_scan_matcher_node" output="screen"> <param name="fixed_frame" value = "odom"/> <param name="base_frame" value="base_link"/> <param name="use_alpha_beta" value="false"/> <param name="use_odom" value="false"/> <param name="use_imu" value="false"/> <param name="max_iterations" value="10"/> <param name="publish_pose" value="true"/> <param name="publish_tf" value="true"/> <param name="use_vel" value="false"/> </node> <!-- IMU: MPU6050 Launch --> <node pkg="mpu6050_serial_to_imu" type="mpu6050_serial_to_imu_node" name="mpu6050_serial_to_imu_node"> <param name="frame_id" value="imu_link"/> <param name="remove_gravitational_acceleration" type="bool" value="true"/> </node> <!-- Static Transform (TF) --> <!--<node pkg="tf" type="static_transform_publisher" name="wheelchair_base" args=" 0.0 0.0 0.0 0 0 0 /map /odometry/filtered 100" /> --> <node pkg="tf" type="static_transform_publisher" name="wheelchair" args=" -9.0 0.0 -38.0 0 0 0 /odom 
/wheelchair_base 100" /> <node pkg="tf" type="static_transform_publisher" name="wheelchair_camera" args=" 0.0 0.0 0.0 0 0 0 /wheelchair_base /camera_link 100" /> <node pkg="tf" type="static_transform_publisher" name="camera_imu" args=" 0.0 0.0 -38.0 0 0 0 /wheelchair_base /imu_link 100" /> <node pkg="tf" type="static_transform_publisher" name="base_link_to_camera_link_rgb" args=" 0 0 38.0 -1.5707963267948966 0 -1.5707963267948966 /camera_link /zed_initial_frame 100" /> <node pkg="tf" type="static_transform_publisher" name="zed_current" args=" 0 0 0 0 0 0 /zed_initial_frame /zed_current_frame 100" /> <node pkg="tf" type="static_transform_publisher" name="zed_left" args=" 0 0 0 0 0 0 zed_current_frame ZED_left_camera 100" /> <!-- Odometry fusion (EKF), refer to demo launch file in robot_localization for more info --> <node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization" clear_params="true" output="screen"> <param name="frequency" value="50"/> <param name="sensor_timeout" value="0.1"/> <param name="two_d_mode" value="false"/> <param name="odom_frame" value="/odometry/filtered"/> <param name="base_link_frame" value="wheelchair_base"/> <param name="world_frame" value="/odometry/filtered"/> <param name="transform_time_offset" value="0.0"/> <param name="odom0" value="/vo"/> <param name="imu0" value="$(arg imu_topic)"/> <param name="odom1" value="/odom"/> <param name="odom1_differential" value="false"/> <param name="odom1_relative" value="true"/> <param name="odom1_pose_rejection_threshold" value="2"/> <param name="odom1_twist_rejection_threshold" value="0.1"/> <!-- The order of the values is x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az. 
--> <rosparam param="odom0_config">[true, true, true, false, false, false, true, true, true, false, false, false, false, false, false]</rosparam> <rosparam param="odom1_config">[true, false, true, false, false, true, false, false, false, false, false, true, false, false, false]</rosparam> <rosparam if="$(arg imu_ignore_acc)" param="imu0_config">[ false, false, false, true, true, true, false, false, false, true, true, true, false, false, false] </rosparam> <rosparam unless="$(arg imu_ignore_acc)" param="imu0_config">[ false, false, false, true, true, true, false, false, false, true, true, true, true, true, true] </rosparam> <param name="odom0_differential" value="true"/> <param name="imu0_differential" value="false"/> <param name="odom0_relative" value="false"/> <param name="imu0_relative" value="false"/> <param name="imu0_remove_gravitational_acceleration" value="$(arg imu_remove_gravitational_acceleration)"/> <param name="print_diagnostics" value="true"/> <!-- ======== ADVANCED PARAMETERS ======== --> <param name="odom0_queue_size" value="50"/> <param name="imu0_queue_size" value="50"/> <param name="odom1_queue_size" value="50"/> <!-- The values are ordered as x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az. 
--> <rosparam param="process_noise_covariance">[0.005, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.005, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.006, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.003, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.003, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.006, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.0025, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.0025, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.004, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.001, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.001, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.002, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.001, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.001, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.0015]</rosparam> </node> <!-- Gmapping --> <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen"> <param name="map_udpate_interval" value="5.0"/> <param name="delta" value="0.02"/> <param name="base_frame" value="wheelchair_base"/> <param name="odom_frame" value="odom"/> <remap from="rgb/image" to="$(arg rgb_topic)"/> <remap from="depth/image" to="$(arg depth_topic)"/> <remap from="rgb/camera_info" to="$(arg camera_info_topic)"/> <remap from="odom" to="/odometry/filtered"/> </node> <!--RVIZ launch --> <node pkg="rviz" type="rviz" name="rviz" /> </launch> Thanks, Hdbot Originally posted by hdbot on ROS Answers with karma: 36 on 2019-01-15 Post score: 0 Original comments Comment by kazi ataul goni on 2019-01-16: Hi i am new in this field. can you please help me regarding my problem written bellow: In an industrial field, one robot will pick up the apples and sort them out. The robot will move fast. In that case, if any human is near to robot it should be slow down. For that purpose, I want to use Rplidar Comment by hdbot on 2019-01-16: @kazi ataul goni, you may want to post this question in the forum. This way lot more people will be able to help. 
Comment by kazi ataul goni on 2019-01-16: Sorry to post it here. I already did, but nobody replied, and I thought maybe you could help. I really don't know what I should do after I have done the SLAM. Should I do the image processing on the SLAM map, or is there another way to find the obstacles from SLAM? @hdbot Answer: I noticed a couple of things in your launch file: /odometry/filtered is a topic published by EKF Localisation and not a frame. Remapping it to /odom makes it available on the topic /odom, which can be used by laser_scan_matcher. You do not need these static transform publishers between map and /odometry/filtered. The transform map -> odom will be published by gmapping. I am not sure what your base_link is called, as you have also used it as the name of a static_transform_publisher and also as the name of your base_frame. You need to provide a transform between laser_frame -> base_link with a static transform publisher. In your case base_frame to base_link (or however you call it in your URDF file). You also need a transform between base_link and odom. If you want to use Laser Scan Matcher for this transform, change your base_frame to base_link. Hope this helps! Originally posted by curi_ROS with karma: 166 on 2019-01-16 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by hdbot on 2019-01-19: I did the modifications, but I am still getting no map received. The tf looks like it flows properly when I run it the first time, but when I run the tf again without any changes, the frames change. I am not sure why. Can you explain how I can fix that? I have updated the question with modifications.
{ "domain": "robotics.stackexchange", "id": 32284, "tags": "slam, navigation, ros-kinetic, ubuntu, gmapping" }
Translating SAT to HornSAT
Question: Is it possible to translate a boolean formula B into an equivalent conjunction of Horn clauses? The Wikipedia article about HornSAT seems to imply that it is, but I have not been able to chase down any reference. Note that I do not mean "in polynomial time", but rather "at all". Answer: No. Conjunctions of Horn clauses admit least Herbrand models, which disjunctions of positive literals don't. Cf. Lloyd, 1987, Foundations of Logic Programming. Least Herbrand models have the property that they are the intersection of all Herbrand models of the formula. The Herbrand models for $(a \lor b)$ are $\{\{a\}, \{b\}, \{a,b\}\}$, a set which doesn't contain its own intersection (the empty set), so as arnab says, $(a \lor b)$ is an example of a formula which can't be expressed as a conjunction of Horn clauses.
{ "domain": "cstheory.stackexchange", "id": 2974, "tags": "reference-request, lo.logic, sat" }
Is the Sun not the biggest source of energy to us?
Question: Probably in my 5th or 6th grade, I learnt in my science classes that the Sun is the biggest source of energy to us. However, I was watching this YouTube video according to which the energy of the Earth is constant: the energy we get from the Sun is equal to the energy radiated. So which is actually true? Please explain. Answer: Intro: One thing we assume, because so far it seems to be true, is that the energy of the universe is constant. In other words, no matter what you do you cannot eliminate energy, only convert energy that is in one form into another. Short Answer: So it is like food: all the food that goes in must come out. It may be in different forms (urine, feces, sweat, ...), or you will have to store it (get fat). If you're not gaining weight, it's because all the food you consume is processed and eventually expelled from the body. The same goes for energy. Long Answer: Imagine a cold winter, and you feel like turning the heat on. The energy that enters the heater as electricity comes out as heat, and heat is also energy. So if you count all the energy in all of these new forms it has been converted to, you will end up seeing that the energy is all there. All of it. Okay then... so energy doesn't disappear, but what happens if we get 100 units of energy per second from the Sun and only throw 30 units per second out into space? Since it does not go away, 70 units of energy per second will accumulate on Earth, and in no time we would have a ticking bomb amassing so much energy it would make our planet a living hell. And that does not seem to be happening. This all indicates that the net energy balance of the Earth is 0: what it gets from the Sun is used by the Earth and then radiated back out.
{ "domain": "physics.stackexchange", "id": 97426, "tags": "energy, thermal-radiation, estimation, sun" }
A small technical explanation of nuclear energy
Question: I don't know if this is the place to ask this kind of questions but I'm sure you guys can help me. I'm looking for a small (4 pages max.) technical explanation of nuclear energy, preferably of an academic source. It would be best if the physics and mathematics involved is understandable by last-year high school students. Can anybody help me out? Thanks, Arnoud Answer: For the academic source - someone directly involved with physics would just not be interested in this kind of text. But look for introductions for academic fields that are only remotely/lightly involved with nuclear physics, maybe medicine or biology. And: Not the right place for this I think.
{ "domain": "physics.stackexchange", "id": 13214, "tags": "nuclear-physics" }
SLAM system for LIDAR and Stereo camera for cone detection for Autonomous driving
Question: The purpose of the SLAM system is very specific: detecting cones in an image and triangulating their position to create a map. The data input would be the camera data, odometry and the LIDAR data. I have been going through SLAM algorithms on openSLAM.org and through other implementations of SLAM systems. I would like to know if there is a set of SLAM algorithms specific to the problem I have, and what the most efficient and least time-consuming SLAM algorithms available are. Any leads would be helpful. Answer: You should probably also add information regarding your sensors, for instance whether the LIDAR you are going to use is 2D or 3D, etc. The nature of the SLAM algorithm shall also depend upon what kind of system you need to run it on, specifically what rates you need, whether you need the SLAM to be online or offline, etc. Further, the machine you run your algorithms on may also be the reason which decides which algorithm you finally use. Regardless, I think these are some SLAM algorithms that may help you: Orb SLAM: http://webdiis.unizar.es/~raulmur/orbslam/ uses only camera information but performs very well, at least the RGB-D version. Google Cartographer: https://github.com/googlecartographer/cartographer needs no introduction. You should also check out gmapping (slam_gmapping). Definitely check out the LOAM algorithm; it uses only LIDAR data but produces very good quality maps. You can use an EKF to fuse the odometry that it provides with the one you have; that should give you a good positional estimate.
{ "domain": "robotics.stackexchange", "id": 1438, "tags": "slam, autonomous-car" }
How to distinguish those flowers in outskirts: hawkweeds (Hieracium), hawksbeard (Crepis) and hawkbits (Leontodon)?
Question: How do you distinguish these flowers of the forest outskirts: hawkweeds (Hieracium), hawksbeard (Crepis) and hawkbits (Leontodon)? I am not asking about concrete species, since that would be extremely hard given they have hundreds of forms, but what are the observable differences between the genera? I have read the wiki articles but am still not sure that I could distinguish these flowers in the field the right way. http://en.wikipedia.org/wiki/Leontodon http://en.wikipedia.org/wiki/Hieracium http://en.wikipedia.org/wiki/Crepis Answer: It's quite hard to answer this question and the best would be to follow a flora, which I'm not going to just copy information from and paste here. So the following are general observations, subject to numerous exceptions. Hieracium generally have lanceolate to oval leaves, not or little toothed, mostly hairy. Crepis and Leontodon generally have Taraxacum-like leaves, that is, dentate to lobed. Some Crepis have leafy stems (but some don't), while Leontodon always have their leaves concentrated at the base, forming a rosette. [edit on 2014, October 2] Be aware that there is another genus of the Asteraceae family which may be mistaken for the Crepis/Leontodon/Hieracium group, although it's easier to distinguish. It's the Hypochaeris genus. Hypochaeris has only a few species, and they are not as homogeneous as those within the other genera. Hypochaeris radicata (which may be the most common species in France and western Europe) has all its leaves basal, forming a rosette and sticking to the ground (never erect). These leaves are always lobed (neither entire, nor toothed), with erect hard hairs covering their upper faces. Hypochaeris maculata may be mistaken for some Hieracium: it has hairy, smooth, entire leaves which are (all or mostly?) basal, forming a rosette. [Addition on August the 5th, 2014] Also, Leontodon and Hypochaeris have hollow stems just below the capitulum(1). 
When you press the stem just below the capitulum, you fell it flattens under your finger, and if you use the nail to make an incision in the stem, you will see it is hollow like a pipe. NB : Taraxacum also have hollow stems but they are fully hollow (from the base up to the capitulum), while Leontodon just have the upper part of the stem which is hollow. Leontodon are almost always monocephal, that means, the stems are not branched : they have only one capitulum per stem, while Crepis and Hieracium and Hypochaeris mostly (if not always) have branched stems carrying several capitula. (1) the capitulum is the type of inflorescence in the Asteraceae family. It's also called "head" (and actually, "capitulum" is the latin word for head). You may like to have a look at this page [in french] presenting some common yellow and ligulate weeds from the Asteraceae family to get some illustrations of what I explained here.
{ "domain": "biology.stackexchange", "id": 1562, "tags": "botany, species-identification" }
What happens when yellow light is passed through a prism?
Question: What happens when secondary colors like yellow, green and magenta are passed through a prism? Will each split into its component colors (for example, yellow splitting into green and red)? If yes, why do we observe secondary colors in the visible spectrum obtained when white (composite) light is passed through a prism? Answer: "Yellow light" is an ill-defined concept. Our eyes can perceive as yellow both monochromatic light (at, say, 570 nm) and combinations of red and green light; depending on the mix of wavelengths and intensities, the human-perceived color can be indistinguishable in the two cases. Passing such light through a prism is, in fact, the best way to distinguish which of the two cases you have: monochromatic yellow light will just bend, whereas a mixture of different wavelengths will split at a prism. However, you can't tell which one will happen just by looking at the light.
{ "domain": "physics.stackexchange", "id": 37967, "tags": "optics" }
Fundamental quantities in physics
Question: I observed that the fundamental units like the meter, kilogram, ampere, kelvin and candela are all indirectly dependent on one single fundamental value, the second, and on each other. For example: Meter: 1 meter is the length that makes the speed of light in vacuum equal to 299792458 when expressed in $\rm m\,s^{-1}$, where the second is defined in terms of the ground-state hyperfine transition frequency of Cs-133. Here, the meter is indirectly dependent on the second and does not really stand out as a fundamental unit to me. Can anyone clarify how it is a fundamental quantity? Answer: As already pointed out by the quote in your question, the SI units are nowadays defined by fixing physical constants in order to avoid artifacts due to the reliance on real-world physical samples. For example, keeping a meter as, say, some rod that is "one meter long" is imprecise, as there are always measurement errors on the measurements of the rod, and the rod could change its shape with time (e.g. through corrosion, etc.). Now your question is what classifies the meter as a fundamental unit. The short answer is: nothing. As already demonstrated within your question, you could as well define velocity to be "fundamental" and derive length from the fundamental "velocity" unit and the "time" unit. This holds for all the SI units; the important thing is that you need a set of units by which you can express all other units. In terms of the current SI units, you can write the unit $[Q]$ of every physical quantity $Q$ in terms of the SI units (m, s, kg, A, K, mol, cd) $$[Q]=\text{m}^\alpha\ \text{s}^\beta\ \text{kg}^\gamma\ \text{A}^\delta\ \text{K}^\epsilon\ \text{mol}^\zeta\ \text{cd}^\eta$$ with $\alpha,\beta,\gamma,\delta,\epsilon,\zeta,\eta\in\mathbb{Z}$. It is now convention that we say we use length instead of velocity as a fundamental unit.
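The bookkeeping behind this exponent notation can be illustrated with a small sketch (the helper names here are made up for illustration): a unit is just a mapping from base-unit symbols to integer exponents, and "choosing a different fundamental set" is simply a change of basis in that exponent space.

```python
from collections import Counter

# A unit is a dict mapping base-unit symbols to integer exponents.
def mul(a, b):
    """Multiply two units by adding exponents."""
    c = Counter(a)
    c.update(b)
    return {u: p for u, p in c.items() if p != 0}

def inv(a):
    """Invert a unit by negating exponents."""
    return {u: -p for u, p in a.items()}

metre  = {'m': 1}
second = {'s': 1}

velocity = mul(metre, inv(second))   # m s^-1
print(velocity)                      # {'m': 1, 's': -1}

# Equally well, take (velocity, second) as the "fundamental" pair
# and recover the metre from them:
metre_again = mul(velocity, second)  # (m s^-1) * s = m
print(metre_again)                   # {'m': 1}
```

Nothing singles out the metre here: any set of exponent vectors that spans the space works as a basis.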
{ "domain": "physics.stackexchange", "id": 82681, "tags": "conventions, dimensional-analysis, si-units, metrology" }
Sudden break down of Kinect?
Question: I was using rviz and roboearth when suddenly the image view went black. The terminal of openni.launch showed a red sentence: "depth image frame id does not match rgb image frame id". I closed the terminal, opened a new one and ran roslaunch openni_launch openni.launch, but it said "No devices connected... waiting for devices to be connected". I rebooted the computer a few times and ran sudo apt-get install ros-electric-openni-kinect, but the problem persists. Up till this morning the Kinect had been running normally. I really have no idea what happened. Is it due to an auto-update? Anyone please help? Turtlebot laptop: Ubuntu 10.04; ROS Electric Workstation: Ubuntu 12.04; ROS Fuerte (I have another one in Ubuntu 10.04 and ROS Electric. I tried but the problem remains unsolved.) The Kinect shows a flashing green light. But when I type lsusb, I see only one device from Microsoft and one from Chronicity. Is it normal or not? Originally posted by Chik on ROS Answers with karma: 229 on 2013-04-09 Post score: 0 Original comments Comment by freadx on 2013-04-09: Remove USB and restart and connect back only when you do rviz. I had the same problem. Comment by Chik on 2013-04-10: Thank you very much. I tried that but it still says No devices connected. Answer: Solved. Power supply problem from the iRobot Create base. Originally posted by Chik with karma: 229 on 2013-04-11 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 13755, "tags": "ros, kinect, openni.launch" }
image_geometry project3d to pixel
Question: I'm getting multiple projections of 3d points to the same pixel which don't coincide with the 3D ray but are parallel with the image plane. This can be seen here: I'm unable to find an error in my code or in the re-projection function (CODE API). This is my code fragment in which I calculate it:

void calculateVoxels (void)
{
    Point2d uv;
    Point3d pt_cv;
    int width, height;
    double vox_number = 0;

    width = cam_model_.reducedResolution().width;
    height = cam_model_.reducedResolution().height;

    cloud_.width = WORLD_RESOLUTION;
    cloud_.height = 1;
    cloud_.is_dense = false;
    cloud_.points.resize (WORLD_RESOLUTION);

    ///Only calculate if the image and the camera info are received.
    if(image_rec_ & cam_info_rec_){
        double x, y, z;
        int a = 0;
        int b = 0;
        int c = 0;
        for(x = WORLD_BEGIN_X; x < WORLD_MAX_X; a++){
            b = 0;
            for(y = WORLD_BEGIN_Y; y < WORLD_MAX_Y; b++){
                c = 0;
                for(z = WORLD_BEGIN_Z; z < WORLD_MAX_Z; c++){
                    pt_cv = Point3d(x, y, z);
                    ///WARNING!: (u,v) in rectified pixel coordinates!
                    uv = cam_model_.project3dToPixel(pt_cv);
                    ///@todo : check this, might be inverse
                    ///Only if it falls on the image plane
                    if((uv.x < height) && (uv.y < width) && (uv.x >= 0) && (uv.y >= 0)){
                        if(current_image_.at<int>(uv) != 0){
                            world_[a][b][c] = 255; ///Start indexing from 0,0,0 in the array!
                            ///Also create the pointcloud
                            cloud_.points[vox_number].x = x;
                            cloud_.points[vox_number].y = y;
                            cloud_.points[vox_number].z = z;
                            vox_number ++;
                        }
                        else
                            world_[a][b][c] = 0;
                    }
                    z = z + WORLD_STEP_Z;
                }
                y = y + WORLD_STEP_Y;
            }
            x = x + WORLD_STEP_X;
        }
        calc_voxels_ = true;

        ///Trigger a update of the OpenGL world
        if(do_opengl_){
            glutPostRedisplay();
        }

        ///Save the pointcloud
        cloud_.width = vox_number; ///minimize the size of the pointcloud
        cloud_.points.resize (vox_number); ///maximal vox_number points written
        if(save_cloud_){
            pcl::io::savePCDFileASCII (PCL_PCD_SAVE, cloud_);
            ROS_INFO ("Saved %d data points to %s", (int)cloud_.points.size (), PCL_PCD_SAVE);
        }
        cloud_.header.frame_id = "world";
        cloud_.header.stamp = ros::Time::now ();
        pub_.publish(cloud_);
    }
}

Originally posted by KoenBuys on ROS Answers with karma: 2314 on 2011-02-24 Post score: 2 Answer: The cam_info_rec_ variable is only true if a correct camera_info callback happened. Furthermore all WORLD_x variables are defines and varying them gives the correct response (except for my problem). I verified the projection routine from image_geometry and found no bugs (except for the limited implementation (see my question on multiple camera set-up)). Originally posted by KoenBuys with karma: 2314 on 2011-03-01 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 4869, "tags": "ros, image-geometry" }
How to use the IMU raw data, rostopic echo imu
Question: Hello, I have one question regarding using the raw IMU data. I'm using a sensor package with a laser range finder, IMU and camera. My bag file consists of scans, IMU data and images. I would like to know how to use the IMU data to calculate the position and angular velocity of the robot. This is the output of rostopic echo /imu/data, for example:

header:
  seq: 24237
  stamp:
    secs: 1301628270
    nsecs: 972690349
  frame_id: /base_imu
orientation:
  x: -0.628151834011
  y: 0.0210457909852
  z: -0.0200814530253
  w: -0.777546823025
orientation_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
angular_velocity:
  x: -0.00680741108954
  y: -0.00184269389138
  z: -0.00491618132219
angular_velocity_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
linear_acceleration:
  x: -0.43050968647
  y: 0.00864356849343
  z: 9.79496383667
line.........

Any help?? Originally posted by Astronaut on ROS Answers with karma: 330 on 2012-11-27 Post score: 0 Answer: Take a look at the navigation stack. There is lots of information and tools to use to calculate robot position, generate maps, and navigate. Also take a look at the robot_pose_ekf which fuses IMU data, odometry, and video odometry to calculate position. In addition, viso2 is a good package to produce video odometry from your camera images. Originally posted by Kevin with karma: 2962 on 2012-11-28 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Astronaut on 2012-11-28: Sorry but I did not find in the navigation stack any info for IMU. Any help???
{ "domain": "robotics.stackexchange", "id": 11907, "tags": "imu, rostopic" }
Finding the energy eigenvalue for the second excited state of a quantum harmonic oscillator
Question: I am self-studying quantum mechanics and currently trying to find the energy eigenvalue for the second excited state of a quantum harmonic oscillator: $$\psi_2=N_2\left\{\frac{4x^2}{\alpha^2}-2\right\}e^{\frac{-x^2}{2\alpha^2}}$$ $$\alpha=\left(\frac{\hbar^2}{mk}\right)^\frac{1}{4}$$ $$ E\psi=-\frac{\hbar^2}{2m}\frac{d^2\mathit\psi}{dx^2}+\frac{kx^2}{2}\psi$$ The energies permitted by the boundary conditions are $$E_v=(v+\frac 12)\hbar\omega$$ It follows that the energy corresponding to the second excited state is $\frac 52\hbar\omega$. I know this; however, I would like to find this value by using the differential equation. I managed to do this for the ground state and first excited state ($v=0, v=1$, respectively), but I'm struggling with this one and can't find any existing help online; I can't see any glaring issues with my calculus or my algebra: the issue seems to arise when I try to factorise $\psi$ out after taking the second derivative. I don't think this should be as difficult as I'm finding it! I appreciate any help. Answer: You are getting overwhelmed by superfluous constants and signs. Note that $$ \partial^2((2x^2-1) e^{-x^2/2})\\ =\partial((4x-x(2x^2-1) ) e^{-x^2/2})\\ =(-6x^2+5+ x^2(2x^2-5))e^{-x^2/2}\\ =(-5+x^2)(2x^2-1)e^{-x^2/2}. $$ Note the first parenthesis is just $2V-2E_2$.
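For readers who want to check the algebra without grinding through the factorisation by hand, a computer-algebra sketch (using sympy, with symbols matching the question) confirms that applying the Hamiltonian to $\psi_2$ returns $\frac{5}{2}\hbar\omega$:

```python
import sympy as sp

x, hbar, m, k = sp.symbols('x hbar m k', positive=True)
alpha = (hbar**2 / (m * k))**sp.Rational(1, 4)
omega = sp.sqrt(k / m)

# The second excited state as given in the question (normalisation dropped)
psi2 = (4*x**2/alpha**2 - 2) * sp.exp(-x**2 / (2*alpha**2))

# Apply the Hamiltonian: -hbar^2/(2m) psi'' + (k x^2 / 2) psi
H_psi = -hbar**2/(2*m) * sp.diff(psi2, x, 2) + k*x**2/2 * psi2

# The eigenvalue is H psi / psi, which should simplify to (5/2) hbar omega
E2 = sp.simplify(H_psi / psi2)
print(sp.simplify(E2 - sp.Rational(5, 2)*hbar*omega))  # 0
```

The normalisation constant $N_2$ drops out of the ratio, which is why it can be omitted.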
{ "domain": "physics.stackexchange", "id": 96110, "tags": "quantum-mechanics, harmonic-oscillator" }
Can diopters be added for glasses? or is it more complicated?
Question: I am using glasses for myopia and in the past couple years I've been adding reading glasses for when I'm on the computer. The reading glasses add +0.75. I wear one pair on top of each other! it's not super comfortable but it works and looks funny as a bonus. I was thinking about getting a pair of computer glasses, but because of the pandemic I'm not very keen on the idea of having the optometrist breathing right in my face during an eye check. Since my prescription is L:-5.25 / R:-3.50, can I simply add the +0.75 directly and order glasses with -4.50 / -2.75? Answer: The focal lenght combination law of two thin lenses is addition of diopter minus quadratic reduction by distance $$\frac{1}{f} = \frac{1}{f_1}+\frac{1}{f_2}-\frac{d}{f_1 f_2}$$ For myopia with focal length of 0.4 m and distance eye-glas 0.02m for a correction to 10m eg $$ 1/(1/f_e - 1/f_g + 0.02/(f_e*f_g)) == 10 \to 1/g= 2.52632$$ Correct instead to 1m screen distance $$ 1/(1/f_e - 1/f_g + 0.02/(f_e*f_g)) == 1 \to 1/g= 1.57895$$ For laptops work its better to correct for 0.5m yielding 0.526316. So the absolute value of no great importance, reduce the diopter by something between 1 and 2. With 0.75 its even possible to view the TV at the wall.
{ "domain": "physics.stackexchange", "id": 98587, "tags": "optics, lenses" }
Boltzmann brain vs Actual universe probability actual calculations
Question: I have been reading a lot of physics lately about how the universe came into being. One idea is that the universe is long gone (thermal death) and might not exist the way I feel it, and that I am a sequence of Boltzmann brains fluctuating into existence over incredibly long amounts of time, which makes me experience my past and present, and the brains that will be quantum-fluctuated will be my future. The universe that we know fluctuating into existence is much less likely due to the size of the fluctuation needed. But then, to calculate the probability of the universe fluctuating once (and then evolving the way we all know) against many Boltzmann brains fluctuating to represent my lifetime, I have to think of the size of a brain. What sort of Boltzmann brain am I? I don't necessarily have to be a human brain; I could be software that fluctuates on a microchip, and every time I fluctuate I can be a different kind of brain. Has anyone done any sort of calculations that I could look at, and what size of fluctuation did they use to compare probabilities? I have read quite a few articles that claim that we are likely to be Boltzmann brains but couldn't find any actual math done; I would be surprised if there is no article with calculations on the web. Answer: There is a huge huge problem here. You could be a Boltzmann brain. But to calculate the probability that you are a Boltzmann brain we have to have a theory about the way the universe is and then quantify how many Boltzmann brains that universe has compared to how many real brains it has. But you want to compute a probability. The only way to compute a probability is to use some information. If you use the information available to your brain, then if you are a Boltzmann brain your information (the information you use to compute your probability) is factually wrong. As an extreme example: imagine a universe with just one Boltzmann brain that happens to be just like your brain is right now.
It might think it is in a universe with certain physical constants, a certain age, a certain number of forces and particles, et cetera. But it could be completely wrong about absolutely everything. So the probabilities it computes are just meaningless. And the poor Boltzmann brain doesn't have access to better information, so it is hopeless for the poor Boltzmann brain to do a better job. But you are quite right that if you have a model that predicts many more Boltzmann brains like you in it than real brains like you in it, then the agreement of the model with observations (information available to your brain) should not be taken as evidence that you are a real brain. However, if the same model predicts vastly more Boltzmann brains super different from you (including Boltzmann brains that have totally wrong ideas about the universe), then the fact that you are one of the brains with correct ideas should mean something. But each brain could try to imagine a universe where its ideas are correct. That is, in fact, what we do. We try to come up with models where the facts in our brains are correct. If we failed, that would be the true failure. We could try science and have it not work. And it would be a cause for concern if that happened. And we might worry in that case that we are a Boltzmann brain. Or we could work hard to be the first physicist to make sense of a reality that is real. It is just pessimism versus optimism. Not really probability theory. But isn't it science to calculate probability? Science involves making predictions and comparing them to observations in a way that is consistent with past observations. Good science does the predictions in a way that is related to how it explains past results. Probability comes from mathematics, not from science, and probability is just a big if-then. So you can learn something about an if by looking at the then.
To take the probability of entropy decrease needed to fluctuate one brain then to take probability of entropy decrease needed to fluctuate visible universe To compute a probability you need a sample space from which to draw. My point was where does the sample space come from. If the sample space comes from a brain, then Boltzmann brains with totally wrong opinions will draw up their own sample spaces and compute their own, wrong, probabilities based on the totally wrong sample spaces. You could have a universe with five forces that has a Boltzmann brain with your exact opinions including you thinking there are four forces. And your brain would have an opinion about what the visible universe looks like and it could be very very very wrong. and compare the two - to see how many BB one can fluctuate at the cost of one universe. I understand that it will be inaccurate by a huge factor still why not do it. If you imagine that your brain's opinion are correct. And then you compute that if they are correct that the universe is full of many Boltzmann brains then you might reasonably wonder whether you are, in fact, a Boltzmann brain. However even if you computed how many Boltzmann brains there are like you given that there is one real brain like you that correctly sees the universe and is full of true facts. It doesn't really matter how big or small that ratio is. Because you have no idea the probability that there is a real brain like yours given that you are a Boltzmann brain. There are possible universes with Boltzmann brains just like yours where there are zero real brains like yours throughout the entire space and time of that universe. So your opinions don't tell you the way the universe is. If you tried to make sense of your opinions and failed, that is a good sign you are a Boltzmann brain. But it could be a sign that more work is needed. 
And when I say Boltzmann brain I don't mean the run of the mill fluctuation of protons and neutrons and electrons that produce an actual physical brain that momentarily look like your brain does. I mean any self interacting system that makes perceptions of its own state the way your brain does. Any thing that feels like your brain does and has opinions like your brain does. After all, you wanted to include simulations. So that includes arrangements of weird stuff satisfying totally different laws. After all you can imagine a universe that follows our laws but fluctuates a brain that has a memory consistent with a totally Newtonian universe. So your opinions could be totally wrong opinions and a fluctuation in a vastly more complicated universe. Your brain might not even be large enough to conceptualize even the most superficial description of the actual universe and so your computations of the probabilities of generating a brain like yours in a universe like that might be impossible. But even if it were possible there is no way to assign a sample space for having these different kinds of real universes.
{ "domain": "physics.stackexchange", "id": 24739, "tags": "quantum-mechanics, thermodynamics, cosmology, universe" }
What are the factors that affect the corrosiveness of an acid?
Question: Acids are usually corrosive, but what determines this? Is it the concentration of the acid or the strength of the acid (its $K_a$ value)? Answer: Corrosiveness of an acid is a rather complex concept; of course the concentration of the acid and the strength of the acid are two main factors that determine the success of the corrosion reaction. For example, this is the Pourbaix diagram of iron. Both concentration and $K_a$ determine the pH of the solution, so if this is higher than about 9 the metal is passivated and you can't have any corrosion. I add to these another important factor: temperature. In some cases the corrosiveness is due to a more complex interaction that is directly linked to the material you are dealing with. One example is aqua regia: in this case the strong corrosiveness is due to the coupled effect of two different acids that are able to de-passivate the metal, allowing an enhanced corrosive effect on noble metals. However, aqua regia is not the best choice for an organic material; in this case piranha solution has a greater corrosive effect.
{ "domain": "chemistry.stackexchange", "id": 1183, "tags": "acid-base, ph" }
Is chloronitrobenzene and nitrochlorobenzene same?
Question: Is it 2-chloronitrobenzene or 2-nitrochlorobenzene? If we take nitro group as the substituent, it will be 2-chloronitrobenzene. But if we take chloro group as the substituent, it is 2-nitrochlorobenzene. What if we start numbering the carbon atom from the carbon to which the nitro group is attached, it should be o-chloronitrobenzene. But no, carbon atom numbering starts from the group to which chloro group is attached. Is it the fact that chlorobenzene forms ortho-para substituted product and it is named accordingly or according to some IUPAC rules? Answer: In the current version of Nomenclature of Organic Chemistry – IUPAC Recommendations and Preferred Names 2013 (Blue Book), the order of citation of substituent prefixes and the numbering of the corresponding locants are laid down in two different rules. Simple substituent groups that are named by means of prefixes (such as ‘nitro’ and ‘chloro’) are arranged alphabetically. P-14.5 ALPHANUMERICAL ORDER Alphanumerical order has been commonly called ‘alphabetical order’. As these ordering principles do involve ordering both letters and numbers, in a strict sense, it is best called ‘alphanumerical order’ in order to convey the message that both letters and numbers are involved Alphanumerical order is used to establish the order of citation of detachable substituent prefixes (not the detachable saturation prefixes, hydro and dehydro), and the numbering of a chain, ring, or ring system when a choice is possible. (…) P-14.5.1 Simple prefixes (i.e., those describing atoms and unsubstituted substituents) are arranged alphabetically; multiplicative prefixes, if necessary, are then inserted and do not alter the alphabetical order already established. Therefore, the correct alphanumerical order for the compound given in the question is x-chloro-y-nitrobenzene (not x-nitro-y-chlorobenzene). The locants x and y are used to indicate positions of the parent structure at which modifications represented by suffixes occur. 
The locant ‘1’ is omitted in monosubstituted homogeneous monocyclic rings (e.g. chlorobenzene or nitrobenzene); but in substitutive names of disubstituted benzene, numerical locants have to be used to distinguish the 1,2-, 1,3-, and 1,4-isomers. Note that the letters o, m, and p have been used in place of ortho, meta, and para, respectively, to designate the 1,2-, 1,3-, and 1,4-isomers of disubstituted benzene (e.g. o-chloronitrobenzene). According to current IUPAC recommendations, this usage is strongly discouraged and it is not used in preferred IUPAC names. The relevant rules concerning the numbering of locants for substituent prefixes are: P-14.3.5 Lowest set of locants The lowest set of locants is defined as the set that, when compared term by term with other locant sets, each cited in order of increasing value, has the lowest term at the first point of difference; (…) and P-14.4 NUMBERING When several structural features appear in cyclic and acyclic compounds, low locants are assigned to them in the following decreasing order of seniority: (…) (f) detachable alphabetized prefixes, all considered together in a series of increasing numerical order; (g) lowest locants for the substituent cited first as a prefix in the name; (…) Note that Rule (f) takes precedence over Rule (g). In accordance with Rule (f), the compound given in the question could be named as 1-chloro-2-nitrobenzene as well as 2-chloro-1-nitrobenzene since both names correspond to the locant set ‘1,2’. However, according to Rule (g), this example is named as 1-chloro-2-nitrobenzene rather than 2-chloro-1-nitrobenzene since chloro is cited first as a prefix in the name.
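Rules (f) and (g) can be mimicked with a toy script (purely illustrative, not real IUPAC software): enumerate every way of numbering the six ring positions, then pick the numbering with the lowest locant set, breaking ties by the locants of the first-cited (alphabetically first) prefix.

```python
def best_numbering(ring_positions):
    """ring_positions: dict prefix -> fixed ring index 0..5.
    Returns the locants chosen by Rule (f), then Rule (g)."""
    candidates = []
    for start in range(6):           # which ring atom gets locant 1
        for direction in (1, -1):    # number clockwise or anticlockwise
            locants = {p: ((i - start) * direction) % 6 + 1
                       for p, i in ring_positions.items()}
            locant_set = tuple(sorted(locants.values()))              # Rule (f)
            first_cited = tuple(locants[p] for p in sorted(locants))  # Rule (g)
            candidates.append((locant_set, first_cited, locants))
    return min(candidates, key=lambda c: (c[0], c[1]))[2]

# chloro and nitro on adjacent ring atoms (the ortho isomer)
print(best_numbering({'chloro': 0, 'nitro': 1}))
# {'chloro': 1, 'nitro': 2}  ->  1-chloro-2-nitrobenzene
```

Both candidate numberings with locant set (1,2) survive Rule (f); Rule (g) then prefers the one that gives chloro (cited first) the locant 1, exactly as in the answer's example.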
{ "domain": "chemistry.stackexchange", "id": 4988, "tags": "organic-chemistry, nomenclature, aromatic-compounds" }
Why does aluminium-on-glass mirror work without distortion?
Question: I have read an article about a glass (Zerodur) with a low thermal expansion coefficient. It is mentioned that large casts of such glass are covered with a reflective layer of aluminium and used as mirrors in space observatories. A low CTE is so important in this glass because changes in its size would distort the picture taken with the telescope. But what about said layer of aluminium? It is a metal and its CTE is much larger than that of the glass. So how does it happen that the thin layer of aluminium doesn't distort the picture? Doesn't it expand/shrink? Answer: Let us take the example of the Hubble primary mirror. It has a diameter of 2.4 m and a mass of 828 kg. It is actually made in a sandwich structure (glass-honeycomb-glass) making it about 30 cm thick (for stiffness) but light. The mirror is coated with an aluminum coating of thickness t = 65 nm, with a 25 nm MgF2 protective coating on top. The coefficient of thermal expansion of aluminum is $2.2 \cdot 10^{-5} \mathrm{m/m\cdot K}$ and the Young's modulus is 69 GPa.
If you constrain the coating to be of constant size, then you increase the strain by $2.2 \cdot 10^{-5}$ per °C, and for a square sheet of aluminum with sides $L$ and thickness $t$, the force this creates would be $$F = \sigma E A = 2.2\cdot 10^{-5} \cdot 69 \cdot 10^9 \cdot 2.4 \cdot 10^{-9} \approx 0.4 N $$ A force of 0.4 N across a 30 cm thick mirror gives a bending moment of about 0.06 Nm; the radius of curvature can be approximated by $$R = \frac{EI}{M}$$ Again, we will approximate the mirror as a square with 1 D deflection, in which case the second moment of area is given by approximately (t = thickness of glass = 3.5 cm, s = spacing = 25 cm) $$I = \frac{Lt}{2 s^2} \approx 0.001 m^4$$ Thus we find for the (additional) curvature of the mirror: $$R = \frac{70\cdot 10^9 \cdot 0.001}{0.4}=3\cdot 10^9 m$$ This results in a deflection across the 2.4 m diameter of $$\Delta h = \frac{r^2}{2R} = \frac{1.44}{2\cdot 3\cdot 10^9} \approx 0.3 \mathrm{nm}$$ For comparison it is worth noting that the specification of the mirror calls for the shape to be accurate to within 10 nm, and the error that led to the initial blurry images was on the order of 2 µm. In other words, while the coating might have an effect, it is quite a bit smaller than the specification of the mirror. And there is another thing - the mirror has a heater on it which keeps the temperature constant within half a degree (to maintain shape). Originally this temperature was 21 degrees, but that makes it work less well in the near IR. Later the temperature was dialed down to 15°C (source - as above). So yes - thermal expansion of coatings can affect the shape of the mirror; but no, it is not significant.
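As a sanity check on the order of magnitude, one can redo the first step of the estimate per kelvin of temperature change. This is a rough sketch using the answer's material numbers (aluminium E = 69 GPa, CTE mismatch of about 2.2e-5 per K against near-zero-CTE glass, film thickness 65 nm), ignoring Poisson and biaxial effects:

```python
# Thermal mismatch stress in a thin aluminium film bonded to a
# near-zero-CTE substrate, and the resulting membrane force per
# unit width of mirror edge, per kelvin of temperature change.
E_f   = 69e9    # Young's modulus of aluminium, Pa
alpha = 2.2e-5  # CTE mismatch film vs. substrate, 1/K
t_f   = 65e-9   # film thickness, m
dT    = 1.0     # temperature change, K

stress = E_f * alpha * dT   # ~1.5 MPa of in-plane stress
line_force = stress * t_f   # ~0.1 N per metre of edge
print(f"stress = {stress:.3g} Pa, line force = {line_force:.3g} N/m")
```

A tenth of a newton per metre acting on a 30 cm thick sandwich structure is tiny, which is consistent with the answer's conclusion that the coating's contribution is far below the 10 nm figure specification.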
{ "domain": "physics.stackexchange", "id": 21658, "tags": "thermodynamics, material-science, metals, optical-materials" }
Converting code for javax.xml.soap.* to webServiceTemplate
Question: I am able to send requests to the web service using javax.xml.soap.*. I would like to convert the code to use webServiceTemplate. I am struggling with creating request and result objects. (The sample I've found is related to XML, not SOAP.) I am also wondering whether there are advantages to using webServiceTemplate over javax.xml.soap. If there are not, am I doing it correctly? Given that I need to connect to 20 web services. The only service it has is findEvents, as follows:

<soapenv:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                  xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:soap="http://ticketmaster.productserve.com/v2/soap.php"
                  xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/">
  <soapenv:Header/>
  <soapenv:Body>
    <soap:findEvents soapenv:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
      <request xsi:type="soap:Request">
        <!--You may enter the following 7 items in any order-->
        <apiKey xsi:type="xsd:string">?</apiKey>
        <country xsi:type="xsd:string">?</country>
        <resultsPerPage xsi:type="xsd:int">?</resultsPerPage>
        <currentPage xsi:type="xsd:int">?</currentPage>
        <sort xsi:type="soap:Request_Sort">
          <!--You may enter the following 2 items in any order-->
          <field xsi:type="xsd:string">?</field>
          <order xsi:type="xsd:string">?</order>
        </sort>
        <filters xsi:type="soap:ArrayOfRequest_Filter" soapenc:arrayType="soap:Request_Filter[]"/>
        <updatedSince xsi:type="xsd:string">?</updatedSince>
      </request>
    </soap:findEvents>
  </soapenv:Body>
</soapenv:Envelope>

My code is as follows:

try {
    SOAPConnectionFactory soapConnectionFactory = SOAPConnectionFactory.newInstance();
    SOAPConnection connection = soapConnectionFactory.createConnection();
    MessageFactory factory = MessageFactory.newInstance();
    SOAPMessage message = factory.createMessage();
    SOAPHeader header = message.getSOAPHeader();
    header.detachNode();
    SOAPBody body = message.getSOAPBody();
    SOAPFactory soapFactory = SOAPFactory.newInstance();
    Name bodyName;
    bodyName = soapFactory.createName("findEvents", "xsd", "http://ticketmaster.productserve.com/v2/soap.php");
    SOAPBodyElement getList = body.addBodyElement(bodyName);
    Name childName = soapFactory.createName("findEvents");
    SOAPElement eventRequest = getList.addChildElement(childName);
    childName = soapFactory.createName("apiKey");
    SOAPElement apiKey = eventRequest.addChildElement(childName);
    apiKey.addTextNode("MYAPI");
    childName = soapFactory.createName("country");
    SOAPElement cid = eventRequest.addChildElement(childName);
    cid.addTextNode("UK");
    message.writeTo(System.out); //show message details
    URL endpoint = new URL("http://ticketmaster.productserve.com/v2/soap.php");
    SOAPMessage response = connection.call(message, endpoint);
    connection.close();
    //SOAPBody soapBody = response.getSOAPBody();
    SOAPMessage sm = response;
    System.out.println("Response:");
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    sm.writeTo(out);
    String validSoap = "<?xml version=\"1.0\"?> " + out.toString();
    System.out.println("It is ValidSoap: " + validSoap); //ValidSoap message
    SAXBuilder builder = new SAXBuilder();
    Reader in = new StringReader(validSoap); //reading character stream
    Document doc = null; //empty jDom document is instantiated
    doc = builder.build(in); //build the jDom document
    Element root = doc.getRootElement(); //Envelope
    List allChildren = root.getChildren(); //list of all its child elements
    System.out.println("Root is:" + ((Element) allChildren.get(0)).getName());
    listChildren(root);
} catch (Exception ex) {
    ex.printStackTrace();
}

New Code

webServiceTemplate.sendSourceAndReceiveToResult("http://ticketmaster.productserve.com/v2/soap.php", source, result);

@XmlRootElement
public class FindEvents {
    @XmlElement
    Request request;

    public Request getRequest() { return request; }
    public void setRequest(Request request) { this.request = request; }
}

@XmlSeeAlso(SortTicket.class)
public class Request {
    @XmlElement
    String apiKey;
    @XmlElement
    String country;
    @XmlElement
    int resultsPerPage;
    @XmlElement
    int currentPage;
    @XmlElement(name = "Sort")
    SortTicket sort;
    @XmlElement
    String[] filters;
    @XmlElement
    String updatedSince;

    public String getApiKey() { return apiKey; }
    public void setApiKey(String apiKey) { this.apiKey = apiKey; }
    public String getCountry() { return country; }
    public void setCountry(String country) { this.country = country; }
    public int getResultsPerPage() { return resultsPerPage; }
    public void setResultsPerPage(int resultsPerPage) { this.resultsPerPage = resultsPerPage; }
    public int getCurrentPage() { return currentPage; }
    public void setCurrentPage(int currentPage) { this.currentPage = currentPage; }
    public SortTicket getSort() { return sort; }
    public void setSort(SortTicket sort) { this.sort = sort; }
    public String[] getFilters() { return filters; }
    public void setFilters(String[] filters) { this.filters = filters; }
    public String getUpdatedSince() { return updatedSince; }
    public void setUpdatedSince(String updatedSince) { this.updatedSince = updatedSince; }
}

public class SortTicket {
    @XmlElement
    String field;
    @XmlElement
    String order;

    public String getField() { return field; }
    public void setField(String field) { this.field = field; }
    public String getOrder() { return order; }
    public void setOrder(String order) { this.order = order; }
}

Answer: Since you have generated DTO classes with JAXB annotations, you can create a marshaller and unmarshaller, create objects of the DTO classes (SortTicket, Request, FindEvents) and send the objects directly instead of using the XML request: webServiceTemplate.marshalSendAndReceive(findEvents); Something like this you'll have to configure.
Create a marshaller <oxm:jaxb2-marshaller id="marshaller" contextPath="com.yourcontextpath" /> create a web service template <bean id="webServiceTemplate" class="org.springframework.ws.client.core.WebServiceTemplate"> <property name="marshaller" ref="marshaller" /> <property name="unmarshaller" ref="marshaller" /> <property name="defaultUri" value="http://ticketmaster.productserve.com/v2/soap.php" /> </bean> and in some class's method where you want to send the SOAP request, inject webServiceTemplate using @Autowired @Autowired private WebServiceTemplate webServiceTemplate; public void sendSampleSoapRequest() { SortTicket sortTicket=new SortTicket(); // set its values Request request=new Request(); //set its values request.setSort(sortTicket); FindEvents findEvents=new FindEvents(); findEvents.setRequest(request); Object response=webServiceTemplate.marshalSendAndReceive(findEvents); } marshalSendAndReceive uses the JAXB marshaller to convert your objects (marked with JAXB annotations) to XML. So above, your findEvents object will be converted to its XML form. The advantage of using this is that you get rid of creating XML elements manually. Regarding your second point, the advantages of using webServiceTemplate over javax.xml.soap: you don't have to create those SOAPElements manually; you just create an object and send it, instead of a lot of code for manually handling it. Since you'll have to connect to 20 different web services it will be much easier for you to create DTO objects and send them directly. You may need to modify my samples above a little. 
Remove the default URI <bean id="webServiceTemplate" class="org.springframework.ws.client.core.WebServiceTemplate"> <property name="marshaller" ref="marshaller" /> <property name="unmarshaller" ref="marshaller" /> </bean> and while sending the request, give the URI Object response=webServiceTemplate.marshalSendAndReceive(uri,object); For sending it to multiple servers Object response1=webServiceTemplate.marshalSendAndReceive(uri1,object); Object response2=webServiceTemplate.marshalSendAndReceive(uri2,object); uri1 and uri2 can be different SOAP services, and if you don't have the WSDL you can send XML with this method sendSourceAndReceiveToResult(uri1,source, result); sendSourceAndReceiveToResult(uri2,source, result); Sending a URI in the send method overrides the default URI. For example check this also check the api doc
{ "domain": "codereview.stackexchange", "id": 14673, "tags": "java, web-services, spring" }
Transcription factor binding site located in intron
Question: I have noticed that some TF binding sites are located in the introns of genes. I am puzzled about whether the TF binds to DNA only in the initiation stage of transcription and detaches during transcription. (I am thinking that if the TF binds to the sense strand, it will block Pol II during transcription, so it should be removed.) Many thanks in advance. Answer: As Armatus said, a TF can remain bound without an effect. There are some alternative explanations: Promoters need not always be upstream of the Transcription Start Site (TSS). There are promoter elements called Downstream Promoter Elements that are actually downstream of the TSS. There can be alternate TSSs within the introns. A TF bound to an intron may regulate elongation or splicing rather than initiation. [Elongation can also be regulated, especially when a gene is poised for expression. RNA polymerase stalling is a well-known phenomenon and is affected by epigenetic marks. Since splicing happens co-transcriptionally, there can be DNA marks that affect the process. Although it has not been shown whether such a thing happens, intron-exon boundaries are known to have a distinct nucleosomal pattern.{1, 2, 3}]
{ "domain": "biology.stackexchange", "id": 1091, "tags": "dna, genetics, transcription, introns" }
Why are sidebands generated in AM and FM?
Question: When the signal is modulated onto the carrier in the electromagnetic spectrum, that signal occupies the small portion of the spectrum surrounding the carrier frequency. It also causes sidebands to be generated at frequencies above and below the carrier frequency. But how and why are those sidebands generated in AM and FM, and why are so many sidebands generated in FM while just two are generated in AM? Please provide a practical example, as I already know how they are generated mathematically. What I know is that, in the time domain, when the original signal is put onto the carrier signal, it is actually multiplied with the carrier signal, which means that in the frequency domain the original signal is convolved with the carrier signal. Those two sidebands in AM are actually the Fourier transform of the carrier signal. Is this correct? Answer: Carrying information requires bandwidth. For a given S/N ratio, modulating a signal to carry more information will thus expand its bandwidth. Call the additional bandwidth "side bands". If you don't add side bands to a fixed frequency carrier, you can't expand its bandwidth, and thus you can't transmit any information (other than the presence of a constant carrier). For AM: AM is not PM (phase modulation). Any additional bandwidth (as required to carry information in the modulating signal) on one side of the carrier will usually have a different phase (change of phase with respect to time from any reference point) from the carrier. To neutralize this phase difference, AM modulation has to add some additional matching bandwidth on the opposite side of the carrier to carry a signal that will exactly cancel any phase shift of the spectrum on the first side, so that AM doesn't become PM. With FM, modulating a carrier changes the signal frequency to new frequencies. You can also call those additional new frequencies so generated "side bands".
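To make the two AM sidebands concrete, here is a quick numerical sketch (a naive pure-Python DFT; the carrier frequency, message frequency and modulation index are made up for illustration, not taken from the question). The AM signal (1 + m*cos(wm*t)) * cos(wc*t) expands to a carrier plus exactly two sidebands at fc - fm and fc + fm:

```python
import cmath
import math

def dft_mag(x):
    """Naive O(N^2) DFT magnitude, normalized by N; fine for a short demo."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) / N
            for k in range(N)]

N = 64           # samples per record
fc, fm = 16, 2   # carrier and message frequencies, in DFT bins (exact bins, no leakage)
m = 0.5          # modulation index

# AM signal: (1 + m*cos(2*pi*fm*n/N)) * cos(2*pi*fc*n/N)
x = [(1 + m * math.cos(2 * math.pi * fm * n / N)) *
     math.cos(2 * math.pi * fc * n / N) for n in range(N)]

mags = dft_mag(x)
# Energy sits only at the carrier (bin 16) and the two sidebands (bins 14 and 18).
peaks = sorted(k for k in range(N // 2) if mags[k] > 0.05)
print(peaks)   # -> [14, 16, 18]
```

Adding more message tones would show one sideband pair per tone; an FM signal run through the same DFT would instead show a whole family of Bessel-weighted sidebands.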
{ "domain": "dsp.stackexchange", "id": 2944, "tags": "frequency, frequency-spectrum, modulation" }
Fetching tweets
Question: I have written this (truncated) code to fetch some tweets: dispatch_async(dispatch_get_global_queue(0, 0), ^{ [[UIApplication sharedApplication] setNetworkActivityIndicatorVisible:YES]; NSString *JSONStr = [NSString stringWithContentsOfURL:[NSURL URLWithString:@"http://search.twitter.com/search.json?q=haak&lang=nl&rpp=100"] encoding:NSUTF8StringEncoding error:nil]; if (!JSONStr) { [[UIApplication sharedApplication] setNetworkActivityIndicatorVisible:NO]; return; } /*... PARSING ETC ...*/ dispatch_sync(dispatch_get_main_queue(), ^{ [delegate didReceiveTweets:foundTweets]; }); [[UIApplication sharedApplication] setNetworkActivityIndicatorVisible:NO]; }); Note the lines from dispatch_sync(dispatch_get_main_queue(), ^{ to });. This will update the UI. Is this a good way to do it, or are there better ways than using dispatch_sync within a dispatch_async? Or should I not do this at all? Should I also send setNetworkActivityIndicatorVisible: from within the main thread? The reason I'm not using NSURLConnection is that this code comes from a class method, so I would need to create an object containing the delegate for the NSURLConnection, which seems overkill to me. Answer: You don't necessarily have to call -setNetworkActivityIndicatorVisible: from the main thread. I didn't find anything in the documentation about UIApplication not being thread safe, and since your application's main thread doesn't have anything in common with the system, you are free to call it whenever you like. dispatch_sync: that's fine. You could also call dispatch_async, since I'm not really sure you would want to display the network indicator while the feeds are actually being set on the UI - instead you probably only want the downloading to be indicated. I would probably go for dispatch_async. But to answer your question, that piece of code is perfectly fine (with the minor addition that maybe you should really use only one exit point from your method...) Hope this helps.
{ "domain": "codereview.stackexchange", "id": 2098, "tags": "multithreading, objective-c, twitter" }
How does a train, airplane measure its speed?
Question: I have always wondered about this. I know that a bike measures its speed based on the motion of its front wheel. So what is the case with a train? Is it the same principle? And what about an airplane? Is it by radar? Answer: Trains simply use wheel rotation, either via an eddy current disc as in a car speedo or via a digital counter on a shaft in a modern system. Aeroplanes don't really care about their absolute (ground) speed; they only care about the speed relative to the surrounding air - this is what determines if the wings work. They measure this with a pitot tube, essentially a form of air pressure measurement.
{ "domain": "physics.stackexchange", "id": 6152, "tags": "measurements, kinematics, speed, relative-motion" }
How could I use "ROS" commands in a bash file?
Question: Hey! First, I'm sorry for my English; I don't use a translator and I'm not really good at it. Let me explain my problem: I want to use "PuTTY" via the Windows terminal (to connect to Linux) with the "-m" argument and a .txt file. This file will be read by PuTTY, which executes the commands line by line. But there's a problem: every Linux command works, but for "catkin_make" or "rosrun" it writes, for example: bash: line 2: catkin_make: command not found However, I can use them manually via PuTTY. How can I use ROS commands in a bash file? Thanks. Originally posted by Fjara on ROS Answers with karma: 3 on 2018-05-23 Post score: 0 Answer: This is an environment setup issue (i.e. the environment is not correctly set up). Passing absolute paths to commands will work around some problems, but not all (i.e. rospkg not being able to find ROS packages, and others). Originally posted by gvdhoorn with karma: 86574 on 2018-05-24 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Fjara on 2018-05-25: @gvdhoorn so what do you recommend? It's my mate who installed all the things, and I need to wait for him to make some tests (he's absent atm). I don't know which commands we need (catkin_make, rosrun, but I don't know the others). I'll be back with more information. Thanks for your answer Comment by gvdhoorn on 2018-05-25: You have a few options, but I don't know how Putty treats each of the lines in your -m file.txt: if they are all executed in individual SSH sessions, then you'll need to source the correct setup.bash each time before you run a new command. If it's all a single session, source once. Comment by Fjara on 2018-05-25: Well, isn't it enough if we source it in the .bashrc? I will try to get some info about the -m then! :) Comment by gvdhoorn on 2018-05-25: "Well, isn't it enough if we source it in the .bashrc?" It can be, but regular SSH logins do not source $HOME/.bashrc. That is not a ROS problem btw, just how SSH/bash work. 
If you solve/work-around that, your initial approach might work. Comment by Fjara on 2018-05-25: Thanks for your answers! I'll make some tests and I'll come back. Comment by Fjara on 2018-05-25: Hey, so with the source: /opt/ros/kinetic/bin/catkin_make Traceback (most recent call last): File "/opt/ros/kinetic/bin/catkin_make", line 12, in <module> from catkin.init_workspace import init_workspace ImportError: No module named catkin.init_workspace Comment by Fjara on 2018-05-25: I used: #!/bin/bash cd catkin_ws /opt/ros/kinetic/bin/catkin_make 2>> erreurs.log Comment by gvdhoorn on 2018-05-25: I'm not sure I understand what you're trying to do exactly. In a nutshell, I believe you need to: source /opt/ros/kinetic/setup.bash, then run your commands; you cannot source catkin_make. Comment by Fjara on 2018-05-25: Like that catkin_make works, but as you said earlier, rosrun: [rospack] Error: stack/package hello_world_demo not found Comment by gvdhoorn on 2018-05-25: Well, if you have pkgs in a workspace then of course you should source /path/to/your/catkin_ws/devel/setup.bash instead. Comment by Fjara on 2018-05-25: I did: #!/bin/bash cd Ben_catkin_ws source /opt/ros/hydro/setup.bash 2> erreurs.log source /devel/setup.bash rosrun hello_world_demo youBot_HelloWorldDemo 2>> erreurs.log Didn't work, even if I delete the hydro source. Comment by Fjara on 2018-05-25: (ah and yes, I use kinetic for testing, and hydro on the robot) Comment by gvdhoorn on 2018-05-25: This doesn't match: cd Ben_catkin_ws ... source /devel/setup.bash Either your devel space is in the root (/) of the drive, or it's in Ben_catkin_ws. It cannot be both. Also: I'd advise you to use absolute paths with source. Comment by Fjara on 2018-05-28: Should I use ben_catkin_ws devel? Comment by gvdhoorn on 2018-05-28: If that is the workspace containing your packages, then that would make sense, yes. Please take note of the capitalisation though. 
You now write ben_catkin_ws, but in your other comments it's Ben_catkin_ws. Case matters. Comment by Fjara on 2018-05-28: I'll try when I get the robot back :) Comment by Fjara on 2018-05-31: Well, awesome, it worked! Thanks a lot @gvdhoorn, I understand my mistake here. /devel/setup.bash is /home/path/to/devel/setup.bash
{ "domain": "robotics.stackexchange", "id": 30888, "tags": "ros, script, ros-kinetic, bash" }
Discrete signal testing for periodicity
Question: How would one go about determining if the following discrete time signal x[n] is periodic, and if it is, determine its fundamental period? I understand that the period for the second exponential term is 6, but apart from that I am unable to further my calculation. The answer to the question above is: but I fail to understand how the equation highlighted in yellow comes to be. Any help with this matter would be appreciated. Answer: The first term is just a constant, so it's not relevant to periodicity. The sequence as written is periodic with a fundamental frequency of 1/6. Your cited solution makes no sense to me.
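As a quick numerical sanity check of the answer (the question's exact x[n] is not reproduced here, so a constant plus a stand-in term exp(j*2*pi*n/6) is used as an assumption), the smallest shift N with x[n + N] = x[n] for all n is indeed 6, so the constant term changes nothing about the period:

```python
import cmath
import math

def fundamental_period(x, n_max=48, tol=1e-9):
    """Smallest N with x(n + N) == x(n) for every n checked, or None."""
    for N in range(1, n_max):
        if all(abs(x(n + N) - x(n)) < tol for n in range(n_max)):
            return N
    return None

# Stand-in for the question's sequence: a constant plus exp(j*2*pi*n/6).
x = lambda n: 2.0 + cmath.exp(2j * math.pi * n / 6)

print(fundamental_period(x))   # -> 6
```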
{ "domain": "dsp.stackexchange", "id": 9622, "tags": "discrete-signals, periodic" }
are GSEA and other geneset enrichment analysis supposed to yield extremely different results between them?
Question: I have recently run four gene set enrichment analyses in R on the same database (TCGA: breast cancer), comparing two intrinsic subtypes. The methods I used were: MIGSA, which imports the mGSZ package and combines it with a SEA algorithm, using RNA-seq TMM-normalized counts as input, because the software supports it; all parameters as default. mGSZ with standard parameters, using RNA-seq TMM-normalized log2 cpm; all parameters as default. FGSEA preranked, using log fold change as the ranking metric, obtained from a limma differential expression analysis of RNA-seq TMM-normalized counts; all parameters as default. GSVA + limma: extracted by-sample enrichment scores from RNA-seq TMM-normalized log2 cpm, then ran a differential expression analysis with gene set enrichment scores as if they were simple gene expression values. I know this is not a formal approach but I wanted to compare; all parameters as default. The gene sets tested are the full combined list of KEGG, GO:BP, GO:MF and GO:CC. In all cases I trimmed the gene sets to those of length <500, which leaves a total of 21723 gene sets tested. Sadly, the results are confusing. For starters, mGSZ for some reason is cropping the output and giving results for only around 9700 of those gene sets - I've already sent an email to the package maintainer asking about that. But that isn't the biggest problem; the most shocking thing is that none of the analyses performed has a good correlation to the results of any of the other analyses. Here I plotted the -log10(adjusted p-value + 0.0001) of each analysis against every other analysis, with lines indicating -log10(0.05)=1.3 and -log10(0.01)=2. I was expecting the results to be somewhat different, but here I can't see even something remotely close to a correlation between the different results. It is also evident that MIGSA and GSVA+limma output a lot more significantly enriched gene sets than the other two methods. 
I have gone through the code time and again, and I can't find any errors. Also, I have searched for studies doing this kind of direct comparison between methods, without luck so far. Has anybody experienced these kinds of inconsistencies between gene set enrichment analysis results with different tools? What could I possibly be doing wrong? Answer: Each of these methods does something different, so it is reasonable to expect different results. The bottom line is that there isn't a single question you can ask when you do an "enrichment test". For example: you can test whether a set of genes is more expressed than this other list of genes, or whether they are above the mean levels of any other gene. Even more, you use the Gene Ontology, which has an underlying structure (a directed acyclic graph) that no method you used takes into account, but which could modify the results if taken into account. However, the KEGG pathways do not have this structure, so when you want to apply a method mixing both data sources you need to decide if you want to use it or not. This is also complicated because for most of these methods you can only compare the p-value, which depends on the underlying assumptions of the methods (uniform, normal, binomial distribution?). The latest comprehensive study comparing enrichment methods, including some of the methods you used and more, is this one (that I know of): Toward a gold standard for benchmarking gene set enrichment analysis. Last, what would a correlation tell you here? It is known that there is little agreement between methods. You couldn't say that one method is equal to another unless you tested with several datasets and always had a correspondence in the p-values.
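One practical note on the correlation point: if you do want a single number for how similarly two methods rank the same gene sets, a rank correlation on the p-values is more informative than correlating -log10 p-values directly. A self-contained sketch (the p-values below are made up; in practice you would feed in the adjusted p-value vectors for the same gene sets from two methods):

```python
def rank(values):
    """Plain 1-based ranks (no tie handling; enough for this sketch)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman(a, b):
    """Spearman rank correlation via the classic 1 - 6*sum(d^2)/(n*(n^2-1)) formula."""
    ra, rb = rank(a), rank(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical adjusted p-values for the same five gene sets under two methods:
p_method1 = [0.001, 0.20, 0.04, 0.80, 0.01]
p_method2 = [0.002, 0.30, 0.03, 0.90, 0.05]
print(spearman(p_method1, p_method2))   # -> 0.9
```

Note that this formula assumes no tied p-values; with thousands of gene sets and heavy ties at the adjusted-p floor, a proper implementation such as R's cor(..., method = "spearman") is the safer choice.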
{ "domain": "bioinformatics.stackexchange", "id": 1319, "tags": "r, gsea, go-enrichment" }
calculating overlap of modular ranges
Question: So, this might be a really simple problem but I can't seem to find a nice algorithm to solve it: Given two ranges, [a1, a2], [b1, b2] (all real numbers) and a real number n, find the length of the overlapping segment between the two ranges over a modulo of n. For example, consider a 24-hour clock and the range [20, 4] (night time); for a given range, calculate the number of hours within that range that are night hours: [13, 21] ==> 1 #[20,21] [0, 6] ==> 4 #[0, 4] [11, 19] ==> 0 I tried to think of it in terms of predefined segments [a1,b1], [b2, a2] and do some math with them but it didn't work. Maybe I should sort them somehow? I will appreciate any help or direction, thanks! Answer: Here is how I understand your problem. We have a modulus $n$. A generalized interval $[\![a,b]\!]$ consists of $[a,b]$ if $a < b$, and of $[a,n) \cup [0,b]$ if $a > b$ (assume for simplicity that $a \neq b$). You want to know the size of the intersection of two generalized intervals $[\![a_1,b_1]\!],[\![a_2,b_2]\!]$. One way to solve this is to decompose each generalized interval into a disjoint union of one or two intervals, and then compute the size of the pairwise intersections (which I will let you work out yourself), and sum them up.
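The decomposition described in the answer is short to code. A Python sketch (treating an input with a1 < a2 as a plain interval and a wrapping input as [a, n) plus [0, b]), checked against the question's 24-hour examples:

```python
def split(a, b, n):
    """Decompose a generalized interval [a, b] mod n into plain intervals."""
    return [(a, b)] if a <= b else [(a, n), (0, b)]

def overlap(i1, i2, n):
    """Total length of the intersection of two generalized intervals mod n."""
    total = 0.0
    for a1, b1 in split(*i1, n):
        for a2, b2 in split(*i2, n):
            total += max(0.0, min(b1, b2) - max(a1, a2))
    return total

# The clock example: night time is [20, 4] on a 24-hour clock.
print(overlap((20, 4), (13, 21), 24))   # -> 1.0
print(overlap((20, 4), (0, 6), 24))     # -> 4.0
print(overlap((20, 4), (11, 19), 24))   # -> 0.0
```

Since each generalized interval splits into at most two plain intervals, the double loop touches at most four pairs, so the whole computation is constant time.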
{ "domain": "cs.stackexchange", "id": 6298, "tags": "algorithms" }
Keras: Custom output layer for multiple multi-class classifications
Question: Hello, I’m quite new to machine learning and I want to build my first custom layer in Keras, using Python. I want to use a dataset of 103 dimensions to do a classification task. The last fully connected layer of the model has 103 neurons (represented by 13 dots in the image). Groups of five dimensions of the former layer should be connected to three neurons of the output layer, so there will be 20 classifications. The neurons of the output layer represent "True" ("T" in the image), "indifferent" ("?") and "False" ("F"). The remaining three don’t need connections to the output layer. How can I build this layer? And how can I make sure that each of the 20 groups with three neurons gives probabilities that add up to 1? Can I apply the softmax activation function to each of the groups, for example? Edit – This is my solution: # define input and hidden layers. append them to list by calling the new layer with the last layer in the list self.layers: list = [keras.layers.Input(shape=self.neurons)] [self.layers.append(keras.layers.Dense(self.neurons, activation=self.activation_hidden_layers)(self.layers[-1])) for _ in range(num_hidden_layers)] self.layers.append(keras.layers.Dense(self.neurons - self.dims_to_leave_out, activation=activation_hidden_layers)(self.layers[-1])) # define multi-output layer by slicing the neurons from the last hidden layer self.outputs: list = [] index_start: int = 0 for i in range(int((self.neurons - self.dims_to_leave_out)/self.neurons_per_output_layer)): index_end: int = index_start + self.neurons_per_output_layer self.outputs.append(keras.layers.Dense(self.output_dims_per_output_layer, activation=self.activation_output_layers)(self.layers[-1][:, index_start:index_end])) index_start = index_end Answer: The Functional API allows you to design more complicated models, including multi-output models. Check the documentation to see how you can connect specific neurons to others of your choice. 
You should be able to make custom layers from scratch. Once you build distinct output layers, probabilities within each can be set just as usual by using softmax activation.
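To the asker's follow-up question: yes, applying softmax to each group of three separately is exactly what makes each group a proper probability distribution. The arithmetic can be sanity-checked without Keras; a pure-Python sketch (the group size and logits below are made up for illustration):

```python
import math

def softmax(v):
    """Numerically stable softmax over one group of logits."""
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def grouped_softmax(logits, group_size=3):
    """Apply softmax independently to each consecutive group of logits."""
    return [softmax(logits[i:i + group_size])
            for i in range(0, len(logits), group_size)]

# 20 groups of 3 output neurons (T / ? / F), with hypothetical logits:
logits = [0.1 * i for i in range(60)]
probs = grouped_softmax(logits)
print(len(probs))                                      # -> 20
print(all(abs(sum(g) - 1.0) < 1e-9 for g in probs))    # -> True
```

In Keras itself the same effect comes from giving each of the 20 output Dense layers activation='softmax' (which the asker's edit already does, assuming activation_output_layers is set to 'softmax').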
{ "domain": "datascience.stackexchange", "id": 10090, "tags": "machine-learning, classification, keras, multiclass-classification, softmax" }
Is there magnetic dipole-dipole interaction between electrons in the quantum level?
Question: Classically, two magnetic moments interact with each other through the magnetic field they create. Consider two electrons; the common Hamiltonian would be $$ \hat{H} = \frac{\hat{p}_1^2+\hat{p}_2^2}{2m}+\frac{e^2}{4\pi \epsilon_0}\frac{1}{|r_1-r_2|}, $$ where $\hat{p}_1$ is the momentum operator of one of the electrons. However, I am wondering whether there should be a term representing the interaction between the spin magnetic moments of the two electrons. Thoughts: Since spin is essentially a relativistic effect, I would imagine that the interaction between spin magnetic moments, if it exists, should come from relativistic quantum mechanics. I have heard of the Darwin term and spin-orbit coupling, both coming from reducing the Dirac equation to the Schrödinger equation. However, I have never heard of any interaction term between spin magnetic moments. In condensed matter physics, there are models, such as the Heisenberg model, that describe the interaction of spins. However, such interaction comes from the Coulomb interaction, rather than from the interaction between spin magnetic moments. Answer: Yes, there is such an interaction. It is sometimes called a spin-spin interaction, though strictly, as you say, it is an interaction between magnetic dipole moments. In helium it makes a contribution to the fine structure similar to that from the spin-orbit interaction; in other atoms it also contributes, but less noticeably. It is a common misconception that spin is a relativistic effect or a quantum effect or both. Strictly speaking, it is no more a relativistic effect or a quantum effect than anything else. I mean that there is a low-velocity limit where spin is still relevant, and there are spin states which behave like a classical limit of spin. In both respects the same can be said of other degrees of freedom such as position and momentum, but we don't normally propose that they have to be relativistic, nor that they are a quantum effect. 
Having said that, when one constructs a relativistic quantum theory one is more or less forced to include spin, whereas when one constructs classical physics you could leave spin out (from ignorance) and a sensible theory can still be constructed. (But you don't have to leave it out: you can include it while still doing classical physics by describing the spin degree of freedom with numbers, not operators, in the angular momentum tensor, and noting that in such a classical version it can be observed without significantly disturbing it. The reason why spin seems so "quantum" in practice is that we almost always encounter it in situations far from such a classical limit.)
{ "domain": "physics.stackexchange", "id": 67377, "tags": "quantum-mechanics, special-relativity, condensed-matter, quantum-spin, magnetic-moment" }
map_server for indigo
Question: Hello all, How do I get/install the map_server package for indigo? Thank you, Andreass Originally posted by oinkmaster2000 on ROS Answers with karma: 1 on 2014-08-06 Post score: 0 Answer: sudo apt-get install ros-indigo-map-server Originally posted by ahendrix with karma: 47576 on 2014-08-06 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 18927, "tags": "navigation, mapping, map-server, ros-indigo" }
Proving that $dS$ is an exact differential mathematically
Question: OK...so I hope this is not too dumb a question: We know that we can express $dS$ as $$dS=\frac{dQ}{T}=\frac{C_v}{T}dT+\frac{R}{V}dV,$$ where $C_v$ is the thermal capacity at constant volume and $R$ is the gas constant. However, I recall that for a differential of the form $dz=X(x,y)dx+Y(x,y)dy$ to be exact we must have $$\frac{\partial X(x,y)}{\partial y}=\frac{\partial Y(x,y)}{\partial x}.$$ Now my problem is how do you show that $$\frac{\partial}{\partial V}\left(\frac{C_v}{T}\right)=\frac{\partial}{\partial T}\left(\frac{R}{V}\right).$$ My math skills are kind of rusty, so I'm having trouble here. I hope someone can help me out on this. Answer: For a perfect gas, once one arrives at an expression for $dS$, the integrability condition is a trivial check. $C_v=\alpha R$ with $\alpha$ constant. Thus, $C_v/T$ does not depend on $V$, while $R/V$ does not depend on $T$, and the condition for integrability is trivially satisfied. A little less trivial (but not too difficult) task would be to show for a perfect gas that $\delta q_{rev}/T$ is an exact differential, where $\delta q_{rev} = dU + PdV$. For a general system (not a perfect gas) this would be equivalent to Carnot's theorem and then an expression of the 2nd law.
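Spelling out that "little less trivial" check for one mole of an ideal gas (using $PV = RT$ and constant $C_v$): $\delta q_{rev}$ itself fails the exactness test, but dividing by $T$ repairs it:

```latex
% delta q itself is NOT exact:
\delta q_{rev} = dU + P\,dV = C_v\,dT + \frac{RT}{V}\,dV,
\qquad
\frac{\partial C_v}{\partial V} = 0 \;\neq\; \frac{\partial}{\partial T}\!\left(\frac{RT}{V}\right) = \frac{R}{V}.
% dividing by T restores exactness:
\frac{\delta q_{rev}}{T} = \frac{C_v}{T}\,dT + \frac{R}{V}\,dV,
\qquad
\frac{\partial}{\partial V}\!\left(\frac{C_v}{T}\right) = 0 \;=\; \frac{\partial}{\partial T}\!\left(\frac{R}{V}\right).
```

So $1/T$ acts as an integrating factor for $\delta q_{rev}$, which is exactly the condition the question asks about.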
{ "domain": "physics.stackexchange", "id": 57483, "tags": "thermodynamics, differential-geometry, entropy, reversibility" }
Uncertainity relation of Kinetic energy with position
Question: In R. Shankar's Principles of Quantum Mechanics book, in the problem Now $$\Delta T = \frac{1}{2m} \Delta(p^2)$$ And I don't arrive anywhere using this, but I also know that $\Delta A \Delta B \geq \left|\frac{1}{2i}\left<[A,B]\right>\right|$. So, using this I find the commutation relation \begin{align*} \frac{1}{2m}[p^2,x] &= \frac{-i p\hbar}{m}\\ \end{align*} And, $$ \Delta T \cdot \Delta X \geq \frac{\left|\left<p\right>\right|\hbar}{2m} $$ Here the author asks why this relation is not so famous, though this seems nothing special to me. Am I doing anything wrong here? Answer: The relation will give you $$[x, T] = \frac{i \hbar}{m} p$$ Now, $\Delta T \Delta X \geq \dfrac{\langle p \rangle \hbar}{2m}$. Notice that for a state with zero momentum, the product of uncertainties can have the minimum value zero, unlike the case for any canonically conjugate pair (like momentum, position).
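For completeness, the commutator step in the answer can be written out with the identity $[A,BC]=[A,B]C+B[A,C]$ and $[x,p]=i\hbar$:

```latex
[x, T] = \frac{1}{2m}\,[x, p^2]
       = \frac{1}{2m}\left([x,p]\,p + p\,[x,p]\right)
       = \frac{1}{2m}\,(2 i\hbar\, p)
       = \frac{i\hbar}{m}\,p,
\qquad\Rightarrow\qquad
\Delta x\,\Delta T \;\geq\; \frac{1}{2}\left|\langle [x,T]\rangle\right|
                  \;=\; \frac{\hbar}{2m}\,\left|\langle p\rangle\right| .
```

Since $\langle p\rangle$ can vanish (e.g. for any real wavefunction), the lower bound collapses to zero, which is the answer's point about why this relation is not celebrated.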
{ "domain": "physics.stackexchange", "id": 58742, "tags": "quantum-mechanics, homework-and-exercises, heisenberg-uncertainty-principle, commutator" }
Passing an NSManagedObjectContext
Question: I've been experimenting with how to pass my managed object context throughout my application in Core Data and I came across an approach that I'd like reviewed. I've been creating a protocol for all of my View Controllers that need a managed object context to implement: protocol ManagedObjectContextProperty { var managedObjectContext: NSManagedObjectContext! { get set } } Then I create an extension to that protocol to assert that the context does indeed exist: extension ManagedObjectContextProperty { func checkManagedObjectContext(name: String) { if managedObjectContext == nil { assertionFailure("\(name) is missing the managed object context.") } } } Finally, I create an extension that fetches my context from the app delegate and adds it to my managedObjectContext variable: extension UIViewController { func getManagedObjectContext<T : UIViewController where T : ManagedObjectContextProperty>(controller: T) { var controller = controller let appDelegate = UIApplication.shared().delegate as! AppDelegate controller.managedObjectContext = appDelegate.managedObjectContext } } Then in my View Controller I simply do this: class ViewController: UIViewController, ManagedObjectContextProperty { var managedObjectContext: NSManagedObjectContext! override func viewDidLoad() { getManagedObjectContext(controller: self) checkManagedObjectContext(name: "ViewController") } } Is this approach against the general convention? If so, why? Answer: Ok, I think I've found an answer that I can appreciate. From Marcus Zarra's blog: A view controller typically shouldn’t retrieve the context from a global object such as the application delegate. This tends to make the application architecture rigid. Neither should a view controller typically create a context for its own use. This may mean that operations performed using the controller’s context aren’t registered with other contexts, so different view controllers will have different perspectives on the data. 
source: http://www.cimgf.com/2011/01/07/passing-around-a-nsmanagedobjectcontext-on-the-iphone/ Looks like I'll go back to passing the managed object context via segue.
{ "domain": "codereview.stackexchange", "id": 20919, "tags": "swift, core-data" }
GPS sensor plugin in Ignition
Question: Hi! Is there an equivalent to the "libhector_gazebo_ros_gps.so" plugin in Ignition (Fortress)? From what I can see, the NavSat plugin is the closest, but I can't find any parameters to link it to a model. Originally posted by knuttu on Gazebo Answers with karma: 1 on 2022-12-12 Post score: 0 Answer: By browsing GitHub I found out how to use the NavSat plugin (ignition-gazebo-navsat-system, if that was unclear). So basically you have to do something like this if you are using SDF: <model name="NavSat"> <pose>0 0 0.05 0 0.0 0</pose> <link name="link"> <inertial> <mass>0.1</mass> <inertia> <ixx>0.000166667</ixx> <iyy>0.000166667</iyy> <izz>0.000166667</izz> </inertia> </inertial> <collision name="collision"> <geometry> <box> <size>0.1 0.1 0.1</size> </box> </geometry> </collision> <visual name="visual"> <geometry> <box> <size>0.1 0.1 0.1</size> </box> </geometry> </visual> <sensor name="navsat" type="navsat"> <always_on>1</always_on> <update_rate>1</update_rate> <topic>navsat</topic> </sensor> </link> </model> For this to work you also need to set the spherical coordinates for NavSat. I did this in my world SDF file, and it looks like this: <spherical_coordinates> <surface_model>EARTH_WGS84</surface_model> <world_frame_orientation>ENU</world_frame_orientation> <latitude_deg>-22.986687</latitude_deg> <longitude_deg>-43.202501</longitude_deg> <elevation>0</elevation> <heading_deg>0</heading_deg> </spherical_coordinates> However, I was not able to do this from a URDF file, which is what I want to do. EDIT: If you are using ROS2 you can call the /world/world_name/set_spherical_coordinates service from your launch.py file to set the spherical coordinates. Originally posted by knuttu with karma: 1 on 2022-12-13 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 38643, "tags": "ignition-fortress" }
How to handle out-of-bound values in Production data?
Question: So I have this model but the data may vary, and it is virtually impossible to always have the values in bounds. If I did, I'd have to use a larger period, leading to concept shift, which is worse. The question is: what is the best way to deal with the values of features that are out of the model's bounds? I see 3 options: If the value is greater than the max, set it to the max value the model has seen. If the value is less than the min, set it to the min value the model has seen. If the value is greater or less, set it to the mean; that kind of eliminates the feature's weight for the prediction. So what would be the best approach here, any thoughts? Note: I am retraining the model daily and the model has a lot of features, ~500, so it is highly likely that even right after retraining some will be out of bounds; excluding features is not the best option since it's never the same feature showing this behaviour. I am using this function for scaling. def min_max(value, min_max_map): result = 0 if min_max_map['max'] - min_max_map['min']: result = (value - min_max_map['min']) / (min_max_map['max'] - min_max_map['min']) if value > min_max_map['max'] or value < min_max_map['min']: result = (min_max_map['mean'] - min_max_map['min']) / (min_max_map['max'] - min_max_map['min']) return result Answer: Steps 1 and 2 are basically the same operation, clipping, which is a possibility but not the best, since you're losing information. Imagine having two instances with the same features except for feature $n$, which assumes the values 101 and 201 respectively. Let's say that 100 was your observed maximum in the training data for feature $n$; after clipping, both instances will look the same and lead to the same predictions - good for instance 1, because its value of $n$ is close to the training maximum, but nonsense for instance 2. 
The best way would be to define a theoretical maximum, i.e. one independent of the training data. It could also be a value that we choose as a maximum and beyond which we do indeed clip, because occurrences of higher values are rare. An alternative is to use z-score standardization instead of min-max normalization. Its output range is [$-\infty$, $\infty$], but in practice you'll end up with almost all values within the range [-1, 1], and rare occurrences below and above this range are totally fine.
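A minimal sketch of the two scalers discussed above (clipping min-max versus z-score), assuming the training statistics have already been computed; the function names and the `stats` dictionary are illustrative, not taken from the question:

```python
def minmax_clipped(value, stats):
    """Min-max scale, clipping out-of-bound values to the training range."""
    lo, hi = stats['min'], stats['max']
    if hi == lo:                      # degenerate feature: no spread in training data
        return 0.0
    value = max(lo, min(value, hi))   # clip BEFORE scaling, so the result stays in [0, 1]
    return (value - lo) / (hi - lo)

def zscore(value, stats):
    """Z-score standardization: unbounded, so out-of-range values keep their distance."""
    if stats['std'] == 0:
        return 0.0
    return (value - stats['mean']) / stats['std']

stats = {'min': 0.0, 'max': 100.0, 'mean': 50.0, 'std': 25.0}
# Clipping makes 101 and 201 indistinguishable...
print(minmax_clipped(101, stats), minmax_clipped(201, stats))  # 1.0 1.0
# ...while z-score preserves how far out of the training range each value is.
print(zscore(101, stats), zscore(201, stats))                  # 2.04 6.04
```

This illustrates the information-loss argument from the answer: the clipped scaler maps both out-of-bound instances to the same value, while the z-score keeps them apart.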
{ "domain": "ai.stackexchange", "id": 3453, "tags": "deep-learning, deep-neural-networks, prediction, models, features" }
Doped semiconductor: "what if all donated electrons are gone?"
Question: I have trouble understanding the conductivity of an n-doped semiconductor in band theory. I know that donor atoms carry one excess electron that can enter the conduction band easily. If this happens, a fixed positive charge remains. Assume that, due to an applied voltage, the excess electrons in the conduction band are all removed from the semiconductor. How can the current continue to flow; how can new electrons enter the semiconductor or the conduction band? Are the doping atoms "close enough" to each other that excess electrons can move from one doping atom to another? I have less trouble understanding conductivity in a p-doped semiconductor: an electron "fills up the missing bond", creating a fixed negative charge. Conduction occurs in the valence band by "passing the hole" from one Si atom to the next. Answer: If all the dopant electrons were removed from an n-doped material, the material would have a net positive charge. If an applied voltage removes some of the electrons from the material, either new electrons will be attracted to the positive dopant ions from elsewhere in the circuit or, if there is no "rest of the circuit", you will have made a capacitor (the same as with any other conducting material).
{ "domain": "physics.stackexchange", "id": 26409, "tags": "semiconductor-physics, electronic-band-theory" }
Python script takes input of four elements and outputs a valid 4 × 4 Sudoku grid in O(n) time, but only when given non-repeating elements
Question: Introduction
I've found a clever and fun thing to do after solving an n² × n² Sudoku puzzle. I can take a grid such as this one and hard-code its indices as constraints to output other 4 × 4 Latin squares in polynomial time. I took each successive row and hard-coded it as indices with successive print statements.
Index constraints
0123
3210
1032
2301
Working Code

print('enter with [1,2,3...] brackets')
text = input()[1:-1].split(',')
print(text[0], text[1], text[2], text[3])
print(text[3], text[2], text[1], text[0])
print(text[1], text[0], text[3], text[2])
print(text[2], text[3], text[0], text[1])

Question
Being a novice at Python, I'm asking: is there a better way of hard-coding the Sudoku's pattern with fewer lines of code? It would be daunting to have to write out larger constraints for larger Latin squares. I would appreciate keeping it O(n) time, because I want to input integers besides just the elements 1-4: 100-104 and so on.
Answer: Use json to simplify your input handling. You can use str.format to simplify all the prints. You don't handle incorrect data well: what if I enter 3 or 5 numbers? Your code doesn't run in \$O(n)\$ time, it runs in \$O(n^2)\$ time. I recommend that you ignore \$O\$ and just get working code if you're a novice. After you get it working, make it readable. Finally, here you should time your code to see if you then need to optimize it. Use functions; they make your code easier to use.

import json

def handle_input(input_):
    try:
        data = json.loads(input_)
    except ValueError:
        raise ValueError('Invalid format.')
    if len(data) != 4:
        raise ValueError(f'Incorrect amount of numbers, got {len(data)} not 4.')
    return data

def main():
    print('Enter four numbers in brackets. E.g. [1, 2, 3, 4]')
    data = handle_input(input())
    print(
        '{0} {1} {2} {3}\n'
        '{3} {2} {1} {0}\n'
        '{1} {0} {3} {2}\n'
        '{2} {3} {0} {1}\n'
        .format(*data)
    )

if __name__ == '__main__':
    main()
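To address the question's worry about larger grids, one further option is to store the index pattern as data and render it in a loop, so the number of print statements no longer grows with the grid. A sketch assuming the same 4 × 4 pattern as above; the names `PATTERN` and `render` are illustrative:

```python
# The row permutations from the answer, stored as data instead of print calls.
PATTERN = [
    (0, 1, 2, 3),
    (3, 2, 1, 0),
    (1, 0, 3, 2),
    (2, 3, 0, 1),
]

def render(values, pattern=PATTERN):
    """Lay out the input values according to the index pattern, one row per line."""
    return '\n'.join(
        ' '.join(str(values[i]) for i in row) for row in pattern
    )

print(render([100, 101, 102, 103]))
# 100 101 102 103
# 103 102 101 100
# 101 100 103 102
# 102 103 100 101
```

For a larger Latin square, only the `PATTERN` table changes; the rendering code stays the same.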
{ "domain": "codereview.stackexchange", "id": 35260, "tags": "python, sudoku" }