Angular frequency of spring mass system
Question: Why is it true that ω=√(k/m)? I can't find a derivation of this anywhere. Answer: You have the basic equation for simple harmonic motion - $\dfrac{d^2 x}{d t^2} = -{\omega}^2 x \tag{1}$ and the force acting on the block in the spring-mass system is given by $F = m\dfrac{d^2 x}{d t^2} = -kx \tag{2}$ where $k$ is the spring constant, a characteristic of the spring that measures its stiffness. Compare the two equations, and you have the angular frequency $\omega = \sqrt{\dfrac{k}{m}} \tag{3}$
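As a numerical sanity check, the equation of motion (2) can be integrated directly and the measured period compared against $T = 2\pi/\omega = 2\pi\sqrt{m/k}$. A minimal sketch (the values k = 4, m = 1 are arbitrary):

```python
import math

def measure_period(k, m, dt=1e-4, x0=1.0):
    """Integrate m*x'' = -k*x with semi-implicit Euler and estimate the
    period from the spacing of successive zero crossings of x."""
    x, v, t = x0, 0.0, 0.0
    crossings = []
    while len(crossings) < 2:
        v += (-k / m) * x * dt            # acceleration from Hooke's law
        x_new = x + v * dt
        if x * x_new < 0:                 # sign change: x passed through zero
            # linearly interpolate the crossing time within the step
            crossings.append(t + dt * x / (x - x_new))
        x, t = x_new, t + dt
    return 2 * (crossings[1] - crossings[0])  # consecutive crossings are T/2 apart

k, m = 4.0, 1.0
print(measure_period(k, m), 2 * math.pi * math.sqrt(m / k))  # both close to pi
```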
{ "domain": "physics.stackexchange", "id": 45361, "tags": "harmonic-oscillator, spring, angular-velocity" }
Running a Python file from a ROS Launch file: Can't Find It
Question: So I have the following launch file; when I run the launch file it errors saying it can't find the Python file <?xml version="1.0"?> <launch> <node name="brickpi.py" pkg="brickpi" type="brickpi.py" output="screen"/> <node pkg="xv_11_laser_driver" type="neato_laser_publisher" name="xv_11_node"> <!--<param name="port" value="/dev/tty.usbserial-A9UXLBBR"/>--> <param name="port" value="/dev/tty.ACM0"/> <param name="firmware_version" value="2"/> <param name="frame_id" value="laser"/> </node> <node pkg="tf" type="static_transform_publisher" name="base_frame_2_laser" args="0 0 0 0 0 0 /base_frame /laser 100"/> <include file="default_mapping.launch"/> <include file="$(find hector_geotiff)/launch/geotiff_mapper.launch"/> </launch> The file is in the same directory as the launch file, and I have also copied it to /home/pi/catkin_ws/src/brickpi/brickpi.py /home/pi/catkin_ws/src/brickpi/src/brickpi.py /home/pi/catkin_ws/src/brickpi.py /home/pi/catkin_ws/build/brickpi/brickpi.py error: ERROR: cannot launch node of type [brickpi/brickpi.py]: brickpi ROS path [0]=/opt/ros/indigo/share/ros ROS path [1]=/home/pi/catkin_ws/src ROS path [2]=/opt/ros/indigo/share ROS path [3]=/opt/ros/indigo/stacks Update I have done chmod +x on the file, but I must point out I have not made a package.xml, CMakeLists.txt or anything. I can run the file on its own via python and all works, so maybe my issue is that for a launch file to use it, I must make a package? All code can be found here https://github.com/burf2000/ROS_Robot Originally posted by burf2000 on ROS Answers with karma: 202 on 2017-01-20 Post score: 0 Original comments Comment by Thomas D on 2017-01-20: Did you make the Python file executable? Comment by suforeman on 2017-01-20: In your CMakeLists.txt do you have your Python program listed in the catkin_install_python section? 
I have a working example here: https://gitlab.com/bradanlane/locoro Comment by gvdhoorn on 2017-01-21: @burf2000: I think you'll only get (well intended) guesses from other board members, unless we can get access to (a minimal working example of) your code. In principle, this should all work, but there is probably a minor thing missing or not set up correctly which causes problems. Comment by gvdhoorn on 2017-01-21: @suforeman: that is good advice, but installing artefacts is not needed when working with the devel space (as rosrun et al. will typically resolve Python packages to the src space). Comment by suforeman on 2017-01-21: Thanks @gvdhoorn, I'm still learning. My example may still shed some light. At least it's working for me. I also recall a situation similar to the OP's that happened early on in my project. I had forgotten to source devel/setup.bash. Comment by gvdhoorn on 2017-01-21: Yes, I was thinking something like that may have happened here (as it should just be enough to chmod +x a Python script to rosrun it, as long as it's in a ROS pkg), but the OP claim(s)(ed) that he already did that. Comment by burf2000 on 2017-01-21: I have not chmod +x the file; all I have done is dumped the file in a new directory called brickpi? Do I need to make a CMakeLists.txt? Answer: I solved this by creating a package :) called burf_robot. Then I copied the script into a directory called scripts within the package and did chmod +x. I then changed the package name in the launch file to the new burf_robot package name. I then ran catkin_make. Originally posted by burf2000 with karma: 202 on 2017-01-21 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by gvdhoorn on 2017-01-22: Please don't accept answers when they're not really the answer. The real solution was to create a package (called brickpi) and put the Python script in that. Seeing as your launch file had pkg="brickpi" in it, we all assumed you had already done that. 
A MWE would have immediately .. Comment by gvdhoorn on 2017-01-22: .. let us know you hadn't. my issue is for a Launch file to use it, I must make a Package? so yes, ROS is package based, and anything you want to rosrun must be part of a package. Comment by burf2000 on 2017-01-22: Sorry about that, yes you are right.
{ "domain": "robotics.stackexchange", "id": 26782, "tags": "ros, python, roslauch" }
ROS1 to ROS2 Dynamic Bridge Error: Failed Bridge Creation for '/rosout' Topic
Question: "failed to create 2to1 bridge for topic '/rosout' with ROS 2 type 'rcl_interfaces/Log' and ROS 1 type 'rosgraph_msgs/Log'" This is the error I obtained while running ros1_bridge dynamic_bridge. Please help me rectify this error. If message remapping is needed, how can we do the remapping? Remapping rcl_interfaces/Log to rosgraph_msgs/Log is a special case, I think, because the data types and number of fields are not equal... Your answer would be most helpful to me. Thank you. Originally posted by roslearnersai on ROS Answers with karma: 11 on 2019-05-14 Post score: 0 Answer: This is a known issue, see https://github.com/ros2/ros1_bridge/issues/159 Originally posted by Dirk Thomas with karma: 16276 on 2019-05-14 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 33014, "tags": "ros, ros2, remapping, ros-crystal" }
R solution to max product in matrix for four adjacent numbers (Euler 11)
Question: Background This is solution to Problem 11 on Project Euler that is concerned with finding largest product of four adjacent numbers in the provided grid. Other than solving the problem I was also interested in creating a search function that would have a more generic character and could search across any number of adjacent numbers. Provided Grid Read as 11grid.txt in the provided code. 08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08 49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00 81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65 52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91 22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80 24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50 32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70 67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21 24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72 21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95 78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92 16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57 86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58 19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40 04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66 88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69 04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36 20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16 20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54 01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48 Solution Notes The solution: Gets addresses of all cells in the matrix From each address creates a list of neighbours in each direction (North, South, ...) 
Product is calculated on each of the sets. The max product is returned.

Code

# Notes ---------------------------------------------------------------------
# Problem 11

# Data ----------------------------------------------------------------------

# Read provided matrix
M <- matrix(
  data = scan("./problems/11/11grid.txt"),
  nrow = 20,
  ncol = 20
)

# Support -------------------------------------------------------------------

# Pad matrix for the desired number of neighbours
pad_matrix <- function(M, n = 4) {
  # Create a list of NAs to pad
  l <- lapply(X = 1:n, function(x) {
    NA
  })

  # Pad columns
  lc <- l
  lc[[1]] <- M
  Mcols <- do.call(what = cbind, args = lc)
  Mcols <- do.call(what = cbind, args = {
    # Pad other side
    lc[[1]] <- Mcols
    rev(lc)
  })

  # Pad rows
  lr <- l
  lr[[1]] <- Mcols
  Mcols_rows <- do.call(what = rbind, args = lr)
  Mcols_rows <- do.call(what = rbind, args = {
    # Pad other side
    lr[[1]] <- Mcols
    rev(lr)
  })
  Mcols_rows
}

# Search ----------------------------------------------------------------------

search_product <- function(M = M, n = 4) {
  addresses <- expand.grid(x = sequence(nrow(M)), y = sequence(ncol(M)))
  n_search <- n - 1
  # Create padded matrix
  M_pad <- pad_matrix(M = M, n = n)
  neighbhours_res <- apply(
    X = addresses,
    MARGIN = 1,
    FUN = function(M_addr) {
      tryCatch(
        expr = list(
          North = M_pad[M_addr["x"]:(M_addr["x"] - n_search), M_addr["y"]],
          North_East = c(M_pad[M_addr["x"], M_addr["y"]],
                         sapply(X = 1:n_search, FUN = function(i) {
                           M_pad[M_addr["x"] - i, M_addr["y"] + i]
                         })),
          East = M_pad[M_addr["x"], M_addr["y"]:(M_addr["y"] + n_search)],
          South_East = c(M_pad[M_addr["x"], M_addr["y"]],
                         sapply(X = 1:n_search, FUN = function(i) {
                           M_pad[M_addr["x"] + i, M_addr["y"] + i]
                         })),
          South = M_pad[M_addr["x"]:(M_addr["x"] + n_search), M_addr["y"]],
          South_West = c(M_pad[M_addr["x"], M_addr["y"]],
                         sapply(X = 1:n_search, FUN = function(i) {
                           M_pad[M_addr["x"] + i, M_addr["y"] - i]
                         })),
          West = M_pad[M_addr["x"], M_addr["y"]:(M_addr["y"] - n_search)],
          North_West = c(M_pad[M_addr["x"], M_addr["y"]],
                         sapply(X = 1:n_search, FUN = function(i) {
                           M_pad[M_addr["x"] - i, M_addr["y"] - i]
                         }))
        ),
        error = function(e) {
          NA
        }
      )
    }
  )
  products <- rapply(object = neighbhours_res, f = prod, classes = "numeric")
  # Keep max only
  max(products, na.rm = TRUE)
}

res <- search_product(M = M, n = 4)
res

Answer: When working with matrices in R, you can often do an operation with a single command if you are clever about how it is structured. Take padding a matrix with npad missing values as an example. Your current code does this by first padding the columns and then padding the rows. However, you could define a correctly sized matrix of all missing values to start, and then store the original matrix at the correct location within the new matrix:

pad_matrix2 <- function(M, npad) {
  padded <- matrix(NA, nrow(M) + 2 * npad, ncol(M) + 2 * npad)
  padded[seq(npad + 1, nrow(M) + npad), seq(npad + 1, ncol(M) + npad)] <- M
  padded
}

This is much more compact code and will also be more efficient. In terms of the search_product function, you have a lot of repeated code that does the same thing for a particular direction. You could avoid that by looping through a set of directions that you want to search:

search_product2 <- function(M, n = 4) {
  npad <- n - 1
  M_pad <- pad_matrix2(M, npad)
  directions <- rbind(c(1, 0), c(0, 1), c(1, 1), c(1, -1))
  all.pos <- expand.grid(r = seq(npad + 1, nrow(M) + npad),
                         c = seq(npad + 1, ncol(M) + npad))
  max(apply(directions, 1, function(direction) {
    max(Reduce("*", lapply(seq(0, n - 1), function(dist) {
      M_pad[cbind(all.pos$r + dist * direction[1],
                  all.pos$c + dist * direction[2])]
    })), na.rm = TRUE)
  }))
}

search_product2(M, 4) == search_product(M, 4)
# [1] TRUE
{ "domain": "codereview.stackexchange", "id": 37528, "tags": "programming-challenge, matrix, r" }
Is the world Markovian according to modern theories (QM, GR, etc.)?
Question: Is the world Markovian according to modern theories (QM, GR, etc.)? According to modern theories, is it true that there is no additional knowledge to be gained from the past for predicting the future if we know everything possible about the state of the universe in the present? Answer: Indeed, they are designed that way. Physics is based on making predictions, and our best theories are designed to make their predictions assuming that you start with a state that has all the information you need to make your predictions. Nothing else is assumed to be able to determine the future evolution, because if we thought it could influence it, then we'd include it as part of the current state. But the state isn't a thing you can put in your pocket and pull it out to show your friends. For instance, in general relativity you might need an initial value formulation, which wouldn't just include the metric at $t=0$, but also its derivatives (lapse and shift, etc.). This is similar to Newtonian mechanics that requires at least the velocities as well as the positions of the parts. In quantum mechanics it is worse, the state has a phase, but there is no known way to measure the phase, only relative phases (and those can still depend on gauge). General Relativity has the same problem if you formulate it in terms of coordinates. So in some sense the commonly used states are too much in that some states that look different are actually describing identical universes. That's a problem we can live with, as long as we have enough states we can deal with having some parts that are ambiguous as long as the predictions are clear. The other thing to keep in mind is that the basic equations of General Relativity are time-reversal invariant, you predict the past as well as you predict the future (so a singularity in the past or the future creates a problem, as would a time machine). So I basically already said yes, but let's look for hidden assumptions and bias. 
You talked about "if we know everything possible about the state of the universe", and that leads to an important distinction: the difference between the state of the universe (that which I described above) and our state of knowledge of the universe (a different issue). People do still argue about which states we should be modelling or which we are modelling. But I haven't seen someone argue that the past (as opposed to the present) is a useful way to supplement our knowledge. Because in the absence of a time machine we don't have direct access to the past, we can only indirectly access the past based on how it has influenced the present. In your linked example of a non-Markovian process, the information about the past is right there in your hand in the present and you just ignore it. In physics, whether information about the past is accessible makes an actual, testable prediction about what we see. For instance, in the classic double-slit experiment, if the particle interacted with the environment in such a way as to leave a record of its past being in one of those slits, then the interference pattern is destroyed. Only when no record survives (even in principle, not just because we are lazy or indifferent) is there interference, and then the question of where it was in the past becomes a subject of academic debate because there is no evidence, and there is evidence of the permanent, forevermore lack of evidence (people can tell stories and make theories, but there is no evidence outside of a particular theoretical vantage point). So modern physics actually goes so far as to say there is evidence that there is no past beyond the present.
{ "domain": "physics.stackexchange", "id": 18992, "tags": "quantum-mechanics, determinism" }
Series Capacitor impedance
Question: A series capacitor, for example, has infinite impedance at f = 0 Hz. Is this statement true? If yes, what is the reason? Thanks. Answer: If you are talking about purely capacitive AC circuits, then we can say the following.
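For reference, the claim follows from the ideal capacitor's impedance, $Z = \dfrac{1}{j\omega C} = \dfrac{1}{j2\pi f C}$, whose magnitude grows without bound as $f \to 0$. A minimal numerical sketch (the 1 µF value is arbitrary):

```python
import math

def cap_impedance_magnitude(f_hz, c_farads):
    """|Z| = 1/(2*pi*f*C) for an ideal capacitor."""
    if f_hz == 0.0:
        return math.inf  # at DC an ideal capacitor blocks all steady current
    return 1.0 / (2.0 * math.pi * f_hz * c_farads)

C = 1e-6  # 1 uF, illustrative
for f in (1000.0, 10.0, 0.1, 0.0):
    print(f, cap_impedance_magnitude(f, C))  # |Z| grows as f falls, infinite at f=0
```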
{ "domain": "physics.stackexchange", "id": 33730, "tags": "electrical-resistance, capacitance" }
Is this a valid method to find percentage of pure ethanol in an impure sample?
Question: To generalize the problem, I will just say: 5 grams of impure ethanol (or any other type of combustible material) is placed inside a bomb calorimeter where volume is constant. If the combustion produces a certain amount of energy, how should I find the mass in grams of pure ethanol contained in that quantity of impure ethanol? Here is what I would do: find the standard molar enthalpy of combustion of ethanol. Write a combustion equation. Find the energy produced in the combustion of one mole of ethanol. Then divide the amount of energy stated in the problem by the energy per mole to find the number of moles of pure ethanol. Finally, convert the moles into grams. But there are two things that make me think that this way of solving the problem is not correct: the quantity of impure ethanol is a mixture of pure ethanol and another, unknown material. What if this unknown material produces part of the energy stated in the problem? The standard molar enthalpy is measured under standard conditions (1 atm, 25 °C). However, the conditions inside the bomb calorimeter are unknown. Answer: What if this unknown material produces part of the energy stated in the problem? The problem would have to state the amount of heat that the unknown material produced. The standard molar enthalpy is measured under standard conditions (1 atm, 25 °C). However, the conditions inside the bomb calorimeter are unknown. We are given some information: the final temperature of the system. But you are right in that we do not know the pressure, and the specific heat of the remaining contents is difficult to calculate.
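A sketch of the asker's proposed calculation, under the stated caveat that all the heat is assumed to come from the ethanol. The 100 kJ heat value is hypothetical; the enthalpy of combustion (about 1367 kJ released per mole) and the molar mass of ethanol (46.07 g/mol) are standard values:

```python
DELTA_H_COMB_KJ_PER_MOL = 1367.0  # heat released per mole of ethanol burned
MOLAR_MASS_G_PER_MOL = 46.07      # C2H5OH

def grams_of_pure_ethanol(heat_released_kj):
    """Energy -> moles -> grams, ignoring any heat from the impurity."""
    moles = heat_released_kj / DELTA_H_COMB_KJ_PER_MOL
    return moles * MOLAR_MASS_G_PER_MOL

print(grams_of_pure_ethanol(100.0))  # about 3.4 g of pure ethanol in the 5 g sample
```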
{ "domain": "chemistry.stackexchange", "id": 2226, "tags": "physical-chemistry, thermodynamics" }
What are the pros and cons of zero padding in a convolution layer?
Question: TensorFlow's conv2d() operation lets you choose between "VALID" (without padding) and "SAME" (with zero-padding). I suppose all other frameworks let you do the same. I'm trying to understand the pros and cons of zero padding: when would you want to use it, and when not? So far, my understanding is that if the filter size is large relative to the input image size, then without zero padding the output image will be much smaller, and after a few layers you will be left with just a few pixels. So to maintain a reasonably sized output, you need zero-padding + stride 1. Is this the main reason for using zero-padding? Is it preferable to avoid it when you can, for example when the filter size is small relative to the input image size? Answer: Summarizing from some other web pages: Pros: A deep network becomes possible, because the output dimension can stay constant after each convolution. Information at the image border is preserved. Cons: Heavier computation, since some computational resources are spent on the padded zeros.
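The size bookkeeping behind this can be made concrete. A minimal sketch of the output-width formulas that TensorFlow documents for the two padding modes (n is the input width, k the filter width, s the stride):

```python
import math

def out_valid(n, k, s=1):
    """Output width with no padding ("VALID")."""
    return (n - k) // s + 1

def out_same(n, k, s=1):
    """Output width with zero padding ("SAME"): independent of k."""
    return math.ceil(n / s)

# A 28-pixel-wide input through three 5x5, stride-1 conv layers:
n = 28
for layer in range(3):
    print(out_valid(n, 5), out_same(n, 5))  # VALID shrinks each layer, SAME stays 28
    n = out_valid(n, 5)
```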
{ "domain": "datascience.stackexchange", "id": 4199, "tags": "neural-network" }
Universal Robot URDF and xacro structure
Question: Hi all, I am trying to create a custom environment for my UR5e robot and having a little bit of a problem understanding the file structure. I would like to have a fixed, non-moving object around 1 meter away from the UR. I want to see both the robot and my fixed object in the Gazebo and RViz setups. So far, I have extracted the CAD model of my object in .stl format, and created a URDF file where I read that .stl file, and I am able to spawn the object both in Gazebo and RViz. My question is about merging that URDF file with the UR setup. Both xacro files here say that they should not be modified: https://github.com/fmauch/universal_robot/tree/calibration_devel/ur_gazebo/urdf How can I merge them together with my new URDF file? Where should I call that new xacro file? Here? https://github.com/fmauch/universal_robot/blob/calibration_devel/ur_gazebo/launch/inc/load_ur.launch.xml#L33 Is it correct/normal that the new object will be a part of robot_description, or should I treat robot_description and object_description separately? Since my object is not attached to the robot but rather something that should spawn in the environment, it didn't feel quite correct that it is a part of the robot_description. I am using ROS Noetic with Ubuntu 20.04. Thank you in advance. Originally posted by rosberrypi on ROS Answers with karma: 75 on 2022-01-18 Post score: 0 Original comments Comment by aarsh_t on 2022-01-18: you can try editing this file and use something like this <node name="spawn_model" pkg="gazebo_ros" type="spawn_model" args=" -file /path/to/urdf_file/with/stl -urdf -x 0 -y 0 -z 0 -model model_name" respawn="false" output="screen"/> Comment by rosberrypi on 2022-01-20: Thank you for the answer. I am able to spawn the object in Gazebo like this; however, how do I also attach it to RViz and let the robot know not to plan a path that intersects with that object? 
Comment by aarsh_t on 2022-01-20: one simple way I can suggest is to use a parameter such as <param name="/object/robot_description" textfile=" /path/to/urdf_file" /> and then in RViz add a new RobotModel with the topic /object/robot_description. For RViz to know the object you might need to add some sensors, or you can remove that part from the robot's workspace. I haven't used the universal_robot package with this problem in mind, so I won't be able to comment on that. Perhaps it's a good idea to raise a new question for that rather than asking in comments. Answer: The robot and the object are independent, so they are typically not described in the same urdf/xacro/sdf file. In Gazebo, you create an instance of an object by "spawning" it into the world at some initial position. After that, the physics simulator determines the behavior of all independent objects, e.g. if something crashes into your object, it may move. If you spawn an object in mid-air, gravity will cause it to fall to the ground. Originally posted by Mike Scheutzow with karma: 4903 on 2022-01-19 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by rosberrypi on 2022-01-20: Thank you for your explanation. What is the correct way of defining that .stl object so that it is visible in RViz and the robot always knows not to hit it? I know that we can add objects to MoveIt's planning interface, but I thought I could directly use the .stl object and its URDF for that. Comment by Mike Scheutzow on 2022-01-21: I don't know how to add an .stl file to the moveit planning scene.
{ "domain": "robotics.stackexchange", "id": 37366, "tags": "ros, urdf, universal-robots, xacro, universal-robot" }
Cannot Find COLLADA Headers
Question: I'm trying to install ROS Groovy on a Lenovo X200 Tablet with openSUSE 12.3. I've installed COLLADA-DOM 2.4, and it installed its headers to /usr/local/include/collada-dom2.4/. I have double and triple-checked, and dae.h is in that folder. However, when I try to run the full desktop install with: sudo ./src/catkin/bin/catkin_make_isolated --install-space /opt/ros/groovy --install -DSETUPTOOLS_DEB_LAYOUT=OFF I get the following error: [100%] Building CXX object CMakeFiles/collada_parser.dir/src/collada_parser.cpp.o /home/abouchard/ros_catkin_ws/src/collada_parser/src/collada_parser.cpp:45:17: fatal error: dae.h: No such file or directory compilation terminated. make[2]: *** [CMakeFiles/collada_parser.dir/src/collada_parser.cpp.o] Error 1 make[1]: *** [CMakeFiles/collada_parser.dir/all] Error 2 make: *** [all] Error 2 Traceback (most recent call last): File "./src/catkin/bin/../python/catkin/builder.py", line 717, in build_workspace_isolated number=index + 1, of=len(ordered_packages) File "./src/catkin/bin/../python/catkin/builder.py", line 497, in build_package install, force_cmake, quiet, last_env, cmake_args, make_args + catkin_make_args File "./src/catkin/bin/../python/catkin/builder.py", line 353, in build_catkin_package run_command(make_cmd, build_dir, quiet) File "./src/catkin/bin/../python/catkin/builder.py", line 198, in run_command raise subprocess.CalledProcessError(proc.returncode, ' '.join(cmd)) CalledProcessError: Command '/opt/ros/groovy/env.sh make -j2 -l2' returned non-zero exit status 2 <== Failed to process package 'collada_parser': Command '/opt/ros/groovy/env.sh make -j2 -l2' returned non-zero exit status 2 Reproduce this error by running: ==> /opt/ros/groovy/env.sh make -j2 -l2 Command failed, exiting. I was thinking that perhaps this was related to the cmake command, and indeed there is no FindCOLLADA_DOM.cmake file on the computer, but I found one online and put it in /usr/share/cmake/Modules to no avail. 
I'm still digging into the cmake documentation to see if I can find any error either in the CMakeLists for collada_parser in ROS or the FindCOLLADA_DOM.cmake script, but I don't seem to be making any headway. Has anyone else run into this that could perhaps give me some pointers? Originally posted by teddybouch on ROS Answers with karma: 320 on 2013-07-03 Post score: 1 Answer: Chalk this one up to a mystery - I uninstalled everything collada from my system, tried again, and it seems to have worked. Go figure. Originally posted by teddybouch with karma: 320 on 2013-07-05 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 14803, "tags": "ros-groovy" }
How can diesel fuel produce less carbon monoxide but more carbon particulates?
Question: My guess at why diesel produces less carbon monoxide is that more heat is required for combustion of diesel fuel, so there is less incomplete combustion. Therefore rather than carbon monoxide being formed and released, the ability to completely combust means that carbon dioxide is often formed instead. Correct me if I'm wrong. However, a risk of diesel fuel is that more carbon particulates are produced. But surely this is just a contradiction, because carbon particulates are a result of a lack of oxygen supply and incomplete combustion. Surely if there is enough complete combustion to form carbon dioxide over carbon monoxide, then there is a high enough oxygen supply and temperature to stop carbon particulates forming? Answer: It seems that the mechanism of particulate formation is not yet well known. This article (preview) states some basics about the composition and formation of Diesel Particulate Matter (DPM): Diesel particulates form a very complex aerosol system. Despite considerable amount of basic research, neither the formation of PM in the engine cylinder, nor its physical and chemical properties or human health effects are fully understood. [...] The basic fractions of DPM are elemental carbon, heavy hydrocarbons derived from the fuel and lubricating oil, and hydrated sulfuric acid derived from the fuel sulfur. DPM contains a large portion of the polynuclear aromatic hydrocarbons (PAH) found in diesel exhaust. Also, it seems that the particulates are formed both during combustion and from gas during dilution. During combustion, carbonaceous agglomerates (soot), metallic ash from lubricating oil, and precursors to particle formation during dilution (sulfur oxides and partially burned hydrocarbons from fuel and lubricating oil) are formed. 
During dilution, materials such as sulfuric acid and other sulfates, as well as heavy hydrocarbons and hydrocarbon derivatives, are for the most part adsorbed onto the carbonaceous agglomerates formed during combustion. Some material may also nucleate by itself, but most nucleation occurs on the agglomerates. PDF with explanation of ultrafine particle formation mechanisms Unfortunately, I found it difficult to find any reason why this is such a big problem with diesel and not with gasoline. Perhaps the composition of the diesel is to blame, not temperature or oxygen supply.
{ "domain": "chemistry.stackexchange", "id": 498, "tags": "combustion, fuel" }
Fundamental units
Question: Is it right that all units in physics can be defined in terms of only mass, length and time? Why is it so? Is there some principle that explains it or is it just observational fact? Answer: Which units are fundamental and which are derived is pretty much a matter of arbitrary convention, not an objective fact about the world. You might think that the number of fundamental units would be well-defined, but even that's not true. Take electric charge for example. In the SI system of units (i.e., the "standard" metric system), charge cannot be expressed in terms of mass, length, and time: you need another independent unit. (In the SI, that unit happens to be the Ampere; the unit of charge is defined to be an Ampere-second.) But sometimes people use different systems of units in which charge can be expressed in terms of mass, length, and time. By decreeing that the proportionality constant in Coulomb's Law be equal to 1, $$ F={q_1q_2\over r^2}, $$ you can define a unit of charge to be (if I've done the algebra right) $(ML^3/T^2)^{1/2}$, where $M,L,T$ are your units of mass, length, time. Whether charge is defined in terms of mass, length, time, or whether it's an independent unit, is a matter of convenience, not a fact about the world. People can and do make different choices about it. Similarly, some people choose to get by with fewer independent units than the three you mention. The most common choice is to decree that length and time have the same units, using the speed of light as a conversion factor. You can even go all the way down to zero independent units, by working in what are often called Planck units. In summary, you can dial up or down the number of "independent" units in your system at will. One more example, which seems silly at first but is actually of some historical interest. You can imagine using different, independent units of measure for horizontal and vertical distances. 
That'd be terribly inconvenient for doing physics, but for many applications it's actually quite convenient. (In aviation, altitudes are often measured in feet, while horizontal distances are measured in miles. In seafaring, leagues are horizontal and fathoms are vertical. Yards are pretty much always used for horizontal distance.) It sounds absurd to think of using different units for different directions, but in the context of special relativity, using different units for space and time (different directions in spacetime) is sort of similar. If we had evolved in a world in which we were constantly zipping around near light speed, so that special relativity was intuitive to us, we'd probably think that it was obvious that distance and time "really" came in the same units.
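The algebra referred to above checks out. Setting the proportionality constant in Coulomb's law to 1 and solving for the dimensions of charge:

$$[q]^2 = [F]\,[r]^2 = \left(\frac{ML}{T^2}\right)(L^2) = \frac{ML^3}{T^2} \quad\Rightarrow\quad [q] = \left(\frac{ML^3}{T^2}\right)^{1/2}$$

so charge can indeed be expressed in mass, length, and time units once that convention is adopted.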
{ "domain": "physics.stackexchange", "id": 1215, "tags": "si-units, unit-conversion, units" }
Do weak acid/weak base neutralisation reactions go to completion?
Question: I'm wondering if reactions that involve a weak acid and a weak base go to completion. For example, say we have equal amounts and equal volumes of acetic acid and ammonia being mixed together. Would this neutralisation reaction go to completion or would it reach a state of equilibrium? $\ce{NH_3 + CH_3COOH <=> NH_4^+ + CH_3COO^-}$ Instinctively, I'm thinking that the reaction would be an equilibrium, since proton transfer should be able to occur between the products ($\ce{NH_4^+ + CH_3COO^-}$) to reform the reactants. Can anyone clarify if this is true? Answer: Weak acids and bases do indeed remain in an equilibrium state with a certain concentration of the free acid and the free base alongside a certain concentration of the deprotonated acid anion and the protonated base cation. In general, you can consider any acid-base reaction to be essentially the following: $$\ce{HA <=> H+ + A-}\tag{1}$$ Which in turn means that the $\mathrm pK_\mathrm a$ value is defined as in $(2)$: $$K_\mathrm a = \frac{[\ce{H+}][\ce{A-}]}{[\ce{HA}]}\tag{2}$$ Reactions $(1)$ and $(2)$ are usually written out in water with hydronium in place of a naked proton but the same principle applies. Of course, a base reaction can be considered essentially the reverse, meaning that the acid constant typically given is actually $\mathrm pK_\mathrm a (\ce{HB+})$ rather than anything directly corresponding to the free base. 
If we now take the reaction of a (weak) acid and a (weak) base, we get equation $(3)$ and the equilibrium constant as given in $(4)$: $$\begin{align}\ce{HA + B &<=> A- + HB+}\tag{3}\\[0.7em] K &= \frac{[\ce{A-}][\ce{HB+}]}{[\ce{HA}][\ce{B}]}\tag{4}\end{align}$$ We can now perform simple mathematics with $(4)$ to arrive at the modified equation $(5)$ as below: $$\begin{align}K &= \frac{[\ce{A-}][\ce{HB+}]}{[\ce{HA}][\ce{B}]}\tag{4}\\[0.7em] &= \frac{[\ce{A-}][\ce{HB+}][\ce{H+}]}{[\ce{HA}][\ce{B}][\ce{H+}]}\\[0.7em] &= \frac{[\ce{H+}][\ce{A-}]}{[\ce{HA}]} \times \frac{[\ce{HB+}]}{[\ce{B}][\ce{H+}]}\\[0.7em] K &= \frac{K_\mathrm a(\ce{HA})}{K_\mathrm a (\ce{HB+})}\tag{5}\end{align}$$ We have thus arrived at a way to determine the equilibrium constant of the proton transfer reaction just from the acidity constants of both participants. If the two acids are weak and have similar $\mathrm pK_\mathrm a$ values, the equilibrium constant will be close to $1$, and thus both sides of the equation will have similar concentrations. For acetic acid ($\mathrm pK_\mathrm a \approx 4.76$) and ammonium ($\mathrm pK_\mathrm a \approx 9.25$), equation $(5)$ gives $K \approx 10^{4.5}$: the proton transfer is strongly product-favored, but it remains an equilibrium rather than going strictly to completion. Using $(5)$, you can also predict a ‘degree of completion’ for any acid-base reaction.
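A quick numerical check of equation $(5)$ for this specific pair, using the standard literature values $\mathrm pK_\mathrm a(\text{acetic acid}) \approx 4.76$ and $\mathrm pK_\mathrm a(\ce{NH4+}) \approx 9.25$:

```python
pKa_acetic = 4.76    # acetic acid
pKa_ammonium = 9.25  # ammonium, the conjugate acid of ammonia

# Equation (5): K = Ka(HA) / Ka(HB+)
K = 10 ** -pKa_acetic / 10 ** -pKa_ammonium
print(K)  # on the order of 3e4: strongly product-favored, yet still an equilibrium
```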
{ "domain": "chemistry.stackexchange", "id": 8942, "tags": "acid-base, equilibrium" }
Probability of random numbers
Question: This is a question I found on the internet under the DSP section, so that's why I am posting it here. Help me understand it please. A computer adds 1000 random numbers that have each been rounded off to the nearest 10$^{\rm th}$. Find the probability that the total round-off error for the sum is $\ge 1$. How can the probability of any event be greater than 1? Answer: The total roundoff error for the sum of $N$ numbers is: $$ S = \sum_{i=0}^{N-1} E_i $$ The roundoff error for the $i$-th number is represented by the random variable $E_i$. If we assume that the random number generator used by the computer yields numbers $X_i$ taken from a uniform distribution, then the difference between each $X_i$ and the nearest tenth (which is the roundoff error $E_i$) is uniformly distributed on the interval $(-\frac{0.1}{2}, \frac{0.1}{2}) = (-0.05, 0.05)$. What we're concerned with, though, is the distribution of $S$. Since $S$ is the sum of $N$ independent, identically distributed (iid) random variables, the central limit theorem tells us that as $N \to \infty$, $S$ will tend to a Gaussian distribution. If we assume that your case of $N=1000$ is "large enough" for the Gaussian assumption to hold, we can easily estimate the probability that you seek. It's certainly possible to exactly calculate the distribution of $S$, but the Gaussian assumption is likely close enough for most applications with such large $N$. A Gaussian distribution is characterized by its first two moments, so if we can find those for $S$, then we have all the information we need. These are easy to calculate for a sum of iid random variables. The mean of $S$ is equal to: $$ \mathbb{E}(S) = \sum_{i=0}^{N-1} \mathbb{E}(E_i) = 0 $$ The variance of $S$ is equal to: $$ \mathbb{E}\left((S - \mathbb{E}(S))^2\right) = \sum_{i=0}^{N-1} \mathbb{E}\left((E_i - \mathbb{E}(E_i))^2\right) $$ Recall that the random variables $E_i$ are distributed uniformly.
It is well known that the uniform distribution over the interval $(a,b)$ has variance $\frac{1}{12}(b-a)^2$. For this case, that yields a variance $\sigma_{E_i}^2 = \frac{0.01}{12}$. Therefore, the variance of the total roundoff error $S$ is $\sigma_{S}^2 = \frac{0.01N}{12}$. So in summary, we can approximate $S$'s distribution as Gaussian with mean zero and variance $\sigma_{S}^2 = \frac{0.01N}{12}$. Based on those parameters, you can easily calculate the estimated probability distribution function (pdf), then integrate that result to arrive at whatever probability you seek. The probability that there is a total roundoff error with magnitude greater than one would be: $$ \begin{align} P(|S| > 1) &= P(S>1 \lor S < -1) \\ &= 1 - P(-1 < S < 1) \\ &= 1 - \int_{-1}^{1}f_S(s)ds \end{align} $$ where $f_S(s)$ is the Gaussian distribution's pdf that we arrived at before.
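Plugging the numbers in makes this concrete. The following sketch evaluates the Gaussian estimate of $P(|S| > 1)$ for $N = 1000$; it assumes the central-limit approximation above is adequate:

```python
# Gaussian estimate of P(|S| > 1) for the sum of N = 1000 roundoff
# errors, each uniform on (-0.05, 0.05). Sketch only: it assumes the
# central-limit approximation derived above is adequate.
import math

N = 1000
var_single = 0.1 ** 2 / 12           # variance of U(-0.05, 0.05)
sigma_S = math.sqrt(N * var_single)  # std dev of the total error S

# P(-1 < S < 1) = erf(1 / (sigma * sqrt(2))) for a zero-mean Gaussian
p_exceeds = 1.0 - math.erf(1.0 / (sigma_S * math.sqrt(2.0)))
print(f"P(|S| > 1) ~ {p_exceeds:.4f}")   # roughly 0.27
```

So even though each individual error is at most $0.05$, there is a better than one-in-four chance that the accumulated error exceeds $1$.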
{ "domain": "dsp.stackexchange", "id": 592, "tags": "random" }
Dimensional analysis in differential equations
Question: I know how to use the Buckingham Pi theorem to, for example, derive from the functional equation for a simple pendulum, with the usual methods also described here $1=fn\left[T_{period}, m, g, L\right]$ $1=fn\left[\frac{g}{L}T_{period}^2\right]=fn\left[\Pi_1\right]$ Everything seemed to work well until I tried to apply the theorem to the governing differential equation: $(I): m\frac{d^2\Theta}{dt^2}L = -\sin(\Theta)\,mg$ $1 = fn\left[\Theta, m, g, L, t\right]$ $1 = fn\left[\Theta, \frac{g}{L}t^2\right] = fn\left[\Theta, \Pi_1\right]$ This seems to work so far, but when I now try to rewrite $I$ in terms of $\Theta, \Pi_1$, I can't deal with the derivative. I tried something myself and it seems to work out, but I don't know how rigorous the argumentation is and if the result can be stated in a better way. See the following: Since we want to take a derivative with respect to time, we define a new dimensionless variable $\xi$ with $\xi\bar{t} = t$ for arbitrary $\bar{t} \neq 0$. We also introduce a new function $\Omega\left(\xi\right) = \Theta\left(\xi\bar{t}\right)=\Theta\left(t\right)$. Computing $\frac{d\Omega}{d\xi} = \bar{t}\,\Theta'\left(\xi\bar{t}\right)=\bar{t}\,\Theta'\left(t\right)$ and likewise for higher derivatives.
We now write the functional equation including derivatives like so $1 = fn\left[\frac{d^2\Theta}{dt^2}, \Theta, m, g, L, t\right]$ substituting $t=\xi\bar{t}, \frac{d^2\Theta}{dt^2}=\frac{1}{\bar{t}^2}\frac{d^2\Omega}{d\xi^2}, \Theta=\Omega$ we have the new functional equation $1 = fn\left[\frac{1}{\bar{t}^2}\frac{d^2\Omega}{d\xi^2}, \Omega, m, g, L, \bar{t}, \xi\right]$ Because the derivative is now in terms of two nondimensional parameters, everything else works out with Buckingham Pi: $1 = fn\left[\frac{d^2\Omega}{d\xi^2}, \Omega, \frac{g}{L}\bar{t}^2, \xi\right] = fn\left[\frac{d^2\Omega}{d\xi^2}, \Omega, \Pi_2, \xi\right]$ When resubstituting in the actual equation we get $\frac{d^2\Omega}{d\xi^2} = -\sin\left(\Omega\right)\Pi_2$ which indeed yields the correct results. Although I can't seem to find any mistakes in my reasoning, I am not quite happy with the arbitrary choice of $\bar{t}$. The choice does influence the group $\Pi_2$ (although only in magnitude) and will also influence boundary conditions when they are given. As far as I can see, it does not influence the result when given in natural parameter form (in terms of $m, g, L, t$) because every $\bar{t}$ will get paired again with one $\xi$, yielding just $t$. But I have not been able to prove this point. I am left with three questions: What is the standard approach in the literature that I can study? Is there a problem with my derivation so far? Can I somehow prove that the choice of $\bar{t}$ does not matter in the final solution? Answer: You have done more than half the work yourself. It is convenient to define $\Pi_1\equiv \sqrt{\frac{g}{L}}t$. There is nothing wrong with the way you have defined it, but my definition reduces work in what follows.
Rewrite the derivatives as: $\frac{d\theta}{dt}=\frac{d\theta}{d\Pi_1}\frac{d\Pi_1}{dt}=\frac{d\theta}{d\Pi_1}\sqrt{\frac{g}{L}}$ $\frac{d^2\theta}{dt^2}=\frac{d^2\theta}{d\Pi_1^2}\left(\frac{d\Pi_1}{dt}\right)^2=\frac{d^2\theta}{d\Pi_1^2}\frac{g}{L}$ So your differential equation becomes $\frac{d^2\theta}{d\Pi_1^2}=-\sin \theta$.
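The conclusion can be checked numerically. The sketch below (not part of the original exchange; the parameter values are arbitrary) integrates both the dimensional and the dimensionless pendulum equations with a small fixed-step RK4 and confirms they describe the same motion:

```python
# Numerical sanity check (a sketch, not part of the original posts):
# integrate theta'' = -(g/L) sin(theta) in physical time and
# Omega'' = -sin(Omega) in the dimensionless time Pi_1 = sqrt(g/L) t,
# then confirm theta(t) == Omega(sqrt(g/L) t).
import math

def rk4(f, y0, t_end, n):
    """Fixed-step RK4 for a 2-component system y' = f(y)."""
    h = t_end / n
    y = list(y0)
    for _ in range(n):
        k1 = f(y)
        k2 = f([y[i] + 0.5 * h * k1[i] for i in range(2)])
        k3 = f([y[i] + 0.5 * h * k2[i] for i in range(2)])
        k4 = f([y[i] + h * k3[i] for i in range(2)])
        y = [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(2)]
    return y

g, L = 9.81, 2.0       # illustrative values
theta0, t_end, n = 0.5, 1.0, 2000

# Dimensional form: y = (theta, dtheta/dt)
dim = rk4(lambda y: [y[1], -(g / L) * math.sin(y[0])],
          [theta0, 0.0], t_end, n)

# Dimensionless form: y = (Omega, dOmega/dPi_1), integrated up to
# Pi_1 = sqrt(g/L) * t_end
nondim = rk4(lambda y: [y[1], -math.sin(y[0])],
             [theta0, 0.0], math.sqrt(g / L) * t_end, n)

print(dim[0], nondim[0])   # the two angles agree
```

The agreement holds for any choice of $g$ and $L$, which is exactly the point of the nondimensionalization.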
{ "domain": "physics.stackexchange", "id": 32928, "tags": "homework-and-exercises, dimensional-analysis, differential-equations" }
How to calculate the overlap of the orthogonal state?
Question: This is probably a very obvious question, but I am going through this problem set and I don't understand why in 1b) it says that it is obvious that $|\langle\psi_1^\perp|\psi_2\rangle|=\sin\theta$ given that $|\langle\psi_1|\psi_2\rangle| = \cos\theta$. Answer: TL;DR: These inner products are equal to the amplitudes and therefore the squares of their magnitudes sum to one. By the Pythagorean identity, $\sin^2\theta + \cos^2\theta = 1$, so if one of the amplitudes is $\cos\theta$ then the magnitude of the other must be $|\sin\theta|$. Since $\{|\psi_1\rangle, |\psi_1^\perp\rangle\}$ is a basis, we can expand $|\psi_2\rangle$ as $$ |\psi_2\rangle = \alpha |\psi_1\rangle + \beta |\psi_1^\perp\rangle\tag1 $$ where $|\alpha|^2 + |\beta|^2=1$. Moreover, since the basis is orthonormal, we can compute $\alpha$ and $\beta$ in terms of inner products $$ \alpha = \langle \psi_1|\psi_2\rangle \\ \beta = \langle \psi_1^\perp|\psi_2\rangle $$ as is easy to check by taking the inner product of $(1)$ with the elements of the dual basis. Now, from $|\alpha|^2 + |\beta|^2=1$ we see that $$ |\langle \psi_1^\perp|\psi_2\rangle| = |\beta| = \sqrt{1 - |\alpha|^2} = \sqrt{1 - |\langle \psi_1|\psi_2\rangle|^2} = \sqrt{1 - \cos^2\theta} = |\sin\theta| $$ but $\theta \in (0, \pi)$ so $|\langle \psi_1^\perp|\psi_2\rangle|=\sin\theta$.
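The identity is easy to verify numerically. A small sketch (the angle and relative phase below are arbitrary) working in the $\{|\psi_1\rangle, |\psi_1^\perp\rangle\}$ basis:

```python
# Numeric check of the identity (a sketch): build psi2 in the
# {psi1, psi1_perp} basis with a relative phase and confirm
# |<psi1|psi2>| = cos(theta) and |<psi1_perp|psi2>| = sin(theta).
import cmath, math

theta = 0.7            # any angle in (0, pi)
phi = 1.3              # arbitrary relative phase

psi1      = [1, 0]
psi1_perp = [0, 1]
psi2 = [math.cos(theta), cmath.exp(1j * phi) * math.sin(theta)]

def inner(a, b):
    """<a|b> = sum over conj(a_i) * b_i."""
    return sum(complex(x).conjugate() * y for x, y in zip(a, b))

overlap1 = abs(inner(psi1, psi2))        # cos(theta)
overlap2 = abs(inner(psi1_perp, psi2))   # sin(theta)
print(overlap1, overlap2)
assert abs(overlap1 ** 2 + overlap2 ** 2 - 1) < 1e-12
```

Note that the relative phase drops out entirely once the magnitudes are taken, which is why only $\theta$ appears in the answer.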
{ "domain": "quantumcomputing.stackexchange", "id": 2536, "tags": "quantum-state, textbook-and-exercises, linear-algebra" }
JPL DE Documentation
Question: I'm currently trying to write a C++ library that makes accessing binary data from a JPL ephemeris file easier. So far I have sifted through the web, trying to find good and thorough documentation about what is actually stored in there and how to work with the data. Does anybody know of a PDF/book or something similar that explains anything of that sort? Thanks in advance! Answer: Be aware there are differing formats. SPK is the newer format; the older text format seems to be deprecated. It's worth having a look at the documentation on this jplephem python library as it explains the format reasonably well. If you do use SPK, you'll need to read all of the documentation in NASA's SPK repository. I'm not sure if this is from the same author as this library; worth a read as it discusses the FITS format.
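For a feel of what parsing the binary layout involves, here is a sketch of reading the first fields of a DAF/SPK file record. The field order follows NAIF's DAF documentation, but treat the offsets as something to verify against the spec before relying on them; a synthetic record is used since no real kernel is assumed to be at hand:

```python
# Sketch of parsing the start of a DAF/SPK file record. Field layout
# follows NAIF's DAF documentation (LOCIDW, ND, NI, LOCIFN, FWARD,
# BKWD, FREE, LOCFMT); double-check the offsets against the spec
# before using this in earnest.
import struct

HEADER = struct.Struct("<8sii60siii8s")  # little-endian layout

def parse_daf_header(raw):
    idword, nd, ni, ifname, fward, bkwd, free, fmt = HEADER.unpack_from(raw)
    return {
        "idword": idword.decode("ascii").rstrip(),   # e.g. "DAF/SPK"
        "nd": nd,            # doubles per summary (2 for SPK)
        "ni": ni,            # integers per summary (6 for SPK)
        "ifname": ifname.decode("ascii").rstrip(),
        "fward": fward,      # first summary record number
        "bkwd": bkwd,        # last summary record number
        "free": free,        # first free address
        "format": fmt.decode("ascii").rstrip(),      # "LTL-IEEE" / "BIG-IEEE"
    }

# Round-trip on a synthetic record, since no real kernel is at hand:
fake = HEADER.pack(b"DAF/SPK ", 2, 6, b"demo kernel".ljust(60),
                   7, 7, 1234, b"LTL-IEEE")
print(parse_daf_header(fake))
```

The same field-by-field approach translates directly to C++ with a packed struct or manual byte reads, with the LOCFMT string telling you which endianness to use for the doubles that follow.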
{ "domain": "astronomy.stackexchange", "id": 2165, "tags": "solar-system, ephemeris" }
no ros workspace
Question: I am a beginner with ROS. I am trying the tutorials, and I do not find ros workspace in the ros directory. Do I have to create it? Thanks, Morpheus Originally posted by Morpheus on ROS Answers with karma: 111 on 2011-11-06 Post score: 0 Answer: Yes, you have to create the directory ros_workspace. Go to your /home/user directory and type mkdir ros_workspace Look at the link below for more details. http://www.ros.org/wiki/ROS/Tutorials/InstallingandConfiguringROSEnvironment Originally posted by AlgorithmSeeker with karma: 46 on 2011-11-06 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 7201, "tags": "ros" }
Calculate loan rate based on payment and duration in months
Question: So I have the following code which works well-ish. Looking for ideas on how to improve it. public void testItAll() { System.err.println(determineRate(226.34, 25.00, 12)); System.err.println(determineRate(5800.00, 29.00, 31)); System.err.println(determineRate(4000.00, 25.00, 460)); System.err.println(determineRate(4000.00, 111.00, 173)); System.err.println(determineRate(15000.00, 270.00, 60)); System.err.println(determineRate(7000.00, 154.00, 60)); System.err.println(determineRate(27002.51, 315.00, 85)); System.err.println(determineRate(17118.33, 270.14, 73)); System.err.println(determineRate(5170.71, 143.00, 48)); System.err.println(determineRate(45297.34, 400.00, 135)); System.err.println(determineRate(6000.00, 113.00, 60)); System.err.println(determineRate(23058.98, 219.59, 145)); System.err.println(determineRate(47475.76, 390.44, 181)); System.err.println(determineRate(69691.48, 554.00, 180)); System.err.println(determineRate(39310.00, 725.00, 60)); System.err.println(determineRate(45000.00, 316.00, 180)); System.err.println(determineRate(14071.01, 45.00, 220)); System.err.println(determineRate(16875.00, 198.00, 120)); System.err.println(determineRate(66080.04, 295.00, 12)); System.err.println(determineRate(120000.00, 664.00, 120)); System.err.println(determineRate(58000.00, 387.19, 120)); System.err.println(determineRate(351213.00, 1993.11, 25)); System.err.println(determineRate(139500.00, 1000.00, 300)); System.err.println(determineRate(2400.00, 1875.00, 180)); System.err.println(determineRate(193155.00, 2002.00, 120)); System.err.println(determineRate(40800.00, 507.46, 36)); System.err.println(determineRate(198375.00, 1530.00, 240)); System.err.println(determineRate(22700.00, 450.00, 60)); System.err.println(determineRate(20000.00, 622.85, 999)); System.err.println(determineRate(629999.32, 25.00, 300)); System.err.println(determineRate(298905.41, 0.01, 360)); System.err.println(determineRate(329850.00, 2075.00, 240)); 
System.err.println(determineRate(21163.58, 322.00, 72)); System.err.println(determineRate(42342.08, 320.00, 73)); System.err.println(determineRate(33000.00, 335.00, 120)); System.err.println(determineRate(21234.64, 207.73, 144)); System.err.println(determineRate(103400.00, 787.00, 240)); System.err.println(determineRate(82262.45, 565.00, 120)); System.err.println(determineRate(73300.00, 610.00, 60)); System.err.println(determineRate(131948.07, 528.00, 180)); System.err.println(determineRate(14000.00, 102.00, 180)); System.err.println(determineRate(74847.00, 610.00, 240)); System.err.println(determineRate(50000.00, 544.00, 120)); System.err.println(determineRate(15000.00, 167.00, 120)); System.err.println(determineRate(741940.00, 4135.00, 60)); System.err.println(determineRate(62540.34, 999.00, 84)); System.err.println(determineRate(27277.50, 325.00, 120)); System.err.println(determineRate(24435.57, 375.00, 72)); System.err.println(determineRate(3000.00, 104.00, 0)); System.err.println(determineRate(2927.03, 105.00, 36)); System.err.println(determineRate(11000.00, 128.00, 120)); System.err.println(determineRate(8000.00, 180.00, 60)); System.err.println(determineRate(13459.32, 206.90, 84)); System.err.println(determineRate(25828.47, 277.00, 145)); System.err.println(determineRate(19395.36, 250.00, 120)); System.err.println(determineRate(240000.00, 245058.08, 1)); System.err.println(determineRate(65148.68, 450.00, 180)); System.err.println(determineRate(86000.00, 719.00, 120)); System.err.println(determineRate(34147.83, 298.27, 181)); System.err.println(determineRate(8230.29, 150.00, 73)); System.err.println(determineRate(85993.71, 659.54, 360)); System.err.println(determineRate(110000.00, 515.43, 92)); System.err.println(determineRate(563000.00, 3555.13, 360)); System.err.println(determineRate(3000.00, 40.00, 61)); System.err.println(determineRate(440.67, 30.00, 312)); System.err.println(determineRate(457.50, 33.00, 371)); System.err.println(determineRate(13015.00, 
235.00, 60)); System.err.println(determineRate(14713.16, 325.00, 48)); System.err.println(determineRate(27110.25, 415.38, 72)); System.err.println(determineRate(14819.92, 225.38, 72)); System.err.println(determineRate(10000.00, 264.00, 48)); System.err.println(determineRate(7900.00, 143.00, 59)); System.err.println(determineRate(34970.94, 499.00, 84)); System.err.println(determineRate(41110.61, 426.81, 120)); System.err.println(determineRate(69627.21, 726.25, 120)); System.err.println(determineRate(25609.00, 260.00, 120)); System.err.println(determineRate(18100.00, 334.00, 59)); System.err.println(determineRate(10559.00, 28.00, 92)); System.err.println(determineRate(102000.00, 748.11, 181)); System.err.println(determineRate(6614368.75, 0.01, 47)); System.err.println(determineRate(12125.77, 174.00, 84)); System.err.println(determineRate(25667.70, 385.33, 84)); System.err.println(determineRate(1992.50, 75.00, 0)); System.err.println(determineRate(1815.38, 75.00, 0)); System.err.println(determineRate(3527.52, 25.00, 0)); System.err.println(determineRate(13036.39, 262.00, 64)); System.err.println(determineRate(22000.00, 190.00, 144)); } private double determineRate(double loanAmount, double payment, int termInMonths) { //initial guess .05 5% rate. double rateGuess = 0.05; double calculatedPayment = 0.0; int wag = 19; int times = 0; do { times++; calculatedPayment = calculatePayment(loanAmount, rateGuess, termInMonths); if (payment < calculatedPayment) { //rate needs to go down by a percentage of the difference. double rateGuessPrior = rateGuess; rateGuess = Math.abs(rateGuess - Math.abs(((rateGuess * (1 - (Math.min(calculatedPayment, payment) / Math.max(calculatedPayment, payment)))) * Math.max(wag - times, 1)))); if (rateGuessPrior < rateGuess) { rateGuess = rateGuessPrior - 0.01; // remove a percent. safety net. } } else { //rate needs to go up by a percentage of the difference. 
double rateGuessPrior = rateGuess; rateGuess = Math.abs(rateGuess + Math.abs(((rateGuess * (1 - (Math.min(calculatedPayment, payment) / Math.max(calculatedPayment, payment)))) * Math.max(wag - times, 1)))); if (rateGuessPrior > rateGuess) { rateGuess = rateGuessPrior + 0.01; // add a percent. safety net. } } } while (Math.max(payment, calculatedPayment) - Math.min(payment, calculatedPayment) > 0.05); System.err.println("It took: " + times + " times to complete."); System.err.println("for: " + loanAmount + ", " + payment + ", " + termInMonths); return Double.isNaN(rateGuess) ? 0.0 : BigDecimal.valueOf(rateGuess).multiply(BigDecimal.valueOf(100.0)).setScale(5, BigDecimal.ROUND_HALF_UP).doubleValue(); } private double calculatePayment(double loanAmount, double rate, double termInMonths) { return (loanAmount * ((rate/12.0)*(Math.pow((1 + (rate/12.0)), termInMonths)))) / (Math.pow((1 + (rate/12.0)), termInMonths) - 1); } Answer: Unit testing public void testItAll() { ... } Kind of a good start... System.err.println(determineRate(226.34, 25.00, 12)); Eh, what? One, why are you printing out on the standard error stream? Two, how do you complete the so-called 'testing'? By manually checking the output matches? This is where unit testing comes in! A proper unit test should let a developer: Specify some test input. Process the input to some output to be recorded. Assert that the recorded output matches an expected result. There are a handful of good Java unit testing frameworks, and I will use TestNG as an example below. You start off by annotating a test method so that the framework recognizes that it needs to run that test (this is dependent on your unit testing framework), and an underlying method that calls your method to test: @Test public void testOne() { doTest(226.34, 25.00, 12, 55.50992); } public void testLoanRate(double loanAmount, double payment, int termInMonths, double expectedRate) { // ...
} As you can see, its method signature is very similar to your method, because you need the desired inputs. The extra parameter, expectedRate, lets you assert that the calculation is correct. The body of the method can be something like: public void testLoanRate(double loanAmount, double payment, int termInMonths, double expectedRate) { // assuming determineRate() can be made static double result = determineRate(loanAmount, payment, termInMonths); // following static method is provided by TestNG // the third argument controls the absolute tolerable difference, // an arbitrary small amount is chosen as an example Assert.assertEquals(result, expectedRate, 0.0001d); // optionally print inputs and output when successful System.out.printf("Amount: %f, Payment: %f, Term in months: %d, Result: %f%n", loanAmount, payment, termInMonths, result); } Testing a number of inputs and expected results is relatively easy with TestNG's parameterized testing feature. You need a @DataProvider, and link it up with the @Test method on the testLoanRate() method now: @DataProvider(name = "test-cases") public Iterator<Object[]> getTestCases() { return Arrays.asList( new Object[] { 226.34, 25.00, 12, 55.50992 }, new Object[] { 5800.00, 29.00, 31, 0 }, // ??? new Object[] { 4000.00, 25.00, 460, 6.97672 }, /* ... */ ).iterator(); } @Test(dataProvider = "test-cases") public void testLoanRate(double loanAmount, double payment, int termInMonths, double expectedRate) { double result = determineRate(loanAmount, payment, termInMonths); Assert.assertEquals(result, expectedRate, 0.0001d); System.out.printf("Amount: %f, Payment: %f, Term in months: %d, Result: %f%n", loanAmount, payment, termInMonths, result); } When you run this unit test, TestNG will iterate through all the test cases to assert the results. This eliminates the need to manually check every calculation. 
Magic numbers and math private double calculatePayment(double loanAmount, double rate, double termInMonths) { return (loanAmount * ((rate/12.0)*(Math.pow((1 + (rate/12.0)), termInMonths)))) / (Math.pow((1 + (rate/12.0)), termInMonths) - 1); } Using just one example, 12.0 seems to be used a lot here. You should consider putting your magic numbers into an easily identifiable constant so that they can be reused in a consistent and readable manner. Math.max(payment, calculatedPayment) - Math.min(payment, calculatedPayment) > 0.05 Another way of expressing this can be just: Math.abs(payment - calculatedPayment) > 0.05
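Beyond style, the hand-tuned guessing loop in determineRate() can be replaced entirely: the payment is monotone in the rate, so plain bisection converges reliably. A sketch in Python rather than Java (the numbers are illustrative, and the rate is returned as a decimal fraction rather than a percentage):

```python
# An alternative to the hand-tuned guessing loop: bisection on the
# annual rate. Sketch only, using the same amortization formula as
# calculatePayment(); returns the rate as a decimal fraction.
def payment(loan, annual_rate, months):
    r = annual_rate / 12.0
    if r == 0:
        return loan / months      # zero-rate edge case
    f = (1 + r) ** months
    return loan * r * f / (f - 1)

def determine_rate(loan, target_payment, months, tol=1e-9):
    """Find the annual rate whose monthly payment matches target_payment."""
    lo, hi = 0.0, 10.0            # bracket: 0% to 1000% annual
    for _ in range(200):          # hard cap; tol is reached long before this
        mid = (lo + hi) / 2
        if payment(loan, mid, months) < target_payment:
            lo = mid              # payment too small -> rate must rise
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

print(determine_rate(1000.0, 88.85, 12))   # about 0.12 (12% annual)
```

Because each step halves the bracket, the iteration count is bounded and predictable, with no need for the "wag" damping factor or the +/- 0.01 safety nets.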
{ "domain": "codereview.stackexchange", "id": 20661, "tags": "java, finance" }
Multi-tag bundles detection of ar_track_alvar
Question: I have a problem detecting multi-tag bundles through ar_track_alvar. According to the wiki, To create a bundle, first choose which tag you want to be the master tag. Treat the center of the master tag as (0,0,0). Then, after placing the rest of the tags, measure the x, y, and z coordinate for each of the 4 corners of all of the tags, relative to the master tag origin. Enter these measurements for each tag into the XML file starting with the lower left corner and progressing clockwise around the tag. So I think I should add points to the XML file in this sequence: lower left -> upper left -> upper right -> lower right. However, the example XML file included in ar_track_alvar does not seem to follow that rule. (It progresses counter-clockwise around the tag, e.g. lower left -> lower right -> upper right -> upper left.) I also tried to make the XML file through createMarker. However, it is also weird. The 4 points generated by it are center -> lower right -> upper right -> upper left. I tried all three of them. When I check the result through rviz, in all three cases the master tag cannot be detected (the red square marker appears at a different position and orientation). However, the other markers except for the master one are detected well as green markers. So what did I do wrong, or what more do I have to do? Thank you. Originally posted by zieben on ROS Answers with karma: 118 on 2013-04-10 Post score: 0 Answer: Yes, you have to consider the corners counter-clockwise. But I have never used createMarker. Originally posted by jacky_90 with karma: 101 on 2013-04-11 This answer was ACCEPTED on the original site Post score: 0
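For reference, generating the corner list for a flat tag in the counter-clockwise order that worked here (lower left, lower right, upper right, upper left) can be sketched as follows; the XML element name printed below is illustrative, so match it to your bundle file's actual schema:

```python
# Sketch of generating the four corner coordinates for a flat tag of
# side `size` centred at (cx, cy, 0), in counter-clockwise order
# (lower-left, lower-right, upper-right, upper-left), relative to the
# master tag origin as the question describes. The printed XML element
# name is illustrative only.
def tag_corners(cx, cy, size):
    h = size / 2.0
    return [
        (cx - h, cy - h, 0.0),   # lower left
        (cx + h, cy - h, 0.0),   # lower right
        (cx + h, cy + h, 0.0),   # upper right
        (cx - h, cy + h, 0.0),   # upper left
    ]

# Example: master tag of side 4.4 (units as measured) at the origin
for x, y, z in tag_corners(0.0, 0.0, 4.4):
    print(f'<corner x="{x}" y="{y}" z="{z}" />')
```

Computing the corners from a single centre point and side length avoids the measurement-transcription errors that hand-entering twelve numbers per tag invites.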
{ "domain": "robotics.stackexchange", "id": 13761, "tags": "ros, ros-fuerte, ar-track-alvar" }
How does density affect gravity?
Question: Say we have two masses, mass A and mass B. These two masses are identical in every dimension. The only difference is the density. Do they not curve the same amount of space-time, and if not, why? Answer: Let's assume our two masses are spherical and not rotating, and they have the same mass. In that case Birkhoff's theorem tells us the geometry outside the masses is the same in both cases, i.e. the Schwarzschild metric. So if you are some distance $r$ away, where $r$ is greater than the radius of either object, then the curvature is exactly the same. You would not be able to tell the difference between the two objects from their gravitational fields. However, if one object is very dense while the other is far less dense, e.g. one is a solid sphere and the other a spherical shell, then you could get much closer to the denser object before meeting its surface. This means the spacetime curvature would be greater at the surface of the solid object than at the surface of the shell.
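The closing point can be illustrated with a Newtonian sketch (the full relativistic statement is Birkhoff's theorem; the constants and radii below are illustrative):

```python
# Newtonian sketch of the answer's point: a dense solid sphere and a
# larger spherical shell of the same mass M have identical exterior
# fields, but the denser body lets you approach closer before hitting
# its surface, where the field is stronger. Values are illustrative.
G = 6.674e-11   # m^3 kg^-1 s^-2
M = 5.0e24      # kg, the same for both bodies

R_solid = 1.0e6   # m, small dense sphere
R_shell = 5.0e6   # m, large hollow shell

def field(r, radius, hollow):
    """Gravitational acceleration at distance r from the centre."""
    if r >= radius:
        return G * M / r ** 2      # exterior: identical for both bodies
    return 0.0 if hollow else G * M * r / radius ** 3  # interior

r_far = 1.0e7
print(field(r_far, R_solid, hollow=False) == field(r_far, R_shell, hollow=True))
print(field(R_solid, R_solid, hollow=False))   # strong field at the dense surface
print(field(R_solid, R_shell, hollow=True))    # zero inside the shell
```

Outside both surfaces the two fields are indistinguishable; the difference only shows up in the region between the two radii, which only the denser object's exterior reaches.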
{ "domain": "physics.stackexchange", "id": 55769, "tags": "general-relativity, gravity, spacetime, curvature, density" }
Could we eradicate mosquitoes?
Question: Researchers have proposed the application of CRISPR/Cas9 and gene drive to genetically alter wild mosquito populations such that they don't transmit malaria. The government of New Zealand has announced a program to eliminate several invasive mammals from that island, also using gene drive (among other things). Could we use this technology to completely eradicate from the world all species of mosquito that prey on humans? Also, could we accurately predict the extent of the resulting ecological disruption so we could decide if it was worth it? Answer: Could we use this technology to completely eradicate from the world all species of mosquito that prey on humans? Yes, implemented correctly, a gene drive has this capability. Also, could we accurately predict the extent of the resulting ecological disruption so we could decide if it was worth it? The current scientific and political consensus is that, no, we can't predict the potential consequences with high enough confidence to move forward with such a large-scale maneuver. One of the leading scientists studying and improving gene drives, Kevin Esvelt at the MIT Media Lab, instead supports a "daisy chain gene drive" which is pre-programmed to weaken with each successive generation, allowing humans to effectively control the spread of gene drives to a specific geographic area and time frame.
{ "domain": "biology.stackexchange", "id": 7007, "tags": "ecology, extinction, crispr" }
[Resolved] Communication between Kinetic and Indigo only working one-way
Question: I am currently trying to set up a master-slave configuration using my laptop (Ubuntu 16.04 w/ ROS Kinetic) as master, an UDOO Quad (Ubuntu 14.04.5 LTS (GNU/Linux 3.14.56-udooqdl-02044-gddaad11 armv7l) w/ ROS Indigo ARM) as a client, and a router (TP-LINK TL-WR841N V7) to set up the network for both devices. I've already done all the steps given here, where I set the ROS_MASTER_URI to the IP of my PC. So far I can see topics via rostopic list either on my PC or via ssh on the UDOO. The problem I have is that when I try a simple publisher node like rostopic pub /testing std_msgs/String hello, I can only get the message using rostopic echo /testing when I do it on the machine I am publishing from (even through ssh); I get nothing if I do it on the other machine. I've read other questions like #q9915 and #q76279. I've tried to disable the firewall and set the ROS_HOSTNAME, but neither of those solutions worked for me. My intuition says to me that it should not be a problem with the messages between the machines, since I am only sending a String, which is one of the basic std_msgs. Could it be a networking problem? Could it be a compatibility problem since I am running Kinetic on one device and Indigo on the other? Any help will be appreciated Originally posted by manu-diaz-zapata on ROS Answers with karma: 16 on 2019-01-22 Post score: 0 Original comments Comment by ahendrix on 2019-01-22: This sounds like a networking issue. I did a talk about network troubleshooting at ROSCon a few years back: https://vimeo.com/67806888 . Comment by manu-diaz-zapata on 2019-01-22: If I have time I'll look into that dnsmasq setup for my lab, because it sounds like it will save us some hassle. I've already added on both machines the IP of the other to /etc/hosts. And the weird thing is that before I went home, I managed to listen from my PC to something published on the UDOO. Comment by manu-diaz-zapata on 2019-01-22: I think I'll try rosnode info and see what debugging I can do from there.
Answer: I just found the reason why it wasn't working. Just needed to give the IP of my PC on the WLAN to the ROS_IP environment variable on my PC. Once I disabled the firewall, everything worked as expected. Originally posted by manu-diaz-zapata with karma: 16 on 2019-01-23 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by tfoote on 2019-01-24: I'm glad that you found the solution. I've accepted your answer. Now that you have enough karma, you should be able accept your own answer in the future.
{ "domain": "robotics.stackexchange", "id": 32319, "tags": "ros-kinetic, ros-indigo" }
Do stars vary their own brightness?
Question: All I could find on the internet was about how stars vary in brightness depending on their distance to Earth, temperature, type of star... But my question is, can a star change its own brightness? Answer: They can, and some do. These stars are called variable stars, because their luminosities as observed from Earth vary over time, often (though not always) in a regular period. Here are some broad categories: Pulsating stars, where fluctuations lead to increases and decreases in size or temperature, which in turn produce changes in the star's brightness. Example: The oft-cited Cepheid variables. Eclipsing binary stars involve two stars orbiting each other. When one passes between the other and Earth, the combined luminosity of the system appears to decrease, even though the stars' intrinsic luminosities are probably constant. Example: Algol. So-called eruptive variables (often lumped together with cataclysmic variables, which are different) may have irregular outbursts caused by flares or other phenomena. Example: Luminous blue variables (LBVs). The change in brightness and the length of the variations depend on the type of variable star in question. The luminosity can vary by anywhere from a fraction of a magnitude to many magnitudes. Likewise, the period (if there is one) can range from hours to years.
{ "domain": "astronomy.stackexchange", "id": 2215, "tags": "variable-star" }
Embedded conditional code compaction
Question: I'm porting some AVR code from PROGMEM/PGM_P to __flash, and I want to reduce the amount of conditional compilation I need to do in the code. Here's all the code (but keep in mind that only the parts in the conditionals should need changing): #define F_CPU 12000000 #include <avr/io.h> //#undef __FLASH #ifndef __FLASH #include <avr/pgmspace.h> #define FLASH(x) const x PROGMEM #else #define FLASH(x) const __flash x #endif #include <avr/interrupt.h> #include <avr/sleep.h> #define M_CPORT PORTC #define M_CDIR DDRC #define M_C1 PC7 #define M_C2 PC6 #define M_C3 PC5 #define M_C4 PC4 #define M_C5 PC3 #define numCols 5 #define M_RPORT PORTA #define M_RDIR DDRA #define M_R1 PA0 #define M_R2 PA1 #define M_R3 PA2 #define M_R4 PA3 #define M_R5 PA4 #define M_R6 PA5 #define M_R7 PA6 #define numRows 7 // bit values for each row FLASH(unsigned char) rows[] = {_BV(M_R1), _BV(M_R2), _BV(M_R3), _BV(M_R4), _BV(M_R5), _BV(M_R6), _BV(M_R7)}; // byte values for each row of each frame FLASH(unsigned char) letterI[7] = {0x1f, 0x4, 0x4, 0x4, 0x4, 0x4, 0x1f}; ... #ifdef __FLASH const __flash unsigned char const * const __flash sequence[] = { #else unsigned PGM_P const PROGMEM sequence[] = { #endif letterI, ... 
}; // how long to sustain each row, based on the number of dots in the row const unsigned int rowDur[6] = {0, 140, 160, 180, 200, 220}; unsigned char currFrame, currSlice; unsigned char frameBuffer[7]; // optimized routine to count number of 1s in a frame row unsigned char count5Bits(unsigned char v) { asm volatile("clr __tmp_reg__\n\t" "ror %[val]\n\t" "adc __tmp_reg__,__zero_reg__\n\t" "ror %[val]\n\t" "adc __tmp_reg__,__zero_reg__\n\t" "ror %[val]\n\t" "adc __tmp_reg__,__zero_reg__\n\t" "ror %[val]\n\t" "adc __tmp_reg__,__zero_reg__\n\t" "ror %[val]\n\t" "adc __tmp_reg__,__zero_reg__\n\t" "mov %[val], __tmp_reg__" : [val] "=&r" (v) :); return v; } // advance the frame, oring the dots from the next image frame in turn ISR(TIMER1_COMPA_vect) { int r; ++currSlice; #ifdef __FLASH const __flash unsigned char *fPtr = sequence[currFrame]; #else PGM_P fPtr = (PGM_P)pgm_read_word(&(sequence[currFrame])); #endif for (r = numRows - 1; r >= 0; --r) { frameBuffer[r] = ((frameBuffer[r] << 1) | #ifdef __FLASH fPtr[r] #else pgm_read_byte(&(fPtr[r])) #endif >> ((numCols + 1) - currSlice)); } currSlice %= (numCols + 1); if (!currSlice) { ++currFrame; currFrame %= sizeof(sequence) / sizeof(sequence[0]); } } unsigned char currRow; unsigned char rowByte; unsigned char frameRow; // turn on the dots for the current row ISR(TIMER0_OVF_vect) { #ifdef __FLASH rowByte = rows[currRow]; #else rowByte = pgm_read_byte(&(rows[currRow])); #endif frameRow = frameBuffer[currRow]; M_CPORT |= ~(frameRow << 3); M_RPORT |= rowByte; ++currRow; currRow %= numRows; OCR0A = rowDur[count5Bits(frameBuffer[currRow])]; } // turn off the dots for the current row ISR(TIMER0_COMPA_vect) { M_RPORT = 0; M_CPORT = 0; } int main() { register unsigned char newMCUCR = MCUCR | _BV(JTD); MCUCR = newMCUCR; MCUCR = newMCUCR; // timer 1 for frame advance, CTC mode // OCRA for advance TCCR1A = 0; TCCR1B = (_BV(WGM12) | _BV(CS12) | _BV(CS10)); TIMSK1 = _BV(OCIE1A); OCR1AH = 0x3; OCR1AL = 0x0; //timer 0 for row advance, PWM
mode // OVF for advance, OCRA for shutoff TCCR0A = (_BV(WGM01) | _BV(WGM00)); TCCR0B = _BV(CS01); TIMSK0 = (_BV(OCIE0A) | _BV(TOIE0)); OCR0A = 200; M_CDIR |= (_BV(M_C1) | _BV(M_C2) | _BV(M_C3) | _BV(M_C4) | _BV(M_C5)); M_RDIR |= (_BV(M_R1) | _BV(M_R2) | _BV(M_R3) | _BV(M_R4) | _BV(M_R5) | _BV(M_R6) | _BV(M_R7)); set_sleep_mode(SLEEP_MODE_IDLE); sei(); while (1) { sleep_enable(); sleep_cpu(); sleep_disable(); } } Both code paths currently work perfectly, but having a single code path would likely reduce maintenance. I can replace pgm_read_byte with *, but I'm befuddled by both the declaration and access of sequence and the declaration of fPtr, and don't know where I should start with them. EDIT: After a bit of mucking around, here's what I came up with: #ifndef __FLASH #include <avr/pgmspace.h> #define FLASH(x) const x PROGMEM #define FLASH_P(x) const x * const PROGMEM #define FLASH_PR(x, y) (x *)pgm_read_word(&(y)) #else #define FLASH(x) const __flash x #define FLASH_P(x) const __flash x * const __flash #define FLASH_PR(x, y) (y) #define pgm_read_byte(x) *(x) #endif ... FLASH_P(unsigned char) sequence[] = { ... FLASH(unsigned char *) fPtr = FLASH_PR(unsigned char, sequence[currFrame]); Wouldn't mind a sanity check on this though, in case I missed anything. Answer: A few notes: Your included libraries and definitions at the beginning of your code is not very organized. #include <avr/io.h> //#undef __FLASH #ifndef __FLASH #include <avr/pgmspace.h> #define FLASH(x) const x PROGMEM #define FLASH_P(x) const x * const PROGMEM #define FLASH_PR(x, y) (x *)pgm_read_word(&(y)) #else #define FLASH(x) const __flash x #define FLASH_P(x) const __flash x * const __flash #define FLASH_PR(x, y) (y) #define pgm_read_byte(x) *(x) #endif #include <avr/interrupt.h> #include <avr/sleep.h> I would sort it where all of the #includes are at the beginning, then a space, and then all of your preprocessor definitions. Also, I don't like to #include stuff in an #ifndef, just include it. 
Your compiler will make the proper optimizations if it's not used anyway.

#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/sleep.h>
#include <avr/pgmspace.h>

#ifndef __FLASH
#define FLASH(x) const x PROGMEM
#define FLASH_P(x) const x * const PROGMEM
#define FLASH_PR(x, y) (x *)pgm_read_word(&(y))
#else
#define FLASH(x) const __flash x
#define FLASH_P(x) const __flash x * const __flash
#define FLASH_PR(x, y) (y)
#define pgm_read_byte(x) *(x)
#endif

You have a lot of preprocessor conditionals spread throughout your program.

#ifdef __FLASH
const __flash unsigned char const * const __flash sequence[] = {
#else
unsigned PGM_P const PROGMEM sequence[] = {
#endif

Another option is to have two separate files for the different implementations. This would mean having version-specific implementations of some classes, and switching entire implementations rather than just a few lines here and there. It would clean up your code of all these preprocessor conditionals.

You have some preprocessor definitions that aren't capitalized.

#define numRows 7

You should always capitalize all preprocessor definitions.

#define NUMROWS 7

You use some magic numbers here and there.

const unsigned int rowDur[6] = {0, 140, 160, 180, 200, 220};

You already defined NUMROWS, so you should use it here.

const unsigned int rowDur[NUMROWS - 1] = {0, 140, 160, 180, 200, 220};

Define variables in your for loops. (C99)

for (int r = numRows; r > -1; --r)

Use more comments. If another developer were to take a look at your code, they would have to use a lot of reason and logic to derive what you are trying to do in some places. For longer explanations, put your comments above the tricky statement.

M_CPORT |= ~(frameRow << 3); // <insert comment here>
{ "domain": "codereview.stackexchange", "id": 5961, "tags": "c, assembly, type-safety, embedded" }
What's the relation between the parallax distance and the luminosity distance?
Question: I have read that Riess and his team are able to measure $H_0$ from supernovae calibrated using Cepheids in a model-independent way. From what I have gathered, they find the absolute luminosity of Cepheids $M_c$ with the parallax method and a bunch of other geometric methods; then, once they have $M_c$, they use the luminosity-redshift relation of SN Ia to find $M_{sn}$.

My question is: once they have the parallax distance of Cepheids, how do they find their distance moduli without imposing a model? In other words, how do they convert the parallax distance into the luminosity distance without assuming a model? Otherwise I don't understand how they can find the absolute luminosity or the distance moduli needed to find $M_c$.

P.S. I had initially posted on physics.stackexchange but someone told me to post here.

Answer: The distinction between different distance measures in cosmology (in this context, the luminosity distance and the parallax distance) only becomes significant over cosmological distances, i.e., when the redshift of the objects begins to approach $\mathcal{O}(1)$.

Maybe one day we will be able to observe parallax for cosmologically distant objects. Then it will be important to consider how parallax distances compare to luminosity distances. That day is not today, however. We only measure parallax distances for objects in our own galaxy. The cosmological model has no relevance in this context.
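To put the answer's point in formulas — this is a standard textbook supplement, not part of the original exchange — the common distance measures in an FLRW cosmology are tied together by the distance-duality relations, and all of them collapse to the same Hubble-law distance at the low redshifts where parallaxes are actually measured:

```latex
% Distance-duality relations (standard FLRW results; d_M is the comoving
% transverse distance, d_L the luminosity distance, d_A the angular
% diameter distance):
d_L = (1+z)\,d_M, \qquad d_A = \frac{d_M}{1+z}, \qquad d_L = (1+z)^2\,d_A
% For z << 1 the differences between all measures are O(z), so for
% Galactic Cepheids (z of order 10^{-5} or smaller) every measure
% reduces to the same Hubble-law distance:
d \approx \frac{cz}{H_0}\,\bigl[1 + \mathcal{O}(z)\bigr]
```

In this regime the parallax distance and the luminosity distance agree to far better precision than any measurement, which is why no cosmological model enters the calibration.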
{ "domain": "astronomy.stackexchange", "id": 6967, "tags": "observational-astronomy, general-relativity, luminosity, hubble-constant, parallax" }
Information that can be extracted from the time-ordered correlation function
Question: The time-ordered correlation function can be very complicated and encodes a tremendous amount of information. For example, the LSZ formula can be used to extract S-matrix elements from the time-ordered correlation function. What other quantities in the context of quantum field theory can be extracted from the time-ordered correlation function?

Answer: In principle, all the information of the QFT is encoded in the $n$-point functions. This means that, once you know the correlation functions, you know the Hilbert space, the fields and their algebra (modulo a unitary transformation). This is known as the Wightman reconstruction theorem, though AFAIK, it is not known whether the theorem holds for Yang-Mills theories (the Standard Model) or not: it is an open problem. The details of the theorem are quite involved (and I'm not familiar with them, nor with AQFT in general), but if you want to read about Wightman's QFT this scholarpedia article seems nice.

Now I'll try to be more specific: in practice, what do we know once we know the $n$-point functions? The first thing, as you already noted, is that the $n$-point functions give us the information of all scattering phenomena. But apart from this, the $n$-point functions carry the information of the possible decays (and their time constants); the poles of the correlators give you the energy of bound states (e.g., you can use this to calculate the mass of the proton using the QCD Lagrangian); the correlators are easily related to effective vertices (which contain the information of the electric and magnetic moments, for example); etc.
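As a concrete supplement to the two extraction examples mentioned above (LSZ and bound-state poles), here is the schematic form of both. These are standard textbook statements for a real scalar field, with normalization factors suppressed; they are not part of the original answer:

```latex
% LSZ reduction (schematic): S-matrix elements are residues of the
% momentum-space (n+m)-point function at the one-particle poles
\langle p_1 \ldots p_n | S | k_1 \ldots k_m \rangle \;\sim\;
  \prod_{i} \lim_{p_i^2 \to m^2} \left(p_i^2 - m^2\right)\,
  \tilde{G}^{(n+m)}(p_1, \ldots, p_n; k_1, \ldots, k_m)

% Bound states appear as extra poles of the two-point function below the
% multi-particle cut, at the bound-state masses M_b:
\tilde{G}^{(2)}(p^2) \;\sim\; \frac{Z}{p^2 - m^2}
  \;+\; \sum_b \frac{Z_b}{p^2 - M_b^2} \;+\; \text{(branch cuts)}
```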
{ "domain": "physics.stackexchange", "id": 30650, "tags": "quantum-field-theory, correlation-functions, s-matrix-theory" }
What decides the direction in which the accretion disk spins?
Question: Planets lie on the same plane because of the accretion disk formed during the protostar stage, as I read in this question. I also read about the collision of particles in the gas cloud causing the overall spin to be in just one direction. But what decides the direction in which the accretion disk spins relative to the direction of the core? (I'm thinking that it might be because of some primary conditions - maybe the direction in which the core is spinning.)

Answer: Stellar systems are born from clouds of turbulent gas. Although "turbulence" means that different parcels of gas move in different directions, the cloud has some overall, net angular momentum. Usually a cloud gives birth to multiple stellar systems, but even the subregion forming a given system has a net, and non-vanishing (i.e. $\ne0$), angular momentum.

Parcels moving in opposite directions will collide, and friction will cause the gas to lose energy, such that the cloud contracts. Eventually subclouds moving in one direction will "win over" subclouds moving in other directions such that everything moves in the same direction, keeping the original angular momentum (minus what is ejected e.g. through jets).

This means that the central star will rotate in the same direction as the circumstellar disk and that, in general, the planets that form subsequently will not only orbit the star in the same direction, but also spin in the same direction around their own axes. This is called prograde rotation. Sometimes, however, collisions between bodies may cause a planet or asteroid to spin in the opposite direction. This is called retrograde rotation and is the case for Venus and Uranus.
{ "domain": "astronomy.stackexchange", "id": 1491, "tags": "star, protostar" }
Histogram matching (specification) in Python
Question: I'm trying to implement an algorithm in which I first pad each row of the image with a fixed amount of new pixels in a certain range, apply Gaussian smoothing to the row cumulative histograms in the vertical (y) direction, and thus obtain new cumulative histograms for each row in the end.

After having obtained the new row cumulative histograms, my task is to get back/restore an intensity image from the cumulative histograms. Specifically, I would like to match each row histogram to the new, corresponding row histogram (to those which I obtained after padding and Gaussian filtering). Therefore, if there are, for example, 1000 rows in an image I will match 1000 row histograms and restore the original image.

My code in Python is below, where I came until the step where I need to perform histogram matching but I got stuck there:

import cv2
import numpy as np
import scipy.ndimage as ndi
from matplotlib import pyplot as plt
from scipy.interpolate import interp1d

# Threshold of intensity values for padding
T = 200

img = cv2.imread('images/flicker1.jpg',0)
rows,cols = img.shape

# Number of pixels to pad each bin of a row histogram
N = 30

########################################################################
cdf_hist_Padded = np.zeros((rows,256))
cdf_hist_noPad = np.zeros((rows,256))

for i in range(0,rows):
    # Read one row
    img_row = img[i,]
    # Calculate the row histogram (without padding)
    hist_row_noPad = cv2.calcHist([img_row],[0],None,[256],[0,256])
    # Calculate the cumulative row histogram (without padding)
    cdf_hist_row_noPad = hist_row_noPad.cumsum()
    cdf_hist_noPad[i,:] = cdf_hist_row_noPad
    # Copy the row to prepare for padding
    hist_row_Padded = np.copy(hist_row_noPad)
    # Apply uniform padding to all the bins less than T
    hist_row_Padded[0:T] = hist_row_Padded[0:T] + N
    # Calculate the cumulative row histogram of the padded row
    cdf_hist_row_Padded = hist_row_Padded.cumsum()
    # Accumulate the cumulative histograms of padded rows in a matrix
    cdf_hist_Padded[i,:] = cdf_hist_row_Padded

# Apply 1D-Gaussian filtering on the padded cumulative histogram along the columns
Gauss_cdf = ndi.gaussian_filter1d(cdf_hist_Padded, sigma=2, axis=0, output=np.float64, mode='nearest')

# Normalize all the CDFs to get values between [0,1]
norm_cdf_hist_noPad = cdf_hist_noPad/cdf_hist_noPad.max()
norm_cdf_hist_Padded = cdf_hist_Padded/cdf_hist_Padded.max()
norm_Gauss_cdf = Gauss_cdf/Gauss_cdf.max()

# Take the first original and padded+smoothed row cumulative histogram
H_zero = norm_cdf_hist_noPad[0,:]
H_hat_zero = norm_Gauss_cdf[0,:]

# I would like to match now 'H_zero' and 'H_hat_zero'. When I find how to do that,
# I will apply this to all rows in a loop. How do I perform the matching?..

Answer: If I understood right, you are stuck in matching a given histogram into a desired one and creating a new image from this matched histogram obtained by your filtering method. I would first suggest you get rid of all the unnecessary stuff (including the Python code) above and isolate your problem as "histogram matching" in mathematical terms.

Now I will describe how you can approximately match a given histogram to a desired one in two steps: by first converting it into that of a uniform (equalized) one, and then converting this uniform one to the desired one. Finally we shall combine these two steps to get the answer.

The method is based on converting a random variable into another by means of a transform $G(\cdot)$. You will replace the random variable $X$ with the image intensity function $I(n,m)$. Consider a uniform random variable $X$ whose CDF is $F_X(x) = U(x) = P(X < x) = x$. We want to generate a new rv $Y$ from $X$, by means of a transform $Y=G(X)$, such that its CDF is the desired one. We shall therefore find this transform $G(x)$.
Under suitable conditions:
$$F_Y(y)= P(Y< y) = P(G(X)<y) = P(X< G^{-1}(y)) = F_X(G^{-1}(y))=G^{-1}(y) $$
from which we deduce that the transform that converts uniform $X$ to arbitrary $Y$ with a desired CDF is $G(x) = F_Y^{-1}(x)$, and hence:
$$ \boxed{Y = F_Y^{-1}(X)}$$

Therefore the transform that converts a uniform RV $X$ to an arbitrary (desired) RV $Y$ is
$$ \boxed{G(x) = F_Y^{-1}(x)}$$

The transform that converts an arbitrary RV $Y$ to a uniform RV $X$ is
$$ \boxed{ G^{-1}(x)=F_Y(x) } $$

Finally, the transform that converts an arbitrary RV $Y$ to another arbitrary RV $Z=T(Y)$ is
$$ \boxed{ T(y)=F_Z^{-1}(F_Y(y)) } $$

We are almost done; we will finish by converting the above algorithm to image manipulation and also clarify how you will overcome the problem of not having those CDFs as formulas. Replace the RVs $Y$ and $Z$ with the input/output images $Iy(n,m)$, $Iz(n,m)$, and also replace the CDFs of $Y$, $Z$ with those computed from histograms. I denote them as $H_y(i)$ and $H_z(i)$, where $H_y(i)$ denotes the cumulative histogram obtained from the image at hand, and $H_z(i)$ the one computed from the desired (to be matched) histogram.

Under suitable conditions and correct scalings to ensure intensity levels remain within valid limits, we can therefore make the definition $I_z = T(I_y)$, or more explicitly $I_z(n,m) = H_z^{-1}(H_y(I_y(n,m)))$.

Your only problem is about the inverse cumulative histogram of $Z$ (or $I_z$) and unfortunately I don't know a method which may guarantee perfect inversion. As it is a CDF from a discrete histogram there might be empty bins and hence some indices of the inverse $H_z(i)$ might have no mappings at all. It is up to you how to best overcome this problem. However, as I understood from your code, you add pixels to empty/weak bins so that $H_z(i)$ becomes invertible...

Below I will put some Matlab code "as is": there are many methods to match histograms and I don't think the one below is the most accurate or the most efficient one.
Iy = imread('cameraman.tif');   % Intensity between [0-255]
hy = imhist(Iy);                % compute image histogram hy(i)
figure,stem(hy); title('original image histogram');

S = size(Iy);
Imax = 255;                     % 8 bit
K = Imax / (S(1)*S(2));         % Scale factor

Hy = zeros(1,256);              % compute CDF Hy(i) of input image
Hy(1)=hy(1);
for i=2:256
    Hy(i) = Hy(i-1) + hy(i);
end
Hy = K*Hy;
figure,stem(Hy); title('CDF of original input image');

% Get the desired image and histograms:
I_des = imread('tire.tif');
figure,imshow(I_des);
S = size(I_des);
h_des = imhist(I_des);
figure,stem(h_des); title('desired image histogram');

h_des(h_des < (S(1)*S(2)/(Imax*8))) = h_des(h_des < (S(1)*S(2)/(Imax*8))) + 32;  % HERE paddings

Hz = zeros(1,256);              % This is desired CDF
Hz(1)= h_des(1);
for i=2:256
    Hz(i)=Hz(i-1)+h_des(i);
end
K_des = Imax / (Hz(256));
Hz = K_des*Hz;
figure,stem(Hz); title('Desired CDF Hz');

% MATCH is performed
Imatch = (Iy);                  % Processed image from matching
for i=0:255                     % for each intensity level
    ind = (Iy==i);              % find those index with an intensity value of "i"
    j = Hy(i+1);
    looping=1;                  % implement your own method
    while looping
        k = find( (Hz > (j-1)) & (Hz < (j+1)));
        if length(k)>0          % for practical matters...
            looping = 0;
        else
            j = j + 1;          % when there is no match, update a little
        end
    end
    Imatch(ind) = k(1);         % Adjust the original intensity now...
end
figure, imshow(Imatch); title('Image Matched to desired histogram');

h_match = imhist(Imatch);
figure, stem(h_match); title(' Matched Histogram, not a good match ???');
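For completeness, the same CDF-matching idea can be sketched in NumPy, closer to the asker's Python setting. This is my own illustration (synthetic data and a nearest-neighbour inversion of the target CDF), not the asker's images or the answerer's exact routine:

```python
import numpy as np

def match_histogram(src, desired_hist):
    """Map the intensities of `src` so its histogram approximates `desired_hist`.

    Implements T(i) = Hz^{-1}(Hy(i)) with a nearest-value inverse,
    mirroring the Matlab sketch above.
    """
    src = np.asarray(src, dtype=np.uint8)
    # Source CDF Hy, normalized to [0, 1]
    hy, _ = np.histogram(src, bins=256, range=(0, 256))
    Hy = np.cumsum(hy).astype(np.float64)
    Hy /= Hy[-1]
    # Desired CDF Hz, normalized to [0, 1]
    Hz = np.cumsum(np.asarray(desired_hist, dtype=np.float64))
    Hz /= Hz[-1]
    # Approximate inverse: for each source level, pick the desired level
    # whose CDF value is closest (nearest-neighbour inversion of Hz)
    lut = np.argmin(np.abs(Hz[None, :] - Hy[:, None]), axis=1).astype(np.uint8)
    return lut[src]

# Tiny demo: push a dark synthetic "image" toward a flat target histogram
rng = np.random.default_rng(0)
img = rng.integers(0, 64, size=(32, 32))   # intensities only in [0, 64)
uniform_target = np.ones(256)              # flat desired histogram
out = match_histogram(img, uniform_target)
```

The nearest-neighbour lookup plays the role of $H_z^{-1}$; as the answer notes, a discrete CDF has flat runs and empty bins, so this inverse is only approximate.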
{ "domain": "dsp.stackexchange", "id": 2345, "tags": "python, histogram, equalization" }
how to use python3.8 with ROS Kinetic in a node (import rospy)?
Question: Hey, I am using ROS Kinetic, which supports Python 2.7. I want to run a node with Python 3.8 using rosrun. When I used a shebang to test the script without any "import rospy" it worked, but when I include "import rospy" to turn the script into a node and be able to send values, it gives me the following error:

in import yaml
ModuleNotFoundError: No module named 'yaml'

I would greatly appreciate any idea to make ROS Kinetic itself use Python 3.8 instead of Python 2.7 and fix this issue. Thanks in advance.

Originally posted by AA A on ROS Answers with karma: 23 on 2022-04-07
Post score: 0

Original comments
Comment by WarTurtle on 2022-04-07: Here are some similar answers that might help you: ROS Kinetic and Python3 general rospy and python3 specific

Answer: Can you update your system to a later version of ROS that uses Python 3.8? Noetic should do it.

Originally posted by Rodolfo8 with karma: 299 on 2022-04-08
This answer was ACCEPTED on the original site
Post score: 1
{ "domain": "robotics.stackexchange", "id": 37569, "tags": "ros, python3, ros-kinetic" }
Frequency array feed real values FFT
Question: EDIT: The first version of my question has not worked as I expected, so I will try to be a little bit more specific.

The final goal I am trying to achieve is the generation of a ten-minute time series: to achieve this I have to perform an FFT operation, and this is the point I have been stumbling on. Generally the aimed time series will be assigned as the sum of two terms: a steady component $U(t)$ and a fluctuating component $u^{'}(t)$. That is $$u(t) = U(t) + u^{'}(t);$$

So generally, my code follows this procedure:

1) Given data

$time = 600 [s];$
$Nfft = 4096;$
$L = 340.2 [m];$
$U = 10 [m/s]$
$df = 1/600 = 0.00167 Hz;$
$f_{n} = Nfft/(2*time) = 3.4133 Hz;$

This means that my frequency array should be laid out as follows: $$ f = (-f_{n}+df):df:f_{n} $$ But, instead of using the whole $f$ array, I am only making use of the positive half: $$ f_{+} = df:f_{n} = 0.00167:3.4133 Hz; $$

2) Spectrum definition

I define a certain spectrum shape, applying the following relationship $$ S_{u} = \frac{6L/U}{(1 + 6f_{+}L/U)^{5/3}}; $$

3) Random phase generation

I then have to generate a set of complex samples with a determined distribution: in my case, the random phase will approach a standard Gaussian distribution $(\mu = 0, \sigma = 1)$. In MATLAB I call

nn = complex(normrnd(0,1,Nfft/2),normrnd(0,1,Nfft/2));

4) Apply random phase

To apply the random phase, I just do this $$ H_{u} = S_{u}*nn; $$

At this point start my pains! So far, I have only generated $Nfft/2 = 2048$ complex samples accounting for the $f_{+}$ content. Therefore, the content accounting for the negative half of $f$ is still missing. To overcome this issue, I was thinking to merge the $real$ and $imaginary$ parts of $H_{u}$, in order to get a signal $H_{uu}$ with $Nfft = 4096$ samples and with all real values. But, by using this merging process, the $0$-th frequency order would not be represented, since the $complex$ part of $H_{u}$ is defined for $f_{+}$.
Thus, how do I account for the $0$-th order while keeping a procedure like the one I have been proposing so far?

Answer: There are still lots of issues with what you're trying to achieve, but it's a little clearer for me now. Thanks for the edit. A few notes:

DC term: You say $$ u(t) = U(t) + u^{'}(t); $$ with $U = 10 [m/s]$. Surely the constant $U$ is your "DC" (zeroth order) term?

Sampling rate: You say that $time = 600$ and that $Nfft = 4096$. Does that make your sampling rate $f_s = 4096 / 600 = 6.8267$ Hz?

df choice: I am not sure why you choose $df = 1/600$?

Random phase generation: Your phase generation seems odd to me. This

nn = complex(normrnd(0,1,Nfft/2),normrnd(0,1,Nfft/2));

will not generate just random phase. It will also generate a random amplitude, which is not what you want if you're just after phase. To get a random phase, you're better off doing: nn = exp(1j*2*pi*rand(1,Nfft/2)). That generates Nfft/2 uniform random variables (between 0 and 1), multiplies them by $2\pi$ and then forms $e^{j2\pi\times \mathrm{rand(1,Nfft/2)}}$.

How to account for the DC term? If your (positive-frequency) spectrum is $H_n(f)$ for $f= k\,df$, $k=1,\ldots,Nfft/2$, then just form:
$$ H(f) = \begin{cases} U, & f = 0\\ H_n(f), & f > 0\\ H^*_n(-f), & f < 0 \end{cases} $$
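The three-case construction of $H(f)$ can be sketched in NumPy as follows. This is my own illustration with a placeholder spectrum shape (not the asker's exact $S_u$); the key points from the answer are the DC bin, the conjugate-symmetric negative half, and a real Nyquist bin, which together guarantee a real time series from the inverse FFT:

```python
import numpy as np

Nfft = 4096
U = 10.0                        # steady component: goes into the f = 0 (DC) bin
rng = np.random.default_rng(1)

# Positive-frequency half (k = 1 .. Nfft/2): unit-magnitude random phase
# applied to a placeholder amplitude shape
S = 1.0 / (1.0 + np.arange(1, Nfft // 2 + 1)) ** (5.0 / 6.0)
Hn = S * np.exp(1j * 2 * np.pi * rng.random(Nfft // 2))

H = np.zeros(Nfft, dtype=complex)
H[0] = U * Nfft                          # DC bin: mean(u) = H[0] / Nfft = U
H[1:Nfft // 2 + 1] = Hn                  # positive frequencies (incl. Nyquist)
H[Nfft // 2] = H[Nfft // 2].real         # Nyquist bin must be real
H[Nfft // 2 + 1:] = np.conj(Hn[:Nfft // 2 - 1][::-1])   # H(-f) = H*(f)

u = np.fft.ifft(H)                       # real-valued time series (up to rounding)
```

Note the scaling convention: with NumPy's `ifft` (which divides by `Nfft`), setting `H[0] = U * Nfft` makes the mean of the generated series exactly $U$.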
{ "domain": "dsp.stackexchange", "id": 910, "tags": "fft, matlab, frequency" }
Recursive mimic joint?
Question: It seems that the joint_state_publisher is not able to mimic another mimic joint.

Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/opt/ros/kinetic/lib/joint_state_publisher/joint_state_publisher", line 212, in loop
    joint = self.free_joints[parent]
KeyError: u'<recursive_joint_name>'

The problem is attributable to the clear distinction between free and dependent joints, i.e. if a joint is dependent, it cannot be taken as a reference for anyone else. I see that this is not a real problem as I can mimic the base joint with the proper reduction multiplier. Nonetheless it becomes annoying with a long chain and it could also be unintuitive at first glance.

I think that the best solution is to implement this workaround without creating an extra non-free-but-neither-only-dependent category which could only mess up the code. I also think that I'm going to create a pull request in the following days; have you got any suggestions?

EDIT: Here is a (possible?) simple modification of the part of interest:

# Add Dependent Joint
elif name in self.dependent_joints:
    param = self.dependent_joints[name]
    parent = param['parent']
    factor = param.get('factor', 1)
    offset = param.get('offset', 0)
    while parent in self.dependent_joints:
        param = self.dependent_joints[parent]
        parent = param['parent']
        factor *= param.get('factor', 1)
        # the offset is relative only to the first parent
    joint = self.free_joints[parent]

@David Lu, do you think it could work?

Originally posted by alextoind on ROS Answers with karma: 217 on 2017-01-09
Post score: 0

Original comments
Comment by David Lu on 2017-01-11: Go ahead and make a pull request and we'll go from there.
Comment by alextoind on 2017-01-12: Perfect, I'm going to create it this evening

Answer: That's a use case I did not consider when originally implementing it. The code has been relatively ignored for years, so I'm sure it could use freshening up.

The frustrating part is that how different URDF tags are handled across different nodes is inconsistent, so there's not a great understanding of how mimic tags are used in different places, i.e. whether you should just mimic the base joint or not. In the context of this one node, however, I think you should be able to recurse.

Originally posted by David Lu with karma: 10932 on 2017-01-09
This answer was ACCEPTED on the original site
Post score: 1

Original comments
Comment by alextoind on 2017-01-09: Thank you very much for your quick response! I know the problems, but I also believe that mimic joints are very helpful with some specific robotic mechanisms, e.g. when replicating a pure rolling motion among two meshes. If you have time I would be glad if you could have a look at the above proposal
{ "domain": "robotics.stackexchange", "id": 26673, "tags": "joint-state-publisher, ros-kinetic" }
Can semimetals be explained by the nearly free electron model?
Question: In the quasi-free electron model, only U-shaped and flipped-U-shaped parabolic energy bands emerge. So I think one cannot derive anything about semimetals from the free electron model, as there need to be two U-shaped bands next to each other for a material to have a band overlap and be considered a semimetal, which does not occur in this model.

In an old exam, I read though that one can deduce from the quasi-free electron model that semimetals have small potentials, as the quasi-free electron model predicts the band gap to be proportional to the potential strength. I wonder whether this statement actually makes sense. Because no matter how small the band gap, there will never be a semimetal in the quasi-free model, right?

Answer: While band overlap can't occur in the free electron model in one dimension, it can in two or more dimensions. In these higher dimensions there don't have to be two U-shaped bands next to each other in order to have band overlap.

To convince oneself of this, one should draw a $k_x$-$k_y$-$E(\vec{k})$ graph in the first BZ and look at it from the side. In the unperturbed case $E(\vec{k})$ is quadratic. The energy $E(\vec{k})$ on the corners of the first BZ (e.g. at $(\pi/2,\pi/2)$) is therefore higher than on the edges of the BZ between the corners, especially in the middle (e.g. at $(\pi/2,0)$). The dispersion looks like a rag which is suspended from the four corners of the BZ. When looking at the dispersion from the side (e.g. a view onto the $k_x$-$E$ plane) one therefore sees a U shape. The minimum of the U is in the middle of a BZ edge and the two maxima are on the two corners of that edge. The difference in energy from the middle to the corner may be called $\Delta$.

If $\Delta$ is bigger than the band gap, which is caused by the perturbing potential $V(x)$, then there will be a band overlap, because in this case the second band in the middle of the BZ edge starts below the energy of the first band at the corners.
The band overlap is small if the perturbing potential is weak. The quasi-free electron model with a weak potential may therefore indeed be used to model a semimetal.
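The answer's criterion can be condensed into one inequality. The following is my own paraphrase for a 2D square lattice with lattice constant $a$, using the standard nearly-free-electron result that a weak potential opens a gap of $2|V_G|$ at the zone boundary ($V_G$ being the relevant Fourier component of the potential):

```latex
% Free-electron energy at the midpoint of a BZ edge and at a corner:
E_{\mathrm{mid}} = \frac{\hbar^2}{2m}\left(\frac{\pi}{a}\right)^2, \qquad
E_{\mathrm{corner}} = \frac{\hbar^2}{2m}\left(\frac{\pi^2}{a^2}
  + \frac{\pi^2}{a^2}\right) = 2\,E_{\mathrm{mid}}
% hence \Delta = E_corner - E_mid = E_mid, and the bands overlap
% (giving a semimetal) when the zone-boundary gap is smaller than \Delta:
2\,|V_G| \;<\; \Delta \;=\; \frac{\hbar^2 \pi^2}{2 m a^2}
```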
{ "domain": "physics.stackexchange", "id": 70913, "tags": "solid-state-physics" }
Making Different combinations from a string that has been separated
Question: I am trying to make different combinations of a full name that has been broken down into different names. My code is as below:

import java.io.IOException;
import java.util.Scanner;

public class Username {

    public static void main(String[] args) throws IOException{
        System.out.println("Please enter a Firstname , MiddleName & Lastname separated by spaces");
        Scanner sc = new Scanner(System.in);
        String name = sc.nextLine();
        String[] arr = name.split(" ");
        int arrLength = arr.length;

        //for(int i = 0; i <= arrLength - 1; i++)
        //{
        //    System.out.println("Name" + i + " " + arr[arrLength-(arrLength-i)] + "\n");
        //}

        if(arrLength == 2)
        {
            System.out.println("Name length is 2");
            String Name1 = arr[arrLength-arrLength];
            String Name2 = arr[arrLength-(arrLength-1)];
            String firstLetterName1 = String.valueOf(Name1.charAt(0));
            String firstLetterName2 = String.valueOf(Name2.charAt(0));
            String windowsUsername1 = Name1 + "" + firstLetterName2.toUpperCase();
            String windowsUsername2 = Name2 + "" + firstLetterName1.toUpperCase();
            System.out.println("Username1 " + windowsUsername1);
            System.out.println("Username2 " + windowsUsername2);
        }

        if(arrLength == 3)
        {
            System.out.println("Name length is 3");
            String Name1 = arr[arrLength-arrLength];
            String Name2 = arr[arrLength-(arrLength-1)];
            String Name3 = arr[arrLength-(arrLength-2)];
            String firstLetterName1 = String.valueOf(Name1.charAt(0));
            String firstLetterName2 = String.valueOf(Name2.charAt(0));
            String firstLetterName3 = String.valueOf(Name3.charAt(0));
            String windowsUsername1 = Name1 + "" + firstLetterName2.toUpperCase();
            String windowsUsername2 = Name1 + "" + firstLetterName3.toUpperCase();
            String windowsUsername3 = Name2 + "" + firstLetterName1.toUpperCase();
            String windowsUsername4 = Name2 + "" + firstLetterName3.toUpperCase();
            String windowsUsername5 = Name3 + "" + firstLetterName1.toUpperCase();
            String windowsUsername6 = Name3 + "" + firstLetterName2.toUpperCase();
            System.out.println("Windows Usernames are " + windowsUsername1 + " "
                    + windowsUsername2 + " " + windowsUsername3 + " " + windowsUsername4
                    + " " + windowsUsername5 + " " + windowsUsername6);
        }

        if(arrLength == 4)
        {
            System.out.println("Name length is 4");
            String Name1 = arr[arrLength-arrLength];
            String Name2 = arr[arrLength-(arrLength-1)];
            String Name3 = arr[arrLength-(arrLength-2)];
            String Name4 = arr[arrLength-(arrLength-3)];
            String firstLetterName1 = String.valueOf(Name1.charAt(0));
            String firstLetterName2 = String.valueOf(Name2.charAt(0));
            String firstLetterName3 = String.valueOf(Name3.charAt(0));
            String firstLetterName4 = String.valueOf(Name4.charAt(0));
            String windowsUsername1 = Name1 + "" + firstLetterName2.toUpperCase();
            String windowsUsername2 = Name1 + "" + firstLetterName3.toUpperCase();
            String windowsUsername3 = Name1 + "" + firstLetterName4.toUpperCase();
            String windowsUsername4 = Name2 + "" + firstLetterName1.toUpperCase();
            String windowsUsername5 = Name2 + "" + firstLetterName3.toUpperCase();
            String windowsUsername6 = Name2 + "" + firstLetterName4.toUpperCase();
            String windowsUsername7 = Name3 + "" + firstLetterName1.toUpperCase();
            String windowsUsername8 = Name3 + "" + firstLetterName2.toUpperCase();
            String windowsUsername9 = Name3 + "" + firstLetterName4.toUpperCase();
            String windowsUsername10 = Name4 + "" + firstLetterName1.toUpperCase();
            String windowsUsername11 = Name4 + "" + firstLetterName2.toUpperCase();
            String windowsUsername12 = Name4 + "" + firstLetterName3.toUpperCase();
            System.out.println("Windows Usernames are " + windowsUsername1 + " "
                    + windowsUsername2 + " " + windowsUsername3 + " " + windowsUsername4
                    + " " + windowsUsername5 + " " + windowsUsername6 + " "
                    + windowsUsername7 + " " + windowsUsername8 + " " + windowsUsername9
                    + " " + windowsUsername10 + " " + windowsUsername11 + " "
                    + windowsUsername12 );
        }
    }
}

Test output is below:

Please enter a Firstname , MiddleName & Lastname separated by spaces
Maria Anna Sophia Cecilia
Name length is 4
Windows Usernames are MariaA MariaS MariaC AnnaM AnnaS AnnaC SophiaM SophiaA SophiaC CeciliaM CeciliaA CeciliaS

I found this code very untidy though and ineffective too. Are there any better ways of doing this? I'm still an amateur and struggling with concepts from Java. Appreciate any suggestion.

Answer:

Java naming conventions

In Java variables should never start with a capital, so rename Name to name.

Input validation

What about empty input? Or a name with length 1? Make sure you can handle all input.

Logic

I would try to do something like this:

Create a List<String> of names
Maybe remove duplicates?
For each item in the list, capitalize the first letter
Create all permutations of length 2 of this list (for example: https://stackoverflow.com/a/35323560/461499)
For each permutation, create the output by taking the first item of the List<String>, and append the first letter of the second.

Example

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Names {

    private static List<List<String>> generate(List<List<String>> permutations, List<String> names, int length) {
        List<List<String>> perms = new ArrayList<List<String>>();
        if (length == 0) {
            perms.add(new ArrayList<>());
        } else {
            for (String name : names) {
                // copy names, but leave out the current
                List<String> namesWithoutCurrent = new ArrayList<>(names);
                namesWithoutCurrent.remove(name);
                for (List<String> perm : generate(permutations, namesWithoutCurrent, length - 1)) {
                    perm.add(name);
                    perms.add(perm);
                }
            }
        }
        return perms;
    }

    public static void main(String[] args) {
        for (List<String> perm : generate(null, Arrays.asList("Anna", "Maria", "Sophia", "Cecilia"), 2)) {
            System.out.println("Windows username:" + perm.get(0) + perm.get(1).charAt(0));
        }
    }
}

Windows username:MariaA
Windows username:SophiaA
Windows username:CeciliaA
Windows username:AnnaM
Windows username:SophiaM
Windows username:CeciliaM
Windows username:AnnaS
Windows username:MariaS
Windows username:CeciliaS
Windows username:AnnaC
Windows username:MariaC
Windows username:SophiaC
{ "domain": "codereview.stackexchange", "id": 27661, "tags": "java" }
6D RGB D SLAM EFFICIENCY
Question: How efficient is this 6D RGB-D SLAM in localizing and navigation?

Originally posted by Francis Dom on ROS Answers with karma: 21 on 2014-10-07
Post score: 0

Original comments
Comment by bvbdort on 2014-10-08: Check this paper from the authors, they have a resulting model image. Hope it gives you an idea. Also the video in the wiki page.

Answer: 6D RGB-D SLAM is a bit more computationally expensive. ccny_rgbd is comparatively very fast. Both of these can help in localizing with respect to your initial position or with respect to the reference frame at your starting point. They can help you in building a 3D map, but they are not meant for navigation.

For navigation, you can try using the ROS NAVIGATION STACK. You can try to create a 2D map from the 3D map (created using the previously mentioned methods) and use it with the ROS NAVIGATION stack for navigation.

Using ccny_rgbd or RGB-D SLAM with a structure sensor may not be a straightforward task. Firstly, the depth data from the structure sensor should be registered to another external RGB camera, if your structure camera does not provide RGB information. I would suggest you use an Asus Xtion Pro Live as it works out of the box with RGB-D SLAM, ccny_rgbd, and many other algorithms out there for RGB-D sensors. Most of these methods need both RGB and depth information for them to work.
Originally posted by sai with karma: 1935 on 2014-10-08
This answer was ACCEPTED on the original site
Post score: 0

Original comments
Comment by Francis Dom on 2014-10-08: The ccny_rgbd method allows us to create a map from RGB-D data and localize using visual odometry. Pls provide more details on the differences between this method and 6D RGBD SLAM or RGBD SLAM. How do I incorporate the ROS navigation stack with SLAM?
Comment by Francis Dom on 2014-10-08: Does the ccny_rgbd use both RGBD and visual odometry?
Comment by sai on 2014-10-08: yes, it works with rgb-d sensors like kinect and asus xtion pro live
Comment by Francis Dom on 2014-10-08: Does it work with the structure sensor as well?
Comment by Francis Dom on 2014-10-08: How do I combine ccny_rgbd & ROS NAVIGATION STACK? I am using a Parrot Drone with PX4 Autopilot... and a Structure Sensor with an ODROID-U3 board. How do I connect the ODROID and PX4 - mavros help?
Comment by sai on 2014-10-09: I have edited the answer. You can have a look at the links given below. These work with an RGB camera and do not need a structure sensor. http://wiki.ros.org/tum_ardrone http://vision.in.tum.de/data/software/tum_ardrone
Comment by Francis Dom on 2014-10-09: Hi, is the Structure sensor an RGB-D sensor or just a depth sensor?
Comment by sai on 2014-10-09: I do not know that..i think its just a depth sensor..
Comment by Francis Dom on 2014-10-20: May I use Gmapping slam with the structure sensor?
{ "domain": "robotics.stackexchange", "id": 19668, "tags": "slam, navigation, rgbd" }
Can I train my non-dominant hand and make it dominant?
Question: Are our dominant limbs decided at birth, or is there some way I can train my non-dominant hand and make it as coordinated as my dominant one? Answer: Of course you can train yourself and make your non-dominant hand equal to, or even better than, your actual dominant hand. Handedness is directly wired to the right/left hemisphere of your brain, so by training you will actually re-wire your brain and acquire new pathways. It's a hard process in terms of breaking habits; otherwise it's non-invasive and non-painful, and most humans tend to give up before acquiring this skill, justifying it as an unnecessary/low-priority goal. Handedness is not decided at birth. It's acquired earlier in development (while the child is still in the mother's womb) based on personal/individual preference (expressed by thumb-sucking, by the way) toward right-handedness (90%), left-handedness (9%), or cross-dominance/ambidexterity (1%). https://youtu.be/DU3OdTLuHf0 https://en.wikipedia.org/wiki/Handedness https://ghr.nlm.nih.gov/primer/traits/handedness
{ "domain": "biology.stackexchange", "id": 9589, "tags": "human-biology, genetics, human-genetics" }
Why electric potential is separable?
Question: In electrostatics, if we consider a region without charges, the electrostatic potential $V$ obeys Laplace's equation $\nabla^2 V = 0$. We can tackle this with separation of variables; in Cartesian coordinates we write $V(x,y,z) = X(x)Y(y)Z(z)$. I want to know why $V$ should be a separable function, given, for example, $$V = \frac{kq}{\sqrt{x^2+y^2+z^2}}.$$ I can't find a way to separate the variables in terms of $x$, $y$ and $z$; can someone explain why separation of variables makes sense? Answer: In general, the solution to Laplace's equation will not be separable, i.e., it will not be possible to find $X, Y, Z$ such that $V(x, y, z) = X(x) Y(y) Z(z)$. And that's where the great difficulty arises in finding closed form solutions to partial differential equations. Sometimes we get lucky and the PDE is separable in a different coordinate system, but in the general case it will not be separable. However, if we only need a numerical solution, then we can try to express $V$ as a (usually infinite) sum of separable solutions: \begin{equation} V(x, y, z) = \sum_{i, j, k} a_{ijk} X_i(x) Y_j(y) Z_k(z) \end{equation} Here, each $X_i(x) Y_j(y) Z_k(z)$ individually satisfies Laplace's equation, and if you do the algebra, together with boundary conditions, you find that there is only a countably infinite number of possible $X$ functions, indexed in some obvious way by a natural number $i$, and likewise with $Y$ and $j$, $Z$ and $k$. And furthermore, for physically realistic solutions, we can assume that the coefficients $a_{ijk}$ fall sufficiently quickly with increasing $i, j, k$ that we can truncate the summation at some reasonable point to get a solution that is sufficiently accurate for our needs. I'm sure there are people on this site who could explain to you the mathematical theory behind why this works for certain types of PDEs and what additional conditions need to be imposed, but I couldn't even begin to do that.
I think it does work in practice with physically realistic solutions to Laplace's equation, and so physicists do it.
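A quick numerical sanity check of the point made above: the point-charge potential satisfies Laplace's equation away from the charge even though it is not itself a product $X(x)Y(y)Z(z)$. This is a sketch using central finite differences; the step size and test points are arbitrary choices, and $kq$ is set to 1 for illustration.

```python
from math import sqrt

def V(x, y, z):
    # Point-charge potential kq/r with kq = 1 (illustrative assumption)
    return 1.0 / sqrt(x*x + y*y + z*z)

def laplacian(f, x, y, z, h=1e-3):
    # Central second differences in each coordinate, summed
    d2x = (f(x+h, y, z) - 2*f(x, y, z) + f(x-h, y, z)) / h**2
    d2y = (f(x, y+h, z) - 2*f(x, y, z) + f(x, y-h, z)) / h**2
    d2z = (f(x, y, z+h) - 2*f(x, y, z) + f(x, y, z-h)) / h**2
    return d2x + d2y + d2z

print(abs(laplacian(V, 1.0, 2.0, 3.0)))  # ~0 away from the charge
```

The residual is dominated by finite-difference truncation and round-off, not by any failure of the potential to be harmonic.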
{ "domain": "physics.stackexchange", "id": 91247, "tags": "electrostatics, symmetry, potential, voltage, gauss-law" }
Abnormal in simple rosbridge v2 / turtlesim example
Question: There is a good tutorial for the rosbridge v1 quick start at https://code.google.com/p/brown-ros-pkg/wiki/Quick_start_rosbridge_and_ROS, but I can't find a similar tutorial for rosbridge v2, so I tried to mimic turtlesim with rosbridge 2, based on the example file under rosjs; here is the link to my html file link text During testing, I run the following commands in a terminal: roscore rosrun turtlesim turtlesim_node rosrun rosbridge_server rosbridge.py and then load the html file in Firefox. Basically there are two issues observed: 1 when the 4 buttons are pressed, the turtle never moves and rostopic echo /turtle1/command_velocity always gives me linear: 0.0 angular: 0.0 2 the subscription to topic "/turtle1/pose" returns undefined, and only returns once: Received message on /turtle1/pose: undefined Obviously I missed something in the above html file, so I'd appreciate it if anyone can give me some clue. thanks clark Originally posted by clark on ROS Answers with karma: 393 on 2012-10-26 Post score: 0 Answer: I am not yet familiar with "ros_bundle.min.js". However, I can answer this issue using "ros.js", found under "rosbridge_clients" in the release version of rosbridge_suite. Using this ros.js, I used the following example page to control both turtlebots and AR.Drones: https://dl.dropbox.com/u/14391589/tutorial/rosbridge_ardrone/drone_browser_teleop.html In this case, the connection is created as follows: var con = new Bridge("ws://localhost:9090"); Addressing point 1, publishing a command velocity for moving forward is done as follows: con.publish('/cmd_vel', {"linear":{"x":1.0,"y":0,"z":0},"angular":{"x":0,"y":0,"z":0}}); In Chrome, you should be able to inspect the actual websocket frame that is sent using the development tools (regardless of which ros.js you are using).
I found this description of inspecting websocket traffic useful: http://blog.kaazing.com/2012/05/09/inspecting-websocket-traffic-with-chrome-developer-tools/ I haven't tried subscribing to pose yet, so I will pass on speculating about the second issue. Originally posted by odestcj with karma: 123 on 2012-11-04 This answer was ACCEPTED on the original site Post score: 0
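For reference when inspecting those frames: rosbridge v2 wraps every interaction in a JSON object carrying an "op" field. A minimal sketch of the frames a client would send over the websocket follows; the topic and message field names are assumed from the electric-era turtlesim example, not verified against your setup.

```python
import json

# rosbridge v2 protocol: each frame is a JSON object with an "op" field.
advertise = {"op": "advertise",
             "topic": "/turtle1/command_velocity",
             "type": "turtlesim/Velocity"}        # message type name assumed
publish = {"op": "publish",
           "topic": "/turtle1/command_velocity",
           "msg": {"linear": 2.0, "angular": 0.0}}
subscribe = {"op": "subscribe", "topic": "/turtle1/pose"}

frame = json.dumps(publish)
print(frame)
```

Comparing what your HTML page actually sends (via the developer tools, as suggested above) against this shape is a quick way to spot a malformed publish call.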
{ "domain": "robotics.stackexchange", "id": 11534, "tags": "rosbridge, turtlesim" }
What is the precise internal structure of Jupiter?
Question: I've been trying to find out exactly where the layers of molecular hydrogen, and metallic hydrogen are precisely, inside Jupiter, in kilometres from the centre. Ideally with an error margin of 1-10kms. I'm a newbie at astronomy, physics, and chemistry, so the more detailed the explanation, the better. How have we gone about guessing, what factors are used to do so? What is the most recent data? EDIT: try not to quote me easily accessible vague research, I've already done google searches! Thank you :) Answer: Here is an excellent article on the topic: Burkhard Militzer, Francois Soubiran, Sean M. Wahl, William Hubbard, "Understanding Jupiter's Interior" This article provides an overview of how models of giant planet interiors are constructed. We review measurements from past space missions that provide constraints for the interior structure of Jupiter. We discuss typical three-layer interior models that consist of a dense central core and an inner metallic and an outer molecular hydrogen-helium layer. These models rely heavily on experiments, analytical theory, and first-principle computer simulations of hydrogen and helium to understand their behavior up to the extreme pressures ~10 Mbar and temperatures ~10,000 K. We review the various equations of state used in Jupiter models and compare them with shock wave experiments. We discuss the possibility of helium rain, core erosion and double diffusive convection may have important consequences for the structure and evolution of giant planets. In July 2016 the Juno spacecraft entered orbit around Jupiter, promising high-precision measurements of the gravitational field that will allow us to test our understanding of gas giant interiors better than ever before. DOI: 10.1002/2016JE005080
{ "domain": "physics.stackexchange", "id": 82994, "tags": "astrophysics, planets, hydrogen, jupiter" }
Noninertial frame-rotation
Question: $$de_2=[\cos(d\theta_1)e_2+\sin(d\theta_1)e_3]-e_2=[e_2+d\theta_1e_3]-e_2=d\theta_1e_3$$ I can't see the intuition behind this from here. Shouldn't the change in $e_2$ be just this $$de_2=\cos(d\theta_1)e_2-e_2~?$$ Why do we need the $e_3$ component? Answer: For finite $\theta_1$ : \begin{equation} \Delta\mathbf{e}_{2}=\mathbf{e}'_{2}-\mathbf{e}_{2}=\cos\theta_1\,\mathbf{e}_{2}-\mathbf{e}_{2}+\sin\theta_1\,\mathbf{e}_{3} \tag{01} \end{equation} For infinitesimal $\mathrm{d}\theta_1$ : \begin{equation} \mathrm{d}\mathbf{e}_{2}=\mathbf{e}'_{2}-\mathbf{e}_{2}=\underbrace{\cos\mathrm{d}\theta_{1}}_{\approx\;1}\,\mathbf{e}_{2}-\mathbf{e}_{2}+\underbrace{\sin\mathrm{d}\theta_{1}}_{\approx \;\mathrm{d}\theta_{1}}\,\mathbf{e}_{3}= \mathrm{d}\theta_{1}\,\mathbf{e}_{3} \tag{02} \end{equation}
{ "domain": "physics.stackexchange", "id": 32671, "tags": "homework-and-exercises, reference-frames, inertial-frames, rotational-kinematics" }
Among $k$ unit vectors, find odd set with sum length less than 1
Question: I have $k$ unit vectors in $\mathbb{R}^k$. Can I efficiently identify a set of $2n+1$ vectors $v_1, \dots, v_{2n+1}$ such that $\sum_{i< j} v_i\cdot v_j < -n$ for any $n$ -- or determine that no such set exists? As some motivation, if I have 3 vectors (so that $n=1$), the minimum value of the sum of their pairwise dot products is $-3/2$, attained when they form the corners of a triangle. This places all 3 vectors in the same two-dimensional plane. Anything out of that plane (so, more than 2 dimensions) will raise the minimum to some sum $>-3/2$. If I go to a smaller number of dimensions by constraining all the vectors to 1D, then my only possible sums are $-1$ or $3$. In general, if I have $2n+1$ vectors, then by putting them at the corners of a $2n$-simplex, you get a sum of dot products of $-(2n+1)/2$. But by putting them in 1D, you get a minimum of $-n$. I'd like to find sets of vectors that "don't look very 1D" in the sense that they violate this bound. This is equivalent to the question, "Find an odd-size set of vectors such that their sum has length less than 1" -- the equivalence can be shown by taking the norm of the sum. I think this is more natural, so I'll change the question title. Answer: I have now learned that this problem is co-NP-complete. The question can be reduced to testing whether a point in $R^{n^2}$ (given by the Gram matrix generated by the vectors) satisfies all the pure $(2k+1)$-gonal hypermetric inequalities. This fact is stated (without proof?) on Page 454 of "Geometry of Cuts and Metrics" by Michel Deza and Monique Laurent, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.215.1108&rep=rep1&type=pdf .
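The equivalence mentioned in the question follows from expanding the norm of the sum: for $k = 2n+1$ unit vectors, $\|\sum_i v_i\|^2 = k + 2\sum_{i<j} v_i\cdot v_j$, so $\sum_{i<j} v_i\cdot v_j < -n$ exactly when the sum has length less than 1. A small numpy check for the $n=1$ triangle case:

```python
import numpy as np

# Three unit vectors at 120 degrees (corners of a triangle): the n = 1 case.
angles = np.array([0.0, 2*np.pi/3, 4*np.pi/3])
V = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # shape (3, 2)

s = V.sum(axis=0)    # sum of the vectors (here essentially zero)
k = len(V)
# Identity: ||sum||^2 = k + 2 * sum_{i<j} v_i . v_j
pairwise = (np.dot(s, s) - k) / 2
print(pairwise)      # approximately -1.5, i.e. < -n = -1
```

Here the sum has length (essentially) 0 < 1, and the pairwise sum correspondingly beats the 1D bound of $-n = -1$.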
{ "domain": "cs.stackexchange", "id": 10900, "tags": "optimization, linear-algebra, search-problem" }
How gravitation affects tides
Question: I know that tides are caused by the gravitational pull of the moon, but what I don't know is how it affects water. I actually have these doubts. Why does the gravity of the moon create tides only in water? Are there other things (other than water) in which tides are created on Earth (I have heard that on some moons of Jupiter, tides of the ground can be found on the surface due to Jupiter's gravity)? If we take a bowl of some size, let's say 30 cm in diameter, fill it with water, and keep it out on a full-moon night, will a tide be created in it? If the moon's gravitational pull can cause tides in seas, then why can't a sailor feel the gravitational pull of the moon? Answer: You ask: Why does the gravity of the moon create tides only in water? This is wrong. Tides are created by the moon in all materials on earth that have some elasticity. The raising and falling of the ground has been measured at the beams in CERN, for example. The solid ground tides are called earth tides, and their height can reach 40 cm. Are other things (other than water) also subject to tides on earth (I heard that some moons of Jupiter have ground tides on the surface due to Jupiter's gravity)? You mean "get tides", not "create tides". It is mainly the moon, not the water, that creates the tides. There is also some effect on the tides from the large planets and the sun's gravitational field. That is why tide tables are needed; the source is not a single body. Yes, there are tides on planets that have moons and some elasticity in their composition. If we take a bowl of water and keep it out on a full-moon night, will it show tides? You should measure the bowl during the tide cycle, which is close to a 12-hour one; whether the moon is full or not is a secondary effect. Look at the explanation in the link. You will need accurate measurements, and you should consult tide tables for your particular location.
If the moon's gravitational pull can cause tides in seas, then why can't a sailor feel the gravitational pull of the moon? The sailor, and all of us, feel the vector sum of the gravitational forces impinging on us at our location. One cannot distinguish the individual components unless one does a fit to the components of known gravitational sources. We are not equipped biologically for that, so it must not offer an evolutionary advantage :) . The water is lifted and the boat is lifted with the water, no?
{ "domain": "physics.stackexchange", "id": 18014, "tags": "gravity, newtonian-gravity, planets, tidal-effect" }
Viewing custom messages from a ROS node that is on a different ROS install
Question: I have ROS running on a robot with a raspberry Pi. I have several custom messages that I would like to echo and view on rqt_plot from my laptop. However as the robot package is only installed on the pi, the custom messages do not exist on the laptop so I cannot view them. What is usually done in this case? Should I move my messages to a different package so I can just install that on other devices? Originally posted by hez on ROS Answers with karma: 3 on 2016-10-02 Post score: 0 Answer: Should I move my messages to a different package so I can just install that on other devices? Yes, that is actually the best practice for message, services and actions. Being able to re-use messages without having to install also the other parts of a package is one part of the rationale. Separating the messages from the rest of your packages also clearly separates your ROS API from your implementation, further enhancing separation of concerns and re-usability. Originally posted by gvdhoorn with karma: 86574 on 2016-10-02 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by hez on 2016-10-02: Thanks that makes sense. I keep all of my code in a github repo, I have just created a new package with the postfix _msgs to store messages - and with that a second git repository. Is it also best practice to distribute code like that? One repo for the messages package and one for the main package? Comment by gvdhoorn on 2016-10-02: Depends. Some ppl like to keep _msgs pkgs in the same repository as the nodes that use that pkg, to make releases easier (_msgs is dependency of other pkgs, so needs to be released first). Others like to separate. Smallest re-usable artefact vs distribution convenience is always a trade-off. Comment by gvdhoorn on 2016-10-02: Note that if you release the pkg (through the ROS buildfarm), it doesn't matter (for your users) whether the pkg is in a separate repository or not, they can always install your _msgs pkg separately.
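As an illustration of the separated-messages pattern described above, a hypothetical `my_robot_msgs` package might have a `package.xml` along these lines (the package name, maintainer, and message dependency are made up; the message_generation/message_runtime pair is the part that matters):

```xml
<?xml version="1.0"?>
<package format="2">
  <name>my_robot_msgs</name>
  <version>0.1.0</version>
  <description>Message definitions only: no nodes, no implementation.</description>
  <maintainer email="dev@example.com">dev</maintainer>
  <license>BSD</license>

  <buildtool_depend>catkin</buildtool_depend>
  <!-- message_generation at build time, message_runtime at run time -->
  <build_depend>message_generation</build_depend>
  <exec_depend>message_runtime</exec_depend>
  <depend>std_msgs</depend>
</package>
```

Other machines then only need this lightweight package built or installed to echo and rqt_plot the custom topics.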
{ "domain": "robotics.stackexchange", "id": 25874, "tags": "custom-message" }
Why is disilyne bent?
Question: Why is disilyne $\ce{Si2H2}$ bent when the steric number of silicon is 2? Answer: The following disilyne has been prepared and found to be stable to ~100 C. An X-ray crystal structure found that the two silicon atoms in the triple bond, Si(1) and Si(1'), along with the two attached silicon atoms, Si(2) and Si(2') are coplanar and the Si(1)-Si(1')-Si(2') angle is 137.44 degrees ("trans-bent"). The full text of the article describing the preparation, characterization and reactivity of the compound can be found here. See figure 4 in the paper for a MO explanation as to why the trans-bent geometry occurs. Basically, the authors suggest that, "the bending is thought to be the result of the mixing of an in-plane $\ce{\pi}$-orbital with a $\ce{\sigma^{\ast}}$ orbital whose energies are close enough to cause the interaction."
{ "domain": "chemistry.stackexchange", "id": 1907, "tags": "vsepr-theory, molecular-structure" }
Should I train the "Unknown" class separately from the other classes
Question: I have a CNN model that classifies 10 classes of audio spectrograms. However, since I work with an open set of data, I need to classify unknown audio data as an "Unknown" class. The problem is that my training samples of unknown data greatly outnumber those of the known classes. I'm afraid this would cause a problem when the model performs stochastic optimization. Should I separate the "Unknown" training data and train a model for it separately, or can I simply mix the unknown data in with the other classes and train the model right away? Answer: There are several ways of doing this. Examples are: Binary classifier Train a separate binary classifier for Known vs Unknown, using supervised learning. The Known data would come from your dataset, and the Unknown data would be a large set of samples from a diverse dataset like AudioSet etc. Anomaly detector Train an anomaly / out-of-distribution model, using only your dataset (Known) and unsupervised learning. This should be done on the learned representation in your CNN. You can use a Gaussian Mixture Model (e.g. from scikit-learn) as the anomaly model. To verify that it works, and to set hyperparameters such as the number of Gaussians and the anomaly threshold, you should use a few samples from another dataset (AudioSet etc.).
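A minimal sketch of the anomaly-detector option, deliberately simplified to a single Gaussian in place of the suggested Gaussian Mixture Model; the embedding dimension, synthetic data, and 99th-percentile threshold are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Known" embeddings: a stand-in for the CNN's learned representation
# of the 10 known classes (dimension 8 is arbitrary).
known = rng.normal(0.0, 1.0, size=(500, 8))
mu = known.mean(axis=0)
cov = np.cov(known, rowvar=False) + 1e-6 * np.eye(8)  # regularized
cov_inv = np.linalg.inv(cov)

def mahalanobis_sq(x):
    # Squared Mahalanobis distance: a one-Gaussian anomaly score
    d = x - mu
    return float(d @ cov_inv @ d)

# Threshold picked from the training scores themselves; in practice you
# would tune it on held-out out-of-distribution samples (AudioSet etc.).
scores = np.array([mahalanobis_sq(x) for x in known])
threshold = np.quantile(scores, 0.99)

far_away = np.full(8, 10.0)   # an obviously out-of-distribution embedding
print(mahalanobis_sq(far_away) > threshold)  # True
```

With a real model, `known` would be the penultimate-layer embeddings of the training audio, and samples scoring above the threshold would be routed to "Unknown" instead of being forced into one of the 10 classes.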
{ "domain": "datascience.stackexchange", "id": 9792, "tags": "machine-learning, neural-network, classification, dataset, audio-recognition" }
Stopping/Moving a Robot During Navigation
Question: Hello, I'm using ROS Navigation Stack on my robot. How can I get the robot to stop during navigation for example because of safety issues and after the safety issue was resolved the robot moves again towards the goal. So basically I don't want to cancel the goal as mentioned in this thread Thanks Originally posted by ROSCMBOT on ROS Answers with karma: 651 on 2014-10-01 Post score: 2 Answer: The current ROS Navigation Stack does not offer a "pause navigation" method. Your best shot would be to implement a node that keeps track of the current goal and when a safety issue occurs, use the actionlib API to cancel the current goal (and therefore stop the robot) and when the safety issue is solved, send the goal again(so the robot moves again). Hope this helps Originally posted by Martin Peris with karma: 5625 on 2014-10-01 This answer was ACCEPTED on the original site Post score: 9 Original comments Comment by David Lu on 2014-10-02: I believe this is correct, but if it seems useful, you should open an issue in the nav repo. Comment by Martin Peris on 2014-10-03: Thanks for your input @David Lu, I have opened an issue here: https://github.com/ros-planning/navigation/issues/259 let's discuss about it
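A sketch of the goal-tracking node Martin describes, with the actionlib calls injected as plain callables so the pause/resume logic is visible without a ROS install. In a real node, `send` and `cancel` would wrap a move_base `SimpleActionClient`'s `send_goal` and `cancel_goal`; everything else here is an assumption for illustration.

```python
class GoalKeeper:
    """Remembers the current goal; pause cancels it, resume re-sends it."""

    def __init__(self, send, cancel):
        self._send, self._cancel = send, cancel
        self._goal = None
        self.paused = False

    def send_goal(self, goal):
        self._goal = goal
        self._send(goal)

    def pause(self):
        # Safety issue detected: cancel, but keep the goal for later.
        if self._goal is not None and not self.paused:
            self._cancel()
            self.paused = True

    def resume(self):
        # Safety issue resolved: re-send the remembered goal.
        if self.paused and self._goal is not None:
            self._send(self._goal)
            self.paused = False

log = []
gk = GoalKeeper(send=lambda g: log.append(("send", g)),
                cancel=lambda: log.append(("cancel",)))
gk.send_goal("dock")
gk.pause()
gk.resume()
print(log)  # [('send', 'dock'), ('cancel',), ('send', 'dock')]
```

The robot stops when the goal is cancelled and continues toward the same pose when it is re-sent, which is the behavior the question asks for without permanently discarding the goal.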
{ "domain": "robotics.stackexchange", "id": 19595, "tags": "ros, navigation, robot" }
Searching of Word documents
Question: I have a website that offers to search documents from local hearings conducted, stored on a network file server. I need to take in the search term and search a bunch of .docx (roughly 4500) files. They are not large, < 150 kb mostly, but it runs very slow downloading the files into the stream. I'm sure there's a better way to write the search, (perhaps multi processing) but I don't know how to tune it up and speed the search up. The search itself is taking over 3 minutes. bool found = false; Hearing h = new Hearing(); Stream str = null; MemoryStream str2 = new MemoryStream(); HttpWebRequest fileRequest = (HttpWebRequest)WebRequest.Create(url); HttpWebResponse fileResponse = (HttpWebResponse)fileRequest.GetResponse(); str = fileResponse.GetResponseStream(); str.CopyTo(str2); str2.Position = 0; using (WordprocessingDocument wpd = WordprocessingDocument.Open(str2, true)) { string docText = null; using (StreamReader sr = new StreamReader(wpd.MainDocumentPart.GetStream())) { docText = sr.ReadToEnd(); found = docText.ToUpper().Contains(txtBasicSearch.Text.ToUpper()); if (found) { hearingArrayList.Add(h); foundCount++; } } } Answer: This is really the exact use-case for indexed full-text search engines. Since you're running this code server-side on a website, I'd suggest you seriously consider writing a simple worker that polls your FS for new documents and adds them to a database that has full text searching enabled. If you're using SQL Server: https://docs.microsoft.com/en-us/sql/relational-databases/search/get-started-with-full-text-search If you're using MySQL: http://www.w3resource.com/mysql/mysql-full-text-search-functions.php This way, you'd not only get your results back much more quickly than scanning each document manually, you'd also avoid the onerous network traffic involved in streaming every file from the FS for every request. 
To do this, you can pretty easily either write a page in your site or a new console app (preferable) that is called by a cron job (linux) or scheduled task (windows) on the server every so often. That interval would be however often you expect there to be new documents added to the FS or whatever your tolerance is for stale data. At that point, the page/app would pull the list of documents already cached in the database, query the FS for its contents, and compare the lists of filenames or file dates to see what needs to be added/updated. At that point, you only stream in the files you actually need to add and you don't really care how long it takes. The database would then take care of the indexing of the new documents. Your webpage then becomes a dumb pipe for searching those indexed documents. If storing the texts in a database isn't an option, you might consider mirroring the files on your own server. It'd still remove the slowest part of your algorithm (the network traffic). You'd still need your cron/scheduled task worker to do that mirroring but it'd be a simple matter of copying the new files from the FS to your local disk. Whether you mirror locally or can't do either, your best bet is parallelization. You can do some refactoring but your local operations aren't your real bottleneck. For instance, if you can mirror locally, you could use this in place of your existing code: // ToUpper() your search string outside of the loop, // rather than in each pass. string txtBasicSearch = "My Search String".ToUpper(); // Use Parallel.ForEach over every docx file in our directory. Parallel.ForEach(Directory.EnumerateFiles(directoryPath, "*.docx"), (string file) => { string docText = string.Empty; try { // Try to dispose of our streams as soon as possible to avoid // holding memory unnecessarily. Also, avoid copying Streams // to different types. A generic Stream works just fine.
// // As well, only open with read perms to avoid unnecessary locks and // any delays they may cause. using (Stream str = File.OpenRead(file)) { using (WordprocessingDocument wpd = WordprocessingDocument.Open(str, false)) { using (StreamReader sr = new StreamReader(wpd.MainDocumentPart.GetStream())) { docText = sr.ReadToEnd(); } } } // Search the haystack for the needle. if (docText.ToUpper().Contains(txtBasicSearch)) { // No need for a counter variable. Just use // hearingArrayList.Count() at the end. hearingArrayList.Add(file); } } catch (Exception ex) { // Do whatever error handling here. return; } }); Timing that parallel version against the same version using a regular foreach loop with a small directory on my local NAS showed that the parallel version was typically 3-6 times faster. If you can't mirror locally, you can still parallelize the file streaming but will need to be cognizant of the limits the server may place on the number of connections you can open at once. The HttpClient class will probably serve you better here than the WebRequest class. https://msdn.microsoft.com/en-us/library/hh696703(v=vs.110).aspx There, you can query the files within the remote directory then iterate through them, making an async call with HttpClient. So, that might look like: string txtBasicSearch = "My Search String".ToUpper(); HttpClient client = new HttpClient(); // Use client to populate myFileList with the remote files. foreach (string file in myFileList) { client.GetStreamAsync(file).ContinueWith((Task<Stream> result) => { if (result.Status != TaskStatus.RanToCompletion) { // Error handling.
return; } string docText = string.Empty; try { using (WordprocessingDocument wpd = WordprocessingDocument.Open(result.Result, false)) { using (StreamReader sr = new StreamReader(wpd.MainDocumentPart.GetStream())) { docText = sr.ReadToEnd(); } } if (docText.ToUpper().Contains(txtBasicSearch)) { hearingArrayList.Add(file); } } catch (Exception ex) { // Do whatever error handling here. return; } }); } The HttpClient class will take care of rate limiting you. By default, I believe it allows three connections at any time, but you can easily change that to your liking. Enumerating the files on the remote server would be a different topic, depending on how that remote is being accessed. I'd suggest searching other SO answers like https://stackoverflow.com/questions/124492/c-sharp-httpwebrequest-command-to-get-directory-listing (If your file server is just a NAS on your intranet, save yourself the pain and just use the System.IO.Directory and .File classes to query the files)
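To make the index-once / search-many pattern concrete, here is a self-contained sketch using SQLite's FTS5 module as a stand-in for the SQL Server / MySQL full-text engines linked above; the filenames and hearing text are made up:

```python
import sqlite3

# In-memory database; a worker would populate a file-backed one on a schedule.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE hearings USING fts5(filename, body)")

# Indexing step: extract each document's text once and insert it.
docs = [("hearing_001.docx", "zoning variance approved for Main Street"),
        ("hearing_002.docx", "noise complaint dismissed"),
        ("hearing_003.docx", "variance denied pending zoning review")]
db.executemany("INSERT INTO hearings VALUES (?, ?)", docs)

# Search step: the web page just runs a MATCH query, no file streaming.
hits = sorted(row[0] for row in
              db.execute("SELECT filename FROM hearings WHERE hearings MATCH ?",
                         ("zoning",)))
print(hits)  # ['hearing_001.docx', 'hearing_003.docx']
```

The expensive work (opening each .docx and extracting its text) happens once per document at index time, so the per-search cost is a single indexed query instead of 4500 network reads.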
{ "domain": "codereview.stackexchange", "id": 26229, "tags": "c#, performance, asp.net, search" }
Can the total energy of a signal diverge while its average power converges to zero?
Question: If the integral used to calculate the total energy for a continuous time real signal converges, its average power is equal to zero. But could the average power still equal zero if the total energy diverges? Equivalently, could this limit evaluate to zero $$P_\infty=\lim_{T \to \infty}\frac{1}{2T} \int_{-T}^{T}\left| x(t) \right|^2dt=0$$ if this limit diverges $$E_\infty=\lim_{T \to \infty} \int_{-T}^{T}\left| x(t) \right|^2dt=\infty$$ Answer: That's indeed possible, at least in theory. Just come up with a signal which when squared and integrated diverges as $t\to\infty$, but slower than linearly. E.g., $$x(t)=\frac{1}{\sqrt[4]{|t|}}$$ $$\int_{-T}^T|x(t)|^2dt=\int_{-T}^T\frac{dt}{\sqrt{|t|}}=4\sqrt{T}$$ Consequently, $E_x\to\infty$ but $$P_x=\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^T\frac{dt}{\sqrt{|t|}}=\lim_{T\to\infty}\frac{2}{\sqrt{T}}=0$$
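A numerical illustration of the answer's example, using the closed forms derived above ($E(T) = 4\sqrt{T}$, hence $P(T) = 2/\sqrt{T}$):

```python
from math import sqrt

# For x(t) = |t|**(-1/4), the truncated energy integral evaluates to
# E(T) = 4*sqrt(T), so the average power is P(T) = E(T)/(2T) = 2/sqrt(T).
def E(T):
    return 4.0 * sqrt(T)

def P(T):
    return E(T) / (2.0 * T)

for T in (1e2, 1e4, 1e6):
    print(f"T={T:g}  E={E(T):g}  P={P(T):g}")
# E grows without bound while P tends to 0.
```

The energy diverges like $\sqrt{T}$, i.e. slower than the linear growth that the $1/(2T)$ normalization divides out, which is exactly why the average power can still vanish.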
{ "domain": "dsp.stackexchange", "id": 12322, "tags": "continuous-signals, signal-power, signal-energy" }
Communicating with TurtleBot from Two Computers
Question: I bought a TurtleBot and am really excited about getting it to perform various tasks for me. As a first, (relatively) simple task, I'd like to get it to periodically check whether my mail has been delivered each day (we have the "slot in the door" type of "mailbox," so it needs to navigate to the door and determine if there is a new pile of mail lying there) and send me an email once the mail arrives. I'm totally new to ROS though, so I'm still going through the TurtleBot tutorials. I'm on the step in which you build a SLAM map of your house so you can subsequently program TurtleBot to navigate the map autonomously. However, I have a problem - my desktop machine (Ubuntu 10.04) is in my bedroom, so I'm unable to see what the TurtleBot is doing in order to teleop navigate him around the other rooms in the house via RViz. I can watch the output from the Kinect's RGB camera, but due to the position in which it's mounted, I can't see all obstacles that he might run into. The solution I came up with was to install the ros-electric-turtlebot-desktop package on a modest netbook (1.5ghz dual core, 1gb ram). I would then run the keyboard teleop from the netbook while I followed the TurtleBot around my house, with the gmapping app running on the TurtleBot's netbook and RViz and the map_server running on my desktop machine. Is this even possible? If not, is there an alternative solution? As a secondary question, if you believe this project is too complex for a beginner, could you please recommend a simpler alternative beyond the introductory tutorials on the TurtleBot wiki that would be more practical for me? I would really appreciate it - I am so eager to learn to use my TurtleBot but have been frustrated by my lack of knowledge. Thanks in advance! Originally posted by tommytwoeyes on ROS Answers with karma: 57 on 2012-04-07 Post score: 0 Answer: Yes, this is completely possible. 
Just make sure that each computer can see each other, (based on the steps from the Network Setup page). Once each machine is fully on the ROS network, you'll be able to launch whatever applications you want on each one. Originally posted by Ryan with karma: 3248 on 2012-04-07 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by tommytwoeyes on 2012-04-07: I followed the wiki's network setup instructions & verified that the netbook & desktop workstations could both ping the TB's netbook (raw IP and $ROS_MASTER_URI on port 11311). The netbook workstation's KB teleop node produced some error I didn't understand. Comment by tommytwoeyes on 2012-04-07: I assumed it was due to the TB's ROS installation getting confused about which machine it was reporting to. I'll try it again & post any error messages I see so I can ask for further assistance. Thanks, Ryan!
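The key part of that network setup is pointing every machine at the same master. A sketch of the environment variables involved; the hostnames `turtlebot` and `workstation` are placeholders for your actual machines:

```shell
# On the TurtleBot's netbook (where roscore runs):
export ROS_MASTER_URI=http://turtlebot:11311
export ROS_HOSTNAME=turtlebot

# On the desktop or follow-along netbook:
export ROS_MASTER_URI=http://turtlebot:11311   # same master as the robot
export ROS_HOSTNAME=workstation
echo "$ROS_MASTER_URI"
```

With both machines resolving each other's hostnames, RViz on the desktop and keyboard teleop on the follow-along netbook can both run against the gmapping node on the robot.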
{ "domain": "robotics.stackexchange", "id": 8899, "tags": "ros, turtlebot, turtlebot-navigation" }
Work done by battery in moving charge $Q$ in circuit
Question: Why is the work done by a battery in a circuit equal to the potential difference across its terminals times the charge that has flowed, W = Qε (where ε is the emf of the battery), even though there are heat and other losses? And another thing my textbook has not mentioned: between which points in the circuit is the charge Q transferred? Answer: Batteries use a chemical reaction to do work on charge inside the battery, generating a voltage across their terminals and giving the charge potential energy. If the battery is not connected to a circuit, that voltage is called the emf, which means the voltage across its terminals when there is no current flow (the open-circuit voltage). All real batteries have internal resistance. Once the battery is connected to a circuit and current flows, there is a voltage drop across its internal resistance. That reduces the terminal voltage and results in some heat loss internally. Now the battery does work to deliver charge to the circuit. The charge loses the potential energy the battery gave it; it is either dissipated or stored in the circuit elements. The work done on the external circuit is now given by your equation, except you replace the emf with the terminal voltage, because that's the voltage actually applied to the circuit. While it delivers energy to the circuit, the battery also dissipates some energy internally. Finally, the battery again does work on the charge returning to one terminal, once more giving it potential energy at the other terminal. Hope this helps
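A worked example may help separate the three energies involved (all numbers here are made up for illustration): take $\varepsilon = 12$ V, internal resistance $r = 1\ \Omega$, and an external load $R = 5\ \Omega$. Then $$I = \frac{\varepsilon}{R+r} = \frac{12}{6} = 2\ \text{A}, \qquad V_\text{term} = \varepsilon - Ir = 12 - 2 = 10\ \text{V},$$ so for each coulomb that flows the chemical reaction supplies $\varepsilon Q = 12$ J, of which $IrQ = 2$ J is dissipated internally and $V_\text{term} Q = 10$ J is delivered to the external circuit. The bookkeeping $Q\varepsilon = QV_\text{term} + QIr$ shows why $W = Q\varepsilon$ is the total work done by the battery even though only part of it reaches the circuit.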
{ "domain": "physics.stackexchange", "id": 58727, "tags": "electrostatics, electric-circuits, charge, work, batteries" }
Imitating message types
Question: Hi, I am having a configuration where my rover is running mavros 0.13 and my laptop is running mavros 0.15. Therefore there is a difference in namespacing. As I am not able to recompile 0.15 for my rover easily (running out of time for my internship) I was wondering if it is allowed (or do-able) to change the messages of mavros. I think this is needed as I am not able to communicate with my rover as those version differences. Can I clone the files in /opt/ros/indigo/include/ so I have a file which defines the message as mavros 0.13? Or should I create a .msg file in my own application which generates it for me? It is for the specific /mavros/rc/override message. When starting mavros on my rover, running the application (ros_erle_teleoperation) the message type changes to the new 0.15 message type /mavros_msgs/rc/override and therefore I seem to not getting the device to work. Regards, Martin. TL;DR - Can I duplicate the mavros/OverrideRCIn message in the folder /opt/ros/indigo/include to make mavros 0.13 on the rover communicate with mavros 0.15 on the laptop? Originally posted by mbeentjes on ROS Answers with karma: 5 on 2015-11-18 Post score: 0 Answer: You should install mavros 0.13 in an overlay on your laptop. http://wiki.ros.org/catkin/Tutorials/workspace_overlaying Originally posted by tfoote with karma: 58457 on 2015-11-19 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 23018, "tags": "mavros" }
Represent a pure state in terms of 2 antipodal points on the Bloch sphere
Question: I recently had an assignment where the question is based on the assumption that we can write any pure state qubit $|\phi \rangle$ as: $$|\phi \rangle = \gamma |\psi\rangle + \delta |\psi^\perp \rangle$$ Where $|\psi\rangle$ and $|\psi^\perp \rangle$ are 2 antipodal points on the Bloch sphere: $$ |\psi\rangle = \cos \frac{\theta}{2} |0\rangle +e^{i\varphi}\sin \frac{\theta}{2} |1\rangle$$ $$ |\psi^\perp\rangle = \cos \frac{\theta + \pi}{2} |0\rangle +e^{i\varphi}\sin \frac{\theta + \pi}{2} |1\rangle$$ I have a lingering question about how this actually works. So far I got: $$|\phi\rangle= \gamma |\psi\rangle + \delta |\psi^{\perp}\rangle$$ $$= \gamma \left(\cos \frac{\theta}{2} |0\rangle +e^{i\varphi}\sin \frac{\theta}{2} |1\rangle \right) + \delta \left(\cos \frac{\theta + \pi}{2} |0\rangle +e^{i\varphi}\sin \frac{\theta + \pi}{2} |1\rangle \right)$$ $$ = \left(\gamma \cos \frac{\theta}{2} + \delta \cos \frac{\theta + \pi}{2}\right)|0\rangle + \left(\gamma e^{i\varphi}\sin \frac{\theta}{2} + \delta e^{i\varphi} \sin \frac{\theta + \pi}{2}\right)|1\rangle$$ $$\Rightarrow \alpha = \gamma \cos \frac{\theta}{2} + \delta \cos \frac{\theta + \pi}{2}$$ $$\Rightarrow \beta = \gamma e^{i\varphi}\sin \frac{\theta}{2} + \delta e^{i\varphi} \sin \frac{\theta + \pi}{2}$$ So $\alpha^2 + \beta^2 = 1$. I'm not sure if I can solve this equation. I wonder if it's solvable or is there a better way to go about understanding writing a pure state in $|\psi\rangle$ and $|\psi^\perp \rangle$ basis. I know that they are orthonormal so intuitively it should work. 
Answer: Two antipodal states in the Bloch sphere (note that $0 \leq \theta \leq \pi$): \begin{equation} |\psi \rangle = \cos \frac{\theta}{2} |0 \rangle + e^{i\varphi}\sin \frac{\theta}{2} |1 \rangle \\ |\psi^\perp \rangle = \cos \frac{\pi - \theta}{2} |0 \rangle + e^{i(\varphi + \pi)}\sin \frac{\pi - \theta}{2} |1 \rangle = \sin \frac{\theta}{2} |0 \rangle - e^{i\varphi}\cos \frac{\theta}{2} |1 \rangle \end{equation} The expression for $|\psi^\perp \rangle$ from the question differs from this $|\psi^\perp \rangle$ by a global phase. The reason why I prefer this notation is that I want to keep the Bloch sphere formalism (e.g. the $0 \leq (\pi - \theta) \leq \pi$ constraint that holds for the $|\psi^\perp \rangle$ presented here). Note that: $$\langle \psi | \psi^\perp \rangle = \cos \frac{\theta}{2} \sin \frac{\theta}{2} - \sin \frac{\theta}{2} \cos \frac{\theta}{2} = 0$$ By doing the same calculations we can obtain for $|\phi\rangle= \gamma |\psi\rangle + \delta |\psi^{\perp}\rangle = \alpha |0\rangle + \beta |1\rangle$: \begin{align*} &\alpha = \gamma \cos \frac{\theta}{2} + \delta \sin \frac{\theta}{2} \\ &\beta = \gamma e^{i\varphi}\sin \frac{\theta}{2} - \delta e^{i\varphi} \cos \frac{\theta}{2} \end{align*} Then (here I take into account that $|e^{i\varphi}| = 1$): \begin{equation} |\alpha|^2 = |\gamma|^2 \cos^2 \frac{\theta}{2} + |\delta|^2 \sin^2 \frac{\theta}{2} + 2 Re(\gamma) Re(\delta) \cos \frac{\theta}{2}\sin \frac{\theta}{2} + 2 Im(\gamma) Im(\delta) \cos \frac{\theta}{2}\sin \frac{\theta}{2} \end{equation} \begin{equation} |\beta|^2 = |\gamma|^2 \sin^2 \frac{\theta}{2} + |\delta|^2 \cos^2 \frac{\theta}{2} - 2Re(\gamma) Re(\delta)\cos \frac{\theta}{2} \sin \frac{\theta}{2} - 2Im(\gamma) Im(\delta)\cos \frac{\theta}{2} \sin \frac{\theta}{2} \end{equation} Because if we have two complex numbers $z_1 = x_1 + i y_1$ and $z_2 = x_2 + i y_2$, then: $$|z_1 \pm z_2|^2 = (x_1 \pm x_2)^2 + (y_1 \pm y_2)^2 = |z_1|^2 + |z_2|^2 \pm 2 x_1 x_2 \pm 2 y_1 y_2$$ After summing
the expressions for $|\alpha|^2$ and $|\beta|^2$ we will obtain: $$|\alpha|^2 + |\beta|^2 = \left(|\gamma|^2 + |\delta|^2 \right) \left(\sin^2 \frac{\theta}{2} + \cos^2 \frac{\theta}{2} \right) =1$$
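A numeric sanity check of the derivation above. The values of θ, φ and the (normalized) coefficients γ, δ below are made up; the check confirms the orthogonality of the two antipodal states and that |α|² + |β|² = 1:

```python
import cmath
import math

theta, phi = 0.7, 1.3               # arbitrary Bloch angles
gamma, delta = 0.6, 0.8j            # |gamma|^2 + |delta|^2 = 1

e = cmath.exp(1j * phi)
psi      = (math.cos(theta / 2),  e * math.sin(theta / 2))
psi_perp = (math.sin(theta / 2), -e * math.cos(theta / 2))

# <psi|psi_perp> = 0: the two Bloch-antipodal states are orthogonal
inner = psi[0].conjugate() * psi_perp[0] + psi[1].conjugate() * psi_perp[1]
assert abs(inner) < 1e-12

# |phi> = gamma|psi> + delta|psi_perp> = alpha|0> + beta|1>
alpha = gamma * psi[0] + delta * psi_perp[0]
beta  = gamma * psi[1] + delta * psi_perp[1]
assert abs(abs(alpha) ** 2 + abs(beta) ** 2 - 1) < 1e-12
```

Since {|ψ⟩, |ψ⊥⟩} is an orthonormal basis, normalization of (γ, δ) carries over to (α, β) automatically, which is exactly what the algebra above shows.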
{ "domain": "quantumcomputing.stackexchange", "id": 1414, "tags": "quantum-state, mathematics, bloch-sphere" }
Why are humans only so tall/large?
Question: In biology I've learned that cells rapidly divide and can grow and split indefinitely, and that certain parts of the body have to develop before growing, but I am hung up on the fact that the body produces hormones so that we only grow to a few feet tall and stop growing at a certain age. Why does it stop in terms of height or physical mass when it can still keep on growing? Why do the hormones tell the body to stop growing when it can continue? Answer: The question is a bit vague but I will take it to mean the following: Why does it (the body) stop in terms of height or physical mass when it can still keep on growing? The answer is physics, specifically the ability of a body of a specific shape and structure to support itself and move itself, followed by the energy requirement to feed all those cells. Strength of a bone scales with its cross-sectional area, similar to muscle. Meanwhile, mass scales with volume. As a result, mass increases faster than bone strength. So the bigger you are, the weaker your bones become if all proportions are maintained. As a result, for a certain design of animal there is an upper limit on how big it can become before it is unable to support its own weight or to move. Next comes the energy requirement. In general, the more living biomass you have, the more energy you require to keep it alive. The scaling isn't proportional but there is a general trend. And the more energy required, the more food is needed, and given the habitat an animal lives in, there is only so much food around. This places a limit on how big an animal can grow. An animal living in its niche has an energy budget. Associated with the energy budget, we meet biology and natural selection. Energy placed into growing isn't energy placed into reproducing. All animals eventually die, either by predator or bad climate (winter, drought, even bad luck).
So it becomes a balance: given the amount of energy available, how much energy should be put into growth to give a more robust structure to survive, versus how much energy should be put into reproduction so that the organism can multiply. Put too much energy into growing bigger and your probability of dying before reproducing increases, as you take too much time growing before becoming sexually mature. Put too much energy into reproducing and, upon sexual maturity, your body is small and weak and you die before you spend much time reproducing. The ultimate right answer depends on the animal's niche, the physical environment around it, its biology, and the predators around it.
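The square-cube argument in the answer can be made concrete with a toy calculation: scale every linear dimension by a factor s; supported weight grows like volume (s³) while bone capacity grows like cross-sectional area (s²), so skeletal stress grows linearly with s:

```python
# Toy square-cube illustration of the scaling argument above.
def relative_bone_stress(s):
    weight = s ** 3      # mass (and so weight) scales with volume
    strength = s ** 2    # load-bearing capacity scales with area
    return weight / strength   # stress per unit capacity grows like s

assert relative_bone_stress(1) == 1.0
assert relative_bone_stress(4) == 4.0   # 4x taller -> 4x the skeletal stress
```

This is why simply scaling an animal up while keeping its proportions eventually fails: the design must change (thicker limbs relative to length), not just the size.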
{ "domain": "biology.stackexchange", "id": 8727, "tags": "growth, cell-division" }
Filling a memory segment with a bit pattern
Question: I want to fill a memory segment with a certain byte pattern using powerpc assembly: # Task: Fill the area from 0x8000 to (inclusive) 0x8FFF with 0x55 (per byte read/write) # Write start address - 1 into a register addis r2, r0, 0x0000 ori r2, r2, 0x7FFF # Bit pattern is 0x55 addi r3, r0, 0x55 # Size of memory segment + 1 addis r4, r0, 0x0000 ori r4, r4, 0x1000 # Move that length to CTR register mtctr r4 loop: #Write pattern from r3 into the address (1 + r2), and write the result of 1 + r2 back into r2 stbu r3, 1(r2) # decrement CTR and see if we have reached the end of the segment bdnz Answer: You have some unnecessary comments. For example, this comment: # Move that length to CTR register mtctr r4 Anyone who knows this instruction set should be easily able to know what that line is doing. Comments are supposed to explain why, not how because "how" is very easy to find out on one's own. You did not specify what assembler you were using, but if your assembler allows for creating macros, I recommend that you make macros for those "random" numbers that are in your code. These "random" numbers are called magic numbers; they've just appeared out of no-where. In the NASM assembler, you can do something like this: %define BIT_PATTERN 0x55 Then, in the line where you are working with the bit pattern, you write: addi r3, r0, BIT_PATTERN And now there is no confusion about where that number came from and you can remove that comment from above this line. You should see if you can create macros for this bit pattern like I showed you, and for the memory segment addresses. Note: I could not find a PowerPC assembler any where on the internet so my recommendations from here on may be slightly flawed in some places. These lines can be simplified: addis r2, r0, 0x0000 ori r2, r2, 0x7FFF All you seem to be trying to do here is stick the value 0x7FFF into the register r2. 
However, you are taking two instructions to do this: Move 0 into r2 Make r2 become 0x7FFF with ori This seems a little unnecessary; why can't you just put 0x7FFF into r2 to begin with, like you did when you put the bit pattern into r3? A much simpler way of writing this would be: li r2, 0x7FFF which is directly equivalent to addi r2, 0, 0x7FFF Note: This assumes that r0 was 0 to begin with, which I think it was. The above will put the value 0x7FFF into r2. You can do this same thing for when you are putting the size of the memory segment into r4, and when you are putting the bit pattern into r3. It didn't look like you specifically needed to actually use the addis instruction, which seems unnecessarily complicated for simply putting a value into a register. However, if you did, just change the instruction to: lis r2, 0x7FFF You are using the registers incorrectly. According to this page, the registers r2, r3, and r4 are for the table of contents pointer, the return of a function/subroutine, and (commonly) a function/subroutine argument, respectively. I think you should be using the registers r4 through r10 because, since you are not using a subroutine, these registers can now be used for general purpose. You might be okay with using r3, but you definitely should not be using r2. Putting it all together (note that bdnz needs a branch target, the loop label): li r4, 0x7FFF li r5, 0x55 li r6, 0x1000 mtctr r6 loop: stbu r5, 1(r4) bdnz loop If you notice any bugs, notify me.
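As a cross-check of the intended behaviour (not PowerPC code, just a Python sketch), the store-byte-with-update loop can be simulated: the address register is bumped by 1 before each store, exactly like `stbu rS, 1(rA)`, and the loop runs 0x1000 times as with the CTR:

```python
# Simulate filling 0x8000..0x8FFF (0x1000 bytes) with 0x55 using
# pre-incremented byte stores, mirroring stbu's "update" addressing mode.
memory = bytearray(0x10000)

addr = 0x8000 - 1            # start address minus one, like the r2/r4 setup
pattern = 0x55
count = 0x1000               # the loop count loaded into CTR

for _ in range(count):
    addr += 1                # stbu: bump the effective address first...
    memory[addr] = pattern   # ...then store the byte there

assert memory[0x8000] == 0x55 and memory[0x8FFF] == 0x55
assert memory[0x7FFF] == 0 and memory[0x9000] == 0   # neighbours untouched
```

The two boundary assertions confirm the range is inclusive at both ends and that the "minus one" start address plus pre-increment lands exactly on 0x8000.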
{ "domain": "codereview.stackexchange", "id": 14692, "tags": "assembly" }
P is undecidable and not semidecidable, Q is undecidable and semidecidable and P ⊂ Q
Question: My problem: Define two sets P and Q of words (that is, two problems) such that: P is undecidable and not semidecidable, Q is undecidable and semidecidable, and P ⊂ Q Answer: Hint: Following the OP's suggestion, let $P$ be the set of all TMs halting on every input, and let $Q$ be the set of all TMs halting on the empty input. It is known that $Q$ is undecidable and semidecidable (indeed, $\Sigma_1^0$-complete) and that $P$ is undecidable and not semidecidable (indeed, $\Pi_2^0$-complete). Moreover, $P \subseteq Q$ since if $T \in P$ then $T$ halts on every input, and in particular $T$ halts on the empty input, so that $T \in Q$.
{ "domain": "cs.stackexchange", "id": 4193, "tags": "undecidability, semi-decidability" }
How to solve '[rosrun] Couldn't find executable named orb_template.py below /opt/ros/fuerte/share/object_recognition_core'?
Question: I am trying to run object_recognition on fuerte. When I run: sam@sam:~/code/ros/pcl/pcl_3d_recognition$ rosrun object_recognition_core orb_template.py -o my_textured_plane [rosrun] Couldn't find executable named orb_template.py below /opt/ros/fuerte/share/object_recognition_core sam@sam:~/code/ros/pcl/pcl_3d_recognition$ How can I solve it? Thank you~ Originally posted by sam on ROS Answers with karma: 2570 on 2012-09-21 Post score: 0 Answer: On my installation (checked-out following the instructions for fuerte here: http://ecto.willowgarage.com/recognition/), the orb_template.py resides in object_recognition_capture/apps. D. Originally posted by dejanpan with karma: 1420 on 2012-09-23 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by sam on 2012-09-25: Why is object_recognition_capture not a package? That meant I couldn't use rosrun to run orb_template.py.
{ "domain": "robotics.stackexchange", "id": 11108, "tags": "ros, object-recognition" }
Using Approximate Time Sync - how does it work?
Question: Greetings, take this for instance: int main(int argc, char** argv){ ros::init(argc, argv, "decision_ensemble"); DecisionEnsemble ensemble; ros::NodeHandle nh; message_filters::Subscriber<elars::alg1> alg1_sub(nh, "alg1", 1); message_filters::Subscriber<elars::alg2> alg2_sub(nh, "alg2", 1); message_filters::Subscriber<elars::alg3> alg3_sub(nh, "alg3", 1); ROS_INFO("HERE"); typedef sync_policies::ApproximateTime<elars::alg1, elars::alg2, elars::alg3> syncPolicy; Synchronizer<syncPolicy> sync(syncPolicy(10), alg1_sub, alg2_sub, alg3_sub); sync.registerCallback(boost::bind(&DecisionEnsemble::ensembleCallback, &ensemble, _1, _2, _3)); ros::spin(); return 0; } An attempt to use ApproximateTime to synchronize 3 separate recognition algorithm messages. As you might expect, it doesn't work; it simply hangs. The subscriptions are correct and I am sure that the 3 separate nodes are publishing; however, my callback for this particular node, ensembleCallback, does not execute. In fact, the node hangs after subscribing to the topics alg1, alg2, alg3. Am I using boost::bind correctly? The template is a bit esoteric to the uninitiated. In any event, the real question is this: I have 3 nodes, each one taking some delta time interval to complete a task. I have an algorithm that needs info from each node, yet I do not want to run the algorithm until each of the 3 nodes has published. Once all 3 have published, I run the algorithm and output some interesting bit of data. The 3 algorithms are not synchronized and each can take wildly varying time to complete. Is ApproximateTime Sync the appropriate solution? Or should I be investigating another approach to message passing synchronization?
Originally posted by 101010 on ROS Answers with karma: 79 on 2013-05-24 Post score: 4 Answer: The lines that need to change: message_filters::Subscriber<elars::alg1> *alg1_sub = new message_filters::Subscriber<elars::alg1>(nh, "alg1", 1); message_filters::Subscriber<elars::alg2> *alg2_sub = new message_filters::Subscriber<elars::alg2>(nh, "alg2", 1); message_filters::Subscriber<elars::alg3> *alg3_sub = new message_filters::Subscriber<elars::alg3>(nh, "alg3", 1); Synchronizer<syncPolicy> sync(syncPolicy(10), *alg1_sub, *alg2_sub, *alg3_sub); Originally posted by dim_sgou with karma: 96 on 2016-04-25 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 14281, "tags": "ros, synchronization, message, approximatetime" }
Reference value for two-electron repulsion integral over GTO's
Question: I am currently trying to implement a Full CI program from scratch. The energies I get are a bit too high, so I'm looking for the mistake. One possibility is my implementation of the two-electron repulsion integrals, $$\int_{-\infty}^\infty \int_{-\infty}^\infty \frac{e^{-\alpha_1 \boldsymbol{r}_1^2} e^{-\alpha_2 \boldsymbol{r}_1^2} e^{-\beta_1 \boldsymbol{r}_2^2} e^{-\beta_2 \boldsymbol{r}_2^2}} {\left| \boldsymbol{r}_1-\boldsymbol{r}_2 \right|\ } \mathrm{d}\boldsymbol{r}_1^3 \mathrm{d}\boldsymbol{r}_2^3 $$ with $\boldsymbol{r}_1=(x_1,y_1,z_1), \boldsymbol{r}_2=(x_2,y_2,z_2).$ However, I cannot find any values I could compare my results to. I tried (numerical) integration in Maple, but that's way too slow and/or numerically unstable because of the singularity at $\boldsymbol{r}_1=\boldsymbol{r}_2$. I tried installing the libraries libint and Libcint, and the python module pyscf, which all should be able to do this kind of computations, but I horribly fail at installing things not made for Windows (MINGW is only half working, and I don't have a proper Linux installation available right now...). So, could someone who has such a program installed please give me a number this integral evaluates to, for whatever numbers $\alpha_1 \ldots \beta_2$ ? Answer: You don't state which method you implemented in order to compute two-electron integrals, therefore I will list all the main references at first. Cook's book [1] contains analytical formulas for overlap, kinetic, electron-nucleus attraction and electron-electron repulsion integrals. The analytical formula for electron-electron repulsion integrals is wrong in the book, but you can look at this discussion for the errata. However, a Full CI calculation is computationally costly. For this reason I believe a better approach would be to use a more efficient scheme to compute integrals. 
You can look at the following: the Obara-Saika scheme, the McMurchie-Davidson scheme, and Rys quadrature. You can find a good explanation of these methods in Ref. [2]. Now, if you want to check what you already implemented, you can find a list of the two-electron integrals in the STO-3G basis set for $\ce{HeH+}$ in Szabo's book appendix [3]. Here I can give you a set of two-electron integrals I obtain for $\ce{H2}$ (always in the STO-3G basis set) for a bond length of $1.4$ Bohr:
( 1 1 1 1 ) 0.77460834925515787
( 1 1 1 2 ) 0.44410904384277344
( 1 1 2 1 ) 0.44410904384277350
( 1 1 2 2 ) 0.56967771733030592
( 1 2 1 1 ) 0.44410904384277361
( 1 2 1 2 ) 0.29702946944511982
( 1 2 2 1 ) 0.29702946944511982
( 1 2 2 2 ) 0.44410904384277333
( 2 1 1 1 ) 0.44410904384277333
( 2 1 1 2 ) 0.29702946944511982
( 2 1 2 1 ) 0.29702946944511982
( 2 1 2 2 ) 0.44410904384277361
( 2 2 1 1 ) 0.56967771733030592
( 2 2 1 2 ) 0.44410904384277350
( 2 2 2 1 ) 0.44410904384277344
( 2 2 2 2 ) 0.77460834925515787
Note that these results came from a program I wrote myself, but they usually match Szabo's [3] and Gaussian09 values for the total energy very well. If these results match your calculations, then the problem might be in integrals with higher angular momentum. [1] Cook, Handbook of Computational Chemistry, Oxford University Press, 1998. [2] T. Helgaker, P. Jørgensen and J. Olsen, Molecular Electronic-Structure Theory, Wiley, 2000. [3] A. Szabo and N. Ostlund, Modern Quantum Chemistry, Dover, 1996.
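For the specific integral in the question, where all four s-type primitives share a single centre, there is also a simple closed form to check against. With $p = \alpha_1 + \alpha_2$ and $q = \beta_1 + \beta_2$, the standard (ss|ss) Gaussian repulsion formula reduces (Boys function $F_0(0)=1$, since the two charge distributions coincide) to $2\pi^{5/2}/(pq\sqrt{p+q})$ for unnormalized primitives. The function name below is mine, not from any library:

```python
import math

# Closed form of the question's integral when all four unnormalized s-type
# Gaussians sit on the same centre:
#   (ss|ss) = 2 * pi^(5/2) / (p * q * sqrt(p + q)),  p = a1+a2, q = b1+b2
def eri_ssss_same_center(a1, a2, b1, b2):
    p, q = a1 + a2, b1 + b2
    return 2.0 * math.pi ** 2.5 / (p * q * math.sqrt(p + q))

# e.g. all exponents equal to 1: p = q = 2, value = pi^(5/2)/4 ~ 4.3734
print(eri_ssss_same_center(1.0, 1.0, 1.0, 1.0))
```

Comparing a few such values against your general ERI routine in the concentric limit is a quick way to isolate whether the bug is in the Boys function or in the prefactors.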
{ "domain": "chemistry.stackexchange", "id": 4680, "tags": "quantum-chemistry, computational-chemistry" }
Fast power in Go
Question: I just started learning Go. To start playing with it, I've implemented the fast power algorithm: Any suggestions or criticisms regarding the coding style? package main import ( "errors" "fmt" "math" ) func main() { result, ok := fast_power(2, 4) if ok != nil { fmt.Println("Something went wrong", ok) } fmt.Println(result) } func fast_power(n uint32, power int) (uint32, error) { if power < 0 && math.Floor(float64(power)) == float64(power) { return uint32(math.NaN()), errors.New("Power must be a positive integer or zero") } if power == 0 { return 1, nil } var factor uint32 var result uint32 mul := func(v uint32) { if result == 0 { result = v } else { result *= v } } for factor = n; power > 0; power, factor = power>>1, factor*factor { if power&1 == 1 { mul(factor) } } return result, nil } Answer: If an error type is returned, then the variable you're assigning it to, usually is called err or something similar. An example is shown here. If instead, a bool type is returned, which is true upon success, then it is called ok. An example is shown here. You might know that your fast_power function doesn't like getting negative power values, but since I can easily compute 5^(-2), I'm sure there'll be someone who will try to do so with your function. I suggest you document that as well (by means of comments), and not just by throwing an error.
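One design point the review only hints at: initializing the accumulator to 1 (the multiplicative identity) removes the need for the `mul` closure and its `result == 0` special case entirely. A language-agnostic Python sketch of the same exponentiation-by-squaring algorithm (without Go's uint32 wrap-around):

```python
# Exponentiation by squaring: process the exponent bit by bit, squaring the
# factor at each step and multiplying it in when the current bit is set.
def fast_power(n, power):
    if power < 0:
        raise ValueError("power must be a non-negative integer")
    result = 1            # multiplicative identity: no special-casing needed
    factor = n
    while power > 0:
        if power & 1:
            result *= factor
        factor *= factor
        power >>= 1
    return result

assert fast_power(2, 4) == 16
assert fast_power(3, 0) == 1   # power == 0 falls out naturally
```

Note that starting at 1 also makes the `power == 0` early return in the Go code unnecessary: the loop simply never runs.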
{ "domain": "codereview.stackexchange", "id": 11951, "tags": "beginner, algorithm, integer, go" }
Does the weight of an hourglass change when sands are falling inside?
Question: An hourglass H weighs h. When it's placed on a scale with all the sand rested in the lower portion, the scale reads weight x where x = h. Now, if you turn the hourglass upside down to let the sand start to flow down, what does the scale read? I imagine initially, when the sand starts to fall but before the first batch of grains touch the bottom of the hourglass, these grains of sand effectively are in a state of free fall, so their weight would not register onto the scale. The weight at this point has to be less than h. However, what about the steady state when there is always some sand falling and some sand hitting the bottom of the hourglass? In the steady state, although we are having some sands in the free fall state and thus decrease the weight of H, there are also sands that are hitting (decelerating) the bottom of the hourglass. This deceleration should translate increase the reading on the scale more than the actual weight of those impacting sands. To illustrate the last point, imagine a ball weighing 500g rested on a scale. If you drop this ball from a mile high onto the same scale, on impact, the scale would read higher than 500g. in the same way, in our hourglass question, will the decreasing effect of weight due to free-fall cancel out exactly the increasing effect of weight due to sand impacting? does it depend on the diameter of the opening? does it depend on the height of the free-fall? Does it depend on the air pressure inside the hourglass? Answer: Analyzing the acceleration of the center of mass of the system might be the easiest way to go since we could avoid worrying about internal interactions. Let's use Newton's second law: $\sum F=N-Mg=Ma_\text{cm}$, where $M$ is the total mass of the hourglass enclosure and sand, $N$ is what you read on the scale (normal force), and $a_\text{cm}$ is the center of mass acceleration. 
I have written the forces such that upward is positive. The center of mass of the enclosure+sand moves downward during the process, but what matters is the acceleration. If the acceleration is upward, $N>Mg$. If it is downward, $N<Mg$. Zero acceleration means $N=Mg$. Thus, if we figure out the direction of the acceleration, we know how the scale reading compares to the gravitational force $Mg$. The sand that is still in the top and already in the bottom, as well as the enclosure, undergoes no acceleration. Thus, the direction of $a_\text{cm}$ is the same as the direction of $a_\text{falling sand}$. Let's just focus on a bit of sand as it begins to fall (initial) and then comes to rest at the bottom (final). $v_\text{i, falling}=v_\text{f, falling}=0$, so $a_\text{avg, falling}=0$. Thus, the (average) acceleration of the entire system is zero. The scale reads the weight of the system. The paragraph above assumed the steady state condition that the OP sought. During this process, the center of mass apparently moves downward at constant velocity. But during the initial "flip" of the hourglass, as well as the final bit where the last grains are dropping, the acceleration must be non-zero to "start" and "stop" this center of mass movement.
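The same steady-state conclusion can be reached by direct bookkeeping: the weight "missing" from the scale (the in-flight sand column) exactly equals the extra momentum-flux force at the bottom. A numeric sketch with made-up mass flow rate and fall height:

```python
import math

# Steady-state bookkeeping: sand leaves the neck at mass rate lam (kg/s)
# and falls a height h before landing. All numbers here are made up.
g, lam, h = 9.81, 0.001, 0.05

t_fall = math.sqrt(2 * h / g)          # time each grain spends in free fall
missing_weight = lam * t_fall * g      # weight of the in-flight sand column
impact_speed = math.sqrt(2 * g * h)    # speed of a grain when it lands
impact_force = lam * impact_speed      # momentum flux absorbed at the bottom

# Algebraically both equal lam * sqrt(2*g*h): the effects cancel exactly.
assert abs(missing_weight - impact_force) < 1e-12
```

Both quantities reduce to λ√(2gh), independent of the fall height, so the cancellation does not depend on the geometry of the hourglass, consistent with the center-of-mass argument above.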
{ "domain": "physics.stackexchange", "id": 91018, "tags": "newtonian-mechanics, mass, free-body-diagram, free-fall, weight" }
Day/Year Length Of Larger, but same mass, Earth?
Question: I’m wondering how the length of a day and year would change on Earth if it was twice as big, but the same mass (less density)? Also, would such a difference cause it to orbit closer or further from the sun, or the same? I found plenty of people asking this same question, though with a more massive Earth, but my curiosity has been piqued. Answer: As mentioned in the previous answer, the length of a day could be anything for any mass/density/size of Earth. It only matters how much angular momentum Earth has, and then you can calculate the time period of one rotation based on its shape and composition. But historically, if the processes which gave Earth its angular momentum gave the same amount of angular momentum to a larger or more dense planet, that heavier planet would rotate more slowly due to a higher moment of inertia. If the density were simply doubled, then the moment of inertia would double, which would halve the angular velocity for the same angular momentum (doubling the length of a day). As for the length of a year, the orbital period is to first order determined only by the distance to the sun and the mass of the sun (which would still be much larger than the new earth’s mass). Taken from the Wikipedia page, $$ T=2\pi\sqrt{\frac{r^3}{\mu}}. $$ Thus, the year would be the same duration, provided the radius of orbit remained the same. But for a twice as heavy Earth, this would again double the angular momentum of the system. If we once again assume a constant angular momentum, then doubling the Earth’s mass would require a reduction of the orbital radius by a factor of four (since angular momentum $L=I\omega=m_{\rm e}\sqrt{\mu r}$, where $I$ is moment of inertia, $m_{\rm e}$ is Earth mass, $\mu$ is gravitational parameter, and $r$ is radius). So in addition to us roasting to death, the length of a year would decrease by a factor of 8.
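The answer's angular-momentum bookkeeping for the doubled-mass case can be checked numerically (arbitrary units for μ, m, r; the assumption, as in the answer, is that orbital angular momentum L = m√(μr) is held fixed):

```python
import math

mu = 1.0                      # GM of the sun (arbitrary units)
m, r = 1.0, 1.0               # original Earth mass and orbital radius

L = m * math.sqrt(mu * r)     # orbital angular momentum, held fixed
m2 = 2 * m                    # doubled Earth mass
r2 = (L / m2) ** 2 / mu       # solve L = m2 * sqrt(mu * r2) for r2

period_ratio = (r2 / r) ** 1.5   # T2/T1 from Kepler's third law, T ~ r^(3/2)
assert abs(r2 - r / 4) < 1e-12       # radius shrinks by a factor of 4
assert abs(period_ratio - 1 / 8) < 1e-12   # year shrinks by a factor of 8
```

This reproduces the stated factors: r → r/4 and T → T/8.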
{ "domain": "physics.stackexchange", "id": 50696, "tags": "newtonian-mechanics, orbital-motion, earth, rotation" }
Why are $S = -k_B\sum_i P_i \ln P_i$ and $S = k_B \ln\Omega$ equivalent?
Question: This might be a silly question, but I don't see the equivalence relation between these two equations. Could somebody explain to me how to derive one from the other? Thanks in advance! Answer: Citing Wikipedia here, In what has been called the fundamental assumption of statistical thermodynamics or the fundamental postulate in statistical mechanics, the occupation of any microstate is assumed to be equally probable (i.e. Pi = 1/Ω, where Ω is the number of microstates); this assumption is usually justified for an isolated system in equilibrium. Since $P_i = 1/\Omega$, we have $\ln P_i =-\ln\Omega$, and since the $P_i$ are probabilities, $\sum_i P_i=1$. Therefore $$S = -k_B\sum_i P_i \ln P_i = k_B \ln\Omega \sum_i P_i = k_B \ln\Omega.$$
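A quick numeric check of this reduction, with Ω = 8 chosen arbitrarily (entropy in units of k_B):

```python
import math

# For equal probabilities P_i = 1/Omega, the Gibbs entropy -sum P_i ln P_i
# reduces to the Boltzmann entropy ln Omega (in units of k_B).
omega = 8
p = [1 / omega] * omega
gibbs = -sum(pi * math.log(pi) for pi in p)
assert abs(gibbs - math.log(omega)) < 1e-12
```

Note the equivalence holds only under the equal-probability (microcanonical) assumption; for a non-uniform distribution the Gibbs form is the more general one.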
{ "domain": "physics.stackexchange", "id": 28140, "tags": "thermodynamics, entropy" }
Hamiltonian flow?
Question: I was wondering what the Hamiltonian flow actually is. Here is my idea; I just wanted to know if I am correct about this. So let $(x(t),p(t))' = X_{H}(x(t),p(t))$ be Hamilton's equations, with $X_H$ the Hamiltonian vector field. Then the Hamiltonian flow is the map $\phi^{t}(x(0),p(0)) = (x(t),p(t))$ and in particular $\phi^{0}= \operatorname{id}.$ Moreover we have that $d_t \phi^{t} = X_H(x(t),p(t)).$ Is this correct? Answer: The evolution of systems in the Hamiltonian formalism is called a flow, not merely because it can be described by a mapping, but because it is described by a particular mapping: one whose evolution in (q,p)-space resembles fluid flow. This resemblance gives rise to Liouville's theorem, where the Hamiltonian flow, like certain fluid flows, is shown to be incompressible (constant density).
{ "domain": "physics.stackexchange", "id": 31328, "tags": "classical-mechanics, hamiltonian-formalism, hamiltonian, vector-fields, flow" }
confusion about poles and zeros of Lead compensator?
Question: I was reading about the lead compensator on a website, but I am not able to understand how they extracted the zero and pole from the transfer function of the lead compensator shown in the attached photo. Or have they extracted/calculated wrong values for the zeros and poles? Answer: You're right, the given pole and zero are wrong. They should be $$s_0=-\frac{1}{\tau}$$ and $$s_{\infty}=-\frac{1}{\beta\tau}$$ because for $s=s_0$ the numerator becomes zero, and for $s=s_{\infty}$ the denominator becomes zero.
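A quick numeric check of the corrected locations, assuming the standard lead form $C(s) = (1+\tau s)/(1+\beta\tau s)$ with $\beta < 1$ (the constants below are made up):

```python
# For C(s) = (1 + tau*s) / (1 + beta*tau*s), the numerator vanishes at
# s0 = -1/tau and the denominator at s_inf = -1/(beta*tau).
tau, beta = 0.5, 0.1          # made-up lead-compensator constants, beta < 1

num = lambda s: 1 + tau * s
den = lambda s: 1 + beta * tau * s

s_zero = -1 / tau             # -2.0
s_pole = -1 / (beta * tau)    # -20.0
assert abs(num(s_zero)) < 1e-12
assert abs(den(s_pole)) < 1e-12
assert s_pole < s_zero < 0    # lead compensator: pole lies left of the zero
```

The final assertion captures the defining property of a lead compensator: with β < 1 the pole sits farther into the left half-plane than the zero, which is what produces the phase lead.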
{ "domain": "dsp.stackexchange", "id": 7937, "tags": "control-systems" }
Why do quasicrystals have well-defined Fourier transforms?
Question: I was recently reading about quasicrystals, and I was really surprised to learn that even though they do not have a periodic structure, and only have long range order in a very different sense to the usual one, they can still be detected via crystallographic techniques that involve Bragg diffraction patterns. More specifically, quasicrystals are like Penrose tilings in that It is self-similar, so the same patterns occur at larger and larger scales. Thus, the tiling can be obtained through "inflation" (or "deflation") and any finite patch from the tiling occurs infinitely many times. The emphasized text also means that if I have a finite patch of quasicrystal to which I want to add atoms to make the full pattern, then there will be an infinite number of different ways to do this. Therefore, the process of adding atoms to a finite patch is not deterministic: it is constrained by certain rules but there is always some choice. To be more precise about what weirds me out, I feel this 'skips' an intermediate step. I can imagine there being a tiling which is not periodic but which is nevertheless deterministic in that given a starting 'seed' patch the whole pattern is determined. In such a pattern there is no translation invariance but there is nevertheless a very rigid sense of long-range order. Edit: I was recently shown a construction that falls in this case. Consider the discrete one-dimensional point set $\mathbb Z\cup r\mathbb Z=\{\ldots,-1,0,1,\ldots,\ldots,-r,0,r,\ldots\}$ where $r$ is irrational. This set is not periodic (although any finite patch has infinite other patches that are arbitrarily similar to it), but it does have a well-defined Fourier transform: it is simply the sum of the transforms of $\mathbb Z$ and $r\mathbb Z$, which are single peaks. However, given an initial patch of length bigger than $r$ and $1$, the rest of the pattern is completely determined, and the long-range order is "rigid" without the pattern being periodic. 
For quasicrystals, on the other hand, the local orders of two distant patches are definitely correlated but only loosely so. This being the case, I'm having some trouble visualizing how it is possible to obtain diffraction patterns from them, and understanding whether they have well-defined Fourier transforms. To bring this question to a more, precise footing then, let me ask this: given a starting patch of quasicrystal and a (non-deterministic) rule for adding atoms to it, is the Fourier transform of the full, infinite pattern well defined? If so, what's the intuition that allows this to happen? If this is actually way more complicated than I realize I would also be OK with a reference to an entry-level resource on the subject, but I would really like a nice explanation of this. Answer: [I am not really the best person to answer this, but since nobody else is answering here is my best shot] given a starting patch of quasicrystal and a (non-deterministic) rule for adding atoms to it, is the Fourier transform of the full, infinite pattern well defined? In summary it depends on the rule. [It may also be a tautology since quasi-crystals are required to have well defined discrete spectra by definition, but I assume that is not important here] In more detail the Fourier transform or spectrum of a pattern is only a function of the pattern, regardless of how it was produced. The question is how do the rules constrain the possible final configurations and thus their Fourier transforms. A non deterministic process can conceivably produce a perfectly regular pattern with a simple spectrum. If a process is sufficiently nondeterministic it may lead to perfectly regular on some runs and wildly complex patterns on others, in which case the rules of the process would not really tell us anything about the final spectrum. 
At another extreme the process may be such that the final pattern is the same regardless of the nondeterministic choices made along the way, which only affect the order in which the pattern is produced, in which case the spectrum is fixed by the rules of the process. In between you get systems where the space of final patterns is large but has regularities which are reflected in the spectra. Real processes have probabilities associated with their nondeterministic choices, which leads to a probability density function on the final patterns and on the spectra. Probabilistic rules lead to statistical regularities in the produced patterns and their spectra. The classic examples of this are white, pink and brown noise. These are random processes whose final pattern in the time domain is unpredictable, but their spectra are well defined and have a high degree of regularity in that their amplitude-frequency relationships follow specific power laws. I do not know how the processes producing quasi-crystals produce the particular spectra they do, except that they must be biased to producing patterns with approximate symmetries that cannot be realized exactly by regular crystals. @user23660's comment and reference look like promising pointers Edit Some examples to illustrate the ideas described above, as requested in the comments: For a very simple example of an eventually/asymptotically deterministic system, start with a square lattice that has an "atom" at the origin. The rule is to put an atom on an empty spot next to an existing atom with equal probability. In the long run you will end up with a mostly symmetric, mostly convex blob around the origin with some variations in the exact shape of the boundary between runs, but for very long runs the result will be essentially the same.
For types of stochastic patterns where all individuals are distinct but have large-scale regularities which produce recognizable spectra, consider natural patterns: sand dunes, tree bark, giraffe spots, fingerprints and other kinds of textures. Quasicrystals & Penrose tilings are vaguely similar to this, except that they are made of components that are perfectly regular. For a system that can produce any kind of pattern, simple or complex, you can take the rule to put an atom anywhere. You can then put the atoms in a regular arrangement or chaotically. This is a cheat though, because if you made the rule probabilistic the probability of getting a regular pattern is virtually 0. You would almost certainly get a random pattern, but I think it would still have a well-defined spectrum the same way white noise does. Unfortunately I do not have a good simple example of a probabilistic system that is likely to produce both complex and regular patterns with significant probability. Fluid dynamics is a real system that is somewhat like that: you get laminar flow at low Reynolds number, but flows become increasingly turbulent as the Reynolds number increases. It may also help to think about Conway's Life. It is deterministic, of course, but depending on the starting configuration it can produce almost any kind of behaviour, so the rules do not really tell you what kind of pattern, and thus spectrum, you will get in the long run; and, since Life is actually Turing complete, you cannot predict the eventual patterns even if you have the initial configuration, except by essentially running the rules. This is due to the undecidability of the reachability problem for Turing machines. It may help to look at (stochastic, asynchronous) cellular automata. Stephen Wolfram's New Kind of Science has many examples of discrete systems on the order-chaos boundary. Here is also a talk by Wolfram.
{ "domain": "physics.stackexchange", "id": 11778, "tags": "condensed-matter, mathematical-physics, fourier-transform, x-ray-crystallography, quasicrystals" }
Can oral baking soda affect tumor cells in mice
Question: Could anyone please explain how exactly (according to a research article in the journal Cell) adding baking soda to drinking water can influence the acidity of tumor cells? What about homeostasis and pH buffer systems? In this article in Cell, discussed in this news source, in addition to a number of experiments with cell lines, a live mouse model with xenograft tumors was used to show the effect of oral bicarbonate on tumor acidification, and support the in vitro results re: the effect of tumor-associated acidification on the circadian clock. How is it possible that oral bicarbonate could make tumors in these mice less acidic? Wouldn't the physiological buffer system prevent any changes to the pH in tissues? Answer: What can oral baking soda do? If your question is (which I don't think it is): Does baking soda cure cancer? The answer is that there is no support for that statement. If your question is: Can oral bicarbonate alter the pH of tumors in a mouse model? The answer is yes. This effect has been observed in this mouse model and replicated many times. This is an effect that has been used for a while, and was first demonstrated in this 1999 article in the British Journal of Cancer. The addition of bicarbonate increases the buffering capacity in the live animal model, preventing the acidification of the tumor. It does not increase the pH of other tissues. How does this work? The idea here is not that adding a base increases blood or tissue pH directly, but that it increases the amount of a physiologic buffer in a situation where that buffer has been depleted or is insufficient, allowing a better response to a pathologic excess of acid. Your question suggests you believe that an oral acid or base load will not change the pH of body tissues. This is true in normal physiology because there is a robust system of buffers. The primary extracellular buffer actually is bicarbonate. However, in diseased states, the buffer may become depleted or be insufficient.
Oral (as well as intravenous) bicarbonate therapy is used in humans in several disease states, including certain forms of metabolic acidosis (Cecil Medicine, Ch 63, 120) and kidney disease (Ch 124, 128, 132). What is happening in these cases, and in the mouse model, is not a change in pH from the normal set point, but an increase in buffering capacity that allows the pH to return to the normal set point. I would caution, though, that the fact that oral treatment is used in certain specific forms of acid-base pathology does not mean that there is support for the various forms of quackery that use arguments about flushing acids from your body, or using alkaline water as a general health tonic. It does, however, address the question of whether it is very strange that orally administered buffer would impact a tissue with excess acid. It's not (strange).

The effect of changing the pH in the tumor microenvironment

The effect in the study behind the article you referenced, as well as others, is an increased sensitivity to cancer chemotherapy in otherwise resistant tumors. This is not horribly surprising, and doesn't mean by any stretch that baking soda cures cancer or that, e.g., an acidic diet causes cancer. Other studies, including this one (also in this very particular mouse model), show decreased invasion and metastases, also presumably because of the increased buffering capacity (and decreased acidification of the tumor and tumor microenvironments), so there may be other possible effects. There has been a good deal of interest since 1999 in human studies adding bicarbonate to other treatments, but none have been published that I'm aware of, though I haven't checked the clinical trial registry. EDIT: On checking the clinical trial registry, on first glance it looks like some safety and tolerability studies have been completed, but it would take a fair amount to do a full analysis of what has been done and what hasn't.
Maybe someone else wants to do it and include it in an answer? I'm fairly certain I would have seen and remembered an article showing that bicarbonate cures cancer in humans, though, so I don't think that's on the list :)
{ "domain": "biology.stackexchange", "id": 9083, "tags": "cell-biology, cancer, homeostasis" }
Which particles have helicity?
Question: I know electrons have helicity, but can particles which are not fundamental, i.e. protons, have helicity? Answer: The helicity of a particle is defined to be the projection of its spin vector $\mathbf{s}$ along the direction of its momentum $\mathbf{p}$: $$H = \frac{\mathbf{s}\cdot\mathbf{p}}{||\mathbf{s}\cdot \mathbf{p}||}$$ For a massless particle, the helicity is equivalent to the chirality; if the mass of the particle is not zero, then helicity is not a Lorentz invariant; neutrino helicity is $-1$. So every particle with spin $\neq 0$ and/or $m \neq 0$ has helicity.
{ "domain": "physics.stackexchange", "id": 26722, "tags": "spinors, protons, helicity" }
Does the 'President' have 2 Billion Leaves?
Question: The world's second-largest known tree, the President, in Sequoia National Park is 3200 years old and is said to have 2 billion leaves (Source: https://youtu.be/vNCH6uhB_Bs?t=59). Is this correct? And how was this number arrived at? In other words, how is such an insanely large number possible? Answer: One way* to come up with an estimate of how many leaves - or needles, in the case of Sequoiadendron giganteum - is simply to count the number of leaves on a twig (or a number, to get a good average), then the number of twigs on a branch, and then count the branches on the tree, after which it's just multiplication. Now one reason that the number seems so high is the way the needles grow. Unlike for instance pines, which have long needles arranged in sparse clusters of 2, 3, or 5, or spruce & fir, which have medium needles arranged along the branches, the sequoia has lots of tiny needles arranged on twiglets. Link with picture of sequoia needles: https://www.monumentaltrees.com/en/trees/giantsequoia/giantsequoia/ Picture of pine vs spruce & fir: https://www.finegardening.com/article/fir-vs-spruce-vs-pine-how-to-tell-them-apart *But I don't know whether it's the way used to get the number in the link.
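The count-and-multiply estimate described in this answer is just a scaling of small hand counts up the tree's structural hierarchy. As a sketch (the three averages below are made-up illustrative numbers, not the figures actually measured for the President tree):

```python
def estimate_needles(needles_per_twiglet, twiglets_per_branch, branches):
    """Back-of-the-envelope needle estimate: multiply average counts
    measured at each level of the tree's structure."""
    return needles_per_twiglet * twiglets_per_branch * branches
```

For example, averages of 100 needles per twiglet, 2,000 twiglets per branch and 10,000 branches would already give two billion needles, which shows how quickly the total becomes "insanely large".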
{ "domain": "biology.stackexchange", "id": 10307, "tags": "botany, trees" }
Determining moonset time from Nautical Almanac
Question: I have recently learnt to determine moonrise and moonset times using the Nautical Almanac, as part of my maritime navigation course. I have a query regarding the determination of moonset times. I will use an arbitrary date and an arbitrary latitude to illustrate my confusion. On 3rd February 2020, at the equator, the Nautical Almanac tells me that the moon will set at 0041 hrs. Does this mean that the moon that rises on 3rd February 2020 sets at 0041 hrs on 4th February 2020, or does this mean that the moon that rose on 2nd February sets at 0041 hrs on 3rd February 2020? Answer: It means that there is a moonset at that time on the third of February. At the equator, it is certain that the moon that set at 0041 rose the previous day (Feb 2nd), at about midday. (The moonset is about 50 minutes later each day, which means that there can be days with no moonset, but you can't have a day with two moonsets.) On days with no moonset, the time of moonset in your almanac might be indicated with a time that is more than 23:59. For example:

Date of month     1       2       3       4
                 h  m    h  m    h  m    h  m
                20 52   22 50   25 00   01 00

Here you see that on the first day of the month the moon sets at 20:52, and on the second it sets at 22:50. On the third day there is no moonset: the moon sets one hour after midnight. The moonset times 25:00 and 01:00 refer to the same event.
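The convention of almanac times running past 23:59 can be made concrete with a small sketch (the `hhmm` string style follows the almanac excerpt in this answer):

```python
def almanac_to_clock(day, hhmm):
    """Convert an almanac moonset entry such as day 3, '2500' (25:00)
    into an ordinary calendar day and 24-hour clock time."""
    h, m = int(hhmm[:2]), int(hhmm[2:])
    return day + h // 24, "%02d:%02d" % (h % 24, m)
```

So an entry of 25:00 on day 3 is the same event as 01:00 on day 4, while ordinary entries are unchanged.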
{ "domain": "astronomy.stackexchange", "id": 4436, "tags": "the-moon, ephemeris" }
Type error creating R gate in Q#?
Question: The R operation in Q# is listed by Microsoft in the documentation as follows operation R (pauli : Pauli, theta : Double, qubit : Qubit) : Unit However, when I try to use the following command in a Q# operation, R(PauliX,0,Q1); I get an error, referencing the line of code for the R command: The type of the given argument does not match the expected type. Q1 is of course a Qubit, so I don't see what could be causing the problem. I've also been having difficulty getting the R1 gate working, but I suspect for similar reasons. To see the relevant documentation, please visit R operation, Q# Docs. Answer: The second argument theta has to be a Double, and 0 is a constant of type Int. Q# doesn't have implicit type casting, so you need to make sure your second argument is a Double. If you're looking for a rotation by zero angle, you'll need to do R(PauliX, 0.0, Q1);. Alternatively, you can use ToDouble to cast an integer parameter to a Double.
{ "domain": "quantumcomputing.stackexchange", "id": 576, "tags": "quantum-gate, programming, q#" }
Why do aromatic hydrogen atoms have a higher chemical shift value compared to aliphatic hydrogen atoms?
Question: In nuclear magnetic resonance (1H-NMR) spectroscopy, the chemical shifts of aliphatic hydrogen atoms are much closer to 1.0 ppm than those of aromatic hydrogen atoms. Aromatic hydrogen atoms have a chemical shift value of about 7.0-9.0 ppm, whereas the chemical shift value of aliphatic hydrogen atoms ranges between 2.0-3.5 ppm. Why is this so? Answer: The chemical shift gives you information about how well shielded the nuclei are from the magnetic field. A proton at higher chemical shift values is deshielded, so the aromatic protons are obviously less shielded than aliphatic protons. One effect that causes deshielding is the presence of electronegative atoms that draw electrons away from other atoms and thereby deshield them. But that is not the cause of the aromatic chemical shift. The reason for that one is the ring current that is induced in a magnetic field. The induced current creates a local magnetic field that has the same direction as the external field outside the aromatic ring, where the attached protons are, leading to the deshielding of the nuclei.
{ "domain": "chemistry.stackexchange", "id": 111, "tags": "nmr-spectroscopy" }
Airtightness of a plastic-on-rubber seal
Question: I want to make a pressure container out of this food container. The seal is rubber that is pushed into plastic edges. However, I don't know if it's going to work until after I've attached an air valve to it. My question is: at what rate is it going to pass air at an inside pressure of 2 atmospheres? I want it to hold at least 1.7 atmospheres over the course of two days. Thanks! Answer: Let us start with the force the pressure will exert on the lid. Assuming you mean 1.7 atmospheres above the external air pressure, at approximately 14.7 psi atmospheric pressure, 14.7 times 1.7 will equal a pressure of 24.99 pounds on each square inch of the lid. You can figure the area of the lid and multiply it by 24.99 to find the total force exerted on the lid. For instance, a 4 inch by 6 inch lid will have an area of 24 square inches; 24 times 24.99 gives a total of 599.76 pounds of force on the lid. If the lid hold-downs are strong enough to keep the lid on and the rubber seal compressed enough to not leak, then you will be ok. I hope I have understood your question correctly.
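The arithmetic in this answer can be written out as a small sketch (assuming, as the answer does, that 1.7 atm is the gauge pressure, i.e. pressure above ambient, and taking 1 atm ≈ 14.7 psi):

```python
ATM_PSI = 14.7  # approximate atmospheric pressure in psi

def lid_force_lbf(gauge_atm, width_in, length_in):
    """Total outward force on a rectangular lid: gauge pressure in psi
    times lid area in square inches."""
    psi = gauge_atm * ATM_PSI
    return psi * width_in * length_in
```

This reproduces the 4 inch by 6 inch example: roughly 600 pounds of force that the hold-downs must resist.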
{ "domain": "physics.stackexchange", "id": 73155, "tags": "material-science, gas" }
Red Cabbage Indicator colour change with Sodium Metabisulphite
Question: I made some pH indicator by boiling and straining cabbage juice, and in an effort to keep it from spoiling for a while without having to store it at fridge temperatures I added some wine-making stabiliser, which consists primarily of sodium metabisulphite. I had previously tested some of it to check if adding it would affect the pH of the indicator, since I wanted a pH-neutral preservative. Well, strangely, after leaving it for a while I came back to find that the indicator had turned a weird shade of purple I had never seen the indicator take before. At the concentration it is at, the original colour was a very dark purple - almost black. Now I know that the metabisulphite ion decomposes to produce SO2 gas, which is acidic under aqueous conditions, but I would expect that to change the indicator to its normal purple for a not very concentrated acid. But the colour it currently is is very pale, and in fact when I pipette out some of the indicator it is almost totally clear. So I guess really my question is: does sodium metabisulphite have some kind of bleaching effect I'm not aware of? Answer: http://mylespower.co.uk/2012/04/06/homemade-ph-indicator/ Now we add the reducing agent (model compound). That lessens conjugation and bleaches the anthocyanin. "But the colour it currently is is very pale" If the new compounds' residual optical absorbances are in the green, the solution will be iodine purple by transmission.
{ "domain": "chemistry.stackexchange", "id": 1001, "tags": "ph" }
Would electrolysis work with a conductive (but waterproof) divider between anode and cathode?
Question: Note I am not trying to "double fuel efficiency" in a car or something scamey like that. My purpose is to fill balloons with hydrogen and possibly power an HHO torch (also make it safer because pure hydrogen or pure oxygen will not explode). I found this https://www.thingiverse.com/thing:1200458 HHO generator someone made which seems perfect for my needs except that it outputs HHO together. I cannot find a design for actually useful HHO production that actually separates H2 and O (by useful I mean large volumes of gas, not just a science experiment). My plan for alterations is to make that design thicker so that it can hold two arrays of plates and split it in halves. I would also add in separate nozzles for each side. I would then print both halves and glue them together with a metal plate (aluminum) in the middle. Would this work? I don't see why it would not however I have not seen a design like this which probably means that I'm wrong rather than everyone else :D Answer: A cell with two electrodes separated from each other and an aluminum divider plate in the middle is really two cells: one has an aluminum plate as a cathode, and the other cell has the same aluminum plate as the anode. In this situation, the solution does not directly connect the external anode and cathode; the aluminum divider separates the solution into two separate cells. When you pass current (electrons) thru this arrangement, ions travel. Protons (or other cations) travel toward the negative electrode. In the cell attached to the external positive source, protons move away, and discharge at the aluminum. Simultaneously, protons in the adjacent cell move away from the aluminum and discharge at the stainless steel cathode. Hydroxyls (or other anions) do the reverse. You will need twice the cell voltage to get a high current, and hydrogen will be evolved from the external cathode (-) and the farther side of the aluminum divider plate. 
Oxygen will be evolved from the other side of this aluminum plate and from the external anode (+). Now make a small change: let the aluminum divider go from the top of your case to below the water line, say, halfway down. Then ions can travel thru the solution. This is now just one cell. Hydrogen will be evolved at the (one) cathode and oxygen will be evolved at the (one) anode. Your separation of the gas phase by the aluminum divider allows hydrogen to be collected separately from the oxygen. BTW, you might as well use stainless steel for this divider, since it is used for the electrodes. When collecting the gases, hydrogen will be evolved at twice the rate of oxygen. If you build up pressure, the cell will respond by pushing liquid from the cathode compartment into the anode compartment to equalize the pressure. You may have to use a check valve or some kind of flow equalizer to keep liquid from spurting out of one of the gas outlets.
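The 2:1 hydrogen-to-oxygen evolution rate mentioned in this answer follows from the electron counts in the half-reactions; a Faraday's-law sketch (assuming ideal 100% current efficiency, which a real cell won't quite reach):

```python
FARADAY = 96485.0  # coulombs per mole of electrons

def moles_evolved(charge_coulombs):
    """Moles of (H2, O2) produced by a given charge: each H2 molecule
    takes 2 electrons (2H+ + 2e- -> H2), each O2 takes 4
    (2H2O -> O2 + 4H+ + 4e-)."""
    n_e = charge_coulombs / FARADAY
    return n_e / 2.0, n_e / 4.0
```

At equal temperature and pressure the mole ratio is also the volume ratio, hence twice as much hydrogen gas as oxygen for any amount of charge passed.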
{ "domain": "chemistry.stackexchange", "id": 11769, "tags": "electrochemistry, electrolysis" }
Is there an NP-complete problem, such that the decision version of its counting problem is not PP-complete?
Question: Once we fix a polynomial time deterministic verifier V(input, certificate), its corresponding NP problem is the question: For this input, does a (polynomial size) certificate exist such that V(input, certificate) returns True? The associated counting problem (#P class) is: How many certificates exist such that V(input, certificate) returns True? #P is not a "decision problems" class, but a class of counting problems. The closest traditional "decision problems" class is PP, which has problems of the form: Do the majority of the certificates result in V(input, certificate) returning True? I am interested in the decision version of the counting problem associated with a certain NP-complete problem + verifier, which would be: Given the input instance and a positive integer number K: Are there at least K different certificates such that V(input, certificate) returns True? This decision problem is clearly equivalent to the counting version (via a binary search). If I am not mistaken, the class of all these "decision versions of the counting problems associated with NP problems" is exactly as hard as PP since: 1) Any of these "counting-decision" problems can be reframed as some other majority problem, by choosing an ad-hoc verifier definition where a lot of certificates are manually deemed to be True or False such that there are at least K True certificates in the original if and only if the majority is True in the resulting problem. Just as a simple example to illustrate the reduction idea, if there were 8 possible certificates, and we want to know whether there are at least 3 True ones, we might propose a different verifier having 11 possible certificates: for the 8 original ones it just checks normally, and for the other three it immediately returns True without looking at the input. Since the majority of 11 is 6, this new verifier accepts a majority of certificates exactly if the original one accepts at least 3. Thus, all of these problems are in PP.
2) The corresponding "counting-decision" version for any PP-complete problem will obviously be PP-hard, since solving the original majority problem is simply solving the $(input, \left \lfloor \frac{ totalCertificates}{2} \right \rfloor + 1)$ problem. Thus, such problems are PP-complete. So now, at last, I can clearly state my question, which is a "more sophisticated version" of the same idea shown in MAX,MAJ variants of NP complete problems: Is there any NP-complete problem such that the decision version of its counting problem (which is in PP) is not PP-complete? For example, in the case of Subset-Sum the associated decision problem I'm interested in would be: Are there at least K nonempty subsets of zero sum? Since K is free and not limited to be near half of the certificates, the argument of the other answer does not apply. Answer: Putting your question in more precise terms, you ask whether the following claim holds: $* \hspace{1mm} R(x,y) \text{ is an NP-complete relation}\Rightarrow count_R \text{ is PP-complete}$ Where $count_R$ is defined as follows: $count_R=\left\{(x,k) \big| \hspace{1mm} \left\lvert\left\{y : R(x,y)\right\}\right\rvert\ge k\right\}$. We call a relation $R(x,y)$ NP-complete if it is computable in polynomial time, and the language it defines $L_R=\left\{x | \exists y \hspace{1mm} R(x,y)=1\right\}$ is NP-complete. We talk in terms of relations since, as you mentioned, the counting version has to be defined relative to some specific verifier. It seems that this is an open question, as (*) implies: $** \hspace{1mm} R(x,y) \text{ is an NP-complete relation}\Rightarrow \#R\text{ is } \#\text{P-complete}$ Where $\#R(x)=\left\lvert \left\{y : R(x,y)\right\} \right\rvert$. To see why * implies the above, let $R(x,y)$ be some NP-complete relation. Using (*), $count_R$ is PP-complete, so $count_{\text{SAT}}\le_p count_R$.
In that case, $\#SAT\in\mathsf{FP}^{\# R}$ and thus $\#R$ is $\# P$-complete (use binary search, where in each cutoff you apply the reduction from $count_{\text{SAT}}$ to $count_R$ and query the $\#R$ oracle on the result). To my knowledge, (**) is currently open. See this related question from cstheory. Also related.
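The binary search alluded to in the question and in this answer — recovering the exact count from the "at least K" decision oracle — might look like this sketch (the oracle is passed in as a plain function for illustration):

```python
def count_via_decision(at_least, upper_bound):
    """Exact number of accepting certificates, given an oracle
    at_least(k) answering 'are there at least k accepting certificates?'.
    Uses O(log upper_bound) oracle calls."""
    lo, hi = 0, upper_bound  # invariant: the true count lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if at_least(mid):
            lo = mid      # at least mid certificates exist
        else:
            hi = mid - 1  # fewer than mid certificates exist
    return lo
```

With `upper_bound = 2^poly(n)` possible certificates this is polynomially many oracle calls, which is exactly why the decision and counting versions are equivalent.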
{ "domain": "cs.stackexchange", "id": 7798, "tags": "complexity-theory" }
Calculate volume of void in a thermometer
Question: I am struggling to find a starting point for this question. So I have a mercury thermometer. There's a $0.2\, \mathrm{cm^3}$ void in the glass. The question asks for the new volume of the void after a change in temperature. Below are the known quantities $$\Delta T \\ \beta_{\text{mercury}} \\ \beta_{\text{glass}}$$ Normally, if the initial volumes of the two materials are given, I can calculate the difference between their volume changes $\Delta V$. However, this question did not provide the initial volumes of mercury and glass. The best I can do for this question is to express the final answer in terms of some unknown variable $x$. Answer: A hole in a solid expands at the same rate as the solid. If you know the initial volume of the mercury and the void, then you know the initial volume of the "hole". But, you are right. For a numeric answer, you need the initial volume of the mercury.
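One way to write down the answer's observation, treating the whole cavity (mercury plus void) as expanding like the glass (the symbol $V_{\text{Hg}}$ for the unknown initial mercury volume is an addition, playing the role of the question's unknown $x$):

```latex
V_{\text{void}}' \;=\; V_{\text{void}}
  \;+\; \beta_{\text{glass}}\,\bigl(V_{\text{Hg}} + V_{\text{void}}\bigr)\,\Delta T
  \;-\; \beta_{\text{mercury}}\, V_{\text{Hg}}\,\Delta T ,
\qquad V_{\text{void}} = 0.2\ \mathrm{cm^3}
```

With $V_{\text{Hg}}$ not given, the result can indeed only be expressed in terms of that unknown, as the question suspected.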
{ "domain": "physics.stackexchange", "id": 82579, "tags": "thermodynamics, temperature, material-science, volume" }
Generating and finding parent nodes in a post-order tree
Question: Given a height h and list q of integers, list the parent node of each element in q, if nodes are read in post-order sequence starting at 1. The tree could be read like this if h=3.

          7
      3       6
    1   2   4   5

If h = 3 and q = [1,3,7] the output would be the list [3, 7, -1], where the parent of the root is always -1. How can I make this code faster? It does okay until h=10, then it slows down since it has to check $2^h-1$ nodes.

#Generate a perfect binary tree of height h
#Find the parent nodes of the values in list q, return negative 1 for the
#root
node_list = []
solution = {}


class Node:
    def __init__(self):
        self.left = None
        self.right = None
        self.data = None
        self.parent = -1
        self.left_edge = True
        self.depth = 0


def answer(h, q):
    global node_list
    global solution
    final_answer = []
    root = int(pow(2, h)) - 1
    solution.update({root: -1})
    node_list = list(range(root + 1))
    node_list.reverse()
    node_list.pop()
    node = Node()
    node.data = root
    node.left = left_branch(h, node)
    node.right = right_branch(h, node)
    for i in q:
        for key in solution:
            if i == key:
                final_answer.append(solution[key])
    return final_answer


def left_branch(h, parent: Node):
    global node_list
    global solution
    new_node = Node()
    new_node.depth = parent.depth + 1
    new_node.parent = parent.data
    try:
        if parent.left_edge:
            new_node.data = parent.data // 2
            node_list.remove(new_node.data)
        else:
            new_node.left_edge = False
            new_node.data = parent.data - int(pow(2, h - new_node.depth))
            print(new_node.data)
            node_list.remove(new_node.data)
    except ValueError:
        if not (new_node.data in node_list):
            return new_node
    solution.update({new_node.data: parent.data})
    left = left_branch(h, new_node)
    right = right_branch(h, new_node)
    new_node.left = left
    new_node.right = right
    return new_node


def right_branch(h, parent: Node):
    new_node = Node()
    new_node.left_edge = False
    new_node.depth = parent.depth + 1
    new_node.parent = parent.data
    new_node.data = parent.data - 1
    try:
        node_list.remove(new_node.data)
    except ValueError:
        return new_node
    left = left_branch(h, new_node)
    right = right_branch(h, new_node)
    new_node.left = left.data
    new_node.right = right.data
    solution.update({new_node.data: parent.data})
    return new_node

This was a challenge given by Google Foobar. The time window has expired. My code took too long to run. I'm wondering about the optimal way to solve this problem. Answer: The problem of "what is the parent of one node in the tree" can be solved as a recurrence relation, either using arithmetic or using the bits in the binary representation of the number. The problem of "what are the various parents of many nodes in the tree" is a group solution that cries out for some batch computation followed by speed-oriented lookups. Look at the numeric properties of the nodes. You should not be computing a tree for this. Also, look at the desired problem solution: you want to perform a computation or lookup for each number in a list. Thus, your information storage should be optimized for this. Given that your tree represents a range of numbers, I'll suggest either a dictionary or a list. Since the numbers are a dense group of integers that start from 1, I'll suggest a list.

Parents = [0] * (n + 1)

Given h, how can you predict n (above)? What's the number at the root of the tree? Given R the number at the root of a tree, what are the values of the numbers at the root of the next lower subtrees? (Hint: easy for the left, harder for the right.)
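Following the hints in this answer, one arithmetic solution — building no tree at all, with labels following the question's post-order numbering — might look like:

```python
def parent_label(h, q):
    """Parent of node q in a perfect binary tree of height h whose
    2**h - 1 nodes are labelled 1..2**h - 1 in post-order; -1 for the root.
    In post-order the root of a subtree is its largest label: its right
    child is that label minus 1, and its left child is the right child's
    label minus the size of one child subtree."""
    node = size = 2 ** h - 1
    par = -1
    while node != q:
        par = node
        size = (size - 1) // 2   # size of each child subtree
        right = node - 1         # right child's label
        left = right - size      # left child's label
        node = left if q <= left else right
    return par
```

Each query then costs O(h) arithmetic steps instead of visiting all 2**h - 1 nodes, so even h = 30 is instant.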
{ "domain": "codereview.stackexchange", "id": 27492, "tags": "python, programming-challenge, python-3.x, tree, time-limit-exceeded" }
K-means algorithm very slow
Question: I'm a beginner in Python, but I have tried to implement the K-means algorithm in Python and it's working... but it's too slow... Instead of a few seconds it can take hours to finish, and I don't know why... something is wrong... Could anyone give me some advice please? Code:

import math
from random import randint
from copy import deepcopy
from chart import chart_centroids, save_image

centroid = dict(x=0, y=0, points_x=[], points_y=[])
list_centroids = []
x = []
y = []
k = 0


def distance(x1, y1, x2, y2):
    x = (int(x1) - int(x2)) ** 2
    y = (int(y1) - int(y2)) ** 2
    sum = x + y
    sqr = math.sqrt(sum)
    return sqr


def generate_centroids(range_x, range_y):
    list_centroids.clear()
    k = 3
    for i in range(0, k):
        centroid["x"] = randint(1, range_x)
        centroid["y"] = randint(1, range_y)
        list_centroids.append(deepcopy(centroid))


def choose_points_for_centroids(x, y):
    for i in range(len(list_centroids)):
        list_centroids[i]["points_x"].clear()
        list_centroids[i]["points_y"].clear()
    distances = []
    for j in range(len(x)):
        for i in range(len(list_centroids)):
            dist = distance(x[j], y[j], list_centroids[i]["x"], list_centroids[i]["y"])
            distances.append(dist)
        minim = min(float(s) for s in distances)
        index = distances.index(minim)
        list_centroids[index]["points_x"].append(x[j])
        list_centroids[index]["points_y"].append(y[j])
        distances.clear()


def move_centroids():
    sum_x = 0
    sum_y = 0
    for cent in list_centroids:
        for j in range(len(cent["points_x"])):
            sum_x += cent["points_x"][j]
            sum_y += cent["points_y"][j]
        if len(cent["points_x"]) > 0 and len(cent["points_y"]) > 0:
            avg_x = sum_x / len(cent["points_x"])
            avg_y = sum_y / len(cent["points_y"])
            cent["x"] = avg_x
            cent["y"] = avg_y


def run():
    generate_centroids(300, 300)
    read_file("input.txt")
    tmp_x = []
    tmp_y = []
    checkers = []
    while_end = False
    while True:
        if not while_end:
            choose_points_for_centroids(x, y)
            move_centroids()
            for cent in list_centroids:
                tmp_x.append(cent["x"])
                tmp_y.append(cent["y"])
            choose_points_for_centroids(x, y)
            move_centroids()
            for i in range(len(list_centroids)):
                if tmp_x[i] == list_centroids[i]["x"] and tmp_y[i] == list_centroids[i]["y"]:
                    checkers.append(True)
                else:
                    checkers.append(False)
            for checker in checkers:
                if not checker:
                    tmp_x.clear()
                    tmp_y.clear()
                    break
            else:
                while_end = True
        else:
            break
    for i in range(len(list_centroids)):
        chart_centroids(list_centroids[i], i)
    save_image()


def read_file(name):
    lines = [line.rstrip('\n') for line in open('../generate_file/' + name)]
    global x
    global y
    zone = []
    for index in range(5):
        x.append(int(lines[index].split()[0]))
        y.append(int(lines[index].split()[1]))
        zone.append(int(lines[index].split()[2]))


run()

The input file looks something like...

100 52 2
440 100 3
10 200 1
...

Note: The code works in both Python 2 and Python 3. Answer: I see a lot of little things slowing you down, but I don't know what your chart_centroids and save_image functions do, so I have no idea if they are part of the problem or not. Let's look at one of your two frequently-called functions:

def choose_points_for_centroids(x, y):
    for i in range(len(list_centroids)):
        list_centroids[i]["points_x"].clear()
        list_centroids[i]["points_y"].clear()
    distances = []
    for j in range(len(x)):
        for i in range(len(list_centroids)):
            dist = distance(x[j], y[j], list_centroids[i]["x"], list_centroids[i]["y"])
            distances.append(dist)
        minim = min(float(s) for s in distances)
        index = distances.index(minim)
        list_centroids[index]["points_x"].append(x[j])
        list_centroids[index]["points_y"].append(y[j])
        distances.clear()

In the first paragraph, you "clear" a bunch of data. But I'm not sure why you have your centroids structured this way. Every time you access something, there's an index, a key lookup, and maybe another index. That's way too much work for getting at something you'll be addressing frequently! In fact, the whole idea of accessing list_centroids[i]["x"] and list_centroids[i]["y"] is kind of silly. I don't see any value to separating the x and y coordinates, here.
On the other hand, if you were to combine your x and y coordinates into a tuple, you would have a constant object that can be hashed. And hashed items can be stored in a dictionary.

Centroid = { ... }

for c in Centroid:
    Centroid[c] = []  # Reset list of points to empty

In the next section, you iterate over all your points (here you go again, segregating ordinates from abscissas!) computing a distance metric. You store the distances in a list. After creating the distances list, you then find the min value. After finding the min value, you then try to map back to the index of that value. After finding the index, you use that to figure out what centroid was closest to the point, and tie the point to the centroid. You overlook the min function's key= argument. The key is a function (or lambda-expression) that returns a value. Given the input, the min function determines what to compare by calling the key function. If the key function is not provided, then a simple identity function ( f(x) = x ) is used. In your case, you can replace all that code by judicious use of a lambda-expression:

# This should be your global Point store, not x[] and y[]
Points = [ (_x, _y) for _x, _y in zip(x, y) ]

for p in Points:
    x, y = p
    nearoid = min(Centroid, key=lambda c: distance(x, c[0], y, c[1]))
    Centroid[nearoid].append(p)

And if you recode your distance function to take tuples, you don't need to do even that much work:

for p in Points:
    nearoid = min(Centroid, key=lambda c: distance(p, c))
    Centroid[nearoid].append(p)

This does three things for you. First, it eliminates a lot of bytecode. And that means it eliminates a lot of things that the computer was doing, which should save you time. Second, it converts some bytecode into builtins. Using the builtins as much as possible means that your code might be running in C, instead of bytecode. This makes for better performance. Third, it eliminates extra data structures.
Which eliminates allocation, deallocation, garbage collection, data structure maintenance, etc. All that storage translates into performance, either directly (thrashing) or indirectly (code).

Now, speaking of your distance function, I see you are calling int a bunch of times. But the inputs are, if I understand correctly, already integers. So those are a bunch of name lookups, and function calls, that are entirely redundant. Try something like this, again using the points-as-tuples approach:

def distance(a, b, sqrt=math.sqrt):
    """Return the distance between (x,y) tuples a and b"""
    return sqrt((a[0] - b[0])**2 + (a[1] - b[1])**2)

(Putting the lookup of math.sqrt into the constants table is a bit of a cheat. But anything for speed, eh?)
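Putting the answer's pieces together — tuple points, a dict of centroids, and a tuple-based distance — the assignment step might look like the sketch below. The names (assign_points, the sample data) are mine, not from the original post:

```python
import math

def distance(a, b, sqrt=math.sqrt):
    """Return the Euclidean distance between (x, y) tuples a and b."""
    return sqrt((a[0] - b[0])**2 + (a[1] - b[1])**2)

def assign_points(points, centroids):
    """Map each centroid (an (x, y) tuple) to the list of points nearest to it."""
    clusters = {c: [] for c in centroids}
    for p in points:
        nearest = min(centroids, key=lambda c: distance(p, c))
        clusters[nearest].append(p)
    return clusters

# Tiny made-up data set: two obvious clusters.
points = [(0, 0), (1, 1), (10, 10), (11, 9)]
centroids = [(0, 0), (10, 10)]
clusters = assign_points(points, centroids)
```

Because the centroids are hashable tuples, the dict lookup replaces all of the index bookkeeping in the original code.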
{ "domain": "codereview.stackexchange", "id": 24884, "tags": "python, performance" }
Encog neural network multiple outputs
Question: I am a little confused about using Encog to create a neural network. I am trying text classification with a basic feed-forward network. For the input data I have 200 unique words (features/inputs) to input into the network and 100 different pieces of text, so I have a matrix of 100x230: essentially 100 different training items, each with the frequency of 230 words. I then have an output which classifies the data as A, B, C, D or E, so one column and 5 different outputs. This is an ordinal column and is part of the CSV data set I read in, so I actually have a matrix of 100x231 where the 231st column is the output I desire to classify. I can train and run this neural network no problem, but I am confused about the number of output neurons to have. I would have thought that since there are 5 different classifications I should have 5 different neurons; however, I can't set up the code like that, as it complains:

IMLDataSet trainingSet = new BasicMLDataSet(input, ideal);

My input variable is a double[100][230] and the ideal is just the column of expected classifications, so double[100][1]. Because I only have one column and the data is normalised, those A, B, C... values are turned into numbers between -1 and 1, and therefore when I run the neural net with 5 output neurons it complains that it should be only 1. When I run the neural net with one output neuron it gives me what looks like correct and accurate answers, but the value it gives is between -1 and 1. I am assuming the activation level is being matched to the output? My question therefore is either how to denormalise this output to get back to my actual letter classification, or how to use 5 neurons on the output so the network chooses the appropriate classification.
My neural network is as follows:

BasicNetwork network = new BasicNetwork();
network.AddLayer(new BasicLayer(null, true, 230));
network.AddLayer(new BasicLayer(new ActivationTANH(), true, 16));
network.AddLayer(new BasicLayer(new ActivationTANH(), false, 1));
network.Structure.FinalizeStructure();
network.Reset();

Answer: I needed to normalise the data that encodes the class column, and then denormalise the output, which gives back the correct classification.
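The accepted fix maps each class label to a number and back. The alternative that matches the "5 output neurons" intuition is one-hot encoding: each class becomes a vector with one neuron per class, and the prediction is decoded with argmax. Encog itself is Java/C#, so the sketch below just illustrates the encode/decode idea in Python (the helper names are mine):

```python
classes = ["A", "B", "C", "D", "E"]

def encode(label):
    """One-hot encode a class label as an ideal output vector (one neuron per class)."""
    return [1.0 if c == label else 0.0 for c in classes]

def decode(outputs):
    """Pick the class whose output neuron fired strongest (argmax)."""
    return classes[outputs.index(max(outputs))]

ideal = encode("C")                              # what the 5-neuron ideal row would look like
predicted = decode([0.1, 0.2, 0.9, 0.05, 0.3])   # made-up network outputs
```

With this scheme the `ideal` array in the data set is double[100][5] instead of double[100][1], which is why the 5-output network was rejecting the single-column data.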
{ "domain": "datascience.stackexchange", "id": 2805, "tags": "machine-learning, neural-network" }
synchronizing two point grey USB cameras for Stereo using OpenCV?
Question: Hi, I have two Firefly MV USB cameras from Point Grey which are working with the Camera1394 node. I'm able to view the images and create a package containing C++ code which uses the cv_bridge node to integrate the stream from both cameras and perform a simple color inversion process on the frames of the images in OpenCV, thanks to this handy tutorial on the web at siddhantahuja.wordpress.com. Now I would like to get up and running with synchronizing the two cameras (externally) and hopefully calculate the disparity. Currently I synchronize the cameras by enabling a strobe signal (output) and a trigger (input) on the two cameras by setting registers manually using Coriander. However this seems quite inefficient, as I have to do it every time the cameras are turned on and it is susceptible to errors on my part. So the questions I ask are:

What is the best way for me to get and set registers dynamically on the Firefly MV (IIDC 1.3 compliant) camera? Should I use libdc1394? If so, could someone please give me an example or a quick guide?

Once the cameras are synchronized I will need to poll the cameras so that one camera waits for the completion of the other camera's frame capture process. Does ROS provide a solution for this? If not, how can I program the cameras to do this?

Apologies if the question is very trivial, but I'm only just starting with OpenCV and ROS. I would greatly appreciate some help.

UPDATE: OK, since my software skills are not up to scratch yet, I have not yet been able to set the camera registers through the 1394 library in order to poll the cameras to wait for each other to complete their capture. However I am sure it's possible; I just don't know how to. Anyway, to synchronise the cameras I made an external hardware trigger using a 555 timer circuit, which was able to synchronize my images to within a 0.015 s delay by enabling a trigger at a specified frequency.
Originally posted by Gaviria R on ROS Answers with karma: 61 on 2013-04-16

Post score: 4

Original comments

Comment by zcream on 2014-03-26: What was the max frame rate you got from your PGR camera? Using the external 555 timer sync.

Answer: The camera1394 driver just supports the IIDC specification. It does not provide that kind of device-dependent processing. Check with Point Grey. They may provide information on how to write a separate program to enable the hardware synchronization. There is a brief camera1394 tutorial with examples of how to combine two cameras into a stereo pair.

Originally posted by joq with karma: 25443 on 2013-04-16

This answer was ACCEPTED on the original site

Post score: 1

Original comments

Comment by Gaviria R on 2013-04-16: I did as in the tutorial and it works, thanks! But my issue is that since the images are synchronised using a Master-Slave setup, the Master should wait until the slave completes the capture process triggered by the master before a new capture process can start. Do you know how this can be done?

Comment by joq on 2013-04-17: Camera1394 does not support that. Each driver node runs independently. The stereo image pipeline puts the images together using the approximate time synchronization parameter.

Comment by Gaviria R on 2013-04-21: joq thanks for the reply, I appreciate the help. I managed to get a workaround in order to get the images synchronised without having to worry about the polling issue. I will update my post soon.
{ "domain": "robotics.stackexchange", "id": 13844, "tags": "opencv, camera1394, synchronization, stereo, pointgrey" }
Negative result from filtering a positive signal with a band-pass filter
Question: I have used scipy.signal.remez to calculate the coefficients for a band-pass filter, and when I use it to filter a sinusoidal signal that goes between 0 and some positive number (e.g. $2^{16}$) I get that the filtered signal is negative (it is attenuated, as I expected) but it has a negative offset. Is this expected? I need my outputs to be between 0 and $2^{16}$ (since I am implementing this filter in an FPGA and send the filtered signal to a DAC which accepts positive numbers between 0 and $2^{16}$). Should I just deal with this by adding a positive offset to re-centre the attenuated signal?

Code used to generate the FIR filter:

fs = 2500000            # Sample rate, Hz
band = [95000, 105000]  # Desired pass band, Hz
trans_width = 5000      # Width of transition from pass band to stop band, Hz
numtaps = 200           # Size of the FIR filter.
edges = [0, band[0] - trans_width, band[0], band[1], band[1] + trans_width, 0.5*fs]
taps = scipy.signal.remez(numtaps, edges, [0, 1, 0], Hz=fs)

The taps are real and look like this:

Here is an example of the result of using this FIR filter to filter a 200 kHz sine wave in Python:

Answer: What you see is what one would expect. As pointed out in Marcus Müller's answer, your band-pass filter has a relatively poor stop-band attenuation, and since DC is in the filter's stop band, DC is not sufficiently attenuated. You can easily predict what is going to happen: compute the filter's DC gain, which is just the sum over all filter coefficients:

$H(0)=\sum_nh[n]$

This will be a not so small negative number. Take the DC offset of your input signal ($2^{15}$), and multiply it with $H(0)$. This must be the DC offset of your output signal. So the DC offset of the output signal can be easily predicted, even without computing the output! What can you do to get rid of the DC offset of the output signal? Since you can perfectly predict it, you can just subtract the DC offset $=2^{15}\cdot H(0)$ from your output signal.
Another fun solution is to increase the filter length by $1$ sample, and set that new last sample to the value $-H(0)$ (as computed above). This will make sure that $H(0)$ of the new filter is exactly $0$, and it will only insignificantly change the overall frequency response of the filter. For the ones who wonder if it is normal that the first and last sample of the impulse response are much larger than all other samples: yes, this is normal, especially for narrow band-pass filters, and it is a direct consequence of the equi-ripple optimality criterion implied by the Remez algorithm. These two impulses (first and last sample) are echoes generating the sinusoidal (i.e., equi-ripple) stop band behavior in the frequency domain. The ripple in the impulse response generates the (impulse-like) narrow pass band in the frequency domain. So everything can be explained by the basic Fourier relationship "sinusoid $\Longleftrightarrow$ impulse".
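Both fixes in this answer are easy to check numerically. The sketch below uses a small made-up tap vector (not the remez design from the question) to show that the output offset is predicted by $2^{15}\cdot H(0)$ and that appending one extra tap of $-H(0)$ forces the new filter's DC gain to exactly zero:

```python
# Toy FIR taps standing in for the remez design (illustrative values only).
taps = [0.12, -0.05, 0.30, -0.05, 0.12]

H0 = sum(taps)                     # DC gain of the filter: H(0) = sum of coefficients
input_offset = 2 ** 15             # DC offset of the 0..2^16 input signal
predicted_offset = input_offset * H0   # fix 1: subtract this from the output

# Fix 2: lengthen the filter by one sample set to -H(0), zeroing the DC gain.
fixed_taps = taps + [-H0]
new_H0 = sum(fixed_taps)
```

For the real remez taps, H0 would be a small negative number, so `predicted_offset` would be negative — exactly the offset observed in the question.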
{ "domain": "dsp.stackexchange", "id": 4157, "tags": "filters, filter-design, finite-impulse-response" }
Wine tasting like soot
Question: I'm not sure if this question is for chemistry. I had a bottle of (cheap) wine. It had a strong taste of soot, to an extent that I suspected that in the filling process some machine oil might have gotten into it. I discarded the bottle, but some time later I found the same taste, to a lesser extent, in some other wine. Now it looks like my taste buds have become more sensitive to that, as I can taste it now (depending on what, I don't know) quite frequently, mostly on a low scale. Any idea how to explain that?
{ "domain": "chemistry.stackexchange", "id": 5107, "tags": "food-chemistry, taste" }
Active directory password changer
Question: I have written this Active Directory password changer script. Comments and testing is appreciated for things I may have overlooked, since this is my first AD Script. There are two parts: PHP and PowerShell. This is being run on an IIS Server with PHP 5.4.x with Fast CGI and PowerShell 2.x. The IIS_IUSRS is a manager of a group of all people who can change their own password with this method (as I exclude any account with access to the server). If IIS_IUSRS is not a manager of a group of people you will get "Access Denied" in the log. My goal with this script is to change passwords of people that cannot locally access domain computers. While I want this script as secure as possible, limiting those that can change their password and not allowing those with elevated permissions to change password. I was unable to find an escape character function and had to make my own. Feel free to test this script beyond breaking point and comment, as I would like to see how well I have done in scripting something for Active Directory. Can anyone confirm that I don't need to escape if possible? Post results if you do break it, please. Here is the PHP script: <?php setlocale(LC_CTYPE, "en_US.UTF-8"); ?> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <title>Pw Changer</title> <link rel="stylesheet" href="style.css"> </head> <body> <?php /* * Note: Errorstate var is changeable client side and should not be trusted */ $psScriptPath = ".\Bin\Non-Auth\adpwchange2014.ps1";// Path to the PowerShell script. $logfile = './Logging/Phpruntime.txt'; $date = date('D, d M Y H:i:s'); if(!empty($_GET["successstate"])){//Achievement Get: PW Changer echo '<div class="successstate">'. $_GET["successstate"] .'</div>'; exit(header('refresh:5; ../index.php')); } if(!isset($_POST["submit"])){ if(!empty($_GET["errorstate"])){echo '<div class="errorstate">' . $_GET["errorstate"] . 
'</div><br /><br />';} // if there was no submit variable passed to the // script (i.e. user has visited the page without clicking submit), display the form: echo '<form name="testForm" class="formbox" id="testForm" action="index.php" method="post" /> Username: <input type="text" name="username" id="username"/><br /> Old Password: <input type="password" name="old_password"><br /> New Password: <input type="password" name="new_password"><br /> Confirm New Password: <input type="password" name="confirm"><br /> <input type="submit" name="submit" id="submit" value="submit" /> </form>'; }elseif(!empty($_POST["username"]) && !empty($_POST["old_password"]) && !empty($_POST["new_password"]) && !empty($_POST["confirm"])){// Else if submit was pressed, check if all of the required variables have a value and then Use PHP to check for risks such as Username in password or useing old password $errorstate = ''; if($_POST["new_password"] != $_POST["confirm"]){ $errorstate .= 'New Password and Confirm do not match</br>'; } $username = utf8_decode($_POST["username"]); $old_password = utf8_decode($_POST["old_password"]); $new_password = utf8_decode($_POST["new_password"]); $confirm = utf8_decode($_POST["confirm"]); if(strlen($new_password) <= 8){//Length Check equal or greater then $errorstate .= 'Eight or more charictors needed</br>'; } if(strpos($new_password,$old_password) !== false){//New Password Matches username or old password $errorstate .= 'Can not contain your old password</br>'; } if(strpos($new_password, $username) !== false){ $errorstate .= 'Can not contain your Username</br>'; } $operator = array('\\','#','+','<','>',';','\"','=',',');//Operators that need to be escaped with $replace = array('\\\\','\\#','\\+','\\<','\\>','\\;','\\"','\\=','\,');//replacement $username = str_replace ($operator, $replace, $username); #$new_password = str_replace ($operator, $replace, $new_password); #$old_password = str_replace ($operator, $replace, $old_password); $check_upper = 0; 
$check_lower = 0; $check_digit = 0; $check_punct = 0; foreach(count_chars($new_password, 1) as $key => $value){//Strength Test Results can be derived from $value if(!ctype_upper(chr($key))){$check_upper=1;}//if Upper-case if(!ctype_lower(chr($key))){$check_lower=1;}//if Lower-case if(!ctype_digit(chr($key))){$check_digit=1;}//if Numeric if(!ctype_punct(chr($key))){$check_punct=1;}//if Symbol if($check_upper + $check_lower + $check_digit + $check_punct>= 3){}//Save us from checking the entire string } if($check_upper + $check_lower + $check_digit + $check_punct<= 2){ $errorstate .= 'Password needs to contain at least 3 of the following criteria: Upper-case, Lower-case, Numeric and/or Symbol</br>'; } if(!empty($errorstate)){//EXIT if error state is set. Do not pass go, do not collect $200. exit(header('Location: .?errorstate='.$errorstate)); } $user = $username; $username = base64_encode($username); //Transport Layer Base64 $new_password = base64_encode($new_password); //Transport Layer Base64 $old_password = base64_encode($old_password); //Transport Layer Base64 /* * The danger happens here as it is sent to powershell. */ $query = shell_exec('powershell.exe -ExecutionPolicy ByPass -command "' . $psScriptPath . '" < NUL -base64_username "' . $username . '" < NUL -base64_oldpassword "' . $old_password . '" < NUL -base64_newpassword "' . $new_password . '" < NUL');// Execute the PowerShell script, passing the parameters /* *Log the query result */ if(stristr($query, 'Success:') !== false){ //Return True $logstr = '========================================'."\r\n"; $logstr .= ' ' . $date . ' - Success'."\r\n"; $logstr .= '========================================'."\r\n"; $logstr .= $_SERVER['REMOTE_ADDR'] . ' - ' . $user .": Attempted Password Change result \r\n"; $logstr .= $query . 
"\r\n"; $logstr .= "\r\n"; file_put_contents($logfile, $logstr, FILE_APPEND | LOCK_EX); $errorstate = '</br>Success: Password was changed</br>'; exit(header('Location: ./index.php?successstate='.$errorstate)); }elseif(stristr($query, 'Failed:') !== false){ //Return False $logstr = '========================================'."\r\n"; $logstr .= ' ' . $date . ' - Failed'."\r\n"; $logstr .= '========================================'."\r\n"; $logstr .= $_SERVER['REMOTE_ADDR'] . ' - ' . $user .": Attempted Password Change result \r\n"; $logstr .= $query . "\r\n"; $logstr .= "\r\n"; file_put_contents($logfile, $logstr, FILE_APPEND | LOCK_EX); $errorstate = '</br>Failed: Password was not changed</br>'; exit(header('Location: .?errorstate='.$errorstate)); }else{//someone broke something not that we tell them but we log the entry $logstr = '========================================'."\r\n"; $logstr .= ' ' . $date . ' - Error Warning'."\r\n"; $logstr .= '========================================'."\r\n"; $logstr .= $_SERVER['REMOTE_ADDR'] . ' - ' . $user .": Attempted Password Change result \r\n"; $logstr .= 'powershell.exe -ExecutionPolicy ByPass -command "' . $psScriptPath . '" < NUL -username "' . $username . '" < NUL -oldpassword "' . $old_password . '" < NUL -newpassword "' . $new_password . '" < NUL' . "\r\n"; $logstr .= $query . "\r\n"; $logstr .= 'Username: ' .$username . "\r\n"; $logstr .= 'Old Password: ' .$old_password . "\r\n"; $logstr .= 'New Password: ' .$new_password . 
"\r\n"; $logstr .= "\r\n"; file_put_contents($logfile, $logstr, FILE_APPEND | LOCK_EX); //You could go one step further and ban IP for X time // you could also send an email to yourself $errorstate = '</br>Failed: Password was not changed</br>'; exit(header('Location: .?errorstate='.$errorstate)); } }else{// Else the user hit submit without all required fields being filled out: $errorstate = 'Please Complete all fields</br>'; exit(header('Location: .?errorstate='.$errorstate)); } ?> </body> </html> And here is the PowerShell: #*============================================================================= #* Script Name: adpwchange2014.ps1 #* Created: 2014-10-07 #* Author: #* Purpose: This is a simple script that queries AD users. #* Reference Website: http://theboywonder.co.uk/2012/07/29/executing-powershell-using-php-and-iis/ #* #*============================================================================= #*============================================================================= #* PARAMETER DECLARATION #*============================================================================= param( [string]$base64_username, [string]$base64_newpassword, [string]$base64_oldpassword ) #*============================================================================= #* IMPORT LIBRARIES #*============================================================================= if ((Get-Module | where {$_.Name -match "ActiveDirectory"}) -eq $null) { #Loading module Write-Host "Loading module AcitveDirectory..." 
Import-Module ActiveDirectory } #*============================================================================= #* PARAMETERS #*============================================================================= $username = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($base64_username)) $newpassword = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($base64_newpassword)) $oldpassword = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($base64_oldpassword)) #*============================================================================= #* INITIALISE VARIABLES #*============================================================================= # Increase buffer width/height to avoid PowerShell from wrapping the text before # sending it back to PHP (this results in weird spaces). $pshost = Get-Host $pswindow = $pshost.ui.rawui $newsize = $pswindow.buffersize $newsize.height = 1000 $newsize.width = 300 $pswindow.buffersize = $newsize #*============================================================================= #* EXCEPTION HANDLER #*============================================================================= #*============================================================================= #* FUNCTION LISTINGS #*============================================================================= Function Test-ADAuthentication { Param($Auth_User, $Auth_Pass) $domain = $env:USERDOMAIN Add-Type -AssemblyName System.DirectoryServices.AccountManagement $ct = [System.DirectoryServices.AccountManagement.ContextType]::Domain $pc = New-Object System.DirectoryServices.AccountManagement.PrincipalContext($ct, $domain) $pc.ValidateCredentials($Auth_User, $Auth_Pass).ToString() } Function Set-ADAuthentication{ Param($Auth_User,$Auth_OldPass, $Auth_NewPass) $domain = $env:USERDOMAIN $Auth_NewPass = ConvertTo-SecureString $Auth_NewPass -AsPlainText -Force $Auth_OldPass = ConvertTo-SecureString $Auth_OldPass -AsPlainText 
-Force #Running -whatif to simulate results #Therefore we expect "Failed: Password change" as it was not changed Set-ADAccountPassword -Identity $Auth_User -NewPassword $Auth_NewPass -OldPassword $Auth_OldPass -PassThru $authentication = Test-ADAuthentication $username $newpassword if ($authentication -eq $TRUE) { Write-Output "Success: Password Changed" }elseif ($authentication -eq $FALSE) { Write-Output "Failed: Password Change" }else { Write-Output "Error: EOS" EXIT NUL Stop-Process -processname powershell* } } #*============================================================================= #* Function: function1 #* Purpose: This function does X Y Z #* ============================================================================= #*============================================================================= #* END OF FUNCTION LISTINGS #*============================================================================= #*============================================================================= #* SCRIPT BODY #*============================================================================= Write-Output $PSVersionTable Write-Output " " $authentication = Test-ADAuthentication "$username" "$oldpassword" if ($authentication -eq $TRUE) { Set-ADAuthentication $username $oldpassword $newpassword }elseif ($authentication -eq $FALSE) { Write-Output "Failed: Validation" }else {Write-Output "Error: EOS" EXIT NUL Stop-Process -processname powershell* } #*============================================================================= #* SCRIPT Exit #*============================================================================= EXIT NUL Stop-Process -processname powershell* Answer: I don't really know anything about powershell, so I will only look at your PHP script. XSS echo '<div class="successstate">'. 
$_GET["successstate"] .'</div>';

This is vulnerable to reflected XSS, with which an attacker could execute arbitrary JavaScript on a victim's computer (and thus steal cookies, deface the website, display a phishing form, etc). Use htmlspecialchars to prevent this (same with errorstate).

Functions

Your code isn't all that long, but at 150 lines I would extract some code into functions to structure it, for example displayPasswordChangeForm, processPasswordChangeForm and checkPasswordStrength. I would also add a logQueryResult function to avoid duplicate code:

function logQueryResult($queryString, $date, $result, $redirect, $additional) {
    global $logfile, $user; // pulled in from the surrounding script
    $logstr = '========================================' . "\r\n";
    $logstr .= ' ' . $date . ' - ' . $result . "\r\n";
    $logstr .= '========================================' . "\r\n";
    $logstr .= $_SERVER['REMOTE_ADDR'] . ' - ' . $user . ": Attempted Password Change result \r\n";
    $logstr .= $queryString . "\r\n";
    $logstr .= $additional;
    $logstr .= "\r\n";
    file_put_contents($logfile, $logstr, FILE_APPEND | LOCK_EX);
    exit(header('Location: ' . $redirect));
}

// use like this:
logQueryResult($query, $date, 'Success', './index.php?successstate=' . $errorstate, '');
logQueryResult($query, $date, 'Failed', '.?errorstate=' . $errorstate, '');

$plainUserPass = 'Username: ' . $username . "\r\n";
$plainUserPass .= 'Old Password: ' . $old_password . "\r\n";
$plainUserPass .= 'New Password: ' . $new_password . "\r\n";
logQueryResult($query, $date, 'Error Warning', '.?errorstate=' . $errorstate, $plainUserPass);

Misc

When building a string, either use all double or all single quotes. Code like $var = 'foo' . $test . "'bar'" . '\n' for example is hard to read.

Always close your curly brackets at the same level (I would close it where the opening line began; it's easier to see which block it closes that way).

Personally, I would use more spaces (before {, after } and ,, around ., etc.).
{ "domain": "codereview.stackexchange", "id": 10769, "tags": "php, powershell, active-directory" }
How to initialize a $n$ qubit system in this specific state
Question: Basically, I have $2n$ qubits and I want to initialize them in the state $\frac{|\psi\rangle}{\lVert |\psi\rangle \rVert}$ where $$|\psi \rangle = \sum_{\mathrm{w}\in \{0,1\}^n}|\mathrm{w}\mathrm{w}\rangle$$ I know how to do it naively in Qiskit, and for my purpose this is not that bad (i.e. I have $\log n$ qubits, so the whole process will take $O(n^2)$ steps), but I was wondering if there is a better way to do it. I'm new to quantum computing, so I'm sorry if this is a trivial question.

Answer: You can generate an equal superposition on the first $n$ qubits using $H^{\otimes n}$, then use $n$ CNOT gates to get the desired state. This circuit has a depth of $2$.
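A direct way to see that this two-layer circuit works is to simulate it on a state vector. The sketch below (plain Python linear algebra, not Qiskit; the helper names are mine) applies H to each of the first $n$ qubits, then a CNOT from qubit $i$ to qubit $n+i$, and compares the result against the target state $\frac{1}{\sqrt{2^n}}\sum_w |ww\rangle$:

```python
from math import sqrt, isclose

def apply_h(state, k, nq):
    """Hadamard on qubit k (qubit 0 = most significant bit of the index)."""
    bit = 1 << (nq - 1 - k)
    s = 1 / sqrt(2)
    new = [0.0] * len(state)
    for i, a in enumerate(state):
        if i & bit:
            new[i ^ bit] += a * s
            new[i] -= a * s
        else:
            new[i] += a * s
            new[i | bit] += a * s
    return new

def apply_cnot(state, ctrl, tgt, nq):
    """CNOT: flip qubit tgt wherever qubit ctrl is 1 (a basis-state permutation)."""
    cbit = 1 << (nq - 1 - ctrl)
    tbit = 1 << (nq - 1 - tgt)
    new = list(state)
    for i in range(len(state)):
        if i & cbit:
            new[i] = state[i ^ tbit]
    return new

n = 2
nq = 2 * n
state = [0.0] * (1 << nq)
state[0] = 1.0                      # start in |0...0>
for q in range(n):                  # layer 1: Hadamards on the first n qubits
    state = apply_h(state, q, nq)
for q in range(n):                  # layer 2: qubit q controls qubit n+q
    state = apply_cnot(state, q, n + q, nq)

# Target: (1/sqrt(2^n)) * sum over w of |w w>
target = [0.0] * (1 << nq)
for w in range(1 << n):
    target[(w << n) | w] = 1 / sqrt(1 << n)
```

For $n=2$ the surviving basis states are $|0000\rangle, |0101\rangle, |1010\rangle, |1111\rangle$, each with amplitude $1/2$, as expected.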
{ "domain": "quantumcomputing.stackexchange", "id": 3981, "tags": "qiskit, quantum-state, initialization" }
How can a body be displaced from one point to another without acceleration?
Question: In the definition of potential energy it is said that it is the amount of work done on an object to displace it from infinity to that point without acceleration. But how can a body be displaced to that point without acceleration?

Answer: The exact definition of potential energy is as follows:

The change in potential energy of the system is defined as the negative of the work done by the internal conservative forces of the system.

The definition you gave says that the work done by external forces in displacing an object from one point to another without acceleration is known as the change in potential energy. The acceleration can be zero if the internal conservative forces and the external force balance each other. When the net force is zero the particle still has kinetic energy, due to which it gets displaced. So a particle can be displaced from one point to another with constant speed.
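The step connecting the two definitions can be made explicit with the work–energy theorem. At constant speed the kinetic energy does not change, so (a short derivation, added here, not from the original answer):

```latex
% Work-energy theorem with Delta KE = 0 at constant speed:
\begin{align*}
W_\text{ext} + W_\text{cons} &= \Delta KE = 0 \\
\Rightarrow\quad W_\text{ext} &= -W_\text{cons} = \Delta U
\end{align*}
```

That is, the external work done "without acceleration" is exactly the change in potential energy, which reconciles the textbook definition with the conservative-force definition.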
{ "domain": "physics.stackexchange", "id": 61218, "tags": "classical-mechanics" }
Newton's second law and moving through a fluid
Question: It is harder to move an object through a dense fluid like water than through a less dense fluid like air. Is an explanation for this possible with Newton's 2nd law? There are people that say the phenomenon occurs because of fluid pressure, fluid density, or the number of molecules that need to be pushed out of the way when walking. But I was thinking that Newton's 2nd law could explain this phenomenon. My thinking is below.

$F=ma$. So heavier particles have less acceleration for a given constant force, which means a heavy particle does not get out of the way as fast as a light particle when, say, someone is walking in water. Thus denser fluids are harder to move in because of Newton's 2nd law. Is this correct?

Answer: The words you're looking for are "viscosity" and "drag".

Viscosity is a physical measure of how resistant a fluid is to deformation. This animation from Wikipedia should make it more apparent. The viscous force is given by:

$$F_v = \mu \frac{dv}{dy}$$

where $\mu$ is the coefficient of viscosity. This term is primarily dependent on the density of the fluid and any intermolecular forces between its constituents (along with the temperature).

Your analogy of pushing heavier molecules is actually incorrect. Water is primarily composed of $\require{mhchem}\ce{H2O}$, which has a molar mass of 18 g/mol, but air is (primarily) composed of $\ce{N2}$ (28 g/mol), $\ce{O2}$ (32 g/mol) and $\ce{CO2}$ (44 g/mol), which are each heavier than water. Common intuition however points out that it is easier to tread in air than in water, so this argument is flawed. We instead make use of the fact that air is far less dense than water for a given volume. Water molecules also exhibit hydrogen bonding, a significant intermolecular attractive force.
In accordance with the equation, we find that water exerts a much larger viscous force than air because of its higher coefficient of viscosity. In accordance with Newton's Second Law, the resulting acceleration reduces, making it "harder" to move objects using the same constant force.

$$F - F_v = ma_x$$

This assumption is only justified when the flow is laminar. For fluids with higher values of the Reynolds number, this viscous force becomes less dominant, because the flow becomes turbulent. (Thanks @Rick for pointing this out.)

The force of drag is a model of resistive force for an object moving through a fluid. It is given by

$$R = \frac{1}{2}D\rho Av^2$$

where $D$ is the drag coefficient, and $A$ is the cross-sectional area of the moving object measured in a plane perpendicular to its velocity. This resistive force clearly increases with the square of the velocity, and is bound to play a dominating role when you push through a fluid (like in your example). Once again, you can use Newton's second law to show how this makes it "harder" to move through:

$$F - R = m \frac{dv_x}{dt}$$

For reference, the coefficient of viscosity for water is $8.90 \times 10^{-4}$ Pa·s and for air is $18.1 \times 10^{-6}$ Pa·s.

Hope this helps.
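The drag formula makes the water-vs-air comparison concrete: at the same speed, shape and size, the drag ratio is just the density ratio. A small sketch (densities are standard reference values; the drag coefficient and area are made-up illustrative numbers):

```python
def drag_force(D, rho, A, v):
    """Drag R = 1/2 * D * rho * A * v^2."""
    return 0.5 * D * rho * A * v**2

D = 1.0      # drag coefficient (illustrative)
A = 0.5      # cross-sectional area, m^2 (illustrative)
v = 1.0      # speed, m/s

rho_water = 1000.0   # kg/m^3
rho_air = 1.2        # kg/m^3 (sea level, ~20 C)

R_water = drag_force(D, rho_water, A, v)
R_air = drag_force(D, rho_air, A, v)
ratio = R_water / R_air   # ~830x more drag in water, purely from density
```

Since D, A and v cancel in the ratio, the roughly 800-fold difference comes entirely from the fluid density, not from the mass of the individual molecules — which is the answer's point.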
{ "domain": "physics.stackexchange", "id": 82409, "tags": "newtonian-mechanics, forces, fluid-dynamics" }
What kind of boolean functions are faster to compute on qc?
Question: The Deutsch-Jozsa algorithm can compute whether some function $f : \{0,1\}^n \rightarrow \{0,1\}$ is constant. This goes exponentially faster than on classical computers. If we consider the set of all Boolean functions $f : \{0,1\}^n \rightarrow \{0,1\}$, is there a characterization or intuition about the properties of Boolean functions which achieve such a speedup compared to classical computations? Consider for example the AND gate, which ANDs all $n$ inputs. I don't know if this is faster on a quantum computer, but if yes, what do both functions have in common, and if not, what is different here compared to the constant-testing function?

Answer: Following up on @luciano's answer, I think you are envisioning a quantum computer as being fast at evaluating functions, when in actuality, quantum computers are better at evaluating global properties of functions (and not, necessarily, the functions themselves). For example, referring to the Deutsch-Jozsa problem, consider two separate bags containing Boolean functions on $n$ variables.

In one bag (called "constant") we put in the $2$ functions that either evaluate to $0$ for all $2^n$ inputs, or to $1$ for all $2^n$ inputs; and

In another bag (called "balanced") we put in the functions that evaluate to $0$ for precisely $2^{n-1}$ inputs (and $1$ otherwise).

If we were to scramble the bags and choose a random function, classically we'd have to evaluate the function a couple of times (and worst-case up to $2^{n-1}+1$ times) to know from which bag we grabbed our function. But following the Deutsch-Jozsa algorithm, we only need to evaluate the function once on a quantum computer. This "balanced" vs. "constant" property is a global property of the functions, closer to what a Fourier transform evaluates.
However, of all of these, there is only $1$ function on $n$ variables that performs the $\mathsf{AND}$ of all inputs (namely the $\mathsf{AND}$ function), and only $1$ that performs the $\mathsf{XOR}$ of all inputs (namely the $\mathsf{XOR}$ function). Furthermore it's hard to get one's head around the size of the problems to which quantum algorithms provide a significant speedup. But it's my understanding that there's a theorem (modulo a lot of asterisks and extra hypothesis and other details) that the one weird trick that quantum computers can do that classical computers cannot is to quickly take a Fourier transform of the output of some function, and sample the Fourier transform with a probability given by the (square of the) amplitude of the FT. The Deutsch-Jozsa algorithm determines whether the "DC component" of the FT is large ("constant") or small ("balanced").
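The one-query distinction can be checked with a small state-vector simulation. A minimal sketch in Python (numpy assumed; the function name `deutsch_jozsa` and the 0.5 decision threshold are mine): after $H^{\otimes n}$, a phase oracle, and $H^{\otimes n}$ again, the amplitude of $|0\ldots0\rangle$ is the average of $(-1)^{f(x)}$, which is $\pm 1$ for a constant $f$ and exactly $0$ for a balanced one.

```python
import numpy as np

def deutsch_jozsa(f, n):
    """Classify f: {0,1}^n -> {0,1}, promised constant or balanced,
    using a single (simulated) oracle query."""
    N = 2 ** n
    # H^{\otimes n} |0...0> : uniform superposition over all inputs
    state = np.full(N, 1.0 / np.sqrt(N))
    # Phase oracle: |x> -> (-1)^{f(x)} |x>
    state = np.array([(-1) ** f(x) for x in range(N)]) * state
    # After the final H^{\otimes n}, the |0...0> amplitude is the mean amplitude
    amp0 = state.sum() / np.sqrt(N)
    return "constant" if abs(amp0) > 0.5 else "balanced"
```

For instance, the parity function (a balanced $f$) is classified as "balanced" after one oracle call, while a classical deterministic procedure may need up to $2^{n-1}+1$ evaluations.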
{ "domain": "quantumcomputing.stackexchange", "id": 2263, "tags": "quantum-algorithms, complexity-theory, speedup, deutsch-jozsa-algorithm" }
convenience scripts for switching catkin workspaces
Question: A while back I came across a post or mail of someone using a short bash script to conveniently source different setup.bash files. It was something along the lines of searching the home directory (up to a specified depth) for setup.bash files and then providing completion to switch to one of those workspaces. I seem to be unable to find it again. I guess reimplementing it would not be too hard, but maybe someone has something like this lying around in their .bashrc?

Originally posted by demmeln on ROS Answers with karma: 4306 on 2014-02-28

Post score: 2

Answer: This might not be exactly what you're looking for, but it should be easy to modify to suit your needs: https://github.com/dornhege/ros_scripts

Originally posted by dornhege with karma: 31395 on 2014-02-28

This answer was ACCEPTED on the original site

Post score: 2

Original comments

Comment by demmeln on 2014-02-28: Yeah it wasn't, but this is very useful, thanks!
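Reimplementing the discovery half is indeed not hard. A sketch of the workspace-search part in Python (the function name and the depth-pruning heuristic are mine; the shell wrapper that sources the chosen file and wires up completion is left out):

```python
import os

def find_setup_files(root, max_depth=3):
    """Collect candidate `setup.bash` files under `root`, descending at most
    `max_depth` directory levels, so a shell wrapper can offer them for completion."""
    root = os.path.abspath(root)
    base_depth = root.rstrip(os.sep).count(os.sep)
    found = []
    for dirpath, dirnames, filenames in os.walk(root):
        depth = dirpath.rstrip(os.sep).count(os.sep) - base_depth
        if depth >= max_depth:
            dirnames[:] = []  # prune: do not descend any further
        if "setup.bash" in filenames:
            found.append(os.path.join(dirpath, "setup.bash"))
    return sorted(found)
```

A bash function could then pipe this list into a `select` menu or a completion function and `source` the chosen entry.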
{ "domain": "robotics.stackexchange", "id": 17128, "tags": "ros, catkin, bash" }
what kind of suspension liquid should be used with ferrofluid (so it does not stain the glass)
Question: Please take a look at the following video: I am working on a new project, and I need to find what's the best liquid to hold ferrofluid inside a glass (or maybe even plastic) container, so that the ferrofluid (which easily stains everything) does not stain or stick to the glass. Also, is there any special preparation of the glass needed?

Answer: I don't have a full answer, but a hint at how to choose your liquid. You want the ferrofluid not to mix with it, so your fluid has to be immiscible with the ferrofluid solvent (or carrier fluid; ferrofluids are colloidal suspensions). In the case of your “FerroFluid EFH-1”, the solvent is a light mineral oil, so you should go with a polar solvent.

Secondly, you want the ferrofluid solvent not to touch the glass, to avoid staining. In order to do so, you need your liquid to wet the bottle more than the mineral oil does, i.e. to have a smaller contact angle. You can play on both the bottle material and the nature of the fluid to achieve this. I suggest protic polar solvents, which should wet regular glass well: water, ethanol, isopropanol, acetic acid, …
{ "domain": "chemistry.stackexchange", "id": 1081, "tags": "physical-chemistry, experimental-chemistry, equipment" }
Electrophilic attack of X+ on double bond
Question: Is it the double bond attacking the $\ce{X+}$ ion or the other way around? Also, does it form a cyclic transition state if it isn't bromine or chlorine? E.g. cyclohexene $+ \,\ce{Cl+ ->}$ cyclic transition state. But would the same thing happen if I put in any other electrophile?

Answer: These kinds of reactions are called electrophilic addition reactions. They are usually given by alkenes and alkynes. In your example, the electrophile is $\ce{Cl+}$. The alkene will act as a source of π electrons and will donate them to the electrophile, which is in need of electrons. So the alkene is called the nucleophile in this reaction.

Is it the double bond attacking the X+ ion or the other way around?

It depends on how you look at it. We can say that the alkene polarizes the $\ce{Cl-Cl}$ bond by pushing electrons into the anti-bonding molecular orbital. This weakens the bond and hence the electrophile is generated. In the next step, the electrophile attacks the double bond [or the double bond attacks the electrophile] and hence the electrophile gets added across the double bond.

Why does the 3-membered cyclic transition state form?

The cyclic transition state is just a temporary relief for the electrophile that's just been added. The π molecular orbital donates electrons to the electrophile. While the p-orbitals in the π-bond overlap with the electrophile, each p-orbital forms a σ-bond.

But would the same thing happen if I put in any other electrophile?

Generally, electrophilic addition reactions take place through a carbocation intermediate. But the reaction can take place through a cyclic 3-membered intermediate if the electrophile has a lone pair. Apart from $\ce{Cl+, Br+}$, even $\ce{Hg^2+}$ (when the reagent is $\ce{Hg(OAc)2/H3O+}$) and $\ce{NO+}$ (when the reagent is $\ce{NOCl}$) can form the 3-membered cyclic intermediate.
{ "domain": "chemistry.stackexchange", "id": 13891, "tags": "organic-chemistry, reaction-mechanism, electrophilic-substitution, c-x-addition" }
Decomposing light into frequency spectrum
Question: Light hits a charge-coupled element. The wavelength of the light somehow is translated into a color picture. Where can I learn about methods (algorithms) to decompose light hitting a CCD into a frequency spectrum?

Answer: A color CCD is made of a monochrome CCD sensor and an array of filters. The common Bayer pattern is a pattern of two green, one blue and one red filter. The filters transmit light onto your broad-band CCD sensor. A color CCD already does some spectral analysis: the four physical pixels are read into the RGB channels of one color pixel. If you want to see the spectrum of a color, then look at a histogram of the RGB channels in your favorite graphics tool.

E.g. the green band-pass filter transmits several light frequencies $\nu = \frac{c}{\lambda}$ in the green, possibly overlapping with the blue and red filters. Usually the wavelength $\lambda$ is used to define the color of the light. This relation for visible light is visualized in the spectrum of light.

Imagine the intensity on the green pixel is composed of all light transmitted through the green filter. It is not possible to tell "which green" frequency caused the signal on the green pixel. This information about the frequency spectrum is lost. The CCD just delivers electrons and firmware translates them to a digital value. A pure CCD cannot decompose the measured intensity; the information is lost.
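The channel split described above can be sketched in a few lines of Python (numpy assumed; the RGGB site layout and the function name are assumptions, since real sensors differ in pattern orientation):

```python
import numpy as np

def split_bayer_rggb(mosaic):
    """Split a raw frame with an RGGB Bayer layout into R, G, B planes.
    The two green sites of each 2x2 cell are averaged into one G value."""
    r  = mosaic[0::2, 0::2].astype(float)   # top-left of each 2x2 cell
    g1 = mosaic[0::2, 1::2].astype(float)   # top-right
    g2 = mosaic[1::2, 0::2].astype(float)   # bottom-left
    b  = mosaic[1::2, 1::2].astype(float)   # bottom-right
    return r, (g1 + g2) / 2.0, b
```

Each output plane has half the sensor resolution in each direction, which is exactly the "four physical pixels per color pixel" trade-off mentioned above; and nothing in the split recovers which optical frequency within a filter's passband produced the charge.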
{ "domain": "physics.stackexchange", "id": 6848, "tags": "visible-light, measurements" }
WPF controls visibility through IMultiValueConverter
Question: I have a control with RadioButtons. Each RadioButton represents some state of a document. Each document has a lot of labels, buttons and other controls. I need to collapse different labels, buttons and so on in different states. I think that I am building a big crutch. What is a better way to implement this functionality?

XAML

<Window x:Class="WpfApplication1.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:local="clr-namespace:WpfApplication1.Converters"
        xmlns:enums="clr-namespace:WpfApplication1.Enums"
        xmlns:sys="clr-namespace:System;assembly=mscorlib"
        Title="MainWindow" Height="350" Width="525">
    <Window.Resources>
        <local:MultiBooleanToVisibilityConverter x:Key="Converter" />
    </Window.Resources>
    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition></RowDefinition>
            <RowDefinition></RowDefinition>
        </Grid.RowDefinitions>
        <StackPanel Orientation="Vertical">
            <RadioButton Content="First" x:Name="First"></RadioButton>
            <RadioButton Content="Second" x:Name="Second"></RadioButton>
            <RadioButton Content="Third" x:Name="Third"/>
            <RadioButton Content="Fourth" x:Name="Fourth"></RadioButton>
        </StackPanel>
        <StackPanel Orientation="Vertical" Grid.Row="1">
            <Label Content="Test1">
                <Label.Visibility>
                    <MultiBinding Converter="{StaticResource Converter}">
                        <MultiBinding.ConverterParameter>
                            <x:Array Type="{x:Type sys:Enum}">
                                <enums:DocumentTypes>Second</enums:DocumentTypes>
                                <enums:DocumentTypes>Fourth</enums:DocumentTypes>
                            </x:Array>
                        </MultiBinding.ConverterParameter>
                        <Binding ElementName="First" Path="IsChecked" />
                        <Binding ElementName="Second" Path="IsChecked" />
                        <Binding ElementName="Third" Path="IsChecked" />
                        <Binding ElementName="Fourth" Path="IsChecked" />
                    </MultiBinding>
                </Label.Visibility>
            </Label>
            <Label Content="Test2">
                <Label.Visibility>
                    <MultiBinding Converter="{StaticResource Converter}">
                        <MultiBinding.ConverterParameter>
                            <x:Array Type="{x:Type sys:Enum}">
                                <enums:DocumentTypes>First</enums:DocumentTypes>
                                <enums:DocumentTypes>Third</enums:DocumentTypes>
                            </x:Array>
                        </MultiBinding.ConverterParameter>
                        <Binding ElementName="First" Path="IsChecked" />
                        <Binding ElementName="Second" Path="IsChecked" />
                        <Binding ElementName="Third" Path="IsChecked" />
                        <Binding ElementName="Fourth" Path="IsChecked" />
                    </MultiBinding>
                </Label.Visibility>
            </Label>
            <Label Content="Test3">
                <Label.Visibility>
                    <MultiBinding Converter="{StaticResource Converter}">
                        <MultiBinding.ConverterParameter>
                            <x:Array Type="{x:Type sys:Enum}">
                                <enums:DocumentTypes>Fourth</enums:DocumentTypes>
                                <enums:DocumentTypes>First</enums:DocumentTypes>
                            </x:Array>
                        </MultiBinding.ConverterParameter>
                        <Binding ElementName="First" Path="IsChecked" />
                        <Binding ElementName="Second" Path="IsChecked" />
                        <Binding ElementName="Third" Path="IsChecked" />
                        <Binding ElementName="Fourth" Path="IsChecked" />
                    </MultiBinding>
                </Label.Visibility>
            </Label>
        </StackPanel>
    </Grid>
</Window>

ENUM

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace WpfApplication1.Enums
{
    public enum DocumentTypes
    {
        First,
        Second,
        Third,
        Fourth
    }
}

CONVERTER

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Data;
using WpfApplication1.Enums;

namespace WpfApplication1.Converters
{
    class MultiBooleanToVisibilityConverter : IMultiValueConverter
    {
        public object Convert(object[] values, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            Dictionary<string, bool> dict = new Dictionary<string, bool>();
            dict.Add("First", (bool)values[0]);
            dict.Add("Second", (bool)values[1]);
            dict.Add("Third", (bool)values[2]);
            dict.Add("Fourth", (bool)values[3]);
            if (parameter != null)
            {
                List<DocumentTypes> result = ((IEnumerable)parameter).Cast<DocumentTypes>().ToList();
                foreach (DocumentTypes type in result)
                {
                    if (dict[type.ToString()])
                    {
                        return System.Windows.Visibility.Visible;
                    }
                }
            }
            return System.Windows.Visibility.Collapsed;
        }

        public object[] ConvertBack(object value, Type[] targetTypes, object parameter, System.Globalization.CultureInfo culture)
        {
            throw new NotImplementedException();
        }
    }
}

Answer: OPTION A

As proposed by @t3chb0t, you could realize it as a XAML-only solution with triggers:

<Grid>
    <Grid.RowDefinitions>
        <RowDefinition></RowDefinition>
        <RowDefinition></RowDefinition>
    </Grid.RowDefinitions>
    <StackPanel Orientation="Vertical">
        <RadioButton Content="First" x:Name="First"></RadioButton>
        <RadioButton Content="Second" x:Name="Second"></RadioButton>
        <RadioButton Content="Third" x:Name="Third"/>
        <RadioButton Content="Fourth" x:Name="Fourth"></RadioButton>
    </StackPanel>
    <StackPanel Orientation="Vertical" Grid.Row="1">
        <Label Content="Test1">
            <Label.Style>
                <Style TargetType="Label">
                    <Setter Property="Visibility" Value="Collapsed" />
                    <Style.Triggers>
                        <DataTrigger Binding="{Binding IsChecked, ElementName=First}" Value="True">
                            <Setter Property="Visibility" Value="Visible" />
                        </DataTrigger>
                        <DataTrigger Binding="{Binding IsChecked, ElementName=Fourth}" Value="True">
                            <Setter Property="Visibility" Value="Visible" />
                        </DataTrigger>
                    </Style.Triggers>
                </Style>
            </Label.Style>
        </Label>
        <Label Content="Test2">
            <Label.Style>
                <Style TargetType="Label">
                    <Setter Property="Visibility" Value="Collapsed" />
                    <Style.Triggers>
                        <DataTrigger Binding="{Binding IsChecked, ElementName=First}" Value="True">
                            <Setter Property="Visibility" Value="Visible" />
                        </DataTrigger>
                        <DataTrigger Binding="{Binding IsChecked, ElementName=Third}" Value="True">
                            <Setter Property="Visibility" Value="Visible" />
                        </DataTrigger>
                    </Style.Triggers>
                </Style>
            </Label.Style>
        </Label>
        <Label Content="Test3">
            <Label.Style>
                <Style TargetType="Label">
                    <Setter Property="Visibility" Value="Collapsed" />
                    <Style.Triggers>
                        <DataTrigger Binding="{Binding IsChecked, ElementName=Fourth}" Value="True">
                            <Setter Property="Visibility" Value="Visible" />
                        </DataTrigger>
                        <DataTrigger Binding="{Binding IsChecked, ElementName=First}" Value="True">
                            <Setter Property="Visibility" Value="Visible" />
                        </DataTrigger>
                    </Style.Triggers>
                </Style>
            </Label.Style>
        </Label>
    </StackPanel>
</Grid>

The downside is that the logic is blurred in XAML and therefore difficult to read and extend.

OPTION B

A second option is to use a view model (call it MainViewModel) and data binding:

Create an enum with one value for each of the radio buttons.

Bind the selected value of the radio buttons to a property of the view model.

Create one view model for the labels (call it LabelViewModel).

Give the MainViewModel a list of LabelViewModels and bind them to an ItemsControl with a Label as DataTemplate.

Update the LabelViewModels within the MainViewModel when the selected value of the radio buttons changes.

That approach is easier to understand and extend, IMHO.
{ "domain": "codereview.stackexchange", "id": 22333, "tags": "c#, wpf, xaml" }
How to plot multiple models val and traing acc/loss curve from csv files?
Question: I trained multiple CNN models. After that, I saved the models' details (like training/validation accuracy and loss) through callbacks, using this code:

tf.keras.callbacks.CSVLogger

Now I have the training/validation accuracy and loss values of multiple models in different csv files. I want to plot those in one figure from my csv files. How can I do this?

Answer: Here are the steps I used to solve my problem: save the statistical records (training/validation accuracies, recalls, precisions and F1 scores) to CSV files with tf.keras.callbacks.CSVLogger after training each fold (I trained 3 folds), and load those csv files into a list:

import pandas as pd

l = ["/Records_of_Fold_2.csv", "/Records_of_Fold_3.csv"]

After that, plot using matplotlib:

import matplotlib.pyplot as plt

for (k, i) in enumerate(l):
    data = pd.read_csv(i)
    plt.figure(str(k))
    plt.xlabel("x" + str(k))
    plt.ylabel("y" + str(k))
    plt.plot(data["epoch"], data["f1_score"])
    plt.plot(data["epoch"], data["loss"])
    print(k)
plt.show()
{ "domain": "datascience.stackexchange", "id": 9126, "tags": "computer-vision, matplotlib" }
Why is oxygen needed for the electron transfer phosphorylation?
Question: I understand that oxygen is the acceptor of electrons and hydrogen ions during electron transfer phosphorylation, the last step of ATP-producing aerobic respiration. But why? Aren't there any other alternatives for this acceptor? Oxygen is already recognized to have several harmful effects on cells; wouldn't another molecule be a better choice? Why does it even require an "acceptor" to accept the electrons and hydrogen ions? What would happen if they were left alone? I apologize if my question is due to my ignorance of basic chemistry or biology, but do please point it out and explain it to me. Thanks!

Answer: Aren't there any other alternatives for this acceptor?

Not that we're aware of. Every other alternative requires an anaerobic environment, which means small, and often less efficient.

Oxygen is already recognized to have several harmful effects on cells; wouldn't another molecule be a better choice?

When we're talking about a molecule's fit there are many things to consider: primarily its electronegativity, as that determines its stability and its ability to accept and donate electrons and protons, and its ease of acquisition given plentiful amounts in the environment. Molecular oxygen is comparatively stable next to a lot of diatomic molecules, and comparatively reactive next to others. It's in the butter zone for the job of being the terminal acceptor.

Let's think about fluorine: Almost all of it is bound up in rocks with other elements. It's not available everywhere. It might be an alright candidate if we could eat rocks, but it also replaces minerals in our bones and teeth. Yes, you're familiar with this, but too much leads to fluorosis, where too much of the calcium compound is replaced, and it ends up making the bones and teeth much weaker.

Alright, let's think about nitrogen: very stable, pretty good electronegativity, abundant... but a little too stable. Yup, it's a gas, it's everywhere, and we can't use a lick of it.
It's so stable that it's basically inert. The only reason rhizobia and other beneficial bacteria can use it is because they're in a controlled environment away from oxygen, with significantly different biochemical processes which produce much less energy (and probably wouldn't be able to support large multicellular organisms that are as active as we are). So, nitrogen is out.

Carbon? Nope, as an ion it's dangerously reactive, and as anything else it's too stable. Plus, we're carbon-based! If we used carbon as the terminal acceptor then the proteins handling the electrons would have to be even more exotic and filled with other elements to prevent them accidentally gobbling up the electron and becoming useless.

Sulfur? Not electronegative enough for human purposes, unfortunately. It wouldn't act as a terminal acceptor unless our bodies seriously oxidized it, and any oxidized molecules floating around would be more dangerous than molecular oxygen floating around. Plus most sulfur is bound to minerals as well: not abundant unless you're in the ocean.

Oxygen is both incredibly abundant and easy to perform the chemistry with. More reactive molecules are usually not abundant, tied up in the Earth's crust, and could be more dangerous. Less reactive molecules tend to be so non-reactive that we can't do anything with them, and if we could force them into a state where they would be reactive, they tend to be violently reactive or to require very specific environments that we can't reproduce on a scale necessary to be as mobile as we are. Oxygen is not perfect, but it's a good fit.

Why does it even require an "acceptor" to accept the electrons and hydrogen ions? What would happen if they were left alone?

Same thing that would happen if you clogged a pipe or hose. Once all of the electron acceptors on the transport chain are full, movement stops.
The electrons won't be magically absorbed by other parts of the proteins making up the chains, and while there might be errant reactions with other molecules, you'd be lucky to get a whole ATP molecule out in a day, assuming you could magically stay alive. In reality, cyanide poisoning is precisely the scenario you're asking about, and it will kill you very quickly.
{ "domain": "biology.stackexchange", "id": 2639, "tags": "biochemistry, cellular-respiration, energy-metabolism" }
How to Prove a System Is Invertible?
Question: What I know is that for a system to be invertible it should be one-to-one, but I am confused: if I am given the transfer function of an LTI system, how can I prove or verify that it is invertible?

Another query is: can a system have more than one inverse system? Some time ago I used to think that inverse pairs are unique, but after solving an example of finding the inverse of differentiation as an LTI system, I came to the result that it has got two inverses: 1. u(t) 2. -u(-t)

Answer: In general, an LTI system is invertible if it has neither zeros nor poles in the Fourier domain (its spectrum). The way to prove it is to calculate the Fourier transform of its impulse response. The intuition is simple: if it has no zeros in the frequency domain, one could calculate its inverse (element-wise inverse) in the frequency domain.

A few remarks for the practical world:

If for a certain system we know the input signals are limited to a certain band (or zones) in the frequency domain, it is enough that the LTI system has no zeros in this zone only.

In practice we have noise, hence even values which are not mathematically zero but close to 0 (or even just low relative to the gain of the LTI system, say ~-60 [dB]) make the system effectively not invertible.

Look for the many questions in this community about Deconvolution and Inverse Problems.
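The frequency-domain test can be sketched numerically for FIR systems. A rough Python sketch (numpy assumed; the function name, grid size and tolerance are mine, and sampling the response on an FFT grid can miss a zero that falls between grid points):

```python
import numpy as np

def is_invertible_fir(h, n_fft=256, tol=1e-8):
    """Heuristic check for an FIR LTI system: sample |H(e^{jw})| on an
    FFT grid and declare the system invertible if no sample is near zero."""
    H = np.fft.fft(h, n_fft)          # zero-padded DFT of the impulse response
    return np.abs(H).min() > tol
```

A two-tap averager h = [0.5, 0.5] has a null at the Nyquist frequency, so it fails the test, while h = [1, -0.5] keeps |H| ≥ 0.5 everywhere and passes. Practical use would replace tol with a noise-floor-dependent threshold, in the spirit of the -60 dB remark above.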
{ "domain": "dsp.stackexchange", "id": 7623, "tags": "discrete-signals, signal-analysis, continuous-signals, linear-systems, signal-detection" }
How exactly does lambda calculus capture the intuitive notion of computability?
Question: I've been trying to wrap my head around the what, why and how of $\lambda$-calculus but I'm unable to come to grips with "why does it work"? "Intuitively" I get the computability model of Turing Machines (TM). But this $\lambda$-abstraction just leaves me confounded. Let's assume, TMs don't exist - then how can one be "intuitively" convinced about $\lambda$-calculus's ability to capture this notion of computability. How does having a bunch of functions for everything and their composobility imply computability? What am I missing here? I read Alonzo Church's paper on that but I'm still confused and looking for a more "dummed down" understanding of the same. Answer: You're in good company. Kurt Gödel criticized $\lambda$-calculus (as well as his own theory of general recursive functions) as not being a satisfactory notion of computability on the grounds that it is not intuitive, or that it does not sufficiently explain what is going on. In contrast, he found Turing's analysis of computability and the ensuing notion of machine totally convincing. So, don't worry. On the other hand, to get some idea on how a model of computability works, it's best to write some programs in it. But you do not have to do it in pure $\lambda$-calculus, although it's fun (in the same sort of way that firewalking is). You can use a modern descendant of $\lambda$-calculus, such as Haskell.
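To make the "bunch of functions" intuition concrete without writing pure λ-calculus, here is a tiny sketch of Church numerals using Python lambdas (Python standing in for λ-terms; the names are mine). Data is encoded as iteration: the numeral n is the function that applies f n times, and arithmetic is just composition.

```python
# Church numerals: the numeral n is "apply f, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))           # one more application
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))              # compose the iterations

def to_int(n):
    """Decode a Church numeral by counting how often it applies its argument."""
    return n(lambda k: k + 1)(0)
```

With two = succ(succ(zero)) and three = succ(two), to_int(add(two)(three)) evaluates to 5. Nothing here is computed by any machinery other than function application, which is the point: data and computation are both reduced to composing and applying functions.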
{ "domain": "cstheory.stackexchange", "id": 3120, "tags": "soft-question, computability, lambda-calculus, ho.history-overview, intuition" }
What is this little white creature?
Question: Please identify this little, soft white creature. Since childhood I have seen these things flying in my surroundings, and nowadays they are seen occasionally.

Answer: It is, as stated by @rg255, a seed (or actually a fruit, see below). The seed itself is the small brownish thing. The white hairs are attached to make the seed fly with the wind. Looking at the seed and the hairs, I think it belongs to the daisy and dandelion family Asteraceae/Compositae, although there are other possibilities; see comments.

EDIT: @AlwaysConfused is right in stating that, in the case of the daisy family, it should be called a fruit that contains one single seed. To be more precise, it is an achene/cypsela. In many species, what is often referred to as the "seed" is actually a fruit containing the seed. The seed-like appearance is owed to the hardening of the wall of the seed-vessel, which encloses the solitary seed so closely as to seem like an outer coat. Info derived from here on fruits and seeds and, more specifically, on achenes/cypselae here.
{ "domain": "biology.stackexchange", "id": 6616, "tags": "botany, species-identification" }
Spinor functional quantization unitarily equivalent and determinant
Question: On P&S's QFT, pages 301 and 302, the book discusses functional quantization of the spinor field. The book defines a Grassmann field $\psi(x)$ in terms of any set of orthonormal basis functions: \begin{equation} \psi(x)=\sum_i \psi_i \phi_i(x) \tag{9.71} \end{equation} where the $\phi_i(x)$ are ordinary four-component spinors and the $\psi_i$ are Grassmann numbers. Then the book defines the two-point function: \begin{equation} \left\langle 0\left|T \psi\left(x_1\right) \bar{\psi}\left(x_2\right)\right| 0\right\rangle=\frac{\int \mathcal{D} \bar{\psi} \mathcal{D} \psi \exp \left[i \int d^4 x \bar{\psi}(i \not \partial-m) \psi\right] \psi\left(x_1\right) \bar{\psi}\left(x_2\right)}{\int \mathcal{D} \bar{\psi} \mathcal{D} \psi \exp \left[i \int d^4 x \bar{\psi}(i \not \partial-m) \psi\right]} \tag{A} \end{equation}

I don't understand the following:

The book says that they write $\mathcal{D}\overline{\psi}$ instead of $\mathcal{D}\psi^*$ for convenience, and that the two are unitarily equivalent. According to (9.71) these two are vectors, not matrices, so what is the meaning of "unitarily equivalent" here?

According to the previous discussion, the denominator of (A) should be \begin{equation} \text{det}\left(-i\int d^4 x \,(i\not \partial -m)\right) \end{equation} so why do we just denote it $\text{det}(i\not \partial -m)$?

Answer: The functional measure is not only the (continuous) product over each space-time point but also over each component of the objects we are taking the measure of. This means that the measure of $\psi^\dagger$ and $\psi$ is the same. Moreover, $\overline{\psi}=\psi^\dagger \gamma^0$ and $\text{det}(\gamma^0)=1$, so $\mathcal{D}\overline{\psi}$ and $\mathcal{D}\psi^\ast$ are the same. You have: \begin{equation} \prod_i \int d\theta^\ast_i d\theta_i\,e^{-\theta^\ast_i B_{ij}\theta_j} = \det(B), \end{equation} as an identity; see Berezin integration.
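The determinant identity can be checked by hand in the one-variable case. A sketch (my own, using only the Grassmann rules $\theta^2 = 0$, $\int d\theta = 0$, $\int d\theta\,\theta = 1$):

```latex
\int d\theta^\ast d\theta \; e^{-\theta^\ast b\,\theta}
  = \int d\theta^\ast d\theta \left( 1 - \theta^\ast b\,\theta \right)
  = \int d\theta^\ast d\theta \left( 1 + b\,\theta\,\theta^\ast \right)
  = b .
```

The exponential terminates after the linear term because $(\theta^\ast\theta)^2 = 0$, and only the term that saturates both integrals survives. For a matrix $B$ brought to eigenvalue form, the product over the one-variable results gives $\det B$, which is the identity quoted above.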
{ "domain": "physics.stackexchange", "id": 92069, "tags": "quantum-field-theory, path-integral, spinors, grassmann-numbers, functional-determinants" }
Deriving energy for elliptical orbit
Question: So I wanted to derive the total energy for an elliptical orbit, $E = -GmM/2a,$ and while I was doing it, I ran into this hurdle. At the closest point to the focus, the orbiting object is at a distance of $a(1-e)$ from the focus, where $a$ is the semi-major axis and $e$ is the eccentricity. So if we were to take the centripetal force at this point we should get $$\frac{mv^2}{r} = \frac{GmM}{r^2}$$ and $r = a(1-e)$, so then we would get $$\frac{1}{2}mv^2 = \frac{GmM}{2a(1-e)}$$ which is the kinetic energy. If we were to add this to the potential energy $$U = -\frac{GmM}{a(1-e)},$$ we get the total energy as $$E = -\frac{GmM}{2a(1-e)}.$$ Isn't this wrong, because the total energy of an ellipse is $E = -GmM/2a$? Why did I end up with a total energy not equal to that? I asked my teacher about this and she said that we can only use $mv^2/r$ for a circular orbit, but at the closest point in the ellipse, isn't the force perpendicular to the velocity, so there shouldn't be any tangential acceleration, and therefore we can use the centripetal force equation?

Answer: The expression you use for centripetal force, $F = mv^2/r,$ is only valid for circular orbits. A planet in an elliptical orbit, at its closest approach to the star it is orbiting, will have a higher speed than that given by equating the centripetal and gravitational forces. The planet will start moving farther away from the star after its closest approach, which means the star's gravity is not strong enough to hold the planet to a constant distance, that is, a circular orbit. So the gravitational force must be weaker than $mv^2/r.$
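The answer's point can be verified numerically with the vis-viva relation $v^2 = GM(2/r - 1/a)$, which gives the true perihelion speed. A short Python sketch (the Sun and Earth masses are standard values; the eccentricity is a made-up illustrative value):

```python
# Numerical check: the total energy at perihelion equals -G M m / (2 a)
# when the perihelion speed comes from vis-viva, not from m v^2/r = G M m/r^2.
G = 6.674e-11                      # gravitational constant, SI
M, m = 1.989e30, 5.972e24          # Sun and Earth masses
a, e = 1.496e11, 0.4               # semi-major axis; illustrative eccentricity

r = a * (1 - e)                    # perihelion distance
v2 = G * M * (2.0 / r - 1.0 / a)   # vis-viva: true speed squared at perihelion
E = 0.5 * m * v2 - G * M * m / r   # kinetic + potential

E_expected = -G * M * m / (2.0 * a)
print(abs(E / E_expected - 1.0) < 1e-9)   # -> True

# The circular-orbit speed at the same radius is smaller:
v2_circ = G * M / r                # from m v^2 / r = G M m / r^2
print(v2 > v2_circ)                # -> True: perihelion speed exceeds circular speed
```

Using v2_circ in the energy instead reproduces the question's $-GmM/(2a(1-e))$, confirming that the circular-force assumption, not the algebra, is where the derivation goes wrong.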
{ "domain": "physics.stackexchange", "id": 92361, "tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, orbital-motion, celestial-mechanics" }
Concatenate dataframes Pandas
Question: I have three dataframes. Their shapes are (2656, 246), (2656, 2412) and (2656, 7025). I want to merge the dataframes as shown above, so the result is a (2656, 9683) DataFrame. Thanks for any help. (Typo in the image: for DataFrame 3 it should be 7025, not 5668.)

Answer: Assuming that the rows are in the same order in which you wish to merge all of the dataframes, you can use the concat command, specifying axis=1:

new_df = pd.concat([df1, df2, df3], axis=1)

If the row indexes of the data frames differ and you want to merge them positionally in their current order, reset the indexes first:

new_df = pd.concat(
    [df1.reset_index(drop=True), df2.reset_index(drop=True), df3.reset_index(drop=True)],
    axis=1,
)

For details on the merge, join & concatenation operations, please refer to the pandas docs.
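A small reproducible sketch of both cases (toy shapes standing in for the (2656, …) frames; pandas and numpy assumed):

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame(np.zeros((4, 2)), columns=["a", "b"])
df2 = pd.DataFrame(np.ones((4, 3)), columns=["c", "d", "e"])

# Same index on both frames: columns simply line up side by side.
wide = pd.concat([df1, df2], axis=1)
print(wide.shape)                      # -> (4, 5)

# Different indexes: axis=1 aligns on the index (an outer join), so the
# result grows to 8 rows full of NaN unless the indexes are reset first.
df2_shifted = df2.set_index(pd.RangeIndex(10, 14))
misaligned = pd.concat([df1, df2_shifted], axis=1)
print(misaligned.shape)                # -> (8, 5)

aligned = pd.concat(
    [df1.reset_index(drop=True), df2_shifted.reset_index(drop=True)], axis=1
)
print(aligned.shape)                   # -> (4, 5)
```

The index-alignment behavior is why resetting the index matters: concat along axis=1 matches rows by label, not by position.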
{ "domain": "datascience.stackexchange", "id": 1870, "tags": "pandas, dataframe" }