Is the car braking time formula $ T = v / (\mu_s \, g) $ valid only for uniformly accelerated motion?
Question: I'm wondering if the car braking time formula is valid only for uniformly accelerated motion. $$ T = \frac{v} {\mu_s \, g} $$ with $ v $ the initial speed, $ \mu_s $ the static friction coefficient between the wheel and the ground, $ g $ the gravitational acceleration on the earth. I derived it in this way ($ F_{s, max} = \mu_s \, N = \mu_s \, m \, g $ maximum static friction force; $ N $ normal force, $ m $ car mass): $$ F_{s, max} = m \, a $$ $$ \mu_s \, m \, g = m \, \frac{v} {T} $$ $$ T = \frac{v} {\mu_s \, g} $$ where $ a $ is the average acceleration of the car. Thank you in advance. Answer: While the derivation you've used assumes uniform acceleration, it is also possible to show that the $T$ you have found is a lower bound on the stopping time of the car, even without assuming uniform acceleration. Roughly speaking, even if the acceleration varies with time, its magnitude can be no greater than $\mu_s g$, which implies that the stopping time can be no less than the $T$ you have found. More formally: assume the frictional force and the acceleration vary with time. The magnitude of the frictional force $F_\text{fr}(t)$ is no greater than $\mu_s$ (the coefficient of static friction) times the normal force $N$: $$ |F_\text{fr}(t)| \leq \mu_s N = \mu_s m g $$ assuming the car is on level ground. This means that the acceleration of the car is bounded by $$ |a(t)| = |F_\text{fr}(t)/m| \leq \mu_s g. $$ If the car has a positive velocity $v$ initially, then as the car brakes we have $a(t) < 0$, and so $a(t) \geq - \mu_s g$. Using calculus, we then have \begin{align*} \Delta v &= \int_0^T a(t) \, dt \geq \int_0^T (- \mu_s g) \, dt \\ 0 - v &\geq - \mu_s g T \\ \mu_s g T &\geq v \\ T &\geq \frac{v}{\mu_s g}. \end{align*} Thus, no matter what the car does, it will not be able to stop more quickly (i.e., in less time) than the $T$ you have calculated assuming uniform acceleration.
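As a quick numeric sanity check of the bound (my sketch; the speed and friction coefficient below are illustrative values, not from the post):

```python
# Minimal numeric check of the lower bound T = v / (mu_s * g).
# The speed and friction coefficient are illustrative assumptions.
v = 30.0      # initial speed, m/s (assumed)
mu_s = 0.7    # static friction coefficient, dry asphalt (assumed)
g = 9.81      # gravitational acceleration, m/s^2

T = v / (mu_s * g)
print(f"minimum stopping time: {T:.2f} s")  # ~4.37 s
```

No braking strategy, however the deceleration is distributed over time, can beat this number for the assumed values.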
{ "domain": "physics.stackexchange", "id": 64096, "tags": "homework-and-exercises, newtonian-mechanics, kinematics, acceleration, friction" }
Do male marsupials have a pouch?
Question: Do male marsupials have a pouch, or is it a female-only organ (like the womb)? Answer: In most marsupials, only the females have a pouch. However, males of the water opossum and the extinct Tasmanian tiger (or thylacine) also have a pouch. The males of both the thylacine and the water opossum used/use their pouch to keep their genitalia from getting entangled in vegetation.
{ "domain": "biology.stackexchange", "id": 1044, "tags": "zoology, marsupials" }
Sort algorithm input probabilities
Question: Suppose that there is an algorithm which sorts a sequence of $n$ elements $$a_1, a_2, ..., a_n$$ Each of the $a_i$ is chosen with probability $1/k$ from a set of $k$ distinct integer numbers. Is it true, as $k \to \infty$, that: The probability that any two of the incoming sequence elements are equal tends to $0$? The probability that the incoming sequence is already sorted tends to $\frac{1}{n!}$? Why / why not? Answer: Here is why you would expect these properties to hold. Suppose that $a_1,\ldots,a_n$ are chosen independently from the uniform distribution on $[0,1]$. The event $a_i = a_j$ has probability zero, and so with probability $1$ all numbers are distinct. Moreover, given that, all sequence orderings are equally likely, and since there are $n!$ of them, the probability that the sequence is ordered is exactly $1/n!$. Proving that these claims hold in the limit even when the $a_i$ are sampled from a finite set requires some calculation, which I leave to you.
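Both limits can be checked exactly with a short script (my addition; for $n$ draws from $k$ values, $P(\text{all distinct}) = k(k-1)\cdots(k-n+1)/k^n$, and the non-decreasing sequences correspond one-to-one to size-$n$ multisets, of which there are $\binom{n+k-1}{n}$):

```python
from math import comb, factorial, perm

def p_any_equal(n, k):
    # P(at least two of n independent uniform draws from k values coincide)
    return 1 - perm(k, n) / k**n

def p_sorted(n, k):
    # P(the sequence is non-decreasing): each size-n multiset has exactly
    # one sorted arrangement, and there are C(n + k - 1, n) multisets
    return comb(n + k - 1, n) / k**n

n = 5
for k in (10, 10**3, 10**6):
    print(k, p_any_equal(n, k), p_sorted(n, k))
# p_any_equal tends to 0 and p_sorted tends to 1/5! ~ 0.008333 as k grows
```

The choices of `n` and the values of `k` are arbitrary; any fixed `n` shows the same convergence.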
{ "domain": "cs.stackexchange", "id": 1735, "tags": "probability-theory, sorting" }
Unable to use navsat_transform_node
Question: Hi everybody, I would like to ask for help with using the navsat_transform_node properly. I have a robot which publishes only:

a GPS topic (sensor_msgs/NavSatFix message)
an IMU topic (sensor_msgs/Imu message)

My aim is to simply convert the latitude/longitude/altitude into a relative x/y/z position without running any other node on the robot (rviz, ekf/ukf, ...). Is that possible, in your opinion? Can I reach my goal even without any odometry source? I've already tried to use the 'wait_for_datum' flag, as explained in the documentation, but the node doesn't seem to work. It doesn't print any ROS_INFO on the terminal even though it seems to be launched correctly (I can see the odometry/gps topic with 'rostopic list', but nothing is published on it...). Thanks in advance for any advice.

EDIT 02/07 Here are the detailed messages and the node setup. (Please note that, as I said before, I don't have any EKF node, so /initialpose in the launch file below doesn't exist.)

sensor_msgs/NavSatFix message:

---
header:
  seq: 3051
  stamp:
    secs: 1530531889
    nsecs: 957255545
  frame_id: gps
status:
  status: 0
  service: 0
latitude: 22.542813
longitude: 113.958894172
altitude: 2.9022295475
position_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
position_covariance_type: 0
---

sensor_msgs/Imu message:

---
header:
  seq: 11138
  stamp:
    secs: 1530531940
    nsecs: 302707371
  frame_id: body_FLU
orientation:
  x: -0.224501576947
  y: -0.140455550098
  z: 0.889119624209
  w: -0.373279485428
orientation_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
angular_velocity:
  x: 0.17925876379
  y: -0.00353165809065
  z: 0.0794828385115
angular_velocity_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
linear_acceleration:
  x: 37.4513301294
  y: 4.78760564071
  z: -22.280056494
linear_acceleration_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
---

Node launch file:

<launch>
  <node pkg="robot_localization" type="navsat_transform_node" name="navsat_transform_node" clear_params="true">
    <!-- <rosparam command="load" file="$(find robot_localization)/params/navsat_transform_template.yaml" /> -->
    <param name="yaw_offset" value="0.00"/>
    <param name="zero_altitude" value="false"/>
    <param name="publish_filtered_gps" value="false"/>
    <param name="broadcast_utm_transform" value="true"/>
    <param name="use_odometry_yaw" value="false"/>
    <param name="wait_for_datum" value="true"/>
    <rosparam param="datum">[1.543533,4.5234534534,1.022411,map,base_link]</rosparam>
    <!-- Placeholders for input remapping. Set your topic names as the "to" values. -->
    <remap from="imu/data" to="/dji_sdk/imu"/>
    <remap from="odometry/filtered" to="/initialpose"/>
    <remap from="gps/fix" to="/dji_sdk/gps_position"/>
  </node>
</launch>

Originally posted by Mondo on ROS Answers with karma: 1 on 2018-06-29 Post score: 0

Original comments Comment by Tom Moore on 2018-07-02: Please post your full configuration for the EKF and navsat_transform_node. Also, please post one sample input message from every sensor input. IMU + GPS-only state estimation usually doesn't work very well, but I can't say more without more information. Comment by Mondo on 2018-07-02: Edited just right now.

Answer: I see what you're trying to do. You shouldn't need the datum parameter; that's really there to force the GPS origin to a given point, rather than letting the first GPS reading be the GPS origin. The problem is that navsat_transform_node was written to handle this situation: the robot starts driving indoors and generates nav_msgs/Odometry messages with its current pose; no GPS signal is available; the robot then drives outside and gets a GPS fix. In that situation, we need to invert our robot's pose in the nav_msgs/Odometry message, append that to the GPS reading, and use that point as our GPS origin. In your case, I think you should be able to just publish a nav_msgs/Odometry message to /initialpose that has a 0 position and an identity quaternion.
You need to make sure that you set the frame_id and child_frame_id appropriately in the /initialpose message (in your case, it appears that you want map and base_link, respectively). Also, your IMU data is in the body_FLU frame. Are you providing a transform from that frame to base_link? You'll need to. Originally posted by Tom Moore with karma: 13689 on 2018-07-05 This answer was ACCEPTED on the original site Post score: 2
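If the goal really is just lat/lon/alt to a local x/y/z, the conversion itself is small enough to sketch outside of ROS (my illustration only; navsat_transform_node internally performs a full UTM conversion, and the second fix below is invented for the example):

```python
import math

R = 6378137.0  # WGS84 equatorial radius, metres

def geodetic_to_enu(lat, lon, alt, origin):
    # Equirectangular (flat-earth) approximation of the ENU offset from an
    # origin fix; adequate over a few km, unlike a proper UTM conversion.
    lat0, lon0, alt0 = origin
    east = R * math.radians(lon - lon0) * math.cos(math.radians(lat0))
    north = R * math.radians(lat - lat0)
    up = alt - alt0
    return east, north, up

origin = (22.542813, 113.958894172, 2.9022295475)  # first fix from the question
x, y, z = geodetic_to_enu(22.542913, 113.958894172, 3.9022295475, origin)
print(x, y, z)  # a point roughly 11 m north of and 1 m above the origin
```

This is only meant to show what the node computes conceptually; in practice you would still let navsat_transform_node (or a tested geodesy library) do the conversion.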
{ "domain": "robotics.stackexchange", "id": 31133, "tags": "ros, navigation, odometry, navsat-transform-node, robot-localization" }
Do we know the availability of any material suitable for solar cells on the moon?
Question: Disclaimer: I have no formal training in any scientific field. Have we located any material suitable for solar panels already on the moon? Or does it appear that it would have to be imported? Answer: Welcome to Astronomy.SE! The moon, like the earth, consists of about 25% silicon by mass. Silicon is one of the main elements used in producing solar cells. So sourcing the material is not the problem. The manufacturing is. I assume your interest in this matter doesn't just stop at finding the materials, but also includes in-situ manufacture. I won't go for a detailed answer, but I will outline a few considerations. You need your own silicon industry. It's the sort of thing that requires a large infrastructure behind it. For instance, monocrystalline silicon solar cells require high-purity silicon (impurities on the order of a few ppm) and pristine growing conditions for the crystal to grow. That means you need to have the infrastructure in place to mine moon rock, reduce the silica to silicon, and separate out as much impurity as you can. With the sorts of purity levels we're talking about, you'd probably need at the very least a medium-sized moon base already in place. Making any sort of solar panel is hard. You need an industry to make the silicon. You need another industry to make the solar panels. This sort of stuff requires precision engineering and complex (and heavy) tooling, which you'll have to import from earth. You need a 'pyramid' of support structures and expertise to maintain such expensive equipment. Amorphous Si is probably your best bet. Amorphous silicon can be applied through some form of vapour deposition and, from a brief bit of reading around, appears to be one of the easiest ways of making solar cells, though the efficiencies tend to be low, with the best reported efficiency being 14% and realistically achievable efficiencies likely well below 10%. (see the bottom green line of this diagram)
{ "domain": "astronomy.stackexchange", "id": 3678, "tags": "the-moon" }
Do male mammals other than humans have nipples?
Question: When I kept rabbits, I had a pair, one male and one female. While the female's nipples were quite prominent, especially after giving birth, I don't remember the male having any nipples at all. Do males of other mammal species have nipples like human males, or is it a trait that's unique to humans? Answer: At the very least, I know that male primates also have nipples like the females, though they are very close relatives of humans. On the other hand, in some of my dissection labs, I noticed that male pigs also have nipples just like the female ones. It seems to be the case that most male mammals have nipples, which probably has to do with mammals being breast-feeders and with their developmental pattern. There is a page on Wikipedia that I found about this topic: http://en.wikipedia.org/wiki/Nipple#In_male_mammals
{ "domain": "biology.stackexchange", "id": 2698, "tags": "anatomy, mammals" }
Reaction of nitrobenzene with sodium metal in ethanol
Question: The reactant is nitrobenzene (only) and the reagent is Na metal in liquid ammonia as solvent, along with ethanol (which I am assuming plays the role of proton donor; please correct me if I am wrong). The first thing that comes to mind is "alkyne", but there isn't any alkyne in nitrobenzene (not even in its resonance structures). How will this reaction proceed? In the answer, the DBE is just reduced by one. I was also wondering what would happen if it were toluene. Can I have an explanation based on the mechanism? Answer: Benzene rings or substituted phenyl rings undergo reduction (the Birch reduction) in the presence of $\ce{Na}$ in liq. $\ce{NH3}$ with $\ce{EtOH}$ as the proton source, generally forming substituted cyclohexa-$1,4$-dienes. This reduction follows a Single Electron Transfer (SET) mechanism. The single unpaired electron of sodium is transferred to the benzene ring, forming a radical anion, which subsequently forms a carbanion. This carbanion eventually takes a proton from $\ce{EtOH}$ and becomes the cyclohexa-$1,4$-diene. In the first step, the transfer of the electron from $\ce{Na}$ occurs at the position where the resulting intermediate gets maximum stabilisation. In the case of electron-withdrawing groups, such as $\ce{-NO2, -COOH, -SO3H, -CHO, -COOR, -COR}$ etc., that transfer occurs para to the group, as the negative charge generated is highly stabilised by the electron-withdrawing group. The mechanism in the case of nitrobenzene is as follows. But if the group is electron-donating, such as $\ce{-CH3, -OH, -OMe, -NMe2}$ etc., the transfer will occur in such a way as to stabilise the generated negative charge by keeping it meta to the group (where the effect of electron donation by the group is least). For those compounds, the mechanism and corresponding product will be as follows. Thus, in your case, the product will be $\text{3-nitrocyclohexa-1,4-diene}$ as described in the first mechanism.
{ "domain": "chemistry.stackexchange", "id": 10884, "tags": "organic-chemistry" }
frontier_exploration to do unbounded exploration
Question: I am using frontier_exploration [indigo]. I had trouble using it, but now I know how to use it with limited settings. What I really want to do is unbounded exploration. The wiki reads "To run an unbounded exploration task, simply leave the boundary blank." I don't know what this means. When I set polygon regions using "Publish Points" in rviz, the robot seemed to stop even though there were unknown regions left, printing a "Finished exploration room" message. Could you help me run unbounded exploration? Here are my settings: Ubuntu 14.04 (64 bit) Turtlebot gazebo simulator Gmapping for map building ROS: Indigo Originally posted by JollyGood on ROS Answers with karma: 58 on 2015-03-06 Post score: 1 Original comments Comment by syafiqsalam on 2015-04-19: Can I apply this in hydro? Answer: To do 'unbounded' exploration, you send an actionlib goal (either using a SimpleActionClient implementation, or by running actionlib's axclient.py) that contains an empty polygon. Originally posted by paulbovbel with karma: 4518 on 2015-03-07 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by sezan92 on 2017-03-23: Hello, I have written Python code for sending an empty polygon, but the turtlebot doesn't move. Can you help us with a code? I am not sure how to use axclient.py either. Please help. Comment by useranonymous on 2017-07-30: rosrun actionlib axclient.py /explore_server Comment by hasnain on 2018-03-26: Does unbounded mean that it will start moving on its own and autonomously map the environment?
{ "domain": "robotics.stackexchange", "id": 21069, "tags": "ros, frontier-exploration, turtlebot" }
Ros2 for Unity create custom msg
Question: Hi all, I'm trying to add a custom msg to my ROS2 for Unity on Windows. From what I understand, I need to follow this. Everything is fine until I arrive at the point of modifying the custom_messages.repos file in order to get my custom msg from get_repos.ps1. In fact, I don't know how to write this file; it requires a URL and other parameters. But in my case I don't have an online repo, because, as said here, I should put my custom msg pkg into the src/ros2 sub-folder. Obviously, every time I run build.ps1 I encounter some errors related to my msg. After building the ros2-for-unity pkg and importing it into the project, I get this error in Unity: error CS1061: 'Teleop' does not contain a definition for 'ang' and no accessible extension method 'ang' accepting a first argument of type 'Teleop' could be found (are you missing a using directive or an assembly reference?) Remember that ros2-for-unity is a plugin, so I can't do ros2 topic list from it. To see whether the topic is active I need to launch the command from the normal ros2_ws, and there I can see my msg sent correctly.
This is my custom msg, which builds in the normal ws:

std_msgs/Header header
# current position
geometry_msgs/Vector3[2] pose # meters
# current angles
geometry_msgs/Vector3[2] ang # rad

This is my Unity subscriber:

namespace ROS2
{
    public class RCM2Omni : MonoBehaviour
    {
        private ROS2UnityComponent ros2Unity;
        private ROS2Node ros2Node;
        private ISubscription<my_msgs.msg.Teleop> Omni2;
        private float[] transformations = new float[3];
        bool moving = false;

        void Start()
        {
            ros2Unity = GetComponent<ROS2UnityComponent>();
        }

        void Update()
        {
            if (ros2Node == null && ros2Unity.Ok())
            {
                ros2Node = ros2Unity.CreateNode("RCM2Omni");
                Omni2 = ros2Node.CreateateSubscription<my_msgs.msg.Teleop>(
                    "/Teleop", msg => { parse(msg); });
            }
            if (moving)
            {
                transform.Rotate(transformations[0], 0, 0);
                moving = false;
            }
        }

        public void parse(my_msgs.msg.Teleop msg)
        {
            transformations[0] = (float)msg.ang[1].x;
            moving = true;
        }
    }
}

Has anyone already had this problem? Originally posted by alberto on ROS Answers with karma: 100 on 2022-07-08 Post score: 0 Original comments Comment by ljaniec on 2022-07-11: Is the line with transformations[0] = (float)msg.ang[1].x; in the parse(my_msgs.msg.Teleop msg) correct? In the examples I can see chatter_sub = ros2Node.CreateSubscription<std_msgs.msg.String>("chatter", msg => Debug.Log("Unity listener heard: [" + msg.Data + "]")); so maybe try msg.Data.ang[1] or similar? Comment by alberto on 2022-07-11: Even with msg.Data I receive: 'Teleop' does not contain a definition for 'Data' .. Instead, with only msg, I get the result in the console, but it's only the msg object, and I don't know if it has the elements or how to access it.
Comment by ljaniec on 2022-07-11: I think you can check it from the ground up. I would start with this example: https://github.com/RobotecAI/ros2cs/blob/master/src/ros2cs/ros2cs_examples/ROS2Listener.cs, check if it works with std_msgs.msg.String, then add your custom message and check with this simplest possible ROS 2 listener whether your message is heard correctly. I don't see e.g. Ros2cs.Init(); in your code, which is there in the example listener code. Answer: I found the solution; in this answer I will summarise all the steps I followed in one place. First, to build your Ros2-for-Unity plugin follow this.

1. Clone the repo.
2. Run .\pull_repositories.ps1; this will create the ros2cs folder.
3. Open the ros2cs folder and create a custom_messages folder in it. This is not mandatory; it should also work if you put it in ros2cs/ros2, but I didn't test that.
4. Add your custom_msg_pkg to that folder exactly as it is in the ros2ws.
5. Run ./build.ps1 -standalone.
6. Run create_unity_package.ps1.

Now in Unity, do Assets -> Import package -> Custom package and select your newly created UnityPackage. Import all. Write a publisher and subscriber; I followed this. And then a final but most important reminder: if you ever face this error in Unity: error CS1061: your_msg does not contain a definition for your_field and no accessible extension method your_field accepting a first argument of type your_msg could be found (are you missing a using directive or an assembly reference?) CHECK THAT THE FIRST LETTER OF THE ELEMENT IS CAPITAL. For example: to access std_msgs.String.data in C++ you do msg.data, but in C# (or Unity, I don't know which one complains) you must do msg.Data (capital D!), or you will get error CS1061. The same goes for your custom_msg fields! I want to highlight this because it made me cry for 2 days and it's a very silly error. Also thanks to @ljaniec for the help! Originally posted by alberto with karma: 100 on 2022-07-11 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 37833, "tags": "ros2" }
Can all water soluble ionic compounds conduct electricity?
Question: I'm taking a first-year university chemistry course, and I am reading a lot of contradictory things. Some people say that ionic compounds that are soluble in water are always electrically conductive; others say that they may be conductive. Which one is it? I have this problem to think about: Suppose that an unknown chemical compound exhibits the following properties: it is crystalline but shows no electric conductivity in the solid state; it melts at $\mathrm{300^\circ C}$ and decomposes at $\mathrm{400^\circ C}$; the compound is soluble in water, and the solution shows no electrical conductivity. What kind of chemical bond would you expect in the given compound? Try to describe this type of bond in detail. Everything screams "ionic" (high melting point, solid at room temperature, crystalline structure, non-conducting in solid form) besides the fact that it doesn't conduct in water. Answer: DavePhD is right! Your material is an organic compound. In order to be soluble in water, it should have some polar substituents. I'd however exclude carboxylic acids or phenolic $\ce{OH}$ (except maybe in the proximity of a carbonyl group), since these will partly dissociate and lead to minor conductivity. If the melting point weren't that high, inositol would be a candidate. The nucleobases thymine and uracil do show melting points in the 300 °C range and are both soluble in water.
{ "domain": "chemistry.stackexchange", "id": 2853, "tags": "solubility, ionic-compounds, conductivity" }
Will the MAE of testing data always be higher than MAE of training data?
Question: On the Kaggle course page the chart below shows that the MAE of the testing data is always higher than the MAE of the training data. Why is this the case? Is it limited to the DecisionTreeRegressor model? Or is the graph wrong, and in practice the MAE of testing can be lower than the MAE of training? Answer: Train MAE is "generally" lower than Test MAE, but not always. Now coming to your questions. Q1. Why does this happen? A1. Train MAE is generally lower than Test MAE because the model has already seen the training set during training, so it is easier to score high accuracy on the training set. The test set, on the other hand, is unseen, so we generally expect Test MAE to be higher, as it is more difficult to perform well on unseen data. However, it is not always necessary for Train MAE to be lower than Test MAE. It might happen "by chance" that the test set is relatively easier (than the training set) for the model, leading to a lower Test MAE! Q2. Is this true only for DecisionTreeRegressor? A2. No, this plot is not specific to DecisionTreeRegressor. Notice that in my explanation I haven't made any assumptions about the model! Q3. Is the graph incorrect? A3. No, the graph is not wrong; it speaks of the general case, of what we expect on average. If you were to draw a graph for only a particular instance of the model running, you could have Train MAE above Test MAE.
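The asymmetry is easy to reproduce with a toy model (my sketch; it uses a 1-nearest-neighbour regressor on synthetic data rather than the course's DecisionTreeRegressor, but the same logic applies to any model that can memorise its training set):

```python
import random

random.seed(0)

def sample(n):
    # noisy observations of y = x^2 (synthetic data, my assumption)
    pts = []
    for _ in range(n):
        x = random.uniform(0, 1)
        pts.append((x, x * x + random.gauss(0, 0.1)))
    return pts

train, test = sample(50), sample(50)

def predict(x):
    # 1-nearest-neighbour regressor: returns the label of the closest
    # training point, i.e. it memorises the training set exactly
    return min(train, key=lambda p: abs(p[0] - x))[1]

def mae(data):
    return sum(abs(predict(x) - y) for x, y in data) / len(data)

print("train MAE:", mae(train))  # exactly 0.0: each point is its own nearest neighbour
print("test  MAE:", mae(test))   # strictly positive on unseen points
```

Train MAE hits zero because the model has literally seen every training point, while the held-out points still carry noise the model cannot anticipate; this is the "already seen the training set" argument in miniature.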
{ "domain": "datascience.stackexchange", "id": 6641, "tags": "decision-trees, cross-validation, overfitting" }
Measuring reverse transcription yield?
Question: Is there any measurement I could perform after RT that would allow me to check how efficient the procedure was? Nanodrop cannot be used, since remnants of RNA and poly-T primers mask the measurement, or can it? Answer: Nanodrop may be fine for the volumes, but as you rightly guessed it is not easy to distinguish DNA from RNA (and primers) with it. Qubit (Invitrogen) is a sensitive, dye-based quantification assay. You can use it for estimating cDNA yield. The kit contains different dyes for DNA, RNA and protein estimation.
{ "domain": "biology.stackexchange", "id": 2007, "tags": "reverse-transcription" }
What do lambda calculus's "fixed-point combinators" correspond to in a Turing machine?
Question: The lambda calculus is equivalent to the Turing machine, so what do the lambda calculus's "fixed-point combinators" correspond to in a Turing machine? According to the paper "Primitive Rec, Ackerman's Function, Decidable, Undecidable, and Beyond: Exposition by William Gasarch", a Turing machine can do the following: The machine acts in discrete steps. At any one step it will read the symbol in the "tape square", see what state it is in, and do one of the following: write a symbol on the tape square and change state, move the head one symbol to the left and change state, or move the head one symbol to the right and change state. So what do the "fixed-point combinators" correspond to? I think it's 2 and 3: the head can move left and right, so the machine can loop, and "fixed-point combinators" support looping too. Am I right? Thanks! Answer: The lambda calculus is not "equal" to Turing machines. There is a correspondence, but it is not equality; it is one of simulation. You should not expect every aspect of one to have a corresponding counterpart in the other. For instance, it does not make sense to ask "what does the Turing machine head correspond to in lambda calculus?" Having said that, there is a more high-level correspondence. The fixpoint combinator implements general recursion in lambda calculus. In command-based languages the (vague) counterpart is iteration, i.e., the while loop. And Turing machines are capable of iterating a sequence of actions. However, the correspondence between recursion and iteration goes beyond both Turing machines and lambda calculus, so it's not really an answer specific to your question.
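The recursion-versus-iteration correspondence can be made concrete (my sketch; this `fix` is eta-expanded so it works under Python's strict evaluation, and it leans on Python's own recursion, so it is an illustration rather than a pure lambda term):

```python
def fix(f):
    # Fixpoint combinator for call-by-value: the eta-expansion delays
    # the recursive unfolding until the function is actually applied.
    return lambda *args: f(fix(f))(*args)

# General recursion via the fixpoint combinator: the function receives
# "itself" (rec) as an argument instead of referring to its own name.
fact = fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

def fact_iter(n):
    # The imperative counterpart: iteration with mutable state, closer
    # in spirit to a Turing machine looping over its tape.
    acc = 1
    while n > 0:
        acc, n = acc * n, n - 1
    return acc

print(fact(6), fact_iter(6))  # 720 720
```

Both compute the same function; the combinator expresses the loop as self-application, the while loop as repeated state updates.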
{ "domain": "cs.stackexchange", "id": 21167, "tags": "turing-machines, computability, lambda-calculus, functional-programming" }
How do I generate text from ids in Torchtext's sentencepiece_numericalizer?
Question: Torchtext's sentencepiece_numericalizer() outputs a generator of ids from the SentencePiece model corresponding to the tokens in the input sentence. From the generator, I can get the ids. My question is: how do I get the text back after training? For example:

>>> sp_id_generator = sentencepiece_numericalizer(sp_model)
>>> list_a = ["sentencepiece encode as pieces", "examples to try!"]
>>> list(sp_id_generator(list_a))
[[9858, 9249, 1629, 1305, 1809, 53, 842], [2347, 13, 9, 150, 37]]

How do I convert the ids back to the text of list_a (i.e. "sentencepiece encode as pieces", "examples to try!")? Answer: Torchtext does not implement this, but you can directly use the SentencePiece package, installable from PyPI:

import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file='test/test_model.model')
sp.decode([9858, 9249, 1629, 1305, 1809, 53, 842])
{ "domain": "datascience.stackexchange", "id": 10791, "tags": "python, nlp, pytorch, bert, transformer" }
How do you rigorously define current?
Question: I'm working through Griffiths, and nowhere in the book is current actually formally defined. That's kind of important, since he then bases the definition of surface and volume current densities off the notion of a line current. With line currents, current could be defined as the amount of charge that passes through a point per unit time. Okay, that works fine with electrons, but it's not very general and I'd much rather work with line charge densities (in which case the charges passing a point could be viewed as summed delta functions). With this restriction, I figured it's only really rigorously definable on a curve through space $\gamma (s)$ (where $s$ is some real parameter), carrying some line charge density $\lambda(s, t)$. Often with real currents you can view the line charge as propagating through the curve at some velocity $v$, so that then the current becomes $I=\lambda(s,t)v$, for some point $s$ on the curve. But that's not always the case, even when heeding charge continuity, as in the following example. Consider the following charge distribution, on a circular curve $\gamma(s) = (R\cos s, R\sin s)$ for $s \in (-\pi,\pi]$: \begin{align} \lambda_1(s,0) = q\delta(s); \\ \lim _{t\rightarrow \infty} \lambda_1(s,t) = \frac{q}{2\pi R}. \end{align} Physically this represents a point charge $q$ distributing itself over $\gamma$ uniformly as time passes. In this case, how would we define a current? The line charge $\lambda_1$ is not propagating over the curve in any sense, so the first equation for the current doesn't really work. Moreover, what about a closed loop over which the line charge density is constant? This is most theoretical currents, and yet we cannot define the current without the velocity (which is not determinable by looking at the charge density). That leads me to believe we also need a velocity field... but surely we could do some integral for $\lambda_1$ to get the velocity of the charge anyway? 
Is it possible to answer these questions? I'm starting to get very confused, so it would be great if someone could help. Answer: It is actually easier to start with current density, particularly if you want a definition based on charge density. Current density, $\vec J = \rho \vec v$ is the charge density $\rho$ times the velocity $\vec v$. Then current is the integral of the current density over some surface $I = \int \vec J \cdot d\vec A$, typically a cross-section of a wire. Charge is attached to matter, so both the charge density and the velocity are those of the matter carrying the charge. It can be treated as a continuum or as particles.
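To connect this back to the line-current formula in the question (my addition, a one-step specialisation of the definition above): for charge in a thin wire of cross-sectional area $A$ with uniform $\rho$ and speed $v$ along the wire, the surface integral collapses to

```latex
I = \int_S \vec J \cdot d\vec A = \rho v A = \lambda v ,
\qquad \text{where } \lambda = \rho A .
```

So the questioner's $I = \lambda v$ is recovered whenever a velocity field exists. For the spreading point charge, the missing ingredient is exactly that velocity field: it is constrained (though, on a closed loop, not uniquely determined) by the one-dimensional continuity equation $\partial_t \lambda + \partial_s(\lambda v) = 0$, so one must specify how the charge moves, not merely how $\lambda$ evolves.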
{ "domain": "physics.stackexchange", "id": 74297, "tags": "electric-circuits, electric-current, magnetostatics" }
Running the same node on multiple image messages in parallel
Question: Hello ROS community. I have six cameras on which I would like to do object classification with YOLOv3 using darknet_ros, on each of the camera inputs at the same time. I am using a Jetson AGX Xavier with ros-melodic, CUDA and OpenCV installed, and I can access the cameras through ethernet. My question is: what would be the smartest and most suitable way to handle multiple image messages (or messages in general) in parallel, using the same node? It would be nice if someone could point me in the direction of the theory that I need to solve my problem most efficiently. Have a nice day. Originally posted by bachla on ROS Answers with karma: 3 on 2020-04-03 Post score: 0 Answer: As a quick comment: if "the node" hasn't been written with this in mind, then the answer would be: you can't. Of course, I'm assuming here that "the same node" means "with a single running process". If instead you meant "with multiple running instances of the same node type", then you should be able to start as many as you need in separate namespaces and have each of them subscribe to the appropriate topics. Depending on how many resources the node uses / requires (especially GPU resources here, as you mention CUDA), you should be able to start multiple. But that really depends on how "the node" has been written. Originally posted by gvdhoorn with karma: 86574 on 2020-04-03 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by bachla on 2020-04-14: Thank you for the response. I meant running multiple instances of the same node type but on different inputs, and as you said, using different namespaces was the way to go.
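The accepted answer's route is multiple node instances, but for completeness: a single process written "with this in mind" is usually structured as one worker per input stream. A language-level sketch in plain Python (no ROS; the doubling stands in for running inference, and the camera names and values are made up):

```python
import queue
import threading

def worker(name, in_q, results):
    # One worker per input stream; each consumes its own messages
    # independently, mimicking per-camera callback queues in one process.
    while True:
        msg = in_q.get()
        if msg is None:            # sentinel: stream closed
            break
        results.append((name, msg * 2))  # stand-in for inference

results = []
queues = {f"cam{i}": queue.Queue() for i in range(3)}
threads = [threading.Thread(target=worker, args=(n, q, results))
           for n, q in queues.items()]
for t in threads:
    t.start()
for n, q in queues.items():
    q.put(21)                      # one "image" per camera
    q.put(None)                    # then close the stream
for t in threads:
    t.join()
print(sorted(results))  # [('cam0', 42), ('cam1', 42), ('cam2', 42)]
```

Whether this beats separate processes depends on the GPU contention the answer mentions; for a heavy model like YOLOv3, separate namespaced nodes are often the simpler choice.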
{ "domain": "robotics.stackexchange", "id": 34686, "tags": "ros, ros-melodic, parallel" }
What is the difference between OpenCV's "cv2.filter2D" and Keras's "Conv2D" function?
Question: When I have to sharpen an image using OpenCV, I use:

# Create our sharpening kernel
kernel_sharpening = np.array([[0, -1, 0],
                              [-1, 5, -1],
                              [0, -1, 0]])
# applying the sharpening kernel to the input image & displaying it
sharpened = cv2.filter2D(image, -1, kernel_sharpening)

In the above code, sharpened is our resultant image. As you can see, I used the OpenCV function named filter2D to perform the convolution of the input image with the kernel, and as a result I got a sharpened image. Recently I went through this link regarding image super-resolution (link) and found out that Keras has something similar to filter2D, which Keras calls Conv2D. Its syntax is as follows:

dis2 = Conv2D(filters=64, kernel_size=3, strides=2, padding='same')(dis1)

My question is: what is the difference between OpenCV's filter2D and Keras's Conv2D? (I assume both perform the convolution of an image with a kernel; I may be wrong, please correct me.) Answer: From an architectural viewpoint you are right: both are 2D convolutional kernels, here of size (3, 3). But there are some major differences. While cv2.filter2D(image, -1, kernel_sharpening) directly convolves the image,

dis2 = Conv2D(filters=64, kernel_size=3, strides=2, padding='same')(dis1)

only constructs a Conv2D layer which is part of the graph (the neural network). So Keras's Conv2D is not an operation that directly convolves an image. Also, the weights are different. In the cv2 part, [0, -1, 0], [-1, 5, -1], [0, -1, 0] are your weights. The Conv2D layer uses the standard weight initializer, which is glorot uniform, so the weights would not even match. Additionally, the weights of a Conv2D layer (the Keras way) will be learned during the training stage of the neural network. However, if your neural network had only this convolution layer and ended up with the same weights as the cv2 convolution, the result would be exactly the same.
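The shared arithmetic can be demonstrated without either library (my sketch; both cv2.filter2D and a Conv2D layer compute a cross-correlation, shown here with 'valid' padding and stride 1):

```python
import numpy as np

def conv2d(img, kernel):
    # Direct cross-correlation (the arithmetic both cv2.filter2D and a
    # Conv2D layer perform), with 'valid' padding and stride 1.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

sharpen = np.array([[0, -1, 0],
                    [-1, 5, -1],
                    [0, -1, 0]], dtype=float)

img = np.arange(25, dtype=float).reshape(5, 5)
# On a linear ramp the sharpening kernel returns each centre pixel
# unchanged: 5c - (c-5) - (c+5) - (c-1) - (c+1) = c.
print(conv2d(img, sharpen))
```

The difference discussed in the answer is only in where the kernel values come from: fixed by you in cv2, learned from data in Keras.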
{ "domain": "datascience.stackexchange", "id": 6310, "tags": "machine-learning, python, deep-learning, opencv" }
Printing JUnit test results in file by changing and using the 'out' static variable of 'System' class
Question: I'm building a framework for comparing files (in my own way) using JUnit. All test cases have been packaged in a JAR which is run independently using a .bat file I wrote. I needed to output the test results to a file instead of the console. I was just using normal System.out.println() in the TestRunner class and also in the various test case classes for printing the output on the console. I found a solution that I could use in the project (Method number 3 of this article). I redirected the output stream to my output file. Following is the relevant code from the TestRunner class: public class TestRunner { public static void main(String[] args) { System.out.println("Testing Started..."); // Save the System.out instance PrintStream oldPrintStream = System.out; FileOutputStream outFile = null; try { outFile = new FileOutputStream("result.txt"); PrintStream newPrintStream = new PrintStream(outFile); System.setOut(newPrintStream); Result result = JUnitCore.runClasses(TestSuite.class); // Print the results in desired format } catch (FileNotFoundException ex) { Logger.getLogger(TestRunner.class.getName()).log(Level.SEVERE, null, ex); } finally { // Reset the old System.out instance System.setOut(oldPrintStream); System.out.println("Testing Completed! Check output folder for result."); try { outFile.close(); } catch (IOException ex) { Logger.getLogger(TestRunner.class.getName()).log(Level.SEVERE, null, ex); } } } } The code is working fine, but is it the correct way to do this? Answer: Looks good enough, but there are some improvements in the io API you're not using. 
Consider: try (OutputStream outFile = Files.newOutputStream(Paths.get("result.txt"), StandardOpenOption.WRITE, StandardOpenOption.CREATE); PrintStream newSysOut = new PrintStream(outFile)) { System.setOut(newSysOut); Result result = JUnitCore.runClasses(TestSuite.class); printResults(result); } catch (IOException ex) { // your code here } finally { System.setOut(oldPrintStream); } This should give you more fine-grained exceptions when your operations fail (not that I'm using it here). Also, you don't need to close the resources you use yourself, because of the try-with-resources block. Small additional note: your comments are extraneous for this example. I'd suggest removing them.
{ "domain": "codereview.stackexchange", "id": 20390, "tags": "java, io, junit" }
Is AlphaZero's output (action probabilities) vector suboptimal?
Question: The AlphaZero research team states A move in chess may be described in two parts: selecting the piece to move, and then selecting among the legal moves for that piece. We represent the policy π(a|s) by a 8 × 8 × 73 stack of planes encoding a probability distribution over 4,672 possible moves. Each of the 8×8 positions identifies the square from which to “pick up” a piece. I am wondering if it would be better to only have the network output moves that could exist (e.g., "a1b3" is a possible knight move, but "a1g3" could never be reached by any piece). The modified output would be much smaller and could potentially make the neural network learn faster, right? Answer: Your observation is certainly correct: assuming the chess board gets flipped to the point of view of the current player, and depending on how exactly you handle promotions, castling, and other special moves, there are about 2000 distinct chess moves. This means that about half of the policy output is never useful. However, as stated in the paper: Illegal moves are masked out by setting their probabilities to zero, and re-normalising the probabilities for remaining moves. This means those useless policy output values don't hurt, since they always get set to zero. The network doesn't even have to learn that those are illegal moves; it gets that for free. There is the small effect of having to compute some useless values, but the output head is only a tiny fraction of the total compute required by the network, so it can safely be ignored. If you're really worried about this, you can use a fully connected output layer for the policy that only encodes potentially valid moves; this is what some earlier AlphaZero versions did.
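The masking step quoted from the paper can be sketched in a few lines of NumPy. This is a toy 4-move policy vector, not the real 8×8×73 head, and the helper name is made up for illustration:

```python
import numpy as np

def mask_and_renormalize(policy, legal):
    """Zero out illegal-move probabilities and renormalize the rest.

    `policy` is a raw probability vector from the network's policy head;
    `legal` is a boolean mask of the same shape marking legal moves.
    """
    masked = np.where(legal, policy, 0.0)
    total = masked.sum()
    if total == 0.0:
        # Degenerate case: the network put no mass on any legal move;
        # fall back to a uniform distribution over legal moves.
        masked = legal.astype(float)
        total = masked.sum()
    return masked / total

policy = np.array([0.4, 0.3, 0.2, 0.1])
legal = np.array([True, False, True, False])
probs = mask_and_renormalize(policy, legal)
```

After masking, the two illegal entries carry exactly zero probability and the remaining mass is rescaled to sum to one, so the "never useful" outputs cost nothing at decision time.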
{ "domain": "ai.stackexchange", "id": 3631, "tags": "alphazero" }
Error reading from SCI port. No data. Turtlebot disconnects when set to Full Mode
Question: Hi, there, I have recently upgraded to Fuerte on Ubuntu 12.04. I tried to build SLAM map by following http://www.ros.org/wiki/turtlebot_navigation/Tutorials/Build%20a%20map%20with%20SLAM tutorial, but after I execute roslaunch turtlebot_navigation gmapping_demo.launch I receive error - "Failed to contact device with error: [Error reading from SCI port. No data.]. Please check that the Create is powered on and that the connector is plugged into the Create." turtlebot@turtlebot-1215N:~$ sudo service turtlebot start turtlebot start/running, process 4564 turtlebot@turtlebot-1215N:~$ roslaunch turtlebot_navigation gmapping_demo.launch ... logging to /home/turtlebot/.ros/log/0c768eee-c921-11e1-8666-485d60f51088/roslaunch-turtlebot-1215N-5175.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. started roslaunch server http://192.168.1.6:41858/ SUMMARY ======== PARAMETERS * /kinect_laser/max_height * /kinect_laser/min_height * /kinect_laser/output_frame_id * /kinect_laser_narrow/max_height * /kinect_laser_narrow/min_height * /kinect_laser_narrow/output_frame_id * /move_base/TrajectoryPlannerROS/acc_lim_th * /move_base/TrajectoryPlannerROS/acc_lim_x * /move_base/TrajectoryPlannerROS/acc_lim_y * /move_base/TrajectoryPlannerROS/dwa * /move_base/TrajectoryPlannerROS/goal_distance_bias * /move_base/TrajectoryPlannerROS/heading_lookahead * /move_base/TrajectoryPlannerROS/holonomic_robot * /move_base/TrajectoryPlannerROS/max_rotational_vel * /move_base/TrajectoryPlannerROS/max_vel_x * /move_base/TrajectoryPlannerROS/min_in_place_rotational_vel * /move_base/TrajectoryPlannerROS/min_vel_x * /move_base/TrajectoryPlannerROS/oscillation_reset_dist * /move_base/TrajectoryPlannerROS/path_distance_bias * /move_base/TrajectoryPlannerROS/sim_time * /move_base/TrajectoryPlannerROS/vtheta_samples * /move_base/TrajectoryPlannerROS/vx_samples * /move_base/TrajectoryPlannerROS/xy_goal_tolerance * 
/move_base/TrajectoryPlannerROS/yaw_goal_tolerance * /move_base/controller_frequency * /move_base/global_costmap/footprint * /move_base/global_costmap/footprint_padding * /move_base/global_costmap/global_frame * /move_base/global_costmap/inflation_radius * /move_base/global_costmap/observation_sources * /move_base/global_costmap/obstacle_range * /move_base/global_costmap/publish_frequency * /move_base/global_costmap/raytrace_range * /move_base/global_costmap/robot_base_frame * /move_base/global_costmap/scan/clearing * /move_base/global_costmap/scan/data_type * /move_base/global_costmap/scan/marking * /move_base/global_costmap/scan/topic * /move_base/global_costmap/static_map * /move_base/global_costmap/transform_tolerance * /move_base/global_costmap/update_frequency * /move_base/local_costmap/footprint * /move_base/local_costmap/footprint_padding * /move_base/local_costmap/global_frame * /move_base/local_costmap/height * /move_base/local_costmap/inflation_radius * /move_base/local_costmap/observation_sources * /move_base/local_costmap/obstacle_range * /move_base/local_costmap/publish_frequency * /move_base/local_costmap/raytrace_range * /move_base/local_costmap/resolution * /move_base/local_costmap/robot_base_frame * /move_base/local_costmap/rolling_window * /move_base/local_costmap/scan/clearing * /move_base/local_costmap/scan/data_type * /move_base/local_costmap/scan/marking * /move_base/local_costmap/scan/topic * /move_base/local_costmap/static_map * /move_base/local_costmap/transform_tolerance * /move_base/local_costmap/update_frequency * /move_base/local_costmap/width * /openni_launch/debayering * /openni_launch/depth_frame_id * /openni_launch/depth_mode * /openni_launch/depth_registration * /openni_launch/depth_time_offset * /openni_launch/image_mode * /openni_launch/image_time_offset * /openni_launch/rgb_frame_id * /pointcloud_throttle/max_rate * /rosdistro * /rosversion * /slam_gmapping/angularUpdate * /slam_gmapping/astep * /slam_gmapping/delta * 
/slam_gmapping/iterations * /slam_gmapping/kernelSize * /slam_gmapping/lasamplerange * /slam_gmapping/lasamplestep * /slam_gmapping/linearUpdate * /slam_gmapping/llsamplerange * /slam_gmapping/llsamplestep * /slam_gmapping/lsigma * /slam_gmapping/lskip * /slam_gmapping/lstep * /slam_gmapping/map_update_interval * /slam_gmapping/maxUrange * /slam_gmapping/odom_frame * /slam_gmapping/ogain * /slam_gmapping/particles * /slam_gmapping/resampleThreshold * /slam_gmapping/sigma * /slam_gmapping/srr * /slam_gmapping/srt * /slam_gmapping/str * /slam_gmapping/stt * /slam_gmapping/temporalUpdate * /slam_gmapping/xmax * /slam_gmapping/xmin * /slam_gmapping/ymax * /slam_gmapping/ymin NODES / kinect_breaker_enabler (turtlebot_node/kinect_breaker_enabler.py) kinect_laser (nodelet/nodelet) kinect_laser_narrow (nodelet/nodelet) move_base (move_base/move_base) openni_launch (nodelet/nodelet) openni_manager (nodelet/nodelet) pointcloud_throttle (nodelet/nodelet) slam_gmapping (gmapping/slam_gmapping) ROS_MASTER_URI=http://192.168.1.6:11311 core service [/rosout] found Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[kinect_breaker_enabler-1]: started with pid [5204] Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[openni_manager-2]: started with pid [5205] Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[openni_launch-3]: started with pid [5212] [ INFO] [1341768064.009254575]: Initializing nodelet with 4 worker threads. 
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[pointcloud_throttle-4]: started with pid [5257] [ INFO] [1341768064.225824642]: [/openni_launch] No devices connected.... waiting for devices to be connected Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[kinect_laser-5]: started with pid [5272] Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[kinect_laser_narrow-6]: started with pid [5289] Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[slam_gmapping-7]: started with pid [5306] Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[move_base-8]: started with pid [5353] [ INFO] [1341768065.255244721]: [/openni_launch] No devices connected.... waiting for devices to be connected [ INFO] [1341768065.529859405]: Subscribed to Topics: scan [ INFO] [1341768065.572297783]: Requesting the map... [ INFO] [1341768065.577921498]: Still waiting on map... [ INFO] [1341768066.276262791]: [/openni_launch] No devices connected.... waiting for devices to be connected [ INFO] [1341768066.578017762]: Still waiting on map... [kinect_breaker_enabler-1] process has finished cleanly log file: /home/turtlebot/.ros/log/0c768eee-c921-11e1-8666-485d60f51088/kinect_breaker_enabler-1*.log [ INFO] [1341768067.298705975]: [/openni_launch] No devices connected.... 
waiting for devices to be connected [ INFO] [1341768067.577962126]: Still waiting on map... [ INFO] [1341768068.322282553]: [/openni_launch] No devices connected.... waiting for devices to be connected [ INFO] [1341768068.578022351]: Still waiting on map... [ INFO] [1341768069.577965591]: Still waiting on map... [ INFO] [1341768070.577958843]: Still waiting on map... [ INFO] [1341768071.078922315]: [/openni_launch] Number devices connected: 1 [ INFO] [1341768071.079242958]: [/openni_launch] 1. device on bus 001:39 is a Xbox NUI Camera (2ae) from Microsoft (45e) with serial id 'B00367206887043B' [ WARN] [1341768071.082777081]: [/openni_launch] device_id is not set! Using first device. [ INFO] [1341768071.179295315]: [/openni_launch] Opened 'Xbox NUI Camera' on bus 1:39 with serial number 'B00367206887043B' [ INFO] [1341768071.221746341]: rgb_frame_id = 'camera_rgb_optical_frame' [ INFO] [1341768071.229172662]: depth_frame_id = 'camera_depth_optical_frame' [ INFO] [1341768071.577994695]: Still waiting on map... [ INFO] [1341768072.577971674]: Still waiting on map... -maxUrange 16 -maxUrange 9.99 -sigma 0.05 -kernelSize 1 -lstep 0.05 -lobsGain 3 -astep 0.05 -srr 0.01 -srt 0.02 -str 0.01 -stt 0.02 -linearUpdate 0.5 -angularUpdate 0.436 -resampleThreshold 0.5 -xmin -1 -xmax 1 -ymin -1 -ymax 1 -delta 0.05 -particles 80 [ INFO] [1341768073.100087569]: Initialization complete update frame 0 update ld=0 ad=0 Laser Pose= 0 0 0 m_count 0 Registering First Scan [ INFO] [1341768073.578165576]: Still waiting on map... [ INFO] [1341768074.591897075]: Received a 32 X 32 map at 0.050000 m/pix [ WARN] [1341768074.806385721]: Costmap2DROS transform timeout. Current time: 1341768074.8062, global_pose stamp: 1341768074.0065, tolerance: 0.5000 [ INFO] [1341768075.160692480]: MAP SIZE: 32, 32 [ INFO] [1341768075.181478240]: Subscribed to Topics: scan [ INFO] [1341768075.803084198]: Sim period is set to 0.20 [ WARN] [1341768076.473023023]: Costmap2DROS transform timeout. 
Current time: 1341768076.4729, global_pose stamp: 1341768074.0065, tolerance: 0.5000 [ WARN] [1341768076.872924590]: Could not get robot pose, cancelling reconfiguration [ WARN] [1341768077.473032476]: Costmap2DROS transform timeout. Current time: 1341768077.4729, global_pose stamp: 1341768074.0065, tolerance: 0.5000 [ WARN] [1341768077.874107868]: Could not get robot pose, cancelling reconfiguration [ WARN] [1341768078.473300976]: Costmap2DROS transform timeout. Current time: 1341768078.4731, global_pose stamp: 1341768074.0065, tolerance: 0.5000 [ WARN] [1341768078.973711024]: Could not get robot pose, cancelling reconfiguration [ WARN] [1341768079.473889132]: Costmap2DROS transform timeout. Current time: 1341768079.4738, global_pose stamp: 1341768074.0065, tolerance: 0.5000 [ WARN] [1341768079.973774740]: Could not get robot pose, cancelling reconfiguration [ WARN] [1341768080.474282223]: Costmap2DROS transform timeout. Current time: 1341768080.4741, global_pose stamp: 1341768074.0065, tolerance: 0.5000 [ WARN] [1341768080.973871987]: Could not get robot pose, cancelling reconfiguration [ WARN] [1341768081.474447913]: Costmap2DROS transform timeout. Current time: 1341768081.4743, global_pose stamp: 1341768074.0065, tolerance: 0.5000 [ WARN] [1341768081.973958267]: Could not get robot pose, cancelling reconfiguration [ WARN] [1341768082.574226441]: Costmap2DROS transform timeout. Current time: 1341768082.5740, global_pose stamp: 1341768074.0065, tolerance: 0.5000 [ WARN] [1341768083.073476986]: Could not get robot pose, cancelling reconfiguration [ WARN] [1341768083.574545923]: Costmap2DROS transform timeout. 
Current time: 1341768083.5744, global_pose stamp: 1341768074.0065, tolerance: 0.5000 [ERROR] [1341768083.774149730]: Extrapolation Error looking up robot pose: Unable to lookup transform, cache is empty, when looking up transform from frame [/base_link] to frame [/map] [ WARN] [1341768084.074333263]: Could not get robot pose, cancelling reconfiguration [ERROR] [1341768084.774230352]: Extrapolation Error looking up robot pose: Unable to lookup transform, cache is empty, when looking up transform from frame [/base_link] to frame [/map] [ WARN] [1341768085.174787510]: Could not get robot pose, cancelling reconfiguration [ERROR] [1341768085.806383046]: Extrapolation Error looking up robot pose: Unable to lookup transform, cache is empty, when looking up transform from frame [/base_link] to frame [/map] [ WARN] [1341768086.273697846]: Could not get robot pose, cancelling reconfiguration [ERROR] [1341768086.806457177]: Extrapolation Error looking up robot pose: Unable to lookup transform, cache is empty, when looking up transform from frame [/base_link] to frame [/map] [ WARN] [1341768087.273752552]: Could not get robot pose, cancelling reconfiguration [ERROR] [1341768087.819976917]: Extrapolation Error looking up robot pose: Lookup would require extrapolation into the past. Requested time 1341768074.006515026 but the earliest data is at time 1341768077.837952710, when looking up transform from frame [/base_link] to frame [/odom] [ WARN] [1341768088.274151997]: Could not get robot pose, cancelling reconfiguration ... Basically, after I run roslaunch turtlebot_navigation gmapping_demo.launch connection with iCreate and power board with gyro brakes and I can't receive current robot position. Turtlebot Dashboard shows that mode is set as full; Kinect powered; and error "msg: Failed to contact device with error: [Error reading from SCI port. No data.]. Please check that the Create is powered on and that the connector is plugged into the Create." 
If I turn the iCreate on by pressing its power button, I get the Turtlebot back with "Mode: safe". As soon as I switch to Full mode, the iCreate turns off within 15 seconds. The battery is fully charged. Not sure what is going on and how I can fix it. Please help. Originally posted by Roman Burdakov on ROS Answers with karma: 131 on 2012-07-08 Post score: 1 Answer: If you have a DMM (multimeter), please measure the voltage across the battery to make sure that it is fully charged. Originally posted by mmwise with karma: 8372 on 2012-07-13 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by tfoote on 2014-05-05: Similar: http://answers.ros.org/question/62893/turtlebot-bringup-error-robot-not-connected/
{ "domain": "robotics.stackexchange", "id": 10094, "tags": "ros, kinect, turtlebot, gmapping-demo, icreate" }
Concentration dependence of electron transfer
Question: Regarding a single electron transfer: $$O + e^- \rightarrow R$$ we find the current dependent on the overpotential as specified by the Butler-Volmer equation: $$i = i_0 \left(\exp\left(\frac{\eta}{b}\right)-\exp\left(-\frac{\eta}{c}\right)\right)$$ with $\eta = E^{applied}-E^{eq}$ the overpotential, $i_0$ the exchange current density, and $b$ and $c$ the Tafel slopes of the anodic and cathodic reactions. $E^{eq}$ is defined by the Nernst equation: $$E^{eq}=E^0-\frac{RT}{F}\ln Q$$ We thus find that the overpotential at the electrode surface depends on the concentration at the surface. The exchange current density itself, however, is said to be: $$i_0 = F k_0 C_O^{\alpha} C_R^{\beta}$$ In some books, pages online, and review articles I've seen these thrown around so much that it is no longer clear how the current density is affected by the concentration. If you rework the first equation with the Nernst equation, one can get the $Q$ out of the exponential, but is this already taken into account in the exchange current density factor or not? Is the current density "doubly" dependent on the concentration, in the sense that the overpotential depends on the concentration and the exchange current density does as well? Answer: Yes, the current density is "doubly" dependent on the concentration, in the sense you have written: $ C_O(0,t) $ and $ C_R(0,t) $ appear in $E^{eq}$ within the overpotential, defined as $ \eta = E - E^{eq} $, where $ E $ is the potential difference between the electrode surface and the bulk solution, i.e., $ E = \phi_M - \phi_S $. $ C_O(0,t) $ and $ C_R(0,t) $ also appear in the exchange current density, which formally has the following form $$ j_0 = F k^\Theta \bigg(\dfrac{C_O(0,t)}{C_O^\Theta}\bigg)^{1 - \alpha} \bigg(\dfrac{C_R(0,t)}{C_R^\Theta}\bigg)^\alpha $$ where $ \alpha $ is the symmetry factor, and $C_O^\Theta$ with $C_R^\Theta$ are the reference concentrations associated with the rate constant $ k^\Theta$. 
You can see this "double" dependence in textbooks that derive B-V kinetics by using the Transition State Theory, or Eyring Theory, applied to charge transfer reactions. Disclaimer: If you always talk about equilibrium, the concentrations at the electrode surface and the bulk are the same, so there is only one concentration to talk about. However, the kinetics are evaluated at the electrode surface, so it is a good practice to put $ (0,t) $, when it applies. When you write "$C_O$" or "$C_R$" in electrochemistry, my first question is, where is the $O$ and $R$ you are talking to me about? (my explanation does not take into account double-layer effects, where the evaluation is more subtle)
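The "double" concentration dependence is easy to see numerically. The sketch below (plain Python; the kinetic parameter values are illustrative only, and the symmetric-exponential form with transfer coefficient α is assumed) evaluates both the Nernstian equilibrium potential and the exchange current density from the same surface concentrations:

```python
import math

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)
T = 298.15    # temperature, K

def bv_current(E, c_O, c_R, k0=1e-6, alpha=0.5, E0=0.0):
    """Butler-Volmer current density with concentration-dependent i0 and Eeq.

    c_O, c_R are surface concentrations relative to a 1 M reference;
    k0, alpha, E0 are hypothetical kinetic parameters, not measured values.
    """
    f = F / (R * T)
    # First dependence: the Nernst equation shifts the equilibrium
    # potential (and hence the overpotential) with the concentration ratio.
    E_eq = E0 - (1.0 / f) * math.log(c_R / c_O)
    eta = E - E_eq
    # Second dependence: the exchange current density itself carries the
    # same surface concentrations.
    i0 = F * k0 * c_O ** (1 - alpha) * c_R ** alpha
    return i0 * (math.exp(alpha * f * eta) - math.exp(-(1 - alpha) * f * eta))
```

At E = Eeq the anodic and cathodic exponentials cancel and the net current is zero, while scaling both concentrations together leaves the overpotential unchanged but scales the current through i0 — the two routes by which concentration enters.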
{ "domain": "chemistry.stackexchange", "id": 17314, "tags": "physical-chemistry, electrochemistry, kinetics" }
AdventOfCode 2019 day 6 in Haskell
Question: I am new to Haskell and currently trying to port my solutions for the 2019 installment of the coding challenge AdventOfCode to Haskell. So, I would very much appreciate any suggestions how to make the code more readable and, in particular, more idiomatic. This post shows my solution of day 6 part 2, but also includes the function totalDecendantCount used to solve part 1. If you have not solved these problems and still intend to do so, stop reading immediately. For both problems, you get a file with an orbit specification on each line of the form A)B, which tells you that B orbits A. This describes a tree of bodies orbiting each other with root COM. In part 1, you have to compute a check sum. More precisely, you have to compute the sum of the number of direct and indirect orbits of each body, which is the same as the sum of the number of descendants of each body in the tree. In part 2, which you cannot see if you have not finished part 1, you have to compute the minimal number of transfers between orbits from you (YOU) to Santa (SAN). I have kept the entire solution for each part of each day in a single module with a single exported function that prints the solution. For day 6 part 2 it starts as follows. module AdventOfCode20191206_2 ( distanceToSanta ) where import System.IO import Data.List.Split import Data.List import Data.Maybe import Data.Hashable import qualified Data.HashMap.Strict as Map distanceToSanta :: IO () distanceToSanta = do inputText <- readFile "Advent20191206_1_input.txt" let orbitList = (map orbit . lines) inputText let orbits = orbitMap $ catMaybes orbitList let pathToSanta = fromJust $ path orbits "COM" "YOU" "SAN" let requiredTransfers = length pathToSanta - 3 print requiredTransfers We subtract 3 from the length of the path because it consists of the bodies on the path and you only have to transfer from the body you already orbit to the body Santa orbits. 
To store the tree, I use a HashMap.Strict and introduce the following type aliases and helper function to make things a bit more descriptive. type OrbitSpecification = (String,String) type ChildrenMap a = Map.HashMap a [a] children :: (Eq a, Hashable a) => ChildrenMap a -> a -> [a] children childrenMap = fromMaybe [] . flip Map.lookup childrenMap Next follow the functions I use to read in the tree. orbit :: String -> Maybe OrbitSpecification orbit str = case orbit_specification of [x,y] -> Just (x,y) _ -> Nothing where orbit_specification = splitOn ")" str orbitMap :: [OrbitSpecification] -> ChildrenMap String orbitMap = Map.fromListWith (++) . map (applyToSecondElement toSingleElementList) applyToSecondElement :: (b -> c) -> (a,b) -> (a,c) applyToSecondElement f (x,y) = (x, f y) toSingleElementList :: a -> [a] toSingleElementList x = [x] To solve part 1, I introduce two general helper functions to generate aggregates over children or over all descendants. childrenAggregate :: (Eq a, Hashable a) => ([a] -> b) -> ChildrenMap a -> a -> b childrenAggregate aggregatorFnc childrenMap = aggregatorFnc . children childrenMap decendantAggregate :: (Eq a, Hashable a) => (b -> b -> b) -> (ChildrenMap a -> a -> b) -> ChildrenMap a -> a -> b decendantAggregate resultFoldFnc nodeFnc childrenMap node = foldl' resultFoldFnc nodeValue childResults where nodeValue = nodeFnc childrenMap node childFnc = decendantAggregate resultFoldFnc nodeFnc childrenMap childResults = map childFnc $ children childrenMap node The decendantAggregate function recursively applies a function nodeFnc to a node node and all its descendants and folds the results using some function resultFoldFnc. This allows us to define the necessary functions to count the total number of descendants of a node as follows. 
childrenCount :: (Eq a, Hashable a) => ChildrenMap a -> a -> Int childrenCount = childrenAggregate length decendantCount :: (Eq a, Hashable a) => ChildrenMap a -> a -> Int decendantCount = decendantAggregate (+) childrenCount totalDecendantCount :: (Eq a, Hashable a) => ChildrenMap a -> a -> Int totalDecendantCount = decendantAggregate (+) decendantCount For part 2, we use that between two points in a tree, there is exactly one path (without repetition). First, we define a function to get a path from the root of a (sub)tree to the destination, provided it exists. pathFromRoot :: (Eq a, Hashable a) => ChildrenMap a -> a -> a -> Maybe [a] pathFromRoot childrenMap root destination | destination == root = Just [root] | null childPaths = Nothing | otherwise = Just $ root:(head childPaths) where rootChildren = children childrenMap root pathFromNewRoot newRoot = pathFromRoot childrenMap newRoot destination childPaths = mapMaybe pathFromNewRoot rootChildren This function only finds paths down from the root of a (sub)tree. General paths come in three variations: path from the root of a (sub)tree, the inverse of such a path or the concatenation of a path to the root of a subtree and one from that root to the end point. Thus, we get the path as follows. 
path :: (Eq a, Hashable a) => ChildrenMap a -> a -> a -> a -> Maybe [a] path childrenMap root start end = let maybeStartEndPath = pathFromRoot childrenMap start end in if isJust maybeStartEndPath then maybeStartEndPath else let maybeEndStartPath = pathFromRoot childrenMap end start in case maybeEndStartPath of Just endStartPath -> Just $ reverse endStartPath Nothing -> let rootPathToStart = pathFromRoot childrenMap root start rootPathToEnd = pathFromRoot childrenMap root end in if isNothing rootPathToStart || isNothing rootPathToEnd then Nothing else connectedPath (fromJust rootPathToStart) (fromJust rootPathToEnd) To connect the paths in the last alternative, we follow both paths from the root to the last common point and then build the result by concatenating the reverse of the path to the start with the path to the destination. connectedPath :: Eq a => [a] -> [a] -> Maybe [a] connectedPath rootToStart rootToEnd = case pathPieces of Nothing -> Nothing Just (middle, middleToStart, middleToEnd) -> Just $ (reverse middleToStart) ++ [middle] ++ middleToEnd where pathPieces = distinctPathPieces rootToStart rootToEnd distinctPathPieces :: Eq a => [a] -> [a] -> Maybe (a, [a], [a]) distinctPathPieces [x] [y] = if x == y then Just (x, [], []) else Nothing distinctPathPieces (x1:y1:z1) (x2:y2:z2) | x1 /= x2 = Nothing | y1 /= y2 = Just (x1, y1:z1, y2:z2) | otherwise = distinctPathPieces (y1:z1) (y2:z2) distinctPathPieces _ _ = Nothing This solution heavily depends on the input describing a tree. In case a DAG is provided, a result will be produced that is not necessarily correct. For totalDecendantCount, nodes after joining branches will be counted multiple times, and path will find a path, but not necessarily the shortest one. If there are cycles in the graph provided, the recursions in the functions will not terminate. 
Answer: Simplification In path, notice how the code gets more nested as you try each possible path (either from start to end, or end to start, or from end to root and root to start). You can use the Alternative instance for Maybe to simplify this code: let maybeStartEndPath = pathFromRoot childrenMap start end maybeEndStartPath = pathFromRoot childrenMap end start maybeRootPath = [...] -- see below in maybeStartEndPath <|> fmap reverse maybeEndStartPath <|> maybeRootPath This code will try maybeStartEndPath first. If it returns Nothing, it will move on to the next option and so on. For your final case (which I've named maybeRootPath), you do the following check: if isNothing rootPathToStart || isNothing rootPathToEnd then Nothing else connectedPath (fromJust rootPathToStart) (fromJust rootPathToEnd) This is more concisely done with liftA2 from Control.Applicative. liftA2 lifts a binary function into an applicative context: λ :set -XTypeApplications λ :t liftA2 @Maybe liftA2 @Maybe :: (a -> b -> c) -> (Maybe a -> Maybe b -> Maybe c) Then, if either argument is Nothing, the function will return Nothing without having to pattern match. So we can fill in maybeRootPath above with maybeRootPath = join $ liftA2 connectedPath rootPathToStart rootPathToEnd where rootPathToStart = pathFromRoot childrenMap root start rootPathToEnd = pathFromRoot childrenMap root end The join is needed because connectedPath returns a Maybe already, and we've lifted it into Maybe, which leaves us with a return value of Maybe (Maybe [a]). join flattens nested monads, bringing us back to Maybe [a]. Minor points Your function applyToSecondElement is second from Control.Arrow λ :t second @(->) second @(->) :: (b -> c) -> (d, b) -> (d, c) toSingleElementList can also be written as (:[]) or return So orbitMap can be written orbitMap = Map.fromListWith (++) . 
map (second (:[]))

To your credit, your naming made both of these functions clear anyway, but it's more recognizable if you use functions that already exist.

Algorithm

I was going to suggest keeping each edge bidirectional instead of one-directional, so that you can directly check for a path from start to end instead of checking 3 cases. After reviewing the code, I think your approach is better from a functional perspective because it eliminates the need for you to check for cycles and keep a set as you search the graph. Good work.

Revised Code

import Control.Applicative
import Control.Monad
import Control.Arrow
import System.IO
import Data.List.Split
import Data.List
import Data.Maybe
import Data.Hashable
import qualified Data.HashMap.Strict as Map

main :: IO ()
main = do
    inputText <- readFile "Advent20191206_1_input.txt"
    let orbitList = catMaybes $ (map orbit . lines) inputText
    let orbits = orbitMap orbitList
    let pathToSanta = fromJust $ path orbits "COM" "YOU" "SAN"
    let requiredTransfers = length pathToSanta - 3
    print requiredTransfers

type OrbitSpecification = (String,String)
type ChildrenMap a = Map.HashMap a [a]

children :: (Eq a, Hashable a) => ChildrenMap a -> a -> [a]
children childrenMap = fromMaybe [] . flip Map.lookup childrenMap

orbit :: String -> Maybe OrbitSpecification
orbit str =
    case orbit_specification of
        [x,y] -> Just (x, y)
        _     -> Nothing
    where orbit_specification = splitOn ")" str

orbitMap :: [OrbitSpecification] -> ChildrenMap String
orbitMap = Map.fromListWith (++) . map (second (:[]))

childrenAggregate :: (Eq a, Hashable a) => ([a] -> b) -> ChildrenMap a -> a -> b
childrenAggregate aggregatorFnc childrenMap = aggregatorFnc . children childrenMap

decendantAggregate :: (Eq a, Hashable a) => (b -> b -> b) -> (ChildrenMap a -> a -> b) -> ChildrenMap a -> a -> b
decendantAggregate resultFoldFnc nodeFnc childrenMap node =
    foldl' resultFoldFnc nodeValue childResults
    where nodeValue = nodeFnc childrenMap node
          childFnc = decendantAggregate resultFoldFnc nodeFnc childrenMap
          childResults = map childFnc $ children childrenMap node

childrenCount :: (Eq a, Hashable a) => ChildrenMap a -> a -> Int
childrenCount = childrenAggregate length

decendantCount :: (Eq a, Hashable a) => ChildrenMap a -> a -> Int
decendantCount = decendantAggregate (+) childrenCount

totalDecendantCount :: (Eq a, Hashable a) => ChildrenMap a -> a -> Int
totalDecendantCount = decendantAggregate (+) decendantCount

pathFromRoot :: (Eq a, Hashable a) => ChildrenMap a -> a -> a -> Maybe [a]
pathFromRoot childrenMap root destination
    | destination == root = Just [root]
    | null childPaths = Nothing
    | otherwise = Just $ root:(head childPaths)
    where rootChildren = children childrenMap root
          pathFromNewRoot newRoot = pathFromRoot childrenMap newRoot destination
          childPaths = mapMaybe pathFromNewRoot rootChildren

path :: (Eq a, Hashable a) => ChildrenMap a -> a -> a -> a -> Maybe [a]
path childrenMap root start end =
    let maybeStartEndPath = pathFromRoot childrenMap start end
        maybeEndStartPath = pathFromRoot childrenMap end start
        maybeRootPath = join $ liftA2 connectedPath rootPathToStart rootPathToEnd
            where rootPathToStart = pathFromRoot childrenMap root start
                  rootPathToEnd = pathFromRoot childrenMap root end
    in maybeStartEndPath <|> fmap reverse maybeEndStartPath <|> maybeRootPath

connectedPath :: Eq a => [a] -> [a] -> Maybe [a]
connectedPath rootToStart rootToEnd =
    case pathPieces of
        Nothing -> Nothing
        Just (middle, middleToStart, middleToEnd) ->
            Just $ (reverse middleToStart) ++ [middle] ++ middleToEnd
    where pathPieces = distinctPathPieces rootToStart rootToEnd

distinctPathPieces :: Eq a => [a] -> [a] -> Maybe (a, [a], [a])
distinctPathPieces [x] [y] = if x == y then Just (x, [], []) else Nothing
distinctPathPieces (x1:y1:z1) (x2:y2:z2)
    | x1 /= x2 = Nothing
    | y1 /= y2 = Just (x1, y1:z1, y2:z2)
    | otherwise = distinctPathPieces (y1:z1) (y2:z2)
distinctPathPieces _ _ = Nothing
{ "domain": "codereview.stackexchange", "id": 37107, "tags": "haskell" }
Why would a semiconductor with a fixed bandgap have an absorption curve instead of an absorption spike?
Question: Wouldn't the semiconductor only be able to accept photons at one wavelength? Answer: As far as energy is concerned, any valence electron can move anywhere in the conduction band. These bands are quite broad, and so there is a broad range of acceptable transition energies.
{ "domain": "physics.stackexchange", "id": 75526, "tags": "photons, semiconductor-physics" }
Confusion understanding causality?
Question: I already know the simple definition that a causal system is one that does not depend on future values of the input, but today I was confused as I came across a new definition of causality after reading "Signal Processing First", as shown in the attached snapshots. I have attached two page snapshots and I have drawn a thin red line to the left side of the common paragraph, the paragraph that occurs in both snapshots. As shown in the common paragraph, in the second line, "cause does not precede corresponding effect". This statement is about a causal filter, but what does this statement mean in simple words? Both snapshots contain a table; the upper snapshot contains the table of the non-causal filter, and the bottom snapshot contains the table of the causal filter. We can see from the table of the causal filter that both input and output start at the same value of time (n=0), while the table of the non-causal filter shows that the output starts at an earlier value of time (n=-2) while the input starts at a later value of time (n=0), thus indicating that the output (effect) precedes the input (cause) in the case of the non-causal filter. Answer: If you break down the sentence, it is saying that a causal system is one where the input does not precede (come before in time) the output. That is just wrong. The definition of a causal system, using the impulse response, is that: $h[n]=0, \text{ for all } n<0$. This says that when the input is an impulse at $n=0$, the output should be zero up to and including time index $n=-1$. Once $n=0$ happens, the output can go non-zero. Put into words, this says that the output does not precede the input. This property of causal systems can be remembered by thinking $\text{causal} \approx \text{cause and effect}$. I think you'd be surprised how many typos and errors there are in textbooks and even some research articles. You can see how swapping the words input and output would be a likely mistake when writing an entire book.
Textbooks even anticipate this and often will release something called an errata which is a document laying out the errors and fixes that have been brought to their attention.
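To make the $h[n]=0$ for $n<0$ condition concrete, here is a small numeric illustration (my own example, not from the book):

```python
# An LTI filter is causal iff its impulse response h[n] is zero for all
# n < 0. Here h is a list of samples and `origin` is the index within
# the list where n = 0 sits.

def is_causal(h, origin):
    """True iff every sample of h before the n = 0 index is zero."""
    return all(v == 0 for v in h[:origin])

# Causal 3-point averager, y[n] = (x[n] + x[n-1] + x[n-2]) / 3:
# h = [1/3, 1/3, 1/3] starting at n = 0.
print(is_causal([1/3, 1/3, 1/3], 0))   # True

# Centered averager, y[n] = (x[n+1] + x[n] + x[n-1]) / 3: the first tap
# sits at n = -1, so the output uses a future input and is non-causal.
print(is_causal([1/3, 1/3, 1/3], 1))   # False
```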
{ "domain": "dsp.stackexchange", "id": 8615, "tags": "terminology, causality" }
Transforming to non-inertial frames in Hamiltonian Mechanics
Question: I have a translating potential $V(q,t) = V(q-x(t))$ (i.e. a potential which is following some trajectory $x(t)$) that I can write down the Lagrangian for, $\mathcal{L}=T_q-U_q=\frac{1}{2}m\dot{q}^2-V(q-x(t))$. I now want to transform to the coordinate $Q=q-x(t)$. I can arrive at the appropriate equations of motion for this system as derived in this question, which shows that there arises a pseudo-force term $mQ\ddot{x}(t)$. In the Lagrangian formalism, I get where the pseudo-force comes from. My issue is that I would now like to transform from a rest frame to a translating frame which follows $x(t)$, and write down the Hamiltonian. This should be identical to a particle in the static potential with some time-dependent force acting on it (where the time-dependent force here depends on $\ddot{x}(t)$), a result I can arrive at if I work in a Newtonian frame, write down the equation of motion, and regroup things into a new potential with a time-dependent homogeneous force. However, for completeness' sake I want to be able to go from the Lagrangian formalism to the Hamiltonian formalism, which is how I feel this should actually be done. If I carry out the analysis as in the question I linked earlier and define a new Lagrangian $L=\frac{1}{2}m\dot{Q}^2-mQ\ddot{x}(t)-V(Q)$, then this has the correct form to be transformed into a Hamiltonian in $Q,P$ coordinates as I would like (and is what I find working in a Newtonian formalism and then using $H=P\dot{Q}-L$). However $L = T_Q-U_Q\neq \frac{1}{2}m(\dot{Q}+\dot{x}(t))^2-V(Q) = \frac{1}{2}m\left(\dot{Q}^2+2\dot{Q}\dot{x}(t)+\dot{x}(t)^2\right)- V(Q)$, which only has first-order derivatives on $x(t)$.
I have also tried to use the Hamiltonian EOM, since there is $\frac{\partial\mathcal{H}}{\partial t}= - \frac{\partial\mathcal{L}}{\partial t}$ and the derivatives eliminate everything but a $\dot{Q}\dot{x}(t)$ term, which when I integrate by parts to get $\mathcal{H}(t)$ gives me $Q\ddot{x}(t)+f(Q,P)$ after discarding an integral that should be a constant, on the grounds that it appears in the action integral of the Lagrangian, which is stationary and therefore constant. However, I am slightly reluctant to do this, as it seems like it may not be entirely rigorous and I don't want to just cherry-pick what appears to give the correct answer. I know that there is a justification for either re-defining the Lagrangian and more rigorously transforming from $\mathcal{L}$ to $L$, or a subtlety in the Legendre transformation that actually derives the Hamiltonian that I am missing. I believe this arises from the fact that I am working in a non-inertial frame, as otherwise $\dot{x}(t)=const$ and $\dot{Q}=\dot{q}$. Answer: Focusing on the theory of your specific case, and avoiding the more general discussions, here is how one could derive the Hamiltonian after a change of (curvilinear) coordinates. The beauty of Lagrangian mechanics is that it is covariant with respect to any change of coordinates, inertial and non-inertial alike. This fact comes from the principle of least action formulation. If $L$ is the Lagrangian of a system (the Lagrangian doesn't have to be unique; as long as the critical-value equation is the same for two different Lagrangians, they describe the same dynamics), the action functional associated with $L$ is $$S[q] = \int_{t_1}^{t_2} L\big(q(t), \dot{q}(t), t\big) dt$$ for curves $q(t)$ defined for $t\in [t_1,t_2]$ such that $q(t_1) = q_1$ and $q(t_2) = q_2$ are two fixed points.
Then for an arbitrary one-parameter family of curves $q(t,s)$ (the so-called variation of the curves) fixed at $q_1$ and $q_2$ we get $$S[q](s) = \int_{t_1}^{t_2} L\big(q(t,s), \partial_t {q}(t,s), t\big) dt$$ so the critical curves, which are the trajectories of the dynamics, should satisfy the "zero functional gradient" condition, also known as the principle of least action $$\delta S[q] = \frac{\partial}{\partial s} \, S[q](s)\,{\Big|_{s=0}} = 0$$ which is equivalent to the Euler-Lagrange equations $$\frac{d}{dt}\, \left(\frac{\partial L}{\partial \dot{q}}\big(q,\dot{q},t\big) \right)= \frac{\partial L}{\partial {q}}\big(q,\dot{q},t\big) $$ Therefore, if you change the coordinates $q,t$ to $Q,\tau$ in the integral one-form ${L}\big(q,\dot{q},t\big)dt$ to obtain a new one-form $\tilde{L}\big(Q,\dot{Q},\tau\big)d\tau$, where $\dot{Q} = \frac{d Q}{d\tau}$, the integral is unchanged, and thus $ S[q] = S[Q]$, so $\delta S[q] = \delta S[Q] = 0$, which means that the equations $$\frac{d}{dt}\, \left(\frac{\partial L}{\partial \dot{q}}\big(q,\dot{q},t\big) \right)= \frac{\partial L}{\partial {q}}\big(q,\dot{q},t\big) \,\,\, \text{ and } \,\,\, \frac{d}{d\tau}\, \left(\frac{\partial \tilde{L}}{\partial \dot{Q}}\big(Q,\dot{Q},\tau\big) \right)= \frac{\partial \tilde{L}}{\partial {Q}}\big(Q,\dot{Q},\tau\big) $$ describe the same solutions but in different coordinates and possibly parametrized differently. In your case however, $\tau = t$, so it is enough to change the variables from $q$ to $Q$, while keeping the time $t$ parametrization the same, of the Lagrange function ${L}\big(q,\dot{q},t\big)$ to obtain the Lagrange function $\tilde{L}\big(Q,\dot{Q},t\big)$ in the new coordinates.
In your case the change of variables is $Q = f(q,t)$, so $$\dot{Q} = D_q f(q,t) \dot{q} + \partial_t f(q,t)$$ thus $${L}\big(q,\dot{q},t\big) = \tilde{L}\Big(f(q,t),\, D_q f(q,t) \dot{q} + \partial_t f(q,t), \, t\Big)$$ The Euler-Lagrange equations with $L$ turn into the equations with $\tilde{L}$ and you are done (of course, you are allowed to manipulate the new Lagrangian $\tilde{L}$ by integrating by parts in the action $S[Q]$, if possible, to get an equivalent Lagrangian, but that is not necessary). Now, the Hamiltonian picture. Recall the duality between Lagrangians and Hamiltonians: \begin{align*} {L}\big(q,\dot{q},t\big) &= p\cdot\dot{q} - H\big(q,p ,t\big)\\ \tilde{L}\big(Q,\dot{Q},t\big) &= P\cdot\dot{Q} - \tilde{H}\big(Q,P,t\big) \end{align*} Since $${L}\big(q,\dot{q},t\big) = \tilde{L}\big(Q,\dot{Q},t\big) = \tilde{L}\Big(f(q,t),\, D_q f(q,t)\dot{q} + \partial_t f(q,t), \, t\Big) $$ we get that $$ p\cdot\dot{q} - H\big(q,p ,t\big) = P\cdot\dot{Q} - \tilde{H}\big(Q,P,t\big)$$ Moreover, for the conjugate momenta we have \begin{align} p &= \frac{\partial L}{\partial \dot{q}}\big(q,\dot{q},t\big)\\ P&=\frac{\partial \tilde{L}}{\partial \dot{Q}}\big(Q,\dot{Q},t\big) \end{align} so for $p$ we have $$p =\frac{\partial}{\partial \dot{q}} L\big(q,\dot{q},t\big) = \frac{\partial}{\partial \dot{q}} \tilde{L}\Big(f(q,t),\, D_q f(q,t)\dot{q} + \partial_t f(q,t), \, t\Big) = \Big(D_qf(q,t)\Big)^*\frac{\partial \tilde{L}}{\partial \dot{Q}} = \Big(D_qf(q,t)\Big)^* P$$ where the $*$ superscript means the transpose of the linear transformation $D_qf(q,t)$, so $$P = \Big(D_qf(q,t)^*\Big)^{-1} p$$ Thus \begin{align} p\cdot\dot{q} - H\big(q,p ,t\big) &= P\cdot\dot{Q} - \tilde{H}\big(Q,P,t\big)\\ &= \Big( \, \Big(D_qf(q,t)^*\Big)^{-1} p \Big) \cdot\Big( D_q f(q,t)\dot{q} + \partial_t f(q,t)\Big) - \tilde{H}\Big(f(q,t),P,t\Big)\\ &= p \cdot\Big( \big(D_qf(q,t)\big)^{-1} \big(\, D_q f(q,t)\dot{q} + \partial_t f(q,t) \, \big)\Big) - \tilde{H}\Big(f(q,t),P,t\Big)\\ &= p
\cdot\Big(\dot{q} +\big(D_qf(q,t)\big)^{-1} \partial_t f(q,t)\Big) - \tilde{H}\Big(f(q,t),P,t\Big)\\ &= p \cdot \dot{q} + p \cdot \Big(\big(D_qf(q,t)\big)^{-1} \partial_t f(q,t)\Big) - \tilde{H}\Big(f(q,t),P,t\Big)\\ &= p \cdot \dot{q} - \Big[ \, \tilde{H}\Big(f(q,t),P,t\Big) - p \cdot \Big(\big(D_qf(q,t)\big)^{-1} \partial_t f(q,t)\Big) \, \Big] \end{align} which after cancelling the common terms $p\cdot \dot{q}$ on both sides of the equation yields $$ H\big(q,p ,t\big) = \tilde{H}\Big(f(q,t),P,t\Big) - p \cdot \Big(\big(D_qf(q,t)\big)^{-1} \partial_t f(q,t)\Big)$$ $$\tilde{H}\Big(f(q,t),P,t\Big) = H\big(q,p ,t\big) + p \cdot \Big(\big(D_qf(q,t)\big)^{-1} \partial_t f(q,t)\Big)$$ $$\tilde{H}\Big(f(q,t), \big(D_qf(q,t)^*\big)^{-1} p, t\Big) = H\big(q,p ,t\big) + p \cdot \Big(\big(D_qf(q,t)\big)^{-1} \partial_t f(q,t)\Big)$$ $$\tilde{H}\Big(f(q,t), P, t\Big) = H\big(q,p ,t\big) + \Big( \big(D_qf(q,t)^*\big)^{-1} p \Big) \cdot \Big(\partial_t f(q,t)\Big)$$ $$\tilde{H}\Big(f(q,t), P, t\Big) = H\big(q,p ,t\big) + P \cdot \Big(\partial_t f(q,t)\Big)$$ This is where the link between the two Hamiltonians comes from. One can phrase it in terms of generating functions of canonical transformations. Let $G(q,P,t) = P \cdot f(q,t)$.
Then $$Q = \frac{\partial G}{\partial P}\big(q, P, t\big) = f(q,t)\, ,\,\,\,\,\,\, p = \frac{\partial G}{\partial q}\big(q, P, t\big)$$ $$\frac{\partial G}{\partial t}\big(q,P,t\big) = \frac{\partial}{\partial t} \big( P \cdot f(q,t) \big) = P \cdot \partial_t f(q,t)$$ Thus the identity between the two Hamiltonians becomes $$\tilde{H}\Big(f(q,t), P, t\Big) = H\Big(q, \frac{\partial G}{\partial q} ,t\Big) + P \cdot \Big(\partial_t f(q,t)\Big)$$ $$\tilde{H}\left(\frac{\partial G}{\partial P}, P, t\right) = H\left(q, \frac{\partial G}{\partial q} ,t\right) + \frac{\partial G}{\partial t}$$ In your case $Q = f(q,t) = q - x(t)$ so $$G\big(q, P, t\big) = P \cdot \big(q - x(t)\big) = P \cdot q - P \cdot x(t)$$ and thus $$P = p\, , \qquad Q = q - x(t)$$ Let me put $m=1$ for simplicity. The original Hamiltonian is $$H = \frac{1}{2} p^2 + V\big(q-x(t)\big)$$ and $$\frac{\partial G}{\partial t} = \frac{\partial }{\partial t} \, P \cdot \big(q - x(t)\big) = - P \cdot \dot{x}(t)$$ Thus $$\tilde{H} = \frac{1}{2} P^2 + V\big(Q\big) - P \cdot \dot{x}(t)$$ which yields the Hamiltonian equations \begin{align*} \dot{Q} &= P - \dot{x}(t)\\ \dot{P} & = - \nabla\, V(Q) \end{align*} so $$\ddot{Q} = \dot{P} - \ddot{x}(t) = - \nabla\, V(Q) - \ddot{x}(t)$$ $$\ddot{Q} + \ddot{x}(t) = - \nabla\, V(Q) $$ which are the Euler-Lagrange equations of the Lagrangian $$\tilde{L} = \frac{1}{2}\big(\dot{Q} + \dot{x}(t)\big)^2 - V(Q)$$
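As a quick symbolic sanity check (my own addition, using sympy, not part of the original answer), one can verify that the transformed Hamiltonian $\tilde{H} = \frac{1}{2}P^2 + V(Q) - P\,\dot{x}(t)$ from the answer does reproduce $\ddot{Q} + \ddot{x} = -\nabla V(Q)$, with $m=1$:

```python
# Symbolic verification that H~ = P^2/2 + V(Q) - P*xdot(t) yields
# Qdd + xdd = -V'(Q) via Hamilton's equations.

import sympy as sp

t = sp.symbols('t')
Q = sp.Function('Q')(t)
P = sp.Function('P')(t)
x = sp.Function('x')(t)
V = sp.Function('V')

H = P**2 / 2 + V(Q) - P * sp.diff(x, t)

# Hamilton's equations: Qdot = dH/dP, Pdot = -dH/dQ
Qdot = sp.diff(H, P)    # P - xdot
Pdot = -sp.diff(H, Q)   # -V'(Q)

# Differentiate Qdot in time, substitute Pdot, and compare with the EOM:
Qdd = sp.diff(Qdot, t).subs(sp.Derivative(P, t), Pdot)
residual = sp.simplify(Qdd + sp.diff(V(Q), Q) + sp.diff(x, t, 2))
assert residual == 0
print("EOM check passed")
```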
{ "domain": "physics.stackexchange", "id": 40035, "tags": "classical-mechanics, lagrangian-formalism, hamiltonian-formalism" }
Why isn't the universe uniform?
Question: I always have a question in my mind when I think about the big bang. It is that if the universe has expanded from a tiny singularity by an explosion to the universe of today, why didn't it expand into a uniform sphere with everything distributed uniformly? There wasn't any force present already before the starting of the universe. The galaxies and different things don't seem to be uniform. And also, where does this randomness of every planet and every star being different from the others come from? Could anyone help me in understanding this? Answer: The comment by AFT above, referring to a duplicate, is correct as regards Why is the universe not perfectly uniform, but I thought including the image above might add to John's answer. This is a representation of the Cosmic Microwave Background distribution, not at the time just after the Big Bang, but about 400,000 years later. The important part of this picture is that it is not uniform: you can clearly see differences in color, representative of differences in temperature. Please read the answer by anna v. The link I provide above gives more details, and I don't want to repeat the duplicate, but I wanted to illustrate that even at this early stage of the universe, it was not homogeneous; reading the other answer and the Wikipedia article should list some of the reasons we believe this occurred.
{ "domain": "physics.stackexchange", "id": 38470, "tags": "cosmology, space-expansion, big-bang, randomness" }
How to make minimax optimal?
Question: By optimal I mean that:

If max has a winning strategy then minimax will return the strategy for max with the fewest number of moves to win.
If min has a winning strategy then minimax will return the strategy for max with the most number of moves to lose.
If neither has a winning strategy then minimax will return the strategy for max with the most number of moves to draw.

The idea is that you want to win in the fewest number of moves possible, but if you can't win then you want to drag out the game for as long as possible so that the opponent has more chances of making mistakes. So, how do you make minimax return the best strategy for max?

Answer: Minimax deals with two kinds of values:

Estimated values determined by a heuristic function.
Actual values determined by a terminal state.

Commonly, we use the following denotational semantics for values:

A range of values centered around 0 denotes estimated values (e.g. -999 to 999).
A value less than the smallest heuristic value denotes a loss for max (e.g. -1000).
A value more than the biggest heuristic value denotes a win for max (e.g. 1000).
The value 0 denotes either an estimated draw or an actual draw.

The advantage of this denotational semantics is that comparing values is the same as comparing numbers (i.e. you don't need a special comparison function). We can extend this denotational semantics to incorporate optimality of winning and losing as follows:

A range of values centered around 0 denotes estimated values (e.g. -999 to 999).
A range of values less than the smallest heuristic value denotes loss (e.g. -2000 to -1000).
A range of values more than the biggest heuristic value denotes win (e.g. 1000 to 2000).
The value 0 denotes either an estimated draw or an actual draw.
A loss in n moves is denoted as -(m - n) where m is a sufficiently large number (e.g. 2000).
A win in n moves is denoted as (m - n) where m is a sufficiently large number (e.g. 2000).
Using this denotational semantics for values requires only a small change to the minimax algorithm:

function minimax(node, depth, max)
    if max
        return negamax(node, depth, 1)
    else
        return -negamax(node, depth, -1)

function negamax(node, depth, color)
    if terminal(node)
        return -2000
    if depth = 0
        return color * heuristic(node)
    value = -2000
    foreach child of node
        v = -negamax(child, depth - 1, -color)
        if v > 1000
            v -= 1
        if v > value
            value = v
    return value

Incorporating optimality for draws is a lot more difficult.
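To make the scoring scheme concrete, here is a runnable sketch of the pseudocode above on a hand-built game tree. The tree encoding is my own illustration (not from the original answer): a node is a list of child nodes, and an empty list is terminal, meaning the player to move has lost.

```python
# Depth-discounted negamax: wins further away score 1 point less per ply,
# so the search prefers the fastest forced win.

def negamax(node, depth, color):
    if not node:              # terminal: the player to move has lost
        return -2000
    if depth == 0:
        return 0              # stand-in heuristic: everything looks drawn
    value = -2000
    for child in node:
        v = -negamax(child, depth - 1, -color)
        if v > 1000:          # a win one ply further away is worth 1 less
            v -= 1
        if v > value:
            value = v
    return value

win_in_1 = [[]]       # one move puts the opponent in a terminal (lost) position
win_in_3 = [[[[]]]]   # still a forced win, but three plies deep
print(negamax(win_in_1, 10, 1))   # 1999: the faster win scores higher
print(negamax(win_in_3, 10, 1))   # 1998
```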
{ "domain": "ai.stackexchange", "id": 387, "tags": "minimax, game-theory, optimality" }
Plugin to find pose of a robot
Question: Hi, I'm new to Gazebo and ROS. I'm trying to find a plugin that I can hook to the sdf file of my robot which will publish its pose in (x,y,z) format. I noticed that there is already a topic being published to called "/gazebo/model_states" which contains the pose of every model in the world. I'm not sure if there's a way to manipulate this to get the pose from one singular model. Thanks in advance for any help. Originally posted by blastpower5 on Gazebo Answers with karma: 3 on 2016-08-12 Post score: 0 Original comments Comment by chapulina on 2016-08-12: To clarify, you'd like the pose to be published over a ROS topic, or a Gazebo topic? Comment by blastpower5 on 2016-08-12: A ROS topic would be best, but I think (not sure) either would work. Comment by wicked88 on 2016-08-16: blastpower5, I just started using the plugins and I can tell that you CAN do the task using both ROS and Gazebo communications. For ROS, you will have to link against the proper libraries. Using Gazebo, it's like the examples. Using a ROS topic would require linking against and including ROS, coding a publisher, and publishing the information. The plugins work like a normal executable. Since the update function receives a pointer to the model, it's always updated. Comment by wicked88 on 2016-08-16: // Continuation : You can then make a forever loop, with a sleep inside, and publish your information. Comment by wicked88 on 2016-08-16: If you go to the gazebo folder, under examples/plugin/model_move you can see the structure of a plugin; it's a normal C++ program. The Load() function instantiates everything, it's like a constructor. From here, you can do everything, knowing that on startup of the model, the Load function is going to be run. You also receive this pointer [ ::Load(physics::ModelPtr _parent] which is the pointer to the model, where every data about the model can be read, set and managed, like pose, velocity etc.
Answer: Just proceed with the installation of the ros-gazebo packages: $ sudo apt-get install ros-indigo-gazebo-ros-pkgs ros-indigo-gazebo-ros-control Then, just do $ rosrun gazebo_ros gazebo or $ rosrun gazebo_ros gzserver (to do it headless). You can inspect the rostopic list after that. You will see a bunch of topics; just do $ rostopic echo /gazebo/model_states and you will see the names and poses of the models. Parse that, given the name, and you will have everything you need. The easy way out is to write a parser to split the full message and separate it into small messages. Don't know if it's possible to attach a plugin that works with ROS to a model, but if it is, I WOULD LOVE IT. Originally posted by wicked88 with karma: 38 on 2016-08-13 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by blastpower5 on 2016-08-17: Thanks for the help! I have also found a plugin that I put in the sdf file of my robot's model: ''. It publishes messages of type nav_msgs/Odometry, which includes both the pose and twist of the robot. Comment by wicked88 on 2016-08-20: Btw, it's clearly possible and easy to make a ros node inside a plugin, to publish whatever you want :D
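As an illustration of the "parse that, given the name" suggestion: the gazebo_msgs/ModelStates message carries parallel name and pose lists, so extracting one model's pose is a lookup by index. The sketch below keeps the lookup as a plain function; the ROS wiring in the comments is a sketch, and the model name "my_robot" is a made-up example.

```python
def pose_of(model_name, names, poses):
    """Return the pose paired with model_name, or None if absent."""
    try:
        return poses[names.index(model_name)]
    except ValueError:
        return None

# With ROS available, this would be wired up roughly as:
#   import rospy
#   from gazebo_msgs.msg import ModelStates
#
#   def callback(msg):
#       p = pose_of("my_robot", msg.name, msg.pose)
#       if p is not None:
#           rospy.loginfo("x=%f y=%f z=%f",
#                         p.position.x, p.position.y, p.position.z)
#
#   rospy.init_node("model_pose_listener")
#   rospy.Subscriber("/gazebo/model_states", ModelStates, callback)
#   rospy.spin()

print(pose_of("my_robot", ["ground_plane", "my_robot"], ["pose0", "pose1"]))
```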
{ "domain": "robotics.stackexchange", "id": 3967, "tags": "gazebo-plugin" }
How do I look for (possibly) all coordinate transformations with a given metric?
Question: From what I learned in tensor calculus so far, coordinate transformations are supposed to preserve the metric of the space. (Here I used GR notation, but the metric doesn't have to be the spacetime metric.) $$\Lambda{^\rho}{_\mu}\Lambda{^\sigma}{_\nu}g{_\rho}{_\sigma}=g{_\mu}{_\nu}$$ So, to find all possible $\Lambda$'s, I thought that I just have to use the rule described above, that is, to find all $\Lambda$'s that give back the exact same metric. However, for the transformations between Cartesian and polar coordinates in a 2-d plane, the metric looks very different in the different coordinates, and yet they are equivalent. Is it because going from Cartesian to polar is not a linear transformation or something? And the set of transformations that I get from the method above does not contain the Cartesian-to-polar transformations? If so, then what kind of transformations are they? Answer: You seem a bit confused. A general coordinate transformation is just any differentiable, bijective function with a differentiable inverse (called a diffeomorphism) between open sets in $\mathbb{R}^n$. So you can't really list them all; any function that satisfies the above conditions will work. Under a transformation $x'^\mu = x'^\mu(x)$, the metric changes as $$g'_{\mu\nu} = \frac{\partial x^\alpha}{\partial x'^\mu} \frac{\partial x^\beta}{\partial x'^\nu} g_{\alpha\beta},$$ where the matrices $g_{\mu\nu}$ and $g'_{\mu\nu}$ will not in general be the same. A coordinate transformation preserves the metric in the sense that the abstract tensor is coordinate-independent, but its components do depend on the coordinates. There is a special class of diffeomorphisms, called isometries, that do leave the components of the metric invariant: $$g_{\mu\nu} = \frac{\partial x^\alpha}{\partial x'^\mu} \frac{\partial x^\beta}{\partial x'^\nu} g_{\alpha\beta}.$$ (Pay attention to the primes!) We think of them as the symmetries of our space.
In Euclidean space they are rotations and translations; in Minkowski spacetime some rotations are replaced by Lorentz boosts. In a more general situation you might have fewer isometries: a black hole only has rotation and time translation symmetry, but not space translation or boost symmetry. A space might not have any isometries at all.
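As a concrete illustration (my own worked example, not part of the original answer), applying the transformation law above to the flat 2-d metric $g_{\alpha\beta}=\delta_{\alpha\beta}$ with $x = r\cos\theta$, $y = r\sin\theta$ gives

```latex
\begin{align*}
g'_{rr} &= \left(\frac{\partial x}{\partial r}\right)^{2} + \left(\frac{\partial y}{\partial r}\right)^{2}
         = \cos^{2}\theta + \sin^{2}\theta = 1,\\
g'_{\theta\theta} &= \left(\frac{\partial x}{\partial \theta}\right)^{2} + \left(\frac{\partial y}{\partial \theta}\right)^{2}
         = r^{2}\sin^{2}\theta + r^{2}\cos^{2}\theta = r^{2},\\
g'_{r\theta} &= \frac{\partial x}{\partial r}\frac{\partial x}{\partial \theta}
             + \frac{\partial y}{\partial r}\frac{\partial y}{\partial \theta}
         = -r\cos\theta\sin\theta + r\sin\theta\cos\theta = 0,
\end{align*}
```

so $ds^2 = dr^2 + r^2 d\theta^2$: the same tensor with different components. The Cartesian-to-polar map is a diffeomorphism but not an isometry.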
{ "domain": "physics.stackexchange", "id": 52699, "tags": "metric-tensor, coordinate-systems, tensor-calculus" }
Making Inference from a Correlation Heatmap
Question: I have constructed a heatmap of my dataset for visualization. I have searched various sites regarding what I can infer from the heatmap, and I am unable to get any clear understanding from them. I want to select useful features for Exploratory Data Analysis and Visualization to get some insights. If you can provide any suggestion or advice it would be extremely helpful for me. Thank you. Answer: Assuming you're using df.corr(), the values shown in the heatmap are Pearson correlation coefficients, which can be thought of as "the explainability between two arrays." A score closer to 0 means there is no relationship. A score closer to 1 or -1 indicates a positive or negative relationship. A perfect score of 1 is a direct correlation. Additionally, I would caution against taking action on these values without testing the normality/distribution of your data.
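A minimal sketch (my own example data, not from the question) of where the heatmap's numbers come from: DataFrame.corr() computes pairwise Pearson coefficients, and seaborn's heatmap simply colors that matrix.

```python
import pandas as pd

df = pd.DataFrame({
    "a": [1, 2, 3, 4, 5],
    "b": [2, 4, 6, 8, 10],   # b = 2a: perfectly correlated with a
    "c": [5, 4, 3, 2, 1],    # reversed: perfectly anti-correlated with a
})
corr = df.corr()             # Pearson by default
print(corr.loc["a", "b"])    # 1.0
print(corr.loc["a", "c"])    # -1.0

# To draw the heatmap itself (needs seaborn/matplotlib installed):
#   import seaborn as sns
#   sns.heatmap(corr, annot=True, vmin=-1, vmax=1)
```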
{ "domain": "datascience.stackexchange", "id": 5530, "tags": "statistics, visualization, data-science-model, matplotlib, seaborn" }
How to calculate conservation of Angular momentum with several rotations/axis in a system?
Question: Imagine a turn-table in space, therefore there are no external forces on the system. This turntable has two motors, turning various masses at different radii. Imagine one motor is at the edge of the turntable, spinning parallel to the turntable. The other motor is on the opposite side of the turntable but spinning in a vertical/perpendicular direction. Ang. momentum has to be conserved, but what axis would you use? And how would you calculate it? Would the overall system rotate about an axis that is a blend of both rotations? I don't specifically need a math answer. Just conceptually trying to understand how you would even work a problem like this out, and how the overall system would look and/or rotate in space given rotation about 2 different axes. I get ang. momentum in our physics books, but it is always about 1 axis, and very simplified systems. Examples are like a bullet and door, or two rotating disks falling on top of a common axis. I know that ang. momentum doesn't apply where external forces exist, so many times it doesn't apply. But this has my brain wondering. Thank you. Here is a picture. Imagine a disk, with two motors, spinning objects. How is ang. momentum conserved? Is it still conserved about each motor axis as well as the full system? Answer: Angular momentum has to be conserved along all possible axes of rotation. You can show that if it is conserved along three linearly independent axes, then it is conserved for all axes. So it is common to use a Cartesian coordinate system and formulate angular momentum as a vector with components along all the directions of the coordinate system.
The most complex part of this process is formulating the 3×3 mass moment of inertia tensor $\mathbf{I}$ for each separate body, to be used such that $$ \boldsymbol{L}_{\rm total} = (\mathbf{I}_1 \boldsymbol{\omega}_1 + \boldsymbol{r}_1 \times \boldsymbol{p}_1) + (\mathbf{I}_2 \boldsymbol{\omega}_2 + \boldsymbol{r}_2 \times \boldsymbol{p}_2) + \ldots $$ where $ \boldsymbol{L}_{\rm total}$ is a vector, as well as each rotational velocity $\boldsymbol{\omega}_i$. Also the location of each center of mass $\boldsymbol{r}_i$ and the momentum vector $\boldsymbol{p}_i$ have to be considered.
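Numerically, the sum above can be sketched as follows (the inertia tensors, positions, and momenta are made-up illustration values, not from the question):

```python
# L_total = (I1 w1 + r1 x p1) + (I2 w2 + r2 x p2), about a common origin.

import numpy as np

def body_angular_momentum(I, omega, r, p):
    """Spin term I*omega plus orbital term r x p, about the origin."""
    return I @ omega + np.cross(r, p)

# Body 1: spinning about the vertical (z) axis at one edge of the table.
I1 = np.diag([1.0, 2.0, 3.0])        # kg m^2
w1 = np.array([0.0, 0.0, 1.5])       # rad/s
r1 = np.array([0.5, 0.0, 0.0])       # m, center-of-mass location
p1 = np.array([0.0, 0.2, 0.0])       # kg m/s, linear momentum

# Body 2: spinning about a horizontal (x) axis at the opposite edge.
I2 = np.diag([0.5, 0.5, 1.0])
w2 = np.array([2.0, 0.0, 0.0])
r2 = np.array([-0.5, 0.0, 0.0])
p2 = np.array([0.0, -0.2, 0.0])

L_total = (body_angular_momentum(I1, w1, r1, p1)
           + body_angular_momentum(I2, w2, r2, p2))
print(L_total)   # [1.  0.  4.7]
```

Absent external torques, it is this single vector sum (not the value about any one motor axis) that stays constant in time.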
{ "domain": "physics.stackexchange", "id": 73755, "tags": "newtonian-mechanics, angular-momentum, torque" }
What type of air blower/compressor to use?
Question: I need a way to move air which is somewhere in between a compressor and a fan: something along the lines of 20-40 cfm at 2-3 psi (500-1000 liters/minute at 10-20 kPa). Typical small compressors (of the right power rating) seem to be approx 0.5 cfm at 30 psi. Typical centrifugal blowers are approx 60 cfm free flow but only 0.05 psi max static pressure. I'm having trouble visualizing what I need, let alone sourcing it. Leaf blower? Shopvac? Supercharger? Do I want a piston pump, diaphragm pump, screw compressor, turbopump, some type of higher pressure centrifugal fan, or ??? For this application energy efficiency at the described operating point is most important; size/cost/noise is much less important. UPDATE It also needs to be oil-free. Answer: I think fans are out, as you won't get your combination of flow rate and pressure. The only thing you might want to look at is multistage fans. For this answer I'll concentrate on positive-displacement (PD) compressors. All of the following should be available in oil-free versions: Side channel compressors come in the flow rate and pressure region you want. You will need to talk to an application engineer of a vendor to see if they fit your particular application. I don't know how they compare in terms of efficiency. All rotary lobe compressors and sliding vane compressors I found with quick googling were designed for higher pressures or flow rates. If you find one in the right pressure range, ask the vendor whether the flow rate can be adjusted with a VFD. Membrane pumps are built for lower flow rates. If all else fails, get ten or twelve, pumping in parallel into the same pressure manifold.
{ "domain": "engineering.stackexchange", "id": 1484, "tags": "pumps, compressed-air, compressors" }
Why doesn't the cell membrane just...break apart?
Question: Forgive me if this is a silly question. I can't understand the basics. Why doesn't the cell membrane just break apart? What's keeping the layers in the phospholipid bilayer together? I know that the membrane is embedded with proteins and lipids, but I still can't wrap my head around the "why". Are the hydrophobic interactions in the middle "stronger" than the hydrophilic interactions on the outside? What's keeping the individual phosphate heads together instead of, say, one of them just drifting away due to a nearby water molecule? Answer: The membrane bilayer is held together by hydrophobic forces. This is an entropy driven process. When a greasy or hydrophobic molecule is suspended in water, the water molecules form an organized "cage" around the hydrophobic molecule. When two hydrophobic molecules come into contact, they force the water between them out. This increases the entropy because the freed waters don't need to be organized into the cage. Lipid bilayers have many many many hydrophobic lipids that squeeze out a lot of water and greatly increase entropy. The polar phosphates allow the water to interact with the surface of the membrane, without a polar head group the lipids would form a spherical blob instead of a membrane. Read this section on wikipedia for more.
{ "domain": "biology.stackexchange", "id": 2801, "tags": "biophysics, cell-membrane" }
Does QFT re-interpret the meaning of the wave function of Schrodinger's equation?
Question: I'm wondering if quantum field theory re-interprets the meaning of the wave function of Schrodinger's equation. But more specifically, I'm trying to understand how to explain the double slit experiment using quantum field theory's interpretation that, in the universe, "there are only fields." As background, in this post, Rodney Brooks states: In QFT as I learned it from Julian Schwinger, there are no particles, so there is no duality. There are only fields - and “waves” are just oscillations in those fields. The particle-like behavior happens when a field quantum collapses into an absorbing atom, just as a particle would. ... And so Schrödinger’s famous equation came to be taken not as an equation for field intensity, as Schrödinger would have liked, but as an equation that gives the probability of finding a particle at a particular location. So there it was: wave-particle duality. Sean Carroll makes similar statements, that the question "what is matter--a wave or a particle?" has a definite answer: waves in quantum fields. (This can be found in his lectures on the Higgs Boson.) In the bolded passage above, Dr. Brooks seems to suggest that QFT provides a physical interpretation which removes superposition. And he says as much in another post here: In QFT there are no superpositions. The state of a system is specified by giving the field strength at every point – or to be more precise, by the field strength of every quantum. This may be a complex picture, but it is a picture, not a superposition. So taking up the double slit experiment, is the following description accurate? When the electron passes through the double slit, waves in the electron quantum field interfere. When the wave collapses into a particle, it takes on the position at one of the locations where the electron quantum field is elevated. So the electron particle can't "materialize" in any locations where the electron quantum field interferes destructively. 
This gives rise to the interference pattern on the back screen. Is this a correct description of the double slit experiment from QFT's interpretation that, in the universe, "there are only fields"? If this is correct, then it seems like QFT says the wave function is more than just a probability wave: the wave function describes a physical entity (excitations in the underlying quantum field). There is still a probabilistic element: the position where the wave collapses into a particle has some random nature. Am I understanding correctly that QFT adds a new physical entity (quantum fields) which expands our physical interpretation of the wave function? Answer: There is an overlap with other questions linked in the comments. But, perhaps the focus of this question is different enough to merit a separate answer. There are at least two distinct but equivalent formalisms of QFT: the canonical approach and the path integral approach. Although they are equivalent mathematically and in their experimental predictions, they provide very different ways of thinking about QFT phenomena. The one most suited to your question is the path integral approach. In the path integral approach, to describe an experiment, we start with the field in one configuration and then work out the amplitude for the field to evolve to another definite configuration representing a possible measurement in the experiment. So in the two-slit case, we can start with a plane wave in front of the two slits representing the experiment starting with an electron of a particular momentum. Then, our final configuration will be a delta function at the screen representing the electron measured at that point at some later specified time. We can determine the probability for this to occur by evaluating the amplitude for the field to evolve between the initial and final configuration in all possible ways. We then sum these amplitudes and take the absolute value squared in the usual QM way. 
So, in this approach, there are no particles, just excitations in the field.
{ "domain": "physics.stackexchange", "id": 44517, "tags": "quantum-field-theory, wavefunction, schroedinger-equation, double-slit-experiment, quantum-interpretations" }
ee_cart_imped doesn't work for real PR2
Question: Hi All, I'm just following the tutorial of ee_cart_imped package. It's working for Gazebo, but not for real PR2. I get the following: core service [/rosout] found Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[r_arm_stopper-1]: started with pid [8592] Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[r_arm_cart_imped_controller_spawner-2]: started with pid [8593] Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[r_arm_cart_imped_controller/r_arm_cart_imped_action_node-3]: started with pid [8594] [ERROR] [WallTime: 1348693065.095550] Failed to load r_arm_cart_imped_controller Any suggestions? Originally posted by J on ROS Answers with karma: 1 on 2012-09-26 Post score: 0 Original comments Comment by ahendrix on 2012-09-26: Which version of ROS are you running? Which version of Ubuntu? Have you built all of the ee_cart_imped_* packages? Comment by J on 2012-09-27: I'm using Fuerte/12.04. I think I've built them because I can see the controller is running in Gazebo... Could you let me know any suggestions or step-by-step instruction which I can follow? Thanks! Comment by ahendrix on 2012-09-28: You're running Ubuntu 12.04 on a PR2, or on your desktop? Comment by J on 2012-09-28: I'm running it on my desktop. Answer: Since PR2 controllers are dynamic libraries that are loaded from the file system, you need to have them built on your PR2 if you want to run them there, and you should run the relevant launch files on the PR2 rather than on your desktop. 
Originally posted by ahendrix with karma: 47576 on 2012-10-01 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Felix Endres on 2013-01-17: Also, as far as I know, you need to restart the pr2 after building the ee_cart_imped library. Something about the real-time loop I guess.
{ "domain": "robotics.stackexchange", "id": 11150, "tags": "ros" }
Project Euler 6: Difference between sum of squares and square of sum
Question: I've created a (very) simple solution for Project Euler problem 6: Project Euler Problem 6: Sum square difference The sum of the squares of the first ten natural numbers is, $$ 1^2 + 2^2 + ... + 10^2 = 385 $$ The square of the sum of the first ten natural numbers is, $$ (1 + 2 + ... + 10)^2 = 55^2 = 3025 $$ Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is $3025 − 385 = 2640$. Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum. Solution: $25164150$ To solve this in F# was pretty simple: I simply declared a function square which would multiply an x by itself, then a function which would run through a list of numbers, compute the square of the sum of them, and subtract the sum of the squares of each. let square x = x * x let calcDiffSquareSumFromSumSquares list = (list |> List.sum |> square) - (list |> List.sumBy square) printfn "Solution to Project Euler 6: %i" (calcDiffSquareSumFromSumSquares [ 1 .. 100 ]) On this one, I'm particularly curious if there's a functional way to run through the list only once, calculate the total sum, and calculate the squares of sums so that I wouldn't need to use two sum methods. Answer: Looks good to me. A couple of things to consider though: You could split out two functions, squareOfSum and sumOfSquares. I think that would make it a little easier to read. You could make the functions more general by using Seq.sum/Seq.sumBy instead of List.sum/List.sumBy. While you could use fold, as joranvar mentioned, I think this would make the program much less clear and I would not recommend it (except as an exercise for learning about folds). There is another way to attack the problem, by using the two identities \begin{align} 1 + 2 + 3 + \cdots + n &= \frac{n(n + 1)}{2}, \quad \text{and} \\ 1^2 + 2^2 + 3^2 + \cdots + n^2 &= \frac{n(n+1)(2n+1)}{6}. \end{align}
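The two closed-form identities at the end of the answer can be checked numerically; here is a quick sketch (in Python rather than F#, purely as an illustration) comparing them against the brute-force approach the question uses:

```python
def sum_square_difference(n):
    # closed forms:  1+...+n = n(n+1)/2   and   1^2+...+n^2 = n(n+1)(2n+1)/6
    square_of_sum = (n * (n + 1) // 2) ** 2
    sum_of_squares = n * (n + 1) * (2 * n + 1) // 6
    return square_of_sum - sum_of_squares

def sum_square_difference_naive(n):
    # the run-through-the-list approach, for comparison
    nums = range(1, n + 1)
    return sum(nums) ** 2 - sum(i * i for i in nums)

print(sum_square_difference(10))   # 2640, matching the problem statement
print(sum_square_difference(100))  # 25164150
```

The closed forms make the answer O(1) rather than O(n), though for n = 100 either is instant.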
{ "domain": "codereview.stackexchange", "id": 17247, "tags": "programming-challenge, functional-programming, f#" }
Using pi in an URDF file
Question: We want to use the value of pi in a URDF file, since we have to rotate a model by 90 degrees. Is there any builtin macro for pi in URDF? At the moment we are simply using our own M_PI macro, but it would be nice to know what the intended way of doing such a thing would be. Originally posted by MatthiasLoebach on ROS Answers with karma: 13 on 2017-11-30 Post score: 1 Answer: You can do this with xacro (an XML macro language): http://wiki.ros.org/xacro From section 3, Math Expressions:
> Since ROS Jade, Xacro employs python to evaluate expressions enclosed in dollared-braces (${}). This allows for more complex arithmetic expressions. Also, **some basic constants, e.g. pi, are already predefined**:
> <xacro:property name="R" value="2" />
> <xacro:property name="alpha" value="${30/180*pi}" />
You might be interested in the ROS xacro tutorial Originally posted by josephcoombe with karma: 697 on 2017-11-30 This answer was ACCEPTED on the original site Post score: 8
{ "domain": "robotics.stackexchange", "id": 29490, "tags": "urdf" }
Velocity of projectile with both quadratic and constant resistive force
Question: Suppose we have a projectile of mass $m$ that impacts a material with velocity $v_0$. As it travels through the material it is subject to the resistive force $F = \alpha v^2 + \beta$. How can I determine $v(t)$, the velocity of the projectile at time $t$ where $t=0$ is the impact time, and $v(p)$, the velocity of the projectile when it reaches a penetration depth of $p$? This question addresses $v(t)$ for an exclusively quadratic resistive force ($\beta = 0$), but is there a more general formula for when there is a constant component? Answer: Using $vdv=adx$ where $a=F/m=-\frac{\alpha}{m}v^{2}-\frac{\beta}{m}$ we have $$vdv=(-\frac{\alpha}{m}v^{2}-\frac{\beta}{m})dx\rightarrow \int\frac{v}{\frac{\alpha}{m}v^{2}+\frac{\beta}{m}}dv=-\int dx$$ Defining the new variable $u=\frac{\alpha}{m}v^{2}+\frac{\beta}{m}$ we have $\frac{du}{dv}=2\frac{\alpha}{m}v$. Thus, $vdv=\frac{m}{2\alpha}du$ and the integral reduces to $$\frac{m}{2\alpha}\int\frac{1}{u}du=-\int dx$$ Subsequently, $$\frac{m}{2\alpha}\ln u=-x+C$$ We use the initial condition that for $x=0$, $v=v_{0}$ and $u=u_{0}=\frac{\alpha}{m}v_{0}^{2}+\frac{\beta}{m}$. Hence $C=\frac{m}{2\alpha}\ln u_{0}$. With some straightforward algebra you should be able to get $$u(x)=u_{0}\exp{(-\frac{2\alpha}{m}}x)$$ Let $x=p$ and you'll get $u(p)$. From that you get $v(p)$.
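As a sanity check on this result, a small numerical sketch (Python; function names and parameter values are my own, not from the post) can compare $v(p)$ obtained from $u(x)=u_{0}\exp(-\frac{2\alpha}{m}x)$ with a direct numerical integration of $v\,dv = -(\frac{\alpha}{m}v^{2}+\frac{\beta}{m})\,dx$:

```python
import math

def v_analytic(p, v0, alpha, beta, m):
    # v(p) from u(x) = u0 * exp(-2*alpha*x/m), where u = (alpha/m)*v^2 + beta/m
    u0 = (alpha / m) * v0**2 + beta / m
    u = u0 * math.exp(-2.0 * alpha * p / m)
    return math.sqrt((u - beta / m) * m / alpha)

def v_numeric(p, v0, alpha, beta, m, steps=20000):
    # brute-force Euler integration of dv/dx = -(alpha*v^2 + beta)/(m*v)
    v, dx = v0, p / steps
    for _ in range(steps):
        v -= (alpha * v**2 + beta) / (m * v) * dx
    return v

# hypothetical projectile: 10 g, 300 m/s, after 5 cm of penetration
va = v_analytic(0.05, 300.0, 1e-4, 10.0, 0.01)
vn = v_numeric(0.05, 300.0, 1e-4, 10.0, 0.01)
print(va, vn)  # the two should agree to a small fraction of a m/s
```

The agreement between the two routes is a quick way to confirm the algebra in the derivation.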
{ "domain": "physics.stackexchange", "id": 85518, "tags": "homework-and-exercises, newtonian-mechanics, velocity, projectile, drag" }
Cannot save data Callback function
Question: What I am really looking for is a way to save all the message values that a node receives (in a list, CSV file, ...) so that I can later do something with these values. So this is the data received: I tried to put all these values in a myData list: def callback(data): id = data.id x = data.pose.x y = data.pose.y #print data #list_x.append(x) #print id ,x,y goals={} goals["x"]=x goals["y"]=y #print goals myData=[goals] print myData but it's not what I want (even with append)! It puts every value in a list like this: Also, I tried to save these values in a CSV file: with open('goals_point.csv', 'w') as csvFile: writer = csv.writer(csvFile) writer.writerows([myData]) csvFile.close() It saves only the last value: So how can I save collected messages (data)? Any idea? Thanks for answering and for your time in advance, Tayssir Originally posted by Tayssir Boubaker on ROS Answers with karma: 17 on 2019-04-09 Post score: 0 Original comments Comment by jayess on 2019-04-09: Please don't use an image to display text. Images are not searchable and people cannot copy and paste the text from the image. Please see the support page Answer: You could save the data into a text file, and import this to Excel (or similar). file = open("yourfile.txt", "w+") data_to_save = str(data.pose.x) + "\t" + str(data.pose.y) + "\n" file.write(data_to_save) With the code above you'll be able to write multiple lines into your file. However, if you call the same program again the data will be overwritten.
If you need more details: https://www.guru99.com/reading-and-writing-files-in-python.html Originally posted by Teo Cardoso with karma: 378 on 2019-04-09 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Tayssir Boubaker on 2019-04-09: Thank you, I tried it but it saves only the last value: a = open('goals.txt', 'w+') data_to_save = str(data.pose.x) + "\t" + str(data.pose.y) + "\n" a.write(data_to_save) output txt file: 10.0 20.0 I think that I should not put it in my callback. So the problem now is how can I get data.pose out of my callback!? Comment by Teo Cardoso on 2019-04-09: You should put the open('goals.txt', 'w+') part somewhere it won't be repeated, that is, at the beginning of the program; you can't put that line inside the callback, only the writing part. Comment by Teo Cardoso on 2019-04-09: Sorry, you shouldn't put the line: a = open('goals.txt', 'w+') inside the callback function; this line must be called once. The lines which write into the text file must be in the callback function. Something like this: a = open('goals.txt', 'w+') def callbackFunction(): data_to_save = str(data.pose.x) + "\t" + str(data.pose.y) + "\n" a.write(data_to_save) Comment by Tayssir Boubaker on 2019-04-09: Thanks, I got it ;)
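A minimal sketch of the pattern the answer and comments describe (open the file once outside the callback, collect one row per message, write them all out), using a stand-in message class since no ROS environment is assumed here; Python 3 syntax, unlike the Python 2 in the original thread:

```python
import csv
import os
import tempfile

rows = []  # one (x, y) pair per received message

def callback(data):
    # append instead of overwriting, so every message is kept
    rows.append((data.pose.x, data.pose.y))

# stand-in for the real ROS message type, just for demonstration
class Pose(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

class Msg(object):
    def __init__(self, x, y):
        self.pose = Pose(x, y)

for i in range(3):  # simulate three incoming messages
    callback(Msg(float(i), 2.0 * float(i)))

# write everything once, e.g. on node shutdown
path = os.path.join(tempfile.gettempdir(), 'goals.csv')
with open(path, 'w', newline='') as f:
    csv.writer(f).writerows(rows)

print(rows)  # [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]
```

Collecting into a list and writing once avoids both the overwrite problem and repeated file opens inside the callback.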
{ "domain": "robotics.stackexchange", "id": 32841, "tags": "ros, ros-melodic, callback" }
Why do we like salt?
Question: A few friends of mine told me that salt provides zero nutritional value to us, and in fact can harm our bodies. Now, these guys are medical students, and being an engineering student myself, I decided not to argue with them. The rest of this question assumes that this fact is true, so if it's not, you can just go ahead and call me out now... So here's my understanding of things: we 'like' doing things because of our instincts, which have slowly become refined over millions of years. For example, I 'like' eating foods with fat in it because my instinct compels me to do so. Fat is 'good' for my body, since it provides a lot of energy (obesity problems aside). So are there certain things, such as eating salt, that are not in fact beneficial in any way, and we only do these things because we were trained to as children? This is the only thing I could come up with, but it's not a very satisfying explanation for a few reasons. First, I think humans have been eating salt for a long time. This would mean that most likely, it is actually our 'instinct' to eat salt. Also, salt is eaten in every culture today, which has the same implication. So is there some better explanation? Answer: In developed countries we usually consume enough salt (sodium to be exact) without actually adding table salt to food. Everything can become toxic when consumed in excess - even water - and when we frequently add more salt to foods, we tend to consume sodium in potentially harmful excess. That's what your friends are referring to. However, salt (sodium) is one of the most essential substances your body needs to stay alive, for several reasons. One of the main purposes of sodium is the upkeep of blood's osmolarity (i.e. concentrations of osmotically active compounds. Higher salt concentration on one side of a permeable membrane attracts water to that side - I'm sure you've heard that before). There are numerous systems in your body to make sure the osmolarity of blood is correct. 
If they fail and blood becomes hypo- or hypertonic, your cells will be sucked dry or pumped full of water and in either case, burst and die. Look up the renin-angiotensin-aldosterone system for example: when the kidney filters blood, it reabsorbs or lets through water depending on the current blood osmolarity, leading to a larger or smaller amount of more or less concentrated urine. You drink lots, your blood is diluted, it becomes less tonic, the kidney registers that and lets more water through, you urinate more. There are many more elements involved there, including blood pressure, nerve signals stimulating thirst or hunger of different kinds, some hormones etc. As you can see, there's a reason why the basic infusion given in hospitals to replace lost blood quickly isn't just water but normal saline. Delayed update to pick up some side aspects of your description: 1) There are of course things that humans do which have absolutely no value to them whatsoever. Anything that plays into the feel-good reward circuit in our brain can become such an unhelpful habit. Take smoking and drug consumption as examples. 2) About evolutionary relevance: Because salt is a key player in maintaining body function, evolution selected for instinctively liking it. Simultaneously, most people will not like food that is extremely salty - a protective mechanism against excess.
{ "domain": "biology.stackexchange", "id": 2939, "tags": "human-biology" }
Trying to simulate a Fourier transform spectrometer in Python
Question: I'm trying to simulate a Fourier transform spectrometer in Python. I started basically with a simple single frequency (f=1e10 Hz) sine signal coming into the spectrometer. I obtained the following plot of the signal against time: What the spectrometer is measuring is only the intensity of this signal - that is, the field amplitude squared. I plotted the field amplitude against frequency, so what I should obtain on my detector: Now I did the inverse Fourier of my intensity and plotted the amplitude of my obtained signal against time: As you can see, I fitted the signal that I obtained, and I don't get back the frequency that I set at the beginning, so my code is not working. I don't understand why. Here is my code (it's quite long, I'm sorry, but I'm stuck): import numpy as np import matplotlib.pyplot as plt import cmath import scipy #for integral from scipy.integrate import quad #for integral of kinf function f(x) between range a and b over x from scipy import optimize from scipy.fft import fft, fftfreq # define constants c = 3 * 10e8 # m/s, c = omega / k #define variables f = 10e9 #Hz phi = 0 # initial phase in rad (usually set by the experimenter) # set the frequency range over which the integral will be calculated f_min = 1e9 f_max = 500e9 # number of sample N = 1000 #sample maximal value M = 8e-9 # in s #samples spacing S = M / N # 1/s # define time range time_range = np.linspace(1/f_max, 1/f_min, N) #create an array of the signal as a function of time signal_in_time = 1 * np.sin(f * time_range + phi) # plot of the signal of frequency w against time plt.figure(figsize=(10,4)) plt.plot(time_range, signal_in_time, color = 'b', label ='f = {:.1e} Hz'.format(f)) plt.xlabel('Time [s]') plt.ylabel('Field amplitude [V/m]') plt.title('Field amplitude of the signal against time') plt.legend(loc='upper right') plt.grid() # store this signal in an array of signal measured at every frequency: there is only one signal at frequency w field_amplitude = np.zeros(N)
# define frequency range frequency_range = np.linspace(f_min, f_max, N) # store this signal at the slice of the array of corresponding frequency j=0 for i in frequency_range: #print('{:e}'.format(i)) #print(i) if i == 9990990990.990992: print('BINGO') field_amplitude[j] = sin j += 1 else: j += 1 #print(field_amplitude) # in this cell, do the inverse Fourier transform of the Field versus angular frequency array # array of intensity: here just square of amplitude array intensity = field_amplitude * field_amplitude #inverse Fourier of the intensity inv_four = np.fft.ifft(intensity) # define the x axis for the plot of the signal time_range = np.linspace(1/500e9, 1/1e9, N) # Plot of the Intensity (inverse Fourier transform of the incident power) versus time plt.figure(figsize=(8,4)) plt.plot(time_range, inv_four.real, '+-', color = 'b', label = 'Inverse Fourier transform') plt.xlabel('Time [s]') plt.ylabel('Intensity [W]') plt.title('Measured signal in time domain') plt.ticklabel_format(useMathText=True) plt.xlim(np.min(time_range), np.max(time_range)) plt.grid() plt.legend(loc='center left', bbox_to_anchor=(0, -0.22)) plt.show() #define a sinus function to fit the measured signal def sin_fit(x, a, b, c): return a * np.sin(b * x + c) # sin = 1 * np.sin(f * 1e-9 + phi) # do the fit params, params_covariance = optimize.curve_fit(sin_fit, time_range, inv_four.real, p0=[0.0003, 11.3e+10, +0.9]) a = params[0] b = params[1] c = params[2] print('the parameters found by the fits are: ', params) print('Angular frequency of the signal that is set: {:2e}'.format(f)) print('Angular frequency found by Fourier transform: {:2e}'.format(b)) # Plot of the Intensity (inverse Fourier transform of the incident power) versus time plt.figure(figsize=(8,4)) plt.plot(time_range, inv_four.real, '+-', color = 'b', label = 'Inverse Fourier transform') plt.plot(time_range, sin_fit(time_range, a, b, c), '-', color = 'r', label = 'Fit: y = {:.2e}'.format(a) +'sin({:.2e} * t'.format(b) + ' + {:.2})'.format(c)) #plt.plot(time_range, sin_fit(time_range, 0.0003,11.3e+10, +0.9), color = 'orange') plt.xlabel('Time [s]') plt.ylabel('Intensity [W]') plt.title('Measured signal in time domain') plt.ticklabel_format(useMathText=True) plt.xlim(np.min(time_range), np.max(time_range)) plt.grid() plt.legend(loc='center left', bbox_to_anchor=(0, -0.22)) plt.show()
Answer: The OP is trying to recover the time domain signal by taking the inverse FFT of the "intensity", I assume because that would be the only information available in an actual test (so amplitude vs frequency from the power spectrum, and no phase). The inverse FFT will result in a matching sinusoid (with arbitrary phase offset since the phase is not known). I believe the reason for the mismatch is because the OP did not use a frequency axis representing the FFT corresponding to the sampling frequency used (along with slight error in the way the time was indexed). An FFT with $N$ samples, with a frequency index typically denoted $k$ going from $k=0$ up to $k=N-1$, corresponds to a frequency axis that starts with "DC" (which means direct current from EE terminology as a short hand for 0 frequency like a DC battery), and extends to nearly the sampling rate ($k=N$ corresponds to the sampling rate exactly). If we review the OP's data, the sampling rate can be derived from the time axis that was created (subtract one sample from the previous to get $\Delta T$, and $f_s = 1/\Delta T$). I did this and got a sampling rate of 1.001002 THz (nearly 1E12). We then see from the plot the OP provided, showing the amplitudes for each frequency, that it only goes to 5E11: So the OP used a frequency axis of DC to half the sampling rate instead of what I defined above.
From the code we see the OP intended to model a sinusoid at 10 GHz but instead got something nearly twice this (18 GHz), consistent with the frequency axis being half as long together with the additional errors in the time indexing used; and visually from the OP's plot, we see 18 cycles in 1E-9 seconds: 18 GHz. So to properly do what the OP is intending, the following can be done: Create a frequency domain waveform with $N$ samples (starting with all zeros for now, we will fill the proper frequency bins in a subsequent step) where $N$ corresponds to the sampling rate and time duration of the capture according to: $$N = f_s T$$ Where: $f_s$ is the sampling rate in Hz $N$ is the total number of samples (in time AND in frequency) $T$ is the total time duration in seconds For example if we want to simulate a 10 GHz tone and use an integer number of samples then we could use a 1 THz sampling rate (closest to the OP's case), and a total time duration of 1 cycle of the tone which is 1/10E9 = 1E-10 seconds (100 ps), OR 2 cycles of the tone which is 2E-10 seconds, or any higher integer. This will result in exactly the case the OP has shown where all other FFT bins are zero. (We can use any other time durations, but then we have to pull spectral leakage into the discussion, and given the OP is less familiar with all that, I prefer to avoid that for getting initial results). The next thing to know is the bin spacing, $f_\Delta$, which in Hz would be the sampling rate divided by the total number of samples: $$f_\Delta = f_s/N$$ Note that this also corresponds directly to $$f_\Delta = 1/T$$ A real sinusoid (perfectly sampled so that the frequency is directly on bin center) would have two non-zero bins, one at index $k$ and the other at index $N-k$. (Just take the FFT of the test sine wave and plot the magnitude of the results to confirm this).
If the sine wave used doesn't complete an exact integer number of cycles in the complete waveform used, additional spectral leakage will result, but if we have an exact number of cycles in time, there will be only two non-zero bins in frequency (I provide more details why at the very end). So the final step now is to populate the proper bins for the frequency the OP desires. If the time duration chosen was $T = 1E-10$ s, then the first bin will correspond to the desired frequency of 10 GHz: the bin spacing $1/T$ in this case is 10 GHz, so if we count along the frequency axis in increments of $k$, we get $k=0 \rightarrow f=0$, $k=1 \rightarrow f=10$ GHz, $k=2 \rightarrow f=20$ GHz etc. So in this case we populate $k=1$ with a non-zero value, and as explained above, we also populate $N-k = N-1$ with a non-zero value. Doing this will result in the correct sinusoidal frequency after processing the resulting array with the inverse FFT. If the time duration is doubled, then we would do this all with $k=2$ and $k=N-2$ instead, with all other bins zero. Finally the time indexing can be done accurately using: time = np.arange(N)*1/fs In summary the solution is to properly extend the FFT samples all the way to one sample less than the corresponding sample rate ($N$ samples with $k=0$ to $N-1$, where $k=N$ would correspond to the sampling rate). Why two bins? The reason two bins are populated is because the Fourier Transform of a sinusoid (which the DFT is representing) corresponds to the coefficients of frequencies given as $e^{j\omega t}$ NOT $\cos(\omega t)$. The general expression $Ke^{j\phi}$ means a phasor that has magnitude $K$ and phase $\phi$, thus $e^{j\omega t}$ is a "spinning phasor" on a complex plane.
The two are related using Euler's formula: $$2\cos(\omega t) = e^{j\omega t} + e^{-j\omega t}$$ And thus a real sinusoid (either cosine or sine) would have two components in the Fourier Transform: one at a "positive frequency" (meaning the phasor spins clockwise) and one at a "negative frequency" a phasor of the same magnitude and conjugate phase so spinning counterclockwise. When two phasors are added on the complex plane, you can do this graphically by placing one at the end of the other, and thus the sum will always stay on the real axis. In short every real waveform must have for each positive frequency component a negative frequency component that is equal in magnitude and opposite in phase (which keeps the sum on the real axis). In the FFT, every sample in the upper half of the FFT (which the OP didn't include) represents all these negative frequencies. Note I tried to keep it simple in the outlined procedure, but it is worth mentioning that even if one bin was populated, the resulting time domain waveform would just be $e^{j\omega t}$ instead of $\cos(\omega t)$ or $\sin(\omega t)$. But using the following relationship also from Euler (rearranging the sine and cosine to exponential conversions): $$e^{j\omega t} = \cos(\omega t) + j\sin(\omega t) $$ So from the complex time domain result $e^{j\omega t}$ we can extract the waveform $\cos(\omega t)$ or $\sin(\omega t)$ just by taking the real or imaginary component. The primary issue here was that the OP did not extend the frequency axis used to correspond with the sampling rate (it was only extended to half the sampling rate), and did not accurately represent the time increments (that second part resulted in less error). Whether one bin or two bins get populated, we can still extract the resulting sinusoid.
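The bin-populating recipe above can be sketched in a few lines of numpy (variable names are mine): it fills bins $k$ and $N-k$ and confirms the inverse FFT comes back as a real 10 GHz cosine.

```python
import numpy as np

fs = 1e12            # 1 THz sampling rate (closest to the OP's case)
T = 1e-10            # 100 ps capture: exactly one cycle of a 10 GHz tone
N = round(fs * T)    # N = fs * T = 100 samples
k = 1                # bin index, k/T = 10 GHz

# frequency-domain array: populate bin k and its mirror N-k so that the
# inverse FFT is real (every positive frequency needs its negative twin)
spectrum = np.zeros(N, dtype=complex)
spectrum[k] = N / 2       # this scaling gives unit amplitude in time
spectrum[N - k] = N / 2

signal = np.fft.ifft(spectrum)
t = np.arange(N) / fs     # correct time indexing

expected = np.cos(2 * np.pi * 10e9 * t)
print(np.allclose(signal.real, expected))   # True
print(np.max(np.abs(signal.imag)) < 1e-12)  # True
```

Populating only bin $k$ instead would give the complex exponential $e^{j\omega t}$, as described at the end of the answer.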
{ "domain": "dsp.stackexchange", "id": 11965, "tags": "fourier-transform, spectrogram" }
Current in a potential divider
Question: I don't understand why the current in R won't be the same as that in the 50 ohm resistor. Isn't that the rule for series circuits? The emf of the battery is 18 volts. Answer: $R$ is not in series with the 50 Ohm resistor. It is in series with the parallel combination of the 50 Ohm resistor and the resistance of the lamp. Hope this helps
{ "domain": "physics.stackexchange", "id": 65631, "tags": "homework-and-exercises, electric-circuits, electric-current, electrical-resistance, batteries" }
Why is there a trade-off between bias and variance in supervised learning? Why can't we have best of both worlds?
Question: The bias-variance trade-off is like a law in machine learning. You cannot have the best of both worlds. What is it about supervised learning in machine learning that makes it impossible to satisfy the two at the same time? Answer: The tradeoff between bias and variance summarizes the "tug of war" game between fitting a model that predicts the underlying training dataset well (low bias) and producing a model that doesn't change much with the training dataset (low variance). What statisticians/mathematicians a while ago realized is that any model can be made to perfectly fit the dataset at hand (i.e. have zero bias). Look at this picture, for instance from the Wikipedia page on overfitting: The graph depicts two binary classifiers that are trying to distinguish between the blue class and the red class. The classifiers have been trained on all of the blue and red observations shown in the graph to generate their boundaries. Notice that the green line has zero bias; it perfectly separates the blue and red classes whereas the black line clearly has a few errors on the training set and therefore has higher bias. The problem? Observations in datasets are realizations of random variables that follow some unknown probability distribution. Thus, observed data will always have some sort of noise, containing observations that don't actually represent the underlying probability distribution. These anomalies can be seen in the graph above in which there are red and blue observations that appear to be "crossing" the implied boundary between the two classes. Clearly, the majority of observations are easily distinguishable in this simple example except for a few cases. These few cases can be viewed as noise, which is inherently random and therefore, cannot be predicted. The green line, as a result, is essentially fitting random noise which by definition is unpredictable and non-representative. 
The result is that if we were to train both classifiers again using a new training dataset (from the same process), the "green boundary model" will be expected to generate a very different boundary compared to what is shown above, since it was influenced by data that did not represent the actual underlying data generating process. This is overfitting = high variance. The model associated with the black line, which has refrained from fitting the noise of the data, should be expected to remain relatively stable since it was not influenced by data that was not actually representative of the population. What does this all mean? The model associated with the black line is closer to reality (the "truth"), and therefore, if we were to pull new observations from "reality" that neither classifier has seen it will, on average, be expected to produce lower overall error despite having higher bias. This is the bias-variance tradeoff, which is really just a way of stating the tradeoff between underfitting and overfitting. I don't have another graph to illustrate this, but imagine if I straight up drew a diagonal line from the bottom left corner to the top right corner, and called this another classifier. Now, my "model" has zero variance; in fact, it is completely independent of the training dataset. But now, I haven't even fit the training dataset. I now have zero variance for tons of bias and large overall error. The decomposition of MSE into bias, variance, and irreducible error encapsulates the tradeoff. When you hope to fit a model that generalizes well to the population, simply having a model with low bias is not enough and minimizing variance is also important, if not more important. But, if we are to take variance into consideration, we now have to sacrifice some ability to predict the training set well (through regularization) in hopes of obtaining a model that is closer to the truth. 
Almost all ML methods use regularization to effectively trade bias (again, worse performance on the training dataset) for, hopefully, lower variance. Some examples: L1/L2/elastic nets introduce extra terms to the unbiased maximum likelihood estimators for the regression coefficients in GLMs. The introduction of these terms makes the estimators biased, but the idea is to hopefully penalize in an effective way such that we get bigger gains in reducing variance, which in turn results in lower overall error. Support vector machines (see the picture above) have a cost parameter that controls how smooth or "wiggly" the boundary is, which as you can see in the graph above trades bias by purposely missing some observations in the training dataset for a model that generalizes better to the population overall. Neural networks have dropout layers that effectively introduce bias by simply removing hidden units from the model at random. The goal here is to purposely sack parts of the model that have learned from the training dataset in hopes of preventing complex co-adaptations that are highly specific to the training set and not to the general population. Decision trees can be controlled through pruning (cost complexity, removing branches that do not significantly reduce in-sample error/lead to large enough information gains) and through limiting the depth of the tree. The result of both methods? Again, higher bias (removing rules near the bottom of the tree that have potentially been created to better predict non-representative noise observations, leading to sparse terminal nodes) but hopefully, lower variance.
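The "tug of war" can be made concrete with a small simulation (entirely my own illustration, not from the answer): refit a simple and a flexible model on many resampled noisy training sets and compare squared bias and variance of their predictions at one test point.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    # the unknown "truth" the models try to learn
    return np.sin(2 * np.pi * x)

def fit_and_predict(degree, x_test, n_points=20, noise=0.3):
    # fresh noisy training set -> fit a polynomial -> predict at x_test
    x = rng.uniform(0.0, 1.0, n_points)
    y = true_f(x) + rng.normal(0.0, noise, n_points)
    return np.polyval(np.polyfit(x, y, degree), x_test)

x_test = 0.25  # true value here is sin(pi/2) = 1
simple = np.array([fit_and_predict(1, x_test) for _ in range(300)])
flexible = np.array([fit_and_predict(9, x_test) for _ in range(300)])

for name, preds in [("degree 1", simple), ("degree 9", flexible)]:
    bias_sq = (preds.mean() - true_f(x_test)) ** 2
    print("%s: bias^2 = %.3f, variance = %.3f" % (name, bias_sq, preds.var()))
# typical outcome: degree 1 has high bias / low variance, degree 9 the reverse
```

The degree-1 line cannot represent the sine bump (bias), while the degree-9 fit chases the noise of each resampled training set (variance), mirroring the green/black boundary example above.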
{ "domain": "datascience.stackexchange", "id": 5927, "tags": "machine-learning, supervised-learning, variance, bias" }
What is the chemical name of the compound H2S4O?
Question: I have been studying nomenclature and was asked to find the name of a strange compound: $\ce{H2S4O}$. In my opinion, it should be dihydrogen-tetrasulfur oxide which seems very weird. Can somebody advise me on how to name this compound (and whether this compound really exists?, I was able to come up with a structure similar to benzene...) Answer: Section IR-5.4 of IUPAC’s Nomenclature of Inorganic Compounds (Red Book) of 2005 lists the rules that shall apply to naming inorganic compounds by a generalised stoichiometric name which does not carry any information about the compound’s structure. These rules dictate that: The constituents of the compound to be named are divided into formally electropositive and formally electronegative constituents. There must be at least one electropositive and one electronegative constituent. Cations are electropositive and anions electronegative, by definition. Electropositive elements occur later in Table VI than electronegative elements by convention. In principle, the division into electropositive and electronegative constituents is arbitrary if the compound contains more than two elements. In practice, however, there is often no problem in deciding where the division lies. Section IR-5.4.1 In our case, oxygen must be considered electronegative and hydrogen must be considered electropositive since they have the highest and lowest electronegativity, respectively. Within the classes of electronegative or electropositive substituents, ordering is strictly alphabetical. Electropositives come first, electronegatives last. Section IR-5.4.2.1 allows the use of multiplicative prefixes as is the case with binary compounds. Further information such as charge, oxidation numbers or structure is not at our disposal. 
Therefore, according to the principles laid out, we have two choices to name $\ce{H2S4O}$, depending on whether we wish to consider sulphur electropositive or electronegative: dihydrogen tetrasulphur oxide, or dihydrogen oxide tetrasulphide. To the best of my knowledge, this compound does not exist. But the principles of nomenclature are designed in a way to allow the naming of all compounds whether synthesised, hypothesised or fancily drawn on paper.
{ "domain": "chemistry.stackexchange", "id": 8835, "tags": "nomenclature" }
What air pressure is needed on Mars to have liquid water?
Question: The atmospheric pressure on the Martian surface averages 600 pascals (0.087 psi), about 0.6% of Earth's mean sea-level pressure. There is a lot of frozen ice on Mars, but it can't melt because of the low air pressure; it sublimates to vapour if it's warm enough. But what air pressure is needed to have liquid water at 4 °C on its surface? Answer: The triple point of water is at 611.73 Pa and 273.16 K (0.01 °C), so it's actually very close. There can be liquid water in the lowest-lying regions, especially if it comes as brine, but it won't last long and the temperature has to be just right.
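The comparison in the answer is a one-line check: liquid water requires the pressure to exceed the triple-point pressure (611.73 Pa per the answer; 600 Pa mean Martian pressure per the question):

```python
# Liquid water can only exist above the triple-point pressure.
MARS_MEAN_PRESSURE = 600.0      # Pa, average Martian surface pressure
TRIPLE_POINT_PRESSURE = 611.73  # Pa, water's triple point (at 273.16 K)

can_be_liquid = MARS_MEAN_PRESSURE > TRIPLE_POINT_PRESSURE
shortfall = TRIPLE_POINT_PRESSURE - MARS_MEAN_PRESSURE  # how much pressure is missing
print(can_be_liquid, shortfall)
```

The shortfall is only about 12 Pa, which is why low-lying regions (with locally higher pressure) are the plausible candidates.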
{ "domain": "physics.stackexchange", "id": 18854, "tags": "fluid-dynamics, water, air, atmospheric-science" }
Why was the neutrino thought to be massless?
Question: Wolfgang Pauli once said (regarding the neutrino): I have done a terrible thing. I have postulated a particle that cannot be detected. Why did he figure it couldn't be detected? Was this because he thought it was massless? According to Wikipedia, "neutrinos were long believed to be massless". If so, why did they think it was massless? I thought the particle was hypothesised in order to maintain the conservation of momentum in a beta decay. If it was massless, this wouldn't have any effect, right? Answer: I thought the particle was hypothesised in order to maintain the conservation of momentum in a beta-decay. If it was massless, this would have no effect, right? This is where you are confused. Having no mass does not mean having no momentum. I think you are probably thinking of momentum as Newtonian mechanics would express it : $P=mv$ However Einstein came up with a relativistic expression linking energy and momentum which is : $$E^2 = m^2c^4+p^2c^2$$ where $m$ is rest mass. Now even if rest mass is zero, the particle has energy (like photons do) and you get : $$p = \frac E c$$ So massless neutrinos would have momentum.
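The relativistic relation quoted in the answer is easy to verify numerically. A sketch for a hypothetical massless particle of energy 1 MeV (constants rounded; the numbers are illustrative only):

```python
c = 2.998e8            # m/s, speed of light
E = 1.0e6 * 1.602e-19  # 1 MeV converted to joules
m = 0.0                # massless particle

# E^2 = m^2 c^4 + p^2 c^2  =>  for m = 0, p = E / c, yet p is nonzero
p = E / c
lhs = E ** 2
rhs = (m * c * c) ** 2 + (p * c) ** 2
print(p, lhs, rhs)
```

The point of the answer in one line: `p` is strictly positive even though `m` is zero, so a massless neutrino can still carry away momentum in beta decay.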
{ "domain": "physics.stackexchange", "id": 80098, "tags": "mass, standard-model, history, neutrinos" }
What numbers would I apply to the equation $y=y_o+v_ot+\frac 12at^2$ using this simulator?
Question: Using a Projectile Motion Simulator (selecting INTRO from the four choices), I set the height of the Cannon at $15 \ \text{m}$, Initial Speed = $8 \ \text{m/s}$, the bulls-eye at $13.9 \ \text{m}$. Using the Equation $y = y_0 + v_0 t + \frac{1}{2}at^2$, would $y_0 = 15$, $v_0 = 13.9$, $a = 8$? Other information: There is no added air resistance; both the Velocity and Acceleration Vectors are checked for total. P.S. (I really enjoy Physics but I'm new at it and I am trying to get a handle on it. Please forgive me if my inquiries come across as elementary because I am a novice in this fine subject) Answer: $y_0$ is $15$ (height of starting point above the ground), $v_0$ is the initial vertical velocity of zero. So to find the time it takes to hit the ground, solve $$0=15-\frac{10t^2}{2}$$ where the $-10$ is for the acceleration due to gravity and can be put in as $-9.8$ if preferred. Solving gives $t = 1.732\ \text{s}$. The horizontal distance travelled is then $1.732 \times 8$, i.e. the $13.9\ \text{m}$. All the best with it.
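The answer's numbers can be reproduced directly (using the answer's rounded $g = 10\ \text{m/s}^2$):

```python
import math

y0 = 15.0   # m, initial height of the cannon
v0x = 8.0   # m/s, horizontal launch speed (initial vertical speed is zero)
g = 10.0    # m/s^2, gravity as rounded in the answer

# 0 = y0 - g t^2 / 2  =>  t = sqrt(2 y0 / g)
t = math.sqrt(2 * y0 / g)
x = v0x * t  # horizontal distance travelled in that time
print(t, x)  # ~1.732 s and ~13.86 m, matching the 13.9 m bulls-eye
```

Note how $v_0$ in the vertical equation is 0 and $a = -g$; the initial speed of 8 m/s only appears in the separate horizontal equation.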
{ "domain": "physics.stackexchange", "id": 83023, "tags": "kinematics, acceleration, velocity, projectile, free-fall" }
Error analysis of inverse tangent in sine
Question: I am trying to make an error analysis of the following function: $$\lambda = d\sin\left(\tan^{-1}(y/a)\right)$$ My assumption was that it would be as follows: $$\frac{\Delta\lambda}{\lambda} = \frac{\Delta d}{d} + \frac{\cos\left(\frac{1}{1+\left(\frac{y}{a}\left(\frac{\Delta y}{y}+\frac{\Delta a}{a}\right)\right)^2}\right)}{\sin\left(\tan^{-1}(y/a)\right)}$$ But the result is implausible. Where am I going wrong? Answer: The problem you're running into is that propagation of errors adds in quadrature, when the variables are independent. In this particular case, where $\lambda = f(d, y)$: $$\begin{align}(\Delta \lambda)^2 &= \left(\Delta d \frac{\partial f}{\partial d} \right)^2 + \left(\Delta y \frac{\partial f}{\partial y} \right)^2\\ &=\left(\Delta d \sin \left[\tan^{-1}\frac{y}{a}\right]\right)^2 + \frac{(\Delta y)^2 d^2 a^4}{(y^2 + a^2)^3}.\end{align}$$ When $f$ is a function of many variables, $\mathbf{x}$, that have a non-diagonal covariance matrix, $\Sigma$, then the variance in $f$ is: $$(\Delta f)^2 = (\nabla f)\cdot \Sigma \nabla f.$$ When $\mathbf{f}$ is a vector valued function (the number of components in $\mathbf{f}$ and $\mathbf{x}$ are not necessarily the same) the covariance matrix of its components is: $$\left[\Sigma_{\mathbf{f}}\right]_{i,j} = \sum_{n,m = 1}^{\operatorname{dim}\mathbf{x}} \frac{\partial f_i}{\partial x_n} [\Sigma_{\mathbf{x}}]_{n,m} \frac{\partial f_j}{\partial x_m}. $$ Notice how $\frac{\partial f_i}{\partial x_n}$ are the components of the Jacobian matrix, pointing to a relationship between propagation of errors and changing variables in an integral.
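A finite-difference check of the quadrature formula; note that since $\partial f/\partial y = d\,a^2/(y^2+a^2)^{3/2}$, the second term carries a factor $d^2$ that is easy to drop by accident. The numbers below are invented for illustration:

```python
import math

def f(d, y, a):
    """lambda = d * sin(arctan(y / a))"""
    return d * math.sin(math.atan(y / a))

d, y, a = 2.0, 1.0, 3.0
dd, dy = 0.01, 0.02  # assumed uncertainties in d and y (a taken as exact here)

# Analytic propagation: (dl)^2 = (dd * sin(atan(y/a)))^2 + dy^2 * d^2 * a^4 / (y^2+a^2)^3
var_analytic = (dd * math.sin(math.atan(y / a))) ** 2 \
             + dy ** 2 * d ** 2 * a ** 4 / (y ** 2 + a ** 2) ** 3

# Same quantity via central-difference partial derivatives
h = 1e-6
df_dd = (f(d + h, y, a) - f(d - h, y, a)) / (2 * h)
df_dy = (f(d, y + h, a) - f(d, y - h, a)) / (2 * h)
var_numeric = (dd * df_dd) ** 2 + (dy * df_dy) ** 2

print(var_analytic, var_numeric)
```

The two values agree to many digits, which is a handy sanity check whenever you derive a propagation formula by hand.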
{ "domain": "physics.stackexchange", "id": 33635, "tags": "differentiation, error-analysis" }
Largest area of identical adjacent matrix elements
Question: Given a 2D matrix find the largest area of identical elements. Here is my first implementation: using System; using System.Collections.Generic; namespace ProgrammingBasics { class MaxArrayArea { static void Main() { // target matrix int[,] matrix = new int[,] { { 1, 3, 2, 2, 2, 4 }, { 3, 3, 3, 2, 4, 4 }, { 4, 3, 1, 2, 3, 3 }, { 4, 3, 1, 3, 3, 1 }, { 4, 3, 3, 3, 1, 1,} }; PrintMatrix(matrix); MaxAreaOfIdenticalAdjacentElements(matrix); } //---------------------------------------------------------------------------- /* Method: MaxAreaOfIdenticalAdjacentElements (arr2D); */ static void MaxAreaOfIdenticalAdjacentElements(int[,] arr2D) { int value = 0; int largestArea = 0; HashSet<Tuple<int, int>> largestAreaElements = new HashSet<Tuple<int, int>>(); for (int i = 0; i < arr2D.GetLength(0); i++) { for (int j = 0; j < arr2D.GetLength(1); j++) { // stores all unique matrix elements with the same values HashSet<Tuple<int, int>> visited = new HashSet<Tuple<int, int>>(); // mark start element as visited Tuple<int, int> MatrixElement = new Tuple<int, int>(i, j); visited.Add(MatrixElement); CheckLeft(arr2D, i, j, visited); CheckRight(arr2D, i, j, visited); CheckUp(arr2D, i, j, visited); CheckDown(arr2D, i, j, visited); // check if area with current value is largest if (visited.Count > largestArea) { value = arr2D[i, j]; largestArea = visited.Count; largestAreaElements = visited; } } } // mark the area char[,] area = new char[arr2D.GetLength(0), arr2D.GetLength(1)]; foreach (var item in largestAreaElements) { area[item.Item1, item.Item2] = '*'; } PrintMatrix(area); // print result Console.WriteLine("Value: {0} Area: {1}", value, largestArea); } //---------------------------------------------------------------------------- /* Method: CheckLeft (arr2D, i, j, visited); arr2D - target 2D matrix storing seached elements i - row of current element j - column of current element visited - collection holding the unique adjacent elements with same value It recursively checks for 
elements with identical values to the left, up, and down. */ static void CheckLeft(int[,] arr2D, int i, int j, HashSet<Tuple<int, int>> visited) { int currentValue = arr2D[i, j]; int col = j - 1; if (col < 0) { return; } while (col >= 0) { Tuple<int, int> nextElement = new Tuple<int, int>(i, col); if (arr2D[i, col] == currentValue && !visited.Contains(nextElement)) { visited.Add(nextElement); // check up CheckUp(arr2D, i, col, visited); // check down CheckDown(arr2D, i, col, visited); } else { break; } --col; } } //---------------------------------------------------------------------------- /* Method: CheckUp (arr2D, i, j, visited); arr2D - target 2D matrix storing seached elements i - row of current element j - column of current element visited - collection holding the unique adjacent elements with same value It recursively checks for elements with identical values to the up, left, and right. */ static void CheckUp(int[,] arr2D, int i, int j, HashSet<Tuple<int, int>> visited) { int currentValue = arr2D[i, j]; int row = i - 1; if (row < 0) { return; } while (row >= 0) { Tuple<int, int> nextElement = new Tuple<int, int>(row, j); if (arr2D[row, j] == currentValue && !visited.Contains(nextElement)) { visited.Add(nextElement); // check left CheckLeft(arr2D, row, j, visited); // check right CheckRight(arr2D, row, j, visited); } else { break; } --row; } } //---------------------------------------------------------------------------- /* Method: CheckRight (arr2D, i, j, visited); arr2D - target 2D matrix storing seached elements i - row of current element j - column of current element visited - collection holding the unique adjacent elements with same value It recursively checks for elements with identical values to the right, up, and down. 
*/ static void CheckRight(int[,] arr2D, int i, int j, HashSet<Tuple<int, int>> visited) { int currentValue = arr2D[i, j]; int col = j + 1; if (col >= arr2D.GetLength(1)) { return; } while (col < arr2D.GetLength(1)) { Tuple<int, int> nextElement = new Tuple<int, int>(i, col); if (arr2D[i, col] == currentValue && !visited.Contains(nextElement)) { visited.Add(nextElement); // check up CheckUp(arr2D, i, col, visited); // check down CheckDown(arr2D, i, col, visited); } else { break; } ++col; } } //---------------------------------------------------------------------------- /* Method: CheckDown (arr2D, i, j, visited); arr2D - target 2D matrix storing seached elements i - row of current element j - column of current element visited - collection holding the unique adjacent elements with same value It recursively checks for elements with identical values to the down, left, and right. */ static void CheckDown(int[,] arr2D, int i, int j, HashSet<Tuple<int, int>> visited) { int currentValue = arr2D[i, j]; int row = i + 1; if (row >= arr2D.GetLength(0)) { return; } while (row < arr2D.GetLength(0)) { Tuple<int, int> nextElement = new Tuple<int, int>(row, j); if (arr2D[row, j] == currentValue && !visited.Contains(nextElement)) { visited.Add(nextElement); // check left CheckLeft(arr2D, row, j, visited); // check right CheckRight(arr2D, row, j, visited); } else { break; } ++row; } } //----------------------------------------------------------------------------- /* Method: PrintMatrix(arr); It prints all the elements of the 2D integer array. */ static void PrintMatrix(int[,] arr) { for (int row = 0; row < arr.GetLength(0); row++) { for (int column = 0; column < arr.GetLength(1); column++) { Console.Write("{0,3} ", arr[row, column]); } Console.WriteLine(); } Console.WriteLine(); } //----------------------------------------------------------------------------- /* Method: PrintMatrix(arr); It prints all the elements of the 2D char array. 
*/ static void PrintMatrix(char[,] arr) { for (int row = 0; row < arr.GetLength(0); row++) { for (int column = 0; column < arr.GetLength(1); column++) { Console.Write("{0,3} ", arr[row, column]); } Console.WriteLine(); } Console.WriteLine(); } } } Output: Any constructive criticism will be greatly appreciated, especially regarding the reduction of the algorithm complexity. Answer: Any constructive criticism will be greatly appreciated, especially regarding the reduction of the algorithm complexity. Rather than doing a line-by-line review of your code, let me suggest a general problem-solving technique. Sometimes you get nicer code by solving the more general problem, and then applying your general solution to a specific problem. Suppose we have an interface: interface IGraph<T> where T : IEquatable<T> { IEnumerable<T> Nodes { get; } IEnumerable<T> Neighbors(T t); } That is, a graph is a thing that has a collection of nodes, and every node has a collection of nodes that it is "beside". Can you write this method? IEnumerable<T> Reachable(IGraph<T> graph, T start) That is, given a graph and a node, return the set of nodes that is "reachable" starting at the start node. Can you write this method? IEnumerable<IEnumerable<T>> Partitions(IGraph<T> graph) That is, given a graph, return a sequence where each element in the sequence is a collection of nodes that are all reachable from each other. That is, the transitive and reflexive closure of reachability. And can you write this method? IGraph<MyNode> ArrayToGraph(int[,] array) That is, the method takes in an array and returns a graph of MyNode where two nodes are neighbours if they are neighbours in the array and have the same value. If you can write those three methods then the solution to your problem is a one-liner: int max = Partitions(ArrayToGraph(arr)).Select(p => p.Count()).Max(); When you solve the more general problem, the code to solve the more specific problem becomes very short and easy to understand.
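The graph decomposition the answer sketches amounts to finding connected components of equal-valued cells. Here is a compact flood-fill sketch in Python (not C#, for brevity) run on the question's sample matrix:

```python
from collections import deque

def largest_area(grid):
    """Return (value, size) of the largest 4-connected region of equal cells."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    best = (None, 0)
    for i in range(rows):
        for j in range(cols):
            if (i, j) in seen:
                continue
            # BFS over the component containing (i, j)
            value, queue, size = grid[i][j], deque([(i, j)]), 0
            seen.add((i, j))
            while queue:
                r, c = queue.popleft()
                size += 1
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if (0 <= nr < rows and 0 <= nc < cols
                            and (nr, nc) not in seen and grid[nr][nc] == value):
                        seen.add((nr, nc))
                        queue.append((nr, nc))
            if size > best[1]:
                best = (value, size)
    return best

matrix = [
    [1, 3, 2, 2, 2, 4],
    [3, 3, 3, 2, 4, 4],
    [4, 3, 1, 2, 3, 3],
    [4, 3, 1, 3, 3, 1],
    [4, 3, 3, 3, 1, 1],
]
print(largest_area(matrix))  # value 3 with area 13
```

Each cell is visited once, so this runs in O(rows × cols), versus the original code's repeated directional scans from every cell.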
{ "domain": "codereview.stackexchange", "id": 21664, "tags": "c#, algorithm, matrix" }
Telescope in Sun's gravity lens focus - pointing, gain, distortions
Question: A telescope located in the gravitational focus of the Sun can use the Sun as a magnifying lens. The focus begins 550 AU away, but maybe a 700 or 1000 AU distance is needed to get rid of disturbances from the Corona, and the focus extends practically indefinitely. Here are some slides by Dr. Maccone, who has promoted this idea, which he calls FOCAL: http://www.spaceroutes.com/astrocon/AstroconVTalks/Maccone-AstroconV.pdf I intend to ask about the technical design and feasibility of such a project in the Space Exploration SE. Here I rather ask about the scientific value and challenges. POINTING: The magnification would occur only in the exact direction of the Sun. But since the Sun moves and the magnified background objects move, I suppose that the observed targets would change continuously. Would it even be practically possible to give the telescope a trajectory which keeps it aiming at, for example, Alpha Centauri? Would there most of the time be nothing in the right direction as the line between the telescope and the Sun sweeps across space, or would there always be some star or galaxy in sight? Like the CMB if nothing else. GAIN: In the slides linked above, Maccone has calculated the expected gain to be 114 dB for infrared wavelengths. How many times "magnification" does this mean? I don't think I understand the units here; I get a ridiculously large number. Can it be explained somewhat intuitively? Would a FOCAL mission be a unique revolution in astronomy, or could similar results be achieved by building an interferometer with interplanetary-sized baselines here nearer to the Sun? How does the science value of a gravity lens compare to that of a wide baseline? Are they good for different tasks? DISTORTIONS: Could the lensed signals be reconstructed thanks to our knowledge of the Sun and measurements of corona activity? If pointed towards a central part of the Milky Way, wouldn't signals come from multiple objects at the same time, some much further away than others? 
Would the gravity lens of the Sun have bigger problems with distortion than the intergalactic gravity lenses we know of today? And finally, can any natural strong lensing inside the Milky Way be used today, for example using a globular cluster as a lens? Answer: The pointing is not a fundamental problem with the suggested design: the suggested trajectory is designed to include a Sun flyby as the last flyby. This ensures an asymptotically radial trajectory away from the Sun after the flyby, hence maintaining the pointing relative to the Sun. The proper motion of the observed object may pose some challenge, but the trajectory could be adjusted appropriately by an additional burn. 114 dB is an amplification by a factor of about $2.51\cdot 10^{11}$. It doesn't refer just to the magnification, but to the intensity of the signal. Therefore interferometry with a long baseline isn't the same; the latter provides high resolution. Whether those numbers are achievable in practice is a different question; a factor of 1000 for naturally occurring gravitational lenses would mostly be regarded as excellent. Theory allows arbitrary amplification for perfect alignment of observer, lens and observed object. Most of the distortions can be deconvolved, revealing the field of gravity of the Sun and the shape of the observed object. Gravitational lensing is used today; here are some lecture notes, and here quasars as an example.
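The decibel-to-linear conversion behind the answer's figure is just $10^{\mathrm{dB}/10}$ for a power ratio:

```python
gain_db = 114.0
gain_linear = 10 ** (gain_db / 10)  # power (intensity) ratio, not an amplitude ratio
print(gain_linear)  # ~2.51e11, the factor quoted in the answer
```

For an amplitude (field) ratio the exponent would be dB/20 instead, which is one common source of the confusion the question describes.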
{ "domain": "astronomy.stackexchange", "id": 322, "tags": "radio-astronomy, space-telescope, gravitational-lensing" }
Can Amps be used as a unit of measure for the amplitude of an EM wave?
Question: As far as I know the definition for the amplitude of a wave is the distance between the line $y=0$ and the peak of the wave which in most cases is going to be a unit of length. I also understand that for different waves, different units of measure can be used, for example the amplitude of a sound wave can be measured in decibels. My question is, can the amplitude of an electromagnetic wave, in particular a radio wave be measured in amps? (This is what has been taught to me on a course but I am struggling to make the connection) Answer: No. An EM wave fundamentally consists of electric and magnetic fields, which in SI units are measured in volts per meter (V/m) and tesla (T), respectively. Customarily the peak value of the electric field $E_0$ is quoted as "the" amplitude of the wave, but the peak value of the magnetic field $B_0$ is related to $E_0$ quite simply: $B_0 = E_0/c$, where $c$ is the speed of light in whatever medium the waves are travelling through. It's true that if this wave is incident upon an antenna, the electric field would cause a current to flow through the antenna, and this current would be measured in amperes. But the amount of current that flows would be dependent on the specific properties of the antenna; a different antenna would experience a different amount of current when subjected to the same wave.
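The relation $B_0 = E_0/c$ from the answer is a one-liner to check; the 1 V/m amplitude below is an arbitrary assumed value:

```python
c = 2.998e8   # m/s, speed of light in vacuum
E0 = 1.0      # V/m, assumed electric-field amplitude of the wave
B0 = E0 / c   # tesla
print(B0)     # ~3.34e-9 T
```

This illustrates why EM wave amplitudes are quoted in V/m (or T), while amperes only enter once a specific antenna converts the field into a current.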
{ "domain": "physics.stackexchange", "id": 41070, "tags": "waves, electromagnetic-radiation, electric-current, units, radio" }
When should you use the existential and universal quantifiers for Relational Calculus?
Question: Could someone explain to me WHEN do you use existential and universal quantifiers for Relational Calculus? Given this schema: Hotel(hotelNo, hotelName, city) The expression below gets all hotel names based in London (expressed in Domain Relational Calculus) $$\mathrm{HotelName|(\exists hNo,cty)(Hotel(hNo,hotelName,cty)\wedge cty=\,\,"\!\!London\!")}$$ Why is the existential quantifier required in this example? Wouldn't this be the equivalent {hotelName | (Hotel(hNo, hotelName, cty) AND cty='London')} And why are only hNo and cty selected for the existential quantifier? What about the hotelName? Source: Herts University - Advanced Database Course Answer: This question is related to the very basics of database theory, finite model theory and logics. I would strongly suggest Abiteboul's book on Foundations of Databases, or Libkin's book on Finite Model Theory. Very roughly stated, a database is a collection of facts, and a query is a logical formula, which is used to specify certain patterns to be matched against the database. The most common database query language is unions of conjunctive queries, which is simply a disjunction of conjunctive queries. These are existentially quantified queries and there is NO universal quantification at all. The query in the question is indeed the simple conjunctive query $\exists \ x \ {Hotel(x, y, london)} $ where $x$ and $y$ are logical variables and $london$ is a constant. Intuitively, the $x$ variables are to be matched to hotel numbers, and $y$ variables to hotel names. Now, in this formula, $y$ is a free variable, i.e. it is not bound to any quantifier. It is wrong to assume that it is bound to a universal quantifier. Such variables are also called the answer variables as these are the variables for which you want to retrieve answers. Note that $x$ is not an answer variable; so, you are not interested in the hotel numbers. All you want to say with this query is: Give me all the names of the hotels in London! 
In contrast, consider this query ${Hotel(x, y, london)} $ where $x$ is also a free variable. It asks for all names and numbers of the hotels in London. A Boolean query is a special case of a conjunctive query that does not contain any free variables. For instance, the query $\exists \ x,y \ {Hotel(x, y, london)} $ has no free variables and asks a yes/no question: Is there a hotel in London (with some hotel number and name)? Overall, please have a look at the reference books, and simply learn the query semantics.
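The query semantics described above can be mimicked in Python: the existential variable hNo is simply projected away, while the free (answer) variable hotelName is what the comprehension returns. The sample rows are invented:

```python
# Hotel(hotelNo, hotelName, city) as a list of tuples
hotels = [
    (1, "Grosvenor", "London"),
    (2, "Ritz", "London"),
    (3, "Balmoral", "Edinburgh"),
]

# {hotelName | (exists hNo, cty)(Hotel(hNo, hotelName, cty) AND cty = 'London')}
# h_no is bound inside the comprehension and never returned: that's the existential.
london_names = {name for (h_no, name, city) in hotels if city == "London"}
print(london_names)

# Boolean query (no free variables): is there any hotel in London at all?
exists_london_hotel = any(city == "London" for (_, _, city) in hotels)
```

Making hNo free instead would mean returning `(h_no, name)` pairs, which matches the "names and numbers" query in the answer.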
{ "domain": "cs.stackexchange", "id": 8307, "tags": "database-theory, databases, relational-algebra" }
Binary genetic programming image classifier's fitness function
Question: I am trying to figure out how to improve my binary image genetic programming classifier's fitness. It takes images and classifies them if it has some feature X or not in it. These are the main points: It takes an image and looks at the first 8 x 8 pixel values (called window). It saves these 8 x 8 values into an array and runs decodeIndividual on them. decodeIndividual simply runs the individual's function and retrieves the first and last registers. Last register is the scratchVariable that is updated per each window throughout an image. The first register is the main identifier per window and it adds it to the y_result which is kept for one image. When all the windows have been evaluated, y_result is compared to the ground truth and the difference is added to the error. Then the same steps are repeated for another image. Heres the code: float GeneticProgramming::evaluateIndividual(Individual individualToEvaluate) { float y_result = 0.0f; float error = 0.0f; for (int m = 0; m < number; m++) { int scratchVariable = SCRATCH_VAR; for (int row = 0; row <= images[m].rows - WINDOW_SIZE; row += STEP) { for (int col = 0; col <= images[m].cols - WINDOW_SIZE; col += STEP) { int registers[NUMBER_OF_REGISTERS] = {0}; for (int i = 0; i < NUMBER_OF_REGISTERS-1; i++) { for (int y = 0; y < row + STEP; y++) { for (int x = 0; x < col + STEP; x++) { registers[i] = images[m].at<uchar>(y,x); } } } registers[NUMBER_OF_REGISTERS-1] = scratchVariable; // we run individual on a separate small window of size 8x8 std::pair<float, float> answer = decodeIndividual(individualToEvaluate, registers); y_result += answer.first; scratchVariable = answer.second; } } float diff = y_groundtruth - y_result; // want to look at squared error error += pow(diff, 2); // restart the y_result per image float y_result = 0.0f; } cout << "Done with individual " << individualToEvaluate.index << endl; return error; } images is just a vector where I stored all of my images. 
I also added the decodeIndividual function which just looks at instructions and the given registers from the window and runs the list of instructions. std::pair<float, float> GeneticProgramming::decodeIndividual(Individual individualToDecode, int *array) { for(int i = 0; i < individualToDecode.getSize(); i++) // MAX_LENGTH { Instruction currentInstruction = individualToDecode.getInstructions()[i]; float operand1 = array[currentInstruction.op1]; float operand2 = array[currentInstruction.op2]; float result = 0; switch(currentInstruction.operation) { case 0: //+ result = operand1 + operand2; break; case 1: //- result = operand1 - operand2; break; case 2: //* result = operand1 * operand2; break; case 3: /// (division) if (operand2 == 0) { result = SAFE_DIVISION_DEF; break; } result = operand1 / operand2; break; case 4: // square root if (operand1 < 0) { result = SAFE_DIVISION_DEF; break; } result = sqrt(operand1); break; case 5: if (operand2 < 0) { result = SAFE_DIVISION_DEF; break; } result = sqrt(operand2); break; default: cout << "Default" << endl; break; } array[currentInstruction.reg] = result; } return std::make_pair(array[0], array[NUMBER_OF_REGISTERS-1]); } The problem is that I have: 6 grey scale images reduced to size 60 x 80 The window size is 8 x 8 Step is 2 Number of registers is 65 Yet it takes over 3 seconds to evaluate these 6 incredibly small images. How do I improve my code? I would appreciate anyone pointing out some mistakes or at least providing some guidance. I am thinking of using threads to evaluate each individual separately. EDIT: So I have adjusted my code. 
float GeneticProgramming::evaluateIndividual(Individual individualToEvaluate) { float y_result = 0.0f; float error = 0.0f; for (int m = 0; m < number; m++) { int scratchVariable = SCRATCH_VAR; for (int row = 0; row <= images[m].rows - WINDOW_SIZE; row += STEP) { for (int col = 0; col <= images[m].cols - WINDOW_SIZE; col += STEP) { cv::Rect windows(col, row, WINDOW_SIZE, WINDOW_SIZE); cv::Mat roi = images[m](windows); std::pair<float, float> answer = decodeIndividual(individualToEvaluate, roi, scratchVariable); y_result += answer.first; scratchVariable = answer.second; } } float diff = y_groundtruth - y_result; // want to look at squared error error += pow(diff, 2); // restart the y_result per image float y_result = 0.0f; } cout << "Done with individual " << individualToEvaluate.index << endl; return error; } I also changed the decodeIndividual() so that it takes the roi and a scratchVariable as follows: std::pair<float, float> GeneticProgramming::decodeIndividual(Individual individualToDecode, cv::Mat &registers, int &scratchVariable) { int array[NUMBER_OF_REGISTERS]; unsigned char* p; for(int ii = 0; ii < WINDOW_SIZE; ii++) { p = registers.ptr<uchar>(ii); for(int jj = 0; jj < WINDOW_SIZE; jj++) { array[ii*WINDOW_SIZE+jj] = p[jj]; } } array[NUMBER_OF_REGISTERS-1] = scratchVariable; for(int i = 0; i < individualToDecode.getSize(); i++) // MAX_LENGTH { Instruction currentInstruction = individualToDecode.getInstructions()[i]; float operand1 = array[currentInstruction.op1]; float operand2 = array[currentInstruction.op2]; float result = 0; switch(currentInstruction.operation) { case 0: //+ result = operand1 + operand2; break; case 1: //- result = operand1 - operand2; break; case 2: //* result = operand1 * operand2; break; case 3: /// (division) if (operand2 == 0) { result = SAFE_DIVISION_DEF; break; } result = operand1 / operand2; break; case 4: // square root if (operand1 < 0) { result = SAFE_DIVISION_DEF; break; } result = sqrt(operand1); break; case 5: if (operand2 < 
0) { result = SAFE_DIVISION_DEF; break; } result = sqrt(operand2); break; default: cout << "Default" << endl; break; } array[currentInstruction.reg] = result; } return std::make_pair(array[0], array[NUMBER_OF_REGISTERS-1]); } Yet I am still receiving unsatisfying results. Any ideas? Answer: I'm concerned with this bit of code, the inner 3 loops: int registers[NUMBER_OF_REGISTERS] = {0}; for (int i = 0; i < NUMBER_OF_REGISTERS-1; i++) { for (int y = 0; y < row + STEP; y++) { for (int x = 0; x < col + STEP; x++) { registers[i] = images[m].at<uchar>(y,x); } } } The inner loop writes into the same array element registers[i] every time. Therefore it can be simplified to: int registers[NUMBER_OF_REGISTERS] = {0}; for (int i = 0; i < NUMBER_OF_REGISTERS-1; i++) { for (int y = 0; y < row + STEP; y++) { int x = col + STEP - 1; registers[i] = images[m].at<uchar>(y,x); } } Again, the new inner loop does nothing: int registers[NUMBER_OF_REGISTERS] = {0}; for (int i = 0; i < NUMBER_OF_REGISTERS-1; i++) { int y = row + STEP - 1; int x = col + STEP - 1; registers[i] = images[m].at<uchar>(y,x); } And this we can simplify to: int registers[NUMBER_OF_REGISTERS]; int y = row + STEP - 1; int x = col + STEP - 1; int value = images[m].at<uchar>(y,x); for (int i = 0; i < NUMBER_OF_REGISTERS-1; i++) { registers[i] = value; } This of course does not look like anything you might have intended to write. I think your code does not do what you intended it to do. Don't worry with speed until your code works as intended.
{ "domain": "codereview.stackexchange", "id": 34295, "tags": "c++, performance, image, genetic-algorithm" }
How do I find the right lens for my laser?
Question: I purchased this line laser recently and I'm running into a bit of an issue. The laser shoots out at a 120 degree angle which is perfect. However, once the laser spreads to about 4.25 inches, I need to redirect the light to move straight again. However, I have never worked with lenses before. I'm assuming that if the output angle of the laser is 120 degrees then to redirect it into straight line the arc angle of the lens needs to be 120 degrees as well? Is this assumption correct? I'm looking at this for reference at the moment. Answer: You can't do this with a single "normal" lens. Because the beam width needs to be 4.25 inches you need a lens wider than that (which is huge compared to normal optical components). The focal length of the lens would need to be 4.25 in/(2*sin(60 degrees)) ~ 2.5 inches = 63.5 mm which is smaller than the width of the lens, and you can't really make normal plano-convex lenses like this. You have two options -- you can use multiple lenses (one lens to collimate, then a pair of lenses to step it up to a larger beam width), or you could also get ok results from using a Fresnel lens -- one that is fairly close to what you need is here. The second option will probably be much cheaper, but may suffer from more aberrations.
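The focal-length estimate in the answer follows from the geometry alone (4.25 in beam width, 120° full fan angle, so a 60° half-angle); the exact value comes out near 62 mm, in line with the answer's rough 2.5 in ≈ 63.5 mm figure:

```python
import math

beam_width_in = 4.25           # in, desired collimated beam width
half_angle = math.radians(60)  # half of the 120-degree fan angle

# Marginal ray geometry: f = (w/2) / sin(theta_half)
focal_length_in = beam_width_in / (2 * math.sin(half_angle))
focal_length_mm = focal_length_in * 25.4
print(focal_length_in, focal_length_mm)
```

Since the required focal length (~62 mm) is smaller than the required lens diameter (>108 mm), a single plano-convex lens can't do it, which is exactly the answer's point.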
{ "domain": "physics.stackexchange", "id": 19415, "tags": "optics, electromagnetic-radiation, laser, lenses, experimental-technique" }
Which 2P1R Games are Potentially Sharp?
Question: Two-prover one-round (2P1R) games are an essential tool for hardness of approximation. Specifically, the parallel repetition of two-prover one-round games gives a way to increase the size of a gap in the decision version of an approximation problem. See Ran Raz's survey talk at CCC 2010 for an overview of the subject. The parallel repetition of a game has the astonishing property that while a randomized verifier operates independently, the two players can play the games in a non-independent way to achieve better success than playing each game independently. The amount of success is bounded above by the parallel repetition theorem of Raz: Theorem: There exists a universal constant $c$ so that for every 2P1R game $G$ with value $1-\epsilon$ and answer size $s$, the value of the parallel repetition game $G^n$ is at most $(1-\epsilon^c)^{\Omega(n/s)}$. Here is an outline of the work on identifying this constant $c$: Raz's original paper proves $c \leq 32$. Holenstein improved this to $c \leq 3$. Rao showed that $c' \leq 2$ suffices (and the dependence on $s$ is removed) for the special case of projection games. Raz gave a strategy for the odd-cycle game that showed Rao's result is sharp for projection games. By this body of work, we know $2 \leq c \leq 3$. My two questions are as follows: Question 1: Do experts in this area have a consensus for the exact value of $c$? If it is thought that $c > 2$, are there specific games which are not projection games, but also specifically violate the extra properties of projection games that Rao's proof requires? Question 2: If $c > 2$, which interesting games violate Rao's strategy and have a potential to be sharp examples? From my own reading, it seems the most important property of projection games that Rao uses is that a good strategy for parallel repetition would not use many of the possible answers for certain questions. This is somehow related to the locality of projection games.
Answer: I tend to believe that c=3 is the right answer for the general case, and that it should be possible to give an example. I'll have to think more about that to know for sure. It's a good question, and I don't know of existing work about it. Research recently focused on which types of games have (best possible) c=1, mostly because of possible applications to amplification of unique games. Barak et al generalized the counterexample of Raz to all unique games with SDP gaps. Raz and Rosen showed that for expanding projection games c=1. There are also previous results by a super-set of those authors for free games.
{ "domain": "cstheory.stackexchange", "id": 498, "tags": "cc.complexity-theory, approximation-hardness" }
How to filter out a type from a rosbag record?
Question: I have on my hands an 18GiB rosbag that contains multiple topics of PointCloud2 type. From my experience those are the main contributors to the rosbags' size. After the bag is recorded I can use the following to throw away the PointCloud2 messages: rosbag filter big.bag small.bag "'PointCloud2' not in str(type(m))" but what flag to rosbag record should I use to not record them? I know rosbag record -x but according to documentation that only works with regexes and my PointCloud2 topics have different names. By default I use rosbag record -a since the PointCloud2 topics are only a fraction of what I want to record. Originally posted by Aleksander Bobiński on ROS Answers with karma: 26 on 2021-04-22 Post score: 0 Answer: Somewhat guided by gvdhoorn's answer I wrote the following shell wrapper around rosbag record. unwanted_type=PointCloud2 unwanted_topics_as_regex_alternative=$(rostopic list --verbose | grep $unwanted_type | cut -d '[' -f 1 | cut -d '*' -f2- | sed -E 's/^ (.*) $/\1/g' | tr '\n' '|') rosbag record -a -x "($unwanted_topics_as_regex_alternative)" Originally posted by Aleksander Bobiński with karma: 26 on 2021-04-23 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by gvdhoorn on 2021-04-24: Nice idea to use rostopic in a shell script wrapper. Would rostopic list with rostopic type be less --verbose? Comment by Aleksander Bobiński on 2021-04-24: I think one would have to use a for loop since rostopic type does not accept multiple topic names as arguments. Maybe if rostopic type would accept topic names from stdin this could be done in one line without the cut tricks by piping from rostopic list to rostopic type to grep. Comment by gvdhoorn on 2021-04-24: Maybe if rostopic type would accept topic names from stdin this could be done in one line without the cut tricks by piping from rostopic list to rostopic type to grep. afaik, it does support that. Comment by Aleksander Bobiński on 2021-04-24: Are you sure?
For me echo "/velodyne_points" | rostopic type results in: rostopic type: error: the following arguments are required: topic_or_field Comment by gvdhoorn on 2021-04-24: You can do it with xargs: rostopic list | xargs -i rostopic type {}. But you don't really gain anything doing this I believe.
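The accepted answer's shell pipeline can also be sketched in Python, which avoids the cut/sed tricks entirely. This is an illustrative sketch: the (topic, type) pairs below are made-up example names; in practice they would come from `rostopic list --verbose` or the ROS master API.

```python
import re

# Hypothetical (topic, type) pairs -- in practice, query them from the ROS master.
topics = [
    ("/velodyne_points", "sensor_msgs/PointCloud2"),
    ("/camera/depth/points", "sensor_msgs/PointCloud2"),
    ("/odom", "nav_msgs/Odometry"),
    ("/tf", "tf2_msgs/TFMessage"),
]

unwanted_type = "PointCloud2"
# Escape each topic name so any regex metacharacters in names can't misfire.
excluded = [re.escape(topic) for topic, msg_type in topics if unwanted_type in msg_type]
exclude_regex = "(" + "|".join(excluded) + ")"
print(exclude_regex)
# The resulting alternative would then be passed to: rosbag record -a -x "<exclude_regex>"
```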
{ "domain": "robotics.stackexchange", "id": 36353, "tags": "ros, filter, ros-melodic, rosbag" }
What is a projection method?
Question: Quoting from Solenthaler et al. Predictive-Corrective Incompressible SPH (ACM Transactions on Graphics, Vol. 28, No. 3, Article 40, Publication date: August 2009) (PDF link here) These incompressible SPH (ISPH) methods first integrate the velocity field in time without enforcing incompressibility. Then, either the intermediate velocity field, the resulting variation in particle density, or both are projected onto a divergence-free space to satisfy incompressibility through a pressure Poisson equation. What does the author mean by "project"? Is there a simpler way of understanding this operation? I tried reading other articles, but they go even deeper, talking about solenoidal vectors etc. I am looking for a simple explanation at first to understand the concept. Answer: This would be a better question for scicomp.stackexchange.com. But, in essence, the projection method refers to a two-step process of integrating the velocity field. Incompressible solvers are designed to advance the velocity field so that it most accurately solves the discretized momentum equation. There are several methods available to do this, such as the pressure correction method, the theta method and the pressure projection method. In the latter case, there are essentially two steps. Step 1: In the first step, we obtain an intermediate velocity in the absence of a pressure field (that way we can solve it easily). $\frac{\mathbf{u^*}-\mathbf{u^n}}{\Delta t} =$ all remaining terms in momentum equation - pressure terms This gives you an intermediate velocity $u^*$ which, because of the missing pressure term, does not satisfy continuity (hence why he said "... without enforcing incompressibility"). Step 2: The second step brings the pressure term back in order to correct the divergent velocity field, $u^*$.
This is the part where he says "are projected onto a divergence-free space to satisfy incompressibility through a pressure Poisson equation.", in other words: The divergent velocity field is corrected with the corrector: $\frac{\mathbf{u}^{n+1}-\mathbf{u}^*}{\Delta t} = -\mathbf{G}p^{n+1/2}$, where $\mathbf{G}$ is the discrete gradient operator and the pressure field is computed with: $Lp^{n+1/2} = \frac{1}{\Delta t} \mathbf{D} \cdot \mathbf{u^*}$ (poisson equation), where $\mathbf{D}$ is the discrete divergence operator and $L$ is the discrete Laplacian operator. The superscripts indicate the time step, note that n+1/2 is at half time step. So, first you calculate a divergent velocity field $\mathbf{u^*}$ which lives in a space that does not satisfy continuity. Next you correct that velocity by "projecting" the divergent velocity field onto a subspace which is divergence free (that is, the continuity equation is satisfied) using the pressure solution from the Poisson equation. They mean projection in the mathematical sense that you are making a transformation from one space to another.
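To make the corrector step concrete, here is a small self-contained sketch (my own illustration, not from the paper or the answer above): it takes an arbitrary periodic 2D velocity field $\mathbf{u}^*$ with nonzero divergence, solves the pressure Poisson equation spectrally with NumPy, and applies the corrector, after which the discrete divergence vanishes to machine precision ($\Delta t$ is absorbed into $p$).

```python
import numpy as np

n, L = 64, 2 * np.pi
x = np.arange(n) * L / n
X, Y = np.meshgrid(x, x, indexing="ij")

# An arbitrary "intermediate" velocity field u* with nonzero divergence.
u = np.sin(X) * np.cos(Y) + 0.3 * np.cos(2 * X)
v = np.cos(X) * np.sin(Y) + 0.2 * np.sin(3 * Y)

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0  # the k=0 (mean pressure) mode is arbitrary; avoid 0/0

uh, vh = np.fft.fft2(u), np.fft.fft2(v)
div_h = 1j * KX * uh + 1j * KY * vh          # D.u* in Fourier space
p_h = -div_h / K2                            # solve L p = D.u*  (i.e. -k^2 p_h = div_h)

# Corrector step: u^{n+1} = u* - G p
u_new = np.real(np.fft.ifft2(uh - 1j * KX * p_h))
v_new = np.real(np.fft.ifft2(vh - 1j * KY * p_h))

div_after = np.real(np.fft.ifft2(
    1j * KX * np.fft.fft2(u_new) + 1j * KY * np.fft.fft2(v_new)))
print(np.abs(div_after).max())   # tiny: the projected field is divergence-free
```

This is exactly the "projection onto a divergence-free subspace": the Poisson solve extracts the curl-free part of $\mathbf{u}^*$ and the corrector subtracts it.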
{ "domain": "physics.stackexchange", "id": 10932, "tags": "fluid-dynamics, calculus" }
(Conveniently) deleting packages using rosinstall/rosws
Question: For a large project, we use a combination of a catkin workspace and a rosbuild one (with the rosbuild one getting initialized from the catkin one). Adding new packages to either of those workspaces is convenient and easy, by just adding to the corresponding rosinstall file and running an update script. We now have a case where a package has been catkinized, which means it has to be migrated from the rosbuild to the catkin workspace. As mentioned, adding it to the catkin workspace is easy, by just adding it to the rosinstall file. Is there a similarly easy way to remove a package from the rosbuild workspace? The way I know is by running rosws/wstool rm [PATH_TO_PACKAGE] and performing a manual rm [PATH_TO_PACKAGE] -rf afterwards. That's pretty straightforward for performing it on a single machine, but when multiple/many people are working on a project, cluttering update scripts with this kind of additional commands for removal of old stuff quickly gets ugly. I suppose the most convenient option would be to be able to specify this kind of thing also in a rosinstall file, although that certainly is not without problems either. Any hints on how to perform selective delete operations on workspaces in a more elegant way are welcome :) Originally posted by Stefan Kohlbrecher on ROS Answers with karma: 24361 on 2013-08-21 Post score: 2 Answer: For the rosws workspace each folder is whitelisted into the ROS_PACKAGE_PATH based on the .rosinstall file. If you remove an entry from the .rosinstall file, and then share it with someone, they can run rosws regenerate to get new setup.*sh files which will not add the removed entry's folder to the ROS_PACKAGE_PATH. For catkin, you'll have to rm -rf the folder or touch src/pkg_to_ignore/CATKIN_IGNORE. Originally posted by William with karma: 17335 on 2013-08-21 This answer was ACCEPTED on the original site Post score: 2
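One way the "specify removals via the rosinstall file" idea could be approximated in a shared update script (my own sketch, not an existing rosws/wstool feature): treat the workspace entries as a whitelist and prune any top-level folder that is no longer listed. The helper below demonstrates the idea on a throwaway directory; in a real script `keep` would be the set of local-name entries parsed from the .rosinstall file.

```python
import os
import shutil
import tempfile

def prune_workspace(src_dir, keep):
    """Delete any top-level folder in src_dir that is not whitelisted in `keep`."""
    removed = []
    for entry in sorted(os.listdir(src_dir)):
        path = os.path.join(src_dir, entry)
        if os.path.isdir(path) and entry not in keep:
            shutil.rmtree(path)
            removed.append(entry)
    return removed

# Demo on a throwaway workspace with one stale package.
ws = tempfile.mkdtemp()
for pkg in ("pkg_a", "pkg_b", "old_pkg"):
    os.mkdir(os.path.join(ws, pkg))

removed = prune_workspace(ws, keep={"pkg_a", "pkg_b"})
print(removed)  # the stale folder is gone, the whitelisted ones survive
```

A destructive script like this should obviously be guarded (dry-run flag, confirmation prompt) before being rolled out to a whole team.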
{ "domain": "robotics.stackexchange", "id": 15331, "tags": "ros, rosws, wstool, rosbuild, catkin-workspace" }
What kind of welding technology is used for welding this muffler?
Question: Here is an image of a muffler of an R/C aircraft engine. It's made of aluminium alloys. What do you think about the welding technology which is used to make this muffler? I can't see any welding bead. Answer: No conventional welds are visible. It could have been made with a furnace solder. Flux and solder are placed at joints and the unit is put into a furnace and heated; the solder melts and flows into gaps by capillary action. I am not sure a solder would work well depending on the temperature the muffler reaches. A zinc aluminum solder flows at roughly 700 F but would have low strength at about 400F. Conventional engine exhaust manifolds reach 1200 F.
{ "domain": "engineering.stackexchange", "id": 4587, "tags": "welding" }
How does work done by an external force get COMPLETELY converted to potential energy when there is friction?
Question: I was solving this question and ended up getting an incorrect result. This is the answer key provided. I noticed that the author explains that the work done by the external force is converted completely to potential energy in the spring, and the main reason behind this is that the question mentions "A force F drags slowly...". What I think: Since it is dragged slowly, the friction acting upon it will be limiting friction, which would indeed be stronger than the kinetic friction if it were dragged fast. So then work done by friction would of course be present, right? So I wanted to know how the author can say the work done by the external force gets completely stored as potential energy in the spring. Answer: When friction is present between two bodies, energy is dissipated as heat if one body is moving relative to the other. But if the two bodies are static relative to one another then no energy is dissipated as heat, i.e. static friction itself does not "consume" any energy. Since B is not sliding over A, and there is no friction between A and the ground, no energy is dissipated as heat. Since we are told A and B are moving slowly, we can assume that the kinetic energy of A and B is negligible. So all of the work done by the force F goes into stretching the spring.
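A quick numerical check of the answer's claim (with made-up values for the spring constant and displacement): if the blocks move quasi-statically, the applied force only ever has to balance the spring, so the work integral $\int F\,dx$ reproduces $\frac{1}{2}kx^2$ exactly, with nothing lost to heat because there is no relative sliding.

```python
import numpy as np

k, x_max = 50.0, 0.2                 # spring constant (N/m) and final stretch (m); made-up values
x = np.linspace(0.0, x_max, 10001)
F = k * x                            # quasi-static dragging: F just balances the spring force

# Trapezoidal approximation of the work done by F, W = integral of F dx
W = np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(x))
print(W, 0.5 * k * x_max**2)         # both equal: all the work ends up as spring PE
```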
{ "domain": "physics.stackexchange", "id": 94444, "tags": "homework-and-exercises, newtonian-mechanics, work, friction, spring" }
Eigenvalue counting number in Functional Integral
Question: My question is about the calculation of a functional integral (which looks like a partition function). Suppose we have an operator $A$ with a discrete spectrum, eigenvectors $\phi_{i}$ and eigenvalues $\lambda_i$, and the eigenvalues have density $\rho(\lambda)=\sum_i \delta(\lambda_i-\lambda)$. Then the functional integral $\int D\phi \exp\{-(\phi,A\phi)\}$ is written as an integral over the "Fourier" coefficients $a_i = (\phi_i,\phi)$, where we use $\phi = \sum_{i} a_i \phi_i$: $\int D\phi \exp\{-(\phi,A\phi)\} = \int_{-\infty}^{\infty}\prod_i da_i \exp\{-\lambda_i a_i^2\}$ This is a product of Gaussian integrals, so if we put a "UV" cutoff $\Lambda$ on the eigenvalues, then I think the result of this integral is the product of the inverse square roots of the eigenvalues: $ \int^{\Lambda}\prod_i da_i \exp\{-\lambda_i a_i^2\}= \prod_i^{\Lambda}\lambda_i^{-1/2}$ However, my TA wrote without deriving it that the correct result is this: $N(\Lambda) \ e^{-1/2 \sum_\lambda^{\Lambda}\ln \lambda}$ where $N(\Lambda) = \int \rho(\lambda) \ d\lambda$ is the number of eigenvalues below $\Lambda$. I don't understand this result, especially the appearance of the counting number $N(\Lambda)$. Why does the functional integral not give the product of eigenvalues, and where is this $N(\Lambda)$ coming from? Is this related to the density of states that is sometimes written in a partition function? I would appreciate any help in understanding this, and any pointers to books that explain this as well. Answer: I think your TA is either wrong, or you misunderstood what he wrote. You are correct up to the factor of $(\sqrt \pi)^n$ where $n=\int_0^\Lambda \rho(\lambda) d\lambda$ from the $n$ Gaussian integrals. I expect that the TA just meant that there is some normalization factor $N(\Lambda)$ that depends on $\Lambda$, but that factor is the one I gave above, and not $\int_0^\Lambda \rho(\lambda) d\lambda$.
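As a sanity check on the Gaussian factors (my own illustration, not from the question): each mode contributes $\int da\, e^{-\lambda_i a^2} = \sqrt{\pi/\lambda_i}$, which is easy to confirm numerically with a Riemann sum for a few sample eigenvalues.

```python
import numpy as np

lambdas = [0.5, 1.0, 2.0, 5.0]           # a few sample eigenvalues below the cutoff
a = np.linspace(-20.0, 20.0, 400001)     # wide enough that the Gaussian tails are negligible
da = a[1] - a[0]

for lam in lambdas:
    numeric = np.sum(np.exp(-lam * a**2)) * da   # Riemann sum for the Gaussian integral
    exact = np.sqrt(np.pi / lam)
    print(lam, numeric, exact)                   # the two columns agree

# The full integral is the product over modes: prod_i sqrt(pi / lambda_i)
# = pi^{n/2} * prod_i lambda_i^{-1/2}, i.e. the questioner's result times (sqrt(pi))^n,
# which is exactly the normalization factor the answer mentions.
```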
{ "domain": "physics.stackexchange", "id": 56309, "tags": "quantum-field-theory, regularization, functional-determinants" }
How much energy is needed to split a given nucleus?
Question: I'm trying to do some calculations for the possible energy yield of a fission reaction, and I need a general formula for the energy requirements of fissioning a nucleus. Answer: An approximation is the Bethe-Weizsäcker formula. There you can get the approximate binding energy for the input and output nuclei and then compute the mass/energy difference. A more accurate result would be gained from measured data of the masses of the nuclei. Then take the difference in mass (be sure to include all free neutrons produced) and convert that into energy with $E = mc^2$. All this will give you the net energy produced by the reaction. It does not tell you how much energy is needed to initiate that process. For that one should perhaps look into the energy levels of the input nucleus and see what excited states it has. Computing this will be hard to impossible, I think.
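A sketch of that approach in Python (the Bethe-Weizsäcker coefficient values vary between fits; the set below is a common textbook choice, so treat the outputs as estimates):

```python
import math

# Semi-empirical (Bethe-Weizsaecker) binding energy in MeV.
# Coefficients from a typical least-squares fit; other sources differ slightly.
aV, aS, aC, aA, aP = 15.8, 18.3, 0.714, 23.2, 12.0

def binding_energy(A, Z):
    N = A - Z
    if Z % 2 == 0 and N % 2 == 0:        # even-even: extra pairing energy
        delta = aP / math.sqrt(A)
    elif Z % 2 == 1 and N % 2 == 1:      # odd-odd: pairing penalty
        delta = -aP / math.sqrt(A)
    else:
        delta = 0.0
    return (aV * A - aS * A**(2 / 3) - aC * Z * (Z - 1) / A**(1 / 3)
            - aA * (A - 2 * Z)**2 / A + delta)

# Binding energy per nucleon of Fe-56 (should come out near the measured ~8.8 MeV).
print(binding_energy(56, 26) / 56)

# Rough Q-value for n + U-235 -> Ba-141 + Kr-92 + 3n: free neutrons carry no
# binding energy, so Q is just the difference in total binding energies.
Q = binding_energy(141, 56) + binding_energy(92, 36) - binding_energy(235, 92)
print(Q)   # roughly 160 MeV with these coefficients -- the right order for fission
```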
{ "domain": "physics.stackexchange", "id": 37487, "tags": "nuclear-physics, binding-energy, elements" }
Why measure both alkalinity and pH in pools if pH alone tells us how acidic or basic something is?
Question: I am trying to better understand the chemistry of maintaining my pool so that I can use the least amount of chemicals to control algae growth and I'm confused by the need to measure both total alkalinity (recommended to be between 100 and 150 ppm) and pH. If pH is high then won't alkalinity always also be high, and vice versa? Or is there a specific need or advantage in checking for both pH and TA? Answer: Alkalinity is typically reported in terms of either bicarbonate (the dominant carbonate species from around $\pu{pH} \simeq$ 6.5 to 8.3) or carbonate. You can have a high $\pu{pH}$ and alkalinity (total carbonate) in the range you report. Basicity is frequently used as a synonym for alkalinity, which is not correct. You are introducing caustic alkalinity into the system by (likely) adding sodium hypochlorite to get chlorine into the pool, and there's typically lye ($\ce{NaOH}$) in that solution. $\pu{pH}$ will give you a measure of acidity and basicity, while including alkalinity will add some information about the so-called buffer capacity (ability here to neutralize acid) of the solution. It's important for a pool to have buffer capacity to maintain the proper $\pu{pH}$ range, and the alkalinity part (total alkalinity will be carbonate + caustic) is needed to make that determination. To do this you would likely need to combine a system of polyprotic acids and salts of those acids. I'm thinking about a combination of a phosphate and carbonate buffer, the mathematics of which I can't produce off the top of my head. But you would increase buffer capacity while maintaining that $\pu{pH}$. The relevant calculation/formulation of this sort of thing in general is represented in the Henderson–Hasselbalch equation. 
Finally, I note that it appears to be commonplace to use soda ash and/or sodium bicarbonate to control $\pu{pH}$ in swimming pools, so my suggestion of the additional phosphate buffer might not be relevant here: as it so happens, the carbonate/bicarbonate buffer system in our bodies (which tightly regulates blood $\pu{pH}$, among other things) would result from adding the two sodium salts in the appropriate quantities for the target $\pu{pH}$ range you specify.
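For concreteness, a small sketch of the Henderson-Hasselbalch arithmetic for the carbonate buffer (using $\mathrm{p}K_{a1} \approx 6.35$ for the $\ce{H2CO3^\ast/HCO3^-}$ pair; treat the numbers as illustrative, since ionic strength and temperature shift them):

```python
import math

PKA1 = 6.35   # first dissociation of carbonic acid (H2CO3* <-> H+ + HCO3-), approximate

def ph(bicarb, carbonic):
    """Henderson-Hasselbalch: pH = pKa + log10([base]/[acid])."""
    return PKA1 + math.log10(bicarb / carbonic)

def ratio_for_ph(target_ph):
    """[HCO3-]/[H2CO3*] ratio needed to sit at target_ph."""
    return 10 ** (target_ph - PKA1)

print(ph(1.0, 1.0))        # equal concentrations -> pH equals pKa
print(ratio_for_ph(7.5))   # bicarbonate dominates by roughly 14:1 at a typical pool pH
```

This is why both numbers matter: pH fixes the *ratio* of the buffer species, while total alkalinity fixes how *much* of them there is, i.e. the capacity to absorb acid without the pH moving.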
{ "domain": "chemistry.stackexchange", "id": 16131, "tags": "acid-base, everyday-chemistry, ph" }
What insect is this? (England)
Question: Found this in my room and haven't seen anything like it before. It was about 4-5cm long and has wings. Flew away before I could get a better picture. Photo taken from the Midlands, UK Answer: I think I have an answer: it is this nice guy. The Latin name is Leptoglossus occidentalis. Short description of Leptoglossus occidentalis: Western Conifer Seed Bug The Western Conifer Seed Bug Leptoglossus occidentalis is a large and conspicuous squashbug, reaching a length of 20mm when adult. It is easily distinguished from all other GB coreids by its reddish-brown body, transverse white zigzag line across the centre of its wings and characteristic leaf-like expansions on the hind tibiae.
{ "domain": "biology.stackexchange", "id": 9194, "tags": "species-identification" }
Where is the event horizon in a black hole?
Question: At the beginning I thought that the event horizon coincides with the surface, but then coining a new name when you could just call it the surface would seem a bit pointless. So where is the event horizon? Is it inside or outside the black hole? Note that I have only a really basic knowledge of physics. Answer: I realized what my doubt was, which was a more basic question: A star has a clear surface, and within the surface it is all mass. When a black hole is born, for example after a supernova, I couldn't understand whether the event horizon contains all the mass the way a normal star's surface does. Instead, from my understanding, the radius of the surface which contains all the mass is much smaller than the radius of the event horizon. Of course, we can't say for sure that the radius of the mass object is smaller than the event horizon, since we can't see what happens within. So in other words a black hole is by definition much bigger than I thought, since I made it coincide with the surface of the mass object.
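To attach a number to "the radius of the event horizon": for a non-rotating (Schwarzschild) black hole it is $r_s = 2GM/c^2$, which a few lines of Python can evaluate (constants rounded):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(m_kg):
    """Event-horizon radius r_s = 2GM/c^2 of a non-rotating black hole."""
    return 2 * G * m_kg / c**2

print(schwarzschild_radius(M_SUN))        # ~3 km for one solar mass
print(schwarzschild_radius(10 * M_SUN))   # linear in mass: ten times larger
```

Whatever happens to the collapsing matter inside, it ends up well within this radius, which is why the horizon is not a material surface.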
{ "domain": "physics.stackexchange", "id": 26716, "tags": "black-holes, causality, event-horizon" }
What is the argument that Einstein's induced emission and induced absorption coefficients $B_{mn}=B_{nm}$ must be equal?
Question: The following is a summary of my reading of https://www.feynmanlectures.caltech.edu/I_42.html#Ch42-S5 Definitions $N_{i}$ Population of molecules in state $i$ $R_{i\to j}$ Transition rate from state $i$ to state $j$ $A_{mn}$ Coefficient of spontaneous emission $B_{mn}$ Coefficient of induced emission $B_{nm}$ Coefficient of absorption $E_{m}-E_{n}=\Delta E=\hbar\omega>0$ Transition energy $\mathcal{I}(\omega)$ Radiation intensity profile $N_{m}=N_{n}e^{-\frac{\Delta E}{\mathit{k}T}}$ Boltzmann relation Feynman's equation 42-12 $$ \mathcal{I}(\omega)d\omega=\frac{\hbar\omega^{3}d\omega}{\pi^{2}c^{2}\left(e^{\frac{\hbar\omega}{\mathit{k}T}}-1\right)}. $$ Derivation Write the expressions for the transition rates and set them equal, using the argument in the footnote $$\begin{aligned} R_{n\to m}&=N_{n}\mathcal{I}(\omega)B_{nm}\\ R_{m\to n}&=N_{m}\left(A_{mn}+\mathcal{I}(\omega)B_{mn}\right)\\ R_{n\to m}&=R_{m\to n}. \end{aligned}$$ Combining expressions and applying basic algebra we get $$ \mathcal{I}(\omega)=\frac{A_{mn}}{B_{nm}e^{\frac{\hbar\omega}{\mathit{k}T}}-B_{mn}}=\frac{\hbar\omega^{3}}{\pi^{2}c^{2}\left(e^{\frac{\hbar\omega}{\mathit{k}T}}-1\right)}. $$ Therefore we can deduce something: First, that $B_{nm}$ must equal $B_{mn}$, since otherwise we cannot get the $(e^{\hbar\omega/kT} - 1)$. So Einstein discovered some things that he did not know how to calculate, namely that the induced emission probability and the absorption probability must be equal. Clearly, setting $B_{nm}=B_{mn}$ gives a compelling result, but I don't believe that follows from the algebra. Does the "necessity" of the result follow from a variation of $\omega$ or some other method of differential calculus? If $\omega$ were a continuous real number parameter with all other terms constant, the result would be obvious. But in this case $\omega$ is a discrete value determined by the transition energy.
I also observe that in this recent and more detailed application of these ideas, the equation $B_{nm}=B_{mn}$ does not, in general, hold. See equation 14 https://doi.org/10.1155/2013/503727 Answer: The expression $$\mathcal{I}(\omega)=\frac{A_{mn}}{B_{nm}e^{\frac{\hbar\omega}{\mathit{k}T}}-B_{mn}}=\frac{\hbar\omega^{3}}{\pi^{2}c^{2}\left(e^{\frac{\hbar\omega}{\mathit{k}T}}-1\right)}$$ is a function of $\omega$. If it is supposed to hold for more than one specific value of $\omega$ (and $T$), then the argument holds. You can easily see this from inspection, but you could also say e.g. that for small $\omega$ (specifically, $ \omega \ll kT/\hbar$), the expression on the right becomes inversely proportional to $\omega$, while the expression on the left becomes inversely proportional to $\omega + \frac{kT}{\hbar}(1-B_{mn}/B_{nm})$; demanding the same low-frequency (or high-temperature) behavior requires that $B_{nm}-B_{mn}$ vanishes.
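The limiting argument in the answer is easy to check numerically, noting that for a fixed transition frequency $\omega$ the equality must hold at every temperature $T$. This is my own illustration (units chosen so $\hbar\omega/k = 1$, with deliberately unequal B coefficients, and the constant $\hbar\omega^3/\pi^2 c^2$ dropped):

```python
import math

hw_over_k = 1.0                  # h-bar * omega / k, arbitrary units
A, B_mn, B_nm = 1.0, 1.0, 2.0    # deliberately unequal B coefficients

def left(T):
    """A / (B_nm e^{hw/kT} - B_mn), the detailed-balance expression."""
    return A / (B_nm * math.exp(hw_over_k / T) - B_mn)

def right(T):
    """Planck form 1 / (e^{hw/kT} - 1), up to the omega^3 constant."""
    return 1.0 / (math.exp(hw_over_k / T) - 1.0)

for T in (1.0, 10.0, 100.0, 1000.0):
    print(T, left(T), right(T))
# right(T) grows without bound (~ kT/hw) while left(T) saturates at
# A/(B_nm - B_mn), so the two sides can only share the same T-dependence
# if B_nm = B_mn, in which case left(T) diverges in step with right(T).
```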
{ "domain": "physics.stackexchange", "id": 79855, "tags": "quantum-mechanics, statistical-mechanics, thermal-radiation, atmospheric-science, photon-emission" }
How sampling aperiodic signal will result in periodic repetitions of the same
Question: I am reading "Digital Signal Processing" by Proakis and often read that sampling guarantees periodicity (not an exact quote). But I wonder how sampling an aperiodic signal will result in periodic repetitions of the aperiodic signal? Answer: sampling guarantees periodicity Not exactly. Sampling in one domain guarantees periodicity in the other domain. So sampling in time creates a periodic spectrum and sampling in frequency creates a periodic time signal. Or the other way around: a periodic time signal has a discrete spectrum and a periodic spectrum has a discrete time signal. That's why there are four different "types" of Fourier Transform. One for each permutation of continuous vs discrete in each domain. See for example http://fourier.eng.hmc.edu/e101/lectures/handout4/node3.html or https://www.dspguide.com/ch8/1.htm
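A quick numerical illustration of the answer (my own, not from Proakis): take samples of an aperiodic Gaussian pulse and evaluate the DTFT at $\omega$ and at $\omega + 2\pi$. The values coincide, because $e^{-j(\omega+2\pi)n} = e^{-j\omega n}$ for integer $n$ — it is the *spectrum* of the sampled signal that repeats, not the time signal itself.

```python
import numpy as np

n = np.arange(-50, 51)            # integer sample indices
x = np.exp(-0.05 * n**2)          # samples of an aperiodic (Gaussian) pulse

def dtft(x, n, w):
    """DTFT of a finite set of samples at a single angular frequency w."""
    return np.sum(x * np.exp(-1j * w * n))

w0 = 0.7
print(abs(dtft(x, n, w0) - dtft(x, n, w0 + 2 * np.pi)))  # ~0: periodic in 2*pi
```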
{ "domain": "dsp.stackexchange", "id": 8285, "tags": "sampling, periodic" }
rgbdslam on fuerte
Question: Good Morning, I'm kind of new with ROS, and I'm trying to compile rgbdslam on my ROS fuerte following this guide: here's the link While running "rosdep install rgbdslam_freiburg" I get an error telling me "missing resource rgbdslam". I read somewhere that rgbdslam is still not supported on fuerte and I was wondering if there is any solution to this...can someone help me? Or can someone tell me when fuerte will be supported? Originally posted by Flavio P. on ROS Answers with karma: 33 on 2012-10-15 Post score: 1 Original comments Comment by Felix Endres on 2012-10-17: I'm guessing you skipped a step somewhere, e.g. not working in a directory in your ROS_PACKAGE_PATH. Could you edit your answer to include more detail of what you exactly did. Answer: It seems that octomap_server can not be resolved automatically by rosdep install. To manually install octomap_server execute the following command: sudo apt-get install ros-DISTRO-octomap-mapping (where DISTRO is your ROS distribution, i.e. fuerte). After octomap is installed, run rosdep install again to resolve the other dependencies. Originally posted by Robbiepr1 with karma: 143 on 2013-02-19 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Felix Endres on 2013-02-20: Unfortunately, there seems to be no rule for ros-fuerte-octomap-mapping, to make it rosdep-able
{ "domain": "robotics.stackexchange", "id": 11386, "tags": "slam, navigation, ros-fuerte" }
Stupidly simple TCP client/server
Question: I'm trying to validate some results that I see when using NetPipe to test some connectivity between a couple of Linux boxes (over various hardware). So, I concocted this simple client and server to do the same and I cannot seem to get the same numbers as NetPipe - I'm about 30-40% off the rtt times that it sees. Is there something stupid that I'm doing wrong with my simple example? Server: #include <stdio.h> /* for printf() and fprintf() */ #include <sys/socket.h> /* for socket(), bind(), and connect() */ #include <arpa/inet.h> /* for sockaddr_in and inet_ntoa() */ #include <netinet/tcp.h> #include <stdlib.h> /* for atoi() and exit() */ #include <string.h> /* for memset() */ #include <unistd.h> /* for close() */ #include <stdio.h> /* for perror() */ #include <stdlib.h> /* for exit() */ #define MAXPENDING 1 void die(char *errorMessage) { perror(errorMessage); exit(1); } void handle(unsigned short quickAck, int clntSock) { long long c_ts; /* current read timestamp */ int value = 1; // Enable quickAck if (quickAck && setsockopt(clntSock, IPPROTO_TCP, TCP_QUICKACK, (char *)&value, sizeof(int)) < 0) die("TCP_QUICKACK failed"); /* Send received string and receive again until end of transmission */ while (recv(clntSock, (char*)&c_ts, sizeof(c_ts), 0) == sizeof(c_ts)) /* zero indicates end of transmission */ { // Enable quickAck if (quickAck && setsockopt(clntSock, IPPROTO_TCP, TCP_QUICKACK, (char *)&value, sizeof(int)) < 0) die("TCP_QUICKACK failed"); /* Echo message back to client */ if (send(clntSock, (char*)&c_ts, sizeof(c_ts), 0) != sizeof(c_ts)) die("send() failed to send timestamp"); // Enable quickAck if (quickAck && setsockopt(clntSock, IPPROTO_TCP, TCP_QUICKACK, (char *)&value, sizeof(int)) < 0) die("TCP_QUICKACK failed"); } close(clntSock); /* Close client socket */ } int main(int argc, char *argv[]) { int servSock; /* Socket descriptor for server */ int clntSock; /* Socket descriptor for client */ struct sockaddr_in echoServAddr; /* Local address */ 
struct sockaddr_in echoClntAddr; /* Client address */ unsigned short echoServPort; /* Server port */ unsigned short quickAck; unsigned int clntLen; /* Length of client address data structure */ int value = 1; if (argc != 3) /* Test for correct number of arguments */ { fprintf(stderr, "Usage: %s <Server Port> <Quick Ack>\n", argv[0]); exit(1); } echoServPort = atoi(argv[1]); /* First arg: local port */ quickAck = atoi(argv[2]); /* Whether quick ack is enabled or not */ /* Create socket for incoming connections */ if ((servSock = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0) die("socket() failed"); /* Construct local address structure */ memset(&echoServAddr, 0, sizeof(echoServAddr)); /* Zero out structure */ echoServAddr.sin_family = AF_INET; /* Internet address family */ echoServAddr.sin_addr.s_addr = htonl(INADDR_ANY); /* Any incoming interface */ echoServAddr.sin_port = htons(echoServPort); /* Local port */ /* Bind to the local address */ if (bind(servSock, (struct sockaddr *) &echoServAddr, sizeof(echoServAddr)) < 0) die("bind() failed"); /* Mark the socket so it will listen for incoming connections */ if (listen(servSock, MAXPENDING) < 0) die("listen() failed"); for (;;) /* Run forever */ { /* Set the size of the in-out parameter */ clntLen = sizeof(echoClntAddr); printf("Waiting for client...\n"); /* Wait for a client to connect */ if ((clntSock = accept(servSock, (struct sockaddr *) &echoClntAddr, &clntLen)) < 0) die("accept() failed"); /* clntSock is connected to a client! 
*/ printf("Handling client %s\n", inet_ntoa(echoClntAddr.sin_addr)); if (setsockopt(clntSock, IPPROTO_TCP, TCP_NODELAY, (char *)&value, sizeof(int)) < 0) die("TCP_NODELAY failed"); handle(quickAck, clntSock); } /* NOT REACHED */ } Client: #include <stdio.h> /* for printf() and fprintf() */ #include <sys/socket.h> /* for socket(), connect(), send(), and recv() */ #include <arpa/inet.h> /* for sockaddr_in and inet_addr() */ #include <netinet/tcp.h> #include <stdlib.h> /* for atoi() and exit() */ #include <string.h> /* for memset() */ #include <unistd.h> /* for close() */ #include <sys/time.h> void die(char *errorMessage) { perror(errorMessage); exit(1); } int main(int argc, char *argv[]) { int sock; /* Socket descriptor */ struct sockaddr_in echoServAddr; /* Echo server address */ unsigned short echoServPort; /* Echo server port */ char *servIP; /* Server IP address (dotted quad) */ int iterations, gap, i; /* Number of timestamps to send, and gap between each send */ struct timeval ts; long long c_ts, o_ts, delta, total = 0, max = 0, min = 1000000000; int value = 1; if (argc != 5) /* Test for correct number of arguments */ { fprintf(stderr, "Usage: %s <Server IP> <Server Port> <Iterations> <Gap>\n", argv[0]); exit(1); } servIP = argv[1]; /* server IP address (dotted quad) */ echoServPort = atoi(argv[2]); /* server port */ iterations = atoi(argv[3]); /* number of timestamps to send */ gap = atoi(argv[4]); /* gap between each send */ /* Create a reliable, stream socket using TCP */ if ((sock = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0) die("socket() failed"); /* Construct the server address structure */ memset(&echoServAddr, 0, sizeof(echoServAddr)); /* Zero out structure */ echoServAddr.sin_family = AF_INET; /* Internet address family */ echoServAddr.sin_addr.s_addr = inet_addr(servIP); /* Server IP address */ echoServAddr.sin_port = htons(echoServPort); /* Server port */ /* Establish the connection to the echo server */ if (connect(sock, (struct sockaddr *) 
&echoServAddr, sizeof(echoServAddr)) < 0) die("connect() failed"); if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char *)&value, sizeof(int)) < 0) die("TCP_NODELAY failed"); /* Give the server a chance */ usleep(1000); /* Now for the given number of iterations */ for(i = 0; i < iterations; ++i) { /* Generate the current timestamp */ gettimeofday(&ts, NULL); c_ts = ts.tv_sec * 1000000LL + ts.tv_usec; //printf("sending %ld ", c_ts); /* Send this */ if (send(sock, (char*)&c_ts, sizeof(c_ts), 0) != sizeof(c_ts)) die("send() failed to send timestamp"); /* Now read the echo */ if (recv(sock, (char*)&o_ts, sizeof(o_ts), 0) != sizeof(o_ts)) die("recv() failed to read timestamp"); gettimeofday(&ts, NULL); c_ts = ts.tv_sec * 1000000LL + ts.tv_usec; /* Calculate the delta */ delta = c_ts - o_ts; //printf(" -> received %ld %ld\n", o_ts, delta); if (i > 0) { /* Track max, min, sum */ total += delta; max = (max < delta)? delta : max; min = (min > delta)? delta : min; } /* Now sleep */ usleep(1000*gap); } --iterations; printf("iterations %d, avg %f, max %ld, min %ld\n", iterations, (total/(double)iterations), max, min); close(sock); exit(0); } So, to run, start the server with the port and whether quick_ack is enabled or not (1/0) - this is for a different test. Something like: ./simple_sever 10000 1 Then run the client: ./simple_client <host IP address> 10000 1000 1 So send 1000 timestamps with a 1 millisecond gap between each. This, I guess, is where the above differs from NetPipe (which floods, as far as I know). The interaction is pretty straight forward, so is there something I'm missing? EDIT: Okay, I got to the bottom of the difference: caching. NetPIPE has an option to force invalidation of cache, and enabling this results in similar numbers to my test program. Phew, I don't have to re-evaluate my sockets programming! I'll leave this question up for reference I guess. Answer: Not much to comment on, this program is pretty straightforward. 
A few notes: The user of the client has a lot of information to input. fprintf(stderr, "Usage: %s <Server IP> <Server Port> <Iterations> <Gap>\n", argv[0]); The more the user has to enter, the steeper the initial learning curve to use the program is. Also, I'm not sure I want the user to control the <Iterations> and the <Gap> anyways. A malicious user might abuse this for a DOS of the server. Eliminating those as required input and setting them in your code would be a more secure and a more user-friendly option. You have too many comments. close(clntSock); /* Close client socket */ For that particular example, it is quite obvious what that statement does. There are a lot of other comparable examples in this code. It is more common to return 0; when main() is finished, rather than to exit(0). Both will call the registered atexit handlers and will cause program termination though.
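As a cross-check on the measurement methodology (my addition, not part of the original review): the same ping-pong can be written in a few lines of Python against the loopback interface. Loopback numbers won't match real hardware, but the structure — TCP_NODELAY, timestamp, send, blocking receive, delta — mirrors the C client, and the `recv_exact` loop also fixes a latent assumption in the C code that `recv` always returns a full 8 bytes.

```python
import socket
import threading
import time

MSG = 8  # bytes per ping, like the C client's 8-byte timestamp

def recv_exact(conn, nbytes):
    """recv() may return short reads on a stream socket; loop until nbytes arrive."""
    buf = b""
    while len(buf) < nbytes:
        chunk = conn.recv(nbytes - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf

def echo_server(listener):
    conn, _ = listener.accept()
    conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    try:
        while True:
            conn.sendall(recv_exact(conn, MSG))
    except ConnectionError:
        conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
listener.listen(1)
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

cli = socket.create_connection(listener.getsockname())
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

samples = []
for _ in range(200):
    t0 = time.perf_counter()
    cli.sendall(b"x" * MSG)
    recv_exact(cli, MSG)
    samples.append(time.perf_counter() - t0)
cli.close()

print("avg %.1f us, min %.1f us" % (1e6 * sum(samples) / len(samples),
                                    1e6 * min(samples)))
```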
{ "domain": "codereview.stackexchange", "id": 5990, "tags": "c, performance, socket, tcp" }
Raman transition and IR transition
Question: In a character table one can find the reducible representation of vibration of a molecule. In such a table we also see things like $xy,z,x...$. If this corresponds with an irrep we say that IR transitions can occur if $x,y,z$ and quadratic terms give Raman transitions. This is because the dipole moment and polarizability are non-zero and can change in that case. My question is: where does this come from? I prefer a physical or mathematical answer involving quantum mechanics since I am from physics. Answer: To simplify slightly both Raman and IR spectroscopy show the vibrational modes of a molecule (though the techniques used to reveal these are very different). IR spectroscopy relies on coupling between the electromagnetic field of light passing through a sample and the electric dipole of the molecule. But that absorption is only possible is the vibration in the molecule changes the molecules dipole. Consider, for example, carbon dioxide, a linear molecule with no net dipole moment. Symmetric stretches (ie both oxygens moving the same amount in opposite directions) don't change the dipole so can't be detected by IR. But asymmetric stretches or bending will change the dipole and can be detected by IR. Raman spectroscopy (simplifying a little) relies on detecting changes to the molecules polarisability not the dipole. So the symmetric vibrations of carbon dioxide are detectable because they do change the polarisability of the molecule. In both cases the symmetry tables tell you whether the symmetry of the vibrational mode changes the electric dipole or the polarisability of the molecule. In simple molecules physical intuition can usually tell the same thing but symmetry tables are more reliable. A full understanding would involve a quantum mechanical description of the interaction of EM radiation with the possible vibrational modes of a molecule considering the allowed and forbidden transitions. 
But the physical intuition above captures the essentials without getting too mathematical.
{ "domain": "chemistry.stackexchange", "id": 11384, "tags": "spectroscopy, ir-spectroscopy" }
Simple two player snake game
Question: I've been working on a two player snake game in python. I would really appreciate any suggestions/general input that would help improve my coding. """ 5-13-2018 Nathan van 't Hof 2 player snake game players can move around using wasd (player 1) and ijkl (player 2) eating apples: - will increase players length """ import random import keyboard import os import colorama import time def get_apple(width, height, snake1, snake2): """ find new random coordinate to place the apple """ if len(snake1) + len(snake2) >= width*height: u = raw_input('You win!') quit() while True: apple = [random.randint(0, width - 1), random.randint(0, height-1)] if apple not in snake1 and apple not in snake2: return apple def draw(game_field, i, key): """ change a specific coordinate of the game field to a different type """ game_field[i[0]][i[1]] = key return game_field def move_snake(game_field, snake, typ): """ move the snake one step """ game_field = draw(game_field, snake[-2], typ) game_field = draw(game_field, snake[-1], 'O') return game_field def change_pos(prev, width, height, dx, dy): """ change the coordinate of the head of the snake """ return [(prev[0]+dx)%width, (prev[1]+dy)%height] def print_game(game_field): """ print the game (in a readable format) """ output = '.' * ((len(game_field)) * 2 + 1) + '\n' for i in game_field: output += '.' + ' '.join(i) + '.' + '\n' output += '.' 
* ((len(game_field)) * 2 + 1) # moves the marker to the top to prevent flikkering print('\033[H' + output) def check_die(snake1, snake2): """ check whether the snakes 'die' by letting the head bump into a tail/other head """ dead = False if snake1[-1] == snake2[-1]: u = raw_input('Both snakes died') dead = True elif snake1[-1] in snake2 or snake1.count(snake1[-1]) >= 2: u = raw_input('Snake 1 died') dead = True elif snake2[-1] in snake1 or snake2.count(snake2[-1]) >= 2: u = raw_input('Snake 2 died') dead = True if dead: quit() def check_movement(dx, dy, ch1, ch2, ch3, ch4): """ check where the snake moves """ if keyboard.is_pressed(ch1) and dx != -1 and dy != 0: return -1, 0 if keyboard.is_pressed(ch2) and dx != 0 and dy != -1: return 0, -1 if keyboard.is_pressed(ch3) and dx != 1 and dy != 0: return 1, 0 if keyboard.is_pressed(ch4) and dx != 0 and dy != 1: return 0,1 return dx, dy def update_snake(new_pos, apple, game_field, snake1, snake2): snake1.append(new_pos) if new_pos == apple: apple = get_apple(width, height, snake1, snake2) game_field = draw(game_field, apple, '-') else: game_field = draw(game_field, snake1[0], ' ') del snake1[0] return snake1, apple, game_field # init width, height = 20, 20 snake1 = [[0,0],[0,1]] snake2 = [[width/2, 0], [width/2 + 1, 0]] apple = get_apple(width, height, snake1, snake2) dx1, dy1 = 0, 1 dx2, dy2 = 0, 1 os.system('cls' if os.name == 'nt' else 'clear') # this allows '\033[H' to work colorama.init() # draw inital positions of the field game_field = [height*[' '] for i in range(width)] for i in snake1: game_field = draw(game_field, i, 'x') for i in snake2: game_field = draw(game_field, i, '+') game_field = draw(game_field, snake1[-1], 'O') game_field = draw(game_field, snake2[-1], 'O') game_field = draw(game_field, apple, '-') prev_time = time.time() while True: try: fps = float(input('What framerate would you like to run at?')) break except: print('Please input a number') os.system('cls' if os.name == 'nt' else 'clear') while 
True: # check inputs from players dx1, dy1 = check_movement(dx1, dy1, 'w', 'a', 's', 'd') dx2, dy2 = check_movement(dx2, dy2, 'i', 'j', 'k', 'l') # update screen if enough time has passed if time.time() - prev_time > 1./fps: prev_time = time.time() print_game(game_field) new_pos1 = change_pos(snake1[-1], width, height, dx1, dy1) new_pos2 = change_pos(snake2[-1], width, height, dx2, dy2) # update snakes and playing field check_die(snake1, snake2) snake1, apple, game_field = update_snake(new_pos1, apple, game_field, snake1, snake2) snake2, apple, game_field = update_snake(new_pos2, apple, game_field, snake2, snake1) game_field = move_snake(game_field, snake1, 'x') game_field = move_snake(game_field, snake2, '+') Answer: First impressions: Good work on the multiple platform attempt, however given that you are already using colorama and raw ansi codes (i.e. \033[H), there exists one for clearing the screen which should be used instead of os.system. Also while at it, avoid duplicating code, i.e. wrap repeated calls in a function, something like: def clear_screen(): print('\033[1J\033[1;1H') That clears the screen (\033[1J) and places the cursor on top-left of the terminal (\033[1;1H). (diff) The other thing that should be done is wrap the main game in a function (from # init and down), and call it inside the if __name__ == '__main__': section. However, once everything is moved inside, we find that the update_snake function is now broken. This is caused by that function not actually taking in the width and height arguments, given how you had structured your program. Not a big deal, just add them as appropriate (so that it is not coupled to the value defined for the module): def update_snake(new_pos, apple, game_field, width, height, snake1, snake2): Fix the function call as appropriate with the same signature. I just followed the ordering that you used for get_apple.
(diff) Another thing that jumped out (when running this under Linux) is that the root user is required for the keyboard module. This might be why the question didn't really get looked at, as that's the administrator account. The curses module may be more appropriate, however Windows users will require an additional wheel to be installed. As a bonus, ncurses provides a lot more flexibility with drawing on the terminal, it is something good to get familiar with. (doing cross-platform terminal interaction (or anything) can be surprisingly painful, so this is not a fault of your program, rather, I have to commend you for trying). You had used print as a function. Good, this brings up the Python 3 compatibility. However, this does not work under Python 3 as one might expect. First, width / 2 has the / (division) operator, which under Python 3 returns a float even for integer arguments. The // operator, the floor division operator, should be used instead to be sure of getting an int. snake2 = [[width//2, 0], [width//2 + 1, 0]] The other bit that could be problematic is towards the end of the game, where the call to raw_input crashes the game under Python 3. Change that, and since you are simply outputting a string, just use a print call, and perhaps an input('') call at the end of that. While at it, normalise input to raw_input for Python 2 with a statement like this (diff): import sys if sys.version_info < (3,): input = raw_input I also wouldn't use quit() as that's for the interactive shell, you should use sys.exit instead. Ideally, random calls to abort the program shouldn't be invoked like so, but I am going to let this slide for now. Also, the framerate conversion from input (and for that matter any use of except) should only trap the exception class that is expected. Having it like that means the user cannot break out of the program with Ctrl-C even though they might want to terminate the program. It should only trap the ValueError exception (diff).
While at that, we can also trap that on the main function, so invoking it might look like: if __name__ == '__main__': try: main() except KeyboardInterrupt: print('user exit') sys.exit(1) Although the keyboard library underneath has threads that might not terminate nicely like this, there is probably documentation in there that might reveal how that might be fixed, but in general this is a way to ensure that when a user hits Ctrl-C to terminate the program, they see a simple message rather than a big long traceback. The code is pretty nicely formatted, almost fully PEP-8 compliant. Just remember to space out the symbols and operators for equations to improve readability. There are other improvements I would make, but we are starting to make actual logic changes to the program. I had mentioned sprinkling random exits in functions isn't a good thing, as that needlessly couples the game logic to the program lifecycle. Let's change the check_die function to return the exit string if a death happened, and an empty string if no death happened. def check_die(snake1, snake2): if snake1[-1] == snake2[-1]: return 'Both snakes died' # and so on ... In the game loop (diff) # update snakes and playing field death = check_die(snake1, snake2) if death: print(death) input('') # capture input characters in buffer return As for the get_apple, the while loop in there is going to be a massive liability for performance once you have really good players - the game will slow down as it tries to randomly find the last remaining empty spots. I would use random.choice on the tiles that are not currently occupied by a snake... while this map isn't available (empty tiles are not being tracked), generating it on demand should be acceptable; it takes some time, but likely not enough to matter. One last thing though, we can't exactly return a value for winning given how get_apple is used (two consecutive calls for each snake).
One can use an Exception to signify that there is no valid spot remaining. try: snake1, apple, game_field = update_snake( new_pos1, apple, game_field, width, height, snake1, snake2) snake2, apple, game_field = update_snake( new_pos2, apple, game_field, width, height, snake2, snake1) except NoSpaceForApple: print('You win!') input('') # capture input characters in buffer return Finally, to better present what was done here, I created a gist that shows all the changes as a complete program. The reason why I used a gist is that the way they show revisions is very useful in showing how the corrections progressed. You may have noticed this already with the diff links that I've added, but you should be able to follow everything from the bottom up, or git clone https://gist.github.com/7eb66780a852ee6e32862ba4ee896b61 should get you a copy. By the way, if you don't know git (or other version control) you should start using it, because you can make use of it like so to show how the program changes over time by the fixes done to it. Verdict: This is a really neat little game, you really evoke my nostalgia (back when I wrote things in DOS) and I give you props for that.
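The NoSpaceForApple exception caught above is not defined anywhere in the excerpt. A minimal sketch of it (my own, not the gist's exact code), together with a get_apple rewritten along the suggested lines: random.choice over the free tiles, raising when none remain, so the win condition is no longer an exit call buried inside the helper:

```python
import random

class NoSpaceForApple(Exception):
    """Raised when no empty tile is left to place an apple on."""

def get_apple(width, height, snake1, snake2):
    # build the free-tile map on demand (empty tiles are not tracked)
    occupied = {tuple(p) for p in snake1 + snake2}
    free = [[x, y] for x in range(width) for y in range(height)
            if (x, y) not in occupied]
    if not free:
        raise NoSpaceForApple
    return random.choice(free)
```

This also removes the performance liability of the original rejection-sampling loop when the board is nearly full.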
{ "domain": "codereview.stackexchange", "id": 30610, "tags": "python, snake-game" }
Finding Largest Integer Palindrome within a Set
Question: I've "solved" Project Euler Question 4 and I was wondering if I could make my answer more efficient (I'm using Project Euler to help me learn Haskell). The problem reads: Find the largest palindrome made from the product of two 3-digit numbers Here is my solution getMaxPalindrome = maximum[x*y | x<-[100..999], y<-[100..999], reverse(show(x*y)) ==show(x*y)] All suggestions for improvement are appreciated! Answer: First, since * is commutative, you can save 1/2 of your computation if you restrict yourself to cases where x >= y: [ x*y | x<-[100..999], y<-[100..x], ... ] Second, if you could generate all the products in a non-increasing list, you'd just be searching for the first element of such a list satisfying the predicate, which would also speed up the search considerably. See the data-ordlist package, which implements many useful functions on sorted lists; in particular, in your case you'll probably need unionBy or unionAllBy.
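The first suggestion is easy to sanity-check. Here is a quick sketch in Python (rather than the question's Haskell, purely for illustration): restricting to y <= x visits each unordered pair once and still finds the same maximum.

```python
def max_palindrome_product():
    """Largest palindromic product of two 3-digit numbers."""
    best = 0
    for x in range(100, 1000):
        for y in range(100, x + 1):   # y <= x: each unordered pair once
            p = x * y
            if p > best and str(p) == str(p)[::-1]:
                best = p
    return best
```

The restricted search still returns the well-known answer to this problem, 906609 = 913 * 993.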
{ "domain": "codereview.stackexchange", "id": 10165, "tags": "haskell, programming-challenge, palindrome" }
Can there be a perfect chess algorithm?
Question: Current chess algorithms go about 1 or maybe 2 levels down a tree of possible paths depending on the player's moves and the opponent's moves. Let's say that we have the computing power to develop an algorithm that predicts all possible movements of the opponent in a chess game. An algorithm that has all the possible paths that the opponent can take at any given moment depending on the player's moves. Can there ever be a perfect chess algorithm that will never lose? Or maybe an algorithm that will always win? I mean in theory someone who can predict all the possible moves must be able to find a way to defeat each and every one of them or simply choose a different path if a certain one will inevitably lead him to defeat..... edit-- What my question really is. Let's say we have the computing power for a perfect algorithm that can play optimally. What happens when the opponent plays with the same optimal algorithm? That also will apply in all 2 player games with a finite number (very large or not) of moves. Can there ever be an optimal algorithm that always wins? Personal definition: An optimal algorithm is a perfect algorithm that always wins... (not one that never loses, but one that always wins) Answer: Your question is akin to the old chestnut: "What happens when an irresistible force meets an immovable object?" The problem is in the question itself: the two entities as described cannot exist in the same logically consistent universe. Your optimal algorithm, an algorithm that always wins, cannot be played by both sides in a game where one side must win and the other must by definition lose. Thus your optimal algorithm as defined cannot exist.
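The underlying point is that every finite two-player game of perfect information is determined (Zermelo's theorem): backward induction assigns each position a definite value under optimal play, so a single algorithm cannot be "the winner" for both sides at once. A toy sketch of backward induction, for single-pile Nim (take 1-3 stones, last player able to move wins), in Python:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def player_to_move_wins(stones):
    # A position is winning iff some move leads to a losing position
    # for the opponent. With no stones left, the player to move has
    # no move and has already lost (any() over nothing is False).
    return any(not player_to_move_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)
```

Every position is either winning or losing for the player to move (here, exactly the multiples of 4 are losing), so "optimal vs. optimal" always produces the same predetermined outcome. Chess is the same in principle, just astronomically larger and with a draw as a third possible value.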
{ "domain": "cs.stackexchange", "id": 10953, "tags": "algorithms, turing-machines" }
How to write wave equation in a reference frame?
Question: A plane sound wave is travelling in a medium. In reference to a frame A, its equation is $$y=A \cos (\omega t - k x)$$ In reference to a frame B, moving with a constant velocity $\vec{v}$ in the direction of propagation of the wave, the equation of the wave will be: $$y=A \cos \bigl[(\omega-k\cdot v) t-k x\bigr]$$ but I am getting $y=A \cos [(\omega+k\cdot v) t-k x]$! I solved it as follows: The equation connecting the two reference frames has coordinates related as $x'=x-vt$. Substituting this in the given equation yields $$y=A \cos [(\omega+k\cdot v) t-k x]$$ Where am I making a mistake? Answer: The wave equation in the reference frame B has to be expressed in terms of $x'$, not $x$. Thus you would have: $$y=A\cos(\omega t-kx)=A\cos\big[\omega t-k(x'+vt)\big]=A\cos\big[(\omega-kv)t-kx'\big].$$ I think that's the mistake you're looking for.
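The algebra in the answer can be spot-checked numerically: substituting $x = x' + vt$ leaves the phase unchanged, so both expressions give the same displacement at the same spacetime point. A small Python sketch (the sample constants are arbitrary, chosen only for illustration):

```python
import math

A, w, k, v = 2.0, 5.0, 3.0, 1.5   # arbitrary sample values

def y_frame_a(x, t):
    # wave as written in frame A
    return A * math.cos(w * t - k * x)

def y_frame_b(xp, t):
    # same wave rewritten in the moving frame's coordinate x'
    return A * math.cos((w - k * v) * t - k * xp)

for t in (0.0, 0.7, 2.3):
    for xp in (0.0, 1.1, -0.4):
        x = xp + v * t   # the same point, labelled in frame A
        assert abs(y_frame_a(x, t) - y_frame_b(xp, t)) < 1e-12
```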
{ "domain": "physics.stackexchange", "id": 68311, "tags": "waves, reference-frames" }
How to pass launch args dynamically during launch-time
Question: I have a launch file that launch 2 ROS2 nodes for each vehicle(uav) I have. I would like to specify the number of vehicles I have while running the launch file so that the correct number of nodes is initialized. My launch file looks like this: import os from time import sleep from launch import LaunchDescription from launch_ros.actions import Node from launch.substitutions import LaunchConfiguration from launch.actions import DeclareLaunchArgument from ament_index_python.packages import get_package_share_directory launch_path = os.path.realpath(__file__).replace("demo.launch.py", "") ros2_ws = os.path.realpath(os.path.relpath(os.path.join(launch_path,"../../../../.."))) avoidance_path = os.path.join(ros2_ws,"src","avoidance") avoidance_config_path = os.path.join(avoidance_path,"config") number_of_uavs = int(input("how many uavs do you want to simulate? ")) def generate_launch_description(): ld = LaunchDescription() for i in range(1, number_of_uavs+1): namespace="uav_{:s}".format(str(i)) config = os.path.join( avoidance_config_path, namespace, "params.yaml" ) avoidance_node = Node( package="avoidance", namespace=namespace, executable="avoidance_client_1", output="screen", parameters=[config] ) trajectory_controller_node = Node( package="offboard_exp", namespace=namespace, executable="trajectory_controller", name="trajectory_controller_{:s}".format(namespace) ) ld.add_action(trajectory_controller_node) ld.add_action(avoidance_node) return ld Currently, I use number_of_uavs = int(input("how many uavs do you want to simulate? ")) To get the number of vehicles and then change the namespace of the node in each iteration. Is there a better way to pass this parameter instead of prompting the user to enter it? Originally posted by mamado on ROS Answers with karma: 61 on 2021-04-23 Post score: 5 Original comments Comment by 130s on 2023-04-28: IIUC, you're asking for a better option than (statically) passing arg upon execution (e.g. via cmdline). 
But in the subject you said "how to pass command args", which contradicts what you're asking in the question body. I modified the subject to try to describe what you really want. Answer: I am not sure if it's better, but you could use sys.argv: import sys for arg in sys.argv: if arg.startswith("number_of_uavs:="): number_of_uavs = int(arg.split(":=")[1]) With this, the command suggested without explanation by @avzmpy works: ros2 launch PKG_NAME LAUNCH_FILE number_of_uavs:=NUMBER There is probably a prettier way to parse the argument :) You do have to use the := notation, because ros2 launch does not let any other through. Having said this, this is not how I would do what you want to do, to launch multiple identical UAVs. I would limit the launch file to launch a single UAV, with a launch argument declared to pass the UAV ID, and a LaunchConfiguration using it. Then I would create another wrapper (bash) script around it that does the loop and calls ros2 launch multiple times to launch the required number of UAVs. When done that way, you can also launch each with a different value of ROS_DOMAIN_ID to make it impossible for them to mix communication without needing to use namespaces. Originally posted by sgvandijk with karma: 649 on 2021-04-23 This answer was ACCEPTED on the original site Post score: 3
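As for "a prettier way to parse the argument", one sketch (the helper name is mine, not part of the ROS 2 launch API) collects all key:=value tokens into a dict, so defaults and multiple arguments are handled in one place:

```python
import sys

def parse_launch_args(argv, defaults=None):
    """Collect ros2-launch style key:=value tokens into a dict."""
    args = dict(defaults or {})
    for token in argv:
        if ":=" in token:
            key, _, value = token.partition(":=")
            args[key] = value
    return args

# e.g., inside the launch file:
# number_of_uavs = int(
#     parse_launch_args(sys.argv, {"number_of_uavs": "1"})["number_of_uavs"])
```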
{ "domain": "robotics.stackexchange", "id": 36357, "tags": "ros2" }
Boundary condition on a finite vibrating string with forced vibration
Question: Context Considering a finite string of length $L$. We suppose gravitation is negligible. The string is only moving transversally, in the $y$ direction, with small amplitude. The tension of the string is $T$. We're looking for $y(x,t)$. The right end is fixed: $y(x=L,t)=0$. The left end is driven with a $y$ force $f(t)= Fe^{i\omega t} $. We are looking for a solution of the form $y(x,t) = Ae^{i(\omega t -kx)} + Be^{i(\omega t +kx)}$ Question In the book (see source below), the boundary condition on the left end is defined as $Fe^{i\omega t} + T \frac{\partial y}{ \partial x}(x=0) = 0$ I don't get this boundary condition: it assumes that at the left end the driving force is equal to the force exerted by the string (coming from the tension). In my opinion, this contradicts the solution we are looking for: if the left end is at equilibrium (the forces compensate), it shouldn't move (or should at most move at constant speed). However, at $x=0$, the solution we are looking for is oscillating with the term $e^{i\omega t}$. How do you understand that? Am I wrong? Is this formulation wrong? Source Example taken from Fundamentals of acoustics, 4th edition, p43. Answer: The equation states that the transverse applied force at one point in the string is equal to the transverse force at that point expressed in terms of the string tension. It is not stating that there is no net force on a portion of string. If it did say the latter, then it really would contradict the string being able to accelerate.
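Solving the two boundary conditions for $A$ and $B$ makes the answer's point concrete: the driven end does oscillate, with amplitude $F\tan(kL)/(kT)$, because the boundary condition is a force balance on a (massless) endpoint, not a statement that nothing moves. A numeric sketch in Python (plain complex arithmetic; the sample constants are arbitrary):

```python
import cmath

k, L, T, F = 1.3, 2.0, 4.0, 1.0   # arbitrary sample values
I = 1j

# BC at x = L (fixed end):   A e^{-ikL} + B e^{+ikL} = 0
# BC at x = 0 (driven end):  F + T(-ik A + ik B) = 0   (e^{iwt} divided out)
a11, a12, b1 = cmath.exp(-I * k * L), cmath.exp(I * k * L), 0.0
a21, a22, b2 = -I * k * T, I * k * T, -F

det = a11 * a22 - a12 * a21        # Cramer's rule on the 2x2 system
A = (b1 * a22 - a12 * b2) / det
B = (a11 * b2 - b1 * a21) / det

y0_amplitude = A + B               # y(0, t) = (A + B) e^{i w t}
assert abs(y0_amplitude - F * cmath.tan(k * L) / (k * T)) < 1e-12
```

The force balance holds at every instant, yet $y(0,t)$ is a nonzero oscillation, which is exactly the resolution given in the answer.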
{ "domain": "physics.stackexchange", "id": 82503, "tags": "acoustics, string, boundary-conditions, continuum-mechanics, vibrations" }
Which control volume is being used?
Question: Consider the control volume shown in this question To evaluate the lift, the integral form of the conservation of momentum was applied as shown in this solution video (skip to 2:07 for the evaluation part) Here, the surface of the airfoil AND the surface of the rectangle are being considered for the control surface integral. So what exactly is the control volume? I can't think of a control volume whose control surface is the airfoil and the rectangle boundary. This question is from an edX course which is not active anymore so I can't ask my doubts there Answer: It seems pretty clear from the video what the control volume is. It is all the air inside the rectangle and outside the airfoil. Everything he said in the video makes perfect sense in this context. The problem is done pretty much the same way the overall momentum balance is used when we analyze the force exerted on a pipe wall by fluid flowing in a curved pipe.
{ "domain": "physics.stackexchange", "id": 59595, "tags": "fluid-dynamics, aerodynamics" }
Where is the directory of the pcl source code?
Question: Hi my friends, I am a newbie and working on Kinect with PCL on ROS. I can run the tutorials on the ROS website, but the weird thing is that I cannot find the implementation of the classes I have imported, e.g. #include <pcl/filters/extract_indices.h> I can find the header file in /opt/ros/groovy/include/pcl-1.6/pcl/filters/extract_indices.h but I cannot find its corresponding .cpp file. I thought it must be inside /opt/ros/groovy/stack/, but it is not there. Could anyone clear my doubts? Thanks a lot in advance! JK Originally posted by ljk on ROS Answers with karma: 155 on 2013-11-01 Post score: 0 Answer: It will be there in /opt/ros/groovy/share/ if it's not there in /opt/ros/groovy/stacks/. Or wherever it is, just locate it using the command locate extract_indices.cpp in Ubuntu. Originally posted by sudhanshu_mittal with karma: 311 on 2013-11-19 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 16039, "tags": "pcl" }
Separation of 2 particle system into relative coördinates and center of mass coördinates
Question: This might seem like a stupid question, and maybe it is, but I've been stuck on this for quite a while: So I have the Hamiltonian for 2 particles: $$-\frac{\hbar^2}{2m_1}\vec{\nabla^2_{r_1}} - \frac{\hbar^2}{2m_2}\vec{\nabla^2_{r_2}} + V(\vec{r_1}-\vec{r_2})$$. Now invoking the relative coördinate $$\vec{r} = \vec{r_1} - \vec{r_2}$$ and the center of mass coördinate: $$\vec{R} = \frac{m_1\vec{r_1} + m_2\vec{r_2}}{m_1 + m_2}$$ It is said that "with some easy algebra" it is found that: $$-\frac{\hbar^2}{2m_1}\vec{\nabla^2_{r_1}} - \frac{\hbar^2}{2m_2}\vec{\nabla^2_{r_2}} = -\frac{\hbar^2}{2M}\vec{\nabla^2_{R}} - \frac{\hbar^2}{2\mu}\vec{\nabla^2_{r}}$$ with $M = m_1 + m_2$ and $\mu = \frac{m_1 m_2}{m_1 + m_2}$. Now my question is: what "easy algebra"? I've tried the one dimensional version with $\vec{R} \rightarrow X$ and $\vec{r} \rightarrow x$. And then it is as simple as 'using the chain rule' as to prove: $$\frac{\partial}{\partial x_1} = \frac{\partial x}{\partial x_1}\frac{\partial}{\partial x} + \frac{\partial X}{\partial x_1}\frac{\partial}{\partial X}$$ But now I'm stuck with another question.. How is this true? Like how is this the chain rule? I know when for example Z depends on y and y on x this is true: $$\frac{dZ}{dx} = \frac{dZ}{dy}\frac{dy}{dx}$$ and this is regarded as the 'chain rule' but how is the previous statement formed? Thanks in advance. Answer: Oh I think @electronpusher has found it, it is indeed as with the Multivariable Chain Rule, as the wave function can be considered a function of X and x: $\Psi(X,x)$ and thus using the multivariable chain rule on this function you get the above-mentioned expression.
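The "easy algebra" can also be delegated to a computer algebra system. Here is a 1D sketch using sympy (assumed available) with a concrete polynomial test function standing in for the wavefunction; applying both forms of the kinetic-energy operator to it gives identical results:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
m1, m2, hbar = sp.symbols('m1 m2 hbar', positive=True)
M = m1 + m2
mu = m1 * m2 / M

Xs, xs = sp.symbols('X x')
g = Xs**3 * xs**2 + Xs * xs**4       # any smooth test function works

X_expr = (m1 * x1 + m2 * x2) / M     # centre-of-mass coordinate
x_expr = x1 - x2                     # relative coordinate
psi = g.subs([(Xs, X_expr), (xs, x_expr)], simultaneous=True)

# left side: particle-coordinate kinetic operator applied to psi(x1, x2)
lhs = (-hbar**2 / (2 * m1) * sp.diff(psi, x1, 2)
       - hbar**2 / (2 * m2) * sp.diff(psi, x2, 2))
# right side: (X, x) kinetic operator applied to g, then re-expressed
rhs = (-hbar**2 / (2 * M) * sp.diff(g, Xs, 2)
       - hbar**2 / (2 * mu) * sp.diff(g, xs, 2)
       ).subs([(Xs, X_expr), (xs, x_expr)], simultaneous=True)

assert sp.simplify(lhs - rhs) == 0   # the two operators agree
```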
{ "domain": "physics.stackexchange", "id": 74217, "tags": "quantum-mechanics" }
Why is 3-chloroprop-1-en-1-ol unstable?
Question: In the treatment of acrolein with HCl, 3-chloroprop-1-en-1-ol is the first product, which is (my textbook says) unstable and turns into 3-chloropropanal. Why is this 3-chloroprop-1-en-1-ol unstable? Is the −I effect of chlorine responsible? Answer: 3-Chloroprop-1-en-1-ol is in equilibrium with the aldehyde, 3-chloropropanal (aka $\beta$-chloropropionaldehyde). According to Organic Syntheses, this compound "is a very unstable substance which polymerizes rapidly, especially in the presence of traces of hydrochloric acid". But I do not see it being reduced to 3-chloropropanol. Reactivity/instability In 3-chloropropanal, or more precisely in its enol form 3-chloropropenol, the chlorine atom is in an allylic position (i.e. on a carbon atom adjacent to a $\ce{C=C}$ double bond), which makes it more reactive towards nucleophilic substitution. Another molecule of the enol can then substitute it, and the resulting product is $\ce{Cl-CH2-CH=CH-O-CH2-CH=CH-OH}$, which still has an allylic chlorine atom, so it can react further, hence the polymerization.
{ "domain": "chemistry.stackexchange", "id": 6163, "tags": "organic-chemistry" }
Power in series LCR circuit
Question: In a series LCR circuit R=200 ohm and the voltage and the frequency of the main supply are 220 V and 50 Hz respectively. On taking out the capacitance from the circuit the current lags behind the voltage by 30°. On taking out the inductor from the circuit the current leads the voltage by 30°. How much power is dissipated in the LCR circuit? In this question, as we can see, it is a resonance condition and hence the power can be calculated by (220)²/200. But if we consider any other phase change, where an unequal phase difference appears when we remove the inductor and the capacitor, then how do we find the power dissipated? Answer: The instantaneous power of a circuit is always $P(t) = I(t) V(t)$, and so you can calculate the power by integrating this over one period of the driving. For the specific case of a sinusoidal voltage a simpler procedure is available, which will reproduce the result of this calculation. Sum up the impedances of all the circuit elements (which in general will be complex numbers) to get the total impedance $Z$; the average power dissipated is then $P = \frac{V_\mathrm{rms}^2}{|Z|}\cos\phi$, where $\cos\phi = R/|Z|$ is the power factor and $V_\mathrm{rms}$ is the rms voltage. At resonance $|Z| = R$ and $\cos\phi = 1$, which recovers $V_\mathrm{rms}^2/R$.
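For the specific numbers in the question, the two 30° conditions fix $X_L = X_C$ (hence resonance), and the average-power formula $P = V_\mathrm{rms}^2 R / |Z|^2$ reduces to $V_\mathrm{rms}^2/R$. A quick numeric check in Python:

```python
import math

R, V_rms = 200.0, 220.0
# removing C: current lags by 30 deg  ->  X_L = R tan(30 deg)
X_L = R * math.tan(math.radians(30))
# removing L: current leads by 30 deg ->  X_C = R tan(30 deg)
X_C = R * math.tan(math.radians(30))

Z = complex(R, X_L - X_C)                  # net reactance cancels
power = V_rms**2 * Z.real / abs(Z)**2      # P = V_rms^2 R / |Z|^2
```

Here power comes out to 242 W. For unequal phase angles, X_L and X_C no longer cancel and the same two lines give the off-resonance answer directly.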
{ "domain": "physics.stackexchange", "id": 63642, "tags": "homework-and-exercises, electric-current, resonance" }
8a-methyl-1,2,3,4,4a,8a-hexahydronaphthalen-4a-ylium carbocation rearrangement
Question: In this reaction after the attack of lone pairs on $\ce{H+}$ ions, a stable $3^{°}$ carbocation is formed. But seeing the six membered ring and the double bonds already present, I can't help but think that there's some way of obtaining a benzene ring through rearrangements. Can someone suggest a mechanism for this? Answer: I think Sameer Thakur was in right track when started to write the mechanism. But the path got lost at the end. I don't see reason to have a methide shift followed by a hydride shift and then proton abstraction. The 1,2-methide shift gives you a very stable $3^\circ$-carbocation, which is compatible with the initial $3^\circ$-carbocation given by elimination of water. Thus, I think the following mechanism is a very reliable one for gaining aromaticity:
{ "domain": "chemistry.stackexchange", "id": 12080, "tags": "carbocation, rearrangements" }
Does the unit of Inertia include radians?
Question: The unit for angular acceleration $\alpha$ is: $$\mathrm{rad/s^2}$$ The unit for torque is $\mathrm{Nm}$: $$\mathrm{kg\ m^2/s^2}$$ And their relationship with Inertia is: $$I = \tau/\alpha$$ So shouldn't the unit for Inertia be: $$\mathrm{kg\ m^2/rad}$$ yet everything I read says it is simply $\mathrm{kg\ m^2}$ instead. How does the $\mathrm{rad}$ unit fall off? Answer: See also Simple Harmonic Motion - What are the units for $\omega_0$? and https://en.wikipedia.org/wiki/Joule#Confusion_with_newton-metre Here's a somewhat shorter explanation reflecting my own (possibly incorrect) intuition: Radians aren't "real" units; they're just a trick to keep track of which quantities involve angles and which don't, since it's usually a mistake to get those mixed up. However, it's occasionally valid to mix those two types of quantities, and then we drop the radians. Torque is one such place. It's probably possible to be fully rigorous about this and make radians an actual unit, but I've never seen it done.
{ "domain": "physics.stackexchange", "id": 13198, "tags": "angular-momentum, units, inertia" }
Issue with collapsed element in LISA Finite Element Analysis
Question: I'm new to the FEA world and trying to learn the ropes through resources available online. I've imported a model into LISA FEA that I would like to analyze, however, when I press solve to conduct my analysis I'm getting an error message that states the following: Error: Element 1 is collapsed. More than one local node share the same local node. Warning: Elements 395, 400, ... overlap each other. Failed Being so new to this, I do not have any idea as to how to interpret this. My question is: how can I tweak my model to get rid of these error messages? Namely, how do I "uncollapse" an element? How do I stop elements from overlapping? Although this is a LISA-specific question, and I would appreciate advice as to how to overcome this directly through LISA, I am also open to general advice applicable to other types of FEA. If anyone could give me a few pointers as to how this issue is dealt with in FEA software, I would be greatly appreciative. Answer: I don't use LISA, but the most likely cause of both problems is that the geometry of the element mesh is invalid. The "collapsed element" message is saying that two (or more) nodes in element 1 have the same node numbers, or they are at the same position in space. For example, if you have a 4-node "rectangular" element, you can't squish it down into a triangle shape by putting two "corners" at the same place, or you can't have a weird shape where two "opposite" sides of the element cross over each other, etc. The "overlapping element" message is saying that two (or more) elements cover the same area or volume. The elements have to fit together like building blocks, or tiles. They can't overlap in an arbitrary way. To see exactly what the problem is, display the model and highlight the element numbers in the error messages, and then figure out how to make a mesh that doesn't have those problems.
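For intuition, the two checks the solver is performing can be sketched generically (illustrative Python, not LISA's actual algorithm), with each element given as a list of node ids. Note that the second check only catches elements built from an identical node set; true geometric overlap detection would also need the node coordinates:

```python
def collapsed_elements(elements):
    """Element ids that list the same node more than once."""
    return [eid for eid, nodes in elements.items()
            if len(set(nodes)) < len(nodes)]

def fully_overlapping_elements(elements):
    """Pairs of element ids built from exactly the same node set."""
    seen, pairs = {}, []
    for eid, nodes in elements.items():
        key = frozenset(nodes)
        if key in seen:
            pairs.append((seen[key], eid))
        else:
            seen[key] = eid
    return pairs
```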
{ "domain": "engineering.stackexchange", "id": 2927, "tags": "finite-element-method, software" }
Can a vacuum cleaner be used to purify the air in a small room?
Question: A HEPA vacuum cleaner will pick up fine dust from the floor, filter it and send the clean air out through the exhaust. However, with movement in the room fine dust will also be going up in the air, so the vacuum will not take it in and this fine dust will settle hours later. As far as I can see, a vacuum cleaner is very similar to an air scrubber: it takes air in, filters it and sends it out. 1) Is it not possible to close the windows and leave the vacuum on in the middle of the room and expect it to filter fine dust in the air/room? 2) What if I maneuvered around and tried to vacuum the air as well as the floor for several hours, would this do the job? 3) Is an air scrubber/filter necessary? The room in question is about $18\,\mathrm{m}^2$ and the vacuum cleaner I intend to use is a sealed HEPA unit which is about 500 air watts and says it can do $58\,\mathrm{L/s}$, which I think is litres per second. Please give general answers to the questions I have asked as well as answers specific to the vacuum and room in question. Thanks. Answer: As a first approximation, assuming your room has a ceiling of ~3-3.5 m, the total volume of the room is anywhere from $54\,\mathrm{m}^3$ to $63\,\mathrm{m}^3$, i.e. up to 63,000 liters. So at 58 liters per second the room would have the whole volume of air changed in about 18 minutes. You can pretty well assume that the vacuum (mostly) gets all the air because as it sucks air the space in front of the intake is emptied and "new" air comes in. But then you'd have to know the air circulation around the vacuum. That is, the air near the ceiling may take longer to cycle down. Intuitively my sense is that a vacuum near the floor with the exhaust letting out near the ceiling (like, maybe you attach a pipe, like on a dryer) would get you the 18 minutes to recirculate all the air in the room through the vac, since the air in the vac is heated a bit.
Otherwise you have to account for the air near the floor not fully mixing. Thinking about it I suppose a very tightly closed room with a fan would do the trick, to ensure full mixing (and recirculating) of air. Does this make sense to you?
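The answer's arithmetic can be sketched in a few lines (my own illustration; the 18 m² floor area, 3-3.5 m ceiling, and 58 L/s flow figures come from the Q&A above):

```python
# Air-change-time estimate for the room described above.
# Figures from the Q&A: 18 m^2 floor area, 3-3.5 m ceiling, 58 L/s flow.
FLOOR_AREA_M2 = 18.0
FLOW_L_PER_S = 58.0

def air_change_minutes(ceiling_m):
    """Minutes for the vacuum to move one full room volume of air."""
    volume_litres = FLOOR_AREA_M2 * ceiling_m * 1000.0  # 1 m^3 = 1000 L
    return volume_litres / FLOW_L_PER_S / 60.0

low = air_change_minutes(3.0)   # about 15.5 minutes
high = air_change_minutes(3.5)  # about 18.1 minutes
```

One volume change is only a lower bound: with imperfect mixing, as the answer notes, clearing most of the suspended dust would take several volume changes.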
{ "domain": "physics.stackexchange", "id": 13664, "tags": "vacuum, air" }
Is G.R. internally inconsistent due to Shapiro Time Delay?
Question: An axiom of G.R. is that the speed of light is constant in all inertial reference frames. Is it possible to construct an inertial reference frame where the speed of light would not be constant due to Shapiro Time Delay? Answer: No. We interpret Shapiro delay as the light taking a longer path through curved spacetime than it would through flat spacetime. It still moves at the speed of light along that path.
{ "domain": "physics.stackexchange", "id": 90506, "tags": "general-relativity, time-dilation" }
Checking for new software updates to the client
Question: I have written a very simple web service with MVC and WebApi. Now I'm working on the client code, which will be a WPF application (and soon a Windows 8 Store/Phone app). What I have done works, but I'm not sure I'm doing it the "right" way. The purpose of the service is to check if there are any new software updates to the client. My server code looks like this (simplified): public class ProductVersionsController : ApiController { private ApplicationDbContext db = new ApplicationDbContext(); [HttpGet] public CheckVersionResult CheckVersion(string product, string platform, string version) { CheckVersionResult result = new CheckVersionResult(); //Logic removed... return result; } } My client code looks like this (simplified): string parameters = "product=myproduct&platform=wpf&version=1.2.3.4"; string CheckUrl = "http://localhost:61933/api/ProductVersions/CheckVersion"; var url = new Uri(CheckUrl + "?" + parameters); using (var client = new System.Net.WebClient()) { var json = await client.DownloadStringTaskAsync(url); CheckVersionResult data = JsonConvert.DeserializeObject<CheckVersionResult>(json); //Logic removed... } The deserializing is done with Json.net. Should I use HttpGet for a service like this, or should I use Post to send the parameters? How the parameters are sent feels a bit clumsy. If the parameters were encapsulated in a class, how should that be solved on the client side? Is there any good practice I have missed? Answer: Now I will answer myself; hopefully someone else will find this useful. When it comes to the server side, I'll leave it as it is. Using HttpGet could be useful for caching, so that's a good thing. The client code works but it's a bit messy, so I decided to clean it up with some simple helper classes. In these classes I'm using HttpClient instead of WebClient. WebClient isn't available in Windows Phone/Store apps, but HttpClient can be used in WPF if you add a reference to System.Net.Http. 
I have also replaced Json.net with System.Runtime.Serialization.Json.DataContractJsonSerializer. The benefit of this is that it is supported by the framework on all platforms. The downside is that some variable types (like DateTime) can't be parsed directly, but in my application this is fairly easy to deal with, and I prefer to add some extra lines of code rather than a large library. This is my replacement for Json.net: public class JsonHelper { public static string Serialize(object obj) { EnsureHasDataContractAttribute(obj.GetType()); System.Runtime.Serialization.Json.DataContractJsonSerializer serializer = new System.Runtime.Serialization.Json.DataContractJsonSerializer(obj.GetType()); using (MemoryStream stream = new MemoryStream()) { serializer.WriteObject(stream, obj); byte[] rawData = stream.ToArray(); return System.Text.UTF8Encoding.UTF8.GetString(rawData, 0, rawData.Length); } } public static object Deserialize(string text, Type type) { EnsureHasDataContractAttribute(type); System.Runtime.Serialization.Json.DataContractJsonSerializer serializer = new System.Runtime.Serialization.Json.DataContractJsonSerializer(type); using (MemoryStream stream = new MemoryStream(System.Text.UTF8Encoding.UTF8.GetBytes(text))) { return serializer.ReadObject(stream); } } public static T DeserializeObject<T>(string text) { return (T)Deserialize(text, typeof(T)); } private static void EnsureHasDataContractAttribute(Type attributeHolder) { // I have had problems with classes where [DataContract]/[DataMember] // is missing. This has caused DataContractJsonSerializer to crash // randomly. This method makes a simple check that the // [DataContract] attribute is added to the class. Not perfect, // but it should catch most problems I hope. 
//String is always safe if (attributeHolder == typeof(string)) return; //decimal is always safe if (attributeHolder == typeof(decimal)) return; //Primitives are always safe if (attributeHolder.GetTypeInfo().IsPrimitive) return; //Enums are always safe if (attributeHolder.GetTypeInfo().IsEnum) return; //byte[] could cause problems. if (attributeHolder == typeof(byte[])) { System.Diagnostics.Debug.WriteLine("Type byte[] is behaving differently in DataContractJsonSerializer and JSon.Net. You should probably use string and Convert.FromBase64String instead"); return; } //DateTime/DateTimeOffset could cause problems. if (attributeHolder == typeof(DateTime) || attributeHolder == typeof(DateTimeOffset)) { System.Diagnostics.Debug.WriteLine("Type DateTime/DateTimeOffset is behaving differently in DataContractJsonSerializer and JSon.Net. You should probably use string and DateTime/DateTimeOffset.Parse instead"); return; } //TimeSpan could cause problems. if (attributeHolder == typeof(TimeSpan)) { System.Diagnostics.Debug.WriteLine("Type TimeSpan is behaving differently in DataContractJsonSerializer and JSon.Net. You should probably use string and TimeSpan.Parse instead"); return; } // If this is a collection, check the element type instead. Type subType = attributeHolder.GetElementType(); if (subType != null) { EnsureHasDataContractAttribute(subType); return; } // Check that DataContractAttribute is added to the type. // Note: using System.Reflection; is needed for GetTypeInfo. if (attributeHolder.GetTypeInfo().GetCustomAttribute(typeof(System.Runtime.Serialization.DataContractAttribute)) != null) return; // Oh no! [DataContract] is missing on the type that should be deserialized! 
System.Diagnostics.Debugger.Break(); throw new Exception("Missing [DataContract] for " + attributeHolder.ToString()); } } I have also written a small class to build the Uri with all its parameters: public class UriQueryBuilder { public UriQueryBuilder() : this(null, null) { } public UriQueryBuilder(string baseurl) : this(baseurl, null) { } public UriQueryBuilder(string baseurl, string action) { Parameters = new Dictionary<string, string>(); BaseUrl = baseurl; Action = action; } public string BaseUrl { get; set; } public string Action { get; set; } public Dictionary<string, string> Parameters { get; set; } public string QueryString { get { var array = (from key in Parameters.Keys select string.Format("{0}={1}", System.Uri.EscapeDataString(key), System.Uri.EscapeDataString(Parameters[key]))) .ToArray(); return string.Join("&", array); } } public string FormattedUri { get { string formattedString = BaseUrl; if (!String.IsNullOrWhiteSpace(Action)) { if (!BaseUrl.EndsWith("/")) formattedString += "/"; formattedString += Action; } string query = QueryString; if (!String.IsNullOrWhiteSpace(query)) { formattedString += "?"; formattedString += query; } return formattedString; } } } Finally, I wrote an HttpHelper class that takes care of the downloading part. Note that there is support for a CancellationTokenSource: I want the user to be able to stop everything immediately if needed. 
public class HttpHelper { public static async Task<string> DownloadStringAsync(UriQueryBuilder builder, CancellationTokenSource cancelHandler = null) { var url = new System.Uri(builder.FormattedUri); using (var client = new System.Net.Http.HttpClient()) { if (cancelHandler != null) cancelHandler.Token.Register(client.CancelPendingRequests); return await client.GetStringAsync(url); } } public static async Task<T> DownloadJsonObjectAsync<T>(UriQueryBuilder builder, CancellationTokenSource cancelHandler = null) { string text = await DownloadStringAsync(builder, cancelHandler); return JsonHelper.DeserializeObject<T>(text); } } Example of usage First, define a class that will hold the parsed json data: [DataContract] public class CheckVersionResult { [DataMember] public bool HasUpdate { get; set; } [DataMember] public string DownloadUrl { get; set; } [DataMember] public string ReleaseDate { get; set; } public DateTime ReleaseDateParsed { get { try { return DateTime.Parse(ReleaseDate); } catch (Exception) { return new DateTime(2000, 1, 1); } } } } [DataContract] and [DataMember] are needed for the DataContractJsonSerializer. ReleaseDateParsed is dirty but acceptable in my case. Finally, some code to build up the Uri and to download and parse the data: UriQueryBuilder builder = new UriQueryBuilder("http://localhost:61933/api/ProductVersions", "CheckVersion"); builder.Parameters.Add("product", "myproduct"); builder.Parameters.Add("platform", "wpf"); builder.Parameters.Add("version", "1.2.3.4"); CheckVersionResult data = await HttpHelper.DownloadJsonObjectAsync<CheckVersionResult>(builder, null); Quite small and elegant, I think. The best thing is that all the code works in WPF, Windows Store and Windows Phone apps.
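As an aside (not part of the original answer), the escaping and joining that UriQueryBuilder implements by hand is available out of the box in, for example, Python's urllib.parse; a minimal sketch of the same builder logic:

```python
from urllib.parse import urlencode, quote

def build_uri(base_url, action=None, params=None):
    """Mirror of UriQueryBuilder.FormattedUri: base[/action][?k=v&...]."""
    url = base_url
    if action:
        if not url.endswith("/"):
            url += "/"
        url += action
    if params:
        # quote_via=quote percent-encodes spaces as %20, matching the
        # behavior of Uri.EscapeDataString (the default quote_plus uses '+')
        url += "?" + urlencode(params, quote_via=quote)
    return url

uri = build_uri("http://localhost:61933/api/ProductVersions", "CheckVersion",
                {"product": "myproduct", "platform": "wpf", "version": "1.2.3.4"})
```

The point of the comparison is the design choice: letting a library do the per-component escaping avoids subtle bugs with reserved characters in parameter values.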
{ "domain": "codereview.stackexchange", "id": 8643, "tags": "c#, wpf, asp.net-web-api, windows-phone, json.net" }
Defining dominance and selection in these formulae
Question: This paper uses two equations to explain the different conditions under which sexually antagonistic genetic variance is maintained in a population, while allowing for unequal dominance. For autosomally linked loci: $$\frac{h_f}{1-h_m+h_f s_f} < \frac{s_m}{s_f} < \frac{1-h_f}{h_m(1-s_f)}$$ And for X-chromosome-linked (X-linked) loci: $$\frac{2h_f}{1+h_f s_f} < \frac{s_m}{s_f} < \frac{2(1-h_f)}{1-h_f s_f}$$ where $s_m$ and $s_f$ are the selection coefficients against the less fit homozygote (or hemizygote) in males and females respectively, the most fit genotype having a relative fitness of 1. Similarly $h_m$ and $h_f$ represent the dominance of the less fit allele in males and females; $h_m$ does not appear in the X-linked equation because there is no dominance in males (due to hemizygosity, males carry only one X-linked allele). I'm having a little trouble understanding how $s$ and $h$ are defined here. For $s$, is it the fitness of the deleterious homozygote or the difference in relative fitness between the two homozygotes? That is, either $s$ = the relative fitness of the homozygote, or $s$ = 1 - (relative fitness of the deleterious homozygote), where 1 represents the fitness of the fittest homozygote, thus giving the fitness differential. For $h$, is the dominance defined as the fitness difference between the less-fit homozygote and the heterozygote, or between the most-fit homozygote and the heterozygote? Or is it the deviation from the average fitness of the two homozygotes? Going from the graph (see below for the values of the fitnesses), what are the values of $s_m$, $s_f$, $h_m$, and $h_f$, where the dashed line is females and the solid line is males? I see possible values of either $s_m$ = 0.2 or 0.8 (fittest genotype - fitness of weakest), likewise $s_f$ is either 0.3 or 0.7, $h_m$ is either 0.1 (deviation from most-fit genotype), 0.7 (deviation from least-fit), or 0.3 (deviation from average), and $h_f$ is either 0.3, 0.4, or 0.05. 
The values here are, as fitnesses for genotypes A1A1, A1A2, and A2A2, for males, 1.0, 0.9, 0.2, and for females 0.3, 0.7, 1.0. Answer: Fry (2010) borrowed his variables from Kidwell et al. (1977). Kidwell defines the fitness of each genotype as, $w_{m1}$, $w_{f1}$ = male and female fitness of the A$_1$A$_1$ genotype. $w_{m2}$, $w_{f2}$ = male and female fitness of the A$_1$A$_2$ genotype. $w_{m3}$, $w_{f3}$ = male and female fitness of the A$_2$A$_2$ genotype. Kidwell then establishes the following parameters for each fitness variable for opposing additive selection, $w_{m1} = w_{f3} = 1$, $w_{m2} = 1-.5s_m$, $w_{f2} = 1-.5s_f$, $w_{m3} = 1-s_m$, and $w_{f1} = 1-s_f$. Therefore, $s_m = 1-w_{m3}$, and $s_f = 1-w_{f1}$. $s_m$ and $s_f$ are the relative fitness differences between the two homozygous genotypes of each gender. The heterozygotes are assumed to have half of that differential. This is what Fry refers to on page 2 (page 1511 of the publication): In Kidwell et al.’s notation ... the most fit genotype in a given sex has a relative fitness of 1, and $s_m$ and $s_f$ are the selection coefficients against the less-fit homozygote (or hemizygote) in males and females, respectively. For opposing selection with arbitrary dominance, Kidwell parameterizes the heterozygous fitness components as $w_{m2} = 1-h_ms_m$, and $w_{f2} = 1-h_fs_f$. Notice that $h$ is modifying the fitness difference $s$ in the heterozygotes. This is due to incomplete dominance of one allele over the other. If both alleles in a heterozygote contribute equally to the phenotype, then $h = 0.5$, and you have the additive fitness described above ($w = 1-.5s$). If one allele is incompletely dominant over the other, the selective difference $s$ will be modified by some amount, $h$. If A$_1$ is only slightly dominant over A$_2$, (say, $h=0.6$), the effect on $w$ won't be much different than if $h=0.5$. 
Both alleles will affect fitness, but the more dominant allele will have a slightly greater effect on fitness than the other allele. If one allele is much more dominant, then $h$ becomes larger and the more dominant allele contributes more to the overall fitness than the other allele. Finally, if $h=1$ then one allele is completely dominant so the other allele will not contribute at all to fitness. When $h=1$, the equations reduce to $w = 1 - s$ (for the respective genders), remembering that $h$ is based on the dominance of the allele with maximum fitness in the other gender. Consider a female. Her maximum fitness is the A$_2$A$_2$ genotype. If A$_1$ is completely dominant over A$_2$, then a heterozygous female would have fitness equal to the A$_1$A$_1$ genotype which, for females, is defined as $w = 1-s_f$. On the other hand, if A$_1$ is not completely dominant over A$_2$, then the heterozygote will have some reduction in fitness but not as much as for complete dominance. To answer your final question about your figure, given the following values for A$_1$A$_1$, A$_1$A$_2$, and A$_2$A$_2$, respectively: $w_m = 1, 0.9, 0.2$, and $w_f = 0.3, 0.7, 1$, then $s_m = 1 - 0.2 = 0.8$, and $s_f = 1 - 0.3 = 0.7$. For the heterozygosity effect ($h$), given that $w = 1 - hs$, then $h = \frac{1-w}{s}$, therefore $h_{m2} = \frac{1 - 0.9}{0.8} = 0.125$. A$_1$ is nearly completely dominant over A$_2$ in males, so the presence of A$_2$ in the male is not reducing his fitness by very much. For females, $h_{f2} = \frac{1-0.7}{0.7} = 0.429$. The presence of the dominant A$_1$ allele in females has a stronger heterozygous effect, so her fitness is reduced more, compared to heterozygous males. Citations Fry, J.D. 2010. The genomic location of sexually antagonistic variation: some cautionary comments. Evolution 64: 1510-1516. Kidwell, J.F. et al. 1977. Regions of stable equilibria for models of differential selection in the two sexes under random mating. Genetics 85: 171-183.
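The arithmetic at the end of this answer can be double-checked in a few lines (my own sketch; the genotype fitness values are the ones from the question's figure):

```python
import math

# Genotype fitnesses from the question, order: A1A1, A1A2, A2A2
w_m = {"11": 1.0, "12": 0.9, "22": 0.2}  # males
w_f = {"11": 0.3, "12": 0.7, "22": 1.0}  # females

# Selection coefficient against the less-fit homozygote (fittest genotype = 1)
s_m = 1 - w_m["22"]          # 0.8
s_f = 1 - w_f["11"]          # 0.7

# Dominance from w_het = 1 - h*s, so h = (1 - w_het) / s
h_m = (1 - w_m["12"]) / s_m  # 0.125
h_f = (1 - w_f["12"]) / s_f  # roughly 0.429
```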
{ "domain": "biology.stackexchange", "id": 2798, "tags": "mathematical-models, theoretical-biology, sex-chromosome" }
Generate random weighted graphs representing a road network
Question: In order to solve a DARP problem I created a Python class that can generate random graphs. I attribute a random number to every edge, which represents the cost to travel over that edge. My current solution for connecting vertices (and so creating an edge) looks like this: def connectVertices(self, vertexA, vertexB): vertexA.addNeighbour(vertexB) vertexB.addNeighbour(vertexA) weight = randint(1, self.maxDistance) self.adjacencyMatrix[vertexA.index][vertexB.index] = weight self.adjacencyMatrix[vertexB.index][vertexA.index] = weight I insert a random integer in the adjacency matrix. However, this can create graphs which cannot represent realistic road networks. Example: Node A has a cost of 1 to B. Node B has a cost of 1 to C. Node C has a cost of 60 to A. Since the cost when travelling over B between A and C is only 2, it does not make much sense to have a cost of 60 for the direct connection between A and C. (I cannot solve this problem by reducing the maximal cost, because I will need to generate large graphs.) Are there algorithms that solve this problem? (Or: is there maybe a Python library which generates random weighted graphs and takes my problem into account?) Answer: One approach is to generate an arbitrary graph $G$ with arbitrary (positive) lengths on each edge. Then, compute all-pairs shortest paths, and build a new fully-connected graph $G'$ where the length of the edge $u \to v$ in $G'$ is equal to the length of the shortest path from $u$ to $v$ in $G$. The nice thing about this is that you're guaranteed by construction that $G'$ will satisfy the triangle inequality. If you are happy with generating fully connected graphs, you can then output $G'$ as your random graph. If you don't want the graphs to be fully connected, you could keep only some subset of the edges of $G'$ and delete the rest, then output the result.
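The construction the answer describes can be sketched with a small Floyd-Warshall pass (my own illustration; the node names A/B/C and the 1/1/60 weights come from the question's example):

```python
import math

def metric_closure(nodes, edges):
    """All-pairs shortest paths (Floyd-Warshall). The resulting distance
    matrix satisfies the triangle inequality by construction."""
    d = {u: {v: math.inf for v in nodes} for u in nodes}
    for u in nodes:
        d[u][u] = 0
    for u, v, w in edges:  # undirected edges
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# The question's example: the direct A-C cost of 60 collapses to 2 via B.
d = metric_closure(["A", "B", "C"],
                   [("A", "B", 1), ("B", "C", 1), ("A", "C", 60)])
```

Floyd-Warshall is O(V³), which is fine for moderate graphs; for very large sparse graphs, running Dijkstra from every node is the usual alternative.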
{ "domain": "cs.stackexchange", "id": 6185, "tags": "graphs, graph-traversal, weighted-graphs" }
USB_Cam, Could not open MJPEG Decoder
Question: Whenever I run usb_cam with pixel_format:="mjpeg", I get the following error: usb_cam video_device set to [/dev/video1] ... usb_cam pixel_format set to [mjpeg] [mjpeg @ 0x7fabe40f0160]codec type or id mismatches Could not open MJPEG Decoder Segmentation Fault I'm using a Logitech Webcam Pro 9000, on Ubuntu 11.04, with the latest "usb_cam" downloaded from the bosch-ros-pkg repository. usb_cam works great when I set pixel_format to "yuyv", and I can see the image with image_view. However, the frame rates for "mjpeg" should be 4x higher than "yuyv" at this camera's native resolution (according to the list at http://www.quickcamteam.net/devices/logitech_uvc_frame_format_list.pdf). I would ultimately like to use this webcam to track a 'fast' moving target, one that I should be able to catch at 30 fps, but not reliably at 5-7.5 fps. I've installed ffmpeg and libavcodec-dev from Ubuntu's software center. I'd be happy to supply any additional information, but I'm not sure what to look for. Thanks! Originally posted by davidcw on ROS Answers with karma: 21 on 2011-07-09 Post score: 1 Original comments Comment by cvcook on 2015-07-09: The link below: https://code.ros.org/trac/opencv/ticket/1281 seems broken. Does anyone have an answer to this? Same problem here. I am using Indigo, usb_cam and Ubuntu 14.04. Answer: I fixed up CvCaptureCAM_DShow to allow MJPG and setting of all the webcam parameters. Fixes are against 2.3.0; see https://code.ros.org/trac/opencv/ticket/1281 Originally posted by mgb with karma: 16 on 2011-08-31 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 6089, "tags": "usb-cam, camera" }
Linear regression with non-symmetric cost function?
Question: I want to predict some value $Y(x)$ and I am trying to get some prediction $\hat Y(x)$ that optimizes between being as low as possible, but still being larger than $Y(x)$. In other words: $$\text{cost}\left\{ Y(x) \gtrsim \hat Y(x) \right\} \gg \text{cost}\left\{ \hat Y(x) \gtrsim Y(x) \right\}$$ I think a simple linear regression should do totally fine. So I somewhat know how to implement this manually, but I guess I'm not the first one with this kind of problem. Are there any packages/libraries (preferably Python) out there doing what I want to do? What's the keyword I need to look for? What if I knew a function $Y_0(x) > 0$ where $Y(x) > Y_0(x)$? What's the best way to implement these restrictions? Answer: If I understand you correctly, you want to err on the side of overestimating. If so, you need an appropriate, asymmetric cost function. One simple candidate is to tweak the squared loss: $\mathcal L: (x,\alpha) \to x^2 \left( \operatorname{sgn} x + \alpha \right)^2$ where $-1 < \alpha < 1$ is a parameter you can use to trade off the penalty of underestimation against overestimation. Positive values of $\alpha$ penalize overestimation and negative values penalize underestimation, so here you will want to set $\alpha$ negative. In Python this looks like def loss(x, a): return x**2 * (numpy.sign(x) + a)**2 Next let's generate some data: import numpy x = numpy.arange(-10, 10, 0.1) y = -0.1*x**2 + x + numpy.sin(x) + 0.1*numpy.random.randn(len(x)) Finally, we will do our regression in tensorflow, a machine learning library from Google that supports automated differentiation (making gradient-based optimization of such problems simpler). I will use this example as a starting point. 
import tensorflow as tf X = tf.placeholder("float") # create symbolic variables Y = tf.placeholder("float") w = tf.Variable(0.0, name="coeff") b = tf.Variable(0.0, name="offset") y_model = tf.mul(X, w) + b cost = tf.pow(y_model-Y, 2) # use sqr error for cost function def acost(a): return tf.pow(y_model-Y, 2) * tf.pow(tf.sign(y_model-Y) + a, 2) train_op = tf.train.AdamOptimizer().minimize(cost) train_op2 = tf.train.AdamOptimizer().minimize(acost(-0.5)) sess = tf.Session() init = tf.initialize_all_variables() sess.run(init) for i in range(100): for (xi, yi) in zip(x, y): # sess.run(train_op, feed_dict={X: xi, Y: yi}) sess.run(train_op2, feed_dict={X: xi, Y: yi}) print(sess.run(w), sess.run(b)) cost is the regular squared error, while acost is the aforementioned asymmetric loss function. If you use cost you get 1.00764 -3.32445 If you use acost you get 1.02604 -1.07742 acost clearly tries not to underestimate. I did not check for convergence, but you get the idea.
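The key asymmetry can be checked without any ML framework; a plain-Python sketch of the same loss (my own illustration, with the residual defined as prediction minus target):

```python
def sign(x):
    """Pure-Python stand-in for numpy.sign on scalars."""
    return (x > 0) - (x < 0)

def asymmetric_loss(residual, a):
    """residual = prediction - target; -1 < a < 1 skews the penalty.
    Note residual == 0 gives zero loss regardless of a."""
    return residual**2 * (sign(residual) + a)**2

a = -0.5  # negative a: underestimates cost more, pushing the fit upward
over = asymmetric_loss(+1.0, a)   # overestimate by 1:  (1 - 0.5)^2  = 0.25
under = asymmetric_loss(-1.0, a)  # underestimate by 1: (-1 - 0.5)^2 = 2.25
```

With $\alpha = -0.5$ an underestimate costs nine times as much as an overestimate of the same size, which is why the fitted intercept above ends up higher (-1.08 instead of -3.32).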
{ "domain": "datascience.stackexchange", "id": 7733, "tags": "machine-learning, logistic-regression" }
Removing cells with zero expression of a gene from scRNA-seq data
Question: I have a big single-cell RNA-seq data set: > dput(head(new.dat[,1:10])) structure(list(cell1 = c(0.793763840992639, 0, 1.96843530982957, 0.461736429639991, 0.717968540649498, 0), cell2 = c(3.61741696702738, 0.231662370550224, 0, 0, 0, 0), cell3 = c(4.14348883366621, 0.118161316317251, 0.08074552209482, 2.27968429766934, 0.0470313356296409, 0), cell4 = c(1.34783143327084, 0.0094666040612932, 1.14392942941128, 0.652535826921119, 0.357542816432864, 0.149587369334621), cell5 = c(1.27104023273899, 1.55185229643731, 0, 0, 0, 0.0117525723115277), cell6 = c(1.92307653575663, 0, 0, 0.319156642478379, 0, 0), cell7 = c(3.9343015424917, 0.132824589520901, 0.119679885703561, 0.772516422897241, 0.0236884909844904, 0), cell8 = c(3.74969491678643, 0.103404975609384, 0.0354753982873036, 0, 0, 0), cell9 = c(1.19084857532713, 3.9213265721495, 0, 0.0341973245272891, 0.0419122921627454, 0), cell10 = c(4.1224255501566, 0.301871669274068, 0.0633536200981225, 0.389959552469879, 0, 0.0405296102106492)), row.names = c("PTPRC", "MHC-II", "ITGAM", "Ly6C", "Ly6G", "EMR1"), class = "data.frame") > > dim(new.dat) [1] 33 263086 > How do I remove every column which is zero for one gene, let's say PTPRC? Answer: This is possible with the Seurat R package: WhichCells(seurat_object, slot = 'counts', expression = PTPRC > 0) where seurat_object is a Seurat object created from this matrix.
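For readers outside R, the filtering logic itself is just a column subset: keep the cells (columns) whose value in the chosen gene's row is greater than zero. A language-neutral sketch in Python with toy values (my own illustration, not part of the original answer):

```python
# Toy stand-in for the expression matrix: genes as rows, cells as columns.
new_dat = {
    "cell1": {"PTPRC": 0.79, "ITGAM": 1.97},
    "cell2": {"PTPRC": 0.0,  "ITGAM": 0.0},
    "cell3": {"PTPRC": 4.14, "ITGAM": 0.08},
}

def drop_zero_cells(matrix, gene):
    """Keep only the cells (columns) with a non-zero value for `gene`."""
    return {cell: vals for cell, vals in matrix.items() if vals.get(gene, 0) > 0}

filtered = drop_zero_cells(new_dat, "PTPRC")  # cell2 is dropped
```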
{ "domain": "bioinformatics.stackexchange", "id": 1878, "tags": "r, scrnaseq, 10x-genomics" }
How does one sketch a proof to show that the following problem is in the P Complexity Class?
Question: I have the following problem. I do not know where to start or how I should approach it. I am not sure how to prove that a problem is in the complexity class P. I know how to do it for NP, but P confuses me. Answer: To prove that the following problem is in $\mathcal{P}$, we need to construct an algorithm that solves the problem in polynomial time. First we analyse the problem. We know that we are given a boolean expression of the form $$C_1 \land C_2 \land \dots \land C_n$$ with each $C_i$ a disjunction of literals. To solve this problem, we reduce it by first analyzing whether each $C_i$ is true or false. Then we save the boolean value of each $C_i$ in a list. This list will now contain the boolean value of each $C_i$. All we have to do now is to check whether the conjunction of all the elements inside this list is true or false. A conjunction is false iff at least one element in the list is false. Thus if we find one single element that is false, we return false; otherwise we return true. Let's say that $C_1 \land C_2 \land \dots \land C_n = C_{total}$. If $n$ is the number of clauses in $C_{total}$ and $m$ is the average number of literals in each $C_i$, the complexity of this algorithm is $\mathcal{O}(n \cdot m)$, which is polynomial. I hope this helped you out; let me know if you have any questions in the comments below.
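The procedure the answer describes can be sketched in a few lines (my own illustration of the algorithm, with each clause represented as a list of already-evaluated literals):

```python
def evaluate_cnf(clauses):
    """clauses: list of clauses, each a list of boolean literal values.
    Runs in O(n*m) for n clauses averaging m literals each."""
    for clause in clauses:
        # A disjunction is true iff at least one literal is true.
        if not any(clause):
            return False  # one all-false clause makes the conjunction false
    return True

sat = evaluate_cnf([[True, False], [False, True]])  # True
unsat = evaluate_cnf([[True], [False, False]])      # False: second clause fails
```

Note the early return: the loop stops at the first false clause, so the O(n*m) bound is a worst case.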
{ "domain": "cs.stackexchange", "id": 17417, "tags": "complexity-theory, computability, np-complete" }
What type of filter is that?
Question: I have a transfer function in the z-plane with two poles and two zeros. I plotted the function with MATLAB: k = 0.15; z = [0.8 -1]'; p = [(0.51+1i*0.68) (0.51-1i*0.68)]'; [b,a] = zp2tf(z,p,k); What type of filter is that? The closest I can think of is bandpass, but it has a really non-smooth peak, so would the bandwidth be zero? And also the magnitude remains negative, which I don't know how to interpret. Answer: This is a 2nd order IIR biquadratic filter, consisting of a 2nd order resonator with additional zeros on the real axis at $z=0.8$ and $z=-1$, which makes it a bandpass filter. I think the confusion is with the magnitude being given in dB: dB is $20\log_{10}(\text{magnitude})$, so any magnitude between 0 and 1 is a negative dB quantity. As examples, the magnitude 0.707 is $20\log_{10}(0.707) = -3$ dB. Similarly the magnitude 1 is 0 dB, and the magnitude 0.001 is -60 dB. Note that if we had the poles alone (with the two zeros at the origin), this would be a "2nd Order Resonator" with the response as given in the plot below showing the same transfer function (I adjusted the gain to be closer to 0 dB at resonance). Comparing this to the OP's plot, we see the impact of moving the zeros in the OP's transfer function away from the trivial location at the origin. The one that is right on the unit circle at $z=-1$, corresponding to the frequency response at Nyquist, pulls the magnitude down toward 0 as the frequency approaches Nyquist, as a large negative dB quantity (exactly 0 at $f_s/2$). The zero near $z=1$, corresponding to the frequency response at DC, lowers the magnitude response at this point. Since this zero is not exactly on the unit circle, it only reduces the response rather than nulling it completely. We also see how the zeros modify the phase response. Please refer to this post to visualize the relationship between the poles and zeros on the z-plane and the frequency response. 
For more fun stuff with 2nd order resonators, see this great blog post by @RichardLyons dsprelated.com/showarticle/183.php and this post here.
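As a numeric cross-check (my own sketch, pure Python; the k, zeros, and poles are taken from the question's MATLAB snippet), the response can be evaluated directly on the unit circle via $H(z) = k \prod_i (z - z_i) / \prod_i (z - p_i)$:

```python
import cmath
import math

# Pole-zero description from the question's MATLAB snippet
k = 0.15
zeros = [0.8, -1.0]
poles = [0.51 + 0.68j, 0.51 - 0.68j]

def H(omega):
    """Evaluate H(z) = k * prod(z - z_i) / prod(z - p_i) at z = e^{j*omega}."""
    z = cmath.exp(1j * omega)
    num, den = k, 1.0
    for z0 in zeros:
        num *= z - z0
    for p in poles:
        den *= z - p
    return num / den

mag_dc = abs(H(0.0))           # reduced but nonzero: the zero at 0.8 is off the circle
mag_nyquist = abs(H(math.pi))  # essentially zero: the zero at z = -1 is on the circle
db_dc = 20 * math.log10(mag_dc)  # roughly -21 dB
```

This confirms the answer's description: a deep null at Nyquist from the on-circle zero, and a finite (about -21 dB) dip at DC from the zero near, but not on, the unit circle.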
{ "domain": "dsp.stackexchange", "id": 11742, "tags": "filters, discrete-signals, z-transform" }
What is the solution of F(n,n) = F(n-1,n) + F(n, n-1) + 1, where F(0,a) = 1 and F(a, 0) = 1 for every a?
Question: I'm given the following Python function: def recurser(i, j): x = 0 if j == 0: return 1 if i == 0: return 1 x += recurser(i, j - 1) x += recurser(i - 1, j) x += 1 return x and I'm asked to find x for any i = j = n, where n can be any positive integer. The recursion can do the job, but the question says no recursion is allowed, so I have to solve the following recurrence: F(i,j) = F(i-1,j) + F(i,j-1) + 1 for i,j > 0, with F(0,a) = 1 for every positive a and F(a, 0) = 1 for every positive a. Is there any closed-form solution for it? Answer: Your function produces the sequence A109128, that is $$ \mathit{recurser}(i,j) = 2\binom{i+j}{i} - 1. $$ You can prove this by induction. How I found out: I computed the first few values, and searched the OEIS.
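The closed form can be checked directly against the question's recursion (re-implemented compactly here as a sketch):

```python
from math import comb

def recurser(i, j):
    """The question's recursive definition."""
    if i == 0 or j == 0:
        return 1
    return recurser(i, j - 1) + recurser(i - 1, j) + 1

def closed_form(i, j):
    """OEIS A109128: F(i, j) = 2*C(i+j, i) - 1."""
    return 2 * comb(i + j, i) - 1

# Agreement on a grid of small values
assert all(recurser(i, j) == closed_form(i, j)
           for i in range(6) for j in range(6))
```

So for i = j = n the non-recursive answer is simply 2*C(2n, n) - 1, computable in O(n) arithmetic operations.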
{ "domain": "cs.stackexchange", "id": 17139, "tags": "computability, recursion, counting" }
Jefimenko's equations contradiction between free space solutions
Question: In Jefimenko's equations, which are a solution to Maxwell's equations, each term has either $\rho$ or $\mathbf{J}$ in it. Setting $\rho$ and $\mathbf{J}$ to zero should reduce to the electromagnetic plane wave equation, but it does not. And $\mathbf{r'}$ isn't even properly defined for free space, I'm guessing. Why is this the case? Does it have to do with the $+c$ of integration? Answer: Jefimenko's equations are the solutions to Maxwell's equations assuming the fields vanish at infinity and that their initial conditions are compatible with Jefimenko's equations. However, if the solutions are not assumed to vanish at infinity or one takes a different set of initial conditions (for example, by assuming there is a plane wave), then one must add a solution of the homogeneous Maxwell's equations (i.e., Maxwell's equations with charges and currents set to zero) to completely fix the initial and/or boundary conditions chosen. See, e.g., the discussion on Zangwill's Modern Electrodynamics, Chap. 15, p. 509. Yes, the problem is quite similar to the $+c$ of solving ordinary differential equations. Since Maxwell's equations are a system of PDEs, it gets a little more subtle, but in essence that is what is happening. As for the $\mathbf{r'}$ notation, this is often used to indicate the variable being integrated over, as opposed to the point $\mathbf{r}$ where one is evaluating the field. $\mathbf{r'}$ is still defined for free space, but the integral will vanish identically. You might want to take a look at Wald's Advanced Classical Electromagnetism. The first chapter (there is a preview of it available at the link I provided) has a discussion on how the fields are not completely defined by the charges and currents. It doesn't explicitly mention Jefimenko's equations, but it discusses pretty much the same issues.
{ "domain": "physics.stackexchange", "id": 84748, "tags": "electromagnetism, boundary-conditions, plane-wave" }
Possible to grow optical calcite?
Question: I notice that most optical calcite for sale seems to be from natural (mined) sources. Also, I know that in World War 2 mining optical calcite was a strategic objective. Is there some reason why high-grade optical calcite cannot be lab grown? Answer: The crystal structure is trigonal, so it has very low symmetry and a high chance of twinning or other defects forming during, say, Czochralski-type growth methods. My guess would be that the good optical quality material sat and annealed in situ for a long time to clear most of the twins/dislocations out. Support for this comes from A.J. Gratz et al., Geochimica et Cosmochimica Acta 57 491-495 (1993), where they observed, with AFM, calcite growth occurring primarily on screw dislocations.
{ "domain": "chemistry.stackexchange", "id": 3865, "tags": "crystal-structure" }
How to deal with non-fixed springs?
Question: In every physics book they explain how to compute the force that a spring exerts if one of its ends is fixed to a wall (or equivalent) and the other end is compressed or stretched. But how do you deal with the problem if both ends are 'free'? For example, suppose you have two blocks attached by a spring, and someone applies a horizontal force of, say, 50 N. (Let's suppose that there is no friction with the surface, the blocks are originally at rest, the spring is originally in its relaxed state, and the spring constant is, say, 300 N/m.) How would you compute the acceleration of each block? Answer: There are several principles you can use here. Remember Hooke's law, $|F|=k |x|$. I've put absolute values around it so that we can worry about the direction of the force later. The spring is idealized as massless, and so its extension is defined only by the positions of the walls of blocks A and B. Every action has an equal and opposite reaction (except for the "applied" force, whose reaction we are disregarding). We know that if the spring is extended past equilibrium, A and B will be pulled towards each other, and if the spring is squished away from equilibrium, then A and B will be pushed apart. We wind up with the equations: $$F_A=m_A a_A=F_0+k x$$ $$F_B=m_B a_B=-k x$$ where I have defined $x$ with the sign convention that it is positive when the spring is overextended and negative when the spring is squished. It is also a function of the distance between blocks A and B. If we wanted to find the dynamics of the system, we'd have to do some differential equation work, because the accelerations of B and A would be functions of time. To find the accelerations of A and B we'd have to assume the system is in equilibrium. If $x$ is not changing, the velocities of A and B are always the same and so the accelerations of A and B are the same. 
We wind up with the equations: $$m_A a=F_0+k x$$ $$m_B a=-k x$$ which can be solved exactly for $a$ and $x$; doing so is left as an exercise for the reader.
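For concreteness, here is a small numerical sketch of the equilibrium solution. The 50 N force and 300 N/m spring constant come from the question; the block masses are assumed for illustration:

```python
# Equilibrium solution for two blocks joined by a spring, with a
# force F0 applied to block A.
F0 = 50.0            # applied force, N (from the question)
k = 300.0            # spring constant, N/m (from the question)
m_A, m_B = 2.0, 3.0  # block masses, kg (assumed for illustration)

# In equilibrium both blocks share one acceleration a, so treating
# the pair as a single body: F0 = (m_A + m_B) * a
a = F0 / (m_A + m_B)

# Then m_B * a = -k * x gives the spring displacement
# (negative: the spring is squished under the sign convention above)
x = -m_B * a / k

print(a)  # 10.0  (m/s^2)
print(x)  # -0.1  (m)

# Consistency check against the first equation, m_A * a = F0 + k * x
assert abs(m_A * a - (F0 + k * x)) < 1e-12
```

Note that the answer for $a$ is just what you would get by treating the two blocks as one rigid body of mass $m_A + m_B$; the spring displacement then follows from either block's equation alone.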
{ "domain": "physics.stackexchange", "id": 12956, "tags": "homework-and-exercises, newtonian-mechanics" }
Should I bother myself with roslisp?
Question: Hello. I am choosing whether or not to dive into roslisp. It seems that the tutorials are quite obsolete right now. The question is for experienced users: How good is Lisp-ROS integration nowadays, and where is it going? Will it be maintained in the future? Originally posted by mukhachev on ROS Answers with karma: 33 on 2015-04-08 Post score: 2 Answer: The code for roslisp is maintained and there are some people actively using it. But it is true that the tutorials are not properly taken care of... I'll look into it at some point in May. Originally posted by gaya with karma: 311 on 2015-04-09 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by firstprayer on 2015-04-21: I've been trying to work out a simple ROS hello world for a day but still get nothing... can you provide some simple tutorial code that would work under the catkin system? My email is zhangty10@gmail.com, thanks in advance! Comment by gaya on 2015-04-21: You can find the code for the tutorials here: https://github.com/code-iai/roslisp_tutorials/ The only thing you need to do to get them to run with the current roslisp version is to comment out all the add_lisp_executable lines in the two CMakeLists.txt files. Comment by firstprayer on 2015-04-21: Right now, in order to run the script with ROS, I'm putting an extra header in the Lisp file as mentioned in http://wiki.ros.org/roslisp/Tutorials/OrganizingFiles, and putting install(PROGRAMS src/beckon.lisp DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION} ) in the CMakeLists.txt. But it doesn't work... Comment by firstprayer on 2015-04-21: ...saying ROSLISP does not designate any package Comment by gaya on 2015-04-21: Yes, you're definitely doing something wrong; at least the install line seems suspicious. Please create a new question with proper examples of your CMakeLists and the executable script.
Comment by firstprayer on 2015-04-21: http://answers.ros.org/question/207561/create-executable-roslisp-script/ I'd really appreciate it if you could have a look at it :)
{ "domain": "robotics.stackexchange", "id": 21388, "tags": "ros, roslisp" }
Convolutions in Physics
Question: At a high level, Wikipedia states: "A convolution between two functions produces a third expressing how the shape of one is modified by the other." But there are clearly many ways of combining functions to get a third one. A convolution is a specific type of such combination: one that requires reversing and shifting one of the operands, and that combines them with a product and an integral to generate the output. While not very complex algebraically, the operation itself is somewhat "convoluted" (pun intended). Why are convolutions noteworthy? What physical phenomena can be explained mathematically as a convolution? Answer: Preamble The lesson here is that graphical intuition isn't always the best choice. For example, you can say that the intuition for derivatives is that they're slopes, and for integrals that they're areas. But why would "slopes" be that useful in physics? I mean, besides inclined planes, you don't see that many literal slopes in a physics class. And why "areas"? We live in 3D space, so shouldn't volumes be more important? In case that all sounds dumb, the point is that sometimes you can actually make something less physically intuitive by explaining it visually, because usually the visual explanation is completely devoid of the dynamic context that would be present in a real physics problem. One of the main reasons derivatives and integrals appear so often in introductory physics is that they're taken with respect to time, so the derivative means "a rate of change" and the integral means "an accumulation over time". This is a distinct intuition from the geometric one. The point of the geometric intuition is to help you see what the derivative and integral are, given a graph, but it doesn't really help you interpret what they physically do. Similarly, there is a complicated geometric intuition for the convolution, which can hypothetically help you eyeball what the convolution of two graphs of functions would look like.
But in this case the "dynamic" intuition is much simpler. One piece of intuition Convolutions occur whenever you have a two-stage process where the stages combine linearly and independently. Suppose that I kick an initially still mass on a spring at time $t = 0$, and the subsequent trajectory of the spring is $x(t)$. If I apply the same kick at time $t = 1$, then by time translational invariance, the subsequent trajectory is $x(t-1)$. Now suppose that I kick both at $t = 0$ and $t = 1$, with strengths $f(0)$ and $f(1)$. Then by linearity, the subsequent trajectory is $$f(0) x(t) + f(1) x(t-1).$$ This is like the "reversing and shifting" structure of a convolution. So more generally, if I apply a continuous force $f(t)$, then the trajectory is $$\int dt' \, f(t') x(t-t')$$ which is precisely a convolution. This is where a large fraction of the convolutions in physics and electrical engineering come from. (A large fraction of the remainder come from the fact that the Fourier transform of the convolution is the product of the Fourier transforms, and products are simple and ubiquitous.)
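The superposition picture above can be checked numerically. This sketch assumes an impulse response $x(t) = \sin t$ (an undamped oscillator with unit mass and unit stiffness, chosen purely for illustration) and verifies that the discrete convolution of the force with the impulse response equals the explicit sum of shifted, scaled copies:

```python
import numpy as np

dt = 0.01
t = np.arange(0, 10, dt)

# Assumed impulse response: x(t) = sin(t) for a unit impulse at t = 0
x = np.sin(t)

# Force: a kick of strength 2.0 at t = 0 and one of strength 1.5 at t = 1
f = np.zeros_like(t)
f[0] = 2.0
f[int(1.0 / dt)] = 1.5

# Response by convolution: the integral of f(t') x(t - t') dt'
response = np.convolve(f, x)[: len(t)] * dt

# Same thing by explicit superposition of shifted copies of x(t)
k = int(1.0 / dt)
superposed = 2.0 * x * dt
superposed[k:] += 1.5 * x[: len(t) - k] * dt

# The two constructions agree term by term
assert np.allclose(response, superposed)
```

The agreement here is exact (up to floating point), because the discrete convolution is literally the sum over kicks; the $dt$ factor only matters when you interpret the result as an approximation to the continuous integral.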
{ "domain": "physics.stackexchange", "id": 68232, "tags": "fourier-transform, signal-processing" }