anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
Gaussian filter in terms of pixel radius | Question: So I'm writing an app that blurs and downscales large quantities of images, but all the image processing libraries I've come across define gaussian blurs in terms of sigma and kernel size rather than pixel radius (like an image editing app would do). Does radius relate to sigma and kernel size in some way, or is it an entirely different thing?
Answer: After a bit more digging I found a gaussian kernel calculator that seems to suggest sigma is roughly half the blur radius, and kernel size should be somewhere around (sigma*4)+1 for a full blur. | {
"domain": "dsp.stackexchange",
"id": 3457,
"tags": "gaussian"
} |
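Assuming the heuristic from the answer (sigma roughly half the radius, kernel size around 4·sigma + 1), a small conversion helper might look like this; the function name and the rounding-to-odd step are my own additions, since most libraries require an odd kernel size:

```python
def gaussian_params_from_radius(radius_px):
    """Rule-of-thumb conversion from a blur radius in pixels (as an image
    editor would specify it) to a sigma and kernel size, following the
    heuristic in the answer: sigma ~ radius / 2, kernel ~ 4 * sigma + 1."""
    sigma = radius_px / 2.0
    kernel_size = int(sigma * 4) + 1
    if kernel_size % 2 == 0:  # most libraries require an odd kernel size
        kernel_size += 1
    return sigma, kernel_size

print(gaussian_params_from_radius(10))  # (5.0, 21)
```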
Divide an array into two sub arrays such that their sums are equal and possibly maximum | Question: Given an array A, we should partition A into two subarrays whose sums are equal, and that maximizes this sum. We are free to omit items from the subarrays.
For example, [7,2,5,7,12] can be divided as the following:
[5,2][7] = 7
[5,7][12] = 12
[7,7][2,12] = 14
We need the final answer as 14 because the max possible sum is 14.
I initially tried to generate all possible combinations of lengths 1 to N, compare the sums, and look for disjoint pairs of subsets with equal sums. But that takes O(2^N) time.
Is there an efficient algorithm to solve this problem?
How should I approach this problem? I have tried a state-space tree with recursion, as in the subset-sum problem, but couldn't solve it.
Answer: The problem is NP-hard, by a straightforward reduction from the partition problem. Therefore, you should not expect any efficient algorithm. However, there is a pseudo-polynomial-time algorithm using dynamic programming (see pseudo-polynomial-time algorithm for the partition problem for inspiration). | {
"domain": "cs.stackexchange",
"id": 11913,
"tags": "dynamic-programming, partitions, backtracking"
} |
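A sketch of the pseudo-polynomial dynamic program the answer alludes to. This difference-indexed formulation is one common choice (my own, not necessarily the one the answer had in mind): each item goes into subset 1, subset 2, or neither, and we track the best subset-1 sum for each difference between the two sums.

```python
def max_equal_partition_sum(items):
    # dp maps (sum1 - sum2) -> best achievable sum1, over all ways of
    # assigning each item seen so far to subset 1, subset 2, or neither.
    dp = {0: 0}
    for x in items:
        nxt = dict(dp)
        for diff, s1 in dp.items():
            nxt[diff + x] = max(nxt.get(diff + x, -1), s1 + x)  # put x in subset 1
            nxt[diff - x] = max(nxt.get(diff - x, -1), s1)      # put x in subset 2
        dp = nxt
    return dp[0]  # equal sums <=> diff == 0; the value is that common sum

print(max_equal_partition_sum([7, 2, 5, 7, 12]))  # 14
```

The running time is pseudo-polynomial: O(N · S) where S is the total sum, since at most 2S + 1 differences can occur.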
Bicycle tire friction on wet roads | Question: Recently, I got caught in the rain while riding my road bike. Being wet and miserable made me ponder the following:
The general wisdom is that wet roads are more slippery, thus the rider should be more careful with braking and cornering.
If there is less friction between the road and the tire, it should take less energy to propel the bike forward or increased speed with the same power.
At the same time, breaking the surface tension and viscosity of water (as in riding through puddles) should require additional power.
How does the physics of this work out? Am I correct that wet roads are faster?
I am looking for the back of a napkin calculation.
Answer:
The general wisdom is that wet roads are more slippery, thus the rider
should be more careful with braking and cornering.
Correct, because we rely on static friction to prevent skidding and sliding on the road.
If there is less friction between the road and the tire, it should
take less energy to propel the bike forward or increased speed with
the same power.
This doesn't make sense. Static friction is your friend. It prevents relative motion between the tire and road (prevents spinning of the wheel) thus enabling you to accelerate your bike. You know this because if you attempt to accelerate the bike (or a car) on a slippery surface the wheel(s) will simply spin in place and you will go nowhere.
At the same time, breaking the surface tension and viscosity of water
(as in riding through puddles) should require additional power.
Sure, but that has less to do with friction of the water against the tire and more to do with the extra work you have to do to plow through the water (push it out of the way).
Am I correct that wet roads are faster?
Quite the contrary. When riding in wet weather conditions you should go considerably slower than normal due to the increased risk of uncontrolled sliding and skidding.
Regarding your second point: The resistance comes from static friction
or kinetic friction?
The resistance to the tire slipping comes from static friction. Think of static friction as what gives you traction when you accelerate. It also slows you down when you are braking while the wheels are rolling (not skidding).
Once sliding or skidding occurs, the resistance to sliding or skidding is kinetic friction. For example, if you brake hard so that the wheel stops turning and you skid then your stopping distance is determined by kinetic friction. The lower the kinetic friction, the farther you will skid before stopping.
And, which one is affected by the wetness of the surfaces?
Both are affected by the wetness of the surface, that is, both the coefficients of static and kinetic friction are lower. But the maximum possible static friction force must first be exceeded before the kinetic friction force takes over. The kinetic friction force is generally lower than the static friction force.
The last point: My question was: Does the wetness of tire and the road
somehow reduces friction to the extent that it gets easier to pedal to
maintain a constant speed (compared to a dry road)? –
Moving at constant speed means the net force acting on the bicycle is zero. For the net force to be zero, generally you need to apply a torque (force) to your bike that equals the air resistance and rolling resistance forces.
As long as the torque you apply by pedaling does not result in you exceeding the maximum static friction force $\mu_{s}mg$ between the tire and the road, it doesn't matter if the road is wet or dry. Problem is, if it is wet, $\mu_{s}$ is lower than for a dry surface, so you will exceed the maximum static friction force sooner and slip if the road is wet instead of dry.
So the short answer to your question is the greater the constant speed you want to maintain, the drier the road has to be. It's not a matter of having to pedal harder, it's that you won't be able to pedal harder to maintain a constant speed on a wet surface because you will lose traction sooner than on a dry surface.
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 70824,
"tags": "friction"
} |
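For the back-of-the-napkin calculation the asker wanted, the traction budget from the answer can be evaluated with illustrative numbers. The mass and friction coefficients below are assumptions for the sketch, not measured values:

```python
def max_traction_force(mass_kg, mu, g=9.81):
    """Maximum static-friction force the road can supply before the tire
    slips: the traction budget shared by propulsion, braking and cornering."""
    return mu * mass_kg * g

# Illustrative assumptions: 85 kg rider + bike, static friction coefficient
# ~0.8 for rubber on dry asphalt vs ~0.4 when wet.
dry = max_traction_force(85.0, 0.8)
wet = max_traction_force(85.0, 0.4)
print(f"dry: {dry:.0f} N, wet: {wet:.0f} N, ratio: {dry / wet:.1f}x")
```

With these numbers the wet road roughly halves the force you can transmit before slipping, which is the answer's point: wet roads limit traction, they don't make you faster.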
What fraction of air gets expelled upon heating an empty flask? | Question:
A student forgot to add the reaction mixture to the round bottomed flask at $\pu{27 ^\circ C}$ but instead he placed the flask on the flame. After a lapse of time he realised his mistake and using a pyrometer, he found the temperature of the flask was $\pu{477 ^\circ C}$. What fraction of air would have been expelled out?
By solving $$\frac{V_1}{V_1 + V_2} = \frac{T_1}{T_2}$$ I found $$\frac{V_2}{V_1} = \frac{3}{2}.$$ But according to my book it is $\frac 35$. What exactly is this ratio?
Answer: Your first equation should be:
$${V_1\over V_2} = {T_1\over T_2}$$
We want to know what $V_2$ is in terms of $V_1$. Here, I call $V_2$ the final volume. Rearranging the equation and using the values for temperature in kelvin:
$${\rm final\;volume} = {V_1 \times 750\over 300}= {5\over 2}V_1$$
To get the amount that was expelled, we subtract the initial volume ($V_1$) from the final volume:
$${\rm amount\;expelled} = {5\over 2}V_1 - V_1 = {3\over 2}V_1$$
The fraction expelled is then:
$${{\rm amount\;expelled}\over{\rm final\;volume}} = {3\over 5} $$ | {
"domain": "chemistry.stackexchange",
"id": 3866,
"tags": "physical-chemistry, gas-laws"
} |
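The arithmetic in the answer can be checked numerically (constant pressure, ideal gas, so V/T is constant):

```python
T1, T2 = 300.0, 750.0  # kelvin: 27 degC and 477 degC
V1 = 1.0               # take the flask volume as one unit
V2 = V1 * T2 / T1      # Charles's law at constant pressure: V/T constant
expelled = V2 - V1     # the gas that no longer fits in the flask
fraction = expelled / V2
print(V2, fraction)    # 2.5 and 0.6, i.e. 5/2 V1 and 3/5
```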
Equilibrium between spring and centrifugal force | Question: I was doing a problem about a spring rotating about an axis, where the spring tension balances the centrifugal force.
The axis of rotation goes through a human. The spring starts 1 meter away from the axis of rotation and the spring's equilibrium length is 1 meter. There is a mass at the end of the spring.
When stretched (due to rotation), it will measure 2L + x, where L = 1 and x = the stretched distance.
Given K = 100N/m
m = 1kg
L = 1 meter
Spring tension = centrifugal force
$$\begin{align}100x&=m\omega^2(2+x)\\ 100x-m\omega^2x&=2m\omega^2\\ x(100-m\omega^2)&=2m\omega^2\\ &\to m=1\ \mathrm{kg}\\ x(100-\omega^2)&=2\omega^2\\ x&=\frac{2\omega^2}{100-\omega^2}\end{align}$$
A solution which breaks at ω=10.
Am I doing something wrong?
Or why does that happen?
Thanks for any insights you may provide!
Answer: Your solution is correct for $|\omega| < 10$. When $\omega=10$ the spring exerts an inwards force of $100x$ N on the mass at extension $x$ metres, but the centripetal force required to keep the mass moving in a circle with radius $2+x$ metres is $200 + 100x$ N, which is always greater than $100x$ N. So when $\omega=10$ the spring is not strong enough to keep the mass moving in a circle, no matter how far it is extended. | {
"domain": "physics.stackexchange",
"id": 83045,
"tags": "homework-and-exercises, classical-mechanics, spring, rotational-kinematics"
} |
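A quick numerical check of the derived formula $x = \frac{2\omega^2}{100-\omega^2}$, showing the extension diverging as $\omega \to 10$ (the function name and the `None` convention for "no equilibrium" are my own):

```python
def extension(omega, k=100.0, m=1.0, L2=2.0):
    """Steady-state spring extension x solving k*x = m*omega^2*(L2 + x),
    i.e. spring force = required centripetal force at radius L2 + x.
    A circular-motion equilibrium only exists while m*omega^2 < k."""
    denom = k - m * omega ** 2
    if denom <= 0:
        return None  # spring can never supply the required centripetal force
    return m * omega ** 2 * L2 / denom

for w in (5.0, 9.0, 9.9, 10.0):
    print(w, extension(w))  # extension grows without bound as w -> 10
```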
change timeout of rostest | Question:
I want to test a lot of topics with hztest ros.org/wiki/rostest/Nodes (no Karma no Link)
Since all tests are launched in their own environment, it takes more than one hour to complete the testing, and nobody can use the robot.
Because of that, I created a modified hztests to check all given topics in one bringup.
But when I check more than about 6 topics the time limit is hit and the test fails:
File "/opt/ros/electric/stacks/ros_comm/tools/roslaunch/src/roslaunch/launch.py", line 669, in run_test
raise RLTestTimeoutException("test max time allotted")
I successfully changed line 664:
timeout_t= time.time()+test.time_limit to timeout_t= time.time()+1000.0
But is there a way to change the time_limit without manipulating ros_comm?
I tried 'rostest mypckg mytest --bare-limit=1000' and also setting a parameter time_limit on the parameter server, but both without success.
Originally posted by msieber on ROS Answers with karma: 181 on 2012-12-20
Post score: 3
Answer:
The parameter is time-limit not time_limit. So it's
Originally posted by msieber with karma: 181 on 2013-02-08
This answer was ACCEPTED on the original site
Post score: 6 | {
"domain": "robotics.stackexchange",
"id": 12172,
"tags": "rostest"
} |
How to check if a fasta file and a GTF file fit and form a valid pair? | Question: I'd like to know if there is a simple way to check the concordance between a GTF file and a fasta file (that I use as mapping reference) ?
It may be a dumb question, but I suspect discrepancies between my two files and I don't want to check this manually.
Answer: Answer from @ATpoint, converted from comment:
Check whether chromosome identifiers are the same and both files contain the same chromosomes. You can also check that the end coordinate of the GTF never exceeds the length of the respective chromosome. Maybe you can also check whether the first three nucleotides of every CDS indeed start with a start codon. If this is all true then I guess I would trust this pair. | {
"domain": "bioinformatics.stackexchange",
"id": 1921,
"tags": "fasta, gtf"
} |
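A minimal sketch of the first check (same chromosome identifiers in both files) in plain Python; the function names are placeholders, and for real data a dedicated library such as pyfaidx may be preferable:

```python
def fasta_chroms(path):
    """Collect sequence names from FASTA headers (text up to first whitespace)."""
    names = set()
    with open(path) as fh:
        for line in fh:
            if line.startswith(">"):
                names.add(line[1:].split()[0])
    return names

def gtf_chroms(path):
    """Collect chromosome names from column 1 of a GTF, skipping comment lines."""
    names = set()
    with open(path) as fh:
        for line in fh:
            if line.strip() and not line.startswith("#"):
                names.add(line.split("\t")[0])
    return names

def concordance_report(fasta_path, gtf_path):
    fa, gtf = fasta_chroms(fasta_path), gtf_chroms(gtf_path)
    return {"only_in_fasta": fa - gtf, "only_in_gtf": gtf - fa}
```

If both sets in the report are empty, the identifiers match; checking that GTF end coordinates never exceed chromosome lengths would additionally require summing the sequence lengths per FASTA record.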
gazebo starting view pose control | Question:
I want to change Gazebo's initial camera view pose for my model. When Gazebo first starts, I want to see my world from above. I don't want to always have to change the view with the mouse. How can I change Gazebo's initial view pose in the .world file?
so :
my first view: http://i.imgur.com/qTkYEJc.png
BUT: I want to see it like this when first starting.
http://i.imgur.com/ouqEYEi.png
What can I do in my SDF file?
Originally posted by osmancns on Gazebo Answers with karma: 3 on 2015-08-28
Post score: 0
Answer:
You can set the user camera pose in the SDF as follows:
<world name="default">
<gui>
<camera name="user_camera">
<pose>0 0 0 0 0 0</pose>
</camera>
</gui>
...
</world>
From Gazebo 6, you can find out the current camera pose on the World tab, under GUI. Then you can use that value in your SDF.
On older versions of Gazebo, you can save your world with the desired camera pose and find the pose in the saved SDF.
Originally posted by chapulina with karma: 7504 on 2015-08-28
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 3813,
"tags": "gazebo"
} |
Multi-label classification for text messages (convert text to numeric vector) | Question: Given a dataset of messages which are labeled with 20 features, I want to predict the value of each feature for a new message.
Dataset example:
message feature1 feature2 feature3 feature3 feature4 ...
'hi' 1 0 1 1 0 ...
'i am bussy' 0 0 0 0 1 ...
... ... ... ... ... ... ...
Split data into train & test to train the model:
from sklearn.model_selection import train_test_split
x= df.iloc[:,0:1].values
y = df.iloc[:,1:-1].values
train_x, test_x, train_y, test_y = train_test_split(x, y, random_state=42)
Now, my train_x is an array of text values (impossible to fit a model on directly). How could I convert them to numeric vectors?
Answer: What you want to do is find a vector representation of those strings which are in your $X$ vector. Two such techniques are Bag-of-Words and $n$-grams.
Bag-of-Words (BoW)
This technique will build a dictionary with all the words that exist in your training set. Then we will build a vector with the count of each word in each instance. For example let's consider these three separate instances:
'hi'
'i am bussy'
'how are you doing?'
Then we can see that the words in this training set are: hi, i, am, bussy, how, are, you, doing. So the vector representations of the above strings would be:
[1, 0, 0, 0, 0, 0, 0, 0]
[0, 1, 1, 1, 0, 0, 0, 0]
[0, 0, 0, 0, 1, 1, 1, 1]
There are ways to make this technique more effective by removing tenses from verbs or plurality of words. This is called stemming and should be used with BoW.
n-grams
n-grams is a feature extraction technique for language based data. It segments the Strings such that roots of words can be found, ignoring verb endings, pluralities etc...
The segmentation works as follows:
The String: Hello World
2-gram: "He", "el", "ll", "lo", "o ", " W", "Wo", "or", "rl", "ld"
3-gram: "Hel", "ell", "llo", "lo ", "o W", " Wo", "Wor", "orl", "rld"
4-gram: "Hell", "ello", "llo ", "lo W", "o Wo", " Wor", "Worl", "orld"
Thus in your example, if we use 4-grams, truncations of the word Hello would appear to be the same. And this similarity would be captured by your features. Then you can vectorize the results of the $n$-gram in the same way as BoW.
Term Frequency-Inverse Document Frequency (TF-IDF)
Both of the above techniques can be enhanced with TF-IDF. This removes words that appear too often and do not have much information about the string, for example: the, like, a, ab, is...
For a term $t$ in a string $s$, the weight for that term is given by
$W_{t,s} = TF_{t,s} \cdot \log\left(\frac{N}{DF_t}\right)$
where $N$ is the number of strings in your corpus, $TF_{t,s}$ is the number of times the term appears in the given instance string, and $DF_t$ is the number of strings in which the term appears. | {
"domain": "datascience.stackexchange",
"id": 4479,
"tags": "machine-learning, python, multilabel-classification"
} |
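The Bag-of-Words construction described above can be sketched in plain Python. In practice scikit-learn's CountVectorizer and TfidfVectorizer implement BoW and TF-IDF; the helpers below are purely illustrative and reproduce the worked example from the answer:

```python
import re

def fit_vocabulary(docs):
    """Build an ordered vocabulary from all words seen in the training docs."""
    vocab = []
    for doc in docs:
        for word in re.findall(r"[a-z']+", doc.lower()):
            if word not in vocab:
                vocab.append(word)
    return vocab

def vectorize(doc, vocab):
    """Count of each vocabulary word in the document (one BoW vector)."""
    words = re.findall(r"[a-z']+", doc.lower())
    return [words.count(w) for w in vocab]

docs = ["hi", "i am bussy", "how are you doing?"]
vocab = fit_vocabulary(docs)
print(vocab)  # ['hi', 'i', 'am', 'bussy', 'how', 'are', 'you', 'doing']
print([vectorize(d, vocab) for d in docs])
```

The resulting matrix (one row per message) is exactly the numeric input that train_test_split and any sklearn classifier expect in place of the raw strings.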
What does leakage of charge in plate and isolated spherical capacitors mean? | Question: I've read quite a lot of times about leakage of charge in capacitors, but what does it exactly mean?
Do we see sparks? Or is it some current flow? Or something else?
Say I have a charged sphere, so where will the charge go on charge leaking? (Assuming that it has an electric field less than the breakdown strength of the medium.)
Answer: Leakage of charge in capacitors can have different reasons in practice. Charges may escape into an imperfect insulator, or flow through the insulator as a tiny current. Charges may also escape via surface conduction on the capacitor itself or on the PCB it is mounted on. The charged sphere in air may lose charge via surface conduction of its support, or even via charging air molecules or neutralizing ions present in the air. Very thin insulators may allow for small currents between capacitor plates due to quantum tunneling. Depending on the exact mechanism and measurement method, discharging could be seen as discrete events (e.g. of single charge escape) or as a steady continuous process. | {
"domain": "physics.stackexchange",
"id": 56114,
"tags": "electrostatics, charge, capacitance"
} |
Does quantum mechanics imply that particles have no trajectories? | Question: In Classical Mechanics we describe the evolution of a particle giving its trajectory. This is quite natural because it seems a particle must be somewhere and must have some state of motion. In Quantum Mechanics, on the other hand, we describe the evolution of a particle with its wave function $\Psi(x,t)$ which is a function such that $|\Psi(x,t)|^2$ is a probability density function for the position random variable.
In that case, solving the equations of the theory instead of giving the trajectory of the particle gives just statistical information about it. Up to there it is fine, these are just mathematical models. The model from Classical Mechanics has been confirmed with experiments in some situations and the Quantum Mechanics model has been confirmed with experiments in situations Classical Mechanics failed.
What is really troubling me is: does the fact that the Quantum Mechanics model has been so amply confirmed imply that a particle has no trajectory? I know some people argue that a particle is really nowhere and that observation is what makes it take a stand. But, to be sincere, I don't swallow that idea. It always seemed to me that it just reflects the fact that we don't really know what is going on.
So, does Quantum Mechanics imply that a particle has no trajectory whatsoever, or do particles have well-defined trajectories while the theory is simply unable to give any more information about them than just probabilities?
Answer: Quantum systems do not have a position. This is intuitively hard to grasp, but it is fundamental to a proper understanding of quantum mechanics. QM has a position operator that you can apply to the wavefunction to return a number, but the number you get back is randomly distributed with a probability density given by $|\Psi |^2$.
I can't emphasise this enough. What we instinctively think of as a position is an emergent property of quantum systems in the classical limit. Quantum systems do not have a position, so asking for (for example) the position of an electron in an atom is a nonsensical question. Given that there is no position, obviously asking for the evolution of that position with time, i.e. the trajectory, is also nonsensical.
You say:
I don't swallow that idea. It always seemed to me that it just reflects the fact that we don't really know what is going on.
and you are far from alone in this as indeed his Albertness himself would have agreed with you. The idea that we don't know what is going on is generically referred to as a hidden variable theory, however we now have experimental evidence that local hidden variable theories cannot exist. | {
"domain": "physics.stackexchange",
"id": 36961,
"tags": "quantum-mechanics, quantum-interpretations"
} |
Problem getting real-time_map using hector_slam! | Question:
Hello all,
I am using Xbox360, Ros Kinetic, Ubuntu 16.04LTS
I am new to ROS. I want to map an unknown environment using kinect. For this I am using freenect and gmapping.
I have done these steps:
roslaunch freenect_launch freenect.launch
rosrun pointcloud_to_laserscan pointcloud_to_laserscan_node cloud_in:=/camera/depth/points
rosbag record -O mybag /scan
rostopic echo /scan /////////i can see scan data is available here
rqt_graph
here is MY RQT_GRAPH and here is MY FRAMES.PDF
This is all great.
But the problem is getting data from the bag file
roscore
rosparam set use_sim_time true
roslaunch hector_slam_launch tutorial.launch scan:=/base_scan
rosbag play --clock mybag.bag
rqt_graph
here is MY RQT_GRAPH and here is MY FRAMES.PDF
I can't see any error in RViz. But I can't see the map either.
The terminal where I launch hector_slam shows the error that
[ WARN] [1515990267.492285826]: No transform between frames /map and scanmatcher_frame available after 20.002855 seconds of waiting. This warning only prints once.
[ INFO] [1515990270.173023564]: lookupTransform base_footprint to camera_depth_optical_frame timed out. Could not transform laser scan into base_frame.
rostopic pub syscommand std_msgs/String "savegeotiff"
but the terminal in which tutorial.launch is running stated that
[ INFO] [1515995399.174200553]: HectorSM sysMsgCallback, msg contents: savegeotiff
[ INFO] [1515995399.174327483]: HectorSM Map service called
[ INFO] [1515995399.194847114]: GeotiffNode: Map service called successfully
[ INFO] [1515995399.222169236]: Cannot determine map extends!
[ INFO] [1515995399.222222911]: Couldn't set map transform
I HAVE ALSO RUN rosrun tf view_frames AND I NOTICED THAT world IS NOT CONNECTED TO base_frame
can anyone help me with this
please do post a comment if any other specifications required.
Thank you in advance.....
Originally posted by NAGALLA DEEPAK on ROS Answers with karma: 18 on 2018-01-14
Post score: 0
Original comments
Comment by jayess on 2018-01-14:\
Lookuptransform Data _______cannot publish transform...............or something like........... no connection between base_frame
I highly doubt that this is the actual error that you're getting. Please update your question with a copy and paste of the actual error message from the terminal.
Comment by NAGALLA DEEPAK on 2018-01-14:
I have updated my question.@jayess
Comment by gvdhoorn on 2018-01-16:
@NAGALLA DEEPAK: is your caps lock and/or / key broken?
Comment by NAGALLA DEEPAK on 2018-01-18:
Thank you @gvdhoorn for all the struggle you have taken to edit my question. There isn't any problem with my keys but I have typed like that only to highlight important things.
Comment by NAGALLA DEEPAK on 2018-01-18:
Could some one give solution to my question?
It is very urgent.
Comment by jayess on 2018-01-18:
@NAGALLA DEEPAK we all have deadlines. It's not like people aren't answering because they don't want to. Perhaps no one who has seen your question has time to answer it (giving good answers takes a lot of time) or knows the solution. http://wiki.ros.org/Support#Etiquette
Comment by jayess on 2018-01-18:
I haven't used hextor_slam but it looks like you have a problem with your network. Are you using a simulator or a physical robot?
Comment by NAGALLA DEEPAK on 2018-01-19:
Sorry for the earlier post.
Comment by NAGALLA DEEPAK on 2018-01-19:
I only have a kinect and no robot. I have read that hector slam does not transform data
Comment by sai on 2018-01-26:
Try this command---- rosrun tf static_transform_publisher 0 0 0 0 0 0 1 base_frame world
Comment by NAGALLA DEEPAK on 2018-01-26:
I have tried the command and got this https://drive.google.com/open?id=188nGPpFR-59_4Bd7rHJo_NNa2cSJCRSk yes there is some change in the frames. But a new frame 1 is being added to the frames.pdf but I think base_frame should be connected to world and not to the "1"
Answer:
Hello all,
After going through different tutorials, and through the help of some people, I am able to produce the map.
For this I have established the TF between different frames manually.
By this command
rosrun tf static_transform_publisher 0 0 0 0 0 0 nav base_footprint 100
in the terminal.
Also used the above line to connect (base_footprint and base_frame) and (laser and camera_link).
This way I am able to produce the real time map.
Thanks to all who has helped me in getting this work done.
Originally posted by NAGALLA DEEPAK with karma: 18 on 2018-01-29
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 29747,
"tags": "slam, navigation, kinect, pointcloud-to-laserscan, world"
} |
Which processes are included in the process called cells eating themselves? | Question: I am reading about atrophy and wondering which processes the phrase "cells eating themselves" refers to.
Cells need something to survive these difficult times. To decrease
protein synthesis to survive. If insufficient, cells start
to eat themselves. Then, ubiquitine-proteosome pathway and autophagy
pathway.
I am not sure if it is only apoptosis.
It can refer here to longer cascade.
I think it can at least refer to autophagy where one cell degrades its components.
Which processes does the phrase cells eating themselves include?
Answer: So:
the processes are mentioned in the quote you provided: "ubiquitine-proteasome pathway (note: it's proteAsome) and autophagy pathway". As far as I know, apoptosis per se is not utilized for recycling. | {
"domain": "biology.stackexchange",
"id": 1797,
"tags": "physiology, pathology"
} |
Can we use some type of magnifying glass to magnify gravity acting on a body? | Question: Can we use some type of magnifying glass to magnify gravity acting on a body, in terms of theoretical physics?
If so, could someone use it to destroy the Earth?
Answer: As Anna says, in practice gravity is too weak a force to be used as a death ray, however it is possible.
You may have heard that gravity can focus light. For example, the most distant galaxy known was discovered just a couple of weeks ago, and it can only be seen because a galaxy cluster between the galaxy and us is focusing its light and making it more intense. Well, in principle, you could use a similarly high mass to focus a source of gravitational waves on a planet. This would heat it up, just as the moon Io is heated by tidal forces from Jupiter.
But this is pure science fiction. Firstly, generating intense gravitational waves requires huge masses, for example two colliding black holes. Secondly, you need something with the mass of many galaxies to focus the waves. While these do exist in the universe, how are you going to move them around to focus on your target? | {
"domain": "physics.stackexchange",
"id": 4770,
"tags": "string-theory"
} |
Time system when using tf2 lookupTransform | Question:
Given a /tf topic in a rosbag, I want to play the rosbag and use lookupTransform to get the transform between different frames. The usual way to get the latest transform is lookupTransform("target_frame", "source_frame", rospy.Time(0)). What time system does lookupTransform use: the wall-clock time of the computer calling it, or the timestamps in the rosbag? Also, is there any difference between using lookupTransform("target_frame", "source_frame", rospy.Time(0)) and lookupTransform("target_frame", "source_frame", rospy.Time())?
Originally posted by hck007 on ROS Answers with karma: 29 on 2023-07-02
Post score: 0
Answer:
As mintar puts it:
The value of ros::time::now() depends on whether the parameter use_sim_time is set.
If use_sim_time == false, ros::time::now() gives you system time (seconds since 1970-01-01 0:00, so something like > 1471868323.123456).
If use_sim_time == true, and you play a rosbag, ros::time::now() gives you the time when the rosbag was recorded (probably also something like 1471868323.123456).
If use_sim_time == true, and you run a simulator like Gazebo, ros::time::now() gives you the time from when the simulation was started, starting with zero (so probably something like 63.123456 if the simulator has been running for 63.123456 seconds).
So in your case, if you want the nodes to use the time from when the rosbag was recorded, you have to append --clock to rosbag play so that it publishes simulated time on /clock, as well as set the param use_sim_time before any other node starts in a fresh roscore instance.
There might not be a difference between rospy.Time(0) and rospy.Time() because the default arguments are 0 anyway, see the API.
Originally posted by Guess Giant with karma: 26 on 2023-07-02
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 38440,
"tags": "ros"
} |
Is there any physical scalar potential where $U$ also depends on the $\dot q_i$s? | Question: In Goldstein's Classical Mechanics, at page 21, while deriving Lagrange's equations, when the external forces are derivable from a scalar potential $U$, the author implicitly assumes that $U = U(q_i, t)$.
However, is there any physical scalar potential where $U$ also depends on the $\dot q_i$s?
Answer: Yes, e.g. the velocity-dependent potentials for the Lorentz force and the Coriolis force. See also e.g. this Phys.SE post. | {
"domain": "physics.stackexchange",
"id": 56002,
"tags": "classical-mechanics, lagrangian-formalism, potential, potential-energy, velocity"
} |
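As a sketch of the standard example, the Lorentz force follows from a velocity-dependent potential (this is the textbook construction Goldstein gives for generalized potentials; the notation below is the usual one, not copied from the answer):

```latex
% Velocity-dependent potential for a charge q in an electromagnetic field:
U(\mathbf{r}, \dot{\mathbf{r}}, t) = q\,\phi(\mathbf{r}, t)
    - q\,\dot{\mathbf{r}} \cdot \mathbf{A}(\mathbf{r}, t)

% The generalized force is recovered from U via
Q_j = -\frac{\partial U}{\partial q_j}
    + \frac{d}{dt}\frac{\partial U}{\partial \dot{q}_j}

% which reproduces the Lorentz force
\mathbf{F} = q\left(\mathbf{E} + \dot{\mathbf{r}} \times \mathbf{B}\right).
```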
What would Betelgeuse look like from Earth if it was at the edge of the Solar System | Question: If we parked Betelgeuse just outside the Solar System, how big would it look from Earth?
Answer: The distance to Betelgeuse is not precisely known for reasons you can read about here and here. But let's assume a likely distance of 200 pc. The angular diameter of the star has been measured with optical and IR interferometry to be about 0.05 arcseconds (see the relevant section of the wikipedia page on Betelgeuse), though this is uncertain by about 10%.
These two numbers translate into a linear photospheric diameter for the star of 10 astronomical units (au).
We then have to interpret what you mean by just outside the Solar System. If you mean half way to the next star - i.e. around 0.6 pc, then a star with this diameter would have an angular diameter of about 17 arcseconds - so similar in size to Mars at opposition viewed from Earth. However, if you meant just beyond the edge of the Kuiper belt at say 100 au from the Sun, the angular diameter would be about 6 degrees! This is more than ten times the size of our Sun, which would indeed look very impressive if it were at all possible for us to view such an event. | {
"domain": "astronomy.stackexchange",
"id": 6684,
"tags": "distances, angular-diameter, betelgeuse"
} |
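A quick recomputation of these angular sizes, using the small-angle approximation and the definition of the parsec (1 pc = 648000/π au, so an object of $d$ au at $D$ pc subtends $d/D$ arcseconds):

```python
import math

AU_PER_PC = 648000 / math.pi  # definition of the parsec, ~206264.8 au

def angular_diameter_arcsec(diameter_au, distance_au):
    # small-angle approximation: theta = d / D radians, converted to arcsec
    return math.degrees(diameter_au / distance_au) * 3600.0

d = 0.05 * 200  # 10 au linear diameter, from 0.05" at an assumed 200 pc

halfway = angular_diameter_arcsec(d, 0.6 * AU_PER_PC)  # ~17 arcsec at 0.6 pc
kuiper = angular_diameter_arcsec(d, 100) / 3600.0      # ~5.7 degrees at 100 au
print(halfway, kuiper)
```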
Replacing simple jQuery methods for better use | Question: There are a few common jQuery calls I find myself writing when creating my app. I need some help, and maybe a better way to do all this, or a rewrite.
1) Singleton Selector
If I want to select only one class I would do it like this:
$("#tempo").on("click", ".entity", function(e){
$(".selected").removeClass("selected");
$(this).addClass('selected');
// ....
2) Resource Selector
I have a div with all the resource elements (divs, uls) with formatted code (added classes and even children). This is the code I write a lot to select it, clone it and append it:
// $R = $('div.resources'); - Create only once.
$R.find(".folder").clone()
.addClass(id++)
.appendTo("#folders");
3) Prevent Double Actions
I have quite a lot of ajax requests to the server once a button gets clicked, and I have to prevent the button from being double-clicked each time.
$("button#new_game").on("click",function(e){
if( $(this).hasClass("clicked") == false ){
var $pre = $(this);
$pre.addClass("clicked");
createAjaxCall(function(result){
$pre.removeClass("clicked");
});
}
});
Adding the clicked class also lets me customize the button's appearance, I should mention, so I can't really use .data().
These are the three main repetitive/non-optimized jQuery problems I run into.
Answer: I'd say you're on the right track with all your code. The trick is probably to abstract some of the oft-repeated code into functions/plugins.
For the "singleton selector", wrapping your code in a plugin would seem the way to go. But note that your current code might need tweaking. You have this line:
$(".selected").removeClass("selected")
which will remove the selected class from elements anywhere on the page. In your example, you'd probably want only to remove the class from .entity elements within the #tempo element.
Here's a (very quick) sketch of a plugin, that should do what you want (and keep it limited to the container element):
$.fn.singleSelection = function(classSelector, handler) {
return this.each(function() {
var container = $(this);
container.on("click", classSelector, function (event) {
container.find(classSelector + ".selected").removeClass("selected");
$(this).addClass("selected");
if( typeof handler === "function" ) {
handler.call(this, event);
}
});
});
};
Your code can then be shortened to
$("#tempo").singleSelection(".entity"); // you can add a click handler too
Here's a demo.
But... jQueryUI already has this functionality, so if you feel like using that, go right ahead. It's certainly been tested better than what you see above :)
However, for such a simple thing, jQueryUI seems like overkill. But there are probably other, leaner plugins you can find, which'll do what you need - this is just an example.
For your second point, I'd probably wrap the code in an addFolder function somewhere. But it's a little hard to be more specific without knowing more about the context.
For your third point, you could again make a simple jQuery plugin that'd only forward events to your click handler if the element doesn't have the "clicked" class. There are also various "debounce" plugins out there to throttle clicks, but most simply impose a time limit rather than explicitly wait for an ajax operation to finish.
In any event, here's another (very quick) sketch of a jQuery plugin
$.fn.clickOnce = function(handler) {
return this.each(function() {
var element = $(this);
element.on("click", function (event) {
if( !element.hasClass("clicked") ) {
element.addClass("clicked");
if( typeof handler === "function" ) {
handler.call(this, event);
}
}
});
});
};
In your code you could then do:
$("button#new_game").clickOnce(function (event) {
// this will only be called if the button wasn't already clicked
var button = $(this);
createAjaxCall(function (result) {
// it's your responsibility to remove the "clicked" class
button.removeClass("clicked");
});
});
Here's a demo
Again, there are no doubt ready-made plugins that do this and do it better.
By the way, I personally find it easier to pass jQuery's XHR objects around, as they act as deferred objects/promises. I.e.:
var xhr = $.ajax(...); // create an XHR obj from any of jQuery's ajax functions
xhr.done(function (data) {
// success handler
});
xhr.fail(function (err) {
// failure handler
});
xhr.always(function () {
// complete handler
});
So if your createAjaxCall() function returns an xhr object, you can attach a handler to the always "event", and use that to clear the clicked class. That seems more robust (and readable) to me.
p.s. You could use a data-* attribute (set with .attr() instead of .data()), and still have custom styling with CSS like
button[data-clicked='true'] { /* CSS attribute selectors are neat! */
...
}
but browser support is limited compared to using simple classes. | {
"domain": "codereview.stackexchange",
"id": 3850,
"tags": "javascript, jquery, html, html5, jquery-ui"
} |
Understanding Parseval's Theorem with Discrete Wavelet Transform | Question: I have difficulty understanding the results I get when implementing Parseval's theorem for the DWT in Python. I get good, matching results for the energy using the Fourier transform and the time series in Python:
import numpy as np

# Parseval theorem energy
def ParsevalTheorem(data):
energy_sum = 0
for i in range(len(data)):
energy_sum += abs(data[i])**2
return energy_sum
# dwt_data[0] => approximation component at final level, dwt_data[1:] => detail components
def DWTParseval(dwt_data):
details_sum = 0
for i in range(len(dwt_data)-1):
details_sum += ParsevalTheorem(dwt_data[i+1])
approx_sum = ParsevalTheorem(dwt_data[0])
final_sum = approx_sum + details_sum
return final_sum
fourierTransform = np.fft.fft(short_signal)
print("fourier energy: ", ParsevalTheorem(np.abs(fourierTransform))/len(fourierTransform))
print("Org energy: ", ParsevalTheorem(short_signal))
print("DWT energy: ", DWTParseval(app1)) # app1 is haar discrete wavelet transform using pywt.wavedec(data, "haar", level = 3)
Results:
fourier energy: 1305035.7546624008
Org energy: 1305035.7546624022
DWT energy: 1309077.6827128115
I've gathered the information on using Parseval Theorem from equation: Equation Link1
I have also encountered another equation to get the Energy but if I divide the Approximation sum with it's length it's in whole different scope than the original signal energy: Equation Link2
I somewhat understand Parseval's theorem when dealing with the Fourier transform, but I'm lost with these equations when dealing with the DWT.
PS: I know there is more Pythonic way to do the code but I intend to apply it in a different language also.
Answer: Parseval's identity and Plancherel's theorem finally boil down to orthogonality. When one decomposes data (with samples), via a scalar product, onto an orthogonal sequence (yielding coefficients), there exists a certain preservation (equality, up to a proportionality factor) of energy between samples and coefficients. There are some technical conditions, and in certain cases one only gets inequalities (cf. Bessel's inequality) or frame bounds.
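A quick numerical illustration of this energy preservation (my own sketch in plain Python, independent of the pywt code in the question): one level of the orthonormal Haar transform leaves the summed squared magnitudes unchanged.

```python
import math

def haar_step(x):
    """One level of the orthonormal Haar DWT (len(x) must be even)."""
    a = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    return a, d  # approximation and detail coefficients

def energy(seq):
    return sum(abs(v) ** 2 for v in seq)

x = [1.0, 3.0, -2.0, 0.5, 4.0, -1.0, 0.0, 2.0]
a, d = haar_step(x)
# Parseval: energy(x) equals energy(a) + energy(d) up to float rounding.
```

Iterating `haar_step` on the approximation coefficients reproduces the multi-level decomposition, and the identity holds at every level because each step is orthonormal.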
The equation for the discrete wavelet transform (DWT) might be incomplete, with respect to the indices. I for instance think that in the second term of the RHS, the scaling factor should be $N_j$, not $N_J$ (and this depends a bit on how discrete wavelets are implemented). Basically, an orthogonal wavelet transform projects data onto basis elements gathered in groups called subbands. Each wavelet subband comes from $N_j$ vectors, with an additional $N_J$ vectors for the approximation. And normally, the total number of vectors should be (about, honestly, this depends on signal extension) the number of samples $N$, in other words: $N=N_J +\sum_{j=1}^JN_j$. | {
"domain": "dsp.stackexchange",
"id": 9092,
"tags": "discrete-signals, python, wavelet, transform, parseval"
} |
What happens to galaxies when they die? | Question: Stars explode when they die and blast heavy elements into space. Do galaxies do the same thing?
Answer: Well, it would be useful to define what a 'dead' galaxy is. Probably the simplest definition would be a galaxy that is no longer producing new stars. We might also consider a galaxy that no longer produces significant light in the visual spectrum, or perhaps EMR across the entire spectrum.
Generally, there's unlikely to be a firm line between living and dead, and the process is not nearly as dramatic as for larger stars; it's more akin to watching a camp fire burn itself out. Star formation is largely dependent on available gases, but as more and more stars fuse those gases into heavier elements, there is less gas available for star formation. For your average-sized galaxy, this will eventually result in running out of gas. Eventually the galaxy will dim and go dark, a process purported to begin at the center of the galaxy, where star formation is heaviest, according to research based on Hubble images of giant galaxies (Tacchella, et al.). The matter ought to (mostly) all still be there and still orbiting the (presumed) SMBH, but with no energy coming from fusion, it's going to be a dark, cold, and barren place. Sounds dead to me.
There are some complicating factors. It's believed that encounters with nearby galaxies can affect available gases. The gravity from a larger galaxy could potentially strip the gases from a smaller one, a fatal blow for the smaller galaxy. Fortunately, it won't suffer much as the death will come (relatively) quickly. This process has been deemed 'strangulation' by a study published in Nature just days ago (Peng, et al.). Note that as the study indicates, the methods of death are proposed solutions, not conclusive understanding of the exact processes that result in a galaxy's death.
S. Tacchella, C. M. Carollo, A. Renzini, N. M. Förster Schreiber, P. Lang, S. Wuyts, G. Cresci, A. Dekel, R. Genzel, S. J. Lilly, C. Mancini, S. Newman, M. Onodera, A. Shapley, L. Tacconi, J. Woo, and G. Zamorani. Evidence for Mature Bulges and an Inside-out Quenching Phase 3 Billion Years After the Big Bang
Science 17 April 2015: 348 (6232), 314-317. [DOI:10.1126/science.1261094]
Y. Peng, R. Maiolino & R. Cochrane. Strangulation as the primary mechanism for shutting down star formation in galaxies Nature 521, 192–195 14 May 2015 [DOI:10.1038/nature14439]
Andrea Cattaneo. Astrophysics: The slow death of red galaxies Nature 521, 164–165 14 May 2015 [DOI:10.1038/521164a] | {
"domain": "astronomy.stackexchange",
"id": 895,
"tags": "galactic-dynamics"
} |
The sorting problem for partially ordered sets | Question: I have two questions about sorting for posets, one easy and one hard:
Easy: Suppose we have a set of objects and a partial order. Given any two objects such that $a \leq b$, we want to delete $b$ from the set. Is there a sub-quadratic time algorithm that can accomplish this, perhaps for some suitable data structure?
Hard: suppose we actually want to sort the list by putting it into some suitable data structure (such as a directed graph). Does there exist a nice data structure leading to a sub-quadratic time sorting?
The second is the natural generalization of the sorting problem to posets. The first is an easier variant that will suffice for this application.
Answer: Take a worst case scenario.
The set is built out of pairs of objects $a_i, b_i$ where $a_i < b_i$, and there exists no order relation between elements of different pairs (i.e. between $a_i$ or $b_i$ and $a_j$ or $b_j$ when $i\neq j$).
This means that to know whether you can delete an element you need to check it against every other element resulting in a quadratic general case.
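That pairwise check can be sketched as follows (my code, not the answer's). Here `leq` is any partial-order predicate; it may return False in both directions for incomparable pairs.

```python
# Keep only the elements with no smaller element in the set, using the
# naive pairwise check: worst case Theta(n^2) calls to leq, matching the
# argument above. Assumes the items are pairwise distinct.
def minimal_elements(items, leq):
    return [x for i, x in enumerate(items)
            if not any(j != i and leq(y, x)
                       for j, y in enumerate(items))]

# Example with divisibility as the partial order:
print(minimal_elements([2, 3, 4, 6, 5, 12], lambda a, b: b % a == 0))
# prints [2, 3, 5]: the multiples 4, 6 and 12 are deleted
```

On the adversarial pair construction above, `leq` returns False for every cross-pair query, so no comparison can be skipped.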
The best data structure to avoid that is one where each node holds a direct reference to the nodes it is greater than or equal to. But to build that you need a quadratic algorithm. | {
"domain": "cs.stackexchange",
"id": 9741,
"tags": "algorithms, complexity-theory, sorting, partial-order, order-theory"
} |
python directory import issue on ROS | Question:
hi all,
My ros_workspace directory is like this:
ros_workspace --> src--> package1(src, msg, script((folder1-->abc1.py), ( folder2-->abc2.py))):
I want to import "abc2.py" from "abc1.py"; how should I write that?
I tried "import package1.script.folder2.abc2" from "abc1.py", but that path is very long, and when I build with catkin_make under ros_workspace the error shows "ImportError: No module named scripts.folder2".
How should I import abc2.py from abc1.py? Please don't say "put abc1.py and abc2.py together"; this is just an example. I have too many Python files, so I need to organize them into several folders. Thanks
Originally posted by apanda on ROS Answers with karma: 1 on 2017-08-09
Post score: 0
Answer:
You would need to write setup.py and include catkin_python_setup() in CMakeLists as described in Section 1.2 here. This would add your packages to PYTHONPATH when you source the workspace.
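A sketch of what that setup.py could look like for the layout in the question; this is my reconstruction of the standard catkin pattern, so adjust the folder/package names to your tree (each exposed folder also needs an __init__.py):

```python
## package1/setup.py (my reconstruction of the standard catkin pattern;
## do not run it by hand -- catkin_make invokes it when CMakeLists.txt
## contains catkin_python_setup()).
from distutils.core import setup
from catkin_pkg.python_setup import generate_distutils_setup

setup_args = generate_distutils_setup(
    packages=['folder1', 'folder2'],   # each folder needs an __init__.py
    package_dir={'': 'script'},
)
setup(**setup_args)
```

After rebuilding and sourcing the workspace, abc1.py could then use `from folder2 import abc2`.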
Originally posted by naveedhd with karma: 161 on 2017-08-09
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 28562,
"tags": "ros, python, multiple"
} |
A service that returns paged Products with optional filtering | Question: In order to better understand what the code does, I've captured this screencast.
Basically, I have a products page and a user can filter products.
CODE
Here is the Page action method on my ProductController:
public ActionResult Page(SearchViewModel search, int page = 1)
{
var viewModel = _productService.GetPagedProducts(search, page);
PopulateDropDownSelectListsForSearchVm(viewModel.Search);
return View(viewModel);
}
That PopulateDropDownSelectListsForSearchVm is a helper method that sits within the controller.
Here is the implementation of the GetPagedProducts method on the IProductService interface:
public ProductPageViewModel GetPagedProducts(SearchViewModel search, int page = 1)
{
int totalNumberOfProducts;
if (HaveSearchTermsChanged(search)) page = 1;
var products = _context.Products.Where(p => p.IsDeleted == search.ShowDeleted).AsQueryable();
if (search.CategoryId != 0) products = products.Where(p => p.CategoryId == search.CategoryId);
if (search.BrandId != 0) products = products.Where(p => p.BrandId == search.BrandId);
if (search.QualityId != 0) products = products.Where(p => p.QualityId == search.QualityId);
if (!string.IsNullOrEmpty(search.SearchTerm))
{
products = products
.Where(p => p.Name == search.SearchTerm.Trim())
.OrderBy(p => p.Name);
totalNumberOfProducts = products.Count();
products = products.Skip(_recordsPerPage * (page - 1)).Take(_recordsPerPage);
}
else
{
products = products.OrderBy(p => p.Name);
totalNumberOfProducts = products.Count();
products = products.Skip(_recordsPerPage * (page - 1)).Take(_recordsPerPage);
}
var productPageVm = new ProductPageViewModel
{
Products = ProductViewModelFactory.BuildListOfProductViewModels(products.ToList()),
Pagination = new PaginationViewModel
{
CurrentPage = page,
RecordsPerPage = _recordsPerPage,
TotalRecords = totalNumberOfProducts
},
Search = search
};
TrackCurrentSearchTerm(productPageVm);
return productPageVm;
}
Readability is really important to me, and right now it just doesn't feel readable. What can I do to improve the overall code, and have it be very readable?
Answer: I would suggest moving the pagination settings to your SearchModel.
Then you can have a utility/extension method (or whatever you wish) with this signature:
public IQueryable<Product> ApplyFilter(IQueryable<Product> productQuery, SearchModel model)
and put all the filtering logic in there, so you can reuse it in other methods and simplify the readability of your main method.
Finally, I would recommend rewriting your if statements with braces, like this:
if (search.CategoryId != 0)
{
products = products.Where(p => p.CategoryId == search.CategoryId);
} | {
"domain": "codereview.stackexchange",
"id": 28183,
"tags": "c#, asp.net-mvc"
} |
Relation between central density and mass of ZAMS stars | Question: Let's compare ZAMS stars with different mass and equal composition. Higher mass might produce higher central density (and pressure) because more gas is pulled inwards and compresses the center. But this seems to be simplistic: very low mass stars have an electron degenerate core and are fully convective, which isn't true for heavy stars. The temperature profile in a massive star isn't the same as in the Sun. Assuming constant density throughout the star gives an equation for the central density depending on the mass, but the assumption isn't realistic. So it isn't obvious how the central density depends on the stellar mass.
My questions:
(1) how does the central density depend on the total mass for ZAMS stars with the same composition? Is the relation monotonic? A diagram would be welcome.
(2) is the relation between central pressure and total mass the same as for the central density?
Answer: The central density decreases with increasing mass; from about 600 g/cm$^3$ at $0.1 M_{\odot}$, to about 10 g/cm$^3$ at $7M_{\odot}$.
The central gas pressure also decreases with increasing mass; from about $10^{16}$ Pa at $0.1 M_{\odot}$ to $10^{14}$ Pa at $7M_{\odot}$.
The relationships are roughly monotonic. Both central pressure and density then increase with age on the main sequence.
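To make the rough trend concrete, here is a sketch (my own) that puts a single power law through the two quoted central-density values; real models such as the Siess et al. (2000) tables deviate from a pure power law, so treat it as illustrative only.

```python
import math

# Anchor points quoted above (solar masses, g/cm^3) -- rough values.
M_LO, RHO_LO = 0.1, 600.0
M_HI, RHO_HI = 7.0, 10.0
SLOPE = math.log(RHO_HI / RHO_LO) / math.log(M_HI / M_LO)  # about -0.96

def central_density(mass_msun):
    """Very rough ZAMS central density in g/cm^3 (power-law interpolation)."""
    return RHO_LO * (mass_msun / M_LO) ** SLOPE
```

So over this range the sketch gives roughly $\rho_c \propto M^{-1}$, i.e. a monotonically decreasing central density.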
You can investigate and produce your own plots using the tables generated here http://164.15.254.82/~siess/pmwiki/pmwiki.php?n=WWWTools.Isochrones which are the Siess et al. (2000) models. Other models will give slightly different numerical results.
Note that it's quite tricky to pin this information down. The ZAMS is normally defined as the state where the star is mostly powered by hydrogen fusion. This occurs at different ages for different masses, so you can't just use an isochrone. The link above uses this definition, but I suspect the numerical resolution is not that great and may account for some "lumpiness" in the relation. There may be a little kink at around $1.5M_{\odot}$ where there is a transition from the pp chain to the CNO cycle as the dominant energy generation mechanism. Note also that my estimate for the central pressure in low-mass stars will be a bit off because I used the perfect gas law, but below $0.3M_{\odot}$ electron degeneracy becomes increasingly important. | {
"domain": "physics.stackexchange",
"id": 66623,
"tags": "astrophysics"
} |
How to choose the right lens for concentrating an IR signal? | Question: I am looking for the right acrylic lens. Since I will be buying at least 1000 pieces, I don't want to make any mistake.
I want to concentrate the signal from an IR LED (housed in a 1 cm diameter tube) into a spot 20 cm in diameter. From what I know, IR LEDs disperse the beam too widely, hence the need for a lens. What shape of lens should I buy? Are there any online resources that could help me?
Wall is 5-10m away and LED wavelength is 970nm.
Answer: A convex lens can act to magnify the image of the LED onto the wall - which is what you want to do. The magnification depends on the distance of the lens to the wall and the focal length of the lens.
There are several things to consider in this:
What fraction of the light from the LED are you trying to collect: this helps determine how large the lens should be (diameter)
Does the LED already have a lens on it (it seems that it does) - if so what does that mean for the optical path
I would highly recommend that you experiment with a visible light LED first - you will find that at a certain distance the light from the LED is already "focused" - but exactly how it will be focused depends on the construction of the LED (often the LED is specified with a "view angle" which will tell you if the light is being concentrated into a narrow cone or a wide one. It makes a huge difference in answering this question). It is actually possible to get an "image" of the silicon die at the heart of a red LED (with clear lens) on a piece of paper for certain LEDs - basically almost exactly what you are asking for.
If we ignore the lens on the front of the LED for a moment (let's assume you have a bare die), then if the light emitting area is 1 mm^2 and you want to make it 20 cm in diameter, you need a 200x magnification.
In general, magnification is given by the ratio of distances on either side of the lens - if you are 1 m from the screen with your lens, then you would need the LED to be 5 mm from the lens since
$$M = \frac{d_{screen}}{d_{LED}}\\
d_{LED}=\frac{d_{screen}}{M}$$
Then you compute the focal length of the lens you need as
$$\frac{1}{f} = \frac{1}{d_{LED}} + \frac{1}{d_{screen}}$$
or
$$f_{lens}=\frac{d_{screen}}{M+1}$$
meaning that the focal length would be just a tiny bit less than 5 mm. If you get this even slightly wrong, then the object will either be out of focus, or the magnification will be significantly off. I suspect you can cope with the former better than the latter.
If you can assume your LED-with-built-in-lens behaves like a 6.35 mm diameter uniform disk, then the magnification you need is less - about 30x. In that case, for a screen distance of 1 m you need the LED at 33 mm and the focal length of the lens needs to be about 32 mm.
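The magnification and focal-length formulas above can be bundled into a small helper; this is my own sketch under the thin-lens approximation, ignoring the LED's built-in optics.

```python
def lens_setup(d_screen, magnification):
    """Return (d_source, focal_length) for a thin lens imaging a source
    onto a screen at distance d_screen with the given magnification,
    using M = d_screen / d_source and 1/f = 1/d_source + 1/d_screen."""
    d_source = d_screen / magnification
    focal_length = 1.0 / (1.0 / d_source + 1.0 / d_screen)
    return d_source, focal_length

# The worked 200x example, with the screen 1 m (1000 mm) away:
d_led, f = lens_setup(1000.0, 200.0)  # d_led = 5 mm, f just under 5 mm
```

The same call with `magnification=30.0` reproduces the second worked case: the source sits at about 33 mm and the focal length comes out near 32 mm.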
It doesn't really matter for this application whether the lens is plano-convex, or biconvex; I would recommend plano-convex, with the curved surface towards the LED. That is probably the cheapest solution.
But buy just one, and make an adjustable setup. Use a web camera to "see" the IR light from the LED, and play with the settings until you get it right.
Buying 1000 of the wrong thing is not fun - but it's not possible with the information given to give you an authoritative answer on the best solution.
Happy experimenting!
UPDATE
Since you have now specified the distance to the wall, you will probably have to stop down the aperture of the LED in order to get a sufficiently small spot - and in the process, lose a lot of the light output.
The key concept here is that when a light source is of a finite size, you can only focus it into a certain size spot at a certain distance. 7 m distance and 20 cm spot means that the angle subtended by the spot at the lens is only 1.6°. This in turn means that the LED must subtend the same angle at the lens - so if the LED is 1 mm in diameter, it has to be 35 mm away (namely $7000 * \frac{1}{200}$). If your LED already has a built in lens and an apparent aperture of 6 mm (the typical diameter of a LED with a lens) then you would need your second lens to be 6x35 = 210 mm away. If your LED's built in lens has a focusing angle of 8° (which is a fairly typical value), and you're trying to get down to 1.6°, you cut the angle by a factor 5x, and the area (fraction of power) by 25x. That's a big power loss...
Without knowing what exactly you are trying to achieve I don't know what to recommend - except "think more about what you are trying to achieve". To project a small spot at a long distance you need a well collimated source - and unless it starts out awfully small, you can only collimate by reducing intensity. Keep that in mind. | {
"domain": "physics.stackexchange",
"id": 17712,
"tags": "optics, lenses, light-emitting-diodes"
} |
Is this image captured by the Hubble Telescope an original image? | Question:
Is this image an original picture which has not been photoshopped? I see a huge number of Hubble Telescope images on Google and I'm curious whether they are real images or not. If not, where can I find the real images captured by the telescope which are not photoshopped?
Answer: All images provided from large telescopes are a rendering of the data for viewing pleasure. Usually sensors are sensitive in a wide range of wave length, even outside what humans can perceive and color is added by using filters. Hubble uses infrared filters and the so-called Hubble palette (false color).
So, if you mean, do these pictures show what a human observer will see? No, they don't. They are a visualisation of the data collected.
All things observed at low light conditions are black and white, as humans lose color vision at low light. | {
"domain": "astronomy.stackexchange",
"id": 2354,
"tags": "hubble-telescope"
} |
Hamiltonian constraint in spherical Friedmann cosmology | Question: I'm taking a GR course, in which the instructor discussed the 'Hamiltonian constraint' of spherical Friedmann cosmology action. I'm not quite clear about the definition of 'Hamiltonian constraint' here. I searched internet but cannot find a satisfying answer. Could anybody help me a bit?
Answer: The Hamiltonian constraint comes from but one component of Einstein's equations.
$$\underline G(a) = 8 \pi \underline T(a)$$
The Einstein tensor $\underline G(a)$ deals with the metric and its derivatives--terms involving curvature. The stress-energy tensor $\underline T(a)$ describes the matter and energy in a point of space. This equation relating the two contains the great physical content of Einstein's theory--it is the relationship between matter/energy and spacetime curvature.
The Hamiltonian constraint comes from looking at the time-time component of this equation. That is
$$\text{Hamiltonian constraint: } \underline G(e_t) \cdot e^t = 8 \pi \underline T(e_t) \cdot e^t$$
This is referred to as a constraint because it contains no second time derivatives of the metric; only spatial derivatives (and, in the Hamiltonian formulation, the conjugate momenta) are buried or hidden within $\underline G(e_t)$. It is called Hamiltonian because it is closely related to energy and the overall Hamiltonian formulation of GR.
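As a concrete illustration (my addition, in units $G=c=1$): for the closed FRW metric $ds^2=-dt^2+a(t)^2\,d\Omega_3^2$ with energy density $\rho$, the time-time Einstein equation is the Friedmann equation
$$\frac{\dot a^2}{a^2}+\frac{1}{a^2}=\frac{8\pi}{3}\rho$$
It contains no $\ddot a$ (and in Hamiltonian variables $\dot a$ is traded for the momentum conjugate to $a$), so it does not evolve the scale factor; it restricts which initial data $(a,\dot a,\rho)$ are allowed, while the second-order evolution of $a$ comes from the remaining components of Einstein's equations.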
The Hamiltonian constraint is important because any description of a system at some initial time must obey the constraint to be physically possible, and it should do so at all times. Hence, ensuring that some initial data obeys the constraint is a key consideration when solving Einstein's equations using, for example, numerical simulations. | {
"domain": "physics.stackexchange",
"id": 6406,
"tags": "general-relativity, cosmology, hamiltonian-formalism, constrained-dynamics"
} |
Identifying the origin of replication of an unannotated *E. coli* plasmid | Question: I have attempted a few searches for a list of origins of replication for plasmids in E. coli, but I was only able to find a list of origins, but not their individual sequences. The available plasmid maps are often extremely vague on where exactly do the origins start or end, and in any case it would be extremely tedious to collate a list of origins in order to determine the exact origin of replication of a plasmid.
Is there a database of origins of replication available anywhere that would allow one to perform BLAST searches on the plasmid sequence in order to obtain the compatibility group of the plasmid, in cases where the plasmid's origin of replication is not specified?
Answer: PlasMapper is quite good at recognizing common origins of replication. You will have to look up compatibility yourself. It is what I usually use when confronted with unannotated plasmids. | {
"domain": "biology.stackexchange",
"id": 4088,
"tags": "bioinformatics, plasmids"
} |
What are non-held-out data or non-held-out classes? | Question: I'm Spanish and I don't understand the meaning of "non-held-out". I have tried Google Translator and online dictionaries like Longman but I can't find a suitable translation for this term.
You can find these term using this Google Search, and in articles like this one:
"computing SVD on the non-held-out data" from here.
"The training set consists all the images and annotations containing non-held-out classes while held-out classes are masked as background during the training" from Few-Shot Semantic Segmentation with Prototype Learning.
"A cross-validation procedure is that non held out data (meaning after holding out the test set) is splitted in k folds/sets" from here.
What is non-held-out data and held-out data or classes?
Answer: Held-out simply means "not included" particularly in the sense of:
This part of the data was not included in this specific training run.
Depending on the context of all of these text non-held-out data/classes means the data that actually was included in a particular modeling exercise.
Consider this excerpt from your first example:
For instance, Owen and Perry (2009) show a method for holding out data, computing SVD on the non-held-out data, and selecting k so as to minimize the reconstruction error between the held-out data and its SVD approximation.
It actually means:
For instance, Owen and Perry (2009) show a method for excluding data, computing SVD on the remaining data, and selecting k so as to minimize the reconstruction error between the excluded data and its SVD approximation.
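For illustration, a minimal hold-out split might look like this (my own sketch, not from any of the cited sources):

```python
import random

def hold_out_split(data, held_out_fraction=0.2, seed=0):
    """Randomly partition `data` into (non_held_out, held_out) lists."""
    shuffled = list(data)
    random.Random(seed).shuffle(shuffled)
    n_held_out = int(len(shuffled) * held_out_fraction)
    return shuffled[n_held_out:], shuffled[:n_held_out]

non_held_out, held_out = hold_out_split(range(10))
# Fit the model (e.g. compute the SVD) on non_held_out only,
# then measure reconstruction error on held_out.
```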
So it simply talks about a particular way of train-test-validation splitting the data. | {
"domain": "ai.stackexchange",
"id": 1973,
"tags": "machine-learning, deep-learning, terminology, cross-validation"
} |
ROS Answers SE migration: gscam error | Question:
hi ;
when I type this command in the terminal """ rosrun gscam gscam"""
the following error appears """" [ERROR] [1399638704.450296280]: Unable to open camera calibration file [../camera_parameters.txt] ,,,, [ERROR] [1399638704.450423311]: No camera_parameters.txt file found. Use default file if no other is available.
"""
How can I solve that?
Originally posted by smart engineer on ROS Answers with karma: 11 on 2014-05-09
Post score: 0
Original comments
Comment by BennyRe on 2014-05-09:
Is the """" your error message?
Comment by smart engineer on 2014-05-09:
@BennyRe ::
[ERROR] [1399638704.450296280]: Unable to open camera calibration file [../camera_parameters.txt] ,,,,
[ERROR] [1399638704.450423311]: No camera_parameters.txt file found. Use default file if no other is available
Comment by BennyRe on 2014-05-09:
Is the path to the camera parameters file correct?
Comment by adreno on 2014-05-09:
Check if you have camera_parameters.txt file in your gscam package directory...
Answer:
gscam needs the camera_parameters.txt calibration file in the gscam package directory.
For it to work well for image processing, you should calibrate the camera that you are using.
The camera_calibration package will create the right calibration file for you, which you can place in the gscam package directory.
Enjoy!
Originally posted by adreno with karma: 253 on 2014-05-09
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 17894,
"tags": "ros, ros-fuerte"
} |
Parachute jumping with air resistance | Question: I need to write down a model for a man parachuting from a plane at a height $h$ above the ground, having a velocity $v_{\text{plane}}$. I've had a look at many models online and they all start saying that $F=ma=-kv-mg$, where $k$ is the air-resistance coefficient before the deployment of the parachute, $v$ is the velocity, $m$ is the mass of the man and $g$ is the gravitational acceleration. However, I think that the trajectory should also be in three dimensions, i.e. $F=(F_x,F_y,F_z)$, because with the jump from the plane, the man should have an initial velocity coming from leaning out of the plane, and therefore there should also be some air resistance in that direction.
Is that correct or can we neglect it?
Furthermore, I don't understand why $ma=-kv-mg$; I mean, these two forces should oppose each other, shouldn't they? The air resistance should be in the opposite direction compared to the acceleration due to gravity.
I really hope you can help me with this!
Answer:
The air resistance should be in the opposite direction compared to the acceleration due to the gravity. I really hope you can help me with this!
No. The drag force simply points in the opposite direction of the velocity vector.
Now consider the following simplified model:
Assume the dropping plane was flying horizontally and parallel to the $x$-axis, at speed $v_0$; then at the drop point ($t=0$) the parachutist's velocity has components:
$$v_x=v_0$$
$$v_y=0$$
As the chute doesn't deploy immediately, in the $y$ direction two forces act: gravity and air drag, so with Newton we can write:
$$m\frac{dv_y}{dt}=mg-\frac12 \rho C_{y,1}A_{y,1}v_{y}^2$$
Set: $\frac12 \rho C_{y,1}A_{y,1}=\alpha_1$
Then, integrating over the distance fallen $y$ (using the chain rule $a=v_y\,\frac{dv_y}{dy}$), between $y=0,\,v_y=0$ and $y,\,v_y$:
$$\large{v_y(y)=\sqrt{\frac{mg}{\alpha_1}\big(1-e^{-\frac{2\alpha_1 y}{m}}\big)}}$$
Assume the chute opens once a distance $y_\tau$ has been fallen; then for the subsequent fall (measuring $y$ from the deployment point, with $\alpha_2=\frac12 \rho C_{y,2}A_{y,2}$ for the deployed chute) we can derive also:
$$\large{v_y(y)=\sqrt{\frac{1}{\alpha_2}\big(mg-\big(mg-\alpha_2v_{y,\tau}^2\big)e^{-\frac{2\alpha_2 y}{m}}\big)}}$$
With:
$$\large{v_{y,\tau}=\sqrt{\frac{mg}{\alpha_1}\big(1-e^{-\frac{2\alpha_1 y_\tau}{m}}\big)}}$$
The parachute also experiences drag in the $x$-direction. Prior to deployment of the chute (and assuming no side wind):
$$m\frac{dv_x}{dt}=-\frac12 \rho C_{x,1}A_{x,1}v_{x}^2$$
Or:
$$\frac{dv_x}{dt}=-\frac{\alpha_3}{m}v_x^2$$
with $\frac12 \rho C_{x,1}A_{x,1}=\alpha_3$
On integrating between $t=0, v_x=v_0$ and $t, v_x$
$$v_x(t)=\frac{v_0}{1+v_0\alpha_3t/m}$$
And for $t>\tau$, with $\alpha_4$ the corresponding drag constant of the deployed chute and $t$ measured from deployment:
$$v_x(t)=\frac{v_{x,\tau}}{1+v_{x,\tau}\alpha_4t/m}$$
where:
$$v_{x,\tau}=\frac{v_0}{1+v_0\alpha_3\tau/m}$$ | {
"domain": "physics.stackexchange",
"id": 28759,
"tags": "newtonian-mechanics, drag, free-fall"
} |
Can the Sun capture dark matter gravitationally? | Question: I think my title sums it up. Given that we think the dark matter is pseudo-spherically distributed and orbits in the Galactic potential with everything else, then I assume that its speed with respect to the Sun will have a distribution with an rms of a few 100 km/s.
But the escape velocity at the solar surface is 600 km/s. So does that mean that, even though sparse, the Sun will trap dark matter particles as it moves around the Galaxy? Will it accumulate a cloud of dark matter particles by simple Bondi-Hoyle accretion and, in the absence of any inelastic interactions, have a swarm of dark matter particles orbiting in and around it with a much higher concentration than the usual interstellar density? If so, what density would that be?
EDIT: My initial premise appears to be ill-founded since a dark matter particle falling into the Sun's gravity well will gain enough KE to escape again. However, will there still be a gravitational focusing effect such that the DM density will be higher in the Sun?
Answer: Well, like anything else that comes in from distant parts, it's going out again without either a three-body momentum transfer or some kind of non-gravitational interaction.
If you assume a weakly interacting form of dark matter, then I think the answer has to be yes, but the rate is presumably throttled by the weak interaction cross-section of your WIMPs. | {
"domain": "physics.stackexchange",
"id": 20499,
"tags": "astrophysics, sun, dark-matter"
} |
relativistic acceleration equation | Question: A Starship is going to accelerate from 0 to some final four-velocity, but it cannot accelerate faster than $g_M$, otherwise it will crush the astronauts.
What is the appropriate equation to constrain the motion so the astronauts never feel a gravity higher than $g_M$? For a moment I thought the appropriate relationship was
$$ \left\lvert \frac{d u}{d \tau}\right\rvert \le g_M $$
where the absolute value is of the spatial component of the four-acceleration
But going down this route i get the following:
$$ \lvert u_F \rvert = \int_0^{\tau_F} \left\lvert \frac{d u}{d \tau} \right\rvert\,d \tau \le g_M \int_0^{\tau_F} d \tau = g_M \tau_F $$
where $u_F$ is the spatial component of the final velocity, and $\tau_F$ is the proper time it takes to reach the final velocity. The above gives me:
$$ \tau_F = \frac{ \lvert u_F \rvert }{ g_M } $$
I'm making some silly mistake, because there are no gamma factors, and I'm getting a finite proper time to reach $\lvert u_F \rvert = c$
Answer: Your mistake is that you use the absolute value "of the spatial components" (your words) of the velocity only. Picking spatial components only is clearly not a Lorentz-covariant procedure, so it cannot calculate the invariant "feelings of the astronauts".
Instead, the right condition is given by the same inequality but $|d u^\mu / d \tau|$ is the length of the four-vector one obtains by differentiating the four-velocity $u^\mu$, where $u_\mu u^\mu = 1$, over the proper time $\tau$. The vector $d u^\mu / d \tau$ is spacelike and perpendicular (according to the Lorentzian metric) to the velocity vector $u^\mu$ itself; but this derivative isn't a purely spatial vector in any inertial system. In a frame in which the spatial components of $u^\mu$ are already nonzero, $d u^\mu / d \tau$ contains a nonzero time component, too.
When you calculate it correctly, the proper time needed to achieve the speed of light is infinite.
The easiest way to calculate it is one that assumes some knowledge of the Lorentzian geometry and how it's analogous to the Euclidean geometry. A uniformly accelerating object in the Euclidean spacetime would produce a circular world line. In the real, Minkowski space, the world line is a hyperbola. The coordinates after proper time $\tau$ may be written in analogy with sines and cosines but they're hyperbolic ones:
$$ t = \sinh (\tau/\tau_0),\quad x = \cosh(\tau/\tau_0) $$
Here, $\tau_0$ is a constant depending on the acceleration. Consequently, the speed after proper time $\tau$ is simply the ratio,
$$ v = \tanh (\tau/\tau_0) $$
For a small $\tau$, this gets reduced to $\tau/\tau_0$ in the limit and the $\tau$-derivative $1/\tau_0$ should be the (maximum) acceleration $g_M$ so $\tau_0=1/g_M$:
$$ v = \tanh (\tau g_M) $$
in the $c=1$ units. You may invert it:
$$ \tau = \frac{c}{g_M} {\rm arctanh} (v/c) $$
where I restored the powers of $c$ for your convenience. Note that arctanh of one is infinity. For a small $v/c$, one uses ${\rm arctanh}\, x\approx x$ and the right formula reduces to your nonrelativistic formula from the original question. | {
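The relation can be sanity-checked numerically; a small sketch (constants and function names are mine) for a ship holding a constant 1 g proper acceleration:

```python
import math

C = 299_792_458.0        # speed of light, m/s
G_M = 9.81               # maximum proper acceleration, m/s^2

def proper_time_to_speed(v):
    """Proper time (s) to reach coordinate speed v under constant proper acceleration G_M."""
    return (C / G_M) * math.atanh(v / C)

def speed_after_proper_time(tau):
    """Inverse relation: v = c * tanh(g * tau / c)."""
    return C * math.tanh(G_M * tau / C)

tau = proper_time_to_speed(0.9 * C)
print(tau / (3600 * 24 * 365.25), "years")   # about 1.4 years of ship time
assert math.isclose(speed_after_proper_time(tau), 0.9 * C, rel_tol=1e-12)

# Nonrelativistic limit: for v << c, tau reduces to v / g as in the question
assert math.isclose(proper_time_to_speed(100.0), 100.0 / G_M, rel_tol=1e-6)
```

As $v \to c$, `math.atanh(v / C)` diverges, reproducing the infinite proper time discussed above.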
"domain": "physics.stackexchange",
"id": 4314,
"tags": "homework-and-exercises, special-relativity, acceleration"
} |
Are ladder operators ever extensively used in any model of quantum computation? | Question: Computer scientists and others who are interested in learning more about quantum computation might be exposed, or re-exposed, to various concepts and classes of matrices from linear algebra. For example because of familiarity with truth-tables, (classical) reversible operators are often introduced in conjunction with unitary matrices.
Within quantum mechanics, there are a number of classes of matrices that are commonly used:
As mentioned, unitary matrices are matrices $U$ such that $U^\dagger U = UU^\dagger = I$. These unitary matrices form the basis for discussion within the gate model of computation.
Furthermore hermitian matrices are matrices $A$ such that $A = A^\dagger$. These matrices form the basis for, among other things, adiabatic computation.
Additionally there are matrices, such as the creation $a^\dagger$, annihilation $a$, and number $N=a^\dagger a$ matrices, commonly referred to as ladder operators.
The familiar Pauli matrices $X$, $Y$, and $Z$ are both hermitian and unitary, while the creation and annihilation matrices are neither.
Nonetheless reviewing Feynman's 1985 paper "Quantum Mechanical Computers", it appears that Feynman envisioned programming a quantum computer with an algebra or a calculus of sorts with sums and products of these ladder operators; indeed, he considered three qubits $a,b,c$ with $a$ and $b$ controlling the negation of $c$, and wrote a Toffoli gate explicitly as:
$$\mathsf{CCNOT}=1+a^\dagger ab^\dagger b(c+c^\dagger-1).$$
But other than Feynman's paper, I'm not aware of any extensive use of these ladder operators in any other model of quantum computation.
Is there any such model of quantum computation that focuses on ladder operators in lieu of unitary or hermitian operators? Did Feynman's "calculus" in his 1985 paper ever gain traction anywhere?
This is also partly motivated because I've read that Sophus Lie sort of envisioned infinitesimal generators of what we now call a Lie algebra as actual elements of the Lie group; it took others like Killing and Cartan to revise and formalize this intuition, and to put the Lie group - Lie algebra correspondence on more solid footing in the sense that the Lie algebra "linearizes" or is "tangent to" the Lie group.
I think perhaps there may be a similar analogy between the works of Feynman and, for example, Lloyd and Kitaev and Aharonov and others who came after, in the sense that these ladder operators somehow "linearize" or are "tangent to" the Hamiltonian, which enables Hamiltonian simulation. I'm trying to understand more of Feynman's intuition regarding these operators, but I might be making this analogy in hindsight.
Answer: Looking over Feynman's paper, the ladder operators he uses are spin ladder operators, what we would in modern language write as $\sigma_+$ and $\sigma_-$. (This is in contrast to fermionic or bosonic annihilation or creation operators, I'm assuming you are not asking about those.) Note that the operator that you've written down for the CCNOT, in this language, is a unitary matrix.
I guess what you are asking about is whether these operators, as used in a Lie algebraic sense, are used in quantum computing. In some sense they are always used because the $\sigma_\pm$ operators and $\sigma_+ \sigma_-$ are a basis for 2x2 matrices. So when we use $X$, $Y$, and $Z$ Pauli operators we could always express them in this basis.
But maybe you are after whether the Lie algebraic point of view is used. One place this occurs is in decoherence-free (noiseless) subspaces and subsystems. A good reference is this review article. In particular, looking at the Lie algebra of the system part of a system-bath interaction can give you information about subspaces that are protected from decoherence. For example if your bath couples to all your qubits in exactly the same way, you end up seeing terms like $\sum_i (\sigma_+)_i$ and $\sum_i (\sigma_-)_i$ which are best analyzed by looking at the Lie algebra.
By the way, one of the interesting parts about Feynman's paper is that he constructs a "clock state", this was later used by Kitaev and even later by others to show a lot of interesting results in quantum complexity theory and adiabatic quantum computing.
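Feynman's operator expression for the Toffoli gate is easy to verify numerically with 2×2 ladder matrices; a quick sketch (numpy; variable names are mine), where $n = a^\dagger a$ projects onto $|1\rangle$ and $c + c^\dagger$ acts as a bit flip on the target:

```python
import numpy as np

a = np.array([[0, 1], [0, 0]])    # annihilation (sigma_-): a|1> = |0>
ad = a.T                          # creation (sigma_+)
n = ad @ a                        # number operator, diag(0, 1)
I2 = np.eye(2)

# CCNOT = 1 + a†a b†b (c + c† - 1), with qubit ordering |a b c>
ccnot = np.eye(8) + np.kron(n, np.kron(n, (a + ad) - I2))

# Compare with the standard Toffoli permutation matrix
expected = np.eye(8)
expected[[6, 7]] = expected[[7, 6]]   # swap |110> and |111>
assert np.array_equal(ccnot, expected)
print(ccnot.astype(int))
```

The number operators act as projectors onto the control qubits being $|1\rangle$, so the $(c + c^\dagger - 1)$ correction only fires in the $|11\rangle$ control block, exactly as a Toffoli should.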
"domain": "quantumcomputing.stackexchange",
"id": 3872,
"tags": "computational-models, history"
} |
Why do plasma filaments of plasma globes move outwards towards the glass? | Question: In plasma bowls, the plasma filaments made of ionized inert gases extend from the central 'mini Tesla coil', all the way to the glass outer sphere. What makes them want to go to the glass? Why don't they simply form some other random shapes that don't touch the glass?
Answer: So this is actually a question which is not very well-studied! Princeton appears to have put out a paper about it in 2010 (PDF) with some concrete answers gleaned from watching these globes with high-speed cameras, so that's something nice; in 2013 the lead author also did a teleseminar with some slides (PDF).
So the basic physics is that the outside world is electrically neutral (zero voltage) and the inside bulb is being rapidly varied from a very high positive voltage to a very high negative voltage, and electrons like to move from negative voltages to positive ones. So as the central core goes negative, the electrons want to fly outward to the glass bulb; as the central core goes positive, the electrons want to fly back. But, the gas in the ball is not able to conduct these electrons intrinsically -- gases are normally electrical insulators. But pretty much every electrical insulator becomes a conductor if you put enough voltage across it -- what happens is that the atoms of the insulator eventually start getting torn apart by the desire for the electrons to go one way and the nuclei to go the other, and we enter this new state of matter, "plasma", where some electrons get shared among multiple nuclei, able to conduct electricity over them. This is called "dielectric breakdown." The dielectric breakdown goes from inside to outside basically because the voltage field is approximately spherically symmetric -- the dominant force on the electrons is either being pulled straight towards the center or being pushed straight away from it, so when the plasma ionizes the electrons are being pushed or pulled directly away from the center, and that's what causes the arcs to go to the outside.
One of the simple results gained by looking at these filaments with a high-speed camera is that actually these plasma filaments are only visible for about 15% of each voltage cycle and are not initially visible at all. The plasma starts off being a sort of diffuse "glow" around the sphere, however tiny irregularities in which parts are getting more or less ionized turn out to be slightly better or worse conductors when the plasma is switched on/off.
So what you're seeing is what we'd call in physics "hysteresis", an effect where the history of a system impacts what it does after. The gas inside the ball has these certain pathways which have been -- at first by random chance -- slightly better at conducting electrons towards the outside than others. When the plasma reforms, those gas molecules get preferentially ripped apart first, and this is why filaments persist over thousands of cycles and can be seen by the naked eye.
So the plasma globe is actually showing you "tracks where electrons have recently been moving" and these tracks look like solid entities but actually they're flickering faster than your eye can see. The electrons are moving outward and inward to the center based on these very fast back-and-forth forces on them, and that's why these tracks go from the center to the edge. | {
"domain": "physics.stackexchange",
"id": 42854,
"tags": "electricity, plasma-physics"
} |
link library from devel/lib | Question:
Hi,
I'm pretty sure that this question has been answered already, but for some reason I'm not able to find the right solution, and currently I am confused by the results I read. So I hope you can point me in the right direction.
I have created a library in one package, using the install() command in the CMakeLists.txt. This library does appear in my workspace's devel/lib as "libMYLIB.so". I want to use this library within another package, but I'm not exactly sure how. I tried it with just using
target_link_libraries(${PROJECT_NAME}_node
${catkin_LIBRARIES}
MYLIB )
But i get the error
/usr/bin/ld: cannot find -lMYLIB
collect2: error: ld returned 1 exit status
I tried using
link_directories( ${CATKIN_PACKAGE_LIB_DESTINATION} )
, since this directory was used to install the library, but to no effect. What should I do, or where can I find the definite answer?
Originally posted by Jan Fox on ROS Answers with karma: 13 on 2017-11-20
Post score: 0
Answer:
http://docs.ros.org/kinetic/api/catkin/html/howto/format2/building_libraries.html
In the library building package be sure to have your library listed as something other projects can use:
catkin_package(
...
LIBRARIES MYLIB)
Put the name of the package you built the library in into a <build_depend> in package.xml and put it into the list of packages in find_package() in your CMakeLists.txt. The actual name of the library doesn't matter (just the name of the package that built it), it will get included in ${catkin_LIBRARIES}.
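Concretely, the consuming package's CMakeLists.txt then looks something like this (a sketch — package and target names here are placeholders, not from the question):

```cmake
# In the package that uses the library:
find_package(catkin REQUIRED COMPONENTS
  roscpp
  my_library_pkg      # the package that exports MYLIB via catkin_package(LIBRARIES ...)
)

catkin_package()

include_directories(${catkin_INCLUDE_DIRS})

add_executable(${PROJECT_NAME}_node src/node.cpp)

# MYLIB is pulled in through ${catkin_LIBRARIES}; no link_directories() needed
target_link_libraries(${PROJECT_NAME}_node ${catkin_LIBRARIES})
```
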
Originally posted by lucasw with karma: 8729 on 2017-11-20
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Jan Fox on 2017-11-20:
That was easier than expected. Thanks. I was sure I tried this, but maybe I made a spelling error. | {
"domain": "robotics.stackexchange",
"id": 29411,
"tags": "catkin"
} |
How do you calculate spectral flatness from an FFT? | Question: Ok, the spectral flatness (also called Wiener entropy) is defined as the ratio of the geometric mean of a spectrum to its arithmetic mean.
Wikipedia and other references say the power spectrum. Isn't that the square of the Fourier transform? The FFT produces an "amplitude spectrum" and then you square that to get a "power spectrum"?
Basically what I want to know is, if spectrum = abs(fft(signal)), which of these is correct?
spectral_flatness = gmean(spectrum)/mean(spectrum)
spectral_flatness = gmean(spectrum^2)/mean(spectrum^2)
Wikipedia's definition seems to use the magnitude directly:
$$
\mathrm{Flatness} = \frac{\sqrt[N]{\prod_{n=0}^{N-1}x(n)}}{\frac{\sum_{n=0}^{N-1}x(n)}{N}} = \frac{\exp\left(\frac{1}{N}\sum_{n=0}^{N-1} \ln x(n)\right)}{\frac{1}{N} \sum_{n=0}^{N-1}x(n)}
$$
where $x(n)$ represents the magnitude of bin number $n$.
SciPy docs define power spectrum as:
When the input a is a time-domain signal and A = fft(a), np.abs(A) is its amplitude spectrum and np.abs(A)**2 is its power spectrum.
This source agrees about the definition of "power spectrum" and calls it $S_{f}(\omega)$:
We can define $F_{T}(\omega) $ which is the fourier transform of the signal in period T, and define the power spectrum as the following:
$\displaystyle S_{f}(\omega) = \lim_{T \rightarrow \infty} \frac{1}{T}{\mid F_{T}(\omega)\mid}^2.$
This source defines Wiener entropy in terms of $S(f)$.
But I don't see the squaring in equations like this, which seems to be based on the magnitude spectrum:
$$
S_{flatness} = \frac{\exp\left(\frac{1}{N} \sum_k \log (a_k)\right)}{\frac{1}{N} \sum_k a_k}
$$
Likewise, another source defines the spectral flatness in terms of the power spectrum, but then uses the magnitude of the FFT bins directly, which would seem to conflict with the above definition of "power spectrum".
Does "power spectrum" mean different things to different people?
Answer: The most authoritative reference I can come up with is from Jayant & Noll, Digital Coding Of Waveforms, (c) Bell Telephone Laboratories, Incorporated 1984, published by Prentice-Hall, Inc.
On page 57, they define the spectral flatness:
and, previously, on page 55 they define $S_{xx}$:
So the FFT-squared version is the one you want.
It looks like Makhoul & Wolf, Linear Prediction and the Spectral Analysis of Speech, Bolt, Beranek, and Newman, Inc. Technical Report, 1972 is also available.
And it has the same definition: | {
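In code, the FFT-squared definition is only a few lines; a minimal NumPy sketch (function name is mine):

```python
import numpy as np

def spectral_flatness(x, eps=1e-20):
    """Wiener entropy: geometric mean / arithmetic mean of the power spectrum."""
    power = np.abs(np.fft.rfft(x)) ** 2      # power spectrum = |FFT|^2
    power = np.maximum(power, eps)           # avoid log(0) in the geometric mean
    gmean = np.exp(np.mean(np.log(power)))
    return gmean / np.mean(power)

n = np.arange(1024)
impulse = np.zeros(1024); impulse[0] = 1.0   # flat spectrum
tone = np.sin(2 * np.pi * 8 * n / 1024)      # energy in a single bin

print(spectral_flatness(impulse))  # 1.0 (maximally flat)
print(spectral_flatness(tone))     # close to 0 (tonal)
assert abs(spectral_flatness(impulse) - 1.0) < 1e-12
assert spectral_flatness(tone) < 0.01
```

Computing the geometric mean through logs, as in Wikipedia's second form, avoids the overflow/underflow you would get from multiplying hundreds of bin values directly.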
"domain": "dsp.stackexchange",
"id": 6574,
"tags": "fft, frequency-spectrum, power-spectral-density"
} |
"Find the net force the southern hemisphere of a uniformly charged sphere exerts on the northern hemisphere" | Question: This is Griffiths, Introduction to Electrodynamics, 2.43, if you have the book.
The problem states Find the net force that the southern hemisphere of a uniformly charged sphere exerts on the northern hemisphere. Express your answer in terms of the radius $R$ and the total charge $Q$. Note: I will say its uniform charge is $\rho$.
My attempt at a solution:
My idea is to find the field generated by the southern hemisphere in the northern hemisphere, and use the field to calculate the force, since the field is force per unit charge.
To do this I start by introducing a Gaussian shell with radius $r < R$ centered at the same spot as our sphere. Then in this sphere,
$$\int E\cdot\mathrm{d}a = \frac{1}{\epsilon_0}Q_{enc}$$
Now what is $Q_{enc}$? I feel like $Q_{enc} = \frac{2}{3}\pi r^3\rho$ , since we're just counting the charge from the lower half of the sphere (the part thats in the southern hemisphere of our original sphere). (Perhaps here is my error, should I count the charge from the entire sphere?, if so why?)
Using this we get $$\left|E\right|4\pi r^2 = \frac{2\pi r^3\rho}{3\epsilon_0},$$ so $$E = \frac{r\rho}{6\epsilon_0}.$$ Using these I calculate the force per unit volume as $\rho E$ or $$\frac{\rho^2 r}{6\epsilon_0}$$
Then by symmetry, we know that any net force exerted on the top shell by the bottom must be in the $\hat{z}$ direction, so we get $$ F = \frac{\rho^2}{6\epsilon_0} \int^{2\pi}_0\int^{\frac{\pi}{2}}_0\int^R_0 r^3\sin(\theta)\cos(\theta) \mathrm{d}r\mathrm{d}\theta\mathrm{d}\phi$$
integrating we get $F = \frac{1}{4}\frac{R^4\rho^2\pi}{6\epsilon_0}$.
Now Griffiths requests us to put this in terms of the total charge, and to do so we write $\rho^2 = \frac{9Q^2}{16\pi^2R^6}$
Plugging this back into $F$ we get $$F = (\frac{1}{8\pi\epsilon_0})(\frac{3Q^2}{16R^2})$$
Now the problem is that this is off by a factor of $2$ ...
I tried looking back through and the only place I see where I could somehow gain a factor of $2$ is the spot I mentioned in the solution, where I could include the entire charge, however, I can't see why I should include the entire charge, so if that's the reason I would be very grateful if someone could explain to me why I need to include the entire charge.
If that is not the reason, and perhaps this attempt at a solution is just complete hogwash, I would appreciate if you could tell me how I should go about solving this problem instead. (but you don't need to completely solve it out for me.)
Answer: The factor of two is coming from the place you identified.
Think about throwing out that factor of two, so you're considering only the bottom hemisphere. When you make your Gaussian shell and have it enclose charge in the bottom hemisphere only, the charge is no longer uniformly distributed inside your Gaussian shell. Thus, the electric field created by the charge you're considering is not the same at all parts of the shell, so you can't find the magnitude of the electric field in the way you described. That only works when the charge distribution has some sort of symmetry you're exploiting. You'd have to do a difficult integral instead.
However, if you don't throw out that factor of two, you're simply finding the electric field inside the shell. Suppose you carry out the rest of your calculation. Then you've found the net force in the z-direction in the north half of the sphere. However, the north half cannot exert any net force on itself, so this entire net force must be the same as the net force from the southern hemisphere.
So you're including all the charge when you make your Gaussian surface because you need to find the true electric field in the shell. The true electric field, when integrated, gives you the net force, which by basic mechanics arguments must be due to the southern hemisphere. | {
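As a numerical cross-check of this reasoning (a sketch with numpy; variable names mine): using the full enclosed charge gives $E = \rho r/(3\epsilon_0)$, and integrating $\rho E_z$ over the northern hemisphere lands on the textbook result $3Q^2/(64\pi\epsilon_0 R^2)$, exactly twice the questioner's value.

```python
import numpy as np

def trapz(y, x):
    """Plain trapezoidal rule (avoids depending on np.trapz/np.trapezoid naming)."""
    return float(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2.0))

eps0 = 8.8541878128e-12
R, Q = 1.0, 1.0
rho = 3.0 * Q / (4.0 * np.pi * R**3)

# With the FULL enclosed charge, Gauss's law gives E(r) = rho * r / (3 eps0), so
# F_z = (rho^2 / (3 eps0)) * integral of r^3 sin(th) cos(th) over the hemisphere
r = np.linspace(0.0, R, 4001)
th = np.linspace(0.0, np.pi / 2.0, 4001)
integral = trapz(r**3, r) * trapz(np.sin(th) * np.cos(th), th) * 2.0 * np.pi
F_numeric = rho**2 / (3.0 * eps0) * integral

F_exact = 3.0 * Q**2 / (64.0 * np.pi * eps0 * R**2)
print(F_numeric, F_exact)
assert np.isclose(F_numeric, F_exact, rtol=1e-5)
```
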
"domain": "physics.stackexchange",
"id": 7716,
"tags": "homework-and-exercises, forces, electrostatics, electric-fields, gauss-law"
} |
Vanilla Javascript: Chained AJAX request | Question: I'd be grateful for a review of my AJAX request. If I have taken the correct approach here and especially any security concerns!
I am creating a Purchase Order system for my manager (as part of an apprenticeship I am doing).
There is a select html element where a supplier can be selected. I also have a button to trigger a bootstrap modal with a form to enter a new supplier. I have an AJAX request to add the new supplier to the database, then a chained AJAX request to query the database so the select element can be updated.
<script>
window.addEventListener('DOMContentLoaded', function () {
// User can add a new supplier using a modal form.
// 1) Send an AJAX POST request to update the database
// 2) Send AJAX GET request to query the database
// 3) Repopulate the select element with the updated records.
document.getElementById("newSupplierModalForm").addEventListener('submit', function (e) {
// Send AJAX request to add new supplier.
// TODO: Validate name field has text and handle error.
e.preventDefault();
const token = document.querySelector('meta[name="csrf-token"]').getAttribute('content');
const xhr = new XMLHttpRequest();
xhr.open('POST', '/supplier', true);
// Set headers
xhr.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
xhr.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
xhr.setRequestHeader('X-CSRF-Token', token);
//Set paramaters
const params = new URLSearchParams();
params.set("name", document.getElementById('supName').value);
params.set("address_1", document.getElementById('supAddress_1').value);
params.set("address_2", document.getElementById('supAddress_2').value);
params.set("town", document.getElementById('supTown').value);
params.set("post_code", document.getElementById('supPost_code').value);
params.set("contact_name", document.getElementById('supContact_name').value);
params.set("contact_tel", document.getElementById('supContact_tel').value);
params.set("contact_email", document.getElementById('supContact_email').value);
// Logic
xhr.onload = function (e) {
if (xhr.status === 200) {
// Close modal
document.getElementById('addSuppModalClose').click();
// AJAX request for updated supplier records to repopulate select element.
const xhr_getSuppliers = new XMLHttpRequest();
xhr_getSuppliers.open('GET', '/getSuppliers', true);
// Set headers
xhr_getSuppliers.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
xhr_getSuppliers.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
xhr_getSuppliers.setRequestHeader('X-CSRF-Token', token);
// Logic
xhr_getSuppliers.onload = function (e) {
if (xhr.status === 200) {
const data = JSON.parse(xhr_getSuppliers.response);
const selectElem = document.getElementById('supplier');
selectElem.options.length = 0; //reset options
// populate options
for (index in data) {
console.log(data[index], index);
selectElem.options[selectElem.options.length] = new Option(data[index], index);
}
}
};
// Send GET Request
xhr_getSuppliers.send();
} else {
// Show alert if AJAX POST Request does not receive a 200 response.
// I need to write code to feed this back to the user later.
alert(xhr.response);
};
};
// Send POST Request
xhr.send(params.toString());
});
});
</script>
Answer: Feedback
The code uses const for variables that don't need to be re-assigned. This is good because it prevents accidental re-assignment. It also uses === when comparing the status codes, avoiding type coercion. That is a good habit.
There are at least four indentation levels because the onload callbacks are anonymous functions. Some readers of the code would refer to this as "callback hell", because really there is just one large function here with functions defined inside of it, making it challenging to read. See the first section of the Suggestions below for tips on avoiding this.
The const keyword became standard in the ECMAScript 2015 specification, so other ECMAScript 2015 (AKA ES6) features could be used to simplify the code.
Suggestions
Callbacks and Nesting/Indentation levels
While most of the examples on callbackhell.com focus on promise callbacks and asynchronous code, the concepts apply here as well. Name the functions and then move the definitions out to different lines. If you need to access variables from outside then you may need to pass them as arguments.
Data Object and repeated DOM lookups
I will be honest - I haven't encountered URLSearchParams before. I typically see code that uses FormData. Are all of the input elements inside of a <form> tag? If they are, then a reference to that element could be passed to the FormData constructor. Otherwise, do the name attributes match the keys added to params - i.e. the id attributes without the sup prefix? You could iterate over the input elements and add the parameters to params.
Avoid global variables
In the review by dfhwze, The following line is referenced:
for (index in data) {
Without any keyword before index, like var, let or const that variable becomes a global (on window). | {
"domain": "codereview.stackexchange",
"id": 35580,
"tags": "javascript, event-handling, ajax"
} |
B cell clones and affinity maturation | Question: As B-cells undergo affinity maturation, their BCR sequences change. Are they still considered to be part of the same clone?
I couldn't find a clear answer in response to this very similar question:
What is meant by clones of B-cells?
Answer: There are different opinions on this, since it's about usage. You could make an argument that, if you're talking about a B-cell clone, it is the product of a specific VDJ recombination (a mature, but naive B-cell), and refers to B-cells that produce antibody to one antigen. However, the term was originally proposed in this context:
...the expendable cells of the body can be regarded as belonging to clones which have arisen as a result of somatic mutation or conceivably other inheritable changes. Each such clone will have some individual characteristic and in a special sense will be subject to an evolutionary process of selective survival within the internal environment of the body.
In this case, I'd argue that, when the BCR sequence changes in progeny from that original naive B-cell (as in affinity maturation), you end up with different clones.
You do have to be careful, though, because people use this term differently. | {
"domain": "biology.stackexchange",
"id": 9001,
"tags": "immunology"
} |
Centre of Mass and tangential velocity star and planet | Question: We have two objects - one massive star, and one planet which has a considerably smaller, but non-negligible mass, in a circular orbit about a common centre of mass. Using equation
$F = GMm/r^2$ where M is the mass of the star, m is the mass of the planet, and r is distance from the centre of mass of the two objects.
We can work out the velocity of the star about the centre of mass of the two objects by equating F to the centripetal force. This will give us
$v^2 = GM/r$
This seems to indicate that the tangential velocity of the star is much greater than that of the planet, because as r gets smaller v becomes greater, and r is so much smaller for the star.
What I understand is that the period of the two objects is equal, and so the star travels much slower to complete an orbit which has a far smaller distance travelled. This makes sense to me, however I want to know what's wrong with my maths, or perhaps my notation.
Answer: It seems that you are mixing up the distance between the planet and the star and the distance between the star and the centre of mass (both are called $r$ in your derivation).
Try to use $r = r_M + r_m$, where $r$ is the total distance between planet and star, $r_M$ is the distance between the star and the combined centre of mass, and $r_m$ is the distance between the planet and the combined centre of mass. You will need to use that the centripetal force is given by $F = \frac{M v^2}{r_M}$.
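The corrected setup can be checked numerically; a small sketch (Sun–Jupiter-like numbers of my own choosing) verifying that each body's centripetal force about the barycenter equals the gravitational pull, even though the two tangential speeds are very different:

```python
import math

G = 6.674e-11
M = 1.989e30        # star mass, kg
m = 1.898e27        # planet mass, kg
r = 7.785e11        # star-planet separation, m

r_M = m / (M + m) * r                    # star's distance from the barycenter
r_m = M / (M + m) * r                    # planet's distance from the barycenter
omega = math.sqrt(G * (M + m) / r**3)    # shared angular velocity (same period)

v_M = omega * r_M   # star's tangential speed (small)
v_m = omega * r_m   # planet's tangential speed (large)

F_grav = G * M * m / r**2
assert math.isclose(M * v_M**2 / r_M, F_grav, rel_tol=1e-12)
assert math.isclose(m * v_m**2 / r_m, F_grav, rel_tol=1e-12)
print(v_M, v_m)     # the star moves far more slowly than the planet
```

Because both bodies share the same $\omega$, the star's smaller orbital radius directly means a smaller tangential speed, resolving the apparent contradiction in the question.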
Hope this helps! | {
"domain": "physics.stackexchange",
"id": 20747,
"tags": "cosmology, newtonian-gravity"
} |
Equation of a Multi-Layer Perceptron Network | Question: I'm writing an article about business management of wine companies where I use a Multi-Layer Perceptron Network.
My teacher then asked me to write an equation that lets me calculate the output of the network. My answer was that due to the nature of multi-layer perceptron networks there is no single equation per se. What I have is a table of weights and bias. I can then use this formula:
$$f(x) = (\sum^{m}_{i=1} w_i * x_i) + b$$
Where:
m is the number of neurons in the previous layer,
w is a random weight,
x is the input value,
b is a random bias.
Doing this for each layer/neuron in the hidden layers and the output layer.
She showed me an example of another work she made (image on the bottom), telling me that it should be something like that. Looking at the chart, I suppose that it is a logistic regression.
So, my questions are the following:
Is there any equation to predict the output of a multi-layer perceptron network other than iterating over each neuron with $w*x+b$?
Should I just tell my teacher that a logistic regression is a different case and the same does not apply to this type of neural networks?
Is the first formula correct to show that a value of a neuron is the sum product of the previous layers plus the bias?
Edit 1: I didn't write the formula but I do also have activation functions (relu).
Answer: You are forgetting one element of the MLP which is the activation function. If your activation function is linear - then you can simply flatten out all the neurons into one single linear equation. The advantage of MLP however is its non-linearities so I suspect in your network you do have some activation (sigmoid? tanh? relu? etc..).
As for your graph - you could simply output predictions from your MLP and plot the exact scatter plot you have above. The only difference would be you wouldn't have a simple way of expressing this network in algebraic notation (as you have done on the existing x-axis).
To describe networks effectively in text you should look into matrix notation describing the weights and inputs of each layer. Maybe take a look at something like this to get started: https://www.jeremyjordan.me/intro-to-neural-networks/ | {
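To illustrate the point about iterating $w \cdot x + b$ through an activation, a minimal forward pass might look like this (numpy; weights are hand-picked for the example, not trained):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def mlp_forward(x, layers):
    """Each layer is (W, b); hidden layers apply relu(W @ x + b), last layer is linear."""
    for W, b in layers[:-1]:
        x = relu(W @ x + b)
    W, b = layers[-1]
    return W @ x + b

# 2 inputs -> 2 hidden units (relu) -> 1 output
W1 = np.array([[1.0, -1.0], [0.0, 1.0]]); b1 = np.zeros(2)
W2 = np.array([[1.0, 1.0]]);              b2 = np.array([1.0])

y = mlp_forward(np.array([2.0, 3.0]), [(W1, b1), (W2, b2)])
print(y)  # [4.]
```

Without the relu nonlinearity the two layers would collapse into a single linear map $W_2 W_1 x + (W_2 b_1 + b_2)$, which is why a linear MLP really does reduce to one equation.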
"domain": "datascience.stackexchange",
"id": 8458,
"tags": "neural-network, perceptron"
} |
Dempster-Shafer theory initial belief values | Question: I am looking to implement D-S Theory in my (computer science) research, I'll be using it to determine the probability that a triggered sensor event is a true positive.
How would you calculate an initial belief value without having the ability to perform data mining on a dataset (a cold start)?
One solution that has been postulated is to use the manufacturer's performance statistics that the sensors are working correctly and then adjust this over time to account for false positives. Although this will result in a very high initial belief rate for some sensors (95% belief).
Answer: If the 95% (initial) belief corresponds to the probability of a true positive according to the performance statistics, then that seems appropriate to me. | {
"domain": "cs.stackexchange",
"id": 5586,
"tags": "probability-theory"
} |
Is it possible to know the order of the filter just by looking at the pole-zero plot? | Question: Is it possible to know the order of the filter just by looking at the pole-zero plot?
I know how to get the order of the filter using calculations (highest order), but I want to know whether it is possible to see that in a pole-zero plot.
So far my idea is that the larger of the number of zeros and poles is the order; for example, if I have 3 poles and 1 zero in a pole-zero plot, then the filter is third order.
My knowledge over this is still small, so I am sorry if it is a stupid question.
Thanks
Answer: Count the number of poles and zeros; if the numbers aren't the same then the larger number is the filter order. Don't forget to count multiple poles or zeros with their multiplicity. Actually, you always get the same number of poles and zeros but on a pole-zero plot you might not see them all because some could be at infinity or at the origin.
E.g., in the following pole-zero plot you see 12 zeros, 6 poles away from the origin, and a pole at the origin with multiplicity 6. So the total filter order is 12. (This is an IIR low-pass filter with approximately linear phase in the passband): | {
"domain": "dsp.stackexchange",
"id": 2540,
"tags": "filters, poles-zeros"
} |
Generic implementation of a hashtable with double-linked list | Question: I implemented a hashtable that handles collisions by chaining entries using a double linked list.
The idea is that it can store/lookup any kind of generic data structure, while maintaining an API that is convenient to use.
For now the implementation only supports adding, obtaining and removing entries, - resizing of the table is planned in the future.
Can I make my implementation more performant and readable, so that I can easily reuse it in future projects?
I feel like the ht_add and ht_delete functions especially have room for improvement. Looking forward to any feedback.
hashtable.h
#ifndef HASHTABLE_H
#define HASHTABLE_H
typedef struct entry_t
{
unsigned int key;
void* value;
struct entry_t* next;
struct entry_t* prev;
} entry_t;
typedef struct hashtable_t
{
unsigned int buckets_count;
struct entry_t **buckets;
} hashtable_t;
// ToDo: Add resizing functionality
unsigned int compute_hash(struct hashtable_t *ht, unsigned int key);
hashtable_t *ht_new(unsigned int size);
int ht_add(struct hashtable_t* ht, unsigned int key, void* value);
entry_t* ht_get(struct hashtable_t* ht, unsigned int key);
void ht_print(struct hashtable_t* ht);
void ht_delete(struct hashtable_t* ht, unsigned int key);
void ht_free(struct hashtable_t* ht);
#endif
hashtable.c
#include "hashtable.h"
#include <stdlib.h>
#include <stdio.h>
// Sources used as reference/learning material:
// http://pokristensson.com/code/strmap/strmap.c
// https://github.com/Encrylize/hashmap/blob/master/hashmap.c
// https://github.com/goldsborough/hashtable/blob/master/hashtable.c
hashtable_t *ht_new(unsigned int bucket_count)
{
struct hashtable_t *ht;
ht = malloc(sizeof(struct hashtable_t));
if (ht == NULL)
return NULL;
ht->buckets = malloc(sizeof(entry_t) * bucket_count);
if (ht->buckets == NULL)
return NULL;
ht->buckets_count = bucket_count;
return ht;
}
void ht_delete(struct hashtable_t *ht, unsigned int key)
{
unsigned int index = compute_hash(ht, key);
entry_t *e_curr = ht_get(ht, key);
/* Chain has no entries */
if (e_curr == NULL)
return;
entry_t *e_prev = e_curr->prev;
entry_t *e_next = e_curr->next;
/* Entry is first element in the chain */
if(e_prev == NULL)
{
if(e_next != NULL)
{
e_next->prev = NULL;
}
ht->buckets[index] = e_next;
free(e_curr);
}
/* Entry is not the first element in the chain */
else
{
while(e_curr->key != key)
{
e_curr = e_next;
e_prev = e_curr;
}
e_prev->next = e_next;
if(e_next != NULL)
e_next->prev = e_prev;
free(e_curr);
}
}
int ht_add(struct hashtable_t *ht, unsigned int key, void* value)
{
unsigned int index = compute_hash(ht, key);
struct entry_t *new_entry = malloc(sizeof(struct entry_t));
if (new_entry == NULL)
return -1;
if (ht->buckets[index] != NULL)
{
/* Go to the end of the linked list and append new entry */
entry_t *current_entry = ht->buckets[index];
while (current_entry->next != NULL)
{
current_entry = current_entry->next;
}
new_entry->key = key;
new_entry->value = value;
new_entry->next = NULL;
new_entry->prev = current_entry;
current_entry->next = new_entry;
} else
{
new_entry->key = key;
new_entry->value = value;
new_entry->next = NULL;
new_entry->prev = NULL;
ht->buckets[index] = new_entry;
}
return 0;
}
// Taken from https://burtleburtle.net/bob/hash/integer.html
// Should give a fairly decent distribution of entries
unsigned int compute_hash(struct hashtable_t *ht, unsigned int key)
{
key = (key ^ 61) ^ (key >> 16);
key = key + (key << 3);
key = key ^ (key >> 4);
key = key * 0x27d4eb2d;
key = key ^ (key >> 15);
return key % ht->buckets_count;
}
void ht_print(struct hashtable_t *ht)
{
for (int i = 0; i < ht->buckets_count; ++i)
{
struct entry_t *e_curr = ht->buckets[i];
printf("[%d]", i);
do
{
if (e_curr == NULL)
{
printf(" NULL\n");
break;
}
printf(" %d <->", e_curr->key);
if (e_curr->next == NULL)
{
printf(" NULL\n");
break;
}
e_curr = e_curr->next;
} while (1);
}
}
entry_t *ht_get(struct hashtable_t *ht, unsigned int key)
{
unsigned int index = compute_hash(ht, key);
entry_t *e_curr = ht->buckets[index];
while (e_curr->key != key)
{
e_curr = e_curr->next;
/* Specified key does not exist in ht */
if (e_curr == NULL)
return NULL;
}
return e_curr;
}
void ht_free(struct hashtable_t *ht)
{
free(ht->buckets);
free(ht);
}
main.c
#include <stdio.h>
#include "hashtable.h"
int main()
{
hashtable_t *ht = ht_new(5);
printf("ADDING ELEMENTS ...\n");
ht_add(ht, 10, 1);
ht_add(ht, 20, 2);
ht_add(ht, 30, 3);
ht_add(ht, 40, 4);
ht_add(ht, 50, 5);
ht_add(ht, 60, 6);
ht_add(ht, 70, 7);
ht_add(ht, 80, 8);
ht_add(ht, 90, 9);
ht_add(ht, 100, 10);
ht_print(ht);
printf("DELETING ELEMENTS ...\n");
ht_delete(ht, 80);
ht_delete(ht, 70);
ht_delete(ht, 50);
ht_print(ht);
ht_free(ht);
return 0;
}
Answer: Design:
The first thing I see is that the hashtable is in two parts. You don't need to do this. Make it a single object and remove a whole bunch of edge cases you need to test.
typedef struct hashtable_t
{
unsigned int buckets_count;
struct entry_t **buckets; // A pointer to another block.
// This block needs to be separately
// allocated and maintained.
} hashtable_t;
Rather do this:
typedef struct hashtable_t
{
unsigned int buckets_count;
entry_t buckets[0]; // This is a fake array object.
// You just allocate enough space for
// the buckets you want and they
// be correctly aligned in the same
// object.
} hashtable_t;
hashtable_t *table = malloc(sizeof(hashtable_t) + sizeof(entry_t) * buckets_count);
Hash tables are hard to get correct and sizing the number of buckets is important if you want to avoid clashes. Normally you want a bucket count that is a prime number. You cannot expect the user of your hash table to know this, so asking them for a bucket count is a bad idea. I would change the interface so that your code simply uses a prime number of buckets (if you want to be expandable then allow them to input some value that gets converted to the nearest prime, or select from a known good set of primes that you support).
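A quick sketch (Python just for brevity) of the clash problem a prime bucket count avoids — this mainly bites identity-style hashing of keys that share a common stride:

```python
# With identity hashing, keys with stride 10 collapse into one bucket
# when the bucket count divides the stride, but spread over all
# buckets when the count is a prime coprime with the stride.
def buckets_used(n_buckets, stride=10, n_keys=100):
    return len({(k * stride) % n_buckets for k in range(n_keys)})

print(buckets_used(10))  # 1  -> every key lands in bucket 0
print(buckets_used(11))  # 11 -> all buckets used
```

A good mixing hash (like the one in the posted code) masks this, but a plain modulo table relies on the prime count.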
Your design allows for multiple entries in the table to have the same key. You could argue this is a design decision I suppose. But usually you would expect keys to be unique and subsequent writes with the same key to overwrite existing values.
If you deliberately want to support multiple items with the same key then you really need to explain this in the documentation up front to make sure the user is clear on the concept.
Code Review
I would note that POSIX reserves all identifiers that end in _t.
typedef struct hashtable_t
{} hashtable_t;
ht = malloc(sizeof(struct hashtable_t));
if (ht == NULL)
return NULL;
ht->buckets = malloc(sizeof(entry_t) * bucket_count);
if (ht->buckets == NULL)
// If you return here.
// You have leaked the initial allocation.
// you need to add free(ht) first
// don't forget the braces.
return NULL;
Why not put the ht_free() next?
void ht_free(struct hashtable_t *ht)
{
free(ht->buckets);
free(ht);
}
It would have been nice to have the allocation/deallocation close to each other.
This is a bad choice.
/* Go to the end of the linked list and append new entry */
entry_t *current_entry = ht->buckets[index];
while (current_entry->next != NULL)
{
current_entry = current_entry->next;
}
Put new clashes at the front of the list. It is more likely that new values will be used sooner than old values. So a value that has recently been put in the table will likely be used again, and thus putting it at the front of the list will decrease access time.
Useless code that is never executed.
/* Entry is not the first element in the chain */
else
{
// This is never true.
// You just searched for the item with ht_get() which either
// returns the first item that matches key or null.
while(e_curr->key != key)
{
// This loop will never be entered.
e_curr = e_next;
e_prev = e_curr;
}
We could simplify the print a lot:
void ht_print(struct hashtable_t *ht)
{
for (int i = 0; i < ht->buckets_count; ++i)
{
printf("[%d] ", i);
struct entry_t *e_curr = ht->buckets[i];
for (;e_curr != NULL; e_curr = e_curr->next)
{
printf("%d <=>", e_curr->key);
}
printf(" NULL\n");
}
}
I would simplify the loop here:
entry_t *ht_get(struct hashtable_t *ht, unsigned int key)
{
unsigned int index = compute_hash(ht, key);
entry_t *e_curr = ht->buckets[index];
for (;e_curr != NULL; e_curr = e_curr->next) {
if (e_curr->key == key) {
return e_curr;
}
}
return NULL;
} | {
"domain": "codereview.stackexchange",
"id": 40233,
"tags": "c, linked-list, hash-map"
} |
What precisely is quantum annealing? | Question: Many people are interested in the subject of quantum annealing, as an application of quantum technologies, not least because of D-WAVE's work on the subject. The Wikipedia article on quantum annealing implies that if one performs the 'annealing' slowly enough, one realises (a specific form of) adiabatic quantum computation. Quantum annealing seems to differ mostly in that it does not seem to presuppose doing evolution in the adiabatic regime — it allows for the possibility of diabatic transitions.
Still, there seems to be more intuition at play with quantum annealing than just "adiabatic computation done hastily". It seems that one specifically chooses an initial Hamiltonian consisting of a transverse field, and that this is specifically meant to allow for tunnelling effects in the energy landscape (as described in the standard basis, one presumes). This is said to be analogous to (possibly even to formally generalise?) the temperature in classical simulated annealing. This raises the question of whether quantum annealing pre-supposes features such as specifically an initial transverse field, linear interpolation between Hamiltonians, and so forth; and whether these conditions may be fixed in order to be able to make precise comparisons with classical annealing.
Is there a more-or-less formal notion of what quantum annealing consists of, which would allow one to point to something and say "this is quantum annealing" or "this is not precisely quantum annealing because [it involves some additional feature or lacks some essential feature]"?
Alternatively: can quantum annealing be described in reference to some canonical
framework — possibly in reference to one of the originating
papers, such as Phys. Rev. E 58 (5355),
1998
[freely available PDF
here]
— together with some typical variations which are accepted as
also being examples of quantum annealing?
Is there at least a description which is precise enough that we can say that quantum annealing properly generalises classical simulated annealing, not by "working better in practice", or "working better under conditions X, Y, and Z", but in the specific sense that any classical simulated annealing procedure can be efficiently simulated or provably surpassed by a noiseless quantum annealing procedure (just as unitary circuits can simulate randomised algorithms)?
Answer: I'll do my best to address your three points.
My previous answer to an earlier question about the difference between quantum annealing and adiabatic quantum computation can be found here. I'm in agreement with Lidar that quantum annealing can't be defined without considerations of algorithms and hardware.
That being said, the canonical framework for quantum annealing and the inspiration for the D-Wave is the work by Farhi et al. (quant-ph/0001106).
Finally, I'm not sure one can generalize classical simulated annealing using quantum annealing, again without discussing hardware. Here's a thorough comparison: 1304.4595.
Addressing comments:
(1) I saw your previous answer, but don't get the point you make here. It's fine for QA not to be universal, and not to have a provable performance to solve a problem, and for these to be motivated by hardware constraints; but surely quantum annealing is something independent of specific hardware or instances, or else it doesn't make sense to give it a name.
(2) You're linking the AQC paper, which together with the excerpt by Vinci and Lidar, strongly suggests that QA is just adiabatic-ish evolution in the not-necessarily-adiabatic regime. Is that essentially correct? Is this true regardless of what the initial and final Hamiltonians are, or what path you trace through Hamiltonian space or the parameterisation with respect to time? If there are any extra constraints beyond "possibly somewhat rushed adiabatic-ish computation", what are those constraints, and why are they considered important to the model?
(1+2) Similar to AQC, QA reduces the transverse magnetic field of a Hamiltonian; however, the process is no longer adiabatic and is dependent on the qubits and noise levels of the machine. The initial Hamiltonians are called gauges in D-Wave's vernacular and can be simple or complicated as long as you know the ground state. As for the 'parameterization with respect to time,' I think you mean the annealing schedule and, as stated above, this is restricted by hardware constraints.
(3) I also don't see why hardware is necessary to describe the comparison with classical simulated annealing. Feel free to assume that you have perfect hardware with arbitrary connectivity: define quantum annealing as you imagine a mathematician might define annealing, free of niggling details; and consider particular realisations of quantum annealing as attempts to approximate the conditions of that pure model, but involving the compromises an engineer is forced to make on account of having to deal with the real world. Is it not possible to make a comparison?
The only relation classical simulated annealing has with quantum annealing is they both have annealing in the name.
The Hamiltonians and process are fundamentally different.
$$H_{\rm{classical}} = \sum_{i,j} J_{ij} s_i s_j$$
$$H_{\rm{quantum}} = A(t) \sum_{i,j} J_{ij} \sigma_i^z \sigma_j^z + B(t) \sum_i \sigma_i^x$$
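As a toy numerical illustration of the second Hamiltonian (a made-up two-qubit instance with small local fields added so the problem ground state is unique — nothing D-Wave-specific), one can diagonalise $H(s) = (1-s)\,H_{\rm driver} + s\,H_{\rm problem}$ along a linear schedule and watch the spectral gap:

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli X
sz = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli Z
I2 = np.eye(2)

H_driver = np.kron(sx, I2) + np.kron(I2, sx)          # transverse field
H_problem = (np.kron(sz, sz)                          # Ising coupling
             + 0.5 * np.kron(sz, I2) + 0.25 * np.kron(I2, sz))

def gap(s):
    """Gap between ground and first excited state at schedule point s."""
    evals = np.linalg.eigvalsh((1.0 - s) * H_driver + s * H_problem)
    return evals[1] - evals[0]

# The minimum gap along the schedule sets how slowly an adiabatic
# sweep must go to avoid a Landau-Zener transition.
min_gap = min(gap(s) for s in np.linspace(0.0, 1.0, 101))
print(round(gap(0.0), 3), round(gap(1.0), 3))  # 2.0 0.5
```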
However, if you would like to compare simulated quantum annealing with quantum annealing, Troyer's group at ETH are the pros when it comes to simulated quantum annealing. I highly recommend these slides largely based on the Boixo et al. paper I linked above.
Performance of simulated annealing, simulated quantum annealing and D-Wave on hard spin glass instances — Troyer (PDF)
(4) Your remark about the initial Hamiltonian is useful and suggests something very general lurking in the background. Perhaps arbitrary (but efficiently computable, monotone, and first differentiable) schedules are also acceptable in principle, with limitations only arising from architectural constraints, and of course, also the aim to obtain a useful outcome?
I'm not sure what you're asking. Are arbitrary schedules useful? I'm not familiar with working on arbitrary annealing schedules. In principle, the field should go from high to low, slow enough to avoid a Landau-Zener transition and fast enough to maintain the quantum effects of qubits.
Related; The latest iteration of the D-Wave can anneal individual qubits at different rates but I'm not aware of any D-Wave unaffiliated studies where this has been implemented.
DWave — Boosting integer factoring performance via quantum annealing offsets (PDF)
(5) Perhaps there is less of a difference between the Hamiltonians in QA and CSA than you suggest. $H_{cl}$ is clearly obtained from $H_{qm}$ for $A(t)=1,B(t)=0$ if you impose a restriction to standard basis states (which may be benign if $H_{qm}$ is non-degenerate and diagonal). There's clearly a difference in 'transitions', where QA seems to rely on suggestive intuitions of tunnelling/quasi adiabaticity, but perhaps this can be (or already has been?) made precise by a theoretical comparison of QA to a quantum walk. Is there no work in this direction?
$A(t)=1,B(t)=0$ With this schedule you're no longer annealing anything. The machine is just sitting there at a finite temperature so the only transitions you'll get are thermal ones. This can be slightly useful as shown by Nishimura et al. The following publication talks about the uses of a non-vanishing transverse field.
arXiv:1605.03303
arXiv:1708.00236
Regarding the relation of quantum annealing with quantum walks. It's possible to treat quantum annealing in this way as shown by Chancellor.
arXiv:1606.06800
(6) One respect in which I suppose the hardware may play an important role --- but which you have not explicitly mentioned yet --- is the role of dissipation to a bath, which I now vaguely remember being relevant to DWAVE. Quoting from Boixo et al.: "Unlike adiabatic quantum computing [...] quantum annealing is a positive temperature method involving an open quantum system coupled to a thermal bath." Clearly, what bath coupling one expects in a given system is hardware dependent; but is there no notion of what bath couplings are reasonable to consider for hypothetical annealers?
I don't know enough about the hardware aspects to answer this, but if I had to guess, the lower the temperature the better to avoid all the noise-related problems.
You say "In principle, the field should go from high to low, slow enough to avoid a Landau-Zener transition and fast enough to maintain the quantum effects of qubits." This is the helpful thing to do, but you usually don't know just how slow that can or should be, do you?
This would be the coherence time of the qubits. The D-Wave annealing schedules are on the order of microseconds with T2 for superconducting qubits being around 100 microseconds. If I had to give a definitive definition of an annealing schedule it would be 'an evolution of the transverse field within a length of time less than the decoherence time of the qubit implementation.' This allows for different starting strengths, pauses, and readouts of field strengths. It need not be monotonic.
I thought maybe dissipation to a bath was sometimes considered helpful to how quantum annealers work when operating in the non-adiabatic regime (as it often will be when working on NP-hard problems, because we're interested in obtaining answers to problems despite the eigenvalue gap possibly being very small). Is dissipation not potentially helpful then?
I consulted with S. Mandra, and he pointed me to a few papers by P. Love and M. Amin which show that certain baths can speed up quantum annealing and that thermalization can help find the ground state faster.
arXiv:cond-mat/0609332
I think that maybe if we can clear up the confusion about the annealing schedules, and whether or not the transition has to be along a linear interpolation between two Hamiltonians (as opposed to a more complicated trajectory), ...
$A(t)$ and $B(t)$ don't necessarily have to be linear or even monotonic. In a recent presentation, D-Wave showed the advantages of pausing the annealing schedule and backward anneals.
DWave — Future Hardware Directions of Quantum Annealing (PDF)
Feel free to condense these responses however you'd like. Thanks. | {
"domain": "quantumcomputing.stackexchange",
"id": 73,
"tags": "annealing, adiabatic-model"
} |
Can the classically described singularity be a smooth fractal? | Question: I plan to study general relativity in the next few months and for now I keep gathering information about it. I read here and there that Penrose and Hawking proved that general relativity as we know it entails the existence of a singularity inside a black hole. I would like to know if this follows from an assumption of a Riemaniann structure where spacetime may lose it. Laurent Nottale in his books explains quantum effects by lack of differentiability of spacetime, so are there models in the literature where the supposed singularity is described as a "smooth fractal" (loose translation of French "fractale lisse")? Like in the visualization of Nash isometric embedding theorem, that is a $C^{1}$ (and not $C^2$) manifold with infinitely many corrugations (see http://hevea-project.fr/ENPageToreDossierDePresse.html)?
Answer: In general relativity the space manifold has a metric described by the Einstein equation, and there is no need for an embedding space. When we say there is a singularity in the metric this usually means a curvature singularity, where scalar invariants diverge as you approach it, and the point is usually taken to not be a part of the manifold.
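For the Schwarzschild case one can make "curvature singularity" concrete with the Kretschmann scalar $K = R_{abcd}R^{abcd} = 48M^2/r^6$ (geometric units):

```python
# K is perfectly finite at the horizon r = 2M and blows up only as
# r -> 0, which is why the curvature singularity sits at r = 0 and
# not at the horizon.
def kretschmann(M, r):
    return 48.0 * M**2 / r**6

M = 1.0
print(kretschmann(M, 2 * M))  # 0.75 at the horizon: nothing singular there
```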
The corrugated embeddings of manifolds in higher spaces do not affect their differentiability or their internal curvature and metric. So the problem near a black hole singularity is not due to that. It is due to the curvatures diverging, but in a standard Schwarzschild black hole the divergence is smooth, spherically symmetric and has a fairly simple form. It is only at $r=0$ where things become undefined. | {
"domain": "physics.stackexchange",
"id": 84741,
"tags": "general-relativity, singularities, fractals"
} |
Wouldn't radiolabelled phosphorus in DNA break it apart as it disintegrates? | Question: The Hershey-Chase experiment was designed to prove that DNA is the genetic material in organisms. In this experiment, two batches of viruses were grown in two separate media A and B, with A having an abundance of nutrients involving $\ce{^32_15P}$, and B having a lot of $\ce{^35_16S}$.
Leaving solution B aside, we have $\ce{^32_15P}$ undergo a beta decay:
$$\ce{^32_15P -> ^32_16S + e- + \bar{\nu_e}}$$
This process releases $\pu{1.709 MeV}$ of energy, which is quite sufficient to break apart the phosphodiester linkages.
With all the phosphorus in the viral DNA being radiolabelled, this would very well break apart all the phosphodiester linkages and give us a soup of sulfur-based nucleosides. In such a case, how come this hasn't affected the virus' mechanism to transfer the DNA to a host cell?
Answer: Of course it would break, just like you said; also, a high-energy $\beta$ particle would kill quite a lot of bystander molecules. Also, if for no other reason, the resulting molecule would no longer be DNA, since the decayed atom would no longer be $\rm P$. Also, the product would no longer be radioactive, so we wouldn't be able to detect it anyway.
The same applies to any other case of radiolabeling, in biochemistry or elsewhere, and you seem to be missing the very point of it: All this is not important.
Say, we have some $\ce{A + B -> C + D}$, and we label some atom in $\ce{A}$ so as to find out whether it will go to $\ce{C}$ or to $\ce{D}$. Then we run the reaction as usual. Now, radioactive atoms do not decay all at once; some decay earlier and some later. Those which decay earlier (that is, before and during the reaction) are lost to us, and we don't care about them one bit. Then we separate $\ce{C}$ from $\ce{D}$ and measure the radioactivity of each, and it is those atoms which decay now that matter.
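A back-of-the-envelope check of the timing point (taking the half-life of $\ce{^32P}$ as roughly 14.3 days):

```python
# Fraction of the 32P label that decays within t days: during a short
# experiment the vast majority of labelled atoms survive intact to be
# detected afterwards.
HALF_LIFE_DAYS = 14.3

def fraction_decayed(t_days):
    return 1.0 - 0.5 ** (t_days / HALF_LIFE_DAYS)

print(round(fraction_decayed(1.0), 3))  # ~0.047 lost after one day
```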
Oh, and it is not like we label all atoms in $\ce{A}$. No, a very tiny fraction would suffice. Those viruses would probably get but a few radioactive atoms each. Some will be lucky enough to reproduce. Some will be killed by an early decay and remembered as martyrs of science.
So it goes. | {
"domain": "chemistry.stackexchange",
"id": 8360,
"tags": "biochemistry, energy, radioactivity, dna-rna"
} |
ROS2: Preferred method of making worker threads | Question:
Hi, what is a preferred method of making worker threads in ROS2? If I would want real-time, should I use independently made real-time threads, or is there an API for that?
Originally posted by uthinu on ROS Answers with karma: 118 on 2018-11-12
Post score: 1
Answer:
There's no API for creating threads in ROS 2. The Executor class calls callbacks (if that's what you mean), and if you want a different threading model from SingleThreadedExecutor and MultiThreadedExecutor then you should make your own executor, using one of those two as an example. They are fairly simple:
https://github.com/ros2/rclcpp/blob/33a755c535d654c97401eb6d199580368c19eb40/rclcpp/include/rclcpp/executors/single_threaded_executor.hpp#L38-L63
https://github.com/ros2/rclcpp/blob/33a755c535d654c97401eb6d199580368c19eb40/rclcpp/src/rclcpp/executors/single_threaded_executor.cpp#L21-L39
Then you can create an instance of your custom executor, add nodes to it, and then call spin on it.
Originally posted by William with karma: 17335 on 2018-11-12
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by uthinu on 2018-11-14:
Any help on the code? It is somewhat cryptic for me, I can not see where the thread is created... I would like a fixed node:thread mapping which does not seem unusual to me, think nodes real-time and nodes non real-time. Any plans of adding such an executor?
Comment by gvdhoorn on 2018-11-14:
think nodes real-time and nodes not real-time.
Mixed Real-Time Criticality with ROS2 - the Callback-group-level Executor (slides).
Comment by William on 2018-11-14:
Just add each node to it's own SingleThreadedExecutor, and call spin on each of those executors in their own thread, and you can setup each thread to be a real-time priority thread or not.
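That per-node pattern can be sketched with plain threads (illustrative Python, not the rclcpp API; in practice you would set each thread's scheduling priority through the OS before spinning):

```python
import threading
import queue

# One single-threaded "executor" per node, each spun in its own
# thread, so each thread can be given its own (real-time or not)
# scheduling priority.
class SingleThreadedExecutor:
    def __init__(self):
        self._callbacks = queue.Queue()
        self.results = []

    def add_callback(self, cb):
        self._callbacks.put(cb)

    def spin(self):
        while True:
            cb = self._callbacks.get()
            if cb is None:            # sentinel: shut the executor down
                return
            self.results.append(cb())

executors = [SingleThreadedExecutor() for _ in range(2)]
threads = [threading.Thread(target=e.spin) for e in executors]
for t in threads:
    t.start()
for i, e in enumerate(executors):
    e.add_callback(lambda i=i: i * 10)  # each "node's" work
    e.add_callback(None)
for t in threads:
    t.join()
print([e.results for e in executors])  # [[0], [10]]
```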
Comment by William on 2018-11-14:
The only built-in executor that creates a thread is the MultiThreadedExecutor, see: https://github.com/ros2/rclcpp/blob/33a755c535d654c97401eb6d199580368c19eb40/rclcpp/src/rclcpp/executors/multi_threaded_executor.cpp#L48-L56
Comment by uthinu on 2018-11-14:
And if I would want this thread:node mapping and also reentrant callbacks? Could I spin each MultiThreaderExecutor in its own thread, and the threads created by that executor in its pool would inherit the parent thread's properties?
Comment by William on 2018-11-14:
I don't know, it depends on what priority your operating system gives to new threads. If they are not inherited, then you'd need to make a new MultiThreadedExecutor which modifies the priorities of the threads it creates before using them. But as I said the MultiThreadedExecutor is trivial.
Comment by uthinu on 2018-11-14:
I do not know C++ well... Could you point me to an exact line in MultiThreaderExecutor.cpp where the threads are created?
Comment by William on 2018-11-14:\
I do not know C++ well...
That won't mix well with writing real-time safe code...
I already linked to it, the emplace_back: https://github.com/ros2/rclcpp/blob/33a755c535d654c97401eb6d199580368c19eb40/rclcpp/src/rclcpp/executors/multi_threaded_executor.cpp#L48-L56 | {
"domain": "robotics.stackexchange",
"id": 32035,
"tags": "ros2"
} |
Sort List By Boolean | Question: I have a class called Task that contains a member boolean variable Completed.
I created a list of Task objects in a variable called Tasks.
I have the following code, which sorts the objects depending on whether the Task is completed. (I want the incomplete tasks to be in the list first)
private List<Task> GetSortedTask(List<Task> tasks)
{
List<Task> completedTaskList = new List<Task>();
List<Task> incompleteTaskList = new List<Task>();
for (int i = 0; i < tasks.Count; i++)
{
HelperObject T = task[i].getHelperObject();
if (T != null && T.Completed)
completedTaskList.Add(tasks[i]);
else
incompletetaskList.Add(tasks[i]);
}
//merge the two lists together
imcompleteTaskList.AddRange(completedTaskList);
return incompleteTaskList;
}
Is there a way to do this without creating so many lists? Thanks in advance
EDIT: Made my question a bit more specific since orderBy seems to be the correct answer, however my problem may be a bit more complicated than that.
Answer: I think my original comment in regards to using OrderBy could still possibly apply even with your latest edits (and BTW, that code does not compile; there is no declared task object :)).
How about something like:
var query = from task in tasks
let helperTask = task.getHelperObject()
select new
{
Completed = helperTask != null && helperTask.Completed,
Task = task
};
return query.OrderBy(p => p.Completed).Select(p => p.Task).ToList();
UPDATE:
Another excellent option suggested by svick to further refine and remove the anonymous object selection is something along the lines of:
var query = from task in tasks
let helperTask = task.getHelperObject()
orderby helperTask != null && helperTask.Completed
select task;
return query.ToList();
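As a cross-language sanity check of the idea (Python here just for brevity): false orders before true, and the sort is stable, so incomplete items come first with their relative order preserved.

```python
tasks = [("a", True), ("b", False), ("c", True), ("d", False)]

# Stable sort on the boolean key: False (incomplete) sorts first, and
# ties keep their original relative order.
ordered = sorted(tasks, key=lambda t: t[1])
print(ordered)  # [('b', False), ('d', False), ('a', True), ('c', True)]
```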
I guess if I could I would do away with ToList() altogether and return IEnumerable but that depends on the context of the operation. I also personally like using an intermediate variable here (query) as I think it just helps readability. All to their own on that front though. | {
"domain": "codereview.stackexchange",
"id": 3387,
"tags": "c#, optimization, sorting"
} |
Why is the Sachdev-Ye-Kitaev (SYK) model important? | Question: In the past one or two years, there have been a lot of papers about the Sachdev-Ye-Kitaev (SYK) model, which I think is an example of $\mathrm{AdS}_2/\mathrm{CFT}_1$ correspondence. Why is this model important?
Answer: People hope that it may be an example of AdS/CFT correspondence that can be completely understood.
AdS/CFT correspondence itself has been an incredibly important idea in the hep-th community over the past almost twenty years. Yet it remains a conjecture. In the typical situation, quantities computed on one side of the duality are hard to check on the other. One is computing in a weakly coupled field theory to learn about some ill defined quantum gravity or string theory. Alternatively, one is computing in classical gravity to learn about some strongly interacting field theory where the standard tool box is not particularly useful.
The original hope was that SYK (which is effectively a quantum mechanical model) might have a classical dilaton-gravity dual description in an AdS$_2$ background. That hope seems to have faded among other reasons because the spectrum of operator dimensions does not seem to match (see e.g. p 52 of this paper). Yet, there still might be a "quantum gravity" dual, for example a string theory in AdS$_2$. String theories in certain special backgrounds have been straightforwardly analyzed. | {
"domain": "physics.stackexchange",
"id": 45067,
"tags": "quantum-field-theory, string-theory, conformal-field-theory, ads-cft, holographic-principle"
} |
Would it be possible to determine the dataset a neural network was trained on? | Question: Let's say we have a neural network that was trained with a dataset $D$ to solve some task. Would it be possible to "reverse-engineer" this neural network and get a vague idea of the dataset $D$ it was trained on?
Answer: You can already do this with some neural networks, such as GANs and VAEs, which are generative models that learn a probability distribution over the inputs, so they learn how to produce e.g. images that are similar to the images they were trained with.
Now, if you're interested in whether there is a black-box method, i.e. a method that, for every possible neural network, would tell you the dataset a neural network was trained with, that seems to be a harder task and definitely an ill-posed problem, but I suspect that people working on adversarial machine learning have already attempted or will attempt to do something similar. | {
"domain": "ai.stackexchange",
"id": 2640,
"tags": "neural-networks, machine-learning, datasets, generative-model"
} |
How to design a neural network when the number of inputs is variable? | Question: I'm looking to design a neural network that can predict which runner wins in a sports game, where the number of runners varies between 2-10. In each case, specific data about the individual runners (for example, the weight, height, average speed in previous races, nationality, etc) would be fed into the neural network.
What design would be most advantageous for such a neural network?
Essentially this is a ranking problem where the number of inputs and outputs are variable.
Answer: The best option in your case would probably be zero-padding or padding up. This is simply zeroing out inputs for cases in which there is no data. It's done a lot on the borders of images for CNNs.
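A minimal zero-padding sketch for the race setup above (the feature count is made up):

```python
import numpy as np

# Pad each race's runner-feature matrix up to the maximum field size
# (10 runners) so the network always receives a fixed-length input.
MAX_RUNNERS, N_FEATURES = 10, 4

def pad_race(runners):
    x = np.zeros((MAX_RUNNERS, N_FEATURES))
    x[: len(runners)] = runners   # real runners first, zeros after
    return x.ravel()

race = np.ones((3, N_FEATURES))   # a 3-runner race
print(pad_race(race).shape)  # (40,)
```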
Alternatively, you could just use an RNN, which can handle your variable-length inputs with ease. | {
"domain": "ai.stackexchange",
"id": 3129,
"tags": "neural-networks, recurrent-neural-networks, prediction, network-design"
} |
Why are neutrino and antineutrino cross sections different? | Question: Particularly in the case of Majorana neutrinos, it seems a little odd that the particle and antiparticle would have differing cross sections.
Perhaps the answer is in here, but I've missed it:
http://pdg.lbl.gov/2013/reviews/rpp2013-rev-nu-cross-sections.pdf
In the caption of figure 48.1 of the PDG excerpt linked above, it says "Neutrino cross sections are typically twice as large as their corresponding antineutrino counterparts, although this difference can be larger at lower energies."
Is it common for particles to have different cross sections from their corresponding antiparticle?
Is there a reason for this difference? Can we theoretically predict the magnitude of this difference?
Answer: It is "easy" to show that the cross-section of the 2 reactions:
$$ (1)~~~\nu_{\mu} + d \to \mu^- + u~~~~~~(2)~~~\bar{\nu}_{\mu} + u \to \mu^+ + d$$
are different. The naive calculation gives a factor $\sigma_1/\sigma_2 = 3$ as shown below. To explain why the figure in the Particle Data Group review actually shows close to a factor 2, one needs to take the structure functions of the nucleon into account.
So let's check first the factor 3 and explain why:
Neglecting all masses of fermions, thus assuming an energy large enough but still negligible with respect to the $W$ bosons, the amplitudes of the 2 processes are:
$$\mathcal{M}_1=\frac{G_F}{\sqrt{2}}[\bar{u}_u \gamma^{\mu}(1-\gamma^5)u_d][\bar{u}_{\mu} \gamma_{\mu} (1-\gamma^5)u_{\nu}]$$
$$\mathcal{M}_2=\frac{G_F}{\sqrt{2}}[\bar{u}_d \gamma^{\mu}(1-\gamma^5)u_u][\bar{v}_{\nu} \gamma_{\mu} (1-\gamma^5)v_{\mu}]$$
where $G_F$ is the Fermi coupling constant and $u$, $v$ the spinors for particles and anti-particles. The Cabibbo angle has been neglected ($\cos \theta_c =1$). The difference between the 2 amplitudes is related to the presence of anti-spinors in $\mathcal{M}_2$ instead of spinors as in $\mathcal{M}_1$. First notice that for anti-spinors, the incoming anti-particle appears on the left-hand side of the $\gamma^\mu$ matrix while for spinors, the incoming particle is on the right-hand side. This will be the source of the $u$ Mandelstam variable appearing after the squaring/averaging of the amplitude $\mathcal{M}_2$ below. Second, and this is the main point, $(1-\gamma^5)$ is a chirality projector (modulo a factor 2). It is this $(1-\gamma^5)$ matrix that is responsible for the chiral nature of the weak interaction. When applied to a spinor, it selects chiral left-handed particles, while when applied to an anti-spinor, it selects chiral right-handed anti-particles.
We'll see the consequences few lines below.
Squaring the amplitudes, averaging over the spin of the initial quark and summing over the spins of the outgoing quark and lepton yields:
$$|\overline{\mathcal{M}}_1|^2 = 16 G_F^2 s^2$$
$$|\overline{\mathcal{M}}_2|^2 = 16 G_F^2 u^2$$
where $s$ and $u$ are the 2 Mandelstam variables. (Denoting by $p_1,p_2$ the 4 momenta of the 2 particles in the initial state and $p_3,p_4$ the ones of the 2 particles in the final states, we have $s=(p_1+p_2)^2$ and $u=(p_1-p_4)^2$). Using a well known formula for the differential cross-section $\frac{d\sigma}{d\Omega}=\frac{1}{64\pi^2 s} |\overline{\mathcal{M}}|^2$ (valid in our massless approximation, $\Omega$ being the solid angle in the center of mass frame) gives:
$$\frac{d\sigma_1}{d\Omega}= \frac{G^2_F}{4\pi^2}s$$
$$\frac{d\sigma_2}{d\Omega}= \frac{G^2_F}{4\pi^2}\frac{u^2}{s} = \frac{G^2_F}{16\pi^2}s(1+\cos\theta)^2 $$ with $\theta$ the angle in the center of mass frame between the $\bar{\nu}_{\mu}$ and the $\mu^+$.
At this stage we can better appreciate the chiral structure of the weak interaction. Indeed, the weak interaction (via charged currents, i.e. $W$ boson exchange) involves only the left-handed chirality of particles and the right-handed chirality of anti-particles. In the massless approximation, chirality is equivalent to helicity, the projection of the spin along the momentum direction. You can notice that for $\theta=\pi$, $\frac{d\sigma_2}{d\Omega}=0$. This is related to the chiral structure. Indeed the $\bar{\nu}_{\mu}$ must be right-handed and the $u$ quark left-handed. Thus in the center of mass frame the spins of these 2 particles point in the same direction (since the particles are back-to-back), giving a projection on the $z$ axis of $s_z=1$ (or $-1$ depending on your choice). By conservation of angular momentum, the final state must also have $s_z = 1$. But an angle $\theta=\pi$ means that the $\mu^+$ goes in the direction opposite to the $\bar{\nu}_{\mu}$. Since it has the same $s_z$, it must be left-handed (since the $\bar{\nu}_{\mu}$ was right-handed). But the weak interaction only involves the right-handed component of the anti-particle $\mu^+$, so this configuration is impossible, explaining the null cross-section at this angle!
We clearly see the difference between the 2 reactions at this step. We can go a bit further and integrate over the solid angle, giving:
$$\sigma_1 = \frac{G^2_F}{\pi} s$$
$$\sigma_2 = \frac{G^2_F}{3\pi} s$$
We thus have the announced factor 3 for the ratio of cross-section at quark level. Moving from quark to nucleon is a bit complicated and I give here only the result assuming a target made of as many protons as neutrons:
$$\sigma_{\nu N} = \frac{G^2_F}{\pi}\frac{s}{2}\int_0^1 x(q(x)+\frac{\bar{q}(x)}{3})dx$$
$$\sigma_{\bar{\nu} N} = \frac{G^2_F}{\pi}\frac{s}{2}\int_0^1 x(\bar{q}(x)+\frac{q(x)}{3})dx$$
the functions $q(x)$ and $\bar{q}(x)$ being the Parton Distribution Functions (PDFs) of the quarks and anti-quarks. You do have to take into account the quarks and anti-quarks coming from the sea (quantum fluctuations) inside the nucleon. The PDFs have to be measured. It is known (measured) that about half of the proton momentum is carried by quarks (the other half by gluons), meaning that $\int_0^1 x(q(x)+\bar{q}(x)) dx= 0.5$. The individual contributions of quarks and antiquarks to the proton momentum are about:
$$\int_0^1 x q(x) dx = 42\%~~~~~ \int_0^1 x \bar{q}(x) dx = 9\%$$
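Plugging these numbers into the two expressions for the cross-sections gives the ratio directly (a quick numeric sketch; the common prefactor $\frac{G_F^2}{\pi}\frac{s}{2}$ cancels in the ratio):

```python
q, qbar = 0.42, 0.09  # measured momentum fractions carried by quarks / antiquarks

sigma_nu = q + qbar / 3      # proportional to sigma(nu N)
sigma_nubar = qbar + q / 3   # proportional to sigma(nubar N)
ratio = sigma_nu / sigma_nubar
print(round(ratio, 2))       # -> 1.96, i.e. "pretty close to 2"
```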
The result of the integrals above is such that the ratio of the 2 cross-sections is pretty close to 2 (instead of 3 at the quark level). | {
"domain": "physics.stackexchange",
"id": 23068,
"tags": "particle-physics, scattering, neutrinos, antimatter, scattering-cross-section"
} |
Check if a JavaScript variable exists and checking its value | Question: I feel like this bit of code could be condensed or made better in some way. It may in fact not be, but figured I'd get some people to have a look. I have multiple pages on my site and in certain pages I'm setting a JavaScript variable:
var header_check = "user-profile";
Some pages I'm not setting that variable. In another JavaScript file that gets loaded on the page I check if that variable exists and perform various actions if it does.
Is this the best way to check if the variable exists? Also, is this the best way to see if user-profile is set?
var header_cookie = typeof header_check !== 'undefined' ? 'user-profile' : 'admin-profile';
var cookie_check = header_cookie == 'user-profile' ? true : false;
var city_profile = cookie_check ? 'userCity' : 'city';
var state_profile = cookie_check ? 'userState' : 'state';
Answer: First off, your code really breaks down to this logic and I find it helpful to write it out the longer version to full understand the logic flow before trying to shorten it:
var header_cookie, cookie_check, city_profile, state_profile;
if (typeof header_check !== "undefined") {
header_cookie = 'user_profile';
cookie_check = true;
city_profile = 'userCity';
state_profile = 'userState';
} else {
header_cookie = 'admin_profile';
cookie_check = false;
city_profile = 'city';
state_profile = 'state';
}
FYI, I also find it a LOT easier to follow what's actually happening in this version than in the code you have. It also saves several comparisons on the cookie_check value.
There are some ways to shorten this, but it's not entirely clear that any are "better" where the definition of better includes readability by someone who has never seen this code before, but you can decide what you think of that issue for the alternatives:
Since you really only have two states, you could predefine each state and then just pick which one to use and access the properties off a single state object:
var userState = {
header_cookie: 'user_profile', city_profile: 'userCity', state_profile: 'userState'
};
var adminState = {
header_cookie: 'admin_profile', city_profile: 'city', state_profile: 'state'
};
var state = typeof header_check !== "undefined" ? userState: adminState;
Done this way, you'd access state.header_cookie, state.city_profile and state.state_profile rather than your standalone variables.
Or, if you wanted to keep the individual variables, you could do this:
var states = {
header_cookie: ['user_profile', 'admin_profile'],
city_profile: ['userCity', 'city'],
state_profile: ['userState', 'state']
};
var stateIndex = typeof header_check !== "undefined" ? 0 : 1;
var header_cookie = states.header_cookie[stateIndex];
var city_profile = states.city_profile[stateIndex];
var state_profile = states.state_profile[stateIndex]; | {
"domain": "codereview.stackexchange",
"id": 5951,
"tags": "javascript, jquery"
} |
Hokuyo UST-10LX connected via Ethernet and urg_node | Question:
Hello,
TL;DR
When I don't use urg_node with Hokuyo UST-10LX, I can see ROS2 topics from the robot on the other PC, after using urg_node with the Hokuyo laser scanner - ros2 topic list doesn't have them, ssh doesn't work either, I don't know why?
My configuration:
ROS 2 Galactic, Ubuntu 20.04 on each PC and an Intel NUC on the TurtleBot 2
Everything is on the local network with addresses in 192.168.5.xxx, i.e. TB2 IP is 192.168.5.55, the PC IP is 192.168.5.100 etc. Gateway's IP: 192.168.5.1.
TB2 has Hokuyo UST-20LX connected via Ethernet cable to Intel NUC, NUC has a WiFi internet access; default IP of the Hokuyo sensor is 192.168.0.10 (important later).
To see other ROS topics on the local network on my PC, that are published by nodes running on TB2 or other computer, I need to add this route on the PC and similar on the TB2:
sudo route add -net 192.168.5.0 gw 192.168.5.1 netmask 255.255.255.0 dev wlp7s0
I was able to use ros2 run demo_nodes_py talker on the PC and see it on the TB2 (ros2 run demo_nodes_py listener), same with analogous situation - talker on the TB2 and listener on the PC. It seems that everything works as expected.
Everything broke down when I added the Hokuyo connected via Ethernet cable - I couldn't see (ros2 topic list) any other topics or send/listen to any messages.
When I tried to change Hokuyo's IP from the default 192.168.0.10 to the 192.168.5.65 and restarted the whole robot and it's NUC, it seemingly broke the Internet connection - I couldn't ssh to the TB2 or see the robot's ROS topics in the network on the PC.
I found some other questions which seems to be related to this problem, but I can't find a solution:
https://answers.ros.org/question/297291/error-connecting-to-hokuyo-could-not-open-network-hokuyo/
https://github.com/ros-drivers/urg_node/issues/79
https://www.finnrietz.dev/linux/hokuyo-ros-setup/
https://answers.ros.org/question/211508/etcnetworkinterfaces-configuration-for-urg-node/
How can I use the TB2 with Hokuyo LiDAR connected via Ethernet, with TB2 accessing the Internet via WiFi (so I could ssh to it) and with ROS2 DDS working as it should (e.g. topics from TB2 are visible from another PC on the same network)?
Originally posted by ljaniec on ROS Answers with karma: 3064 on 2022-07-21
Post score: 1
Original comments
Comment by igrak34 on 2022-07-22:
I encountered the same problem, so if someone knows how to fix it, please share.
Answer:
With the help of my friend I got it to work; I will try to describe the changes step by step:
I don't need to use any route add - I deleted every added route with sudo route del ...
Wired connection on the TB2 with Hokuyo LiDAR - settings: IP 192.168.0.15 with netmask 255.255.255.0 without gateway
To get DDS working I had to follow this setup guide, preparing an XML settings file called cyclonedds.xml (I changed wifi0 to wlp58s0; check yours with ifconfig):
<?xml version="1.0" encoding="UTF-8"?>
<CycloneDDS xmlns="https://cdds.io/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://cdds.io/config
https://raw.githubusercontent.com/eclipse-cyclonedds/cyclonedds/master/etc/cyclonedds.xsd">
<Domain id="any">
<General>
<NetworkInterfaceAddress>wifi0</NetworkInterfaceAddress>
</General>
</Domain>
</CycloneDDS>
and then put export CYCLONEDDS_URI=file://$PWD/cyclonedds.xml in the ~/.bashrc
I could then launch my nodes on the TB2 with plugged-in Hokuyo, with ROS2 topics visible on the other PC.
Originally posted by ljaniec with karma: 3064 on 2022-07-22
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 37866,
"tags": "ros2, wifi, turtlebot, urg-node, hokuyo"
} |
Round structure in southern United States | Question: Assuming there might be someone here who knows something about this, I wanted to ask what is behind this round structure I spotted today on Google Earth:
There seems to be a large (~200 km), nearly perfect half-circle covering the states of Alabama, Mississippi and Tennessee. How could this regular structure possibly originate? I did some research on the web, but could not find anything. It looks a bit like an impact crater, but there is none listed in this location and especially of this large size. So how did this structure emerge?
Answer: This is a sedimentary sequence representing the shoreline of a Cretaceous-Paleogene inland sea, the Western Interior Seaway. You can look at the sequence of sediments laid down in the USGS Geological Map of North America. I recommend downloading the Southern Sheet in high resolution and the Explanation Sheet to explain what's going on.
The land use pattern as seen by other people answering this question is actually putting the effect in front of the cause; due to the nature of these sediments being a positive influence on the fertility of the land, it is more likely to be used for farming. | {
"domain": "earthscience.stackexchange",
"id": 2784,
"tags": "geology, geography, satellite-oddities"
} |
How can I find the Green's function of this modified Poisson equation? | Question: I am looking for the Green's function necessary to solve the following PDE:
$$
\nabla^2\Phi-\epsilon M\Phi=M,
$$
where $\Phi,M$ are scalar fields in 3D, $\nabla^2$ is the 3D Laplace operator and $\epsilon>0$ is a positive parameter which, if necessary, can be taken to be small. Boundary conditions can be specified to be that $\nabla\Phi$ vanishes at infinity.
What confuses me is that the source is coupled to the field itself, which makes it not clear at all on how I should go about finding a Green's function.
Many thanks.
EDIT: $\Phi$ and $M$ could be taken to be spherically symmetric if necessary
Answer: You have to view Green's functions as inverse operators. In your case, you want $\Phi$, so the operator you wish to invert is the one acting on $\Phi$ on the left-hand side:
$$
\Delta-\epsilon M
$$
so the Green's function you are looking for is:
$$
(\Delta_x-\epsilon M(x))G(x,y)=\delta(x-y)
$$
Note that since $M$ breaks the translation invariance, $G$ takes two spatial arguments, and does not depend only on the difference. $\Phi$ can therefore be expressed as:
$$
\Phi(x) = \int d^Dy G(x,y)j(y)
$$
so that:
$$
(\Delta-\epsilon M)\Phi = j
$$
in particular, you can take $j=M$.
Since you are assuming that $\epsilon$ is a small parameter, you can write $G$ as a perturbative series in $\epsilon$. This is the Born approximation if you truncate at leading order and interpret $M$ as a potential energy and $\Phi$ a first quantisation wave function.
Formally, using the unperturbed case:
$$
\begin{align}
G_0 &= \Delta^{-1} \\
&= -\frac{1}{4\pi |x-y|}
\end{align}
$$
you would write using linear algebra:
$$
\begin{align}
G &= \frac{1}{\Delta-\epsilon M} \\
&= \frac{1}{\Delta}\frac{1}{1-\epsilon M\Delta^{-1}} \\
&= \frac{1}{1-\epsilon \Delta^{-1}M}\frac{1}{\Delta} \\
&= (1+\epsilon \Delta^{-1}M)\frac{1}{\Delta} \\
&= G_0 +\epsilon G_0 MG_0 \\
G(x,y) &= G_0(x,y)+\epsilon\int d^D z G_0(x,z)M(z)G_0(z,y) \\
&= -\frac{1}{4\pi |x-y|}+\frac{\epsilon}{(4\pi)^2}\int d^D z \frac{M(z)}{|x-z||y-z|}
\end{align}
$$
with the associated diagrammatic representation.
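The operator expansion above can be sanity-checked with finite matrices standing in for the operators (a sketch; $A$ and $M$ below are random placeholders, not discretized differential operators):

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 5, 1e-4
A = 10 * np.eye(n) + rng.standard_normal((n, n))  # invertible stand-in for the Laplacian
M = np.diag(rng.standard_normal(n))               # stand-in for multiplication by M(x)

G_exact = np.linalg.inv(A - eps * M)              # the full inverse (Delta - eps M)^{-1}
G0 = np.linalg.inv(A)
G_pert = G0 + eps * G0 @ M @ G0                   # first-order expansion G0 + eps G0 M G0

residual = np.linalg.norm(G_exact - G_pert)       # should shrink like eps^2
print(residual < 1e-6)
```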
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 95691,
"tags": "differential-equations, greens-functions"
} |
Wrong application of optical theorem | Question: I am trying to use the optical theorem* (given in box 24.2 in Quantum Field Theory and the Standard Model, M. Schwartz). I am trying to calculate the imaginary part of this diagram for the scalar field:
But I am getting an extra factor of 2 in the calculation. I have calculated the imaginary part using complex analysis, and also using the Cutkosky cutting rules. I have also checked some papers. I am quite sure that I am getting an extra factor of 2 when I calculate using the optical theorem. I suspect that I might have misunderstood the formula (24.2).
Can anyone please verify my understanding of the formula?
Here it goes
( m is the mass of both the particles, s is a Mandelstam variable, $\lambda$ is the coupling constant )
:
$E_{CM}$ stands for the sum of energies of the two incident particles in the Center of Momentum frame. It should be equal to $\sqrt{s}$.
$|\vec{p_i}|$ stands for the magnitude of the three-momentum of either one of the incident particles. It comes out to be $\sqrt{\frac{s}{4} -m^2}$.
$\Sigma_x \sigma(A->X)$ is the sum of the scattering cross sections of all the possible diagrams at order $\lambda$. In this case there is just one term to be summed over, and it is $(4\pi) \frac{\lambda^2}{64\pi^2s}$. Here $4\pi$ comes from the integration over the total solid angle. The remaining factor is just the differential cross section $\frac{d\sigma}{d\Omega}$ for scattering at order $\lambda$.
The total answer that I get is $\frac{\lambda^2}{8\pi} \sqrt{\frac{1}{4} - \frac{m^2}{s}}$.
I have spent quite some time on this but still don't understand what I am doing wrong here. By every other method, I am getting an answer half this value. I can show more calculations and more arguments if anyone asks for them. Thanks.
Update:
*: Here is the formula: $Im M(A->A) = 2E_{CM}|\vec{p_i}|\Sigma_x \sigma(A->X)$
Answer: It turns out that I was calculating the cross section incorrectly. $\Sigma_x \sigma(A->X)$ should come out to be $(2\pi) \frac{\lambda^2}{64\pi^2s}$ because I have to divide the phase space by 2 (the output particles are identical). | {
"domain": "physics.stackexchange",
"id": 73039,
"tags": "homework-and-exercises, quantum-field-theory, scattering-cross-section"
} |
How fast must Nadia travel so that she is the same biological age as her twin upon returning to Earth? | Question:
Two twins, Nadia and Aidan, decide to have an adventure when they turn 21. Aidan chooses to travel to a distant star 10 light years away at a speed of 0.8c. Nadia decides to travel to a closer star, which is 8 light years away. How fast must Nadia travel to and from the closer star so that she is the same biological age as Aidan once they both return to Earth?
My attempt:
In Aidan's frame of reference, the perceived distance of 10 light years is contracted:
$L'= L/\gamma$, where $L$ = proper length and $\gamma = 1/\sqrt{1-v^2/c^2}$
So $L' = 10 \times \sqrt{1-0.64} = 6$. The total distance Aidan travels is 12 light years. So when he returns to Earth, he will be $12/0.8 = 15$ years older.
For Nadia, the total distance she travels is:
$2 \times 8 \times \sqrt{1-v^2/c^2} = 16 \times \sqrt{1-v^2/c^2}$
In order to return to earth 15 years older, her speed has to be:
$16 \times \sqrt{1-v^2/c^2}/v = 15$
Solving for v gives $16c/\sqrt{225c^2-256}$
Is this the correct approach to solving the problem?
Answer:
Is this the correct approach to solving the problem?
The result you give for $v$ can't be correct since, within the parenthesis, you're subtracting a number from a speed squared.
There's a much cleaner approach to the problem that does not require length contraction, time dilation, or the Lorentz factor.
The best and most correct approach, in my opinion, is to use the invariant interval.
Assuming that, for either Aidan or Nadia, the outbound and inbound speed are identical, the proper time for either is simply twice the proper time for the outbound trip.
Thus, Aidan ages (we use units where $c = 1$)
$$\Delta \tau_A = 2 \sqrt{(\Delta t_A)^2 - (\Delta x_A)^2} = 2\sqrt{\left(\frac{10}{0.8}\right)^2 - 10^2}$$
and Nadia ages
$$\Delta \tau_N = 2 \sqrt{(\Delta t_N)^2 - (\Delta x_N)^2} = 2\sqrt{\left(\frac{8}{v_N}\right)^2 - 8^2}$$
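For a concrete number, one can equate these two proper times and solve for $v_N$ numerically; a quick sketch (distances in light years, $c = 1$):

```python
import math

tau_A = 2 * math.sqrt((10 / 0.8) ** 2 - 10 ** 2)  # Aidan's proper time: 15 years
# impose 2*sqrt((8/v)**2 - 8**2) == tau_A and solve for v:
v_N = 8 / math.sqrt((tau_A / 2) ** 2 + 8 ** 2)
print(tau_A, round(v_N, 3))  # 15.0 0.73 -- i.e. v_N is about 0.73 c
```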
Setting these equal so that both age the same amount of time yields a straightforward equation for Nadia's speed $v_N$ | {
"domain": "physics.stackexchange",
"id": 16307,
"tags": "homework-and-exercises, special-relativity, time-dilation, length-contraction"
} |
Distortion of light during narrowing of eyes as a consequence of Uncertainty principle | Question: I had recently watched Prof. Walter Lewin's lecture on the Heisenberg Uncertainty principle, where he conducts an experiment where light enters a slit, and as the slit was made narrower the light spread out. The explanation given for this was that when the slit was made narrower the uncertainty in position decreased in the x direction, which would consequently increase the uncertainty of momentum in the x direction.
Hence, the light was found to broaden.
Is this the similar reason why light spreads out when we narrow our eyes?
Answer: No, we are not able to narrow our eyes anywhere near that much. What you are describing, light spreading out when we narrow our eyes, is probably due to diffraction, or to the fact that narrowing our eyes deforms our lenses. | {
"domain": "physics.stackexchange",
"id": 41489,
"tags": "quantum-mechanics, heisenberg-uncertainty-principle"
} |
How can I find the documented T1 time of a retired IBM system? | Question: I need to find the published mean T1 time for the 'ibm_oslo' system that I used in an experiment a while back. Unfortunately, that system has now been retired and there's seemingly no way for me to access the system properties window that I can access for all the active systems.
I have tried to run on IBM Quantum Labs/Qiskit some code along the lines of:
backend = provider.get_backend('ibm_oslo')
system.properties().backend_version
system.properties().last_update_date
system.properties().qubits
but to no avail. The error is something about this backend cannot be reached... Is there any way to access the system properties of a retired IBM system?
Thanks :)
Answer: You can look at system properties snapshots of IBM Quantum systems in qiskit-terra either by 1) browsing the property snapshot files in https://github.com/Qiskit/qiskit-terra/tree/main/qiskit/providers/fake_provider/backends or 2) retrieving the properties using the qiskit.providers.fake_provider module.
1. Browsing property snapshot file.
ibm_oslo property file can be found here: https://github.com/Qiskit/qiskit-terra/blob/main/qiskit/providers/fake_provider/backends/oslo/props_oslo.json
qubit 0 has T1 of 148.84747328441438 us and so on
2. Retrieve using fake backends in qiskit.providers.fake_provider module
from qiskit.providers.fake_provider import FakeOslo
backend = FakeOslo()
backend.qubit_properties(0)
outputs
QubitProperties(t1=0.00014884747328441437, t2=7.369298024352508e-05, frequency=4925043161.619499)
You can pass a list or range to get multiple qubit properties.
backend.qubit_properties(range(7))
outputs
[QubitProperties(t1=0.00014884747328441437, t2=7.369298024352508e-05, frequency=4925043161.619499),
QubitProperties(t1=0.00013706651413337355, t2=3.7020155523694795e-05, frequency=5046272849.03205),
QubitProperties(t1=0.00021921703514896735, t2=4.669878652698784e-05, frequency=4961998490.6786995),
QubitProperties(t1=0.00012128335234199257, t2=4.648766010485143e-05, frequency=5108098767.473062),
QubitProperties(t1=0.00018813842702629488, t2=0.0001843690321834765, frequency=5011074105.950458),
QubitProperties(t1=0.00014217817897014813, t2=4.113094933306526e-05, frequency=5173290077.743308),
QubitProperties(t1=0.00010304578798951729, t2=0.00020846324125703032, frequency=5319311606.468152)]
If you want to get only the list of T1, use python list comprehension:
t1s = [prop.t1 for prop in backend.qubit_properties(range(7))]
print(t1s)
outputs
[0.00014884747328441437, 0.00013706651413337355, 0.00021921703514896735, 0.00012128335234199257, 0.00018813842702629488, 0.00014217817897014813, 0.00010304578798951729] | {
"domain": "quantumcomputing.stackexchange",
"id": 4877,
"tags": "qiskit, ibm-quantum-devices"
} |
Does Z gate swap complex amplitudes of $|0\rangle$ and $|1\rangle$? | Question: I am reading Quantum Computing 1st Edition By Parag Lala, this book says
It seemed that the Z gate swapped the complex amplitudes $\alpha$ and $\beta$.
Can Z gate implement that, or are there any errata? Because
$$
\begin{pmatrix} \alpha \\ -\beta \end{pmatrix} = \alpha\begin{pmatrix} 1 \\ 0 \end{pmatrix} - \beta\begin{pmatrix} 0 \\ 1 \end{pmatrix} \neq \alpha\begin{pmatrix} 0 \\ 1 \end{pmatrix} + \beta\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \alpha|1\rangle + \beta|0\rangle
$$
And, is it true that Z Gate merely add $\pi$ to the relative phase $\phi$ of a superposition $|q\rangle$?
$$
|q\rangle = \alpha|0\rangle + e^{i\phi}\beta|1\rangle
$$
$$
Z|q\rangle = \alpha|0\rangle + e^{i(\phi+\pi)}\beta|1\rangle
$$
Answer: This is wrong for sure.
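A quick numeric check with numpy confirms it: $Z$ does not swap the amplitudes, it only flips the sign of the $|1\rangle$ amplitude, which is the same as adding a relative phase of $\pi$:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]])
alpha, beta = 0.6, 0.8                       # arbitrary real amplitudes for clarity
state = alpha * np.array([1, 0]) + beta * np.array([0, 1])

out = Z @ state
print(out)  # the amplitudes are NOT swapped; only the |1> amplitude's sign flips
# and -1 = exp(i*pi), so Z indeed adds a relative phase of pi:
assert np.isclose(np.exp(1j * np.pi), -1)
```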
And according to the book reviews on Amazon, this book is "unreliable",
"riddled with errors", and "someone studying for the first time will get confused" | {
"domain": "quantumcomputing.stackexchange",
"id": 2763,
"tags": "quantum-gate, textbook-and-exercises"
} |
How do I solve this truss to get T4 and T5 force I got T3 | Question:
It seems to me there are too many unknowns to solve further
Answer: Yes, it's a determinate structure and you can solve for T4 and T5 using statics.
Method of Joints
One approach is to use Method of Joints to progressively solve for the member forces, and it looks like this is the approach you've chosen. With two equations of equilibrium ($\sum F_x = 0$, $\sum F_y = 0$) we can solve for two unknown member forces at any node. From your calculations, this is what you've arrived at for the node where $F_2$ is applied (Node C, below). Everything is known except T4 and T5. Sometimes it is necessary to use a system of equations to achieve a solution, and that is the case here.
Below is a sketch of the truss with named nodes for ease of reference. I kept A and B as you had designated. Lengths of the angled members are noted in parentheses.
The steps followed in your calculations are appropriate for the Method of Joints:
Determine external support reactions
Solve Node B
Solve Node G
Solve Node C (where the difficulty arose)
Looking at Node C:
$F_2$ is known from the problem statement. $F_{CB}$ and $F_{CG}$ were solved for in Steps 2 and 3, respectively, leaving us to solve for $F_{CD}$ and $F_{CA}$.
We use the known geometry to break each member force into x and y components, and then solve our two equations of equilibrium, $\sum F_x = 0$, $\sum F_y = 0$. Both equations will have $F_{CD}$ and $F_{CA}$ as unknowns. Solve simultaneously and, voila.
Side note: I find it convenient to use the triangle geometry of each member (i.e. the member is the hypotenuse of a right triangle) along with SOH-CAH-TOA to solve for the X and Y components of any member force. It's often simpler than dealing with the angles themselves. As an example, the vertical component of the force in Member AC can be calculated as $F_{AC} \frac{2.57}{3.95}$.
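To illustrate solving the two joint equations simultaneously, here is a quick numpy sketch. The direction cosines and loads below are made-up placeholders (only the 2.57/3.95 triangle echoes the geometry mentioned above); they are not values read off the actual figure:

```python
import numpy as np

# Two equilibrium equations, two unknown member forces (T4, T5) at the joint:
#   sum Fx = 0:  c4x*T4 + c5x*T5 + Px = 0
#   sum Fy = 0:  c4y*T4 + c5y*T5 + Py = 0
c4 = (1.0, 0.0)                   # hypothetical: member 4 is horizontal
c5 = (3.0 / 3.95, 2.57 / 3.95)    # member 5 on a 3 x 2.57 right triangle (SOH-CAH-TOA style)
Px, Py = -2.0, -5.0               # hypothetical resultant of the known forces (kN)

A = np.array([[c4[0], c5[0]],
              [c4[1], c5[1]]])
b = np.array([-Px, -Py])
T4, T5 = np.linalg.solve(A, b)
print(round(T4, 2), round(T5, 2))  # member forces for these made-up numbers
```

The same two-equation solve applies at Node C once the real direction cosines and known forces are filled in.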
Method of Sections
Another option is to make a vertical section cut through T5, T4, and T3 and use Method of Sections to solve for those element forces. For example, take the portion of the structure from the section cut to the left. We can sum moments about Support A to determine T5 ($F_{DC}$), because T4 and T3 will produce no moment about A. Sum of the forces in X and Y directions get you the rest of the way.
Solving the Remainder of the Truss
Should you want or need to solve for all the member forces, you can continue with Method of Joints. By inspection we can say the only way equilibrium can be satisfied at Node F is for the forces in both members to be zero. Similarly, with no external force applied at Node D, we know that $F_{DC} = F_{DE}$ and thus $F_{DA} = 0$. | {
"domain": "engineering.stackexchange",
"id": 3346,
"tags": "statics"
} |
Radio wave speed in air | Question: I couldn't Google a credible answer.
What is accepted constant in applied physics to estimate radio wave speed in earth atmosphere near water surface?
Taking on account humidity inside few meters off water surface.
Disregarding ionisation clouds and all other high altitude effects
Answer: The figure I was looking for is about 0.9997c
according to https://www.tau.ac.il/~tsirel/dump/Static/knowino.org/wiki/Electromagnetic_wave.html | {
"domain": "physics.stackexchange",
"id": 68409,
"tags": "electromagnetic-radiation, applied-physics"
} |
"Simple" physics problem without the solution | Question: it's my first question here, i'm a french student in "classe préparatoire scientifique" and I find a problem on the net witch is :
A slingshot consists of a rubber band that extends 3 cm when you pull with a force of 10 N. I throw a 20 g pebble after having stretched the elastic by 20 cm. How fast will the stone be launched? Give the answer in kilometers per hour, rounded to the unit. We neglect the inertia of the slingshot.
Well, I'm ashamed of myself, but this problem got me angry ^^
Here is my reasoning:
So I call $\vec{F}$ the force exerted by the elastic on the pebble and $\vec{P}$ its weight, so by Newton's law I have :
$$
\sum \vec{F_{ext}} = ma
$$
Where $a$ is the acceleration
Then in the coordinate system $(O,\vec{u_x},\vec{u_y})$, with $\vec{u_y}$ along the direction of $\vec{P}$,
$$
\vec{F}+\vec{P}=m(\vec{a_x}+\vec{a_y})
$$
becomes $$F=ma_x$$
Then I have $$a_x=\frac{F}{m} = 10^4 m.s^{-2}$$
Well, now I don't know how to find the speed, because if I integrate I will get the speed as a function of time but not of distance: at $t=0$ the pebble has no speed, and here we need the speed once it has traveled 10 cm (because the rubber band is held by its two parts, I think)
Can you help me please ?
Thank you very much by advance and sorry if I made some english mistakes :)
Answer: I'm going to assume that you stretch the rubber band horizontally, and that the pebble doesn't start falling until after it's left the slingshot - that way, we don't have to worry about gravitational potential energy. Assumption number 2 will be that the pebble leaves the slingshot at $x=0$.
Conservation of energy tells you that the initial total energy $E_{tot,i}$ (slingshot cocked) must be equal to the final total energy $E_{tot,f}$ (pebble leaves the slingshot). We therefore need to consider the kinetic energy $E_{kin}$ of the pebble, and the potential energy $E_{pot}$ stored in the rubber band.
As you know:
$$E_{kin}=\dfrac{1}{2}m_{pebble}v^2$$
$$E_{pot}=\dfrac{1}{2}k_{rubber}x^2$$
The spring constant $k$ can be computed as:
$$k=\dfrac{\Delta F}{\Delta x}=\dfrac{10\ N}{3\ cm}\simeq 333.3\ N\cdot m^{-1}$$
Let's now compute $E_{tot,i}$ and $E_{tot,f}$:
$$E_{tot,i}=E_{kin,i}+E_{pot,i}=\dfrac{1}{2}mv_i^2+\dfrac{1}{2}kx_i^2=\dfrac{1}{2}\cdot 0,02\cdot 0^2+\dfrac{1}{2}\cdot 333,3\cdot 0,2^2\simeq 6,67\ J$$
$$E_{tot,f}=E_{kin,f}+E_{pot,f}=\dfrac{1}{2}mv_f^2+\dfrac{1}{2}kx_f^2=\dfrac{1}{2}\cdot 0,02\cdot v_f^2+\dfrac{1}{2}k\cdot 0^2=0,01\cdot v_f^2$$
And from conservation of energy:
$$E_{tot,f}=E_{tot,i}\Longrightarrow 0.01\cdot v_f^2=6.67\Longrightarrow v_f=\sqrt{\dfrac{6.67}{0.01}}\simeq 25.8\ m\cdot s^{-1}$$
Therefore, the pebble exits the slingshot at a maximal speed of:
$$v_f\simeq 93\ km/h$$
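As a quick numeric sketch of the same computation (using the values assumed above: $k \simeq 333.3\ N/m$, $x_i = 0.20\ m$, $m = 0.02\ kg$):

```python
# Numeric check of the energy-conservation result above.
k = 10.0 / 0.03        # spring constant from 10 N per 3 cm, in N/m
x_i = 0.20             # initial stretch of the band, in m
m = 0.02               # pebble mass, in kg

E_pot = 0.5 * k * x_i**2          # energy stored in the band, in J
v_f = (2 * E_pot / m) ** 0.5      # exit speed from (1/2) m v_f^2 = E_pot

print(round(E_pot, 2))            # 6.67 (J)
print(round(v_f, 1))              # 25.8 (m/s), i.e. about 93 km/h
```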
(I can also type the answer in French if that's easier for you) | {
"domain": "physics.stackexchange",
"id": 20591,
"tags": "homework-and-exercises, newtonian-mechanics"
} |
Why doesn't a photon state have to be infinite in length? | Question: In all discussions I've seen so far (old quantum theory, semiclassical QM, QFT), when we talk about photon states, we seem to say they have a definite momentum. At the same time, we also say a photon is a particle that is localized in space. These two statements seem contradictory to me, given what I know so far.
In de Broglie's proposition, a photon is supposed to have energy $E = \hbar\omega$ and momentum $p = \hbar k$ ($k = 2\pi/\lambda$). Now in this semi-classical / old quantum theory treatment, the photon has a precise momentum, so doesn't that mean by Heisenberg's uncertainty principle that $\Delta x = \infty$? Even classically, a light wave of definite wavelength would have to be an infinite planewave.
In the second quantization formalism, we can think of a photon as a state of the form $\hat{a}_{\vec{k},\vec{\epsilon}}^{\dagger} | 0\rangle$ (where $\vec{k}$ is a definite wavevector and $\vec{\epsilon}$ is a definite polarization). If I'm not mistaken, the electric and magnetic fields would have nonzero amplitude distribution at every point in the entire quantization volume. But the quantization volume is arbitrary, so it seems there is no sense in which the photon is located in any specific point.
So why do we say photons are localized? It seems like they have to be infinite in size, but that doesn't make any sense.
Answer: Actually, for both electrons and photons, we are more secure in our understanding of their momentum eigenstates, which are infinite plane waves that cannot be normalised, than of their position eigenstates. The nice property that momentum eigenstates hold is that they are the free particle's Hamiltonian eigenstates.
(Sidenote: And in the mathematical framework of QFT, the S matrix really needs to be used between Hamiltonian eigenstates, because if you do a small mixture of different energy eigenvalues in any of the initial or final states, the accumulation of infinite phase factors will destroy the calculation's sanity. Just work with Hamiltonian eigenstates first, and then use the postulates of quantum theory to figure out what superpositions of different energy eigenstates are supposed to do.)
Every good textbook will tell you that the real thing has thus to be gotten by wavepackets. Only by superposing many different momentum eigenstates can you hope to normalise the wavefunction, and so forth.
Then you should not have as much difficulty with localising the photon too.
We need to be able to localise the photon because when we detect the photon with a detector, the detector is localised, and so we must be able to say that the photon is at the detector. This is just the most convenient, sensible, and natural way to think about it. The photon must also be able to pass through slits, which are localised.
Note that both momentum eigenstates and position eigenstates are not normalisable. That is ok. The correct statements of the postulates of quantum theory are that the quantum state lives in Hilbert space, but the eigenstates of the continuous part of observable operators live in rigged Hilbert space. Those non-normalisable eigenstates cannot be realised, but they can be safely used in superpositions to expand any realisable quantum state. | {
"domain": "physics.stackexchange",
"id": 95383,
"tags": "quantum-mechanics, photons, heisenberg-uncertainty-principle"
} |
Easy Filter with FFT and Convolution | Question: I'm pretty new to this topic, so I don't have much experience with DSP.
I want to filter (high-pass) a WAVE file. I have already programmed the FFT, and now I want to filter the FFT vector via convolution (which is a multiplication in the frequency domain?!).
1) Can anyone give me an easy equation for such a filter?
2) If I have an equation, I compute it, also Fourier-transform it, multiply it with my FFT vector, and inverse-transform it, and then I have applied the filter?!
Answer: Actually, yes. As Hilmar pointed out, it might be cheaper to use an ordinary filter, but that depends on the circumstances.
If you have two signals $A$ and $B$ with Fourier transform being $\mathcal{F}$, their convolution $A\ast B$ will be
\begin{equation}
A\ast B = \tilde{\mathcal{F}}(\mathcal{F}(A)\cdot\mathcal{F}(B))
\end{equation}
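As a concrete sketch of this identity in NumPy (note that for sampled signals the DFT version of the lemma gives the circular convolution):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(8)
b = rng.standard_normal(8)

# Circular convolution computed directly from its definition...
direct = np.array([sum(a[m] * b[(n - m) % 8] for m in range(8))
                   for n in range(8)])

# ...and via the convolution lemma: multiply the spectra, transform back.
via_fft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

print(np.allclose(direct, via_fft))   # True
```

So applying a filter this way amounts to multiplying your FFT vector by the filter's frequency response and then inverse-transforming, exactly as suspected in the question.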
In this identity, $\tilde{\mathcal{F}}$ is the inverse transform. This is the well-known "convolution lemma". There is not much more theory to it: in the Fourier domain, you just replace convolution by (point-wise) multiplication. Your question already contained the answer; to me it just appears that you were not sure about the details. | {
"domain": "dsp.stackexchange",
"id": 1483,
"tags": "fft, filters, convolution"
} |
Python tuition calculator 2.0 | Question: Here is my program that calculates tuition cost for the next five years. I have had some review on it so far, but I am seeking more. Please keep in mind I am new to programming, so I may require a few extra steps to understand something. Thanks in advance!
Version 1.0 is here
RESIDENCY_COST = {
"I": 10000,
"O": 24000,
"G": 40000,
}
cost_of_tuition = None
while True:
residency = raw_input('Please input your type of residency, I for in-state, O for out-of-state, and G for graduate: ')
try:
cost_of_tuition = RESIDENCY_COST[residency]
break
except KeyError:
print ('Please enter I, G or O ONLY.')
years = []
tuition_increase = []
academic_year = []
academic_year_inc = []
for _ in range(5):
intMath = cost_of_tuition * 0.03
tuition_increase.append(intMath)
fnlMath = intMath + cost_of_tuition
years.append(fnlMath)
cost_of_tuition = fnlMath
academic_year.append("${:,.2f}".format(fnlMath))
academic_year_inc.append("${:,.2f}".format(intMath))
total_tuition_increaseSum = sum(tuition_increase)
total_tuition_increase = "${:,.2f}".format(total_tuition_increaseSum)
print('UNDERGRADUATE TUITION FOR THE NEXT FIVE YEARS ')
print('ACADEMIC YEAR TUITION INCREASE ')
print('------------- ------------ -------- ')
for i, year in enumerate(range(16, 21)):
print('{}-{} {} {}'.format(year + 2000, year + 1,
academic_year[i],
academic_year_inc[i]))
print('TOTAL TUITION INCREASE ' + total_tuition_increase)
Answer: You don't use total_tuition_increaseSum for anything other than that one print, so you may as well call sum on it in the print line:
total_tuition_increase = "${:,.2f}".format(sum(tuition_increase))
I would also recommend you look more at splitting the code into functions. Functions usually make code easier to read, use and change. For example, you might put your input management into one function so it's a small self contained chunk. All it needs to do that's different is return the resulting tuition cost instead of setting it directly. This is how it'd work:
def get_cost():
while True:
residency = raw_input('Please input your type of residency, I for in-state, O for out-of-state, and G for graduate: ')
try:
return RESIDENCY_COST[residency.upper()]
except KeyError:
print ('Please enter I, G or O ONLY.')
And you could call it like this:
cost_of_tuition = get_cost()
Similarly, you could then wrap your printing into a print_tuition_table function. This one would need to take parameters, your two lists and total_tuition_increase.
def print_tuition_table(academic_year, academic_year_inc, total_tuition_increase):
print('UNDERGRADUATE TUITION FOR THE NEXT FIVE YEARS ')
print('ACADEMIC YEAR TUITION INCREASE ')
print('------------- ------------ -------- ')
for i, year in enumerate(range(16, 21)):
print('{}-{} {} {}'.format(year + 2000, year + 1,
academic_year[i],
academic_year_inc[i]))
print('TOTAL TUITION INCREASE {}'.format(total_tuition_increase))
Of course, you could change this to be more re-usable now if you didn't hard code it as being five years. Instead, you could check the length of the academic_year list and then just change the way the loop works slightly:
def print_tuition_table(academic_year, academic_year_inc, total_tuition_increase):
length = len(academic_year)
print('UNDERGRADUATE TUITION FOR THE NEXT {} YEARS '.format(length))
print('ACADEMIC YEAR TUITION INCREASE ')
print('------------- ------------ -------- ')
for i, year in enumerate(range(16, 16 + length)):
print('{}-{} {} {}'.format(year + 2000, year + 1,
academic_year[i],
academic_year_inc[i]))
print('TOTAL TUITION INCREASE ' + total_tuition_increase)
Then you could do the same with your calculation loop. It would need to take in cost_of_tuition and return multiple values, but return multiple values is easy in Python. First, you wrap it in a function like this:
def calculate_fees(cost_of_tuition):
    # initialise the lists inside the function so it is self-contained
    tuition_increase = []
    years = []
    academic_year = []
    academic_year_inc = []
    for _ in range(5):
intMath = cost_of_tuition * 0.03
tuition_increase.append(intMath)
fnlMath = intMath + cost_of_tuition
years.append(fnlMath)
cost_of_tuition = fnlMath
academic_year.append("${:,.2f}".format(fnlMath))
academic_year_inc.append("${:,.2f}".format(intMath))
total_tuition_increaseSum = sum(tuition_increase)
total_tuition_increase = "${:,.2f}".format(total_tuition_increaseSum)
    return academic_year, academic_year_inc, total_tuition_increase
Then you can set those values like this:
academic_year, academic_year_inc, total_tuition_increase = calculate_fees(cost_of_tuition)
Also, you could easily expand this function to take a years parameter so that it doesn't have to be stuck with 5 years.
def calculate_fees(cost_of_tuition, years):
for _ in range(years):
And since our printing function checks the length of your list, now you just need to change one number to be able to adjust the whole output.
With all those functions defined, here's how you could call them to run the script:
cost_of_tuition = get_cost()
academic_year, academic_year_inc, total_tuition_increase = calculate_fees(cost_of_tuition)
print_tuition_table(academic_year, academic_year_inc, total_tuition_increase) | {
"domain": "codereview.stackexchange",
"id": 15965,
"tags": "python, beginner, calculator, finance"
} |
Creating lists of Card objects | Question: I need to create two lists of Card objects in C#.
A Card class is defined as:
public class Card {
public Suits Suit { set; get; }
public Values Value { set; get; }
public static Random rand = new Random();
public Card() {
}
public Card(Suits s, Values v) {
Suit = s;
Value = v;
}
public override string ToString() {
return Suit.ToString() + " that has a value of " + Value.ToString();
}
public Card RandomCardGenerator() {
Suit = (Suits)rand.Next(4);
Value = (Values)rand.Next(1, 14);
return this;
}
In a different class called Action:
static class Action {
static public List< Card > CreateRandomCardList() {
List< Card > cardList1 = new List< Card >();
for (int loop = 0; loop < 12; loop++) {
cardList1.Add(new Card().RandomCardGenerator());
}
return cardList1;
}
static public List < Card > CreateFullSetCard() {
List< Card > cardList2 = new List< Card >();
for (int s = 0; s < 4; s++) {
for (int v = 0; v < 13; v++)
cardList2.Add(new Card((Suits)s, (Values)v));
}
return cardList2;
}
}
CreateRandomCardList() is later called upon:
public partial class Form1 : Form {
public Form1() {
InitializeComponent();
PopulateListBox1(Action.CreateRandomCardList());
}
private void PopulateListBox1(List< Card > l) {
listBox1.DataSource = l;
}
}
I used static for class Action.
It makes sense to create instances of Card, but is it necessary to create an instance of Action? Can I keep it static?
Answer: It's unclear what CreateRandomCardList is intended for during gameplay. I know, technically, it's generating random cards, but then why just 12 (loop < 12), why not 13?
Nevertheless, I strongly feel the Action class is clearly misleading. I would instead prefer to have a Deck which knows what cards it holds. Also, the Card type should be as simple as possible, perhaps knowing only Suit and Value. RandomCardGenerator does not belong there.
// Deck is general term. You can more appropriately
// name it as DeckOfCards, CardDeck, or anything you deem fit.
public class Deck {
private Card[] cards;
// Setup() is purposely part of Deck creation because without cards deck is useless.
public Deck() {
// fill up 13 cards
Setup();
}
private Card[] Setup() {} // CreateFullSetCard()
private Card[] Shuffle() {}
//.. so on
}
The Deck should not be static because
It is going to have its own state.
If there are > 1 players, each will have their own Deck of cards. Even if it's a single player, it should not be static, for reason #1. | {
"domain": "codereview.stackexchange",
"id": 29152,
"tags": "c#, winforms, static"
} |
How can a rigid body's weight do work on it to make it rotate? | Question: Consider a cylinder that rolls without sliding on an inclined plane. If it's placed at the top of the plane, with its center of mass at a height $h$ from the bottom, it will have a potential energy $mgh$ (considering the bottom of the plane as the zero point). Then, the cylinder will start rolling down the plane, as its weight does work on it, causing its potential energy to be transformed into rotational and translational kinetic energy.
I'm confused about the origin of the rotational kinetic energy of the cylinder. Since the static friction and the normal force are applied at the point of contact with the plane, neither of them can do work on the cylinder (which is why its mechanical energy is conserved), since the cylinder is, by constraint, not sliding. This seems to imply that the cylinder's rotational kinetic energy comes from the work done by its weight.
However, I don't understand how this happens, since a rigid body's weight can be seen as being applied on its center of mass, which means it can exert no torque on it. Instead, the only torque exerted with respect to its center of mass comes from static friction.
Moreover, if the inclined plane was frictionless, there would be no torque exerted on the cylinder, so it would not rotate. Yet, I don't see how this situation could be distinguished from the previous one by only looking at the cylinder's mechanical energy (since the normal force would still be doing no work).
So, my question is: how is gravitational potential energy converted into rotational kinetic energy, and what is friction's role on this?
Answer: It is important to keep in mind that there are three different conserved quantities of interest here: linear momentum, angular momentum, and energy. Rotational kinetic energy is not itself a conserved quantity.
Each of the conserved quantities has an associated rate of change, or “flow”. The rate of change of linear momentum is the force, the rate of change of the angular momentum is the torque, and the rate of change of the energy is the power.
Each interaction can produce all three: force, torque, and power. It is not necessary that the force which delivers torque also deliver power.
For a disk rolling without slipping there are two interactions, the gravitational interaction and the friction interaction. Assuming no dissipation, it is straightforward to show that the change in the linear momentum is equal to the sum of the frictional and gravitational forces, that the change in the angular momentum (about the center of mass) is equal to the torque from the friction only, and that the change in the energy is equal to the power from gravity only.
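The bookkeeping is easy to check numerically for a uniform solid cylinder ($I = \frac{1}{2}mr^2$), with illustrative values:

```python
import math

m, g, h = 1.0, 9.81, 2.0      # illustrative mass (kg), gravity (m/s^2), drop (m)

# Rolling without slipping: m g h = (1/2) m v^2 + (1/2) I w^2 with v = w r,
# which for I = (1/2) m r^2 gives v^2 = 4 g h / 3.
v = math.sqrt(4 * g * h / 3)

ke_trans = 0.5 * m * v**2     # translational kinetic energy
ke_rot = 0.25 * m * v**2      # rotational part: (1/2)(m r^2 / 2)(v/r)^2

print(math.isclose(ke_trans + ke_rot, m * g * h))   # True: all energy from gravity
print(ke_trans / ke_rot)                            # 2.0: the no-slip split
```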
It is incorrect to assume that the torque must provide any energy. It does not. It only provides angular momentum. Only the power provides energy, and that comes entirely from the gravitational interaction. The frictional interaction provides torque. It does not provide power, although it does provide a constraint which splits the power into rotational and translational KE. Providing such a constraint does not itself require energy. | {
"domain": "physics.stackexchange",
"id": 76276,
"tags": "newtonian-mechanics, energy-conservation, work, potential-energy, rigid-body-dynamics"
} |
Characterising Minkowski spacetime as a flat manifold with some other property? | Question: It is known that flat manifolds can be characterized as follows
If a pseudo-Riemannian manifold $M$ of signature $(s,t)$ has zero Riemann
curvature tensor everywhere on $M$, then the manifold is locally isometric to $\mathbb{R}^{s,t}$.
My question is this $-$ what other properties should a pseudo-Riemannian manifold have, beyond zero curvature, for it to be globally isometric to $\mathbb{R}^{s,t}$?
My guess is that non-compactness should be at least one of the conditions, but I haven't been able to show that.
Answer: Exercise 8 of Chapter 8 in Barrett O'Neill's Semi-Riemannian Geometry With Applications to Relativity (1983) asks,
(a) Let $M$ be a flat connected semi-Riemannian manifold complete at
a point $o\in M$ (that is, $\exp_{o}$ is defined on all of $T_{o}(M)$). Prove that $\exp_{o}: T_{o}(M) \rightarrow M$ is a semi-Riemannian covering map. (Hint: Use Proposition 6.) (b) Give an example of a connected semi-Riemannian manifold that is complete at one point but not complete.
I haven't done the exercise, but based on this, if $M$ is a pseudo-Riemannian manifold of signature $(s, t)$, then a necessary and sufficient condition that $M$ is globally isometric to $\mathbb{R}^{s, t}$ is that $M$ is
flat,
simply-connected (and connected), and
geodesically complete.
If $M$ is a flat pseudo-Riemannian manifold such that it is connected and geodesically complete, then by the exercise we know $T_{o}M\cong\mathbb{R}^{s, t}$ is a pseudo-Riemannian covering space of $M$. If $M$ is simply-connected, then it is globally isometric to its covering space. | {
"domain": "physics.stackexchange",
"id": 95898,
"tags": "general-relativity, spacetime, differential-geometry, curvature, topology"
} |
Calcium carbonate and hydrochloric acid | Question: I am trying to solve an exercise in which a block of 605.5 g calcium carbonate should be completely dissolved in 30% hydrochloric acid (w/w), and the concentration of the acid should be 3% at the end. The question is: how many grams of 30% HCl solution are needed at the beginning?
$$\ce{CaCO3 + 2 HCl -> CaCl2 + H2O + CO2}$$
So for each dissolved molecule of $\ce{CaCO3}$:
2 molecules of $\ce{HCl}$ are used
1 molecule of $\ce{H2O}$ is produced
1 molecule of $\ce{CaCl2}$ is produced
1 molecule of $\ce{CO2}$ is produced, but leaves the solution
I got a result (about $\pu{1671.524g}$ 30% hydrochloric acid), which I verified this way:
Let $p$ be the number of dissolved $\ce{CaCO3}$ moles, which is about $\mathrm{605.5/100.0869 = 6.05}$
$m(\ce{HCl}) = \pu{1671.524 g} \times 0.3$
$m(\ce{H2O}) = \pu{1671.524 g} \times 0.7$
$m(\ce{HCl})_{end} = m(\ce{HCl}) - 2p \cdot M(\ce{HCl})$
$m(\ce{H2O})_{end} = m(\ce{H2O}) + p \cdot M(\ce{H2O})$
$m(\ce{CaCl2})_{end} = p \cdot M(\ce{CaCl2})$
$c_{end} = \frac{m(\ce{HCl})_{end}}{m(\ce{HCl})_{end} + m(\ce{H2O})_{end} + m(\ce{CaCl2})_{end}} = 0.03$
But my solution is claimed to be incorrect. Am I missing something here?
Answer: We'll assume the reaction loses only $\ce{CO2}$ from the system (although it is an exothermic reaction, we'll assume the $\ce{H2O}$ produced is not lost as vapour):
$$\ce{CaCO3 + 2 HCl -> CaCl2 + H2O + CO2}$$
Mass of $30\%~\ce{HCl}$ needed to react with $\pu {605.5 g}$ of $\ce{CaCO3}$ $\pu {= 605.5 g \ce{CaCO3} \times \frac{\pu{1mol}~\ce{CaCO3}}{\pu{100.09g}~ \ce{CaCO3}}\times \frac{\pu{2mol}~\ce{HCl}}{\pu{1 mol}~\ce{CaCO3}}\times \frac{\pu{36.45g}~\ce{HCl}}{\pu{1mol}~\ce{HCl}}\times \frac{\pu{100g}~\text{solution of 30%}~\ce{HCl}}{\pu{30 g}~\ce{HCl}}= 1470.04 g of 30\% \ce{HCl} solution}$.
Similarly, mass of $\ce{CO2}$ released when $\ce{HCl}$ reacted with $\pu{605.5 g}$ of $\ce{CaCO3}$ $\pu{= 605.5 g \ce{CaCO3} \times \frac{\pu{1mol}~\ce{CaCO3}}{\pu{100.09g}~\ce{CaCO3}}\times \frac{\pu{1mol}~\ce{CO2}}{\pu{1mol}~\ce{CaCO3}}\times \frac{\pu{44.0g}~\ce{CO2}}{\pu{1mol}~\ce{CO2}} = 266.18 g \ce{CO2}}$.
According to the law of conservation of mass,
$\pu {mass of reactant = mass of products}$
Assuming no solvent left the system,
$\pu {mass of reactant + solvent = mass of products + solvent}$
$\pu{mass of reactant + solvent = 1470.04 g + 605.5 g = 2075.54 g}$
$\pu{mass of products + solvent remained in the flask = mass of reactant + solvent - mass of \ce{CO2} released = 2075.54 g - 266.18 g = 1809.36 g}$
Now, suppose extra $\pu{A g}$ of $30\%~\ce{HCl}$ added to the solution, so that final concentration is $3\%~\ce {HCl}$.
$\pu {mass of \ce{HCl} in A g of 30\% solution = A g of 30\% \ce{HCl} solution \times \frac {\pu{30g}~\ce {HCl}}{\pu{100g}~ \text{of 30% $\ce{HCl}$ solution}} = 0.3A g of 30\% \ce{HCl}}$
Thus, since final concentration of solution is $\mathrm {3\%}$,
$$\pu {\frac {(0.3 A) g}{(1809.36 + A) g} = 0.03}$$
$$\pu {0.3 A = 0.03 \times 1809.36 + 0.03A}$$
Thus, $A = \frac {0.03 \times 1809.36}{0.3-0.03} = \pu{201.04 g}$
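The whole chain of arithmetic can be reproduced in a few lines (molar masses as used above):

```python
M_CaCO3, M_HCl, M_CO2 = 100.09, 36.45, 44.0   # g/mol, as used above

n = 605.5 / M_CaCO3                  # mol of CaCO3 dissolved
m_react = n * 2 * M_HCl / 0.30       # g of 30% solution consumed by the reaction
m_CO2 = n * M_CO2                    # g of CO2 that leaves the flask
m_flask = m_react + 605.5 - m_CO2    # g left in the flask after the reaction

A = 0.03 * m_flask / (0.30 - 0.03)   # extra 30% solution for a 3% final strength

print(round(m_react + A, 1))         # ~1671.1 g of 30% HCl in total
```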
Finally, the mass of $30\%~\ce{HCl}$ initially added is $\pu {(1470.04 + 201.04) g = 1671.1 g}$. | {
"domain": "chemistry.stackexchange",
"id": 9978,
"tags": "acid-base, reaction-mechanism"
} |
Electron Double Slit Experiment-de broglie wavelength relation to distance btw slits | Question: In the 2 slit experiment with electrons, is the distance between the slits related to the individual electron's de broglie wavelength?
In other words, if the slits are too far apart which would prevent the electron's matter wave from passing through both slits, does the interference pattern then fail?
A broader question is what is the relationship between the size of the slits, the distance between the slits, and the observed interference pattern?
Answer: You ask:
A broader question is what is the relationship between the size of the slits, the distance between the slits, and the observed interference pattern?
The answer here covers your question.
The de Broglie wavelength describes the effective wavelength that a particle would have when it was behaving as a wave.
To decide what wavelength an electron should have so as to be able to see the interference pattern, one has to consider the slit separation and the distance from the slits to the screen.
The interference pattern is dictated by the fringe spacing, the distance from one bright line to the next:
$$\Delta y = \frac{\lambda D}{d}$$
where D is the distance from the slit to the screen (or detector), little d is the spacing between the slits, and λ is going to be our de Broglie wavelength.
Let's assume we want to use electrons for our experiment. We build a setup with the screen placed 1 meter from the slits, and the two slits 1 millimeter apart (maybe we found this equipment in a storage closet in the physics department...). This setup will make the distance between the bright spots on our screen 1000 times the de Broglie wavelength of our incoming electron. We want to be able to actually see the interference pattern in our detectors, so perhaps we should request that the spacing of the bright spots be about 1 millimeter (this would depend on the detectors, of course). This means the de Broglie wavelength of our electron has to be about one micrometer. Now we go back to the equation for de Broglie wavelength, and see that we know h and we now know λ, so we can calculate what p should be. Since we know the mass of the electron, calculating the momentum is essentially the same as calculating the speed; for our experiment, we find the electron needs to be going only about 730 m/s. That is an extremely low speed for an electron (a kinetic energy of just a few micro-electronvolts), far below what typical electron sources produce!
So experiments are not easy with electrons.
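Plugging the geometry above into $\Delta y = \lambda D/d$ (so $\lambda = \Delta y\, d/D$) gives the numbers directly:

```python
h = 6.626e-34       # Planck constant, J*s
m_e = 9.109e-31     # electron mass, kg

D, d, dy = 1.0, 1e-3, 1e-3    # screen distance, slit spacing, fringe spacing (m)

lam = dy * d / D              # required de Broglie wavelength: ~1e-6 m
v = h / (m_e * lam)           # required electron speed, from p = h / lambda

print(round(v))               # about 727 m/s
```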
For the buckyball experiment, the researchers used slits about 100 nanometers apart (a nanometer is one millionth of a millimeter), and shot the buckyballs through the slits at about 200 meters per second (roughly 500 mph), much slower than the speed of light. | {
"domain": "physics.stackexchange",
"id": 26194,
"tags": "quantum-mechanics"
} |
Microscope and dual nature of light | Question: Does a light microscope also prove the particle nature of light?
As in electron microscope there is either transmission or absorbance of electrons to create an image, hence the question above!
Answer: The working principle of microscope doesn’t warrant the particle description of light. But where the particle nature can be observed is at the level of our detectors.
For example, we recently built a microscope in our lab for single photon purposes where we finally have an EMCCD, which is essentially a fancy camera sensitive enough to detect single photons. So if we look at our camera's readout, we see the microscope's image build up one photon at a time. See also this related answer of mine. | {
"domain": "physics.stackexchange",
"id": 92571,
"tags": "visible-light, wave-particle-duality, microscopy"
} |
A general question about PID Controller | Question: I have a basic question because I'm trying to understand right now a concept that I thought it was obvious.
Looking at this video, he is going to feed back the state variable x and compare it with the reference, while the input to the system is a force f.
Now, if I'm correct, it is only possible to feed back variables which share the same units, so I expect to drive a distance in metres through a reference variable also in metres, and the difference will then be fed into the PID. Is the example in the video just to show how to use Simulink?
Or am I wrong?
Answer: In the video you've linked, the reference (x desired) is a distance and the feedback variable is also a distance; what is different is the input to the system, which is a force. The magic happens inside the PID, where fine tuning of the gains, which are commonly regarded as abstract values, makes the system behave the way you want.
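A minimal simulation sketch of this idea (hypothetical gains, 1 kg point mass): the PID compares two positions in metres and outputs a force in newtons.

```python
def simulate_pid(x_ref, kp=20.0, ki=5.0, kd=10.0, dt=0.01, steps=2000):
    """Drive a 1 kg mass to position x_ref using a PID on the position error."""
    m = 1.0
    x, v = 0.0, 0.0                  # position (m) and velocity (m/s)
    integral, prev_err = 0.0, x_ref - x
    for _ in range(steps):
        err = x_ref - x              # metres minus metres: same units
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        force = kp * err + ki * integral + kd * deriv   # output is a force (N)
        v += (force / m) * dt        # the plant turns force into motion
        x += v * dt
    return x

print(simulate_pid(1.0))   # settles near 1.0
```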
So you can only compare variables of the same units to form the error fed into the PID; the PID will take care of calculating the right output no matter what units the input to the system is in. | {
"domain": "robotics.stackexchange",
"id": 485,
"tags": "pid"
} |
Complexity of membership testing for finite-regular languages | Question: A language $L \subseteq \Sigma^*$ is finite-regular if there exists $C$ such that for all $n$, $L \cap \Sigma^n$ is accepted by some DFA with at most $C$ final states.
Given a finite-regular language $L$, am I guaranteed that there exists a polynomial-time algorithm $A_L$ that recognizes $L$? In other words, I want an algorithm such that on input $x$ it returns true or false according to whether $x \in L$ or not, and the running time is polynomial in the length of $x$. Is this guaranteed?
Note that the number of states of the complete DFA accepting such $L$ may be $O(|\Sigma|^n)$ so any trivial algorithm actually constructing the full DFA would not be polynomial time. However the number of accepting final states is $O(1)$, which leads me to believe there is some shortcut available here.
Thanks to Yuval Filmus for formalizing this language class, and explaining its closure properties.
Answer: Finite-regular languages need not even be decidable. Indeed, if $L$ is any language such that $|L \cap \Sigma^n| \leq C$ for some $C$ independent of $n$, then $L$ is finite-regular (you can show this by considering the case $C=1$). In particular, the language $\{ 1^n : \text{ the $n$th Turing machine halts on the empty input} \}$ is finite-regular but not decidable. | {
"domain": "cs.stackexchange",
"id": 9529,
"tags": "time-complexity, regular-languages, finite-automata"
} |
Bayesian Nets & Markov Blanket | Question: A few days ago I passed a PhD entrance exam, and now I want to work out solutions to one of its challenging problems.
In a Bayesian network on X={X1,...,Xn}, each random variable has P parents and Q children. For Xi, we want to find a minimum set of variables conditioned on which Xi is independent of all other variables. At least how many variables do we need?
I think we should use the Markov blanket for this problem. Does anyone have a solution?
Regards.
Answer: In a Bayesian network, a variable is independent from all the variables given its Markov blanket (except of course the variables in the Markov blanket).
However, the Markov blanket is not the minimal set that renders two variables independent.
Also note that a variable may be independent of some variables in the Markov blanket, given another set of variables (think about the case of a spouse in the network).
By the way, I think that this problem has been solved already (if you refer to the problem of identifying the minimal set that renders two variables independent).
Edit: Check this out: http://www.cs.iastate.edu/~jtian/r254_min_separator.pdf
They show how to find a minimal d-separating set in a given Bayesian network. | {
"domain": "cs.stackexchange",
"id": 2986,
"tags": "machine-learning, artificial-intelligence, information-theory, hidden-markov-models, bayesian-statistics"
} |
Why is ice less dense than water? | Question: The answers to this question explain that ice is less dense than water because it has a "crystal structure", but they don't explain what exactly that is and why this happens; also, I saw this answer from another site stating that not all ice is less dense than water.
What is the "crystal structure" that ice has? Why is ice structured that way? Can ice be more dense than water, and if yes, how and when?
Answer: I'm sure you have seen photographs of snowflakes up close. You will notice that there are hundreds of small crystals of ice. This is the crystal structure of ice. You don't see ice cubes with a crystal structure because they freeze too fast. The water doesn't have enough time to move into the crystal lattice when you freeze the water. This web site shows how the molecules line up in the crystals.
Yes, some ice is denser than water. If you put pressure on regular ice, and give it time to rearrange, the molecules will move into a new crystal lattice which results in the ice being more dense than water. In the first ice crystal, there are spaces between some of the molecules which is not there in the second crystal structure.
With extreme pressure, you can have frozen water at 100 °C. | {
"domain": "physics.stackexchange",
"id": 13289,
"tags": "water, density, ice"
} |
Behaviour at an interface plane wave | Question: I have this example diagram that was given in one of my lectures, and I am just going through what the given equation actually means and calculating some results from it, namely the angle of incidence and the type of polarisation of the incident wave.
So here is the diagram with equation given:
So starting with the first part of the equation:
$$30\left(0.866i-0.5k\right)$$
What I understand of this is that it represents the direction and magnitude of the wave's amplitude: if a particle were placed in front of this plane wave, it would oscillate along the $30\left(0.866i-0.5k\right)$ direction.
So for the next part of the equation
$$e^{i\left(0.5x+0.866z\right)}$$
From this I gather that, as the wave propagates in the plane indicated by the $x$ and $z$ components, we are looking at what is called p polarisation; but I assume that if, say, the $z$ or $x$ component were replaced with a $y$, then we would be looking at s polarisation, due to the wave propagating perpendicular to that plane?
So to find the angle of incidence I did the following:
I took the exponential part of the plane wave equation:
$$e^{i\left(\mathbf{k}\cdot\mathbf{r}\right)}$$
and then looking at the diagram from the propagation vector for incidence $k_i$ I get the following.
$$k_i=\left|k_i\right|\sin\left(90-\theta _i\right)\hat{z}+\left|k_i\right|\cos\left(90-\theta _i\right)\hat{x}$$
which simplify to
$$k_i=\left|k_i\right|\sin\left(\theta _i\right)\hat{x}+\left|k_i\right|\cos\left(\theta _i\right)\hat{z}$$
Now r vector is given by
$$r=x\hat{x}+z\hat{z}$$
using the dot product relationship I get the following for $k \cdot r$
$$\left|k_i\right|\sin\left(\theta _i\right)x+\left|k_i\right|\cos\left(\theta \:_i\right)z$$
comparing this to the equation given I make
$$\left|k_i\right|=1$$
$$\theta _i=\tan^{-1}\left(\frac{0.5}{0.866}\right)=30°.$$
Have I understood the equation correctly?
Answer:
From this I gather that as the wave propagates parallel to the plane which is indicated by the $x$ and $z$, then we are looking at what is called a p polarisation,
Yes.
but I assume that if say the $z$ or $x$ component was replaced with a $y$ then we would be looking at a s polarisation due to the wave propagating perpendicular to the plane?
No. The $s$ polarization corresponds to a pure $y$ polarization (in the conventions you've set up). If you just replaced $x$ or $z$ in your $p$-polarized wave for $y$, i.e. if you had a polarization along $0.5\hat{\mathbf j}+0.866\hat{\mathbf k}$ or $0.5\hat{\mathbf i}+0.866\hat{\mathbf j}$, then the result would not be a viable plane wave, as the polarization would not be orthogonal to the wave vector.
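To make that last point concrete (with $\mathbf{k}=0.5\hat{\mathbf i}+0.866\hat{\mathbf k}$ read off from the exponent), the given polarization is transverse:
$$\left(0.866\hat{\mathbf i}-0.5\hat{\mathbf k}\right)\cdot\left(0.5\hat{\mathbf i}+0.866\hat{\mathbf k}\right)=(0.866)(0.5)+(-0.5)(0.866)=0,$$
whereas the proposed substitutions are not, e.g.
$$\left(0.5\hat{\mathbf j}+0.866\hat{\mathbf k}\right)\cdot\left(0.5\hat{\mathbf i}+0.866\hat{\mathbf k}\right)=(0.866)(0.866)\approx 0.75\neq 0.$$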
Other than that, though,
Have I understood the equation correctly?
Yes. | {
"domain": "physics.stackexchange",
"id": 53809,
"tags": "electromagnetism, optics, classical-electrodynamics, plane-wave"
} |
How to print a Confusion matrix from Random Forests in Python | Question: I applied this random forest algorithm to predict a specific crime type. The example I took from this article here.
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder
import random
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
import matplotlib
import matplotlib.pyplot as plt
import sklearn
from scipy import stats
from sklearn.cluster import KMeans
import seaborn as sns
# Using Skicit-learn to split data into training and testing sets
from sklearn.model_selection import train_test_split
# Import the model we are using
from sklearn.ensemble import RandomForestRegressor
import os
os.environ["PATH"] += os.pathsep + 'C:/Program Files (x86)/Graphviz2.38/bin/'
features = pd.read_csv('prueba2.csv',sep=';')
print (features.head(5))
# Labels are the values we want to predict
labels = np.array(features['target'])
# Remove the labels from the features
# axis 1 refers to the columns
features= features.drop('target', axis = 1)
# Saving feature names for later use
feature_list = list(features.columns)
# Convert to numpy array
features = np.array(features)
# Split the data into training and testing sets
train_features, test_features, train_labels, test_labels = train_test_split(features, labels, test_size = 0.25, random_state = 42)
baseline_preds = test_features[:, feature_list.index('Violent crime')]
# Baseline errors, and display average baseline error
baseline_errors = abs(baseline_preds - test_labels)
print('Error: ', round(np.mean(baseline_errors), 2))
# Instantiate model with 1000 decision trees
rf = RandomForestRegressor(n_estimators = 1000, random_state = 42)
# Train the model on training data
rf.fit(train_features, train_labels);
# Use the forest's predict method on the test data
predictions = rf.predict(test_features)
# Calculate the absolute errors
errors = abs(predictions - test_labels)
# Print out the mean absolute error (mae)
print('Mean absolute error:', round(np.mean(errors), 2), 'percent.')
# Calculate mean absolute percentage error (MAPE)
mape = 100 * (errors / test_labels)
# Calculate and display accuracy
accuracy = 100 - np.mean(mape)
print('Accuracy:', round(accuracy, 2), '%.')
# Get numerical feature importances
importances = list(rf.feature_importances_)
# List of tuples with variable and importance
feature_importances = [(feature, round(importance, 2)) for feature, importance in zip(feature_list, importances)]
# Sort the feature importances by most important first
feature_importances = sorted(feature_importances, key = lambda x: x[1], reverse = True)
# Print out the feature and importances
[print('Variable: {:20} Importance: {}'.format(*pair)) for pair in feature_importances];
# Import tools needed for visualization
from sklearn.tree import export_graphviz
import pydot
# Pull out one tree from the forest
tree = rf.estimators_[5]
# Export the image to a dot file
export_graphviz(tree, out_file = 'tree.dot', feature_names = feature_list, rounded = True, precision = 1)
# Use dot file to create a graph
(graph, ) = pydot.graph_from_dot_file('tree.dot')
# Write graph to a png file
graph.write_png('tree.png')
So my question is: how can I add a confusion matrix to measure accuracy? I tried the example from here, but it doesn't work. The following error appears:
Any advice?
Answer: From the code and task as you present it, a confusion matrix wouldn't make sense. This is because it shows how well a model is classifying samples, i.e. saying which category they belong to. Your problem (as the author in your link states) is a regression problem, because you are predicting a continuous variable (temperature). Have a look here for more information.
In general, if you do have a classification task, printing the confusion matrix is as simple as using the sklearn.metrics.confusion_matrix function.
As input it takes your predictions and the correct values:
from sklearn.metrics import confusion_matrix
conf_mat = confusion_matrix(labels, predictions)
print(conf_mat)
You could consider altering your task to make it a classification problem, for example by grouping the temperatures into classes of a given range.
You could, say, transform the target temperature into a new_target_class, then change your code to use the RandomForestClassifier.
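As a dependency-free sketch of that binning-plus-confusion-matrix idea (all variable names and numbers below are invented for illustration; in practice you would bin with e.g. pandas.cut and call sklearn.metrics.confusion_matrix as above):

```python
# Bin a continuous target into equal-width classes, then tally a confusion
# matrix by hand. This mirrors what sklearn.metrics.confusion_matrix computes.
def to_class(value, lo, hi, n_classes):
    """Map a continuous value into one of n_classes equal-width bins."""
    width = (hi - lo) / n_classes
    # Clamp so the maximum value lands in the last bin instead of one past it.
    return min(int((value - lo) / width), n_classes - 1)

def confusion(y_true, y_pred, n_classes):
    """Rows index the true class, columns the predicted class."""
    mat = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        mat[t][p] += 1
    return mat

temps_true = [35, 48, 61, 74, 52, 69]   # observed temperatures (made up)
temps_pred = [38, 45, 70, 71, 55, 60]   # model outputs (made up)
y_true = [to_class(t, 30, 80, 5) for t in temps_true]
y_pred = [to_class(t, 30, 80, 5) for t in temps_pred]
cm = confusion(y_true, y_pred, 5)       # correct predictions sit on the diagonal
```

The diagonal counts the correctly classified bins; off-diagonal entries show which temperature ranges are being confused with which.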
I have done a quick and dirty conversion on the same data linked in that article, check it out here. I basically use the minimum and maximum values of the target variable to set a range, then aim for 10 different classes of temperature and create a new column in the table which assigns that class to each row. At the top it looks like this (click on the picture to enlarge):
If you can get those predictions going using the RandomForestClassifier, you can then run the confusion matrix code above on the results. | {
"domain": "datascience.stackexchange",
"id": 6943,
"tags": "random-forest"
} |
What exactly is the fireball caused by a nuclear bomb? | Question: This seems like a pretty simple question, but I can't seem to come up with a satisfactory answer. When a nuclear bomb is detonated a large fireball forms. What is the fuel that drives this fireball? Or is it not a fire in the traditional sense (i.e. requiring fuel, oxygen, and a spark)?
Answer: A fireball marks the radius at which the plasma - ionised atoms and free electrons from the air, the ground (if detonated near the ground), the bomb casing and the nuclear explosive - becomes transparent to visible radiation. It is a "fireball" because visible radiation is produced by hot plasma (a few thousand Kelvin and upwards) via a number of processes involving electrons interacting with ions or recombining with ions. The visible radiation escapes to us from the outer part of the fireball - a bit like the photosphere of the Sun.
The plasma is made hot by material absorbing energy in the form of radiation (predominantly gamma and x-rays and the kinetic energy of reaction products) released in the initial fission or fusion explosion and then kept hot from absorbing its own radiation thereafter. Ultimately, the energy arises from the potential energy of the strong nuclear force that binds neutrons and protons together, which is millions of times greater than the atomic chemical potential energy associated with normal "burning".
Unlike the solar photosphere the fireball from an explosion evolves rapidly - expanding and cooling as it does so - because there is nothing like the gravity of the Sun to constrain the hot plasma. | {
"domain": "physics.stackexchange",
"id": 96852,
"tags": "nuclear-physics, explosions"
} |
What is trace-based seismic Inversion | Question: Looking through different literature articles and books concerning seismic inversion - mostly post-stack - phrases like "trace-based" and "model-based" keep appearing. I know that model-based in when lower-frequency info from either stacking velocities or low-pass sonic/rho logs are used. However, I do not know what trace-based inversion means or how it's performed. Can someone please help me with this?
Answer: I too have found this nomenclature rather confusing. I am pretty sure that trace-based inversion is merely where each seismic trace is inverted independently of surrounding traces. This would take into account most deterministic/probabilistic inversion types (e.g. model, coloured, sparse, etc). In contrast, geostatistical inversion would not strictly be trace-based inversion, as it uses a spatial and temporal variogram to take into account the expected results of surrounding traces during the inversion, even though it does invert on a trace-by-trace basis. Confusing.
The following paper describes trace and geostatistical inversion methods:
Germán Merletti, and Julio Hlebszevitsch. 2003. Geostatistical Inversion for the Lateral Delineation of Thin Layer Hydrocarbon Reservoirs: A Case Study in San Jorge Basin, Argentina. SEG 2003.
I've found the following paper very useful for understanding seismic inversion (geostatistical isn't discussed, though):
Dennis Cooke and John Cant. 2010. Model-based Seismic Inversion: Comparing deterministic and probabilistic approaches. CSEG Recorder, Vol. 35, No. 4. | {
"domain": "earthscience.stackexchange",
"id": 1127,
"tags": "geophysics, seismic, inversion"
} |
Singly linked list optimized for search & insert in O(1) | Question: I have tried to implement my singly linked list to be optimal in the sense that when inserting I don't have to search for the tail and then chain the new node to it, I just keep track of the tail and then insert, resulting in O(1). This is my insert:
#include <stdio.h>
#include <stdlib.h>
typedef struct Node {
int data; // integer data
struct Node* next; // pointer to the next node
} Node;
Node* head = NULL;
Node* prev = NULL;
int count = 0;
void insert_end_sll(int elm) {
count += 1;
Node* cur = malloc(sizeof * cur);
if (!cur) exit(EXIT_FAILURE);
cur->data = elm;
cur->next = NULL;
if (!head) {
head = cur;
prev = head;
}
else {
prev->next = cur;
prev = cur;
}
}
void print_sll() {
Node* trav = head;
while (trav) {
printf("%d ", trav->data);
trav = trav->next;
}
}
int main() {
insert_end_sll(1);
insert_end_sll(2);
insert_end_sll(3);
print_sll();
return 0;
}
Do you find any optimization still possible?
Answer: General Observations
If the code used calloc() rather than malloc(), there would be no need to initialize next to NULL since the memory returned by calloc() is cleared.
I would recommend separating the creation of the node from the insertion. This is more along the lines of the Single Responsibility Principle. The Single Responsibility Principle states:
that every module, class, or function should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by that module, class or function.
While the program does perform some action if the malloc() fails, it is generally better to provide error messages to the user telling them the program failed.
Node* new_node(int data)
{
Node* return_value = calloc(1, sizeof(*return_value));
if (!return_value)
{
fprintf(stderr, "allocation of node failed in new_node()\n");
exit(EXIT_FAILURE);
}
return_value->data = data;
return return_value;
}
There is a well defined set of operations on linked lists:
Traversal : To traverse all the nodes one after another.
Insertion : To add a node at the given position.
Deletion : To delete a node.
Searching : To search an element(s) by value.
Updating : To update a node.
Sorting: To arrange nodes in a linked list in a specific order.
Merging: To merge two linked lists into one.
If you implement one, you should implement all. Sometimes insertion and append are implemented as separate functions.
You might want to have a specialized list pointer to point to the head that maintains more information:
typedef struct listhead
{
unsigned int count;
Node* first;
Node* last;
} ListHead;
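For illustration only (not part of the reviewed C code), the same ListHead design can be sketched in a few lines of Python: the list object owns head, tail, and count, so append stays O(1) and no globals are needed:

```python
# Python sketch of the ListHead idea: the list object, not global variables,
# tracks head, tail, and count, keeping append O(1) without any traversal.
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None
        self.tail = None
        self.count = 0

    def append(self, data):
        """O(1) insert at the end: the tracked tail avoids any traversal."""
        node = Node(data)
        if self.head is None:
            self.head = node
        else:
            self.tail.next = node
        self.tail = node
        self.count += 1

    def to_list(self):
        """Traversal, handy for printing and testing."""
        out, cur = [], self.head
        while cur is not None:
            out.append(cur.data)
            cur = cur.next
        return out

lst = LinkedList()
for x in (1, 2, 3):
    lst.append(x)
```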
Avoid Global Variables
It is very difficult to read, write, debug and maintain programs that use global variables. Global variables can be modified by any function within the program and therefore require each function to be examined before making changes in the code. In C and C++ global variables impact the namespace and they can cause linking errors if they are defined in multiple files. The answers in this stackoverflow question provide a fuller explanation.
Each operation for the linked list should have a first parameter that is the head of the list. | {
"domain": "codereview.stackexchange",
"id": 44433,
"tags": "c, linked-list"
} |
String merge sort in Java | Question: This snippet is about sorting a set of strings using merge sort. However, this version is tailored for strings as it relies on lcp values:
Given two strings \$A\$ and \$B\$, \$lcp(A, B)\$ is the length of the longest prefix shared by both \$A\$ and \$B\$. The cool point to note here is that if you know at least a lower bound of \$lcp(A, B)\$ (say, \$k < lcp(A, B)\$), whenever comparing \$A\$ and \$B\$, you can skip the first \$k\$ characters, and this string merge sort relies on these values in order to speed up the computation.
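A minimal sketch of that comparison (in Python purely for brevity; the Java lcpCompare below is the real implementation): given a known common-prefix length k, comparison starts at index k, and both the ordering and the exact lcp found are reported.

```python
# LCP-aware string comparison: skip the first k characters, which are known
# to be equal, and return (cmp, lcp) where cmp<0 means a<b and cmp>0 means a>b.
def lcp_compare(a, b, k):
    n = min(len(a), len(b))
    for i in range(k, n):
        if a[i] != b[i]:
            return (-1 if a[i] < b[i] else 1), i
    # No mismatch found: one string is a prefix of the other (or they match).
    return (len(a) > n) - (len(b) > n), n
```

For example, knowing the two strings agree on their first 3 characters means only indices 3 and onwards are ever inspected.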
StringMergesort.java:
package net.coderodde.util;
import java.util.Arrays;
import java.util.Random;
public class StringMergesort {
private static final int ALPHABET_SIZE = 26;
private static final class ComparisonResult {
int cmpValue;
int lcp;
}
public static void sort(String[] array) {
sort(array, 0, array.length);
}
public static void sort(String[] array, int fromIndex, int toIndex) {
if (toIndex - fromIndex < 2) {
return;
}
String[] auxArray = array.clone();
int[] sourceLCPArray = new int[auxArray.length];
int[] targetLCPArray = new int[auxArray.length];
ComparisonResult result = new ComparisonResult();
sortImpl(auxArray,
array,
sourceLCPArray,
targetLCPArray,
fromIndex,
toIndex,
result);
}
private static void lcpCompare(String a,
String b,
int k,
ComparisonResult comparisonResult) {
int length = Math.min(a.length(), b.length());
for (int i = k; i < length; ++i) {
char ach = a.charAt(i);
char bch = b.charAt(i);
if (ach != bch) {
comparisonResult.cmpValue = Character.compare(ach, bch);
comparisonResult.lcp = i;
return;
}
}
comparisonResult.lcp = length;
if (a.length() > length) {
comparisonResult.cmpValue = 1;
} else if (b.length() > length) {
comparisonResult.cmpValue = -1;
} else {
comparisonResult.cmpValue = 0;
}
}
private static void sortImpl(String[] source,
String[] target,
int[] sourceLCPArray,
int[] targetLCPArray,
int fromIndex,
int toIndex,
ComparisonResult result) {
int rangeLength = toIndex - fromIndex;
if (rangeLength < 2) {
return;
}
int middle = fromIndex + ((toIndex - fromIndex) >> 1);
sortImpl(target,
source,
targetLCPArray,
sourceLCPArray,
fromIndex,
middle,
result);
sortImpl(target,
source,
targetLCPArray,
sourceLCPArray,
middle,
toIndex,
result);
merge(source,
target,
sourceLCPArray,
targetLCPArray,
fromIndex,
middle - fromIndex,
toIndex - middle,
result);
}
private static void merge(String[] source,
String[] target,
int[] sourceLCPArray,
int[] targetLCPArray,
int offset,
int leftRangeLength,
int rightRangeLength,
ComparisonResult result) {
int left = offset;
int leftBound = offset + leftRangeLength;
int right = leftBound;
int rightBound = right + rightRangeLength;
int targetIndex = offset;
while (left < leftBound && right < rightBound) {
int leftLCP = sourceLCPArray[left];
int rightLCP = sourceLCPArray[right];
if (leftLCP > rightLCP) {
target[targetIndex] = source[left];
targetLCPArray[targetIndex++] = sourceLCPArray[left++];
} else if (rightLCP > leftLCP) {
target[targetIndex] = source[right];
targetLCPArray[targetIndex++] = sourceLCPArray[right++];
} else {
lcpCompare(source[left], source[right], leftLCP, result);
if (result.cmpValue <= 0) {
target[targetIndex] = source[left];
targetLCPArray[targetIndex++] = sourceLCPArray[left++];
sourceLCPArray[right] = result.lcp;
} else {
target[targetIndex] = source[right];
targetLCPArray[targetIndex++] = sourceLCPArray[right++];
sourceLCPArray[left] = result.lcp;
}
}
}
while (left < leftBound) {
target[targetIndex] = source[left];
targetLCPArray[targetIndex] = sourceLCPArray[left];
targetIndex++;
left++;
}
while (right < rightBound) {
target[targetIndex] = source[right];
targetLCPArray[targetIndex] = sourceLCPArray[right];
targetIndex++;
right++;
}
}
public static void main(String[] args) {
long seed = System.nanoTime();
Random random = new Random(seed);
String[] array1 = createRandomStringArray(500_000, 300, random);
String[] array2 = array1.clone();
System.out.println("Seed = " + seed);
long startTime = System.nanoTime();
Arrays.sort(array1);
long endTime = System.nanoTime();
System.out.format("Arrays.sort in %.2f milliseconds.\n",
(endTime - startTime) / 1e6);
startTime = System.nanoTime();
StringMergesort.sort(array2);
endTime = System.nanoTime();
System.out.format("StringMergesort.sort in %.2f milliseconds.\n",
(endTime - startTime) / 1e6);
System.out.println("Arrays equal: " + Arrays.equals(array1, array2));
}
private static String[] createRandomStringArray(int size,
int maxLength,
Random random) {
String[] ret = new String[size];
for (int i = 0; i < size; ++i) {
ret[i] = randomString(maxLength, random);
}
return ret;
}
private static String randomString(int maxLength, Random random) {
int length = random.nextInt(maxLength + 1);
StringBuilder sb = new StringBuilder(length);
for (int i = 0; i < length; ++i) {
sb.append((char)('a' + random.nextInt(ALPHABET_SIZE)));
}
return sb.toString();
}
}
My performance figures are:
Seed = 5896859395728
Arrays.sort in 1220.41 milliseconds.
StringMergesort.sort in 669.97 milliseconds.
Arrays equal: true
Is there anything to improve here? Once again, I would like to hear comments on naming conventions, coding style, performance, and optimisation opportunities.
Answer: (Had to sort out what I wanted from this code, leaving:)
What is your goal coding this? From the amount of explanation, it does not seem to be educational. From the embedded micro-benchmark, it might be benchmarking an implementation of LCP mergesort.
I've tried to follow some of the thoughts presented below on Ideone, sort of minimally invasive. (The doc-comments are barely readable without tool support - I used Eclipse Mars. I've taken it quite a bit further - too early to put that in the open.)
Code should be commented - at least javadoc comments for public (and protected) members. (I'm still yearning for a quotable, easy to understand presentation of LCP merge-sort, esp. an intuition why the LCP handling in merge is sufficient, what string is the other one in common. Not quite easy enough: A. Eberle's thesis "3.1.1. LCP-Compare" and "3.1.2. Binary LCP-Merge [and] Mergesort" (pp. 19-22).)
Consider making sortImpl an instance method (you didn't hide the constructor, anyway), instead of passing around the arrays (and the ComparisonResult) in each and every call; lcpCompare() too, while you're at it. (non-trivial)
(Using a static sortImpl,) I'd drop auxArray:
sortImpl(array.clone(), array, …, new ComparisonResult()).
Cloning the array as opposed to instantiation deserves a comment
interchanging the roles of source and target in the recursive calls in sortImpl deserves a comment.
int k in lcpCompare() might be named from, prefixLength, mightDiffer or postPrefix.
I'd much prefer sourceLCP/targetLCP over …LCPArray.
I'd rather have index increment in the "copy rest of run loops" look consistent with the merge loop.
I'm not sure my reasoning about the LCP handling in merge() is correct: please comment this part of your code. (and check the @param comments on Ideone)
(Roll-your-own micro benchmarks are even more dangerous than tool-supported ones - I don't intend to go into main() & co. any further.) | {
"domain": "codereview.stackexchange",
"id": 17829,
"tags": "java, algorithm, strings, sorting, mergesort"
} |
inferring missing objects | Question:
Hi, I am starting to build a simple robot that can infer missing objects on a table, like this:
First, I want to list everything I need to do this:
1- Kinect: can I use any type of Kinect, or must it be an OpenNI Kinect? Or can I use a normal camera? Why?
2- Is the simulation in the picture above Gazebo? How can I link my simulation with reasoning and KnowRob?
3- What is the relation between my simulation and the picture captured by my Kinect or camera?
Thanks :)
Originally posted by salma on ROS Answers with karma: 464 on 2012-10-11
Post score: 1
Original comments
Comment by yangyangcv on 2012-10-11:
too little info. where do you get this picture?
Comment by salma on 2012-10-11:
It is a simple target: using a Kinect or camera to just infer missing components using semantic mapping for objects. This picture is from a PDF on semantic mapping.
Comment by salma on 2012-10-12:
this pdf
http://www.mediafire.com/view/?8ujbn5q7gqowk9v
Answer:
Though inferring the missing objects may appear simple, it is in fact quite a complex problem that requires the integration of perception, knowledge representation and statistical relational models. You can have a look at our paper "Combining Perception and Knowledge Processing for Everyday Manipulation" to get an idea of the techniques we have used.
The corresponding ROS packages are comp_missingobj, prolog_perception, mod_probcog, srldb, and the knowrob stack. The experiment was done in early 2010, however, and I am not sure if all components still work in current ROS versions, especially prolog_perception may make problems since we haven't used it in a while. In addition, you will need an object recognition system that can give you the types and positions of objects.
The visualization is not gazebo, by the way, but the visualization module of the knowrob knowledge base (package mod_vis).
Originally posted by moritz with karma: 2673 on 2012-10-13
This answer was ACCEPTED on the original site
Post score: 5 | {
"domain": "robotics.stackexchange",
"id": 11332,
"tags": "knowrob"
} |
Where do the terms involving $\Phi$ in parentheses come from in the static weak field metric? | Question: I am confused about the static weak field metric. As written in Hartle, it reads
\begin{equation}
ds^2 =-\left(1+\frac{2\Phi(x^i)}{c^2}\right)(cdt)^2 +\left(1-\frac{2\Phi(x^i)}{c^2}\right)(dx^2+dy^2 +dz^2)
\end{equation}
From what I read, he doesn't derive it and I can't seem to find a derivation just by googling.
My Question
Where do we get the terms involving $\Phi$ in parentheses from?
Answer: Let $h_{\mu\nu}$ be a small perturbation of the Minkowski metric, i.e. $h_{\mu\nu}=g_{\mu\nu}-\eta_{\mu\nu}$. We then define the quantity $\gamma_{\mu\nu}=h_{\mu\nu}-\frac{1}{2}\eta_{\mu\nu}\eta_{\rho\sigma}h^{\rho\sigma}$. It can be shown$^1$ that the Einstein field equations take the simple form
$$\gamma_{\mu\nu}(x)=4G\int\frac{T_{\mu\nu}(t-|\mathbf{x}-\mathbf{x}'|,\mathbf{x}')}{|\mathbf{x}-\mathbf{x}'|}\,\mathrm{d}^3x'.$$
We then consider nearly Newtonian sources, with $T_{00}\gg |T_{0j}|,|T_{ij}|$ and such small velocities that the retardation effects in the above integral are negligible$^2$. Then, to leading order,
$$\gamma_{00}=-4\Phi,\quad \gamma_{0j}=\gamma_{ij}=0,$$
where $\Phi$ is the Newtonian potential
$$\Phi(x)=-G\int\frac{T_{00}(t,\mathbf{x}')}{|\mathbf{x}-\mathbf{x}'|}\,\mathrm{d}^3x'.$$
Using
$$g_{\mu\nu}=\eta_{\mu\nu}+\gamma_{\mu\nu}-\frac{1}{2}\eta_{\mu\nu}\eta_{\rho\sigma}\gamma^{\rho\sigma}$$
we get
$$g=-(1+2\Phi)\mathrm{d}t^2+(1-2\Phi)\mathrm{d}\mathbf{x}^2.$$
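Filling in the intermediate trace computation explicitly:
$$\eta_{\rho\sigma}\gamma^{\rho\sigma}=\eta^{\rho\sigma}\gamma_{\rho\sigma}=\eta^{00}\gamma_{00}=(-1)(-4\Phi)=4\Phi,$$
so
$$h_{00}=\gamma_{00}-\tfrac{1}{2}\eta_{00}(4\Phi)=-4\Phi+2\Phi=-2\Phi,\qquad h_{ij}=0-\tfrac{1}{2}\delta_{ij}(4\Phi)=-2\Phi\,\delta_{ij},$$
which gives $g_{00}=\eta_{00}+h_{00}=-(1+2\Phi)$ and $g_{ij}=(1-2\Phi)\delta_{ij}$, i.e. exactly the parenthesized $\Phi$ terms of the line element (here in units with $c=1$; restoring $c$ turns $2\Phi$ into $2\Phi/c^{2}$).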
Alternatively, this may be derived in the Post-Newtonian scheme. For this derivation, cf. S. Weinberg, Gravitation and Cosmology (1972) Equations (9.1.57) and (9.1.60).
$^1$ See, e.g. N. Straumann, General Relativity (2013) Section 5.1.
$^2$ In other words, the time dependence of $T_{\mu\nu}$ is negligible. | {
"domain": "physics.stackexchange",
"id": 20395,
"tags": "general-relativity, newtonian-gravity, approximations"
} |
Using Moveit! to actually control a robot | Question:
Hey Guys,
First off, if the authors of MoveIt! get a chance to read this post, I just want to say thanks and congratulate them on a great piece of software, which opens up lots of opportunities for us as researchers to work with robots easily and effectively.
I am currently running Moveit! 0.4 alpha with ROS Groovy on Ubuntu 12.04. We had a new robot from Meka Robotics that I am trying to get Moveit! to work on. I followed the first two quick start tutorials on the Moveit!: setup assistant and rviz plugin. Both of them worked without any problems and I am able to "plan" and see the animation of either the arms, base, or zlift (specific to the Meka) move. However, when I came to the gazebo part, I could not get it to work, primarily because as of now there is no gazebo model for the Meka robot and also gazebo just would not start for some reason. I have decided to bypass the gazebo part for now and go directly to controlling the robot. Even for this I'm guessing I would need to set up the controllers.yaml and a controller_manager.launch for my robot. I was hoping I could get some help on that.
If I understand Moveit! correctly, the way it works is it plans and publishes the "plan" to some topics, and I would have to write some code to subscribe to that and get it to the controllers for the robot. Please let me know if my understanding is incorrect. Are there instructions, tutorials, or even examples of working code that show how to do the entire pipeline: plan, publish to the robot, execute by the robot, etc., and how we can customize this to our own robot? If someone can point me in the correct direction I would be very grateful.
Thanks for your time.
Originally posted by Sudarshan on ROS Answers with karma: 113 on 2013-09-29
Post score: 6
Answer:
MoveIt uses a controller manager plugin to turn its internal plans (of type moveit_msgs::RobotTrajectory) into some form of output that robots actually understand.
The most widely used plugin is the MoveItSimpleControllerManager, which is found in moveit_plugins (https://github.com/ros-planning/moveit_plugins). This plugin turns the RobotTrajectory into a call to a control_msgs/FollowJointTrajectoryAction action server. If you already have a FollowJointTrajectoryAction action server, setup pretty much consists of entering your joint names and the namespace of the action server to a controllers.yaml and loading that yaml file.
If you don't have the standard FollowJointTrajectoryAction, you will either have to implement that action in your robot drivers/controllers, or create a new plugin that publishes the non-standard topics/actions you are using.
There are several examples of controllers.yaml with the simple plugin, which are found in moveit_robots package (https://github.com/ros-planning/moveit_robots). I'm going to avoid posting those here, as copied code here is likely to eventually go out of date.
Originally posted by fergs with karma: 13902 on 2013-09-29
This answer was ACCEPTED on the original site
Post score: 6
Original comments
Comment by Sudarshan on 2013-09-29:
The manufacturer's controller code lists and subscribes to some ROS topics that when published to will move the robot. I think creating a new plugin that takes the result produced by Moveit! and publishes to these topics would work. Is there any documentation on how to do this or something similar?
Comment by fergs on 2013-09-29:
Best documentation is probably to read/modify an existing plugin. The interface for the plugin is well defined by it's API, the RobotTrajectory message is also pretty well defined.
Comment by PeterMilani on 2013-09-29:
I've a similar sort of question though not specific to MoveIt! http://answers.ros.org/question/84546/implementing-realtime-controllers-with-ros_control/. So the program that interfaces to the hardware just has to present a FollowJointTrajectoryAction server to the simple CM in order to use moveit?
Comment by fergs on 2013-09-29:
Generally speaking there are several levels of "standard" interface available. One of the upper layers is the ROS messages/actions themselves -- and FollowJointTrajectoryAction is a very standard one to use indeed. A layer lower would be whatever interface ros_control uses internal.
Comment by Sudarshan on 2013-09-30:
Please forgive me for the trivial question. To implement a FollowJointTrajectoryAction server I can follow the tutorial here: http://wiki.ros.org/actionlib/Tutorials? Essentially will this "action server" will set inbetween the robots controller topics and moveit's planning mechanism? Thanks!
Comment by tobiasfeil1993 on 2017-11-27:
The repository has moved and the new link is https://github.com/ros-planning/moveit/tree/kinetic-devel/moveit_plugins | {
"domain": "robotics.stackexchange",
"id": 15702,
"tags": "moveit"
} |
In supervised learning, what does "Estimating $p(y \vert x)$" mean? | Question: I read chapter 5.1.3 of Yoshua Bengio's deep learning book, which says:
supervised learning involves observing examples of a random vector $\textbf{x}$ and an associated value or vector $\textbf{y}$, and learning to predict $\textbf{y}$ from $\textbf{x}$ by estimating $p(\textbf{y} \vert \textbf{x})$.
What does $p(\textbf{y} \vert \textbf{x})$ mean?
From basic statistics, I know that $p(\textbf{y} \vert \textbf{x})=\frac{p(\textbf{y},\textbf{x})}{p(\textbf{x})}$.
How do we find $p(\textbf{y},\textbf{x})$ and $p(\textbf{x})$?
Answer: You are correct that
\begin{equation*}
p(y|\mathbf{x})=\dfrac{p(\mathbf{x},y)}{p(\mathbf{x})}.
\end{equation*}
Similarly, we can write the joint probability $p(\mathbf{x},y)$ as follows:
\begin{equation*}
p(\mathbf{x},y)=p(\mathbf{x}|y)\cdot p(y)
\end{equation*}
From the above two equations, we obtain
\begin{equation*}
p(y|\mathbf{x})=\dfrac{p(\mathbf{x}|y)\cdot p(y)}{p(\mathbf{x})}
\end{equation*}
In the context of supervised learning, the variable $y$ is used to denote the class label, and the vector $\mathbf{x}$ the measurement or feature vector. For the purpose of discussion, let us assume that the class label $y$ takes values in the set $\{1,2\}$, where $1$ denotes male and $2$ denotes female. Similarly, $\mathbf{x}$ is a measurement vector on two variables, say $(x_{1},x_{2})$, where $x_{1}$ stands for the height and $x_{2}$ for the weight of individuals.
$p(y|\mathbf{x})$ denotes the posterior density for $y$ given the observation $\mathbf{x}$. For example, $p(1|\mathbf{x})$ is the probability that, given the observation $\mathbf{x}$, the sample belongs to the class of males. Similarly, we can interpret $p(2|\mathbf{x})$.
$p(\mathbf{x}|y)$ stands for the class conditional probability density. For example, $p(\mathbf{x}|1)$ denotes the probability density for males and $p(\mathbf{x}|2)$ the probability density for females, respectively. In supervised learning, these class conditional densities are usually known in advance. Finally, $p(y)$ denotes the prior probability for the class label $y$. For example, $p(1)$ denotes the probability that an individual/example selected randomly from the population is a male.
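A tiny numeric sketch of these quantities (all numbers invented, with a single discrete feature standing in for the density):

```python
# Toy numbers (entirely made up) for the male/female example, with one
# discrete feature x in {"short", "tall"} in place of a continuous density.
prior = {1: 0.5, 2: 0.5}                      # p(y): 1 = male, 2 = female
likelihood = {                                # p(x|y), class conditionals
    1: {"short": 0.3, "tall": 0.7},
    2: {"short": 0.8, "tall": 0.2},
}

def posterior(x):
    """p(y|x) for each class y, computed via Bayes' rule."""
    joint = {y: likelihood[y][x] * prior[y] for y in prior}   # p(x|y) p(y)
    evidence = sum(joint.values())                            # p(x)
    return {y: joint[y] / evidence for y in joint}

post = posterior("tall")
# The Bayes decision compares only the numerators p(x|y) p(y); the
# normalizer p(x) cancels, which is why the classifier never needs it.
decision = max(prior, key=lambda y: likelihood[y]["tall"] * prior[y])
```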
Assuming that the prior probabilities for the classes and the class conditional densities are known, Bayes' rule says to assign an observation $\mathbf{x_{0}}$, whose class label is not known, to the class male if
\begin{equation*}
p(\mathbf{x_{0}}|1)p(1)>p(\mathbf{x_{0}}|2)p(2).
\end{equation*}
Note that, $p(\mathbf{x})$ is not used in the classifier. Classification rules usually require class conditional densities. | {
"domain": "datascience.stackexchange",
"id": 1243,
"tags": "machine-learning, supervised-learning, probability"
} |
Why is E.coli used as a model? | Question: Is there a reason for the choice of E.coli as a model for many bacterial systems? Other bacteria such as B.subtilis are also used, but why is E. coli preferred?
Answer: Short answer. It was discovered pretty early (late 1800's). It is easy to get (you probably know where it comes from), purify, and grow, and is not virulent. E. coli also divides very rapidly (a roughly 30-minute division time).
Why this one in particular and not another similar bacterium? Well, you have to choose something at some stage, and usually the more an organism is used, the more valuable it becomes, meaning more and more scientists start to use it. That creates a feedback loop favoring certain organisms, as happened with E. coli.
E. coli started to be studied because Theodor Escherich wanted to prove the "germ theory of disease", which led him, and later other scientists, to work with E. coli, and everybody uses it today.
By the way, the mouse model is actually quite interesting. Started by breeding huge amount of mice for selling them as pets. | {
"domain": "biology.stackexchange",
"id": 3601,
"tags": "biochemistry, molecular-biology, bacteriology, history"
} |
Why do electrostatic potentials superimpose? | Question: I've been trying to convince myself of the assertion that I've read in basic E&M books (Halliday & Resnick, Purcell), and even Griffiths: that the electrostatic potential at a point in space is equal to the sum of the potential contributions from each of the individual charges. My hang-up has been the direction in 3D space from which a path integral is brought in from infinity, given that there are multiple charges, each creating its own line with the arbitrary point, and each line extends to infinity along a different direction. (I hope my meaning is not lost here.)
Would this reasoning starting from a system of one charge, and then adding additional charges, be sufficient to explain it:
1) For a system of one charge, the electrostatic potential at an arbitrary point is the negative path integral of the field dot ds from infinity, in the direction along the line connecting the point to the charge.
2) Adding a second charge, the electrostatic potential at the same arbitrary point is the negative path integral of the sum of the fields dot ds from infinity (in the same direction as 1) to the point.
Can this sum be broken down as such:
-a) the path integral of the sum of the fields is the sum of the path integrals from the field of the original charge E1 and the path integral of the field of the second charge E2
-b) the path integral of the field of the second charge dot ds (along the original direction), because of the nature of conservative fields, is equal to the sum of
---i) the path integral of the field of the second charge along the line aligning the second charge to the arbitrary point, given in the usual form: V = kq/r. This is in a different direction than the path integral for charge 1, but the change in potential is described by the equation here.
---ii) the path integral of the field along the connecting arc at infinite distance. This connecting arc path integral has length proportional to r (going to infinity), but field strength inversely proportional to r squared, so that this component becomes negligible.
So the potential is then the sum of the original path integral of one charge and the path integral of the second charge along a different direction to infinity. This logic is repeated for additional charges.
I can draw a picture of my thinking if people suggest that in the comments.
Answer: Alternatively, potentials superpose because electric fields superpose:
$$
V=-\int_\infty^r \sum_{s}\vec E_s\cdot d\vec \ell=\sum_s \left(-\int_\infty^r \vec E_s\cdot d\vec\ell\right)=\sum_s V_s
$$
where
$$
V_s=-\int_\infty^r \vec E_s\cdot d\vec\ell
$$
is the potential due to source $s$ creating the field $\vec E_s$ of that source. The sum becomes an integral when the source distribution is continuous.
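As a numerical sanity check of this superposition (the charge values and geometry below are invented for illustration): integrating the summed field of two point charges along a straight ray in from far away reproduces $\sum_s kq_s/r_s$:

```python
import math

K = 8.9875517923e9  # Coulomb constant, N m^2 / C^2
charges = [(1e-9, (0.0, 0.0)), (-2e-9, (1.0, 0.0))]  # hypothetical (q, position)

def E_total(x, y):
    ex = ey = 0.0
    for q, (cx, cy) in charges:
        dx, dy = x - cx, y - cy
        r3 = (dx * dx + dy * dy) ** 1.5
        ex += K * q * dx / r3
        ey += K * q * dy / r3
    return ex, ey

def V_line_integral(px, py, n=100000):
    # V(p) = -∫_∞^p E·dl along a straight ray; the substitution t = (1-s)/s
    # maps the infinite path onto s in (0,1), handled with the midpoint rule
    ux, uy = 0.28, 0.96  # arbitrary unit direction pointing away from p
    total, ds = 0.0, 1.0 / n
    for k in range(n):
        s = (k + 0.5) * ds
        t = (1.0 - s) / s
        ex, ey = E_total(px + t * ux, py + t * uy)
        total += (ex * ux + ey * uy) / (s * s) * ds
    return total

def V_sum_of_coulomb(px, py):
    return sum(K * q / math.hypot(px - cx, py - cy) for q, (cx, cy) in charges)

print(V_line_integral(0.5, 1.0), V_sum_of_coulomb(0.5, 1.0))  # the two agree (about -8.04 V)
```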
Note that, because the field is conservative, the integral from $\infty$ to $\vec r$ does not depend on the path taken. Thus, from a computational perspective, it is always possible to move on a path of arbitrary large radius without changing $V$ so that the path for each individual $\vec E_s\cdot d\vec \ell$ is radial, thus simplifying each integral, irrespective of the initial starting point at $\infty$. | {
"domain": "physics.stackexchange",
"id": 55707,
"tags": "electrostatics, potential, superposition, linear-systems"
} |
Finding Mass of Star with only Luminosity | Question: Estimate the mass of the star given this formula:
$$\frac{L}{L_\mathrm{\odot}} = \left(\frac{M}{M_\mathrm{\odot}}\right)^{3.5}$$
Given $L= 2.752\times10^{28}\,\mathrm{W}$, how do I find the mass of the star?
Thanks for any help.
Answer: Sounds like homework (correct me if that assumption is wrong, as you may want a somewhat different answer then), so here is a hint:
Clean up the equation first, to something that looks more like an equation for mass:
$$\frac{L}{L_{sun}} = \left(\frac{M}{M_{sun}}\right)^{3.5}$$
$$\left(\frac{L}{L_{sun}}\right)^{\frac{1}{3.5}} = \frac{M}{M_{sun}}$$ | {
"domain": "astronomy.stackexchange",
"id": 1489,
"tags": "fundamental-astronomy, luminosity"
} |
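Carrying the hint in the answer above through to a number (a sketch: the solar luminosity value is my assumption, not part of the original exchange):

```python
L = 2.752e28        # W, luminosity given in the question
L_sun = 3.828e26    # W, assumed IAU nominal solar luminosity
mass_ratio = (L / L_sun) ** (1 / 3.5)
print(mass_ratio)   # M / M_sun, roughly 3.4
```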
$\mathcal{N} \ge 2$ Supersymmetry massive supermultiplets | Question: In Bertolinis SUSY notes [https://people.sissa.it/~bertmat/susycourse.pdf] we have defined:
$$
\{Q^I_\alpha,\bar{Q}_\dot{\beta}^J\}=2m\delta_{\alpha\dot{\beta}}\delta^{IJ}\tag{3.24}
$$
$$
\{Q^I_\alpha,Q^J_\beta\}=\epsilon_{\alpha\beta}Z^{IJ}
$$
$$
\{\bar{Q}_{I\dot{\alpha}},\bar{Q}_{J\dot{\beta}}\}=\epsilon_{\dot{\alpha}\dot{\beta}}\bar{Z}_{IJ}
$$
Also we define:
$$
a_\alpha=\frac{1}{\sqrt{2m}}Q_\alpha\;\;\;\;\;a_\dot{\alpha}^\dagger=\frac{1}{\sqrt{2m}}\bar{Q}_\dot{\alpha}
$$
Lastly, the central charges $Z^{IJ}$ can be written in the form
$$Z^{IJ}=
\left(\matrix{0 & Z_1 \\-Z_1 & 0\\&&0&Z_2\\&&-Z_2&0\\&&&&\ddots\\&&&&&0&Z_{\mathcal{N}/2}\\&&&&& -Z_{\mathcal{N}/2}&0\\}
\right)\tag{3.28}
$$
(Where the charges are non-zero only for even $\mathcal{N}$)
From these we define the following:$$
a^r_\alpha=\frac{1}{\sqrt{2}}\left(Q_\alpha^{2r-1}+\epsilon_{\alpha\beta}(Q_\beta^{2r})^\dagger\right)
$$
$$
b^r_\alpha=\frac{1}{\sqrt{2}}\left(Q_\alpha^{2r-1}-\epsilon_{\alpha\beta}(Q_\beta^{2r})^\dagger\right)
$$
where $r= 1,\dots,\mathcal{N}/2$
These equations satisfy the oscillator algebra:
$$
\{a^r_\alpha,(a^s_\beta)^\dagger\}=(2m+Z_r)\delta_{rs}\delta_{\alpha\beta}
$$
$$
\{b^r_\alpha,(b^s_\beta)^\dagger\}=(2m-Z_r)\delta_{rs}\delta_{\alpha\beta}
$$
How does one "see" that we need to define those equations for $a^r_\alpha$,$b^r_\alpha$?
Answer: The point is to devise an algorithm that constructs supermultiplets given only the commutation relations.
Since you have a nice basis of harmonic oscillator esque creation/annihilation operators, you can define a supermultiplet by postulating a vacuum state $|s\rangle$ annihilated by $a,b$, so that every state in the supermultiplet is given by hitting $|s\rangle$ with creation operators $a^\dagger,b^\dagger$. Since the $a^\dagger$ and $b^\dagger$ are all fermionic the procedure will terminate at a finite number of steps, leading to a finite-dimensional supermultiplet.
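One can make this explicit by checking directly that the $a^r_\alpha$, $b^r_\alpha$ are exactly the combinations that diagonalize the centrally extended algebra. A sketch of the computation (suppressing the dotting of conjugate indices in the rest frame, and taking the $Z_r$ real, which one can always arrange by a phase rotation):

```latex
\begin{aligned}
\{a^r_\alpha,(a^s_\beta)^\dagger\}
&=\tfrac{1}{2}\Big[\{Q^{2r-1}_\alpha,(Q^{2s-1}_\beta)^\dagger\}
 +\epsilon_{\beta\gamma}\{Q^{2r-1}_\alpha,Q^{2s}_\gamma\}
 +\epsilon_{\alpha\delta}\{(Q^{2r}_\delta)^\dagger,(Q^{2s-1}_\beta)^\dagger\}
 +\epsilon_{\alpha\delta}\epsilon_{\beta\gamma}\{(Q^{2r}_\delta)^\dagger,Q^{2s}_\gamma\}\Big]\\
&=\tfrac{1}{2}\big[\,2m+Z_r+\bar{Z}_r+2m\,\big]\,\delta_{\alpha\beta}\,\delta_{rs}
 =(2m+Z_r)\,\delta_{rs}\,\delta_{\alpha\beta},
\end{aligned}
```

using $\epsilon_{\beta\gamma}\epsilon_{\alpha\gamma}=\delta_{\alpha\beta}$ and $Z^{2r-1,\,2s}=Z_r\,\delta_{rs}$ from the block form (3.28). For $b^r_\alpha$ the relative minus sign flips the two central-charge cross terms, giving $2m-Z_r$ instead. So the motivation for these particular combinations is that they turn the algebra into decoupled fermionic oscillators; positivity of the anticommutators then also yields the BPS bound $2m\ge|Z_r|$.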
You get different supermultiplets by postulating different $|s\rangle$s. You can label them by Poincare casimirs like helicity, mass squared etc. and central charge | {
"domain": "physics.stackexchange",
"id": 65340,
"tags": "operators, supersymmetry, anticommutator, superalgebra"
} |
Why doesn't finite field propagation speed contradict Gauss's law? | Question: [Edit] Not sure why this was closed. The answers there do not answer my question, and are not even correct...
Imagine a charge sitting in space. It causes an electric field everywhere, with magnitude $\propto r_{old}^{-2}$
But now let's say we move the charge a little. This will change the electric field everywhere to be $\propto r_{new}^{-2}$. However, according to Maxwell's other equations (I'm told), this propagation is not instant, but happens with a finite speed $c$.
Let's draw a box that intersects the boundary of this wavefront. One end of the box will have electric field vectors with magnitude $\propto r_{old}^{-2}$, while the other end will be $\propto r_{new}^{-2}$. These are not equal, so $\nabla \cdot E \neq 0$. But since there's no charge in the box, this should be impossible according to Gauss's law!
What's going on?
Answer: You're right: if there were just a Coulomb field outside some expanding shell, and a different Coulomb field inside the shell, then Gauss's law wouldn't hold, as can plainly be seen by drawing a Gaussian surface that straddles the shell.
However, the shell itself contains an additional, transverse electric field. This is the pulse of radiation produced by accelerating the charge, and it ensures that the flux through the Gaussian surface is zero. To see this visually, note that having zero flux through a Gaussian surface is equivalent to an equal number of electric field lines enter and exit.
Now consider the Gaussian surface drawn in red.
Four field lines enter it radially and only one exits radially. But three extra field lines exit transversely, so the radiation field ensures that Gauss's law keeps working. (And it keeps working no matter how quickly you kick the charge: kicking it faster makes the shell narrower, but the radiation field larger as well.)
In fact, this is one of the nicest ways of deriving the radiation field; see Appendix H of Purcell and Morin, Electricity and Magnetism for a full derivation using this method. | {
"domain": "physics.stackexchange",
"id": 69760,
"tags": "electromagnetism, special-relativity, speed-of-light, gauss-law, causality"
} |
Why is carbon dioxide bent in its first excited state? | Question: R. H. Crabtree writes in The Organometallic Chemistry of the Transition Metals (p 145, 6th ed.) that
$\ce{CO2}$ is linear in the free state but bent [...] in the first excited state...
Why is this so? (The context refers to an electronically excited state.)
Answer: This effect is the so-called Renner–Teller effect and is a consequence of a coupling between vibrational and electronic motion and thus a breakdown of the Born–Oppenheimer approximation. You might be familiar with the Jahn–Teller effect, which is related to the Renner–Teller effect. In degenerate states of linear molecules with three or more atoms, the total energy of the molecule can be reduced by bending the molecule. Consider a triatomic linear molecule in a degenerate state (such as $\ce{CO2}$ in its excited $\Pi$ state). When the molecule bends, the degeneracy is lifted, as it can bend in the $x$ or $y$ direction. Because of symmetry requirements, the potential has to be an even function of the bending coordinate. Ignoring the lifting of the degeneracy, we may approximate the zero-order potential with a Taylor expansion as
$$
V^0=ar^2+br^4+\cdots,
$$
where $r$ is the bending coordinate and $a$ and $b$ are constants describing the potential ignoring the vibronic interaction. When we do consider the vibronic interaction, the degenerate potential splits in two, but the splitting must still be an even function of $r$.
$$
\Delta V=\alpha r^2 + \beta r^4 + \cdots
$$
If the interaction is so large that $a<\alpha/2$, the potential will not have its minimum at $r=0$, but will have two minima at $\pm r_e$ and the molecule has a lower energy when it is bent.
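A small numerical illustration of this (the coefficients are invented, and I take the two branches as $V_\pm=V^0\pm\Delta V/2$): when $a<\alpha/2$, the lower branch has its minimum away from $r=0$, i.e. the bent geometry is lower in energy.

```python
import math

# invented coefficients for V^0 = a r^2 + b r^4 and the splitting
# ΔV = α r^2 + β r^4; chosen so that a < α/2 (strong vibronic coupling)
a, b = 1.0, 0.5
alpha, beta = 4.0, 0.2

A = a - alpha / 2   # quadratic coefficient of the lower branch V_- = V^0 - ΔV/2
B = b - beta / 2    # quartic coefficient (assumed positive, so V_- is bounded)

def V_lower(r):
    return A * r**2 + B * r**4

# analytic minimum when A < 0: dV/dr = 2Ar + 4Br^3 = 0  ->  r_e = sqrt(-A/(2B))
r_e = math.sqrt(-A / (2 * B))

# a crude grid search agrees, and the bent minimum lies below V_-(0)
r_min = min((i * 1e-4 for i in range(30001)), key=V_lower)
print(r_e, r_min, V_lower(r_min) < V_lower(0.0))
```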
See also Herzberg Vol 3, p 26. | {
"domain": "chemistry.stackexchange",
"id": 7890,
"tags": "physical-chemistry, molecular-orbital-theory"
} |
Random walk with equal probability | Question: This method randomly increments or decrements (with equal probability) until either -3 or 3 is reached. I'd like to keep printing the value of pos on each iteration, and print the largest positive number max reached at the end of the loop. How can I improve this method using the Random() function? I know this sounds pretty ambiguous, but I'd like to know if I can make rand.nextInt(2) == 0 an easier condition to understand. Any suggestions/improvements are welcome!
public static void randomWalk() {
Random rand = new Random();
int pos = 0;
int max = 0;
System.out.println("position = " + pos);
while (pos > -3 && pos < 3) {
if (rand.nextInt(2) == 0) {
pos++;
if (pos > 0) {
max = pos;
}
} else {
pos--;
}
System.out.println("position = " + (int) pos);
}
System.out.println("max position = " + max);
}
Answer: There's a minor bug there:
if (pos > 0) {
max = pos;
}
should test pos > max instead.
Use
max = Math.max(max, pos)
to make it shorter.
I'd like to know if I can make rand.nextInt(2) == 0 an easier condition to understand?
Use rand.nextBoolean(). For a probability of exactly 50%, it's perfect.
For other probabilities, you can use
rand.nextDouble() < probability
Another possibility is simply
pos += 2 * rand.nextInt(2) - 1;
max = Math.max(max, pos);
or
pos += rand.nextBoolean() ? 1 : -1;
max = Math.max(max, pos);
or
private static final int[] DELTAS = {+1, -1};
pos += DELTAS[rand.nextInt(DELTAS.length)];
max = Math.max(max, pos);
without any conditionals. You may consider it tricky, but it isn't. Obviously, my last solution shouldn't be used for such a simple case.
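If it helps to see the corrected logic end to end, here is a quick sketch of the same walk (in Python only to keep the illustration compact; the point being demonstrated — updating the maximum with max(max, pos) on every step — carries over to the Java version unchanged):

```python
import random

def random_walk(seed=None):
    # same walk as above, with the max tracked via max(highest, pos) each step
    rng = random.Random(seed)
    pos, highest = 0, 0
    visited = [0]
    while -3 < pos < 3:
        pos += 1 if rng.random() < 0.5 else -1  # fair coin step
        highest = max(highest, pos)
        visited.append(pos)
    return pos, highest, visited

end, highest, visited = random_walk(seed=42)
print(end, highest)  # the walk ends at -3 or 3; highest equals max(visited)
```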
Concerning the computation of max it may be slightly less efficient since it gets executed even if the value decreases. But this doesn't matter unless you need to optimize heavily (your print is many orders of magnitude slower than this). | {
"domain": "codereview.stackexchange",
"id": 14365,
"tags": "java, random"
} |
Cmake Error from openni2_camera | Question:
Hi all.
Does anybody know how I can correct these?
CMake Error at /usr/share/cmake-2.8/Modules/FindPkgConfig.cmake:266 (message):
A required package was not found
Call Stack (most recent call first):
/usr/share/cmake-2.8/Modules/FindPkgConfig.cmake:320 (_pkg_check_modules_internal)
openni2_camera/CMakeLists.txt:9 (pkg_check_modules)
CMake Error at openni2_camera/CMakeLists.txt:26 (message):
message called with incorrect number of arguments
Can't get past this one no matter what.
Thanks for any help.
Originally posted by psprox96 on ROS Answers with karma: 97 on 2015-06-22
Post score: 0
Answer:
If your Distribution does not have a OpenNi2-package, you need to build OpenNi2 from source and add a pkgconfig-file, e.g. /usr/local/lib/pkgconfig/openni2.pc:
prefix=/usr/local/OpenNI2
includedir=${prefix}/Include
libdir=${prefix}/Bin/Arm-Release
Name: openni2
Description: OpenNI2
Version: 2.2.0.33
Cflags: -I${includedir}
Libs: -L${libdir} -lOpenNI2
Originally posted by Humpelstilzchen with karma: 1504 on 2015-06-23
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by psprox96 on 2015-06-23:
Thanks! It works! I installed:
sudo apt-get install libopenni2-dev libopenni2-0
And corrected the cmakelists.txt line 26 to:
message ("${PC_OPENNI2_LIBRARY_DIRS}")
Comment by Rufus on 2021-03-17:
Alternatively, you can run
rosdep install --from-paths . --ignore-src --rosdistro <your distro>
to automatically install required dependencies | {
"domain": "robotics.stackexchange",
"id": 21984,
"tags": "ros, catkin, openni2-camera, package, ros-groovy"
} |
Formula for curl in polar coordinates using covariant differentiation | Question: For the plane in polar coordinates $(r,\theta)$ with metric $$ds^2=dr^2+r^2d\theta^2,$$
the curl on a vector field $v^a\partial_a$ is given by the rank-2 antisymmetric tensor $\nabla_av_b-\nabla_bv_a$. This tensor has only one independent & non-zero component:
$$\nabla_rv_\theta-\nabla_\theta v_r=\partial_rv_\theta - \Gamma_{r\theta}^av_a -\partial_\theta v_r +\Gamma^a_{\theta r}v_a=\partial_rv_\theta-\partial_\theta v_r$$
since $\Gamma^a_{\theta r} = \Gamma^a_{r \theta }$. Expressed in contravariant vector components, $$\nabla_rv_\theta-\nabla_\theta v_r=\partial_r (g_{\theta a}v^a)-\partial_\theta (g_{rb}v^b)=\partial_r(r^2v^\theta)-\partial_\theta(v^r)=2rv^\theta+r^2\partial_rv^\theta-\partial_\theta v^r$$
I am trying to compare this to the formula for curl in vector calculus:
$$\nabla\times \textbf{v} = \frac{1}{r}\left[\frac{\partial}{\partial r}(rv^\theta) - \frac{\partial}{\partial\theta}v^r\right] \vec{\hat{z}}=\left[\frac{1}{r}v^\theta+\partial_rv^\theta-\frac{1}{r}\partial_\theta v^r\right]\vec{\hat{z}}$$, where $\vec{\hat{z}}$ is the unit vector pointing out of the plane.
The difference in the two formulas is because in manifolds we used the coordinate basis $\partial_\theta$ while in vector calculus we used the normalized coordinate basis $\frac{1}{r}\partial_\theta$. In the coordinate basis, $\textbf{v}=v^r\partial_r+v^\theta\partial_\theta$ while in the normalized coordinate basis, $\textbf{v}=v^r\partial_r+v^\theta\frac{1}{r}\partial_\theta$.
Thus to convert between the 'manifold' formula and the 'vector calculus' formula, we just have to rescale the $v^\theta$ component of the vector.
From the 'manifold' formula, replace $v^\theta\rightarrow \frac{v^\theta}{r}$ to get the 'vector calculus' formula:
$$\nabla_rv_\theta-\nabla_\theta v_r=2rv^\theta+r^2\partial_rv^\theta-\partial_\theta v^r \rightarrow2r\frac{v^\theta}{r}+r^2\partial_r\frac{v^\theta}{r}-\partial_\theta v^r =v^\theta+r\partial_rv^\theta -\partial_\theta v^r$$
However, this answer is larger than the correct formula by a factor of $r$. Why is that so?
Answer: Say $(M,g)$ is a $2$-dimensional oriented Riemannian manifold, and let $\star$ be the Hodge star operator. Given a vector field $F$ on $M$, we define
\begin{align}
\text{curl}(F)&:=\star d(g^{\flat}F)
\end{align}
i.e we take our vector field $F$, convert it to a covector field $g^{\flat}(F)$, take its exterior derivative (so we now have a $2$-form) and finally take its Hodge dual to get a smooth function (since we're on a $2$-dimensional manifold). The fact that the curl of a vector field in $2$-dimensions yields a smooth function corresponds to your observation that there's only one non-vanishing term. The thing you're missing is the final Hodge star (the extra $r$ you have is the same $r$ in $dx\wedge dy=r\,dr\wedge d\theta$).
Explicitly, suppose we're in the plane and using polar coordinates. Suppose $F=f^r\mathbf{e}_r+ f^{\theta}\mathbf{e}_{\theta}$ is the expansion in terms of unit vector fields (as in vector calculus). Then, $g^{\flat}(F)=f^r\,dr +f^{\theta}\,rd\theta$ (here this factor of $r$ is the norm of $\frac{\partial}{\partial \theta}$, i.e $\sqrt{g_{\theta\theta}}=\sqrt{r^2}=r$). So,
\begin{align}
d(g^{\flat}(F)) &=
\left(\frac{\partial f^r}{\partial r}\,dr+\frac{\partial f^r}{\partial \theta}\,d\theta\right)\wedge dr +
\left(\frac{\partial (rf^{\theta})}{\partial r}\,dr+\frac{\partial (r f^{\theta})}{\partial \theta}\,d\theta\right) \wedge d\theta\\
&=\left(\frac{\partial (r f^{\theta})}{\partial r}-\frac{\partial f^r}{\partial \theta}\right)dr\wedge d\theta\tag{$*$}\\
&=\frac{1}{r}\left(\frac{\partial (r f^{\theta})}{\partial r}-\frac{\partial f^{r}}{\partial \theta}\right)\, rdr\wedge d\theta
\end{align}
In the last line, I simply divided and multiplied by $r$.
The reason for this is that the area-form in the plane is $dx\wedge dy=r\,dr\wedge d\theta$, and the nice thing about the Hodge star is that for any function $\phi$ and the area form $\omega$, we have $\star(\phi \, \omega)=\phi (\star\omega)=\phi\cdot 1$, i.e $\star(r\,dr\wedge d\theta)=1$. Therefore,
\begin{align}
\text{curl}(F)&=\star d(g^{\flat}(F))=\frac{1}{r}\left(\frac{\partial (r f^{\theta})}{\partial r}-\frac{\partial f^{r}}{\partial \theta}\right)
\end{align}
Where you went 'wrong'
Note that in your calculation, you stopped at $(*)$, i.e the expression you found is the component of a $2$-form (the coefficient of $dr\wedge d\theta$). Inherently, there's nothing wrong with that, but the classical definition of curl requires an extra Hodge star application. So really, it boils down to the fact that your initial expression $\nabla_av_b-\nabla_bv_a$ is the component of a $2$-form, but it is NOT the usual definition of curl.
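As a quick finite-difference sanity check of the boxed curl formula (the test field is my own choice: the Cartesian field $F=(-y,x)$, whose curl is $2$ everywhere):

```python
import math

# physical (unit-basis) polar components of the Cartesian field F = (-y, x)
def f_r(r, th):
    x, y = r * math.cos(th), r * math.sin(th)
    return -y * math.cos(th) + x * math.sin(th)   # F · e_r  (= 0 for this field)

def f_theta(r, th):
    x, y = r * math.cos(th), r * math.sin(th)
    return y * math.sin(th) + x * math.cos(th)    # F · e_θ  (= r for this field)

def curl_polar(r, th, h=1e-5):
    # (1/r) [ ∂_r(r fθ) − ∂_θ f_r ] via central differences
    d_r = ((r + h) * f_theta(r + h, th) - (r - h) * f_theta(r - h, th)) / (2 * h)
    d_th = (f_r(r, th + h) - f_r(r, th - h)) / (2 * h)
    return (d_r - d_th) / r

print(curl_polar(1.3, 0.7))   # ≈ 2.0, matching ∂x F_y − ∂y F_x = 2
```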
Extra Ramblings.
For a vector field $F$ in $\Bbb{R}^3$, the definition of curl must be modified slightly:
\begin{align}
\text{curl}(F)&:=g^{\sharp}(\star d(g^{\flat}F))
\end{align}
i.e we convert the vector field $F$ to a $1$-form $g^{\flat}(F)$, take its exterior derivative (so we have a $2$-form now), then take the Hodge star (so we get a $3-2=1$-form) and finally use $g^{\sharp}$ to convert it back to a vector field (as a side remark: from this one can already see that the curl of a vector field is an EXTREMELY unnatural operation, and it's unfortunate we use it so often in elementary vector calculus).
BTW, if you want to think of a vector field in $\Bbb{R}^2$ as a vector field in $\Bbb{R}^3$, you can do so by thinking of it as having no $z$ component, in which case applying this 3-D definition of curl will yield a vector field having only a $\mathbf{e}_z$ component, which matches with what we found above (so overall there's no inconsistency in jumping between $2$ and $3$ dimensions). | {
"domain": "physics.stackexchange",
"id": 82264,
"tags": "homework-and-exercises, general-relativity, differential-geometry, tensor-calculus, vector-fields"
} |
Navier-Stokes Equations in Einstein Notation and its relation to Poisson's Equation | Question: The Navier-Stokes equations are:
$$\partial_t v + v \cdot \nabla v = - \nabla p + \nu\nabla^2 v, \quad v \in \mathbb{R}^3\\
\nabla \cdot v = 0$$
I have seen that, using Einstein notation (which I am new to), the above can be written as
$$\partial_t v_i + v_j\partial_j v_i = - \partial_i p + \nu \partial_{jj}v_i, \quad v \in \mathbb{R}^3\\
\partial_iv_i = 0.$$
Note that here I am using the notation that $\partial_j = \partial/\partial x_j$.
Using this notation, I would like to take the divergence of the N-S equations to eliminate the pressure term. I am not sure if this is correct, but my work is as follows:
$$\partial_j (\partial_t v_i + v_j\partial_j v_i + \partial_i p - \nu \partial_{jj}v_i) \\= \partial_j\partial_tv_i + \partial_jv_j\cdot\partial_jv_i + v_j\cdot \partial_{jj}v_i + \partial_{ji} p - \nu\partial_{jjj}v_i.$$
What is confusing me is when to use an index of $i$ and when to use an index of $j$, is there some general rule for this? Also, why is there subscript on $p$?
Once I figure this out I assume the rest is just a trivial application of the divergence-free condition and from there we recover Poisson's equation.
Note that I have already read the following posts:
Index notation with Navier-Stokes equations and Questions about Navier-Stokes equations, Einstein notation, tensor rank but unfortunately to no avail.
Answer: You are taking the inner product of $\nabla$ and $\mathbf v$, so you need to make sure they both have the same index:
$$\partial_i\left[\partial_tv_i+v_j\partial_jv_i\right]=\partial_i\left[-\partial_ip+\nu\partial_{jj}v_i\right]$$
Your first term should drop to zero due to the divergence condition (after interchanging partial derivatives); then you can work out the sums for the remaining terms to more clearly see the Poisson equation,
$$\partial_{ii}p=f\left(\cdots\right) $$ | {
"domain": "physics.stackexchange",
"id": 89896,
"tags": "fluid-dynamics, tensor-calculus, notation, flow, navier-stokes"
} |
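A numerical sanity check of the step the answer above relies on: for a divergence-free field, $\partial_i(v_j\partial_j v_i)=\partial_i v_j\,\partial_j v_i$ (interchange derivatives and use $\partial_i v_i=0$), which is what turns the divergence of the momentum equation into a Poisson equation for $p$. The 2-D test field below is my own choice, built from a streamfunction so it is automatically divergence-free:

```python
import math

# divergence-free 2-D test field from the streamfunction ψ = sin x sin y:
# v = (∂ψ/∂y, -∂ψ/∂x)
def v(x, y):
    return (math.sin(x) * math.cos(y), -math.cos(x) * math.sin(y))

def d(f, x, y, axis, h=1e-5):
    # central difference of f(x, y) along the given axis
    if axis == 0:
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def lhs(x, y):
    # ∂_i ( v_j ∂_j v_i ): divergence of the convective term
    def conv(i):
        return lambda X, Y: sum(v(X, Y)[j] * d(lambda a, b: v(a, b)[i], X, Y, j)
                                for j in range(2))
    return sum(d(conv(i), x, y, i) for i in range(2))

def rhs(x, y):
    # ∂_i v_j ∂_j v_i
    return sum(d(lambda a, b: v(a, b)[j], x, y, i) *
               d(lambda a, b: v(a, b)[i], x, y, j)
               for i in range(2) for j in range(2))

print(lhs(0.7, 0.3), rhs(0.7, 0.3))   # the two agree
```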
Deriving Maxwell's Equations from Electromagnetic Tensor | Question:
Given
$ F_{\mu\nu} = \partial_{\mu}A_{\nu} - \partial_{\nu}A_{\mu} $
It is obvious that the diagonals are zero, as
$ F_{ii} =\partial_{i}A_{i} - \partial_{i}A_{i} = 0 $
And, setting $0$ as time and $1,2,3$ as $x,y,z$ respectively, then
$F_{01} = F_{0x} =\frac{\partial}{\partial x^{0}}A_{1}-\frac{\partial}{\partial x^{1}}A_{0} = E_{x}
$
continuing with $F_{03}$ we obtain $E_{x} ...E_{z}$ in the first row of $F$
$F_{12} = F_{xy} = \partial_{x}A_{y}-\partial_{y}A_{x} = B_{z}$
also continuing with $F_{23} = F_{yz}$ and $F_{31} = F_{zx}$ we obtain $B_{x}$ and $B_{y}$
Thus we get $\triangledown \times \vec{A} = \vec{B}$
So getting the signs straight, we finally have this.
$F_{\mu\nu} = \begin{pmatrix}
0 & E_{x}&E_{y} &E_{z} \\
-E_{x} & 0 & B_{z} &-B_{y} \\
-E_{y}& -B_{z} & 0 &B_{x} \\
-E_{z} & B_{y} & -B_{x} & 0
\end{pmatrix}$
I understand there should be "over c" for $E$ components.
Two Questions:
Why does
$F_{01} = F_{0x} =\frac{\partial}{\partial x^{0}}A_{1}-\frac{\partial}{\partial x^{1}}A_{0}$ result in $ E_{x}$?
$A$ is a vector potential, and I've learned that $\vec{E}$ can be represented by a $-\triangledown \phi$ where $\phi$ is a scalar potential.
I don't understand what I am supposed to do with this matrix to get the two Maxwell's equations below.
$\triangledown \cdot \vec{E} = \rho$
and
$\triangledown \times \vec{B} = \vec{J} + \frac{\partial\vec{E}}{\partial t}$
Apparently, this can be solved by
$\sum_{\mu}^{3}\partial_{\mu}F^{\mu\nu} = J^{\nu}$
where,
$\nu = 0, \triangledown \cdot \vec{E} = \rho$
and
$\nu = i, \triangledown \times \vec{B} = \vec{J} + \frac{\partial\vec{E}}{\partial t}$
But where did $\sum_{\mu}^{3}\partial_{\mu}F^{\mu\nu} = J^{\nu}$ come from?
Answer: The most general form of Maxwell's equations are (setting $\mu_0 = \varepsilon_0 = 1$)
\begin{align}
\vec{\nabla} \cdot \vec{B} &= 0 \\
\vec{\nabla} \times \vec{E} &= - \frac{ \partial \vec{B} }{ \partial t} \\
\vec{\nabla} \cdot \vec{E} &= \rho \\
\vec{\nabla} \times \vec{B} &= \vec{J} + \frac{ \partial \vec{E} }{ \partial t}
\end{align}
The first equation implies
$$
\boxed{ \vec{B} = \vec{\nabla} \times \vec{A} }
$$
Plugging this into the second equation, we find
$$
\vec{\nabla} \times \left( \vec{E} + \frac{ \partial \vec{A} }{ \partial t} \right) = 0
$$
This equation then solves to
$$
\boxed{ \vec{E} = - \vec{\nabla} \phi - \frac{ \partial \vec{A} }{ \partial t} }
$$
Plugging the boxed equations into the last two Maxwell's equations, we get
$$
\nabla^2 \phi + \frac{ \partial }{ \partial t} (\vec{\nabla} \cdot \vec{A} ) = - \rho ~~~~~~~~ ...... (1)
$$
and
$$
\frac{ \partial^2 \vec{A} }{ \partial t^2} + \vec{\nabla} \times ( \vec{\nabla} \times \vec{A} ) + \frac{\partial }{\partial t} (\vec{\nabla}\phi) = \vec{J} ~~~~~~~~ ...... (2)
$$
Note that we have a total of 4 equations. In the covariant formalism, the define the 4-vectors
$$
A^\mu : = ( \phi , \vec{A}),~~~ J^\mu : = (\rho, \vec{J})
$$
All you have to do is show that the equation
$$
\partial_\mu F^{\mu\nu} = J^\nu
$$
are in fact identical to (1) and (2). [The Minkowski sign convention is here assumed to be $(+,-,-,-)$.] For instance, if I choose $\nu = 0$ in the equation above, I find
$$
J^0 = \partial_\mu F^{\mu0} = \partial_0 F^{00} + \partial_i F^{i0} = \partial_i ( \partial^i A^0 - \partial^0 A^i )
$$
Using the definitions above, we find
$$
\rho = -\nabla^2 \phi - \frac{ \partial }{ \partial t} (\vec{\nabla} \cdot \vec{A} )
$$
which is precisely (1).
I will leave it to you to show that if I choose $\nu = i = 1,2,3$, then I reproduce the 3 equations (2).
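Before leaving the $\nu=0$ case, here is a numerical sanity check of it (a sketch; the smooth test potentials below are invented): with $\vec E=-\vec\nabla\phi-\partial_t\vec A$, finite differences confirm $\vec\nabla\cdot\vec E=-\nabla^2\phi-\partial_t(\vec\nabla\cdot\vec A)$, i.e. equation (1) with $\rho=\vec\nabla\cdot\vec E$:

```python
import math

# invented smooth test potentials phi(t,x,y,z) and A(t,x,y,z)
def phi(t, x, y, z):
    return math.sin(x + 2 * y) * math.cos(z - t)

def A(t, x, y, z):
    return (math.cos(y + t), x * math.sin(z), math.sin(x * t))

h = 1e-4

def d(f, p, k):
    # central first difference of f along coordinate k; p = (t, x, y, z)
    q1, q2 = list(p), list(p)
    q1[k] += h
    q2[k] -= h
    return (f(*q1) - f(*q2)) / (2 * h)

def d2(f, p, k):
    # central second difference along coordinate k
    q1, q2 = list(p), list(p)
    q1[k] += h
    q2[k] -= h
    return (f(*q1) - 2 * f(*p) + f(*q2)) / h**2

def E(p, i):
    # E_i = -∂_i phi - ∂_t A_i
    return -d(phi, p, 1 + i) - d(lambda *q: A(*q)[i], p, 0)

p = (0.3, 0.5, -0.2, 0.8)
div_E = sum(d(lambda *q: E(q, i), p, 1 + i) for i in range(3))
lap_phi = sum(d2(phi, p, 1 + i) for i in range(3))
dt_div_A = d(lambda *q: sum(d(lambda *r: A(*r)[j], q, 1 + j) for j in range(3)), p, 0)
print(div_E, -lap_phi - dt_div_A)   # the two agree
```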
Thus, the equation $\partial_\mu F^{\mu\nu} = J^\nu$ "comes from" the Maxwell equations themselves. They are simply a convenient rewriting of the Maxwell equations. | {
"domain": "physics.stackexchange",
"id": 18204,
"tags": "homework-and-exercises, electromagnetism, maxwell-equations, unit-conversion"
} |