| anchor | positive | source |
|---|---|---|
What is the relation between cross-section and spectral line intensity? | Question: I was told by my supervisor that there is a relation between these two, and he talked about a coefficient connecting them. I am currently brushing up on a lot of chemistry: I'm a physics student on an internship that mainly involves physical chemistry.
Answer: A plane wave moving along $z$ has intensity $I$ that decreases as
$$dI=-\alpha Idz$$
which yields Beer's law
$$I=I_oe^{-\alpha z}$$
where $\alpha$ is the absorption coefficient in cm$^{-1}$, at frequency $\omega$, for a transition from state $i$ to $f$, and which depends on the population difference and on the optical cross section $\sigma_{i,f}$ (cm$^2$) as
$$\alpha_{i,f}=\sigma_{i,f}(N_i-N_f)$$
and, as is almost always the case, $N_f\ll N_i$, so $\alpha_{i,f}=\sigma_{i,f}N_i$. As $N$ is a number density it is common to replace it with concentration (mol/dm$^3$), and then Beer's law becomes
$$I=I_oe^{-\epsilon [C] z}$$
where $\epsilon$ is the extinction coefficient in dm$^3$/mol/cm at a given wavelength. | {
"domain": "chemistry.stackexchange",
"id": 17660,
"tags": "physical-chemistry, ir-spectroscopy, definitions"
} |
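The relation above lends itself to a quick numerical check. Below is a minimal Python sketch of Beer's law in the natural-log form used in this answer, $I = I_0 e^{-\epsilon [C] z}$; the function names and example numbers are illustrative, not from the original post, and note that the decadic absorbance $A=\log_{10}(I_0/I)$ differs from the exponent by a factor of $\ln 10$.

```python
import math

def transmitted_intensity(I0, epsilon, conc, path_cm):
    """Beer's law in the natural-log form used above: I = I0*exp(-eps*[C]*z).

    epsilon: extinction coefficient in dm^3/mol/cm
    conc:    concentration [C] in mol/dm^3
    path_cm: path length z in cm
    """
    return I0 * math.exp(-epsilon * conc * path_cm)

def absorbance(I0, I):
    # decadic absorbance A = log10(I0/I); equals eps*[C]*z / ln(10) here
    return math.log10(I0 / I)

# illustrative numbers: eps = 100 dm^3/mol/cm, [C] = 1 mmol/dm^3, z = 1 cm
I = transmitted_intensity(1.0, 100.0, 1e-3, 1.0)
```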
Open planner stack and dp planner | Question:
Hello everyone.
I am currently trying to get the OP planner to avoid and stop before obstacles, so far without success.
So far the nodes I use are :
Global planning :
op_global_planner
Local planning :
op_common_params
op_behavior_selector
op_motion_predictor
op_trajectory_evaluator
op_trajectory_generator
Obstacle detection :
ray_ground_filter
lidar_euclidean_cluster_detect
imm_ukf_pda_track
It works perfectly to generate a path and follow it. Roll-out trajectories are also generated. The problem is that nothing subscribes to any obstacle detection topic.
Do you know if this is supposed to work like this? And how do I get OP to use detected objects?
I also stumbled across this GitHub issue about dp_planner, and it seems that it includes object detection. I also checked the Autoware package, and it seems that it contains parameters related to object detection. But of course there is no documentation.
Should I drop all the op_xxx packages and use dp_planner instead ? Do they work together ?
There is no information anywhere.
@Hatem , You helped me before, would you be so kind to have a look at this too ?
Of course anyone is welcome to give me a hint here.
Originally posted by Mackou on ROS Answers with karma: 196 on 2020-04-03
Post score: 0
Answer:
Hello,
No problem, I can help.
I just fixed this issue for Autoware 1.13 yesterday :)
The issue is that imm_ukf_pda_track outputs the objects data in the "velodyne" frame,
but OpenPlanner expects it to be in the "map" frame.
Also, the topic names are different.
So, how to solve this problem:
Solution 1:
In lidar_euclidean_cluster_detect, set the output frame to "map".
Use lidar_kf_contour_tracker.
Solution 2:
Add a bridge node that transforms the output from imm_ukf_pda_track to the "map" frame and publishes it on the same topic as in op_motion_predictor.
Hope This is helpful.
Regards,
Originally posted by Hatem with karma: 443 on 2020-04-03
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Hatem on 2020-04-03:
if you want the fresh fixed version, use the one from branch openplanner.1.13, but it is not tested yet
Comment by Mackou on 2020-04-03:
@Hatem Thanks a lot for your answer.
I have tried to set the output frame of lidar_euclidean_cluster_detect to "map". But now the detected boxes are shifted in rviz, is that normal?
Also, /lidar_kf_contour_tracker subscribes to /cloud_clusters but lidar_euclidean_cluster_detect publishes on /points_cluster.
Do I need to republish it?
Comment by Mackou on 2020-04-03:
the topics do not match
[ERROR] [1585907078.578334622]: Client [/lidar_kf_contour_track] wants topic /cloud_clusters to have datatype/md5sum [autoware_msgs/CloudClusterArray/5bdd7c958335da845b88351aab5141d4], but our version has [sensor_msgs/PointCloud2/1158d486dd51d683ce2f1be655c3c181]. Dropping connection.
Comment by Hatem on 2020-04-03:
It is a visualization issue;
the visualization only expects the velodyne frame.
And you don't need the cloud cluster topic, you can disable that in lidar_kf_contour_tracker.
Only use the detected objects topic.
OpenPlanner publishes visualization information in the map frame. You can use it.
Regards
Comment by Mackou on 2020-04-03:
@Hatem thanks again.
How do I disable the cloud cluster topic in kf contour tracker ?
How do I use detected objects ? What should subscribe to it ? And in what topic ?
[EDIT] I managed to make OP use the tracked objects but it doesn't avoid them at all. Any idea?
Comment by Mackou on 2020-04-03:
@Hatem
Here is what my current status look like : https://imgur.com/a/x8qy8Yk
I have the object detected in OpenPlanner, and I have the red detected_polygon.
But OP doesn't take it into account when generating the trajectories and the car just bumps into the obstacle. Any idea?
Comment by Hatem on 2020-04-03:
Make sure Enable Following & Enable Avoidance is checked
In pure pursuit make sure you select Waypoint not Dialog.
Comment by Mackou on 2020-04-03:
@Hatem Thanks again.
enableFollowing is true.
I don't have any enableAvoidance, but enableSwerving is true.
In pure pursuit I don't have any parameter related to Waypoint or Dialog. Where do I set this?
Also, I am not sure the object is really taken into account as I don't see any stopline in rviz.
I'm really stuck here.
Comment by Hatem on 2020-04-03:
Kindly check the tutorial again carefully.
https://youtu.be/BS5nLtBsXPE
After this point it should work.
Comment by Mackou on 2020-04-15:
@Hatem Thanks for the tutorial, I was able to follow everything and get everything to work correctly with the car simulations.
But now when switching to LGSVL, if I put a non-moving obstacle in the middle of the road and disable swerving, the car decelerates a lot but in the end still hits it.
When I enable swerving the car avoids the obstacle with no problem, so the obstacle is detected.
I can also follow a moving obstacle.
Do you know why the car is not stopping completely when a static obstacle is on the road ?
Thanks !
Comment by Hatem on 2020-04-19:
Hi Mackou, it is probably because of the deceleration value.
You need to increase the deceleration in common params, and also set the same deceleration in the controller and in the LGSVL car acceleration/deceleration parameters.
Comment by Mackou on 2020-04-19:
Hey @Hatem, thanks for your answer again !
I have tried to increase the deceleration limit in the common params and it didn't seem to change anything. When you tell me to change the controller params, you mean my path-following node, right?
I have noticed something very weird: when I stop all the nodes and start again, it works ONCE. When I retry it fails. And it seems to be the case all the time.
Here are some graphs showing the target velocity when success (1st one) and failure (2nd one) : https://imgur.com/a/q3jsa1x
Blue means forward status, and red means following status.
Do you have any idea what could explain this behavior ?
Comment by Hatem on 2020-04-20:
@Mackou, that is weird indeed. Can you take a video to show me the steps and results?
I want to know exactly what you mean by "the second time". Is there a new global plan?
We need to know which node has a good initial parameter, but then that value changes and causes the problem.
Regards,
Comment by Mackou on 2020-04-21:
@Hatem, Thanks for your answer.
Here is the video : https://youtu.be/Fp4ENUU88sQ
I use launch files, not the runtime manager, but I can make a video using the runtime manager too if you prefer.
Comment by Hatem on 2020-04-22:
Hi @Mackou,
I checked your video; several things could go wrong here.
a) from parameters setting point of view:
SpeedProfileFactor is too small, I think 1.0 is good.
enablePrediction should be "false"; prediction is not tested yet.
lateral_acceleration_limit is too high; I never use more than 6, and 4 is preferable for me.
b) Don't use velocity set with OpenPlanner.
c) If the simulation speed is as slow as I see in the video, then the controller will not work properly; there are not enough time update steps to apply braking. The frequency shouldn't go below 5 frames/s.
hope this is useful.
Regards,
Comment by Mackou on 2020-04-24:
Hey @Hatem and thanks for your answer.
Thanks to your last answer it works perfectly now! But losing prediction, and hence the ability to give way, is bad news for me as it would be very useful.
Do you know what work needs to be done to make it work? Is there any newer version, even untested, where you were able to make it work?
The video capture was a bit laggy, not the simulation
I can't thank you enough for your time !
Comment by Hatem on 2020-04-24:
@Mackou I am happy that it works, great. Can you please mark this question as answered.
This is my latest repository
https://gitlab.com/hatem-darweesh-autoware.ai
Checkout branch openplanner.1.13
I am testing multiple features now. One of my objectives is to get the trajectory and intention estimation to work.
this is the prediction paper: https://www.jstage.jst.go.jp/article/jsaeijae/10/4/10_20194117/_article/-char/en
Currently this feature is not integrated properly with OpenPlanner. hopefully soon.
and Maybe I will need your help getting LGSVL to work ;) currently I use only CARLA.
Have fun.
Regards,
Comment by Mackou on 2020-04-24:
@Hatem, Thanks, I will keep an eye on it and follow your repository.
I would be thrilled to help !
Regards
Comment by Hatem on 2020-04-24:
Great, Thanks , feel free to test, give me feedback and suggestions.
Comment by Mackou on 2020-04-29:
@Hatem I already opened an issue with a few questions. | {
"domain": "robotics.stackexchange",
"id": 34687,
"tags": "ros-melodic"
} |
What happens to a terrestrial body bound to a far away galaxy? | Question: Suppose that I could find an imaginary rope long enough to bind myself to a very distant planet, i.e. a planet within a very far away galaxy so that it is moving with the Hubble flow. To keep things simpler, the rope is not only very long but also of infinite strength. And also let choose a galaxy within the observable universe, at least to begin with.
1 - Will the expansion lift me off?
I would say yes, but perhaps I am wrong. If I do NOT start flying up, it is as if I displace the planet, and through gravity there the entire galaxy, from its comoving coordinate.
2 - if answer to 1 is YES, what is then the situation when my rope connects two comparable planets, one "here" and one in that far away galaxy?
Note that I choose planets as anchoring points instead of galaxies or stars so the rope won't burn ;)
ADDENDUM: I wanted to start in a way easy for me, but I've just realised that, within my thinking, the question is probably just as in 2), since I am bound to the Milky Way. So what happens to two far-apart galaxies connected to each other? Or alternatively, what happens to the rope between them? There will be a tension, I suppose. What happens if its limits are reached and it breaks?
If the question is too picturesque, consider two masses m1 and m2, still far away but within the observable universe. Case1) is m1 <<< m2, and case 2) is m1 = m2.
An answer addressing, even briefly, both points 1) & 2) is still welcome, as it will certainly contribute to my comprehension. Thanks.
Answer: So let me start with another thought experiment to get a feeling for these very long distances.
Consider yourself on Earth holding a stick reaching to the surface of the Sun, but not touching it. If you now push that stick, it would take a little more than 8 minutes till the stick touches the Sun. Why 8 minutes? Because that's the time light (or information) needs to travel from the Sun to Earth. So when you push that rigid stick, the information about that push (compressing and expanding the various distances between the atomic layers within the stick) cannot travel faster than the speed of light.
Now when you extend that stick or rope to the nearest galaxy (Canis Major Dwarf - 25,000 light years) or the more popular Andromeda Galaxy (~2.5 million light years), you get a feeling for how long 'interactions' will take to reach you if a planet within this galaxy 'pulls' on that rope.
To answer your 1st question, you have to consider that space is expanding. So consider a rope, which obviously 'occupies' part of the expanding space. And here it gets a little bit complicated, since the intrinsic expansion of space is not a force, but a change of the scale of space itself. But when an object like the rope should not follow that expansion, i.e. it should not be torn apart, a force has to act against this expansion. Also, a rope with 'infinite strength' cannot (by definition) burst, so it will pull you towards that planet. But this 'pull' towards that planet will be very, very slow.
At first it will take about 25,000 years until you 'feel' the pull of that planet. Then, if you assume that the Earth and the other planet are not orbiting a star or galaxy, you will be pulled with a velocity of $H_0d$ ($H_0 \approx 70\text{ km/s/Mpc}$ is the Hubble constant and $d$ the length of the rope, i.e. the distance to the planet) towards the planet. This gives about $70\cdot 0.00766 \approx 0.5\text{ km/s}$ towards Canis Major Dwarf.
You will not displace the planet, because the force you would exert on a planet, which is in a bound orbit around a star, is negligible. Consider throwing a marble (you) against a bowling ball (the planet) - will the bowling ball move? Certainly not, according to Newton's second and third laws.
2) This is much more complicated, since the planets have to orbit the stars so as not to fall into them. They may both 'pull' the rope at the same time, but since information travels with the speed of light, the middle of the rope will feel this pull thousands of years later. Also I don't know what the rope will do, because if you pull at both ends and displace them by 10 m, either the rope bursts, or it will expand and contract like a spring and thousands of years later both planets will know.
Since galaxies are not rigid bodies like (rocky) planets, you cannot easily connect or influence them. But when the rope bursts, according to Newton's first law, they will move on with the same velocity as they had when the rope stopped exerting a force on them. | {
"domain": "physics.stackexchange",
"id": 42274,
"tags": "energy-conservation, space-expansion, observable-universe"
} |
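To make the answer's estimate concrete, here is a small Python sketch of $v = H_0 d$ for the rope scenario; the constants are the approximate values quoted in the answer, and the helper name is mine:

```python
H0 = 70.0              # Hubble constant, km/s per Mpc (value used in the answer)
LY_PER_MPC = 3.262e6   # light years per megaparsec

def hubble_speed_km_s(distance_ly):
    # recession ("pull") speed v = H0 * d for a comoving anchor point
    return H0 * distance_ly / LY_PER_MPC

# Canis Major Dwarf at ~25,000 ly gives roughly the ~0.5 km/s quoted above
v = hubble_speed_km_s(25_000)
```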
Is Biot-Savart law obtained empirically or can it be derived? | Question: There's already a question like this here so that my question could be considered duplicate, but I'll try to make my point clear that this is a different question.
Is there a way to derive Biot-Savart law from the Lorentz' Force law or just from Maxwell's Equations?
The point is that we usually define, based on experiments, that the force felt by a moving charge on the presence of a magnetic field is $\mathbf {F} = q\mathbf{v}\times \mathbf{B}$, but in that case the magnetic field is usually left to be defined later.
Now can that force law be used in some way to obtain Biot-Savart law like we obtain the equation for the electric field directly from Coulomb's Force law?
I wanted to know that because, as pointed out in the question I've mentioned, although Maxwell's Equations can be considered more fundamental, those equations were obtained after we knew Coulomb's and Biot-Savart's laws, so if we start with Maxwell's Equations to obtain Biot-Savart's law, having used it to find Maxwell's Equations in the first place, then I think we'll fall into a circular argument.
In that case, without recourse to Maxwell's Equations, is the only way to obtain Biot-Savart's law through observations, or can it be derived somehow?
Answer: $\def\VA{{\bf A}}
\def\VB{{\bf B}}
\def\VJ{{\bf J}}
\def\VE{{\bf E}}
\def\vr{{\bf r}}$The Biot-Savart law is a consequence of Maxwell's equations.
We assume Maxwell's equations and choose the Coulomb gauge, $\nabla\cdot\VA = 0$.
Then
$$\nabla\times\VB
= \nabla\times(\nabla\times\VA)
= \nabla(\nabla\cdot\VA) - \nabla^2\VA
= -\nabla^2\VA.$$
But
$$\nabla\times\VB - \frac{1}{c^2}\frac{\partial\VE}{\partial t} = \mu_0 \VJ.$$
In the steady state this implies
$$\nabla^2\VA = -\mu_0 \VJ.$$
Thus, we have Poisson's equation for each component of the above equation.
The solution is
$$\VA(\vr) = \frac{\mu_0}{4\pi}\int \frac{\VJ(\vr')}{|\vr-\vr'|}d^3 r'.$$
Now we need only calculate $\VB = \nabla\times\VA$.
But
$$\nabla\times\frac{\VJ(\vr')}{|\vr-\vr'|}
= \frac{\VJ(\vr')\times(\vr-\vr')}{|\vr-\vr'|^3}$$
and so
$$\VB(\vr) = \frac{\mu_0}{4\pi}\int
\frac{\VJ(\vr')\times(\vr-\vr')}{|\vr-\vr'|^3}
d^3 r'.$$
This is the Biot-Savart law for a wire of finite thickness.
For a thin wire this reduces to
$$\VB(\vr) = \frac{\mu_0}{4\pi}\int
\frac{I d{\bf l}\times(\vr-\vr')}{|\vr-\vr'|^3}.$$
Addendum:
In mathematics and science it is important to keep in mind the distinction between the historical and the logical development of a subject.
Knowing the history of a subject can be useful to get a sense of the personalities involved and sometimes to develop an intuition about the subject.
The logical presentation of the subject is the way practitioners think about it.
It encapsulates the main ideas in the most complete and simple fashion.
From this standpoint, electromagnetism is the study of Maxwell's equations and the Lorentz force law.
Everything else is secondary, including the Biot-Savart law. | {
"domain": "physics.stackexchange",
"id": 96107,
"tags": "electromagnetism, magnetic-fields"
} |
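As a numerical sanity check on the final expression (not part of the original answer), one can discretize the Biot-Savart integral for a long straight wire on the $z$-axis and compare with the standard result $B = \mu_0 I/(2\pi r)$; the function name and parameters below are illustrative:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def b_straight_wire(I, r, half_length=100.0, n=100_000):
    """Midpoint-rule sum of the Biot-Savart integrand for a wire on the
    z-axis, evaluated at perpendicular distance r from the wire.
    |dl x (r - r')| / |r - r'|^3 reduces to r / (r^2 + z^2)^(3/2) here.
    """
    dz = 2 * half_length / n
    total = 0.0
    for i in range(n):
        z = -half_length + (i + 0.5) * dz
        total += dz * r / (r * r + z * z) ** 1.5
    return MU0 * I / (4 * math.pi) * total

B_numeric = b_straight_wire(I=1.0, r=0.1)
B_exact = MU0 * 1.0 / (2 * math.pi * 0.1)  # infinite-wire result
```

For a wire much longer than the distance $r$, the discretized sum reproduces the $\mu_0 I/(2\pi r)$ field to well under a percent.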
Can any other thing rather than current pass through conductor? | Question: A conductor is an object or type of material that allows the flow of charge (electrical current) in one or more directions. Materials made of metal are common electrical conductors. Electrical current is generated by the flow of negatively charged electrons, positively charged holes, and positive or negative ions in some cases. Can anything else pass through?
Answer: If the conductor is liquid (electrolyte), your hand could pass through it. Seawater isn't a very good electrolyte, but it's one you've probably touched. Ionic liquids are more conductive.
There are transparent conductors like indium tin oxide which light can pass through (the wavelengths may be restricted of course). ITO is used as an electrode on many flat screens.
If you dip one end of a wire in a cup of hot water and hold the other end, you'll feel that heat can pass through - or of course use a teaspoon. There's an overlap between electrical and thermal conductors which is fundamentally important to several areas of physics.
Sound can easily pass through conductors, arguably more easily than through air. Imagine sitting in a closed metal box: you'd expect to be able to hear what's going on outside. Or more prosaically, set your phone playing some music and wrap it in foil. You might want to use locally stored music, as the radio waves you need to stream will be significantly attenuated. | {
"domain": "physics.stackexchange",
"id": 61842,
"tags": "electric-current, conductors"
} |
Are there any predictions of what galaxies exist in the Norma cluster/ Abell 3267? | Question: I am working on a map for a science fiction story and using what information is available I have a good idea of locations of nearby galaxy groups and clusters and where the known supermassive black holes are.
As the Norma cluster is a big area of debate, and because that and other areas are obscured by our own galaxy, observing what is in those areas is difficult. Going off Wikipedia, this cluster has a binding mass of 1E15 solar masses, which is the same as the Virgo cluster, and only one galaxy is mentioned, which is ESO 137-001.
Is this mass based on our galaxy's motion in that direction, so that mass behind the cluster (some believe the Shapley Supercluster) is giving the value we have, or is it a known mass of galaxies in that cluster? If so, can we predict whether it has a similar number of large galaxies and supermassive black holes to the Virgo cluster, or are there other predictions for the Norma cluster?
Answer: VizieR lists two catalogues of Norma cluster members:
J/MNRAS/383/445 Woudt, P.A. et al. (2008) "Radial velocities in the Norma cluster (A3627)"
J/MNRAS/396/2367 Skelton, R.E. et al. (2009) "NIR Ks photometry of Norma cluster (A3627)" (this one contains angular diameter information)
The paper associated with the first of these, "The Norma Cluster (ACO 3627): I. A Dynamical Analysis of the Most Massive Cluster in the Great Attractor" describes the methodology used for estimating the cluster mass from their measurements of the radial velocities (in particular their dispersion) of the galaxies. From the paper:
For the determination of the dynamical mass of the Norma cluster, we have used both the virial theorem ($M_{\rm VT}$) and the projected
mass estimator ($M_{\rm PME}$), see equations (21) and (22) of Pinkney et al. (1996). The use of the biweight velocity centroid and scale (Beers et al. 1990) in the virial theorem (instead of the velocity mean and
standard deviation) leads to a more robust mass estimate ($M_{\rm RVT}$).
The latter is more robust against the effects of contamination by the
inclusion of possible non-members in the analysis. The projected
mass estimator (Bird 1995), on the other hand, is sensitive to the
presence of (spatially separated) subclusters due to its proportionality to the projected distance between galaxy $i$ and the cluster centroid
($R_{\rm \perp,i}$) (see equation 22 in Pinkney et al. 1996).
They also note that the masses they derive are consistent with masses estimated from the X-ray emission by Böhringer et al. (1996) and Tamura et al. (1998).
As regards what types of galaxy these are, unfortunately neither of the catalogues provide morphological types directly. On the other hand, they do include for many of the objects a reference to the WKK98 catalogue (Woudt, P.A. & Kraan-Korteweg, R.C. 1998).
WKK98 is also available on VizieR as J/A+A/380/441 and does include the morphological type.
You can use the VizieR query interface to join the tables (or you can download the tables in their entirety via the FTP page on VizieR and do the joining on your local machine, e.g. via the various lookup functions in your spreadsheet software of choice), which will give you a list of the morphological types of the objects. | {
"domain": "astronomy.stackexchange",
"id": 4754,
"tags": "galaxy, supermassive-black-hole, galaxy-cluster"
} |
Time-varying waveform | Question: My situation is as follows: I am trying to generate a waveform the hard way, by constructing the samples one by one and then saving the result to a .wav file using Python.
When the frequency is constant, everything is fine: I use $y(t) = \sin(2 \pi \cdot f \cdot t)$.
However, if I change the frequency to be a function of time, things go wrong. If the function is a linear one of the form $f(t) = a + bt$, it still works. But if I choose, for example, $f(t) = 40 + 10 \sin(t)$, the max frequency increases over time, reaching a maximum higher than the expected 50 Hz.
I have read something about instantaneous frequency, namely this: Why does a wave continuously decreasing in frequency start increasing its frequency past the half of its length?. But doing the integral evaluation makes the sound even weirder. And the method I currently have works for linear functions of time, so I figured there must be something else wrong.
I also tried to generate chunks of sound at the frequency I need at each time, and then glue them together. I calculated the period of the oscillation so that a chunk has as many samples as necessary to make a whole period at that frequency, so that there are no "jumps" between different frequencies. But this generates a cracking sound in the sample.
Here is a JavaScript implementation of the kind of sound I need: http://jsfiddle.net/m7US6/4/.
Any help would be appreciated. Thanks.
Answer: Consider the function $\sin(2\pi f t)$. When $2\pi f t$ goes from $0$ to $2\pi$ you get one oscillation of the sine wave, from $2\pi$ to $4\pi$, another, and so on. So every time the argument changes by $2\pi$ you get one oscillation.
Now let's plot $2\pi f(t) t$ where $f(t) = 10 + 10 \sin(2 \pi t)$. (I've modified it a bit to show the effect more.)
As $t$ increases, this value changes faster and faster, meaning $\sin(2\pi f(t) t)$ will have higher and higher frequency.
Another way of thinking about it - the frequency is the derivative of $f(t) t$. For the example this is $20 \pi (\sin(2 \pi t)+2 \pi \mathbf{t} \cos(2 \pi t)+1)$. Note the $t$ multiplier in there - $t$ increases thus the frequency increases.
So what you really want is
$\tfrac{d}{dt} f(t) t = 40 + 10 \sin(t)$ which gives $f(t) t = 40t - 10\cos(t) + c$ and for your example try $ \sin(2\pi(40t - 10 \cos(t)))$ | {
"domain": "dsp.stackexchange",
"id": 1814,
"tags": "frequency, sound, signal-synthesis"
} |
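The prescription in this answer — use the integral of the desired frequency $f(t)$ as the phase, rather than $f(t)\,t$ — can be sketched in plain Python; the helper names are mine, and only the standard library is used:

```python
import math

def bad_phase(t):
    # naive: phase = 2*pi*f(t)*t; its derivative 2*pi*(f(t) + f'(t)*t)
    # grows with t, which is why the pitch runs away over time
    return 2 * math.pi * (40 + 10 * math.sin(t)) * t

def good_phase(t):
    # integrate f(t) = 40 + 10*sin(t): phase = 2*pi*(40*t - 10*cos(t) + 10)
    # (the +10 just makes the phase start at zero)
    return 2 * math.pi * (40 * t - 10 * math.cos(t) + 10)

def sample(phase_fn, t):
    # one waveform sample at time t
    return math.sin(phase_fn(t))

def inst_freq_hz(phase_fn, t, dt=1e-7):
    # numerical derivative of the phase, divided by 2*pi, in Hz
    return (phase_fn(t + dt) - phase_fn(t)) / (2 * math.pi * dt)
```

With `good_phase` the instantaneous frequency stays in the intended 30–50 Hz band; with `bad_phase` it grows without bound as $t$ increases.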
Step in derivation of Lagrangian mechanics | Question: There is a step in expressing the momentum in terms of general coordinates that confuses me (Link)
\begin{equation}
\left(\sum_{i}^{n} m_{i} \ddot{\mathbf{r}}_{i} \cdot \frac{\partial \mathbf{r}_{i}}{\partial q_{j}}\right) \delta q_{j}=\sum_{i}^{n}\left\{\frac{d}{d t}\left(m_{i} \dot{\mathbf{r}}_{i} \cdot \frac{\partial \mathbf{r}_{i}}{\partial q_{j}}\right)-m_{i} \dot{\mathbf{r}}_{i} \cdot \frac{d}{d t}\left(\frac{\partial \mathbf{r}_{i}}{\partial q_{j}}\right)\right\} \delta q_{j}.
\end{equation}
If we write $\frac{d}{dt}(\mathbf{r}_{i}\cdot \frac{\partial \mathbf{r}_{i}}{\partial q_{j}})$, then we have $\frac{d}{d t}\left(m_{i} \dot{\mathbf{r}}_{i} \cdot \frac{\partial \mathbf{r}_{i}}{\partial q_{j}}\right)+m_{i} \dot{\mathbf{r}}_{i} \cdot \frac{d}{d t}\left(\frac{\partial \mathbf{r}_{i}}{\partial q_{j}}\right)$ and not a minus. So why is there a minus in the derivation?
Answer: $$\frac{d}{dt}\left(m_i \dot{r}_i \frac{\partial r_i}{\partial q_j}\right)
= m_i\ddot{r}_i \frac{\partial r_i}{\partial q_j} +
m_i\dot{r}_i\frac{d}{dt}\left(\frac{\partial r_i}{\partial q_j}\right)$$
so
$$
\frac{d}{dt}\left(m_i\dot{r}_i\frac{\partial r_i}{\partial q_j}\right)-
m_i\dot{r}_i \frac{d}{dt}\left(\frac{\partial r_i}{\partial q_j}\right)= m_i
\ddot{r}_i\frac{\partial r_i}{\partial q_j}
$$ | {
"domain": "physics.stackexchange",
"id": 87682,
"tags": "classical-mechanics, lagrangian-formalism, coordinate-systems, differentiation"
} |
How we transform IS inputs to VC (reduction)? | Question: I would like to clarify something in my understanding of proving a problem to be NP-hard.
So in short, what I know is that:
"If I have a problem A that I want to prove that is NP-hard and another well -known NP-hard problem B and I can find a polynomial time reduction R such that if I answer A I know the answer to B as well for all the possible inputs to problem B , then I have proved that $B \leq_P A$ which means that "A is harder or as hard as B which is known to be hard already" and therefore A is NP-hard."
What confuses me is that I see sometimes some proofs that show that two algorithmic questions (or at least that's what I think that they are proving) are completely equivalent but not by transforming the inputs of one to another but by simply providing an argument that states that they are equivalent.
What do I mean :
Let's say we know Independent Set (IS) is NP-hard and we want to show that vertex cover (VC) is NP-hard. If I understand well the well-known method is to claim this :
Let's say we want to find out whether an IS of at least k vertices exists in $G(V,E)$.
Now we take this very same G (no transformation performed on it) and ask the question: "Is there a vertex cover S of at most n-k vertices?"
If I find one and then delete its vertices, the $|V|-|S|$ vertices that are left form an independent set. That's true because suppose they do not: then there is at least one edge $(u,v)$ such that $u,v \in V-S$, so it was "never really covered by our VC".
But $|S|$ has at most n-k vertices and therefore $V-S$ has at least $k$ vertices.
So what is the transformation here? In the part where we delete the vertices of $S$ we create an induced subgraph of G, which is a transformation, but I am not sure how it connects to "the whole method of proving this as NP-hard".
(I hope my question makes sense I tried hard to put it into words)
Answer: You can express this idea as a reduction.
IS consists of all pairs $\langle G,k \rangle$, where $G$ is a graph which contains an independent set of size at least $k$.
VC consists of all pairs $\langle G,k \rangle$, where $G$ is a graph which contains a vertex cover of size at most $k$.
To reduce IS to VC, take an instance $\langle G,k \rangle$ of IS to the instance $\langle G,n-k \rangle$ of VC, where $n$ is the number of vertices in $G$.
In this case there is also a reduction between the problem of finding a maximum independent set in a graph and the problem of finding a minimum vertex cover in a graph. Given a graph $G = (V,E)$, find a minimum vertex cover $S$, and output the independent set $V \setminus S$. This is a different type of reduction, and the problems are also of a different kind. | {
"domain": "cs.stackexchange",
"id": 19603,
"tags": "complexity-theory, np-hard, np"
} |
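The size-complement relationship that drives the reduction — $G$ has an independent set of size at least $k$ iff it has a vertex cover of size at most $n-k$ — can be verified by brute force on small graphs. This Python sketch is illustrative and the names are mine:

```python
from itertools import combinations

def is_independent_set(edges, S):
    # no edge has both endpoints inside S
    return all(not (u in S and v in S) for u, v in edges)

def is_vertex_cover(edges, S):
    # every edge has at least one endpoint inside S
    return all(u in S or v in S for u, v in edges)

def has_ind_set(vertices, edges, k):
    # "IS of size >= k exists" <=> "IS of size exactly k exists" (take a subset)
    return any(is_independent_set(edges, set(S))
               for S in combinations(vertices, k))

def has_vertex_cover(vertices, edges, k):
    # "VC of size <= k exists" <=> "VC of size exactly k exists" (pad freely)
    return any(is_vertex_cover(edges, set(S))
               for S in combinations(vertices, k))
```

On the path graph 1-2-3-4, for example, the maximum independent set and the minimum vertex cover both have size 2, and the equivalence holds for every $k$.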
Why are the principal planes where principal stresses occur perpendicular to each other? | Question: Equation of principal angles:
$$\tan 2\theta_p=\frac{2\tau_{xy}}{\sigma_x-\sigma_y}$$
Equation of principal stresses:
$$\sigma_{max}, \sigma_{min} = {\sigma_{xx} + \sigma_{yy} \over 2} \pm
\sqrt{ \left( {\sigma_{xx} - \sigma_{yy} \over 2} \right)^2 + \tau_{xy}^2 }$$
Source of equations: Lectures notes on Mechanics of solids, Course code- BME-203, prepared by Prof. P.R.Dash, page 45 and 46.
Above is the equation used for finding the principal angles corresponding to the two principal planes where principal stresses (maximum and minimum stresses) occur.
In solid Mechanics, the difference between the two values of principal angles is $90^\circ$. Why is it equal to $90^\circ$?
Answer: The answer is probably because the stress tensor is symmetric $\sigma_{ij}=\sigma_{ji}$, and the principal (not principle!) planes are perpendicular to the eigenvectors, which for a symmetric matrix are always mutually perpendicular. Note that the equation you added for the principal stresses is indeed the equation for the eigenvalues of the matrix
$$
\left[\matrix{\sigma_{xx}& \sigma_{xy}\cr \sigma_{yx} & \sigma_{yy}}\right]
$$
This is assuming that for some reason you have written $\tau_{xy}$ for the shear stress $\sigma_{xy}$. | {
"domain": "physics.stackexchange",
"id": 69961,
"tags": "homework-and-exercises, material-science, stress-strain, solid-mechanics"
} |
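A quick numerical illustration of this answer (made-up stress values, helper names mine): the closed-form principal stresses coincide with the eigenvalues of the symmetric stress matrix, and the two principal angles returned by the $\tan 2\theta_p$ formula differ by exactly $90^\circ$ because $\tan$ has period $\pi$.

```python
import math

def principal_stresses(sxx, syy, txy):
    # closed-form 2-D principal stresses from the question
    avg = (sxx + syy) / 2
    r = math.hypot((sxx - syy) / 2, txy)
    return avg + r, avg - r

def sym_2x2_eigvals(sxx, syy, txy):
    # eigenvalues of [[sxx, txy], [txy, syy]] via trace and determinant
    tr = sxx + syy
    det = sxx * syy - txy * txy
    disc = math.sqrt(tr * tr / 4 - det)
    return tr / 2 + disc, tr / 2 - disc

def principal_angles(sxx, syy, txy):
    # tan(2*theta) = 2*txy / (sxx - syy); since tan has period pi,
    # the two solutions for theta differ by pi/2, i.e. 90 degrees
    theta = 0.5 * math.atan2(2 * txy, sxx - syy)
    return theta, theta + math.pi / 2
```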
Starting a Clojure Chess Engine | Question: In my journey to learn Clojure I recently decided that I would like to write a chess engine, which is kind of funny because I don't really know chess either. ;-) My goals are to learn Clojure and chess, and write something that is fairly easy to understand.
Currently I'm working on board representation and basic movement. There are many ways to represent a chess board, but I decided to represent the board as a list of game pieces because I would like to see how that choice affects the implementation in a Lisp dialect.
Everything is still early in the development, but I'd like to get some feedback on what I've started.
Here is my board representation.
(ns chess.state)
(defn make-board
"Creates a chess board in the initial configuration."
[]
'({:type :rook :color :black :rank 8 :file 1 }
{:type :knight :color :black :rank 8 :file 2 }
{:type :bishop :color :black :rank 8 :file 3 }
{:type :queen :color :black :rank 8 :file 4 }
{:type :king :color :black :rank 8 :file 5 }
{:type :bishop :color :black :rank 8 :file 6 }
{:type :knight :color :black :rank 8 :file 7 }
{:type :rook :color :black :rank 8 :file 8 }
{:type :pawn :color :black :rank 7 :file 1 }
{:type :pawn :color :black :rank 7 :file 2 }
{:type :pawn :color :black :rank 7 :file 3 }
{:type :pawn :color :black :rank 7 :file 4 }
{:type :pawn :color :black :rank 7 :file 5 }
{:type :pawn :color :black :rank 7 :file 6 }
{:type :pawn :color :black :rank 7 :file 7 }
{:type :pawn :color :black :rank 7 :file 8 }
{:type :pawn :color :white :rank 2 :file 1 }
{:type :pawn :color :white :rank 2 :file 2 }
{:type :pawn :color :white :rank 2 :file 3 }
{:type :pawn :color :white :rank 2 :file 4 }
{:type :pawn :color :white :rank 2 :file 5 }
{:type :pawn :color :white :rank 2 :file 6 }
{:type :pawn :color :white :rank 2 :file 7 }
{:type :pawn :color :white :rank 2 :file 8 }
{:type :rook :color :white :rank 1 :file 1 }
{:type :knight :color :white :rank 1 :file 2 }
{:type :bishop :color :white :rank 1 :file 3 }
{:type :queen :color :white :rank 1 :file 4 }
{:type :king :color :white :rank 1 :file 5 }
{:type :bishop :color :white :rank 1 :file 6 }
{:type :knight :color :white :rank 1 :file 7 }
{:type :rook :color :white :rank 1 :file 8 }))
Here are some common movement functions that I'm using for calculating the movement of each piece.
(ns chess.movement
(:require [chess.state :refer :all]
[clojure.math.numeric-tower :as math]))
(defn on-board?
"Determines if a position is on the board."
[[rank file :as position]]
(and (>= rank 1)
(<= rank 8)
(>= file 1)
(<= file 8)))
(defn same-rank?
"Determines if two positions are in the same rank."
[[start-rank start-file :as position] [dest-rank dest-file :as destination]]
(and (= dest-rank start-rank) (not= dest-file start-file)))
(defn same-file?
"Determines if two positions are in the same file."
[[start-rank start-file :as position] [dest-rank dest-file :as destination]]
(and (= dest-file start-file) (not= dest-rank start-rank)))
(defn diagonal?
"Determines if two positions are diagonal from each other. We need this for things like determining whether a pawn movement is a capture, etc."
[[start-rank start-file :as position] [dest-rank dest-file :as destination]]
(= (math/abs (- start-rank dest-rank))
(math/abs (- start-file dest-file))))
(defn occupied?
"Determines if a location on the board is occupied by a piece. If the color actual
parameter is provided then the piece at that location must be that color. We need to know this
to help determine the ability to move, capture, etc."
([board [rank file :as position]]
(some #(and (= rank (% :rank)) (= file (% :file))) board))
([board [rank file :as position] color]
(some #(and (= rank (% :rank)) (= file (% :file)) (= color (% :color))) board)))
(defn opponent-color
"Returns the opponent's color."
[color]
(cond
(= color :black) :white
(= color :white) :black))
(defn remove-pieces
"Removes pieces from the board at the specified locations."
[board position & remaining-positions]
(let [positions (into #{position} remaining-positions)
matching-position? #(positions [(% :rank) (% :file)])]
(remove matching-position? board)))
(defn positions-between
"Generates a list of the positions between start and end. If the
start and end positions aren't in the same rank, file, or diagonal
from each other then returns an empty list."
[[start-rank start-file :as start] [end-rank end-file :as end]]
(if (or (same-rank? start end)
(same-file? start end)
(diagonal? start end))
(let [next-rank (cond (= start-rank end-rank) identity
(< start-rank end-rank) inc
(> start-rank end-rank) dec)
next-file (cond (= start-file end-file) identity
(< start-file end-file) inc
(> start-file end-file) dec)]
(loop [positions '()
[current-rank current-file :as current-position] [(next-rank start-rank) (next-file start-file)]]
(if (= current-position end)
positions
(recur (conj positions current-position) [(next-rank current-rank) (next-file current-file)]))))
'()))
(defn movement-blocked?
"Move is blocked if the destination is occupied by the player's own piece, or
if any position between the start and destination contains a piece. Note that
this function does not work for Pawns (yet) because a pawn is blocked when
moving forward if the destination is occupied by any piece."
[board [rank file :as position] [dest-rank dest-file :as destination] color]
(or (occupied? board destination color)
(some (into #{} (positions-between position destination))
(map (fn [p] [(p :rank) (p :file)]) board))))
Here is my code for deciding which positions a rook can move to on the board.
(ns chess.rook-movement
(:require [chess.movement :refer :all]
[chess.state :refer :all]))
(defn valid-rook-move?
"Determines if a rook can make a move from one position to another on a board."
[board [start-rank start-file :as position] [dest-rank dest-file :as destination] color]
(and (on-board? destination)
(or (same-rank? position destination)
(same-file? position destination))
(not (movement-blocked? board position destination color))))
(defn rook-destinations
"Returns a lazy sequence of positions (tuples containing rank and file) that a rook can move to."
[board [rank file :as position] color]
(let [possible-dests (concat (map (fn [f] [rank f]) (range 1 9))
(map (fn [r] [r file]) (range 1 9)))]
(filter #(valid-rook-move? board position % color) possible-dests)))
Please comment! I'm looking for ways to make the code more idiomatic, readable, organized better, more efficient, etc.
Answer: Funnily enough, although Clojure is a Lisp, Clojure programmers rarely use lists to store data. Typically we use vectors. I think a lot of this is because vectors have their own syntax that's easier to read and type than a quoted list, and you don't have to worry about quoting because vectors never try to do function application.
But vectors and lists do have different performance characteristics. A list resembles its counterparts in Common Lisp or Scheme: it's singly-linked, so lookups are linear time. Vectors, on the other hand, are giant flat trees; looking something up in a vector with \$n\$ items is \$O(\log_{32}n)\$, which you can typically think of as effectively constant. (E.g. \$\log_{32}(10^{31}) = 20.6\$, rounded off.) For both of these reasons, I think a vector might have been a better choice for your representation of the board. A map from occupied spaces to the piece occupying it might have been even better; you could more quickly check if a space is already occupied, and if you do need to iterate over all the elements, maps support the sequence abstraction too:
(def m {:a 1, :b 2, :c 3})
(keep #(when (< 2 (% 1)) (% 0)) m)
;; Returns (:c)
When you use map on a map, each key and value get passed to the function you provide as a two-element vector. That snippet makes a list of all the keys whose values are greater than 2. (keep is just map, except it doesn't include nil values in the list; it's equivalent to (remove nil? (map f-that-is-sometimes-nil some-seq)).)
Note that unlike Python, since almost all types in Clojure are immutable, you can use almost anything as the key to a map. So if you wanted to do a map from occupied spaces to pieces, you could do this:
(def board {[2 1] {:type :pawn, :color :white},
[2 2] {:type :pawn, :color :white}
...
or this:
(def board {{:rank 2, :file 1} {:type :pawn, :color :white},
{:rank 2, :file 2} {:type :pawn, :color :white}
...
You might also programmatically generate your initial board configuration, instead of writing it all out.
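For example, here is one sketch of generating the starting position from the piece order (my own untested illustration; it produces the same piece maps as the question's make-board, but collected into a vector):

```clojure
(def back-row [:rook :knight :bishop :queen :king :bishop :knight :rook])

(defn rank-of
  "Builds one rank of piece maps from a seq of piece types."
  [types color rank]
  (map (fn [file type] {:type type :color color :rank rank :file file})
       (range 1 9)
       types))

(defn make-board []
  (vec (concat (rank-of back-row :black 8)
               (rank-of (repeat 8 :pawn) :black 7)
               (rank-of (repeat 8 :pawn) :white 2)
               (rank-of back-row :white 1))))
```

This keeps the board layout in one small data literal (back-row) instead of thirty-two hand-written maps.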
Based on your code for determining if a move is a valid move for a rook, I'm guessing you have a bunch of functions like valid-queen-move?, queen-destinations, valid-knight-move?, knight-destinations, etc. floating around in a namespace somewhere. That seems a little messy. I think a cleaner way would be to use one of Clojure's pseudo object-oriented features, either records and protocols or multimethods.
With records and protocols, you could make each piece type a record, like this:
(defrecord Rook [color rank file])
(defrecord Knight [color rank file])
(defrecord Queen [color rank file])
;; etc.
There's probably some metaprogramming or macro tricks you could do to generate these declarations for you; I played around with some approaches I thought would work, but none did.
EDIT: I posted a Stack Overflow question about how to generate the record definitions programmatically, and got some great answers. Stack Overflow users Arthur Ulfeldt and galdre both gave macros that can give you record definitions for all the pieces in a couple lines of code, so go check out their answers if you're interested in that.
Anyway, you could then have a protocol, like the following, for different move types:
(defprotocol move
(move-legal? [this source dest])
(space-limit [this]))
Then you have each piece implement the protocol. For a rook, it would look like this:
(extend-type Rook
move
(move-legal? [_ source dest]
(or (same-rank? source dest) (same-file? source dest)))
(space-limit [_] 5000)) ; Larger than board size, i.e. infinite
For a king, it would look like this:
(extend-type King
move
(move-legal? [this source dest]
(< (move-distance source dest) (space-limit this)))
(space-limit [this] (if (castling? this) 2 1)))
For a pawn:
(extend-type Pawn
move
(move-legal? [this source dest]
(and (< (move-distance source dest) (space-limit this))
(or (same-file? source dest)
(and (diagonal? source dest) (occupied? dest)))))
(space-limit [this]
(if (at-start this)
2
1)))
Then I would have a helper function, can-make-move?, that checks if the movement is blocked or off the board as well as calling move-legal? on the piece. Call this to check if a move is valid before making it.
(defn can-make-move?
[piece source dest]
(and (not (movement-blocked? board source dest (:color piece)))
(on-board? dest)
(move-legal? piece source dest)))
(defn make-move
[piece source dest]
(if (can-make-move? piece source dest)
(move piece dest)
(throw (java.lang.IllegalArgumentException. "Move invalid."))))
There's a lot more polish to be put on this, but that would be the basic approach to using protocols and records. The main advantage here over the approach with maps and separate functions is code organization. Rather than having separate valid-rook-move?, valid-queen-move?, etc. functions, you just have a single move-legal? that works for any piece. In the future, if you need some other behavior from all pieces, add a function to the protocol and implement it. This is Clojure's version of object-oriented programming, and you can use just as much of it as you need whenever OO is advantageous, without
becoming an Inheritance Hierarchy Morlock.
The multimethod version would be similar; you could use the same map representation you have now, and define multimethods instead of the protocol functions. The multimethods would dispatch on the type of the piece. Multimethods are great, but all their power is not needed here. I think protocols and records are a better fit here, since you have an obvious single type (which piece you're working with) to dispatch on. Protocols and records also have good performance, since they turn straight into Java classes and method calls behind the scenes. | {
"domain": "codereview.stackexchange",
"id": 13167,
"tags": "clojure, chess"
} |
Why $P$ cannot have NULL string in Arden's Theorem? | Question: Arden's Theorem says that in the equation $R=Q+RP$, the $P$ cannot have NULL string. In this respect,the theorem will not be valid for the expression $R=Q+R(NULL+01)$. Am I correct? If so, then what will be the justification?
Answer: Here is the version of Arden's theorem on Wikipedia:
One solution of the language equation $R = Q+RP$ is $R = QP^*$.
If $\epsilon \notin P$, then this is the only solution.
When $\epsilon \in P$, there are more solutions. In fact, we can prove the following result:
The solutions of the language equation $R = Q+RP$, where $\epsilon \in P$, are $R=SP^*$ for all $S \supseteq Q$.
Proof. Let us first show that $SP^*$ is always a solution. Since $\epsilon \in P$, $RP = SP^+ = SP^*$. Since $SP^* \supseteq S \supseteq Q$, $Q + RP = Q + SP^* = SP^* = R$.
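As a concrete illustration with the equation from the original question, where $P = \epsilon + 01$ contains the empty string: since $P^* = (\epsilon + 01)^* = (01)^*$, the first part of the proof shows that

```latex
R = Q + R(\epsilon + 01)
\quad\text{is solved by}\quad
R = S\,(01)^{*} \;\text{ for any } S \supseteq Q,
```

so, for example, both $Q(01)^*$ and $\Sigma^*$ are solutions, and uniqueness indeed fails.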
Let us now show that all solutions are of this form. Suppose that $R$ is a solution. Clearly $R \supseteq Q$. Since $\epsilon \in P$, this implies that $RP \supseteq R \supseteq Q$, and so $RP = Q + RP = R$. Induction shows that $RP^n = R$ for all $n \in \mathbb{N}$, and so $RP^* = R$. Since $R \supseteq Q$, this is a solution of the required form. $\square$ | {
"domain": "cs.stackexchange",
"id": 14291,
"tags": "formal-grammars, regular-expressions"
} |
Vibrating point cloud in rviz | Question:
So as you can see in the code, I'm creating two subscriber nodes:
One subscribing to the laser scanner
One subscribing to /tf, which is the result of the transformation /odom -> /robot
In this code I'm using tf2_sensor_msgs to transform the point cloud (which is just /mybot/laser/scan converted into a point cloud) with do_transform_cloud() using /tf, and publish it to the topic /laserPointCloud.
My problem is that when I publish it to RViz I can see some kind of vibration of the subscribed point cloud. I thought it might be because the publishing frequency of /mybot/laser/scan is 40 Hz while /tf is 30 Hz, but I made two separate scripts to publish the point cloud at 30 Hz (to match /tf, using rate.sleep()) and it didn't help (the vibrations appeared just like with the given code).
Ubuntu:18.04 LTS / ROS: Melodic
Originally posted by asbird on ROS Answers with karma: 27 on 2019-11-20
Post score: 0
Answer:
Oh, well that's your problem. You shouldn't be subscribing to the /tf topic directly. You should be using a listener and a buffer so that the library does all the math for you.
http://wiki.ros.org/tf2/Tutorials/Writing%20a%20tf2%20listener%20%28Python%29
Then use the 'transform' method from the BufferInterface http://docs.ros.org/melodic/api/tf2_ros/html/python/
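A rough sketch of that pattern (rospy/tf2 API names from memory and untested here — double-check frame and topic names against your setup; /cloud_in is a hypothetical input topic):

```python
#!/usr/bin/env python
import rospy
import tf2_ros
from sensor_msgs.msg import PointCloud2
from tf2_sensor_msgs.tf2_sensor_msgs import do_transform_cloud

rospy.init_node('cloud_transformer')

tf_buffer = tf2_ros.Buffer()                        # caches /tf for you
tf_listener = tf2_ros.TransformListener(tf_buffer)  # fills the buffer

pub = rospy.Publisher('/laserPointCloud', PointCloud2, queue_size=1)

def cloud_cb(cloud):
    try:
        # Ask for the transform valid at the cloud's own timestamp,
        # not whatever message happened to arrive last on /tf --
        # that time mismatch is what shows up as "vibration" in RViz.
        t = tf_buffer.lookup_transform('odom', cloud.header.frame_id,
                                       cloud.header.stamp,
                                       rospy.Duration(0.1))
    except (tf2_ros.LookupException,
            tf2_ros.ConnectivityException,
            tf2_ros.ExtrapolationException):
        return
    pub.publish(do_transform_cloud(cloud, t))

rospy.Subscriber('/cloud_in', PointCloud2, cloud_cb)
rospy.spin()
```

The key point is that the buffer interpolates the transform to the cloud's stamp, instead of pairing each cloud with the latest /tf message by arrival order.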
Originally posted by tfoote with karma: 58457 on 2019-11-22
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 34040,
"tags": "ros-melodic, pcl, ubuntu, translation, pointcloud"
} |
Explanation of absorption and reflection of particular color of light by any object relating wavelength | Question: Why do objects reflect or absorb particular colors of light? Can anyone give an explanation comparing wavelengths?
Answer: Color is not a single-valued variable as far as language goes. Color is the perception of the observer. The frequencies of the rainbow do raise the perception of colors that one labels from red to violet, and these have a one-to-one correspondence with the frequency of the light. But light reflected from an object can be a combination of frequencies that will give the retina of the eye the signal "red" while not containing the rainbow frequency "red".
In general, white light (all frequencies) falling on an object will be partially absorbed at some frequencies and the rest of the frequencies reflected, and the color that the observer records depends on biological perception, not a clear-cut one-to-one correspondence with frequency.
Now if by "color" you meant a single frequency, the perceived color of the object under monochromatic light will not be the same as with white light. Depending on the molecules and the percentage of absorption the object will be perceived as a hue of the incoming single frequency, unless the incoming frequency can raise the molecular levels high enough so that deexcitation will radiate a different frequency or combination of frequencies. | {
"domain": "physics.stackexchange",
"id": 29079,
"tags": "visible-light"
} |
E.L. Equations in QFT | Question: In QFT, we use the Lagrangian to construct the Hamiltonian, and in the Interaction Picture (with regards to the Free Field Hamiltonian) use the full Hamiltonian to calculate the changes in the field (or the wave function) over time. By that I mean, for a field $\Psi$:
$$ \Psi (t) = e^{iH t} \Psi e^{-iH t} $$
(This is usually more complex, using the time-ordered exponential etc., but bear with me)
On the other hand (!), to solve for the free field, we use the E.L. equations. For example in the K.G. case:
$$ (\partial _ \mu \partial^\mu + m^2) \Psi = 0$$
Which one is it? Are they equivalent in some sense?
To put it clearly: Which equation describes the time evolution of operators and states in QFT? E.L. or ~schrodinger equation~ in the sense of $ e^{-i H t}$?
Answer: People wrote very good answers, but as a novice to the field I honestly couldn't understand them. I finally have a simple derivation that gives this result, so I'll share it here. The end result is - they are equivalent, and it is particularly hard to prove. Here is a BAD proof, that can set some minds at ease. It has many different problems, the most important one being that it doesn't work for Fermions.
First of all, notice that for a function $F(A,B)$ of two operators $A,B$ we always have:
$$ [A,F] = \frac{\partial F}{\partial B} [A,B] $$
Assuming $[A,B]$ is a scalar (i.e. it commutes with everything). This can be proven by expanding $F$ in a power series.
In classical mechanics, if we have a Field $\phi$ with a Lagrangian density $L$, we can define $H$ with the property of:
$$ \dot{\phi} = \{ \phi, H \}_{PB} = \frac{\partial H}{\partial \Pi} \{ \phi, \Pi \}_{PB} $$
Which is basically Hamilton's equation. If we define $\phi$ in such a way that:
$$ [\phi, \Pi] = i \{\phi, \Pi\}_{PB} $$
as we do in QFT, we will finally get the defining equation of QM:
$$ \dot{\phi} = \frac{\partial H}{\partial \Pi} \{ \phi, \Pi \}_{PB} = -i \frac{\partial H}{\partial \Pi} [ \phi, \Pi ] = -i [\phi, H] = i [H, \phi] $$
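As a concrete check of the equivalence (my addition, using the question's convention $\Psi(t)=e^{iHt}\Psi e^{-iHt}$, i.e. $\dot{A}=i[H,A]$, with $[\phi(\mathbf{x}),\Pi(\mathbf{y})]=i\delta^{3}(\mathbf{x}-\mathbf{y})$): for the free Klein-Gordon field,

```latex
H = \int d^3x \; \tfrac{1}{2}\!\left[\Pi^2 + (\nabla\phi)^2 + m^2\phi^2\right],
\qquad
\dot{\phi} = i[H,\phi] = \Pi,
\qquad
\dot{\Pi} = i[H,\Pi] = \nabla^2\phi - m^2\phi,
```

which combine into $\ddot{\phi} - \nabla^{2}\phi + m^{2}\phi = 0$, exactly the E.L. (Klein-Gordon) equation $(\partial_\mu \partial^\mu + m^2)\phi = 0$ from the question.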
So all 3 formulations (E.L., Hamilton and Heisenberg) are equivalent. | {
"domain": "physics.stackexchange",
"id": 55103,
"tags": "quantum-field-theory, lagrangian-formalism, schroedinger-equation, hamiltonian-formalism, klein-gordon-equation"
} |
The ListenHear Game - Listen and type the word | Question: What it does:
Speak a random word chosen from an array and ask the user to type the word in the box for 15 seconds
If right, +1pts, reset the timer and go to next word.
If wrong: do nothing.
After 15 seconds: game over.
body {background-color: black; color: white;}
.center {
width: 50%;
height: 50%;
position: absolute;
top:0;
bottom: 0;
left: 0;
right: 0;
margin: auto;
}
.hidden {
display: none;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.1.1/jquery.min.js"></script>
<center class="center start">
<h1>
The ListenHear<br>Game<br><br><button class="w3-btn w3-green" onclick="play()">Play</button><button class="w3-btn w3-red" onclick="exit()">Exit</button>
</h1>
</center>
<center class="center hidden">
<h1 id="x"><x class="w3-red">There was an error.</x> </h1>
</center>
<script>
var read;
var time = 0;
var i = 0;
var words;
var c;
$.getJSON( "https://gist.githubusercontent.com/khanh2003/ae6c144ed12aa4e6dce98c40163935d1/raw/9073f8a40a39ecbe02a38547f99bca9e3637660c/JSON.json", function( data ) {
words=data;
});
function play() {
$(".start").hide(function() {
$(".hidden").removeClass("hidden");
$("#x")[0].innerHTML = "<i style='color:green'>Listen and fill in the blank the word(s).</i>";
setTimeout(game,2000);
});}
function game() {
read = words[Math.floor(Math.random()*words.length)];
$("#x")[0].innerHTML = "<input type=text onkeypress=check() style=width:100%></input><button onclick='speechSynthesis.speak(new SpeechSynthesisUtterance(read));'>Again please!</button><br><div class='time'></div><br><button onclick=giveup()>Give up</button>";
$("input").focus();
speechSynthesis.speak(new SpeechSynthesisUtterance(read));
time = 0;
c=setInterval(function() {
time=time+0.01;
$(".time").text(Math.floor(15-time));
if(time>15) {giveup();}
}, 10);
}
function check() {
if ($("input")[0].value.toUpperCase() === read.toUpperCase()) {
$("#x")[0].innerHTML = "<i style='color:green'>Correct! Score: "+(++i)+"</i>";
time = 0;
setTimeout(game,2000);
clearInterval(c);
}
}
function giveup() {
$("#x")[0].innerHTML = "<i style='color:red'>Time's Up! Score: "+(i)+"<br>The word(s) are: "+read+"</i>";
time=0;
setTimeout(play,5000);
}
function exit() {$("body").slideUp(function() {window.close;});}
</script>
For those who are complaining that the code is broken: the SpeechSynthesis API (which the code depends on) doesn't have good browser support.
Answer: Your game() calls c=setInterval(…), which is balanced by clearInterval(c) in check(). However, giveup() does not call clearInterval(c). As a result, if you just start the game but do nothing, then it gets ridiculously faster and faster. | {
"domain": "codereview.stackexchange",
"id": 23030,
"tags": "javascript, html, quiz"
} |
Using std::array to implement variadic construction of class with list-of-self as subclass | Question: I have a situation with a base class for whom a container of itself is a subclass. So a Block of Item has various ways in which it acts like an Item itself. I want to be able to initialize Block as Block {foo, baz, bar...} in a variadic syntax.
Here are the twists:
When a Block is constructed, the parameters in the initialization list are not simply Items. (Imagine some represent two items... while others might just be a different meaning for the literal types passed in.) To simplify the example I am suggesting that constructing an Item from a C-string literal is not available to the user... but that a Block initialization list would know what to do with that in context.
I'm trying to avoid creating std::vector in the course of the initialization, and instead building a std::array for as near-zero runtime overhead as I can get. Each element in the Block initialization should be constructed (or copy constructed) just once in the process; right in place in the array... though this array is only needed temporarily. As far as I can tell, this rules out std::initializer_list and I must use a variadic function.
The basic idea is to create a class that has containership of an Item which is a friend of Item... here called Listable. It has its own set of constructors:
class Item {
friend class Listable;
friend class Block;
Item (char const *) { std::cout << "Istring\n"; }
Item (void *, size_t) { std::cout << "Iblock\n"; }
public:
Item (Item const &) { std::cout << "Icopy\n"; }
Item (float) { std::cout << "Ifloat\n"; }
};
class Listable {
Item item;
public:
Listable (Item const & i) : item (i) { std::cout << "Litem\n"; }
Listable (const char * s) : item (s) { std::cout << "Lstring\n"; }
Listable (float f) : item (f) { std::cout << "Lfloat\n"; }
};
class Block : public Item {
protected:
Block (Listable * l, size_t c) : Item (l, c) { std::cout << "Bpointer\n"; }
template<size_t N>
Block (std::array<Listable, N> a) : Block (&a[0], N) { std::cout << "Barray\n"; }
public:
template<typename... Ts>
Block (Ts const & ...t) : Block (std::array<Listable, sizeof...(t)>{t...})
{ std::cout << "B\n"; }
};
Here is a usage, showing a string not working for the construction of an Item while the Block knows how to handle it. The goodItem is the only one that needs to be copied, as its original instance was not constructed in-place in an array slot:
#include <iostream>
#include <array>
/* #include the classes above */
int main() {
auto goodItem = Item {10.20};
auto goodBlock = Block {goodItem, "blue", "red", 3.04};
/* auto misplacedItem = Item {"purple"}; */ // errors, correctly...
}
I compiled with no warnings with:
g++ -pedantic -Wall -Wsign-conversion -Wextra -Wcast-align -Wcast-qual -Wctor-dtor-privacy -Wdisabled-optimization -Wformat=2 -Winit-self -Wlogical-op -Wmissing-declarations -Wmissing-include-dirs -Wnoexcept -Wold-style-cast -Woverloaded-virtual -Wredundant-decls -Wsign-promo -Wstrict-null-sentinel -Wstrict-overflow=5 -Wswitch-default -Wundef -Werror -Wno-unused --std=c++11 test.cpp -o test
This output is what I wanted to see:
Ifloat
Icopy
Litem
Istring
Lstring
Istring
Lstring
Ifloat
Lfloat
Iblock
Bpointer
Barray
B
Can anyone spots any problems or suggestions for improvement on this technique?
Answer: I have a few things to say about your code:
Do you intend to actually do things in the constructors instead of just displaying strings? Knowing this would help the review process. Currently, you do not store anything passed to Item, but if you intend to store things at some point, it may change things.
Instead of using &a[0], which is kind of obscure and not easy to find with a simple search query, you should use a.data(), which clearly shows the intent.
It would probably be a good idea to pass the std::array by const reference in the constructor instead of passing it by copy. That will avoid useless copies:
template<size_t N>
Block (std::array<Listable, N> const& a) : Block (&a[0], N) { std::cout << "Barray\n"; }
Of course, that implies that you take const Listable* instead of Listable* at some point (but we come back to my first point: we lack information about what your code is supposed to do). I don't know what you intend to do with this, but passing a pointer to the underlying memory of a temporary doesn't feel right. This may create dangling pointers.
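Since the array argument is a temporary built inside the delegating constructor, another option is an rvalue-reference overload, which binds to that temporary directly. A compilable sketch with stub types (my own illustration, not the question's full classes):

```cpp
#include <array>
#include <cstddef>

struct Listable {
    int value;                      // stand-in payload for the sketch
    Listable(int v) : value(v) {}
};

struct Block {
    std::size_t count;

    // Target constructor: consumes the buffer (body stubbed out here).
    Block(Listable* items, std::size_t n) : count(n) { (void)items; }

    template <std::size_t N>
    Block(std::array<Listable, N>&& a)  // binds to the temporary, no array copy
        : Block(a.data(), N) {}         // a.data() instead of &a[0]

    template <typename... Ts>
    Block(Ts const&... ts)
        : Block(std::array<Listable, sizeof...(ts)>{ts...}) {}
};
```

With this overload, Block b{1, 2, 3} materializes exactly one std::array temporary and hands its buffer straight to the pointer/size constructor.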
EDIT: since your code isn't supposed to store the pointer, taking the std::array by copy should be safe. But as you noted in the comments, using an rvalue-reference parameter should be even better: it binds to the temporary, you can modify it, and no copy is performed. Seems like the best solution. | {
"domain": "codereview.stackexchange",
"id": 11045,
"tags": "c++, optimization, c++11, array"
} |
Doubt on precession | Question: So we are studying rotation of rigid bodies. Our teacher talked about precession, but not in much detail.
It got me thinking: if we have an axis of the axis of rotation, then
how many such axes of axes of ... of the body can there be?
If the answer is many, can't we ultimately represent any motion as a sum of such motions?
Sorry if it's nonsense as it is coming from just a physics enthusiast and not a scholar or something
There is only one axis of rotation. This is because angular velocity can be described as a vector quantity. It is impossible to have two axes at once without having the particle exist in two places at once.
http://www.feynmanlectures.caltech.edu/I_11.html
Since angular momentum is conserved, $L = MR^2\omega = M(r_x^2+r_y^2+r_z^2)\omega$. | {
"domain": "physics.stackexchange",
"id": 39276,
"tags": "newtonian-mechanics, rotational-dynamics, precession"
} |
Latest cosmological parameters | Question: I'm looking for the latest values (with uncertainties) of the four main cosmological density parameters $\Omega_i$ :
\begin{align}\tag{1}
\Omega_{\text{mat}} &={} ?,
&\Omega_{\text{rad}} &={} ?,
&\Omega_{\Lambda} &={} ?,
&\Omega_{k} &={} ?.
\end{align}
I know that $\Omega_{\text{mat}} \approx 0.30$, $\Omega_{\text{rad}} \approx 0.00$, $\Omega_{\Lambda} \approx 0.70$ and $\Omega_{k} \approx 0.00$, but I would like to have more precise values (with uncertainties, if possible). Take note that these parameters are constrained by the following relation :
\begin{equation}\tag{2}
\Omega_{\text{mat}} + \Omega_{\text{rad}} + \Omega_{\Lambda} + \Omega_{k} \equiv 1.
\end{equation}
Of course, I checked Wikipedia (the Lambda-CDM model article), but I don't trust it very much.
I've also checked on arXiv. For example: Planck 2015 results. XIII. Cosmological parameters,
but I don't find clear final and consensual values in this paper.
Help would be appreciated. Please state your sources.
Answer: Cosmological parameters are measured in a variety of ways, and their values will depend on which measurements you trust the most. The paper you link to (Planck Collaboration et al. 2016) with the 2015 results from the Planck observations of the cosmic microwave background is probably the one that most people will accept, but even in that paper you will find different values, depending on which observables you combine.
You will find the values in their Table 4. I think that most people use the values in the column called "TT+lowP+lensing" (e.g. Geil et al. 2016, Ricotti et al. 2016, and Liu et al. 2016), which is the "conservative" choice. However, you'll also find some (e.g. Chevallard & Charlot 2016 and Silk 2016) who use the values in the last column, called "TT,TE,EE+lowP+lensing+ext". These values take into account external data (baryonic acoustic oscillations and supernovae data), which reduce the uncertainties, arguably to unnaturally small values. The TT, TE, EE, and lowP refer to the temperature and polarization spectra used, and "lensing" to the weak gravitational lensing measurements by Planck.
Standard cosmological parameters
The table below is a modified version from the Planck paper where I show only the most used parameters:
Here, $n_s$ is the slope of the primordial power spectrum, $H_0$ is the Hubble constant in km s–1 Mpc–1, $\Omega_\Lambda$ and $\Omega_m$ are the density parameters of dark energy and total (dark+baryonic) matter, $\Omega_\mathrm{b}h^2$ and $\Omega_\mathrm{c}h^2$ are the density parameters of baryonic and dark matter, multiplied by the factor $h \equiv H_0/100$ (squared), $\sigma_8$ is the matter density fluctuations on scales of 8 (comoving) Mpc, $z_\mathrm{re}$ is the redshift at which the Universe was reionized (assuming instant reionization), and the last row shows the inferred age of the Universe in billion years.
Curvature density
The constraints on the curvature parameter $\Omega_K$ is given in Table 5, which has somewhat different combinations of data. All in all, Planck constrains the curvature to $|\Omega_K| < 0.005$, but you will rarely offend anyone by simply setting the curvature to zero.
Temperature and radiation density
The radiation density is a bit more convoluted. It has a contribution from both photons and neutrinos, and their densities are related as
$$
\rho_\nu = N_\mathrm{eff} \frac{7}{8} \left(\frac{4}{11}\right)^{4/3}\rho_\gamma,
$$
where $N_\mathrm{eff} = 3.046$ is the effective number of neutrino species. Following the procedure by Pulsar in this answer, but with updated parameters (i.e. the $N_\mathrm{eff}$ given above, and the average CMB temperature of $T_0 = 2.722\pm0.027$ (Eq. 83a)), I get that
$$
\begin{array}{rcl}
\Omega_\mathrm{rad}h^2 & = & \Omega_\nu h^2+ \Omega_\gamma h^2 \\
& = & (1.7018 + 2.4602) \times 10^{-5} \\
& = & 4.1620\times10^{-5},
\end{array}
$$
that is, with $h = 0.6781$,
$$
\Omega_\mathrm{rad} = 9.0513\times10^{-5}.
$$
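To reproduce these radiation numbers, here is a short sketch (my own code, not from the original answer; SI constants rounded, so the last digit may differ slightly):

```python
import math

# Constants (SI)
k_B = 1.380649e-23           # J/K
hbar = 1.054571817e-34       # J s
c = 2.99792458e8             # m/s
G = 6.6743e-11               # m^3 kg^-1 s^-2
Mpc = 3.0856775814913673e22  # m

T0 = 2.722                   # K, CMB temperature used above
N_eff = 3.046                # effective number of neutrino species

# Photon energy density u = (pi^2/15) (kT)^4 / (hbar c)^3, as a mass density /c^2
rho_gamma = (math.pi**2 / 15) * (k_B * T0)**4 / (hbar * c)**3 / c**2

# Critical density for h = 1, i.e. H0 = 100 km/s/Mpc
H100 = 1.0e5 / Mpc
rho_crit_h1 = 3 * H100**2 / (8 * math.pi * G)

omega_gamma_h2 = rho_gamma / rho_crit_h1
omega_nu_h2 = N_eff * (7 / 8) * (4 / 11) ** (4 / 3) * omega_gamma_h2
omega_rad_h2 = omega_gamma_h2 + omega_nu_h2

h = 0.6781
omega_rad = omega_rad_h2 / h**2
print("Omega_rad h^2 = %.4e, Omega_rad = %.4e" % (omega_rad_h2, omega_rad))
# -> approximately 4.162e-05 and 9.05e-05, matching the values quoted above
```

This reproduces the quoted $\Omega_\mathrm{rad}h^2 = 4.1620\times10^{-5}$ and $\Omega_\mathrm{rad} = 9.0513\times10^{-5}$ to within rounding of the constants.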
Recap
So, answering your question is a little difficult, as there's no single answer, and since the curvature is given with 95% confidence ("2$\sigma$"), rather than 68% ("1$\sigma$"). For the Hubble constant and the matter and dark energy, I'd recommend $H_0 = 67.81\pm0.92 \,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}$, $\Omega_\mathrm{m} = 0.308\pm0.012$ and $\Omega_\Lambda = 0.692\pm0.012$.
For curvature, I would use 0 (especially because your calculations probably would need to switch between ordinary and hyperbolic trigonometry depending on the sign), but if you do want to include uncertainty, you can say $\Omega_K=0\pm0.005$ (95%). Or, you could simply use $\Omega_K = 1 - \Omega_\mathrm{m} - \Omega_\Lambda - \Omega_\mathrm{rad}$ and propagate the errors to get $\Omega_K = 0\pm0.017$, which will give you a more conservative value.
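That $\pm 0.017$ is just the quadrature sum of the matter and dark-energy uncertainties, treating them as independent; a one-line check (the radiation term is negligible):

```python
import math

# sigma(Omega_K) from Omega_K = 1 - Omega_m - Omega_Lambda - Omega_rad,
# with independent errors (the conservative treatment described above)
sigma_K = math.hypot(0.012, 0.012)   # sigma(Omega_rad) ~ 2.5e-6 is negligible
print(round(sigma_K, 3))             # 0.017
```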
For radiation, to propagate uncertainties you would have to know the covariance matrix of the input parameters, but since the total error is dominated by that of the Hubble constant, standard error propagation $(\sigma_{\Omega_\mathrm{rad}}/\Omega_\mathrm{rad} \simeq 2 \sigma_h/h)$ yields a value of $\Omega_\mathrm{rad} = (9.0513\pm0.2456)\times10^{-5}$.
So, to be explicit my recommendation is:
$$
\{\Omega_\mathrm{m}, \Omega_\Lambda, \Omega_\mathrm{rad}, \Omega_K\}
= \{ 0.308, 0.692, 9.05\times10^{-5},0\}
\pm \{0.012, 0.012, 2.46\times10^{-6}, 0 \}.
$$
But I think the most important thing is to state where you take the parameters from. People rarely state why the choose a particular set of parameters, and although Planck gives very small error bars, other probes give error bars so small that they're basically mutually incompatible. That's why you can still easily get away with $\{\Omega_\mathrm{m}, \Omega_\Lambda, \Omega_\mathrm{rad}, \Omega_K\} = \{ 0.3,0.7,0,0\} \pm \{0,0,0,0\}$. | {
"domain": "astronomy.stackexchange",
"id": 2008,
"tags": "universe, cosmology, general-relativity, dark-matter, hubble-constant"
} |
ORCA: How to plot an adiabatic potential in dihydrogen H2 molecule? | Question: I started to study ORCA, and I am trying to obtain classical results for dihydrogen as an example. I need a starting point to understand what needs to be done.
So, how can I plot an adiabatic potential of the dihydrogen H2 molecule with an ORCA calculation?
! RHF OPT def2-QZVPP
%geom Scan
B 0 1 = 1.0, 3.0, 12
end
end
* xyz 0 1
H 0.000000 0.00000 0.00000
H 0.800000 0.00000 0.00000
*
Answer: By adiabatic, I presume you mean the Born-Oppenheimer approximation (which is usually used). If you need a plot of potential against H-H distance, then you need to do a relaxed potential energy surface (PES) scan.
! UHF OPT def2-QZVPP
%geom Scan
B 0 1 = 0.3, 1.3, 30
end
end
* xyz 0 1
H -4.61685 1.79381 0.00000
H -4.14986 1.26166 0.00000
*
The Scan directive in the %geom section will perform a relaxed geometry scan (although in this case you have only two atoms, so there is no difference between relaxed and unrelaxed). The B 0 1 = 0.3, 1.3, 30 line will scan the bond (B) between atom 0 and atom 1 (in Orca, atom counting starts from 0). The scan will start from a bond distance of 0.3 Å, end at 1.3 Å, and go through 30 steps (i.e. the PES will have 30 points in total).
Another thing is that you need to use unrestricted Hartree-Fock (UHF) because when the molecule dissociates the two electrons go into different orbitals, so RHF will give the wrong behaviour at larger bond distances (it will force double occupancy in one orbital, resulting in $\ce{H+}$ and $\ce{H-}$ fragments). UHF gives the right behaviour but may not reproduce the correct energy. Multi-configurational methods might be required.
The input file that I have shown will give you the energy behaviour of the ground state (in your graph, $\mathrm{U_S}$). Plotting the first excited state (antibonding), i.e. the $\mathrm{U_A}$ graph, would be more difficult. You would probably need to do some sort of orbital rotation to converge to the excited triplet state, and then use that as the guess for the geometry scan (or maybe do TDDFT). I don't know much about that.
After running the calculation, the energy values at each bond length will be printed out at the end of the .out file, and also as a .dat file. You can import the .dat file into Excel or other programs to get the actual visual plot.
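If you'd rather script the plot than use Excel, something like the following works. The Morse-like data generated here is only a stand-in for the real two-column .dat file (bond length, energy) that Orca writes, so that the script runs end to end; the filenames are hypothetical — substitute your actual scan output.

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')          # headless backend; drop this line for an interactive window
import matplotlib.pyplot as plt

# Stand-in for Orca's scan output: two whitespace-separated columns
# (bond length in Angstrom, energy in Hartree). Replace with your real .dat.
r = np.linspace(0.3, 1.3, 30)
E = -1.17 + 0.17 * (1 - np.exp(-2.0 * (r - 0.74)))**2
np.savetxt('scan.dat', np.column_stack([r, E]))

data = np.loadtxt('scan.dat')
plt.plot(data[:, 0], data[:, 1] - data[:, 1].min(), 'o-')
plt.xlabel('H-H distance / Angstrom')
plt.ylabel('E - E_min / Hartree')
plt.savefig('pes.png', dpi=150)
```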
Edit: If I am not mistaken, the $\mathrm{U_A}$ graph indicates the first excited triplet state, so converging to that UHF solution is easier, all you have to do is to set the multiplicity to 3 (*xyz 0 3). Using UHF/def2-SVP, the scan looks like this—
As you can see, the two curves are crossing whereas they shouldn't really cross. I am not entirely sure why this happens, but I suspect that using multiconfigurational methods will solve this. Also note that the energy plotted is the absolute SCF energy, so the graphs don't go to zero at high bond distances.
"domain": "chemistry.stackexchange",
"id": 15082,
"tags": "quantum-chemistry, software"
} |
Why aren't Maxwell's equations overdetermined? | Question: Consider the four differential equations in the table given on wikipedia here and assume there is no charge distribution at any point in time, and thus also no current. If there is no charge, then the four equations reduce to the following:
$\nabla\cdot E = 0$
$\nabla\cdot B = 0$
$\frac{\partial B}{\partial t} = -\nabla\times E$
$\frac{\partial E}{\partial t} = c^2\nabla\times B$
The last two equations tell us how the magnetic and electric fields change over time respectively, thus given some initial magnetic and electric fields, one should be able to determine any future state of both fields. This makes the first two equations seem redundant to me and thus the system seems overdetermined. However they are clearly necessary, so I must be missing something. Are the first two equations simply initial conditions?
Answer: The first two Maxwell equations describe static electric and magnetic fields. From these equations we learn the geometric properties of such fields, and the nature of the lines of force these fields produce. The first one (when there is charge present)
$$\nabla \cdot \vec E = \rho$$
leads us to determine the form of the electric field for any kind of charge distribution. This is extremely important for the study of electrostatics. Furthermore, this equation can be used to derive the Poisson equation,
$$\nabla^2 V = -\rho$$
which allows us to determine the electrostatic potential $V$
for various charge distributions. We can also use the above Maxwell equation to derive Coulomb’s law (though this law is not necessarily a direct result of this equation only). The Poisson equation is also a very powerful tool in the study of electrostatics. This equation also has powerful applications in semiconductor physics.
The second equation you mention,
$$\nabla \cdot \vec B = 0$$
tells us something very important, which is that magnetic monopoles do not exist. The mathematical implication of this equation is that there must exist a magnetic vector potential $\vec A$ such that
$$\vec B = \nabla \times \vec A$$
This is a powerful mathematical result. This magnetic vector potential is ubiquitous in classical electrodynamics and quantum electrodynamics. | {
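That $\vec B = \nabla\times\vec A$ automatically satisfies $\nabla\cdot\vec B=0$ can be checked symbolically; the vector potential below is an arbitrary smooth example, not anything physical:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
A = sp.Matrix([sp.sin(y * z), x**2 * z, sp.exp(x * y)])   # arbitrary example A

B = sp.Matrix([                                   # B = curl A
    sp.diff(A[2], y) - sp.diff(A[1], z),
    sp.diff(A[0], z) - sp.diff(A[2], x),
    sp.diff(A[1], x) - sp.diff(A[0], y),
])

div_B = sp.simplify(sp.diff(B[0], x) + sp.diff(B[1], y) + sp.diff(B[2], z))
print(div_B)   # the divergence of a curl vanishes identically
```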
"domain": "physics.stackexchange",
"id": 74916,
"tags": "electromagnetism, maxwell-equations"
} |
If I graft two trees together while young, will they grow as one plant? | Question: If I were to graft two apple saplings together -- by bending the tops toward each other and lashing them together -- will the plants grow as one and benefit from one another, or will they be fighting each other for root space and light? If they would grow with each other, then I could theoretically grow a line of closely spaced fruit trees to any length, and they would be strengthened by each other in bad conditions.
Answer: There are a couple of answers to this question. Especially where trees are concerned, you can graft two or more trees onto the same rootstock, or even a single limb into a tree.
But if the graft takes, it won't behave much differently from just more branches of the same tree. Structurally intertwining them will not be different than if you had just taken a single tree's branches to support each other. The graft will usually only have a single set of roots, from the host tree. They will not compete. The tendency will be for the branches to grow apart so that they can independently get their own light. This is very much like any other single tree. Not sure about fusing two halves of a tree together - exposing the roots would tend to kill the tree or unsettle it.
"domain": "biology.stackexchange",
"id": 8052,
"tags": "botany, plant-physiology"
} |
What are ARPES features of charge density waves in the phase diagram of high-Tc superconductors? | Question: Part of the phase diagram of high-$T_c$ superconductors is charge density waves in the superconducting phase. What are distinctive features of ARPES (Angle-resolved photoemission spectroscopy) for such a phase?
Moreover, there may be pair density waves, which are density waves of Cooper pairs. Are there any distinctive features of ARPES for such a phase, in comparison with other phases?
Answer: I'm not an expert in cuprates but I have recently read into the matter of seeing charge order in some transition metal oxides, especially in nickel compounds, and I have done ARPES on semiconductors many years ago.
A good review reference for ARPES (angle-resolved photoemission) on cuprate superconductors seems to be
https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.75.473
Simply speaking, the impact of a charge density wave on ARPES is to simply repeat the bands with a wave vector corresponding to the wave. For example, if you have a charge order of 1/3 the Brillouin zone, you would see a repetition of the band structure at 1/3 of the zone (as well as at 2/3, because of the Brillouin-zone repetition at 1).
So for charge order, you would expect to see a repetition of the cuprate bands at the charge ordering vector. It seems so far, no one has ever seen such repetitions in cuprate superconductors.
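The folding picture can be sketched with a toy one-dimensional tight-binding band; all parameters here are hypothetical, not cuprate values:

```python
import numpy as np

t, a = 1.0, 1.0                       # hypothetical hopping and lattice constant
Q = 2 * np.pi / (3 * a)               # CDW ordering vector for period-3 order
k = np.linspace(-np.pi, np.pi, 601)

E = -2 * t * np.cos(k * a)            # bare band
E_rep = -2 * t * np.cos((k - Q) * a)  # replica band, shifted by Q

# ARPES would show a (usually faint) copy of the band: its band bottom
# sits at k = Q instead of k = 0.
print(k[np.argmin(E)], k[np.argmin(E_rep)])
```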
For a density wave of the Cooper pairs, I don't fully understand the circumstances, so I assume this just means a spatial modulation of the superconducting gap. In that case you would see suppression of the superconducting gap at certain wavevectors of the Fermi surface corresponding to the periodicity of that modulation. If that vector does not lie on the Fermi surface I'm not sure you would see this effect at all. In any case, it seems like this effect has not been seen either.
"domain": "physics.stackexchange",
"id": 50016,
"tags": "experimental-physics, superconductivity"
} |
Prove that $ln(n)^r \in o(n^p)$ for $p>0$ and $r\in \mathbb{R}$ | Question: I am trying to prove $f\in o(g)$
Let be $r,p\in \mathbb{R}$ with $p>0$
We have $f(n)=ln^r (n)$ and $g(n)=n^p$
I have already proved that $ln(n)\in o(n)$ via l'Hôpital
$\lim\limits_{n\to \infty}\frac{\frac{1}{n}}{1}=0\Longrightarrow \lim\limits_{n\to \infty}\frac{ln(n)}{n}=0\Longleftrightarrow ln(n)\in o(n)$
I tried substituting $n$ with $ln(n)$ to receive
$\forall c>0 \exists n_0 \forall n\geq n_0 : ln(ln(n))\leq cln(n)$
Then I tried splitting $c=\frac{a}{b}$ to get $\forall \frac{a}{b}>0\exists n_0 \forall n\geq n_0 : ln(ln(n)) \leq \frac{a}{b}ln(n)$
With this information I said that $\frac{a}{b}*\frac{b}{a}*b*ln(ln(n))=b*ln(ln(n))\leq \frac{a}{b}*\frac{b}{a}*\frac{a}{b}*b*ln(n)=\frac{a}{b}*b*ln(n)=a*ln(n)$
to conclude $ln(n)^b=e^{\frac{a}{b}*\frac{b}{a}*b*ln(ln(n))}\leq e^{\frac{b}{a}*b*ln(n)}=e^{b*ln(n)}=n^a$
At this point I realized a huge problem. I assumed $\frac{a}{b}$ to be positive. But I need it to work for $r$ and $p$, where only $p$ is known to be positive, which on the other hand means that $r$ can be negative, which would make $\frac{p}{r}$ negative.
It means that my approach is probably not working.
Answer: You just need to show that
$$
\lim_{n \to \infty} \frac{\log^r n}{n^p} = 0.
$$
This is trivial if $r \le 0$ since $\frac{\log^r n}{n^p} = \frac{1}{n^p \log^{-r} n}$ and $\lim_{n \to \infty} n^p \log^{-r} n = +\infty$.
For $r>0$, you can use l'Hôpital's rule $\lceil r \rceil$ times to obtain:
$$
\lim_{n \to \infty} \frac{\log^r n}{n^p}
= \lim_{n \to \infty} \frac{r \log^{r-1} n}{pn^{p}}
= \dots
= \lim_{n \to \infty} \frac{\prod_{i=0}^{\lceil r \rceil-1} (r-i) \cdot \log^{r-\lceil r \rceil} n}{p^{\lceil r \rceil} n^{p}} \\
=
\frac{\prod_{i=0}^{\lceil r \rceil-1} (r-i)}{p^{\lceil r \rceil}}\cdot \lim_{n \to \infty} \frac{ \log^{r-\lceil r \rceil} n}{ n^{p}} =0,
$$
where $\frac{\prod_{i=0}^{\lceil r \rceil-1} (r-i)}{p^{\lceil r \rceil}}$ is a positive constant and
$
\lim_{n \to \infty} \frac{ \log^{r-\lceil r \rceil} n}{ n^{p}}
$
is equal to $0$ since it falls into the previous case. | {
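A quick numerical sanity check of the limit, for a few arbitrary $(r,p)$ pairs including a negative $r$:

```python
import math

def ratio(n, r, p):
    return math.log(n) ** r / n ** p

# Arbitrary (r, p) pairs; note that for large r/p the decrease only sets
# in beyond n ~ e^(r/p), so convergence can look deceptively slow.
pairs = [(3.0, 0.5), (1.5, 0.25), (-2.0, 0.3)]
results = {rp: [ratio(10.0 ** e, *rp) for e in (3, 6, 12)] for rp in pairs}
for rp, vals in results.items():
    print(rp, vals)   # each sequence decreases towards 0
```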
"domain": "cs.stackexchange",
"id": 17236,
"tags": "asymptotics, complexity-classes, landau-notation"
} |
How does radiolabeling work? | Question: Protein turnover can be measured by calculating the "decay" or loss of radio-labeled proteins in the blood, for example, but I am confused at how this calculation works. Wouldn't the radioactive isotope that you're using be decaying due to being unstable and not just due to protein turnover? Would you have to account for that in your calculations?
Answer: Technically yes, but practically it depends on what the half-life of your isotope is relative to the half-life of your protein. For example, tritium has a half-life of a little over 12 years, but most protein labeling experiments take place over only a few days at most (it depends on your protein). So while technically the tritium is decaying during your experiment, the amount of decay will be below your detection limit, and therefore is negligible over the course of the experiment.
So basically it works best to choose an isotope that will not appreciably decay over the time course of your experiment so you don't have to worry about it. If this is not possible, then yes, you would need to correct for this. | {
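To put numbers on it: the surviving fraction of label is $N(t)/N_0 = 2^{-t/t_{1/2}}$, so for tritium over a hypothetical 3-day experiment (half-life taken as roughly 12.3 years):

```python
# Fraction of a tritium label surviving a 3-day experiment.
t_half_days = 12.3 * 365.25        # ~12.3-year half-life, in days
frac = 2 ** (-3 / t_half_days)
print(frac)                        # decay is negligible on this timescale
```

If a correction were ever needed, you would simply divide the measured counts by this fraction.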
"domain": "biology.stackexchange",
"id": 8076,
"tags": "proteins, experimental-design"
} |
Why do they consider radioactive matter with long half lives more dangerous than matter with a short half life? | Question: The title says it all.
For example why is plutonium considered more dangerous than radioactive iodine?
Answer: A more balanced approach might be to recognize that both short and long half-life materials can be serious hazards, but usually for somewhat different reasons. Also, the devil is very much in the details here, because issues such as how your body absorbs the isotopes are also very, very important.
Radioisotopes with short half-lives are dangerous for the straightforward reason that they can dose you very heavily (and fatally) in a short time. Such isotopes have been the main causes of radiation poisoning and death after above-ground explosions of nuclear weapons.
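The "doses you heavily in a short time" point is just the inverse relation between half-life and activity, $A = (\ln 2/t_{1/2})\,N$. Comparing specific activities per gram, with rounded literature half-lives as assumed inputs:

```python
import math

N_A = 6.02214076e23    # Avogadro's number

def specific_activity(t_half_s, molar_mass_g):
    """Decays per second per gram: (ln 2 / t_half) * N_A / M."""
    return math.log(2) / t_half_s * N_A / molar_mass_g

# Rounded literature half-lives: I-131 ~8.02 days, Pu-239 ~24,100 years.
A_I131 = specific_activity(8.02 * 86400, 131.0)
A_Pu239 = specific_activity(24100 * 365.25 * 86400, 239.0)
print(A_I131 / A_Pu239)   # the short-lived isotope is ~a million times "hotter" per gram
```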
Iodine is an example where preferential absorption by the human body can further aggravate the dangers of short-lived isotopes.
Long-term isotopes are more complicated. They don't dose as heavily, but there are a lot more issues than just that. Plutonium for example is comparatively long-lived, but some of its decay products can be quite nasty. Also, plutonium happens to be particularly toxic due to its chemistry, which aggravates the damage it can do.
The biggest danger from radioisotopes with mid-to-long half-lives is that they can keep an entire region of earth nastily radioactive for a very long time, e.g. hundreds or thousands or even tens of thousands of years. That's the main reason why disposing of reactor wastes, which often contain just such isotopes, is such a contentious issue.
At the extreme end are isotopes that are so long-lived that their hazard levels are close to zero. Uranium-238, the kind left after the fissile 235 is removed, pretty well falls into this category. Bismuth (as in the main ingredient in a popular pink stomach relief aid) is ironically in this category, with a half-life so long it's hard even to tell that it is radioactive. | {
"domain": "physics.stackexchange",
"id": 8652,
"tags": "radioactivity"
} |
Gauss's law for cylinder with infinite height with a spherical cavity | Question: Imagine there is a cylinder with a charge density of +Q per unit volume and of infinite length. Now place a spherical cavity inside it with a diameter equal to the cross-section diameter of the cylinder. Is there an electric field inside the sphere? If so, is it possible to calculate the E-field with Gauss's Law?
Answer: Yes, you can use Gauss's law, but I will leave you to work out the details. You use the principle of superposition.
Use Gauss's law (cylindrical symmetry) to work out the E-field inside the uniform cylinder, without the spherical hole in it.
Use Gauss's law (spherical symmetry) to work out what the E- field would be due to a sphere with a negative charge density $-Q$, in the position you have shown the spherical cavity.
Your situation is equivalent to the sum of these two fields. | {
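A small numeric sketch of the superposition, in SI units; the density, radius and test point are made up. Inside the cavity the full cylinder contributes $\rho\,\vec s/2\varepsilon_0$ (with $\vec s$ the perpendicular vector from the axis), and the negative sphere contributes $-\rho\,\vec r/3\varepsilon_0$:

```python
import numpy as np

eps0 = 8.8541878128e-12   # F/m
rho = 1.0e-6              # C/m^3, made-up charge density
R = 0.1                   # m, cylinder radius = sphere radius

def E_cylinder(p):
    """Inside an infinite uniform cylinder along z: E = rho * s / (2 eps0)."""
    s = np.array([p[0], p[1], 0.0])   # perpendicular vector from the axis
    return rho * s / (2 * eps0)

def E_hole(p, center=np.zeros(3)):
    """Inside a uniform sphere of density -rho: E = -rho * r / (3 eps0)."""
    return -rho * (np.asarray(p) - center) / (3 * eps0)

p = np.array([0.03, 0.02, 0.05])      # a test point inside the cavity (|p| < R)
E = E_cylinder(p) + E_hole(p)
print(E)   # non-zero: the field inside the cavity does not vanish
```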
"domain": "physics.stackexchange",
"id": 34287,
"tags": "electrostatics, electric-fields, gauss-law"
} |
Isotropic moments of inertia | Question: Explicit integration can show that the moment of inertia of a Platonic solid (i.e., tetrahedron, cube, octahedron, dodecahedron, or icosahedron) of uniform density is the same around any axis passing through its center. The axis need not pass through a vertex, or midpoint of an edge, or center of a face! In tensor terms, the moment of inertia tensor is a constant times the Kronecker delta, just like for a sphere.
What is the complete class of solids with an isotropic moment of inertia, and why?
Answer: Start with any solid whatsoever. Choose a coordinate system $x,y,z$ in which its moment of inertia tensor is diagonal. The diagonal components are
\begin{align}
I_{xx} = I_0 - \sum_n m_n x_n^2
\\
I_{yy} = I_0 - \sum_n m_n y_n^2
\\
I_{zz} = I_0 - \sum_n m_n z_n^2
\end{align}
where the $I_0$ term is the same for all three. Applying scale factors $a,b,c$ in the $x,y,z$ dimensions converts this to
\begin{align}
I_{xx}' = I_0' - a^2 \sum_n m_n x_n^2
\\
I_{yy}' = I_0' - b^2 \sum_n m_n y_n^2
\\
I_{zz}' = I_0' - c^2 \sum_n m_n z_n^2
\end{align}
where $I_0'\neq I_0$ but is still the same for all three components. Clearly we can choose $a,b,c$ (or any two of them) so that $I_{xx}'=I_{yy}'=I_{zz}'$.
This shows that any object, however irregular its shape may be, is just two ordinary scale-factors away from having a perfectly isotropic moment of inertia tensor. | {
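This two-scale-factor construction is easy to verify numerically for a random cloud of point masses:

```python
import numpy as np

rng = np.random.default_rng(0)
m = rng.uniform(0.5, 2.0, size=20)     # arbitrary point masses
x = rng.normal(size=(20, 3))           # arbitrary positions

def inertia(m, x):
    """Moment of inertia tensor of a set of point masses."""
    r2 = (x**2).sum(axis=1)
    return np.einsum('n,nij->ij', m,
                     r2[:, None, None] * np.eye(3) - np.einsum('ni,nj->nij', x, x))

# 1) Rotate into the principal-axis frame, where the tensor is diagonal.
_, V = np.linalg.eigh(inertia(m, x))
xp = x @ V

# 2) Scale the axes so that a^2*Sx = b^2*Sy = c^2*Sz, with S_i = sum m x_i^2.
S = (m[:, None] * xp**2).sum(axis=0)
xs = xp * np.sqrt(S[0] / S)            # scale factors (1, b, c)

Is = inertia(m, xs)
print(np.allclose(Is, Is[0, 0] * np.eye(3)))   # isotropic tensor
```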
"domain": "physics.stackexchange",
"id": 62765,
"tags": "symmetry, moment-of-inertia"
} |
knowrob_cad_models: Cannot locate rosdep definition for [iai_cad_downloader] | Question:
When I installed "Developer setup" of KnowRob in Indigo+Ubuntu14.04.4 LTS, in step
rosdep install --ignore-src --from-paths stacks/
it showed me this error:
ERROR: the following packages/stacks could not have their rosdep keys resolved
to system dependencies:
knowrob_cad_models: Cannot locate rosdep definition for [iai_cad_downloader]
How could I get rid of this error? Thanks much in advance for any help.
Originally posted by cui56 on ROS Answers with karma: 11 on 2016-03-16
Post score: 1
Answer:
This is a ROS package which has not been released into Indigo.
You need to install this from source.
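A rough, non-runnable command sketch of a source install into a catkin workspace (the workspace path is assumed, and <REPO_URL> is a placeholder for whichever repository actually provides iai_cad_downloader):

```sh
cd ~/catkin_ws/src                      # your catkin workspace (path assumed)
git clone <REPO_URL>                    # placeholder: the repo providing iai_cad_downloader
cd ~/catkin_ws
rosdep install --ignore-src --from-paths src/
catkin_make
```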
Originally posted by mgruhler with karma: 12390 on 2016-03-17
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 24142,
"tags": "ros, installation, knowrob, ros-indigo"
} |
Why do electrons come to the ground state even after absorbing energy? | Question: Imagine you have a hydrogen atom placed under sunlight. Now if we look at the 1st shell of hydrogen, it has an energy of $-13.6$ eV, while the 2nd shell has an energy of $-3.4$ eV.
1st shell -> $-13.6$ eV
2nd shell -> $-3.4$ eV
3rd shell -> $-1.5$ eV
4th shell -> $-0.85$ eV
The continuous power from the sun is more than enough to knock electrons out of the hydrogen atom. Even if we assume the power from the sun is not "ample" enough at once, e.g. the electron in the 1st shell needs $+10.2$ eV to jump into the 2nd shell: say the sun gives $+5.2$ eV at $t = 1$ s, so this energy will increase the kinetic energy of the electron; at $t = 2$ s the sun gives another $+5.2$ eV, so the electron now jumps into the 2nd shell and the remaining energy will be used to increase its kinetic energy. This process can go on until the electron is completely removed from the atom?
Why doesn't something like this happen?
Answer: There are also other processes that return the electron to the lower energy states: most notably spontaneous emission (when the electron lowers its energy and emits a photon), but also various kinds of other interactions, such as collisions with other hydrogen atoms. The drive towards lower energy then wins - this is what thermodynamics and statistical physics teach us.
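The "drive towards lower energy wins" can be made quantitative with a Boltzmann factor: even at the Sun's surface temperature, the equilibrium occupation of the $n=2$ level relative to the ground state is tiny. The numbers below are rough and ignore degeneracy factors:

```python
import math

kB = 8.617e-5          # Boltzmann constant, eV/K
T = 5778.0             # K, approximate solar surface temperature
dE = 10.2              # eV, hydrogen n=1 -> n=2 gap

ratio = math.exp(-dE / (kB * T))
print(ratio)           # ~1e-9: almost all atoms sit in the ground state
```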
"domain": "physics.stackexchange",
"id": 89430,
"tags": "electromagnetic-radiation, photons, radiation, photon-emission"
} |
Clarify vectors at angles of 90 and 270 degrees from the vertical for motion in a vertical circle | Question: I believe the expression for motion in a circle, measured at the top and bottom of that circle, is $\frac{mv^2}{r} = T - mg$ where $mg$ is negative because it acts downwards at all times.
I am confused about the tension $T$ at positions on the circle that are not at the top or the bottom, especially at $\theta=$90 degrees to the vertical, for example.
Take a mass $m$ connected to a pivot by a rod or string and made to rotate in a vertical circle. At $\theta=$90 degrees the rod or string holding the mass would be horizontal.
At any position on the circumference of the circle the horizontal component of the weight, $mg \cos\theta$, is equal to zero, and the vertical component is $mg\sin\theta=mg$
And does the centripetal force still act towards the centre, so the horizontal component must be $\frac{mv^2}{r}$?
In which case, at $\theta=90$ degrees are the weight and centripetal forces orthogonal?
Does this mean that if $\frac{mv^2}{r}$ is horizontal and is the resultant of the tension and $mg$, then in order to obtain a horizontal resultant there must be a horizontal component to the tension? Does this mean the tension must point upwards from the horizontal? Or is the tension actually horizontal?
If the latter is the case, does it mean that the tension vector always points towards the centre of the circle at all points on the circumference?
Answer: The equation you have written is correct only at the bottom. At the top, the centrifugal force is balanced by both $Mg$ and the tension.
Also, yes, tension always acts radially inwards at all times.
You need to understand the difference between centripetal and centrifugal.
In the inertial frame, the net force of the tension and $Mg$ provides the necessary centripetal force for the rotating motion.
While if we sit on the rope (figuratively) and then draw the free-body diagram of the mass, we have to apply a centrifugal force radially outward (pseudo-force concept). In this frame (non-inertial), the body is not in motion and hence you can simply equate the forces at different points, the tension being radially inward again.
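In the inertial frame, the radial equation at a general angle $\theta$ (measured from the bottom) reads $T - mg\cos\theta = mv^2/r$, with $v$ fixed by energy conservation. A short sketch with made-up numbers:

```python
import numpy as np

m, r, g = 1.0, 1.0, 9.81       # made-up mass (kg) and radius (m); g in m/s^2
v_bottom = 8.0                 # m/s, fast enough that the string never goes slack

theta = np.linspace(0, np.pi, 181)                   # angle from the bottom
v2 = v_bottom**2 - 2 * g * r * (1 - np.cos(theta))   # energy conservation
T = m * v2 / r + m * g * np.cos(theta)               # T - mg*cos(theta) = m v^2 / r

# At theta = 90 deg the weight has no radial component, so the tension is
# horizontal, points at the centre, and equals m v^2 / r exactly:
print(T[90], m * v2[90] / r)
```

This directly answers the $\theta=90^\circ$ case in the question: the tension is horizontal there and alone supplies the centripetal force.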
"domain": "physics.stackexchange",
"id": 47974,
"tags": "newtonian-mechanics, forces, vectors, centripetal-force, centrifugal-force"
} |
Heat-treatment for protein expression | Question: I want to express a thermophilic protein in E. coli, and the first purification step is to heat the crude protein. Since I don't know the optimum temperature to denature the contaminants, I plan to heat the protein at three different temperatures. I currently have 120 ml of crude protein and I wonder, is it okay for me to divide the crude protein for each temperature? So, I will heat 40 ml of crude protein at each temperature.
Answer: Yes.
In fact this is exactly what you would do if you wanted to perform the experiment to determine which condition is best. For these sorts of experiments you use one batch and split it amongst the experimental variables, so that you don't have to deal with inter-batch effects (e.g. different concentrations of protein). | {
"domain": "biology.stackexchange",
"id": 12305,
"tags": "protein-expression, purification, heat"
} |
Is there a certain amount of titrant that is ideal? | Question: For an AP chem lab dealing with determining the amount of citric acid in orange juice by titrating with NaOH, one of the discussion questions goes like this:
Choose an amount of beverage to be titrated that will require at least 10 but less than 20 mL of titrant. Explain why this range of titrant is optimal.
This question makes no sense to me. Why would there be an "optimal range" of titrant? I would think that I should fill the buret to full capacity if enough titrant is available. Would it not be better to have excessive titrant than not enough of it?
We completed the lab, and the titration required 23.05 mL of 0.1 M NaOH solution to titrate 20.00 mL of orange juice. If we had followed the instructions, we would have added less orange juice, and used less NaOH, but what is the benefit in doing so? Budget cuts? If it was about not wasting solution, why would there be a minimal requirement of 10 mL of titrant?
Answer: In a perfect world there would be no optimal range. We do however live in a world full of errors and practical considerations.
Why use over a minimum amount?
Let's assume for demonstration, that each drop has a volume of 0.05 mL. Of course this is not an accurate value as various factors would affect this.
The titration has reached its end point when one last drop changes the colour. If your titre value is very small e.g. 1 mL, then each drop is 5% of your titre.
If only part of the drop is needed to reach the end point then all the extra part of the drop is inaccuracy, which in this scenario could be up to 5%! If you have the misfortune of accidentally adding one drop too many you are introducing an even larger error. This will impact concentration calculations based on the experiment.
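The drop-size argument in numbers, assuming the 0.05 mL drop volume from above:

```python
# Worst-case relative end-point error from one drop, for several titre volumes.
drop = 0.05  # mL, assumed drop volume
errors = {titre: drop / titre for titre in (1, 5, 10, 20)}
for titre, err in errors.items():
    print(f"{titre:5.1f} mL titre -> {err:.2%} worst-case error")
```

Between 10 and 20 mL the one-drop error is already down to a fraction of a percent, which is the point of the recommended range.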
Why not just use 'a lot' then?
There are a couple of things here:
1) As noted by Mithoron, this would be a waste. The accuracy gained from requiring additional titrant will level off rather rapidly.
2) If you use more than one full burette's worth there is an error penalty.
3) Using extra titrant is a waste of your time. Consider a 50 mL burette, if it needs to be filled up after each titration, that would be inefficient. I'm assuming you're doing repeats so this has real merit. | {
"domain": "chemistry.stackexchange",
"id": 4054,
"tags": "solutions, titration"
} |
Is there a Fermi estimations toolbox? | Question: I am a theoretical physics student and am a little ashamed at my inability to estimate any measurable quantity. I would like to develop my skills at Fermi estimations.
Although it is hopeless to start memorising physical constants, I would like to ask if there is a compact list of quantities that are easy to memorise and useful to know in varied contexts.
For example, I learned today that a mole of Boltzmann constants is $N_A k_B \cong 8.3$ in SI units and that, at room temperature, $k_B T \cong 1/40$ $eV$.
If someone knows where I can find a list of such tricks I would very much like to know about it.
Also, If you know one or two of these tricks, I will be happy to learn about them and make the list myself.
Answer: If you consider it hopeless to memorize physical constants, it will probably help to find combinations of these constant that relate to the human scale. For instance:
The gravitational constant $G$ might be difficult to memorize. This holds for both the units as well as the value. However, multiply this constant by the density of water, take the square root, and you end up with a characteristic frequency of about one per hour: $\sqrt{G \rho_w} \approx$ one per hour. If you memorize this fact, and provided you know the density of water in your preferred system of units, you can always work back to the value of $G$.
The quantum of action $\hbar$ and the elementary charge $e$ lead to tiny values when evaluated in any day-to-day system of units. However, the ratio $\hbar / e^2 \approx 4 k \Omega$, a value more easy to remember.
A similar relation holds for $\hbar $ and the electron mass $m_e$ their ratio gives a diffusion constant: $\hbar/m_e \approx$ one square centimeter per second.
You can augment the above with other ratios of fundamental constants that are easy to remember, such as $\sqrt{\hbar c/G} \approx 22 \mu g$ (the Planck mass), as well as with the values of dimensionless constants such as the fine structure constant. | {
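All four combinations are easy to verify with rough CODATA-style values:

```python
import math

# Rough constant values, SI units.
G, rho_w = 6.674e-11, 1.0e3
hbar, e, m_e, c = 1.0546e-34, 1.602e-19, 9.109e-31, 2.998e8

per_hour = math.sqrt(G * rho_w) * 3600         # ~0.93 per hour
kohm = hbar / e**2 / 1e3                       # ~4.1 kOhm
cm2_per_s = hbar / m_e * 1e4                   # ~1.16 cm^2 / s
planck_ug = math.sqrt(hbar * c / G) * 1e9      # ~22 micrograms

print(per_hour, kohm, cm2_per_s, planck_ug)
```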
"domain": "physics.stackexchange",
"id": 16418,
"tags": "resource-recommendations, estimation, fermi-problem"
} |
Is the three-body system "unique"? | Question: Given a state of an ideal 3-body system (i.e., without external interference) at time $t$: the velocity $v_{i,t}$, mass $m_{i,t}$ and position $x_{i,t}$ for $i\in \{1,2,3\}$, using numerical methods it is possible to determine any state at time $\hat{t}$, where $\hat{t}>t$. But is it possible to determine a unique state at time $\bar{t}$ where $\bar{t}<t$?
In other words, is it sufficient to know a single state of the system at any given time to deduce the states of the system for all time?
Equivalently, will two different states of the 3-body system result in the same state (maybe at a different time) in the future?
Answer: What you're really asking about is less to do with Astronomy and more to do with mathematics. You're basically asking if, given a system of differential equations, will a unique solution exist for all time? For an answer, you should check out the Existence and Uniqueness theorems of differential equations. You'd be better to ask questions like this on the Mathematics stack exchange.
However, to discuss the particular, astronomical case you've asked about, the answer is yes, you can run that system both forwards and backwards if you know some initial state. Newtonian mechanics is completely deterministic in that if you know all the equations of motion involved, as well as the entire state of the system at a given time, you can figure out the state of that system at any other time, both in the past and the future.
To speak to your particular 3-body orbiting problem though, I'll say that the system of equations is not solvable in closed form - that is, you can't write down an analytic solution to the equations like you could for the 2-body case. As userLTK states, you can write down approximate solutions in the restricted 3-body problem, where one mass is significantly less than the other two and orbits under specific conditions.
To get a solution at any time $\hat{t}$, you need to use numerical methods. Of course numerical methods are inherently flawed. Numerical errors build up the longer you simulate due to time steps which are not infinitesimal, from errors within the numerical algorithms used, and general floating point errors. In theory, if you had a computer with infinite precision and infinite computing power, you could solve a 3-body (or n-body) system perfectly, but we live in the real world where such things are impossible.
To show, though, that you can figure out the state at any time in the past or future for a three-body system, I've written a basic simulation in Python 3. It can run both forwards and backwards from a given start condition and start time. Essentially it puts three nearly identical masses in contrived starting positions and velocities. Below the code are plots of results.
import numpy as np
from numpy.linalg import norm
from matplotlib.pyplot import *
from time import time
# Define physical constants
G = 6.67408E-11 # Gravitational Constant, m^3 kg^-1 s^-2
# Define body 1 parameters
m1 = 2.2E30 # Mass, kg
x1 = np.array([0,1E11]) # Position, m
v1 = np.array([-3.5E4,0]) # Velocity, m/s
# Define body 2 parameters
m2 = 1.9E30
x2 = np.array([1E11*np.cos(210*np.pi/180),
1E11*np.sin(210*np.pi/180)])
v2 = np.array([3E4*np.cos(300*np.pi/180),
3E4*np.sin(300*np.pi/180)])
# Define body 3 parameters
m3 = 2E30
x3 = np.array([1E11*np.cos(330*np.pi/180),
1E11*np.sin(330*np.pi/180)])
v3 = np.array([3E4*np.cos(60*np.pi/180),
3E4*np.sin(60*np.pi/180)])
# Define simulation parameters
n = 3 # Number of bodies, unitless
t = 0 # Simulation time, s
dt = 1E4 # Simulation time step, s
tEnd = 1E8 # Simulation end time, s
m = np.array((m1,m2,m3)) # All masses
x = np.vstack((x1,x2,x3))# All positions
v = np.vstack((v1,v2,v3))# All velocities
xHist = [[list(x1)],[list(x2)],[list(x3)]]
vHist = [[list(v1)],[list(v2)],[list(v3)]]
# Simulate until end time is reached
start = time()
while True:
# Calculate acceleration
a = []
for i in range(n):
a.append(0)
for j in range(n):
if i == j: continue
a[-1] += - G * m[j] / norm(x[i]-x[j])**3 * (x[i] - x[j])
# Update velocities
for i,vi,ai in zip(range(n),v,a):
vi += ai * dt
vHist[i].append(list(vi))
# Update positions
for i,xi,vi in zip(range(n),x,v):
xi += vi * dt
xHist[i].append(list(xi))
# Update time and end simulation if past tEnd
t += dt
if dt > 0 and t > tEnd: break
if dt < 0 and t < tEnd: break
end = time()
print('Simulation finished in {:.4f} seconds.'.format(end-start))
# Convert xHist and vHist to np arrays
xHist = np.array(xHist)
vHist = np.array(vHist)
# Plot everything up
figure()
for i,c in enumerate(['or','ob','oc']):
# Plot starting positions
plot(xHist[i,0,0], xHist[i,0,1], c)
for i,c in enumerate(['-r','-b','-c']):
# Plot path of star
plot(xHist[i,:,0], xHist[i,:,1], c)
gca().set_aspect(1)
gca().set_xticks([])
gca().set_yticks([])
show(block = False)
Note, the plots show the initial positions of the stars as the points and then trace out their paths over time.
Path of three masses for $t<t_0$
In this scenario, the three masses end up in the contrived scenario. I ran the simulation backwards by setting the timestep dt and the end time tEnd to be negative.
Path of three masses for $t_0<t$
From here, the simulation is run forwards with positive dt and tEnd, starting from the same contrived scenario as above.
Note how chaotic and unstable this system is. The entire system only remains a 3-body system for less than 6 years. Before that the three masses are separate and doing their own thing. They "coincidentally" meet (because I set it up so they should), orbit around each other for a little less than 6 years, and one gets ejected, resulting in the other two continuing to orbit one another.
Contrived Scenario with $m_1=m_2=m_3$, $|v_1|=|v_2|=|v_3|$, and all positions $120^\circ$ from each other
Just for fun, a "stable" 3-body problem with all stars of equal mass and orbits. This is really an unstable equilibrium orbit and any perturbations will screw it up and you'll see what you saw in the above two graphs. In fact, if you run this long enough, the numerical instabilities of my code will result in the orbits breaking down. This numerical instability, as I said above, is inherent in any numerical solution and cannot be overcome, only minimized. I find that, using the numerical method I did, my system is resistant to numerical instabilities for about 7 years. If I want to run this any longer (and continue to be accurate), I need more robust numerical methods. | {
"domain": "astronomy.stackexchange",
"id": 2557,
"tags": "orbit, stellar-dynamics"
} |
Randomized Algorithms Probability | Question: I'm taking a grad level randomized algorithms course in the fall. The professor is known for being very detail oriented and mathematically rigorous, so I will be required to have an in-depth understanding of probability. What would be a good probability book to learn from that would be intuitive, but also have some mathematical rigor to it?
Answer: What textbooks does the course recommend? I like "Probability and Computing" by Mitzenmacher and Upfal and "Randomized Algorithms" by Motwani and Raghavan. They introduce the necessary theory from an algorithms viewpoint. I also recommend a book on inequalities, as bounding things is quite essential to the analysis of randomized algorithms. At the very least this cheat sheet. | {
"domain": "cs.stackexchange",
"id": 1426,
"tags": "probability-theory, randomized-algorithms"
} |
Regarding Goldstein's claim that $\mathbf{F} = \dot{\mathbf{p}}$ | Question: From Goldstein:
... The mechanics of the particle is contained in Newton's second law of motion, which states that there exist frames of reference in which the motion of the particle is described by the differential equation $$\mathbf{F} = \frac{d\mathbf{p}}{dt} \equiv \dot{\mathbf{p}},$$ or $$\mathbf{F} = \frac{d}{dt}\left(m\mathbf{v}\right).$$ In most instances, the mass of the particle is constant and [the last equation] reduces to $$\mathbf{F} = m\frac{d\mathbf{v}}{dt} = m\mathbf{a}\textrm{...}$$
What Goldstein is saying troubles me for it implies that $\mathbf{F} = \dot{\mathbf{p}}$ works for a particle of time-varying mass and that contradicts Ján Lalinský's answer to this question: Second law of Newton for variable mass systems.
What's going on here?
Edit: Some people think that a particle that loses mass that isn't going anywhere--it simply disappears---isn't a useful fiction to solve problems. Consider a ball that is emitting mass isotropically whilst being pushed by some force. In analyzing the motion of this ball there is no need to consider the fact that material is indeed being emitted, all that is needed is the fact that the ball is losing mass. Here we can just use $\mathbf{F} = m\mathbf{a}$ instead of $\mathbf{F} = \dot{\mathbf{p}}$, no?
Answer: I think that there are basically four pieces to the puzzle for why this frequent error (of course, I may be missing something):
it is customary since old times to state traditional Newton's second law in terms of change of momentum, as he originally did (quantity of motion). This caught on even though the traditional formulation only applies to systems of constant mass and has no more generality than stating the law in terms of changes of velocity of the body. Perhaps one advantage of the momentum formulation is that it applies more directly to extended bodies, which have no single velocity but do have a single momentum. But even for such bodies the second law can be stated using changes of velocity: if the body has no single velocity, the velocity of the center of mass can be used.
since in special relativity $\mathbf F = m\mathbf a$ does not hold in any obvious sense, it was necessary to find some valid equivalent and in the decades after Einstein's 1905 publication it was generally accepted that the preferred way to do that is to try to give the traditional expression
$$
\mathbf F = \frac{d\mathbf p}{dt}
$$
a more non-trivial role - by allowing the $m$ in $\mathbf p = m\mathbf v$ to be a variable quantity that is a function of the body's speed. Later in the 20th century and today, this became widely discouraged by particle physicists, for some valid reasons - while the method is internally logically consistent, for many people the explanation of special relativity gets easier and clearer without relying on the concept of relativistic mass.
Most courses on mechanics do not do justice to the analysis and examples of variable mass systems; this area is often glossed over in courses and textbooks for physicists.
Given the above situation in both non-relativistic and relativistic mechanics teaching, it is likely that people then reconstruct the actual logic of Newton's second law in this incorrect way:
because we write it in a way that suggests differentiating mass by time can take place ($F = dp/dt$), this equation probably applies even if mass $m$ changes in time. (WRONG)
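To see concretely why the naive reading fails, here is the standard constant-mass analysis of a rocket, sketched (an added illustration; $\mathbf u$ denotes the exhaust velocity relative to the rocket). Treating the rocket of instantaneous mass $m$ together with the mass it expels during $dt$ as a closed, constant-mass system gives

$$m\frac{d\mathbf v}{dt} = \mathbf F_{\rm ext} + \mathbf u\,\frac{dm}{dt},$$

whereas naively applying $\mathbf F = d\mathbf p/dt$ to the rocket alone would give

$$\mathbf F_{\rm ext} = \frac{d(m\mathbf v)}{dt} = m\frac{d\mathbf v}{dt} + \mathbf v\,\frac{dm}{dt},$$

which contains the frame-dependent velocity $\mathbf v$ of the rocket itself and so cannot be a valid law of motion; the two expressions agree only when $dm/dt = 0$.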
Anybody who ever derived the rocket equation of motion knows that variable mass systems need careful analysis in terms of interaction of constant mass systems, and $\mathbf F = \frac{d\mathbf p }{dt}$ is no more general than $\mathbf F = m\mathbf a$ - it applies only to constant mass systems. | {
"domain": "physics.stackexchange",
"id": 56731,
"tags": "newtonian-mechanics, forces, acceleration"
} |
Euler's totient function for large numbers | Question: I made this algorithm to compute Euler's totient function for large numbers. A sieve is used.
#include <iostream>
#include <cstdint>
#include <vector>
typedef uint64_t integer;
integer euler_totient(integer n) {
integer L1_CACHE = 32768;
integer phi_n=0;
if (n>0) {
phi_n++;
integer segment_size=std::min(L1_CACHE,n);
std::vector<char> SIEVE(segment_size, true);
std::vector<integer> PRIME;
integer len_PRIME=0;
for (integer p=2; p<segment_size; p++)
if (SIEVE[p]==true)
if (n%p==0) {
for (integer m=p; m<segment_size; m+=p)
SIEVE[m]=false;
PRIME.push_back(p);
len_PRIME++;
}
else
phi_n++;
if (n>segment_size) {
integer m,p;
for (integer segment_low=segment_size; segment_low<n; segment_low+=segment_size) {
std::fill(SIEVE.begin(), SIEVE.end(), true);
for (integer i=0; i<len_PRIME; i++) {
m=(PRIME[i]-segment_low%PRIME[i])%PRIME[i];
for(;m<segment_size;m+=PRIME[i])
SIEVE[m]=false;
}
for (integer i=0; i<segment_size && segment_low+i<n; i++)
if (SIEVE[i]==true){
p=segment_low+i;
if (n%p==0) {
for (m=i; m<segment_size; m+=p)
SIEVE[m]=false;
PRIME.push_back(p);
len_PRIME++;
}
else
phi_n++;
}
}
}
}
return phi_n;
}
int main() {
std::cout << euler_totient(1000000) << std::endl;
return 0;
}
Is it a good solution?
Can it be improved in any way?
Answer:
Is it a good solution?
Unfortunately I'll have to go with "not really".
This approach can be summarized as taking the definition of the totient of n as "the number of positive integers up to n that are relatively prime to n" literally, turning it into an algorithm.
This approach does not take advantage of any factors that are found. What I mean by that is, for example, if n = 2^k, then an algorithm based on factorization will find the totient almost immediately, while this algorithm will still have to iterate up to n. Or, imagine you discover that n is divisible by 1009 (and only once, meaning that n is not divisible by 1009² - you can easily deal with prime powers, but I didn't want to do it for the example), then you would know that totient(n) = 1008 * totient(n / 1009). If you were going to iterate up to n (which is not necessary) then this would effectively cut the remaining amount of iteration by a factor of 1009.
Also not finding factors is useful: when n is a prime, that is a fact that can be discovered more quickly than iterating all the way up to n (doing it naively, discovering that n is prime would happen after sqrt(n) steps, significantly better than n), and then the totient is just n - 1.
The simplest factorization-based approach, which just uses trial division, nothing fancy, would only need to count up to sqrt(n), and only in the worst case, when n is prime. Otherwise, factors are found along the way and every time one of them is found, the bound up to which the trial division needs to go is significantly reduced.
To show how big the difference could be, let's take the totient of 2364968846596223957 (I got this by drawing a random 64-bit number). Using a simple trial-division based computation that I quickly worked out, nothing special, my PC took 80 milliseconds to compute the result 2360645320368442380 (which is correct, verified with WolframAlpha). Counting up to that number would take years.
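For comparison, a minimal trial-division totient along these lines fits in a few lines of Python (a sketch added here, not the reviewer's code); it divides out each prime factor as soon as it is found, which is what shrinks the search bound:

```python
def totient(n):
    """Euler's totient via trial division: phi(n) = n * prod(1 - 1/p) over primes p | n."""
    result = n
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:  # strip this prime factor completely
                n //= p
            result -= result // p  # multiply result by (1 - 1/p)
        p += 1
    if n > 1:  # a prime factor larger than sqrt(original n) remains
        result -= result // n
    return result

print(totient(1000000))  # → 400000
```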
In terms of coding style, I have a couple of remarks as well. There is very little white space, such as around operators and also sometimes between the different "parts" in a for statement. Styles differ, but I don't find this nice to read. It's very visually dense. Also, there is a repeated use of if or for with non-trivial contents, yet without braces. That is commonly recommended against in style guides (sometimes recommendations even go so far as to always demand braces, even if the contents are trivial) and personally I also recommend against it. I also find typedef uint64_t integer questionable, what do you gain from this? | {
"domain": "codereview.stackexchange",
"id": 44070,
"tags": "c++, algorithm, primes"
} |
Moment of a bent rod | Question: In my mechanics textbook there is an example where they take the moment of the external forces in a system. The example states:
Two uniform rods $OX$ and $XY$, pin jointed together at $X$, hang from a fixed hinge at $O$. The rods have length $a$ and $b$, and weight $ka$ and $kb$ respectively. The lower end $Y$ is now pulled aside with horizontal force $F$. Find the angles $\alpha$ and $\beta$ which the rods make with the vertical in equilibrium.
They then go on to take the moment for the rod $XY$ giving:
$F(b\cos{\beta}) = kb(\frac{1}{2}b\sin{\beta}) $ , and I understand where this comes from. However, they then go on to take the moments of the whole system about $O$ to give:
$$F(a\cos{\alpha}+b\cos{\beta}) = ka(\frac{1}{2}a\sin{\alpha})+ kb(a\sin{\alpha} +\frac{1}{2}b\sin{\beta}) $$
I understand where the $ka(\frac{1}{2}a\sin{\alpha})$ came from but not how to get the other two parts. For the left side, it looks to me like they have taken the moments caused by $F$ separately for each rod then added them together, and that they have done the same for the right side. I haven't come across this before, so why are they able to do this? Is there some general rule that covers this?
Here is a diagram they have given to help illustrate the problem
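For reference, the two moment equations above can be solved in closed form: the first gives $\tan\beta = 2F/(kb)$, and subtracting it from the second gives $\tan\alpha = 2F/(k(a+2b))$. A numerical check (an added sketch; the values of $k$, $a$, $b$, $F$ are made up):

```python
import math

# Illustrative values (assumptions, not from the problem statement)
k, a, b, F = 1.0, 2.0, 1.0, 0.6

beta = math.atan(2 * F / (k * b))
alpha = math.atan(2 * F / (k * (a + 2 * b)))

# Check both moment equations from the question
lhs1 = F * b * math.cos(beta)
rhs1 = k * b * (b / 2) * math.sin(beta)
lhs2 = F * (a * math.cos(alpha) + b * math.cos(beta))
rhs2 = k * a * (a / 2) * math.sin(alpha) + k * b * (a * math.sin(alpha) + (b / 2) * math.sin(beta))
print(abs(lhs1 - rhs1) < 1e-9, abs(lhs2 - rhs2) < 1e-9)
```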
Answer: Since the system is in equilibrium, the net moment acting about $O$ must be $0$. Now, we see that $F$ has no vertical component, hence the moment produced by $F$ about $O$ is the magnitude of $F$ multiplied by the vertical distance of the point of application of $F$ from $O$ (keep in mind that the torque vector is $\tau=r \times F$, so only the force component perpendicular to the position vector contributes). The vertical distance can be found from some trigonometry. The same idea applies to each weight: as it has only a vertical component, its moment is the weight multiplied by the horizontal distance between its point of application and $O$, which is again found through some trigonometry. | {
"domain": "physics.stackexchange",
"id": 43380,
"tags": "classical-mechanics, forces, torque, moment"
} |
Fermionic commutation relation using Jordan-Wigner transformation | Question: How can one show in detailed steps that Fermionic annihilation and creation operators, under Jordan-Wigner transformation, satisfy the Fermionic commutation relations?
The Fermionic commutation relations are:
$$\{\hat{a}_i,\hat{a}_j\}= \{\hat{a}_i^\dagger,\hat{a}_j^\dagger\} =0 , \{\hat{a}_i,\hat{a}_j^\dagger\} = \delta_{ij}.$$
Answer: Based on my answer to this: Fermionic occupation operator and nearest neighbor Fermionic hopping interaction as a qubit operator, you can see that we have:
\begin{align}
\hat{a}_i &= \frac{1}{2} Z^{\otimes (i-1)} (X - iY),\\
\hat{a}_i^\dagger &=\frac{1}{2} Z^{\otimes (i-1)} (X + iY).\\
\end{align}
If $i=j$ we have:
\begin{align}
\{\hat{a}_i,\hat{a}_i^\dagger\} &\propto \frac{1}{4} ((X - iY)(X + iY) + (X + iY)(X - iY)),\\ &= \frac{1}{4} (X^2 + iXY - iYX + Y^2 + X^2 - iXY + iYX + Y^2) \\
&=\frac{1}{4}(2X^2 + 2Y^2)\\
& =\frac{1}{4}(4I) \\
& = I.
\end{align}
All $Z$ operators are replaced by $I$ operators since $Z \times Z = I$, and this is also what was done in the penultimate step, where $X^2 = I$ and $Y^2 = I$ were used.
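For a single site (so the $Z$ string is empty), this can be checked numerically with explicit Pauli matrices (a quick sketch added here, not part of the original answer):

```python
import numpy as np

# Pauli matrices on one site
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

# Single-site Jordan-Wigner operators
a = 0.5 * (X - 1j * Y)      # annihilation
a_dag = 0.5 * (X + 1j * Y)  # creation

anti = a @ a_dag + a_dag @ a  # {a, a^dagger}
print(np.allclose(anti, np.eye(2)))  # → True
print(np.allclose(a @ a, 0))         # {a, a} = 2 a a = 0 → True
```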
For the other anti-commutators we have:
\begin{align}
\{\hat{a}_i,\hat{a}_j\} &= \frac{1}{4} ((X - iY)(X - iY) + (X - iY)(X - iY)),\\ \{\hat{a}^\dagger_i,\hat{a}^\dagger_j\} &= \frac{1}{4} ((X + iY)(X + iY) + (X + iY)(X + iY)).
\end{align}
You can then do the same type of arithmetic that I went through in detail for the first anti-commutator. | {
"domain": "quantumcomputing.stackexchange",
"id": 2538,
"tags": "hamiltonian-simulation, chemistry, solid-state"
} |
catkin command for rosbuild_invoke_rospack(${PROJECT_NAME} ${_prefix} DIRS depends-manifests) | Question:
I'm currently working to update our package from rosbuild to catkin and would like to know how to get the list of packages that depend on the current target package.
In the rosbuild system, we used the following command, and we're looking for the equivalent function/procedure in the catkin system.
rosbuild_invoke_rospack(${PROJECT_NAME} ${_prefix} DIRS depends-manifests)
Originally posted by Kei Okada on ROS Answers with karma: 1186 on 2013-08-01
Post score: 1
Answer:
The catkin migration guide (http://ros.org/wiki/catkin/migrating_from_rosbuild#line-95) says "do not do this". Conceptually a package should not query its downstream dependencies.
Could you describe why you have been doing this before? Perhaps there is another way to get to the same goal.
Originally posted by Dirk Thomas with karma: 16276 on 2013-08-01
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Kei Okada on 2013-08-01:
thanks, we have defined idl message compilation helper cmake functions that search all downstream dependent directories containing an ./idl folder and compile them
https://code.google.com/p/rtm-ros-robotics/source/browse/trunk/rtmros_common/rtmbuild/#rtmbuild%2Fcmake
Comment by Dirk Thomas on 2013-08-01:
Without further information I would just guess that this should be handled differently.
Message generation is very similar to this: genmsg/genpy/gencpp provide cmake functions and additional scripts to perform the generation. But each downstream package explicitly invokes the generation for its messages. That also makes it more in line with the goal that the result should be bundled within each package (and not with the generator).
Comment by Kei Okada on 2013-08-01:
Ok, but what about the case in [1]: how do we invoke message generation for each downstream package, for a custom client library which is not available as an ubuntu system package?
[1] http://ros-users.122217.n3.nabble.com/Shared-installation-wiki-page-td1313245.html
Comment by Dirk Thomas on 2013-08-01:
The clean way is to write a generator and build all message packages from source to also generate messages with your custom generator. Otherwise I don't see a good way to perform clean dependencies and checking for rebuilds. | {
"domain": "robotics.stackexchange",
"id": 15131,
"tags": "catkin"
} |
About gravitational wave polarization in the detectors output | Question: Gravitational wave detection with Michelson interferometers gives the gravitational-wave strain (the amplitude). In the TT gauge we know there are two polarizations. Now, some literature says the detector response is a superposition (a linear combination) of the two polarization signals, while other literature (see Kip Thorne and Blandford) says the detected signal is just the plus polarization, with the photodetector response directly proportional to it. Can someone resolve this ambiguity?
Answer: The detector response $d(t)$ to a gravitational wave $h_{\mu\nu}$ is a combination of the antenna pattern of the detector $F_{+, \times}(\theta, \phi, t)$, the direction $\{\theta, \phi\}$ and arrival time $t$ of the gravitational wave, and the polarization. Decomposing the TT gauge perturbation into $+$ and $\times$ polarizations as $h_{ij} = h_+ e^+_{ij} + h_\times e^\times_{ij}$, where $e^{+,\times}_{ij}$ are polarization tensors, the detector response is given by
\begin{equation}
d(t) = h_+(t) F_+(\theta, \phi, t) + h_\times(t)F_\times (\theta, \phi, t)
\end{equation}
There are many places this is covered, such as vol 1 of Maggiore's book. An example free reference is Eq 1 of https://arxiv.org/abs/1102.5421.
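As a concrete illustration, here are the standard antenna-pattern expressions for an L-shaped interferometer (a sketch added here; $\psi$ is the polarization angle, and the closed forms follow the usual conventions, e.g. Maggiore's):

```python
import numpy as np

def antenna_patterns(theta, phi, psi):
    """Quadrupole antenna patterns F+, Fx for an L-shaped interferometer."""
    a = 0.5 * (1 + np.cos(theta)**2) * np.cos(2 * phi)
    b = np.cos(theta) * np.sin(2 * phi)
    F_plus = a * np.cos(2 * psi) - b * np.sin(2 * psi)
    F_cross = a * np.sin(2 * psi) + b * np.cos(2 * psi)
    return F_plus, F_cross

# Wave arriving from directly overhead, polarization aligned with the arms:
Fp, Fc = antenna_patterns(0.0, 0.0, 0.0)
print(Fp, Fc)  # → 1.0 0.0: only the plus polarization is seen
```

This reproduces the special case discussed next: a wave arriving orthogonal to the detector plane, with "$+$" aligned to the arms, has $F_+=1$ and $F_\times=0$.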
I assume the statement from Thorne and Blandford that you are mentioning is only intended to hold for a particular relative orientation of the gravitational wave and the detector. It sounds like they are probably considering a plane gravitational wave traveling in the direction orthogonal to the plane of the detector; then the response functions simplify so that $F_+=1$ and $F_\times=0$ (assuming that "$+$" has been defined to align with the arms of the interferometer). | {
"domain": "physics.stackexchange",
"id": 92109,
"tags": "gravitational-waves"
} |
First time jQuery on xkcd | Question: I'm a first time user of jQuery (and a huge fan already). I am trying to make the comic picture of an xkcd page onclick to the explanation page.
Here's what I've done.
$("#comic").click(function(){
return window.open("http://www.explainxkcd.com/wiki/index.php/" +
document.URL.split("/")[3]);
});
Is this efficient? Is this robust? Any jQuerying tips?
Answer: Besides not needing to return the result of window.open, it looks pretty good. See this thread for an explanation of what returning a value from event does. From this answer,
return false from within a jQuery event handler is effectively the
same as calling both e.preventDefault and e.stopPropagation on
the passed jQuery.Event object.
e.preventDefault() will prevent the default event from occuring,
e.stopPropagation() will prevent the event from bubbling up and
return false will do both. Note that this behaviour differs from
normal (non-jQuery) event handlers, in which, notably, return false does not stop the event from bubbling up.
Clearly, thats not what we want, so instead of returning the result of window.open, we should just omit the return.
This may not be necessary, but I would be more comfortable using
var comicID = location.pathname.split("/")[1];
over
var comicID = document.URL.split("/")[3]);
mainly because http:// may be omitted from the URL and the code will still work (shouldn't ever be a problem either way). It's also more intuitive. I would also leave a comment for the code such as
// where path name looks like /156/ for xkcd.com/156/
Indentation would also help the code be more readable, however it's short so it doesn't matter too much... That said, I would write your code:
// when a user clicks the comic open the explanation
$("#comic").click(function(){
// where pathname is the comicID eg /156/ for xkcd.com/156
window.open("http://explainxkcd.com/wiki/index.php/" + location.pathname.split("/")[1]);
}); | {
"domain": "codereview.stackexchange",
"id": 7630,
"tags": "javascript, jquery"
} |
pass agruments/parameters to included launch file in yaml | Question:
How would you pass arguments / parameters to an included launch file? The following doesn't work and gives the following error: Unexpected key(s) found in 'include': {'param'}
launch:
- include:
file: "$(find-pkg-share test)/launch/test.launch.py"
param:
-
name: "color"
value: "red"
Originally posted by waspinator on ROS Answers with karma: 122 on 2022-09-13
Post score: 0
Answer:
use "arg" instead of "param" to pass arguments into an included launch file. Names and values must all be strings in quotes.
launch:
- include:
file: "$(find-pkg-share test)/launch/test.launch.py"
arg:
-
name: "color"
value: "red"
Originally posted by waspinator with karma: 122 on 2022-09-19
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 37973,
"tags": "ros, ros2, roslaunch, yaml"
} |
building a 2-layer LSTM for time series prediction using tensorflow | Question: From the Tensorflow tutorials, I am experimenting with time series using an LSTM.
In the section 'multi-step prediction' using LSTM tutorial says
Since the task here is a bit more complicated than the previous task, the model now consists of two LSTM layers. Finally, since 72 predictions are made, the dense layer outputs 72 predictions.
where previous task was prediction over a single point.
How do we know how many layers a problem requires (here, 2)?
Then, from an implementation point of view, using the Python Tensorflow library,
multi_step_model = tf.keras.models.Sequential()
multi_step_model.add(tf.keras.layers.LSTM(32,
return_sequences=True,
input_shape=x_train_multi.shape[-2:]))
multi_step_model.add(tf.keras.layers.LSTM(16, activation='relu'))
multi_step_model.add(tf.keras.layers.Dense(72))
why is there a need for adding a Dense(72) layer? What is the Dense() function doing? (reading the docs doesn't really help)
Answer: First question: How many layers?
This is an architectural question and one of the most important when constructing a NN. Generally, the more complex the task, the more layers you should use to approximate it (up to a certain point, after which it becomes overkill - the motivation for ResNet).
If you are looking for some guidelines there are some good posts, but the research and general trend nowadays is that we are over-doing it in the first place, and we can achieve some good results with some smart tricks without making it too deep.
TL;DR it depends on the problem, but not as deep as we think for 98% of problems.
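On the Dense question: stripped of Keras, Dense(72) applied to the 16-unit LSTM output is just an affine map, a matrix-vector product plus a bias (an illustrative sketch; the random numbers stand in for trained weights):

```python
import numpy as np

rng = np.random.default_rng(0)

h = rng.standard_normal(16)        # final 16-unit LSTM output for one window
W = rng.standard_normal((72, 16))  # Dense(72) trainable weight matrix
b = rng.standard_normal(72)        # Dense(72) trainable bias

predictions = W @ h + b            # what Dense(72) computes (no activation given)
print(predictions.shape)           # → (72,)
```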
Second (third?) question: why is there a need for adding a Dense(72) layer? What is the Dense() function doing? Well, as you quoted, "since 72 predictions are made, the dense layer outputs 72 predictions". As for what the dense layer is doing: in short, it produces an (output) vector. At length, a dense layer represents a matrix-vector multiplication. The values in the matrix are the trainable parameters (weights), which get updated during backpropagation. If you have seen the mathematical representation of a NN with matrices (which they all are - that's how you utilise the power of GPUs), that's exactly what this dense layer represents. | {
"domain": "datascience.stackexchange",
"id": 6632,
"tags": "python, neural-network, tensorflow, time-series, lstm"
} |
Trends in solubility of group 2 nitrates | Question: In my lab report, we are required to explain the trends in solubility of group 2 salts, going down the group. I had explained all of the trends except one, group 2 nitrates. The following is the data provided.
$\ce{Mg(NO3)2}$ – $\pu{0.49 mol}$ per $\pu{100 g}$ of water
$\ce{Ca(NO3)2}$ – $\pu{0.62 mol}$ per $\pu{100 g}$ of water
$\ce{Sr(NO3)2}$ – $\pu{0.16 mol}$ per $\pu{100 g}$ of water
$\ce{Ba(NO3)2}$ – $\pu{ 0.04 mol}$ per $\pu{100 g}$ of water
So as the data shows, the solubility first increases and then decreases. My speculation is that the general trend should be a decrease, but either $\ce{Ca(NO3)2}$ has some special properties which make it more soluble, or $\ce{Mg(NO3)2}$ has some special properties which make it less soluble. I searched the internet and all I got is "All group 2 nitrates are soluble" with no explanation regarding the trend.
Please correct my speculations if they're wrong and also provide an explanation regarding this trend.
Answer: Solubility is one of those scientific problems that nobody is able to explain thoroughly. As you are speaking of calcium compounds, look at the series of calcium compounds made with halogens. $\ce{CaCl2, CaBr2, CaI2}$ are so soluble in water that they can be dissolved in less than their weight of water. $\ce{CaF2}$ should be similar. It is not. On the contrary, $\ce{CaF2}$ is one of the most insoluble compounds, because it forms the mineral called fluorite, which is one of the main sources of fluorine atoms on Earth. If it had been even partly soluble, it would have been washed away by the rains a long time ago. Nobody is able to explain such a difference of solubility.
There are plenty of theories explaining the solubilities of some groups of substances using electronegativities, ionic or covalent radius, and other parameters, which work pretty well with a lot of substances. But there are always exceptions. The final explanation remains to be discovered. | {
"domain": "chemistry.stackexchange",
"id": 13234,
"tags": "inorganic-chemistry, solubility, alkaline-earth-metals"
} |
Coefficient of restitution for a perfectly inelastic collision | Question: The coefficient of restitution is defined as the ratio of the differences in velocities of colliding objects after and before the collision: $$k_{COR}=\frac{v_{1,after}-v_{2,after}}{v_{1,before}-v_{2,before}}.$$ There also exists a second definition, where $$k_{COR}=\sqrt \frac{E_{k,after}}{E_{k,before}}.$$
As such, in a perfectly inelastic collision, where the colliders stick (have equal velocity), $$k_{COR}=\frac{v_{after}-v_{after}}{v_{1,before}-v_{2,before}}=0.$$ However, according to the second definition, this means that all kinetic energy is lost ($E_{k,after}=0$). This cannot be true, as were there no kinetic energy, there could be no motion at all.
Both definitions are from Wikipedia. Is the page wrong, or there exists an explanation for this?
Answer: The article specifies the equation dealing with kinetic energy is looking at the relative kinetic energy. For a perfectly inelastic collision, the bodies are not moving relative to each other, so the relative kinetic energy is $0$. Thus there is no contradiction.
To add more detail to this, the best thing to do is to work in the center of momentum frame, which is the frame where the total momentum of the system is $0$. This can be done by first noting that, by definition, the center of mass of two objects (which we treat as point particles) is
$$x_\text{COM}=\frac{m_1x_1+m_2x_2}{m_1+m_2}$$
which means the velocity of the center of mass is
$$v_\text{COM}=\frac{m_1v_1+m_2v_2}{m_1+m_2}$$
where $v_1$ and $v_2$ are the velocities observed in some inertial frame of reference.
Therefore, to move to the center of momentum frame, all we need to do is change our velocities to $v_1\to v_1-v_\text{COM}$ and $v_2\to v_2-v_\text{COM}$. You can easily show that in this frame, $p_\text{total}=0$, i.e.
$$m_1(v_1-v_\text{COM})+m_2(v_2-v_\text{COM})=0$$
The kinetic energy in this center of momentum frame is the "relative kinetic energy".
$$K_r=\frac12m_1(v_1-v_\text{COM})^2+\frac12m_2(v_2-v_\text{COM})^2=\frac12\cdot\frac{m_1m_2}{m_1+m_2}\cdot(v_1-v_2)^2$$
As you can see, this kinetic energy involves the relative velocity between the two objects, as well as the reduced mass $\mu=m_1m_2/(m_1+m_2)$. You can then easily show from here that for a collision between two objects
$$k_\text{COR}=\frac{v_{1,\text{after}}-v_{2,\text{after}}}{v_{1,\text{before}}-v_{2,\text{before}}}=\sqrt{\frac{(v_{1,\text{after}}-v_{2,\text{after}})^2}{(v_{1,\text{before}}-v_{2,\text{before}})^2}}=\sqrt{\frac{K_{r,\text{after}}}{K_{r,\text{before}}}}$$ | {
"domain": "physics.stackexchange",
"id": 69225,
"tags": "newtonian-mechanics, kinematics, momentum, conservation-laws, collision"
} |
Electric field on test charge due to dipole | Question: In worked example 4.1 of Intermolecular and Surface Forces by Jacob Israelachvili, he is calculating the electric field on a test charge due to the dipole shown in the picture.
He assumes $r\gg l$ and:
$$AB\approx r-\frac{1}{2} l\cos{\theta}$$
$$AC\approx r+\frac{1}{2} l\cos{\theta}$$
Using this, he writes that the magnitude of the electric field on A due to the negative charge at B is:
$$E_{-}=q / 4 \pi \varepsilon_{0} \cdot A B^{2} \approx\left(q / 4 \pi \varepsilon_{0} r^{2}\right)\left(1+\frac{l}{r} \cos \theta\right)$$
But I don't see where that comes from. Why is the squared distance in the numerator, and what approximation is used on the right-hand side?
Answer: This is the binomial approximation, where you can approximate $$(1 + x)^\alpha \approx 1 + \alpha x, \quad \text{(if $|\alpha x|\ll 1$)}.$$
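A quick numerical check of this for $\alpha = -2$ (an added sketch): the error shrinks quadratically as $x$ gets smaller.

```python
alpha = -2.0
for x in (1e-1, 1e-2, 1e-3):
    exact = (1 + x) ** alpha   # the true value
    approx = 1 + alpha * x     # the binomial approximation
    print(x, exact, approx, abs(exact - approx))
```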
The squared distance is in the denominator, as you'd expect: $$E_- = \frac{q}{4\pi\epsilon_0 \cdot AB^2}.$$
If you substitute for $AB$, you can easily show that it's just $$E_- = \frac{q}{4\pi\epsilon_0 r^2} \frac{1}{(1 - \frac{l}{2r}\cos\theta)^2}.$$
We can now use the binomial approximation (with $\alpha = -2$ and $x = -l\cos\theta/2r$) on $$ \frac{1}{(1 - \frac{l}{2r}\cos\theta)^2} \approx \left(1 + (-2)\times \left(-\frac{l}{2r}\right)\cos\theta \right),$$
so that
$$\frac{q}{4\pi\epsilon_0 \cdot AB^2} \approx \frac{q}{4\pi\epsilon_0 r^2} \left(1 + \frac{l}{r}\cos\theta \right).$$ | {
"domain": "physics.stackexchange",
"id": 75379,
"tags": "electromagnetism, forces, magnetic-fields, electric-fields, charge"
} |
What additional velocity must be imparted to an orbiting satellite so that it leaves the earth's gravitational pull? | Question: Consider this problem from my physics workbook:
A spaceship is launched into a circular orbit close to the earth's surface. What additional velocity must now be imparted to the spaceship to overcome the gravitational pull of the earth?
Attempt:
I tried to use the concept of binding energy of closed systems. Here is how my textbook introduces it:
The total mechanical energy (potential + kinetic) of a closed system is negative. The modulus of this total mechanical energy is the binding energy of the system... It is due to this energy that a particle remains attached within a system. If minimum this much energy is given to a particle in any form, the particle no longer remains attached within the system.
I know that the total mechanical energy of a satellite orbiting close to the earth's surface is $\frac{GMm}{2R}$, where $M$ and $R$ are the mass and radius of the earth respectively. Since minimum this much kinetic energy is to be provided to the satellite,
$$\frac{1}{2}mv^2=\frac{GMm}{2R} \rightarrow v = \sqrt{\frac{GM}{R}} = \sqrt{gR}$$
However, according to the key, the correct answer is $(\sqrt{2}-1)\sqrt{gR}$. The solution is brief and I am unable to prove it using energy considerations:
The speed of a satellite in a circular orbit close the earth's surface is $v_o = \sqrt{gR}$ and escape velocity is given by $v_e=\sqrt{2gR}$. Therefore, the additional velocity to escape is $v_e-v_o=(\sqrt{2}-1)\sqrt{gR}$.
Could someone please explain to me why my answer is incorrect, and help me prove why the above the solution is true?
Answer: The orbital velocity (for an object in a circular orbit at $r=R$, whose kinetic energy equals $\frac{GMm}{2R}$) can be calculated if we set
$$\frac{1}{2} mv^2 - \frac{GMm}{2R} = 0$$
so that as you point out the velocity is
$$\tag 1 v= \sqrt{\frac{GM}{R}}$$
and also
$$g = \frac{GM}{R^2}$$
meaning
$$GM = gR^2$$
or
$$gR = \frac{GM}{R}$$
and from equation (1) this means
$$v = \sqrt{gR}$$
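Plugging in rough Earth values gives a feel for the numbers (an added sketch; the escape speed $\sqrt{2gR}$ quoted in the question is used directly, and the values of $g$ and $R$ are illustrative):

```python
import math

g = 9.81      # m/s^2, illustrative surface gravity
R = 6.371e6   # m, illustrative Earth radius

v_orbit = math.sqrt(g * R)       # low circular orbit speed, ~7.9 km/s
v_escape = math.sqrt(2 * g * R)  # escape speed, ~11.2 km/s
dv = v_escape - v_orbit          # additional speed required

print(v_orbit, v_escape, dv)
```

So the extra kick is $(\sqrt{2}-1)\approx 0.414$ times the orbital speed, about 3.3 km/s.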
Since escape velocity is given by
$$v_e = \sqrt{2Rg}$$
then the additional required velocity is
$$V = \sqrt{2Rg} - \sqrt{Rg} = (\sqrt{2} - 1)\sqrt{Rg}$$
To see why the energy method in the question fails: the boost $\Delta v$ is applied along the direction of motion, so speeds add while kinetic energies do not. Requiring the kinetic energy after the boost to fill the potential well $\frac{GMm}{R}$ gives
$$\frac{1}{2}m(v_o + \Delta v)^2 = \frac{GMm}{R} \implies v_o + \Delta v = \sqrt{2gR} \implies \Delta v = (\sqrt{2}-1)\sqrt{gR}$$
whereas setting $\frac{1}{2}m(\Delta v)^2$ equal to the binding energy ignores the kinetic energy the satellite already has. | {
"domain": "physics.stackexchange",
"id": 75140,
"tags": "homework-and-exercises, newtonian-mechanics, gravity, orbital-motion, satellites"
} |
Fetch a Quranic Verse from the Web | Question: Verse is a command-line program that allows you to retrieve specific verses
from the Quran. It takes a chapter and verse number as input and provides you
with the corresponding Quranic verse.
Dependencies:
libcurl
libjansson
strtoi.h:
#ifndef STRTOI_H
#define STRTOI_H
typedef enum {
STRTOI_SUCCESS,
STRTOI_OVERFLOW,
STRTOI_UNDERFLOW,
STRTOI_INCONVERTIBLE
} strtoi_errno;
/**
* @brief strtoi() shall convert string nptr to int out.
*
* @param nptr - Input string to be converted.
* @param out - The converted int.
* @param base - Base to interpret string in. Same range as strtol (2 to 36).
*
* The format is the same as strtol, except that the following are inconvertible:
*
* - empty string
* - leading whitespace
* - any trailing characters that are not part of the number
*
* @return Indicates if the operation succeeded, or why it failed.
*/
strtoi_errno strtoi(int *restrict out, const char *restrict nptr, int base);
#endif /* STRTOI_H */
strtoi.c:
#include "strtoi.h"
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <limits.h>
#include <ctype.h>
strtoi_errno strtoi(int *restrict out, const char *restrict nptr, int base)
{
/*
* Null string, empty string, leading whitespace?
*/
if (!nptr || !nptr[0] || isspace(nptr[0])) {
return STRTOI_INCONVERTIBLE;
}
char *end_ptr;
const int errno_original = errno; /* We shall restore errno to its original value before returning. */
const long int i = strtol(nptr, &end_ptr, base);
errno = 0;
/*
* Both checks are needed because INT_MAX == LONG_MAX is possible.
*/
if (i > INT_MAX || (errno == ERANGE && i == LONG_MAX)) {
return STRTOI_OVERFLOW;
} else if (i < INT_MIN || (errno == ERANGE && i == LONG_MIN)) {
return STRTOI_UNDERFLOW;
} else if (*end_ptr || nptr == end_ptr) {
return STRTOI_INCONVERTIBLE;
}
*out = (int) i;
errno = errno_original;
return STRTOI_SUCCESS;
}
errors.h:
#ifndef ERRORS_H
#define ERRORS_H
#include <stddef.h>
#define ARRAY_CARDINALITY(x) (sizeof (x) / sizeof ((x)[0]))
/*
* Error codes for invalid arguments.
*/
enum error_codes {
E_SUCCESS = 0,
E_NULL_ARGV,
E_INSUFFICIENT_ARGS,
E_INVALID_CHAPTER,
E_INVALID_VERSE,
E_INVALID_RANGE,
E_PARSE_ERROR,
E_ENOMEM,
E_UNKNOWN,
E_CURL_INIT_FAILED,
E_CURL_PERFORM_FAILED
};
/**
* @brief get_err_msg() shall retrieve the error message corresponding to the given error code.
*
* @param err_code - An integer representing the error code.
*
* @return A pointer to a constant string containing the error message.
* If the error code is not recognized, a default "Unknown error code.\n"
* message is returned.
*/
const char *get_err_msg(int err_code);
#endif /* ERRORS_H */
errors.c:
#include "errors.h"
#include <assert.h>
/*
* Array of strings to map enum error types to printable string.
*/
static const char *const errors[] = {
/* *INDENT-OFF* */
[E_NULL_ARGV] =
"Error: A NULL argv[0] was passed through an exec system call.\n",
[E_INSUFFICIENT_ARGS] =
"Usage: verse <chapter> <verse>\n",
[E_INVALID_CHAPTER] =
"Error: Invalid chapter number.\n",
[E_INVALID_VERSE] =
"Error: Invalid verse number for the given chapter.\n",
[E_INVALID_RANGE] =
"Error: Chapter or verse out of valid numeric range.\n",
[E_PARSE_ERROR] =
"Error: Non-numeric input for chapter or verse.\n",
[E_ENOMEM] =
"Error: Insufficient memory.\n",
[E_UNKNOWN] =
"Fatal: An unknown error has arisen.\n",
[E_CURL_INIT_FAILED] =
"Error: curl_easy_init() failed.\n",
[E_CURL_PERFORM_FAILED] =
"Error: curl_easy_perform() failed.\n"
/* *INDENT-ON* */
};
const char *get_err_msg(int err_code)
{
static_assert(ARRAY_CARDINALITY(errors) - 1 == E_CURL_PERFORM_FAILED,
"The errors array and the enum must be kept in-sync!");
if (err_code >= 0 && err_code < (int) ARRAY_CARDINALITY(errors)) {
return errors[err_code];
}
return "Unknown error code.\n";
}
web_util.h:
#ifndef WEB_UTIL_H
#define WEB_UTIL_H
#include <stddef.h>
#include <curl/curl.h>
/*
* A struct to hold the contents of the downloaded web page.
*/
struct mem_chunk {
char *ptr;
size_t len;
};
/**
* @brief Parses a JSON response to extract a specific verse.
*
* @param json_response - A JSON response string containing verse data.
* @param out - A pointer to a string where the parsed verse will be stored.
*
* @return An error code indicating the success or failure of the parsing process.
* If successful, the parsed verse will be stored in the 'out' parameter.
*/
int parse_response_json(const char *restrict json_response,
char **restrict out);
/**
* @brief A callback function for curl_easy_perform(). Stores the downloaded
* web content to the mem_chunk struct as it arrives.
* @param Refer to https://curl.se/libcurl/c/CURLOPT_WRITEFUNCTION.html for a
* detailed explanation about the parameters.
* @return The number of bytes read, or 0 on a memory allocation failure.
*/
size_t write_memory_callback(void *content, size_t size,
size_t nmemb, struct mem_chunk *chunk);
/**
* @brief download_webpage() shall download the contents of a web page specified
* by the URL using libcurl.
*
* @param chunk - A struct to hold the downloaded content.
* @param curl - A libcurl handle for performing the download.
* @param url - The URL of the web page to download.
*
* @return CURLE_OK on success, or a libcurl error code on failure.
*/
int download_webpage(const struct mem_chunk *restrict chunk,
CURL * restrict curl, const char *restrict url);
#endif /* WEB_UTIL_H */
web_util.c:
#ifndef _XOPEN_SOURCE
#define _XOPEN_SOURCE 700
#endif
#include "web_util.h"
#include "errors.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include <jansson.h>
#define MAX_VERSE_SIZE 4096
int parse_response_json(const char *restrict json_response, char **restrict out)
{
json_error_t error;
json_t *const root = json_loads(json_response, 0, &error);
if (root) {
/*
* Check if "text" exists in the JSON structure.
*/
const json_t *const data = json_object_get(root, "data");
const json_t *const text = json_object_get(data, "text");
if (data && text && json_is_string(text)) {
*out = strdup(json_string_value(text));
} else {
json_decref(root);
return E_UNKNOWN;
}
} else {
fputs(error.text, stderr);
return E_UNKNOWN;
}
json_decref(root);
return E_SUCCESS;
}
size_t
write_memory_callback(void *content, size_t size,
size_t nmemb, struct mem_chunk *chunk)
{
const size_t new_size = chunk->len + size * nmemb;
void *const cp = realloc(chunk->ptr, new_size + 1);
if (!cp) {
perror("realloc()");
return 0;
}
chunk->ptr = cp;
memcpy(chunk->ptr + chunk->len, content, size * nmemb);
chunk->ptr[new_size] = '\0';
chunk->len = new_size;
return size * nmemb;
}
int
download_webpage(const struct mem_chunk *restrict chunk,
CURL * restrict curl, const char *restrict url)
{
CURLcode ret;
/* *INDENT-OFF* */
if ((ret = curl_easy_setopt(curl, CURLOPT_URL, url)) != CURLE_OK
|| (ret = curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_memory_callback)) != CURLE_OK
|| (ret = curl_easy_setopt(curl, CURLOPT_WRITEDATA, chunk)) != CURLE_OK
|| (ret = curl_easy_setopt(curl, CURLOPT_USERAGENT, "Verse/1.0")) != CURLE_OK
|| (ret = curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L)) != CURLE_OK
|| (ret = curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 5L)) != CURLE_OK) {
return (int) ret;
}
/* *INDENT-ON* */
return (int) curl_easy_perform(curl);
}
main.c:
#ifdef _POSIX_C_SOURCE
#undef _POSIX_C_SOURCE
#endif
#ifdef _XOPEN_SOURCE
#undef _XOPEN_SOURCE
#endif
#define _POSIX_C_SOURCE 200809L
#define _XOPEN_SOURCE 700
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "strtoi.h"
#include "web_util.h"
#include "errors.h"
#define BASE_URL "http://api.alquran.cloud/v1/ayah/%d:%d/en.maududi"
#define MAX_URL_SIZE 128
#define MAX_CHAPTER 114
#define MIN_CHAPTER 0
#define MIN_VERSE 0
#define INIT_MEM_CHUNK(address, size) \
{ .ptr = address, .len = size }
/*
* Each entry in the table is the maximum number of verses present in its
* corresponding index, which denotes the chapter number.
*/
static const int verse_limits[] = {
7, 286, 200, 176, 120, 165, 206, 75, 129, 109, 123, 111, 43, 52,
99, 128, 111, 110, 98, 135, 112, 78, 118, 64, 77, 227, 93, 88, 69,
60, 34, 30, 73, 54, 45, 83, 182, 88, 75, 85, 54, 53, 89, 59, 37,
35, 38, 29, 18, 45, 60, 49, 62, 55, 78, 96, 29, 22, 24, 13, 14,
11, 11, 18, 12, 12, 30, 52, 52, 44, 28, 28, 20, 56, 40, 31, 50,
40, 46, 42, 29, 19, 36, 25, 22, 17, 19, 26, 30, 20, 15, 21, 11,
8, 8, 19, 5, 8, 8, 11, 11, 8, 3, 9, 5, 4, 7, 3, 6, 3, 5, 4, 5, 6
};
static inline int check_args(int argc, const char *const *argv)
{
/*
* Sanity check. POSIX requires the invoking process to pass a non-NULL argv[0].
*/
return (!argv[0]) ? E_NULL_ARGV :
(argc != 3) ? E_INSUFFICIENT_ARGS : E_SUCCESS;
}
static int check_input(const char *const *restrict argv, int *restrict chapter,
int *restrict verse)
{
const int ret_1 = strtoi(chapter, argv[1], 10);
const int ret_2 = strtoi(verse, argv[2], 10);
/* *INDENT-OFF* */
return (ret_1 == STRTOI_INCONVERTIBLE || ret_2 == STRTOI_INCONVERTIBLE) ? E_PARSE_ERROR :
(ret_1 == STRTOI_UNDERFLOW || ret_2 == STRTOI_UNDERFLOW ||
ret_1 == STRTOI_OVERFLOW || ret_2 == STRTOI_OVERFLOW) ? E_INVALID_RANGE :
(*chapter <= MIN_CHAPTER || *chapter > MAX_CHAPTER) ? E_INVALID_CHAPTER :
(*verse <= MIN_VERSE || *verse > verse_limits [*chapter - 1]) ? E_INVALID_VERSE :
E_SUCCESS;
/* *INDENT-ON* */
}
static int handle_args(int chapter, int verse)
{
char url[MAX_URL_SIZE];
snprintf(url, sizeof (url), BASE_URL, chapter, verse);
struct mem_chunk chunk = INIT_MEM_CHUNK(0, 0);
CURL *const curl = curl_easy_init();
if (!curl) {
return E_CURL_INIT_FAILED;
}
int rc = download_webpage(&chunk, curl, url);
if (rc != CURLE_OK) {
curl_easy_cleanup(curl);
free(chunk.ptr);
return E_CURL_PERFORM_FAILED;
} else {
char *result = NULL;
rc = parse_response_json(chunk.ptr, &result);
if (rc != E_SUCCESS) {
curl_easy_cleanup(curl);
free(chunk.ptr);
return rc;
} else {
printf("(%d:%d) %s\n", chapter, verse, result);
}
free(result);
}
curl_easy_cleanup(curl);
curl_global_cleanup();
free(chunk.ptr);
return E_SUCCESS;
}
int main(int argc, char **argv)
{
const char *const *args = (const char *const *) argv;
int status = check_args(argc, args);
if (status != E_SUCCESS) {
fputs(get_err_msg(status), stderr);
return EXIT_FAILURE;
}
int chapter, verse;
status = check_input(args, &chapter, &verse);
if (status != E_SUCCESS) {
fputs(get_err_msg(status), stderr);
return EXIT_FAILURE;
}
status = handle_args(chapter, verse);
if (status != E_SUCCESS) {
fputs(get_err_msg(status), stderr);
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
Or you could clone this repository:
verse
Review Goals:
General coding comments, style, etc.
Does any part of my code exhibit undefined/implementation-defined behavior?
Is there a better way to structure the code?
Answer: Just a strtoi() review.
Bug: errno setting/restoration woes.
errno = 0; should be done before calling strtol();
Below code may set errno due to strtol(), yet the next line is errno = 0; and following tests like errno == ERANGE are always false.
// Problem code
const int errno_original = errno;
const long int i = strtol(nptr, &end_ptr, base);
errno = 0; // ????
if (i > INT_MAX || (errno == ERANGE && i == LONG_MAX)) {
Instead, sample errno and then restore it. Use sample for later tests.
// Sample fix
const int errno_original = errno;
errno = 0;
const long int i = strtol(nptr, &end_ptr, base);
int errno_sample = errno;
errno = errno_original;
if (i > INT_MAX || (errno_sample == ERANGE && i == LONG_MAX)) {
Code does not restore errno in select cases
Consider moving errno = errno_original; up near strtol() to restore errno along all code paths.
Unneeded test
// v------v not needed. `strtol()` handles that.
// if (!nptr || !nptr[0] || isspace(nptr[0])) {
if (!nptr || isspace(nptr[0])) {
and re-organize tests
if (*end_ptr || nptr == end_ptr) {
return STRTOI_INCONVERTIBLE;
} else if (i > INT_MAX || (errno == ERANGE && i == LONG_MAX)) {
return STRTOI_OVERFLOW;
} else if (i < INT_MIN || (errno == ERANGE && i == LONG_MIN)) {
return STRTOI_UNDERFLOW;
}
White space test potentially out of range
is...() functions need an unsigned char value or EOF. When char is signed, code risks UB.
// if (!nptr || !nptr[0] || isspace(nptr[0])) {
if (!nptr || !nptr[0] || isspace((unsigned char)nptr[0])) {
strtoi() differs from strtol() on extremes
Consider making strtoi() more strtol() like by setting *out in all cases, e.g. to INT_MIN, INT_MAX, or 0 on too low, too high or no convert.
strtoi() name
strtoi() is a reserved name:
Function names that begin with str and a lowercase letter may be added to the declarations in the <stdlib.h> header. C11 7.31.12 1
Consider a new name, maybe str2i()?
Consider STRTOI_N
Creating STRTOI_N can simplify testing the function result for range.
typedef enum {
STRTOI_SUCCESS,
STRTOI_OVERFLOW,
STRTOI_UNDERFLOW,
STRTOI_INCONVERTIBLE,
STRTOI_N // add
} strtoi_errno;
Mis-comments
@param base - Base to interpret string in. Same range as strtol (2 to 36).
should be
@param base - Base to interpret string in. Same range as strtol (0 and 2 to 36).
- empty string not needed in .h exception list.
base out of range?
Interesting that strtoi() with its many checks does not test base. C does not specify a base check for strtol().
Unneeded else
A style issue.
if (i > INT_MAX || (errno == ERANGE && i == LONG_MAX)) {
return STRTOI_OVERFLOW;
// else not needed here.
} else if (i < INT_MIN || (errno == ERANGE && i == LONG_MIN)) {
return STRTOI_UNDERFLOW;
}
Minor: unnecessary #include <*.h>
strtoi.c does not need #include <stdio.h>.
Even though this does not apply to strtoi.c, for user .h files I strongly recommend including only the necessary #include <*.h> files.
For user .c files, the issue is less important.
I consider unnecessary #include <*.h> files in a user .c file not a real issue, as there is a reasonable argument for including some <*.h> to make certain the user .c code does not conflict with them. Further, the maintenance needed to keep the included set minimal in a user .c file is not that productive. Note that some IDEs offer an editing option to include the minimum set, something of a spell checker for #include.
This issue is a bit of a software war and best to follow your group's coding standards.
Candidate replacement (untested):
str2i_error str2i(int *restrict out, const char *restrict nptr, int base) {
// Maybe test out
if (out == NULL) {
return STR2I_INCONVERTIBLE;
}
if (nptr == NULL || isspace((unsigned char) nptr[0])) {
*out = 0;
return STR2I_INCONVERTIBLE;
}
if (!(base == 0 || (base >= 2 && base <= 36))) {
*out = 0;
return STR2I_INCONVERTIBLE;
}
char *end_ptr;
int errno_original = errno;
errno = 0;
// I used value here rather than i since it is not an int.
long value = strtol(nptr, &end_ptr, base);
int errno_sample = errno;
errno = errno_original;
if (*end_ptr || nptr == end_ptr) {
*out = 0;
return STR2I_INCONVERTIBLE;
}
if (value > INT_MAX || (errno_sample == ERANGE && value == LONG_MAX)) {
*out = INT_MAX;
return STR2I_OVERFLOW;
}
if (value < INT_MIN || (errno_sample == ERANGE && value == LONG_MIN)) {
*out = INT_MIN;
return STR2I_UNDERFLOW;
}
*out = (int) value;
return STR2I_SUCCESS;
}
Consider making a str2i() that works just like strtol(), except for range. Then form your str2i_error() which uses str2i(). Easier to extend and make your str2other_error() functions.
Advanced
Even consider _Generic. | {
"domain": "codereview.stackexchange",
"id": 45448,
"tags": "c, json, curl"
} |
Finding Wikipedia articles with specific types of user page links | Question: SELECT pl_from, NS, page_title, L_NS, L_titles, num_L, SB, IU, WP
FROM (
SELECT
pl_from,
-- pl_from_namespace does not appear to be consistently reliable;
-- it might be better to select from the page table and join the pagelinks table to it.
CASE
-- This fails on pages missing from the page table (presumably because they were deleted).
WHEN pl_from_namespace != page_namespace THEN CONCAT(pl_from_namespace, ' vs. ', page_namespace)
ELSE pl_from_namespace
END AS NS,
page_title,
pl_namespace AS L_NS,
GROUP_CONCAT(pl_title SEPARATOR ' ') AS L_titles,
COUNT(pl_title) AS num_L,
CASE
WHEN MAX(CASE WHEN pl_title LIKE '%/sandbox' THEN 1 END) = 1 THEN '(SB)'
ELSE ''
END AS SB,
CASE
WHEN EXISTS (
SELECT 1
FROM templatelinks
WHERE
tl_from = pl_from
AND tl_title = 'Under_construction'
) THEN '(C)'
ELSE ''
END AS C,
CASE
WHEN EXISTS (
SELECT 1
FROM categorylinks
WHERE
cl_from = pl_from
AND cl_to = 'Pages_using_Under_construction_with_the_placedby_parameter'
) THEN '(PB)'
ELSE ''
END AS C_PB,
CASE
WHEN EXISTS (
SELECT 1
FROM templatelinks
WHERE
tl_from = pl_from
AND (tl_title = 'In use' OR tl_title = 'GOCEinuse')
) THEN '(IU)'
ELSE ''
END AS IU,
CASE
WHEN EXISTS (
SELECT 1
FROM templatelinks
WHERE
tl_from = pl_from
AND tl_title = 'Copyvio-revdel'
) THEN '(RD1)'
ELSE ''
END AS RD1,
CASE
WHEN EXISTS (
SELECT 1
FROM templatelinks
WHERE
tl_from = pl_from
AND tl_title = 'Wikipedia_person_user_link'
) THEN '(WP)'
ELSE ''
END AS WP,
CASE
WHEN EXISTS (
SELECT 1
FROM categorylinks
WHERE
cl_from = pl_from
AND cl_to = 'Candidates_for_speedy_deletion'
) THEN '(CSD)'
ELSE ''
END AS CSD
FROM pagelinks
LEFT JOIN page ON page_id = pl_from
WHERE
pl_from_namespace = 0
AND pl_namespace = 2
-- In the future: AND pl_namespace != 0
GROUP BY pl_from
ORDER BY SB, page_title
) AS t1
WHERE
(
(C = '' AND RD1 = '' AND WP = '' AND CSD = '')
OR num_L != 1
)
AND (C_PB = '' OR num_L != 2)
Currently this query is run on an online database replicate of Wikipedia, so you can see this query's result.
What is this supposed to do? How does it work?
Relevant background: MediaWiki wikis separates pages into namespaces that are intended to store different types of content. On Wikipedia, the article namespace (which contains all of the actual encyclopedia) is the main namespace. Namespaces are somewhat analogous to how Stack Exchange sites separate content into the main questions site and the Meta domain for internal site discussion (however, Wikipedia sorts many things into namespaces that SE sites don't).
This query searches for internal links from articles to the user namespace. Getting all these links is easy; the complexity arises from filtering out some of the results under specific conditions.
Before going further, there's one essential piece of background about the MediaWiki database schema: the columns of each table are specifically named to be distinct, so each column starts with a prefix specific to its originating table. This query uses 4 tables: page with prefix page_, pagelinks with prefix pl_, templatelinks with prefix tl_, and categorylinks with prefix cl_.
The basic goals of the query are as follows:
Take all link on articles (i.e. pagelinks rows where pl_from_namespace = 0) that link to user namespace (i.e. where pl_namespace = 2) from the pagelinks table
GROUP BY pl_from to allow counting of links per page through COUNT(pl_title) AS num_L, and to generally organize rows as page-specific.
This also uses the aggregate function GROUP_CONCAT(pl_title SEPARATOR ' ') AS L_titles to list each link in the final output.
Filter out page rows that:
have any of the following templates: 'Under_construction', 'Copyvio-revdel', 'Wikipedia_person_user_link' or has the category 'Candidates_for_speedy_deletion'
(A page "has a template" if there's a row in templatelinks where tl_from = pl_from AND tl_title = 'Template_title'. A page "has a category" if there's a row in categorylinks where cl_from = pl_from AND cl_to = 'Category_title')
AND have one link (num_L = 1)
Additionally filter out page rows that have the category 'Pages_using_Under_construction_with_the_placedby_parameter' AND have two links (num_L = 2)
Take note of rows that have a link title (pl_title) ending with /sandbox or have the template 'In_use' or 'GOCEinuse' by using what amounts to Boolean columns.
These documentation links may also be useful if one wants to understand the Mediawiki database better, but should not be required to answer the question: Database layout manual, page table, pagelinks table, templatelinks table, categorylinks table
Improvements I'm looking for
My query already runs in less than a second, so I'm not too concerned about efficiency (though I welcome any suggestions).
What I'm mainly looking for is general SQL advice. I strongly suspect there are much much better ways to handle repeated structures like:
CASE
WHEN EXISTS (
SELECT 1
FROM templatelinks
WHERE
tl_from = pl_from
AND tl_title = 'template_title'
) THEN '(col_name)'
ELSE ''
END AS col_name
If you need any clarification or have any questions, feel free to comment and I will reply.
Answer: Consider the following tips and best practices:
TABLE QUALIFIERS: First and foremost, always qualify all fields in all clauses (SELECT, WHERE, JOIN, etc.) with table names or table aliases using period denotation. Doing so facilitates readability and maintainability.
PREFIXED FIELD NAMES: Related to above, avoid prefixing field names (pl_from, tl_title, cl_to). Instead, use table aliases to period qualify identifiers in query to avoid collision or confusion. Of course if this is Wikimedia's setup, there's nothing you can do.
CASE SUBQUERIES: Avoid subqueries in CASE statements which requires row by row logic calculation. Instead, use multiple LEFT JOIN on templatelinks and categorylinks tables and then run the needed CASE logic where NOT EXISTS render as NULL.
GROUP BY: Unfortunately, and as a disservice to SQL newcomers, MySQL's ONLY_FULL_GROUP_BY mode is turned off here, so your aggregate inner query is not ANSI compliant. Always include all non-aggregated columns in GROUP BY for consistent, valid results.
Your query would fail in practically all other RDBMSs (Oracle, Postgres, etc.) since your GROUP BY clause is incomplete and does not adhere to ANSI rules: page_title, pl_namespace, and now the new LEFT JOIN fields are not included. In SQL, where at least one aggregate such as COUNT is used, all non-aggregated columns must be included in the GROUP BY clause but can optionally be omitted from SELECT (not the other way around). NOTE: your results may change with such code refactoring. The Wikimedia interface may not allow setting/mode adjustments.
AGGREGATION: Related to above, you may need to handle all unit level calculations including CASE statements in the inner query and move aggregation to top level SELECT. If you need to include other unit level fields in final resultset but not in aggregation, run a JOIN on the aggregated subquery or via a CTE.
Below is an adjustment to your SQL query with unit level calculations handled in derived table subquery and all aggregations moved to top level. Previous outer WHERE now becomes HAVING since aggregates are involved. Depending on your needs and results, additional adjustments may be needed. But again, be sure to run with complete GROUP BY to include all non-aggregated columns. As mentioned, you will not be warned by the Wikimedia engine.
SELECT sub.pl_from,
sub.page_title,
sub.L_NS,
MAX(sub.NS) AS NS,
GROUP_CONCAT(sub.pl_title SEPARATOR ' ') AS L_titles,
COUNT(sub.pl_title) AS num_L,
MAX(sub.SB) AS SB,
MAX(sub.IU) AS IU,
MAX(sub.WP) AS WP
FROM
(SELECT pl.pl_from,
CASE
WHEN pl.pl_from_namespace != p.page_namespace
THEN CONCAT(pl.pl_from_namespace, ' vs. ', p.page_namespace)
ELSE pl.pl_from_namespace
END AS NS,
p.page_title,
pl.pl_namespace AS L_NS,
pl.pl_title,
CASE
WHEN pl.pl_title LIKE '%/sandbox' THEN '(SB)'
ELSE ''
END AS SB,
CASE
WHEN t.tl_title = 'Under_construction'
THEN '(C)'
ELSE ''
END AS C,
CASE
WHEN c.cl_to = 'Pages_using_Under_construction_with_the_placedby_parameter'
THEN '(PB)'
ELSE ''
END AS C_PB,
CASE
WHEN (t.tl_title = 'In use' OR t.tl_title = 'GOCEinuse')
THEN '(IU)'
ELSE ''
END AS IU,
CASE
WHEN t.tl_title = 'Copyvio-revdel'
THEN '(RD1)'
ELSE ''
END AS RD1,
CASE
WHEN t.tl_title = 'Wikipedia_person_user_link'
THEN '(WP)'
ELSE ''
END AS WP,
CASE
WHEN c.cl_to = 'Candidates_for_speedy_deletion'
THEN '(CSD)'
ELSE ''
END AS CSD
FROM pagelinks pl
LEFT JOIN page p ON p.page_id = pl.pl_from
LEFT JOIN templatelinks t ON t.tl_from = pl.pl_from
LEFT JOIN categorylinks c ON c.cl_from = pl.pl_from
WHERE pl.pl_from_namespace = 0
AND pl.pl_namespace = 2
) AS sub
GROUP BY sub.pl_from,
sub.page_title,
sub.L_NS
HAVING
(
(MAX(sub.C) = '' AND MAX(sub.RD1) = '' AND MAX(sub.WP) = '' AND MAX(sub.CSD) = '')
OR COUNT(sub.pl_title) != 1
)
AND (MAX(sub.C_PB) = '' OR COUNT(sub.pl_title) != 2)
ORDER BY MAX(sub.SB),
sub.page_title | {
"domain": "codereview.stackexchange",
"id": 35233,
"tags": "sql, mysql, wikipedia"
} |
Why is the total read number still more than the paired in sequencing after removing the duplicate in samtools flagstat output? | Question: After alignment using BWA, I removed the duplicates using samtools (version 1.9).
My procedure is as follows:
bwa mem -k 32 -M ref.fa read1 read2 > out.sam
samtools view -@ 0 -b -T ref.fa -o out.bam in.sam
samtools sort -n -o out.nameSrt.bam in.bam
samtools fixmate -r -m in.nameSrt.bam out.fixmate.bam
samtools sort -o out.fixmate.sort.bam in.fixmate.bam
samtools markdup -r -S -s in.fixmate.sort.bam out.markdup.bam
samtools flagstat in.markdup.bam > out.markdup.flagstat
The flagstat output result is as follows:
21611397 + 0 in total (QC-passed reads + QC-failed reads)
0 + 0 secondary
0 + 0 supplementary
0 + 0 duplicates
21611397 + 0 mapped (100.00% : N/A)
21422330 + 0 paired in sequencing
10711165 + 0 read1
10711165 + 0 read2
19797684 + 0 properly paired (92.42% : N/A)
21422330 + 0 with itself and mate mapped
0 + 0 singletons (0.00% : N/A)
1306000 + 0 with mate mapped to a different chr
727043 + 0 with mate mapped to a different chr (mapQ>=5)
Why is the total read number still more than the paired in sequencing after removing the duplicate in samtools flagstat output?
Is there anything wrong with my procedure?
Answer: BWA-MEM may produce chimeric alignments, where different parts of a read are mapped to distinct loci; flagstat counts the resulting records as separate reads. | {
"domain": "bioinformatics.stackexchange",
"id": 784,
"tags": "ngs, samtools"
} |
Conductors and their charge? | Question: Why does excess positive charge stay on the surface of a conductor?
This is what I understood from:
How does positive charge spread out in conductors?
and other resources on the web:
If there is a electric field inside the conductor they will pull on the electrons
Therefore there can be no field inside the conductor
It follows from Gauss's Law that there are no charges inside
My questions:
If there are positive charges inside the conductor, they will attract the electrons. But the electrons are already being attracted by the nucleus they belong to so why would they move? All electrons have electric fields already acting on them (the electric field of the nucleus) so why would adding new ones make a difference?
If the positive charges are distributed on the surface, the field would only be zero right at the centre. The fields would cancel out in the centre because of symmetry, but the field anywhere other than the centre would be non-zero. So how would the electrons be in equilibrium?
Please see the details on the bounty
Answer: The defining property of a conductor is that charge is free to move within it.
Hence, if there existed an electric field within the conducting medium, charge
would move until the field became zero. It follows that $\vec{E} = 0$ inside of a
conductor.
Gauss's law therefore implies:
$$
\rho=\epsilon_0\nabla\cdot\vec{E}=0,
$$
since $\vec{E} = 0$ within the bulk of the conductor, all of the excess charge must
reside on the surface.
To address your two questions specifically;
In a metal, the electrons flow freely around like a fluid. They are not associated with any particular nucleus.
The charges will do whatever they need to, in order to make the field zero inside. This defines how the charge acts on the surface. Your assumption that you know the charge distribution and from that you can determine the field is backwards. (Incidentally, for a spherically symmetric surface charge the field vanishes everywhere inside, not only at the centre: a concentric Gaussian sphere of radius $r<R$ encloses no charge, and by symmetry $\vec{E}$ is radial and uniform over it, so $E(r)=0$ for every $r<R$.) | {
"domain": "physics.stackexchange",
"id": 14350,
"tags": "electrostatics, charge, conductors"
} |
Failed to load plugin (gazebo-11) | Question:
Hello, I am using ROS2 Foxy (built from source) and gazebo11 (installed by command sudo apt install ros-foxy-gazebo-ros-pkgs) on Ubuntu 20.04.1. When I try to follow the test from installing tutorial (http://gazebosim.org/tutorials?tut=ros2_installing&cat=connect_ros) I get an error:
[Err] [Plugin.hh:178] Failed to load plugin libgazebo_ros_diff_drive.so: libgazebo_ros_diff_drive.so: cannot open shared object file: No such file or directory
on the step:
gazebo --verbose /opt/ros/eloquent/share/gazebo_plugins/worlds/gazebo_ros_diff_drive_demo.world
I already checked GAZEBO_PLUGIN_PATH; it points directly to the folder where the .so file is located.
Thanks for helping.
Originally posted by faade on Gazebo Answers with karma: 3 on 2020-10-10
Post score: 0
Answer:
Hi @faade,
Did you source the ROS 2 workspace in the terminal that you are running Gazebo?
source /opt/ros/foxy/setup.bash
Regards
Originally posted by ahcorde with karma: 281 on 2020-10-16
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 4552,
"tags": "ros, gazebo, gazebo-11, gazebo-plugin"
} |
Encrypt XML file with AES and store it on disk | Question: I have written some code to encrypt XML and then store it on the disk. I want to be sure that the encryption code is secure, so here is the code:
package com.application;
import java.io.UnsupportedEncodingException;
import java.lang.reflect.Field;
import java.security.Key;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.spec.AlgorithmParameterSpec;
import javax.crypto.BadPaddingException;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
public class Aes {
public Aes() {
}
public String encrypt(String data, String key) {
try {
Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
String iv = generateRandomIv();
cipher.init(Cipher.ENCRYPT_MODE, makeKey(key), makeIv(iv));
return iv + System.getProperty("line.separator") + new String(cipher.doFinal(data.getBytes("ISO-8859-1")), "ISO-8859-1");
} catch (Exception e) {
throw new RuntimeException(e);
}
}
public String decrypt(String data, String key) throws WrongPasswordException {
String decrypted = "";
try {
Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
String iv = getIv(data);
cipher.init(Cipher.DECRYPT_MODE, makeKey(key), makeIv(iv));
decrypted = new String(cipher.doFinal(removeIvFromString(data).getBytes("ISO-8859-1")), "ISO-8859-1");
}
catch (BadPaddingException e) {
throw new WrongPasswordException();
}
catch (Exception e) {
throw new RuntimeException(e);
}
return decrypted;
}
private AlgorithmParameterSpec makeIv(String iv) {
try {
return new IvParameterSpec(iv.getBytes("UTF-8"));
} catch (UnsupportedEncodingException e) {
e.printStackTrace();
}
return null;
}
private String generateRandomIv() {
return new RandomStringGenerator().randomString(16);
}
private String getIv(String data) {
return data.substring(0, data.indexOf(System.getProperty("line.separator")));
}
private String removeIvFromString(String data) {
return data.substring(data.indexOf(System.getProperty("line.separator")) + 1, data.length());
}
private Key makeKey(String encryptionKey) {
try {
MessageDigest md = MessageDigest.getInstance("SHA-256");
byte[] key = md.digest(encryptionKey.getBytes("UTF-8"));
return new SecretKeySpec(key, "AES");
} catch (NoSuchAlgorithmException e) {
e.printStackTrace();
} catch (UnsupportedEncodingException e) {
e.printStackTrace();
}
return null;
}
}
package com.application;
import org.apache.commons.lang3.ArrayUtils;
import java.util.ArrayList;
import java.util.Random;
public class RandomStringGenerator {
private char[] vowelLowerCaseLetter = {'a', 'e', 'i', 'o', 'u', 'y'};
private char[] consonantsLowerCaseLetter = {'b','c','d','f','g','h','j','k','l','m','n','p','q','r','s','t','v','w','x','z'};
private char[] numbers = {'1', '2', '3', '4', '5', '6', '7', '8', '9', '0'};
private char[] specialCharacters = {'!', '"', '@', '#', '£', '¤', '$', '%', '&', '/', '{', '(', '[', ')', ']', '=', '}', '?', '+', '\\',
'´', '¨', '~', '^', '*', '\'', '-', '_', '.', ':', ',', ';', ' ', '½', '§', '<', '>'};
public String randomString(int length) {
char[] upperCaseLetter = convertCharsToUpperCase(ArrayUtils.addAll(vowelLowerCaseLetter, consonantsLowerCaseLetter));
char[] lowerCaseLetter = ArrayUtils.addAll(vowelLowerCaseLetter, consonantsLowerCaseLetter);
char[] allowedCharacters = ArrayUtils.addAll(ArrayUtils.addAll(lowerCaseLetter, upperCaseLetter), ArrayUtils.addAll(numbers, specialCharacters));
String randomString = "";
for (int i = 0; i < length; i++) {
randomString += getRandomCharacter(allowedCharacters);
}
return randomString;
}
private char getRandomCharacter(char[] allowedCharacters) {
Random r = new Random();
return allowedCharacters[r.nextInt(allowedCharacters.length)];
}
private char[] convertCharsToUpperCase(char[] lowerCaseLetter) {
char[] upperCaseLetters = new char[lowerCaseLetter.length];
for (int i = 0; i < lowerCaseLetter.length; i++) {
upperCaseLetters[i] = Character.toUpperCase(lowerCaseLetter[i]);
}
return upperCaseLetters;
}
}
Answer: Binary data != string
public String encrypt(String data, String key) {
...
new String(cipher.doFinal(data.getBytes("ISO-8859-1")), "ISO-8859-1");
...
}
Here you get a byte[] with the input data encrypted. This is arbitrary binary data.
Do not treat binary data as strings.
It only works because you are using an encoding with a single byte per character.
When you want to store binary data as a string you should use Base64 encoding instead:
import java.util.Base64;
public String encrypt(String data, String key) {
try {
Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
String iv = generateRandomIv();
cipher.init(Cipher.ENCRYPT_MODE, makeKey(key), makeIv(iv));
byte[] cipherBytes = cipher.doFinal(data.getBytes(StandardCharsets.UTF_8));
String base64CipherText = Base64.getEncoder().encodeToString(cipherBytes);
return iv + System.getProperty("line.separator") + base64CipherText;
} catch (Exception e) {
throw new RuntimeException(e);
}
}
public String decrypt(String data, String key) throws WrongPasswordException {
String decrypted = "";
try {
Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
String iv = getIv(data);
cipher.init(Cipher.DECRYPT_MODE, makeKey(key), makeIv(iv));
byte[] cipherBytes = Base64.getDecoder().decode(removeIvFromString(data));
decrypted = new String(cipher.doFinal(cipherBytes), StandardCharsets.UTF_8);
}
catch (BadPaddingException e) {
throw new WrongPasswordException();
}
catch (Exception e) {
throw new RuntimeException(e);
}
return decrypted;
}
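As a language-neutral illustration of the same point, here is a quick Python sketch (the random bytes stand in for arbitrary ciphertext; this is not part of the reviewed Java code). Base64 maps any byte sequence to plain ASCII text and back losslessly, which is exactly what storing ciphertext in a String needs:

```python
import base64
import os

cipher_bytes = os.urandom(32)   # stand-in for arbitrary ciphertext bytes

# Base64 turns any byte sequence into plain ASCII text and back without loss
encoded = base64.b64encode(cipher_bytes).decode("ascii")
decoded = base64.b64decode(encoded)

assert decoded == cipher_bytes and encoded.isascii()
```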
Now that the encoding issues are fixed you should consider using UTF-8 (or another portable encoding) for the String.getBytes. | {
"domain": "codereview.stackexchange",
"id": 15930,
"tags": "java, security, xml, cryptography"
} |
Swift: arrayToTree() where array contains int and nil | Question: I am learning trees on LeetCode and need to prepare the testing data.
It is easy to convert the array into a tree of nodes when its elements are integers,
such as [3, 9, 20, 15, 7]
Here is my code:
extension Array where Element == Int{
func arrayToTree() -> TreeNode{
var nodes = [TreeNode]()
for num in 0..<self.count{
nodes.append(TreeNode(self[num]))
}
var i = 0
repeat{
nodes[i].left = nodes[2 * i + 1]
if self.count > 2 * i + 2 {
nodes[i].right = nodes[2 * i + 2]
}
i+=1
}while i < (self.count)/2
return nodes.first!
}
}
When the last level contains some nils, such as [3, 9, 20, nil, nil, 15, 7]
Here is the code:
extension Array where Element == Int?{
func arrayToTree() -> TreeNode?{
guard self.count > 0 else{
return nil
}
var nodes = [TreeNode?]()
for num in 0..<self.count{
if let num = self[num]{
nodes.append(TreeNode(num))
}
else{
nodes.append(nil)
}
}
var i = 0
repeat {
nodes[i]?.left = nodes[2 * i + 1]
if self.count > 2 * i + 2 {
nodes[i]?.right = nodes[2 * i + 2]
}
i += 1
} while i < (self.count) / 2
return nodes.first!
}
}
How can I refactor this, for example by combining the two extensions using Swift generics and a protocol?
Answer: Simplifying func arrayToTree()
This
var nodes = [TreeNode?]()
for num in 0..<self.count{
if let num = self[num]{
nodes.append(TreeNode(num))
}
else{
nodes.append(nil)
}
}
creates a new array by mapping each element in self (an optional Int) to a new element (an optional TreeNode). That can be simplified to
let nodes = self.map { $0.map { TreeNode($0) } }
where the outer Array.map maps the given array to a new array, and the inner Optional.map maps an optional Int to an optional TreeNode.
The
var i = 0
repeat {
// ...
i += 1
} while i < (self.count) / 2
loop can be simplified to
for i in 0..<self.count/2 {
// ...
}
The forced unwrapping
return nodes.first!
cannot crash – the nodes array cannot be empty at this point. I would still suggest to avoid it since later code changes might break the logic. It also makes it easier for future maintainers of the code to verify its correctness.
Actually the preceding code just results in an empty nodes array if the given list is empty. Therefore we can remove the initial guard and replace it by
guard let first = nodes.first else {
return nil
}
return first
at the end of the method. This can be further shortened to
return nodes.first.flatMap { $0 }
Putting it together, the function would look like this:
func arrayToTree() -> TreeNode? {
let nodes = self.map { $0.map { TreeNode($0) } }
for i in 0..<self.count/2 {
nodes[i]?.left = nodes[2 * i + 1]
if self.count > 2 * i + 2 {
nodes[i]?.right = nodes[2 * i + 2]
}
}
return nodes.first.flatMap { $0 }
}
An alternative implementation
What you have is a way to create a TreeNode, and that is what initializers are for. Therefore I would put the code in a
public convenience init?(values: [Int?])
of the TreeNode class instead of an Array extension method. The usage would be
let list = [3, 9, 20, nil, nil, 15, 7]
if let tree = TreeNode(values: list) {
// ...
}
And the task calls for a recursive implementation:
public convenience init?(values: [Int?], offset: Int = 0) {
guard offset < values.count, let value = values[offset] else {
return nil
}
self.init(value)
self.left = TreeNode(values: values, offset: 2 * offset + 1)
self.right = TreeNode(values: values, offset: 2 * offset + 2)
}
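For comparison, the same heap-index recursion can be sketched in Python (illustrative only; TreeNode here is a minimal stand-in for the LeetCode class):

```python
class TreeNode:
    def __init__(self, val):
        self.val, self.left, self.right = val, None, None

def build(values, i=0):
    # children of index i live at 2*i + 1 and 2*i + 2, as in the init above
    if i >= len(values) or values[i] is None:
        return None
    node = TreeNode(values[i])
    node.left = build(values, 2 * i + 1)
    node.right = build(values, 2 * i + 2)
    return node

root = build([3, 9, 20, None, None, 15, 7])
```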
Making it generic
With the above changes it is now easy to replace Int by an arbitrary value type:
public class TreeNode<T> {
public var val: T
public var left: TreeNode?
public var right: TreeNode?
public init(_ val: T) {
self.val = val
self.left = nil
self.right = nil
}
public convenience init?(values: [T?], offset: Int = 0) {
guard offset < values.count, let value = values[offset] else {
return nil
}
self.init(value)
self.left = TreeNode(values: values, offset: 2 * offset + 1)
self.right = TreeNode(values: values, offset: 2 * offset + 2)
}
} | {
"domain": "codereview.stackexchange",
"id": 33559,
"tags": "swift, generics"
} |
Colour of colloids | Question: Why are the colours of colloidal solutions different when viewed along different directions?
For example, milk appears blue when viewed by reflected light and red when viewed by transmitted light.
Is it due to the irregular shape of the particles that they scatter different wavelengths along different directions?
Answer: This is classic, the effect is called Mie scattering.
Very simply speaking, light is scattered on the whole surface of every (translucent) particle. For round particles, interference makes a diffraction pattern with conic symmetry, i.e. the intensity of diffracted light depends on the angle towards the incoming wave.
Obviously the pattern depends on the light wavelength, and so there is a dispersive effect. It's not very strong, because usually the particles are not all exactly of uniform size, and there is a lot of multiscattering.
Btw. it takes a keen observer to notice this without prior knowledge. Congratulations. ;-) It's easier to see if you dilute the milk (less multiple scattering). At least one good use for low-fat milk.
Warning: The explanation above is terribly simplified, to the point where it doesn't explain much. For example you'd expect the colours to change with every brand of non-/homogenised milk, because the size distribution varies. Not so much.
Also the effect is obscured by the fact that light scattering is generally much stronger for shorter wavelengths. That also means that red light is more likely to pass through, and blue light to get absorbed or escape on the front side. It's really hard to tell apart Mie scattering and the ordinary Tyndall effect. | {
"domain": "chemistry.stackexchange",
"id": 11559,
"tags": "inorganic-chemistry, color, colloids"
} |
Why are animal births not taken as seriously as human births? | Question: When humans give birth, more often than not medical assistance is needed. Others gather around and frantically look for any way to help. But when an animal gives birth, it is usually seen as a moment where you give the female its space and let the birth occur naturally and without any assistance. The animal is of course in serious pain, just as a human female is, but this is often not taken into account. Why is it that animal births are not taken as seriously?
Answer: Our heads are bigger.
There's some debate on the issue, but in essence, human brains, and therefore heads, are very large relative to our body size. This is handy for all the intelligent things we like to do, but can be rather painful during birth. Because we walk upright, the size of a newborn's head is actually a non-trivial fact during the birthing process. There are two major implications.
The first is that human birth hurts. You can watch the birth of other animals and they seem to brush it off, but for humans, forcing that huge head through a relatively small birth canal is difficult. Evolution has (supposedly) limited the size of the hips because, while that would allow an easier birthing process, it would negatively impact our ability to walk. As such, it has to hurt.
Secondly, in order to make the process easier, humans rotate during birth. The end result is that, unlike even other closely related primates, humans come out backward in a way that is very difficult for a birthing female to attend to. This almost requires having another person or two on hand to help out. This would, of course, be a huge reinforcement for social connections.
A few books I know of touch on this. Up From Dragons deals with the brain size/hip size issue and The Invisible Sex talks about rotation during the birthing process and the social implications. | {
"domain": "biology.stackexchange",
"id": 1205,
"tags": "human-biology, reproduction"
} |
Subtracting grand mean from train and test images | Question: I am building an image classifier based off the VGG_face keras implementation. It is easiest for me to extract a csv file full of the representations and then try classifiers on those representations. When I got the representations, I first subtracted the mean of the entire dataset from each image. Then I realized... am I cheating, so to speak? In other words, since I included the test images when calculating the grand mean to be subtracted, does this now then overestimate my accuracy measurements?
Answer: There is a kind of bias that you are introducing, yes. You are basically extracting some statistics (i.e. the mean) from your hold-out set and using that to train, which makes your final claims of accuracy a little weaker (some people might say they are useless).
The general approach is to compute the mean of your training data, then you may subtract that from all of the data, including hold-out data.
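In code form the rule is simply that the statistic is computed from the training split only and then reused everywhere (a NumPy sketch with made-up numbers):

```python
import numpy as np

X_train = np.array([[1.0, 2.0], [3.0, 4.0]])
X_test = np.array([[5.0, 6.0]])

mean = X_train.mean(axis=0)        # computed from training data only
X_train_centered = X_train - mean
X_test_centered = X_test - mean    # the SAME training mean is applied to hold-out data
```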
You can do the mean subtraction, in general, using something like the ImageDataGenerator. The mean to be subtracted can be computed using all or some of the training data. That class also offers other augmentation functionalities, such as normalising the dataset too, adding rotations etc.
You mentioned you read features from a CSV file, so if you are not talking about images, as long as you can use e.g. NumPy, you can perform it manually on all the data at the beginning. | {
"domain": "datascience.stackexchange",
"id": 3289,
"tags": "keras, convolutional-neural-network, normalization"
} |
Project Euler 34 - digit factorials | Question:
145 is a curious number, as \$1! + 4! + 5! = 1 + 24 + 120 = 145\$.
Find the sum of all numbers which are equal to the sum of the factorial of their digits.
Note: as \$1! = 1\$ and \$2! = 2\$ are not sums they are not included.
I can't figure out a fair way to optimize the upper bound from the information given in the question. I went on the PE forum and found many people setting the upper bound to 50000 because they knew that would be large enough after testing. This doesn't seem fair to me; I want to set the bound based on the information in the question. Right now it runs in around 20 seconds.
EDIT: I'm not looking for a mathematical algorithm. I'm looking for ways to make this code faster.
from math import factorial as fact
from timeit import default_timer as timer
start = timer()
def findFactorialSum():
factorials = [fact(x) for x in range(0, 10)] # pre-calculate products
total_sum = 0
for k in range(10, fact(9) * 7): # 9999999 is way more than its fact-sum
if sum([factorials[int(x)] for x in str(k)]) == k:
total_sum += k
return total_sum
ans = findFactorialSum()
elapsed_time = (timer() - start) * 1000 # s --> ms
print "Found %d in %r ms." % (ans, elapsed_time)
Answer: I'm not aware of any mathematical way to establish the upper bound for the search space (I used the same upper limit as you did). However, there are some optimizations you can make:
Use integer math throughout instead of converting to a string, extracting the digits, then converting back to an integer. On my PC, your algorithm ran in approximately 6200ms. Using integer math as follows (there may be a more Pythonic way to do this; my code is a direct transliteration of C++ that I used), it ran in approximately 1600ms -- almost 4 times as fast:
def findFactorialSum():
factorials = [fact(x) for x in range(0, 10)] # pre-calculate products
total_sum = 0
for k in range(10, fact(9) * 7): # 9999999 is way more than its fact-sum
tmp = k
total = 0
while tmp > 0:
total += factorials[tmp % 10]
tmp //= 10
if total == k:
total_sum += k
return total_sum
(For my curiosity, I also tried it with the '/' operator instead of the '//' operator, and consistently got 100ms longer run times with the former.)
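As an aside, the 7 * 9! limit both of us used can at least be justified numerically: once n * 9! has fewer than n digits, no n-digit number can equal its digit-factorial sum. A quick Python check of that standard counting argument:

```python
from math import factorial

n = 1
while n * factorial(9) >= 10 ** (n - 1):   # can n * 9! still reach n digits?
    n += 1

limit = (n - 1) * factorial(9)   # first failing n is 8, so the bound is 7 * 9!
print(n, limit)                  # 8 2540160
```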
You're doing an exhaustive search of the numbers from [10...7*9!] to see which ones meet the problem criteria. You can eliminate a large proportion of those numbers:
Hint 1:
No 2-digit number can have a digit >= 5. This is because 5! == 120, so any 2-digit number with a 5 (or higher) in it will automatically have a 3-digit (or longer) sum. You can extend this rule for 3-, 4- and 5- digit numbers.
Hint 2:
Extending Hint 1: No 3-digit number can have more than one 7 in it. 7! is 720, so if you have two 7s, you have 1440, a 4-digit number. There are a few more opportunities like this to eliminate numbers to check
Hint 3:
Think of the number 145 given in the problem; since you know that this works, there's no need to check numbers that are a permutations of its digits: 154, 415, 451, 514, 541. The trick here is to look at the problem the other way around: instead of finding numbers whose Sum of Factorials of Digits equals the original number, find an ordered sequence of digits whose Sum of Factorials can be split into a sequence of digits, sorted, and that compares equal to the original sequence. | {
"domain": "codereview.stackexchange",
"id": 8821,
"tags": "python, optimization, programming-challenge"
} |
Do rosservice calls from rtt_ros respect orocos operation thread spec? | Question:
Say I have an orocos component that has an operation such as,
this->addOperation("do_work", &MyWorker::doWork, this, RTT::ClientThread).doc("Does work");
When called, this blocks the caller's thread and does not block the task context thread in OROCOS.
I'm unable to find anywhere in the documentation for rtt_ros_integration whether or not ROSServiceService calls respect the orocos operation spec on whether to block the client thread or the task context thread.
Originally posted by jlack on ROS Answers with karma: 78 on 2017-07-30
Post score: 1
Original comments
Comment by gvdhoorn on 2017-07-30:
This is really an OROCOS specific question. I doubt the OROCOS / rtt_ros maintainers frequent this board, so I don't expect an answer to your question soon. We can see what happens (perhaps jmeyer has an account here), but you might want to try the orocos mailing list or their issue tracker.
Comment by jlack on 2017-07-30:
Yeah figured i'd throw it on here and see if I found any takers. Thanks i'll try on the rtt_ros issue board as well.
Comment by gvdhoorn on 2017-07-31:
orocos/rtt_ros_integration#91.
Answer:
Finally got this working, and long story short the answer is yes, the rosservice functionality offered by rtt_ros does respect the thread spec given when adding an operation.
The rtt_roscomm documentation is quite lacking in functional examples and clear specifics, so I ended up having to write the code and get it working myself to answer the question.
For others looking to use rosservice call functionality provided by rtt_roscomm I recommend looking at the unit tests to bootstrap the process of getting it up and running, as currently that's the best place to look for figuring out how to get it to work.
Originally posted by jlack with karma: 78 on 2017-08-02
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 28482,
"tags": "ros, orocos"
} |
Could someone recommend a book for surveying species? | Question: I'm trying to get/renew basic knowledge of species. Could someone recommend a book for surveying "important"/"representational" species? I am looking for a book with good illustrations and that covers most "important"/"representational" species.
I am currently reading The Tree of Life: A Phylogenetic Classification, and it is thicker than I want to read as a first book in this kind.
Answer: Judging from your response to Gurav in the comments, it sounds like introductory zoology and plant biology texts would fit the bill.
For zoology, we teach from Hickman et al's Integrated Principles of Zoology. It outlines the major phyla, their defining characteristics, with plenty of specific examples scattered throughout. There are nice little problem sets throughout, and it goes into a solid amount of detail for a first or second year zoology course.
For plants, I've used Graham et al's Plant Biology, which takes a similar general approach. Though it's perhaps a bit broader, and less species-focused.
Both of these books outline the major relevant groups, and use 'representative' species to illustrate various biological points throughout. They might be a good place to start! | {
"domain": "biology.stackexchange",
"id": 1163,
"tags": "taxonomy, book-recommendation"
} |
Implementing a Yearmonth class | Question: Recently I wrote a program that was required to handle year and month data and so I wrote this class to encapsulate that handling. What I needed was a way to initialize the Yearmonth object based on the current local time, and allow a simple method of calculating future Yearmonth values based on a duration in months. This sample code illustrates how I use it:
ymtest.cpp
#include <iostream>
#include "Yearmonth.h"
int main()
{
YM::Yearmonth ym; // today
std::cout << ym << '\n';
ym += 2; // 2 months from now
std::cout << ym << '\n';
ym += 14; // test year increment
std::cout << ym << '\n';
}
Yearmonth.h
#ifndef YEARMONTH_H
#define YEARMONTH_H
#include <iostream>
namespace YM {
class Yearmonth
{
public:
// construct with today's year and month
Yearmonth();
// construct with year, month (1=Jan, 12=Dec)
Yearmonth(unsigned ayear, unsigned amonth);
// increment by given number of months
Yearmonth &operator+=(const unsigned mon);
// return year
unsigned year() const;
// return month (1=Jan, 12=Dec)
unsigned month() const;
// prints to ostream. E.g. 2014 Dec ==> "201412"
friend std::ostream& operator<<(std::ostream &out, const Yearmonth &ym);
private:
unsigned myyear;
unsigned mymonth;
};
}
#endif //YEARMONTH_H
Yearmonth.cpp
#include <ctime>
#include "Yearmonth.h"
namespace YM {
Yearmonth::Yearmonth()
{
time_t tt;
time(&tt);
tm *t = localtime(&tt);
myyear = t->tm_year + 1900;
mymonth = t->tm_mon;
}
Yearmonth::Yearmonth(unsigned ayear, unsigned amonth)
: myyear(ayear), mymonth(amonth-1)
{
myyear += mymonth/12;
mymonth %= 12;
}
unsigned Yearmonth::year() const
{
return myyear;
}
unsigned Yearmonth::month() const
{
return mymonth+1;
}
Yearmonth &Yearmonth::operator+=(const unsigned mon)
{
mymonth += mon;
myyear += mymonth/12;
mymonth %= 12;
return *this;
}
std::ostream& operator<<(std::ostream &out, const Yearmonth &ym)
{
return out << ym.myyear * 100 + ym.mymonth+1;
}
}
The class seems sufficient for my needs and everything works. Have I missed anything important?
Answer: I only have a few small remarks to make:
In your header file, don't include <iostream>: include <iosfwd> instead, which provides forward declarations for the standard stream types.
Also, you only use std::ostream in your source file, so you could simply include <ostream> there.
Several times, you compute mymonth / 12 and mymonth % 12. If your class is designed to be used intensively, you could consider using std::div(mymonth, 12) which will compute both values at once and may therefore be slightly faster (if you really need it).
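For what it's worth, the same quotient-and-remainder pairing exists elsewhere; a hypothetical Python sketch of the month normalization (separate from the reviewed class) shows the idea with divmod:

```python
def add_months(year, month, delta):
    # work on a 0-based month so divmod handles the year carry in one step
    carry, month0 = divmod(month - 1 + delta, 12)
    return year + carry, month0 + 1

print(add_months(2014, 12, 14))   # (2016, 2)
```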
You may want to prefix time_t, time, tm and localtime with std::. The fact that they come from the C standard library doesn't prevent you from using std::.
Using a const unsigned parameter seems pretty useless. I don't have strong opinions on the const on value parameters, but you could safely drop it since it adds little value. | {
"domain": "codereview.stackexchange",
"id": 13207,
"tags": "c++, c++11, datetime, c++14"
} |
What are the possible non-entangling two-qubit gates? | Question: The set of non-entangling gates in $ SU_4 $ contains the entire group of gates of the form
$$
SU_2 \otimes SU_2.
$$
It also contains
$$
\zeta_8 SWAP= \zeta_8 \begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
\end{bmatrix}
$$
where $ \zeta_8=e^{2\pi i/8}= e^{\pi i/4} $ is a primitive eighth root of unity.
Are there any other non-entangling two-qubit gates? A related (perhaps equivalent?) question is what is the normalizer of $ SU_2 \otimes SU_2 $ in $ SU_4 $? Does the normalizer
$$
N(SU_2 \otimes SU_2)
$$
just have two connected components (the component of the identity and the component of SWAP)? Does it have more connected components? Do these other components correspond to other non-entangling gates? Also, interesting to note that
$$
(\zeta_8 SWAP)^2=iI \not \in SU_2 \otimes SU_2
$$
is not in $ SU_2 \otimes SU_2 $ even though we wouldn't think of it as an entangling gate since it is just a global phase and moreover it is in $ U_2 \otimes U_2 $.
Answer: There are no other non-entangling gates in $SU(d^2)$ in any dimension $d=2,3,\dots$. Note that the global phase is irrelevant to the problem, so we lose no generality by considering non-entangling gates in $U(d^2)$ instead. We will prove that if $U\in U(d^2)$ is non-entangling then either $U\in U(d)\otimes U(d)$ or $\text{SWAP}\circ U\in U(d)\otimes U(d)$.
Preliminaries
If $A$ is a linear subspace of $\mathbb{C}^d$ and $|\psi\rangle\in\mathbb{C}^d$ a pure state, then let $|\psi\rangle\otimes A$ denote the set $\{|\psi\rangle\otimes|\phi\rangle:|\phi\rangle\in A\}$ which is a linear subspace of $\mathbb{C}^d\otimes\mathbb{C}^d$. Similarly, for $A\otimes|\phi\rangle$. We say that a linear space $B\subseteq\mathbb{C}^d\otimes\mathbb{C}^d$ is entanglement-free if every element of $B$ is (a scalar multiple of) a product state.
Lemma If a linear subspace $B\subseteq\mathbb{C}^d\otimes\mathbb{C}^d$ is entanglement-free, then either $B=A\otimes|\psi\rangle$ or $B=|\psi\rangle\otimes A$ for some state $|\psi\rangle\in\mathbb{C}^d$ and some linear subspace $A\subseteq\mathbb{C}^d$.
Proof. Assume otherwise. Then we can find $|a\rangle\otimes|b\rangle\in B$ and $|x\rangle\otimes|y\rangle\in B$ such that $|x\rangle$ is not a scalar multiple of $|a\rangle$ and $|y\rangle$ is not a scalar multiple of $|b\rangle$. However, then $|a\rangle\otimes|b\rangle + |x\rangle\otimes|y\rangle$ is entangled$^1$.$\square$
Two types of non-entangling gates
Now, suppose that $U\in U(d^2)$ is non-entangling. Then the image $U[B]$ of any entanglement-free subspace $B=A\otimes|\psi\rangle$ under $U$ is entanglement-free and the lemma above implies that either
$$
U[A\otimes|\psi\rangle] = A'\otimes|\psi'\rangle\tag{1}
$$
or
$$
U[A\otimes|\psi\rangle] = |\psi'\rangle\otimes A'\tag{2}
$$
for some subspace $A'$ of $\mathbb{C}^d$ and some state $|\psi'\rangle\in\mathbb{C}^d$. In the first case, $U\in U(d)\otimes U(d)$. In the latter case $\text{SWAP}\circ U\in U(d)\otimes U(d)$. Since $(1)$ and $(2)$ exhaust all possibilities, no other non-entangling gates exist.
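Both families can also be checked numerically to send product states to product states; here is a NumPy sketch (the Haar-style construction is just a convenient source of generic unitaries, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(d):
    # QR of a complex Gaussian matrix, with a phase fix, gives a generic unitary
    z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def random_state(d):
    v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
    return v / np.linalg.norm(v)

SWAP = np.eye(4)[[0, 2, 1, 3]]
gate = SWAP @ np.kron(random_unitary(2), random_unitary(2))

out = gate @ np.kron(random_state(2), random_state(2))
# a product state has Schmidt rank 1: exactly one nonzero singular value
s = np.linalg.svd(out.reshape(2, 2), compute_uv=False)
```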
Intuition
The argument above attempts to capture the intuitive observation that if we vary$^2$ the state of the first qudit in a product state that is fed into a two-qudit unitary gate then that variation affects either the first qudit, the second qudit or both qudits at the output. However, if the variation affects both qudits then they become entangled$^3$. Therefore, since the gate is non-entangling, the variation can only feed through either to the first qudit or to the second qudit. These two cases correspond to the two possibilities $(1)$ and $(2)$ above.
Normalizer
The normalizer $N:=N(U(d)\otimes U(d))$ of $U(d)\otimes U(d)$ in $U(d^2)$ does indeed have two connected components which correspond to the identity and the SWAP gate. First, note that every non-entangling gate belongs to $N$. Conversely$^4$, no entangling gate belongs to $N$.
Now, $N$ inherits its topology from $U(d^2)$ which inherits its topology from $\mathbb{C}^{d^4}$. Moreover, $N$ is closed, so connectedness and path-connectedness are equivalent in $N$. Thus, if $N$ was connected, then there would be a continuous path from a gate of the form $U_1\otimes V_1$ to a gate of the form $\text{SWAP}\circ(U_2\otimes V_2)$. However, this would mean that we can approximate the SWAP gate by product gates arbitrarily well, which is impossible. Therefore, $N$ has at least two connected components.
Finally, $U(d)$ is path-connected, so we can form a continuous path between any two gates. Taking the product of such paths, we see that any two gates of the form $U_1\otimes V_1$ live in the same connected component. Similarly for gates of the form $\text{SWAP}\circ(U_2\otimes V_2)$. Therefore, $N(U(d)\otimes U(d))$ has exactly two connected components.
$^1$ This can be proved rigorously by extending $\{|a\rangle\otimes|b\rangle\}$ to a basis and writing the coefficients of $|a\rangle\otimes|b\rangle + |x\rangle\otimes|y\rangle$ in that basis as a $d\times d$ matrix. Since $|x\rangle$ is not a scalar multiple of $|a\rangle$ and $|y\rangle$ is not a scalar multiple of $|b\rangle$, the matrix has at least two linearly independent rows and therefore $|a\rangle\otimes|b\rangle + |x\rangle\otimes|y\rangle$ is not a product state.
$^2$ For example, we could imagine varying the state of the first qudit with time as in $|\psi(t)\rangle\otimes|\phi\rangle$.
$^3$ More generally, the qudits could become correlated classically. However, this possibility is ruled out by unitarity. It would be relevant if we considered two-qudit quantum channels instead of two-qudit unitary gates.
$^4$ We can arrange for the conjugation of a product unitary with non-degenerate spectrum by an entangling unitary to result in an operator with entangled eigenstates. Such an operator is not a product unitary. | {
"domain": "quantumcomputing.stackexchange",
"id": 4197,
"tags": "quantum-gate, entanglement, mathematics"
} |
Explain these graphs of rotation and velocity of pucks on air hockey board | Question: I've been tasked to do a simple experiment on the elasticity of collisions. For this experiment I used two "pucks" (very light, hollow circular metal pieces of a certain height) and a table that works like an air hockey table (it decreases friction by blowing air from underneath). One puck was placed on this board and the other one was shot into it. Each puck had two reflective markers on it, one in the center and one on the edge. The positions of these markers were logged by two cameras emitting infrared light. I am now trying to understand this position data.
This is the data I have (I'm using Mathematica):
m1Vel = Differences /@ {m11x, m11y};
m2Vel = Differences /@ {m21x, m21y};
m12Vel = Differences /@ {m12x, m12y};
m22Vel = Differences /@ {m22x, m22y};
m1DeltaX = m11x - m12x;
m1DeltaY = m11y - m12y;
m2DeltaX = m21x - m22x;
m2DeltaY = m21y - m22y;
angularVel[dy_, dx_] := Differences@ArcTan[dy/dx]
vectorNorm2[list_] := Sqrt[list[[1]]^2 + list[[2]]^2];
Using this to plot position data for the pucks M1 and M2:
ListLinePlot[{m11x, m11y, m21x, m21y},
PlotLegend -> {"M1 X", "M1 Y", "M2 X", "M2 Y"}, LegendSize -> 0.5,
LegendPosition -> {1.1, 0}]
And then approximate the velocity for each puck, for the marker in the center:
ListLinePlot[{vectorNorm2[m1Vel], vectorNorm2[m2Vel],
vectorNorm2[m1Vel] + vectorNorm2[m2Vel]}, PlotRange -> Full,
PlotLegend -> {"M1 v", "M2 v", "M1+M2 v"}, LegendSize -> 0.5,
LegendPosition -> {1.1, 0}]
And for the marker on the edge:
And finally the rotation of each puck, using the approximation that the angle from a horizontal line is $\arctan(\frac{\Delta y}{\Delta x})$ and that the angular velocity is therefore the difference between the angle at one point and the angle at the next point, as seen in the function angularVel above.
ListLinePlot[{MovingAverage[angularVel[m1DeltaY, m1DeltaX], 10]^2,
MovingAverage[angularVel[m2DeltaY, m2DeltaX], 10]^2},
PlotRange -> Full, PlotLegend -> {"M1 w", "M2 w"}, LegendSize -> 0.5,
LegendPosition -> {1.1, 0}]
Alright, so what's the matter?
I was expecting both of the velocity graphs to look like the first. Since the kinetic energy is proportional to the velocity squared, it is unacceptable to me that it goes up and down in the second velocity graph. It should decrease in a monotonic manner. The first collision is with the other puck, but there are collisions after that with the walls.
The rotational energy is proportional to the angular velocity squared. I get that if in a collision with a wall some energy is transferred from translation to rotation, the rotational energy does not decrease in a monotonic manner, but these really sharp peaks (even sharper without the moving average) I cannot understand.
Since I'm studying the elasticity I really need only to understand what happens in the first collision. But I feel like a fraud if I write something up about that, neglecting the rest of the graph with all of its peculiarities. If you had to explain these things in a report, what would you write?
Answer: The general expression for calculating kinetic energy is
$$KE = \frac{m v^2}{2} + \frac{I \omega^2}{2}$$
However, $v$ means the velocity of the center of mass and $\omega$ is rotational velocity around the center of mass. $I$ is moment of inertia about center of mass.
You cannot do the above expression just for arbitrary point of the body.
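As a sketch of how the two terms can be extracted from the marker data (Python with placeholder names; the frame interval, mass and moment of inertia below are assumptions, and the two-argument arctangent plus unwrapping avoids the quadrant jumps that a plain arctan(dy/dx) can introduce):

```python
import numpy as np

def kinetic_energy(cx, cy, ex, ey, m, I, dt):
    # translational speed from the CENTER marker (the center of mass)
    v = np.hypot(np.diff(cx), np.diff(cy)) / dt
    # rotation from the center-to-edge offset; unwrap removes 2*pi jumps
    theta = np.unwrap(np.arctan2(ey - cy, ex - cx))
    w = np.diff(theta) / dt
    return 0.5 * m * v**2 + 0.5 * I * w**2

# synthetic check: a puck gliding at 1 m/s while spinning at 2 rad/s
dt = 0.01
t = np.arange(0.0, 1.0, dt)
cx, cy = t, np.zeros_like(t)
ke = kinetic_energy(cx, cy, cx + np.cos(2 * t), np.sin(2 * t), 0.05, 2e-5, dt)
```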
As for the second question and pulses of rotational movement: I think during the collisions both pucks roll against each other or against the wall for a very small fraction of time. When rolling you have static friction forces, which have great and temporal effects on speed, rotational speed and their relation. | {
"domain": "physics.stackexchange",
"id": 2831,
"tags": "homework-and-exercises, newtonian-mechanics, experimental-physics, collision"
} |
How to load two separate nodelets in the nodelet manager | Question:
Hi, I have two nodelets in separate files (one publishes sensor_msgs::Image and the other subscribes to it) and want to load them in the nodelet manager. ROS does provide a method here but I fail to understand the details of it. Thanks in advance.
Originally posted by surabhi96 on ROS Answers with karma: 41 on 2018-09-24
Post score: 0
Answer:
To load your nodelets you must have loaded a nodelet manager first.
Using command lines :
Load the nodelet_manager (you can put the name you want but make sure all your nodelets use the same manager)
rosrun nodelet nodelet manager __name:=nodelet_manager
Then you can load your nodelets (again replace nodelet_manager with the name you chose previously), if you have two nodelets you have to run :
rosrun nodelet nodelet load PACKAGE_OF_YOUR_NODELET/NODELET1 nodelet_manager
rosrun nodelet nodelet load PACKAGE_OF_YOUR_NODELET/NODELET2 nodelet_manager
Using launch file :
< node pkg="nodelet" type="nodelet" name="nodelet_manager" args="manager"/>
< node pkg="nodelet" type="nodelet" name="NODELET1"
args="load PACKAGE_OF_YOUR_NODELET/NODELET1 nodelet_manager">
< /node>
< node pkg="nodelet" type="nodelet" name="NODELET2"
args="load PACKAGE_OF_YOUR_NODELET/NODELET2 nodelet_manager">
< /node>
You can also set a parameter or remap some topics as in the tutorial.
I personally prefer to use launch files.
But always remember that the nodelet manager is mandatory here: if by mistake you load your nodelets into two different nodelet managers, you will lose the whole point of nodelets (zero-copy message passing within one process) and they will behave just like ordinary nodes.
Originally posted by Delb with karma: 3907 on 2018-09-24
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Delb on 2018-09-25:
I forgot to add this useful command :
rosrun nodelet declared_nodelets
This will list all the available nodelets on your system (from wiki section 2) | {
"domain": "robotics.stackexchange",
"id": 31815,
"tags": "ros, ros-kinetic, nodelet, nodelet-manager"
} |
What does par. lines mean in relation to a telescope aperture in 19th century German astronomical publications? | Question: I am trying to understand what is meant by par. lines in an 1867 article "The Aberration of the fixed stars after the wave theory" by Prof. W. Klinkerfues of Royal Astronomy Works of Goettingen. (LEIPZIG, VERLAG VON QUANDT & HÄNDEL., 1867)
The passage is on page 57 and reads "The latter, with a telescope of 21 par. Lines aperture and 18 inches focal length, was at 50 times Magnification suitable to give the state of the clock to a small fraction of a second".
In the original German version it is "Letzteres, mit einem Fernrohre von 21 Par. Linien Oeffnung und 18 Zoll Brennweite, war bei 50 maliger Vergroesserung geeignet ...". I do not know German language and apologise for any typing errors.
It appears that "inch" was still being used in Goettingen around this time, before they switched to the metric system in the late 19th century. However, I am not sure how closely this "inch" relates to the current standard inch.
Any help is appreciated in figuring out the aperture of this telescope.
Any help in getting a fuller description of this telescope, used by Prof. Klinkerfues to determine whether there is any change in stellar aberration if a liquid is inserted between the objective and eye-piece, is also appreciated.
Kind regards,
Joseph
Answer: I think it means a Paris line, or ligne.
In Klinkerfues (1867) it seems the author uses Paris inches, which was a common unit in particular for lenses. One Paris inch is equal to 1.0657 "modern" inches, or 2.7069 cm.
Like the modern inch, 12 Paris inches equal 1 Paris foot, while 1⁄12 of a Paris inch was called a ligne, equal to 2.2558 mm (apparently the modern inch can also be divided into 12 lines, although the exact definition varies).
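The arithmetic, scripted (assuming, as above, that the focal-length "Zoll" are Paris inches as well):

```python
PARIS_INCH_CM = 2.7069           # 1 Paris inch (pouce) in cm
LIGNE_CM = PARIS_INCH_CM / 12    # 1 ligne = 1/12 Paris inch, ~2.2558 mm

aperture_cm = 21 * LIGNE_CM      # "21 Par. Linien" aperture
focal_cm = 18 * PARIS_INCH_CM    # "18 Zoll" focal length
print(f"aperture ~ {aperture_cm:.2f} cm, focal length ~ {focal_cm:.1f} cm")
```

That makes it roughly an f/10 instrument, which is plausible for a small 19th-century refractor.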
So, "21 Par. Linien" would be equal to 4.74 cm. | {
"domain": "astronomy.stackexchange",
"id": 6192,
"tags": "observational-astronomy, telescope, telescope-lens, units"
} |
Why don't the Clebsch–Gordan coefficients have a recursion relation for $J$? | Question: The Clebsch–Gordan coefficients obey a recursion relation, Sakurai Eq. 3.8.45:
$$J_\pm|j_1j_2; jm\rangle =(j_{1\pm}+j_{2\pm}) \sum_{m_1} \sum_{m_2} |j_1j_2;m_1m_2\rangle\langle j_1j_2;m_1m_2| j_1j_2;jm\rangle$$
Thus if one started from, say, $|j_1=1,j_2=1;J=2,M=2\rangle$, one could obtain the representations $|j_1 j_2; J=2, M\rangle$ for $M=2,1,0,-1,-2$.
However, why isn't there a recursion relation for $J$? I.e., although one could use the orthogonality relations to calculate $|J=1,M=1\rangle$, why isn't there a recursion relation in $J$ like the one for $M$?
Answer: They do. Obviously they are not generated by the action of $J_\pm$ since this action cannot change the $J$ quantum number but if my typesetting is right two of them are
\begin{align}
&\sqrt{\frac{2c(-a+b+c)(a+b+c+1)}{2c+1}}C^{c\gamma}_{a\alpha b\beta}\\
&=\sqrt{(b-\beta)(c-\gamma)}C^{c-1/2,\gamma-1/2}_{a\alpha,b\beta-1/2}
+\sqrt{(b+\beta)(c+\gamma)}C^{c-1/2,\gamma-1/2}_{a\alpha-1/2,b\beta}
\end{align}
and
\begin{align}
&\sqrt{\frac{(-a+b+c)(a-b+c)(a+b-c+1)(a+b+c+1)(2c-1)}{2c+1}}
C^{c\gamma}_{a\alpha,b\beta}\nonumber \\
&\quad =\sqrt{(b+\beta)(b-\beta+1)(c+\gamma)(c+\gamma-1)}C^{c-1,\gamma-1}_{a\alpha,b\beta-1}-2\beta\sqrt{c^2-\gamma^2}C^{c-1,\gamma}_{a\alpha,b\beta}\\
&\qquad -\sqrt{(b-\beta)(b+\beta+1)(c-\gamma)(c-\gamma-1)}C^{c-1,\gamma+1}_{a\alpha,b\beta+1}
\end{align}
and more are given in
Varshalovich, D. A., Moskalev, A. N., & Khersonskii, V. K. M. (1988). Quantum theory of angular momentum.
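The second (integer-step, $c \to c-1$) relation above can be spot-checked numerically. The sketch below is self-contained: the coefficient routine is my own implementation of Racah's standard closed formula, not code from any of the cited references:

```python
from math import factorial, sqrt

def _fact(x):
    # half-integer inputs always combine to whole numbers here
    return factorial(round(x))

def cg(j1, m1, j2, m2, j3, m3):
    """Clebsch-Gordan <j1 m1; j2 m2 | j3 m3> via Racah's closed formula.
    Returns 0 for unphysical quantum numbers."""
    if (abs(m1) > j1 or abs(m2) > j2 or abs(m3) > j3 or m1 + m2 != m3
            or j3 < abs(j1 - j2) or j3 > j1 + j2):
        return 0.0
    pref = (2 * j3 + 1) * _fact(j1 + j2 - j3) * _fact(j1 - j2 + j3) \
        * _fact(-j1 + j2 + j3) / _fact(j1 + j2 + j3 + 1)
    pref *= (_fact(j1 + m1) * _fact(j1 - m1) * _fact(j2 + m2)
             * _fact(j2 - m2) * _fact(j3 + m3) * _fact(j3 - m3))
    total, k = 0.0, 0
    while k <= j1 + j2 - j3:
        args = (j1 + j2 - j3 - k, j1 - m1 - k, j2 + m2 - k,
                j3 - j2 + m1 + k, j3 - j1 - m2 + k)
        if all(a >= 0 for a in args):  # skip k where a factorial argument goes negative
            term = 1.0 / _fact(k)
            for a in args:
                term /= _fact(a)
            total += (-1) ** k * term
        k += 1
    return sqrt(pref) * total

def residual(a, al, b, be, c, ga):
    """LHS minus RHS of the second (c -> c-1) recursion quoted above."""
    lhs = sqrt((-a + b + c) * (a - b + c) * (a + b - c + 1)
               * (a + b + c + 1) * (2 * c - 1) / (2 * c + 1)) * cg(a, al, b, be, c, ga)
    rhs = (sqrt((b + be) * (b - be + 1) * (c + ga) * (c + ga - 1)) * cg(a, al, b, be - 1, c - 1, ga - 1)
           - 2 * be * sqrt(c * c - ga * ga) * cg(a, al, b, be, c - 1, ga)
           - sqrt((b - be) * (b + be + 1) * (c - ga) * (c - ga - 1)) * cg(a, al, b, be + 1, c - 1, ga + 1))
    return lhs - rhs

print(residual(1, 1, 1, 0, 2, 1))            # ~ 0
print(residual(0.5, 0.5, 0.5, -0.5, 1, 0))   # ~ 0
```

Both residuals come out as zero to machine precision for these test cases.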
I believe these are consequences of recursion relations of Regge symbols. See also along those lines
Bincer, A. M. (1970). Interpretation of the Symmetry of the Clebsch‐Gordan Coefficients Discovered by Regge. Journal of Mathematical Physics, 11(6), 1835-1844.
Another paper
Smorodinskiĭ, Y. A., & Shelepin, L. A. (1972). Clebsch-Gordan coefficients, viewed from different sides. Soviet Physics Uspekhi, 15(1), 1.
gives a completely cool approach to obtaining some unexpected relations which can be transformed into recursion relations. | {
"domain": "physics.stackexchange",
"id": 67386,
"tags": "quantum-mechanics, hilbert-space, angular-momentum, representation-theory"
} |
How do I debug this code? | Question: I am trying to run the CMSclassifier::classifyCMS function on my data but I am getting this error
library(CMSclassifier)
> Rfcms <- CMSclassifier::classifyCMS(my_data,method="RF")[[3]]
Error in match.names(clabs, names(xi)) :
names do not match previous names
This code classifies gene expression data
Code
But on example data code works
I have attached my data and example data here
Could somebody please help me in solving this error?
My data
Example data
Answer: The problem was that I should have used Entrez gene IDs rather than gene symbols.
"domain": "bioinformatics.stackexchange",
"id": 938,
"tags": "r, rna-seq, networks, modelling"
} |
Trouble with Simple Average Velocity | Question: I have a super simple question on average velocity that either I am not setting up correctly, or that is itself graded incorrectly in a huge online system used by thousands of students for a very long time.
I seriously doubt the latter could go uncaught.
Here's the question:
A car travels along a straight line at a constant speed of 40.0 mi/h for a distance d and then another distance d in the same direction at another constant speed. The average velocity for the entire trip is 31.5 mi/h.
What is the constant speed with which the car moved during the second distance d?
And here's my work:
V0 = 40.0 mph
Δx0 = d
V1 = ?
Δx1 = d
Vavg = 31.5 mph = ( 40.0 mph + V1 ) / 2
63 mph = 40.0 mph + V1
63 mph - 40.0 mph = V1
23 mph = V1
I've done this calculation a few times, used multiple sources to verify Vavg = ( Vf + Vi ) / 2 for constant acceleration, and even tried a few variations of the problem using different values from my book.
Still no dice; the computer always marks my answers as "off by less than 10%".
What am I doing wrong?
Answer: The average velocity is given by
$$
\bar v=\frac{1}{T}\int_0^T v(t)\mathrm dt=\frac{1}{T}(v_1t_1+v_2t_2)
$$
where $t_1$ is the time spent on the first interval, $t_2$ is the time spent on the second one, and $T=t_1+t_2$.
Using
$$
v_1t_1=v_2t_2=d
$$
you get
$$
\bar v=2\frac{v_1v_2}{v_1+v_2}
$$
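As a quick numeric sanity check (a sketch, not part of the original answer), inverting this harmonic-mean relation for $v_2$ with the question's numbers:

```python
v1, v_avg = 40.0, 31.5                 # mi/h, from the question
v2 = v_avg * v1 / (2 * v1 - v_avg)     # invert v_avg = 2*v1*v2 / (v1 + v2)
print(round(v2, 2))                    # 25.98 mi/h
assert abs(2 * v1 * v2 / (v1 + v2) - v_avg) < 1e-9   # consistency check
```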
I believe you can take it from here. | {
"domain": "physics.stackexchange",
"id": 27774,
"tags": "homework-and-exercises, acceleration, velocity"
} |
How much larger must the ID of a cylinder be than the OD for a Slip Fit? Material: Delrin (Acetal) | Question: I am prototyping a nested waveguide made of four cylindrical sections. Each piece is turned on a lathe from Delrin (acetal) rod. Each piece fits inside the next larger one.
There are no moving parts. It is an antenna.
My question is: how much larger must the ID of a cylinder be, than the OD of the one which fits inside it?
In my drawing, I have indicated that they be 10 thousandths of an inch larger, but I do not know if this is correct.
I would like to be able to disassemble the cylinders during experiment, to take measurements, and then put them back together.
Each cylinder is to be electroformed with nickel on the outside, so the thickness of the metal layer will also be taken into account.
Here is an updated drawing with tolerances added and revised diameters.
Answer: 0.01" is probably a decent place to start as a slip fit, though you can certainly go tighter. I might even question if you actually want a slip fit for this application. You say it's for an antenna, and if you want it to extend and stay extended on its own like other collapsible antennae, a slip fit probably won't accomplish that. You'll need something that creates some friction but not so much that it can't be overcome by a person pulling on it.
In any case, what's potentially more important than the actual dimension is the tolerance. Those nominal dimensions will not be what you actually get, especially when you call out things down to the thousandth of an inch. For a slip fit, how I would actually dimension it is to make the nominal dimensions the same for the inner and outer, and then create a minus tolerance (e.g. +0/-0.005") on the OD and a plus tolerance (e.g. +0.005"/-0) on the ID, so the parts can only deviate away from each other. Using 5 thousandths on each end will get you a 10 thousandths slip at most.
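The worst-case arithmetic here is a simple stack-up. A sketch, assuming a shared nominal diameter with the shaft OD toleranced +0/-0.005" and the bore ID +0.005"/-0, so material can only move away from the fit:

```python
def clearance_band(od_tol_minus, id_tol_plus):
    """Diametral clearance range for a same-nominal slip fit."""
    tightest = 0.0                        # both parts dead on nominal: line-to-line
    loosest = od_tol_minus + id_tol_plus  # smallest shaft inside the biggest bore
    return tightest, loosest

print(clearance_band(0.005, 0.005))       # (0.0, 0.01): ten thousandths of slip at most
```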
However, you need to know that the place you're sourcing these parts from can match that tolerance. 0.005" isn't that tough for a good machine shop, though I'm not sure how workable that plastic is, I'm more used to dealing with steels and cast irons. | {
"domain": "engineering.stackexchange",
"id": 638,
"tags": "mechanical-engineering, machining"
} |
Cardiac cycle and atrial contraction | Question:
During atrial contraction ("a" in the figure), why does the ventricular pressure match the atrial pressure? The ventricular pressure generally stays the same throughout passive filling until it reaches the point where atrial contraction occurs. Why is there a sudden change in the ventricle pressure during atrial contraction? I can understand the atrial pressure would increase when the atria contract, but the ventricles have not contracted yet so the ventricle pressure shouldn't increase. The ventricle pressure can increase greatly when the volume exceeds a certain value and the elastic tissue of the heart cannot stretch anymore. However, the volume here is only about 110ml, so it has not reached that point. I can only think of one explanation:
The pressure is equilibrated between two sides of an open valve.
However, this doesn't explain why the atrial pressure is slightly greater than ventricle pressure during filling; it also doesn't explain why the aortic pressure is slightly greater than the ventricle pressure near the end of ejection.
Answer: While blood is flowing into the ventricle, it can never be at a higher pressure than where blood is flowing from: if it was, the flow would be going in the other direction. Flow is always from higher to lower pressure, if there is no pressure difference there is no flow.
Before atrial contraction, the ventricle can have no more pressure than the uncontracted atrium, which in turn can have no more pressure than the veins (vena cava or pulmonary depending on which side of the heart we are talking about). When the atrium contracts, it increases the pressure in the atrium, which causes a flow of blood into the ventricle. Whenever a fluid flows, there is a pressure drop that depends on the resistance (very much like voltage in an electrical circuit). However, the AV valve is fairly big and open, so there is little pressure drop from the atrium to ventricle.
The pressure is equilibrated between two sides of an open valve.
...is mostly true if the valve is large enough. There is still some pressure drop, though: if there wasn't, there wouldn't be flow.
There is no need for the ventricle to be at maximum capacity for pressure to increase. Imagine if you squeeze one side of a balloon: the pressure in the balloon increases, which you can tell because the balloon stretches and expands in the areas you are not squeezing; it need not be at the maximum capacity of the balloon for this to happen. Same for the heart.
As far as the higher pressure in the aorta, I think this diagram slightly exaggerates where the pressure difference starts, but as the ventricle relaxes there is a brief time where you get a small backward flow, because the relaxing ventricle ends up having less pressure than the proximal aorta. This pressure drop is what closes the aortic valve (or the pulmonary valve, same process).
"domain": "biology.stackexchange",
"id": 9755,
"tags": "cardiology, blood-circulation, blood-pressure, heart-output"
} |
Denoising effect in GnuRadio OFDM Serializer block | Question: Why does the OFDM serializer have such a strong denoising effect in the flow graph below? Is that normal?
The upper constellation is AFTER the OFDM serializer block and the lower is BEFORE the OFDM serializer block.
Here is the OFDM documentation : http://gnuradio.org/doc/doxygen/page_ofdm.html
It does not help to explain why the serializer block has such a strong denoising effect, any idea why this denoising happens ?
Here is the flowgraph (in .grc format): http://pastebin.com/raw/PTY0Q0Ty
Further inspection indicates that the serializer block only removes non-data carriers. It just probably so happens that anything that is non-data is super noisy, and the data carriers are not noisy at all, but I still wonder how this is possible.
Answer:
Further inspection indicates that the serializer block only removes non-data carriers. It just probably so happens that anything that is non-data is super noisy, and the data carriers are not noisy at all, but I still wonder how this is possible.
The magic that happens here is in the actual equalizer used in the frame equalizer block. If you'd scroll up in the GRC¹, you'd probably see the payload equalizer object block, holding a "simpledfe" equalizer.
Now, simpledfe stands for simple data feed-back equalizer. It's actually pretty well-documented, but somehow the documentation tool broke and the HTML documentation doesn't actually contain any of the explanation given in the source code.
So, here's a source code excerpt with the docs:
/* \brief Simple decision feedback equalizer for OFDM.
* \ingroup ofdm_blk
* \ingroup equalizers_blk
*
* \details
* Equalizes an OFDM signal symbol by symbol using knowledge of the
* complex modulations symbols.
* For every symbol, the following steps are performed:
* - On every sub-carrier, decode the modulation symbol
* - Use the difference between the decoded symbol and the received symbol
* to update the channel state on this carrier
* - Whenever a pilot symbol is found, it uses the known pilot symbol to
* update the channel state.
*
* This equalizer makes a lot of assumptions:
* - The initial channel state is good enough to decode the first
* symbol without error (unless the first symbol only consists of pilot
* tones)
* - The channel changes only very slowly, such that the channel state
* from one symbol is enough to decode the next
* - SNR low enough that equalization will always suffice to correctly
* decode a symbol
* If these assumptions are not met, the most common error is that the
* channel state is estimated incorrectly during equalization; after that,
* all subsequent symbols will be completely wrong.
*
* Note that the equalized symbols are *exact points* on the constellation.
* This means soft information of the modulation symbols is lost after the
* equalization, which is suboptimal for channel codes that use soft decision.
*
*/
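As a toy illustration of that decide-then-update loop (my own simplified Python, not the GNU Radio implementation; `alpha` is a made-up smoothing factor):

```python
import cmath
import random

QPSK = [cmath.exp(1j * cmath.pi * (2 * k + 1) / 4) for k in range(4)]  # unit-energy QPSK

def slice_qpsk(x):
    return min(QPSK, key=lambda p: abs(x - p))   # hard decision: nearest constellation point

def simpledfe_like(rx_frames, h_est, alpha=0.1):
    """Per-carrier decision feedback: equalize, decide, then refresh the channel estimate."""
    decided = []
    for frame in rx_frames:
        out = []
        for k, r in enumerate(frame):
            d = slice_qpsk(r / h_est[k])                         # equalize and decide
            h_est[k] = alpha * h_est[k] + (1 - alpha) * (r / d)  # feed the decision back
            out.append(d)
        decided.append(out)
    return decided

random.seed(0)
h_true = [0.8 * cmath.exp(0.3j)] * 4                             # constant 4-carrier channel
tx = [[random.choice(QPSK) for _ in range(4)] for _ in range(20)]
rx = [[h_true[k] * s for k, s in enumerate(frame)] for frame in tx]
dec = simpledfe_like(rx, [0.9 * h for h in h_true])              # start with a 10%-off estimate
print(dec == tx)                                                 # decisions track the data
```

With a constant channel and a roughly correct initial estimate, every decision is exact, so the equalized points collapse onto the ideal constellation, which is the "denoising" seen in the upper plot.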
Hm, but what about unoccupied carriers?
Now, looking at the implementation in ofdm_equalizer_simpledfe we see:
void
ofdm_equalizer_simpledfe::equalize(gr_complex *frame,
int n_sym,
const std::vector<gr_complex> &initial_taps,
const std::vector<tag_t> &tags)
{
[…]
gr_complex sym_eq, sym_est;
for (int i = 0; i < n_sym; i++) {
for (int k = 0; k < d_fft_len; k++) {
if (!d_occupied_carriers[k]) {
continue;
}
[…]
In other words: those are left untouched from the original FFT.
Now, a typical direct conversion OFDM system will leave the DC carrier unoccupied – that containing the leakage of the receiver LO.
That is typically a relatively powerful FFT bin, shouldn't vary much in magnitude, and its phase should depend on the point in time the DFT was taken, and that's the frame start time as determined by the Schmidl&Cox sync. That would be my explanation of the $|\cdot|\approx 50$ constellation points, and the rest might very well be the unequalized noise in original amplitude, especially since we're looking at a constellation sink that shows 1024 constellation points at once – so we'll see some outliers in noise power.
¹ It's often impossible to fit a whole GRC flow graph on the screen. It's therefore really handy that there's the "save screen capture button" in the toolbar and the menu! | {
"domain": "dsp.stackexchange",
"id": 3815,
"tags": "demodulation, ofdm, gnuradio"
} |
How is it not possible to say whether the Titius–Bode law is a "coincidence" or not? | Question: From Wikipedia:
No solid theoretical explanation underlies the Titius–Bode law – but it is possible that, given a combination of orbital resonance and shortage of degrees of freedom, any stable planetary system has a high probability of satisfying a Titius–Bode-type relationship. Since it may be a mathematical coincidence rather than a "law of nature," it is sometimes referred to as a rule instead of "law."[18] On the one hand, astrophysicist Alan Boss states that it is just a coincidence, and the planetary science journal Icarus no longer accepts papers attempting to provide improved versions of the "law."
I'm rather baffled by the way the theoretical explanation is presented, and by how the question is not yet settled. Don't we have enough planets in our Galaxy to rule out this being a mathematical coincidence, or actually to confirm the law? I've seen this related question here, but I feel it does not quite address my issue. My question is not whether Titius–Bode works or not, but rather why this question is still open in the 21st century.
Answer: How it is possible is illustrated by the counter question - how would you define whether it is a coincidence or not? How accurately do the planets in a system have to follow the TB relation to decide that they are in fact following it? It is an ill-posed question and depending on how you pose it you could get different answers.
In order to address the problem using exoplanetary systems you have to have a big sample of stars that have multiple ($3+$) planets. There are nowhere near as many of these as there are stars where 1-2 planets have been detected. That is because the methods of planet detection - particularly transiting planets - are most sensitive to close-in planets and may easily miss other planets in the system that are not quite in the same plane as the transiting planet(s). A further complication is that stars where multiple exoplanets are seen may not be like our own, because their ecliptic planes are much flatter than that of the Solar System.
There are about 230 systems of $3+$ detected planets that can be looked at presently (Mousavi-Sadr 2021). Those authors conclude that about half these systems follow a logarithmically spaced period distribution better than our own Solar System does (and half don't). But note that these period distributions have two free parameters for each system, they are not fixed at the values appropriate for our Solar System.
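For concreteness, here is how the classic form of the relation, $a_n = 0.4 + 0.3\cdot 2^n$ AU, fares against the Solar System itself (a small sketch; semi-major axes are standard textbook values):

```python
actual_au = {"Mercury": 0.387, "Venus": 0.723, "Earth": 1.000, "Mars": 1.524,
             "Ceres": 2.77, "Jupiter": 5.203, "Saturn": 9.537,
             "Uranus": 19.19, "Neptune": 30.07}
predicted = {"Mercury": 0.4}  # Mercury is the special n -> -infinity case
for n, name in enumerate(["Venus", "Earth", "Mars", "Ceres",
                          "Jupiter", "Saturn", "Uranus", "Neptune"]):
    predicted[name] = 0.4 + 0.3 * 2 ** n
for name, a in actual_au.items():
    off = abs(predicted[name] - a) / a
    print(f"{name:8s} actual {a:6.2f} AU  TB {predicted[name]:6.2f} AU  off by {off:5.1%}")
```

The fit is striking out to Uranus and then fails by roughly 30% at Neptune, which illustrates how much hinges on deciding how accurately a system must follow the relation.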
Some authors have advocated searching for new planets using the predictions of TB-like relationships deduced from planets already detected (e.g., Bovaird et al. 2015). But where these predictions have been tested, the success rate is very low and it is unclear whether that is because the predictions are bad or just because there are plenty of other explanations to do with the potential size of the new planet or its orbital inclination that would lead to them not being detected. | {
"domain": "astronomy.stackexchange",
"id": 6383,
"tags": "solar-system, exoplanet"
} |
How does an accuracy specified in vol% translate to ppm? | Question: I need to set up some CO2 concentration measurement. The datasheet of the sensor is:
Range: 0 to 25 vol%
Accuracy < 0.5 vol% + 3% of measured value
vol% is the percentage by volume. So the range is: 0 ppm to 250,000 ppm.
Then what is the right reading for accuracy, including an example for the measurement of 400 ppm of CO2 in standard air, let's call it uca for use case accuracy?
0.5 vol% is equal to 5,000 ppm ==> uca = 5,000 ppm + 3% of 400 ppm = 5012 ppm
0.5 vol% is relative to range ==> uca = 0.5% of 250,000 ppm + 3% of 400 ppm = 1262 ppm
0.5 vol% is relative to the actual concentration ==> uca = 0.5% of 400 ppm + 3% of 400 ppm = 14 ppm
something else?
Answer: The volume percentage is a ratio of two volumes. Hence, it is a dimensionless number. The same is true for ppm. Thus, you transform according to
$$1 vol\% = 0.01 = 0.01 \cdot 10^6 ppm = 10^4 ppm$$
as you did. The interpretation is that if you divide the volume in one million parts and select ten thousand of these parts, you selected $1\%$ of all parts. In the following, we will treat vol% as a proper unit, just like $m$ or $kg$.
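Scripted, the conversion is trivial (a sketch):

```python
def vol_pct_to_ppm(v):
    """1 vol% = 10^4 ppm: both are dimensionless volume ratios."""
    return v * 1e4

print(vol_pct_to_ppm(0.5))   # fixed accuracy term: 5000.0 ppm
print(vol_pct_to_ppm(25.0))  # full scale: 250000.0 ppm
```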
Suppose we have a scale and the data sheets says:
Range: 0 to 25 kg
Accuracy < 0.5 kg + 3% of measured value
and we measure the value 4kg. The accuracy of this measurement is $0.5kg + 3\% \cdot 4kg$.
Let's transform this logic to your problem: The accuracy is
$$0.5 vol\% + 3\%\cdot 400ppm
= 0.5 \cdot 10^4 ppm + 3\%\cdot 400ppm = 5012ppm
$$ | {
"domain": "physics.stackexchange",
"id": 78539,
"tags": "measurements, units, error-analysis"
} |
What is the first moment of area of a rounded rectangle? | Question: I need to calculate the plastic section modulus of a rectangular section with rounded corners. First I need to know the formula for the first moment of area of a quadrant. I can't find it anywhere on the internet. Does anyone know?
Answer: I don't have a simple formula but this is the way I would handle it. Assuming the four corners are circular with equal radius r, so that symmetry exists, the first moment of area of one half of the section should be:
$$S_{half} = S_{half\,rect}-2S_{corner}$$
where
$S_{half\,rect} = b(h/2)(h/4) = \frac{bh^2}{8} $ the 1st moment of
area of the half rectangle (with intact corners) around neutral axis and
$S_{corner}$ the 1st moment of area of the removed material from each
corner, around neutral axis.
How to find $S_{corner}$
We have to find its area and the distance of its centroid from the section neutral axis. It helps to consider the removed corner as a quarter circle subtracted from an r x r square. The area is then simply found by subtracting a circular quarter from the small r x r square (see figure below):
$$A_{corner} = r^2 - \frac{πr^2}{4} $$
The centroid of the removed corner area, relative to the top edge, is similarly found considering the centroid of the rxr square (red dot) which is located $y=r/2$ below top edge and the centroid of the quarter circle (blue dot) which is located $y=r-\frac{4r}{3\pi}$ from the top edge. Combining them both, we get the distance of centroid of the removed corner from the top edge:
$$ y_{corner} = \frac{1}{A_{corner}}\left(r^2\cdot\frac{r}{2} - \frac{\pi r^2}{4} \left(r-\frac{4r}{3\pi}\right)\right) $$
Having found the above, the first moment of area of one removed corner around the section neutral axis is:
$$ S_{corner} = A_{corner}\left(\frac{h}{2} - y_{corner}\right)$$
Finally
Substituting $S_{corner}$ into the 1st formula, we get the 1st moment of area of the half section $S_{half}$. Then, the plastic modulus of the total section, taking advantage of symmetry, is:
$$Z=2S_{half}$$
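The whole chain is easy to script as a check (a sketch of the formulas above; the $r \to 0$ limit must recover the sharp-cornered rectangle, $Z = bh^2/4$):

```python
from math import pi

def plastic_modulus(b, h, r):
    """Plastic section modulus of a b x h rectangle with four corner radii r."""
    a_corner = r ** 2 * (1 - pi / 4)                  # material removed per corner
    if a_corner > 0:
        y_corner = (r ** 2 * (r / 2)
                    - (pi * r ** 2 / 4) * (r - 4 * r / (3 * pi))) / a_corner
    else:
        y_corner = 0.0                                # sharp corners: nothing removed
    s_corner = a_corner * (h / 2 - y_corner)          # 1st moment about the neutral axis
    s_half = b * h ** 2 / 8 - 2 * s_corner
    return 2 * s_half

print(plastic_modulus(100, 200, 0))    # 1000000.0, i.e. b*h^2/4
print(plastic_modulus(100, 200, 10))   # slightly less once the corners are relieved
```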
For verification of the result a rounded rectangle calculator may prove handy. | {
"domain": "engineering.stackexchange",
"id": 1263,
"tags": "civil-engineering, mathematics"
} |
Understanding rocket equations | Question: I'm doing a science project where I have to explain the physics behind a trip to an exoplanet and I also have to explain about rocket equations while doing so. I know that a trip to an exoplanet is pretty much impossible, but it more has to be the theoretical part of the physics. And my problem is with the rocket equation. So far I have this simple equation:
$$\Delta v=v_e \ln\left(\dfrac{M}{M - m_R}\right)$$
This should give the maximum velocity of a rocket with the total mass $M$ and the fuel mass $m_R$. But I'm kind of unsure what to do about $v_e$. I believe it's supposed to be the gravitational pull, but I need the rocket equation for a rocket that's already in space, so I don't have to think about getting out of Earth's atmosphere and all that. So I'm kind of stuck here. I would also like to know the time that it takes the rocket to get to the planet, which is 16 light years away or $1.514 \times 10^{14}$ kilometres away.
I hope this makes sense. Physics isn't my strong suit, so any help would be appreciated.
** Edit **
Okay, so I think I get it a little more now. Say we have a rocket that needs to transport a load with the mass of 500 kg.
$$M-m_R=500\ kg$$
But we still need to get a speed that's somewhere near the speed of light if possible (this is only theoretical, of course). So let's say 1/8 the speed of light, which is around $37000000\ m/s$ or $3.7\cdot 10^{7}\ m/s$. Now to get a rocket moving that fast, we would need a proper engine. And according to the link @BowlOfRed provided, the most effective spacecraft propulsion method is a nuclear photonic rocket, which has an exhaust velocity of roughly $2.99\cdot 10^{8}\ m/s$. Can we then just put the load mass and the exhaust velocity into the equation and find how big the fuel mass has to be? I know that I'm doing something wrong here, since the equation gives a strange and small number when you solve for $M$. And the nuclear photonic rocket method probably isn't very realistic, but again, my project is mostly theoretical and I'm only trying to find a way in which it should, in theory, be possible to travel to an exoplanet.
Answer: Assuming that after reading the comments you understand that $v_e$ is the exit velocity of the fuel, you need to further understand what $\Delta v$ is. It's the change in velocity of the spacecraft. For real missions this is not simply the maximum velocity of the craft.
When you want to visit an exoplanet and return, you need to distribute your $\Delta v$ onto several parts of the trip:
Accelerate to leave Earth
Brake to not fly past the exoplanet
Accelerate to leave exoplanet
Brake to not fly past Earth or cause a crater, which we call lithobraking :-)
Aerobraking in an atmosphere may relax some of these $\Delta v$ requirements, as will swing-bys/gravity-assists along the way. See also this cool $\Delta v$ map of the solar system.
So if your fuel allows for a $\Delta v$ of say 40km/s, your actual travelling speed is going to be considerably lower.
And we haven't talked about staging yet, which also changes things a bit.
Now with nuclear fuel exiting near the speed of light, indeed, fuel mass is quite low. There's a factor of 30,000 compared to the $v_e$ of a chemical reaction (generously taken as 10 km/s).
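A quick sketch of that Tsiolkovsky arithmetic with the question's numbers (deliberately non-relativistic, so at a delta-v of c/8 it is only indicative):

```python
from math import exp

dry_mass = 500.0   # kg payload from the question
dv = 3.7e7         # m/s, roughly c/8

for label, ve in [("photon exhaust ~3e8 m/s", 2.99e8),
                  ("chemical       ~1e4 m/s", 1.0e4)]:
    x = dv / ve
    if x < 700:                          # math.exp overflows near exp(709)
        ratio = exp(x)                   # Tsiolkovsky: m0/mf = exp(dv/ve)
        print(f"{label}: fuel = {dry_mass * (ratio - 1):.1f} kg")
    else:
        print(f"{label}: m0/mf = e^{x:.0f} -- utterly hopeless")
```

With a photon-like exhaust the 500 kg payload needs only a few tens of kilograms of reaction mass for this delta-v; with chemical exhaust the required mass ratio is $e^{3700}$, far more than the mass of the observable universe.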
If I'm not mistaken, a matter/antimatter rocket is ideal, after solving the problem of getting that nasty kind of unobtainium in measurable quantities. | {
"domain": "physics.stackexchange",
"id": 27129,
"tags": "homework-and-exercises, newtonian-mechanics, forces, rocket-science"
} |
Flask brand API endpoint unit test | Question: # unit tests for the /brands endpoint
from flask.ext import restful
import unittest
import os
import json
import sys
# tells sql_models to use the testing database
os.environ['API_TESTING'] = '1'
sys.path.append('/opt/backend_api/')
import api
from models import sql_models
from config import settings
class TestBrandsEndpoint(unittest.TestCase):
def setUp(self):
''' fire up a test instance of the flask app '''
self.app = api.APP.test_client()
self.app.testing = True
self.api = api.API
self.test_brand_name = 'Momcorp'
def tearDown(self):
'''
tear down app, reverse any changes made
- remove the brand inserted in test_post()
'''
brand_object = sql_models.Brand.get_by_name(self.test_brand_name)
if brand_object is not None:
sql_models.Brand.delete_brand(brand_object)
pass
def test_post(self):
'''
insert a new brand into the database
- only given field for test is the mandatory brand name but could(should?) be extended for the rest
- test that the response status code == 201 and the returned brand object has the correct name
'''
result = self.app.post( '/' + settings.BASE_API_VERSION + '/brands',
headers=[('user-api-key', '<redacted>')],
data={'name':self.test_brand_name})
# print(json.loads(result.data))
self.assertEqual(result.status_code, 201)
self.assertEqual(json.loads(result.data)['brand']['name'], self.test_brand_name)
def test_get(self):
'''
get all brands and all currently running offers
- sets the brand's 'offers_running' status
- returns serialized brands like so: {'brands':[serialized_brands]}
- test status code + name, id fields are not null
'''
result = self.app.get('/' + settings.BASE_API_VERSION + '/brands', headers=[('user-api-key','<redacted>')])
self.assertEqual(result.status_code, 200)
for brand in json.loads(result.data)['brands']:
self.assertIsNotNone(brand['name'])
self.assertIsNotNone(brand['id'])
if __name__ == '__main__':
unittest.main()
Looking for advice about the 'right way' to do this. Not sure if I should have a separate file and class for each endpoint or if all endpoints+request types should just be their own methods in a larger TestAllEndpoints(unittest.TestCase) class. Any advice or opinions are appreciated.
Answer:
Move the os.environ and sys.path calls into a new setUpClass method instead of having them at the global level?
pass at the end of setUp should be removed, similarly the commented out print.
I'd put the constants for the URL and the headers into fields or methods (self.url() / self.url) to reduce duplication.
The docstrings are a bit much. Ideally the code would be very self explanatory so that it almost spells out what you currently have (duplicated) in the docstring.
I'd put things that belong together in the same file; so same endpoint and requests definitely in the same file and probably in the same test class as well. I'd go for practicality first - if you need to have convoluted setup because you want to handle many different tests, then split it into multiple classes such that the tests are again easy to read. | {
"domain": "codereview.stackexchange",
"id": 21692,
"tags": "python, unit-testing, flask"
} |
About the release price | Question:
Dears:
If I want to developed a sensor driver and release up to ROS. Needs to charged any price from ROS?
best regards,
Emma
Originally posted by EmmaHsu on ROS Answers with karma: 1 on 2016-12-12
Post score: 0
Answer:
No! ROS is open source (http://www.ros.org/is-ros-for-me/) and if you want to release something as open source as well, you are free to do so.
Check out the bloom documentation which is the release toolchain used in ROS.
Originally posted by mgruhler with karma: 12390 on 2016-12-12
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by EmmaHsu on 2016-12-12:
Thanks for your reply :)
Comment by gvdhoorn on 2016-12-12:
Perhaps 'open-source' could use some clarification: wikipedia/zh/open-source.
Comment by NEngelhard on 2016-12-12:
And "open source" -> "commercially usable" is not a valid conclusion!!
Comment by gvdhoorn on 2016-12-12:
@NEngelhard: could you clarify that statement a bit? Are you referring to licensing or code quality (or both)?
Comment by NEngelhard on 2016-12-12:
I was referring to the license. Just being open source is not enough for being commercially usable.
Comment by mgruhler on 2016-12-12:
@NEngelhard @gvdhoorn good points. But I understood the questions as: "do I have to pay to release a package".
This is a clean "No, you don't have to." Whether the release package is suitable for anything is another matter ;-) | {
"domain": "robotics.stackexchange",
"id": 26460,
"tags": "ros, release"
} |
Showing/hiding deletion buttons in a grid view | Question: I want to perform some operations on the gridview (ASP.NET 4.0). To achieve this I have written a jQuery function which I am calling on page load, but because this function takes some time to execute, my grid performance degrades (IE 8). Can I optimize it in some way?
$.each($("#divTab2GridInquiries").find("tr").not(":first"), function () {
var tr = $(this);
var val = tr.find("input[id*='hdnLineStatus']").val();
var btnDelete = tr.find("div[id='divBtnDelete']");
var btnTobeDeleted = tr.find("div[id='divBtnTobeDeleted']");
if (val == "N") {
btnDelete.hide();
btnTobeDeleted.hide();
}
if (val == "S") {
tr.css("background-color", "#99FF99");
tr.find("input").css("background-color", "#99FF99");
btnDelete.show();
btnTobeDeleted.hide();
}
if (val == "D") {
tr.css("background-color", "#FFFF99");
tr.find("input").css("background-color", "#FFFF99");
btnDelete.show();
btnTobeDeleted.hide();
}
//From user rights
if ($("input[id*='hdnTab2ShowDelete']").val() != "Y") {
btnTobeDeleted.hide();
//btnDelete.show();
}
});
Answer: Use a single selector instead of "find" and "not." (Edit: leave the find, since it is faster in every browser except Opera; see comments.)
For the 2nd part of the selector, you can use the "sibling" combinator ~ to grab everything except its first operand, and the :first-child pseudoclass selector to get the first child, giving you the same set of elements without using several jQuery methods. This is faster than using not(':first') in all browsers, and faster than a single selector (e.g. not using find either) in all browsers except Opera (which maintains its native-selector edge). See this test.
Note: #someTable tr will also return tr elements from a nested table. You really want to target the direct row descendants of the table. But don't forget about tbody, which is a required element. So this probably should be "#divTab2GridInquiries > tbody > tr:first-child ~ tr". But that is a mouthful... and it's really slow. If you have no nested tables it will work fine as coded below.
$.each($("#divTab2GridInquiries").find("tr:first-child ~ tr"), function () {
var tr = $(this);
Not sure what you're doing here - the selector is using a wildcard match, but val only operates against the first element in a selection set. Can you target this element more specifically? In any event, instead of wildcard matching the id, add a class and select on that. Classes are much faster than substring matching attributes.
//var val = tr.find("input[id*='hdnLineStatus']").val();
var val = tr.find(".hdnLineStatus").val();
IDs are supposed to be unique. I'm not sure why you would have to target it this way. But using an attribute selector like this will definitely be slower than a regular ID or class selector. If these ids are really unique then just use #divBtnDelete. I suspect that they aren't and you're creating invalid html. Get rid of the ID and add a class.
// var btnDelete = tr.find("div[id='divBtnDelete']");
var btnDelete = tr.find(".divBtnDelete");
//var btnTobeDeleted = tr.find("div[id='divBtnTobeDeleted']");
var btnTobeDeleted = tr.find(".divBtnTobeDeleted");
This set of ifs should be a switch, but that's probably not slowing you down nearly as much as the selectors.
if (val == "N") {
btnDelete.hide();
btnTobeDeleted.hide();
}
if (val == "S") {
tr.css("background-color", "#99FF99");
tr.find("input").css("background-color", "#99FF99");
btnDelete.show();
btnTobeDeleted.hide();
}
if (val == "D") {
tr.css("background-color", "#FFFF99");
tr.find("input").css("background-color", "#FFFF99");
btnDelete.show();
btnTobeDeleted.hide();
}
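As a side note (my own sketch, not part of the original review): the if-chain can also be collapsed into a lookup table keyed on the status value. A jQuery-free sketch of the idea, reusing the "N"/"S"/"D" codes and colors from the original code:

```javascript
// Presentation for each status; "N" falls through to the default,
// which hides both buttons and leaves the row color alone.
const statusStyles = {
  S: { color: "#99FF99", showDelete: true },
  D: { color: "#FFFF99", showDelete: true },
};

function presentationFor(val) {
  const style = statusStyles[val] || { color: null, showDelete: false };
  return {
    color: style.color,
    showDelete: style.showDelete,
    showTobeDeleted: false, // hidden for every status in the original code
  };
}

console.log(presentationFor("S")); // -> color "#99FF99", showDelete true
```

The table keeps data (colors, visibility) separate from the DOM code that applies it, which is usually easier to extend than a switch.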
Use a class again.
//From user rights
//if ($("input[id*='hdnTab2ShowDelete']").val() != "Y") {
if ($(".hdnTab2ShowDelete").val() != "Y") {
btnTobeDeleted.hide();
//btnDelete.show();
}
}); | {
"domain": "codereview.stackexchange",
"id": 2102,
"tags": "javascript, jquery"
} |
Merging two different models in Keras | Question: I am trying to merge two Keras models into a single model and I am unable to accomplish this.
For example in the attached Figure, I would like to fetch the middle layer $A2$ of dimension 8, and use this as input to the layer $B1$ (of dimension 8 again) in Model $B$ and then combine both Model $A$ and Model $B$ as a single model.
I am using the functional module to create Model $A$ and Model $B$ independently. How can I accomplish this task?
Note: $A1$ is the input layer to model $A$ and $B1$ is the input layer to model $B$.
Answer: I figured out the answer to my question and here is the code that builds on the above answer.
from keras.layers import Input, Dense
from keras.models import Model
from keras.utils import plot_model
A1 = Input(shape=(30,),name='A1')
A2 = Dense(8, activation='relu',name='A2')(A1)
A3 = Dense(30, activation='relu',name='A3')(A2)
B2 = Dense(40, activation='relu',name='B2')(A2)
B3 = Dense(30, activation='relu',name='B3')(B2)
merged = Model(inputs=[A1],outputs=[A3,B3])
plot_model(merged,to_file='demo.png',show_shapes=True)
and here is the output structure that I wanted: | {
"domain": "datascience.stackexchange",
"id": 2405,
"tags": "machine-learning, python, deep-learning, keras, tensorflow"
} |
Skipping the defatting stage in extraction of alkaloids | Question: I am attempting to extract certain alkaloids from green plant matter, which is very high in waxes and fats.
I understand that in this scenario it is common to first acidify and then use a non-polar solvent to de-fat the solution.
I am curious though: say I were to basify and extract alkaloids without any de-fatting. If I were then to add HCl solution to the non-polar solvent which now contains the alkaloids, wouldn't this in theory be equivalent to the de-fatting stage? Wouldn't the alkaloids now be in the aqueous layer, with the lipids stuck in the non-polar layer?
As the acid/base method is so common, I am wondering if there are any downsides to the method described above that I am unaware of. I only have enough material for one shot so I would just like to hear some input before I attempt this expedited method.
My main fear would be emulsions that won't break up. Is this a valid concern that is remedied by a preliminary de-fatting stage? Are there other concerns I should be wary of?
Answer: One big risk with the method you propose is foaming and gellation turning your extractions into a huge mess. Under basic conditions, the fats (e.g. triacylglycerols or TAGs) are likely to hydrolyze, forming carboxylate salts of fatty acids. "Carboxylate salts of fatty acids" is a long way of saying soap. These soap-like compounds could lead to excessive foaming or gellation of the sample, which will hinder phase separation and gunk up your apparatus. | {
"domain": "chemistry.stackexchange",
"id": 4137,
"tags": "extraction, alkaloids"
} |
Variable in export of package manifest | Question:
Hi,
I have a quick question about neatly wrapping an external library. I'm wrapping PhysX into a ROS package and need a variable export of a cflag depending on the word length of the host system.
<cpp cflags="-I${prefix}/include/SDKs/Cooking/include -I${prefix}/include/SDKs/Foundation/include -I${prefix}/include/SDKs/NxCharacter/include -I${prefix}/include/SDKs/Physics/include -I${prefix}/include/SDKs/PhysXLoader/include" lflags="-L${prefix}/lib -Wl,-rpath,${prefix}/lib -lNxCharacter -lNxCooking -lPhysXCore -lPhysXLoader"/>
Now I want to add a flag -DNX32 or -DNX64 to the cflags depending on the host system. (The Makefile automatically installs PhysX binaries for 32-bit and 64-bit systems into the package.)
Any suggestions on how to nicely do this? Thanks.
Originally posted by rado0x54 on ROS Answers with karma: 191 on 2011-10-04
Post score: 0
Answer:
Adding
-DNX`getconf LONG_BIT`
solved the problem.
rospack cflags-only-other physx2_wrapper:
-DNX64
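For readers unfamiliar with the backtick expansion, a standalone shell sketch of what it evaluates to:

```shell
# getconf LONG_BIT prints the host word length (32 or 64) on POSIX systems,
# so command substitution builds the matching preprocessor define.
bits=$(getconf LONG_BIT)
flag="-DNX${bits}"
echo "$flag"   # -DNX32 or -DNX64, depending on the host
```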
Thanks.
Originally posted by rado0x54 with karma: 191 on 2011-10-05
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Brian Gerkey on 2011-10-05:
Yep, that'll work. You can see the same pattern in the way we call things like rosboost-cfg (e.g.: https://code.ros.org/svn/ros/stacks/ros_comm/trunk/clients/cpp/roscpp/manifest.xml). But I'll note that this backdoor, where we let bash chew on the export flags, is highly non-portable, and will go away with the advent of rosbuild2. | {
"domain": "robotics.stackexchange",
"id": 6866,
"tags": "ros, manifest.xml"
} |
Is Hubble graph straight line or curve? | Question: According to this paper, the graph explaining Hubble's law is expressed in two cases. The first is when the hubble parameter is constant over time, and the second is when it changes over time.
My question is as follows.
First: I understand that the reason the Hubble graph is straight is that the measured objects are close; the graph comes out straight only when the distances are very small. Why does the change in the Hubble constant over time turn a straight line into a curve?
Is the explanation of the paper wrong?
Second: as far as I know, the Hubble constant in the graph above means the value of the Hubble constant at the time the graph is drawn, and the Hubble constant at a different time cannot be expressed on the same curve. I understand that if you want to find the value of the Hubble constant in the past, you have to draw another graph.
So, is the following sentence in the paper wrong?
“ As one moves from right to left in Fig. 6 (i.e., from the past toward the present), the slope, and hence the expansion rate, increases.”
Although distant celestial bodies represent past information, I don't think it's right to express the change in Hubble parameters in the current graph.
Answer: I think that the diagrams are technically correct if interpreted in a certain unintuitive way, namely if you take Distance to be the present-day metric distance to the galaxy, and Velocity to be its recessional velocity when it emitted the light, i.e. the cosmological-time derivative of its past metric distance.
If $t$ is the time of emission of the light, you then have $\text{Velocity} = d(a(t)\,\text{Distance})/dt$, and therefore $\text{Velocity}/\text{Distance} = a'(t)$, which is the expansion rate at time $t$.
Note that "constant rate of expansion" normally means constant $a'$, not constant $H=a'/a$. A constant Hubble parameter is exponential expansion ($a(t)=e^{Ht}$). Your question says that the first diagram is for a constant Hubble parameter, but the authors don't seem to say that, and it would be a strange choice in the circumstances.
If the diagrams showed past metric distance versus its time derivative, then the first diagram would curve upward, since $H=1/t$ when $a'$ is constant, and the second diagram would also, since $H'<0$ at all times even in ΛCDM. The curves would in fact curve so much that they would head back toward the vertical axis, since the metric distance (= angular size distance) decreases past a certain lookback time.
The authors seem to suggest that the slope of the graph is the expansion rate, but in my interpretation the expansion rate is the slope of a line through a data point and the origin. I think that's an error in the paper, unless there's another interpretation of the diagrams that I'm not seeing. | {
"domain": "physics.stackexchange",
"id": 82485,
"tags": "cosmology, space-expansion"
} |
40A mcb. 40A per pole or 40A altogether? | Question: If I have a 40 A 3-pole Miniature Circuit Breaker ("MCB"), does this mean 40 A per pole or 40 A altogether?
The quality standards made me write some more so I do it.
Answer: 40 A per pole.
That's the way three-phase loads are specified so that's the way the breakers are specified.
It also means that a single phase exceeding the trip current (due, for example, to a partial earth fault on that phase) will trip the whole circuit. | {
"domain": "engineering.stackexchange",
"id": 5461,
"tags": "electrical-engineering"
} |
Contracted Christoffel symbol | Question: On page 3 of this document:
https://studentportalen.uu.se/uusp-filearea-tool/download.action?nodeId=1247106&toolAttachmentId=247022
it shows how to calculate the contracted Christoffel symbol by
\begin{align*} \Gamma^{\mu}_{\mu \lambda} &= \frac{1}{2}g^{\mu \rho}(\partial_{\mu}g_{\rho\lambda} + \partial_{\lambda}g_{\mu\rho} - \partial_{\rho}g_{\mu\lambda} ) \\ &= \frac{1}{2}(\partial^{\rho}g_{\rho\lambda} + g^{\mu \rho}\partial_{\lambda}g_{\mu\rho} - \partial^{\mu}g_{\mu\lambda} ) \\ &= \frac{1}{2}g^{\mu \rho}\partial_{\lambda}g_{\mu\rho} \end{align*}
Can someone kindly please help and explain what's happening in step $2$ and $3$? I just don't get it.
Answer: The tensor product in step 1 is expanded and the 3 terms in step 1 are transformed into 3 terms in step 2 according to these rules:
$$g^{\mu \rho}\partial_{\mu}g_{\rho\lambda} \to \partial^{\rho}g_{\rho\lambda} $$
and
$$ g^{\mu \rho}\partial_{\rho}g_{\mu\lambda} \to \partial^{\mu}g_{\mu\lambda} $$
this is because the metric raises the index of the partial derivative, $g^{\mu\rho}\partial_{\mu}=\partial^{\rho}$ (note that when two or more derivatives act on a tensor, only the left-most derivative's index can be raised this way), while the middle term (the second one) is left untouched.
The first and third terms in step 2 are the same, once you see that they differ only in the name of a dummy index, so they cancel each other and in step 3 we are left with only the second term of step 2.
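Explicitly, relabeling the dummy index $\rho \to \mu$ in the first term of step 2 makes the cancellation visible:
$$\frac{1}{2}\partial^{\rho}g_{\rho\lambda}-\frac{1}{2}\partial^{\mu}g_{\mu\lambda}=\frac{1}{2}\partial^{\mu}g_{\mu\lambda}-\frac{1}{2}\partial^{\mu}g_{\mu\lambda}=0,$$
leaving only the middle term $\frac{1}{2}g^{\mu\rho}\partial_{\lambda}g_{\mu\rho}$ in step 3.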
"domain": "physics.stackexchange",
"id": 43423,
"tags": "homework-and-exercises, general-relativity, differential-geometry, metric-tensor"
} |
Time evolution for the harmonic oscillator wave functions | Question: The wave functions of the quantum harmonic oscillator are given by:
$$\psi_n(x)=\frac{1}{\sqrt{2^nn!}} \left(\frac{m\omega}{\pi \hbar}\right)^{-1/4} e^{-m\omega x^2/2\hbar}H_n\left(\sqrt{m\omega/ \hbar} x\right)$$
My question is how do these wave functions evolve in time? I could not find any reference concerning this. Meaning, given some $\psi(x,0)$, how does one generally find $\psi(x,t)$ in this case? I was thinking of:
$$\left|\psi\left(t\right)\right\rangle =\hat{T}\left|\psi_{0}\right\rangle \implies\left\langle x|\psi\left(t\right)\right\rangle =\psi\left(x,t\right)=\left\langle x\left|e^{-\frac{i}{\hbar}\hat{H}t}\right|\psi_{0}\right\rangle $$
But how does one proceed further?
Also, since I'm given $\psi(x,0)$ and not $\left|\psi_{0}\right\rangle $, I'm not sure that's even the way to go.
Answer: In the Dirac notation you use, one can express the initial state vector as
\begin{equation}\left|\Psi_0\right> = \sum_i a_i\left|\psi_i\right>\end{equation}
where the $\left|\psi_i\right>$ are the eigenvector solutions. You then operate on the left of this with $e^{-iHt/\hbar}$, which is an operator (think of the series expansion of this into terms involving products of $iHt/\hbar$). This gives
\begin{align}\left|\Psi(t)\right> &= e^{-iHt/\hbar}\left|\Psi_0\right> \\
&= e^{-iHt/\hbar}\sum_i a_i\left|\psi_i\right> \\
&= \sum_i a_ie^{-iE_it/\hbar}\left|\psi_i\right>.\end{align}
Finally, you take the inner product of this with $\left<x\right|$, which yields the wavefunction
\begin{equation}\left<x|\Psi(t)\right> = \Psi(x,t) = \sum_i a_ie^{-iE_it/\hbar}\psi_i(x),\end{equation}
as in Connor Behan's answer. | {
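As a numerical illustration (my own sketch with $\hbar=m=\omega=1$, not part of the original answer): evolving the superposition $\left(\left|\psi_0\right>+\left|\psi_1\right>\right)/\sqrt{2}$ this way conserves the norm while $\left<x\right>(t)$ oscillates as $\cos(t)/\sqrt{2}$:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

x = np.linspace(-8.0, 8.0, 2001)

def psi_n(n, x):
    # n-th oscillator eigenfunction, physicists' Hermite polynomial H_n
    c = np.zeros(n + 1); c[n] = 1.0
    norm = 1.0 / np.sqrt(2.0**n * math.factorial(n) * np.sqrt(np.pi))
    return norm * np.exp(-x**2 / 2.0) * hermval(x, c)

a = [1.0 / np.sqrt(2.0), 1.0 / np.sqrt(2.0)]   # a_0, a_1
E = [0.5, 1.5]                                  # E_n = n + 1/2

def psi(t):
    # Psi(x,t) = sum_n a_n exp(-i E_n t) psi_n(x)
    return sum(a[n] * np.exp(-1j * E[n] * t) * psi_n(n, x) for n in range(2))

def moments(t):
    prob = np.abs(psi(t))**2
    dx = x[1] - x[0]
    return np.sum(prob) * dx, np.sum(x * prob) * dx   # norm, <x>

for t in (0.0, 1.0, np.pi):
    norm, x_mean = moments(t)
    print(f"t={t:.2f}  norm={norm:.4f}  <x>={x_mean:+.4f}")
# the norm stays 1 while <x>(t) tracks cos(t)/sqrt(2)
```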
"domain": "physics.stackexchange",
"id": 80103,
"tags": "quantum-mechanics, homework-and-exercises, harmonic-oscillator, time-evolution"
} |
Relationship between circuit size and formula size in Sipser text | Question: The Sipser text (3rd edition) contains a proof that 3-SAT is NP-Complete based on Boolean circuits. Part of the proof contains the remark that the reduction from the circuit to the Boolean formula can be done in polynomial time.
First question: is it correct to say that if a circuit C of polynomial size exists, then there must exist a formula $\varphi$ of polynomial size where C is satisfiable if and only if $\varphi$ is satisfiable?
Second question: is the Boolean formula $\varphi$ still of polynomial size if C is of polynomial size and C is derived from a deterministic Turing machine M? This seems to be described in the proof of the earlier theorem in Sipser where (by building C from a tableau of M) it is shown that if $\mbox{A $\in$ TIME$(t(n))$ for $t(n) \geq n$ and $n \in \mathbb{N}$}$ then A has circuit complexity $O(t^2(n))$.
Answer: Yes, that's correct. See the Tseitin transform, which describes how. It doesn't matter how the circuit $C$ was constructed. | {
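As a tiny illustration of the Tseitin idea (my own sketch, not from the source): encode a single AND gate $z \leftrightarrow (x \wedge y)$ as three clauses and brute-force check that the clauses hold exactly when $z$ agrees with the gate:

```python
from itertools import product

# One fresh variable and O(1) clauses per gate is what keeps the Tseitin
# formula polynomial in the circuit size. Clauses for z <-> (x AND y);
# a literal is (variable, polarity).
clauses = [
    [("z", False), ("x", True)],                 # z -> x
    [("z", False), ("y", True)],                 # z -> y
    [("z", True), ("x", False), ("y", False)],   # (x AND y) -> z
]

def satisfies(assignment, clauses):
    return all(
        any(assignment[v] == polarity for v, polarity in clause)
        for clause in clauses
    )

# The clauses hold exactly when z carries the gate's output value.
for vx, vy, vz in product([False, True], repeat=3):
    assert satisfies({"x": vx, "y": vy, "z": vz}, clauses) == (vz == (vx and vy))
print("all 8 assignments agree")
```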
"domain": "cs.stackexchange",
"id": 15590,
"tags": "complexity-theory, circuits, boolean-algebra"
} |
Finding the impulse response of a system | Question: I have the following transfer function.
$$ H(j\omega) = \frac{1+0.5 e^{-j\omega}}{1-1.8 \cos(\frac{\pi}{16}) e^{-j\omega}+0.81 e^{-j2\omega}}$$
I'm trying to find the impulse response of the system. However, I couldn't separate the expression above, and I couldn't figure out how to find the impulse response. Can anybody help me solve this equation? Any help would be appreciated.
Answer: One can see that the given expression can be decomposed as
$$
\begin{align}
H(j\omega) &= \frac{1+0.5 e^{-j\omega}}{1-1.8 \cos(\frac{\pi}{16}) e^{-j\omega}+0.81 e^{-j2\omega}} \\
&= \frac{A }{1-0.9 e^{j\frac{\pi}{16}} e^{-j\omega}} + \frac{B}{1-0.9 e^{-j\frac{\pi}{16}} e^{-j\omega}}
\end{align}$$
where $A = B^* = 0.5 - j 3.9375 = 8/16 - j 63/16 $.
Then the impulse respons will be:
$$h[n] = A (0.9 e^{j\frac{\pi}{16}})^n u[n] + A^* (0.9 e^{-j\frac{\pi}{16}})^n u[n]$$
which can be simplified as:
$$h[n] = 2 \cdot 0.9^n |A| \cos( \frac{\pi}{16}n + \angle{A}) u[n]$$
where $\angle{A}$ is the phase angle of $A$. Following is the resulting sequence plotted, from $n=0$ to $n=35$, using MATLAB/OCTAVE. | {
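As a sanity check (my own sketch, not part of the original answer), the closed form can be compared numerically against the impulse response obtained by running the difference equation implied by $H$; the residue $A$ is recomputed rather than hard-coding $0.5 - j\,3.9375$:

```python
import numpy as np

N = 36
w0 = np.pi / 16
p = 0.9 * np.exp(1j * w0)                  # pole of H
A = (1 + 0.5 / p) / (1 - np.conj(p) / p)   # residue; comes out to ~0.5 - 3.9375j

# Closed form from the answer: h[n] = 2 * 0.9^n * |A| * cos(n*pi/16 + angle(A))
n = np.arange(N)
h_closed = 2.0 * 0.9**n * np.abs(A) * np.cos(w0 * n + np.angle(A))

# Impulse response by running the difference equation implied by H:
# y[n] = 1.8 cos(pi/16) y[n-1] - 0.81 y[n-2] + x[n] + 0.5 x[n-1]
x = np.zeros(N); x[0] = 1.0
y = np.zeros(N)
for k in range(N):
    y[k] = x[k]
    if k >= 1:
        y[k] += 0.5 * x[k - 1] + 1.8 * np.cos(w0) * y[k - 1]
    if k >= 2:
        y[k] -= 0.81 * y[k - 2]

print(np.max(np.abs(y - h_closed)))   # agreement to machine precision
```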
"domain": "dsp.stackexchange",
"id": 7967,
"tags": "fourier-transform, transfer-function, impulse-response"
} |
Detecting original vs. edited (reposted / recompressed) image | Question: I'm trying to create something to help solve this problem:
My goal is, given two images where one is an edit of the other, to produce a system that outputs which one is most likely to be the original.
I have attempted to solve the problem in the following way.
Collect a data set of random images that one may find in their social media blog. (10,000 images)
Apply random lossy transformations to the image: resizing (with various scaling algorithms), cropping, lossy compression (JPEG)
Preprocess input into training data:
Metadata:
Collect file size, resolution, file format, lossy compression quality
Add jitter (to prevent overfitting by recognizing specific images), jitter varies per 8x8 sample
Output is a fixed number of floats per datum
Input pixels:
Break the image into 8x8 chunks (the size is chosen to coincide with JPEG MCU size)
Select a fixed number of chunks with the most entropy (changes in pixels)
Select a color channel
Divide by 255.0
Output is 64 floats (one per pixel)
DCT values:
Using the pixel data from above, perform a DCT transform
Take the absolute value of the DCT value
Output is 64 floats
Feed the above data into a neural network.
I don't know if it makes sense or not, but I started with a straight-forward model and did a simple hill-climbing hypersearch which added/removed/edited layers, and it produced the following model:
Each sample (and associated metadata) is fed as a single input.
Fit the model to the labels 0.0 or 1.0, with 0.0 representing original (less noise) and 1.0 representing edited (more noise).
Even though these labels don't describe any quality of the image itself, the goal is to use them to indicate some kind of ordering. The idea here is to "pull" the weights towards 0 for images known to have less noise and towards 1 for those with more noise, which I hope will cause the model to output higher values for images which have been edited.
Currently the model's accuracy is 88% (i.e. it correctly produces higher values for our edited versions for 88% of input image pairs).
My question is if there is a better way to represent ordering in this case other than labeling pairs 0 and 1. One potential issue with this is that the model is penalized if it outputs a score >1 or <0 (though currently avoided using sigmoid activation if that makes sense). Also, if there are other obvious improvements in the whole process. Thank you!
Answer:
My question is if there is a better way to represent ordering in this case other than labeling pairs 0 and 1.
I solved this by creating an artificial comparator layer, which takes two inputs, and outputs 0 if the first is smaller or 1 if the first is greater. This allows backpropagation to correctly push the two away from each other within the same batch.
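For readers wondering what such a comparator amounts to, here is a minimal numpy sketch (my construction, not the poster's code) of a pairwise ranking objective: the loss $-\log\sigma(s_{edited} - s_{orig})$ is low when the edited image's score exceeds the original's, and its gradient pushes the two scores apart within the same batch:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def pair_loss(s_orig, s_edited):
    # Low when the edited image scores higher than the original;
    # only an ordering is imposed, not 0/1 targets.
    return -np.log(sigmoid(s_edited - s_orig))

# One manual gradient step on a single wrongly-ordered score pair.
s_orig, s_edited, lr = 0.6, 0.4, 0.5
g = 1.0 - sigmoid(s_edited - s_orig)   # magnitude of the push on each score
s_edited, s_orig = s_edited + lr * g, s_orig - lr * g

print(s_edited > s_orig, pair_loss(s_orig, s_edited) < pair_loss(0.6, 0.4))
```

Because only score differences matter, nothing penalizes outputs above 1 or below 0, which addresses the concern in the question.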
Also, if there are other obvious improvements in the whole process.
See my other question: Producing a confidence output to use in a weighted average layer | {
"domain": "datascience.stackexchange",
"id": 9440,
"tags": "machine-learning, image"
} |
What are the major differences between cost, loss, error, fitness, utility, objective, criterion functions? | Question: I find the terms cost, loss, error, fitness, utility, objective, criterion functions to be interchangeable, but any kind of minor difference explained is appreciated.
Answer: They are not all interchangeable. However, all these expressions are related to each other and to the concept of optimization. Some of them are synonymous, but keep in mind that these terms may not be used consistently in the literature.
In machine learning, a loss function is a function that computes the loss/error/cost, given a supervisory signal and the prediction of the model, although this expression might be used also in the context of unsupervised learning. The terms loss function, cost function or error function are often used interchangeably [1], [2], [3]. For example, you might prefer to use the expression error function if you are using the mean squared error (because it contains the term error), otherwise, you might just use any of the other two terms.
In genetic algorithms, the fitness function is any function that assesses the quality of an individual/solution [4], [5], [6], [7]. If you are solving a supervised learning problem with genetic algorithms, it can be a synonym for error function [8]. If you are solving a reinforcement learning problem with genetic algorithms, it can also be a synonym for reward function [9].
In mathematical optimization, the objective function is the function that you want to optimize, either minimize or maximize. It's called the objective function because the objective of the optimization problem is to optimize it. So, this term can refer to an error function, fitness function, or any other function that you want to optimize. [10] states that the objective function is a utility function (here).
A utility function is usually the opposite or negative of an error function, in the sense that it measures a positive aspect. So, you want to maximize the utility function, but you want to minimize the error function. This term is more common in economics, but, sometimes, it is also used in AI [11].
The term criterion function is not very common, at least, in machine learning. It could refer to the function that is used to stop an algorithm. For example, if you are executing a computationally expensive procedure, a stopping criterion might be time. So, in this case, your criterion function might return true after a certain number of seconds have passed. However, [1] uses it as a synonym for the objective function. | {
"domain": "ai.stackexchange",
"id": 1280,
"tags": "deep-learning, convolutional-neural-networks, terminology, objective-functions, comparison"
} |
Why do we need to bind this pointer to a member function in boost::bind? | Question:
Hi,
I noticed in many boost::bind applications, for example here, we need to bind this pointer to a member function.
What does that do?
Thanks,
Rico
Originally posted by RicoJ on ROS Answers with karma: 41 on 2020-09-14
Post score: 0
Answer:
@RicoJ This question is not really related to ROS and you may want to search or ask in other forums like stackoverflow, but I will answer your question.
boost::bind is a special function used to create a functor from the callable and the arguments you pass to it. It basically creates a new callable bound to those arguments; the catch is that, to bind a member function, you need to pass, apart from the special placeholders _1 and _2 (which forward the call's arguments), a reference to the object context, that is, the special word this.
So the first argument is the reference to the member function, the second is the object context itself, and then come the arguments. If you do not supply the context, the function cannot be resolved properly within the member's scope.
Hope that solves your question.
Regards.
Originally posted by Weasfas with karma: 1695 on 2020-09-15
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 35539,
"tags": "boost, ros-kinetic"
} |
What is torque, really? and can it be determined for a point on a circle that is away from the center of the rotation axis by a radius $r$? | Question:
Let's picture a rolling-without-slipping wheel that constantly accelerates. The radius of the wheel/circle is $r$.
Basically any wheel of a vehicle is rotating around its fixed axis - AoR -(axis of rotation)
Most of the times wheels have this AoR perfectly in the center of the circle/wheel/cylinder. What I want to understand is how can we describe the torque at the point $C$?
Because, as I understand it, we could go about finding the torque about the center of the circle, point $O$.
I am not sure if it's possible to find the torque for the point $C$, or for the point $P$, a point on the line tangent to the circle at surface level.
Answer: Conceptually, torque is the turning effect of a force. It is dependent upon two factors- one is the size and direction of the force, and the other is the distance between its point of application and the point around which it is causing the rotation.
In the example you give, the rotation of the wheel is actually a series of instantaneous rotations about the point of contact with the ground, so yes you can calculate its acceleration in terms of a torque around the point P. However, aside from gravity and friction, neither of which will cause a wheel to accelerate on a flat surface, you don't show any motive force in your diagram, so it is hard for me to comment any further. | {
"domain": "physics.stackexchange",
"id": 62597,
"tags": "newtonian-mechanics, reference-frames, rotational-dynamics, torque"
} |
Shorting a Superconducting Coil with A/C | Question: When a superconducting coil carrying a direct current is shorted, it continues to carry the current without loss (approximately), assuming it stays cooled and superconducting.
What would happen if the coil were carrying AC, then shorted? Would the AC current persist as is, just like it would with DC? Would the frequency make a difference? (Specifically interested in high frequencies, kHz and above.)
Answer: A superconducting coil acts mostly as a perfect inductor, so it resists current variations. If it is plugged to an AC voltage source, it will pass an AC current corresponding to its inductance.
When shorted, it will continue running the current that was running through it just before shorting (as $V=0=L\,di/dt$). So you're going to end up with a constant current that can be anywhere between the extreme values of the AC current that was running before the short.
On a more general note, remember that AC analysis only works with stationary AC signals. With transients, you need to resort to time-domain analysis.
"domain": "physics.stackexchange",
"id": 25504,
"tags": "electromagnetism, electricity, magnetic-fields, superconductivity, power"
} |
How to find a separable decomposition for $|\Psi^+\rangle\!\langle\Psi^+|+|\Phi^+\rangle\!\langle\Phi^+|$? | Question: The state
$$ \frac{1}{2}\left(| \phi^+ \rangle \langle \phi^+ | + | \psi^+ \rangle \langle \psi^+ | \right) $$
where
$$ | \phi^+ \rangle = \frac{1}{\sqrt2} \left(|00 \rangle + | 11 \rangle \right) $$
$$ | \psi^+ \rangle = \frac{1}{\sqrt2} \left(|01 \rangle + | 10 \rangle \right) $$
By PPT criteria, we know this is a separable state. If I wanted to find what is the mixture of separable states that form this, how would I go about it?
Answer: I would start by writing this as a matrix, and recognising how it can be written in terms of Pauli matrices:
$$
\frac14\left(\begin{array}{cccc} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{array}\right)=\frac14(\mathbb{I}\otimes\mathbb{I}+X\otimes X)
$$
From here, I don't have a completely formulaic approach for how you do it. But, in this instance, I wrote
$$
=\frac{1}{2}\left(\frac{\mathbb{I}+X}{2}\otimes \frac{\mathbb{I}+X}{2}+\frac{\mathbb{I}-X}{2}\otimes \frac{\mathbb{I}-X}{2}\right).
$$
Now you can see that each of the terms in the tensor product is a separable state. Specifically,
$$
(|++\rangle\langle ++|+|--\rangle\langle --|)/2
$$
One approach that I suppose I might have taken is to recognise the separable, diagonal basis of $X\otimes X$, and decompose $\mathbb{I}\otimes\mathbb{I}$ in the same basis:
$$
\frac{1}{4}(|++\rangle\langle ++|+|+-\rangle\langle +-|+|-+\rangle\langle -+|+|--\rangle\langle --|)+\frac{1}{4}(|++\rangle\langle ++|-|+-\rangle\langle +-|-|-+\rangle\langle -+|+|--\rangle\langle --|),
$$
which inevitably leads to that result. | {
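This decomposition is easy to verify numerically; a short numpy sketch (my own check, not part of the original answer):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# rho = (|phi+><phi+| + |psi+><psi+|)/2 in the computational basis
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
psi = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2.0)
rho = 0.5 * (np.outer(phi, phi) + np.outer(psi, psi))

# Pauli form (I@I + X@X)/4 and the separable mixture
# (P+ @ P+  +  P- @ P-)/2, with P+/- projecting onto |+> and |->
pauli_form = 0.25 * (np.kron(I2, I2) + np.kron(X, X))
P_plus, P_minus = 0.5 * (I2 + X), 0.5 * (I2 - X)
sep = 0.5 * (np.kron(P_plus, P_plus) + np.kron(P_minus, P_minus))

print(np.allclose(rho, pauli_form), np.allclose(rho, sep))  # True True
```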
"domain": "quantumcomputing.stackexchange",
"id": 623,
"tags": "entanglement, density-matrix"
} |
Can a convex mirror form a real image? | Question: Is it possible to arrange a setup in which a convex mirror forms a real image (i.e. an image that can be obtained on a screen)?
Imagine a setup in which light from infinity falls on a concave mirror and the mirror converges it at the focus ($f$ would be negative).
However, if we put a convex mirror before the focus of the concave mirror, then the light rays will not actually meet, and the focus of the concave mirror (where the light would have converged had we not placed the convex mirror) will act as a virtual object for the convex mirror.
In this situation, $u$ (object distance) is positive and so is the focal length of the convex mirror.
From the mirror equation, $v=\dfrac{fu}{u-f}$
For this value to be negative, $f>u$ (where $f$ is the focal length of the convex mirror), so that a real image is formed.
But can this be practically done in the lab? How can we choose different sizes of mirrors and screens so that they do not block the light rays?
I tried doing it but it did not work out.
Can someone help and maybe suggest another setup.
Answer: Any discussion of concave/convex mirrors needs to begin with a statement of the particular version of the mirror equation to be used, along with the convention for setting and interpreting the signs of focal lengths, and object/image positions.
For example, from http://scienceworld.wolfram.com/physics/MirrorFormula.html:
Unmentioned in this is the convention that virtual images and objects are found behind the mirror and have negative values of $d$.
In the particular example you present, the image formed by the primary mirror is a real image. If you put infinity for the object distance and a positive focal length, you find a positive image distance.
But when you insert a convex mirror, with a negative focal length, into the optical path, you must also consider the position of the real image (now an object) relative to the convex mirror. The object is behind the convex mirror; it is a virtual object, and its distance from the convex mirror is negative.
With appropriate positioning of the convex mirror, the formula will produce a positive value for the image distance. There will be real image formed in front of the convex mirror.
You've just designed a Cassegrain telescope... | {
"domain": "physics.stackexchange",
"id": 45097,
"tags": "reflection, geometric-optics"
} |
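The sign bookkeeping in the answer above can be sketched with the mirror equation $1/d_o + 1/d_i = 1/f$ in the real-is-positive convention. The specific focal lengths and spacing below are illustrative assumptions, not values from the post:

```python
def mirror_image(f, d_o):
    """Mirror equation 1/d_o + 1/d_i = 1/f, solved for the image distance d_i."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

# Concave primary, f = +50 cm, object effectively at infinity:
d_i_primary = mirror_image(50.0, 1e12)   # ~ +50 cm: real image at the focus

# Convex secondary (f = -20 cm) placed 40 cm from the primary, i.e. 10 cm
# before the primary's focus.  The primary's image now sits 10 cm *behind*
# the secondary, so it is a virtual object: d_o = -10 cm.
d_i_secondary = mirror_image(-20.0, -10.0)

print(round(d_i_primary), round(d_i_secondary, 1))  # 50 20.0
```

The positive `d_i_secondary` is the point of the answer: the convex mirror forms a real image 20 cm in front of itself, exactly the Cassegrain arrangement.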
Creating HCl acid | Question: I have the following reaction:
$$\ce{2 AlCl3 + 3H2O -> 2 Al(OH)3 + 6 HCl}$$
My goal is to create hydrochloric acid, but in the reaction shown, hydrogen chloride gas is produced instead. I am facing two scenarios:
The aluminum chloride is an aqueous solution instead of a solid
The aluminum chloride is a solid after heating the aqueous solution and the result is water vapor and solid $\ce{AlCl3}$.
My questions are:
If I had an aqueous solution that is $\ce{AlCl3}$ in water, then why does it not react in the first place?
Does the water in the product side need to be a liquid, an aqueous solution, or a gas?
How do I make sure that the reaction works nearly every time?
I am trying to create hydrochloric acid meaning that $\ce{HCl}$ needs to react with water to provide a concentration of $\ce{HCl}$ acid. Would the water need to be a gas in order for both $\ce{HCl}$ and water to condense to hydrochloric acid or is there something else that I missed?
Is there a way that if I added more water to $\ce{AlCl3}$ that I would get $\ce{HCl}$ acid instead of gas?
I know that there are other methods of producing $\ce{HCl}$ acid, but I want to use this method.
Answer: Per this source, a good account of the actual product of dissolving $\ce{AlCl3}$ in a large volume of water:
If aluminium chloride is dissolved in a large amount of water the solution is acidic, but this has nothing to do with formation of hydrochloric acid. The solution contains hydrated aluminium ions and chloride ions:
$\ce{AlCl3(s) + aq -> [Al(H2O)6]^3+(aq) + 3Cl^-(aq)}$
The hexaqua complex ion behaves exactly like ions of similar type formed from transition metals; the small, highly charged metal ion polarises (withdraws electron density from) the water molecules that are attached to the aluminium ion through dative covalent bonds. This makes the hydrogen atoms δ+ and susceptible to attack from solvent water, which is acting as a base. The complex ion is deprotonated, causing the solution to be acidic from the formation of hydroxonium ions $\ce{H3O+}$:
$\ce{[Al(H2O)6]^3+ (aq) + H2O(l) -> [Al(H2O)5OH]^2+(aq) + H3O+(aq)}$
So, no $\ce{HCl}$ per se, but adding a small amount of water to the dry salt will liberate fumes of hydrogen chloride that readily dissolve in water forming aqueous $\ce{HCl}$, but not likely a practical path either.
In my opinion, however, try adding $\ce{AlCl3}$ to a large volume of carbonated water as a possible path to very dilute $\ce{HCl}$. I expect the intermediate formation of an unstable aluminum carbonate followed by the deposition of $\ce{Al(OH)3}$ (interestingly, this reaction mirrors the wrong commonly cited hydrolysis reaction forming $\ce{Al(OH)3}$ and $\ce{HCl}$). Note, quickly separate the solid to limit the reverse reaction recreating the $\ce{AlCl3}$. This reaction, if successful, may also be viewed as paralleling the action of oxalic acid on various salts of mineral acids, successfully forming the mineral acid, along with a corresponding insoluble oxalate (there exists a thread on this). | {
"domain": "chemistry.stackexchange",
"id": 16355,
"tags": "reaction-mechanism, aqueous-solution"
} |
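To get a rough feel for the acidity the quoted source describes, one can treat $\ce{[Al(H2O)6]^3+}$ as a weak monoprotic acid. The $K_\mathrm{a}$ and concentration below are illustrative assumptions, not values from the answer:

```python
import math

# Assumed values for illustration only: Ka ~ 1.0e-5 for [Al(H2O)6]3+
# deprotonation, at an initial concentration C = 0.10 mol/dm3.
Ka, C = 1.0e-5, 0.10

# Solve x^2 + Ka*x - Ka*C = 0 for x = [H3O+] (exact quadratic root):
h3o = (-Ka + math.sqrt(Ka**2 + 4 * Ka * C)) / 2
pH = -math.log10(h3o)
print(round(pH, 2))  # 3.0
```

A pH near 3 is noticeably acidic, yet no $\ce{HCl}$ is involved, which is the quoted source's point.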
Service Call in ROSJava? | Question:
hi everybody!
I'm trying to call a Service in ROSJava. The Service itself is running on ROS/Cpp.
What I'm looking for is the command "rosservice call /EnableFreeSpaceOrientation true" in Java.
http://www.ros.org/wiki/rosjava
The "Calling a Service" part seems to be the solution?
Have I missed something? Maybe I'm too stupid to see it or find it! :D
Hope for help! Thanks!
Originally posted by Mr_Miyagi on ROS Answers with karma: 21 on 2011-06-14
Post score: 0
Answer:
Are you using the new, pure-Java rosjava or the old JNI-based rosjava?
FYI: we've just moved the JNI docs here:
http://www.ros.org/wiki/rosjava_jni
As for the new rosjava, it's still in an alpha state and service calls are not easy-to-use right now.
Originally posted by kwc with karma: 12244 on 2011-06-15
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 5846,
"tags": "rosjava"
} |
What is this sand covered insect? | Question: I have found in a wooden house, in the mountains of central Spain this very curious insect:
I was wondering if that was its skin or whether it's decomposition or sand or saw-dust. That particular region of Spain does have stink-bugs ... but this one doesn't look like anything I've seen before. A google image search returns mostly arachnids ... but it seems to have only 6 legs.
Answer: It's a kind of assassin bug, specifically a Masked Hunter:
The surface of an immature Masked Hunter is sticky and it attracts lint and dust which helps to camouflage this predator.
and
The name refers to the fact that its nymph camouflages itself with dust.
Though they feed on small insects, they will bite defensively, and when they do, it's fairly painful. | {
"domain": "biology.stackexchange",
"id": 4969,
"tags": "species-identification, entomology"
} |
Relevance of Weisfeiler–Lehman Graph Isomorphism Test limitation for Graph Neural Networks | Question: The power of Graph Neural Networks is limited by the power of the Weisfeiler–Lehman graph isomorphism algorithm.
Quoting wikipedia:
It has been demonstrated that GNNs cannot be more expressive than the
Weisfeiler–Lehman Graph Isomorphism Test. In practice, this
means that there exist different graph structures (e.g., molecules
with the same atoms but different bonds) that cannot be distinguished
by GNNs.
What are the practical implications of this limitation? Is it an academic example providing no obstacle in real-life applications of GNNs (e.g. to drug discovery) or are there any instances in which it plays a big role? If the latter, please provide an example of such a limitation.
Answer: Firstly, as already stated in the Wikipedia quote: Observing that a type of GNN is as expressive as the Weisfeiler–Lehman (WL) Test, means in practice that two graphs $\mathcal{G}_1$ and $\mathcal{G}_2$ cannot be differentiated by the GNN, if the 1-WL Test cannot differentiate them. Therefore, if $\mathcal{G}_1$ and $\mathcal{G}_2$ are labelled differently, your model cannot ever learn to classify both correctly.
In real-life applications, this often doesn't matter too much. Zopf et al. provide a nice analysis of this, showing that most datasets contain 100% distinguishable graphs. The 1-WL test does struggle with the molecular data of the MUTAG dataset (see Table II). However, it seems that most of the time 1-WL expressivity will be enough.
There is another distinction to be made here: In real-world applications, we want encodings of graphs that reflect their similarity, meaning two similar graphs $\mathcal{G}_1$ and $\mathcal{G}_2$ should be encoded as similar vectors in the embedding space. If $\mathcal{G}_1$ and $\mathcal{G}_2$ are very slightly different, the vectors should still be close by. This then provides a tool that generalizes to unseen graphs. The WL-Test cannot do this kind of encoding but the GNN can.
Here it makes sense to think about expressivity of GNNs as the ability to do this smooth encoding. And one aspect of that is this upper bound of expressivity as stated by the quote you provide.
I hope that was understandable, feel free to follow up on that. | {
"domain": "ai.stackexchange",
"id": 3668,
"tags": "graph-neural-networks, geometric-deep-learning, graphs, graph-isomorphism-network"
} |
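The 1-WL limitation discussed above is easy to demonstrate concretely. A minimal sketch of 1-WL colour refinement, using canonical nested-tuple colours so that refinements of different graphs can be compared directly (the hexagon-vs-triangles pair is a standard counterexample, not one from the answer):

```python
from collections import Counter

def wl_colours(adj, rounds=3):
    """1-WL colour refinement.  Colours are canonical nested tuples,
    so the final multisets of two graphs are directly comparable."""
    colours = {v: () for v in adj}
    for _ in range(rounds):
        # New colour = (old colour, sorted multiset of neighbour colours).
        colours = {v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
                   for v in adj}
    return Counter(colours.values())

# Hexagon C6 vs. two disjoint triangles (2 x C3): non-isomorphic, but both
# are 2-regular, so 1-WL -- and hence a plain message-passing GNN with
# uniform node features -- cannot tell them apart.
hexagon = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}

print(wl_colours(hexagon) == wl_colours(triangles))  # True
```

The `True` here is exactly the failure mode: every node in both graphs keeps the same colour through every round, so the multisets match even though the graphs differ.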