| anchor | positive | source |
|---|---|---|
The moon has just the right speed not to crash on the Earth or escape into space. What are the odds? | Question: My understanding is that the moon was created a long time ago when Earth was hit by a big asteroid.
The debris then agglomerated into the Moon, which happens to be orbiting at the exact speed required to neither crash back into the Earth, nor escape into space.
Having the exact correct speed seems extremely unlikely. Yet, our moon is there, and many other planets have moons.
Are these just the few survivors out of thousands of events that didn't have the "Goldilocks" speed?
2022 Edit: I got my "ah HA!" moment where everything makes sense after playing 10 minutes of the tutorial of the "Kerbal Space Program" game. Highly recommended.
Answer: There isn't a "Goldilocks speed" for orbit. If you put two objects in space, and give them a velocity relative to each other, then provided that velocity is less than the escape velocity (at their relative distance) the two objects will orbit each other.
Those orbits will be elliptical, and it is possible that the ellipse is skinny and "eccentric" enough for the two bodies to collide when they are closest to each other. But for an object that is several hundred thousand km from Earth, there is quite a wide range of possible elliptical orbits.
So when (and if) the grand collision happened, there was a huge amount of matter that was ejected up into space. Some was probably moving so fast that it escaped; some certainly went into orbits that didn't have enough energy, small skinny ellipses, and that matter fell back to Earth. But there was a lot that ended up in some kind of elliptical orbit. This matter was not all in the same orbit, but it started to coalesce, and form into a single ball, under its own gravity.
Other moons weren't formed like this; they either formed at the same time as their planets as a "mini solar system" (such as the four major moons of Jupiter), or they were captured from the asteroid or Kuiper belts. Initially, the captured moons may have had rather elliptical orbits.
But most moons are in rather circular orbits. Even if the moon was originally in an elliptical orbit, tidal effects will tend to make the orbit more circular. A planet and moon system has a certain amount of angular momentum and a certain amount of energy. The angular momentum can't change, but energy can be converted into heat and since tides dissipate some energy as heat, the orbit will tend to change to a shape that minimizes energy, for a given amount of angular momentum. That shape is a circle. (See Is the moon's orbit circularizing? Why does tidal heating circularize orbits?)
So the effect of tides is to give moons the "Goldilocks speed" that keeps them in a circular orbit. | {
"domain": "astronomy.stackexchange",
"id": 5265,
"tags": "orbit, the-moon"
} |
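The answer's claim that there is a wide band of workable speeds, rather than one "Goldilocks speed," can be checked numerically with the vis-viva relation. This is an illustrative sketch assuming a purely tangential launch at the Moon's mean distance; the gravitational parameter and radii are standard values:

```python
import math

MU = 3.986e14      # Earth's gravitational parameter GM, m^3/s^2
R_EARTH = 6.371e6  # Earth's radius, m

def stays_in_orbit(r, v):
    """True if a tangential speed v at distance r gives a bound orbit
    whose perigee clears the Earth's surface."""
    v_esc = math.sqrt(2 * MU / r)
    if v >= v_esc:                      # unbound: escapes to infinity
        return False
    a = 1.0 / (2.0 / r - v * v / MU)    # semi-major axis from vis-viva
    e = abs(r * v * v / MU - 1.0)       # eccentricity for a tangential launch
    perigee = a * (1.0 - e)
    return perigee > R_EARTH

r_moon = 3.844e8                        # mean Earth-Moon distance, m
v_circ = math.sqrt(MU / r_moon)         # the Moon's actual speed, ~1.02 km/s
speeds = [v for v in range(0, 2000, 10) if stays_in_orbit(r_moon, v)]
print(f"circular speed ~{v_circ:.0f} m/s; "
      f"bound, non-colliding range ~{speeds[0]}-{speeds[-1]} m/s")
```

At lunar distance, any tangential speed from roughly 200 m/s up to nearly the 1440 m/s escape speed stays in a bound orbit that misses the Earth, which is the "wide range" the answer describes.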
How do electromagnetic waves travel in a vacuum? | Question: This is perhaps a total newbie question, and I will try to formulate it the best I can, so here it goes. How does an electromagnetic wave travel through for example, the vacuum of space?
I usually see that waves are explained using analogies with water, pieces of rope, the strings of a guitar, etc, but it seems to me that all those waves need a medium to propagate. In fact, from my point of view, in those examples the wave as a "thing" does not exist, it's just the medium that moves (involuntary reference to The Matrix, sorry).
But in space there is no medium, so how does a wave travel? Are there free particles of some sort in this "vacuum" or something? I believe the existence of "ether" was discarded by Michelson and Morley, so supposedly there isn't a medium for the wave to travel through.
Moreover, I've seen other answers that describe light as a perturbation of the electromagnetic field, but isn't the existence of the field, potential until disturbed? How can it travel through something it does not exist until it's disturbed by the traveling light in the first place? (this last sentence is probably a big misconception by me).
Answer: The particles associated with the electromagnetic waves, described by Maxwell's equations, are the photons. Photons are massless gauge bosons, the so called "force-particles" of QED (quantum electrodynamics).
While sound or waves in water are just fluctuations (or differences) in the density of the medium (air, solid material, water, ...), photons are actual particles, i.e. excitations of a quantum field. So the "medium" where photons propagate is just space-time, which is still there even in the most abandoned places in the universe.
The analogies you mentioned are still not that bad. Since we cannot visualize the propagation of electromagnetic waves, we have to come up with something we can, which is unsurprisingly another form of a wave, e.g. water or strings.
As PotonicBoom already mentioned, the photon field exists everywhere in space-time. However, only the excitation of the ground state (the vacuum state) is what we mean by the particle called photon. | {
"domain": "physics.stackexchange",
"id": 37991,
"tags": "visible-light, electromagnetic-radiation, vacuum, aether"
} |
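The point that no material medium is needed can be made quantitative: Maxwell's equations predict self-propagating waves whose speed is fixed by the vacuum constants alone. A small check using the standard values of the permeability and permittivity of free space:

```python
import math

# Maxwell's equations in vacuum admit wave solutions with speed
# c = 1/sqrt(mu0 * eps0); no property of a material medium appears.
mu0 = 4 * math.pi * 1e-7      # vacuum permeability, H/m
eps0 = 8.8541878128e-12       # vacuum permittivity, F/m
c = 1 / math.sqrt(mu0 * eps0)
print(f"predicted wave speed: {c:.4e} m/s")
```

The result reproduces the measured speed of light, which historically was strong evidence that light is an electromagnetic wave and that the "medium" is the field itself.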
Explorer Robot ROS/Player interface | Question:
Hello all,
I am new to ROS. I haven't done any coding of my own with it, so I reach out to you guys for some help... I have done quite a bit of player work, and I am very familiar with the way it works. I am totally lost with ROS, so if anybody has some tips for C++ programming with Qt 4, that would be very much appreciated.
Primarily, I have a robot with a Player driver, but no ROS driver. It is an explorer robot. The driver for it is not part of Player, so I need to take the Player driver and use ROS to manipulate it. I am also trying to communicate with the Kinect camera and mount it on the robot. Has anybody done anything related to this?
Thanks everyone!
-Hunter A.
Originally posted by allenh1 on ROS Answers with karma: 3055 on 2011-09-14
Post score: 1
Answer:
There may already be a ROS Explorer driver from Coroware.
If that is not satisfactory, you can port your Player driver to ROS. There are two methods:
Invoke the Player driver (and libplayer) from a ROS node, as was done for the ROS Erratic driver.
Convert the Player driver to a ROS node. The results are generally clean and easy to maintain. If you are interested in this approach, I can provide hints on how to proceed.
Originally posted by joq with karma: 25443 on 2011-09-14
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 6692,
"tags": "ros, player"
} |
What is the difference between the terms collision and merger? How are they used differently in Astronomy? | Question: We often hear of mergers of two stellar objects but we also sometimes talk about these or much smaller objects like planets or asteroids colliding.
The question "What is the actual differences between Astronomy and Cosmology?" received several excellent answers, for example, but here what I'm looking for is whether there is a fairly easy way to differentiate the concept of a merger from that of a collision, or to establish the degree of overlap.
I do have an ulterior motive: in meta there is the question "Do we need a tag for merging?" (I struggled to find something suitable for supermassive black hole mergers in galaxy collisions), but in meta we discuss how the site works and how to maintain or improve it.
Here I'm asking the actual astronomy terminology question:
Question: What is the difference between the terms collision and merger? How are they used differently in Astronomy?
Answer: Partial answer to share what I've found to date:
Galaxies
The term "merger" does have widespread use in galactic dynamics. The word "merger" appears 71 times in Wikipedia's Galaxy merger article, for example, with terms like binary merger, multiple merger, minor merger, major merger, wet merger, dry merger, damp merger, mixed merger, and merger history trees all having explicit definitions there.
Supermassive Black holes
When galaxies merge, there is the question of what happens to the supermassive black hole (SBH) that may be in the center of each. Since galactic mergers and SBH mergers are inextricably linked I'll list their questions together here:
Milky Way Formation
What parameters determine whether galaxies colliding will result in a merger or a hit and run?
Why do we believe that the super massive black holes at the centers of two merging galaxies would themselves merge?
Difference in energy released in stellar mass black hole merger and supermassive black hole merger
Why can't supermassive black holes merge? (or can they?)
How do two supermassive black holes reach "the last parsec" in merging galaxies?
Are there galaxies with 2 or more super massive black holes orbiting each other? "Yes, there are galaxies with two supermassive black holes in the center, see for instance 4C +37.11. Most likely such galaxies are formed by collision and merger of two galaxies, and their cores have not yet merged. Source"
Is there a way to calculate how much damage black hole merger shockwaves inflict on nearby objects?
What enhances the capture and merge rates of pairs of small black holes orbiting around supermassive black holes?
Why would the merger of spinning black holes within the accretion disk of a supermassive black hole cause them to "shoot straight up" out of the disk?
How do we know that supermassive black holes can gain mass by means other than merging with other supermassive black holes?
Stellar objects
But for individual stellar objects the situation does get murky. We have had a collision tag for a while now, and the "merging" of stars due to collisions happening in the centers of dense clusters is a topic first raised decades ago. Whether they merge, or just exchange matter or something else, it seems infinitely safer to stick with collision.
But as @DaddyKropotkin points out:
We're just giving you basic sense of how the terminology is used. It is subject-specific and nuanced since the word "merger" is used in the new field of gravitational wave astronomy, so you could get many different, inconsistent, opinionated answers.
When two objects that are either black holes (BH) or neutron stars (NS) find themselves in extremely close proximity, usually by orbiting each other and spiraling inward as energy is radiated away in the form of gravitational waves, and ultimately touch and combine much or most of their mass into a single object, the last few seconds generate gravitational waves so strong that we can detect, record, and analyze them. These events are then also called "mergers" (BH-BH, BH-NS and NS-NS mergers). See Wikipedia's List of gravitational wave observations; List of gravitational wave events, for example.
How likely are planets to form after neutron star collisions?
GW from merging of neutron stars and black holes "So in the end, the frequency of LIGO detections depends on two things: The real distribution of objects in binaries: NS-NS, NS-BH, and BH-BH and the loudness/brightness of the gravitational waves emitted by the merger."
From Appendix A: Astronomical Terminology:
Energies of the order inferred suggest that gamma ray bursts may originate in the merger of two neutron stars to form a black hole or the capture of a neutron star by a black hole. Such mergers provide almost the only ways in which we can conceive of vast amounts of energy to be liberated rapidly. The potential energy that can be released in these mergers is of order $M_0 c^2 \sim 10^{56}\ \mathrm{erg}$.
As for stars merging:
Could our Sun be the product of an ancient stellar collision? "Stellar mergers are certainly possible, but also relatively rare. [...] Glebbek's dissertation on stellar mergers estimates a rough condition for the orbital angular momentum to exceed the maximum spin angular momentum of the merged star as..."
Could our Sun be the product of an ancient stellar collision? "The dynamics of the Solar System and the chemistry of the Solar System bodies don't support a hypothesis of a stellar merger later than formation of the protoplanetary disk" and "Stellar mergers are certainly possible"
What would happen if a Sun-like star were to consume a Jupiter-like planet?
How far from Betelgeuse is its habitable zone? "In addition to being a highly unstable and variable supergiant, it's a runaway star, suggesting that it was formerly a member of a multiple star system with a companion star that went supernova. Its relatively rapid rotation is difficult to explain via single star evolution, suggesting that it has undergone a stellar merger (Wheeler et al. 2017, Chatzopoulos et al. 2020)"
What can be said?
"Merger" is a solid, standard term when it comes to galaxies and their supermassive black holes.
"Merger" is becoming a standard term for the last few moments of NS-NS, NS-BH and BH-BH gravitational wave events.
In the case of stars, it's murky and "collision" seems to at least adequately cover all possible types of events where there is substantial combination of two stars' mass into one "thing" which could be a supernova, a star or neutron star or black hole or something else. While the term "merger" might be used from time to time by some folks, "collision" will be understood by all.
Things smaller than stars (e.g. planets, protoplanets, asteroids, dust...)
I think that again in this case "collision" will be the right term, though in solar system formation there is plenty of merging of objects to make larger objects. This needs to be explored further as this answer identifies itself as a "Partial answer to share what I've found to date". | {
"domain": "astronomy.stackexchange",
"id": 5622,
"tags": "terminology, definition, collision, merger"
} |
Can I melt Barium Titanate powder into a cylinder and preserve the high dielectric? | Question: I would like to make various solid forms of Barium Titanate e.g. cylinders, discs.
Barium Titanate powder seems widely available and has a melting point of 1,625 °C. Can I melt it in an iron forge into a mould, and will this preserve the high dielectric constant?
Is there perhaps an easier, low-temperature way to do this, e.g. a solvent or additive that achieves the same? Others have tried embedding the powder in epoxy, but with substandard K values (a maximum of 27, instead of the 7,000 of the raw powder).
Answer: As you know, barium titanate is a piezoelectric ceramic material. It owes its piezoelectric properties to the specific spatial arrangement of its constituent atoms. Melting this material means that it will lose its crystalline structure upon heating (note also that its Curie temperature is $T_C = 120\,^{\circ}\mathrm{C}$, i.e. above this temperature it loses its ferroelectric property).
On the other hand, processing and shaping of ceramic materials are usually performed during the preparation of these materials, not after. Heating to temperatures around $1600\,^{\circ}\mathrm{C}$ to process ceramics is not a common procedure and it is expensive.
If you want to prepare cylinders and discs from a powder of barium titanate, you can add it to a polymer matrix (wax, epoxy resin,..) in an appropriate mold using adequate processing techniques like injection molding (heating below Curie Temperature). Of course, the resulting dielectric constant of the composite material is less than the one of pure barium titanate. | {
"domain": "chemistry.stackexchange",
"id": 4343,
"tags": "synthesis, melting-point"
} |
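A rough feel for why embedding the powder in a polymer collapses the dielectric constant can be obtained from an empirical mixing rule. The sketch below uses the Lichtenecker logarithmic rule (one of several such rules); the epoxy permittivity of 3.6 and the filler fractions are illustrative assumptions, not values from the question:

```python
import math

def lichtenecker_k(k_filler, k_matrix, vol_frac_filler):
    """Effective dielectric constant of a two-phase composite under the
    empirical Lichtenecker logarithmic mixing rule:
    ln(K_eff) = f * ln(K_filler) + (1 - f) * ln(K_matrix)."""
    return math.exp(vol_frac_filler * math.log(k_filler)
                    + (1 - vol_frac_filler) * math.log(k_matrix))

K_BT, K_EPOXY = 7000, 3.6   # assumed values for BaTiO3 and a generic epoxy
for frac in (0.2, 0.4, 0.6):
    k_eff = lichtenecker_k(K_BT, K_EPOXY, frac)
    print(f"{frac:.0%} filler -> K_eff ~ {k_eff:.0f}")
```

Even at high filler loadings the low-permittivity matrix dominates, which is consistent with the "max 27 instead of 7,000" figures quoted in the question.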
Why is the normal contact force horizontal on an inclined ladder? | Question:
There is only one force acting on the ladder which is its weight and it acts vertically downwards. Then why does the normal contact force from the vertical wall act horizontally on the ladder? There must be a horizontal force acting on the wall to exert a horizontal force on the ladder. What causes the horizontal force on the wall and what is it called?
Answer: I feel like there is something missing in this diagram, which is torque. In reality, there is a torque on the ladder, due to gravity, which causes it to want to rotate counterclockwise around the point where it touches the floor. This torque is "responsible" in some sense for the force of the top of the ladder against the wall (and the counterbalancing force of the foot of the ladder against the floor's friction.)
I don't see any torques in your free body diagram, although I do see an angle "alpha" at the base of the ladder, which is suggestive that maybe there should be some. If you haven't covered torque yet, this is not a great problem to try to work through. | {
"domain": "physics.stackexchange",
"id": 96051,
"tags": "newtonian-mechanics, forces, friction, free-body-diagram, statics"
} |
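The torque balance the answer alludes to can be written out. A minimal sketch assuming a uniform ladder of weight W leaning at angle alpha from the floor against a frictionless wall (the standard textbook setup; the 200 N and 60 degrees below are made-up example numbers):

```python
import math

def wall_force(weight, alpha_deg):
    """Horizontal normal force from a frictionless wall on a uniform ladder.
    Torque balance about the foot:
    N_wall * L*sin(alpha) = W * (L/2)*cos(alpha)."""
    a = math.radians(alpha_deg)
    return weight / (2 * math.tan(a))

# A 200 N ladder at 60 degrees from the floor:
print(f"N_wall = {wall_force(200, 60):.1f} N")
```

Note that the steeper the ladder, the smaller the wall force, and by Newton's third law the ladder pushes on the wall with the same horizontal force, supplied at the foot by floor friction.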
Can the universe still expand without energy and matter? | Question: I heard that dark energy is an intrinsic property of space, and that it cannot be a particle because a particle's density dilutes as the volume increases. I do not know how nothingness can have a property. Einstein came up with a cosmological constant to fix his static universe model, so mathematically he showed that space by itself can expand. Imagine a universe where empty space is devoid of energy and matter: can it expand?
Answer: The universe can expand without matter or radiation, but it cannot expand without energy. You can have “dark energy” without either matter or radiation.
In fact, right now dark energy is driving the accelerating expansion of our universe, and the matter and radiation in it are playing a diminishing role. In the current standard cosmological model, the effect of matter and radiation will become negligible and the dark energy will cause exponential expansion.
You should not think of spacetime as nothingness. It has nontrivial geometric structure and the metric field that determines its geometry can carry energy, momentum, and angular momentum from place to place just like the electromagnetic field does. | {
"domain": "physics.stackexchange",
"id": 55278,
"tags": "general-relativity, space-expansion, dark-energy, cosmological-constant"
} |
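The exponential expansion mentioned in the answer follows from the Friedmann equation when dark energy is the only component: the Hubble rate is then constant and the scale factor grows as an exponential (a de Sitter universe). A small sketch; the Hubble constant value (~68 km/s/Mpc) is an assumed round number:

```python
import math

# Friedmann equation with dark energy only: (a'/a)^2 = H0^2 * Omega_L,
# so the Hubble rate is constant and a(t) grows exponentially.
H0 = 2.2e-18        # Hubble constant in 1/s (~68 km/s/Mpc), assumed value
OMEGA_L = 1.0       # toy universe: dark energy only, no matter or radiation

def scale_factor(t):
    """Analytic de Sitter solution a(t) = exp(H0*sqrt(Omega_L)*t), a(0)=1."""
    return math.exp(H0 * math.sqrt(OMEGA_L) * t)

gyr = 3.156e16      # seconds per gigayear
for t in (10 * gyr, 20 * gyr, 30 * gyr):
    print(f"t = {t/gyr:.0f} Gyr -> a = {scale_factor(t):.3f}")
```

With these numbers the universe doubles in size roughly every 10 Gyr forever, with no matter or radiation anywhere, which is the scenario the question asks about.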
One lab, multiple robots, multiple independent projects | Question:
I just came across this quote:
"For use cases with multiple robots it is generally recommended to use multiple masters and forward specific tf information between the robots. There are several different methods of implementing bridges between masters. For more information please see the sig/Multimaster."
In my case, we have a small robotics lab with around 10 robots. Students pair up and each pair is assigned a robot. Obviously we want the different student projects not to interfere with each other. We are currently running a single roscore.
The above quote (from the tf2 documentation) implies some problems with what we are doing. I need a little help clarifying or elaborating on the above caveat as well as the "right" way to do what I am trying to do?
Originally posted by pitosalas on ROS Answers with karma: 628 on 2019-06-21
Post score: 0
Answer:
You should run a separate roscore for each robot. I usually run the roscore on the robot.
Originally posted by ahendrix with karma: 47576 on 2019-06-21
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by pitosalas on 2019-06-21:
Can you say why? What problems does it solve? What problems does it create? Right now I have it working apparently fine with name spacing. Thanks.
Comment by ahendrix on 2019-06-21:
Running a separate roscore for each robot doesn't require namespacing or TF prefixes.
Comment by gvdhoorn on 2019-06-22:
Obviously we want the different student projects not to interfere with each other.
And it also automatically isolates each robot -- and connected ROS nodes -- from each other.
Nothing prevents anyone from using the namespace of a different robot and mucking things up.
Using a separate master makes this much harder (not impossible, as it's a form of security-through-obscurity, but still).
Comment by pitosalas on 2019-06-22:
Thanks @gvdhoorn. Much better than actual answer. My robots have just a raspberry pi. Would running core on them be a cpu or memory load problem do you think?
Comment by gvdhoorn on 2019-06-22:
A master does not use significant amounts of CPU, it should be fine.
Unless students start using parameters as topics -- which I have seen -- but that would obviously be a no-no in any case. | {
"domain": "robotics.stackexchange",
"id": 33240,
"tags": "ros, ros-kinetic, multimaster"
} |
What is the practical application of Rayleigh number to heat sinks? | Question: I understand that there is such a metric as the Rayleigh number which governs convective cooling in a medium. Assume for example, a fist sized heatsink like that below in free air:-
Does the Rayleigh number mean that convection might not start between the fins? So in effect, the finned heatsink might as well be a large solid lump? And insofar as thermal resistance goes, the thermal conductivity would only depend on the gross overall dimensions (height x width x length) and not the actual finned area?
Answer: The Reynolds number is irrelevant in these situations because it is very small and does not deal with buoyancy and natural convection. There is instead a similitude parameter called the Grashof number, which deals with the balance between buoyant forces and viscous forces, that is relevant here.
"domain": "engineering.stackexchange",
"id": 2350,
"tags": "thermodynamics"
} |
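The question's intuition can be tested with numbers. The sketch below computes a gap-based Rayleigh number for the space between fins and compares it to the classic critical value of about 1708; strictly, that threshold is for a horizontal fluid layer heated from below, so for vertical fin channels this is only an order-of-magnitude screen. Air properties near room temperature and a 40 K fin-to-air temperature difference are assumed:

```python
# Rough check of whether buoyant convection can start in the gap between
# heat-sink fins: compare the gap Rayleigh number to the classic critical
# value ~1708 for convection onset.
g = 9.81          # gravitational acceleration, m/s^2
beta = 1 / 300    # thermal expansion coefficient of air, 1/K
nu = 1.6e-5       # kinematic viscosity of air, m^2/s
alpha = 2.2e-5    # thermal diffusivity of air, m^2/s

def rayleigh(delta_t, gap):
    """Rayleigh number based on the fin gap as the characteristic length."""
    return g * beta * delta_t * gap**3 / (nu * alpha)

for gap_mm in (2, 5, 10):
    ra = rayleigh(40, gap_mm / 1000)
    verdict = "convection plausible" if ra > 1708 else "likely suppressed"
    print(f"gap {gap_mm} mm: Ra = {ra:.0f}  ({verdict})")
```

Because Ra scales with the cube of the gap, closely spaced fins really can sit below the convection threshold, so the air between them behaves more like a stagnant insulating layer, supporting the "solid lump" intuition for very tight fin spacing.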
What is black-body equivalent of UV part of solar spectrum? | Question: If all non-UV light was filtered from sunlight, does this approximate a different type of black body radiation? Regular sunlight has a black-body temperature of 5777 K.
This is in relation to the problem of concentrating sunlight to a temperature higher than 5777 K. Thermodynamics rules this out. However, if only the UV portion of the solar spectrum is used in some hypothetical solar concentrator, could a temperature higher than 5777 K be achieved?
Answer: You are quite right that it is thermodynamically impossible to heat something hotter than the surface of the sun using only sunlight and passive optical elements (such as a filter that would block some wavelengths). Heat would then be spontaneously flowing from somewhere less hot to somewhere hotter, and that doesn’t happen.
The roadblocks you will encounter when trying to do this don’t really have anything to do with the spectrum of light coming from the sun—you can heat something to pretty high temperature in a microwave oven, even though the wavelength of the radiation that does the heating corresponds to the peak emission of a very, very cold blackbody. It’s all about the intensity of the radiation.
It is impossible to passively concentrate light to an intensity higher than the source. Having used a magnifying glass to start a fire, you might think that you just need a big enough magnifying glass. But as you use a bigger and bigger magnifying glass to focus the sunlight, the image you form of the sun gets bigger and bigger too, spreading the increased power over a larger area. And the hottest it can ever get is the temperature of the sun. If you filter out some of the light, it’s less hot than that. | {
"domain": "physics.stackexchange",
"id": 76378,
"tags": "thermodynamics, visible-light, temperature, radiation, thermal-radiation"
} |
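It is also worth seeing how little of the Sun's output the UV filter would keep. The sketch below numerically integrates the dimensionless Planck distribution to get the fraction of a 5777 K blackbody's power emitted below 400 nm (the 400 nm UV cutoff is an assumption for illustration):

```python
import math

def power_fraction_below(wavelength_m, temp_k, n=20000):
    """Fraction of blackbody power at wavelengths shorter than wavelength_m,
    via the dimensionless Planck integral
    F = (15/pi^4) * int_x^inf u^3/(e^u - 1) du,  x = h*c/(lambda*k*T),
    evaluated with the trapezoid rule."""
    x = 0.0143877688 / (wavelength_m * temp_k)   # h*c/k = 0.0143878 m*K
    hi = max(x + 40.0, 50.0)                     # integrand negligible beyond
    du = (hi - x) / n
    total = 0.0
    for i in range(n + 1):
        u = x + i * du
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * u**3 / math.expm1(u)
    return 15 / math.pi**4 * total * du

frac_uv = power_fraction_below(400e-9, 5777)
print(f"fraction of 5777 K blackbody power below 400 nm: {frac_uv:.3f}")
```

Only about a tenth of the power survives the filter, so, in line with the answer, filtering does nothing to help and simply throws away intensity.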
Is initial stream formation in a drainage basin random? | Question: It's known that stream orders are highly regular:
Horton showed that stream order is related to number of streams, channel length, and drainage area by simple geometric relationships; that is, stream order plots against these variables as straight lines on semilogarithmic paper.
[...] Among many samples of basins in the United States the bifurcation ratio tends closely to equal 3.5. There are variations, of course. In the examples of basins cited by Horton (1945, p. 290) values of the bifurcation ratio range from 2 to 4.
Fluvial Geomorphology, p.137
The stream bifurcation ratio is so regular in any given drainage basin that it's actually remarkable to look at a graph - as in these Indian continent sub-basins.
The process Fluvial Geomorphology describes for the evolution of these basins involves the random weathering of locally-level surfaces by rain, melt, or less commonly other liquid weathering. Small rivulets form via random depressions in the surface of the earth, which then merge into larger streams of higher order. But this got me thinking...
If the emergence of initial rivulets is a random process, shouldn't that random process be biased towards wherever the drainage basin happens to be the most erodible? I'd only expect this kind of macro-level structure to necessarily emerge if the earth were homogeneous in the drainage basin, but in practice this doesn't seem to need to be required in order for a drainage basin following a regular bifurcation ratio to emerge.
My questions are related:
Are there any known criteria for which drainage basins do not evolve in a way that produces regular stream orders? (When) has this happened?
Is my assumption that this macro-level order should only emerge for homogeneous substrate incorrect?
Answer: There are what are called "structure driven" stream courses, these occur where the formations underlying the stream bed have a large impact on the fluvial forms in the drainage basin. This is most common where the substrate has been fractured by tectonic motion or where there are inconsistencies in unit erodibility in bedded formations. These conditions force streambeds to follow lines of weakness that inhibit many "normal" behaviours such as bifurcation rates.
You can also get direct tectonic effects on the paths of streambeds where they cross faultlines. This can be in the form of damming, ponding, and overflow episodes after earthquakes. Waterfalls across surface faults are not uncommon. Nor are gorges where a river has cut across a fault driven orographic zone. One can also see "kinked" river beds where a river flowing across a faultline is bent by movement along the fault; in this case the river will follow the line of the fault for some distance before regaining the bed that has been displaced by its motion.
I hope that answers your question but hit me up in the comments if you want more. | {
"domain": "earthscience.stackexchange",
"id": 2376,
"tags": "rivers, geomorphology"
} |
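The question's premise, that regular bifurcation ratios emerge from random rivulet formation, can be illustrated with a toy model in the spirit of Shreve's random-topology networks. The sketch below merges random order-1 rivulets pairwise under the Strahler rule and counts streams created per order; it is a purely topological caricature (no erodibility, no geometry), so the ratio it produces should not be read as a prediction for any real basin:

```python
import random

def simulate_basin(n_sources, rng):
    """Toy model: n_sources order-1 rivulets merge pairwise at random.
    Strahler rule: order w + order w -> order w+1 (a new stream);
    unequal orders -> the higher-order stream simply continues."""
    counts = {1: n_sources}          # streams created, keyed by order
    active = [1] * n_sources
    while len(active) > 1:
        i, j = rng.sample(range(len(active)), 2)
        a, b = active[i], active[j]
        for idx in sorted((i, j), reverse=True):
            active.pop(idx)
        if a == b:
            new = a + 1
            counts[new] = counts.get(new, 0) + 1
        else:
            new = max(a, b)
        active.append(new)
    return counts

rng = random.Random(42)
counts = simulate_basin(4096, rng)
ratios = [counts[w] / counts[w + 1] for w in sorted(counts) if w + 1 in counts]
print("streams per order:", counts)
print("bifurcation ratios:", [f"{r:.2f}" for r in ratios])
```

Even with completely random merging, the stream counts fall off roughly geometrically with order, i.e. Horton-like regularity appears without any assumption of substrate homogeneity, which bears on the second question in the post.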
Calculating temperature after adding heat to ice | Question:
If $53.2~\mathrm{kJ}$ of heat are added to $15.5~\mathrm{g}$ of ice at $-5^\circ\mathrm{C}$, what will be the resulting state of the water, and what is its final temperature?
I have done this:
Step 1: Energy required to make ice at 0 Celsius
Step 2: Energy required to make water at 0 Celsius
Step 3: Energy required to make steam at 0 Celsius
Step 4: Energy required to make steam at 100 Celsius
Step 5: Energy required to make steam at 'x-100' Celsius
Please leave a comment stating if any errors are present. The answer I got is x = 808 Celsius. The correct answer is 303 Celsius.
Answer: It is an enthalpy balance that requires knowledge of the enthalpy change at each step: (The steps you have listed are not correct.)
Solid heat capacity of ice of the form e.g. $C_p^\mathrm{solid}(T)$. This allows you to calculate the amount of heat (or energy) required to warm the sub-cooled ice from $-5~^{\circ}\mathrm{C}$ to $0~^{\circ}\mathrm{C}$.
(Latent) heat of fusion, $\Delta H_\mathrm{fus}$ then must be added to transform the solid ice to liquid water at $0~^{\circ}\mathrm{C}$.
Next we must warm the liquid water from $0~^{\circ}\mathrm{C}$ to its boiling point at $100~^{\circ}\mathrm{C}$, which requires knowledge of the liquid heat capacity of water e.g. $C_p^\mathrm{liquid}(T)$.
Now at the boiling point, we must account for the required amount of energy to transform the liquid water into its vapor state at $100~^{\circ}\mathrm{C}$ using the (latent) heat of vaporization of water $\Delta H_\mathrm{vap}$.
The final step then involves super-heating the vapor from $100~^{\circ}\text{C}$ to $T_\mathrm{final} = ?~^{\circ} \mathrm{C}$, which requires knowledge of the vapor heat capacity of steam e.g. $C_p^\mathrm{vapor}(T)$.
You know the overall enthalpy change ($53.2~\mathrm{ kJ}$) and you know the starting temperature ($-5~^{\circ}\mathrm{C}$), so all you have to do is add the results from each of the steps and solve for $T_\mathrm{final}$ e.g. $303~^{\circ}\mathrm{C}$. | {
"domain": "chemistry.stackexchange",
"id": 5460,
"tags": "homework, thermodynamics, water, heat"
} |
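The five-step balance in the answer can be carried out numerically. The property values below are common textbook numbers (the problem statement does not supply them, so they are assumptions):

```python
# Enthalpy balance for the steps in the answer, using common textbook
# property values for water (assumed, not given in the question):
CP_ICE = 2.09      # J/(g*K), heat capacity of ice
H_FUS = 334.0      # J/g, latent heat of fusion at 0 C
CP_WATER = 4.18    # J/(g*K), heat capacity of liquid water
H_VAP = 2260.0     # J/g, latent heat of vaporization at 100 C
CP_STEAM = 2.01    # J/(g*K), heat capacity of steam

m = 15.5           # g of ice
q_total = 53_200.0 # J of heat added

q_used = m * (CP_ICE * 5          # step 1: ice, -5 C -> 0 C
              + H_FUS             # step 2: melt at 0 C
              + CP_WATER * 100    # step 3: water, 0 C -> 100 C
              + H_VAP)            # step 4: boil at 100 C
# step 5: the leftover heat superheats the steam above 100 C
t_final = 100 + (q_total - q_used) / (m * CP_STEAM)
print(f"heat consumed up to steam at 100 C: {q_used/1000:.1f} kJ")
print(f"final state: superheated steam at ~{t_final:.0f} C")
```

With these values the first four steps consume about 46.8 kJ, leaving roughly 6.4 kJ to superheat the vapor; slightly different steam heat-capacity data reproduce the book's 303 °C.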
What is quantum about the no-cloning theorem? | Question: I have variously heard people describe the no-cloning theorem as an essential feature of "quantum physics", akin to saying "we cannot copy arbitrary quantum information to arbitrary precision". However, this question is about the interpretation of the quantum result only insofar as this sheds light on the classical result presented in the course of the question.
The basic formal statement of the theorem is that given a Hilbert space $H$, there is no unitary operator $U : H \otimes H \to H \otimes H$ such that
$$ U (\lvert a \rangle \otimes \lvert b\rangle) = \lvert a\rangle \otimes \lvert a\rangle$$
for all $\lvert a\rangle \in H$ and a "blank" state $\lvert b\rangle$. The question is, is anything about this result surprising or "quantum" from the viewpoint of classical mechanics?
Here's an argument that "no-cloning" also holds in classical mechanics, taken almost verbatim from "There's no cloning in symplectic mechanics" by Fenyes:
Let $(M,\omega)$ be a symplectic manifold, where $\omega$ is the symplectic form that encodes what we physicists call the Poisson bracket via $\omega(X_f,X_g) = \{f,g\}$, and $X_f$ is the vector field defined by $\mathrm{d} f = \omega(X_f,-)$. Then all physical motions on $M$ are symplectomorphisms, i.e. functions $M\to M$ that preserve $\omega$, because they are integral flows of the Hamiltonian vector field $X_H$, which is a symplectic vector field by construction.
The combined phase space of two systems $(M,\omega),(M',\omega')$ is $(M\times M',\omega + \omega')$, where $\times$ is the Cartesian product of manifolds. Now, the classical analogue to the no-cloning theorem would clearly be the statement that there is no symplectomorphism $\phi : M\times M\to M\times M$ such that
$$ \phi(a,b) = (a,a)$$
for all $a\in M$ and a blank state $b\in M$. And, indeed, this is true:
Let $u,v\in T_b M$ be tangent vectors at $b$. Since $\phi(x,b) = (x,x)$ by assumption, curves $(\gamma(t),b)$ starting at $(b,b)$ get mapped to curves $(\gamma(t),\gamma(t))$ starting at $(b,b)$, and so $\mathrm{d} \phi_{(b,b)} (w,0) = (w,w)$ for all $w\in T_b M$. Therefore,
\begin{align} & (\omega + \omega)((u,0),(v,0)) = (\omega + \omega)((u,u),(v,v)) \\
\implies & \omega(u,v) + \omega(0,0) = \omega(u,v) + \omega(u,v) \\
\implies & \omega(u,v) = 0\end{align}
which is a contradiction because symplectic forms are non-degenerate by definition. Therefore, no classical Hamiltonian cloning map exists.
So, what does this result actually show? Are the assumptions of the no-cloning theorem silly and the desired cloning map does not actually reflect what we mean by being able to copy arbitrary information in either case? Is there a subtle difference between the classical and the quantum setting which makes the assumptions silly in the classical, but not in the quantum setting? If the assumptions are not silly, then what is the significance of the classical result?
Answer: The answer seems to be given in "Limitations on Cloning in Classical Mechanics", also by Fenyes.
Let me summarize it: the point is that the notion of copying that is usually seen (such as in the OP) is actually not what we mean by copying. It only allows for the object that we want to copy, and the new object. In that case it is indeed impossible to clone. But the linked article shows that if we also include a copier, then it becomes possible to clone in the classical case (but remains impossible in the quantum case).
(Note: he actually doesn't prove that it's always possible classically, but at least he gives some explicit examples where it is.) | {
"domain": "physics.stackexchange",
"id": 35927,
"tags": "quantum-mechanics, classical-mechanics, hamiltonian-formalism"
} |
What would be the minimal size of an aerial vehicle capable of sustained supersonic flight? | Question: I wonder, what's the smallest possible size for a UAV capable of sustained supersonic flight at the current technology level? Let's say 10 minutes of flight at 1.1 M.
Answer: As I mentioned in the comments, physicists are rarely concerned with the "current technology level" and are more interested in the ultimate laws of physics. There is no physical law that places an ultimate limit on the size and weight of an aircraft. You could reduce it to one atom, if you can call one atom an aircraft :P. It all comes down to engineering and not physics.
I managed to find this article though:
supersonic unmanned aerial vehicles close to becoming reality | {
"domain": "physics.stackexchange",
"id": 12477,
"tags": "aerodynamics"
} |
actionlib shutdown because of getState() | Question:
The code is like:
return this->moveBaseClient->getState();
while moveBaseClient is:
typedef actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> MoveBaseClient;
std::shared_ptr<MoveBaseClient> moveBaseClient;
calling at 20 Hz, and this happened:
robot_decision_node: /opt/ros/kinetic/include/actionlib/managed_list.h:168: const T& actionlib::ManagedList<T>::Handle::getElem() const [with T = boost::shared_ptr<actionlib::CommStateMachine<move_base_msgs::MoveBaseAction_<std::allocator<void> > > >]: Assertion `valid_' failed.
Am I using getState wrong?
I changed the frequency to 10 Hz, and now it works fine.
But is it still a bug?
Originally posted by zhazha on ROS Answers with karma: 21 on 2020-11-30
Post score: 1
Original comments
Comment by miura on 2020-12-01:
You should also post the code that sets moveBaseClient.
Comment by zhazha on 2020-12-01:
Hi, I have updated the question with the definition of moveBaseClient.
Answer:
You may not have created an instance properly.
Create it as MoveBaseClient moveBaseClient("move_base", true); and then this->moveBaseClient.getState() should work.
If it has to be a pointer, it would work with std::shared_ptr<MoveBaseClient> moveBaseClient = std::make_shared<MoveBaseClient>("move_base", true);.
Originally posted by miura with karma: 1908 on 2020-12-06
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by zhazha on 2020-12-06:
I have initialized the pointer like this:
if (!moveBaseClient.get()) {
    moveBaseClient = std::make_shared<MoveBaseClient>("/move_base");
}
and I run the node at 10 Hz; it works fine.
Comment by miura on 2020-12-06:
Congratulations. If you would like, please mark it as correct. Or, please post your own answer.
"domain": "robotics.stackexchange",
"id": 35816,
"tags": "ros, ros-kinetic, simpleactionclient"
} |
How to calculate Zak phase from numerical wavefunctions with arbitrary phase? | Question: In numerical calculations, an arbitrary gauge or phase attached to a wavefunction at a particular $\mathbf{k}$ places an obstacle in calculating the Berry connection $$\mathcal{A}(\mathbf{k})=\langle u_{n}(\mathbf{k})|\nabla_{\mathbf{k}}u_{n}(\mathbf{k})\rangle$$ because of the derivative. In 2D, for instance, using Berry curvature formulae can overcome this when calculating the Chern number.
However, for a 1D system, is there a way to calculate the (quantized, when there's appropriate symmetry) Zak phase $$\gamma =\oint_\mathrm{BZ}d\mathbf{k}\mathcal{A}(\mathbf{k})$$ without being affected by the arbitrary gauge of the wavefunctions?
Answer: You can do this in the same way that Wannier charge centers are computed (see papers by Vanderbilt).
Suppose $\mathcal{C}$ is some closed path in $\mathbf{k}$-space (e.g. a 1D BZ). We'll define the Berry phase of the $n$th band along $\mathcal{C}$ as (note the imaginary unit, absent in your definition):
$$\gamma_n(\mathcal{C}) = \mathrm{i}\oint_{\mathcal{C}} \langle u_{n\mathbf{k}}|\boldsymbol{\nabla}_{\mathbf{k}}u_{n\mathbf{k}}\rangle\cdot\mathrm{d}\mathbf{k}.$$
The discrete formulation can be obtained by using e.g. forward differences and eliminating the gauge dependence by cleverly taking logarithms of 1 + small terms (or by parallel-transport reasoning). If we suppose the path is discretized into (not necessarily equidistant) $\mathbf{k}_i$ steps with $i=1,\ldots, N$ and $\mathbf{k}_{N+1} \equiv \mathbf{k}_1$, the end result is:
$$\gamma_n(\mathcal{C}) = \mathrm{Im}\log \prod_{i=1}^N \langle u_{n\mathbf{k_i}}|u_{n\mathbf{k}_{i+1}}\rangle$$
You can view this as the product (i.e. phase summation) of $N$ small rotations of the eigenvector's phase as it's transported along $\mathcal{C}$; the $\mathrm{Im}\log$-part merely picks out the phase.
If $\mathcal{C}$ is a non-contractible path in the BZ along a reciprocal lattice vector $\mathbf{G}$, it is desirable to enforce a periodic gauge, in which case one would take $u_{n\mathbf{k}_{N+1}}(\mathbf{r}) = u_{n\mathbf{k}_1}(\mathbf{r})\mathrm{e}^{-\mathrm{i}\mathbf{G}\cdot\mathbf{r}}$.
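To make the discrete formula concrete, here is a small self-contained sketch (my own illustration, not from the answer) applying it to the lower band of the two-band SSH model, with hopping parameters $t_1, t_2$ of my choosing. The eigenvectors returned by the solver carry an arbitrary phase at each $\mathbf{k}_i$, which is exactly what the Wilson-loop product is insensitive to.

```python
import numpy as np

def zak_phase(t1, t2, N=200):
    """Zak phase (mod 2*pi) of the lower SSH band via the discrete Wilson loop.

    The arbitrary phase eigh attaches to each eigenvector cancels, because
    each |u_i> enters the product once as a bra and once as a ket.
    """
    ks = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    vecs = []
    for k in ks:
        h = t1 + t2 * np.exp(-1j * k)
        H = np.array([[0.0, h], [np.conj(h), 0.0]])
        _, v = np.linalg.eigh(H)      # eigenvalues in ascending order
        vecs.append(v[:, 0])          # lower band
    prod = 1.0 + 0.0j
    for i in range(N):                # periodic closure: u_{N+1} = u_1
        prod *= np.vdot(vecs[i], vecs[(i + 1) % N])
    return np.angle(prod)             # the Im log of the product

print(zak_phase(1.0, 0.5))   # trivial phase, close to 0
print(zak_phase(0.5, 1.0))   # topological phase, close to +/- pi
```

Since the phase is only defined mod $2\pi$ (and the overall sign is a convention), the topological value comes out as $\pm\pi$.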
Z2Pack is a tool which implements this (to a much greater degree of generality...). That's also a good starting point for further reading. | {
"domain": "physics.stackexchange",
"id": 49106,
"tags": "condensed-matter, solid-state-physics, computational-physics, topological-field-theory, topological-insulators"
} |
Negative mass inside a black hole | Question: With Hawking radiation, one half of a virtual pair falls into the horizon, and this particle has negative energy.
What would an observer inside the horizon observe when seeing negative particles?
How do these negative particles interact with ordinary matter?
Answer: The Schwarzschild solution for a neutral black hole is
$$ c^2 {d \tau}^{2} = \left(1 - \frac{r_s}{r} \right) c^2 dt^2 - \left(1-\frac{r_s}{r}\right)^{-1} dr^2 - r^2 \left(d\theta^2 + \sin^2\theta \, d\varphi^2\right) $$
You see that the terms $dt^2$ and $dr^2$ are multiplied by $(1-r_s/r)$ or its inverse where $r$ is the radial coordinate and $r_s$ is a constant, the Schwarzschild radius.
An important subtlety of this $(1-r_s/r)$ is that it becomes negative for $r\lt r_s$; this is true for any black hole beneath its event horizon. That's why in the black hole interior, the changes of the coordinate $r$ are actually timelike and the changes of the coordinate $t$ are spacelike; the roles of space and time are interchanged in the interior relative to the exterior!
When we say that the outgoing/infalling particles from the pair have positive/negative energy, we are talking about the energy that generates translations of the $t$ coordinate. However, the $t$ coordinate is really spacelike in the black hole interior so the $t$-component of the energy-momentum vector is interpreted as a spatial component of the energy-momentum vector by observers inside.
It means that from the viewpoint of internal observers who are capable of observing the infalling particle (and not all of them can!), it is just an ordinary particle with some value of the momentum $p_t$, which is a spatial component and may unsurprisingly be either positive or negative. There is certainly no "new kind of matter" occurring in general relativity right beneath the horizon. All the local physics obeys the same laws as it does outside the black hole.
This general assertion is a special example of the fact that the event horizon is a coordinate singularity, not an actual singularity. When one chooses more appropriate, Minkowski-like coordinates for the region of the spacetime near the horizon, it looks almost flat – as seen from the fact that the Riemann curvature tensor has very small values, at least if the black hole is large enough. So an observer crossing the event horizon of a large enough black hole doesn't feel anything special at all. His life continues for some time before he approaches the singularity and this is where the curvature becomes intense and where he's inevitably killed by extreme phenomena. But life may continue fine near the horizon and even beneath it.
Note that the energy is the only conserved component of the energy-momentum vector on the Schwarzschild background – because this solution is time-translationally invariant but surely not space-translationally invariant. Due to the intense curvature caused by the black hole, one must be careful and not interpret individual components of vectors "directly physically". We saw an example where what looks like a temporal component of a vector, namely energy, from the viewpoint of the observer at infinity, can really be a completely different, spatial, component from the viewpoint of coordinates appropriate for a different observer, one who is inside. | {
"domain": "physics.stackexchange",
"id": 4639,
"tags": "particle-physics, black-holes"
} |
Direction of Integration in Biot Savart's Law (Line Integral) | Question: Let's say we have a loop with a clockwise current, and my angle increases in the counter-clockwise direction (that is, $\hat{\phi}$ is counter-clockwise).
I have $$B=\frac{\mu_{0} i\vec{dl} \times \vec{r}}{4 \pi |\vec{r}|^{3}}$$
If I were to evaluate the line integral counter-clockwise, then when I change this integral in terms of $\phi$, my integral goes from $0$ to $2 \pi$.
If I evaluate this integral clockwise, when I switch this integral in terms of $\phi$, my integral goes from $2 \pi$ to $0$ $-$ but now my $\vec{dl}$ is $-R d\phi \hat{\phi}$, so the two negative signs just compensate.
However, in one case, the current is in the same direction as $\vec{dl}$ and in the other, it is antiparallel, so it looks like I will get two different answers (although I expect the same answer)!
Which, of course, just leaves me confused.
Thanks for any help!
Answer: The actual Biot-Savart law for a wire carrying a steady current reads
$$B(\vec{r})=\frac{\mu_0}{4\pi}\int\frac{\vec{I}\times\vec{r'}}{r'^3}d\ell'$$
where $\vec{r'}$ is a vector pointing from the current element to the field point $\vec{r}$. What you have written is a simplification that we can always make for a steady-current-carrying wire because $I$ is just some constant. So we define a vector $\vec{d\ell'}$ that satisfies the condition
$$\vec{d\ell'}=d\ell \hat{I}$$
where
$$\hat{I} = \frac{\vec{I}}{|{I}|}$$
and thus it is equivalent to write this as
$$B(\vec{r})=\frac{\mu_0 I}{4\pi}\int\frac{\vec{d\ell'}\times\vec{r'}}{r'^3}$$
which is convenient. The answer for your question is then that it absolutely matters which way you choose to integrate: you must integrate in the direction that the current flows. You're correct to think that the direction is not arbitrary. | {
"domain": "physics.stackexchange",
"id": 58109,
"tags": "electromagnetism, magnetostatics"
} |
Is this expert report wrong about basic kinematics? | Question: This question is about the application of kinematics in an expert report for Eirikson v. Breton, ABQB 2000 798 (archived), a judgement for a case where a woman was driving a car that was accelerating as it crashed into a parked fire truck, resulting in her death (para. 65). Her family sued the city, and the city was somehow found to be 100% liable for the incident (para. 77).
There are multiple expert reports, but this is the one in question:
[48] Dr. Navin is an engineer and director of Hamilton-Finn Road Safety Consultants Ltd. Dr. Finn provided an opinion with respect to the visibility available to Ms. Eirikson and the amount of time needed to react to the fire truck. He made a site visit in March, 2000. He concluded that the flashing beacons of the fire truck would first be visible about 610 metres prior to the collision. The fire truck would have remained in the driver’s view for the entire duration until collision, if the driver continued in the median lane. The right side of the fire truck would have come into the driver’s field of view at about 183 metres. Finally, the fact that the fire truck occupied part of the median lane would be apparent to a driver at about 163 metres. Based upon those assumptions, the time remaining to collision would be 6.5 seconds at 90 kph, 5.9 seconds at 100 kph and 5.3 seconds at 110 kph. The stopping distance needed is 3.1 seconds at 90 kph, 3.4 seconds at 100 kph and 3.7 at 110 kph. That would give a reaction time of 3.4, 2.5 or 1.6 seconds respectively.
[49] Based on the estimated volume of traffic, there would have been little difficulty in finding a suitable gap in the centre lane into which she could have moved given the amount of traffic. He arrives at this conclusion simply on the basis of a per hour traffic volume of 2,300 to 2,400 vehicles.
The bolded part seems to be incorrect. They divide the distance 163 m by each of the three initial speeds (90, 100, 110) km/h to obtain (6.5, 5.9, 5.3) s as the "time remaining to collision". They then calculate stopping times by dividing each initial speed by an assumed deceleration of about 29.4 km/h/s to get (3.1, 3.4, 3.7) s. They subtract the times to calculate the available reaction times (3.4, 2.5, 1.6) s.
Because the car is decelerating, the deceleration would increase the "time remaining to collision" and invalidate the first calculation. Instead, it seems like the analysis should be done in this way:
For an initial speed $v$, the distance covered while stopping by decelerating at rate $a$ is $\frac{v^2}{2a}$.
If there is distance $d$ available, it is possible to drive at the initial speed for a time $t$ before decelerating if
$$vt + \frac{v^2}{2a} \le d$$
$$t \le \frac{d}v - \frac{v}{2a}$$
If we crunch the numbers, we get the maximum possible $t$ values as (4.989, 4.167, 3.464) s. Is this analysis correct?
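For transparency, here is how those numbers can be reproduced (a short script of my own, using the same assumed deceleration of 29.4 km/h per second):

```python
# Maximum time t at the initial speed v before braking must begin, given
# available distance d and deceleration a:  t <= d/v - v/(2a).
d = 163.0            # available distance, m
a = 29.4 / 3.6       # assumed deceleration, m/s^2 (29.4 km/h per second)

def t_max(v_kmh):
    """Longest time the car can hold speed v and still stop within d."""
    v = v_kmh / 3.6  # convert km/h to m/s
    return d / v - v / (2.0 * a)

for s in (90, 100, 110):
    print(f"{s} km/h: {t_max(s):.3f} s")
# 90 km/h: 4.989 s
# 100 km/h: 4.167 s
# 110 km/h: 3.464 s
```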
Also, 29.4 km/h/s (0.832 g) is representative of the maximum possible deceleration of a street car under ideal road conditions. Would it even be feasible to decelerate at that rate if it's icy? The weather conditions make it plausible that it was icy:
[5] January 29, 1996 was a sunny, cold winter day. The temperature was in excess of minus 30. An electronic warning sign for northbound traffic just south of the Calf Robe Bridge had a lighted message which read “Frost warning on bridge deck – reduce speed.”
[72] There was a warning sign telling drivers to reduce their speed. However, there is no evidence that ice contributed to the accident. While drivers that drove after Ms. Eirikson’s accident noted that the bridge was icy, the drivers who came before did not experience ice.
But they disagreed on whether it was actually icy:
[50] Samac Engineering rebutted some of the conclusions in the report by Collision Analysis. Of significance is the comment by Collision Analysis that Ms. Eirikson had begun a lateral shift to the right at a distance sufficiently south of the pumper to avoid a collision. Samac says that the tire marks start on the left side of the laneway and proceeded in a straight line towards the impact location which is indicative of the vehicle braking and sliding because the wheels were locked.
[51] Dr. Nelson provided a rebuttal report to Mr. Navin. He points out that given Mr. Navin’s conclusion that 2,300 to 2,400 vehicles per hour were passing through the northbound lane, that means 30 to 40 vehicles were northbound at the time of the accident. In response to the Collision Analysis’ report, he points out that the first reaction of a driver is to try to stop rather than attempt an avoidance steering manoeuver such as trying to drive by the pumper in the median lane. Given the lack of good lane markings, the icy conditions and the emergency occurring on the other side of the road, the average driver would not have been able to satisfactorily process the information available to perform the avoidance manoeuver. To expect a completely rational automated type of response under the circumstances is to misunderstand the biology of the human.
[52] Albert Lund provided a rebuttal report in conjunction with a site visit on May 31, 2000. He drove the route four times between 11 a.m. and 2 p.m. He said that as he was driving the route he still had an uncomfortable feeling as he rounded the long curve approaching the accident location. He felt ‘being locked into his lane’. He described the movement involved in attempting a lane change and said he felt restricted in his movements.
[53] Collision Analysis also provided a rebuttal. They point out that although the tire marks used by Samac to conclude that Ms. Eirikson had braked were likely from her vehicle, they could well have been made by the vehicle that went by after the accident. They say that Ms. Eirikson’s situation was not a complicated one. She was presented with a vehicle that was not similar to most other on the road at that time. It was an emergency vehicle painted in unique colours with flashing warning lights. As well, she had two lanes into which she could move. They say that it is impossible from the photographs to conclude from the tire marks that Ms. Eirikson’s wheels were fully locked thereby rebutting Samac’s conclusion that she could not institute a steering manoeuver. In response to Mr. Lund’s rebuttal report, Collision Analysis seems to be making the argument that many of the perceptual difficulties are ones experienced by Canadian drivers everyday. Finally, they point out that Dr. Nelson made arithmetic errors in Ms. Eirikson’s reaction time.
Finally, is it true that having a large difference in masses (between a fire truck and a car) would make the calculation (conservation of momentum equation, and either conservation of energy or calculation using coefficient of restitution) inaccurate?
[44] [... Collision Analysis] concluded that because of the different masses of the two vehicles, because they could not examine the damage sustained by the two vehicles and because the Eirikson vehicle was involved in the second collision, there could be no accurate prediction of Ms. Eirikson’s pre-impact speed.
Answer: You've misinterpreted the quote. The quote describes stopping distance, not stopping time. The distances are expressed as times at a certain starting speed. Context from the quote suggests that this is in order to establish how soon after the threat became visible the driver would have needed to engage the brake in order to come to a complete stop in the time remaining to her. Multiplying the times by the speeds gives the stopping distances: 78m, 94m, and 113m respectively. The actual stopping times and accelerations can be approximated from the distance and the initial and final velocities, assuming roughly constant acceleration while braking.
Let $v_f = 0$, $\Delta t$ = time spent braking, $\Delta s$ = distance traveled from start of braking to stop of movement
$\bar v = (v_0 + v_f)/2 = v_0/2$
$\Delta t = \Delta s/\bar v$
$a = \frac{\Delta v}{\Delta t} = -\frac{v_0^2}{2\Delta s}$
we have, for $v_0 = 100km/h$,
$a = -4.1 m/s^2$
$\Delta t = 6.7s$
Whether ~$4m/s^2$ is a reasonable expectation or not is a question the manufacturer's specifications might be able to answer.
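Plugging in the numbers (my own sketch; 94 m is the stopping distance derived above for 100 km/h) reproduces the figures quoted, consistent with the ~6.7 s above up to rounding:

```python
def braking(v0_kmh, ds):
    """Constant-deceleration magnitude and braking time to stop over ds metres."""
    v0 = v0_kmh / 3.6           # initial speed, m/s
    a = v0 ** 2 / (2.0 * ds)    # from 0 = v0^2 - 2*a*ds
    dt = ds / (v0 / 2.0)        # ds divided by the mean speed v0/2
    return a, dt

a, dt = braking(100, 94.0)
print(f"a = {a:.2f} m/s^2, dt = {dt:.2f} s")   # ~4.10 m/s^2 and ~6.77 s
```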
To the last question, yes. When a stationary vehicle is struck by a moving vehicle in a collision, it can shift forward on its wheels. The distance it moved, cross-referenced with its known braking acceleration, can be used to estimate the collision velocity from conservation of momentum. (Conservation of energy is useless - vehicles are designed deliberately to conserve as little kinetic energy as possible in a collision, spending as much of the collision energy as possible to the deformation of the unoccupied parts of the car's body.) If the stationary vehicle is too much heavier than the vehicle that struck it, however, it's not likely to move an amount that's larger than the margin of error of the measurement, if at all. | {
"domain": "physics.stackexchange",
"id": 88757,
"tags": "newtonian-mechanics, kinematics, acceleration, collision, speed"
} |
What caused this mysterious stellar occultation on July 10, 2017 from something ~100 km away from 486958 Arrokoth? | Question: In 2017 a series of stellar occultations by asteroid 2014 MU69 "Ultima Thule" now officially named 486958 Arrokoth were timed in order to obtain better orbital information before the New Horizons flyby and to look for additional large chunks of debris that might be of scientific interest and pose a danger of collision thereby warranting a more distant flyby.
There was a pleasant surprise in the shape of the asteroid determined from an array of portable telescopes deployed across the occultation paths. The asteroid appeared to be binary, which was puzzling because there were no obvious oscillations in its light curve, but that's a different story.
Below is an assemblage of three such occultation measurements. We can see the asteroid's double circle shape nicely. But what interests me here is the single trace for July 10th. It seems to show an occulting object clearly distinct from yet incredibly close to Arrokoth. A pure coincidence would be more than extremely unlikely as the time window is seconds and the distance less than 100 km at ~45 AU!
Question: But I don't remember reading about a separate secondary body orbiting Arrokoth. So what caused this apparent occultation on July 10, 2017?
From this answer to Will the upcoming observations of occultation by Arrokoth (2014 MU69) be of a single object, or two?
Answer: There were three attempts to measure Arrokoth by occultation, and the June 3rd attempt didn't detect anything. The July 10th attempt had a tiny blip, that appeared to be in the "wrong place", well away from the location that astrometry had predicted. The July 17th occultation was successful, it determined the shape and location well.
Some thought that the July 10th "blip" might have been evidence of a moon.
It was not. It turned out that there was an error in the software that was used to produce the astrometry (the author of the software describes one bug as "a case of me failing to read my own documentation", another as "a sign error"). If those bugs are fixed, then the blip on July 10th moves to the same place as the blips on July 17th. That blip is Arrokoth. | {
"domain": "astronomy.stackexchange",
"id": 5089,
"tags": "asteroids, identify-this-object, kuiper-belt, occultation"
} |
Why can't energy be created or destroyed? | Question: My physics instructor told the class, when lecturing about energy, that it can't be created or destroyed. Why is that? Is there a theory or scientific evidence that proves his statement true or false? I apologize for the elementary question, but I sometimes tend to over-think things, and this is one of those times. :)
Answer:
At the physics 101 level, you pretty much just have to accept this as an experimental fact.
At the upper division or early grad school level, you'll be introduced to Noether's Theorem, and we can talk about the invariance of physical law under displacements in time. Really this just replaces one experimental fact (energy is conserved) with another (the character of physical law is independent of time), but at least it seems like a deeper understanding.
When you study general relativity and/or cosmology in depth, you may encounter claims that under the right circumstances it is hard to define a unique time to use for "invariance under translation in time", leaving energy conservation in question. Even on Physics.SE you'll find rather a lot of disagreement on the matter. It is far enough beyond my understanding that I won't venture an opinion.
This may (or may not) overturn what you've been told, but not in a way that you care about.
An education in physics is often like that. People tell you about firm, unbreakable rules and then later they say "well, that was just an approximation valid when such and such conditions are met and the real rule is this other thing". Then eventually you more or less catch up with some part of the leading edge of science and you get to participate in learning the new rules. | {
"domain": "physics.stackexchange",
"id": 56370,
"tags": "energy, energy-conservation, conservation-laws, noethers-theorem"
} |
Algorithm to Iterate All Possible Strings in Clojure | Question: I'm fairly new to Clojure and I struggled a bit coming up with a nice Clojure-y solution for implementing an algorithm for one of my applications. The real issue I'm trying to solve would require a large writeup in order to understand what it's about, so I've come up with a contrived problem statement that's much easier to understand which is quite similar to my original problem.
Assume you have mapped numbers to letters like so: 1->a, 2->b, 3->c, ... , 26->z
If you receive input like "4 1 25", you can easily convert this to the string "day" (because 'd' is the 4th letter in the alphabet, 'a' is the 1st, and 'y' is the 25th).
Now assume you don't have spaces as delimiters. So the input is "4125". This introduces ambiguity. "4125" can be interpreted as "4 1 25", "4 12 5", and "4 1 2 5".
Write a function that takes a number, and returns all possible strings from these letter codes.
So if you receive "4125" (the example above), the function would return: dle, day, dabe
Here is what I came up with:
; Generates map {1->a, 2->b, ... , 26->z}
(def code->letter
(into {}
(for [x (range 26)]
[(str (inc x))
(str (char (+ x (int \a))))])))
(defn possible-strings
([number]
(possible-strings (str number) 0 ""))
([number index acc]
(if (>= index (count number))
acc
(flatten (for [[k v] code->letter
:when (.startsWith number k index)]
(possible-strings number (+ index (count k)) (str acc v)))))))
; Print all the possible strings from the input "4125"
(doseq [s (possible-strings 4125)]
(println s))
So I believe this is basically a backtracking algorithm. And because it's a backtracking algorithm, it's impossible to make this tail-recursive, correct? Or maybe I'm wrong that this is a backtracking algorithm. If so, what would this kind of solution be classified as?
Anyway, the algorithm seems to work, but I'm not sure this solution is very idiomatic. The part in particular that I really don't like is having to call flatten for each call. I can't imagine that being very efficient either for large inputs. It seems like there must be a better way to handle that.
Like I said, I'm new to Clojure and functional programming in general, so if there are improvements elsewhere, I'd be happy to hear about them as well. A completely different approach to solving the problem is welcome as well.
Answer: I figured out a faster, more functional programming-oriented algorithm for this problem -- see below. Besides that, your code overall looks idiomatic and was easy for me to understand, so I have no real criticisms in that area.
zipmap
zipmap is ideal for concisely constructing maps like your code->letter map:
(def code->letter
(zipmap (map str (iterate inc 1)) (map char (range 97 123))))
;=> {"1" \a, "2" \b, "3" \c, ... "26" \z}
One way to avoid using flatten
Personally, I think using flatten is fine, but some view it as being somewhat hacky, and I've heard it said that if you find yourself using flatten, there is usually a better way to do what you're trying to do. In this case, I would use mapcat -- it's like map, but it handily concatenates the results together into a single collection.
EDIT: This doesn't work! Please disregard :)
...
(mapcat (fn [[k v]]
(when (.startsWith number k index)
(possible-strings number (+ index (count k)) (str acc v))))
code->letter))))
or,
...
(mapcat (fn [[k v]] (possible-strings number (+ index (count k)) (str acc v)))
(filter #(.startsWith number (key %) index) code->letter)))))
Functional programming
You're correct in that your algorithm is not (and cannot be) tail-recursive. This is because you're wrapping the recursive call to possible-strings within a filtering and mapping operation. I'm not familiar with the term "backtracking algorithm," but it does seem appropriate in this case. This initially struck me as the kind of thing you would do with a reduce, building up a string from scratch, however, you do have to keep "backtracking" to build up new strings using different groupings of digits. I might be wrong about this, but I don't think this is something you can do with tail-recursion, which means you can't use recur and there would be performance implications, especially for large inputs.
Here's another approach: write a function that takes a number and returns a list of possible arrangements of numbers from 1-26. Then just map the numbers to letters and you'll have your possible strings.
This function, which I'll call possible-arrangements, is a little complicated to write. My approach is to walk through the number string from the beginning and form a tree, looking at numbers 2 at a time and branching off for each possibility, each branch terminating once it reaches a list of numbers that can't be broken down any further. For example:
; *starred numbers are ones that are evaluated to see if they can be broken
; down further
*4125 ; 41 is not in range, so only one possibility...
|
4 *125 ; 12 can be 1 2 or 12, so we branch, etc.
/ \
1 *25 12 5
/\ |
/ \ |
(4 1 ...) 2 5 25 12 5 = 4 1 2 5, 4 1 25, 4 1 12 5
; the result would be [["4" "1" "2" "5"] ["4" "1" "25"] ["4" "12" "5"]]
(defn possible-arrangements [^String n]
(case (count n)
1 [[n]]
2 (let [each-digit (mapv str n)]
(if (contains? code->letter n)
[[n] each-digit]
[each-digit]))
(let [a (str (first n))
ab (subs n 0 2)
prepend (fn [x offset]
(map (partial cons x) (possible-arrangements (subs n offset))))]
(if-not (contains? code->letter ab)
(prepend a 1)
(concat (prepend a 1) (prepend ab 2))))))
(defn possible-strings [^String n]
(for [nums (possible-arrangements n)
:let [chars (map code->letter nums)]]
(apply str chars)))
This is still not a tail-recursive solution, but it's about 4 times faster, I think because it isn't having to perform 26 string operations for each digit of the input number. | {
"domain": "codereview.stackexchange",
"id": 8721,
"tags": "beginner, recursion, functional-programming, clojure"
} |
Why does the Moon sometimes look Yellow? | Question: My friend and I both noticed that the Moon looked yellow yesterday, but it wasn't like that at the same time the day before, as far as I remember.
Does the Moon sometimes look yellow? If it does, why does it?
Answer: Because light with higher frequency, that is, bluer colors, scatters more in Earth's atmosphere, celestial objects thus look redder (yellow is towards red) while the atmosphere looks blue. See Rayleigh scattering.
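To put a rough number on "scatters more" (my own back-of-envelope, with representative wavelengths I picked): Rayleigh scattering intensity scales as $\lambda^{-4}$, so blue light around 450 nm scatters roughly four times as strongly as red light around 650 nm.

```python
# Rayleigh scattering intensity ~ 1 / lambda^4.
blue_nm, red_nm = 450.0, 650.0     # representative wavelengths, nm
ratio = (red_nm / blue_nm) ** 4    # how much more blue scatters than red
print(f"blue scatters ~{ratio:.1f}x more strongly than red")
```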
Here is video with a lengthy and somewhat entertaining explanation:
https://www.youtube.com/watch?v=SRh75B5iotI | {
"domain": "astronomy.stackexchange",
"id": 515,
"tags": "the-moon, naked-eye"
} |
Evaluating math terms as nested lambdas instead of expression tree | Question: I need to store some math terms. Originally I would use a tree to do it, especially if parsing strings was involved. However, since the expressions are built within the code and need not be parsed, I thought of doing it with nested lambdas and overloaded operators instead.
Since these expressions will probably be run a few thousand times (possibly up to 100k), I wonder if I should expect any problems (e.g. overflowing the call stack) or what other thoughts you have on this approach. I expect no math term to contain more than 100 operators (in f+g*h I count 2 operators, for that matter).
Run it on godbolt
#include <functional>
#include <iostream>
using Func = std::function<double(double)>;
class Cube{
public:
double operator()(double x) const{
return x*x*x;
}
};
Func operator+(Func lhs, Func rhs){
return [lhs, rhs](double x){
return lhs(x) + rhs(x);
};
}
Func operator-(Func lhs, Func rhs){
return [lhs, rhs](double x){
return lhs(x) - rhs(x);
};
}
Func operator*(Func lhs, Func rhs){
return [lhs, rhs](double x){
return lhs(x) * rhs(x);
};
}
Func operator/(Func lhs, Func rhs){
return [lhs, rhs](double x){
return lhs(x) / rhs(x);
};
}
int main(){
std::function<double(double)> square = [](double x){
return x*x;
};
Cube c;
auto result1 = square + c;
auto result2 = square - c;
auto result3 = square * c;
auto result4 = c / square;
auto result5 = result1 + result2 - result3 * result4;
double x = 3.5;
std::cout << "result1: " << result1(x) << "\n";
std::cout << "result2: " << result2(x) << "\n";
std::cout << "result3: " << result3(x) << "\n";
std::cout << "result4: " << result4(x) << "\n";
std::cout << "result5: " << result5(x) << "\n";
}
Answer: I don't see any major problems with this approach, in fact it strikes me as quite elegant! However, I notice your Cube callable/functor and your square function are not composable with the rest of your operators, and this is trivially fixable by making them also functions returning lambdas:
Func square(Func op) {
return [op](double x) {
auto r = op(x);
return r*r;
};
}
And similarly for cube. Note how I cache the result - this way if op has side effects, they only happen once, as most consumers of the library might expect.
What I mean by composable is that you can now do stuff like:
Func complicatedOperation = square(cube(cube + square));
The way these would get composed is by having a small helper function, id:
double id(double x) {
return x;
}
id or "identity", a function which just returns its argument, provides a simple way of saying "value goes here" when the tree is being constructed.
And if you want to start an expression tree with the square or cube functions, define these:
liftedSquare = square(id);
liftedCube = cube(id);
Now, there's an easier way here - what about just having:
#include <cmath>
...
Func operator^(const Func& lhs, const Func& rhs){
return [lhs, rhs](const double x){
return std::pow(lhs(x), rhs(x));
};
}
No need for separate square and cube anymore! square becomes auto square = id ^ lift(2);, where lift is a small helper that takes a value and returns a constant function always yielding that value.
Then just chuck everything into a namespace so it doesn't interfere with other operator overloads, and you have this:
#include <functional>
#include <cmath>
#include <iostream>
namespace LazyOps {
using Func = std::function<double(double)>;
namespace {
double _id(const double x) {
return x;
}
}
Func id = _id;
Func lift(const double x) {
return [x](const double _) {
return x;
};
}
Func cube(const Func& op) {
return [op](const double x) {
auto r = op(x);
return r*r*r;
};
}
Func square(const Func& op) {
return [op](const double x) {
auto r = op(x);
return r*r;
};
}
Func operator+(const Func& lhs, const Func& rhs){
return [lhs, rhs](const double x){
return lhs(x) + rhs(x);
};
}
Func operator-(const Func& lhs, const Func& rhs){
return [lhs, rhs](const double x){
return lhs(x) - rhs(x);
};
}
Func operator*(const Func& lhs, const Func& rhs){
return [lhs, rhs](const double x){
return lhs(x) * rhs(x);
};
}
Func operator/(const Func& lhs, const Func& rhs){
return [lhs, rhs](const double x){
return lhs(x) / rhs(x);
};
}
Func operator^(const Func& lhs, const Func& rhs){
return [lhs, rhs](const double x){
return std::pow(lhs(x), rhs(x));
};
}
}
int main(){
using namespace LazyOps;
auto liftedSquare = square(id);
auto liftedCube = cube(id);
auto result1 = liftedSquare + liftedCube;
auto result2 = liftedSquare - liftedCube;
auto result3 = liftedSquare * liftedCube;
auto result4 = liftedCube / liftedSquare;
auto newSquare = id ^ lift(2);
auto result5 = result1 + result2 - result3 * result4;
auto result6 = cube(square(liftedCube / liftedSquare));
auto result7 = id ^ id;
double x = 4.0;
std::cout << "result1: " << result1(x) << "\n";
std::cout << "result2: " << result2(x) << "\n";
std::cout << "result3: " << result3(x) << "\n";
std::cout << "result4: " << result4(x) << "\n";
std::cout << "result5: " << result5(x) << "\n";
std::cout << "result6: " << result6(x) << "\n";
std::cout << "result7: " << result7(x) << "\n";
std::cout << "square(" << x << "): " << newSquare(x) << "\n";
}
Unfortunately, the compiler isn't great with compiling auto result = id ^ id; just like that, therefore I had to put in a type deduction hint by putting the actual _id function in a private anonymous namespace and defining an alias Func id = _id; in the actual LazyOps namespace.
As for whether this will blow up your stack, I can't really say, but hopefully accepting the Func arguments as const references can help with that. Make a small script to generate some huge expression tree and see how it goes! | {
"domain": "codereview.stackexchange",
"id": 43547,
"tags": "c++, performance"
} |
Angular momentum transfer in collision between two smooth bodies | Question: I know that conservation of angular momentum says that the total angular momentum is time invariant in absence of external forces.
So say you consider two identical smooth spheres and you make the first sphere strike the second, how would you know if the first transferred any angular momentum into the second without observing it?
Say the moment of inertia of each ball is $I$, the angular velocity of the first ball is $\omega_a$, and the second ball is stationary:
$$I \omega_a = I \omega_a' + I \omega_b'$$
or,
$$\omega_a = \omega_a' + \omega_b'$$
where the primes denote final angular velocity.
Is there any more deductions I can make about the system (since both balls are identical and smooth) or is this it?
Answer: The bodies need to exert torques on each other to change their spin angular momenta, and a torque about a sphere's centre requires a tangential (frictional) contact force. Perfectly smooth surfaces exert only normal forces, which pass through the centres, hence no transfer of spin angular momentum is possible. | {
"domain": "physics.stackexchange",
"id": 66781,
"tags": "newtonian-mechanics, angular-momentum, rotational-dynamics, conservation-laws"
} |
What is the difference between the strong force and the strong nuclear force? | Question: Is there a difference between the strong nuclear force, and the strong force (without the nuclear in between)? I have heard that the strong nuclear force binds protons and neutrons together, while the strong force binds the gluons together inside protons and neutrons. Is that true?
Answer: At a fundamental level, there is only one strong force. It is mediated by gluons and acts between quarks and gluons, i.e. between objects with non-zero color charge.
In practice, due to confinement, at distances large (or equivalently, energies low) compared to the confinement scale, there are no objects with a net color charge. Loosely speaking, this scale is some fraction of the size of a nucleon.
Therefore, it is only inside a nucleon that quarks and gluons directly interact via the color force.
Nucleons can also interact. In the low energy effective field theory description of nucleons, the relevant degrees of freedom are nucleons and pions, which are color-neutral objects. Nucleons attract each other by exchanging pions.
However, the ultimate, fundamental origin of these nucleon-pion interactions is the same strong force that binds quarks and gluons. The effective low energy description is a convenient way of describing a process that, in more fundamental terms, would involve many complicated interactions of quarks and gluons inside the nucleons. | {
"domain": "physics.stackexchange",
"id": 96923,
"tags": "terminology, definition, strong-force"
} |
Drawing sets of boxes to display contents of arrays | Question: This code draws two sets of 9 boxes on the screen to display the contents of two arrays.
How can I speed up, shorten and make the code more efficient?
for (int x=0;x<3;x++){
for (int y=0;y<3;y++){
if (pattern[x][y]==1){
g.setColor(colorrange.colb());
g.fillRect(actlo+(x*size),actlo+(y*size),size,size);
}
g.setColor(Color.yellow);
g.drawRect(actlo+(x*size),actlo+(y*size),size,size);
if (patternb[x][y]==1){
g.setColor(colorrange.colb());
g.fillRect(actlo+(x*size),inlo+(y*size),size,size);
}
g.setColor(Color.yellow);
g.drawRect(actlo+(x*size),inlo+(y*size),size,size);
}
}
Answer: Multiplies can be pretty expensive within loops, so I minimized the use of them by precalculating where possible. It is also better to have the Y loop before the X loop to prevent cache misses (which wouldn't matter on a really small array like this but is a good practice). Consider this alternative:
for(int y = 0; y < 3; y++)
{
final int yy = y * size;
for(int x = 0; x < 3; x++)
{
final int xx = x * size;
if(pattern[x][y] == 1)
{
g.setColor(colorrange.colb());
g.fillRect(actlo + xx, actlo + yy, size, size);
}
g.setColor(Color.yellow);
g.drawRect(actlo + xx, actlo + yy, size, size);
if(patternb[x][y] == 1)
{
g.setColor(colorrange.colb());
g.fillRect(actlo + xx, inlo + yy, size, size);
}
g.setColor(Color.yellow);
g.drawRect(actlo + xx, inlo + yy, size, size);
}
} | {
"domain": "codereview.stackexchange",
"id": 10766,
"tags": "java, optimization"
} |
Different number of modes while performing the EEMD (Ensembled Empirical Mode Decomposition) | Question: I am performing an Ensemble Empirical Mode Decomposition (EEMD).
I wrote a MATLAB routine to perform the EMD. I noticed that when adding white Gaussian noise to the input signal, I sometimes obtain a different number of modes, even when the noise variance is not very high. Clearly, when this happens, I cannot compute consistent ensemble averages for each mode.
Do you think this depends more on the nature of the input signal or on my EMD algorithm/routine?
Answer: It depends both on the signal content and on the EMD implementation. EMD can be sensitive to time shifts (especially with strong transients) and to noise power. Robust or constrained EMD variants, for single-channel or multivariate signals, have been developed.
Beyond that, your question touches on signal morphology, processing purpose, and quality metrics; I doubt such questions can be answered globally. | {
"domain": "dsp.stackexchange",
"id": 9086,
"tags": "signal-analysis, empirical-mode-decomposition"
} |
Which is the cheapest plastic that retains its structure at 85 °C or higher temperature? | Question: In an academic project, I need to protect some electronic components from heat (around 85 °C). I am searching for both a cheap plastic film and a cheap plastic to be used in an injection molding process.
As matt_black pointed out, the main criterion for the selection is "the point where structural integrity is reduced by repeated use at that temperature or the point where the polymer starts to creep".
By now I am using cork, but I would like to use a more robust material.
Those materials should be both thermal and electrical insulators and should not absorb water. They should also be resistant to the external environment: rain, sun (UV radiation), dust, etc.
Which materials are the best option in this case? From what I have searched, these are the main options in terms of thermal requirements:
PPO/PPE based
Polypropylene
Polyurethane
Vinyls
Polybutylene
Acetals
ABS and SAN
Polystyrene
ABS/Polycarbonate alloy
Acrylics
Cellulosics
Polyethylene and copolymers
Answer: The melting point is not going to give you the answer you want. To properly select a polymer you need to look at the glass transition temperature $(T_\mathrm{g})$. This is the temperature at which the amorphous regions of the material soften and it starts to flow more easily. This parameter varies widely with the thermal history and internal stress of a polymer material and can be raised by annealing.
Based on UV resistance, I would exclude any olefin or vinyl polymers as well as ABS.
For water resistance, I would exclude cellulose polymers and acetals.
Based on the temperature requirement I would say polystyrene ($T_\mathrm{g} = \pu{100^\circ C}$), acrylics $(T_\mathrm{g} = 85 - \pu{165^\circ C})$, polycarbonate ($T_\mathrm{g} = \pu{135^\circ C}$), or PPE/PPO ($T_\mathrm{g} = \pu{215^\circ C}$) have sufficiently high $T_\mathrm{g}$'s to remain as acceptable for the mentioned specifications. | {
"domain": "chemistry.stackexchange",
"id": 10394,
"tags": "materials, temperature, plastics"
} |
Please explain how we arrive at this statistical result | Question: While solving a derivation in statistical mechanics I came across a result which was derived from expression:
$p\propto \exp\left(-\frac{C_{V}}{2kT^{2}}\,\Delta T^{2}-\frac{1}{2kT\kappa_{T}V}\,\Delta V^{2}\right)$
The result is derived right after a line in the book that goes:
which shows that the fluctuations in T and V are statistically independent, Gaussian variables! A quick glance at the above equation yields:
$\overline{(\Delta{T^{2}})}=\frac{kT^{2}}{C_{V}}, \overline{(\Delta{V^{2}})}=kT \kappa _{T} V$
Where bar over T and V represents average.
I don't understand how statistically independent Gaussian variables led to this result and what are statistically independent Gaussian variables. Please help me understand this.
Answer: A Gaussian distribution, often called normal distribution, is a (continuous) probability distribution of the form,
$$
f_X\left(x\vert\mu,\,\sigma\right)\sim A\exp\left(-\left(x-\mu\right)^2/2\sigma^2\right)
$$
for the normalization constant $A$, mean value $\mu$ and variance $\sigma^2$.
The author assumes that the temperature and volume fluctuations follow uncorrelated normal distributions (with variances $kT^{2}/C_{V}$ and $kT\kappa_{T}V$, respectively), so their joint distribution is simply the product:
$$
f_T\cdot f_V\sim\exp\left(-\Delta T^2/2\sigma_T^2\right)\cdot\exp\left(-\Delta V^2/2\sigma_V^2\right)
$$
Comparing the exponents with the standard Gaussian form then gives $\sigma_T^2=kT^{2}/C_{V}$ and $\sigma_V^2=kT\kappa_{T}V$, which is exactly the result given by the author. | {
"domain": "physics.stackexchange",
"id": 48522,
"tags": "thermodynamics, statistical-mechanics"
} |
How to calculate the half time of a unimolecular reaction given the Arrhenius coefficient and the activation energy? | Question:
Consider the following unimolecular reaction:
$$\ce{H2O2 + M -> OH + OH + M}$$
The high pressure limit unimolecular rate coefficient for this reaction is $k_\mathrm{uni} = A\cdot \mathrm{e}^{\frac{-E_\mathrm{a}}{R \cdot T}}$, where $A = \pu{3E14 1/s}$, and $E_\mathrm{a} = \pu{305 kJ/mol}$. Calculate the half-life of this reaction at $\pu{1 atm}$ and $\pu{1000 K}$.
Source: Question 14.1 from John W. Daily: Statistical Thermodynamics: An Engineering Approach. Cambridge University Press, 2018. ISBN: 1108244645, 9781108244640. DOI: 10.1017/9781108233194.
I started by evaluating $k_\mathrm{uni}$ from the formula, which gave $\pu{0.03515 1/s}$, and then, since it's unimolecular:
$$t_{\frac{1}{2}} = \frac{\ln(2)}{k_\mathrm{uni}}.$$
But I got it wrong; I don't have the solutions manual for this book to check what is wrong. I also tried using the Troe form, with the $k_\mathrm{uni}$ from the starting formula as $k_\infty$, but still got it wrong.
Okay, I finally found the solutions manual, and saw that the answer in it is completely wrong, so there was no way that reproducing its math would work. The half-life is actually 19.7 s, but in the book it is 9 $\mu$s, which doesn't make sense.
Answer: You calculated the high-pressure-limit half life, where the rate of collisions with M is so high that it has no influence on the rate constant. Now all you have to do is to consider that the pressure is 1 atm instead of at the high-pressure limit. That slows down the reaction such that the half-time is in the second instead of the micro second range.
Something similar happens with ozone. It is not very stable at sea level, but has a substantial half-time in the upper atmosphere (unless there are certain man-made substance up there).
To solve numerically, you could assume a Lindemann mechanism (see https://en.wikipedia.org/wiki/Lindemann_mechanism) and that $k_1$ is diffusion-limited (i.e. no activation energy, every collision leads to activated complex).
The textbook cited by the OP tabulates the relevant rate parameters (tables not reproduced here); strangely, an earlier table shows $k_0$ and $k_\infty$ along with a pre-exponential temperature dependence $T^n$ and an activation energy $E$ of unspecified units. | {
"domain": "chemistry.stackexchange",
"id": 13771,
"tags": "thermodynamics, kinetics, statistical-mechanics"
} |
Reflected and refracted waves out of phase | Question: When we derive the laws of reflection and refraction for a generic plane wave at a surface, we say that the reflected and refracted waves must be in phase with the incident wave. Why can't a medium respond out of phase with the incident wave? Sorry for my trivial question.
Answer: It is possible. Take a look at phase conjugated mirror on Wikipedia.
Also, phase shifts do happen on reflection from flat surfaces -- all the time. It's called the Goos-Hänchen effect. It's just very very small, so that's probably why you ignore it in your derivation. | {
"domain": "physics.stackexchange",
"id": 164,
"tags": "optics, waves, reflection, refraction, plane-wave"
} |
How does HIV assemble its capsid correctly? | Question: HIV’s capsid is very unusual. The capsid is made of around 1200 identical CA proteins (p24). These CA proteins first assemble into either pentamers or hexamers, which assemble into a fullerene like polyhedron containing 12 pentamers at the vertices and about 200 hexamers filling the bulk. Compared to other viral capsid, HIV’s capsid is unusually big (with an equivalent triangulation number of T=1200/60=30) and highly asymmetric. Instead of a perfect icosahedron made of 20 equilateral triangles, the edges are of different lengths (as well as the h and k numbers https://viralzone.expasy.org/8577) in HIV’s capsids, and even two HIV can have slightly different capsids.
It should be difficult for such complex structures to be assembled correctly because the structural environment is very different among the subunits. For viruses with small and symmetrical capsids, the symmetry is guaranteed by the fixed binding angles between subunits (because the structural environment is equivalent everywhere). More complex viruses can encode multiple types of capsid proteins taking different positions. For example, the capsid of poliovirus can be modeled as a soccer ball, with each pentagon made of 5 VP1 and each hexagon made of 3 VP2 and 3 VP3. However, HIV’s capsid is solely made of CA, which means the very same CA need to adapt to the highly variable structural context. Some complex viruses like herpes viruses encode scaffold proteins to guide the capsid assembly, which is unlikely true for HIV as HIV’s capsid is self-assembled. Despite its irregularities, HIV’s capsid is perfectly sealed off, which is unlike murine leukemia virus with large gaps in its capsid (https://www.pnas.org/doi/abs/10.1073/pnas.1811580115?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%20%200pubmed). How can the seemingly simple CA proteins assemble into such complex structures accurately?
Answer: How capsids self-assemble isn't fully known for any virus, but it's been studied a lot and there are a number of principles we can apply. First, HIV isn't as unique as you assume. 30 is a fairly large triangularization number, but Faustovirus has T=277, and Mimivirus is estimated at over T=1000. The conical shape of lentiviruses is indeed very unusual, but if we look at how that shape is constructed, it seems less exotic. Hexagons can tile infinitely on a 2D plane or cylinder. However, if a hexagon is replaced with a pentagon, the tiling is distorted such that the plane becomes curved. The geometry works out so that replacing 12 hexagons in a lattice with pentagons will create an enclosed shell, regardless of size. If the pentagons are evenly spaced, the shell is an icosahedron. However, many bacteriophage build a cylindrical region between two half-icosahedral endcaps, with 6 pentagons at each end. This type of capsid is called "prolate", and the well-known T4 phage uses it, as well as the elongated "corndog" phages shown below:
Negative-stain transmission electron micrographs of the C. crescentus phage phiCbK and five phiCbK-like phages. All five exhibit Siphoviridae morphology and prolate heads. Scale bars are 100 nm.
The conical capsids of lentiviruses like HIV have a similar geometry to these corndog phage, except the two endcaps are asymmetric; with 7 pentons at one end and 5 at the other. If HIV was like many large icosahedral viruses and encoded different proteins for the pentons, arranging them correctly would be difficult. However, the current theory is that the overall geometry is imposed on the maturing HIV capsid, and CA protein simply changes into a pentameric arrangement where the local curvature of the immature lattice is too great for hexameric arrangement. From Unclosed HIV-1 Capsids Suggest a Curled Sheet Model of Assembly, referencing Nonequilibirum Assembly, Retroviruses, and Conical Structures:
Levandovsky and Zandi used tapered triangular prisms to represent CA units and were able to recapitulate spherical, conical, and tubular shapes.26 Tapering caused growing sheets to curve, and as the curved sheets grew, inclusion of pentamers became necessary to relieve accumulated stress. Opposing edges of the growing sheets eventually curled around toward each other and connected and then the top and bottom ends sealed. Conical shapes therefore emerged not as a result of template interactions or membrane enclosures but through simple nonequilibrium growth of elastic sheets.
The overall shape of the CA immature lattice before the conical core forms is guided by two forces that mimic the "scaffold" proteins used by other viruses. These are the outer membrane and the inner RNA payload, to which CA is stuck via the MA and NC subunits of the Gag polyprotein, respectively. This figure from Assembly and Architecture of HIV illustrates how the constraints of the membrane and RNA leave the Gag units organized in a way that allows the CA units to organize into the required shape once they are cleaved free: | {
"domain": "biology.stackexchange",
"id": 11932,
"tags": "virology, protein-interaction"
} |
Is it foolish to distinguish between covariant and contravariant vectors? | Question: A vector space is a set whose elements satisfy certain axioms. Now there are physical entities that satisfy these properties, which may not be arrows. A co-ordinate transformation is linear map from a vector to itself with a change of basis. Now the transformation is an abstract concept, it is just a mapping. To calculate it we need basis and matrices and how a transformation ends up looking depends only on the basis we choose, a transformation can look like a diagonal matrix if an eigenbasis is used and so on. It has nothing to do with the vectors it is mapping, only the dimension of the vector spaces is important.
So it is foolish to distinguish vectors by how their components change under a coordinate transformation, since that depends on the basis you used. So there is actually no difference between a contravariant and a covariant vector; there is a difference between a contravariant and a covariant basis, as is shown in arXiv:1002.3217. An inner product is between elements of the same vector space, not between two vector spaces; that is not how it is defined.
Is this approach correct?
Along with the approach mentioned, we can view covectors as members of the dual space of the contravariant vector space. What advantage does this approach have over the former one mentioned in my post?
Addendum: So now there are contravariant vectors and their duals, called covariant vectors. But the duals are defined only once the contravariant vectors are set up, because they are the maps from the space of contravariant vectors to $R$, and thus it doesn't make sense to talk of covectors alone. Then what does it mean that the gradient is a covector? Now saying "because it transforms in a certain way" makes no sense.
Answer: This is not really an answer to your question, essentially because there isn't (currently) a question in your post, but it is too long for a comment.
Your statement that
A co-ordinate transformation is linear map from a vector to itself with a change of basis.
is muddled and ultimately incorrect. Take some vector space $V$ and two bases $\beta$ and $\gamma$ for $V$. Each of these bases can be used to establish a representation map $r_\beta:\mathbb R^n\to V$, given by
$$r_\beta(v)=\sum_{j=1}^nv_j e_j$$
if $v=(v_1,\ldots,v_n)$ and $\beta=\{e_1,\ldots,e_n\}$. The coordinate transformation is not a linear map from $V$ to itself. Instead, it is the map
$$r_\gamma^{-1}\circ r_\beta:\mathbb R^n\to\mathbb R^n,\tag 1$$
and takes coordinates to coordinates.
Now, to go to the heart of your confusion, it should be stressed that covectors are not members of $V$; as such, the representation maps do not apply to them directly in any way. Instead, they belong to the dual space $V^\ast$, which I'm hoping you're familiar with. (In general, I would strongly discourage you from reading texts that pretend to lay down the law on the distinction between vectors and covectors without talking at length about the dual space.)
The dual space is the vector space of all linear functionals from $V$ into its scalar field:
$$V^\ast=\{\varphi:V\to\mathbb R:\varphi\text{ is linear}\}.$$
This has the same dimension as $V$, and any basis $\beta$ has a unique dual basis $\beta^*=\{\varphi_1,\ldots,\varphi_n\}$ characterized by $\varphi_i(e_j)=\delta_{ij}$. Since it is a different basis to $\beta$, it is not surprising that the corresponding representation map is different.
To lift the representation map to the dual vector space, one needs the notion of the adjoint of a linear map. As it happens, there is in general no way to lift a linear map $L:V\to W$ to a map from $V^*$ to $W^*$; instead, one needs to reverse the arrow. Given such a map, a functional $f\in W^*$ and a vector $v\in V$, there is only one combination which makes sense, which is $f(L(v))$. The mapping $$v\mapsto f(L(v))$$ is a linear mapping from $V$ into $\mathbb R$, and it's therefore in $V^*$. It is denoted by $L^*(f)$, and defines the action of the adjoint $$L^*:W^*\to V^*.$$
If you apply this to the representation maps on $V$, you get the adjoints $r_\beta^*:V^*\to\mathbb R^{n,*}$, where the latter is canonically equivalent to $\mathbb R^n$ because it has a canonical basis. The inverse of this map, $(r_\beta^*)^{-1}$, is the representation map $r_{\beta^*}:\mathbb R^n\cong\mathbb R^{n,*}\to V^*$. This is the origin of the 'inverse transpose' rule for transforming covectors.
To get the transformation rule for covectors between two bases, you need to string two of these together:
$$
\left((r_\gamma^*)^{-1}\right)^{-1}\circ(r_\beta^*)^{-1}=r_\gamma^*\circ (r_\beta^*)^{-1}:\mathbb R^n\to \mathbb R^n,
$$
which is very different to the one for vectors, (1).
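To see the two rules side by side in coordinates (a standard computation, spelled out here for concreteness): if the new basis is expressed in the old one as $e'_j=\sum_i A_{ij}e_i$, then vector components and covector components transform as
$$
v'=A^{-1}v,\qquad f'=A^{\mathsf T}f,
$$
since $v=\sum_j v'_j e'_j=\sum_i\big(\sum_j A_{ij}v'_j\big)e_i$ forces $v=Av'$, while $f'_j=f(e'_j)=\sum_i A_{ij}f(e_i)$. The components $f_i$ vary with the basis ("covariant"), while the $v_i$ vary against it ("contravariant").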
Still think that vectors and covectors are the same thing?
Addendum
Let me, finally, address another misconception in your question:
An inner product is between elements of the same vector space and not between two vector spaces, it is not how it is defined.
Inner products are indeed defined by taking both inputs from the same vector space. Nevertheless, it is still perfectly possible to define a bilinear form $\langle \cdot,\cdot\rangle:V^*\times V\to\mathbb R$ which takes one covector and one vector to give a scalar; it is simple the action of the former on the latter:
$$\langle\varphi,v\rangle=\varphi(v).$$
This bilinear form is always guaranteed and presupposes strictly less structure than an inner product. This is the 'inner product' which reads $\varphi_j v^j$ in Einstein notation.
Of course, this does relate to the inner product structure $ \langle \cdot,\cdot\rangle_\text{I.P.}$ on $V$ when there is one. Having such a structure enables one to identify vectors and covectors in a canonical way: given a vector $v$ in $V$, its corresponding covector is the linear functional
$$
\begin{align}
i(v)=\langle v,\cdot\rangle_\text{I.P.} : V&\longrightarrow\mathbb R \\
w&\mapsto \langle v,w\rangle_\text{I.P.}.
\end{align}
$$
By construction, both bilinear forms are canonically related, so that the 'inner product' $\langle\cdot,\cdot\rangle$ between $v\in V^*$ and $w\in V$ is exactly the same as the inner product $\langle\cdot,\cdot\rangle_\text{I.P.}$ between $i(v)\in V$ and $w\in V$. That use of language is perfectly justified.
Addendum 2, on your question about the gradient.
I should really try and convince you at this point that the transformation laws are in fact enough to show something is a covector. (The way the argument goes is that one can define a linear functional on $V$ via the form in $\mathbb R^{n*}$ given by the components, and the transformation laws ensure that this form in $V^*$ is independent of the basis; alternatively, given the components $f_\beta,f_\gamma\in\mathbb R^n$ with respect to two basis, the representation maps give the forms $r_{\beta^*}(f_\beta)=r_{\gamma^*}(f_\gamma)\in V^*$, and the two are equal because of the transformation laws.)
However, there is indeed a deeper reason for the fact that the gradient is a covector. Essentially, it is to do with the fact that the equation
$$df=\nabla f\cdot dx$$
does not actually need a dot product; instead, it relies on the simpler structure of the dual-primal bilinear form $\langle \cdot,\cdot\rangle$.
To make this precise, consider an arbitrary function $T:\mathbb R^n\to\mathbb R^m$. The derivative of $T$ at $x_0$ is defined to be the (unique) linear map $dT_{x_0}:\mathbb R^n\to\mathbb R^m$ such that
$$
T(x)=T(x_0)+dT_{x_0}(x-x_0)+O(|x-x_0|^2),
$$
if it exists. The gradient is exactly this map; it was born as a linear functional, whose coordinates over any basis are $\frac{\partial f}{\partial x_j}$ to ensure that the multi-dimensional chain rule,
$$
df=\sum_j \frac{\partial f}{\partial x_j}d x_j,
$$
is satisfied. To make things easier to understand to undergraduates who are fresh out of 1D calculus, this linear map is most often 'dressed up' as the corresponding vector, which is uniquely obtainable through the Euclidean structure, and whose action must therefore go back through that Euclidean structure to get to the original $df$.
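The covariance of the gradient's components can also be read off directly from the chain rule (assuming, for simplicity, a linear change of coordinates $x=Ax'$):
$$
\frac{\partial f}{\partial x'_i}=\sum_j\frac{\partial x_j}{\partial x'_i}\frac{\partial f}{\partial x_j}=\sum_j A_{ji}\frac{\partial f}{\partial x_j},
$$
so the array $\partial f/\partial x_j$ transforms with $A^{\mathsf T}$ rather than with $A^{-1}$: the transformation law of a covector, not of a vector.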
Addendum 3.
OK, it is now sort of clear what the main question is (unless that changes again), though it is still not particularly clear in the question text. The thing that needs addressing is stated in the OP's answer in this thread:
the dual vector space is itself a vector space and the fact that it needs to be cast off as a row matrix is based on how we calculate linear maps and not on what linear maps actually are. If I had defined matrix multiplication differently, this wouldn't have happened.
I will also, address, then this question: given that the dual (/cotangent) space is also a vector space, what forces us to consider it 'distinct' enough from the primal that we display it as row vectors instead of columns, and say its transformation laws are different?
The main reason for this is well addressed by Christoph in his answer, but I'll expand on it. The notion that something is co- or contra-variant is not well defined 'in vacuum'. Literally, the terms mean "varies with" and "varies against", and they are meaningless unless one says what the object in question varies with or against.
In the case of linear algebra, one starts with a given vector space, $V$. The unstated reference is always, by convention, the basis of $V$: covariant objects transform exactly like the basis, and contravariant objects use the transpose-inverse of the basis transformation's coefficient matrix.
One can, of course, turn the tables, and change one's focus to the dual, $W=V^*$, in which case the primal $V$ now becomes the dual, $W^*=V^{**}\cong V$. In this case, quantities that used to transform with the primal basis now transform against the dual basis, and vice versa. This is exactly why we call it the dual: there exists a full duality between the two spaces.
However, as is the case anywhere in mathematics where two fully dual spaces are considered, one needs to break this symmetry to get anywhere. There are two classes of objects which behave differently, and a transformation that swaps the two. This has two distinct, related advantages:
Anything one proves for one set of objects has a dual fact which is automatically proved.
Therefore, one need only ever prove one version of the statement.
When considering vector transformation laws, one always has (or can have, or should have), in the back of one's mind, the fact that one can rephrase the language in terms of the duality-transformed objects. However, since the content of the statements is not altered by the transformation, it is not typically useful to perform the transformation: one needs to state some version, and there's not really any point in stating both. Thus, one (arbitrarily, -ish) breaks the symmetry, rolls with that version, and is aware that a dual version of all the development is also possible.
However, this dual version is not the same. Covectors can indeed be expressed as row vectors with respect to some basis of covectors, and the coefficients of vectors in $V$ would then vary with the new basis instead of against, but then for each actual implementation, the matrices you would use would of course be duality-transformed. You would have changed the language but not the content.
Finally, it's important to note that even though the dual objects are equivalent, it does not mean they are the same. This is why we call them dual, instead of simply saying that they're the same! As regards vector spaces, then, one still has to prove that $V$ and $V^*$ are not only dually related, but also different. This is made precise in the statement that there is no natural isomorphism between a vector space and its dual, which is phrased, and proved, in the language of category theory. The notion of 'natural' isomorphism is tricky, but it would imply the following:
For each vector space $V$, you would have an isomorphism $\sigma_V:V\to V^*$. You would want this isomorphism to play nicely with the duality structure, and in particular with the duals of linear transformations, i.e. their adjoints. That means that for any vector spaces $V,W\in\mathrm{Vect}$ and any linear transformation $T:V\to W$, you would want the diagram
to commute. That is, you would want $T^* \circ \sigma_W \circ T$ to equal $\sigma_V$.
This is provably not possible to do consistently. The reason is that if $V=W$ and $T$ is an isomorphism, then $T$ and $T^*$ are different; for a simple counter-example you can just take any real multiple of the identity as $T$. This is precisely the formal statement of the intuition in garyp's great answer.
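To make that counter-example explicit (a one-line check): take $W=V$ and $T=c\,\mathrm{id}_V$ with $c\in\mathbb{R}$, $c\neq\pm1$. Then $T^*=c\,\mathrm{id}_{V^*}$, and naturality would require

$$\sigma_V ~=~ T^*\circ \sigma_V\circ T ~=~ c^2\,\sigma_V,$$

which fails for every isomorphism $\sigma_V$.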
In apples-and-pears language, what this means is that a general vector space $V$ and its dual $V^*$ are not only dual (in the sense that there exists a transformation that switches them and puts them back when applied twice), but they are also different (in the sense that there is no consistent way of identifying them), which is why the duality language is justified.
I've been rambling for quite a bit, and hopefully at least some of it is helpful. In summary, though, what I think you need to take away is the fact that
Just because dual objects are equivalent it doesn't mean they are the same.
This is also, incidentally, a direct answer to the question title: no, it is not foolish. They are equivalent, but they are still different. | {
"domain": "physics.stackexchange",
"id": 70205,
"tags": "general-relativity, vectors, notation, tensor-calculus, covariance"
} |
Finding the tension in rope tied to ladder using the principle of virtual work | Question: A ladder $AB$ of mass $m$ has its ends on a smooth wall and floor (see figure). The foot of the ladder is tied by an inextensible rope of negligible mass to the base $C$ of the wall so the ladder makes an angle $\alpha$ with the floor. Using the principle of virtual work, find the magnitude of the tension in the rope.
My equations:
I assume length of ladder = $L$
$y$-direction (vertical): $$N_2 = mg$$
$x$-direction (horizontal): $$N_1 = T$$
torque equation about $B$: $$\tau_{B}=\tau_{mg}+\tau_{N_1} = \frac{L}{2}\,mg\cos\alpha-LN_1 \sin\alpha=0$$
Therefore, $$\frac{L}{2}\,mg\cos\alpha-LT \sin\alpha=0$$
And we get $$T= \frac{1}{2}\,mg\cot\alpha$$
which is the answer.
However, I am supposed to find this using the principle of virtual work. To do this, I guess I would have to displace the ladder downward at $A$ and rightward at $B$, in which case the normal forces $N_1$ and $N_2$ do no work. I then conclude that the work done by the gravitational force $mg$ along the downward displacement of the center of mass of the ladder counterbalances the tension times the horizontal displacement at $B$. However, finding the displacement of the center of the ladder as the ladder slides downwards doesn't seem to be an easy task, so I must be on the wrong track. The final equation I have to deal with seems to be $$mg\,\delta y_\text{COM}-T\,\delta x_B=0$$
Any inputs would be appreciated.
Answer: Choose $\alpha$ as the generalized coordinate, so $y_\text{COM}=\frac{1}{2}L \sin\alpha$, and $x_B=L\cos\alpha$.
Then $\delta y_\text{COM}=\frac{1}{2}L \cos\alpha \,\delta\alpha$, and $\delta x_B=-L\sin\alpha \,\delta\alpha$. Substitute into the equation. | {
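Carrying the substitution through (with the sign convention that gravity does work $-mg\,\delta y_\text{COM}$ and the tension, which pulls $B$ toward $C$, does work $-T\,\delta x_B$):

$$-mg\cdot\tfrac{1}{2}L\cos\alpha\,\delta\alpha - T\cdot(-L\sin\alpha\,\delta\alpha) = 0
\quad\Longrightarrow\quad
T = \tfrac{1}{2}\,mg\cot\alpha,$$

matching the torque-balance result.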
"domain": "physics.stackexchange",
"id": 4136,
"tags": "homework-and-exercises, classical-mechanics"
} |
Understanding minority charge injection | Question:
Due to the applied voltage, electrons from n-side cross the depletion region and reach p-side (where they are minority carriers). Similarly, holes from p-side cross the junction and reach the n-side (where they are minority carriers). This process under forward bias is known as minority carrier injection. At the junction boundary, on each side, the minority carrier concentration increases significantly compared to the locations far from the junction.
Due to this concentration gradient, the injected electrons on p-side diffuse from the junction edge of p-side to the other end of p-side. Likewise, the injected holes on n-side diffuse from the junction edge of n-side to the other end of n-side (Fig. 14.14). This motion of charged carriers on either side gives rise to current. The total diode forward current is sum of hole diffusion current and conventional current due to electron diffusion. The magnitude of this current is usually in mA. From
In the big picture, I understand that forward biasing a p-n junction makes it more conducting and allows current through but the mechanism makes no sense to me. In the quoted passage above, it is said that due to applied voltage, the negative charge carriers from n side move to the p side which makes the middle depletion region larger. If this happens, wouldn't the resistance keep increasing and net current become zero?
Related questions:
Working of PN junction
Answer: Wait, first one comment: when you forward bias a junction the depletion region gets smaller, not larger. I am not really sure if you really meant to write that.
About the mechanism of injection... without going into the math. I will talk about electrons here but a complementary and symmetric discussion can be done with holes.
First, let me talk about the equilibrium condition at $V=0$. You have far more free electrons in the n-type material than in the p-type. The obvious expectation is that random motion (diffusion) will spread the electrons, and thus transfer a lot of them from N to P... and indeed that happens, until there is such a negative charge build-up on the P side edge (and symmetrically a positive one on the N side edge) that an electric field forms which balances this trend. In the end, the electric field is caused by the ionized donors/acceptors in the depletion region; you can imagine them as the leftover of this equilibration process. So equilibrium is reached: you have many more electrons on the N side, but they cannot diffuse to the P side because there is a barrier (the integral effect of the electric field) preventing them from doing so.
When you forward-bias your diode you are making it "more convenient" for electrons to be on the P side rather than the N side. In practice this simply reduces the electric field in the depletion region and the height of the barrier cited above. So the barrier is no longer able to keep the electrons in N from flooding the p-type region.
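To put a rough number on "the depletion region gets smaller", here is a quick sketch I added (not part of the original answer) using the standard abrupt-junction, full-depletion formula $W=\sqrt{2\varepsilon(V_{bi}-V)/q\,(1/N_A+1/N_D)}$; the doping levels and built-in potential below are assumed textbook values:

```python
import math

# Illustrative sketch: depletion width of an abrupt silicon PN junction.
# All numeric values below are assumed, typical textbook figures.
q = 1.602e-19            # elementary charge, C
eps = 11.7 * 8.854e-12   # permittivity of silicon, F/m
Na = 1e23                # acceptor density, m^-3 (1e17 cm^-3)
Nd = 1e23                # donor density, m^-3
Vbi = 0.7                # built-in potential, V (assumed)

def depletion_width(v_applied):
    """Depletion width in metres; v_applied > 0 means forward bias."""
    return math.sqrt(2 * eps * (Vbi - v_applied) / q * (1 / Na + 1 / Nd))

w_eq = depletion_width(0.0)   # equilibrium width
w_fw = depletion_width(0.4)   # forward bias lowers the barrier (Vbi - V)
print(w_eq > w_fw)            # forward bias shrinks the depletion region
```

The square-root dependence on $(V_{bi}-V)$ is exactly the capacitor picture discussed below: forward bias lets the mobile carriers reach closer to the junction.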
[Edit] Addition after comment, on why the barrier gets thinner when we forward bias the junction.
Ok, I see the tricky argument: we transfer even more electrons from N to P, we should thus expect even more charge build-up and a wider (and higher) barrier. So let's see what is wrong in this line of thought.
First note that there is some sort of contradiction here: (1) the bias reduces the barrier, (2) electrons now overcome the barrier and move from N to P, (3) this increases the barrier... looks like a sort of self-terminating or self-contradicting process. You probably already noticed.
The argument for the increased barrier has some merit, but it fails because - differently from the equilibration process imagined above - now we have a battery and a circuit in which current can flow: electrons are transferred from N to P, they diffuse there, eventually recombine, and the current is then carried by holes... and in the end some electrons will exit into the wire on the left side of the device. The details of all these processes are not so obvious, but the key observation is that charge is flowing. Sure, some electrons are going from N to P, but some electrons are also exiting P to go into the outer wiring. Who will win? Will the P region get more or less negatively charged? Not obvious to tell from this argument.
A useful evocative picture is to look at the junction region as a capacitor (there are some similarities indeed, and variable capacitors can be built using diodes: see varicaps). When you apply the forward bias you are charging that capacitor, increasing the charge on the P side and decreasing it on the N side. This simply means that electrons on the N side and holes on the P side are now able to reach closer to the junction, leaving less space to the ionized donors/acceptors.
Injected electrons could still be a source of confusion. I see here a possible source of doubt: in the end we are injecting electrons from N to P, so it looks completely natural to expect that the P edge gets MORE negatively charged. Well thought but... no, that is not what happens. I am not sure I can help intuition much here, and I do understand it is confusing. I can just tell you that this is not the case. Electrons that are injected and diffuse deep into the P type region before recombination (quite deep, typically over a length which is orders of magnitude larger than the depletion region) are not associated with any significant net charge build-up. What happens in practice is that the negative charge of the injected electrons is neutralized by the (massive amount of) holes that populate the P type region, which react to any minuscule electric field and re-establish local charge neutrality on a very short time scale. Admittedly, I think this is one of the most subtle aspects of PN junctions.
"domain": "physics.stackexchange",
"id": 79902,
"tags": "semiconductor-physics"
} |
Hidden Markov Model initial probability reestimate: Why $\pi^*_i = \gamma_i(1)$ instead of $\pi^*_i = \frac{\gamma_i(1)}{\sum_{j = 1}^N \gamma_j(1)}$ | Question: In the sources I consulted it states that in the Baum Welch algorithm the reestimate of the initial probability of state $i$ of the HMM is $\pi^*_i = \gamma_i(1)$. But $\gamma_i(t)$ is the probability of being in state ${\displaystyle i}$ at time ${\displaystyle t}$ given the observed sequence ${\displaystyle Y}$ and the parameters ${\displaystyle \theta }$ (quote wiki)
So, then why does this probability not need to be normalised like so? :
$$\pi^*_i = \frac{\gamma_i(1)}{\sum_{j = 1}^N \gamma_j(1)}$$
After all normalizing is what is done for the reestimate of the transition probabilities and the emission probabilities too.
Answer: It is defined to be a probability. A probability is by definition already normalized. In particular, we are guaranteed that
$$\sum_{j=1}^N \gamma_j(1) = 1,$$
as there are only $N$ possibilities for the state that you're in at time $1$, and these $N$ cases have no overlap. | {
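A quick numerical check of this (a toy 2-state HMM; all numbers below are made up) via the forward-backward recursions: $\sum_j \alpha_t(j)\beta_t(j) = P(Y)$ at every $t$, so $\gamma_j(t)=\alpha_t(j)\beta_t(j)/P(Y)$ already sums to 1 over $j$, in particular at $t=1$:

```python
import numpy as np

# Toy 2-state HMM with assumed parameters.
pi = np.array([0.6, 0.4])                # initial state distribution
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])               # transition probabilities
B = np.array([[0.9, 0.1],
              [0.3, 0.7]])               # emission probabilities (state x symbol)
obs = [0, 1, 1]                          # observed symbol indices
n = len(obs)

alpha = np.zeros((n, 2))                 # alpha[t, i] = P(y_1..y_t, X_t = i)
alpha[0] = pi * B[:, obs[0]]
for t in range(1, n):
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

beta = np.ones((n, 2))                   # beta[t, i] = P(y_{t+1}..y_T | X_t = i)
for t in range(n - 2, -1, -1):
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

prob_Y = alpha[-1].sum()                 # P(Y)
gamma = alpha * beta / prob_Y
print(gamma.sum(axis=1))                 # each row sums to 1, no extra normalization
```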
"domain": "cs.stackexchange",
"id": 8674,
"tags": "machine-learning, probability-theory, probabilistic-algorithms, hidden-markov-models"
} |
Does increasing the density of a solution decrease the rate of temperature change? | Question: I did an experiment to compare whether salt water (5% concentration of salt) or fresh water of the same volume took longer to heat up to a certain temperature. We found that salt water took longer to heat up than fresh water.
Is this due to density? specific heat capacity? or should I have gotten different results.
Answer: The thermal conductivity of saline is less than water. See this page for graphs of thermal conductivity against salt content.
Note that a secondary effect is that adding salt to water actually lowers the specific heat, and this will increase the rate of temperature change. See the question Why does salty water heat up quicker than pure water? and its answers. In particular, follow the link I provide to the paper by Zwicky.
However, since you're comparing the same volume, you have more mass to heat up, because the density of sea water is greater than the density of pure water. If you take sea water (about 3.5% salt - I chose this because data is easily Googlable) the specific heat is 3.993 kJ per kg per degree, compared to water at 4.184 kJ/kg/K. However, the density of seawater is 1037 kg/m$^3$, so the specific heat per cubic metre is almost exactly the same as that of pure water.
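A quick arithmetic check of that last claim, using the figures quoted above (the density of pure water is taken as a round 1000 kg/m³, an assumption on my part):

```python
# Values from the answer above; pure-water density assumed to be 1000 kg/m^3.
cp_fresh = 4.184    # kJ per kg per K, pure water
cp_sea = 3.993      # kJ per kg per K, ~3.5% seawater
rho_fresh = 1000    # kg/m^3 (assumed round value)
rho_sea = 1037      # kg/m^3

per_m3_fresh = cp_fresh * rho_fresh   # 4184 kJ per m^3 per K
per_m3_sea = cp_sea * rho_sea         # ~4141 kJ per m^3 per K
print(per_m3_sea / per_m3_fresh)      # ~0.99, i.e. within about 1%
```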
"domain": "physics.stackexchange",
"id": 6322,
"tags": "homework-and-exercises, water, temperature, density"
} |
Concat two tensors of different dimensions | Question: I have two tensors. For example -
a = torch.randn((500, 200, 10))
b = torch.randn((500, 5))
I have to concat each of b tensor to all elements of corresponding a tensor i.e., each 200 tensors of a[0] should get concatenated with b[0] - final dimension should be (500, 200, 15).
Without using explicit for loop, how can I achieve this in Pytorch efficiently?
Answer: For that, you should repeat b 200 times in the appropriate dimension this way:
c = torch.cat([a, torch.unsqueeze(b, 1).repeat(1, 200, 1)], dim=2)
c.shape
As desired, the shape of the result is torch.Size([500, 200, 15]) | {
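As a side note, a slightly more memory-friendly variant (same shapes as above) replaces repeat with expand, which creates a broadcasted view instead of materializing 200 physical copies of b; torch.cat then performs the single real copy:

```python
import torch

# Same result as the repeat-based version, but without the intermediate copy:
# expand returns a stride-0 view over the singleton dimension.
a = torch.randn(500, 200, 10)
b = torch.randn(500, 5)

c = torch.cat([a, b.unsqueeze(1).expand(-1, 200, -1)], dim=2)
print(c.shape)  # torch.Size([500, 200, 15])
```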
"domain": "datascience.stackexchange",
"id": 9458,
"tags": "python, deep-learning, pytorch, embeddings"
} |
Atmospheric Nitrogen gas and human body pH buffers | Question: It looks like the human body doesn't have the ability to breakdown atmospheric $\ce{N2}$, as of now $\ldots$ So an enzyme like nitrogenase (found in cyanobacteria) fixes $\ce{N2}$ and produces $\ce{NH3}$. $\ce{NH3}$ is a weak base which qualifies as a buffer component.
If the human body was able to react to $\ce{N2}$ inhaled from the air, would it negatively impact the body's ability to maintain pH levels of vascular systems/cells of the human body given the large amount of $\ce{N2}$ in the atmosphere?
Answer: One chemical equation that can be written for nitrogenase is this one:
$\ce{N_2 + 8 \ H^+ + 8 \ e^- + 16 \ ATP \longrightarrow 2 \ NH_3 + H_2 + 16 \ ADP + 16 P_{i}}$
The equation shows that not only does nitrogenase generate the basic molecule $\ce{NH3}$, it also consumes protons (i.e. acid) to form hydrogen as an inescapable byproduct of its mechanism.
So yes, in theory nitrogenase could perturb the human body's ability to buffer pH.
However, the relative rates are always important. Human requirements for nitrogen are on the order of 80 grams of protein per day. This translates to about 12 g of (fixed) nitrogen atoms per day, which corresponds to 0.85 mol of N. If all of that nitrogen were to come from nitrogenase, then 0.85 × 8 ÷ 2 = 3.4 mol of acidity would be lost via proton consumption in addition to the 0.85 mol of base generated as $\ce{NH3}$, or about 4.3 mol of "base" in total.
But human pH homeostasis works even though humans breathe out about 900 g of an acid equivalent ($\ce{CO2}$) per day, or about 20 mol.
So even if nitrogenase completely replaced human nitrogen requirements in some hypothetic bioengineered human, the demands on pH homeostasis would only change by about 25%, maximum. And considering that pH homeostasis still works even when humans are exercising vigorously, when "instantaneous" $\ce{CO2}$ production rates are far higher, I doubt even a 25% change would be too much to handle.
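A back-of-envelope script for the bookkeeping above (the 12 g of nitrogen and 900 g of CO₂ per day are the answer's figures; the molar masses are standard values):

```python
# Sanity check of the answer's proton/base bookkeeping.
n_g_per_day = 12.0                    # fixed nitrogen per day, from the answer
n_mol = n_g_per_day / 14.0            # ~0.86 mol N per day

protons_consumed = n_mol * 8 / 2      # 8 H+ consumed per N2, i.e. 4 per N
nh3_generated = n_mol                 # 2 NH3 produced per N2, i.e. 1 per N
base_total = protons_consumed + nh3_generated   # ~4.3 mol "base" equivalents

co2_mol = 900.0 / 44.0                # ~20 mol CO2 exhaled per day
print(base_total / co2_mol)           # roughly a 20-25% perturbation, max
```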
Thus in conclusion I don't think nitrogenase would wreck human pH homeostasis, unless its activity in the human body was increased to a level far beyond what is necessary to replace dietary N requirements. | {
"domain": "chemistry.stackexchange",
"id": 3725,
"tags": "acid-base, biochemistry, ph"
} |
Right vs Left Derivatives | Question: Let $\theta$ be a fermionic quantity and $f(\theta)=f(0)+\theta\frac{\partial f}{\partial\theta}=f(0)+\frac{\partial_r f}{\partial\theta}\theta$. Under a variation $\theta\mapsto\theta+\delta\theta$ we have
$$f(\theta)\mapsto f(\theta)+\delta\theta\frac{\partial f}{\partial\theta},$$
using the first formula, or
$$f(\theta)\mapsto f(\theta)+\frac{\partial_r f}{\partial\theta}\delta\theta,$$
using the second one. However,
$$\delta\theta\frac{\partial f}{\partial\theta}=(-1)^{|\delta\theta|(|f|+|\theta|)}\frac{\partial f}{\partial\theta}\delta\theta=(-1)^{|\delta\theta|(|f|+|\theta|)+|\theta|(|f|+1)}\frac{\partial_rf}{\partial\theta}\delta\theta$$
which is different from $\frac{\partial_rf}{\partial\theta}\delta\theta$ in general. This yields a contradiction between both variations. Of course problems are avoided if $|\delta\theta|=|\theta|$ but I don't see how this affect the first two equations. I am very confused by this!
Answer:
Yes, by definition the Grassmann parity $|\delta z|$ of a variation $\delta z$ of a supernumber $z$ (of definite Grassmann parity) is the same as the Grassmann parity $|z|$ of the supernumber $z$ itself:
$$|\delta|~=~0.\tag{1}$$
Perhaps OP is wondering about the following question.
Question: How does an infinitesimal variation $\delta$ relate to a left vector-field/linear derivation $X$ of Grassmann-parity $|X|$?
Answer: In order to relate $X$ to an infinitesimal variation$^1$ $$\delta~=~\epsilon X,\tag{2L}$$
one needs to introduce an infinitesimal parameter $\epsilon$ of the same Grassmann-parity $|\epsilon|=|X|$.
--
$^1$ For a right vector-field/linear derivation $X_R$, we instead have
$$\delta~=~X_R \epsilon ,\tag{2R}$$
with $|\epsilon|=|X_R|$. | {
"domain": "physics.stackexchange",
"id": 69636,
"tags": "differentiation, fermions, calculus, grassmann-numbers, superalgebra"
} |
Why is lattice QCD called non-perturbative? | Question: Like, if you are approximating a smooth structure with a discrete lattice, isn't this like a perturbation from smooth space-time?
If Feynman diagrams are a perturbative method, why are Feynamn diagrams on a lattice/grid called non-perturbative?
Answer: In general, by a perturbative approach, we mean an approximation of the form,
$$f = f_0 + \epsilon f_1 + \epsilon^2 f_2 + \dots$$
where $\epsilon$ is the perturbation parameter, for some solution $f$. That is to say, one can approximate the behaviour of the solution by this series.
However, summing all the terms does not mean you will recover the exact solution; in most cases perturbative series are asymptotic series.
On the other hand, lattice QCD is an approach which is not described as perturbation theory because it does not follow this scheme, and in principle one recovers the exact solution in the appropriate limit.
To convince yourself of the distinction, consider the differential equation,
$$\frac{\mathrm d f}{\mathrm dx} = g(x).$$
If we choose to discretize it (very naively), we obtain a linear system,
$$\frac{f_{i+1}-f_{i}}{\Delta x} = g_i$$
which is a totally different approach to plugging in a series expansion like the one above. That being said, Feynman diagrams are always a perturbative approach, so Feynman diagrams on a lattice are a perturbative approach, but putting the theory on a lattice is not what makes it perturbative. | {
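To make the "exact solution in the appropriate limit" point concrete, here is a toy sketch of my own (not from the original answer): applying the naive discretization above to $g(x)=\cos x$ with $f(0)=0$, whose exact solution is $f(x)=\sin x$, and watching the error shrink as $\Delta x \to 0$:

```python
import math

# Forward-Euler solve of df/dx = cos(x), f(0) = 0; exact answer is sin(x).
def solve(dx, x_end=1.0):
    n = int(round(x_end / dx))
    f = 0.0
    for i in range(n):
        f += math.cos(i * dx) * dx   # f_{i+1} = f_i + g_i * dx
    return f

exact = math.sin(1.0)
for dx in (0.1, 0.01, 0.001):
    print(dx, abs(solve(dx) - exact))  # error decreases with dx
```

No expansion in a small coupling appears anywhere; refining the grid systematically recovers the exact result, which is the sense in which the lattice approach is non-perturbative.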
"domain": "physics.stackexchange",
"id": 54501,
"tags": "quantum-chromodynamics, non-perturbative, lattice-gauge-theory"
} |
How can causal inference be used with machine learning? | Question: I am wondering how Causal inference is being used with machine learning and specially where in the data science project ? I have been searching for an answer and I've came to the conclusion that Causal inference can be used after the modeling phase in order to confirm some correlations between variables and the target/outcome.
For example if the model has a good accuracy and gives you a high correlation/association between input A and the target B you may want to perform a causal inference to validate that A has an effect on B.
I would like to know if my understanding is correct and also know if there is other application of Causal inference with Machine learning.
Answer: Your understanding is correct. Finding correlations between variables is simple but turning them into causal assertions needs an extra effort. Causal inference is used mostly to reach a "prescription" in the form of "do X so that Y happens".
When not to use Causal inference:
If it is possible to do experiments, causal inference can be avoided. For example, A/B tests let you study the effect of a change in two groups and reach a causal conclusion. For instance, the result of an A/B test would be "users in group A which see a button with color intensity 50 clicked 10% more than group B with color intensity 40", so do X='increase color intensity of the button' so that Y='more clicks' happens. With larger, more uniform groups, your assertion would be more reliable.
Causal inference in Machine learning:
In most machine learning projects these types of experiments are possible and mostly cheap, so why bother? Moreover, especially in predictive projects, value comes from correlated relations. Knowledge of causal relations, which are a subset of correlated relations, does not add value.
Causal inference:
When you work with historical data, or you can only "observe" the data without affecting it, causal inference comes into play. Generally, causal inference is a controversial topic as it tries to extract causal relations from observational data (as opposed to experimental data in A/B tests).
To the best of my knowledge, the main contributor to causal inference is Prof. Judea Pearl. His underlying tools are probabilistic graphical models (PGMs) and do-calculus. These tools let us explicitly encode our assumptions about the mechanics of data generation, and reach causal conclusions. So when an assertion like "do X, so that Y happens" goes wrong, we can track the problem in our assumptions in a principled way. For example, we may have ignored an important hidden variable whose inclusion would change our conclusion. He fundamentally says that anyone who reaches a "prescriptive" conclusion is doing causal inference, so it is better to lay out your assumptions explicitly, to prevent correlation-causation problems from going unnoticed.
Some interesting resources:
Simpson's paradox is an entry point to become interested in causal inference,
This paper of Judea Pearl connects the paradox with causal inference. | {
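For a concrete taste of the paradox, here are the classic kidney-stone treatment figures often used to illustrate it (these are the widely quoted numbers, not data taken from the links above):

```python
# Simpson's paradox: A wins inside each subgroup, B wins in the aggregate.
data = {
    "A": {"small": (81, 87), "large": (192, 263)},    # (successes, trials)
    "B": {"small": (234, 270), "large": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

def overall(treatment):
    s = sum(s for s, n in data[treatment].values())
    n = sum(n for s, n in data[treatment].values())
    return s / n

# Treatment A has the higher success rate within *each* subgroup...
for group in ("small", "large"):
    print(group, rate(*data["A"][group]) > rate(*data["B"][group]))  # True, True

# ...yet aggregated over both subgroups, treatment B comes out ahead.
print(round(overall("A"), 2), round(overall("B"), 2))  # 0.78 0.83
```

Which treatment "causes" better outcomes depends on conditioning on the subgroup (stone size), which is exactly the kind of question do-calculus is built to settle.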
"domain": "datascience.stackexchange",
"id": 4684,
"tags": "machine-learning, statistics, association-rules"
} |
Does the 1:1 sex ratio at birth apply to every human pregnancy or is it a statistical average? | Question: Are there genetic factors that biase the sex ratio of offsprings for each person but average to 1:1 for the entire human population, or does the 1:1 ratio apply to every single fertile person?
Answer: The ratio you are referring to is indeed computed aggregating multiple observations and thus it is a "statistical average".
Any individual (or couple, in this case) can have a specific ratio that differs from the average one. Many factors may affect the sex ratio, among which sperm/egg viability, chromosomal aberrations, and hormone imbalance are the ones that usually affect the fertility of a couple; they can also bias the sex ratio.
Many other factors can play a role.
Different populations have slightly different sex ratios, and even smoking can skew the sex ratio!
"domain": "biology.stackexchange",
"id": 10038,
"tags": "human-biology, human-genetics, sex-chromosome"
} |
Does the DNA sequence of a butterfly match that of the caterpillar it used to be? | Question: Just had this thought occur to me.
If one were to take a DNA sample(or is it RNA?) of a caterpillar before it became a chrysalis, and attempt to match the sample against one taken after the chrysalis matured to a butterfly, would the two samples come up identical?
Answer: The genome (entire DNA sequence) of the butterfly would be identical to that of the caterpillar in all somatic cells. A caterpillar has the genes to produce wings, for example; however, at that stage in its development they are not 'switched on' to make the necessary proteins.
If you were to sample the mRNA produced from transcription of the DNA in the caterpillar and butterfly stages, there are likely to be many differences. As in the example I used previously, the mRNAs that are translated into wing-forming proteins are more likely to be apparent in the butterfly than in the caterpillar. When saying this, please be aware that I'm assuming that the regulation of these genes is pre-transcriptional - I'm not an expert on butterfly physiology!
"domain": "biology.stackexchange",
"id": 727,
"tags": "genetics"
} |
Cleaning a file / Word Query GUI (FLTK) | Question: This is a follow up of Cleaning a file / Word Query
I incorporated the suggestions from the answers there and turned the word query program into a GUI.
For that I used the support code of the book, which provides some basic functions for the FLTK GUI toolkit. You can find it here: http://www.stroustrup.com/Programming/PPP2code/
However, I modified the code slightly because I didn't want to use the std_lib_facilities.h provided by Stroustrup, since it adds a lot of bloat by pulling in many headers which are not necessary.
First of all I would like you to check the GUI implementation. Let me know if you find improvements for the code. Are there any bad practices? You can find it in Word_query_window.h/cpp.
Feel also free to take a look at Cleaned_words.h/cpp and Word_query.h/cpp. They are improved versions of the files provided in the original question without GUI: Cleaning a file / Word Query
In Word_query.h I would especially like to know how to simplify
std::vector<std::pair<Word, Occurences>> most_frequent_words(const std::map<Word, Occurences>& words_with_occurences);
std::vector<Word> longest_words(const std::map<Word, Occurences>& words_with_occurences);
std::vector<Word> shortest_words(const std::map<Word, Occurences>& words_with_occurences);
These functions were changed from the last question by returning more than one result. I feel like I compute the results in too complicated a way. Also, the three methods look quite similar in their implementations.
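For what it's worth, one way to collapse the three queries into a single helper is to parametrize on the quantity being maximized. This is only a sketch against your declarations; `best_words` and `key` are made-up names, and "shortest" falls out by negating the length:

```cpp
#include <map>
#include <string>
#include <vector>

using Word = std::string;
using Occurences = int;  // spelling kept from the original code

// Collect all words whose key(entry) is maximal. The key functor decides
// the query: word length (longest), negated length (shortest), or the
// occurrence count (most frequent).
template <typename Key>
std::vector<Word> best_words(const std::map<Word, Occurences>& words, Key key)
{
    std::vector<Word> best;
    if (words.empty()) return best;
    auto best_key = key(*words.begin());
    for (const auto& entry : words) {
        auto k = key(entry);
        if (k > best_key) { best_key = k; best.clear(); }
        if (k == best_key) best.push_back(entry.first);
    }
    return best;
}
```

Usage would then look like `best_words(m, [](const auto& e) { return (int)e.first.size(); })` for the longest words, with the other two queries differing only in the lambda.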
The other files are basically all from Stroustrup, with std_lib_facilities.h removed as an include. I think it would also be interesting to hear what could be improved in them nowadays. The book is based on C++11, but feel free to suggest improvements using the latest standard (C++17).
If you want to get this GUI to run: I used MSVC 2017 and followed this tutorial on how to install FLTK: https://bumpyroadtocode.com/2017/08/05/how-to-install-and-use-fltk-1-3-4-in-visual-studio-2017-complete-guide/
Here is the source code in order from important to less important:
Word_query_window.h
#ifndef WORD_QUERY_WINDOW_GUARD_280820182111
#define WORD_QUERY_WINDOW_GUARD_280820182111
#include "Window.h"
#include "GUI.h"
#include "Point.h"
#include <fstream>
#include <map>
namespace word_query_gui {
using Word = std::string;
using Occurences = int;
class Word_query_window : public Graph_lib::Window {
public:
Word_query_window();
private:
void init_window_open_file();
void show_window_open_file();
void hide_window_open_file();
void init_window_show_filename();
void show_window_show_filename();
void hide_window_show_filename();
void init_window_select();
void show_window_select();
void hide_window_select();
void init_window_display();
void show_window_display();
void hide_window_display();
static const Point window_offset_xy;
static constexpr auto window_size_x = 1024;
static constexpr auto window_size_y = 768;
static constexpr auto window_label = "Word query";
static constexpr auto button_size_x = (window_size_x / 100) * 13;
static constexpr auto button_size_y = (window_size_y / 100) * 8;
// Error
static const Point text_error_xy;
static constexpr auto text_error_font_size = (window_size_y / 100) * 8;
static constexpr auto text_error_color = Graph_lib::Color::red;
static constexpr auto text_error_label_invalid = "Invalid input";
static constexpr auto text_error_label_no_file = "File does not exist";
Graph_lib::Text text_error;
// "Window" open file
static const Point in_box_filename_xy;
static constexpr auto in_box_filename_size_x = (window_size_x / 100) * 59;
static constexpr auto in_box_filename_size_y = button_size_y;
static constexpr auto in_box_filename_label = "Enter Filename ";
static constexpr auto in_box_filename_label_size = in_box_filename_size_y;
static constexpr auto in_box_filename_text_size = in_box_filename_size_y;
Graph_lib::In_box in_box_filename;
static const Point button_open_file_xy;
static constexpr auto button_open_file_size_x = button_size_x;
static constexpr auto button_open_file_size_y = button_size_y;
static constexpr auto button_open_file_label = "Open";
Graph_lib::Button button_open_file;
void button_open_file_event();
// "Window" show filename
static const Point button_change_file_xy;
static constexpr auto button_change_file_size_x = button_size_x * 150 / 100;
static constexpr auto button_change_file_size_y = button_size_y;
static constexpr auto button_change_file_label = "Change File";
Graph_lib::Button button_change_file;
void button_change_file_event();
static const Point text_current_filename_xy;
static constexpr auto text_current_filename_font_size = button_change_file_size_y;
static constexpr auto text_current_filename_color = Graph_lib::Color::black;
Graph_lib::Text text_current_filename;
// "Window" Select
static constexpr auto window_select_count_of_buttons = 7;
static constexpr auto window_select_button_size_y = (window_size_y - button_change_file_size_y) / window_select_count_of_buttons;
static const Point button_occurences_xy;
static constexpr auto button_occurences_size_x = button_size_x * 150 / 100;
static constexpr auto button_occurences_size_y = window_select_button_size_y;
static constexpr auto button_occurences_label = "Occurences of:";
Graph_lib::Button button_occurences;
void button_occurences_event();
static const Point in_box_occurences_xy;
static constexpr auto in_box_occurences_size_x = (window_size_x / 100) * 59;
static constexpr auto in_box_occurences_size_y = window_select_button_size_y;
static constexpr auto in_box_occurences_label = "";
static constexpr auto in_box_occurences_label_size = (window_select_button_size_y / 100) * 90;
static constexpr auto in_box_occurences_text_size = (window_select_button_size_y / 100) * 90;
Graph_lib::In_box in_box_occurences;
static const Point text_occurences_xy;
static constexpr auto text_occurences_font_size = window_select_button_size_y;
static constexpr auto text_occurences_color = Graph_lib::Color::black;
Graph_lib::Text text_occurences;
static const Point button_most_frequent_xy;
static constexpr auto button_most_frequent_size_x = button_size_x * 150 / 100;
static constexpr auto button_most_frequent_size_y = window_select_button_size_y;
static constexpr auto button_most_frequent_label = "Most frequent Word";
Graph_lib::Button button_most_frequent;
void button_most_frequent_event();
static const Point button_longest_xy;
static constexpr auto button_longest_size_x = button_size_x * 150 / 100;
static constexpr auto button_longest_size_y = window_select_button_size_y;
static constexpr auto button_longest_label = "Longest Word";
Graph_lib::Button button_longest;
void button_longest_event();
static const Point button_shortest_xy;
static constexpr auto button_shortest_size_x = button_size_x * 150 / 100;
static constexpr auto button_shortest_size_y = window_select_button_size_y;
static constexpr auto button_shortest_label = "Shortest Word";
Graph_lib::Button button_shortest;
void button_shortest_event();
static const Point button_starting_with_xy;
static constexpr auto button_starting_with_size_x = button_size_x * 150 / 100;
static constexpr auto button_starting_with_size_y = window_select_button_size_y;
static constexpr auto button_starting_with_label = "Words starting with:";
Graph_lib::Button button_starting_with;
void button_starting_with_event();
static const Point in_box_starting_with_xy;
static constexpr auto in_box_starting_with_size_x = (window_size_x / 100) * 59;
static constexpr auto in_box_starting_with_size_y = window_select_button_size_y;
static constexpr auto in_box_starting_with_label = "";
static constexpr auto in_box_starting_with_label_size = (window_select_button_size_y / 100) * 90;
static constexpr auto in_box_starting_with_text_size = (window_select_button_size_y / 100) * 90;
Graph_lib::In_box in_box_starting_with;
static const Point button_with_len_xy;
static constexpr auto button_with_len_size_x = button_size_x * 150 / 100;
static constexpr auto button_with_len_size_y = window_select_button_size_y;
static constexpr auto button_with_len_label = "Words with len:";
Graph_lib::Button button_with_len;
void button_with_len_event();
static const Point in_box_with_len_xy;
static constexpr auto in_box_with_len_size_x = (window_size_x / 100) * 59;
static constexpr auto in_box_with_len_size_y = window_select_button_size_y;
static constexpr auto in_box_with_len_label = "";
static constexpr auto in_box_with_len_label_size = (window_select_button_size_y / 100) * 90;
static constexpr auto in_box_with_len_text_size = (window_select_button_size_y / 100) * 90;
Graph_lib::In_box in_box_with_len;
static const Point button_show_all_xy;
static constexpr auto button_show_all_size_x = button_occurences_size_x;
static constexpr auto button_show_all_size_y = window_select_button_size_y;
static constexpr auto button_show_all_label = "Show all Words";
Graph_lib::Button button_show_all;
void button_show_all_event();
// "Window" display
static const Point button_display_back_xy;
static constexpr auto button_display_back_size_x = button_size_x;
static constexpr auto button_display_back_size_y = button_size_y;
static constexpr auto button_display_back_label = "Back";
Graph_lib::Button button_display_back;
void button_display_back_event();
static const Point button_previous_page_xy;
static constexpr auto button_previous_page_size_x = button_size_x;
static constexpr auto button_previous_page_size_y = button_size_y;
static constexpr auto button_previous_page_label = "Previous Page";
Graph_lib::Button button_previous_page;
void button_previous_page_event();
static const Point button_next_page_xy;
static constexpr auto button_next_page_size_x = button_size_x;
static constexpr auto button_next_page_size_y = button_size_y;
static constexpr auto button_next_page_label = "Next Page";
Graph_lib::Button button_next_page;
void button_next_page_event();
static const Point text_display_xy;
static constexpr auto text_display_font_size = in_box_filename_size_y / 2;
static constexpr auto text_display_offset_y = text_display_font_size;
static constexpr auto text_display_color = Graph_lib::Color::black;
Graph_lib::Vector_ref<Graph_lib::Text> text_display;
template<typename T>
void init_text_display(const T& container);
static constexpr auto text_display_entrys_per_page = ((window_size_y - text_current_filename_font_size - button_next_page_size_y) / text_display_font_size) - 1;
int page{ 0 };
void print_page();
std::string current_filename;
std::map<Word, Occurences> words_in_file;
};
inline bool file_exists(const std::string& filename) {
std::ifstream f(filename.c_str());
return f.good();
}
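Since Cleaned_words.cpp already pulls in `<filesystem>`, the same existence probe can be done without constructing a stream. A minimal C++17 sketch (the name `file_exists_fs` is illustrative, not from the original):

```cpp
#include <filesystem>
#include <string>

// Probe for existence without opening a stream (C++17).
// Like the ifstream version, this is inherently racy: the file can
// disappear between this check and the later open.
inline bool file_exists_fs(const std::string& filename)
{
    std::error_code ec;  // error_code overload: never throws
    return std::filesystem::exists(filename, ec);
}
```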
inline void init_element(Graph_lib::Window& window, Graph_lib::Text& text, int font_size, Graph_lib::Color color)
{
window.attach(text);
text.set_font_size(font_size);
text.set_color(color);
}
inline void init_element(Graph_lib::Window& window, Graph_lib::Button& button)
{
window.attach(button);
button.hide();
}
inline void init_element(Graph_lib::Window& window, Graph_lib::In_box& in_box, int text_size, int label_size)
{
window.attach(in_box);
in_box.set_text_size(text_size);
in_box.set_label_size(label_size);
in_box.hide();
}
inline void make_gui_text_output(Graph_lib::Vector_ref<Graph_lib::Text>& texts, const std::string& output, Point pos_xy)
{
texts.push_back(new Graph_lib::Text{ pos_xy,output });
}
inline void unselect(Graph_lib::Button& button)
// dirty hack so the button is no longer preselected after it was pushed
{
button.hide();
button.show();
}
inline std::string make_gui_output(const std::pair<Word, Occurences>& p)
{
return p.first + " " + std::to_string(p.second);
}
inline std::string make_gui_output(const Word& p)
{
return p;
}
template<typename T>
void Word_query_window::init_text_display(const T& container)
{
int entrys = 0;
for (const auto& element : container) {
if (entrys == text_display_entrys_per_page) { // page is full: restart at the top of the next page
entrys = 0;
}
make_gui_text_output(text_display, make_gui_output(element),
Point{ text_display_xy.x, text_display_xy.y + text_display_offset_y * entrys });
++entrys;
}
}
int word_query_application();
}
#endif
Word_query_window.cpp
#include "Word_query_window.h"
#include "Word_query.h"
#include "Cleaned_words.h"
namespace word_query_gui {
const Point Word_query_window::window_offset_xy{ Point{ 50,50 } };
const Point Word_query_window::text_error_xy{ Point{ window_offset_xy.x + (window_size_x / 100) * 50,window_offset_xy.y } };
// "Window" open file
const Point Word_query_window::in_box_filename_xy{ Point{ (window_size_x / 100) * 43,window_offset_xy.y + in_box_filename_size_y } };
const Point Word_query_window::button_open_file_xy{ Point{ window_size_x - button_open_file_size_x - (window_size_x / 100) * 2,in_box_filename_xy.y + in_box_filename_size_y } };
// "Window" show filename
const Point Word_query_window::button_change_file_xy{ Point{0,0} };
const Point Word_query_window::text_current_filename_xy{ Point{ button_change_file_xy.x + button_change_file_size_x, button_change_file_xy.y + button_change_file_size_y*9/10 } };
// "Window" select
const Point Word_query_window::button_occurences_xy{ Point{0, button_change_file_xy.y + button_change_file_size_y} };
const Point Word_query_window::in_box_occurences_xy{ Point{button_occurences_xy.x + button_occurences_size_x,button_occurences_xy.y} };
const Point Word_query_window::text_occurences_xy{ Point{in_box_occurences_xy.x + in_box_occurences_size_x,in_box_occurences_xy.y + in_box_occurences_size_y*9/10} };
const Point Word_query_window::button_most_frequent_xy{ Point{0, button_occurences_xy.y + button_occurences_size_y} };
const Point Word_query_window::button_longest_xy{ Point{0, button_most_frequent_xy.y + button_most_frequent_size_y} };
const Point Word_query_window::button_shortest_xy{ Point{0, button_longest_xy.y + button_longest_size_y} };
const Point Word_query_window::button_starting_with_xy{ Point{0, button_shortest_xy.y + button_shortest_size_y} };
const Point Word_query_window::in_box_starting_with_xy{ Point{button_starting_with_xy.x + button_starting_with_size_x,button_starting_with_xy.y} };
const Point Word_query_window::button_with_len_xy{ Point{0, button_starting_with_xy.y + button_starting_with_size_y} };
const Point Word_query_window::in_box_with_len_xy{ Point{button_with_len_xy.x + button_with_len_size_x,button_with_len_xy.y} };
const Point Word_query_window::button_show_all_xy{ Point{0, button_with_len_xy.y + button_with_len_size_y} };
// "Window" display order
const Point Word_query_window::button_display_back_xy{ Point{ 0 , window_size_y - button_display_back_size_y } };
const Point Word_query_window::button_previous_page_xy{ Point{ window_size_x - button_previous_page_size_x - button_next_page_size_x, window_size_y - button_previous_page_size_y } };
const Point Word_query_window::button_next_page_xy{ Point{ window_size_x - button_next_page_size_x, window_size_y - button_previous_page_size_y } };
const Point Word_query_window::text_display_xy{ Point{0,window_offset_xy.y + text_current_filename_font_size } };
Word_query_window::Word_query_window()
:Window{ window_offset_xy, window_size_x, window_size_y, window_label },
// Error
text_error{ text_error_xy,"" },
// "Window" open file
in_box_filename{ in_box_filename_xy,in_box_filename_size_x,in_box_filename_size_y,in_box_filename_label },
button_open_file{
button_open_file_xy,button_open_file_size_x,button_open_file_size_y,button_open_file_label,
[](Graph_lib::Address, Graph_lib::Address pw) { Graph_lib::reference_to<Word_query_window>(pw).button_open_file_event(); }
},
// all following Menus
button_change_file{
button_change_file_xy,button_change_file_size_x,button_change_file_size_y,button_change_file_label,
[](Graph_lib::Address, Graph_lib::Address pw) { Graph_lib::reference_to<Word_query_window>(pw).button_change_file_event(); }
},
text_current_filename{ text_current_filename_xy,"" },
// "Window" Select Option
button_occurences{
button_occurences_xy,button_occurences_size_x,button_occurences_size_y,button_occurences_label,
[](Graph_lib::Address, Graph_lib::Address pw) { Graph_lib::reference_to<Word_query_window>(pw).button_occurences_event(); }
},
in_box_occurences{ in_box_occurences_xy,in_box_occurences_size_x,in_box_occurences_size_y,in_box_occurences_label },
text_occurences{ text_occurences_xy,"" },
button_most_frequent{
button_most_frequent_xy,button_most_frequent_size_x,button_most_frequent_size_y,button_most_frequent_label,
[](Graph_lib::Address, Graph_lib::Address pw) { Graph_lib::reference_to<Word_query_window>(pw).button_most_frequent_event(); }
},
button_longest{
button_longest_xy,button_longest_size_x,button_longest_size_y,button_longest_label,
[](Graph_lib::Address, Graph_lib::Address pw) { Graph_lib::reference_to<Word_query_window>(pw).button_longest_event(); }
},
button_shortest{
button_shortest_xy,button_shortest_size_x,button_shortest_size_y,button_shortest_label,
[](Graph_lib::Address, Graph_lib::Address pw) { Graph_lib::reference_to<Word_query_window>(pw).button_shortest_event(); }
},
button_starting_with{
button_starting_with_xy,button_starting_with_size_x,button_starting_with_size_y,button_starting_with_label,
[](Graph_lib::Address, Graph_lib::Address pw) { Graph_lib::reference_to<Word_query_window>(pw).button_starting_with_event(); }
},
in_box_starting_with{ in_box_starting_with_xy,in_box_starting_with_size_x,in_box_starting_with_size_y,in_box_starting_with_label },
button_with_len{
button_with_len_xy,button_with_len_size_x,button_with_len_size_y,button_with_len_label,
[](Graph_lib::Address, Graph_lib::Address pw) { Graph_lib::reference_to<Word_query_window>(pw).button_with_len_event(); }
},
in_box_with_len{ in_box_with_len_xy,in_box_with_len_size_x,in_box_with_len_size_y,in_box_with_len_label },
button_show_all{
button_show_all_xy,button_show_all_size_x,button_show_all_size_y,button_show_all_label,
[](Graph_lib::Address, Graph_lib::Address pw) { Graph_lib::reference_to<Word_query_window>(pw).button_show_all_event(); }
},
// "Window" display
button_display_back{
button_display_back_xy,button_display_back_size_x,button_display_back_size_y,button_display_back_label,
[](Graph_lib::Address, Graph_lib::Address pw) { Graph_lib::reference_to<Word_query_window>(pw).button_display_back_event(); }
},
button_previous_page{
button_previous_page_xy,button_previous_page_size_x,button_previous_page_size_y,button_previous_page_label,
[](Graph_lib::Address, Graph_lib::Address pw) { Graph_lib::reference_to<Word_query_window>(pw).button_previous_page_event(); }
},
button_next_page{
button_next_page_xy,button_next_page_size_x,button_next_page_size_y,button_next_page_label,
[](Graph_lib::Address, Graph_lib::Address pw) { Graph_lib::reference_to<Word_query_window>(pw).button_next_page_event(); }
}
{
init_element(*this, text_error, text_error_font_size, text_error_color);
init_window_open_file();
init_window_show_filename();
init_window_select();
init_window_display();
show_window_open_file();
}
void Word_query_window::init_window_open_file()
{
init_element(*this, in_box_filename, in_box_filename_text_size, in_box_filename_label_size);
init_element(*this, button_open_file);
}
void Word_query_window::show_window_open_file()
{
in_box_filename.show();
button_open_file.show();
}
void Word_query_window::hide_window_open_file()
{
in_box_filename.hide();
button_open_file.hide();
}
void Word_query_window::init_window_show_filename()
{
init_element(*this, button_change_file);
init_element(*this, text_current_filename, text_current_filename_font_size, text_current_filename_color);
}
void Word_query_window::show_window_show_filename()
{
button_change_file.show();
text_current_filename.set_label(current_filename);
}
void Word_query_window::hide_window_show_filename()
{
button_change_file.hide();
text_current_filename.set_label("");
}
void Word_query_window::init_window_select()
{
init_element(*this, button_occurences);
init_element(*this, in_box_occurences, in_box_occurences_text_size, in_box_occurences_label_size);
init_element(*this, text_occurences, text_occurences_font_size, text_occurences_color);
init_element(*this, button_most_frequent);
init_element(*this, button_longest);
init_element(*this, button_shortest);
init_element(*this, button_starting_with);
init_element(*this, in_box_starting_with, in_box_starting_with_text_size, in_box_starting_with_label_size);
init_element(*this, button_with_len);
init_element(*this, in_box_with_len, in_box_with_len_text_size, in_box_with_len_label_size);
init_element(*this, button_show_all);
}
void Word_query_window::show_window_select()
{
button_occurences.show();
in_box_occurences.show();
button_most_frequent.show();
button_longest.show();
button_shortest.show();
button_starting_with.show();
in_box_starting_with.show();
button_with_len.show();
in_box_with_len.show();
button_show_all.show();
}
void Word_query_window::hide_window_select()
{
button_occurences.hide();
in_box_occurences.empty();
in_box_occurences.hide();
text_occurences.set_label("");
button_most_frequent.hide();
button_longest.hide();
button_shortest.hide();
button_starting_with.hide();
in_box_starting_with.empty();
in_box_starting_with.hide();
button_with_len.hide();
in_box_with_len.empty();
in_box_with_len.hide();
button_show_all.hide();
}
void Word_query_window::init_window_display()
{
init_element(*this, button_display_back);
init_element(*this, button_previous_page);
init_element(*this, button_next_page);
}
void Word_query_window::show_window_display()
{
button_display_back.show();
if (page != 0) {
button_previous_page.show();
}
if (text_display.size() > text_display_entrys_per_page*(page + 1)) {
button_next_page.show();
}
}
void Word_query_window::hide_window_display()
{
button_display_back.hide();
button_previous_page.hide();
button_next_page.hide();
}
void Word_query_window::button_open_file_event()
{
text_error.set_label("");
current_filename = in_box_filename.get_string();
in_box_filename.empty();
if (current_filename.empty()) {
unselect(button_open_file); // prevent the button from staying preselected
text_error.set_label(text_error_label_invalid);
}
else if (!file_exists(current_filename)) {
unselect(button_open_file); // prevent the button from staying preselected
text_error.set_label(text_error_label_no_file);
}
else {
words_in_file = cleaned_words::read_words_from_file(current_filename);
hide_window_open_file();
show_window_show_filename();
show_window_select();
}
}
void Word_query_window::button_change_file_event()
{
hide_window_show_filename();
hide_window_select();
hide_window_display();
show_window_open_file();
}
void Word_query_window::button_occurences_event()
{
unselect(button_occurences);
auto search_word = in_box_occurences.get_string();
in_box_occurences.empty();
auto occurences = word_query::occurences_of_word(search_word, words_in_file);
text_occurences.set_label(std::to_string(occurences));
}
void Word_query_window::button_most_frequent_event()
{
unselect(button_most_frequent);
auto most_frequent_words = word_query::most_frequent_words(words_in_file);
init_text_display(most_frequent_words);
hide_window_select();
show_window_display();
print_page();
}
void Word_query_window::button_longest_event()
{
unselect(button_longest);
auto longest_words = word_query::longest_words(words_in_file);
init_text_display(longest_words);
hide_window_select();
show_window_display();
print_page();
}
void Word_query_window::button_shortest_event()
{
unselect(button_shortest);
auto shortest_words = word_query::shortest_words(words_in_file);
init_text_display(shortest_words);
hide_window_select();
show_window_display();
print_page();
}
void Word_query_window::button_starting_with_event()
{
unselect(button_starting_with);
auto begin_str = in_box_starting_with.get_string();
in_box_starting_with.empty();
auto words_starting_with = word_query::words_starting_with(begin_str, words_in_file);
init_text_display(words_starting_with);
hide_window_select();
show_window_display();
print_page();
}
void Word_query_window::button_with_len_event()
{
unselect(button_with_len);
auto length = in_box_with_len.get_int();
in_box_with_len.empty();
if (length > 0) {
auto words_with_len = word_query::words_with_length(length, words_in_file);
init_text_display(words_with_len);
hide_window_select();
show_window_display();
print_page();
}
}
void Word_query_window::button_show_all_event()
{
unselect(button_show_all);
init_text_display(words_in_file);
hide_window_select();
show_window_display();
print_page();
}
void Word_query_window::button_display_back_event()
{
for (int i = 0; i < text_display.size(); ++i) {
detach(text_display[i]);
}
// Destroy and re-create in place (needs <new>): a bare destructor call
// leaves the member dead, to be destroyed a second time with the window.
// A clear() member on Vector_ref would be the cleaner fix.
text_display.~Vector_ref();
new (&text_display) Graph_lib::Vector_ref<Graph_lib::Text>{};
page = 0;
hide_window_display();
show_window_select();
}
void Word_query_window::button_previous_page_event()
{
--page;
hide_window_display();
show_window_display();
print_page();
}
void Word_query_window::button_next_page_event()
{
++page;
hide_window_display();
show_window_display();
print_page();
}
void Word_query_window::print_page()
{
auto entrys_per_page = text_display_entrys_per_page;
for (int i = 0; i < text_display.size(); ++i) {
detach(text_display[i]);
}
for (int i = entrys_per_page * page; i < text_display.size() && i < entrys_per_page + (entrys_per_page*page); ++i) {
text_display[i].set_font_size(text_display_font_size);
text_display[i].set_color(text_display_color);
attach(text_display[i]);
}
}
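print_page's window into text_display is the half-open index range [page*per_page, min(size, (page+1)*per_page)). Pulled out as a free function, the range arithmetic is easy to check in isolation (sketch; `page_range` is an illustrative helper, not part of the original class):

```cpp
#include <algorithm>
#include <utility>

// First and one-past-last index shown on a given page.
inline std::pair<int, int> page_range(int total, int per_page, int page)
{
    int first = per_page * page;
    int last = std::min(total, first + per_page);
    if (first > last) first = last;  // a page beyond the end shows nothing
    return { first, last };
}
```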
int word_query_application()
{
Word_query_window win;
return Graph_lib::gui_main();
}
}
main.cpp
#include "Word_query_window.h"
int main()
{
return word_query_gui::word_query_application();
}
Cleaned_words.h
#ifndef CLEAN_FILE290320180702_GUARD
#define CLEAN_FILE290320180702_GUARD
#include <cctype>
#include <string>
#include <vector>
#include <map>
namespace cleaned_words {
using Word = std::string;
using Occurences = int;
std::map<Word, Occurences> read_words_from_file(const std::string& filename);
std::map<Word, Occurences> read_cleaned_words_with_occurence(std::istream& is);
bool contains_digits(const Word& word);
Word remove_invalid_signs(Word word, const std::string& invalid_signs);
inline bool unsigned_isspace(unsigned char c)
{
return std::isspace(c);
}
inline bool unsigned_isdigit(unsigned char c)
{
return std::isdigit(c);
}
Word remove_whitespace(Word word);
Word remove_capital_letters(Word word);
std::vector<Word> remove_contractions(const Word& word);
void remove_plural(std::map<Word, Occurences>& cleaned_words);
void write_cleaned_words_to_file(const std::string& filename, const std::map<Word, Occurences>& cleaned_words);
}
#endif
Cleaned_words.cpp
#include "Cleaned_words.h"
#include <algorithm>
#include <cctype>
#include <filesystem>
#include <fstream>
namespace cleaned_words {
std::map<Word, Occurences> read_words_from_file(const std::string& filename)
{
std::ifstream ifs{ filename };
if (!ifs) {
throw std::runtime_error("void read_words_from_file(const std::string& filename)\nFile could not be opened\n");
}
return read_cleaned_words_with_occurence(ifs);
}
std::map<Word, Occurences> read_cleaned_words_with_occurence(std::istream& is)
{
std::map<Word, Occurences> cleaned_words;
for (Word word; is >> word;) {
if (contains_digits(word)) continue;
word = remove_invalid_signs(word, R"(°-_^@{}[]<>&.,_()+-=?“”:;/\")");
word = remove_whitespace(word);
word = remove_capital_letters(word);
if (word.empty()) continue;
std::vector<Word> words = remove_contractions(word);
for (auto& word : words) { // strip ' only after contractions were expanded, so they are not erased too early
word = remove_invalid_signs(word, "'");
}
for (const auto& word : words) {
if (word.size() == 1 && word != "a" && word != "i" && word != "o") continue;
++cleaned_words[word];
}
}
remove_plural(cleaned_words);
return cleaned_words;
}
bool contains_digits(const Word& word)
{
if (word.empty()) return false;
return std::any_of(word.begin(), word.end(), unsigned_isdigit);
}
Word remove_invalid_signs(Word word, const std::string& invalid_signs)
// erase the listed invalid characters from the word
{
auto is_invalid = [&](char c) { return invalid_signs.find(c) != std::string::npos; };
word.erase(std::remove_if(word.begin(), word.end(), is_invalid), word.end());
return word;
}
Word remove_whitespace(Word word)
{
if (word.empty()) return word;
word.erase(std::remove_if(word.begin(), word.end(), unsigned_isspace), word.end());
return word;
}
Word remove_capital_letters(Word word)
{
for (auto& letter : word) {
letter = std::tolower(static_cast<unsigned char>(letter));
}
return word;
}
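The lowercase loop can also be written with std::transform; behaviour is identical (sketch, `to_lower` is an illustrative name):

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// std::transform equivalent of the letter-by-letter lowering loop.
// The unsigned char cast keeps std::tolower well-defined for all inputs.
inline std::string to_lower(std::string word)
{
    std::transform(word.begin(), word.end(), word.begin(),
        [](unsigned char c) { return static_cast<char>(std::tolower(c)); });
    return word;
}
```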
std::vector<Word> remove_contractions(const Word& word)
{
const std::map<Word, std::vector<Word>> shorts_and_longs
{
{ "aren't",{ "are","not" }},
{ "can't", {"cannot"} },
{ "could've",{ "could","have" } },
{ "couldn't",{ "could","not" } },
{ "daresn't",{ "dare","not" } },
{ "dasn't",{ "dare","not" } },
{ "didn't",{ "did","not" } },
{ "doesn't",{ "does","not" } },
{ "don't",{ "do","not" } },
{ "e'er",{ "ever" } },
{ "everyone's",{ "everyone","is" } },
{ "finna",{ "fixing","to" } },
{ "gimme",{ "give","me" } },
{ "gonna",{ "going","to" } },
{ "gon't",{ "go","not" } },
{ "gotta",{ "got","to" } },
{ "hadn't",{ "had","not" } },
{ "hasn't",{ "has","not" } },
{ "haven't",{ "have","not" } },
{ "he've",{ "he","have" } },
{ "how'll",{ "how","will" } },
{ "how're",{ "how","are" } },
{ "i'm",{ "i","am" } }, // lowercase keys: words are lowercased before this lookup, so "I'm" would never match
{ "i'm'a",{ "i","am","about","to" } },
{ "i'm'o",{ "i","am","going","to" } },
{ "i've",{ "i","have" } },
{ "isn't",{ "is","not" } },
{ "it'd",{ "it","would" } },
{ "let's",{ "let","us" } },
{ "ma'am",{ "madam" } },
{ "mayn't",{ "may","not" } },
{ "may've",{ "may","have" } },
{ "mightn't",{ "might","not" } },
{ "might've",{ "might","have" } },
{ "mustn't",{ "must","not" } },
{ "mustn't've",{ "must","not","have" } },
{ "must've",{ "must","have" } },
{ "needn't",{ "need","not" } },
{ "ne'er",{ "never" } },
{ "o'clock",{ "of","the","clock" } },
{ "o'er",{ "over" } },
{ "ol'",{ "old" } },
{ "oughtn't",{ "ought","not" } },
{ "shan't",{ "shall","not" } },
{ "should've",{ "should","have" } },
{ "shouldn't",{ "should","not" } },
{ "that're",{ "that","are" } },
{ "there're",{ "there","are" } },
{ "these're",{ "these","are" } },
{ "they've",{ "they","have" } },
{ "those're",{ "those","are" } },
{ "'tis",{ "it","is" } },
{ "'twas",{ "it","was" } },
{ "wasn't",{ "was","not" } },
{ "we'd've",{ "we","would","have" } },
{ "we'll",{ "we","will" } },
{ "we're",{ "we","are" } },
{ "we've",{ "we","have" } },
{ "weren't",{ "were","not" } },
{ "what'd",{ "what","did" } },
{ "what're",{ "what","are" } },
{ "what've",{ "what","have" } },
{ "where'd",{ "where","did" } },
{ "where're",{ "where","are" } },
{ "where've",{ "where","have" } },
{ "who'd've",{ "who","would","have" } },
{ "who're",{ "who","are" } },
{ "who've",{ "who","have" } },
{ "why'd",{ "why","did" } },
{ "why're",{ "why","are" } },
{ "won't",{ "will","not" } },
{ "would've",{ "would","have" } },
{ "wouldn't",{ "would","not" } },
{ "y'all",{ "you","all" } },
{ "y'all'd've",{ "you","all","would","have" } },
{ "yesn't",{ "yes","not" } },
{ "you're",{ "you","are" } },
{ "you've",{ "you","have" } },
{ "whomst'd've",{ "whomst","would","have" } },
{ "noun's",{ "noun","is" } },
};
auto it = shorts_and_longs.find(word);
if (it == shorts_and_longs.end()) {
return std::vector<Word>{word};
}
return it->second;
}
void remove_plural(std::map<Word, Occurences>& cleaned_words)
// assume a plural is a word with an additional s
// e.g. ship and ships
// if both are present ships gets deleted and ++ship
{
for (auto it = cleaned_words.begin(); it != cleaned_words.end();) {
if(!it->first.empty() && it->first.back() == 's') {
Word singular = it->first;
singular.pop_back(); // remove 's' at the end
auto it_singular = cleaned_words.find(singular);
if (it_singular != cleaned_words.end()) {
cleaned_words[it_singular->first]+= it->second;
it = cleaned_words.erase(it);
}
else {
++it;
}
}
else {
++it;
}
}
}
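The erase-while-iterating pattern above relies on map::erase returning the iterator past the removed element; erasing any other way would invalidate the loop. A self-contained miniature of the same merge rule (standalone sketch mirroring remove_plural, using plain std::string keys):

```cpp
#include <map>
#include <string>

// Merge "xs" into "x" when both are present (remove_plural's rule).
inline void merge_plurals(std::map<std::string, int>& words)
{
    for (auto it = words.begin(); it != words.end();) {
        const std::string& w = it->first;
        if (!w.empty() && w.back() == 's') {
            auto singular = words.find(w.substr(0, w.size() - 1));
            if (singular != words.end()) {
                singular->second += it->second;
                it = words.erase(it);  // erase returns the next valid iterator
                continue;
            }
        }
        ++it;
    }
}
```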
void write_cleaned_words_to_file(const std::string& filename, const std::map<Word, Occurences>& cleaned_words)
{
std::ofstream ofs{ filename };
for (const auto& word : cleaned_words) {
ofs << word.first << " " << word.second << '\n';
}
}
}
Word_query.h
#ifndef WORD_QUERY_GUARD_270820181433
#define WORD_QUERY_GUARD_270820181433
#include <map>
#include <string>
#include <vector>
namespace word_query {
using Word = std::string;
using Occurences = int;
using Length = std::map<Word, Occurences>::size_type;
int occurences_of_word(const Word& word, const std::map<Word, Occurences>& words_with_occurences);
std::vector<std::pair<Word, Occurences>> most_frequent_words(const std::map<Word, Occurences>& words_with_occurences);
std::vector<Word> longest_words(const std::map<Word, Occurences>& words_with_occurences);
std::vector<Word> shortest_words(const std::map<Word, Occurences>& words_with_occurences);
std::vector<Word> words_starting_with(const Word& begin_of_word, const std::map<Word, Occurences>& words_with_occurences);
std::vector<Word> words_with_length(Length length, const std::map<Word, Occurences>& words_with_occurences);
}
#endif
Word_query.cpp
#include "Word_query.h"
#include <algorithm>
namespace word_query {
int occurences_of_word(const Word& word, const std::map<Word, Occurences>& words_with_occurences)
//How many occurences of x are there in a file?
{
auto it = words_with_occurences.find(word);
if (it == words_with_occurences.end()) {
return 0;
}
else {
return it->second;
}
}
std::vector<std::pair<Word, Occurences>> most_frequent_words(const std::map<Word, Occurences>& words_with_occurences)
//Which word occurs most frequently?
{
using pair_type = std::map<Word, Occurences>::value_type;
std::vector<std::pair<Word, Occurences>> words;
auto it_begin = words_with_occurences.begin();
auto it_result = words_with_occurences.end();
auto it_last_result = words_with_occurences.end();
for(;;){
it_result = std::max_element(
it_begin, words_with_occurences.end(),
[](const pair_type& a, const pair_type& b)
{
return a.second < b.second;
}
);
if (it_result == words_with_occurences.end()) {
break;
}
else if (it_last_result == words_with_occurences.end() || it_last_result->second == it_result->second) {
words.push_back(*it_result);
}
else {
break;
}
it_last_result = it_result;
it_begin = ++it_result;
}
return words;
}
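The repeated max_element scans cost O(n·k) for k tied winners. A two-pass alternative (sketch with plain std::string keys, not the original Word/Occurences aliases) finds the maximum once, then collects the ties in a second scan:

```cpp
#include <algorithm>
#include <map>
#include <string>
#include <vector>

// Two-pass version: one scan for the maximum count, one to collect ties.
inline std::vector<std::pair<std::string, int>>
most_frequent_two_pass(const std::map<std::string, int>& words)
{
    std::vector<std::pair<std::string, int>> result;
    if (words.empty()) return result;
    const int best = std::max_element(words.begin(), words.end(),
        [](const auto& a, const auto& b) { return a.second < b.second; })->second;
    for (const auto& w : words)
        if (w.second == best) result.push_back(w);
    return result;
}
```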
std::vector<Word> longest_words(const std::map<Word, Occurences>& words_with_occurences)
//Which is the longest word in the file?
{
using pair_type = std::map<Word, Occurences>::value_type;
std::vector<Word> words;
auto it_begin = words_with_occurences.begin();
auto it_result = words_with_occurences.end();
auto it_last_result = words_with_occurences.end();
for (;;) {
it_result = std::max_element(
it_begin, words_with_occurences.end(),
[](const pair_type& a, const pair_type& b)
{
return a.first.size() < b.first.size();
}
);
if (it_result == words_with_occurences.end()) {
break;
}
else if (it_last_result == words_with_occurences.end() || it_last_result->first.size() == it_result->first.size()) {
words.push_back(it_result->first);
}
else {
break;
}
it_last_result = it_result;
it_begin = ++it_result;
}
return words;
}
std::vector<Word> shortest_words(const std::map<Word, Occurences>& words_with_occurences)
//Which is the shortest word in the file?
{
using pair_type = std::map<Word, Occurences>::value_type;
std::vector<Word> words;
auto it_begin = words_with_occurences.begin();
auto it_result = words_with_occurences.end();
auto it_last_result = words_with_occurences.end();
for (;;) {
it_result = std::min_element(
it_begin, words_with_occurences.end(),
[](const pair_type& a, const pair_type& b)
{
return a.first.size() < b.first.size();
}
);
if (it_result == words_with_occurences.end()) {
break;
}
else if (it_last_result == words_with_occurences.end() || it_last_result->first.size() == it_result->first.size()) {
words.push_back(it_result->first);
}
else {
break;
}
it_last_result = it_result;
it_begin = ++it_result;
}
return words;
}
std::vector<Word> words_starting_with(const Word& begin_of_word, const std::map<Word, Occurences>& words_with_occurences)
{
std::vector<Word> matched_words;
for (const auto& word : words_with_occurences) {
if (word.first.substr(0, begin_of_word.size()) == begin_of_word) {
matched_words.push_back(word.first);
}
}
return matched_words;
}
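Because std::map is ordered lexicographically, all words sharing a prefix form one contiguous range: start at lower_bound(prefix) and stop at the first non-match, instead of scanning the whole map. A sketch (plain std::string keys, illustrative name):

```cpp
#include <map>
#include <string>
#include <vector>

// Prefix search exploiting the map's ordering: O(log n + matches)
// instead of a full O(n) scan.
inline std::vector<std::string>
words_with_prefix(const std::string& prefix, const std::map<std::string, int>& words)
{
    std::vector<std::string> matched;
    for (auto it = words.lower_bound(prefix);
         it != words.end() && it->first.compare(0, prefix.size(), prefix) == 0;
         ++it)
        matched.push_back(it->first);
    return matched;
}
```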
std::vector<Word> words_with_length(Length length, const std::map<Word, Occurences>& words_with_occurences)
//all words with n letters
{
std::vector<Word> words;
for (const auto& element : words_with_occurences) {
if (element.first.size() == length)
words.push_back(element.first);
}
return words;
}
}
Point.h
// This is a GUI support code to the chapters 12-16 of the book
// "Programming -- Principles and Practice Using C++" by Bjarne Stroustrup
//
#ifndef POINT_GUARD
#define POINT_GUARD
struct Point {
int x, y;
Point(int xx, int yy) : x(xx), y(yy) { }
Point() :x(0), y(0) { }
};
inline bool operator==(Point a, Point b) { return a.x==b.x && a.y==b.y; }
inline bool operator!=(Point a, Point b) { return !(a==b); }
#endif // POINT_GUARD
GUI.h
//
// This is a GUI support code to the chapters 12-16 of the book
// "Programming -- Principles and Practice Using C++" by Bjarne Stroustrup
//
#ifndef GUI_GUARD
#define GUI_GUARD
#include "Window.h"
#include "Graph.h"
#include <string>
namespace Graph_lib {
//------------------------------------------------------------------------------
typedef void* Address; // Address is a synonym for void*
typedef void(*Callback)(Address, Address); // FLTK's required function type for all callbacks
//------------------------------------------------------------------------------
template<class W> W& reference_to(Address pw)
// treat an address as a reference to a W
{
return *static_cast<W*>(pw);
}
//------------------------------------------------------------------------------
class Widget {
// Widget is a handle to an Fl_widget - it is *not* an Fl_widget
// We try to keep our interface classes at arm's length from FLTK
public:
Widget(Point xy, int w, int h, const std::string& s, Callback cb)
: loc(xy), width(w), height(h), label(s), do_it(cb)
{}
virtual void move(int dx, int dy) { hide(); pw->position(loc.x += dx, loc.y += dy); show(); }
virtual void hide() { pw->hide(); }
virtual void show() { pw->show(); }
virtual void attach(Graph_lib::Window&) = 0;
Point loc;
int width;
int height;
std::string label;
Callback do_it;
virtual ~Widget() { }
protected:
Graph_lib::Window* own; // every Widget belongs to a Window
Fl_Widget* pw; // connection to the FLTK Widget
private:
Widget & operator=(const Widget&); // don't copy Widgets
Widget(const Widget&);
};
//------------------------------------------------------------------------------
struct Button : Widget {
Button(Point xy, int w, int h, const std::string& label, Callback cb)
: Widget(xy, w, h, label, cb)
{}
void attach(Graph_lib::Window&);
};
//------------------------------------------------------------------------------
struct In_box : Widget {
In_box(Point xy, int w, int h, const std::string& s)
:Widget(xy, w, h, s, 0) { }
int get_int();
std::string get_string();
void attach(Graph_lib::Window& win);
// Extensions not provided by Stroustrup:
void set_text_size(int size);
void set_label_size(int size);
void empty(); // empties the input field
};
//------------------------------------------------------------------------------
struct Out_box : Widget {
Out_box(Point xy, int w, int h, const std::string& s)
:Widget(xy, w, h, s, 0) { }
void put(int);
void put(const std::string&);
void attach(Graph_lib::Window& win);
// Extensions not provided by Stroustrup:
void set_text_size(int size);
void set_label_size(int size);
};
//------------------------------------------------------------------------------
struct Menu : Widget {
enum Kind { horizontal, vertical };
Menu(Point xy, int w, int h, Kind kk, const std::string& label)
: Widget(xy, w, h, label, 0), k(kk), offset(0)
{
}
Vector_ref<Button> selection;
Kind k;
int offset;
int attach(Button& b); // Menu does not delete &b
int attach(Button* p); // Menu deletes p
void show() // show all buttons
{
for (auto i = 0; i<selection.size(); ++i)
selection[i].show();
}
void hide() // hide all buttons
{
for (auto i = 0; i<selection.size(); ++i)
selection[i].hide();
}
void move(int dx, int dy) // move all buttons
{
for (auto i = 0; i<selection.size(); ++i)
selection[i].move(dx, dy);
}
void attach(Graph_lib::Window& win) // attach all buttons
{
for (int i = 0; i < selection.size(); ++i) {
win.attach(selection[i]);
}
own = &win;
}
};
//------------------------------------------------------------------------------
} // of namespace Graph_lib
#endif // GUI_GUARD
GUI.cpp
#include "GUI.h"
#include <sstream>
using namespace Graph_lib;
void Button::attach(Graph_lib::Window& win)
{
pw = new Fl_Button(loc.x, loc.y, width, height, label.c_str());
pw->callback(reinterpret_cast<Fl_Callback*>(do_it), &win); // pass the window
own = &win;
}
int In_box::get_int()
{
Fl_Input& pi = reference_to<Fl_Input>(pw);
// return atoi(pi.value());
const char* p = pi.value();
if (!isdigit(p[0])) return -999999;
return atoi(p);
}
std::string In_box::get_string()
{
Fl_Input& pi = reference_to<Fl_Input>(pw);
return std::string(pi.value());
}
void In_box::attach(Graph_lib::Window& win)
{
pw = new Fl_Input(loc.x, loc.y, width, height, label.c_str());
own = &win;
}
void In_box::set_text_size(int size)
{
reference_to<Fl_Input>(pw).textsize(size);
}
void In_box::set_label_size(int size)
{
reference_to<Fl_Input>(pw).labelsize(size);
}
void In_box::empty() // empties the input field
{
reference_to<Fl_Input>(pw).value("");
}
void Out_box::put(int i)
{
Fl_Output& po = reference_to<Fl_Output>(pw);
std::stringstream ss;
ss << i;
po.value(ss.str().c_str());
}
void Out_box::put(const std::string& s)
{
reference_to<Fl_Output>(pw).value(s.c_str());
}
void Out_box::attach(Graph_lib::Window& win)
{
pw = new Fl_Output(loc.x, loc.y, width, height, label.c_str());
own = &win;
}
void Out_box::set_text_size(int size)
{
reference_to<Fl_Output>(pw).textsize(size);
}
void Out_box::set_label_size(int size)
{
reference_to<Fl_Output>(pw).labelsize(size);
}
int Menu::attach(Button& b)
{
b.width = width;
b.height = height;
switch (k) {
case horizontal:
b.loc = Point(loc.x + offset, loc.y);
offset += b.width;
break;
case vertical:
b.loc = Point(loc.x, loc.y + offset);
offset += b.height;
break;
}
selection.push_back(&b);
return int(selection.size() - 1);
}
int Menu::attach(Button* p)
{
// owned.push_back(p);
return attach(*p);
}
Window.h
#include "FL/fl_draw.H"
#include "FL/Enumerations.H"
#include "Fl/Fl_JPEG_Image.H"
#include "Fl/Fl_GIF_Image.H"
#include "Point.h"
#include <string>
#include <vector>
namespace Graph_lib {
class Shape; // "forward declare" Shape
class Widget;
class Window : public Fl_Window {
public:
Window(int w, int h, const std::string& title); // let the system pick the location
Window(Point xy, int w, int h, const std::string& title); // top left corner in xy
virtual ~Window() { }
int x_max() const { return w; }
int y_max() const { return h; }
void resize(int ww, int hh) { w = ww, h = hh; size(ww, hh); }
void set_label(const std::string& s) { label(s.c_str()); }
void attach(Shape& s);
void attach(Widget& w);
void detach(Shape& s); // remove s from shapes
void detach(Widget& w); // remove w from window (deactivate callbacks)
void put_on_top(Shape& p); // put p on top of other shapes
protected:
void draw();
private:
std::vector<Shape*> shapes; // shapes attached to window
int w, h; // window size
void init();
};
int gui_main(); // invoke GUI library's main event loop
inline int x_max() { return Fl::w(); } // width of screen in pixels
inline int y_max() { return Fl::h(); } // height of screen in pixels
}
#endif
Window.cpp
#include "Window.h"
#include "Graph.h"
#include "GUI.h"
namespace Graph_lib {
Window::Window(int ww, int hh, const std::string& title)
:Fl_Window(ww, hh, title.c_str()), w(ww), h(hh)
{
init();
}
Window::Window(Point xy, int ww, int hh, const std::string& title)
: Fl_Window(xy.x, xy.y, ww, hh, title.c_str()), w(ww), h(hh)
{
init();
}
void Window::init()
{
resizable(this);
show();
}
//----------------------------------------------------
void Window::draw()
{
Fl_Window::draw();
for (unsigned int i = 0; i<shapes.size(); ++i) shapes[i]->draw();
}
void Window::attach(Widget& w)
{
begin(); // FLTK: begin attaching new Fl_Widgets to this window
w.attach(*this); // let the Widget create its Fl_Widgets
end(); // FLTK: stop attaching new Fl_Widgets to this window
}
void Window::detach(Widget& b)
{
b.hide();
}
void Window::attach(Shape& s)
{
shapes.push_back(&s);
// s.attached = this;
}
void Window::detach(Shape& s)
{
for (unsigned int i = shapes.size(); 0<i; --i) // guess last attached will be first released
if (shapes[i - 1] == &s)
shapes.erase(shapes.begin() + (i - 1));//&shapes[i-1]);
}
void Window::put_on_top(Shape& p) {
for (auto i = 0; i<shapes.size(); ++i) {
if (&p == shapes[i]) {
for (++i; i<shapes.size(); ++i)
shapes[i - 1] = shapes[i];
shapes[shapes.size() - 1] = &p;
return;
}
}
}
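The element-shifting logic in Window::put_on_top is a plain "move element to the back" rotation. The following Python sketch (hypothetical, not part of the library; Python is used only for brevity) mirrors the same index arithmetic:

```python
def put_on_top(shapes, p):
    # mirror of Window::put_on_top: find p, shift every later
    # element down one slot, then place p at the end
    for i in range(len(shapes)):
        if shapes[i] == p:
            for j in range(i + 1, len(shapes)):
                shapes[j - 1] = shapes[j]
            shapes[-1] = p
            return

xs = ["a", "b", "c", "d"]
put_on_top(xs, "b")  # relative order of the others is preserved
```

In Python the same effect is achieved by `shapes.remove(p); shapes.append(p)`, but the explicit loop matches the C++ shown above.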
int gui_main() { return Fl::run(); }
} // Graph
Graph.h/cpp
Due to the character limit per post I can't post Graph.h/cpp in full. You can find them here if necessary to look into: http://www.stroustrup.com/Programming/PPP2code/
Answer: Classes
There are a few things you can do to simplify:
std::vector<std::pair<Word, Occurences>> most_frequent_words(const std::map<Word, Occurences>& words_with_occurences);
std::vector<Word> longest_words(const std::map<Word, Occurences>& words_with_occurences);
std::vector<Word> shortest_words(const std::map<Word, Occurences>& words_with_occurences);
First, I want to pay attention to:
std::vector<std::pair<Word, Occurences>> most_frequent_words(const std::map<Word, Occurences>& words_with_occurences);
in particular. Are you trying to return a list of the most frequent words? Or are you trying to return the most frequent words and their frequency? I would consider separating into two functions: a function that, given a particular word, retrieves the number of occurrences of that word, and one that simply gets a list of the most frequent words.
There are some caveats to this (regarding efficiency). Now, let's think about something more important. You always pass: const std::map<Word, Occurences>& words_with_occurences into each function. Seems kind of repetitive.
The initial temptation is to create a global variable called words_with_occurences that most_frequent_words, longest_words and shortest_words use instead of the parameter you have suggested, but as it has been said multiple times, global variables are bad. Class members are not though!
I am rather surprised you have not used a class, so I am uncertain if you have any exposure to classes. I am not entirely sure what a good C++ tutorial for classes is but maybe someone else can help with that.
Once you acquire a basic understanding of classes the outline is as follows:
Create a class; call it something like Ngram. This class is responsible for handling data related to word count and frequency.
In the constructor, create a words_with_occurrences member that can later be referred to by other methods.
Now you can define most_frequent_words, longest_words and shortest_words without the words_with_occurrences parameter you have supplied.
Any Ngram object should have an initialization parameter that specifies the data source to generate words_with_occurrences from.
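To make the outline concrete, here is a hypothetical sketch of such a class. Python is used here purely for brevity (the review itself is C++); in C++ the words_with_occurrences member would be the std::map shown above, set up once in the constructor. The method bodies are my own illustrations, not code from the question:

```python
from collections import Counter

class Ngram:
    """Handles data related to word count and frequency for one text."""

    def __init__(self, text):
        # words_with_occurrences becomes a member, set up once,
        # instead of a parameter passed to every function
        self.words_with_occurrences = Counter(text.split())

    def most_frequent_words(self, k=1):
        # list of words only; frequencies are a separate concern
        return [w for w, _ in self.words_with_occurrences.most_common(k)]

    def longest_words(self):
        longest = max(map(len, self.words_with_occurrences))
        return [w for w in self.words_with_occurrences if len(w) == longest]

    def shortest_words(self):
        shortest = min(map(len, self.words_with_occurrences))
        return [w for w in self.words_with_occurrences if len(w) == shortest]

ng = Ngram("the cat sat on the mat")
```

Note how each method now only needs self; the repeated parameter is gone.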
FLUID
Creating a GUI can be annoying, there are often multiple ways to assist with the creation of a GUI. Often, even if a GUI is built on top of a language, there is another language/language extension/tool that helps actually create the GUI.
For instance, when developing an iPhone app you have Swift or Objective-C as the "underlying language" and a tool like Interface Builder. Android has a similar set of tools. Developing on the web, roughly speaking, you have HTML for structuring the contents of the page, CSS for styling, and JavaScript as the "underlying language".
It is a little difficult to express exactly what I mean by "underlying language". In the case of HTML/CSS/JavaScript, HTML and CSS are not used for heavy computation, nor are they often used for dynamically generating content. JavaScript, on the other hand, is.
In the case of FLTK, FLUID takes a similar role to (at least) HTML and can, in many cases, be used instead of C++ to describe the static parts of your user interface. The parts of your GUI that are static (don't change) should probably be designed with FLUID; it will vastly simplify much of your code.
"domain": "codereview.stackexchange",
"id": 31882,
"tags": "c++, c++17, fltk"
} |
Insertion sort implementation in Python | Question: I came up with a small optimization while practice coding insertion sort in Python. It's mostly about not accessing the input array too much and letting the hardware do element shifts instead of swapping manually. Could someone comment on my coding style as well as the optimization idea?
Original Version
# this version of insertion sort is half as fast as selection sort on my machine
def runslow(input):
    for i in range(1, len(input)):
        j = i
        # list being accessed twice in this comparison
        while j > 0 and input[j] < input[j-1]:
            # unnecessary writes creating slowdown
            input[j-1], input[j] = input[j], input[j-1]
            j -= 1
Improved Version
# the optimized insertion sort
def run(input):
    for i in range(1, len(input)):
        j = i
        # extract the value being compared
        saved = input[i]
        # now list isn't being accessed twice unnecessarily
        while j > 0 and input[j-1] > saved:
            # don't actually perform the intermediate swaps,
            # just let the cursor settle into the final position
            # input[j - 1], input[j] = input[j], input[j - 1]
            j -= 1
        # now insert it and let the hardware do the actual shifting of the values in the list
        input.insert(j, input.pop(i))
    return input
Answer: TL;DR
Be as specific as you can:
When naming functions,
And when setting up loops.
Details:
Here's what I've noticed in your code, line-by-line:
def run(input)
Avoid common names for your functions. While it's not always possible to prevent having the same name as functions from other modules (there are methods to deal with that), naming a function run is asking for trouble. For example, a commonly used command in Python test-suites is also run.
Your function name should match as closely as possible the specific goal of the function. e.g. maybe call it insertion_sort.
for i in range(1,len(input)):
Try to use the most specific iterators possible.
The general construction for i in range(...) gets used a lot where it doesn't have to be. Any time you see it, there is usually a better way to iterate, whether using a list comprehension, using itertools, or other more specific methods. In your case, replacing for i in range(...) with direct iteration over input helps a lot (as in my example below).
The next two lines, j = i and saved = input[i], are only there to convert i from the non-specific for i in range(...) to the values you actually want. They can be eliminated by changing how the loop is set up.
while(j > 0 and input[j-1] > saved):
The same goes for your while loop. You will never need an i += 1 (or j -= 1) to increment a loop in Python. It's always best to think of what information the inside of the loop needs, and set up an iterator, list comprehension, or for loop that immediately provides that information.
Here is a modified version of your code, to show what I mean about being specific with loops:
def insertion_sort(input):
    for i, n in enumerate(input):
        ins_pos = 0  # default location to insert n
        for j, m in reversed(list(enumerate(input[:i]))):
            if m <= n:
                ins_pos = j + 1  # insert n one position after the first number (from the right) that is not larger
                break
        input.insert(ins_pos, input.pop(i))
    return input
Once inside the loop, you only need:
i and j, the indices of the values to pop and insert.
n and m, the values to compare.
Once you know what the inside of the loop needs, set up the loop statements (e.g. in my example, for i, n in enumerate(input), and for j, m in reversed(list(enumerate(input[:i])))) so they return only what is needed. This eliminates the extra lines in your code. Also, after this modification, the purpose of each loop is more clear.
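A quick way to gain confidence in a restructured loop like this is a property test against the built-in sort. The standalone sketch below (my own, not from the original answer) scans the sorted prefix from the right and breaks on the first element that is not larger than n, which is the comparison the insert-after position needs, then checks the result against sorted() on random inputs:

```python
import random

def insertion_sort(seq):
    for i, n in enumerate(seq):
        ins_pos = 0  # default: insert at the front
        for j, m in reversed(list(enumerate(seq[:i]))):
            if m <= n:  # first element (from the right) not larger than n
                ins_pos = j + 1
                break
        seq.insert(ins_pos, seq.pop(i))
    return seq

random.seed(0)
for _ in range(200):
    data = [random.randrange(50) for _ in range(random.randrange(20))]
    assert insertion_sort(list(data)) == sorted(data)
```

Using `m <= n` rather than `m < n` keeps equal elements in their original order, so the sort stays stable.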
input.insert(j,input.pop(i))
PEP8 specifies spaces between arguments in function calls.
And a note on the last line:
Try not to modify a list that is the source of your iterator.
In general, it's a bad idea to modify the same list that is being iterated over. Your example only works because the pop and insert functions get applied to the parts of the input list that have already been read through by the iterator. If you need to do it in future, be aware of why this works, and how other examples of the same practice might go wrong. I've assumed that part of your spec is to modify the list in place, so I've left it how it is.
Performance:
Nice work with eliminating the extra slice steps. Re-writing the list with each step in the inner while loop is indeed more expensive than just comparing values and moving on. However, pop and insert within a list are both \$\mathcal{O} (n)\$, not \$\mathcal{O} (1)\$, sad but true. There may be a small speed-up of a constant factor, doing away with the swaps, but both your versions of insertion sort are ultimately limited by the time complexity of the algorithm itself, which is \$\mathcal{O} (n^2)\$, where n is the length of the list.
This line:
saved = input[i]
Won't change anything as, in Python, this sets up saved as a pointer to the value in input[i], rather than making a copy and storing it in saved. You could explicitly force a copy, but this would probably be self-defeating, as reading an index from an array is no more expensive than accessing any other int. | {
"domain": "codereview.stackexchange",
"id": 24386,
"tags": "python, performance, algorithm, insertion-sort"
} |
Is pin column-beam joint always have less moment, more deflection compared to fixed joint? | Question: I am referring to the answer here:
It seems that on a simple structure
(The diagrams on the left the images below have fully-fixed connections, while on the right the columns are pinned connections to the beams.)
A: The deflection on beam is bigger in pinned model, but the deflection on beam/column joint is bigger in fixed model
B: The moment on column in pinned model is 0
C: The axial load on column in pinned model is bigger, because the moment is being transformed into axial load (?? is this reasoning true??)
My questions are:
These conclusions are true for this particular simple model, but are these three conclusions always true on any general model?
And why? Can it be explained in terms of loading flow and statics? Or this is just FEM behavior that we can't explain more intuitively?
Answer: Let me start by answering your second question: models such as this one, which involve only one-dimensional beam elements, are 100% analytical and can therefore always in theory be understood intuitively. There is no "FEM behavior" for such models. Sometimes the models may get complex with lots of bars and whatnot, which may make "intuitive explanations" more difficult, but the result will always be analytical.
Let's start by looking at statement B:
Now, let's take a look at beam 4 in your model (the left-most beam); more specifically, its bending moment diagram. As you've noticed, the pinned model displays zero moment at the left-most column. This is the very definition of a hinge and is expected behavior. The moment on the beam at the central column is non-zero because the beam itself is not hinged, but the central column is hinged and therefore displays zero moment at the node.
Now, on to statement A, starting by looking at the beam's deflection:
Let's keep looking at the bending moment diagram. The beam equation tells us that
$$\dfrac{\partial^2 }{\partial x^2}\left(EI\frac{\partial^2 w}{\partial x^2}\right) = q$$
which also tells us that
$$EI\frac{\partial^2 w}{\partial x^2} = M$$
that is: bending moment (divided by stiffness $EI$) is the second derivative of deflection. From calculus, we know that the second derivative of any function describes the function's curvature. So bending moment describes the deflection's curvature, which describes the "acceleration" with which the beam's tangent (the first derivative of deflection, and therefore bending moment's integral) changes.
So, the more balanced a bending moment diagram is between positive and negative bending moment, the more the total "acceleration" cancels itself out, implying smaller tangent changes, and therefore smaller deflections. So yes, a fixed node will always lead to smaller deflections.
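The conclusion of statement A can also be checked against the classic closed-form results for a single uniformly loaded span (standard textbook formulas, not taken from the model above; the numbers are illustrative assumptions): the fixed-fixed midspan deflection is exactly one fifth of the pinned-pinned one.

```python
def midspan_deflection_pinned(q, L, E, I):
    # simply supported beam under uniform load q: 5*q*L^4 / (384*E*I)
    return 5.0 * q * L**4 / (384.0 * E * I)

def midspan_deflection_fixed(q, L, E, I):
    # fixed-fixed beam under uniform load q: q*L^4 / (384*E*I)
    return q * L**4 / (384.0 * E * I)

# illustrative numbers: 10 kN/m over a 6 m span, steel E, I = 8e-5 m^4
q, L, E, I = 10e3, 6.0, 200e9, 8e-5
d_pin = midspan_deflection_pinned(q, L, E, I)
d_fix = midspan_deflection_fixed(q, L, E, I)
```

The factor of 5 reflects exactly the cancellation argument above: fixing the ends introduces negative end moments that balance the positive midspan moment.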
To answer the matter of the node's displacement, we first need to explain statement C. For that we need to look at beam 4 in isolation. To do so, we need to replace the surrounding beams with elastic supports which describe their stiffness.
The vertical supports' stiffness will be equal to the columns' axial stiffness (the node with the central column will also have a tiny addition due to the other beam's stiffness against imposed transversal displacements)
The horizontal supports' stiffness will be equal to the columns' stiffness against imposed transversal displacements
The rotational supports' stiffness will depend on the boundary conditions. If hinged, then the outer node will have zero stiffness and the central node will have a stiffness equal to the other beam's stiffness against imposed rotations. If fixed, then both nodes will have the columns' stiffness against imposed rotations, adding the other beam's stiffness as well for the central node.
So, basically, the only difference between the hinged and fixed cases is in the rotational stiffness (as would intuitively be expected). This increased stiffness, however, causes the node to pull in a greater proportion of all forces, thereby increasing the axial forces in your outer column and reducing them in the central column in the fixed model.
Returning to the issue of the node's deflections, they are now easy to explain. After all, in the fixed model the column suffers more axial forces, naturally increasing the vertical deflections. But it also suffers bending moment, which generates horizontal deflections as well as a tiny bit of additional vertical deflection. | {
"domain": "engineering.stackexchange",
"id": 967,
"tags": "structural-engineering, structural-analysis, statics"
} |
ros node not working when using launch file | Question:
A similar issue has been addressed in these two questions before: hello node and output roslaunch. But I cannot get my node to work through the launch file. If I run the node like this:
./image_listener_test_node
it works everything is printed correctly. However through the launch file:
<launch>
<node name="image_listener_test" type="image_listener_test_node" pkg="image_listener_test" output="screen"/>
</launch>
Nothing is printed and no error message. Is there something else I am missing?
Originally posted by jtim on ROS Answers with karma: 153 on 2016-07-29
Post score: 0
Answer:
I figured it out. It was sourcing that was the issue: an old build of the node existed in a different place. So when running rosrun or roslaunch they found the old version, and when I executed it directly I had the correct version. Deleting all devel and build folders in my workspace solved it.
Originally posted by jtim with karma: 153 on 2016-08-03
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 25397,
"tags": "ros, roslaunch, roslauch"
} |
Computing expectation value of $|\langle z|C|0^n\rangle|^2$ over Haar random circuit | Question: I am trying to understand the integration on page 4 of this paper. Consider a Haar random circuit $C$ and a fixed basis $z$. Each output probability of a Haar random circuit (given by $|\langle z | C |0^{n} \rangle |^{2}$, for each fixed z) is distributed according to the Porter Thomas distribution, given by
\begin{equation}
\text{PorterThomas}(x) = 2^{n} e^{-2^{n} x}.
\end{equation}
The paper claims that
\begin{equation}
\mathbb{E}[|\langle z | C |0^{n} \rangle |^{2}] = \int_{0}^{\infty} \frac{x}{2^{n}} x e^{-x} \,dx = \frac{2}{2^{n}}.
\end{equation}
However, I do not understand the integration at all. Shouldn't the integration instead be
\begin{equation}
\mathbb{E}[|\langle z | C |0^{n} \rangle |^{2}] = \int_{0}^{\infty} x ~2^{n} e^{-2^{n} x} \,dx = \frac{1}{2^{n}},
\end{equation}
where I am just using the definition of the expected value and plugging in the pdf for Porter Thomas. However, this gives me a very different answer.
Where are all the extra terms coming from and why is the answer $\frac{2}{2^{n}}$?
Answer: The issue that easily leads to confusion is the dual role played by output bitstring probability. It enters the computation of the average in two ways. On one hand, it determines how often one sees different bitstrings. On the other hand, it determines the contribution that each bitstring makes towards the average. In mathematical terms, the output bitstring probability affects both the probability measure of the random variable as well as its value.
To see this, consider the following example procedure that yields $\mathbb{E}[|\langle z | C |0^{n} \rangle |^{2}]$:
Run the quantum circuit on a noiseless quantum computer or simulator and obtain the output bitstring $z$.
Simulate the quantum circuit on a classical computer to compute the value of the probability $|\langle z | C |0^{n} \rangle |^{2}$.
Repeat steps 1 and 2 to obtain the average of probabilities computed in step 2 across many output bistrings sampled in step 1.
In step 1, the output bitstring probability affects the bitstrings you see - you see the more likely bitstrings more often. In step 2, it affects the value you add up in the computation of the average - the more likely bitstrings contribute more towards the average.
We can make this reasoning more rigorous (following section IV C of QS paper supplement). The fact that the distribution of output bitstring probabilities is Porter-Thomas means that the fraction of output bitstrings with probability in $[p, p+dp]$ is:
$$
Pr(p) \, dp \approx 2^n e^{-2^np} dp.
$$
Since there are $2^n$ possible output bitstrings, the number of bitstrings with probability in $[p, p+dp]$ is
$$
N(p) \, dp \approx 4^n e^{-2^np} dp.
$$
Therefore, the probability that in step 1 above we see a bitstring whose probability lies in $[p, p+dp]$ is
$$
f(p) \, dp \approx p \, 4^n e^{-2^np} dp.
$$
Note that $f(p)$ is the probability density function for the output bitstring probability. Therefore, the average output bitstring probability is
$$
\mathbb{E}[|\langle z | C |0^{n} \rangle |^{2}] = \int_0^1 p f(p) dp \approx \int_0^1 p^2 4^n e^{-2^np} dp \approx 2/2^n
$$
as expected.
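The dual role of the probability can also be demonstrated numerically. In this standalone sketch (my own, not from the paper), we draw $2^n$ Porter-Thomas probabilities and compare the plain average over bitstrings with the average weighted by sampling probability:

```python
import random

n = 12
N = 2 ** n
rng = random.Random(1)

# Porter-Thomas output probabilities: exponential with mean 1/2^n,
# renormalized so they sum to exactly 1
p = [rng.expovariate(N) for _ in range(N)]
s = sum(p)
p = [x / s for x in p]

# averaging over all bitstrings with equal weight gives 1/2^n
uniform_avg = sum(p) / N

# sampling bitstrings with probability p and averaging the sampled
# probabilities weights each bitstring's contribution by p itself
weighted_avg = sum(x * x for x in p)
```

The weighted average comes out close to $2/2^n$, twice the uniform average, matching the integral above.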
You may object that $f(p)$ defined above is not correctly normalized. This is due to the fact that the exponential formula is an approximate form of the Porter-Thomas distribution which is in fact a Beta distribution
$$
(2^n - 1) (1 - p)^{2^n - 2} \approx 2^n e^{-2^np}.
$$
In practice, this approximation is very good for $n$ above a dozen or so.
For completeness, note that if you were running the quantum circuits on a noisy quantum computer, the distribution in step 1 would be different and the resulting average would be a number between $1/2^n$ and $2/2^n$ according to the fidelity obtained in the experiment. This is the key idea behind linear cross-entropy benchmarking.
"domain": "quantumcomputing.stackexchange",
"id": 3119,
"tags": "quantum-gate, quantum-state, probability, random-quantum-circuit, haar-distribution"
} |
When I heat up a balloon, does the air inside increase in pressure as well as volume? | Question: When I heat up a balloon, does the air inside the balloon increase in pressure as well as volume? I thought pressure and volume were inversely proportional? Or does pressure and volume increase as temperature increases?
Answer: If the balloon is closed, then yes, both volume and pressure will increase when the gas inside is heated. Let's look at two simpler cases first.
If the gas were completely free to expand against ambient pressure (say, inside of a container sealed with a freely moving piston, with no friction), then the heated gas would expand until it created as much force per area (gas pressure) as the force per area acting on it (ambient pressure), so that the forces cancel out and the piston stops moving. Here, a temperature increase in the gas would translate solely to a volume increase.
If the gas were confined in a perfectly rigid box, then an increase in gas temperature would cause the molecules inside to bump harder against the inner surfaces, but to no avail, as the walls do not budge and the box stays exactly the same size. Here, a temperature increase in the gas would translate solely to a pressure increase.
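Both limiting cases follow directly from the ideal gas law $PV = nRT$ with $n$ fixed. A small sketch with illustrative numbers of my own (not from the question):

```python
def isochoric_pressure(p1, t1, t2):
    # rigid box: V fixed, so P2 = P1 * T2 / T1 (Gay-Lussac's law)
    return p1 * t2 / t1

def isobaric_volume(v1, t1, t2):
    # free piston: P fixed, so V2 = V1 * T2 / T1 (Charles's law)
    return v1 * t2 / t1

# heat air at 1 atm from 20 C to 40 C (temperatures must be in kelvin)
p2 = isochoric_pressure(101325.0, 293.15, 313.15)   # rigid container
v2 = isobaric_volume(0.001, 293.15, 313.15)         # 1 L behind a free piston
```

A closed balloon sits between these extremes, so heating it raises both P and V by smaller factors than either pure case.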
In a balloon, the gas is free to expand, but not completely free. In other words, it's a situation somewhere between the two described above. The balloon skin is an elastic which pulls on itself, creating a force vector pointing towards the interior of the balloon, and the more the skin is stretched, the stronger the force becomes. Now the gas inside the balloon has to create enough pressure to compensate for both the ambient pressure, and the elastic force trying to pull the balloon skin inwards. This means that after heating, the gas inside a balloon will expand since the balloon is not perfectly rigid, but the equilibrium pressure of the gas inside the balloon will be higher than before because the gas is pressing against a more tightly stretched balloon skin.
This should be provable experimentally without much difficulty. Take a rubber balloon, preferably one with as thin of a membrane as you can get, and open its mouth to the atmosphere. Then, clamp it shut without blowing any air inside (in reality, you might have to blow some air in to unstick the rubber walls, but then let all the excess air out). The air inside will have a pressure exactly equal to ambient pressure, because the elastic is not being stretched. Now cover the balloon completely in warm water for a few minutes, and it should inflate slightly. Finally, remove the balloon from the water and quickly perforate it with a sharp object. You might be able to hear a small pop, and feel a rush of air. Both are an indication that pressure inside the heated balloon was higher than ambient pressure.
Notes:
Here is a very related Physics.SE question - Why is the pressure inside a soap bubble higher than outside? which deals with bubbles rather than balloons. The diagram and equations are applicable in both cases.
Evidently, taking into account the elasticity of the balloon requires a more subtle treatment of the problem. Take a look at Floris' interesting input in the comments. It seems that even an idealized balloon starts off acting as a quasi-rigid container. After reaching a maximum pressure, the balloon starts expanding, and from there the walls get weaker as expansion continues. This means that for a range of temperatures higher than some critical value, the pressure of the gas actually decreases as the temperature increases. The interior pressure of the balloon will still always be greater than ambient pressure, though. | {
"domain": "chemistry.stackexchange",
"id": 3528,
"tags": "physical-chemistry, thermodynamics, gas-laws"
} |
Epicenter of earthquake at sea - is it a point on ocean surface or sea bed? | Question: When an earthquakes' focus is below the sea bed, is the epicenter given as a point on the oceans' surface at mean sea level, or as a point on the sea bed?
There is some ambiguity in the term 'earths surface' here that is causing confusion in the English Language Learners sister site, which I'm hoping the experts here can clear up.
I'm using the definition of epicenter found here.
Is the epicenter always directly above the hypocenter?
Answer: Technically the depth is the distance below the ocean floor, but bear in mind that unless there happens to be a cluster of seismometers close to the earthquake (unlikely in the ocean environment), the accuracy of depth estimates is only approximate. If you look at the table of earthquake depths in, for example, the Tonga Trench, you will see a few really deep earthquakes and many times more 'shallow earthquakes' at a nominal depth of 10 km - meaning 'shallow-ish'.
"domain": "earthscience.stackexchange",
"id": 791,
"tags": "earthquakes"
} |
Why doesn't Sanger (fluorescent) DNA sequencing double count nucleotides? | Question: My understanding is that PCR is carried out until a fluorescent nucleotide halts replication. The segments of DNA are fed through the capillary tube based on size, sifting through the segments from smallest to largest; subsequently, the sequence is read from one end to the other.
I'm confused as to how we know that each nucleotide in the sequence has been accounted for, or if a nucleotide has been counted twice. If two fragments of the same size, ending with the same ddNTP, pass through the capillary tube at slightly different times, how do we know if that specific nucleotide hasn't been double-counted?
Answer: In chain-termination sequencing, a population of molecules is detected as opposed to a single one. The readout looks like this:
[ http://seqcore.brcf.med.umich.edu/doc/dnaseq/trouble/badseq.html ]
The peaks represent intensity of the four different fluorophores at the detector. Every fragment that is terminated at the same spot will have the same fluorophore and the same length. Because capillary gel electrophoresis separates molecules by size, these fragments will pass through at approximately the same time. "Approximately the same time" is why you see broad-ish and overlapping peaks on the readout as opposed to sharp and distinct lines.
Ideally, the readout will appear like the left half of the image, where there's minimal overlap between peaks. However, often you will see overlapping peaks, as in the right half of the image. The link above explains many reasons for why this occurs. Whether or not a peak is correctly assigned is determined by Phred scores, which is a statistical analysis of the shape and resolution of each peak. | {
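For reference, the Phred scale mentioned above maps a base-call error probability to a quality score via $Q = -10 \log_{10}(P_{error})$. A quick sketch of the mapping (my own illustration, not from the original answer):

```python
import math

def phred_quality(p_error):
    # Q = -10 * log10(P_error): Q20 means 1% error, Q30 means 0.1% error
    return -10.0 * math.log10(p_error)

def error_probability(q):
    # inverse mapping back from quality score to error probability
    return 10.0 ** (-q / 10.0)
```

So a cleanly resolved peak earning Q30 or better corresponds to a less than 1-in-1000 chance the base was miscalled.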
"domain": "biology.stackexchange",
"id": 3258,
"tags": "dna-sequencing"
} |
octomap Ubuntu dependencies | Question:
I'm doing a full source build of ROS Kinetic w/ MoveIt and one of the "system" dependencies is the ROS-packaged libfcl-0.5-dev, which depends on ros-kinetic-octomap, which depends on ros-kinetic-catkin / python-catkin-pkg. I'm not super familiar with ROS packaging, so I have two questions:
Why does ros-kinetic-octomap depend on ros-kinetic-catkin? Looking at the release repository, it seems that somewhere between upstream and the final .deb a package.xml gets added with a run_depend on catkin. Why would catkin be needed at runtime?
Why is octomap even a ROS package (ie, with package.xml)? (as opposed to some other 3rd party libraries that are simply packaged and distributed)
I'm asking partly because this is installing a second catkin alongside my built-from-source version, but also to learn.
Originally posted by mrjogo on ROS Answers with karma: 164 on 2017-01-06
Post score: 1
Original comments
Comment by 130s on 2017-01-07:
Interesting. I posted a comment on the devel repo hoping the authors would respond.
Answer:
Why does ros-kinetic-octomap depend on ros-kinetic-catkin? Looking at the release repository, it seems that somewhere between upstream and the final .deb a package.xml gets added with a run_depend on catkin. Why would catkin be needed at runtime?
Got a response from the author @AHornung; it's a way recommended in REP136 for 3rd party release.
UPDATE 20170124; And the reason why REP-0136 requires catkin for 3rd party pkg is this (handling setup.sh).
Why is octomap even a ROS package (ie, with package.xml)? (as opposed to some other 3rd party libraries that are simply packaged and distributed)
I had the same question. If you search within the OctoMap code as of today, there's no ROS dependency there, so I assume it's just using the ROS buildfarm as a release infrastructure so that the debian package name is ros-kinetic-octomap.
Originally posted by 130s with karma: 10937 on 2017-01-09
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 26659,
"tags": "ros, octomap, release, dependencies"
} |
What is the mechanism by which myelination reduces the capacitance of the axon membrane? | Question: There are two mechanisms that have been proposed to me.
1) Layering of Schwann cell membrane with conducting fluid between the layers is analogous to several capacitors in series. Since capacitance in series add by the reciprocal rule (as resistors do in parallel), this reduces the total capacitance.
2) The myelin increases the distance between the 'plates' of the capacitor. For parallel plate capacitors $ C = \epsilon A/d$ where d = distance between the plates. Thus increasing the distance reduces the capacitance.
Which of these explanations best applies to myelin, or is it in fact a mixture of both?
Answer: Circuit analogies don't 100% apply to myelin because membranes have complex electrical properties, but both of those explanations work and they are in fact essentially interchangeable: Take a membrane with distance d across the membrane and capacitance c. Then we add some myelin to get a new capacitance C at a new distance D.
If you 4X the distance between plates (D = d * 4), C=c/4 (from the formula you posted as (2) ); if you add 3 extra plates (so now you have a total of 4 plates), C=1/(1/c + 1/c + 1/c + 1/c)=c/4.
Importantly, myelin also increases the membrane resistance, and because myelin is typically very thick compared to a normal membrane (~10nm for one layer vs. 500-2500nm for myelin), you can almost consider myelination to increase resistance to infinity (compared to the axial resistance of cytoplasm) and the capacitance to zero.
See this page for some more info.
Note that the reason these explanations are interchangeable is that there is effectively no distance between the added plates in series and no difference in capacitance for each individual capacitor/piece of membrane (for example, see this page). | {
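The interchangeability of the two explanations can be sanity-checked numerically. The sketch below is mine, not part of the original answer; the base capacitance and the number of stacked membranes are arbitrary illustrative values:

```python
# Compare the two explanations for why myelin lowers membrane capacitance.
# Units are arbitrary; what matters is that both give the same scaling.
c = 1.0   # capacitance of a single membrane
n = 4     # total stacked membranes (1 original + 3 myelin wraps)

# Explanation 1: n equal capacitors in series add by the reciprocal rule.
C_series = 1.0 / sum(1.0 / c for _ in range(n))

# Explanation 2: C = eps*A/d, so multiplying the plate separation d by n
# divides the capacitance by n.
C_spacing = c / n

print(C_series, C_spacing)  # both 0.25
```

Both routes give C = c/4, matching the D = 4d example in the answer.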
"domain": "biology.stackexchange",
"id": 6215,
"tags": "neuroscience, neurophysiology, action-potential"
} |
Twitter client portfolio website | Question: I'm creating some sort of "portfolio" website for my self (a ton of placeholder content right now...) and I was wondering if I could improve the semantics of the HTML5 any further, especially the article stuff.
I'm not completely sure if I should use section elements inside it. I read through a number of HTML5 "guides" and a few of the element specs, but they often have different positions on this.
I think using sections would add to the semantics since the slides are a different "part/section" of the "article".
Don't rant about the CSS; it's generated by LESS.
The site can be viewed here.
Manually formatted HTML
<!DOCTYPE html>
<html>
<head>
<title>A Python Twitter Client | BonsaiDen</title>
<link rel="shortcut icon" href="/images/favicon.ico">
<link rel="stylesheet" href="http://fonts.googleapis.com/css?family=Ubuntu:regular,bold">
<link rel="stylesheet" href="/stylesheets/style.css">
// will get copied to a local file sooner or later
<!--[if lt IE 9]>
<script src="http://html5shim.googlecode.com/svn/trunk/html5.js"></script>
<![endif]-->
</head>
<body>
// the content
<article>
// quite some divs here
<div>
<div id="content"> // maybe use section?
// gets replaced via ajax
<header data-page="/atarashii">
<h1 class="small">A Python Twitter Client</h1>
<div class="external">
<a href="https://github.com/BonsaiDen/Atarashii">Go to Project ➤</a>
</div>
<div class="clear"></div> // always hate these clear things...
</header>
<div>
<p>A Twitter Client for GNOME...</p>
</div>
// ajax end
</div>
<div id="slideshow"> // maybe use section?
<header>
<h1 id="slideTitle">[SlideShowItem Title]</h1>
</header>
<div id="slideContent">
<p>[Slideshow Image]</p>
</div>
</div>
<div class="clear"></div>
</div>
</article>
<header>
// navigation, surprise!
<nav>
// really happy with this
<ul>
<li class="left">
<h1>Projects</h1>
<ul>
<li><a href="/garden">JavaScript Garden</a></li>
<li><a href="/shooter">NodeGame: Shooter</a></li>
<li><a href="/atarashii" class="active">Atarashii</a></li>
</ul>
</li>
<li>
<h1>Code</h1>
<ul>
<li><a href="/neko">neko.js</a></li>
<li><a href="/bison">BiSON.js</a></li>
</ul>
</li>
<li>
<h1>Web</h1>
<ul>
<li><a href="/stackoverflow">Stack Overflow</a></li>
<li><a href="/github">GitHub</a></li>
<li><a href="/website">The Website</a></li>
</ul>
</li>
<li class="right">
<h1>ME</h1>
<ul class="info">
<li><a href="/me">Ivo Wetzel</a></li>
<li class="simple">
// div div div :/
<div>
<div id="picture">
<img src="images/snufkin.png" alt="Ivo Wetzel"/>
<a href="/me"></a>
</div>
<ul>
<li class="first"><a href="http://twitter.com/BonsaiDen">Twitter</a></li>
<li><a href="mailto: ivo.wetzel@googlemail.com">E-Mail</a></li>
</ul>
<div class="clear"></div>
</div>
</li>
</ul>
</li>
</ul>
</nav>
</header>
// no real content so far but a background image thingy
<footer></footer>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js"></script>
<script src="/javascripts/page.js"></script>
</body>
</html>
Answer: Looking through the code, one of the no-brainers that jumps out at me immediately is the separation of the profile anchor and image. The overlay effect is important, obviously, but this can be achieved much more cleanly with a bit of CSS elbow grease:
a {
background: #fff;
display: block;
height: 128px;
width: 128px;
}
a img:hover {
opacity: 0.9;
}
See: http://www.jsfiddle.net/yijiang/Tv7AP/
Looking at the code, it seems like the only reason why you have a div#navigation > a structure is for the background 'shadow'. If that is the case, you can easily get rid of the outer div by using either box-shadow or a 1px wide background image repeated along the y axis with some padding:
nav {
-moz-box-shadow: 0 3px 0 rgba(0, 0, 0, 0.3), 0 -3px 0 rgba(0, 0, 0, 0.3);
-webkit-box-shadow: 0 3px 0 rgba(0, 0, 0, 0.3), 0 -3px 0 rgba(0, 0, 0, 0.3);
box-shadow: 0 3px 0 rgba(0, 0, 0, 0.3), 0 -3px 0 rgba(0, 0, 0, 0.3);
}
Additionally, seeing li class="right" makes me slightly sad, but seeing li class="left" makes me sadder still – since the only reason you're using the left class is to avoid double borders (the right class is to give the profile section a bit more space, apparently), you can get by with only one class:
nav ul li {
border-right: 4px solid #052C4F;
}
nav ul li.right {
border-right: 0;
} | {
"domain": "codereview.stackexchange",
"id": 2626,
"tags": "html, twitter"
} |
Magnetic field due to singular gauge transformation | Question: In $SU(2)$ gauge theory over $\mathbb R^3$, consider the following gauge transformation (in spherical polar coordinates)
$$ \Omega=\begin{bmatrix}e^{i\phi}\cos(\theta/2)&\sin(\theta/2)\\-\sin(\theta/2)&e^{-i\phi}\cos(\theta/2)\end{bmatrix}.$$
It is multivalued (aka singular) on the half-infinite line $\theta=0$. Such gauge transformations are said to create magnetic monopoles (see https://www.sciencedirect.com/science/article/abs/pii/0550321378901530).
To test this I'm doing the following. I start with the vacuum configuration $\vec A=0$, where $\vec A\equiv \vec A_a\sigma_a$ is the gauge field 3-vector (each component being a $2\times 2$ matrix), and I apply the gauge transformation $\Omega$. The gauge field is then $\vec A=i\vec \partial \Omega~\Omega^{-1}$ (I've set the gauge coupling to unity). Doing the calculation, we find that the gauge field is finite everywhere except on the line $\theta=0$, where it is infinite (I can provide the explicit expression if required).
Now I want to compute the magnetic field $B_i= \epsilon_{ijk}F_{jk}$. At $\theta\neq 0$ we can easily see that $\vec B=0$ because we started with the vacuum configuration and applied a gauge transformation. On $\theta=0$ though, since the gauge field is infinite, the magnetic field might be non-zero -- in fact, I suspect it is some kind of delta-function smeared along $\theta=0$. How exactly can I compute the magnetic field on $\theta=0$ to verify this?
Edit: If the group was Abelian, I would compute $\oint \vec A.d\vec{l}=\int \vec B.d\vec S$ for a loop enclosing $\theta=0$. This would tell me if any magnetic flux is enclosed.
Answer: So I used Stokes law to write
\begin{equation}
\int_S \vec B.d\vec S=2\oint_l \vec A.d\vec l+\int_S i\epsilon_{ijk}[A_i,A_j]dS^k
\end{equation}
where $l$ is a loop around $\theta=0$, and $S$ is a surface bounded by $l$. The integrals are well defined despite the singularity. This shows that
\begin{equation}
\vec B=-4\pi \sigma_3 \hat z
\end{equation}
on $\theta=0$, and zero elsewhere. | {
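This result can be checked numerically (my own sketch, not part of the original answer): for a small loop around $\theta=0$ the commutator surface term vanishes with the loop area, so the enclosed flux reduces to $2\oint \vec A.d\vec l$, which we can evaluate by discretizing $\phi$. The loop size, grid resolution, and finite-difference step below are arbitrary choices:

```python
import numpy as np

sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)

def omega(theta, phi):
    """The singular gauge transformation from the question."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[np.exp(1j * phi) * c, s],
                     [-s, np.exp(-1j * phi) * c]])

def loop_integral(theta, n=2000, h=1e-6):
    """oint A . dl = int_0^{2pi} i (dOmega/dphi) Omega^{-1} dphi
    on a circle of constant polar angle theta around the z axis."""
    dphi = 2 * np.pi / n
    total = np.zeros((2, 2), dtype=complex)
    for phi in np.arange(n) * dphi:
        dO = (omega(theta, phi + h) - omega(theta, phi - h)) / (2 * h)
        # Omega is in SU(2), so Omega^{-1} = Omega^dagger
        total += 1j * dO @ omega(theta, phi).conj().T * dphi
    return total

hol = loop_integral(theta=1e-3)
# As the loop shrinks, the flux 2 * oint A . dl approaches -4*pi*sigma3:
print(np.round(2 * hol / (4 * np.pi), 5))
```

For small $\theta$ the matrix printed is very close to $-\sigma_3$, consistent with $\vec B=-4\pi\sigma_3\hat z$ concentrated on the line.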
"domain": "physics.stackexchange",
"id": 93222,
"tags": "quantum-field-theory, gauge-theory, yang-mills, magnetic-monopoles"
} |
Difference between bits and bytes? | Question: What are bits and bytes? How do they differ? What role do these notions play in the computer science world? Why do we talk about 8 bits = 1 byte = 256 states or different patterns? What's the rationale behind it? Why not 4 bits = 1 byte? What is so special about 8? How much info can a bit and a byte hold?
Answer:
What is a bit and byte? How do they differ?
A bit is the smallest useful unit of information. A bit can only have two states, which are commonly called 0 and 1, False and True, or Off and On.
A byte is the smallest individually addressable unit of memory in a computer system.
What role do these notions play in the computer science world?
As I mentioned above, a bit is the smallest useful unit of information. Information is very important in Computer Science: It has been said that Computer Science is really a misnomer, and is akin to calling Astronomy "Telescope Science" – just because computers can be used to investigate information and processes doesn't mean that they are somehow inherent to that science, just like the fact telescopes can be used to investigate planets and stars doesn't mean that telescopes are somehow inherent to astronomy.
There are some languages, for example German, French, and Italian, where the scientific discipline makes no reference to computers at all: in German, it is called Informatik, in French informatique, in Italian informatica – all are neologisms based on information and the Greek suffix -ik. In Spanish, it is called ciencias de la informática (similar to German, French, and Italian) or ciencias de la computación: note the subtle difference to English, it is the science of computation, not the science of computers. Danish uses the terms datalogi (a neologism formed by combining data with the -logi suffix as in geology, meteorology, metrology, etc.) for the stricter sense of the science of information, data, computation, and processes, and informatik for a broader inter-disciplinary view of the effects of "datalogi" on society, politics, humanity, and the broader world in general; what might be called Social Informatics in English.
As you can see, in many languages, there is a clear distinction made between "Informatics" and computers. It is a rather unfortunate accident of history that the language which confuses the two also happens to be the lingua franca for it.
Bytes, on the other hand, are not really playing a role in computer science. They do play a role in the organization of real-world computer systems, though. Accessing memory is an important operation in real-world computer systems, and the way the memory is addressed plays an important role in high-performance low-level code.
Why do we talk about 8 bits = 1 byte = 256 states or different patterns?
We don't. A byte is not 8 bits. A byte is the smallest individually addressable unit of memory in a computer system. In other words: the size of a byte depends on which computer system you are talking about. E.g. the DEC PDP-1, PDP-4, PDP-7, PDP-9, and PDP-15 had 18-bit words consisting of three 6-bit bytes. The DEC PDP-5, PDP-8, PDP-12, and PDP-14 had 12-bit words consisting of two 6-bit bytes. In fact, 6-bit bytes used to be rather common. Other byte sizes that have been used are 1, 4, 7, 9, 10, 12, 18, 20, 36, and generally many different sizes between 1 and 48 bits.
What's the rationale behind it?
There are many different trade-offs to consider when choosing the byte-size of a computer system. Traditionally, a byte had the same size as a character, and many early computers had 6-bit characters so they had 6-bit bytes. But for a DSP designed to process digital audio with a sampling bit depth of 24 bits, having both the word size and the byte size be 24 bits is completely natural.
Why not 4 bits = byte?
There have been computer systems with a byte size of 4 bits. There probably still are such computer systems.
What is so special about 8?
From a theoretical perspective: nothing. From a market perspective: traditionally, bytes were the same size as characters, and ASCII, which has a size of 7 bits, was the first character set and encoding to become widely popular. Even byte sizes, and even more so byte sizes which are a power of 2, are somewhat nicer to handle, so when 7-bit characters became common, 8-bit bytes became common. And once 8-bit bytes were common, 8-bit character sets (such as ISO8859-15 or Windows1252) became common, further cementing 8-bit bytes as the most common byte size.
Many networking protocols are based on 8-bit "chunks" as well, but they usually call them "octets" to make it clear that they are always 8 bit, even if the computer system has a different byte size.
How much info can a bit and a byte hold?
One bit can represent 2 different unique states. A byte can represent $2^{size}$ different unique states. | {
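To make the last point concrete, here is a small illustration of my own (not the answerer's) that counts the distinct patterns a given number of bits can represent by enumerating them:

```python
from itertools import product

def n_states(bits):
    """Count the distinct patterns representable with the given number of bits."""
    return len(list(product((0, 1), repeat=bits)))

print(n_states(1))  # 2   -- a single bit: two states
print(n_states(6))  # 64  -- a 6-bit byte, as on many early machines
print(n_states(8))  # 256 -- the now-common 8-bit byte, 2**8 states
```

The count is of course just $2^{size}$; the enumeration merely shows where that formula comes from.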
"domain": "cs.stackexchange",
"id": 19620,
"tags": "c"
} |
How to interpret the derivative of the Dirac delta potential? | Question: I met a Hamiltonian containing the derivative of the Dirac delta potential:
In order to do it we use a method described in [9]. We define a formal Hamiltonian
$$
\tag{2}\tilde{H}_{abcd}=-\frac{{\rm d}^2}{{\rm d}x^2}+a\delta\left(x\right)+b\delta'\left(x\right)+c\delta\left(x\right)\frac{{\rm d}}{{\rm d}x}+d\delta'\left(x\right)\frac{{\rm d}}{{\rm d}x}
$$
It is surprising to see terms like $b \delta'(x)$, how should one interpret $ \delta'(x)$?
Answer: Take this $\delta '(x)$ and apply in an arbitrary function $f(x)$.
$$
\int_{a}^{b} \delta'(x) f(x)\ \mathrm{d}x = f(x) \delta(x) |_{a}^{b} - \int_{a}^{b} \delta(x) f'(x)\ \mathrm{d}x = -f'(0)
$$
Then $ \delta '(x) \rightarrow -\delta (x) \frac{\mathrm{d}}{\mathrm{d}x}$. | {
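This identity can be checked numerically by replacing $\delta(x)$ with a narrow Gaussian (a "nascent" delta function) and integrating its derivative against a smooth test function; this sketch is mine, and the width and grid below are arbitrary choices. The integral should approach $-f'(0)$:

```python
import numpy as np

eps = 1e-3  # width of the nascent delta (just needs to be small)

def delta_eps(x):
    # Gaussian approximation to delta(x)
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

def delta_eps_prime(x):
    # Its derivative approximates delta'(x)
    return -x / eps**2 * delta_eps(x)

f = np.exp                              # test function, with f'(0) = 1
x = np.linspace(-0.05, 0.05, 200001)
dx = x[1] - x[0]
integral = np.sum(delta_eps_prime(x) * f(x)) * dx

print(integral)                         # approximately -f'(0) = -1
```

Shrinking `eps` further drives the result closer to exactly $-f'(0)$.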
"domain": "physics.stackexchange",
"id": 24377,
"tags": "differentiation, dirac-delta-distributions"
} |
Is there a term for the first moment of mass? | Question: If a stationary object has a rest-mass of say $2.0$kg and is located a distance $3.0$m from a particular point (relative to which it is stationary).
Is there some term to describe the moment of its mass?
Mass moment = mass x perpendicular distance = $6.0$kg.m
Note: I'm not referring to momentum.
Also: Wiki says that moment of inertia is mass x distance squared.
I'm looking for mass x distance.
Answer: To expand on By Symmetry's comment: if you have some mass (density) distribution, $\rho({\bf \vec x})$, to which a force, ${\bf \vec F}$ is applied at the origin, then:
The zeroth (scalar) moment tells you the total mass:
$$ m = \int{\rho({\bf \vec x})d^3x} $$
so the acceleration is:
$$ {\bf \vec a} = {\bf \vec F} / m$$
The 1st (vector) moment tells you the center of mass position (times the mass):
$$ m\,{\bf \vec{x}}_0 = \int{\rho({\bf \vec x}){\bf \vec{x}}\,d^3x} $$
Hence you can compute the torque on the mass distribution:
$$ {\bf \vec{\tau}} = {\bf \vec x}_0 {\bf \times \vec{F}}$$
As you pointed out, the 2nd (tensor) moment is the inertia tensor:
$$ {\bf \overleftrightarrow I} = \int{\rho({\bf \vec x})\left[({\bf \vec x}\cdot{\bf \vec x})\,{\bf I}_2-({\bf \vec x \otimes \vec x})\right] d^3x}$$
which relates the torque and angular acceleration:
$${\bf \vec{\tau}}={\bf \overleftrightarrow I}\dot{\bf \omega} $$ | {
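For a discrete mass distribution the first two moments are easy to compute directly. This numerical illustration is my own sketch; the masses, positions, and force are made-up values:

```python
import numpy as np

masses = np.array([2.0, 1.0, 3.0])          # kg
positions = np.array([[3.0, 0.0, 0.0],      # m, positions of the point masses
                      [0.0, 2.0, 0.0],
                      [1.0, 1.0, 0.0]])
F = np.array([0.0, 0.0, 10.0])              # N, force applied at the origin

m = masses.sum()                            # zeroth moment: total mass
first_moment = (masses[:, None] * positions).sum(axis=0)  # sum of m_i * x_i
x0 = first_moment / m                       # centre of mass

a = F / m                                   # acceleration
tau = np.cross(x0, F)                       # torque about the origin

print(m)             # 6.0
print(first_moment)  # [9. 5. 0.]
print(x0)
print(tau)
```

The "moment of mass" asked about in the question is `first_moment` here: mass times distance, summed over the distribution.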
"domain": "physics.stackexchange",
"id": 47367,
"tags": "mass, terminology, moment-of-inertia, moment"
} |
Conservation of angular momentum - linear velocity | Question: The angular momentum of a single particle is $mr^2 \omega$, and its linear velocity is $\omega r$. Suppose that $m=1$, distance from the rotation axis $r=1$ and $\omega=1$.
What happens when the distance from the axis of rotation is reduced to $r=0.5$? This is analogous to a rotating ice skater that pulls their arms inwards, there is no external force that changes the momentum of particles, so the angular momentum remains constant. According to the formula it means it has to be equal to $1$, and therefore the angular velocity will be 4 times the original angular velocity $\omega$. Moreover the linear speed of the particle will double.
How is that possible? Doesn't it contradict the law that states if there's no external force acting on the particle, its linear momentum should remain constant?
Answer: Angular momentum $\mathbf L$ is given by:
$$\mathbf L = \mathbf r \times \mathbf P$$
Now the torque $\mathbf {\tau} $ acting on an object is given by:
$$\boldsymbol {\tau } = \frac {d \mathbf L}{dt}$$
$$\Rightarrow \boldsymbol {\tau } = \frac {d (\mathbf r \times \mathbf P) }{dt}$$
$$\Rightarrow \boldsymbol {\tau } = \mathbf r \times \mathbf F + \mathbf v \times \mathbf P$$
Since $\mathbf v $ and $\mathbf P$ are in the same direction therefore
$$\mathbf v \times \mathbf P = 0$$
i.e.,
$$\Rightarrow \boldsymbol {\tau } = \mathbf r \times \mathbf F $$
Now as you correctly say that $\mathbf L$ is constant therefore
$$\boldsymbol {\tau } = \mathbf r \times \mathbf F =0$$
Clearly since $\mathbf r \neq 0$ then this means that
$\mathbf F$ at all instants points towards the centre (i.e., to say that line of action of the force passes through the centre).
Note that force cannot be zero as a centripetal force is always required for maintaining circular motion.
Now, as you can see from the figure, during the transition from the outer orbit to the inner orbit the force and velocity vectors form an acute angle, and therefore the force acts to accelerate the object and hence increase its speed.
Also it should be noted that for angular momentum to be conserved the absence of force isn't a necessary criteria (example elliptical orbits of planets).
Regarding the comment:
So if I calculated the speed of particle after the radius has been reduced down to $0.5$ it would turn out to be $2v$, where $v$ is the original speed? Is there a formula that I can use?
Yes as you know that
$$\mathbf L = \mathbf r \times \mathbf P$$
$$\frac {\mathbf L}{m} = \mathbf r \times \mathbf v \tag 1$$
Clearly after the radius reduces to $\mathbf {r'}$ then the velocity becomes $\mathbf {v'}$
$$\Rightarrow \frac {\mathbf L}{m} = \mathbf {r'} \times \mathbf {v'} \tag 2$$
Therefore from $(1)$ and $(2)$ (also noting that after the particle comes into orbit the vectors have $90°$ angle between them)
$$\Rightarrow v'r'=vr$$
Then
$$v'= v \frac {r}{r'}$$
Note that this has been explained in the following Vsauce video (explanation starts from 10:05 to 13:15):
Laws and Causes | {
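Plugging the numbers from the question into $v'r'=vr$ gives the claimed speeds directly (a sketch of my own, not from the answer):

```python
m, r, v = 1.0, 1.0, 1.0     # mass, radius, tangential speed (from the question)
L = m * r * v               # angular momentum, conserved

r_new = 0.5
v_new = v * r / r_new       # v'r' = vr  =>  v' = v * r / r'
omega_new = v_new / r_new   # omega' = v' / r'

print(v_new)                    # 2.0 -> linear speed doubles
print(omega_new)                # 4.0 -> angular velocity is 4x the original
print(m * r_new * v_new == L)   # True: angular momentum unchanged
```

Note the kinetic energy quadruples; the extra energy is the work done by the centripetal force while pulling the particle inward.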
"domain": "physics.stackexchange",
"id": 64458,
"tags": "newtonian-mechanics, angular-momentum, rotational-dynamics, angular-velocity"
} |
$L$ APX-hard thus PTAS for $L$ implies $\mathsf{P} = \mathsf{NP}$ | Question: If $L$ is an APX-hard language, doesn't the existence of a PTAS for $L$ trivially imply $\mathsf{P} = \mathsf{NP}$?
For example, metric-TSP is in APX, but it is not approximable within 220/219 of OPT [1] unless $\mathsf{P} = \mathsf{NP}$. Thus, if there were a PTAS for $L$, we could reduce metric-TSP to $L$ using a PTAS reduction and thereby approximate OPT within arbitrary precision.
Is my argument correct?
[1] Christos H. Papadimitriou and Santosh Vempala. On the approximability Of the traveling salesman problem. Combinatorica, 26(1):101–120, Feb. 2006.
Answer: Some people (including more than one moderators) complained to me for posting an answer based on a comment, and I am tired of defending me from them. I asked them to delete this answer. | {
"domain": "cs.stackexchange",
"id": 432,
"tags": "complexity-theory, np-complete, approximation"
} |
Why don't boats have wings? | Question: According to Bernoulli's principle in aero- and fluid dynamics, high velocity creates force/pressure. Aeroplanes have wings. F1 cars have wings. But why don't boats have wings too? (Underwater, for extra lift, perhaps.)
Answer: Hydrofoils are boats/ships fitted with "wings", better known as foils, which act in a way very similiar to the wings fitted to aircraft.
As a hydrofoil craft gains speed, the hydrofoils lift the boat's hull out of the water, decreasing drag and allowing greater speeds.
In these pictures, you can see the layout of the foils, underneath the ship.
The basic idea is that the less of the ship's hull is in the water, the less drag occurs and the more efficient the ship becomes.
Text and Image Source: Wikipedia Hydrofoils
Since air and water are governed by similar fluid equations—albeit with different levels of viscosity, density, and compressibility—the hydrofoil and airfoil (both types of foil) create lift in identical ways. The foil shape moves smoothly through the water, deflecting the flow downward, which, following Newton's Third Law of Motion, exerts an upward force on the foil. This turning of the water creates higher pressure on the bottom of the foil and reduced pressure on the top. This pressure difference is accompanied by a velocity difference, via Bernoulli's principle, so the resulting flowfield about the foil has a higher average velocity on one side than the other.
When used as a lifting element on a hydrofoil boat, this upward force lifts the body of the vessel, decreasing drag and increasing speed. The lifting force eventually balances with the weight of the craft, reaching a point where the hydrofoil no longer lifts out of the water but remains in equilibrium. Since wave resistance and other impeding forces such as various types of drag on the hull are eliminated as the hull lifts clear, turbulence and drag act increasingly on the much smaller surface area of the hydrofoil, and decreasingly on the hull, creating a marked increase in speed.
Although they allow the ship greater speed for a given engine power and a decrease in drag, hydrofoils do have some significant disadvantages.
Hydrofoils are impractical if they operate in waters that are not perfectly clear of obstructions such as floating/semi submerged objects and large, heavy marine animals, such as dolphins, tuna, seals or whales. If the ship collides with these objects, the light construction of the foil and the struts may give way, creating a risk that the ship may roll upside down at high speed, which has reduced their popularity amongst ferry operators.
Hydrofoils are also relatively expensive to build and maintain. | {
"domain": "physics.stackexchange",
"id": 33953,
"tags": "fluid-dynamics, aerodynamics"
} |
Difference between Higgs and anti-Higgs Fields | Question: I'm assuming the LHC can create a Higgs and an anti-Higgs boson. If so, would their fields be identical with respect to mass effects? How would LHC detectors distinguish between the two bosons?
Answer: The Higgs is a real scalar field, so there's no "anti-Higgs" particle. The imaginary parts of the initial complex doublet are absorbed by the weak gauge bosons (the Ws and the Z); only a real scalar field remains after this.
"domain": "physics.stackexchange",
"id": 714,
"tags": "particle-physics"
} |
Where does one go to produce custom metal objects? | Question: I am wanting to produce custom mass plates for a project, very similar to those found in gyms. Here's a rough idea of what each one should look like:
This is essentially identical to mass plates one can buy online, however I am needing to produce my own, with custom specifications, sizes, etc. Most notably each plate will need a thread through the center hole.
Where would I even start looking for someone who could do this? Looking online, I get confused by all the metallurgical terms. What type of professional am I looking for? I feel clueless.
Thanks in advance.
Answer: The best place to go with just a drawing is a machine shop. Machine shops will contact metal suppliers for you, then create the items as you need them. For a small batch such as this, a machine shop is ideal: they will take pre-made metal stock and shape it to the form you need. For large batches, the machine shop may refer you to a cast-iron shop.
"domain": "engineering.stackexchange",
"id": 2162,
"tags": "mechanical-engineering, metallurgy, project-management"
} |
How to pull a message from queue with time interval? | Question:
When I subscribe my node, I set messages queue size and callback function:
int main(int argc, char **argv) {
ros::init(argc, argv, "scs_drive");
ros::NodeHandle n;
ros::Subscriber sub = n.subscribe("scs", 1000, writePosCallback);
ros::spin();
return 0;
}
The incoming messages are being processed as soon as previous callback function operations are finished.
But I want to process the incoming messages from the queue not ASAP, but one every millisecond.
So maybe it is possible to pull messages here:
int main(int argc, char **argv) {
ros::init(argc, argv, "scs_drive");
ros::NodeHandle n;
ros::Subscriber sub = n.subscribe("scs", 1000, writePosCallback);
ros::Rate loop_rate(1000);
while (ros::ok()) {
//some code here that pulls message from queue and process incoming data
ros::spinOnce();
loop_rate.sleep();
}
return 0;
}
Or I need to set pause in callback function and it is the one choice to process messages with a certain time interval?
Originally posted by Sergey Kravchenko on ROS Answers with karma: 7 on 2017-08-16
Post score: 0
Answer:
I don't think this is supported directly (ie: with some special spinner or something), but you could create your own CallbackQueue and then call CallbackQueue::callOne(..) at the appropriate time.
Note that this will mean that you're probably more likely to run into buffer overruns, as spinOnce() actually calls callAvailable(), which processes all available callbacks (here) and so empties your queue more regularly.
It might already exist, but if you want to abstract this nicely you could implement a new spinner that implements this behaviour. Then you could instantiate that in your code instead of having to maintain your own CallbackQueue instance.
Originally posted by gvdhoorn with karma: 86574 on 2017-08-16
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 28623,
"tags": "ros, queue, message"
} |
How does shockwave from hypersonic movement protect the moving object from air? | Question: The Steak Drop article from the What If? book says:
The steak spends a minute and a half over Mach 2, and the outer surface will likely be singed, but the heat is too quickly replaced by the icy stratospheric blast for it to actually be cooked.
At supersonic and hypersonic speeds, a shockwave forms around the steak which helps protect it from the faster and faster winds. The exact characteristics of this shock front—and thus the mechanical stress on the steak—depend on how an uncooked 8 oz. filet tumbles at hypersonic speeds
How does this happen? I have read the Wikipedia page on shock waves in supersonic flows, but it doesn't say anything about how the wave protects the object from the impact of the medium.
Answer: I am not really sure what is exactly meant by "protecting" the steak. For instance, here are some schlieren images from some NASA wind tunnel tests.
Notice that for a blunt-nosed body you will always observe a bow shock with a certain standoff distance from the body. However, for a sharp-nosed body you will almost always observe attached oblique shocks emanating from the nose of the body. As the Mach number is increased to larger values, the bow shock will "wrap" closer to the surface of the body, but will never actually touch the body. Similarly, for a sharp-nosed body, the oblique shocks will approach the body's surface, forming a very thin layer called the shock layer. The physical nature of this layer is very complex, with sharp entropy and vorticity gradients and the formation of viscous boundary-layer effects.
It is true that in the blunt-nosed case, the flow just behind the leading edge of the bow shock is subsonic. In that case, the flow will expand back to supersonic speeds around the body even though the body is technically moving at a hypersonic Mach number. I suppose this is what was meant by the shock "protecting" the steak from faster wind speeds. However, in no way would this physically "protect" the steak. At high enough velocities, the gas behind the shock waves in both cases will be completely dissociated and ionized. Oddly enough, this generally happens within the boundary layer. For instance, here is a typical temperature profile for a hypersonic boundary layer.
The heat transfer by thermal conduction into the steak would be governed by Fourier's law,
$$ q = -k\nabla T $$
In this case, we would be focused on the temperature gradient normal to the steak, $\partial T/ \partial y$. In most cases this is a very large gradient as indicated in the image. Moreover, because the maximum temperature usually happens at an intermediate distance between the surface and the edge of the boundary layer, you can easily observe heat transfer in the form of radiation from a radiating plasma just above the surface when the speeds are high enough. Regarding this bit of physics, the steak would be mostly disintegrated and not protected. | {
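For a feel of the magnitudes involved, here is a one-dimensional application of Fourier's law; all the numbers are illustrative guesses of mine, not values from the answer:

```python
k = 0.026        # W/(m K): thermal conductivity of air near room temperature
                 # (hot, dissociated air conducts far better, so this is a lower bound)
dT = 2000.0      # K: illustrative temperature drop across the boundary layer
dy = 1e-3        # m: illustrative boundary-layer thickness

q = k * dT / dy  # W/m^2, magnitude of q = -k dT/dy
print(q)         # tens of kW per square metre, even with cold-air conductivity
```

Even with these deliberately conservative inputs, the conductive flux is enormous compared to everyday heating, which is why the surface chars so quickly.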
"domain": "physics.stackexchange",
"id": 66978,
"tags": "thermodynamics, fluid-dynamics, acoustics, aerodynamics, shock-waves"
} |
Policemen catch thieves | Question:
Given an array of size n that has the following specifications:
Each element in the array contains either a policeman or a thief.
Each policeman can catch only one thief.
A policeman cannot catch a thief who is more than K units away from the policeman.
We need to find the maximum number of thieves that can be caught.
Here's code for policemen catch thieves written in Python3, I would like feedback/suggestions.
import random
def tp_build(n):
thief_police = []
for i in range(n):
thief_police.append(random.choice(['T', 'P']))
return thief_police
def catch(k=2, n=10):
tp_list = tp_build(n)
print('List: %s' % tp_list)
if 'P' not in tp_list:
return 'No Police!'
if 'T' not in tp_list:
return 'No Thieves!'
p_indexes = []
t_indexes = []
for i in range(n):
if tp_list[i] == 'P':
p_indexes.append(i)
elif tp_list[i] == 'T':
t_indexes.append(i)
combinations = []
for limit in range(1, k + 1):
for police in p_indexes:
if police + limit in t_indexes:
combinations.append((police, police + limit))
for police in p_indexes:
if police - limit in t_indexes:
combinations.append((police, police - limit))
p_list = []
t_list = []
for i, j in combinations:
p_list.append(i)
t_list.append(j)
new_p = []
new_t = []
for i in p_list:
if i not in new_p:
new_p.append(i)
for j in t_list:
if j not in new_t:
new_t.append(j)
final_combinations = list(zip(new_p, new_t))
print('Number of thieves caught: %s'%(len(final_combinations)))
return len(final_combinations)
if __name__ == '__main__':
for i in range(100):
catch()
Answer: Bugs
You return 'No Police!' instead of returning 0, but you don't check the return value. You might consider printing the string and returning 0 instead. Likewise for the no-thieves case.
Style
You are nominally compliant with PEP 8, but your style is very much a beginner style. Why do you have variables named t_list and p_list? Do you really think that the _list part of the name is the most important, and likely to be overlooked by readers?
When choosing a name, you should almost always avoid using the type of the data in the name (1). After all, the type might change! (For example, you might switch from a list to a dict for storage.)
(1): Except when the type is the important part, like if you're writing a type conversion function (str2int or wav2mp3 or something).
Iteration
You are missing some tricks of Python iteration. First, let's look at your tp_build function:
def tp_build(n):
thief_police = []
for i in range(n):
thief_police.append(random.choice(['T', 'P']))
return thief_police
This code is a one-off "helper" that does a job that is described in the problem statement: generate a random array of police & thieves. Because it's simple and the requirements are clearly explained by the problem statement, you don't need to worry about documenting this, or leaving the code spelled out for comprehensibility. So just use a list comprehension and shorten that code a lot (also, strings are iterable):
def tp_build(n):
return [random.choice("TP") for _ in range(n)]
In your `catch` function, you have a lot of iteration. Again, you're writing too-simple loops when you could get clarity and performance from Python features.
p_indexes = []
t_indexes = []
for i in range(n):
if tp_list[i] == 'P':
p_indexes.append(i)
elif tp_list[i] == 'T':
t_indexes.append(i)
In this paragraph, you construct lists of the indexes of the different characters. If you're going to look up the value of tp_list[i], you should just use enumerate:
p_indexes = []
t_indexes = []
for i, ch in enumerate(tp_list):
if ch == 'P':
p_indexes.append(i)
else:
t_indexes.append(i)
Of course, this is another case where you could use comprehensions. In theory, this is twice as slow. In practice, the comprehension might be faster because Python knows how to speed it up. You'll have to check it:
p_indexes = [i for i in range(n) if tp_list[i] == 'P']
t_indexes = [i for i in range(n) if tp_list[i] == 'T']
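If you want to check, `timeit` makes the comparison straightforward; here's a minimal sketch (the numbers will vary by machine and Python version):

```python
import random
import timeit

tp_list = [random.choice("TP") for _ in range(10_000)]

def loop_version():
    p_indexes, t_indexes = [], []
    for i, ch in enumerate(tp_list):
        if ch == 'P':
            p_indexes.append(i)
        else:
            t_indexes.append(i)
    return p_indexes, t_indexes

def comprehension_version():
    p_indexes = [i for i, ch in enumerate(tp_list) if ch == 'P']
    t_indexes = [i for i, ch in enumerate(tp_list) if ch == 'T']
    return p_indexes, t_indexes

# Both produce identical results; only the timings differ.
t_loop = timeit.timeit(loop_version, number=50)
t_comp = timeit.timeit(comprehension_version, number=50)
```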
Clarity
You don't provide a function level comment explaining your algorithm, and frankly it's not obvious to me. Obviously you are pairing some police with some thieves. But it would be helpful if you explained what/how your pairing mechanic was, and why you felt it produced a best-possible result. In cases where there are potential conflicts between pairings, how do you decide which to choose, and why?
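For reference, a common way to resolve such conflicts is a greedy left-to-right scan that always pairs the earliest unmatched policeman with the earliest unmatched thief in range; a minimal sketch:

```python
def max_catches(tp_list, k):
    """Greedy pairing: walk both index lists left to right, matching
    the earliest policeman with the earliest thief within k units."""
    police = [i for i, ch in enumerate(tp_list) if ch == 'P']
    thieves = [i for i, ch in enumerate(tp_list) if ch == 'T']
    p = t = caught = 0
    while p < len(police) and t < len(thieves):
        if abs(police[p] - thieves[t]) <= k:
            caught += 1
            p += 1
            t += 1
        elif thieves[t] < police[p]:
            t += 1  # this thief is out of reach of every remaining policeman
        else:
            p += 1  # this policeman can no longer reach any remaining thief
    return caught
```

For the list `['P','T','T','P','T']` with `k = 1` this pairs policeman 0 with thief 1 and policeman 3 with thief 2, catching 2 thieves.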
Also, note that at the end you are doing a lot of unnecessary work.
new_p = []
new_t = []
for i in p_list:
if i not in new_p:
new_p.append(i)
for j in t_list:
if j not in new_t:
new_t.append(j)
final_combinations = list(zip(new_p, new_t))
print('Number of thieves caught: %s'%(len(final_combinations)))
return len(final_combinations)
First, you make new_p and new_t. (Which aren't new: bad names.) You make them contain unique members of a previous list. Then you zip them together. The effect of zip is to stop when the shortest source iterable is exhausted. Then you compute the len of the result.
Effectively, you're asking for the length of the shortest unique group of indexes. Since order really doesn't matter here (you're using the len of the result) you can use a set. And since you just care about the length, you don't need to zip them, just compare their lengths:
catches = min(len(set(p_list)), len(set(t_list)))
print('Number of thieves caught: %d' % catches)
return catches | {
"domain": "codereview.stackexchange",
"id": 35012,
"tags": "python, python-3.x, programming-challenge"
} |
What is this synthetic molecular motor and what is the energy source? | Question: In the "Molecular dynamics" entry of 2018 version of Wikipedia (it have been removed for the current version), there is such a synthetic molecular motor:
You can also find this image by searching "MD_rotor_250K_1ns" for image on Bing.
(1) Any references about this synthetic molecular motor?
(2) According to the animation, it seems to be driven by thermal energy instead of chemical energy, is that true? If so, how can this be explained in terms of the second law of thermodynamics, since random thermal motion seems to be transformed into more ordered directional rotation?
Answer: The wiki article still exists, with the simulation too.
Molecular dynamics simulation of a synthetic molecular rotor composed of three molecules in a nanopore (outer diameter 6.7 nm) at 250 K
In the wiki article:
The basic requirements for a synthetic motor are repetitive 360° motion, the consumption of energy and unidirectional rotation.
So energy has to be supplied. There are light driven and chemically driven rotors.
The reference for the simulation is :
Palma, C.-A.; Björk, J.; Rao, F.; Kühne, D.; Klappenberger, F.; Barth, J.V. (2014). "Topological Dynamics in Supramolecular Rotors". Nano Letters. 14 (8): 4461–4468.
the article states that
As of 2020 the smallest, atomically precise molecular machine has a rotor which consists of four atoms
Thermal motion is utilized as follows in the latest experiment:
By breaking spatial inversion symmetry, the stator defines the unique sense of rotation. While thermally activated motion is nondirected, inelastic electron tunneling triggers rotations, where the degree of directionality depends on the magnitude of the STM bias voltage.
......
This ultrasmall motor thus opens the possibility to investigate in operando effects and origins of energy dissipation during tunneling events, and, ultimately, energy harvesting at the atomic scales. | {
"domain": "physics.stackexchange",
"id": 69028,
"tags": "thermodynamics, molecular-dynamics"
} |
Calculating member stress in a truss | Question: I have the following question:
Here is my attempt:
Is this correct ?
Answer: You made a mistake in solving the reaction at joint "A". See calc below:
$\sum M_G = 0$
$R_A = \dfrac{22.31 \times 8}{12} = 14.873$ kN
Solve internal member force using the method of section:
Since there is only one unknown in the vertical direction, so we can solve the member force $F_{BC}$ directly by $\sum F_X = 0$
$\sum F_X = 0$
$-F_{BC}\cos 30^{\circ} + R_A = 0$
$F_{BC} = \dfrac{R_A}{\cos 30^{\circ}} = \dfrac{14.873}{0.866} = 17.17$ kN (Tension. Direction as assumed - away from the joint B)
$\sigma_{BC} = \dfrac{17.17}{0.08} = 214.6\ \mathrm{kN/m^2} = 214.6\ \mathrm{kPa} = 214{,}600\ \mathrm{Pa}$ | {
"domain": "engineering.stackexchange",
"id": 4376,
"tags": "mechanical-engineering, structural-engineering, structural-analysis, stresses"
} |
Disconnecting RC/RL-Circuit prematurely | Question: Lately I've been dealing with some RC/RL Circuits, and as I was working on a problem, I came across a peculiar issue, which I haven't properly resolved so far.
The given equations that describe $i(t),\;u_{C/L}(t)$ and $u_R(t)$ do so in dependence on $U_0$, that being the initial voltage at the battery.
To me this makes sense, as long as the capacitor/inductor has had the time to fully charge/stop inducing. But what I don't fully understand is what happens when the battery is disconnected prematurely from the circuit. Which value do I insert for $U_0$? Either way, Kirchhoff's law needs to be satisfied, which implies that the voltages we're left with for $u_R$ and $u_{C/L}$ need to be opposite to one another, but I can't find a satisfactory explanation for which values I should logically continue working with. I hope my question is clear enough; if not, I can clarify further if needed.
I've never asked a question on here before, nor am I particularly experienced; but so far, I simply haven't been able to find any sufficient explanations on the topic.
Answer: Given no information on the circuit you are dealing with, one can only provide some general rules.
Plug in the value $t=t_{o}$ into your equations for voltages and currents as a function of time where $t_{o}$ is the time when the battery is disconnected. Then
Whatever the voltage is across a capacitor at $t=t_o$, it will be the same the instant the battery is disconnected. This is because you can't change the voltage across a capacitor in zero time and energy will be stored in the electric field of the capacitor.
Whatever the current is in an inductor at $t=t_o$, it will be the same the instant the battery is disconnected. This is because you can't change the current in an inductor in zero time and energy will be stored in the magnetic field of the inductor.
The current in and voltage across a resistor at $t=t_o$ will be dictated by the values of 1 and 2 and Kirchhoff's laws.
For time $t>t_o$ you will need new equations as a function of time for the transients, where there is no battery in the circuit and where the initial values of currents and voltages will be those of 1 through 3 above.
An anomaly occurs if you attempt to open a switch to the battery for a series RL circuit. The open switch is theoretically an infinite resistance so in order to satisfy condition 2 above, you would need an infinite voltage induced across the switch gap to maintain the current. In practice this can't happen and instead the inductor induces a sufficiently high voltage to cause arcing across the gap and a breakdown of the air, which together with the series resistor dissipates the energy of the magnetic field.
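As a minimal numerical sketch of rules 1 and 4, assuming a series RC circuit with made-up component values, where the battery is replaced by a discharge path through the resistor at $t_o$:

```python
import math

R, C, U0 = 1e3, 1e-6, 9.0   # assumed values: 1 kOhm, 1 uF, 9 V battery
tau = R * C                 # time constant of the circuit
t_o = 0.5 * tau             # battery disconnected before the capacitor is full

# Rule 1: the capacitor voltage at t_o carries over as the initial condition.
v_C_at_to = U0 * (1 - math.exp(-t_o / tau))   # charging curve, valid up to t_o

# Rule 4: for t > t_o a new transient starts from v_C_at_to, not from U0.
def v_C(t):
    return v_C_at_to * math.exp(-(t - t_o) / tau)
```

The point is that $U_0$ itself never enters the second phase; only the state captured at the moment of disconnection does.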
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 76011,
"tags": "electric-circuits, electrical-resistance, capacitance, inductance"
} |
Massless charged particles | Question: Are there any massless (zero invariant mass) particles carrying electric charge?
If not, why not? Do we expect to see any or are they a theoretical impossibility?
Answer: There's no problem in writing down a theory that contains massless charged particles. Simple $\mathcal{L} = \partial_{\mu} \phi \partial^{\mu} \phi^*$ for a complex field $\phi$ will do the job. You might run into problems with renormalization but I don't want to get into that here (mostly because there are better people here who can fill in the details if necessary).
Disregarding theory, those particles would be easy to observe assuming a high enough density. Also, as you probably know, particles in the Standard Model compulsorily decay (sooner or later) into lighter particles as long as conservation laws (such as the electric charge conservation law) are satisfied. So assuming massless charged particles exist would immediately make all the charged matter (in particular electrons) unstable unless those new particles differed in some other quantum numbers.
Now, if you didn't mention electric charge in particular, the answer would be simpler as we have massless (color-)charged gluons in our models. So it's definitely nothing strange to consider massless charged particles. It's up to you whether you consider electric charge more important than color charge.
Another take on this issue is that Standard Model particles (and in particular charged ones) were massless before electroweak symmetry breaking (at least disregarding other mechanisms of mass generation). So at some point in the past, this was actually quite common. | {
"domain": "physics.stackexchange",
"id": 69516,
"tags": "particle-physics, mass, charge"
} |
$F = ma$ In General Relativity | Question: I'm no expert in general relativity, so please bear with any misconceptions in my understanding :)
In general relativity, Einstein showed that we experience gravity because standing on earth is actually being in a non-inertial (accelerating) frame of reference in a curved space-time.
Only free falling along a geodesic contoured by the curvature of the local space-time is considered an inertial frame of reference.
On the other hand, we are led to believe that Newton's second law: $F=ma$ is valid only when one is in an inertial frame of reference.
So shouldn't $F=ma$ be invalid in most use cases of classical mechanics (obviously it is valid, but what am I missing)?
Answer: Classically, gravity appears in a force diagram as a regular force (albeit one that depends on the mass of the object). This is necessary when we assume the surface of the earth represents a (nearly) inertial frame.
Because the same frame in GR is non-inertial, we can expect fictitious forces to appear. The classical gravitational force appears this way and makes the force diagram sum up as expected. | {
"domain": "physics.stackexchange",
"id": 60450,
"tags": "newtonian-mechanics, general-relativity, reference-frames, inertial-frames"
} |
Coaxial cable & Poynting vector | Question: I am working on a homework problem where a coaxial cable has an inner charge/length $\lambda$ and outer charge/length $-\lambda$ and a current $I$.
The inner (conducting) cylinder is solid while the outer cylinder is just a shell.
The solution asserts the $\vec{E}$ field inside the inner cylinder is 0, but I was under the impression that that assumption is only valid in electrostatics.
Does a wire that is itself charged keep all the "net" charges on the surface even as a current is running through it?
Answer: Your question assumes the wire to be a perfect electrical conductor (PEC). This basically means that its conductivity goes to infinity $\sigma \rightarrow \infty$, or equivalently its resistivity goes to zero. Recall from Ohm's law that:
$$\mathbf J (\mathbf x ,t) = \sigma \mathbf E (\mathbf x ,t)$$
where $\mathbf J$ is the current density. Now for a PEC, as $\sigma \rightarrow \infty$, you can see that if we had a non-zero electric field inside the conductor, the current density would also go to infinity, which is clearly nonphysical. Thus, the only way to satisfy the above equation is for $\mathbf E (\mathbf x ,t)$ to be zero everywhere inside the conductor, at all times.
Thus, for a perfect electrical conductor, the electric field is zero inside the conductor even in time-varying cases.
Note that this is of course an idealization, and any wire has some finite conductivity, and thus the electric field inside is not exactly zero, although it is very small (I am disregarding things like superconductors). For a quantitative measure of how well the conductor rejects the internal electric field at some frequency, one can use the penetration depth, which is the length you need to go inside the conductor for the field to become $1/e$ times its surface value. For further information about penetration depth check this answer.
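As a minimal numerical sketch, assuming the usual textbook value for the conductivity of copper, the penetration depth $\delta = \sqrt{2/(\mu_0 \sigma \omega)}$ works out as:

```python
import math

mu0 = 4e-7 * math.pi   # vacuum permeability, H/m
sigma = 5.8e7          # conductivity of copper, S/m (assumed textbook value)

def skin_depth(f_hz):
    """delta = sqrt(2 / (mu0 * sigma * omega)) for a good conductor."""
    omega = 2 * math.pi * f_hz
    return math.sqrt(2 / (mu0 * sigma * omega))

depth_mains = skin_depth(60)    # roughly 8.5 mm at 60 Hz
depth_rf = skin_depth(1e6)      # roughly 66 um at 1 MHz
```

So at mains frequencies the field still penetrates millimetres into a copper wire, while at radio frequencies it is confined to a thin surface layer.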
Also note that the fact that your wires carry a charge doesn't affect the above reasoning, which means that the electric field is also zero in your specific problem. | {
"domain": "physics.stackexchange",
"id": 45905,
"tags": "homework-and-exercises, electromagnetism"
} |
Stop publishing a specific topic | Question:
Hi,
For testing purposes I want to stop the subscription of a topic during runtime (not programatically). I would like to know if there is any tool that allows this, any command that would allow this (or an XMLRPC to the ROS master to stop that topic, or something like that).
I do not want to kill the publishing node because what I want to do is to simulate a lost communication link. I was thinking that, if there is nothing, I would use topic_tools relay to remap the topic and then just kill the relay, but that requires modifying the remaps in the launchfile (something I would like to avoid).
Thank you!
Originally posted by Javier V. Gómez on ROS Answers with karma: 1305 on 2017-03-21
Post score: 1
Answer:
You could use the mux from topic_tools. Create two publishers (one that sends data and one which doesn't) and start with the publishing node. You can then switch to the quiet one if you want to simulate your blackout.
Originally posted by NEngelhard with karma: 3519 on 2017-03-21
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Javier V. Gómez on 2017-03-22:
Thanks a lot, that still has the problem of the relay (requires modiying the launchfiles) but is a better solution that what I had in mind :) | {
"domain": "robotics.stackexchange",
"id": 27386,
"tags": "ros, topic, topic-tools"
} |
Why are white dwarfs being classified as compact objects instead of black dwarfs? | Question: Why are white dwarfs being classified as compact objects instead of black dwarfs? As a black dwarf is the end stage of a white dwarf.
Answer: Black dwarfs, if they existed now, would be classified as "compact objects".
The only reason that you won't see them on the list is that the universe hasn't existed long enough to produce them. The compact objects that exist in the universe now are white dwarfs, neutron stars and black holes.
Though see https://public.nrao.edu/news/cold-white-dwarf/, where timing measurements of a pulsar imply the existence of a cool (3000 K) compact object that may be a white dwarf well on the way to becoming a black dwarf.
Also note that there is a qualitative difference between neutron stars, black holes and white dwarfs. There is no such difference between white and black dwarfs. A black dwarf is just a white dwarf that has cooled sufficiently. | {
"domain": "astronomy.stackexchange",
"id": 5769,
"tags": "terminology"
} |
What is Earth's linear velocity around the Sun? | Question: I am creating a theoretical model of the Earth's tangential acceleration around the Sun (on an elliptical orbit, not circular). First, I will build a theoretical model, which is not influenced by any other planet or mass in the solar system, and then compare it to the actual data available.
For this I would require data on the actual linear velocity of Earth around the Sun (using my method), does anyone know how I can obtain this data?
Answer: You can obtain this data (and other solar system data) to high accuracy using the HORIZONS software by NASA.
Use the following settings:
Ephemeris Type: VECTORS
Target Body: Earth [Geocenter] [399]
Coordinate Origin: Sun (body center) [500@10]
This will generate the position and velocity of Earth relative to the Sun at the specified time(s). More settings can be adjusted using Table Settings. | {
"domain": "physics.stackexchange",
"id": 82195,
"tags": "newtonian-mechanics, orbital-motion, astronomy, earth, solar-system"
} |
Why is the potential difference across these two capacitors both 9V? | Question:
So above are two difference circuits, each with a 9V battery and a capacitor of the same capacitance. $C_1=C_2$
Why is the potential difference across both capacitors 9V? Shouldn't it be less than 9V? The potential difference across the top of the circuit and the bottom of the circuit is 9V. So why is the potential difference across a very tiny capacitor 9V too? The distance between the two capacitor plates isn't the same as the distance between the top of the circuit and the bottom.
Answer: For a capacitor with closer plate spacing (all else being equal), the electric field in the dielectric between the plates is stronger.
Assuming a uniform electric field, the potential difference is given by the product of the spacing and the strength of the electric field:
$$\Delta V = E\cdot d $$
So, the potential difference can be the same for both capacitors even though the plate spacing is different but the electric field between the plates will be different.
This is due to the fact that, for smaller spacing, the capacitance $C$ is larger and, thus, more charge $Q$ must be moved by the battery to charge the capacitor to 9V.
$$\Delta V = \frac{Q}{C}$$ | {
"domain": "physics.stackexchange",
"id": 38183,
"tags": "homework-and-exercises, electricity, electric-circuits, capacitance"
} |
Fourier Optics - Why is polarization omitted? | Question: I am wondering why in wave optics the polarisation is often omitted.
Examples:
In Fourier Optics, a wave at a point $x \in \mathbb{R}^3$ is given by a complex number $U(x, y, z)$ which describes amplitude and phase. But what about the polarisation direction?
The formulas for Fresnel and Fraunhofer diffraction are always written for scalar fields $U(x, y, z)$. It seems as if the polarisation direction does not make a difference when light is diffracted at an aperture. But is this really correct? If I try to imagine the electric field, I always come to the conclusion that the polarization of the light at the aperture should make a difference.
I am kind of confused about that and would be very grateful if somebody could help me to understand this correctly.
Kind Regards
Answer: If we implicitly assume that the polarisation is constant, we are allowed to skip this detail in our description, because the result does not change. However, if we assume that different polarisation components are present, we have to address each component separately (assuming that you consider two orthogonal components, which do not mix).
By the way, the same is true for the wavelength of the light (there exists no light source which possesses only a single wavelength, i.e. frequency, component), the concept of a perfectly plane wave, or the idea of a medium with a homogeneous index of refraction. These are conceptual descriptions.
Here is an example:
Suppose we'd like to describe a plane wave with "frequency" $\omega$, wave vector $\vec k = \frac{2\pi}{\lambda} \, \vec e_z$, and polarisation $\vec p = \frac{1}{\sqrt{2}}(\vec e_x + \vec e_y)$. We could either choose the cartesian coordinate system and write
$$
\vec E(\vec r, t)
% = E_0 e^{-i(\omega t - \vec k\cdot \vec r)} \; \vec p
= \frac{E_0}{\sqrt{2}} e^{-i(\omega t - k z)} \;
\begin{pmatrix}
1\\
1
\end{pmatrix}
$$
or we could rotate the reference frame by 45° and use the vector basis
$\{\vec e_{p_{\parallel}}, \vec e_{p_{\perp}}\}$, which leads to
$$
\vec E(\vec r, t)
= E_0 e^{-i(\omega t - k z)} \;
\begin{pmatrix}
1\\
0
\end{pmatrix}
$$
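This 45° basis change is easy to verify numerically; a minimal sketch:

```python
import numpy as np

# Polarisation p = (e_x + e_y)/sqrt(2) expressed in the (e_x, e_y) basis.
p = np.array([1.0, 1.0]) / np.sqrt(2)

# Change of basis: rotate the reference frame by 45 degrees.
theta = np.pi / 4
to_rotated = np.array([[ np.cos(theta), np.sin(theta)],
                       [-np.sin(theta), np.cos(theta)]])

p_rotated = to_rotated @ p   # components in the (e_par, e_perp) basis
# p_rotated equals [1, 0]: the field lies entirely along e_par,
# so only a single scalar component has to be tracked.
```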
Now, if we assume that all electric fields are either parallel or anti-parallel to $\vec e_{p_{\parallel}}$, we know that we only have to consider the first component of the vector. Hence, by dropping the vector notation and implicitly only considering the first component of the vector, we are allowed to write
$$
E(\vec r, t)
= E_0 e^{-i(\omega t - k z)}
$$
Hence, the polarisation is no longer explicitly considered, but only implicitly. | {
"domain": "physics.stackexchange",
"id": 66253,
"tags": "electromagnetism, optics, polarization"
} |
Event driven timed I/O | Question: The following code is used to access a shield which converts settings written over I2C to Servo output.
I make use of the MRAA library since that's the default supported by my hardware.
To prevent my I2C commands from being written all at the same time, I use setTimeout to wait a given amount of time before executing the next.
For those unfamiliar with MRAA and I2C:
writeReg accepts two arguments. The first is the target register, the second is the data. As long as two different registers are being written, it's no problem if one is written earlier than the other. However, I want to make this code more generic so it can be used with something different than Servos as well. A generic solution should give me the possibility to burst multiple parts of data to the same address and keep the order intact.
Safe to say my current structure is not easily expandable. How should I make it more expandable the Node.JS way? Expandability is more important than performance, but since I'm looking for an idiomatic solution this shouldn't be relevant.
Also note the data written seems redundant, but there are out-of-scope reasons for this.
var m = require('mraa'); // I/O library
var Servo = new m.I2c(0);
Servo.address(0x74);
timer();
function timer() {
firstI2C();
setTimeout(timer, 5000)
}
function firstI2C() {
Servo.writeReg(1, 250);
Servo.writeReg(2, 128+16+0x0);
Servo.writeReg(37, 0);
console.log("1");
setTimeout(secondI2C, 1000)
}
function secondI2C() {
Servo.writeReg(1, 5);
Servo.writeReg(2, 128+16+0x0);
Servo.writeReg(37, 0);
console.log("2");
setTimeout(thirdI2C, 1000)
}
function thirdI2C() {
Servo.writeReg(1, 250);
Servo.writeReg(2, 128+16+2);
Servo.writeReg(37, 0);
console.log("3");
setTimeout(fourthI2C, 1000)
}
function fourthI2C() {
Servo.writeReg(1, 5);
Servo.writeReg(2, 128+16+2);
Servo.writeReg(37, 0);
console.log("4");
}
I've been thinking about a function which would accept data in JSON and maps it to such functions and which keeps execution order by the position in the JSON (list). However, such a function is way beyond my current skill level.
Answer: A very simple solution is to use a Map that (as its name implies) maps a string to a function so that you can pass an array of sorted strings and ask for the application to get each function and run it in the specified order.
Something like this:
var myFuncs={ 'firstI2C':firstI2C,
'secondI2C':secondI2C,
//so on
}
Then you can have a function that tells the program to run next function. For example:
var nextMethodIndex=0;
var methodOrder=['firstI2C','secondI2C','thirdI2C',....] // this array can be fed as JSON.
function runNextMethod()
{
if(nextMethodIndex<methodOrder.length)
{
var func= myFuncs[methodOrder[nextMethodIndex]];
nextMethodIndex++;
setTimeout(func, 1000)
}
}
Finally you should modify your functions to call runNextMethod when they finish their job.
function timer() {
runNextMethod();
setTimeout(timer, 5000)
}
function firstI2C() {
Servo.writeReg(1, 250);
Servo.writeReg(2, 128+16+0x0);
Servo.writeReg(37, 0);
console.log("1");
runNextMethod();
} | {
"domain": "codereview.stackexchange",
"id": 13879,
"tags": "javascript, node.js, io, generics"
} |
How to create quantum circuits from scratch | Question: I am doing self-study at the moment using primarily the book: Quantum Computing a Gentle Introduction by Eleanor Rieffel and Wolfgang Polak.
Getting through the earlier chapters and exercises went quite well (fortunately the earlier chapters had plenty of examples), however I got stuck on the 5th chapter on quantum circuits. Although I understand the concepts the authors present, perhaps due to a lack of examples, I have trouble applying said concepts to the exercises.
The exercises I have trouble with (and where I can't find a solution or thorough/ introductory explanation for) are the following:
$\\$
Questions:
Design a circuit for creating:
$\left| W_n \right> = \frac{1}{\sqrt{n}}(\left| 0 \dots 001 \right> + \left| 0 \dots 010 \right> + \left| 0\dots 100 \right> + \cdots + \left| 1\dots 000 \right>)$ from $\left| 0 \dots 000 \right>$
And design a circuit for creating "the Hardy state":
$\frac{1}{\sqrt{12}}(3\left| 00 \right> + \left| 01 \right> + \left| 10 \right> + \left| 11 \right>)$
$\\$
Can somebody point me in the right direction or refer me to some literature/ tutorials so I can grasp these kind of exercises better?
$\\$
Perhaps a related question:
Tips and tricks for constructing circuits to generate arbitrary quantum states
Answer: As DaftWullie pointed out, the question about $W_n$ has an excellent collection of answers here.
For the Hardy state question (and a lot of other tasks like it), you can approach it as follows.
Start with the $|0...0\rangle$ state.
Start by putting the first qubit "in the right state", which is a state $(\alpha |0\rangle + \beta |1\rangle) \otimes |0...0\rangle$, where $\alpha$ and $\beta$ are the relative weights of all basis states which start with 0 and with 1, respectively. For Hardy state specifically, two basis states start with 0: $\frac{1}{\sqrt{12}}(3\left| 00 \right> + \left| 01 \right>)$ and two basis states start with 1: $\frac{1}{\sqrt{12}}(\left| 10 \right> + \left| 11 \right>)$; their relative weights are just the sums of squares of their amplitudes: $\frac{9}{12} + \frac{1}{12} = \frac{10}{12}$ and $\frac{1}{12} + \frac{1}{12} = \frac{2}{12}$, respectively. So you'll need to put the first qubit in the state $(\sqrt{\frac{10}{12}} |0\rangle + \sqrt{\frac{2}{12}} |1\rangle)$ using $R_y$ gate.
Continue by putting the second qubit in the right state, applying controlled $R_y$ gates with the first qubit as the control. To get the first two terms right, you need to convert the term $\sqrt{\frac{10}{12}} |0\rangle \otimes |0\rangle$ into the term $\frac{1}{\sqrt{12}}(3\left| 00 \right> + \left| 01 \right>)$, which is the same as converting the normalised state $|0\rangle \otimes |0\rangle$ into $\frac{1}{\sqrt{10}}(3\left| 00 \right> + \left| 01 \right>)$ without affecting the state $|1\rangle \otimes |0\rangle$ (note the renormalization when switching from terms of a larger expression to standalone states!) To do this, you can do a 0-controlled $R_y$ with the first qubit as control and the second qubit as target.
If you have more qubits, you will continue doing this, using more control qubits to make your rotations more and more specific.
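As a minimal numerical sketch of the two-qubit construction above (using plain NumPy rather than a quantum SDK, with qubit 0 taken as the leftmost qubit):

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

state = np.array([1.0, 0.0, 0.0, 0.0])   # |00>, index = 2*q0 + q1

# Step 1: qubit 0 gets weights 10/12 (for |0>) and 2/12 (for |1>).
state = np.kron(ry(2 * np.arccos(np.sqrt(10 / 12))), np.eye(2)) @ state

# Step 2: rotations on qubit 1, conditioned on qubit 0:
#   q0 = 0 branch prepares (3|0> + |1>)/sqrt(10)
#   q0 = 1 branch prepares (|0> + |1>)/sqrt(2)
ctrl = np.zeros((4, 4))
ctrl[:2, :2] = ry(2 * np.arccos(3 / np.sqrt(10)))
ctrl[2:, 2:] = ry(2 * np.arccos(1 / np.sqrt(2)))
state = ctrl @ state

hardy = np.array([3.0, 1.0, 1.0, 1.0]) / np.sqrt(12)
# state now matches hardy up to floating-point error
```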
You can see this paper by Shende, Bullock and Markov if you want a more formal and less ad-hoc explanation. | {
"domain": "quantumcomputing.stackexchange",
"id": 886,
"tags": "quantum-gate, circuit-construction"
} |
Born-Oppenheimer approximation | Question: I'm studying about formulation of Born-Oppenheimer approximation from Atkins, Molecular quantum mechanics, 5th edition. In chapter 8.1 I found the following:
I keep wondering where this W came from in equation 8.4. It says later in the text that it is responsible for non-adiabatic effects, but I don't see how it just appeared here. If someone is willing to explain, I'd be grateful.
Also, I'd appreciate tips on where to find a nice and clean text about a more general formulation of the Born-Oppenheimer approximation, since this one is confined to a particular system.
Thanks!
Answer: Atkins did a good job on explaining what parametric dependence of the electronic wave function (and electronic energy) on nuclear coordinates is, but failed to mention that in the derivation of the Born-Oppenheimer approximation this dependence is assumed to be continuous and differentiable and that both first and second derivatives of this quantities with respect to nuclear coordinates are in general non-zero.
In particular, for the system described in the text, we have
$$
\frac{\partial \psi}{\partial Z_j} \neq 0 \, , \quad
\frac{\partial^2 \psi}{\partial Z_j^2} \neq 0\, .
$$
Now, when the solution of the form (8.3) is substituted into the Schrödinger equation (8.2), the term involving $T_\mathrm{e}$ is trivial: since the nuclear wave function $\chi$ is not a function of electronic coordinates it is just a constant when differentiating with respect to them, so we get
$$
T_\mathrm{e} (\psi \chi) = \chi T_\mathrm{e} \psi \, .
$$
But the term involving $T_\mathrm{N}$ does not trivially transform in a similar way,
$$
T_\mathrm{N} (\psi \chi) \neq \psi T_\mathrm{N} \chi
$$
since both $\psi$ and $\chi$ depend on the nuclear coordinates. Rather, applying the product rule twice, we get
\begin{align}
\frac{\partial^2}{\partial Z_j^2} (\psi \chi)
&=
\frac{\partial}{\partial Z_j} \left( \frac{\partial}{\partial Z_j} (\psi \chi) \right) \\
&=
\frac{\partial}{\partial Z_j} \left( \psi \frac{\partial \chi}{\partial Z_j} + \chi \frac{\partial \psi}{\partial Z_j} \right) \\
&=
\psi \frac{\partial^2 \chi}{\partial Z_j^2}
+
2 \frac{\partial \psi}{\partial Z_j} \frac{\partial \chi}{\partial Z_j}
+
\chi \frac{\partial^2 \psi}{\partial Z_j^2} \, ,
\end{align}
so that
$$
T_\mathrm{N} (\psi \chi)
=
- \psi \sum\limits_{j=1,2} \frac{\hbar^2}{2 m_j} \frac{\partial^2 \chi}{\partial Z_j^2}
-
\sum\limits_{j=1,2} \frac{\hbar^2}{2 m_j} \left(
2 \frac{\partial \psi}{\partial Z_j} \frac{\partial \chi}{\partial Z_j}
+
\chi \frac{\partial^2 \psi}{\partial Z_j^2} \right) \, ,
$$
where the first term is nothing but $\psi T_\mathrm{N} \chi$ and the remaining terms are designated as $W$. | {
"domain": "chemistry.stackexchange",
"id": 6359,
"tags": "quantum-chemistry, books"
} |
Time and gravity | Question: Am I right in saying that if you could raise the distance in the speed = distance/time equation, without altering the other parameters, it would give the appearance that (to an outside observer) time would appear (on paper) to slow? Is this how we come to the conclusion that time near a massive body runs slower, because space is distorted by gravity altering the parameters of this equation?
Answer: Please note that your speed=distance/time equation is just that, an equation. Being an equation, I could then write $vt=d$ (I've only moved time to the other side). Notice that if you stipulate that speed and time parameters are kept constant, that means the distance parameter MUST be constant. The left-hand side of the equation is equal to the right-hand side. If one side is constant, so is the other. Long story short, this is not how we concluded that gravity dilates time.
Gravitational time dilation has many forms, but the best one comes from something called the Schwarzschild metric. A metric is an equation that represents how each dimension (length, width, depth, duration) relates to each other dimension. The Schwarzschild metric (named after the guy) is just a metric with an object of mass $M$ located at the center of your coordinate system.
I'll spare you the boring details (well, I think they're fun, but I'm pretty sure that's because I'm crazy) but we can directly pull a time term from this metric that shows the following:
$$d\tau^2=\left(1-\frac{2GM}{c^2R}\right)dt^2$$
$d\tau$ is an amount of time that passes for an observer a fixed distance $R$ away from an object of mass $M$. $dt$ is the relative amount of time for an observer very far away from the mass. $G$ is Newton's gravity constant and $c$ is the speed of light. I'll spare you the messy derivation of this. Suffice it to say that this is where we get gravitational time dilation from, not from some notion of gravity distorting only one parameter of $v=d/t$. | {
"domain": "physics.stackexchange",
"id": 39538,
"tags": "general-relativity, gravity, time, observers"
} |
making *.world file in gazebo package | Question:
Hi all, how can I make a .world file?
I read all the existing .world files, but I need something else to learn from.
Thanks in advance.
Originally posted by Maurizio88 on ROS Answers with karma: 155 on 2011-11-15
Post score: 0
Answer:
I don't know of any tutorials specifically, but the gazebo_worlds package contains some pretty useful examples. I just stumbled across the test_friction.world, and that seems to have a lot of useful models for me to learn from.
Originally posted by DimitriProsser with karma: 11163 on 2011-11-16
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 7303,
"tags": "gazebo, simulation, simulator-gazebo, world, gazebo-worlds"
} |
Asynchronous Event Handler | Question: This class acts as an asynchronous event handler that will execute all attached tasks in an async/await context. Requires the System.Collections.Immutable NuGet package. Example usage:
class MyEventArgs : EventArgs {}
async Task SomeAsyncMethodA(object src, EventArgs args) {
Console.WriteLine("Task A...");
await Task.Delay(2000);
}
async Task SomeAsyncMethodB(object src, EventArgs args) {
Console.WriteLine("Task B...");
await Task.Delay(1000);
}
static async Task Main(string[] args) {
AsyncEvent<MyEventArgs> Events;
Events = new AsyncEvent<MyEventArgs>();
Events += SomeAsyncMethodA;
Events += SomeAsyncMethodB;
await Events?.InvokeAsync(null, new MyEventArgs()); // static Main has no 'this'; pass an appropriate sender
// Use below to discard the task and not await the event tasks to finish.
// _ = Events?.InvokeAsync(null, new MyEventArgs()).ConfigureAwait(false);
}
Source for the AsyncEvent<EventArgsT> class:
// T is the EventArgs class type to pass to the callbacks on Invoke.
public class AsyncEvent<T> where T : EventArgs {
// List of task methods to await.
public ImmutableList<Func<object, T, Task>> Invokables;
// on += add new callback method to AsyncEvent.
public static AsyncEvent<T> operator+(AsyncEvent<T> source, Func<object, T, Task> callback) {
if (callback == null) throw new NullReferenceException("Callback is null! <AsyncEvent<T>>");
if (source == null) return null;
if (source.Invokables == null) source.Invokables = ImmutableList<Func<object, T, Task>>.Empty;
source.Invokables = source.Invokables.Add(callback);
return source;
}
// on -= remove existing callback from AsyncEvent.
public static AsyncEvent<T> operator -(AsyncEvent<T> source, Func<object, T, Task> callback) {
if (callback == null) throw new NullReferenceException("Callback is null! <AsyncEvent<T>>");
if (source == null) return null;
source.Invokables = source.Invokables.Remove(callback);
return source;
}
// Invoke the tasks asynchronously with a cancellation token.
public async Task InvokeAsync(object source, T evArgs, CancellationToken token) {
List<Task> tasks = new List<Task>();
if (Invokables != null)
foreach (var callback in Invokables)
if (!token.IsCancellationRequested)
tasks.Add(callback(source, evArgs));
await Task.WhenAll(tasks.ToArray());
}
// Invoke the tasks asynchronously.
public async Task InvokeAsync(object source, T evArgs) {
List<Task> tasks = new List<Task>();
if (Invokables != null)
foreach (var callback in Invokables)
tasks.Add(callback(source, evArgs));
await Task.WhenAll(tasks.ToArray());
}
}
Is there anything wrong with this asynchronous paradigm?
Answer: Not a review, but an alternative solution to leave here, since the one above seems like overkill to me.
You probably want to implement the publisher/subscriber pattern in an awaitable mode. But that is possible using the delegate type itself.
Consider the extension method
public static class DelegateExtensions
{
public static Task InvokeAsync<TArgs>(this Func<object, TArgs, Task> func, object sender, TArgs e)
{
return func == null ? Task.CompletedTask
: Task.WhenAll(func.GetInvocationList().Cast<Func<object, TArgs, Task>>().Select(f => f(sender, e)));
}
}
And the test
class Program
{
public static event Func<object, EventArgs, Task> MyAsyncEvent;
static async Task SomeAsyncMethodA(object src, EventArgs args)
{
Console.WriteLine("Task A...");
await Task.Delay(2000);
Console.WriteLine("Task A finished");
}
static async Task SomeAsyncMethodB(object src, EventArgs args)
{
Console.WriteLine("Task B...");
await Task.Delay(1000);
Console.WriteLine("Task B finished");
}
static async Task Main(string[] args)
{
MyAsyncEvent += SomeAsyncMethodA;
MyAsyncEvent += SomeAsyncMethodB;
await MyAsyncEvent.InvokeAsync(null, EventArgs.Empty);
Console.WriteLine("Invoked");
Console.ReadKey();
}
}
Output
Task A...
Task B...
Task B finished
Task A finished
Invoked
Looks like it works. | {
"domain": "codereview.stackexchange",
"id": 41488,
"tags": "c#, async-await"
} |
Calculating acceleration function for given time, distance, initial velocity | Question: An object has an initial velocity of v and should stop in d meters, in t seconds, where v*t >= d. I can predict that this acceleration won't be constant, but rather a function of time, and this problem may not have one exact solution for given v, t and d. But I am not able to think of any solution; can anyone help me to calculate this function?
Answer: Assuming the $v$-$t$ graph is quadratic, we can write the velocity, acceleration and distance functions as
$$ f(x) = ax^2 + bx + c \mbox{ } (\mathrm{velocity})$$
$$f'(x) = 2ax + b \mbox{ } (\mathrm{acceleration}) $$
$$ F(x) = ax^3/3 + bx^2/2 + cx + k \mbox{ } (\mathrm{distance}) $$
From the given requirements, we know that the initial velocity is $v$ at $x=0$. Then:
$$ f(0) = v$$
$$c = v $$
Again from the given requirements, we know that the velocity is $0$ at $x=t$.
$$f(t) = 0$$
$$ at^2 + bt + c = 0$$
We also know that all movement will complete in a distance of $d$. So the integral from $0$ to $t$ should equal $d$.
$$ d = F(t) - F(0)$$
From here we can calculate that
$$ b = (6d - 4vt) / t^2$$
$$ a = -(((6d - 4vt) / t^2)t + v) / t^2$$
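As a numerical sanity check (the values of $v$, $t$ and $d$ below are arbitrary example inputs satisfying $vt \ge d$), these formulas meet all three constraints:

```python
# Numerical sanity check of the formulas above; v, t, d are hypothetical
# example values satisfying the requirement v*t >= d.
v, t, d = 10.0, 4.0, 30.0

b = (6 * d - 4 * v * t) / t**2          # b from the distance constraint
a = -(b * t + v) / t**2                 # a from f(t) = a*t^2 + b*t + v = 0
c = v                                   # initial velocity

f = lambda x: a * x**2 + b * x + c                    # velocity
F = lambda x: a * x**3 / 3 + b * x**2 / 2 + c * x     # distance (k = 0)

assert abs(f(0) - v) < 1e-9           # starts at velocity v
assert abs(f(t)) < 1e-9               # stops at time t
assert abs(F(t) - F(0) - d) < 1e-9    # travels exactly d
print(f"a = {a}, b = {b}: all three constraints satisfied")
```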
Now we can write $f(x)$ and $f'(x)$ in terms of given $v$, $t$ and $d$. | {
"domain": "physics.stackexchange",
"id": 9119,
"tags": "kinematics, acceleration"
} |
Simple ohms law on a battery ? Paradox or conceptual error? | Question: Suppose we have a regular pencil battery which supplies DC voltage $V$. Say we take copper wire and connect the ends of the battery to an $R$ ohms resistance.
Then Ohm's law tells us the current in the wire is $ \frac{V}{R}$.
This means as we keep decreasing the value of $R$, we will keep getting higher
and higher values of current, since $V$ is fixed.
Now if we simply connect the ends of the battery by a copper wire without
an intermediate resistance, of course the value of the current will not be infinity, but it will be $\frac{V}{R_{copper}}$ which is still very large.
The resistance of copper is so small that even for $V=1.5$ volts we will
get a current larger than 1 Amp with copper. And according to this link
1 Amp can almost give you a heart attack.
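To put rough numbers on that estimate (the wire geometry and body resistance below are hypothetical round values):

```python
# Back-of-the-envelope numbers for the question's scenario; the wire
# dimensions and the body resistance are hypothetical round values.
rho_copper = 1.68e-8           # ohm*m, resistivity of copper
length, area = 0.10, 1.0e-6    # a 10 cm wire of 1 mm^2 cross-section
V = 1.5                        # volts, one pencil battery

R_wire = rho_copper * length / area   # ~0.0017 ohm
I_wire = V / R_wire                   # ~900 A by naive Ohm's law (in practice
                                      # the cell's internal resistance limits
                                      # this to a few amperes)
print(f"R_wire = {R_wire:.2e} ohm, naive short-circuit current = {I_wire:.0f} A")
```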
So why is it that we don't hear about major accidents involving people connecting the two ends of a pencil battery with regular copper wire? Is there some fallacy in my reasoning above?
Answer: An ampere passing through your heart can give you a heart attack. An ampere passing through a wire will not.
The human body has a fairly large resistance ($10000\ \mathrm{\Omega}$ perhaps?), so the same voltage that can make a large current pass through a copper wire will not necessarily make any significant current flow through a person. | {
"domain": "physics.stackexchange",
"id": 7704,
"tags": "electromagnetism, electricity, electric-current, batteries"
} |
Why is the radial velocity considered zero? | Question: I had recently come across a question which is stated as below:
A disc placed on a large horizontal floor is connected from a vertical cylinder of radius $r$ fixed on the floor with the help of a light inextensible cord of length $l$ as shown in the figure. Coefficient of friction between the disc and the floor is $\mu$. The disc is given a velocity $v$ parallel to the floor and perpendicular to the cord. How long will the disc slide on the floor before it hits the cylinder?
I thought hard for a few days but I couldn't solve it as the mathematics was terrible. Finally, while trying many other things, I tried considering the radial component of velocity to be zero and it worked! I got the answer.
But I am not able to understand the logic behind considering the radial velocity zero. Would someone please help me to understand it?
Edit: The figure is given as below:
Answer: Let's say the disk has got a non-zero radial velocity. This then has $2$ possibilities. First, the radial velocity is outward along the string and second, the radial velocity is inward along the string.
The first case cannot happen because of the restriction given in the question: the string is inextensible.
For the second case, if the disk has a velocity inward along the length of the string, the string will slacken after the disk moves a distance $dl$, which will then cause the tension force (the force exerted by the string, and the only force that can give the disk angular motion) to instantaneously become zero. So, in this case, the disk will keep moving in the same direction with decreasing speed until the string tightens again and starts providing angular motion.
"domain": "physics.stackexchange",
"id": 76884,
"tags": "newtonian-mechanics, work"
} |
Does the divergence theorem imply an underlying symmetry? | Question: The divergence theorem connects the flux (through surface) and divergence (in a volume) for any vector field.
This theorem expresses continuity. It isn't clear (to me) whether there is a conserved quantity associated with the continuity equation. It appears that this theorem would be equivalent to mass conservation, if the flux represented (say) fluid flow. In the general case of a vector field, I'm not sure what (if anything) is conserved.
I would like to know if this continuity implies a conserved quantity and an underlying symmetry, by the converse of Noether's theorem. If this (existence of symmetry/conserved quantity) is true for some vector fields but not all, what causes the distinction? Examples would be greatly appreciated.
While I don't have a strong background on Lagrangian mechanics, I'm happy to be directed to background reading that would help.
Answer: The integral theorem of Gauss, $$\int\limits_V \! d^3x \; \vec{\nabla} \cdot \vec{A}(\vec{x}) =\int\limits_{\partial V}\! d \vec{\sigma} \cdot \vec{A}(\vec{x}), \tag{1} \label{1}$$ is a purely mathematical statement. Taken by itself, it does not express "continuity" in any sense.
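It is, however, easy to verify equation (1) numerically in a concrete case. The sketch below (the vector field is an arbitrary choice) checks both sides on the unit cube with the midpoint rule:

```python
# Midpoint-rule check of Gauss's theorem (1) on the unit cube, for the
# arbitrarily chosen field A = (x^2, y^2, z^2) with div A = 2x + 2y + 2z.
n = 40
h = 1.0 / n
pts = [(i + 0.5) * h for i in range(n)]   # midpoints of a uniform grid

# Left-hand side: volume integral of the divergence over the cube.
vol = sum((2 * x + 2 * y + 2 * z) * h**3
          for x in pts for y in pts for z in pts)

# Right-hand side: outward flux through the boundary.  The normal component
# of A vanishes on the faces x = 0, y = 0, z = 0 and equals 1 on each of
# the three opposite faces, so each of those contributes its area, 1.
flux = 3 * sum(1.0 * h**2 for _ in range(n * n))

assert abs(vol - 3.0) < 1e-9
assert abs(flux - 3.0) < 1e-9
print(f"volume integral = {vol:.6f}, flux = {flux:.6f}")
```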
The concept of continuity (in the physical sense) comes into play once you have a scalar field (scalar density) $\rho(t, \vec{x})$ and a vector field (current density) $\vec{j}(t, \vec{x})$ related by the continuity equation $$\frac{\partial \rho(t,\vec{x})}{\partial t} + \vec{\nabla} \cdot \vec{j}(t,\vec{x}) =0. \tag{2} \label{2}$$ Defining the "charge" contained in a volume $V \subset \mathbb{R}^3$ at time $t$ by $$Q_V(t):= \int\limits_V \! d^3x \, \rho(t,\vec{x}), \tag{3} \label{3}$$ the integral theorem of Gauss \eqref{1} can be used to show that \eqref{2} implies $$\frac{d Q_V(t)}{dt}=\int\limits_V \! d^3x \, \frac{\partial \rho(t,\vec{x})}{\partial t}=-\int\limits_V\! d^3 x \, \vec{\nabla} \cdot \vec{j}(t,\vec{x})=-\int\limits_{\partial V} \! d \vec{\sigma} \cdot \vec{j}(t,\vec{x})=: -I_{\partial V}(t), \tag{4} \label{4} $$ relating the change of the charge contained in the volume $V$ to the flux (current) $I_{\partial V}(t)$ through the surface $\partial V$ of the volume $V$. Conversely, if $$\dot{Q}_V(t)=-I_{\partial V}(t)\tag{5} \label{5}$$ holds for "any" three-dimensional manifold $V \subset \mathbb{R}^3$ (subject to some mathematical qualification), the continuity equation \eqref{2} can be derived as the "local" version of \eqref{5}.
Assuming further that $\rho(t, \vec{x})$ and $\vec{j}(t,\vec{x})$ fall off sufficiently fast for $|\vec{x}|\to \infty$, \eqref{4} implies that the total charge $$Q:= \int\limits_{\mathbb{R}^3} \! d^3x \, \rho(t,\vec{x}) \tag{6} \label{6}$$ is time-independent, defining a conserved quantity.
Prominent examples are the charge density $\rho$ with the current density $\vec{j}$ in electrodynamics, the energy density of the electromagnetic field $\eta$ together with the energy flux density $\vec{S}$ in Maxwell's theory (in the absence of charges), mass density $\rho$ together with $\rho \vec{v}$ in nonrelativistic continuum mechanics and many others. | {
"domain": "physics.stackexchange",
"id": 99874,
"tags": "symmetry, conservation-laws, gauss-law, vector-fields, noethers-theorem"
} |
running rosjava_tutorial_pubsub | Question:
Hi,
I have successfully built the tutorial, but I don't know how to get it running. I will be grateful for any help.
Thanks
Originally posted by chcorbato on ROS Answers with karma: 202 on 2011-09-11
Post score: 1
Answer:
To run a rosjava node do:
# rosrun rosjava_bootstrap run.py <PACKAGE> <Node>
for example for the tutorial:
# rosrun rosjava_bootstrap run.py rosjava_tutorial_pubsub org.ros.tutorials.pubsub.Talker
to run the talker,
and in another terminal:
# rosrun rosjava_bootstrap run.py rosjava_tutorial_pubsub org.ros.tutorials.pubsub.Listener
For me it only works once I've previously launched the ROS master; it seems to fail when launching it from the Java classes of the nodes.
Originally posted by chcorbato with karma: 202 on 2011-09-13
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 6659,
"tags": "rosjava"
} |
SetLinearVel() is not constant? | Question:
Hello !
I am using Gazebo 7 and ROS to make a simulation of a Pioneer 3-AT for my internship.
I am facing a problem that has been blocking me for days. I have a plugin that at a specific moment calls the function SetLinearVel(); however, I can see that the model gets an acceleration, then stops and just slides.
I wanted to know if there is a way to solve this issue. I need a constant velocity and I don't know how to achieve it. Here are my two functions:
// Set the model's linear velocity in the world frame (applies no force).
void definirVitesseLineaire(const double x, const double y){
    this->model->SetLinearVel(math::Vector3(x, y, 0));
}
// Set the model's angular velocity about the vertical axis.
void definirVitesseAngulaire(const double z){
    this->model->SetAngularVel(math::Vector3(0, 0, z));
}
Even if my robots are set to the same speed, they won't actually follow each other as they're supposed to (with my code), because the number of times both of the functions above get called differs from one robot to another.
This is why I need a constant velocity.
Thank you in advance !
Originally posted by shenki on Gazebo Answers with karma: 39 on 2017-04-13
Post score: 1
Original comments
Comment by eugene-katsevman on 2017-04-13:
Consider posting you comment as an answer, esp the last sentence )
Comment by shenki on 2017-04-14:
hello guys, sloretz thank you for that information. Now it's working pretty well, I am calling these functions at every world frame.
Comment by sloretz on 2017-04-14:
I converted the comment to an answer. I'm glad it working now shenki
Comment by eugene-katsevman on 2017-04-14:
shenki, accept the answer, please
Answer:
These functions make the model move at the velocity without applying any forces, so you won't see a force/acceleration from them. The velocities must be given in the world frame. They set the velocity at the time they're called, afterwards the object can be slowed by forces. If you need a constant velocity you must call the methods every time step.
Originally posted by sloretz with karma: 558 on 2017-04-13
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 4080,
"tags": "ros-kinetic, gazebo-7"
} |
Connect two roscore's | Question:
Hi!
I have two computers, each of them running roscore. Is there any way to share topics between these two computers?
any help would be welcome
thanks
Originally posted by milorenus on ROS Answers with karma: 11 on 2014-12-31
Post score: 0
Original comments
Comment by ahendrix on 2014-12-31:
There are a number of solutions for this problem. Search for "multimaster"
Comment by atp on 2014-12-31:
Normal tcpip using python is not too hard to do either.
Comment by milorenus on 2015-01-01:
Hi atp, i am not really good at that tcpip connection, can you advice me some good tutorial please? thanks
Answer:
It's not clear from your question exactly what capability you require.
The easy case: ROS is a distributed system that supports topic sharing between computers out-of-the-box -- you simply need to use the ROS_MASTER_URI environment variable (see the Running ROS across multiple machines tutorial).
If you really need separate masters running on individual computers, follow the advice given by @ahendrix about "multimaster".
Originally posted by kramer with karma: 1470 on 2014-12-31
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 20459,
"tags": "ros"
} |
Category theory, computational complexity, and combinatorics connections? | Question: I have been trying to read “Pearls of Functional Algorithm design”, and subsequently “The Algebra of Programming”, and there is an obvious correspondence between recursively (and polynomially) defined data types and combinatorial objects, having the same recursive definition and subsequently leading to the same formal power series (or generating functions), as shown in the introductions to combinatorial species (I read “Species and Functors and Types, Oh My!”).
So, for the first question, is there a way to recover the generating (recursive) equation from the power series? That’s an afterthought though.
I was more interested in the notion of initial algebras and final coalgebras as a kind of "defining procedure for the data structure". There are some practical rules in functional programming concerning composition, products of mappings between algebras, and the like, described for example in this tutorial. It seems to me that this could be quite a powerful way to approach complexity; for example, it looks fairly straightforward to recover the master theorem in such a context (I mean, you have to make the same argument, so there is not much gain in this instance). The unique catamorphism from the initial algebra, and the fact (am I mistaken?) that the algebras between A and FA for an F-polynomial functor are isomorphic, make it look to me like such an approach could have a lot of benefits for analysing the complexity of operations over data structures.
From a practical standpoint, it looks like fusion rules (basically, ways to compose algebra morphisms with one another, coalgebra morphisms, and general morphisms) are a very powerful optimization technique for program transformation and refactoring. Am I right in thinking that full utilization of these rules can produce an optimal program (no unnecessary intermediate data structures or other extra operations)?
Am I onto something (and what) here? Is it beneficial (from a learning standpoint) to try to look at computational complexity in this way? Are the structures for which we can have "nice" initial algebras somehow too limited for some problems?
I'm mostly trying to find a way to think about complexity in terms of the structure of the search space, and the way the "search space" and "search algorithm" interact through some "nice" object like the initial algebra of the functor, and to understand whether it's useful to view things this way when looking at more complicated structures.
Answer: Dave Clarke's comment is quite important. Generally fusion doesn't change the O(-) efficiency. However, of particular interest is Liu, Cheng, and Hudak's work on Causal Commutative Arrows. Programs written with them are necessarily optimizable, in part through stream fusion, to a single loop free of dynamic memory allocation and intermediate structures: http://haskell.cs.yale.edu/?post_type=publication&p=72 | {
"domain": "cstheory.stackexchange",
"id": 2894,
"tags": "cc.complexity-theory, co.combinatorics, functional-programming, ct.category-theory"
} |
Can a stoichiometric mixture of oxygen and methane exist as a liquid at standard pressure and some (low) temperature? | Question: This answer to the question Pre-mixing cryogenic fuels and using only one fuel tank written by a non-chemist (me) begins with:
At STP:
LOX's boiling point is 90.19 K
Methane's freezing point is 90.7 K
This does not a priori prove that a solution of the two can not exist. However it does mean that they can not be handled as liquids at the same temperature, making mixing the two more difficult.
We know that liquid air exists, which shows that LOX and LN2 can mix. But methane is an organic molecule, and we know that heavier $\text{C}_n \text{H}_{2n+2}$ hydrocarbons, including oils and waxes, don't like to dissolve in non-organic solvents.
A stoichiometric mixture of oxygen and methane would be 2:1 molar:
$$\ce{ 2O2 + CH4 -> CO2 + 2H2O }$$
Though the two can not be conveniently maintained as liquids at the same temperature, can a stoichiometric mixture of the two exist as a liquid at some (low) temperature and standard pressure?
Answer: There's a NASA report that looks into this: "ON THE SOLUBILITIES AND RATES OF SOLUTION OF GASES IN LIQUID METHANE", Hibbard and Evans, 1968, which concludes that such mixtures are possible.
Starting on page 8:
Figure 5(a) presents the curves for oxygen, argon, carbon monoxide,
and nitrogen. Also shown are the two experimental values for nitrogen.
Agreement is excellent at 99.83K and good at 110.9K. The curves for
these gases show that solubility should decrease with increasing
temperature and the nitrogen data confirm this. This figure shows the
mole fraction solubility of oxygen to be 1.0 at 90K. This means that
oxygen, which has a normal boiling temperature of 90.1K would
continuously condense in, and be miscible in all proportions, with
liquid methane at 90K. This is confirmed by reference 11 where, in a
study of the solubility of methane in liquid oxygen, it was concluded
that these formed a near-ideal solution at -297 F (90K)
(emphasis added) Reference 11 mentioned in there is "Hydrocarbon-Oxygen Systems Solubility", McKinley and Wang, 1960 (unfortunately paywalled) which also has interesting discussion of the stability (i.e. presence or absence of a tendency to explode) of various mixtures.
Figure 5 is reproduced below. Note how the solubility of oxygen rises rapidly as temperature drops. | {
"domain": "chemistry.stackexchange",
"id": 11635,
"tags": "solutions, mixtures, fuel, liquids"
} |
impulse response VS zero-input response | Question: I am new in the field of systems and signals, and I have a rather basic for the majority of the group, question:
Can we find the impulse response function of a homogeneous ODE, instead of its zero-input response?
for example, we have the following 2nd order homogeneous ODE:
$a_{2}\ddot{x}+a_{1}\dot{x}+a_{0}{x} = 0$
, where the output is the $x(t)$ given the initial conditions
I understand that if it were:
$a_{2}\ddot{x}+a_{1}\dot{x}+a_{0}{x} = f(t)$
the $f(t)$ would be its input, $x(t)$ its output given the input, and we could find the impulse response by replacing the $f(t)$ with $\delta (t)$.
Now that the input is zero, how can we find what the output would look like with respect to any input?
Is this:
$a_{2}\ddot{x}+a_{1}\dot{x}+a_{0}{x} = \delta (t)$
even allowed, for an initially homogeneous equation?
Answer: The answer to your question
Can we find the impulse response function of homogeneous ODE, instead of its zero-input response?
is "no", because only a system with an input and an output can have an impulse response, a homogeneous ODE doesn't have an impulse response.
The impulse response of a system, possibly described by an ODE, is the zero-state response to an input signal $x(t)=\delta(t)$.
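For a system governed by $a_2\ddot{x}+a_1\dot{x}+a_0x=f(t)$, one standard way to compute that zero-state response: integrating the ODE across $t=0$ shows that the impulse only sets the state just after $t=0$ (to $x(0^+)=0$, $\dot{x}(0^+)=1/a_2$), and for $t>0$ the homogeneous equation governs. A numerical sketch, with arbitrary example coefficients:

```python
# Numerical sketch: impulse response of a2*x'' + a1*x' + a0*x = f(t) for an
# arbitrary example choice of coefficients.  The delta input only sets the
# state just after t = 0 (x = 0, x' = 1/a2); afterwards the homogeneous ODE
# governs the motion.
import math

a2, a1, a0 = 1.0, 2.0, 5.0             # roots s = -1 +/- 2j (underdamped)
h_exact = lambda t: math.exp(-t) * math.sin(2.0 * t) / 2.0   # closed form

x, v, dt = 0.0, 1.0 / a2, 1e-5         # post-impulse initial conditions
for _ in range(100_000):               # semi-implicit Euler up to t = 1 s
    v += dt * (-(a1 * v + a0 * x) / a2)
    x += dt * v

assert abs(x - h_exact(1.0)) < 1e-3
print(f"h(1) ~ {x:.4f}  (closed form {h_exact(1.0):.4f})")
```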
Of course, a system can also have a zero-input response, which is obtained by solving the corresponding homogeneous ODE with the appropriate initial conditions, but this response, determined by whatever initial conditions the problem specifies, is not the same thing as the system's impulse response. | {
"domain": "dsp.stackexchange",
"id": 9641,
"tags": "linear-systems, impulse-response"
} |
Beginner code for a text-based battle simulator | Question: I wanted to see if anyone would like to review the existing code I have for a beginner program that simulates a battle. I have learned a heck of a lot while doing it, and I had a few questions regarding some things that I found worked, but I am not sure on the why of them working.
Please do note that this is not finished and I am definitely going to see this program through to the end with a few more features that what I have now. However, at this stage it is functional and I wanted to see if the things I am doing now will lead to a functional program at the end.
The main issues I have had so far come from sticking the dictionaries inside of a list. I had to deal with a lot of errors passing a dictionary function to a list and vice versa.
Here is the code:
# A combat simulator which allows input for your name, your weapon, randomly chooses your enemy, and simulates a battle based on simple stats such as HP and dmg.
# a dice roll will modify dmg for things such as ambushes and counter attacks. <- Maybe!
# TODO: X implement dictionary (?) for stats along with enemy list and weapon list, input validation, don't submit to feature creep!, X verify return variable from
# local scope, battle sequence, end program
# 6/5 - Plan to implement: Final scripted battle sequence to simulate one battle, will need to understand how to modify the HP value accordingly...
import random, time
enemyOrc = {"Name" : "Orc", "HP" : "90", "Attack" : "18"}
enemyGK = {"Name" : "Goblin King", "HP" : "182", "Attack" : "16"}
enemyGR = {"Name" : "Giant Rat", "HP" : "32", "Attack" : "3"}
enemyWitch = {"Name" : "Witch", "HP" : "68", "Attack" : "32"}
enemyEM = {"Name" : "Evil Mirror", "HP" : "350", "Attack" : "11"}
weaponSword = {"Name" : "Sword", "Damage" : "12"}
weaponAxe = {"Name" : "Axe", "Damage" : "18"}
weaponQS = {"Name" : "Quarter-Staff", "Damage" : "22"}
weaponDagger = {"Name" : "Dagger", "Damage" : "8"}
weaponFists = {"Name" : "Fists", "Damage" : "28"}
enemyList = [enemyOrc, enemyGK, enemyGR, enemyWitch, enemyEM]
weaponList = [weaponSword, weaponAxe, weaponQS, weaponDagger, weaponFists]
def introduction(): # Get player name and randomly select opponent
print("Greetings combatant!\n" + "Please enter your name:")
playerName = input()
enemyChosen = random.choice(enemyList)
print ("Nice to meet you, " + playerName + ". I hope you are ready for a fight!\n" + "Today you will be facing...")
time.sleep(2)
print ("The " + enemyChosen["Name"] + "!")
time.sleep(2)
print ("Hope you're ready for a tough one!\n")
return enemyChosen, playerName
def weaponSelection(): # Get player equipment
print ("It's time to choose your weapon: ")
print("\n" .join(d["Name"] for d in weaponList))
weaponChosen = input()
while True:
if any(d['Name'] == weaponChosen for d in weaponList):
return weaponChosen.lower()
else:
print("I'm sorry, you can only choose what is available.")
weaponChosen = input()
def weaponReturn(weapon): # to test return variables // probably unnecessary?
print("Let's see if your " + weapon.lower() + " can defeat your adversary. Good luck!")
def battleSequence(enemy, player, weapon):
ambushChance = random.randint(0,1)
if ambushChance == 0:
print("The " + enemy["Name"] + " launched an attack on you from behind!")
print (player + " loses 10hp.")
else:
print("You get the first attack. You swing your " + weapon + "!")
#time.sleep(1)
print(player + " does 5 damage to the " + enemy["Name"] + "!")
playAgain = 'yes'
while playAgain == 'yes' or 'y':
enemyName, characterName = introduction()
time.sleep(1.5)
battleWeapon = weaponSelection()
weaponReturn(battleWeapon)
battleSequence(enemyName, characterName, battleWeapon)
Answer: name your enemies!
Use namedtuple for the enemies/weapons:
import collections

Enemy = collections.namedtuple('Enemy', ['name', 'hp', 'attack'])
enemies = [
Enemy('Orc', 90, 18),
...
]
And to access the name, use
enemy.name
Instead of
enemy['Name']
That way, you don't risk constructing an enemy with a 'name' instead of a 'Name'.
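A runnable version of that sketch (the stats are the question's Orc):

```python
# Runnable namedtuple sketch; the stats come from the question's Orc.
import collections

Enemy = collections.namedtuple('Enemy', ['name', 'hp', 'attack'])
orc = Enemy('Orc', 90, 18)

assert orc.name == 'Orc'
assert orc.hp - 5 == 85                  # numbers stay numbers: arithmetic works
damaged = orc._replace(hp=orc.hp - 5)    # namedtuples are immutable; _replace
assert damaged.hp == 85                  # returns an updated copy
```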
mistaken logic
while playAgain == 'yes' or 'y':
Let's think about what this means. Tip: where are the parentheses?
while (playAgain == 'yes') or 'y':
So, probably not what you mean. What you mean is
while playAgain in ('yes', 'y'):
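A two-line check shows why the original condition can never become false:

```python
# Why `playAgain == 'yes' or 'y'` loops forever: Python parses it as
# (playAgain == 'yes') or ('y'), and a non-empty string is always truthy.
playAgain = 'no'
assert (playAgain == 'yes' or 'y') == 'y'     # expression is truthy -> loop runs
assert (playAgain in ('yes', 'y')) is False   # the intended test says stop
```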
string formatting
Probably not your biggest concern, but if you ever want to make translation easy, start now by using Python string formatting.
Instead of
print("The " + enemy["Name"] + " launched an attack on you from behind!")
Write
print("The {} launched an attack on you from behind!".format(enemy["Name"]))
Or
print("The {enemy} launched an attack on you from behind!".format(enemy=enemy["Name"]))
It also reads a lot better!! | {
"domain": "codereview.stackexchange",
"id": 20325,
"tags": "python, beginner, role-playing-game, battle-simulation"
} |
How does one calculate the absolute value of a Feynman diagram's amplitude? | Question: How do I obtain the absolute value of a Feynman diagram's amplitude if I do not have values for the components of this amplitude?
If the amplitude of a process such as $e^+(p_1) + e^- (p_2) \to \phi (p_3) + \phi^* (p_4) $ is given as:
$$\require{cancel} \mathcal{A}=ie^2 \frac{\bar{\nu}(p_1)(-\cancel{p_3} + \cancel{p_4}) u(p_2)}{(p_1+p_2)^2}$$
How do I express $|\mathcal{A}|$ to obtain $|\mathcal{A}|^2$?
Answer: Calculate the product $\mathcal{A}\mathcal{A}^*=|\mathcal{A}|^2$. Write out the Dirac spinors $u$ and $\nu$ explicitly in terms of energy and momentum. If the spins are not measured, you can instead sum $|\mathcal{A}|^2$ over final spins and average over initial spins using the standard completeness relations, which reduces the result to a trace of gamma matrices. | {
"domain": "physics.stackexchange",
"id": 68472,
"tags": "quantum-electrodynamics, feynman-diagrams, complex-numbers"
} |
Book CS:APP, using bit set and bit clear | Question: Okay so I'm going through a book CS:APP.
I'm in chapter 2. And problem 2.13 talks about how the Digital Equipment VAX computer used bis (bit set) and bic (bit clear) instead of Boolean operations AND and OR.
Both instructions take a data word x and a mask word m. They generate a result z consisting of the bits x modified according to the bits of m. With bis, the modification involves setting z to 1 at each position where m is 1. With bic, the modification involves setting z to 0 at each position where m is 1.
Now you're given x and y. And you're allowed to use bis and bic to recreate x | y. And x ^ y.
Now I have already tried looking at the answers to understand what's going on. But I don't understand it.
If I literally look at the explanation for bis and bic, I would say bis just copies the mask and leaves x unused, and bic just goes opposite of m, also without using x. If I try to interpret it differently, I end up with questions like: what does bic do (for example) when m is 0?
The answers are for x | y: bis(x, y) where y becomes m... so a relatively simple answer.
And for x ^ y (xor): bis(bic(x, y), bic(y, x)) where y and then x becomes m
So basically anything that could clarify what's going on here would be much appreciated!
And aside from this question, I also would like to ask if this is the right place for questions about a specific CS book. I'm teaching myself and don't have a teacher or someone to ask questions to, and since I also had a question on the previous exercise, it's possible that I might have a lot of them. But I can't find anything when searching on CS:APP (Computer Systems: A Programmer's Perspective), and I don't know where else to turn. The book was recommended through teachyourselfcs.com if anyone was wondering.
Answer: If $z=\mathrm{bis}(x,m)$ then
$$
z_i = \begin{cases}
x_i & \text{if } m_i = 0, \\
1 & \text{otherwise}.
\end{cases}
$$
This is exactly bitwise OR.
If $z=\mathrm{bic}(x,m)$ then
$$
z_i = \begin{cases}
x_i & \text{if } m_i = 0, \\
0 & \text{otherwise}.
\end{cases}
$$
This is bitwise AND of $x$ and the bitwise complement of $m$.
In order to see how $\mathrm{bis}(\mathrm{bic}(x,y),\mathrm{bic}(y,x))$ computes bitwise XOR, we just need to compute the truth table:
$$
\begin{array}{c|c|c|c|c}
x & y & \mathrm{bic}(x,y) & \mathrm{bic}(y,x) & \mathrm{bis}(\mathrm{bic}(x,y),\mathrm{bic}(y,x)) \\\hline
0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 1 & 1 \\
1 & 0 & 1 & 0 & 1 \\
1 & 1 & 0 & 0 & 0
\end{array}
$$
Another way of seeing this is replacing $\mathrm{bis}(x,y)$ with $x \lor y$ (read: $x$ or $y$) and $\mathrm{bic}(x,y)$ with $x \land \lnot y$ (read: $x$ and not $y$):
$$
(x \land \lnot y) \lor (\lnot x \land y),
$$
which is one of the standard formulas for XOR. In C, it would look as follows: (x & ~y) | (~x & y). | {
"domain": "cs.stackexchange",
"id": 17630,
"tags": "bit-manipulation"
} |
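To make the bis/bic identities above concrete, here is a short Python sketch (bis and bic written as plain functions over ints — an illustration of the semantics, not the VAX instructions themselves):

```python
def bis(x, m):
    """Bit set: result is 1 wherever m is 1, otherwise copies x  ->  x | m."""
    return x | m

def bic(x, m):
    """Bit clear: result is 0 wherever m is 1, otherwise copies x  ->  x & ~m."""
    return x & ~m

x, y = 0b1100, 0b1010
assert bis(x, y) == x | y                   # OR is bis directly
assert bis(bic(x, y), bic(y, x)) == x ^ y   # XOR via the composition in the answer
```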
ggplot aes() choice | Question: What is the difference between
ggplot(mtcars, aes(mpg)) +
geom_histogram(aes(y = ..density..))
and
ggplot(mtcars, aes(mpg), aes(y = ..density..)) +
geom_histogram()
I know that aes() in the geom layer overrides the aes() in the data layer. But is one of the code snippets above preferable?
Answer: The difference is that when the aesthetics are set in the ggplot function, they are inherited by any other geoms that build on top of it. If you specify the aesthetics only in a geom, they will only be used in that geom. And, as you mentioned, any aesthetics set in a geom override the settings in the ggplot function.
As far as which is preferable, I think it depends on your goal with the ggplot objects. For example, if you are only creating a single plot, then it doesn't really matter which method you use. However, if you plan to present multiple different visualizations of the same data, you can reuse the ggplot object and simply add different geom layers to it:
Create a reusable ggplot object:
p <- ggplot(mtcars)
Add histogram geom:
p + geom_histogram( aes(mpg, ..density..))
Reuse ggplot object with different geom:
p + geom_point(aes(cyl, mpg))
This is a simple example but you can understand that when creating more complicated visualizations, the ability to reuse plot objects comes in handy. | {
"domain": "datascience.stackexchange",
"id": 3037,
"tags": "r, visualization"
} |
Mechanism for the addition of hydrogen iodide to 3,3‐dimethylbut‐1‐yne | Question: The following reaction mechanism was given as a solution to a solved problem in my textbook1 for the addition of hydrogen iodide to 3,3‐dimethylbut‐1‐yne:
It can be seen that that 2,2-diiodo-3,3-dimethylbutane (a geminal dihalide) is the product according to this mechanism. I arrived at a different product 2,3-diiodo-2,3-dimethylbutane (a vicinal dihalide) by using the following mechanism:
The difference is due to the methanide shift I did on the second step, converting 3,3-dimethylbut-1-en-2-ylium cation to 2,3‐dimethylbut‐3‐en‐2‐ylium cation. I consider this to be a reasonable carbocation rearrangement because after the methanide shift, the 2,3‐dimethylbut‐3‐en‐2‐ylium cation is tertiary as well as in conjugation with the double bond. Moreover, the initial carbocation has a positive charge on a more electronegative $\mathrm{sp^2}$ hybridized carbon atom. Due to this, I believe, the methanide shift produces a more stable carbocation. But, why did the author proceed without making this rearrangement?
When I discussed this in chat, there was a consensus that the product of this reaction must be a geminal dihalide, as obtained by the author. The only way I could think of to obtain the geminal dihalide even after the rearrangement discussed earlier is to do another rearrangement during the addition of the second molar equivalent of hydrogen iodide, as given below:
The problem with this mechanism is that the carbon bearing the positive charge after the methanide shift also has an iodine atom attached to it. Earlier, I learnt that the chlorine atom is the only halogen for which the positive mesomeric effect is stronger than the negative inductive effect, thereby stabilizing the positive charge. But here, due to the presence of iodine, I think the carbocation is not stable after rearrangement.
Even after neglecting this fact, there seems to be a major difference between the author's mechanism and the modified mechanism, which I've emphasized using a carbon-12 labelled reactant as given below:
It can be seen that even though we obtain geminal dihalides through either mechanism, the products formed aren't exactly the same. One has the iodine atoms attached to the normal carbon, whereas in the other they are attached to the carbon-12 atom.
In short, what happens when hydrogen iodide is added to 3,3‐dimethylbut‐1‐yne?
Reference
Solomons, et al. Organic Chemistry for JEE (Main & Advanced). Edited by MS Chouhan, Third Edition; Wiley India Private Limited. ISBN 978-81-265-6065-3
Answer: There seems to be an issue with both mechanisms, in that an SN1 pathway would not take place: the carbocation formed is a vinylic carbocation, which is highly unstable.
The actual reaction follows a termolecular mechanism, for which the rate law is:
$$\text{Rate}=k[\ce{HX}]^2[\text{alkyne}]$$
Now, according to Advanced Organic Chemistry by Francis A. Carey, the reaction mechanism would be as follows:$^1$
Step $1$:
A concerted termolecular reaction...
This involves an acid/base reaction: protonation of the alkyne, developing positive charge on the more substituted carbon. The π electron pair acts as a Lewis base.
The other part is the attack of the nucleophilic bromide ion on the electrophilic carbocation center, which creates the alkenyl bromide.
Step $2$:
In the presence of excess reagent, a second protonation occurs to generate the more stable carbocation.$^2$
Step $3$:
Attack of the nucleophilic bromide ion on the electrophilic carbocation creates the geminal dibromide.
[$1$]: The reaction mechanism stated above uses $\ce{HBr}$ and not $\ce{HI}$ instead. However this reaction takes place for $\ce{HX}$.
[$2$]: The same reaction using $\ce{HI}$ would have a comparatively lower yield of the geminal product, since geminal diiodides are unstable due to the steric hindrance posed by the large size of the iodine atoms. | {
"domain": "chemistry.stackexchange",
"id": 14356,
"tags": "organic-chemistry, reaction-mechanism, rearrangements"
} |