Is there any physical significance to the non-uniqueness of the least action principle?
Question: In classical mechanics we often define the action as the quantity $$ \int_{0}^{T} \left[ T - V \right] dt$$ which in many applications is some variant of $$ \int_{0}^{T} \left[ \frac{1}{2}m \left( x' \right)^2 - V(x) \right] dt. $$ The usual justification for the principle of least action is the observation that if you take the integrand above and put it into the Euler-Lagrange equations, you get back Newton's law. I.e., if you believe $$ \frac{\partial L}{\partial x} - \frac{d}{dt} \left( \frac{\partial L}{\partial x'} \right) + \frac{d^2}{dt^2} \left(\frac{\partial L}{\partial x''} \right) - ... = 0$$ with $L = \frac{1}{2}m \left( x' \right)^2 - V(x) $, you will find $$ - \frac{dV}{dx} = mx'' $$ (i.e. $F = ma$). So this is old news that we consider quite well understood, but then I realized the following: suppose we try to minimize this action instead: $$ \int_{0}^{T} \left[ \frac{1}{2}mxx'' + V(x) \right] dt $$ i.e. $L = \frac{1}{2}mxx'' + V(x) $. If we plug this into the Euler-Lagrange equations we ALSO end up deriving $$ - \frac{dV}{dx} = mx'' $$ via $$ \frac{\partial }{\partial x}\left[\frac{1}{2}mxx'' + V\right] + \frac{d^2}{dt^2}\frac{\partial}{\partial x''}\left[ \frac{1}{2}mxx'' + V \right] = 0 \rightarrow \frac{1}{2}mx'' + \frac{dV}{dx} + \frac{1}{2}mx'' = 0 \rightarrow mx'' + \frac{dV}{dx} = 0 \rightarrow F = -\frac{dV}{dx}$$ I found this very curious. I recognize the physical significance of $(mx'')x$ as the classical expression for work (force times distance). But is there any deeper physical significance to this second Lagrangian, or is it just a curious mathematical oddity and not a useful problem-solving tool? Can this second Lagrangian be used in place of the first in other contexts (e.g. in the Feynman path integral)? So it seems that $\frac{1}{2}mxx'' - V$ is a conserved quantity. (I came to this conclusion after checking only one example involving a Newtonian gravitational field between two bodies at 2 locations, so maybe this is wrong.)
Answer: It is well known that given a set of EOMs, the action $S$ is not necessarily unique, cf. e.g. this Phys.SE post. OP points out that the Euler-Lagrange (EL) equations are not affected if we add a boundary term, cf. e.g. this Phys.SE post. However, the caveat is that the boundary conditions (BCs) [which are necessary to impose in order to make the variational principle well-posed] may change! OP's 1st example: $$L_1 ~=~ \frac{1}{2}m\dot{q}^2-V(q).\tag{1a}$$ The infinitesimal variation reads $$ \delta S_1 ~=~\int_{t_i}^{t_f} \! dt~ {\rm EOM}~\delta q + \left[m\dot{q}\delta q \right]^{t=t_f}_{t=t_i}. \tag{1b}$$ If we focus$^1$ on the initial condition (IC), we have to impose either $\color{red}{\rm weak}$ Dirichlet IC: $ q(t_i)=q_i $, or Neumann IC: $ \dot{q}(t_i)=0, $ in order to make the boundary term disappear [which is necessary in order to derive the EL equation from the variational principle]. See also e.g. my Math.SE answer here. OP's 2nd example: $$L_2 ~=~ -\frac{1}{2}mq\ddot{q}-V(q)~=~L_1 - \frac{d^2}{dt^2}\left(\frac{m}{4}q^2\right).\tag{2a}$$ The infinitesimal variation reads $$ \delta S_2 ~=~\int_{t_i}^{t_f} \! dt~ {\rm EOM}~\delta q + \frac{m}{2}\left[\dot{q}\delta q - q\delta \dot{q}\right]^{t=t_f}_{t=t_i}. \tag{2b}$$ We have to impose either $\color{red}{\rm strong}$ Dirichlet IC: $ q(t_i)=0 $, or Neumann IC: $ \dot{q}(t_i)=0 $. There are no other possibilities! TL;DR: The lesson is that depending on the physical system and the physically relevant BCs, we might have to choose a specific action for the variational principle. See also e.g. this related Phys.SE post. -- $^1$The final condition (FC) is similar.
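A quick way to verify the relation quoted in (2a) explicitly, in the same notation: $$\frac{d}{dt}\Big(\frac{m}{2}q\dot{q}\Big)~=~\frac{m}{2}\dot{q}^2+\frac{m}{2}q\ddot{q}, \qquad\text{so}\qquad L_1-\frac{d}{dt}\Big(\frac{m}{2}q\dot{q}\Big)~=~-\frac{m}{2}q\ddot{q}-V(q)~=~L_2,$$ and since $\frac{m}{2}q\dot{q}=\frac{d}{dt}\big(\frac{m}{4}q^2\big)$, the subtracted term is precisely the double time derivative in (2a). A total time derivative contributes only boundary terms to $S$, which is why the two actions yield identical EL equations even though their boundary terms (1b) and (2b) differ.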
{ "domain": "physics.stackexchange", "id": 99317, "tags": "classical-mechanics, lagrangian-formalism, variational-principle, boundary-conditions, boundary-terms" }
Thermodynamics - heating and cooling curves
Question: At the melting and vaporising stages, it is said that the temperature does not rise because the energy is used to break the bonds rather than increase the temperature. Why is this the case? Why can't energy be used to do both? I would have thought that energy expenditure is somewhat random, so I am not really comprehending why energy is all going to the bond-breaking process. Answer: it is said that the temperature does not rise because the energy is used to break the bonds rather than increase the temperature. Why is this the case? Why can't energy be used to do both? Energy can do both, but not necessarily at the same time. In your graph, where the water is being heated as a liquid and where the water is heated as a gas, the heat transfer is causing an increase in the translational kinetic energy of the molecules and atoms, i.e. an increase in temperature. But it is not at this point a sufficient amount of energy to increase the separation of the molecules and atoms (increasing microscopic potential energy). But when the phase changes occur (solid to liquid and liquid to gas) the heat transfer to the substance is increasing the separation of the molecules and atoms, causing an increase in microscopic potential energy. At this time it is not causing an increase in the microscopic kinetic energy (increasing temperature). So the temperature does not change. Hope this helps.
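The flat segments of the heating curve can be made quantitative. A minimal sketch; the latent heats and specific heat below are standard textbook values for water at atmospheric pressure, not figures from the post:

```python
# Energy budget for taking 1 kg of water through the heating curve.
m = 1.0            # kg
L_FUSION = 334e3   # J/kg, latent heat of melting (ice -> water at 0 degrees C)
C_WATER = 4186.0   # J/(kg K), specific heat of liquid water
L_VAPOR = 2256e3   # J/kg, latent heat of vaporization (water -> steam at 100 degrees C)

q_melt = m * L_FUSION         # plateau: energy goes into separating molecules
q_heat = m * C_WATER * 100.0  # slope: energy raises temperature from 0 to 100 degrees C
q_boil = m * L_VAPOR          # plateau: again no temperature rise

print(q_melt, q_heat, q_boil)  # 334000.0 418600.0 2256000.0
```

Melting takes nearly as much energy as the entire 0 to 100 degree climb, and boiling over five times as much, which is why the plateaus are so long on the curve.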
{ "domain": "physics.stackexchange", "id": 69568, "tags": "thermodynamics" }
Terminology for number of topological sorts
Question: Is there a standard terminology for the topological sort count over a partial order? I went with "magnitude of a poset" rather than "dimension", as the latter is too close to linear algebra terminology. I wonder if there is a generally accepted choice. Answer: "Number of linear extensions" is probably the most popular one. Google Scholar returned 1000+ matches for "number of linear extensions", while "number of topological order" and "number of topological sort" return fewer than 10 matches. "Magnitude of a poset" has only one match, with a seemingly different meaning.
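For small posets, the count can be brute-forced directly from the definition, which makes the terminology concrete. A minimal sketch (the function name and poset encoding are mine, for illustration):

```python
from itertools import permutations

def count_linear_extensions(elements, relations):
    """Count the orderings of `elements` in which, for every pair
    (a, b) in `relations`, a appears before b."""
    count = 0
    for perm in permutations(elements):
        pos = {x: i for i, x in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in relations):
            count += 1
    return count

# The poset a < b, a < c has exactly two linear extensions: abc and acb.
print(count_linear_extensions("abc", [("a", "b"), ("a", "c")]))  # → 2
```

This is O(n! · |relations|) and only feasible for tiny posets; in general, counting linear extensions is #P-complete (Brightwell and Winkler, 1991).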
{ "domain": "cs.stackexchange", "id": 20704, "tags": "partial-order" }
What is the function of epidermal ridges on human fingers (that produce fingerprints)?
Question: What function is served by the epidermal or capillary ridges on human fingers, the supposedly unique impressions of which are known as fingerprints? Answer: I found many plausible claims that fingerprints increase friction. However, the following article claims, at least under their experimental conditions, that fingerprints actually decrease friction with smooth surfaces by reducing contact area. Fingerprints are unlikely to increase the friction of primate fingerpads. It is generally assumed that fingerprints improve the grip of primates, but the efficiency of their ridging will depend on the type of frictional behaviour the skin exhibits. Ridges would be effective at increasing friction for hard materials, but in a rubbery material they would reduce friction because they would reduce contact area. In this study we investigated the frictional performance of human fingertips on dry acrylic glass using a modified universal mechanical testing machine, measuring friction at a range of normal loads while also measuring the contact area. Tests were carried out on different fingers, fingers at different angles and against different widths of acrylic sheet to separate the effects of normal force and contact area. The results showed that fingertips behaved more like rubbers than hard solids; their coefficients of friction fell at higher normal forces and friction was higher when fingers were held flatter against wider sheets and hence when contact area was greater. The shear stress was greater at higher pressures, suggesting the presence of a biofilm between the skin and the surface. Fingerprints reduced contact area by a factor of one-third compared with flat skin, however, which would have reduced the friction; this casts severe doubt on their supposed frictional function. That said, the author does later discuss their potential role in gripping of rough or wet surfaces: So why do we have fingerprints? 
One possibility is that they increase friction on rougher surfaces compared with flat skin, because the ridges project into the depressions of such surfaces and provide a higher contact area. Experiments on materials of contrasting known roughness are needed to test this possibility. A second possibility is that they facilitate runoff of water like the tread of a car tyre or grooves in the feet of tree frogs (Federle et al., 2006), so that they improve grip on wet surfaces. Though there is evidence that friction falls on fingers coated with high levels of moisture (Andre et al., 2008) it is possible that it falls less quickly on fingertips than on flatter skin. Once more, suitable experiments could test this idea. There seems to be more consensus on the idea that fingerprints are useful for tactile sensation. The following are just some articles which discuss this. Effect of fingerprints orientation on skin vibrations during tactile exploration of textured surfaces. In humans, the tactile perception of fine textures is mediated by skin vibrations when scanning the surface with the fingertip. These vibrations are encoded by specific mechanoreceptors, Pacinian corpuscules (PCs), located about 2 mm below the skin surface. In a recent article, we performed experiments using a biomimetic sensor which suggest that fingerprints (epidermal ridges) may play an important role in shaping the subcutaneous stress vibrations in a way which facilitates their processing by the PC channel. Here we further test this hypothesis by directly recording the modulations of the fingerpad/substrate friction force induced by scanning an actual fingertip across a textured surface. When the fingerprints are oriented perpendicular to the scanning direction, the spectrum of these modulations shows a pronounced maximum around the frequency v/λ, where v is the scanning velocity and λ the fingerprints period. 
This simple biomechanical result confirms the relevance of our previous finding for human touch. The role of fingerprints in the coding of tactile information probed with a biomimetic sensor. In humans, the tactile perception of fine textures (spatial scale <200 micrometers) is mediated by skin vibrations generated as the finger scans the surface. To establish the relationship between texture characteristics and subcutaneous vibrations, a biomimetic tactile sensor has been designed whose dimensions match those of the fingertip. When the sensor surface is patterned with parallel ridges mimicking the fingerprints, the spectrum of vibrations elicited by randomly textured substrates is dominated by one frequency set by the ratio of the scanning speed to the interridge distance. For human touch, this frequency falls within the optimal range of sensitivity of Pacinian afferents, which mediate the coding of fine textures. Thus, fingerprints may perform spectral selection and amplification of tactile information that facilitate its processing by specific mechanoreceptors. This paper also asserts a reason for the elliptical nature of fingerprints: In humans, fingerprints are organized in elliptical twirls so that each region of the fingertip (and thus each PC) can be ascribed with an optimal scanning orientation.
{ "domain": "biology.stackexchange", "id": 6794, "tags": "evolution, human-anatomy" }
Why doesn't Newton's third law mean a person bounces back to where they started when they hit the ground?
Question: When we drop a ball, it bounces back to the spot where we dropped it, due to the reaction forces exerted on it by the ground. However, if a person falls down (say, if we push them), why don't they come back to their initial position where they started their fall? According to Newton's 3rd law of motion, to every action there is always an equal but opposite reaction. If we take the example of a ball, then it comes back with the same force as it falls down. But in the case of a human body, this law seems not to be applicable. Why? Answer: Newton's third law just says that when the person is hitting the floor, the force the person exerts on the ground is equal to the force the ground exerts on the person at all times, i.e. all forces are interactions. Newton's third law does not say that all collisions are elastic, which is what you are proposing. When someone hits the floor, most of the energy is absorbed by the person through deformation (as well as the floor, depending on what type of floor it is), but there is barely any rebound, since people tend to not be very elastic. I.e. the deformation does not involve storing the energy to be released back into kinetic energy. Contrast this with a bouncy ball, where much of the energy goes into deforming the ball, but since it is very elastic it is able to spring back and put energy back into motion. (However, even then it is unlikely the collision is perfectly elastic, as you seem to suggest in your question.) In summary, Newton's third law tells us that action-reaction force pairs must have equal magnitudes and opposite directions, but it doesn't tell us anything about what the magnitudes of these forces actually are. Your misunderstanding likely comes from the imprecise usage of the words "action" and "reaction". In this case, these words refer to just forces, not entire processes. You can get some confusing questions if you don't understand this.
For example, why is it that when I open my refrigerator, my refrigerator doesn't also open me?
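The elastic/inelastic distinction drawn above can be put in numbers with the coefficient of restitution e, the ratio of rebound speed to impact speed. A minimal sketch; the e values are illustrative guesses, not measured data:

```python
def rebound_height(drop_height, e):
    """Height after one bounce.

    Rebound speed is e * impact speed, and h = v**2 / (2g),
    so the rebound height is e**2 * drop_height.
    """
    return e**2 * drop_height

h = 2.0  # metres, same drop in both cases

print(rebound_height(h, 0.9))   # bouncy ball, e ~ 0.9: rebounds to ~1.62 m
print(rebound_height(h, 0.05))  # human body, e near 0: ~0.005 m, i.e. no bounce
```

Newton's third law holds identically in both cases; only e, a property of the colliding materials, differs.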
{ "domain": "physics.stackexchange", "id": 57306, "tags": "newtonian-mechanics, forces, conservation-laws, collision, free-body-diagram" }
Recommendations for a hardy, IP67 arm
Question: Hi, I'm looking for an IP67 arm for a mobile robot in a harsh environment. Some other wishes are:
- ROS compatibility
- Small controller (must fit on a ~900mm x 900mm platform)
- Reach >500mm
- At least 5 DOF
So far, I've just found the Fanuc LR Mate 200iD and possibly an Oceaneering Terabot. But I think the Fanuc's controller is huge. Any other suggestions? Thanks in advance!
Originally posted by AndyZe on ROS Answers with karma: 2331 on 2016-10-26. Post score: 0.
Original comments
Comment by gvdhoorn on 2016-10-26: Not an answer, but the Fanuc R-30iB Mate controller would actually seem to fit your constraints: regular cabinet is 470x402x400mm (WxDxH) max, OpenAir is even smaller: 370x350x356mm (WxDxH) max. They're 40 and 20 kg respectively. Which controller have you seen?
Comment by gvdhoorn on 2016-10-26: Control interfaces offered by the Terabot are way better than those for the Fanuc though. External motion control on Fanucs is not too good. But do ask Fanuc about that.
Comment by AndyZe on 2016-10-31: Thank you for the advice, @gvdhoorn! I got a quote from Fanuc and the price seems good (better than I expected). Unfortunately the smallest controller is not available, so it would be the 470x402x400mm one. Can you tell me more about the weaknesses of the ROS-I Fanuc driver?
Comment by gvdhoorn on 2016-10-31: It's not the ros-i driver per se: Fanucs just don't really have options for external motion control, when compared to vendors like KUKA, ABB, Denso, Mitsubishi, etc. All those vendors have 100Hz+ joint position control at least. The closest thing Fanuc has is something called DPM ..
Comment by gvdhoorn on 2016-10-31: .. Dynamic Path Modification. But that is Cartesian only, and is really limited. With sufficient time & effort, DPM could perhaps be used to create an external interface with sufficient performance, but even then, it would only be position control. Velocity, effort or impedance are not possible ..
Comment by gvdhoorn on 2016-10-31: .. at least not when you want to exert any influence over it from an external entity. Everything as far as I know, of course, but I've spent some time looking into this. If you can prove me wrong, please do so, as it would allow me to improve the ROS-I driver as well. If you have the option, ..
Comment by gvdhoorn on 2016-10-31: .. and you want to control your robot externally (which I suspect you want to do, as you list "ROS compatibility"), then, if you want to stay with industrial robots, I'd go with any of the other vendors.
Comment by gvdhoorn on 2016-10-31: An example: the KUKA Agilus (KR 3, 6, 10, etc) are IP67, and when used with RSI, have a proper (250Hz) position control interface. ROS-I pkg: kuka_experimental.
Comment by gvdhoorn on 2016-10-31: Apparently they have a waterproof version of the Agilus: KR AGILUS Waterproof. I'd really like you to use Fanuc, but depending on your requirements, other mfgs might be more realistic right now.
Comment by gvdhoorn on 2016-10-31: Please ask Fanuc about their external motion control options btw. It'd be nice if I'm completely wrong.
Comment by gvdhoorn on 2016-10-31: O, and I'm assuming you'd want some sort of reactive control here. For simple pick-and-place type work, the current driver is ok (but could still certainly be improved).
Answer: I appreciate @gvdhoorn's advice. We were leaning towards a Fanuc LR Mate until finding that UR sells protective suits. And we already know and love UR's ROS interface, so I think we'll go with that.
Originally posted by AndyZe with karma: 2331 on 2017-01-11. This answer was ACCEPTED on the original site. Post score: 0.
Original comments
Comment by gvdhoorn on 2017-01-11: Thanks for letting us know what you eventually chose. "And we already know and love UR's ROS interface [..]" Just to clarify: you mean the ROS driver(s) that exist(s) for UR hw. UR had no hand in them :).
{ "domain": "robotics.stackexchange", "id": 26061, "tags": "ros, manipulator" }
Mass and energy in special relativity [conservation of energy problem]
Question: A certain quantity of ice at $0~^\circ\mathrm{C}$ melts into water at $0~^\circ\mathrm{C}$ and in doing so gains $1.00$ kg of mass. What was its initial mass? Now the problem with my solution is that it of course doesn't meet any expectations of reality. My attempt: For the ice to melt into liquid water, it needs an amount of heat energy to melt it, which is the latent heat of fusion $L = 334~\mathrm{kJ/kg}$, so the ice gains an amount $Q = m_i L$, where $m_i$ is the initial unknown mass of the ice; then $Q = 3.34 \times 10^5\, m_i$. This amount $Q$ plus the rest energy of the ice $m_i c^2$ is the energy before melting; call it $E_0$. The energy after melting, $E$, will be the new rest mass energy $E = m_n c^2$, where $m_n$ must be the original mass plus 1 kg, so $m_n = m_i + 1$ and $E = (m_i + 1) c^2$. Energy before = energy after, so $E_0 = E$: $$m_i c^2 + 3.34\times10^5\, m_i = (m_i + 1) c^2$$ $$3.34\times10^5\, m_i = c^2$$ $$m_i = \frac {c^2}{3.34\times10^5} = 2.7 \times 10^{11}~\mathrm{kg}$$ Which is totally ridiculous; what's wrong here? Answer: Since you got the right answer, this isn't doing your homework for you. Set $\Delta M = 1\,$kg, then: $$ ML = \Delta M c^2$$ or $$ M = \frac {\Delta M c^2}L = 2.7\times 10^{11}\,{\rm kg} $$ Now since it asked for the initial mass, maybe you want to include all 12 digits required to distinguish it from the final mass? $$ M = 269451410204.4\,{\rm kg} $$ but that seems a bit silly, since we don't know $L$ to 12 digits. I used $L=333.55\,$kJ/kg.
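The answer's arithmetic is easy to reproduce; a two-line check using the same $L = 333.55\,$kJ/kg:

```python
C = 299_792_458.0  # speed of light, m/s (exact by definition)
L = 333.55e3       # latent heat of fusion used in the answer, J/kg
delta_m = 1.0      # kg of mass gained on melting

m_initial = delta_m * C**2 / L
print(m_initial)   # ~2.6945e11 kg, matching the answer's 269451410204.4 kg
```

The huge number is the point of the exercise: a macroscopic mass change from latent heat alone requires a glacier-sized block of ice, because $c^2$ is so large.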
{ "domain": "physics.stackexchange", "id": 55168, "tags": "homework-and-exercises, special-relativity, energy-conservation, mass-energy" }
Duplicate entry and rename check
Question: I wrote a program that prompts users to enter one string per line and stores it into an array. It also checks for duplicate entries as data are fed in and renames those to become distinct by appending an incrementing number. Imagine that it will be used by a large number of people in some name registration event. I am concerned about how slow it will run because of all the comparisons as more and more strings are put in. About the code below: the first line scans how many times inputs will be scanned. I have yet to finish the input validation part, as I noticed the above performance issue halfway through writing it. How can I reduce the number of comparisons and make the program run faster?

import java.util.*;

public class Temp {
    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        int n = scanner.nextInt();
        String[] names = new String[n];
        int namesPoint = 0;
        for (int i = 0; i < n; i++) {
            String name = scanner.next();
            int k = 0;
            boolean dupe = false;
            for (int j = 0; j < namesPoint; j++) {
                if (names[j].equals(name)) {
                    dupe = true;
                    break;
                }
            }
            if (dupe) {
                String newname = "";
                while (dupe) {
                    dupe = false;
                    k++;
                    StringBuilder sb = new StringBuilder();
                    sb.append(name);
                    sb.append(k);
                    newname = sb.toString();
                    for (int j = 0; j < namesPoint; j++) {
                        if (names[j].equals(newname)) {
                            dupe = true;
                            break;
                        }
                    }
                }
                System.out.printf("%s\n", newname);
                names[namesPoint] = newname;
                namesPoint++;
                k = 0;
            } else {
                names[namesPoint] = name;
                namesPoint++;
                System.out.printf("OK\n");
            }
        }
        scanner.close();
    }
}

Answer: You have the right instinct / deduction that this takes longer than it needs to. This is because each check for duplicates does a 'full scan' over your data. To improve this, you need a way to store / sort your data so that you can quickly ask: "Does this entry already exist?" Enter the HashSet. A hash set is a data structure that allows for very fast lookups. It does this by mapping entries to an integer (the hash) and then using that to quickly get the right index in an array.
Using a hash set, your program will perform better and have fewer lines of code. The following example code uses the Set.add function, which returns true on success (element was added) and false on failure (element was not added, because it already exists).

int n = scanner.nextInt();
Set<String> names = new HashSet<>(n); // expected capacity
while (n-- > 0) {
    String name = scanner.next();
    if (!names.add(name)) { // returns false if already present
        // very short but a bit opaque:
        for (int suffix = 0; !names.add(name + suffix); suffix++);
        // alternatively, written out:
        for (int suffix = 0; ; suffix++) {
            if (names.add(name + suffix)) {
                break;
            }
        }
    }
}

"for (int suffix = 0; !names.add(name + suffix); suffix++);" still scans from 0 every time till an unclaimed name is found... Is there perhaps a quicker way? Indeed there is, good catch! We can speed up the duplicate counting by using a Map instead, mapping a name to the number of times we've found it. (This is essentially a multiset, but our standard libraries don't carry such a data structure, so we 'fake' it.) Our printing code will look quite different, though:

int n = scanner.nextInt();
Map<String, Integer> names = new HashMap<>(n); // expected capacity
while (n-- > 0) {
    String name = scanner.next();
    int count = names.getOrDefault(name, 0);
    names.put(name, count + 1);
    // alternatively:
    // names.merge(name, 1, Integer::sum);
}

// print results
for (Map.Entry<String, Integer> entry : names.entrySet()) {
    String name = entry.getKey();
    int count = entry.getValue();
    System.out.println(name); // plain name
    for (int i = 0; i < count - 1; i++) {
        System.out.println(name + i); // number from 0
    }
}

If you're wondering about the performance of HashMap: it is the same as HashSet. In fact, HashSet uses a HashMap under the hood.
{ "domain": "codereview.stackexchange", "id": 28209, "tags": "java, performance, beginner" }
(L&L vol. 4, sec. 7) Normalization of Spherical Wavefunctions of Photons
Question: In volume 4, section 7 of the Landau and Lifshitz collection (second edition) the authors discuss spherical wavefunctions of photons. I'm having trouble understanding how the equality in (7.7) is obtained. It is said that the wavefunction (in the momentum representation) of a photon having an angular momentum $j$ and component along some given axis $m$ is $$\mathbf{A}_{\omega j m}(\mathbf{k})=\frac{4\pi^2}{\omega^{3/2}} \delta(|\mathbf{k}|-\omega)\mathbf{Y}_{jm}(\mathbf{n}),$$ where $\mathbf{n}$ is a unit vector representing direction and $\mathbf{Y}_{jm}$ are the "spherical harmonic vectors" which satisfy the normalization relation $$\int \mathbf{Y}_{jm}\cdot \mathbf{Y}_{j'm'}^* d o = \delta_{jj'}\delta_{mm'},$$ where $o$ represents solid angle and integration takes place over all directions. Equation (7.7) states that the photon wavefunctions satisfy the normalization $$\frac{1}{(2\pi)^4} \int \omega\omega' \mathbf{A}_{\omega'j'm'}^*(\mathbf{k})\cdot\mathbf{A}_{\omega jm}(\mathbf{k})d^3k=\omega \delta(\omega'-\omega)\delta_{jj'}\delta_{mm'}.$$ I do not follow how this equality is obtained. When I attempt to plug in the available expressions in this integral I obtain $$ \int \frac{1}{\sqrt{\omega\omega'}}\delta(|\mathbf{k}|-\omega')\delta(|\mathbf{k}|-\omega)\mathbf{Y}_{jm}\cdot \mathbf{Y}_{j'm'}^* |\mathbf{k}|^2\hphantom{;}d|\mathbf{k}|\hphantom{;}d o = \int \frac{\delta(|\mathbf{k}|-\omega')\delta(|\mathbf{k}|-\omega)}{\sqrt{\omega\omega'}}\delta_{jj'}\delta_{mm'} |\mathbf{k}|^2\hphantom{;}d|\mathbf{k}|.$$ I do not understand how to deal with the two delta functions appearing under the integral. I can see that if $\omega\neq\omega'$ then this could be interpreted to be $0$. However when $\omega=\omega'$, a square of the delta function appears, the integral of which does not exist. I hope someone can help out, and thank you in advance. Answer: You need the identity $$ \int dx \delta(x-y)f(x) = f(y). 
$$ In particular, for $f(x)=\delta(x-z)g(x)$ you get $$ \int dx \delta(x-y)\delta(x-z)g(x)=\delta(y-z)g(y)=\delta(y-z)g(z). $$ This is, of course, "physicist" level of rigor. Formally to work with such integrals you need to interpret them as convolutions of distributions etc.
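Applying this identity to the last integral in the question, with $g(|\mathbf{k}|)=|\mathbf{k}|^2/\sqrt{\omega\omega'}$, the two delta functions collapse the radial integral: $$\int_0^\infty \frac{\delta(|\mathbf{k}|-\omega')\,\delta(|\mathbf{k}|-\omega)}{\sqrt{\omega\omega'}}\,|\mathbf{k}|^2\,d|\mathbf{k}| ~=~ \delta(\omega-\omega')\,\frac{\omega^2}{\sqrt{\omega\omega'}} ~=~ \omega\,\delta(\omega-\omega'),$$ where in the last step the prefactor is evaluated at $\omega'=\omega$ (allowed on the support of $\delta(\omega-\omega')$), giving $\omega^2/\omega=\omega$. Multiplying by the $\delta_{jj'}\delta_{mm'}$ from the angular integration reproduces exactly the right-hand side of (7.7).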
{ "domain": "physics.stackexchange", "id": 83591, "tags": "quantum-mechanics, electromagnetism, photons, wavefunction" }
When I put my hand on a hot solid, why don't the particles transferring heat to my hand exert a force on it?
Question: When I put my hand on a hot metal (say) solid, I can feel my hand heating up. I suspect this is caused mostly by particles (electrons, atoms, ...?) from the solid colliding with the particles that make up my hand thereby transferring kinetic energy to it. But why does this lead to my hand heating up and not it (also?) being pushed? Answer: Here is another scenario where the thing that you describe does happen: A tube is filled with a gas, for example plain air. The tube fits nicely around a finger. The fit is so precise that there is a sufficient seal, so the air cannot escape, but there is only just enough friction between the tube wall and your finger to prevent the tube from sliding off just like that. Gently increase the temperature of the gas. Now the molecules of the gas have a higher average velocity. The effect of that higher average velocity is that your finger is pushed out of the tube. The force on your finger arises from the accumulative effect of gas molecules bouncing against your skin. A gas doesn't have internal cohesion. When you give a gas opportunity to expand it will. Now consider a solid. A solid has internal cohesion. A solid does not expand like a gas at room temperature, and neither does it expand like a gas when you heat it up. (A solid will expand a little, but that's not visible to the naked eye.) When you heat a solid the molecules of the solid move back and forth faster than at colder temperature. Let's say a particular molecule has - just for an instant - a velocity away from the bulk of the solid. So the molecule is on its way to ascend out of the solid. But as that molecule ascends the forces of cohesion from the neighbouring molecules increase. As a consequence the ascending molecule is pulled back into the solid. The molecule now acquires a velocity back towards the bulk of the solid. This molecule will overshoot, and will very briefly create a local indentation of the solid. 
The motion of the molecules of the solid does transfer heat to your skin as you are touching the heated solid. And it's not just the outward punches that transfer heat. There is also an effect of interaction with the transient indentations from molecules overshooting on their way back into the bulk of the solid. You can think of that as a suction effect, if you will. As to your skin being pushed one way or the other: the combined effect of the "punches" and the "suctions" adds to zero. What remains is the transfer of heat. For that transfer, the effects of the "punches" and the "suctions" do add up; that is the transfer of heat from a solid to your skin.
{ "domain": "physics.stackexchange", "id": 79222, "tags": "thermodynamics, statistical-mechanics, temperature, physical-chemistry, molecular-dynamics" }
Questions about amateur astrophotographer Nik Szymanek's telescope
Question: The 2011 Sixty Symbols video Spy Satellites (from Deep Sky Videos) shows amateur astrophotographer Nik Szymanek and his telescope. Question: Can someone identify the model and design of this particular telescope and its mount, and explain why there are so many baffles inside the tube (isn't this more than typical?) and the structure of the tube? Is it some composite? Why the spiral pattern? Answer: Szymanek says he took some of his Flickr images with a GSO 10" Ritchey-Chrétien telescope. At least one third party offers these with Altair Astro branding and says the tube is made of carbon fiber. The Ritchey-Chrétien design is a Cassegrain with hyperbolic primary and secondary mirrors. It minimizes coma and spherical aberration at the expense of field curvature. The baffle tube forward of the hole in the primary is required to shade the image plane from direct skylight. Many Cassegrain telescopes also have a short baffle tube aft of the secondary. The multiple baffle rings inside the main tube just darken the tube more completely than black paint alone. They are more likely separate rings than a continuous helix. The alternative would be to roughen the surface e.g. with flocking paper. The mount looks like an older Software Bisque Paramount, a robotic German equatorial mount. On current models the declination axis housing is cylindrical rather than rectangular.
{ "domain": "astronomy.stackexchange", "id": 3997, "tags": "telescope, amateur-observing, optics" }
Understanding the form of the raising and lowering operators matrices for angular momentum
Question: I have a system with angular momentum $s=1$ and I can show that the matrix elements of the raising and lowering operators are given by $$S_{\pm}=\sqrt{s(s+1)-m(m\pm1)}\hbar\delta_{m^\prime,m\pm1}$$ Clearly $m=-1,0,1$, so as $\sqrt{s(s+1)-m(m\pm1)}$ is real and greater than zero this restricts the choices of $m \text{ and } m^\prime$ to $m^\prime=0, m=1 $ and $m^\prime=-1,m=0$. This gives the matrix $$S_{+}=\begin{pmatrix} 0 & \sqrt{2} & 0 \\ 0&0&\sqrt{2}\\ 0&0&0\end{pmatrix}$$ So then, by reference to $\delta_{m^\prime,m+1}$, surely we would seek values s.t. $m^\prime=m+1$, implying that $m=0, m=1 $. Therefore how does one deal with zero indices, which make no sense given the definition of $\delta_{m^\prime,m\pm1}$, to arrive at the above matrix for $S_{+}$, and by extension $S_{-}$? Thanks in advance. Answer: If you write $S_\pm$ in that form, it's more convenient to explicitly show the matrix indices $$(S_{\pm})_{m',m}=\sqrt{s(s+1)-m(m\pm1)}\,\hbar\,\delta_{m^\prime,m\pm1}\,.$$ The state $|m\rangle$ is the state with a $1$ in the entry $m$ and zero otherwise. So we can write it as follows: $$(|m\rangle)_{m'} = \delta_{m' m}\,.$$ This gives (repeated indices are summed over) $$ \begin{aligned} (S_+)_{m',m''}(| m\rangle)_{m''} &= \sqrt{s(s+1)-m''(m''+1)}\,\hbar\,\delta_{m',m''+1} \delta_{m'' m} = \\&= \sqrt{s(s+1)-m(m+1)}\,\hbar\,\delta_{m',m+1} = \\&= \sqrt{s(s+1)-m(m+1)}\,\hbar\,(|m+1\rangle)_{m'}\,. \end{aligned} $$ Dropping the explicit indices $$ S_+ |m\rangle = \sqrt{s(s+1)-m(m+1)}\,\hbar\,|m+1\rangle\,. $$ This is to show that regarding $m$ inside $|m\rangle$ as an index is dangerous: it does not behave like an index! It is a label for a vector, and the result is that matrices seem to act on it from the right rather than from the left. I admit I wasn't sure whether that was your doubt or not. If not, ignore all this if it's unclear.
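As a numerical sanity check (not from the original post), the matrix-element formula can be evaluated for $s=1$. A small sketch with NumPy, taking $\hbar=1$ and basis ordering $m = 1, 0, -1$:

```python
import numpy as np

s = 1
ms = [1, 0, -1]  # basis ordering: |m=1>, |m=0>, |m=-1>

# (S_+)_{m',m} = sqrt(s(s+1) - m(m+1)) * delta_{m', m+1}, with hbar = 1
Sp = np.array([[np.sqrt(s*(s+1) - m*(m+1)) if mp == m + 1 else 0.0
                for m in ms]          # columns: m
               for mp in ms])         # rows: m'
Sm = Sp.conj().T                      # lowering operator is the adjoint
Sz = np.diag(ms).astype(float)

print(Sp)  # nonzero entries sqrt(2) at (m'=1, m=0) and (m'=0, m=-1)

# Consistency check: the ladder algebra [S+, S-] = 2 Sz (in units of hbar)
print(np.allclose(Sp @ Sm - Sm @ Sp, 2 * Sz))  # → True
```

The Kronecker delta is handled by the `mp == m + 1` test on the physical labels themselves, so no "zero index" ever arises; the labels $m = 1, 0, -1$ are simply mapped to array positions 0, 1, 2.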
{ "domain": "physics.stackexchange", "id": 64610, "tags": "quantum-mechanics, angular-momentum, operators" }
Dynamic array implementation in C
Question: I have implemented a dynamic array in C. I am a beginner in C so any constructive feedback about improving the implementation would be greatly appreciated. Header file for the implementation( dyn_array.h) #ifndef TYPE_H #define TYPE int #define TYPE_SIZE sizeof(int) #endif #ifndef STDDEF_H #include <stddef.h> #endif #ifndef STDBOOL_H #include <stdbool.h> #endif #ifndef INIT_BUFFER_SIZE #define INIT_BUFFER_SIZE 2 #endif typedef struct DynArray { TYPE *data; size_t size; size_t capicity; }DynArray; bool DynArray_init(DynArray *self); TYPE DynArray_getElement(const DynArray *self, size_t pos); bool DynArray_setElement(DynArray *self, size_t pos, TYPE value); size_t DynArray_getSize(const DynArray *self); bool DynArray_pushBack(DynArray *self, TYPE value); TYPE DynArray_removeElement(DynArray *self, size_t pos); dyn_array.c #include "dyn_array.h" #include <stdint.h> #include <stdbool.h> #include <stdlib.h> #include <string.h> #include <stddef.h> #include <assert.h> #include <stdio.h> /*Allocate an pool of memory to store data upto N elements * * @param capicity * capacity for the data pool * * @returns * Pointer to a memory area of type TYPE with given number * */ TYPE * __DynArray_createDataPool(size_t capicity) { if (capicity != 0) { size_t bytesize = TYPE_SIZE * capicity; TYPE *tmp = malloc(bytesize); if (!tmp) return NULL; tmp = memset(tmp, 0x00, bytesize); return tmp; } return NULL; } /*Initilize an DynArray * * @param self * A pointer to an DynArray struct * * @returns * true if initilization is successful or false if initlization is * unsuccessful (possible reason - out of memory or bad pointer) * * * */ bool DynArray_init(DynArray *self) { if (self) { self->size = 0; self->data = __DynArray_createDataPool(INIT_BUFFER_SIZE); if (!self->data) return false; self->capicity = INIT_BUFFER_SIZE; return true; } return false; } /** *returns the element at a given index * * @param index * index of the element that need to be read * * @returns * value of the 
element at given index, * assert Fails if the it's called with an invalid index * and NDEBUG is not defined. * **/ TYPE DynArray_getElement(const DynArray *self, size_t index) { assert(index < (self->size)); return self->data[index]; } /* double the capicity of an array * * */ bool __DynArray_expendArray(DynArray *self) { if (self) { TYPE *tmp = __DynArray_createDataPool(2*(self->capicity)); if (!tmp) return false; size_t byteToCopy = TYPE_SIZE* (self->size); tmp = memcpy(tmp, self->data, byteToCopy); free(self->data); self->data = NULL; self->data = tmp; self->capicity = 2*(self->capicity); return true; } return false; } bool __DynArray_shrinkArray(DynArray *self, size_t capicity) { TYPE *tmp = __DynArray_createDataPool(capicity); if (!tmp) return false; size_t byteToCopy = TYPE_SIZE*(self->size); tmp = memcpy(tmp, self->data, byteToCopy); free(self->data); self->data = tmp; self->capicity = capicity; return true; } /* push an element to last of the array * * @param self * pointer to the DynArray struct * * @param value * Value that need to be pushed * * @returns * true if push is successfule otherwise false * */ bool DynArray_pushBack(DynArray *self, TYPE value) { if ((self->size) == (self->capicity)) { bool res = __DynArray_expendArray(self); if(!res) return false; } self->data[self->size] = value; self->size += 1; return true; } /* * * returns the current size of elements in array * @param self * pointer to a DynArray struct * * @returns * current size of the array */ size_t DynArray_getSize(const DynArray *self) { return self->size; } /*remove the element at a given index * *@param self * pointer to the DynArray struct *@param index index of the element that needs to be removed (If the index is greater then the element in array then the return value is undefined) * * @returns * element that's is removed from the given index * */ TYPE DynArray_removeElement(DynArray *self, size_t index) { assert(index < self->size); if (self->size < (self->capicity/4)) { 
__DynArray_shrinkArray(self,(self->capicity/2)); } TYPE indexValue = self->data[index]; for (size_t i = index; i < (self->size - 1); i++) self->data[i] = self->data[i+1]; self->size -= 1; return indexValue; } Answer: There are a couple small spelling errors that I noticed: capacity, not capicity expandArray, not expendArray Include guard Your header file lacks an include guard, you won't be able to include it more than once, and it will error if you attempt to do so. (Consider what happens if you include this header, then a second header, and the second header includes this header.) At the very top of your header you should add this: #ifndef H_DYNARRAY #define H_DYNARRAY and the last line must be: #endif This will protect it from being included more than once, so you can include "dyn_array.h" to your heart's content. STDDEF_H and STDBOOL_H No need to define these — standard headers already contain include guards. calloc Instead of malloc + memset, consider using calloc: TYPE * __DynArray_createDataPool(size_t capacity) { if (capacity == 0) { return NULL; } return calloc(capacity, TYPE_SIZE); } If allocation fails, calloc will return NULL. There's a little more information about calloc here: https://stackoverflow.com/a/2688522 realloc Same as above — instead of malloc + memcpy + free, consider using realloc instead. You can see man 3 realloc, but basically: bool __DynArray_expandArray(DynArray *self) { if (!self) { return false; } TYPE *tmp = realloc(self->data, TYPE_SIZE * self->capacity * 2); if (tmp == NULL) { return false; } // Fill new memory with zeros (length in bytes, not elements) memset(tmp + self->capacity, 0, TYPE_SIZE * self->capacity); self->data = tmp; self->capacity *= 2; return true; } Note: realloc won't initialize new memory with zeros, so if that is important, you'll want to manually clear it out.
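As a compilable sketch of the grow-with-realloc pattern described above (the names and the `int` element type are illustrative, not the OP's API — note that `realloc(NULL, n)` behaves like `malloc(n)`, and that `memset` counts bytes, not elements):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Minimal dynamic array of int, growing by doubling via realloc. */
typedef struct { int *data; size_t size, capacity; } Vec;

static int vec_push(Vec *v, int x) {
    if (v->size == v->capacity) {
        size_t ncap = v->capacity ? v->capacity * 2 : 2;
        int *tmp = realloc(v->data, ncap * sizeof *tmp);  /* NULL data is fine */
        if (tmp == NULL) return 0;
        /* realloc does not zero the new tail; clear it if callers rely on it */
        memset(tmp + v->capacity, 0, (ncap - v->capacity) * sizeof *tmp);
        v->data = tmp;
        v->capacity = ncap;
    }
    v->data[v->size++] = x;
    return 1;
}
```

Using `sizeof *tmp` for the element size keeps the byte arithmetic correct even if the element type changes.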
{ "domain": "codereview.stackexchange", "id": 30194, "tags": "beginner, c, array" }
Suspend a publisher
Question: Hi! In my system there are 2 nodes that publish on a topic concurrently. This situation leads an unstable state for my system, so I would suspend publication (only for that topic) from one of the two nodes. I wouldn't work on the code of two nodes. There is a way to pause the subscribtion of a publisher to a topic? Thanks Originally posted by jony on ROS Answers with karma: 23 on 2017-03-15 Post score: 0 Answer: With a topic_tools mux you can feed both inputs into a node and select which gets published out with a service call: http://wiki.ros.org/topic_tools/mux So instead of node1 -> topicA node2 -> topicA topicA -> node3 it would be node1 -> topicB -> mux node2 -> topicC -> mux mux -> topicA (which is a copy of either B or C depending on mux selection) topicA -> node3 Originally posted by lucasw with karma: 8729 on 2017-03-15 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by jony on 2017-03-15: thank you for the answer, but following your suggestion I should work on the code of node1 and node2 to divert respectively node1 to topic B and node2 to topicC. Right? In this case your solution isn't good for me. Comment by lucasw on 2017-03-15: You don't have to alter the code to change where publications go, remapping can change that at runtime: http://wiki.ros.org/Remapping%20Arguments or http://wiki.ros.org/roslaunch/XML/remap Comment by jony on 2017-03-16: Great! It works. Thank you so much ;)
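The remap-plus-mux setup from the answer can be sketched as a launch file (package, node, and topic names here are illustrative placeholders, not from the original question):

```xml
<!-- Sketch: remap each publisher's output at launch time, no code changes,
     then let topic_tools mux select which input reaches topicA. -->
<launch>
  <node pkg="pkg1" type="node1" name="node1">
    <remap from="topicA" to="topicB"/>
  </node>
  <node pkg="pkg2" type="node2" name="node2">
    <remap from="topicA" to="topicC"/>
  </node>
  <!-- mux OUT_TOPIC IN_TOPIC1 IN_TOPIC2 ... -->
  <node pkg="topic_tools" type="mux" name="mux"
        args="topicA topicB topicC"/>
</launch>
```

Switching the active input is then a service call at runtime, something like `rosservice call /mux/select topicC`.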
{ "domain": "robotics.stackexchange", "id": 27320, "tags": "ros, topic, publisher" }
Python IPv6 verifier, parser and converter
Question: This is a simple Python 3 program that verifies whether a string is a valid IPv6 address or not, parses an IPv6 address to its hexadecimal and decimal values, and converts an integer to IPv6 format. I am writing a webscraper, and to reduce network IO bound time I need to programmatically look up DNS records of target addresses and change the hosts file accordingly, due to DNS poisoning by the GFW. I use this API: 'https://www.robtex.com/dns-lookup/{website}' to get the addresses using XPaths, and the scraped results contain both IPv4 and IPv6 addresses, so I would like to differentiate between IPv4 and IPv6 addresses and validate them separately. For IPv4 addresses I have written this simple regex: '^((25[0-5]|2[0-4]\d|1?[1-9]\d?|0)\.){3}(25[0-5]|2[0-4]\d|1?[1-9]\d?|0)$' It does the validation in one step, but IPv6 is another thing, and after trying and failing to solve this using regex many times I gave up. I searched for a regex to validate IPv6, and after I realized the regexes that work perfectly are way too long I changed my approach. An IPv6 address is a non-negative integer below 2 ^ 128 (340282366920938463463374607431768211456) represented as 32 hexadecimal digits, formatted as 8 fields of 4 hex digits separated by 7 colons. If not for the shortening rules, IPv6 addresses could easily be validated by regexes. There are two shortening rules that are used together; the first rule trims leading zeros in each field. With only the first rule applied, IPv6 can still be verified using this regex: '^([\da-fA-F]{1,4}\:){7}([\da-fA-F]{1,4})$' But there is the second rule, which omits one contiguous run of zero fields (at most once per address) and uses '::' in its place, for example, 0:0:0:0:0:0:0:0 -> :: 0:0:0:0:0:0:0:1 -> ::1 fe80:0:0:0:0:0:0:1 -> fe80::1 fe80:0:0:0:0:1:0:1 -> fe80::1:0:1 Because of the second rule, each of the eight places where the omitted fields can be requires at least one regex, so that is at least 8 regexes in total...
So I have written my own code, and here it is: import re from ipaddress import ip_address, IPv6Address from typing import Iterable COLONS = {':', '::'} DIGITS = frozenset('0123456789ABCDEFabcdef') MAX_IPV6 = 2**128 - 1 ALL_ZEROS = '0:'*7 + '0' LEADING = re.compile('^(0:)+') TRAILING = re.compile('(:0)+$') MIDDLE = re.compile('(:0)+') def is_ipv6(ip: str) -> bool: if not isinstance(ip, str): raise TypeError('Argument must be an instance of str') if not ip or len(ip) > 39: return False first = True digits = 0 colons = 0 fields = 0 compressed = False for i in ip: if i == ':': digits = 0 first = False colons += 1 if colons == 2: if not compressed: compressed = True else: return False elif colons > 2: return False else: if i not in DIGITS: return False digits += 1 if digits > 4: return False if colons or first: first = False colons = 0 fields += 1 if fields > 8 - compressed: return False if (fields == 8 and colons != 1) or compressed: return True return False def split_ipv6(ip: str) -> Iterable[str]: if not isinstance(ip, str): raise TypeError('Argument must be an instance of str') buffer = '' chunks = [] digits = 0 colons = 0 fields = 0 compressed = False for i in ip: tail = True if i == ':': if digits: chunks.append(buffer) digits = 0 colons += 1 if colons == 2: if not compressed: compressed = True tail = False else: return False if colons > 2: return False else: if i not in DIGITS: return False digits += 1 if digits > 4: return False if colons or not buffer: if colons: chunks.append(':' * colons) colons = 0 fields += 1 buffer = i if fields > 8 - compressed: return False else: buffer += i if tail: chunks.append(buffer) else: chunks.append('::') if (fields == 8 and colons != 1) or compressed: return chunks return False def parse_ipv6(ip: str) -> dict: segments = split_ipv6(ip) if not segments: raise ValueError('Argument is not a valid IPv6 address') compressed = False empty_fields = None if '::' in segments: fields = ['0'] * 8 compressed = True cut = 
segments.index('::') left = [i for i in segments[:cut] if i not in COLONS] right = [i for i in segments[cut+1:] if i not in COLONS] fields[8-len(right):] = right fields[:len(left)] = left empty_fields = [i for i, f in enumerate(fields) if f == '0'] else: fields = [c for c in segments if c not in COLONS] digits = [int(f, 16) for f in fields] # decimal = sum(d * 65536 ** i for i, d in enumerate(digits[::-1])) hexadecimal = '0x' + ''.join(i.zfill(4) for i in fields) decimal = int(hexadecimal, 16) parsed = { 'segments': segments, 'fields': fields, 'digits': digits, 'hexadecimal': hexadecimal, 'decimal': decimal, 'compressed': compressed, 'empty fields': empty_fields } return parsed def trim_left(s: str) -> str: return s[:-1].lstrip('0') + s[-1] def to_ipv6(n: int, compress=True) -> str: if not isinstance(n, int): raise TypeError('Argument should be an instance of int') if not 0 <= n <= MAX_IPV6: raise ValueError('Argument is not in the valid range that IPv6 can represent') hexa = hex(n).removeprefix('0x').zfill(32) ipv6 = ':'.join(hexa[i:i+4] for i in range(0, 32, 4)) if compress: ipv6 = ':'.join(trim_left(i) for i in ipv6.split(':')) if ipv6 == ALL_ZEROS: return '::' elif LEADING.match(ipv6): return LEADING.sub('::', ipv6) elif TRAILING.search(ipv6): return TRAILING.sub('::', ipv6) ipv6 = MIDDLE.sub(':', ipv6, 1) return ipv6 if __name__ == '__main__': test_cases = [ (42540766411282592856904265327123268393, '2001:db8::ff00:42:8329'), (42540766411282592875278671431329809193, '2001:db8::ff00:0:42:8329'), (0, '::'), (1, '::1'), (5192296858534827628530496329220096, '1::'), (338288524927261089654018896841347694593, 'fe80::1'), (160289081533862935099527363545323831451, '7896:8ddf:4b26:f07f:a4cd:65de:ee90:809b'), (264029623924138153874706093713361856950, 'c6a2:4182:24b2:20f3:2d00:d2bb:3619:e9b6'), (155302777326544552126794348175886719955, '74d6:3a18:151d:948f:d13e:4d87:4fed:1bd3'), (152846031713612901234538066636429037612, '72fd:132e:fe1d:d05c:27d0:6001:a05f:902c'), 
(21824427460045008308753734783456952407, '106b:3b59:a20b:25dc:61b9:698e:d1e:c057'), (267115622348742355941753354636068900005, 'c8f4:98fa:50b3:e935:2bc9:25b0:593b:cca5'), (16777215, '::ff:ffff'), (3232235777, '::c0a8:101'), (4294967295, '::ffff:ffff'), (2155905152, '::8080:8080'), (18446744073709551615, '::ffff:ffff:ffff:ffff'), (18446744073709551616, '::1:0:0:0:0') ] for number, ip in test_cases: assert to_ipv6(number) == ip assert parse_ipv6(ip)['decimal'] == number As you can see, the first function validates input and splits the input into chunks, in the same loop, I could have used a regex to split the input but that isn't much faster than the manual approach and I figure if I use regex I need at least another for loop to validate the result, in this approach I can save the cost of one loop and do early returns if the input is invalid. And about the two other functions after I have written the first I just couldn't help myself. All functions are working properly and there's exactly 0 chance that there are bugs introduced by me, everything returns what I assume to be correct, that is, addresses like '1::' are treated by the functions to be valid, I don't know if such syntax exists but I assume it does. How can my code be improved? Update I have written a new function that does only the validation and therefore is much faster. And why did I write these functions in the first place? Why I don't just use ipaddress library? Well, I am reinventing the wheel and I have good reasons to do it. The library code is not perfect, for one thing, I need to validate IPv6 addresses, and yes this means the strings might be invalid, and if the string is not a valid IP address, ipaddress.ip_address raises ValueError so I have to use try catch clauses... 
In [690]: ipaddress.ip_address('100') --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-690-10b17acc39c6> in <module> ----> 1 ipaddress.ip_address('100') C:\Program Files\Python39\lib\ipaddress.py in ip_address(address) 51 pass 52 ---> 53 raise ValueError('%r does not appear to be an IPv4 or IPv6 address' % 54 address) 55 ValueError: '100' does not appear to be an IPv4 or IPv6 address And it validates both IPv4 and IPv6 addresses so I have to use isinstance checks... My custom function only validates IPv6 addresses and doesn't raise exceptions, so there is not any extra step needed. Secondly it is much slower than my functions: Python 3.9.7 (tags/v3.9.7:1016ef3, Aug 30 2021, 20:19:38) [MSC v.1929 64 bit (AMD64)] Type 'copyright', 'credits' or 'license' for more information IPython 7.28.0 -- An enhanced Interactive Python. Type '?' for help. In [1]: ...: if compress: ...: ipv6 = ':'.join(trim_left(i) for i in ipv6.split(':')) ...: if ipv6 == ALL_ZEROS: ...: return '::' ...: ...: elif LEADING.match(ipv6): ...: return LEADING.sub('::', ipv6) ...: ...: elif TRAILING.search(ipv6): ...: return TRAILING.sub('::', ipv6) ...: ...: ipv6 = MIDDLE.sub(':', ipv6, 1) ...: ...: return ipv6 ...: ...: ...: def ipaddress_test(s): ...: try: ...: return isinstance(ip_address(s), IPv6Address) ...: except ValueError: ...: return False ...: ...: if __name__ == '__main__': ...: test_cases = [ ...: (42540766411282592856904265327123268393, '2001:db8::ff00:42:8329'), ...: (42540766411282592875278671431329809193, '2001:db8::ff00:0:42:8329'), ...: (0, '::'), ...: (1, '::1'), ...: (5192296858534827628530496329220096, '1::'), ...: (338288524927261089654018896841347694593, 'fe80::1'), ...: (160289081533862935099527363545323831451, '7896:8ddf:4b26:f07f:a4cd:65de:ee90:809b'), ...: (264029623924138153874706093713361856950, 'c6a2:4182:24b2:20f3:2d00:d2bb:3619:e9b6'), ...: (155302777326544552126794348175886719955, 
'74d6:3a18:151d:948f:d13e:4d87:4fed:1bd3'), ...: (152846031713612901234538066636429037612, '72fd:132e:fe1d:d05c:27d0:6001:a05f:902c'), ...: (21824427460045008308753734783456952407, '106b:3b59:a20b:25dc:61b9:698e:d1e:c057'), ...: (267115622348742355941753354636068900005, 'c8f4:98fa:50b3:e935:2bc9:25b0:593b:cca5'), ...: (16777215, '::ff:ffff'), ...: (3232235777, '::c0a8:101'), ...: (4294967295, '::ffff:ffff'), ...: (2155905152, '::8080:8080'), ...: (18446744073709551615, '::ffff:ffff:ffff:ffff'), ...: (18446744073709551616, '::1:0:0:0:0') ...: ] ...: for number, ip in test_cases: ...: assert to_ipv6(number) == ip ...: assert parse_ipv6(ip)['decimal'] == number In [2]: ip_address('2001:0db8:0000:0000:ff00:0000:0042:8329') Out[2]: IPv6Address('2001:db8::ff00:0:42:8329') In [3]: type(ip_address('2001:0db8:0000:0000:ff00:0000:0042:8329')) == IPv6Address Out[3]: True In [4]: %timeit ipaddress_test('2001:0db8:0000:0000:ff00:0000:0042:8329') 12.2 µs ± 685 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [5]: %timeit is_ipv6('2001:0db8:0000:0000:ff00:0000:0042:8329') 6.01 µs ± 557 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [6]: %timeit ipaddress_test('2001:0db8:0000:0000:ff00:0000:0042:8329') 12.3 µs ± 657 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [7]: %timeit parse_ipv6('2001:0db8:0000:0000:ff00:0000:0042:8329') 15.2 µs ± 302 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [8]: %timeit parse_ipv6('2001:db8::ff00:0:42:8329') 14.6 µs ± 629 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [9]: %timeit is_ipv6('2001:db8::ff00:0:42:8329') 4.02 µs ± 561 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [10]: %timeit ipaddress_test('2001:db8::ff00:0:42:8329') 10.4 µs ± 204 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [11]: %%timeit ...: for number, ip in test_cases: ...: assert is_ipv6(ip) == True 62.5 µs ± 5.24 µs per loop (mean ± std. dev. 
of 7 runs, 10000 loops each) In [12]: %%timeit ...: for number, ip in test_cases: ...: assert ipaddress_test(ip) == True 169 µs ± 4.58 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) In [13]: is_ipv6('f:'*7) Out[13]: False In [14]: is_ipv6('f:'*8) Out[14]: False In [15]: is_ipv6('f:'*9) Out[15]: False In [16]: is_ipv6('f:'*7+':') Out[16]: True In [17]: is_ipv6('f:'*7+'f') Out[17]: True In [18]: %timeit is_ipv6('f:'*7) 2.57 µs ± 392 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [19]: %timeit is_ipv6('f:'*8) 2.9 µs ± 468 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [20]: %timeit is_ipv6('f:'*9) 3.06 µs ± 368 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [21]: %timeit is_ipv6('f:'*7+':') 2.76 µs ± 468 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [22]: ipaddress_test('f:'*7) Out[22]: False In [23]: ipaddress_test('f:'*8) Out[23]: False In [24]: ipaddress_test('f:'*9) Out[24]: False In [25]: ipaddress_test('f:'*7+'f') Out[25]: True In [26]: %timeit ipaddress_test('f:'*7) 5.88 µs ± 684 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [27]: %timeit ipaddress_test('f:'*8) 5.87 µs ± 753 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [28]: %timeit ipaddress_test('f:'*9) 5.14 µs ± 770 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [29]: %timeit ipaddress_test('f:'*7+'f') 11.1 µs ± 400 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [30]: ipaddress_test('f::') Out[30]: True In [31]: ipaddress_test('::f') Out[31]: True In [32]: ipaddress_test('f::f') Out[32]: True In [33]: ipaddress_test('f:::f') Out[33]: False In [34]: %timeit ipaddress_test('f::') 5.53 µs ± 639 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [35]: %timeit ipaddress_test('::f') 5.54 µs ± 552 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [36]: %timeit ipaddress_test('f:::f') 5.25 µs ± 581 ns per loop (mean ± std. dev. 
of 7 runs, 100000 loops each) In [37]: %timeit ipaddress_test('100') 4.52 µs ± 591 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [38]: is_ipv6('100') Out[38]: False In [39]: %timeit is_ipv6('100') 747 ns ± 34.7 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) In [40]: %timeit is_ipv6('f::') 741 ns ± 60.2 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) In [41]: %timeit is_ipv6('::f') 735 ns ± 43 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) In [42]: %timeit is_ipv6('f:::f') 864 ns ± 49.5 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) In [43]: %timeit is_ipv6('windows') 278 ns ± 38.5 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) In [44]: %timeit ipaddress_test('windows') 4.62 µs ± 749 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [45]: As you can see the library code is nowhere near the speed of my functions and in particular my custom functions spots invalid inputs much much faster than the library code... Have I made myself clear? Again, IPv6Address is not fast enough: In [107]: from ipaddress import AddressValueError In [108]: IPv6Address('100') --------------------------------------------------------------------------- AddressValueError Traceback (most recent call last) <ipython-input-108-46a502d0274c> in <module> ----> 1 IPv6Address('100') C:\Program Files\Python39\lib\ipaddress.py in __init__(self, address) 1916 addr_str, self._scope_id = self._split_scope_id(addr_str) 1917 -> 1918 self._ip = self._ip_int_from_string(addr_str) 1919 1920 def __str__(self): C:\Program Files\Python39\lib\ipaddress.py in _ip_int_from_string(cls, ip_str) 1629 if len(parts) < _min_parts: 1630 msg = "At least %d parts expected in %r" % (_min_parts, ip_str) -> 1631 raise AddressValueError(msg) 1632 1633 # If the address has an IPv4-style suffix, convert it to hexadecimal. 
AddressValueError: At least 3 parts expected in '100' In [109]: def IPv6Address_test(s): ...: try: ...: IPv6Address(s) ...: return True ...: except AddressValueError: ...: return False In [110]: %timeit IPv6Address_test('100') 2.13 µs ± 33.5 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) In [111]: IPv6Address_test('100::') Out[111]: True In [112]: IPv6Address_test('255.255.255.255') Out[112]: False In [113]: valid_long = [ ...: '2001:0db8:0000:0000:ff00:0000:0042:8329', ...: '2001:db8::ff00:0:42:8329', ...: 'f:'*7+':', ...: 'f:'*7+'f', ...: '7896:8ddf:4b26:f07f:a4cd:65de:ee90:809b', ...: 'c6a2:4182:24b2:20f3:2d00:d2bb:3619:e9b6', ...: '74d6:3a18:151d:948f:d13e:4d87:4fed:1bd3', ...: '72fd:132e:fe1d:d05c:27d0:6001:a05f:902c', ...: '106b:3b59:a20b:25dc:61b9:698e:d1e:c057', ...: 'c8f4:98fa:50b3:e935:2bc9:25b0:593b:cca5', ...: '2001:db8::ff00:42:8329', ...: '2001:db8::ff00:0:42:8329' ...: ] In [114]: valid_short = [ ...: '::', ...: '::1', ...: '1::', ...: 'fe80::1', ...: '::ff:ffff', ...: '::c0a8:101', ...: '::ffff:ffff', ...: '::8080:8080', ...: '::ffff:ffff:ffff:ffff', ...: '::1:0:0:0:0', ...: 'f::', ...: '::f', ...: 'f::f' ...: ] In [115]: invalid = [ ...: '100', ...: 'windows', ...: 'intelligence', ...: 'this is not an IPv6 address', ...: 'esperanza', ...: 'hispana', ...: 'esperanta', ...: '255.255.255.255', ...: '192.168.1.1', ...: '127.0.0.1', ...: '151.101.129.69', ...: 'f:::f', ...: 'f:'*7, ...: 'f:'*8, ...: 'f:'*9 ...: ] In [116]: %timeit for i in valid_long: assert is_ipv6(i) == True 65 µs ± 7.37 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) In [117]: %timeit for i in valid_long: assert IPv6Address_test(i) == True 122 µs ± 5.68 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) In [118]: %timeit for i in valid_short: assert is_ipv6(i) == True 19.8 µs ± 485 ns per loop (mean ± std. dev. 
of 7 runs, 100000 loops each) In [119]: %timeit for i in valid_short: assert IPv6Address_test(i) == True 59.4 µs ± 6.72 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) In [120]: %timeit for i in invalid: assert is_ipv6(i) == False 16.6 µs ± 650 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [121]: %timeit for i in invalid: assert IPv6Address_test(i) == False 38.3 µs ± 7.13 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) In [122]: %timeit is_ipv6('::') 526 ns ± 25.1 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) In [123]: is_ipv6('::') Out[123]: True In [124]: %timeit is_ipv6('::1') 739 ns ± 41.7 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) In [125]: %timeit is_ipv6('1::') 741 ns ± 33.4 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) In [126]: %timeit is_ipv6('100') 761 ns ± 43.1 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) In [127]: %timeit IPv6Address_test('::') 2.65 µs ± 468 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [128]: %timeit IPv6Address_test('::1') 3.38 µs ± 687 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [129]: %timeit IPv6Address_test('1::') 3.4 µs ± 684 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [130]: %timeit IPv6Address_test('100') 2.13 µs ± 41.7 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) In [131]: %timeit IPv6Address_test('g') 2.11 µs ± 29 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) In [132]: %timeit is_ipv6('g') 289 ns ± 54.5 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) In [133]: ``` Answer: I have written this simple regex Ish. You haven't written it in a very simple way. Write it on multiple lines, add comments. 65536 ** i seems like a less obvious way of accomplishing 1 bit-shifted left by 16*i. 
This: hexadecimal = '0x' + ''.join(i.zfill(4) for i in fields) decimal = int(hexadecimal, 16) joins the fields in the string domain, but I think it would make more sense to join them in the integer domain - i.e. using bit-shift operations. Even if you did want to join them in the string domain, there's no need to prepend 0x. parse_ipv6 currently has a weak type - a dict - and should prefer something like a named tuple instead. It's an odd choice to have your accepted test case values in decimal. Hexadecimal literals like 0xFFFF_FFFF will be more obviously correct than 4294967295. I am reinventing the wheel and I have good reasons to do it. I beg to differ, but let's dig into it: if the string is not a valid IP address, ipaddress.ip_address raises ValueError so I have to use try/catch clauses That's a feature, not a bug. Thinking about the typical consumers of an IP parsing routine, well-written code would make better use of exceptions than a boolean value, and doing an except is trivial if needed. And it validates both IPv4 and IPv6 addresses so I have to use isinstance checks... That's because you're using it wrong. You should not be calling ip_address, and instead should directly construct an IPv6Address. it is much slower than my functions First, a fair comparison would only use IPv6Address() instead of forcing ip_address to try parsing an IPv4 address with a guaranteed failure. Beyond that: whether or not fixing the above brings the routines into being performance-comparable, it's relatively rare that an application needs to validate thousands of addresses, and nearly always, correctness and maintainability matter more than performance. Your code is non-trivial, and will be a true pain to maintain as compared to using a built-in. How confident are you that your code is correct? 80%? 90%? Do you think that you can beat the stability and test coverage of the Python community? There are times where reinventing the wheel is called for, but this isn't one of them.
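The bit-shift joining suggested in the answer looks like this (a sketch, not the OP's API; the group strings are taken from one of the question's own test addresses):

```python
def fields_to_int(fields):
    # Join eight 16-bit field values into one 128-bit integer by shifting,
    # instead of concatenating zero-padded hex strings and re-parsing.
    value = 0
    for f in fields:
        value = (value << 16) | f
    return value

# '2001:db8::ff00:42:8329' fully expanded
groups = ['2001', '0db8', '0000', '0000', '0000', 'ff00', '0042', '8329']
n = fields_to_int(int(g, 16) for g in groups)
```

This stays entirely in the integer domain, matching how the address is actually defined, and agrees with the string-concatenation route.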
{ "domain": "codereview.stackexchange", "id": 42489, "tags": "performance, python-3.x, programming-challenge, reinventing-the-wheel, converting" }
Does this esoteric representation of integers have decidable equality?
Question: Consider the following datatypes in Haskell: data Foo = Halt | Iter Foo newtype BigInt = BigInt {nthBit :: Foo -> Bool} Foo is the Peano numbers compactified by one limit point, namely fix Iter. BigInt represents arbitrary-length integers in two's complement. The limit point fix Iter forces the field nthBit to eventually terminate in an infinite string of zeros or an infinite string of ones. Since Foo is compact and Bool is discrete, BigInt expresses a discrete function space. But does this mean BigInt has decidable equality? Semideciding inequality of BigInt is easy. Given two different integers, keep evaluating their nthBit fields at Halt, Iter Halt, Iter (Iter Halt) and so on until they mismatch. But what about semideciding equality? Given two equal integers, mathematically their equality at fix Iter should already serve as a proof that they might mismatch only at finitely many bits, but that doesn't give the range of the might-be mismatch. Or was it my misconception that discreteness implies decidable equality? Answer: Fun question! Yes, equality is externally decidable here, though I don't know if the procedure is internally expressible in Haskell. Here's a more direct solution: Let infty be the infinite integer fix Iter. On input a and b: Run a infty and b infty. If they are unequal, return "false". Otherwise, track the number of steps that each of the above computations takes to run, and let $N$ be the maximum of the two. Then it follows that, since a and b cannot distinguish infty from an integer that is N or greater, the Nth digits and greater of both a and b must be equal to a infty and b infty. So it remains only to check the digits from 0 to N are equal; if they are all equal, return "true", else return "false". The crux of the argument is that "a and b cannot distinguish infty from an integer that is N or greater". This should be true in Haskell, but it is also why I am not sure if this procedure is internally expressible.
Functions cannot introspect on infty to see its implementation, so they cannot do anything on input infty other than run it some finite number of times or diverge. But the procedure above fundamentally relies on running a infty and b infty to see how long they run. That is a bit weaker than inspecting the implementation of a and b, but it may not be possible in a purely functional way, or at least I don't see how to do it.
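To make the "finitely many bits" point concrete, here is a sketch in Python rather than Haskell, and with the bound N supplied externally instead of being obtained by counting evaluation steps — the step-counting is exactly the part the answer says may not be internally expressible. Python integers are already two's complement with infinite sign extension under `>>`, so bit n of x is `(x >> n) & 1` and the "bit at infinity" is the sign bit:

```python
def nth_bit(x, n):
    # Two's-complement bit n of x; Python's >> sign-extends negative ints,
    # so the bit stream ends in all 0s (x >= 0) or all 1s (x < 0).
    return (x >> n) & 1

def bit_at_infinity(x):
    # The value of the stream at the limit point (fix Iter in the question).
    return 1 if x < 0 else 0

def equal_up_to(a, b, N):
    # Decides equality given an external bound N past which both bit
    # streams are constant (the role played by the step count above).
    if bit_at_infinity(a) != bit_at_infinity(b):
        return False
    return all(nth_bit(a, n) == nth_bit(b, n) for n in range(N + 1))
```

The procedure is total once N is known; everything hard is hidden in obtaining N.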
{ "domain": "cs.stackexchange", "id": 21287, "tags": "computability, functional-programming, semi-decidability, equality, topology" }
BASH correct usage of single/double quotes when writing variables to file
Question: (running Ubuntu 16.04.3) I created a small script with tasks I do every time I upgrade/change the Ubuntu version. One of the tasks is to write a script to a file; this is the actual written file:

#!/bin/bash

if [ "$1" == "eno1" ]; then
    case "$2" in
        up) nmcli radio wifi off
            notify-send "cable network detected, turning WIFI off" -u low -t 10;;
        down) nmcli radio wifi on
            notify-send "cable network unplugged, turning WIFI on" -u low -t 10;;
    esac
fi

And this is how I write it from another script:

MY_FILE="/etc/NetworkManager/dispatcher.d/99-wlan"
ni="$(nmcli dev status | grep ethernet | awk '{ print $1 }')"
echo -e '#!/bin/bash\n\nif [ "$1" == "'"$ni"'" ]; then\n case "$2" in\n'\
' up) nmcli radio wifi off\n notify-send "cable network detected, turning WIFI off" --urgency critical --expire-time 6;;\n'\
' down) nmcli radio wifi on\n notify-send "cable network unplugged, turning WIFI on" --urgency critical --expire-time 6;;\n'\
' esac\nfi' | sudo tee -a "$MY_FILE" > /dev/null
sudo chmod +x "$MY_FILE"

My question is whether the syntax I'm using is correct. When I check it with the shellcheck utility, it complains about the first line:

echo -e '#!/bin/bash\n\nif [ "$1" == "'"$ni"'" ]; then\n case "$2" in\n'\
^-- SC2016: Expressions don't expand in single quotes, use double quotes for that.
^-- SC2016: Expressions don't expand in single quotes, use double quotes for that.

Can I ignore these warnings, or is there a better way to write it? Keep in mind I'm using the variable $ni when I'm writing to the file. Answer: I agree with @TobySpeight's review, and have only minor things to add on top. I find it tedious and error-prone to escape symbols in here-documents. So I prefer the <<"EOF" syntax which makes the here-document content verbatim, so no escaping is necessary.
To still embed a variable, I would split the content into two here-documents, one where values can be expanded, and one that's verbatim: cat <<EOF >"$SCRIPT" #!/bin/bash ni="$(nmcli dev status | awk '/ethernet/ { print $1 }')" EOF cat <<"EOF" >>"$SCRIPT" if [ "$1" == "$ni" ]; then case "$2" in up) nmcli radio wifi off notify-send "cable network detected, turning WIFI off" --urgency critical --expire-time 6 ;; down) nmcli radio wifi on notify-send "cable network unplugged, turning WIFI on" --urgency critical --expire-time 6 ;; esac fi EOF Notice that I replaced | grep ethernet | awk '{ print $1 }' with | awk '/ethernet/ { print $1 }' which is the same thing, but shorter, and uses one less process in the pipeline.
{ "domain": "codereview.stackexchange", "id": 27835, "tags": "bash, linux, shell" }
Why are the left- and right-hand sides of a differential equation with two separated variables equal to a constant?
Question: While deriving the Time Independent Schrodinger Equation, my book mentioned this line. So time and position of a particle are two independent variables. If they are equal to one another for all values of $t$ & $r$, then why should they be equal to a constant? Can't we have other solutions to this other than treating both the sides as a constant? Answer: There are two logical options when you vary $t$: either the value of the left-hand side changes, or it doesn't. If it changes, then the right side must change as well, since they are equal. But the right-hand side can't change when you vary $t$, since it is not a function of $t$! Therefore, since varying $t$ produces no change in the left-hand-side, then the left-hand side must be constant. And since it is equal to the right-hand side, then they are both (the same) constant.
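For concreteness, here is a sketch of the separation step the book is performing, with the answer's argument applied at the end (notation assumed: product ansatz $\Psi(\mathbf r, t) = \psi(\mathbf r)\,\phi(t)$ and Hamiltonian $\hat H$):

```latex
% Insert the product ansatz into i\hbar \partial_t \Psi = \hat H \Psi
% and divide through by \psi(\mathbf r)\,\phi(t):
i\hbar \, \frac{1}{\phi(t)} \frac{d\phi}{dt}
   \;=\; \frac{1}{\psi(\mathbf r)} \, \hat H \psi(\mathbf r).
% The left side depends only on t, the right side only on \mathbf r.
% Varying t cannot change the right side, so it cannot change the left
% side either; both sides equal one constant, conventionally called E:
i\hbar \, \frac{d\phi}{dt} = E\,\phi(t),
\qquad
\hat H \, \psi(\mathbf r) = E\,\psi(\mathbf r).
```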
{ "domain": "physics.stackexchange", "id": 60989, "tags": "schroedinger-equation, differential-equations" }
What is $v(t)$ in a sliding conducting rail in a magnetic field?
Question: This is problem 7.7c from David J. Griffiths - Introduction to Electrodynamics. A metal bar of mass $m$ slides frictionlessly on two parallel conducting rails a distance $l$ apart. A resistor $R$ is connected across the rails, and a uniform magnetic field $B$, pointing into the page, fills the entire region. The force on the bar is $F = \frac{B^2l^2v}{R}$ (to the left). If the bar starts out with speed $v_0$ (to the right as in the figure) at time $t = 0$, and is left to slide, what is its speed at a later time $t$? The correct solution is: $\frac{dv}{dt} = -\frac{B^2l^2v}{Rm} \Rightarrow v = v_0e^{-\frac{B^2l^2t}{Rm}}$ But my initial solution was: $v = v_0 - \frac{B^2l^2vt}{Rm} \Rightarrow v = \frac{v_0}{1+\frac{B^2l^2t}{Rm}}$ I think the formula $v = v_0 + at$ is only valid when acceleration is constant. That is where my error lies. Right? Answer: The formula you're using is only valid when the acceleration of the body is constant. You can read up more here to understand why this is so. Simply put, it's because of the assumption of constant acceleration that we end up with the result you're using; otherwise you get a differential equation. In this question the force keeps changing, hence the acceleration also keeps changing. To get the correct answer as given in the book, you need to write a differential equation for the velocity. You can do this by writing the acceleration as dv/dt, since it is the rate of change of velocity. You'll end up with a fairly simple separable differential equation, and by integrating and exponentiating both sides you'll get your answer.
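The difference between the two approaches is easy to check numerically. The sketch below uses made-up values for $B$, $l$, $R$, $m$ and $v_0$, integrates the differential equation in small steps (re-evaluating the acceleration each step), and compares the result with the book's exponential and with the constant-acceleration shortcut:

```python
import math

# Illustrative values (not from the problem) for B, l, R, m and v0;
# k is the decay rate B^2 l^2 / (R m) from the equation of motion.
B, l, R, m, v0 = 1.0, 0.5, 2.0, 1.0, 3.0
k = B**2 * l**2 / (R * m)

def v_exact(t):
    # closed-form solution of m dv/dt = -(B^2 l^2 / R) v
    return v0 * math.exp(-k * t)

def v_euler(t_end, steps=200_000):
    # step-by-step integration, re-evaluating the acceleration each step
    dt = t_end / steps
    v = v0
    for _ in range(steps):
        v += -k * v * dt
    return v

t = 5.0
v_const_accel = v0 * (1 - k * t)   # v0 + a(0) t, the invalid shortcut
# v_euler(t) tracks v_exact(t) closely; v_const_accel undershoots badly.
```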
{ "domain": "physics.stackexchange", "id": 60183, "tags": "homework-and-exercises, electromagnetism, electric-current, electrical-resistance, electromagnetic-induction" }
How does one find the wavefunction of a particle in its rest frame?
Question: In classical mechanics, the orbital angular momentum of a particle is defined as $\textbf{L}=\textbf{r}\times\textbf{p}$. This is zero in the rest frame of the particle where $\textbf{p}=0$. Quantum mechanically, $\textbf{p}$ is an operator. So putting $\hat{\textbf{p}}=0$ in $\hat{\textbf{L}}=\hat{\textbf{r}} \times\hat{\textbf{p}}$ and claiming that the orbital angular momentum of a quantum particle is zero in its rest frame does not make sense. One must look at the value of $\hat{\textbf{L}}^2$ on the "wavefunction in the rest frame" of the particle. How does one find the wavefunction of a particle in its rest frame? Answer: The rest-frame wavefunction $\psi(\boldsymbol x,t)$ is the one such that $$ \boldsymbol 0\equiv\langle \boldsymbol p\rangle=\int_{\mathbb R^3}\psi^*(\boldsymbol x,t)(-i\boldsymbol \nabla)\psi(\boldsymbol x,t)\ \mathrm d\boldsymbol x $$ If $\boldsymbol k\equiv\langle \boldsymbol p\rangle$ is non-zero, we just need to redefine the wave-function: $$ \psi(\boldsymbol x,t)\to\mathrm e^{-i\boldsymbol k\cdot\boldsymbol x}\psi(\boldsymbol x,t) $$ which satisfies $\langle\boldsymbol p\rangle\equiv \boldsymbol 0$ by construction. This is just a translation in momentum space, $$ \tilde\psi(\boldsymbol p,t)\to \tilde\psi(\boldsymbol p-\boldsymbol k,t) $$ which obviously has zero mean. More generally, if you have a system of many particles, the rest-frame of the system is, by definition, the one where $\langle\boldsymbol p\rangle\equiv\boldsymbol 0$, where $\boldsymbol p$ denotes the total linear momentum: $$ \boldsymbol p=\sum_i \boldsymbol p_i $$
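The momentum-space translation in the answer can be verified numerically. The sketch below (a discretized Gaussian wavepacket on a periodic grid, with $\hbar = 1$; the grid size, width and $k_0$ are illustrative choices) computes $\langle p\rangle$ spectrally, strips the phase $e^{-i\langle p\rangle x}$, and checks that the mean momentum becomes zero:

```python
import numpy as np

# Discretized Gaussian wavepacket with mean momentum k0 (hbar = 1)
N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

k0 = 3.0
psi = np.exp(-x**2 / 2) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)     # normalize

def mean_momentum(psi):
    # <p> = integral of psi* (-i d/dx) psi dx, derivative taken spectrally
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
    dpsi = np.fft.ifft(1j * k * np.fft.fft(psi))
    return np.real(np.sum(np.conj(psi) * (-1j) * dpsi) * dx)

p_avg = mean_momentum(psi)                      # close to k0
psi_rest = np.exp(-1j * p_avg * x) * psi        # translate in momentum space
p_rest = mean_momentum(psi_rest)                # zero by construction
```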
{ "domain": "physics.stackexchange", "id": 46323, "tags": "quantum-mechanics, angular-momentum, reference-frames, wavefunction, commutator" }
Two objects are thrown into a black hole. The first crosses the event horizon at time's end, so when does the second one cross?
Question: An observer throws an object towards a black hole, and then an arbitrary amount of time later, throws a second object towards the black hole. Disregarding Hawking radiation and assuming the black hole will last forever, it will take an infinite amount of time from the point of view of the observer for the first object to reach the event horizon. In other words, the first object crosses the event horizon right at the very end of time. But if this is true, then when does the second object reach the event horizon? After the first object reaches the event horizon, there is theoretically no more time that can elapse, yet we know that the second object must arrive at the event horizon after the first object. Update: Perhaps it would be better to rephrase some of the above. Just for clarification, I am not thinking of infinity as a number, but more like indexes in infinite set theory. Infinity is not a number, but there is a concept denoting the last index in an infinite set, omega. In this case, the state of the object corresponds to an index of the time set, and the state where the object crosses the event horizon is defined to correspond to an index of omega. Two sets with last indices omega and omega + 2 have the same cardinality, but are just indexed differently. My question was that the state of the first object when it passes the horizon corresponds to index omega, and the index corresponding to the event when the second object crosses the event horizon must come after omega. However, time is defined with a last index of omega, so my question is: what is the index CORRESPONDING to (not equal to, as with numbers) the event when the second object crosses? I.e., if an index of omega is ascribed to the event when the first object reaches the event horizon, what index do we ascribe to the event when the second object reaches the event horizon?
Answer: (It's important to note that this is, of course, talking about an idealized mathematical setup - in real life a real black hole does not last for infinite time and it's not certain the universe will either.) Both reach it at infinite time. It's for the same reason the graphs of both $f(x) = \frac{1}{x + 1}$ and $g(x) = \frac{1}{x + 2}$ have limit 0 at infinite $x$ despite starting at different points when $x = 0$. ADD (2018-01-16): The other answer here mentions that "infinity" is "not a number", and "should not be regarded as one". I'd say this depends on your point of view. Whether you call "infinity" a "number" or not depends on what objects you choose to admit under the label of "number" (and also, what objects you choose to label with the word "infinity"), which is, admittedly, something that does not admit a precise formal definition (that is, there is no precise formal mathematical definition of what a "number" is, beyond the fact that some kinds of mathematical objects are called such). The relevant concept of "infinity" here is that of the "extended real number line" - I note the original questioner mentioned something about infinities in his post, saying they were like infinite set cardinalities. This is not correct - the relevant notion is the "infinity" as used in calculus, which is formally a member of this extended set, and a suitably continuous function can be extended to it by taking the limit. If one objects to this formalism (though I don't see why one should - it is perfectly sensible as long as one plays by the rules that govern it, which is required for all mathematics), instead of saying that the horizon is "reached at infinite time" one can say that both objects "approach arbitrarily close to the horizon at suitably large times", or that they "reach the horizon in the limit of arbitrarily large times".
In any case a limit is required because the relevant functions are not defined directly at $t = \infty$ (on the ERL) - it is much the same case as for a "removable singularity" like that of $f(x) = x^0$. As for the physical world, we cannot empirically test or confirm whether or not time goes on arbitrarily far, much less whether the terminal point $t = \infty$ that would be added in an extended real number setup for convenience actually exists. It is rather a feature of our models, and admittedly quite idealized ones at that, that can be left off without affecting any prediction that is actually testable. (Indeed one could argue that any claim as to a putative "end of time" is not empirically testable at all because once it hits, we cease to exist and thus cannot register it as a truth.) In reality, a real black hole is almost certainly limited by the Hawking radiative evaporation which cuts off its lifespan at a very high but finite time (about $10^{100}$ s for the biggest black holes, versus the age of the current Universe at about 435 Ps ($435 \times 10^{15}$ s) or 13.8 Ga.). The rule for the time lapse in that case is that any object in free-fall will just reach the horizon at the instant the black hole vanishes. Thus also, both objects reach the horizon at the same time, only now it is at a finite and mathematically uncontroversial "number".
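The $f$/$g$ analogy from the start of the answer can be made concrete with a few sample points (a trivial numeric sketch):

```python
# The two trajectories in the analogy: different starting points,
# same limiting behaviour, never actually 0 at any finite x.
f = lambda x: 1 / (x + 1)
g = lambda x: 1 / (x + 2)

samples = [(x, f(x), g(x)) for x in (0, 10, 1_000, 1_000_000)]
# f(0) = 1.0 and g(0) = 0.5, yet both values shrink toward 0 as x grows.
```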
{ "domain": "physics.stackexchange", "id": 46019, "tags": "general-relativity, black-holes, time-dilation, observers, event-horizon" }
In Java: convert an array of floats to bytes and vice versa
Question: I'm just returning to Java after a multi-decade hiatus -- there are some nice new packages, such as java.nio. I need to convert an array of floats into bytes and vice versa. The floatsToBytes() function feels okay, but I'm pretty certain that bytesToFloats() could be implemented better. Thoughts? (P.S.: ignore the public, protected etc -- this code is wrapped inside a class...) import java.nio.ByteBuffer; import java.nio.ByteOrder; protected static final int BYTES_PER_FLOAT = Float.SIZE / 8; public static byte[] floatsToBytes(float[] floats){ ByteBuffer buffer = ByteBuffer.allocate(BYTES_PER_FLOAT * floats.length). order(ByteOrder.BIG_ENDIAN); for (float f : floats) { buffer.putFloat(f); } return buffer.array(); } protected static float[] bytesToFloats(byte[] bytes) { if (bytes.length % BYTES_PER_FLOAT != 0) { throw new RuntimeException("Illegal length"); } ByteBuffer buffer = ByteBuffer.wrap(bytes).order(ByteOrder.BIG_ENDIAN); int n_floats = bytes.length / BYTES_PER_FLOAT; float[] floats = new float[n_floats]; for (int i=0; i<n_floats; i++) { floats[i] = buffer.getFloat(i * BYTES_PER_FLOAT); } return floats; } I also note that there's a FloatBuffer class -- would that be appropriate here? Answer: Your code is way more verbose than it needs to be. First of all, BYTES_PER_FLOAT is unnecessary. You could simply use Float.BYTES, a built-in constant. Secondly, the initial ordering of a ByteBuffer is always BIG_ENDIAN, so you don't need to set this explicitly.
Finally, as dariosicily indicated, using asFloatBuffer() allows bulk operations, but additionally, you should use invocation chaining to make the code significantly shorter: class Convert { public static byte[] floatsToBytes(float[] floats) { byte bytes[] = new byte[Float.BYTES * floats.length]; ByteBuffer.wrap(bytes).asFloatBuffer().put(floats); return bytes; } public static float[] bytesToFloats(byte[] bytes) { if (bytes.length % Float.BYTES != 0) throw new RuntimeException("Illegal length"); float floats[] = new float[bytes.length / Float.BYTES]; ByteBuffer.wrap(bytes).asFloatBuffer().get(floats); return floats; } }
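For comparison, the same big-endian round trip can be sketched with Python's standard struct module (this is an illustration of the byte layout, not part of the Java answer; the function names are ad hoc):

```python
import struct

# '>' forces big-endian byte order and 'f' is a 32-bit IEEE-754 float,
# matching the Java ByteBuffer defaults used above.
def floats_to_bytes(floats):
    return struct.pack(f'>{len(floats)}f', *floats)

def bytes_to_floats(data):
    if len(data) % 4 != 0:
        raise ValueError("Illegal length")
    return list(struct.unpack(f'>{len(data) // 4}f', data))

# 1.5, -2.25 and 0.0 are exactly representable in 32 bits,
# so this particular round trip is exact.
round_trip = bytes_to_floats(floats_to_bytes([1.5, -2.25, 0.0]))
```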
{ "domain": "codereview.stackexchange", "id": 39040, "tags": "java" }
Yaw angle of lse_xsens_mti behaves weirdly
Question: I ran the lse_xsens_mti driver on my robot. I had it turn 360 degrees at a uniform velocity, and computed the Yaw angle output by the driver data in two ways: Using the tf::getYaw() function on the quat formed by the orientation.x,y,z,w values output by imu/data. By atan2(w,z)*2 on the z and w values. In both cases, the Yaw angle didn't vary uniformly with time as the robot turned. What could be wrong? Originally posted by PKG on ROS Answers with karma: 365 on 2011-11-03 Post score: 0 Original comments Comment by Gonçalo Cabrita on 2012-03-13: The lse_xsens_mti driver has not seen much testing since it was created, I will take a look at it and update it soon! Answer: Apparently there was a bug on MTi.h discovered by Prof. Dr. Stefan May which has already been corrected. Could you please verify that the software now works as expected? Originally posted by Gonçalo Cabrita with karma: 591 on 2012-03-14 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 7178, "tags": "ros, imu, imu-drivers" }
Expansion wave in a hermetically sealed room
Question: Suppose there is a human in a hermetically sealed room who takes in a small amount of air without letting it out. The room is initially at atmospheric pressure. The suction is similar to the case in which a piston is withdrawn to generate an expansion wave in the surrounding media. If I measure the pressure in a region furthest away from the human, will it still be atmospheric? How will equilibrium in pressure be achieved in this case compared to the case when the room is not hermetically sealed (opened to atmosphere)? Answer: Let's do a simple analysis. Let quantities with subscript "r" and "l" correspond to the room (excluding the lung) and the lung respectively. Since in a sealed room the total mass of air $m_0$ must remain constant, we have: $$m_r+m_l=m_0$$ Assuming that temperature is uniform and steady, the ideal gas law gives: $$p_rV_r=m_rRT\\ p_lV_l=m_lRT\\ \therefore\quad p_rV_r+p_lV_l=(m_r+m_l)RT=m_0RT,~\textrm{a constant}\\$$ If initially the pressure was $p_0$ everywhere (same inside room and lung) then $p_0V_0=m_0RT$. Thus: $$p_rV_r+p_lV_l=p_0V_0$$ Now we assume $V_0=V_r+V_l$; this is a good assumption as far as breathing is concerned. The above equation becomes: $$(p_r-p_0)V_0+(p_l-p_r)V_l=0$$ If $p_0$ is atmospheric pressure, the question whether room pressure is above or below atmospheric subsequent to inhalation concerns the sign of $(p_r-p_0)$. Its sign is opposite to that of $(p_l-p_r)$. If pressures inside the room and lung were equalized at the end of inhalation, i.e. $p_l=p_r$, then room pressure remains atmospheric, i.e. $p_r=p_0$. Maintaining unequal pressure requires external force on the lung, and a non-zero pressure difference can result if breathing was stopped before pressures were equalized. In this case, higher (lower) lung pressure compared to that of the room coexists with below (above) atmospheric pressure in the room.
To obtain the result for a room which isn't sealed but open to atmosphere (or a sealed room which is very big compared to the lung), we take the limit $V_0\to\infty$. Since $(p_l-p_r)V_l$ is a finite quantity, we must have $p_r\to p_0$. Therefore in a room which isn't sealed pressure will practically always be atmospheric.
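The bookkeeping above is easy to sanity-check numerically. The sketch below uses made-up values for the volumes and the lung pressure, solves for $p_r$ from mass conservation, and verifies the balance relation and the sign argument:

```python
# Illustrative numbers: a 30 m^3 room, a 6 L inhalation, and a lung
# pressure held slightly above the room's.
p0, V0 = 101325.0, 30.0        # Pa, m^3
Vl = 6e-3                       # lung volume, m^3
Vr = V0 - Vl

# Mass conservation at fixed temperature: p_r V_r + p_l V_l = p0 V0
pl = 102000.0
pr = (p0 * V0 - pl * Vl) / Vr

balance = (pr - p0) * V0 + (pl - pr) * Vl
# balance is ~0, and pr sits just below p0 because pl > pr:
# (pr - p0) and (pl - pr) have opposite signs, as derived.
```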
{ "domain": "physics.stackexchange", "id": 50357, "tags": "fluid-dynamics, pressure, gas" }
Kinematic Property of Link
Question: I am sorry if this question has been asked, but I haven't been able to find any solution. What exactly is the kinematic property of a link? Why is it 'false' by default (i.e. unchecked in the Gazebo GUI)? What is the purpose of the property? Any explanation or link to an explanation would be very helpful. Thanks Originally posted by Shrutheesh R Iyer on Gazebo Answers with karma: 3 on 2020-05-08 Post score: 0 Answer: A kinematic link (as opposed to a dynamic link) is not affected by external forces. It can influence other bodies in its vicinity, but it's not influenced by them. You can use the kinematic property of a link to take control of the link from the physics engine. Then you get to define by yourself how the link should move in the world (for example, to create an animation). Here is a reference to the kinematic state. Below you can see how a box interacts with gravity and a sphere, with disabled and enabled kinematic state. Originally posted by nlamprian with karma: 833 on 2020-05-08 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Shrutheesh R Iyer on 2020-05-09: Thank you for the detailed explanation!
{ "domain": "robotics.stackexchange", "id": 4514, "tags": "gazebo-9" }
Is this homework problem on T-joins wrong?
Question: In Question 9.3a, it states that if $T=V$, then the minimum cost perfect matching is the minimum cost T-join. Is this actually true? I think I have a counterexample which I have drawn below. Answer: The statement of the problem is incorrect. But $T$-joins are indeed very much related to the perfect matching problem. The theorem that 9.3a is supposed to be conveying is: Assume $G$ is connected. Suppose that $T = V$. The minimum $T$-join can be found as follows: construct a complete graph $G'$ such that the weight on an edge (a,b) in $G'$ is the length of the shortest path between vertices $a$ and $b$ in $G$. Now, find a minimum weight perfect matching in $G'$. This gives the minimum $T$-join in $G$.
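The corrected statement can be illustrated on a tiny example (a brute-force sketch; the graph and weights are made up, and the matching is enumerated directly since the graph is small):

```python
import math

# Made-up 4-vertex graph: a cheap path 0-1-2-3 plus an expensive edge 0-3.
n = 4
INF = math.inf
w = [[INF] * n for _ in range(n)]
for i in range(n):
    w[i][i] = 0.0
for (u, v), c in {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (0, 3): 10.0}.items():
    w[u][v] = w[v][u] = c

# Floyd-Warshall: d is the metric closure G' (shortest-path distances in G)
d = [row[:] for row in w]
for k in range(n):
    for i in range(n):
        for j in range(n):
            d[i][j] = min(d[i][j], d[i][k] + d[k][j])

def min_matching(verts):
    # minimum-weight perfect matching in G' by exhaustive pairing
    if not verts:
        return 0.0
    u, rest = verts[0], verts[1:]
    return min(d[u][v] + min_matching([x for x in rest if x != v])
               for v in rest)

best = min_matching(list(range(n)))
# best = 2.0, from pairing {0,1} and {2,3}; note d[0][3] = 3.0 (via the
# path), not 10.0, which is why the matching lives in G', not in G.
```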
{ "domain": "cstheory.stackexchange", "id": 5134, "tags": "graph-theory, co.combinatorics, approximation-algorithms, optimization, linear-programming" }
What Software could I use to create Diagrams of Biochemical Processes like these?
Question: Cross-posted at GraphicDesign.SE here Is there any software that I could use to (closely) create pathways as depicted in these diagrams? Or does anyone happen to know what software was used for these diagrams specifically? [Sourced from Harper's Illustrated Biochemistry (31e) and Murray's Medical Microbiology (8e) respectively] I would like to be able to make alterations such as adding extra steps, changing color, including newer pieces of data and such-like for the purpose of making my own notes. I assume that knowing which software was (or can be) used to create these, I'd have greater versatility making the edits I need, or creating something similar from scratch for other processes/pathways. Answer: By posting to this forum, I presume the question is whether the illustrators of this and other biochemistry books used software specifically designed for the purpose of drawing reaction pathways, or, if not, whether such software exists. From my experience of using such illustrations in my own teaching over the years, I believe that the answer is probably no (although there are a couple of what I would consider ‘almosts’ mentioned below). The main reason I would not expect this is that it would not seem commercially viable (biochemistry teachers and book illustrators are hardly a huge market) and you can get by with more generic pre-existing applications. The class of application I consider best for this is termed a vector graphics application (popularly known as a ‘drawing’ application), as opposed to a bitmap or raster graphics application (popularly known as a ‘paint’ application). Although today there are applications that combine both functions, vector graphics applications have the advantage that they are more easily editable and they scale to any print resolution. The professional applications in this area can be very expensive and have a steeper learning curve than bitmap graphics applications (such as Adobe Photoshop).
Moreover, they may lack features needed for biochemical use. It is therefore worth exploring cheaper offerings, or investigating specialist vector graphics applications more focused on engineering or computing diagrams. For biochemical work I personally would require software to be able to provide the following: Controllable geometric lines (Bezier curves) Customizable arrowheads Text control that at least allows subscripts and superscripts, and ideally both simultaneously, e.g. for a charged phosphate species. Rotatable text (as in the upper example) Basic graphic primitives (as in the lower example) Export to vector formats accepted by other applications (PostScript or PDF) to allow output in formats and at resolutions required for print publication. How do different applications satisfy these criteria? They all support basic graphic primitives, and most allow export at least as PDF, if not the more flexible PostScript. Surprisingly the creation of a line with an arrowhead is not straightforward as one is dealing with a combination of two separate objects. Although all applications tend to provide straight lines with arrowheads, curved lines with arrowheads tend to be absent from the general applications (even the most sophisticated) because a curved line is not really a line as such, but is generated as an oval segment with a stroke (visualization of its drawn edge) but no fill (the body is transparent). I have tended to use this sort of application (going back to MacDraw II of the Classic Mac) and fudge the arrows on them by combining them with straight lines of the same length as the arrowhead. However they are provided by more specialized diagramming software like OmniGraffle (mentioned in another answer) or the dedicated ChemDraw (which I find too limiting to use other than for export). Text handling is the other problem. Placing a superscript directly above a subscript is a page layout operation that only dedicated mathematical software (e.g.
MathType) is likely to support. Microsoft Word’s equation editor will do this, but export is only as PDF, and import into other applications varies in effectiveness. If you work in a generic application you will have to fudge it yourself with separate text boxes. What are the choices for generic vector drawing applications? Adobe Illustrator is the most sophisticated, is used by most professional illustrators, but is very expensive (it is now subscription only). Other vector drawing applications that I am aware of but have not used include the commercial Corel Draw (cheaper than Illustrator) and the even cheaper Affinity Designer. As mentioned in a comment, there is also the free application, Inkscape. Free trials are available with most applications, so I would recommend seeing what is the best solution for your personal situation. Addendum I have just had a quick look at BioRender, which was mentioned in another answer. It has many different modules, including ones for biochemistry (including a Glycolysis pathway template) and seems quite attractive. However if you want to use the results for publications you need a $35 per month personal academic subscription (or your institution to buy a site license). I guess you tend to get what you pay for.
{ "domain": "biology.stackexchange", "id": 10669, "tags": "software, resource-recommendation" }
Species identification, tree from Ecuador
Question: This tree was recently photographed in Quito, Ecuador. It grows everywhere. I'm new to South American flora, so I'd like to know what species it is. Answer: Your plant appears to be Chionanthus pubescens, the pink fringe tree, which is native to Ecuador and Peru. The genus has a number of species. It belongs to the family Oleaceae, which includes well known plants like jasmine, forsythia, ash trees and olives. I could not find much biological information on the pink fringe tree but plantlist.org contains a number of links to various databases that may be helpful if you want to dig deeper.
{ "domain": "biology.stackexchange", "id": 2707, "tags": "botany, species-identification" }
What does a coordinate representation of density matrix mean?
Question: A coordinate representation of the density matrix $\rho$ is defined as $$ \rho (x, x') \equiv \left<x\right| \rho \left|x'\right> .$$ When $x = x'$, this expresses the probability that the particle is in the state $\left|x\right>$. Question: what does that mean when $x \neq x'$? Is that related to some probability? According to Feynman (Statistical Mechanics: A Set of Lectures, p. 72), $$ I \equiv \int dx _1 \int dx _2 \cdots \int dx _{n - 1} \left<x\right| \rho \left|x_1\right> \left<x_1\right| \rho \left|x_2\right> \cdots \left<x_{n-1}\right| \rho \left|x'\right>$$ can be interpreted as saying that "the particle travels from $x'$ to $x$ through a series of intermediate steps, $x_1, x_2, \cdots, x_{n-1}$, which define a path". I don't understand this statement. Answer: The diagonal entries of the density matrix are called populations and provide information about the probability density of the particles (described by the density matrix), i.e. their probability of "being found" in real space. This is easily seen from a density matrix $\rho = |\Psi\rangle \langle \Psi|$, and $$\rho(x,x) = \langle x|\rho|x\rangle = \langle x|\Psi\rangle \langle \Psi|x\rangle = | \langle x |\Psi\rangle|^2 = |\psi(x)|^2,$$ which is the usual quantum mechanical probability density. The off-diagonal entries of the density matrix are called coherences and provide information about the phase coherence of the system described by $\rho$ between two positions $x$ and $x'$. Is there a fixed phase relationship between $x$ and $x'$, especially as $|x-x'|\rightarrow \infty$? I.e. will constructive & coherent interference occur over a large distance or will it be washed out? The most famous application of the off-diagonal elements of the density matrix is in Off-Diagonal Long-Range Order (ODLRO), which is what manifests in Bose-Einstein Condensates or Superfluids. These are phases where the system breaks a $U(1)$ symmetry and the wavefunction "picks" a specific phase $\theta$.
The "broken" phase is distinguished from the unbroken phase because it obeys: $$ \lim_{|x-x'|\rightarrow \infty} \rho(x,x') \rightarrow n_0 \neq 0,$$ i.e. phase coherence is preserved over arbitrarily long distances.
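A small numerical illustration of the populations/coherences split for a pure state (the grid, width and phase below are arbitrary choices, with $\hbar = 1$):

```python
import numpy as np

# Discretized illustration: for a pure state rho = |psi><psi|, the
# diagonal of rho(x, x') is the probability density |psi(x)|^2, and the
# off-diagonals (coherences) carry the relative phase information.
x = np.linspace(-5, 5, 400)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2) * np.exp(1j * 1.7 * x)   # Gaussian with a plane-wave phase
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalize

rho = np.outer(psi, np.conj(psi))                # rho(x, x') = psi(x) psi*(x')
populations = np.real(np.diag(rho))              # equals |psi(x)|^2
coherence = rho[50, 300]                         # complex: magnitude and phase
```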
{ "domain": "physics.stackexchange", "id": 59731, "tags": "quantum-mechanics, density-operator" }
Why are the Feynman diagrams contributing to the effective action $\Gamma[\phi_{\rm cl}]$ stripped/amputated, i.e., why do they have no external lines?
Question: I am reading P&S Chapter 11 and specifically I am trying to understand the derivation of $\Gamma[\phi_{\rm cl}]$. All the algebra is okay, but I am failing to understand the connection to Feynman diagrams. I have also read Chapter 9 from Srednicki and the reply on this stack-exchange question, which I find very illuminating: Perturbation expansion of effective action My question, however, is: Why does the author say that "the Feynman diagrams contributing to $\Gamma[\phi_{\rm cl}]$ have no external lines"? How can I understand that (pictorially or algebraically)? My guess is that it has something to do with the source term missing from the expression of the effective action, but I do not fully understand it. Answer: Here is one argument: Recall that the 1PI effective/proper action$^1$ $$\Gamma[\phi_{\rm cl}]~=~W_c[J]-J_k \phi^k_{\rm cl} \tag{1} $$ is the Legendre transformation of the generator $W_c[J]$ of connected diagrams. We can recursively construct higher and higher $n$-point 1PI correlator functions $\Gamma_{n,k_1\ldots k_n}$ from pertinent combinations of connected $m$-point correlation functions $W_{c,m}^{k_1,\ldots k_m}$, where $m\leq n$, cf. e.g. my Phys.SE answer here. Notice that in this context the connected 2-point function $W_{c,2}^{k\ell}$ plays the role of an (inverse) metric that raises and lowers the DeWitt indices. The connected $m$-point correlation function $W_{c,m}^{k_1,\ldots k_m}$ has upper indices because it includes its external legs (which are attached to the sources $J_{k_1}\ldots J_{k_m}$ with lower indices). The $n$-point 1PI correlator function $\Gamma_{n,k_1\ldots k_n}$ has lower indices because its external legs are stripped/amputated. Instead it is attached to the classical fields $\phi_{\rm cl}^{k_1}\ldots \phi_{\rm cl}^{k_n}$ with upper indices in the effective action $\Gamma[\phi_{\rm cl}]$.
Conversely, and perhaps more illuminating diagrammatically, the connected $m$-point correlation function $W_{c,m}^{k_1,\ldots k_m}$ is a sum of all possible trees made from connected propagators $W_{c,2}^{k\ell}$ and (amputated) 1PI vertices $\Gamma_{n,k_1\ldots k_n}$, where $n\leq m$, cf. e.g. this Phys.SE post. -- $^1$ We use DeWitt condensed notation to not clutter the notation.
{ "domain": "physics.stackexchange", "id": 86324, "tags": "quantum-field-theory, feynman-diagrams, correlation-functions, propagator, 1pi-effective-action" }
Getting point cloud from image_rect_color using stereo_image_proc
Question: I have a bag publishing left and right camera_info and image_rect_color topics. How can I use stereo_image_proc to get the point cloud? stereo_image_proc/disparity nodelet subscribes to image_rect and gives out disparity, which can be used by the stereo_image_proc/point_cloud2 nodelet. Do I need to modify the source code, make a custom launch file, or can this be done in an easier way that I am unable to see immediately? As far as I understand, I have to convert to a mono image and then maybe remap or publish to the relevant topic. ROS_NAMESPACE=stereo rosrun stereo_image_proc stereo_image_proc -> here the whole node subscribes to raw images which I don't have as a topic. Originally posted by ratneshmadaan on ROS Answers with karma: 71 on 2015-09-27 Post score: 0 Original comments Comment by ratneshmadaan on 2015-09-27: https://github.com/ros-perception/image_pipeline/blob/indigo/stereo_image_proc/src/nodes/stereo_image_proc.cpp. Read the source. So, I need to comment out the debayer and rectify blocks. But I need an image_rect from image_rect_color (color to mono, basically?) and then I can use the two nodelets. Comment by ratneshmadaan on 2015-09-29: As I had zero distortion - plumb bob model, with the parameters needed being [0,0,0,0,0], all I needed to do was remap: rosbag play my_bag_file.bag /my_cam/left/image_rect_color:=/my_cam/left/image_raw /my_cam/right/image_rect_color:=/my_cam/right/image_raw Answer: As I had zero distortion - plumb bob model, with the parameters needed being {0,0,0,0,0}, all I needed to do was remap: rosbag play my_bag_file.bag /my_cam/left/image_rect_color:=/my_cam/left/image_raw /my_cam/right/image_rect_color:=/my_cam/right/image_raw Originally posted by ratneshmadaan with karma: 71 on 2015-09-29 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 22719, "tags": "ros, stereo-camera, stereo, stere-image-proc, pointcloud" }
Peak-to-peak amplitude of a sum of sinusoids (harmonic frequencies)
Question: I would like to estimate the peak-to-peak amplitude of a periodic signal whose frequency components are known. This is, I have the frequency spectrum (a peak in the fundamental frequency and other peaks in its harmonics) and I would like to compute the peak-to-peak amplitude. Is that possible without reconstructing the signal in the time domain and then detecting peaks? Answer: Suppose that we have a continuous-time periodic signal $\displaystyle s(t) = a_0 + \sum_{n=1}^N a_n \cos(n\omega_0 t + \theta_n)$. What does this mean? Do we have a trace of $s(t)$ on some recorder chart and the trace looks periodic? If we did, the question to be solved would be trivial since we could simply measure the maximum and minimum values of $s(t)$ on the chart. So the problem of interest is: Given the values of $\omega_0, a_0, a_1, a_2, \cdots, a_N, \theta_1, \theta_2, \cdots, \theta_N$, find the values of $$M_\max = \max s(t)\quad \operatorname{and} \quad M_\min = \min s(t).$$ The OP wants to find the peak-to-peak amplitude of $s(t)$ and this is, of course, just $M_\max-M_\min$. It is worth noting that since $s(t)$ is periodic with period $T = \frac{2\pi}{\omega_0}$, the maximum value and minimum value occur (at least once) in each interval of length $T$ on the time axis. The key issue here is since we know the $a_i$ and the $\theta_i$ etc., we can write down the formula $$s(t) = a_0 + \sum_{n=1}^N a_n \cos(n\omega_0 t + \theta_n)\tag{1}$$ (whether this constitutes "reconstruction" of the time-domain signal is a question that I will leave to others to answer) but the standard method for finding the maxima and minima of $s(t)$ requires us to find solutions to the nonlinear equation $(2)$ below (that is, values of $t$ for which $(2)$ holds): $$\frac{ds(t)}{dt} = -\sum_{n=1}^N a_n\cdot n\omega_0 \sin(n\omega_0 t + \theta_n) = 0. 
\tag{2}$$ Once we have found the values $t_1, t_2, \ldots, t_k, \ldots$ for which $\displaystyle \sum_{n=1}^N a_n\cdot n\omega_0 \sin(n\omega_0 t_k + \theta_n) = 0$, we can simply calculate $s(t_1), s(t_2), \ldots, s(t_k), \ldots$ and search through this list of numbers (it is not necessary to sort the list as recommended in hotpaw2's answer) to find $M_\max$ and $M_\min$. But, ignoring the difficulty of finding the $t_k$'s makes hotpaw2's answer not particularly useful in solving the OP's problem. The answer by ethereal is even worse in this regard since all it boils down to is the assertion that (for $\alpha > 0$) the maximum and minimum values of $\alpha\cdot s(t)$ are $\alpha\cdot M_\max$ and $\alpha\cdot M_\min$ respectively. Finding the maximum and minimum of $s(t)$ from knowledge of its Fourier series is a nontrivial task, not at all as easy as it is made out to be by ethereal or by hotpaw2. A related problem has received a lot of research attention in the past thirty years or so. For a periodic signal such as $s(t)$, the average power in the signal is readily computed as $$\bar{P} = \frac{1}{T}\int_0^T |s(t)|^2\,\mathrm dt = a_0^2 + \frac{1}{2}\sum_{i=1}^N |a_i|^2.$$ On the other hand, the peak power is $P_\max = \max\{M_\max^2, M_\min^2\}$, and is also of interest, especially to system and power amplifier designers, and the ratio $\frac{P_\max}{\bar{P}}$, aptly named the _peak-to-average-power ratio_ (PAPR), has received much attention. It can be calculated exactly for a given $s(t)$ but only with great computational effort, and so a lot of effort has gone into finding bounds on the PAPR. Some of these could help in getting bounds for the OP's problem.
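In practice the $t_k$'s are usually found numerically rather than in closed form. A minimal sketch (function name and grid density are my own choices, not from the answer): densely sample one period and take the extrema of the samples, which sidesteps solving $(2)$ exactly at the cost of a small, controllable discretization error.

```python
import numpy as np

def peak_to_peak(a0, a, theta, w0, n_grid=100_000):
    """Estimate max and min of s(t) = a0 + sum_n a_n cos(n w0 t + theta_n)
    by dense sampling over one period T = 2 pi / w0."""
    a = np.asarray(a, float)
    theta = np.asarray(theta, float)
    T = 2 * np.pi / w0
    t = np.linspace(0.0, T, n_grid, endpoint=False)
    n = np.arange(1, len(a) + 1)
    # rows: sample times, columns: harmonics; sum the harmonics with weights a_n
    s = a0 + np.cos(np.outer(t, n * w0) + theta) @ a
    return s.max(), s.min()

# Example: s(t) = cos(w0 t) + 0.5 cos(2 w0 t) has max 1.5 and min -0.75
mx, mn = peak_to_peak(0.0, [1.0, 0.5], [0.0, 0.0], w0=2 * np.pi)
print(mx - mn)   # peak-to-peak, about 2.25
```

Since the extrema of a smooth periodic signal are quadratic near their peaks, the sampling error here is of order $(T/n_\text{grid})^2$, which is negligible for any reasonable grid.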
{ "domain": "dsp.stackexchange", "id": 1023, "tags": "frequency-spectrum, amplitude" }
Gravitational redshift in terms of wavelength
Question: I know that Einstein’s theory of general relativity predicts that the wavelength of electromagnetic radiation will lengthen as it climbs out of a gravitational well. Photons must expend energy to escape, but at the same time must always travel at the speed of light, so this energy must be lost through a change of frequency rather than a change in speed. If the energy of the photon decreases, the frequency also decreases. This corresponds to an increase in the wavelength of the photon, or a shift to the red end of the electromagnetic spectrum – hence the name: gravitational redshift. How can I get the gravitational redshift in terms of the wavelength? I would really appreciate your help. Answer: For fixed $r$ and $\phi$, all but the time differentials vanish, so $$ds^2 = c^2 d\tau^2 = \left(1-\cfrac{a}{r}\right)c^2dt^2$$ so that $$ d\tau^2 =\left(1-\cfrac{a}{r}\right)dt^2$$ $$ \frac{d\tau^2}{dt^2}= \left(1-\cfrac{a}{r}\right)$$ $d\tau$ is the clock time of an observer at distance $r$ from the source, $dt$ is the time measured by a distant observer, and $a$ is the Schwarzschild radius given by $$a = \frac{2GM}{c^2}$$ Now $$ \frac{d \tau}{dt} = \frac{c\lambda_s}{c \lambda}= \sqrt{1- \frac{2GM}{r c^2}}$$ where the subscript $s$ stands for source and $\lambda$ is the shifted wavelength. Finally we can write $$ \frac{\lambda}{\lambda_s} = \left( 1- \frac{2GM}{r c^2}\right)^{-\frac{1}{2}}$$
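As a quick numerical illustration (my own, not part of the original answer): for light escaping from the Sun's surface, the standard Schwarzschild result $\lambda/\lambda_s = (1 - 2GM/(rc^2))^{-1/2}$ (note the factor of 2 in $r_s = 2GM/c^2$) gives a fractional redshift of about $2\times10^{-6}$.

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M = 1.989e30      # solar mass, kg
r = 6.957e8       # solar radius, m

# lambda / lambda_s = (1 - 2GM/(r c^2))^(-1/2)
ratio = (1 - 2 * G * M / (r * c**2)) ** -0.5
z = ratio - 1     # fractional wavelength shift (redshift)
print(z)          # ~2.1e-6 for light leaving the Sun's surface
```

In the weak-field limit this reduces to the familiar $z \approx GM/(rc^2)$, which matches the measured solar gravitational redshift.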
{ "domain": "physics.stackexchange", "id": 71206, "tags": "general-relativity, wavelength, gravitational-redshift" }
extract classifier properties from pickled file
Question: I have a *.clf file which I get from fit() of sklearn. I fit my data with SVM or KNN and want to show its properties when using it for predictions. For example, I open an earlier pickled classifier file and when I print it I get something like this: SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, decision_function_shape='ovr', degree=3, gamma='scale', kernel='rbf', max_iter=-1, probability=True, random_state=None, shrinking=True, tol=0.001, verbose=True) How can I get the value of, for example, gamma, and print it out somewhere else without traversing the representation as a string? Because first I have to determine whether it's SVM or KNN. Answer: clf = SVC() clf.fit(X, y) print(clf.get_params())
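A self-contained sketch of the same idea (the in-memory pickle below stands in for the *.clf file, which is an assumption about how that file was written): get_params() returns a plain dict, and isinstance distinguishes SVM from KNN, so no string parsing is needed.

```python
import pickle
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Stand-in for the earlier *.clf file: fit a tiny SVC and pickle it in memory.
clf = SVC(gamma="scale")
clf.fit([[0, 0], [1, 1]], [0, 1])
blob = pickle.dumps(clf)              # roughly what the *.clf file contains

loaded = pickle.loads(blob)
params = loaded.get_params()          # plain dict of constructor parameters
if isinstance(loaded, SVC):
    print("SVM, gamma =", params["gamma"])
elif isinstance(loaded, KNeighborsClassifier):
    print("KNN, n_neighbors =", params["n_neighbors"])
```

For a real file you would replace the dumps/loads pair with `pickle.load(open(path, "rb"))`; the isinstance/get_params logic is unchanged.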
{ "domain": "datascience.stackexchange", "id": 7265, "tags": "classification, scikit-learn, pickle" }
Which tasks are called as downstream tasks?
Question: The following paragraph is from page no 331 of the textbook Natural Language Processing by Jacob Eisenstein. It mentions a certain type of task called downstream tasks. But it provides no further examples or details regarding these tasks. Learning algorithms like perceptron and conditional random fields often perform better with discrete feature vectors. A simple way to obtain discrete representations from distributional statistics is by clustering, so that words in the same cluster have similar distributional statistics. This can help in downstream tasks, by sharing features between all words in the same cluster. However, there is an obvious tradeoff: if the number of clusters is too small, the words in each cluster will not have much in common; if the number of clusters is too large, then the learner will not see enough examples from each cluster to generalize. Which tasks in artificial intelligence or NLP are called downstream tasks? Answer: In the context of self-supervised learning (which is also used in NLP), a downstream task is the task that you actually want to solve. This definition makes sense if you're familiar with transfer learning or self-supervised learning, which are also used for NLP. In particular, in transfer learning, you first pre-train a model with some "general" dataset (e.g. ImageNet), which does not represent the task that you want to solve, but allows the model to learn some "general" features. Then you fine-tune this pre-trained model on the dataset that represents the actual problem that you want to solve. This latter task/problem is what would be called, in the context of self-supervised learning, a downstream task. In this answer, I mention these downstream tasks. In the same book that you quote, the author also writes (section 14.6.2 Extrinsic evaluations, p.
339 of the book) Word representations contribute to downstream tasks like sequence labeling and document classification by enabling generalization across words. The use of distributed representations as features is a form of semi-supervised learning, in which performance on a supervised learning problem is augmented by learning distributed representations from unlabeled data (Miller et al., 2004; Koo et al., 2008; Turian et al., 2010). These pre-trained word representations can be used as features in a linear prediction model, or as the input layer in a neural network, such as a Bi-LSTM tagging model (§ 7.6). Word representations can be evaluated by the performance of the downstream systems that consume them: for example, GloVe embeddings are convincingly better than Latent Semantic Analysis as features in the downstream task of named entity recognition (Pennington et al., 2014). Unfortunately, extrinsic and intrinsic evaluations do not always point in the same direction, and the best word representations for one downstream task may perform poorly on another task (Schnabel et al., 2015). When word representations are updated from labeled data in the downstream task, they are said to be fine-tuned. So, to me, after having read this section of the book, it seems that the author is using the term "downstream task" as it's used in self-supervised learning. Examples of downstream tasks are thus sequence labeling, document classification, and named entity recognition. Tasks like training a model to learn word embeddings are not downstream tasks, because these tasks are not really the ultimate tasks that you want to solve, but they are solved in order to solve other tasks (i.e. the downstream tasks).
{ "domain": "ai.stackexchange", "id": 2894, "tags": "natural-language-processing, terminology" }
Relation of SNR and BER
Question: I am working towards establishing the error rate of a communication system relying on on-off keying. I have trouble understanding the estimation of bit error rate (BER) performance when the following formula $$\mathrm{BER} \propto \mathrm{erfc}(\sqrt{\mathrm{SNR}}) $$ is used. Suppose I have a received signal $$x(t) = s(t) + n(t)$$ where $s(t)$ is the original signal and $n(t)$ is AWGN. My signal reads $$ s(t) = \sum_{n}d_n g(t-nT), $$ where $g(t)$ is the modulation pulse (rectangle for simplicity), $d_n$ is either 0 or 1 and $T$ is the width of the pulse. I define the SNR as $$\mathrm{SNR} = \frac{P_s}{P_n} = \frac{\mathrm{E}[s(t)]^2}{\mathrm{var}[n(t)]}.$$ However, looking at Wikipedia and signal course lecture materials I have been consulting, I noticed that it could also be defined as $$\mathrm{SNR}_\mathrm{alt} = \frac{\mu_x^2}{\sigma_x^2} = \frac{\mathrm{E}[x(t)]^2}{\mathrm{var}[x(t)]}.$$ In practice, it is a nontrivial task to separate $x(t)$ into the original $s(t)$ and $n(t)$, so the $\mathrm{SNR}_\mathrm{alt}$ definition is more straightforward for evaluation. Now, back to my problem. I have tried to do some numerical simulations, generating over a million random bit symbols and assigning them to the rectangular pulse of width $T$. I have found that the formula $$\mathrm{BER} = \frac{1}{2}\mathrm{erfc}(\sqrt{\frac{T}{2}\mathrm{SNR}}) $$ is a perfect estimation of the BER computed from simulation. However, using the $\mathrm{SNR}_\mathrm{alt}$ instead, the mentioned formula starts to be useless somewhere around $-6$ dB. Rescaling the BER-SNR formula would not help here as the data saturate somewhere around 0.00078. Can someone please elaborate on this? I am lost in the assumptions and what I can use or not. The formula with SNR seems to be a nice estimation of the performance. But calculating the SNR value without additional processing seems to be impractical. Answer: The alternative definition is not equivalent to $P_s/P_n$.
It is a different definition, applicable in different contexts; it is not a replacement. It may be possible to find a formula for BER based on the alternative definition, but it would certainly have a different form.
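A hedged Monte Carlo sanity check of the erfc relation (my own construction: one noisy sample per bit, i.e. no matched filter integrating over $T$, so this verifies only the per-sample version of the formula; the amplitude, noise level and mid-level threshold are illustrative choices):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n_bits, A, sigma = 200_000, 1.0, 0.35

bits = rng.integers(0, 2, n_bits)                  # OOK symbols: 0 or A
x = A * bits + rng.normal(0.0, sigma, n_bits)      # received samples in AWGN
ber_sim = np.mean((x > A / 2).astype(int) != bits) # decide with threshold A/2

# Theory for equiprobable OOK with a mid-level threshold:
# BER = Q(A / (2 sigma)) = (1/2) erfc(A / (2 sigma sqrt(2)))
ber_theory = 0.5 * math.erfc(A / (2 * sigma * math.sqrt(2)))
print(ber_sim, ber_theory)
```

Note that the argument of the erfc here is the signal amplitude over the noise standard deviation, a ratio that already requires knowing $\sigma$ of the noise alone — which is exactly why the $\mathrm{SNR}_\mathrm{alt}$ estimate from the mixed signal $x(t)$ cannot simply be substituted in.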
{ "domain": "dsp.stackexchange", "id": 12435, "tags": "signal-analysis, noise, demodulation, snr" }
Can you build a solver from a verifier?
Question: Given code to just an NP-verifier, where the certificate/witness is required to be of size polynomial in the instance, for a language L, can you, from that data alone, construct code for a solver, or generate / get back the language L itself? At a glance, the answer seems to be yes--you could just try every word, certificate pair; however, there is a part I'm not sure about with this process: sure there are only a finite number of words of each size, but there are an infinte number of words that are poly|w| for any given size |w| that are potential certificates. So, without the certificates, you could be trying different strings as the second input for the same word as first input forever to no avail. Thanks! Answer: Having a verifier for a language in general is known as semidecidability, which is actually weaker than decidability. So in general, the answer is no, we can't build a decider for $L$. But if the verifier is efficient (i.e., $L \in $ NP), then indeed $L$ is decidable, and your argument is pretty close for why that is. Let's say we have an efficient verifier $V(x,c)$. Given an $x \in \Sigma^*$, the "certificate space" is $C=\{c \in \Sigma^* \mid |c| \leq p_L(|x|)\}$, where $p_L$ is a polynomial. That means the certificate space is actually finite. $x \in L$ if and only if there exists a $c \in C$ such that $V(x,c)$ accepts. So indeed we can just check every possible certificate for a given input $x$ to construct a decider for $L$. In fact with a bit more work you can show that $L \in$ EXP, meaning $L$ is decidable in exponential time.
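The argument can be sketched concretely. Assuming a toy SUBSET-SUM verifier (an illustrative choice of NP language; here a certificate is a bit string no longer than the instance, so the certificate space is finite), the decider simply enumerates every certificate:

```python
from itertools import product

def verifier(x, cert):
    """Polynomial-time verifier for SUBSET-SUM: x = (nums, target),
    cert = a tuple of bits selecting a subset of nums."""
    nums, target = x
    return sum(n for n, b in zip(nums, cert) if b) == target

def decider(x):
    """Exponential-time decider built from the verifier alone:
    accept iff SOME certificate of bounded length is accepted."""
    nums, _ = x
    return any(verifier(x, cert) for cert in product((0, 1), repeat=len(nums)))

print(decider(([3, 7, 1], 8)))   # True  (7 + 1 = 8)
print(decider(([3, 7, 1], 6)))   # False (no subset sums to 6)
```

The loop runs over $2^{p(|x|)}$ certificates, which is why this construction shows $L \in$ EXP but says nothing about efficiency.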
{ "domain": "cs.stackexchange", "id": 17311, "tags": "turing-machines, data-compression, semi-decidability" }
Is a liquid vitamin E supplement possible without additives?
Question: Looking at E vitamin products sold commercially, all “pure vitamin E oil” products seem to be for skincare rather than consumption as a supplement. I have read that d-alpha tocopherol is the form of vitamin E with greatest bioavailability, but even liquid supplement products often contain sunflower or other oils in addition, which I understand to be a source of vitamin E but which of course are not entirely vitamin E. Is a pure liquid supplement possible or practical for consumption as a supplement, and if not, why? Answer: As a practical matter, vitamins generally can't be provided to consumers in their pure forms. The problem is that we need so little of them. For instance, the RDA for d-alpha tocopherol is 15 mg. Consumers simply don't have the training or equipment necessary to precisely measure out 15 mg (or the time or patience to do this daily!). Most likely they would overdose if they tried. Thus the vitamin E has to be diluted with a much larger quantity of edible material into which it can dissolve (such as a vegetable oil), to make the size of a single dose large enough for a consumer to be able to measure out. For instance, if we were to dilute the vitamin E 100-fold with vegetable oil, then 1 RDA would be 1.5 g (instead of 15 mg), which works out to about 1/3 tsp. That's a dosage consumers can handle. Of course, the typical approach is not to offer it as a liquid, but rather conveniently encapsulated.
{ "domain": "chemistry.stackexchange", "id": 12881, "tags": "organic-chemistry, food-chemistry, medicinal-chemistry" }
Generating a 64 bit unique integer in Java
Question: I need to generate a 64 bit unique integer in Java. I need to make sure that there is very few or no collisions if possible. I came up with the below code which works fine: public class TestUniqueness { private static final AtomicLong TS = new AtomicLong(); public static void main(String[] args) { // for testing, just added the for loop for (int i = 1; i <= 100000; i++) { System.out.println(getUniqueTimestamp()); } } public static long getUniqueTimestamp() { long micros = System.currentTimeMillis() * 1000; for (;;) { long value = TS.get(); if (micros <= value) micros = value + 1; if (TS.compareAndSet(value, micros)) return micros; } } } I will be running the above code in production. Answer: My specification is only to generate the 64 bit unique integer, that's it. In this case, there's no need for anything more complicated than atomically incrementing a counter: public class Counter { private static final AtomicLong counter = new AtomicLong(0); public static long getNextNumber(){ return counter.incrementAndGet(); } } To offer a more specific critique of your code, it's unnecessarily complicated and inefficient. I don't see anything incorrect about it (i.e. it seems like it meets your criteria), but there's the old saying about obviously no bugs vs no obvious bugs (paraphrased).
{ "domain": "codereview.stackexchange", "id": 12086, "tags": "java, random" }
Taking a trace using a continuous spectrum of eigenstates
Question: This may be a simple question, but I have not been able to find an adequate discussion in any source that quite answers it. In many cases in quantum mechanics, traces are evaluated using the discrete spectrum of the Hamiltonian: $Tr[A] = \sum \langle n|A|n\rangle$. Is there a generalization for a Hamiltonian with a continuous spectrum? As a specific example, take $H=\frac{p^2}{2}$, the non-relativistic free particle Hamiltonian. The eigenvalues of the Hamiltonian are just momentum eigenstates, with corresponding energies $E(p) = p^2/2$. So we could write the energy eigenstates as $|E,\pm\rangle$, where $E$ ranges from 0 to $\infty$ and plus/minus denotes right/left moving particles. Say I want to calculate the partition function, $Z(\beta) = Tr[e^{-\beta H}]$. If I try this naively by expanding the trace in the momentum basis, I get $$Z(\beta) = \int_{\mathbb{R}} dp \langle p | e^{-\beta p^2/2} | p \rangle = \int dp e^{-\beta p^2 /2} \delta(0) = \delta(0)\sqrt{\frac{2\pi}{\beta}}.$$ On the other hand, if I expand in the energy eigenbasis, I would naively get $$Z(\beta) = \sum\limits_{s\in\{+,-\}}\int_{\mathbb{R}\geq0} dE \langle E,s| e^{-\beta H} | E,s \rangle = 2\int dE e^{-\beta E} \delta(0) = \frac{2}{\beta}\delta(0),$$ which is not the same as above. My suspicion of what went wrong in this particular calculation is that the measure for evaluating the trace in the energy basis was incorrect (it might be possible to see this by changing variables in the momentum basis integral), but I am not sure. My second suspicion is that rigged Hilbert space formalism may be able to clear up the ambiguity. Regardless, it would be useful to see under what conditions an integral-type trace as above exists and is well-defined. Answer: The momentum eigenvalue density of states is $dp/2\pi$ per unit volume. 
The momentum-density-of-states partition function for $H=\hat p^2$ is therefore $$ {\rm Vol} \int_{-\infty}^{\infty} \frac{dp}{2\pi} e^{-\beta p^2} ={\rm Vol} \frac 1{2\pi}\sqrt{\frac{\pi}{\beta}}. $$ Since $dE = dp^2 = 2p dp= 2\sqrt E \,dp$ the energy eigenvalue density of states per unit volume is $ dE/\sqrt E$. This is twice as big as one would naively expect from the expression for $dE$. This doubling is because the energy range $E=(0,\infty)$ is covered twice as $p$ ranges from $-\infty$ to $\infty$. The energy trace per unit volume is therefore $$ \int_0^\infty e^{-\beta E} E^{-1/2} \frac{dE}{2\pi}= \frac{1}{2\pi \sqrt\beta} \Gamma(1/2)= \frac{1}{2\pi \sqrt \beta}\sqrt \pi, $$ which is the same as the momentum calculation. A useful observation is that $\delta(0)$ in position space, while not mathematically well defined, has the physical interpretation of the density of momentum states per unit spatial volume. Meanwhile the equation $$ \int_{-\infty}^\infty e^{ikx} dx=2\pi \delta(k) $$ shows that $2\pi \delta(0)$ in momentum space is the volume of the system.
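The equality of the two traces per unit volume can be checked symbolically (a sketch with sympy, my own addition, using the answer's convention $H = \hat p^2$):

```python
import sympy as sp

p, E, beta = sp.symbols('p E beta', positive=True)

# Momentum-basis trace per unit volume: integral of dp/(2 pi) e^{-beta p^2}
Z_p = sp.integrate(sp.exp(-beta * p**2), (p, -sp.oo, sp.oo)) / (2 * sp.pi)

# Energy-basis trace per unit volume: integral of dE/(2 pi) E^{-1/2} e^{-beta E},
# the E^{-1/2} coming from the doubled density of states dE / sqrt(E)
Z_E = sp.integrate(sp.exp(-beta * E) / sp.sqrt(E), (E, 0, sp.oo)) / (2 * sp.pi)

print(sp.simplify(Z_p - Z_E))   # 0: the two bases give the same trace
```

Both integrals evaluate to $\sqrt{\pi/\beta}/(2\pi)$, confirming that the discrepancy in the question came from the measure, not from the trace itself.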
{ "domain": "physics.stackexchange", "id": 55501, "tags": "quantum-mechanics, operators, hilbert-space, mathematical-physics, partition-function" }
Python script to scrape titles of public Youtube playlist
Question: Just started in Python; wrote a script to get the names of all the titles in a public Youtube playlist given as input, but it got messier than it might have to be. I looked around online and found HTMLParser, which I used to extract the titles, but it had problems with encoding which might have to do with there being foreign characters in the playlist HTML, so I messed around with encodes and decodes until it worked. But is there a prettier way to fix the problem? import urllib.request from html.parser import HTMLParser playlistUrl = input("gib nem: ") with urllib.request.urlopen(playlistUrl) as response: playlist = response.read() html = playlist.decode("utf-8").encode('cp1252','replace').decode('cp1252') titles = "" class MyHTMLParser(HTMLParser): def handle_starttag(self, tag, attrs): for attr in attrs: if attr[0] == "data-title": global titles titles += attr[1] + "\n" parser = MyHTMLParser() parser.feed(html) print(titles) with open("playlistNames.txt", "w") as f: f.write(titles) Answer: Well, how you handle the output of the titles can be improved. You don't need to fall back to using global variables here. They are very rarely really needed. Here it would be easier to make handle_starttag a generator, which is then consumed by str.join: class MyHTMLParser(HTMLParser): def handle_starttag(self, tag, attrs): for attr in attrs: if attr[0] == "data-title": yield attr[1] parser = MyHTMLParser() titles = '\n'.join(parser.feed(html)) print(titles) This assumes that HTMLParser.feed does not return any other values except from within the handle_starttag method (and that it actually returns the output of handle_starttag). Note that I increased the number of spaces to 4 per indentation level, as recommended by Python's official style-guide, PEP8. You also might want to add an early exit if the tag is not the correct tag. If those assumptions above about feed are wrong, you might want to look for a different tool. 
Most parsing is done with BeautifulSoup, as far as I can tell. It offers strainers, with which you can reduce the amount of HTML to parse to only those tags you care about and CSS selectors which would let you directly select all of those tags with the right attribute.
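Since HTMLParser.feed() actually returns None, a safer sketch than the generator version is to collect titles on an instance attribute (class name and sample HTML below are my own, for illustration):

```python
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Collects data-title attribute values on an instance list --
    no globals, and no assumptions about what feed() returns."""
    def __init__(self):
        super().__init__()
        self.titles = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs, names lowercased
        for name, value in attrs:
            if name == "data-title":
                self.titles.append(value)

parser = TitleParser()
parser.feed('<tr data-title="First Video"></tr><tr data-title="Second"></tr>')
print(parser.titles)   # ['First Video', 'Second']
```

After feeding the page, `"\n".join(parser.titles)` reproduces the original script's output string.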
{ "domain": "codereview.stackexchange", "id": 29308, "tags": "python, html, web-scraping, unicode, youtube" }
What type of camera instruments are used to record optical images in space telescopes like HUBBLE & JWST.?
Question: I am just curious about photos taken by space telescopes. What kind of camera systems are used in space telescopes like HUBBLE and JWST? Are those camera systems specially built for that purpose? Is there a significant difference between the cameras we use and those? What type of lenses are used? Answer: The HST and JWST are both mirror telescopes. There are many instruments on each, but the best known is the wide field and planetary camera (wfpc1), and its successors (wfpc2 and wfc3). These are all CCD cameras, and as such are based on the same basic technology that is found in many digital cameras, but there the similarity ends. The camera is placed at the primary focus of the telescope. The telescope, therefore, acts as the main "lens". There are secondary optics; in particular, wfpc2 and 3 contain optics to correct the incorrect curvature of the main mirror. These optics are relay mirrors, not lenses. Each camera is a one-off, designed and built in-house by NASA. Compared to modern cameras, the resolution of the CCD is not so high. The original WFPC had an 800x800 pixel CCD, the WFPC2 had 4 800x800 pixel CCD sensors (at different pixel densities). If these seem low, remember that Hubble was designed and built in the '80s. The cameras have to deal with a lot of things that the cameras we use don't. There are high-energy cosmic rays that can damage CCDs, and servicing required a shuttle launch. The CCDs are sensitive in a wide range of frequencies, from infrared to ultraviolet. The latest camera, installed in 2009, has two light paths as shown, separating the visible from the IR imaging. As noted by Sean Lake in a comment, CCDs are read out by moving charge across the detector, while infrared arrays read out individual pixels. IR arrays have more problems with stuck pixels or variable sensitivity across the array. Schematic from the Space Telescope Science Institute.
So the cameras on Hubble, and the cameras that will be placed on JWST are distant relations of commercial digital cameras. They have no lenses, can see in a wider range of frequencies. The camera and its housing are also about 2m wide. However, the basic technology of converting an image into a digital file is shared between commercial digital cameras and the HST
{ "domain": "astronomy.stackexchange", "id": 2290, "tags": "space-telescope, photography" }
roslaunch and command OS
Question: can you include operating system commands as superuser in a launch file? Originally posted by mag.rod on ROS Answers with karma: 58 on 2017-10-14 Post score: 0 Answer: can you include operating system commands [..] in a launch file? Please see #q272267. as superuser yes, see #q165246 for instance. But, also see #q189457, as it's rarely really needed to run things as root. In almost all cases it can be solved with configuring the proper permissions for the user starting the ROS node. Try to avoid it. Originally posted by gvdhoorn with karma: 86574 on 2017-10-14 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 29075, "tags": "ros, roslauch" }
How to do dimensional analysis?
Question: $$mgh = \frac{mc^2}{\sqrt{1-(v/c)^2}}-mc^2.$$ In dimensional analysis do we just ignore the square root? Or do we solve what’s inside first then we do the square root? Do we say $(v/c)^2$ is 1 as dimensions cancel? Then say each term now has the same dimension, so this is correct? I’m so confused please help! Answer: Since $v/c$ is dimensionless, the function $1/\sqrt{1-v^2/c^2}$ is dimensionless and dimensional analysis gets no information from this factor. It's not so much that you ignore this factor, but just it plays no role in dimensional analysis, just like "$3$" or $\pi$ does not enter in this analysis.
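The bookkeeping can be made concrete with a toy exponent-dictionary check (entirely my own construction, not a standard library): multiplying quantities adds base-unit exponents, and a dimensionless ratio like $v/c$ leaves an empty dictionary — so the square root contributes nothing to the analysis.

```python
def mul(*qs):
    """Multiply quantities by adding base-unit exponents (M, L, T).
    Exponents that cancel to zero are dropped, so a dimensionless
    result comes out as an empty dict."""
    out = {}
    for q in qs:
        for unit, exp in q.items():
            out[unit] = out.get(unit, 0) + exp
    return {u: e for u, e in out.items() if e != 0}

def inv(q):
    """Dimensions of 1/q."""
    return {u: -e for u, e in q.items()}

m, h = {'M': 1}, {'L': 1}        # mass, height
g = {'L': 1, 'T': -2}            # gravitational acceleration
c = {'L': 1, 'T': -1}            # speed of light
v = {'L': 1, 'T': -1}            # speed

mgh = mul(m, g, h)               # {'M': 1, 'L': 2, 'T': -2}: energy
mc2 = mul(m, c, c)               # same dimensions: energy
print(mgh == mc2)                # True -- every term in the equation is an energy
print(mul(v, inv(c)))            # {}   -- v/c is dimensionless
```

This mirrors the answer: since $v/c$ carries no dimensions, the whole factor $1/\sqrt{1-(v/c)^2}$ is dimensionless, and both sides of the equation reduce to $M L^2 T^{-2}$.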
{ "domain": "physics.stackexchange", "id": 53923, "tags": "energy, relativity, dimensional-analysis" }
Lift (elevator) and equivalence principle
Question: A famous example to explain the equivalence principle is a lift (elevator) in the space (no gravity field), moving up with the acceleration of "g". It is said there is no way (experiment) that one can tell he/she is inside the moving lift (acceleration of g) or inside a lift that is stationary on earth. Now, consider this experiment: if we think of earth escape velocity, if we shoot a bullet with earth escape velocity (or faster), it will never come back if we are on earth, but if we are in the moving lift, we eventually reach the bullet. How can we explain this? Answer: The Earth’s escape velocity is finite because the force of gravity decreases with increasing distance from the Earth’s centre. However, a lift with constant acceleration is equivalent to a uniform gravitational field which does not decrease with distance. For a uniform gravitational field there is no finite escape velocity - no matter how fast we fire a bullet it will eventually fall back to Earth again. Equivalently, no matter how fast we fire a bullet the accelerating lift will eventually catch up with it.
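A back-of-the-envelope Newtonian check (the numbers are my own, and relativistic effects of prolonged acceleration are ignored): in the lift's frame the "field" is uniform, so the floor's displacement $\frac12 g t^2$ always overtakes the bullet's $vt$, at $t = 2v/g$, no matter how large $v$ is.

```python
g = 9.81            # lift's constant acceleration, m/s^2
v = 11_200.0        # Earth's escape speed, m/s -- but any finite v works

# Solve (1/2) g t^2 = v t for t > 0: the lift's floor meets the bullet at
t_catch = 2 * v / g
print(t_catch)      # ~2283 s: the lift catches even an "escape velocity" bullet
```

The catch-up time grows linearly with $v$ but is always finite, which is exactly the statement that a uniform field has no escape velocity.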
{ "domain": "physics.stackexchange", "id": 83871, "tags": "equivalence-principle, escape-velocity" }
Can reinforcement learning algorithms be applied to computer vision problems?
Question: Can reinforcement learning algorithms be applied to computer vision problems? If yes, what are some examples of these applications? Answer: Yes, there are a couple of ways to apply reinforcement learning in computer vision problems. This mainly employs the principle of "applying the algorithm -> evaluating the outcome -> adopting the best outcome". The following are a couple of examples that use reinforcement learning in computer vision. CAD2RL: Real Single-Image Flight Without a Single Real Image. End-to-End Training of Deep Visuomotor Policies
{ "domain": "ai.stackexchange", "id": 289, "tags": "reinforcement-learning, computer-vision, applications, reference-request" }
Why do certain signals in proton NMR experience extensive splitting despite only having 1 neighbour?
Question: I came across the compound 3-methylpentan-2-ol. Looking at its proton NMR spectra, it seems there is a lot of splitting for the signal at 4 ppm. I'm assuming that the 4 ppm represents the proton in the hydroxyl group (since it clearly is much more deshielded than the other protons), but this would mean it only has one neighbouring hydrogen and hence should exist as a doublet. I just wanted to clarify, is there any reason why the 4 ppm peak has so much splitting despite only having one neighbour? Or otherwise, I have also considered, could I be incorrect in assuming the 4 ppm peak represents the hydroxyl group, and if so why? Answer: This question has a few different parts to it. Part 1 - what's that peak at 3.6ppm? Well, it's not the -OH peak. The peak at 3.6 is from the CH(OH). Part 2 - where is the -OH peak if that isn't it at 3.6ppm? Peaks from labile protons are frequently not observed, and have variable chemical shifts, unless great care is taken in sample preparation. This sample was run in CDCl3, and the -OH peak exchanges with H2O, and comes at ~1.6ppm. Part 3 - can we account for the splitting of the peak at 3.6ppm now we know what it is? The CH(OH) proton should couple to proton on the adjacent carbon (split to a doublet) and also the protons of the methyl group (split to a quartet). It won't couple to the -OH for the same reason we don't readily observe the -OH; it is rapidly exchanging with water. So, we expect that the peak at 3.6ppm should be a doublet of quartets. Looks more complicated than that to me. Part 4 - It looks more complicated that a doublet of quartets to me. What's happening there? Of course, 3-methylpentan-2-ol has 2 stereocentres, and so we have here a mixture of diastereomers in solution in the sample. Both diastereotopic CH(OH) protons have very similar chemical shifts, and so there are in fact 2 sets of doublets or quartets. It's also why the rest of the spectrum looks such a mess.
{ "domain": "chemistry.stackexchange", "id": 13431, "tags": "organic-chemistry, bond, spectroscopy, nmr-spectroscopy, hydrocarbons" }
3-function program computing connected components of a point cloud given a distance matrix
Question: I wrote this code in Haskell (instead of Python) for the educational benefit. Can anyone suggest ways to improve this code? I'm guessing that I'm using fromIntegral inefficiently. It takes two commandline arguments. The first is a path to a symmetric distance matrix. The second is a threshold. The program interprets vertices to be adjacent if their distance is less than the threshold. Then the program counts the number of connected components and the number of vertices in each connected component and prints this information. import System.Environment import Data.Matrix hiding (flatten) import qualified Data.Vector as V import Data.Graph import Data.Tree -- Turns a distance matrix to an adjacency matrix using a threshold, then prints the number -- and size of the connected components. -- Usage: run `stack run location_of_distance_matrix threshold` -- Output is in the form (number of bins, [number of vertices in each bin]). main :: IO () main = do args <- getArgs contents <- readFile $ args !! 0 let dmat = fromLists $ (map ((map (read :: String -> Float)) . words) (lines contents)) amat = amatFromDmat dmat $ read (args !! 1) (g,_,_) = graphFromEdges (map (\n -> (n, n, neighbours n amat)) [(1 :: Integer)..(fromIntegral $ ncols amat)]) comp = components g putStrLn $ show $ (length comp, map (length . flatten) comp) -- Transforms a distance matrix into an adjacency matrix using a threshold. amatFromDmat :: Matrix Float -> Float -> Matrix Bool amatFromDmat m e = matrix (nrows m) (ncols m) threshold where threshold (i,j) | i == j = False | m ! (i,j) < e = True | otherwise = False -- Outputs the list of neighbours of a vertex in a graph, taking an adjacency -- matrix. -- The addition and subtraction of 1 are here because vectors are 0-indexed but -- I made my graph vertices 1-indexed. neighbours :: Integer -> Matrix Bool -> [Integer] neighbours n mat = map (fromIntegral . (1+)) $ filter (\m -> row V.! 
m) [0..(ncols mat)-1] where row = getRow (fromIntegral n) mat Edit: I found a bug and improved the code a little bit. Answer: I haven't done a detailed review of Haskell code in a while, so I suspect my advice could structured better. Anyway, here's a mix of general and specific advice: "Functional core, imperative shell": Move more code out of main (and out of IO) into separate (pure) functions. The type signatures on the extracted functions will help with readability. Use types to model your domain. Haskell makes it easy to define expressive types, you should make use of that feature! :) For example, you could define type AdjacencyMatrix = Matrix Float. The Int <-> Integer conversions look unnecessary to me. Just stick to Int since the Data.Matrix API forces you to use it anyway. In general, it's a good idea to use as few partial functions as possible. (I see (!!), (Data.Vector.!), read, getRow and fromInteger) Since this is a script, using read for parsing is acceptable. Instead of indexing with (Data.Vector.!) and getRow, I'd try to map, fold or zip instead, which usually are total operations. Instead of extracting the command line arguments with (!!), you could write [filename, threshold] <- getArgs. amatFromDmat smells functorial to me, mostly because the input and output matrices have the same dimensions. Maybe try to implement it in terms of fmap. (Hint: If the input is a true distance matrix, the elements on the diagonal are the only ones that are 0.) Use qualified imports or import lists to make it more clear, where functions are coming from. (I personally prefer qualified imports) Tree has a Foldable instance and length is a method of Foldable. That means you can simply use length to get the size of the connected components. You don't need flatten.
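For cross-checking the Haskell program's output, here is the same pipeline (strict `<` threshold, diagonal excluded, component count plus component sizes) sketched in Python; the function name and example matrix are mine:

```python
def components_from_distances(dmat, eps):
    """Connected components of the graph whose vertices are points and
    whose edges join pairs with distance strictly below eps -- the same
    semantics as the Haskell program's amatFromDmat + components."""
    n = len(dmat)
    adj = [[i != j and dmat[i][j] < eps for j in range(n)] for i in range(n)]
    seen, comps = [False] * n, []
    for s in range(n):
        if seen[s]:
            continue
        stack, comp = [s], []       # iterative DFS over one component
        seen[s] = True
        while stack:
            u = stack.pop()
            comp.append(u)
            for w in range(n):
                if adj[u][w] and not seen[w]:
                    seen[w] = True
                    stack.append(w)
        comps.append(comp)
    return len(comps), [len(c) for c in comps]

d = [[0, 1, 9], [1, 0, 9], [9, 9, 0]]
print(components_from_distances(d, 2.0))   # (2, [2, 1])
```

Running both programs on the same distance matrix and threshold should produce identical (count, sizes) pairs, up to the ordering of the component-size list.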
{ "domain": "codereview.stackexchange", "id": 34599, "tags": "haskell, graph, matrix" }
How to launch nodelet? (RTABmap)
Question: Hi, I came up across remote mapping and RTABmap provides this launch file: <launch> <include file="$(find freenect_launch)/launch/freenect.launch"> <arg name="depth_registration" value="True" /> </include> <arg name="rate" default="5"/> <arg name="approx_sync" default="true" /> <!-- true for freenect driver --> <!-- Use same nodelet used by Freenect/OpenNI --> <group ns="camera"> <node pkg="nodelet" type="nodelet" name="data_throttle" args="load rtabmap_ros/rgbd_sync camera_nodelet_manager" output="screen"> <param name="compressed_rate" type="double" value="$(arg rate)"/> <param name="approx_sync" type="bool" value="$(arg approx_sync)"/> <remap from="rgb/image" to="rgb/image_rect_color"/> <remap from="depth/image" to="depth_registered/image_raw"/> <remap from="rgb/camera_info" to="rgb/camera_info"/> <remap from="rgbd_image" to="rgbd_image"/> </node> </group> </launch> What is the camera_nodelet_manager and how can I supply it? For example, if I use ZED camera they supply with the camera_manager: https://github.com/stereolabs/zed-ros-wrapper/blob/master/zed_wrapper/launch/zed_camera_nodelet.launch But how exactly can I use it? Originally posted by EdwardNur on ROS Answers with karma: 115 on 2019-04-01 Post score: 0 Original comments Comment by matlabbe on 2019-04-03: See this page: https://github.com/stereolabs/zed-ros-wrapper/tree/master/examples/zed_nodelet_example, showing how they start the nodelet manager. If you use the zed launch version without nodelet, launch rgbd_sync as a stanaldone nodelet like this : <node pkg="nodelet" type="nodelet" name="data_throttle" args="standalone rtabmap_ros/rgbd_sync" output="screen"> Answer: You can see #q304083 for an example. This launch file provides the nodelet manager when including freenect.launch. 
If you look at this file (source code) you will find this: <include file="$(find rgbd_launch)/launch/includes/manager.launch.xml"> <arg name="name" value="$(arg manager)" /> <arg name="debug" value="$(arg debug)" /> <arg name="num_worker_threads" value="$(arg num_worker_threads)" /> </include> This other launch file loads the nodelet manager for you. In the link you gave, the launch file doesn't load any nodelet manager but expects one: zed_nodelet_manager. So if you want to use this launch file, all you have to do is load a manager beforehand like this: rosrun nodelet nodelet manager __name:=zed_nodelet_manager Or add this line at the beginning of the launch file you linked: <node pkg="nodelet" type="nodelet" name="$(arg nodelet_manager_name)" args="manager"/> You need a nodelet manager because nodelets are like plugins: they are loaded at runtime by an executable, this executable being your nodelet manager. This link might help you understand the concept of nodelets. Originally posted by Delb with karma: 3907 on 2019-04-02 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 32800, "tags": "slam, navigation, ros-melodic, rtabmap, rtabmap-ros" }
Principle of Least Action
Question: Is the principle of least action actually a principle of least action or just one of stationary action? I think I read in Landau/Lifschitz that there are some examples where the action of an actual physical example is not minimal but only stationary. Does anybody of know such a physical example? Answer: By convention we try to set things up so that it's usually a minimum, but we can't make any definition of the action that would make that hold in all cases. In optics, consider a situation in two dimensions where you have an ellipsoidal cavity with reflecting walls. If you release a ray of light from the center, along the major axis, it gets reflected back to the center by following a path of maximum time. If you start a ray from the center, along the minor axis, it comes back after following a path of minimum time. You can choose the action to be the time or minus the time, but no matter what, one of these rays will be a minimum of the action and the other will be a maximum. In special relativity, you can take the action for a particle to be the proper time $s=\int ds$ (with $ds$ positive), or you can take it to be $-s$ or other possibilities such as $-mcs$ (Landau and Lifschitz's choice). It doesn't matter which sign you choose, because the physical predictions are the same either way. In both of these examples (optics and SR), you can make a choice of sign such that the action is minimized for infinitesimally short trajectories in free space. However, the example of the ellipsoidal cavity shows that you cannot in general make it a minimum for all paths of finite length. In relativity, the metric only defines $ds$ in absolute value, $ds^2=g_{ab}dx^a dx^b$. Also, we'd like to be able to talk about timelike, lightlike, and spacelike geodesics. 
If we choose the action for timelike geodesics to be real, then it has to be imaginary for spacelike ones -- or we could define it as $\int\sqrt{|ds^2|}$, but the absolute value could be a nuisance because it isn't a smooth function. For L&L's discussion of this, see Mechanics (3rd ed.), section 2; and Classical theory of fields (2nd ed.), sections 8, 53, and 87.
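A concrete way to see when the action stops being a minimum (not from L&L's text itself, but consistent with the discussion above) is to examine the second variation for a harmonic oscillator; the standard sketch is:

```latex
% Harmonic oscillator, L = (m/2)(\dot{x}^2 - \omega^2 x^2).
% Perturb a classical path: x = x_c + \eta, with \eta(0) = \eta(T) = 0.
\delta^2 S = \frac{m}{2}\int_0^T \left(\dot\eta^2 - \omega^2 \eta^2\right)dt
% Expand \eta(t) = \sum_n c_n \sin(n\pi t/T); the integrals decouple:
\delta^2 S = \frac{mT}{4}\sum_{n\ge 1} c_n^2\left(\frac{n^2\pi^2}{T^2} - \omega^2\right)
% Every coefficient is positive iff T < \pi/\omega. The classical path
% therefore minimizes the action only for short enough times; once T
% exceeds \pi/\omega (half a period), the first mode coefficient turns
% negative and the path is merely a stationary (saddle) point.
```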
{ "domain": "physics.stackexchange", "id": 8381, "tags": "classical-mechanics, lagrangian-formalism, variational-principle, action" }
Change unit-test structure to avoid try/except/finally clause
Question: I have 10 or 20 tests with the same structure which I would like to improve. The test itself is nested within a try/except/finally clause to make sure that some of the variables are deleted at the end. def test_time_correction(): """Test time_correction method.""" sinfo = StreamInfo("test", "", 2, 0.0, "int8", uuid.uuid4().hex[:6]) try: outlet = StreamOutlet(sinfo, chunk_size=3) inlet = StreamInlet(sinfo) inlet.open_stream(timeout=5) tc = inlet.time_correction(timeout=3) assert isinstance(tc, float) except Exception as error: raise error finally: try: del inlet except Exception: pass try: del outlet except Exception: pass In the example above, I have 2 variables inlet and outlet, but in other tests, the finally clause might be: finally: try: del outlet1 except Exception: pass try: del outlet2 except Exception: pass try: del outlet3 except Exception: pass Open to suggestion on how this could be improved, maybe through fixture or other means. Answer: Python uses garbage collection. Objects will be deleted automatically when they are no longer referenced. The del statement doesn't force the object to be destroyed, it just removes the variable from the scope. What your code achieves is extremely niche: if del destroys the object immediately and the object's destructor raises an exception then this exception will be ignored, rather than failing or crashing the tests. This is extremely niche because since Python is garbage collected, it doesn't have any kind of deterministic destruction. I have never seen a __del__() method (destructor) in practice, even though Python supports this feature, and it might be useful when integrating native code. See also these SO questions: What is the __del__ method and how do I call it? How do I correctly clean up a Python object? 
The latter post explains that if you have to perform cleanup in your code, then you should use the with statement, for which objects have to provide __enter__() and __exit__() methods per the context manager protocol. All of this means that your test is likely equivalent to: def test_time_correction(): """Test time_correction method.""" sinfo = StreamInfo("test", "", 2, 0.0, "int8", uuid.uuid4().hex[:6]) outlet = StreamOutlet(sinfo, chunk_size=3) inlet = StreamInlet(sinfo) inlet.open_stream(timeout=5) tc = inlet.time_correction(timeout=3) assert isinstance(tc, float) But it could make sense to use with-statements. For example: def test_time_correction(): """Test time_correction method.""" sinfo = StreamInfo("test", "", 2, 0.0, "int8", uuid.uuid4().hex[:6]) with sinfo.outlet(chunk_size=3) as outlet, sinfo.inlet() as inlet: inlet.open_stream(timeout=5) tc = inlet.time_correction(timeout=3) assert isinstance(tc, float) Pytest also has a feature for setting up and tearing down resources for a test, called fixtures. This feature is documented here: https://docs.pytest.org/en/stable/explanation/fixtures.html A fixture is a separate function that creates and tears down a value that is needed by a test function. Fixtures are invoked by giving the test function a parameter with a corresponding name. This is mostly useful if the same kind of object is needed for multiple test cases. For example: @pytest.fixture def sinfo(): sinfo = StreamInfo("test", "", 2, 0.0, "int8", uuid.uuid4().hex[:6]) yield sinfo clean_up(sinfo) @pytest.fixture def inlet(sinfo): inlet = StreamInlet(sinfo) inlet.open_stream(timeout=5) yield inlet clean_up(inlet) @pytest.fixture def outlet(sinfo): outlet = StreamOutlet(sinfo, chunk_size=3) yield outlet clean_up(outlet) def test_time_correction(inlet, outlet): """Test time_correction method.""" tc = inlet.time_correction(timeout=3) assert isinstance(tc, float)
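For the variable-count case (outlet1, outlet2, outlet3, ...), contextlib.ExitStack generalizes the with statement without hard-coding how many resources there are. A minimal self-contained sketch; the Resource class here is a hypothetical stand-in for StreamOutlet/StreamInlet, not the real API:

```python
from contextlib import ExitStack

closed = []

class Resource:
    """Hypothetical stand-in for StreamOutlet/StreamInlet with context-manager support."""
    def __init__(self, name):
        self.name = name
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        closed.append(self.name)  # record cleanup order

with ExitStack() as stack:
    # Register each resource as it is created; no fixed count needed.
    outlets = [stack.enter_context(Resource(f"outlet{i}")) for i in (1, 2, 3)]
    # ... test body would go here ...

# ExitStack unwinds in reverse order, like nested with statements.
print(closed)  # → ['outlet3', 'outlet2', 'outlet1']
```

enter_context registers each resource on the stack, and cleanup runs in reverse order even if the test body raises.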
{ "domain": "codereview.stackexchange", "id": 44863, "tags": "python, python-3.x, unit-testing" }
How to get all 3 labels' sentiment from finbert instead of the most likely label's?
Question: I'm using bert to do sentiment analysis. I previous used cardiffnlp's twitter-roberta-base-sentiment, https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment. It gives the the usage on its page. from transformers import AutoModelForSequenceClassification from transformers import TFAutoModelForSequenceClassification from transformers import AutoTokenizer import numpy as np from scipy.special import softmax import csv import urllib.request # Preprocess text (username and link placeholders) def preprocess(text): new_text = [] for t in text.split(" "): t = '@user' if t.startswith('@') and len(t) > 1 else t t = 'http' if t.startswith('http') else t new_text.append(t) return " ".join(new_text) # Tasks: # emoji, emotion, hate, irony, offensive, sentiment # stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary task='sentiment' MODEL = f"cardiffnlp/twitter-roberta-base-{task}" tokenizer = AutoTokenizer.from_pretrained(MODEL) # download label mapping labels=[] mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt" with urllib.request.urlopen(mapping_link) as f: html = f.read().decode('utf-8').split("\n") csvreader = csv.reader(html, delimiter='\t') labels = [row[1] for row in csvreader if len(row) > 1] # PT model = AutoModelForSequenceClassification.from_pretrained(MODEL) model.save_pretrained(MODEL) text = "Good night " text = preprocess(text) encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) scores = output[0][0].detach().numpy() scores = softmax(scores) # # TF # model = TFAutoModelForSequenceClassification.from_pretrained(MODEL) # model.save_pretrained(MODEL) # text = "Good night " # encoded_input = tokenizer(text, return_tensors='tf') # output = model(encoded_input) # scores = output[0][0].numpy() # scores = softmax(scores) It shows sentiments of all three labels, positive, neutral and negative. 
However, I'm now trying to use Finbert from ProsusAI to do sentiment analysis https://huggingface.co/ProsusAI/finbert. It doesn't give me its usage on its page. So I'm following this tutorial https://towardsdatascience.com/effortless-nlp-using-pre-trained-hugging-face-pipelines-with-just-3-lines-of-code-a4788d95754f. My code is from transformers import pipeline classifier = pipeline('sentiment-analysis', model='ProsusAI/finbert') classifier('Stocks rallied and the British pound gained.') However, the result is [{'label': 'positive', 'score': 0.8983612656593323}]. It only shows the sentiment of the most likely label's (positive). But I need all three labels' sentiment (positive, neutral and negative). How should I use it? Answer: You can get the scores for all labels as follows: from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch import scipy tokenizer = AutoTokenizer.from_pretrained("ProsusAI/finbert") model = AutoModelForSequenceClassification.from_pretrained("ProsusAI/finbert") inputs = tokenizer("Stocks rallied and the British pound gained.", return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits scores = {k: v for k, v in zip(model.config.id2label.values(), scipy.special.softmax(logits.numpy().squeeze()))} scores # {'negative': 0.034473564, 'neutral': 0.067165166, 'positive': 0.8983614}
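The conversion the answer performs is just a softmax over the model's three logits followed by a label lookup. That step can be sketched without the model at all; the logit values below are made-up numbers for illustration, not actual finbert output:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]  # shift for numerical stability
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical logits for one sentence; a real run would take them from
# model(**inputs).logits and the label names from model.config.id2label.
id2label = {0: "positive", 1: "negative", 2: "neutral"}
logits = [2.3, -1.1, 0.2]

scores = dict(zip(id2label.values(), softmax(logits)))
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))  # → positive 0.865
```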
{ "domain": "datascience.stackexchange", "id": 10959, "tags": "bert, transformer, sentiment-analysis, huggingface" }
subscriber queue length
Question: I understand that ROS queues all incoming messages until nodes are able to service those callbacks and that this queue length is specified in the constructor of a subscriber. But what I would like to know is, what happens if the subscriber specifies a queue length that is bigger than what ROS can support? Does the limit depend on some setting in ROS or does it merely depend on the amount of RAM available on the system? Originally posted by cassander on ROS Answers with karma: 121 on 2012-02-25 Post score: 2 Answer: There's no limit built into ROS. It will continue to work until you run out of RAM. Originally posted by tfoote with karma: 58457 on 2012-03-09 This answer was ACCEPTED on the original site Post score: 5
{ "domain": "robotics.stackexchange", "id": 8385, "tags": "ros, queue" }
Is the human biological clock genetically programmed or learnt?
Question: Argument favouring learning: A newborn sleeps for 20-22 hours. But over time (s)he learns to focus sleeping time to night time, according to his or her needs and family needs. Some sleep from 1 am to 7 am, some from 11 pm to 5 am. So it is learnt. Argument favouring a genetic origin: I read somewhere (I don't exactly remember the article) that the biological clock is not of 24, but of 25 hours. This means that the synchronisation is not maintained with the earth. If it were learnt, why is it not in synchrony with the world? Answer: Short answer: The biological master clock is neurologically and genetically hardwired. The biological clock is not affected by learning. Instead, it is constantly entrained, mainly by daylight. Background: The biological clock is hardwired in the human body and can be traced back to the master clock in the suprachiasmatic nuclei (SCN) of the hypothalamus (Fig. 1). Fig. 1. The SCN is entrained by environmental light captured by the retina in the eyes. Source: Neuroscience News. The SCN is an intrinsic oscillator that governs sleep and wake timing, and rhythms of temperature, hormones, mood, cognitive acuity, etc. These rhythms are entrained to 24 hours by the environmental light-dark cycle, primarily via a subset of photosensitive retinal ganglion cells that project directly to the SCN (Fig. 1). Using hormones and neuronal signals, the SCN entrains peripheral clocks of similar molecular mechanism present in many tissues (Pagani et al., 2010). In humans and other organisms, the timing of 24-hour behavior is governed by the period length of the circadian oscillator. This period is approximately, but not exactly, 24 hours long (circa diem), and ranges from 23.47–24.64 hours across people (23 hours 28 min to 24 hours 38 min) in laboratory conditions (Pagani et al., 2010). Why is it circa diem and not exactly 24 hours? Note that the absolute length of a day depends on the definition used.
For example, one complete rotation of the earth around its axis (the sidereal day) actually takes 23 hours, 56 minutes and 4.1 seconds. The solar day is 24 hours and is defined by the time it takes the sun to make an apparent circuit across the skies. More importantly, there is apparent solar time, sometimes called true solar time, which is determined by the daily apparent motion of the observed sun. It is based on the interval between two successive returns of the sun to the local meridian. The length of a solar day varies throughout the year, and the accumulated effect of these variations (equation of time) produces seasonal deviations of up to 16 minutes. The length of a day can also be defined in the number or hours from sunrise to sunset, which varies where you are on earth. At the equator every day of the year is exactly 12 hours long. At 50° latitude (e.g., England & Canada) the longest day is 16.5 hours and the shortest day 7.5 hours. At 35° latitude (e.g. many US states and Greece) the longest day is 14.5 hours and the shortest day 9.5 hours. Given that light is the most effective way to entrain the circadian rhythm, it is obvious why the absolute length of the intrinsic circadian rhythm is not important. As long as it is circa diem, it will do. Reference - Pagani et al., PloSONE (2010); 5(10): e13376
{ "domain": "biology.stackexchange", "id": 4012, "tags": "genetics, circadian-rhythms" }
What is the transpose of Lorentz transformation under spinor representation?
Question: Let $S$ be the Lorentz transformation under the spinor representation. From any quantum field theory textbook, we know that $$ S^\dagger=\gamma^0S^{-1}\gamma^0 \\ S^{-1}=\gamma^0S^\dagger\gamma^0 $$ where the $\gamma^\mu$ are the Dirac matrices ($\mu=0,1,2,3$). The question that confuses me is: how are $S^T$ and {$S, S^\dagger, S^{-1}, \gamma^\mu$} related? Answer: Short answer: it depends on which $4\times 4$ matrix representation is used. In any representation, as far as its action on a Dirac spinor $\psi$ is concerned, the connected part of the Lorentz group is generated by transformations of the form $$ \psi\rightarrow S \psi \hskip2cm S = \exp\left(\frac{\theta}{2}\gamma^\mu\gamma^\nu\right) \tag{1} $$ with $\mu\neq \nu$. This is a Lorentz boost if $\mu=0$ or $\nu=0$, and it is an ordinary rotation if $\mu\geq 1$ and $\nu\geq 1$. Now suppose that we use a representation in which $\gamma^\mu$ is: hermitian for $\mu=0$ and anti-hermitian for $\mu=1,2,3$; symmetric for $\mu=0,2$ and anti-symmetric for $\mu=1,3$. The representation used in Peskin and Schroeder's An Intro to QFT satisfies these conditions. In this representation, the matrix $$ C\propto \gamma^0\gamma^2 \tag{2} $$ satisfies $$ (\gamma^\mu)^T C = -C\gamma^\mu, \tag{3} $$ which implies $$ S^T C = C S^{-1}. \tag{4} $$ Therefore, under the Lorentz transformation $\psi\rightarrow S\psi$, the quantity $\psi^T C\psi$ transforms as a scalar: $$ \psi^T C\psi\rightarrow \psi^T S^T CS\psi = \psi^T C\psi \tag{5} $$ and the quantity $\psi^T C\gamma^\mu\psi$ transforms as a vector: $$ \psi^T C\gamma^\mu\psi\rightarrow \psi^T S^T C\gamma^\mu S\psi = \psi^T C (S^{-1}\gamma^\mu S)\psi. \tag{6} $$ By the way, the similar-looking quantity $\psi^T \gamma^\mu C\psi$ transforms like this: $$ \psi^T \gamma^\mu C\psi\rightarrow \psi^T S^T \gamma^\mu CS\psi = \psi^T S^T \gamma^\mu (S^T)^{-1}C\psi, \tag{7} $$ which is not the way a vector (or any other tensor) should transform.
In any representation, we can think of (4) as the defining condition for a matrix $C$. The matrix $C$ that satisfies this condition depends on the representation; but if such a matrix exists, then equations (5)-(6) hold automatically. We can think of equations (5)-(6) as the motive for the condition (4). Similarly, if we choose a matrix $A$ so that $$ S^\dagger A = A S^{-1}, \tag{8} $$ then the quantity $\psi^\dagger A\psi$ transforms as a scalar: $$ \psi^\dagger A\psi\rightarrow \psi^\dagger S^\dagger AS\psi = \psi^\dagger A\psi \tag{9} $$ and the quantity $\psi^\dagger A\gamma^\mu\psi$ transforms as a vector: $$ \psi^\dagger A\gamma^\mu\psi\rightarrow \psi^\dagger S^\dagger A\gamma^\mu S\psi = \psi^\dagger A (S^{-1}\gamma^\mu S)\psi. \tag{10} $$ We can think of equations (9)-(10) as the motive for the condition (8). In the representation described above, the familiar choice $A\propto\gamma^0$ satisfies the condition (8), but this depends on the representation.
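Condition (3) is easy to check numerically in a specific representation. The sketch below builds the standard Dirac-basis gamma matrices (a representation with the symmetry properties listed in the answer) and verifies $(\gamma^\mu)^T C = -C\gamma^\mu$ for $C = \gamma^0\gamma^2$, using only plain Python:

```python
# Plain-Python 4x4 complex matrices; no external libraries.

def mat(rows): return [list(r) for r in rows]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(A): return [list(col) for col in zip(*A)]

def neg(A): return [[-x for x in row] for row in A]

def close(A, B, eps=1e-12):
    return all(abs(A[i][j] - B[i][j]) < eps for i in range(4) for j in range(4))

# Dirac-basis gamma matrices, written out from the 2x2 Pauli blocks:
# gamma^0 = diag(1, 1, -1, -1), gamma^i = [[0, sigma_i], [-sigma_i, 0]].
g0 = mat([(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, -1, 0), (0, 0, 0, -1)])
g1 = mat([(0, 0, 0, 1), (0, 0, 1, 0), (0, -1, 0, 0), (-1, 0, 0, 0)])
g2 = mat([(0, 0, 0, -1j), (0, 0, 1j, 0), (0, 1j, 0, 0), (-1j, 0, 0, 0)])
g3 = mat([(0, 0, 1, 0), (0, 0, 0, -1), (-1, 0, 0, 0), (0, 1, 0, 0)])

C = matmul(g0, g2)  # any nonzero multiple of gamma^0 gamma^2 works equally well

# Check (gamma^mu)^T C == -C gamma^mu for all four matrices.
ok = all(close(matmul(transpose(g), C), neg(matmul(C, g)))
         for g in (g0, g1, g2, g3))
print(ok)  # → True
```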
{ "domain": "physics.stackexchange", "id": 53931, "tags": "quantum-field-theory, lorentz-symmetry, dirac-matrices" }
Is there a known maximum for how much a string of 0's and 1's can be compressed?
Question: A long time ago I read a newspaper article where a professor of some sort said that in the future we will be able to compress data to just two bits (or something like that). This is of course not correct (and it could be that my memory of what he exactly stated is not correct). Understandably it would not be practical to compress any string of 0's and 1's to just two bits because (even if it was technically possible) too many different kinds of strings would end up compressing to the same two bits (since there are only four two-bit strings to choose from). Anyway, this got me thinking about the feasibility of compressing an arbitrary-length string of 0's and 1's according to some scheme. For this kind of string, is there a known relationship between the string length (the ratio between 0's and 1's probably does not matter) and maximum compression? In other words, is there a way to determine the minimum (smallest possible) length that a string of 0's and 1's can be compressed to? (Here I am interested in the mathematical maximum compression, not what is currently technically possible.) Answer: Kolmogorov complexity is one approach for formalizing this mathematically. Unfortunately, computing the Kolmogorov complexity of a string is an uncomputable problem. See also: Approximating the Kolmogorov complexity. It's possible to get better results if you analyze the source of the string rather than the string itself. In other words, often the source can be modelled as a probabilistic process that randomly chooses a string according to some distribution. The entropy of that distribution then tells you the mathematically best possible compression (up to some small additive constant). On the impossibility of perfect compression, you might also be interested in the following. No compression algorithm can compress all input messages? Compression functions are only practical because "The bit strings which occur in practice are far from random"?
Is there any theoretically proven optimal compression algorithm?
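The pigeonhole bound behind the impossibility results linked above can be made concrete in a few lines; this is a generic counting sketch, not tied to any particular compressor:

```python
# Counting argument behind "no lossless compressor shrinks every input":
# there are 2**n bit strings of length n, but only 2**n - 1 strings of
# length strictly less than n, so some n-bit string has no shorter code.

def num_strings(length):            # bit strings of exactly this length
    return 2 ** length

def num_shorter_strings(length):    # lengths 0, 1, ..., length-1
    return sum(2 ** k for k in range(length))

for n in (1, 8, 64):
    assert num_shorter_strings(n) == num_strings(n) - 1  # 2^n - 1 < 2^n

# Any injective (lossless) map from n-bit strings into shorter strings is
# therefore impossible: the target set is strictly smaller than the source.
n = 4
sources = num_strings(n)
targets = num_shorter_strings(n)
print(sources, targets)  # → 16 15
```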
{ "domain": "cs.stackexchange", "id": 5640, "tags": "data-compression" }
rosrun rqt_graph rqt_graph shows error
Question: http://wiki.ros.org/ROS/Tutorials/UnderstandingTopics Environment: ROS kinetic Ubuntu 16.04 ERROR: rosrun rqt_graph rqt_graph Could not import "pyqt" bindings of qt_gui_cpp library - so C++ plugins will not be available: Traceback (most recent call last): File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui_cpp/cpp_binding_helper.py", line 43, in import libqt_gui_cpp_sip ModuleNotFoundError: No module named 'libqt_gui_cpp_sip' Traceback (most recent call last): File "/opt/ros/kinetic/lib/rqt_graph/rqt_graph", line 8, in sys.exit(main.main(sys.argv, standalone='rqt_graph.ros_graph.RosGraph')) File "/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_gui/main.py", line 59, in main return super(Main, self).main(argv, standalone=standalone, plugin_argument_provider=plugin_argument_provider, plugin_manager_settings_prefix=str(hash(os.environ['ROS_PACKAGE_PATH']))) File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/main.py", line 505, in main plugin = plugins.keys()[0] TypeError: 'dict_keys' object does not support indexing! Could someone please tell me what exactly is going wrong here ? Thanks in advance! Originally posted by Kfi on ROS Answers with karma: 31 on 2017-02-21 Post score: 3 Original comments Comment by steven on 2017-04-14: Exact same problem, please help. Answer: You can use an alias in ubuntu just edit the "~/.bashrc" or the "~/.bash_aliases" file and add: alias python3=python Test it: $ python --version Python 2.7.12 $ python3 --version Python 3.5.2 enjoy! Originally posted by ΦXocę 웃 Пepeúpa ツ with karma: 424 on 2017-04-19 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 27084, "tags": "ros, rqt-graph" }
What is the difference between representation and embedding?
Question: When I searched these two terms, I found they are somewhat alike: as I understand it, both try to create a vector from raw data. But what is the difference between the two terms? Answer: Vector representation is a generic term for any type of feature encoding; embedding vectors are a special case of vector representation. When talking about vector representation, the only underlying assumption is that every variable was encoded into numerical values, without any restriction regarding the numbers or the vector itself. Embedding vectors are specifically continuous vectors of fixed dimension obtained through matrix factorization techniques or deep learning models. They were originally proposed to encode text in the Word2Vec paper, and since then they have acquired more and more popularity due to the high generalization potential of the method in AI branches beyond natural language processing.
{ "domain": "ai.stackexchange", "id": 3616, "tags": "machine-learning, deep-learning, terminology, embeddings, representation-learning" }
Which part of the computer allocates memory in RAM?
Question: When we declare a variable, a seemingly random part of memory is allocated in RAM. Which component allocates the memory? Is the processor or some other specific hardware doing the allocation? Answer: When you compile your program, the compiler sets up (relative) positions of memory areas for variables in each piece. The linker takes several separately compiled pieces and libraries, and creates an executable file, which lays out places for each variable. The executable file is loaded into memory by a loader, which (under the control of the operating system) loads the program at an address in memory, assigning fixed memory positions to the layout given before. Modern programming languages also allow you to define functions/procedures/blocks with local variables, which get assigned memory addresses at runtime. When the procedure/function is called or the block entered, memory is assigned to the variable; on exit the space is reclaimed. But then again, modern operating systems use virtual memory, so the variables will be placed at fixed virtual addresses only... the operating system shuffles the memory areas around at will. And just to muddy the waters a bit more, while analyzing your program the compiler might decide to have the value residing in different places (in memory or in a register) during the program, have several variables share the same space when they aren't used at the same time, or sometimes even get rid of the variable completely.
{ "domain": "cs.stackexchange", "id": 2359, "tags": "operating-systems, memory-management, memory-access, memory-hardware" }
When to use the dependency EXPORTED_TARGETS vs generate_messages_cpp
Question: Let's suppose pkg1 creates new services, messages, and actions. If I want a cpp ROS node from within this pkg1 to ensure that the messages, services, and actions are all built before the cpp file is built, would I add: add_dependencies(cpp_file_target pkg1_EXPORTED_TARGETS) or add_dependencies(cpp_file_target pkg1_generate_messages_cpp)? Really, what's the difference in this case? Additionally, if pkg2 wanted to use messages from pkg1, and find_package() included pkg1, would a dependency on ${catkin_EXPORTED_TARGETS} be sufficient? Or would pkg2 need to depend on pkg1_generate_messages_cpp? I wasn't able to find a clear answer within the documentation and would appreciate guidance for best practices. Originally posted by kacaroll on ROS Answers with karma: 41 on 2023-01-04 Post score: 0 Original comments Comment by gvdhoorn on 2023-01-05: I seem to remember this has been discussed before. See #q286311, #q201227 and #q52744 for instance. The second Q&A links to relevant documentation. The third Q&A's accepted answer has a bunch of comments which go into some more detail, and the second answer is the preferred way of doing things "nowadays" (note: the third Q&A is from 2013). If you could read those (and perhaps find some additional info, use Google, append site:answers.ros.org to your query), and things are still unclear, please update your question and explain what is unclear exactly. Let's try to avoid duplicating discussions. Comment by kacaroll on 2023-01-05: Thanks for the tip. This additional discussion along with your links answered my question: Answer: After being pointed to the right links, found out that it is best practice to always use catkin_EXPORTED_TARGETS for any pkg that depends on any ROS message, service, etc... pkg1_EXPORTED_TARGETS is only required if pkg1 creates new messages, actions, headers, etc.. 
(non libs) pkg1_generate_messages_cpp is a subset of pkg1_EXPORTED_TARGETS catkin_EXPORTED_TARGETS does NOT include messages, actions, headers, etc... from the current project Originally posted by kacaroll with karma: 41 on 2023-01-05 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 38210, "tags": "ros, ros-melodic, catkin, build, add-dependencies" }
Temperature and remanent magnetization
Question: Is it necessary to take into account the temperature of the environment when determining the remanent field $B_r$ of a magnet? According to the datasheet of the magnet, $B_r$ is somewhere between $1.29-1.32$ T and the maximum operating temperature is 80 °C. Answer: The temperature should be taken into account, since $B_r$ can change due to the temperature history of the magnet. If the magnet was heated beyond the operating temperature of the material, the residual energy in the magnet will be reduced from when it was initially magnetized.
{ "domain": "physics.stackexchange", "id": 91732, "tags": "ferromagnetism" }
Does time or space complexity of arithmetic operations get affected by the number of digits?
Question: Suppose I have two 5-digit numbers (A and B) and two 50-digit numbers (C and D). Do the operations A+B and C+D have equal complexity in terms of time and space, or is C+D more complex due to the larger number of digits? Answer: The complexity of arithmetic operations is typically measured in bit operations, though not everybody agrees this is the correct measure. There is a Wikipedia page listing the complexity of the best algorithms currently known for various operations, including addition and multiplication.
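As a concrete illustration of the digit-operation measure, a schoolbook adder that counts its own work shows the linear dependence on the number of digits; this is a sketch of the counting argument, not a claim about how any particular CPU or bignum library implements addition:

```python
# Schoolbook addition touches every digit position once, so adding two
# d-digit numbers costs O(d) digit operations: a 50-digit addition does
# roughly ten times the work of a 5-digit one.

def add_counting_ops(a, b):
    """Add two non-negative integers digit by digit, counting digit ops."""
    x, y = str(a)[::-1], str(b)[::-1]   # least-significant digit first
    carry, ops, digits = 0, 0, []
    for i in range(max(len(x), len(y))):
        da = int(x[i]) if i < len(x) else 0
        db = int(y[i]) if i < len(y) else 0
        s = da + db + carry
        digits.append(s % 10)
        carry = s // 10
        ops += 1                        # one digit operation per position
    if carry:
        digits.append(carry)
    return int("".join(map(str, reversed(digits)))), ops

small_sum, small_ops = add_counting_ops(12345, 67890)           # 5 digits each
big_sum, big_ops = add_counting_ops(10 ** 49 + 7, 3 * 10 ** 49)  # 50 digits each
print(small_ops, big_ops)  # → 5 50
```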
{ "domain": "cs.stackexchange", "id": 16112, "tags": "complexity-theory, time-complexity, space-complexity, arithmetic" }
Rock chips showing magnetic property?
Question: If I place two rock (stone) chips close together with a wide area of contact, rotate another stone around these two chips for some time (around 30 seconds), and then try to separate the two stones I first put into contact, there seems to be an attraction between them. Why is this happening? Please explain the phenomenon behind it. Answer: The stone contains a ferromagnetic material, which is a material with a spontaneous magnetization. In thermal equilibrium, such a material aligns itself to be magnetized in the same direction. But over a very large distance, you can reduce the bulk energy by randomizing the field direction, so that it pays to put in very large domain walls separating different magnetization regions. When there is an external magnetic field, it makes the domains line up. The dynamics of the domains is extremely slow, so you end up with a magnetized rock. You are seeing magnetization caused by placing a ferromagnet in the magnetic field of another. You don't need to rotate to see this, just hold them close and wait.
{ "domain": "physics.stackexchange", "id": 1962, "tags": "electromagnetism, geomagnetism" }
Calculating the solar power hitting Earth
Question: I'm trying to calculate the amount of power from the sun hitting the earth, but I am getting a number which is off by a factor of ~4. I calculate the "area" of the earth, as seen from the sun, and then divide that by the surface area of a sphere of radius 1 AU, to get the portion of the sun's rays we absorb, and then I multiply that by the solar luminosity. Mathematica gives me this: But when I put it into Wolfram|Alpha, I get this result: Am I doing something wrong? Where could that big an error be introduced? Is "mean power intercepted by earth" different from what I'm calculating? Answer: The area of a circle is calculated using its radius instead of its diameter: $$ A = \pi r^2 = \pi \left(\frac{d}{2}\right)^2 = \pi \frac{d^2}{4}$$ which is where your missing factor of 4 reappears.
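For concreteness, here is a small numeric sketch of the corrected calculation; the constants (solar luminosity, astronomical unit, Earth radius) are standard textbook values supplied here for illustration, not taken from the post:

```python
import math

L_sun = 3.828e26   # solar luminosity, W
AU = 1.496e11      # Earth-Sun distance, m
R_e = 6.371e6      # Earth radius, m

# Flux at 1 AU (the "solar constant", ~1361 W/m^2):
flux = L_sun / (4 * math.pi * AU**2)

# Earth's cross-section as seen from the Sun is pi*r^2, NOT pi*d^2 --
# using the diameter in place of the radius is exactly the factor-of-4 error.
power = flux * math.pi * R_e**2   # ~1.7e17 W intercepted by Earth
print(f"{flux:.0f} W/m^2, {power:.3e} W")
```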
{ "domain": "physics.stackexchange", "id": 10692, "tags": "homework-and-exercises, astrophysics, sun, luminosity" }
Recurrence doesn't add up
Question: I made a recurrence tree and guessed that the solution to $T(n)=2T(n-2)+n$ is $O(2^{n/2})$, and I am now trying to prove this through substitution. These are my steps so far, but I can't get it to pass for some reason: \begin{align} T(n-2) &\leq c2^{(n-2)/2}\\ T(n) &\leq 2(c2^{(n-2)/2})+n\\ &\leq c2^{n/2} \quad (?) \end{align} But as far as I can see, it can never add up, because it will always be greater by the $+n$ term. Any help will be much appreciated. Answer: The hints you've been given almost work. The important idea is that your guess, $T(n)\le c2^{n/2}$, while correct, doesn't behave nicely in your induction proof, as you noted. What's the problem? Clearly, it's that "$+n$" term in the recurrence, so let's try to make it work by choosing a better guess: $$ T(n)\le c2^{n/2}-kn \quad\text{for some suitable } k $$ With this guess, we'll have $$\begin{align} T(n)&=2T(n-2)+n\\ &\le 2(c2^{(n-2)/2}-k(n-2))+n\\ &=c2^{n/2}-2k(n-2)+n\\ &=c2^{n/2}-2kn+4k+n \end{align}$$ and if we can find a suitable $k$ to make this less than or equal to $c2^{n/2}-kn$ we'll be done. In other words, we need to find $k$ such that $$ -2kn+4k+n\le -kn $$ It's not hard to see that this will happen if $$ k\ge\frac{n}{n-4} $$ and this will be satisfied by $k=5$ for any integer $n>4$. Now go back to the inductive proof and recast it for the guess $T(n)\le c2^{n/2}-5n$. You'll find that everything works nicely. Finally, observe that $c2^{n/2}-5n=O(2^{n/2})$ and you'll be done. This is explained in a bit more detail in this answer to a similar question.
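The strengthened guess is easy to sanity-check numerically; the base cases and the choice $c=40$ below are illustrative (any $c$ large enough to cover the small cases works), not part of the answer:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n: int) -> int:
    """The recurrence T(n) = 2T(n-2) + n with illustrative base cases T(n<=4) = 1."""
    if n <= 4:
        return 1
    return 2 * T(n - 2) + n

# Check that T(n) <= c*2^(n/2) - 5n for a c big enough at the induction base.
c = 40
for n in range(5, 60):
    assert T(n) <= c * 2 ** (n / 2) - 5 * n, n
print("bound holds up to n = 59")
```

The subtracted $5n$ term is what absorbs the $+n$ added at each level; the raw guess $c2^{n/2}$ alone fails the inductive step for exactly the reason noted in the question.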
{ "domain": "cs.stackexchange", "id": 2801, "tags": "recurrence-relation" }
Making a page update based on the progress of a process
Question: Now this is something I've looked into, and while I have a "working" solution, I don't like it. Background: Through our intranet website, we want to run a process that copies a file from another machine, unzips it, and then analyzes the content. As the file is large, this takes some time (usually around 5-6 minutes). Rather than have the user just hit a Button and pray that they get a success message in 5-6 minutes, we want to show the progress via updates to a TextBox. What I've learned so far: This isn't as simple as putting everything into an UpdatePanel and updating it at various steps in the process. Seems like it would be, but it's not. I looked into threading as well, but I couldn't get it working. That is to say, I got the process to run on a separate thread, but while it was running, the interface wouldn't update. It would queue up everything, and then display it all at once, once the process finished. As I'm still relatively new, the possibility that I was just doing something wrong is high. What I have (which works, I guess...): Two .aspx pages, DatabaseChecker.aspx and Processing.aspx DatabaseChecker.aspx: <form id="form1" runat="server"> <asp:Button ID="btnExecute" runat="server" onclick="btnExecute_Click" style="height: 26px" Text="Execute" /> <iframe src="Processing.aspx" name="sample" width="100%" height="700px" style="border-style: none; overflow: hidden;"> </iframe> </form> Processing.aspx: <meta http-equiv="refresh" content="1" /> <form id="form1" runat="server"> <asp:TextBox ID="TextBox1" runat="server" Height="633px" TextMode="MultiLine" Width="504px"></asp:TextBox> </form> The .aspx portion is very simple. DatabaseChecker.aspx simply has a button to begin the process, and it has Processing embedded as an iframe. The result looks like this: The TextBox1 in Processing.aspx is where the progress update goes. Now let me just point out the dirty trick and the part I don't like right now. 
In Processing.aspx, there is a meta tag to refresh the page once per second. How it works (summary): When the process starts, a Session variable called ["Running"] is set to true. When the process ends, Session["Running"] is set to false. And since Processing.aspx refreshes once per second, what happens is it saves the current contents of TextBox1.Text to another Session variable called ["TextBoxContent"]. And then the Page_Load method for Processing.aspx fills the TextBox back up with the previous content, and adds a period. So the output will begin simply looking like "Process Starting", but after 10 seconds it will look like "Process Starting.........." (one period per second). How it works (details): The process begins in DatabaseChecker.aspx's Execute button: protected void btnExecute_Click(object sender, EventArgs e) { Session["TextBoxContent"] = "Copying .zip file from other machine..."; Session["Running"] = true; Thread thread = new Thread(new ThreadStart(TheProcess)); thread.IsBackground = true; thread.Start(); } private void TheProcess() { CopyFromOtherMachine(); UnzipFiles(); ConvertTextFilesToDataTables(); //and so on and so forth Session["Running"] = false; } private void CopyFromOtherMachine() { if (File.Exists(Path.Combine(FileRootDirectory, DesiredFileName))) { Session["TextBoxContent"] += "Previous .zip file already detected. Deleting..."; File.Delete(Path.Combine(FileRootDirectory, DesiredFileName)); Session["TextBoxContent"] += "OK!" + Environment.NewLine; } Session["TextBoxContent"] += "Copying .zip from other machine..."; File.Copy(@"\\someothermachine\production\desiredfile.zip", Path.Combine(FileRootDirectory, DesiredFileName)); Session["TextBoxContent"] += "OK!"
+ Environment.NewLine; } // UnzipFile() // ConvertTextFilesToDataTables() // and so on And then we have Processing.aspx's Page_Load method, which is where we display our progress: protected void Page_Load(object sender, EventArgs e) { if (Session["TextBoxContent"] != null) { TextBox1.Text = Session["TextBoxContent"].ToString(); if ((bool)Session["Running"] != false) { Session["TextBoxContent"] += "."; } } } What I want to improve: Basically, everything. This whole thing feels like a really makeshift house of cards. In particular, I don't like that the page has to update once per second, particularly because the Windows mouse icon changes to loading and not loading cursors very fast. If the user knows the system and knows what's going on, then yeah big deal I guess, we can just deal with it because we know what's going on. But I think to the average user, the behavior is jarring, it feels like something might be wrong or something. Hopefully I've made it clear what I'm trying to achieve overall, so I'm open to other ideas about how to go about it. Answer: Have you considered using SignalR? As their homepage states, it's a "library for ASP.NET developers that makes it incredibly simple to add real-time web functionality to your applications. What is "real-time web" functionality? It's the ability to have your server-side code push content to the connected clients as it happens, in real-time."
{ "domain": "codereview.stackexchange", "id": 5991, "tags": "c#, asp.net" }
Connected and strongly connected Feynman diagrams
Question: Recently I read that only connected Feynman diagrams contribute nonzero values to the scattering amplitude. Why is this so, and what is the physical sense of connected diagrams (per their definition in Wikipedia)? Also, I don't understand why strongly connected (= one-particle irreducible) Feynman diagrams are so important in scattering theory. In other words, I don't understand why we cut one of the internal lines in the diagram, and whether this relates to some physical process. Answer: The fact that only connected Feynman diagrams contribute to the scattering amplitude can be interpreted in terms of the vacuum of the theory. Omitting disconnected diagrams amounts to shifting the vacuum: the vacuum of the interacting theory differs from that of the free theory. Regarding your second question: strongly connected (also called one-particle irreducible) diagrams are needed in order to calculate loop corrections to the propagator. The exact propagator is given by a geometric series consisting of one-particle irreducible diagrams. Furthermore, they play a role in the calculation of the exact vertex function. I can recommend two excellent and free sources for more information on the subject: David Tong's lectures on QFT and Mark Srednicki's book.
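The geometric series mentioned in the answer can be written out schematically; sign and factor conventions for the self-energy $\Sigma$ vary between texts (the form below follows a Srednicki-style convention), so treat it as a sketch:

```latex
% Exact propagator as a geometric series of 1PI self-energy insertions -i\Sigma(p^2),
% dressing the free propagator \Delta_0(p) = i/(p^2 - m^2 + i\epsilon):
\Delta(p) = \Delta_0 + \Delta_0(-i\Sigma)\Delta_0
          + \Delta_0(-i\Sigma)\Delta_0(-i\Sigma)\Delta_0 + \cdots
          = \frac{\Delta_0}{1 + i\Sigma\,\Delta_0}
          = \frac{i}{p^2 - m^2 - \Sigma(p^2) + i\epsilon}.
```

Summing the series shows why 1PI diagrams are the natural building blocks: the full self-energy $\Sigma(p^2)$ shifts the pole of the propagator, i.e. it corrects the particle's mass.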
{ "domain": "physics.stackexchange", "id": 10696, "tags": "quantum-field-theory, scattering, feynman-diagrams, s-matrix-theory, 1pi-effective-action" }
How to assemble two quantum circuits which has each own qubit mapping state?
Question: I'm curious about something. I tried to do some qubit mapping using SABRE algorithm. Suppose I have two quantum circuits and apply SABRE algorithm to both of them. Then each of them has its own qubit mapping states. After that, I want to compose them to make it as one long quantum circuit. How can I do it? I used python code and qiskit. I have already searched qiskit API and used some functions (compose, combine, append). But the circuits could not be combined. How can I assemble two quantum circuit which have different qubit mapping state respectively? Thank you Answer: The short answer to composed circuit is the following. Given circuit1 and circuit2, you can do like this: circuit = circuit1 + circuit2 You can also do that with transpiled circuits: transpiled1 = transpile(circuit1, backend, routing_method='sabre') transpiled2 = transpile(circuit2, backend, routing_method='sabre') circuit = transpiled1 + transpiled2 Notice that the circuits to composed need to be the same size. After transpilation, that is ensured. transpile will make the circuit as big as the backend (given that you use the same backend during transpilation). The operation + will wire the links one-to-one. Here is an example to compose circuit with different sizes: circuit1 = QuantumCircuit(5) circuit1.mcx([0, 1, 3, 4], 2) print(circuit1) circuit2 = QuantumCircuit(2) circuit2.cx(0, 1) print(circuit2) q_0: ──■── │ q_1: ──■── ┌─┴─┐ q_2: ┤ X ├ └─┬─┘ q_3: ──■── │ q_4: ──■── q_0: ──■── ┌─┴─┐ q_1: ┤ X ├ └───┘ In this case, you need to use compose(..., qubits=...). The parameter qubits indicates how to wire the circuits. circuit = circuit1.compose(circuit2, qubits=[3,2]) print(circuit) q_0: ──■─────── │ q_1: ──■─────── ┌─┴─┐┌───┐ q_2: ┤ X ├┤ X ├ └─┬─┘└─┬─┘ q_3: ──■────■── │ q_4: ──■───────
{ "domain": "quantumcomputing.stackexchange", "id": 2895, "tags": "programming, qiskit" }
Decomposition of **3D** structuring elements for morphological operations
Question: I am struggling to implement a mathematical morphology toolset in an image processing package. I base my implementation on what I saw in MATLAB, and on several papers on the subject. There seems to be abundant literature on morphological operation optimization through structuring element (strel) decomposition. For instance, one can get a tremendous speed bonus by using two orthogonal lines instead of a square as the structuring element for dilation. Several papers give methods for optimization through strel decomposition: Rolf Adams, "Radial Decomposition of Discs and Spheres," CVGIP: Graphical Models and Image Processing, vol. 55, no. 5, September 1993, pp. 325-332. Rein van den Boomgard and Richard van Balen, "Methods for Fast Morphological Image Transforms Using Bitmapped Binary Images," CVGIP: Models and Image Processing, vol. 54, no. 3, May 1992, pp. 252-254. etc. However, all these publications are about 2D structuring elements. I could not find much on 3D decompositions. Do you have any clues on how to decompose: a 3D sphere (not the ball that is used in 2D grayscale morphology, but an actual flat 3D sphere); a 3D diamond? Answer: I guess this depends on the digital distance transform that one is approximating on the 3D grid, and there are various local connectivities possible. There is an implementation in ImageJ here. It would also be good to verify whether you are using a non-flat structuring element or a correct 3D structuring element; read the MATLAB reference here. In place of the Euclidean distance sqrt(x.^2 + y.^2 + z.^2) one could add the Manhattan distance using this. Quick 3D distance here. For 3D structuring element decomposition, see this reference. But the 3D decomposition of a convex shape like the sphere into separable lines is non-trivial; certain shapes, such as the cube, are easier than others. One can refer here for efficient algorithms for spheres.
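The speedup from line decomposition that the question cites is easy to demonstrate in 2D with a shift-and-OR dilation; the cube in 3D is the same idea with a third pass along z. This is a minimal pure-NumPy sketch, not taken from any of the cited papers; the 3x3 element and the random test image are illustrative:

```python
import numpy as np

def dilate(img: np.ndarray, offsets) -> np.ndarray:
    """Binary dilation: OR together copies of img shifted by each (dy, dx) offset."""
    out = np.zeros_like(img)
    for dy, dx in offsets:
        out |= np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out

square = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]  # 3x3 square: 9 shifts
h_line = [(0, dx) for dx in (-1, 0, 1)]                        # 1x3 line:   3 shifts
v_line = [(dy, 0) for dy in (-1, 0, 1)]                        # 3x1 line:   3 shifts

rng = np.random.default_rng(0)
img = (rng.random((32, 32)) > 0.9).astype(np.uint8)
img[0, :] = img[-1, :] = img[:, 0] = img[:, -1] = 0  # keep np.roll wrap-around harmless

direct = dilate(img, square)                  # 9 shift-OR passes
decomp = dilate(dilate(img, h_line), v_line)  # 3 + 3 passes: identical result
assert np.array_equal(direct, decomp)
```

The square is the Minkowski sum of its two line factors, so the results agree exactly; for a (2k+1)-cube in 3D the gap grows to $(2k+1)^3$ versus $3(2k+1)$ shifts. Spheres and diamonds do not factor this cleanly, which is precisely what makes the 3D question hard.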
{ "domain": "dsp.stackexchange", "id": 1329, "tags": "3d, morphological-operations, decomposition" }
How do we make an observation of a 4D spacetime trajectory if we, observers, have only access to a 3D world?
Question: So far, I know that a trajectory in general relativity is a 4-vector, and a force-free particle follows a geodesic which is in 4D as well. My question is: how do we make an observation of such a trajectory if we, observers, have only access to a 3D world? Excuse my question if it turned out to be too naive! Answer: We don’t only have access to a 3D world. We can measure time with clocks and three dimensions of space with rulers. To measure a 4D worldline we just measure the location at many times, typically with respect to some physically implemented coordinate system.
{ "domain": "physics.stackexchange", "id": 92601, "tags": "general-relativity, spacetime, spacetime-dimensions" }
Why are tertiary alcohols more reactive towards Lucas reagent but show low rates of esterification?
Question: Generally the order of reactivity of alcohols follows the order tertiary > secondary > primary. But why is it reversed in the case of esterification? Answer: Do you know the mechanism of esterification? The first step - which is also the rate-determining step - involves the nucleophilic attack of the lone pair on the alcohol's oxygen atom on the electrophilic carbon atom of the carboxylic acid. Hence, the more nucleophilic the oxygen atom, the more the equilibrium shifts to the right, favoring esterification. Now, can you tell which degree of alcohol is the most nucleophilic? And can you tell which degree of alcohol will favor esterification the most? For Lucas reagent, the reaction involves the formation of a carbocation (at the position where the hydroxyl group originally was) by elimination of the hydroxyl group as an oxonium ion. Thus, the more stable the carbocation, the more the Lucas reagent favors the white precipitate.
{ "domain": "chemistry.stackexchange", "id": 9712, "tags": "organic-chemistry" }
Gravity compensation
Question: I am just getting curious about this gravity compensation technique. If we could compensate the force generated by gravity (by feeding in the exact amount of force in the opposite direction), will the mass of the robot be 0 kg? Could we then just swing a 1000 kg robot arm with our hand? Answer: In an ideal scenario, yes, that should be the case. When gravity compensation is implemented on robots, all joints apply a torque to balance out the torque applied by the force of gravity. They should ideally turn into floating objects. However, it is not always the case, due to inaccurate modeling and imperfect gravity compensation implementation. Moreover, to swing a robot arm, you'll also need to overcome friction, stiction and other forces that oppose motion at the joint level. If you have a Zero-Friction Zero-Gravity controller (you feed gravity compensation torques as well as friction torques in a tight closed loop), then the robot should be simply floating in space, massless. In such a case, you can definitely move a robot arm around with your hand, regardless of the mass. That is because all the forces that the robot needs to overcome to break into motion have already been overcome, and the slightest external effort is extra input, sufficient to move the robot.
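For a single revolute joint, the compensation torque the answer describes is just the gravity torque with the sign flipped. This is a minimal sketch; the one-link model, the function name, and the numbers are illustrative, not from the post:

```python
import math

def gravity_comp_torque(m: float, l_com: float, theta: float, g: float = 9.81) -> float:
    """Motor torque (N*m) canceling gravity on a 1-DOF arm.
    m: link mass, l_com: joint-to-center-of-mass distance,
    theta: joint angle measured from the horizontal.
    Gravity exerts -m*g*l_com*cos(theta) about the joint; the motor applies the opposite."""
    return m * g * l_com * math.cos(theta)

# A 1000 kg arm with its COM 1 m out, held horizontally: the motor supplies
# ~9810 N*m continuously, so a hand pushing the arm only fights inertia and friction.
print(gravity_comp_torque(1000.0, 1.0, 0.0))
```

Note that inertia is untouched: the compensated arm feels weightless but not massless, so accelerating 1000 kg by hand is still slow work even in the ideal case.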
{ "domain": "robotics.stackexchange", "id": 2670, "tags": "robotic-arm, industrial-robot" }
Centripetal force in rotating mass attached with a string in gravitational field
Question: Say we have a mass, tied to a string, rotating around a centre point. At the bottom of the rotation, $F_g$ is pointing down and $F_t$ is pointing up. In this scenario, is $F_c = F_t - F_g$ (sum of all forces in the y axis), or is $F_c = F_t$? My teacher said that $F_c = F_t - F_g$. $F_g$ is NOT pointing towards the centre of the circle, so how can it be a centripetal force? Any help would be appreciated :) Answer: The centripetal force is the component of the net force that is directed towards the center of your circle and equals $mv^2/R$, all provided that the motion you are dealing with is circular. In the picture above, the thick red, purple and yellow forces all act on the green particle. Their projections along the center are shown. The sum of those projections is the centripetal force. (note that the forces represent random arbitrary forces)
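A quick numeric check of the teacher's relation at the bottom of the circle (all values illustrative):

```python
m, v, R, g = 0.5, 3.0, 1.0, 9.81   # mass (kg), speed (m/s), radius (m), gravity (m/s^2)

F_c = m * v**2 / R   # required net inward (here: upward) force, 4.5 N
F_g = m * g          # weight, pointing down, i.e. away from the center
F_t = F_c + F_g      # so the tension must exceed the weight: F_c = F_t - F_g
print(F_c, F_g, F_t)
```

Gravity is not itself "a centripetal force" here; it simply contributes (negatively, at the bottom) to the net inward component, which is what must equal $mv^2/R$.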
{ "domain": "physics.stackexchange", "id": 74347, "tags": "newtonian-mechanics, centripetal-force" }
amcl and global map question
Question: **** static map ***** image: rosmap.pgm resolution: 0.005 origin: [0, 0, 0] negate: 0 occupied_thresh: 0.65 free_thresh: 0.196 width: 20.0 height: 20.0 *** common map params **** obstacle_range: 2.5 raytrace_range: 3.0 footprint: [[0.25, 0.1], [0.25, -0.1], [-0.25,-0.1], [-0.25, 0.1]] max_scaling_factor: 0.02 # The scaling factor for footprint defined in local costmap inflation_radius: 0.02 # Propagating cost values out from occupied cells that decrease with distance. map_type: costmap track_unknown_space: true observation_sources: laser_scan_sensor laser_scan_sensor: {sensor_frame: hokuyo_frame, data_type: LaserScan, topic: /scan, marking: true, clearing: true} resolution: 0.005 global_costmap: global_frame: /map robot_base_frame: /base_link update_frequency: 30.0 publish_frequency: 30.0 static_map: true width: 20.0 height: 20.0 local_costmap: global_frame: /odom robot_base_frame: /base_link update_frequency: 30.0 publish_frequency: 30.0 static_map: false rolling_window: true width: 16.0 height: 16.0 I am running amcl and move_base with no static map. The map on setting the position looks like this. Should not the local map be contained fully within the global map? How do I get them to center on each other? If I set a goal that does not overlap the two, then I get this message: [ WARN] [1382396905.755665136, 26.606000000]: The goal sent to the navfn planner is off the global costmap. Planning will always fail to this goal. Why does the global map not overlap the local costmap? How can I get them to align? The global map should map on top of the grid. Originally posted by rnunziata on ROS Answers with karma: 713 on 2013-10-21 Post score: 0 Answer: I had a bad origin setting in the map yaml file. Originally posted by rnunziata with karma: 713 on 2013-10-22 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 15926, "tags": "ros" }
What will happen in principle if one tries to push a neutron star?
Question: When we push an object it moves due to the mutual repulsion between the electrons in our hand and the electrons in that object. Since a neutron star contains only neutrons, what will happen in principle if one tries to push it? Answer: Repulsion happens with any fermionic particles, not just electrons. If neutrons did not repel each other we wouldn't have neutron stars; all of them would always collapse into black holes instead. So the neutrons in our atoms still repel a neutron star. However, what would really happen is this: a neutron star has such massive gravity that protons and electrons merge into neutrons, so no atom can remain an atom there. When anything tries to touch a neutron star, it is sucked in by gravity and collapses into a lump of neutrons, feeding its mass into the star. And if the star collects enough mass, it collapses into a black hole.
{ "domain": "physics.stackexchange", "id": 16448, "tags": "forces, neutron-stars" }
In QFT, are there any restrictions on spontaneous breaking $G\to H$, due to "spontaneity"?
Question: For simplicity, let us restrict to the spontaneous breaking of global symmetries. Given any pair of groups $G\supset H$, is it always possible to find a $G$-invariant Lagrangian that gives a QFT such that the vacuum state is only $H$-invariant? For example, at least perturbatively at tree level, $$\mathcal L=\frac12(\partial_\mu\phi^a)(\partial^\mu\phi^a)+\frac12m^2\phi^a\phi^a-\frac14\lambda\phi^a\phi^a\phi^b\phi^b,\quad a,b=1,\cdots,N$$ has a spontaneous $O(N)\to O(N-1)$ breaking. It has the familiar Mexican hat effective potential $V(\phi)$ (i.e., effective action for field configurations constant throughout spacetime). However, is it possible to modify the Lagrangian so that the vacuum only has an $O(N-2)$ symmetry? My guess is that we should introduce another $O(N)$-vector field and carefully set up the interactions, but after thinking for a couple of minutes I have got nothing... [If I later find a way to do this, I will leave a comment below.] More generally, I wonder if there is a general strategy to construct a QFT with spontaneous $G\supset H$ breaking. Any comment is welcome! Answer: For half a century, students have been going (running!) to LF Li 1974 which also treats your problem as a warmup, a homework problem infiltrating lots of standard QFT texts like yours, apparently. Two scalar fields, which are O(N) vectors, will give you the generic O(N)-invariant potential (trash the disorienting superfluous pseudo-mass terms, for crying out loud!), $$ \lambda (\vec\phi^2-v^2)^2 + \lambda' (\vec\chi^2-w^2)^2 +\lambda'' (\vec\phi \cdot \vec \chi-cvw)^2, $$ where c is the cosine of the v.e.v. vectors, of lengths v and w, respectively. You can see the minimum of the potential, at its vanishing, for these lengths and vanishing cosine (so the kinetic term need not be rediagonalized...). Of course, $c=1$ aligns the two vectors and gives you nothing new; but any other c is in the domain of vanishing c, w.l.o.g.
So, choose $\langle \phi^i\rangle =v\delta_{1i}$, $\langle \chi^i\rangle=w\delta_{2i}$, hence their cosine vanishes, w.l.o.g. The vanishing remaining components 3,...,N span an N-2-dimensional space, invariant under the unbroken O(N-2). The strategy you are asking for is evident in the reference. You contrive potentials invariant under G with a minimum invariant under H, which the reference illustrates systematically for the standard classical Lie algebras/groups. I know of no inaccessible unbroken subgroup H.
{ "domain": "physics.stackexchange", "id": 96094, "tags": "lagrangian-formalism, symmetry, field-theory, group-theory, symmetry-breaking" }
Why does using images that are not really formed work in ray optics?
Question: It's all in the title. For instance, if I have two lenses , I have been taught to first find the position of the image formed by the first lens, and then use that image to find the final image formed by the 2nd lens, if the first image is formed beynd the 2nd lens. Why does this work? edit:- image for reference Answer: I addressed this before but will elaborate further. Refer to the diagram here. Suppose there is an object R on the axis a distance r to the left of a lens with focal length f and r<f. When the rays leave the lens they diverge as if coming from a point P a distance p to the left of the lens. So P is a virtual image. We have 1/r + 1/p = 1/f 1/p = (r-f)/rf where r>0 and p<0. Rays are reversible so consider rays from the right heading toward P i.e. P is now a virtual object. They have to converge at R. Your question is basically can we use the same lens equation for this case. Let's see if 1/p = (r-f)/rf works. Well, f is the same, the absolute values of p and r are the same. In this case r > 0 as it's now a real image and still r<f. So we will end up with the right magnitude for p but it will be negative. So we conclude that we can use a virtual object if the image distance is negative. Edit: fixed equation for 1/p. Conclusion still holds.
{ "domain": "physics.stackexchange", "id": 91908, "tags": "optics, geometric-optics" }
Which algorithms are usable for heatmaps and what are their pros and cons
Question: This is a cross post from Stack Overflow, and DSP at Stackexchange since I cannot really decide which part of Stackexchange is most fitting. If this is the wrong place please tell me and I'll remove the question. I have a matrix with numerical data. The matrix contains values from 0 to an arbitrary integer value. Each element of the matrix is equivalent to a coordinate on a map. I want to display that data as a heatmap overlaid on the original map. The approaches I have found so far are: Linear interpolation. I guess the interpolation is done from the original datapoint to some set distance away from it in each direction. Average of surrounding cells. Each empty cell gets the average value of the eight adjacent cells. Gaussian blur as suggested on the SO thread. Box blur with 1..n passes. Are there any more methods? What are the pros and cons of the different approaches? What is a good source, online or print, for a discussion on heatmaps or similar problems? Answer: If I have understood your question correctly (an illustrative example would have been helpful), I would recommend either Gaussian blurring or linear interpolation depending on the behavior you are after. Both are simple and should perform relatively fast even on a handheld device. Linear interpolation (or rather bilinear interpolation) is simple but requires a square heat sample grid, and you get some boundary effects that might be undesirable. The method is the most representative since "empty cells" (i.e., cells without heat information) are simply empty because of sparse sampling and not because they are void of heat. The interpolation tries to fill in the missing heat information in the empty cells by looking at the nearby heat samples. Cell averaging will give you an equally high response in a cell neighboring the heat sample as in the heat sample cell itself (granted that the average is computed for all cells including the cells containing heat samples).
If the average is computed for empty cells only (i.e., for cells with no heat information) then the cells with the heat sample will keep its full signal while adjacent cells will contain an average, which in a sparse sample grid will be 1/8 of the heat sample, resulting in a distribution almost as misleading as only using the original heat samples. Gaussian blurring with a wide enough kernel is simple and gives a better representation of the impact of the heat samples than the cell averaging. However, the values in the blurred heat map for the empty cells will only be affected by the heat "spread" from the sample cells. So if the heat map is sparse, there will likely be empty cells on equal distance between two heat cells that do not get any heat information at all. I hope this helps.
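A minimal sketch of the Gaussian-blur option on a sparse sample grid, using a separable pure-NumPy blur; the grid size, sample positions, and sigma are all illustrative:

```python
import numpy as np

def gaussian_blur(grid: np.ndarray, sigma: float) -> np.ndarray:
    """Separable Gaussian blur (zero-padded borders), pure NumPy."""
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    padded = np.pad(grid, radius)
    # The 2-D Gaussian kernel is separable: convolve rows, then columns.
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, padded)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)
    return out[radius:-radius, radius:-radius]

heat = np.zeros((64, 64))
heat[16, 16] = 10.0    # two sparse heat samples
heat[40, 48] = 5.0
blurred = gaussian_blur(heat, sigma=4.0)

# Peaks stay at the sample locations; nearby empty cells get smoothly decaying heat.
assert blurred.argmax() == np.ravel_multi_index((16, 16), blurred.shape)
assert blurred[16, 20] > blurred[16, 30] > 0
```

This illustrates the trade-off from the answer: cells within a few sigma of a sample are filled plausibly, but cells far from every sample stay at (near) zero, which for very sparse grids may misrepresent the field.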
{ "domain": "cs.stackexchange", "id": 2056, "tags": "algorithms, image-processing, matrices, signal-processing" }
Change base_link position
Question: How do we change the base_link coordinate centre on autoware? Is there a default one that it automatically uses? If so where is it positioned? Originally posted by melanielim on ROS Answers with karma: 3 on 2019-05-07 Post score: 0 Original comments Comment by Geoff on 2019-05-13: Can you expand on what you mean by "change the base_link coordinate centre"? Answer: In Autoware coordinate system, base_link is defined at the center of rear wheels as follows. You need to set the relative transform between base_link to velodyne on Setup tab. Originally posted by aohsato with karma: 36 on 2019-05-15 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by gvdhoorn on 2019-05-15: Could I ask you to attach the images to your answer directly? I've given you sufficient karma for that. Comment by aohsato on 2019-05-15: Greatly thanks. I've edited.
{ "domain": "robotics.stackexchange", "id": 32985, "tags": "ros, base-link, ros-kinetic" }
Computing the union closure
Question: Given a family $\mathcal F$ of at most $n$ subsets of $\{ 1, 2, \dots, n \}$. The union closure of $\mathcal F$ is another set family $\mathcal C$ containing every set that can be constructed by taking the union of 1 or more sets in $\mathcal F$. By $|\mathcal C|$ we denote the number of sets in $\mathcal C$. What is the fastest way to compute the union closure? I have shown an equivalence between the union closure and listing all maximal independent sets in a bipartite graph, therefore we know that deciding the size of the union closure is #P-complete. However, there is a way to list all maximal independent sets (or maximal cliques) in $O(|\mathcal C| \cdot nm)$ time for a graph with $n$ nodes and $m$ edges (Tsukiyama et al. 1977), but this is not specialized for bipartite graphs. We gave an algorithm for bipartite graphs with runtime $|\mathcal C| \cdot \log |\mathcal C| \cdot n^2$ http://www.ii.uib.no/~martinv/Papers/BooleanWidth_I.pdf Our method is based on the observation that any element of $\mathcal C$ can be made by the union of some other element of $\mathcal C$ and one of the original sets. Hence, whenever we add an element to $\mathcal C$, we try to expand it by each of the $n$ original sets. For each of these $n \cdot |\mathcal C|$ candidate sets we need to check whether they are already in $\mathcal C$. We store $\mathcal C$ as a binary search tree, so each lookup takes $\log |\mathcal C| \cdot n$ time. Is it possible to find the union closure $\mathcal C$ in $O(|\mathcal C| \cdot n^2)$ time? Or even in time $O(|\mathcal C| \cdot n)$? Answer: The complexity of enumerating maximal independent sets in graphs is the same as in bipartite graphs, so bipartiteness does not bring anything new. You have an algorithm (with exponential space) in $O(|C|\cdot n^2)$, but no polynomial space algorithm that achieves this time complexity is known. The following paper http://www.sciencedirect.com/science/article/pii/S0166218X08004563 is a good survey.
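The expansion idea from the question can be sketched directly; using a hash set instead of a binary search tree makes each membership test expected $O(n)$ rather than $O(n \log |\mathcal C|)$. The example family is illustrative:

```python
def union_closure(family):
    """Union closure of a set family: all unions of one or more member sets.
    Expansion idea from the question: grow each closure element by each
    original set, and keep anything new."""
    family = [frozenset(s) for s in family]
    closure = set(family)
    frontier = list(closure)
    while frontier:
        s = frontier.pop()
        for f in family:
            t = s | f
            if t not in closure:   # expected O(n) with hashing
                closure.add(t)
                frontier.append(t)
    return closure

F = [{1}, {2}, {2, 3}]
C = union_closure(F)
print(sorted(map(sorted, C)))   # [[1], [1, 2], [1, 2, 3], [2], [2, 3]]
```

Each of the $n \cdot |\mathcal C|$ candidate unions costs $O(n)$ to build and (in expectation) $O(n)$ to test, giving expected $O(|\mathcal C| \cdot n^2)$ overall, matching the bound asked about, though only in expectation and still with space proportional to $|\mathcal C|$.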
{ "domain": "cstheory.stackexchange", "id": 1773, "tags": "co.combinatorics, graph-algorithms, clique, independence" }
Why is the covariant derivative of the metric tensor with UPPER indices equal to zero?
Question: I've shown that $\nabla_{\lambda} g_{\mu\nu} = 0 $ rigorously by the following method: $ \nabla_{\lambda} g_{\mu\nu} = \partial_{\lambda}g_{\mu\nu} - \Gamma^{\rho}_{\lambda\mu} g_{\rho\nu} - \Gamma^{\rho}_{\lambda\nu} g_{\mu\rho} $ $ = \partial_{\lambda}g_{\mu\nu} - \frac{1}{2}g_{\rho\nu}g^{\rho\sigma}(\partial_{\lambda}g_{\mu\sigma} + \partial_{\mu}g_{\sigma\lambda} - \partial_{\sigma}g_{\lambda\mu}) - \frac{1}{2}g_{\mu\rho}g^{\rho\sigma} (\partial_{\lambda}g_{\nu\sigma} + \partial_{\nu}g_{\sigma\lambda} - \partial_{\sigma}g_{\lambda\nu}) $ We have that $ g_{\rho\nu}g^{\rho\sigma} = \delta^{\sigma}_{\nu}$ and $ g_{\mu\rho}g^{\rho\sigma} = \delta^{\sigma}_{\mu} $ so, $ = \partial_{\lambda}g_{\mu\nu} - \frac{1}{2}\delta^{\sigma}_{\nu} ( \partial_{\lambda}g_{\mu\sigma} + \partial_{\mu}g_{\sigma\lambda} - \partial_{\sigma}g_{\lambda\mu}) - \frac{1}{2} \delta^{\sigma}_{\mu} (\partial_{\lambda}g_{\nu\sigma} + \partial_{\nu}g_{\sigma\lambda} - \partial_{\sigma}g_{\lambda\nu}) $ $ = \partial_{\lambda}g_{\mu\nu} - \frac{1}{2}(\partial_{\lambda}g_{\mu\nu} + \partial_{\mu}g_{\nu\lambda} - \partial_{\nu}g_{\lambda\mu}) - \frac{1}{2} ( \partial_{\lambda}g_{\nu\mu} + \partial_{\nu}g_{\mu\lambda} - \partial_{\mu}g_{\lambda\nu}) $ $ = \partial_{\lambda}g_{\mu\nu} - \frac{1}{2} \partial_{\lambda}g_{\mu\nu} - \frac{1}{2}\partial_{\lambda}g_{\nu\mu}$ and with $g_{\mu\nu} = g_{\nu\mu} $ we have that $\nabla_{\lambda}g_{\mu\nu} = 0 $ Great. Now I'm trying to show that $\nabla_{\lambda}g^{\mu\nu} = 0$ and I'm having trouble. I've been advised to "cleverly" use the result from $\nabla_{\lambda}g_{\mu\nu} = 0$ in proving the second case, but I'm not seeing it and am getting caught up in index gymnastics -- or missing something carelessly. We are working under the condition that $g_{\mu\nu} \neq g^{\mu\nu}$. Can someone please help to show me the proof for the case of the metric with upper indices under this regime? 
Answer: A hint: the condition which defines the inverse metric is $g_{\mu\nu} g^{\nu \rho} = \delta_\mu{}^\rho$, and we can differentiate this equality: one side is a constant and the other can be expanded with the product rule. If you do not trust the instinct that $\nabla_\mu \delta_\nu{}^\rho = 0$, you can show that it is true in a few different ways, I'd do it like this: $$ \nabla_\mu \delta_\nu{}^\rho = \partial_\mu \delta_\nu{}^\rho + \Gamma_{\mu \alpha}{}^\rho \delta_\nu{}^\alpha - \Gamma_{\mu \nu}{}^\alpha \delta_\alpha{}^\rho = \Gamma_{\mu \nu}{}^\rho - \Gamma_{\mu \nu}{}^\rho = 0\,. $$
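To spell out the hint (a short sketch using only relations already established above): differentiating the defining condition with the product rule gives $$ 0 ~=~ \nabla_\lambda \delta_\mu{}^\rho ~=~ \nabla_\lambda \left( g_{\mu\nu} g^{\nu\rho} \right) ~=~ \left( \nabla_\lambda g_{\mu\nu} \right) g^{\nu\rho} + g_{\mu\nu} \nabla_\lambda g^{\nu\rho} ~=~ g_{\mu\nu} \nabla_\lambda g^{\nu\rho}\,, $$ where the first term vanishes by metric compatibility. Contracting with $g^{\sigma\mu}$ then yields $0 = \delta^\sigma{}_\nu \nabla_\lambda g^{\nu\rho} = \nabla_\lambda g^{\sigma\rho}$.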
{ "domain": "physics.stackexchange", "id": 77477, "tags": "homework-and-exercises, general-relativity, differential-geometry, metric-tensor, differentiation" }
The oxidation of aldehydes and alpha-diketones with peroxy compounds
Question: Recently, I have been reading up on the Baeyer-Villiger (BV) oxidation. The oxidation is most commonly discussed for ketones. However, the oxidation also works for aldehydes and $\alpha$-diketones (Rojas, 2015). The above image is Scheme 2.2 from Rojas (2015). I have mainly two queries: Does the oxidation of aldehydes to carboxylic acids by peroxy compounds, such as peracids, proceed via a BV oxidation where the hydride is the migrating group? This would be seemingly different from the oxidation mechanism for the oxidation by $\ce {MnO4^-}$ or $\ce {CrO3}$ where the aldehyde is oxidised when it is in the gem-diol form. Why does the oxidation of $\alpha$-diketones yield the corresponding anhydride and not an $\alpha$-ketoester? This is extremely puzzling because the acyl group is terribly bad at stabilising any partial positive charges on the acyl carbon. How is it possible that such a BV oxidation can occur? Reference Rojas, C. M. (2015). Molecular Rearrangements in Organic Synthesis (1st ed.). Hoboken, New Jersey: John Wiley & Sons, Inc. Answer: The Baeyer-Villiger oxidation is conducted under acidic conditions such as an alkyl peroxide in the presence of a mineral acid (e.g., H2SO4) or a peracid (e.g., peracetic acid or m-chloroperbenzoic acid). In the reactions described in the diagram, the reactions of aldehyde 1 or α-diketone 7 are initiated by protonation of the carbonyl oxygen followed by addition of peroxide (an alkylperoxide is illustrated). Ultimately, the oxygen of the RO-moiety is protonated and is the effective leaving group (ROH) in the rearrangement. In both sequences, the red route is favored; the blue route is disfavored. Cation 3 is a more stable species than cation 5 because of hyperconjugation with the alkyl group or resonance stabilization if R'=aryl. Cation 3 affords a carboxylic acid while cation 5 would lead to formate ester 6.
In the case of peroxyhemiketal 8, acyl migration leads to cation 9, which is more stable than species 11; the latter has a positive charge adjacent to a carbonyl group, which is destabilizing. Thus, anhydride 10 is formed in preference to α-ketoester 12.
{ "domain": "chemistry.stackexchange", "id": 11580, "tags": "organic-chemistry, organic-oxidation, rearrangements" }
Does triphenylphosphine show resonance?
Question: The lone pair of phosphorus and the double bonds in benzene look to be in conjugation. Does the molecule show resonance? If so, how? Answer: The triphenylphosphine molecule would show resonance because there is a lone pair on phosphorus and it is in conjugation with the double bonds in benzene. But the lone pair would be in resonance only with one ring at a time because the molecule is non-planar (tetrahedral), with sp3 hybridisation at phosphorus. Also, we should consider the steric effects on the molecule. The steric hindrance produced by the other two benzene rings would reduce the resonance effect and inhibit the lone pair from delocalising into those rings. So, the resonance in this molecule would simply be like that in aniline.
{ "domain": "chemistry.stackexchange", "id": 7906, "tags": "organic-chemistry, resonance" }
The temperature of an electron
Question: Does an electron have a temperature, and if so, what is it? Imagine an electron (Ke = 1 eV) in a tube at room temperature (300 K): what is its temperature? Imagine now the same electron in space (3 K) with the same Ke: is it any different from the other? What is the influence of external temperature on electrons? Do those electrons have electric energy, or do we have such energy only when the electron hits something and discharges its Ke? Answer: Temperature is a measurement defined in the mathematics of thermodynamics. Thermodynamic quantities emerge from statistical mechanics, so there exists a definition of temperature: Except in the quantum regime at extremely low temperatures, the thermodynamic temperature of any bulk quantity of a substance (a statistically significant quantity of particles) is directly proportional to the mean kinetic energy of a specific kind of particle motion known as translational motion. These simple movements in the three x, y, and z–axis dimensions of space mean that the particles move in the three spatial degrees of freedom. The temperature derived from this translational kinetic energy is sometimes referred to as kinetic temperature and is equal to the thermodynamic temperature over a very wide range of temperatures. Since there are three translational degrees of freedom (e.g., motion along the x, y, and z axes), the translational kinetic energy is related to the kinetic temperature by $$ \bar{E} = \frac{3}{2} k_B T_k $$ where: $\bar{E}$ is the mean kinetic energy in joules (J) and is pronounced “E bar”, $k_B$ = 1.3806504(24)×10^−23 J/K is the Boltzmann constant and is pronounced “Kay sub bee”, and $T_k$ is the kinetic temperature in kelvins (K) and is pronounced “Tee sub kay”. So individual particles have kinetic energy, and temperature is defined by the average of the ensemble.
An electron within the hot element of a cathode ray tube will, before emission, participate in defining the average temperature of the filament; when it is ejected it will have a specific momentum drawn from the kinetic energy distribution in the filament. In the vacuum of the tube there is no ensemble to generate a temperature: the electron will keep this kinetic energy and increase it according to the imposed field that is accelerating it toward the anode. By hand-waving, if one were to assume that the kinetic energy of the single electron impinging on the anode represents an average of an ensemble, one might say that the electron has the temperature of the solar plasma, for example, but that is a sloppy, not correct, assignment.
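That hand-waving assignment can be made numerical. A minimal sketch (assuming the $\bar{E} = \tfrac{3}{2} k_B T_k$ translational relation quoted above, applied, sloppily, to the single 1 eV electron from the question):

```python
eV = 1.602176634e-19   # joules per electron-volt
k_B = 1.380649e-23     # Boltzmann constant, J/K

def kinetic_temperature(energy_joules):
    """Temperature whose mean translational energy (3/2) k_B T equals the given energy."""
    return 2.0 * energy_joules / (3.0 * k_B)

T = kinetic_temperature(1.0 * eV)  # ~7.7e3 K for a 1 eV electron
```

So the "kinetic temperature" of a single 1 eV electron would come out around 7700 K, regardless of whether the surroundings are at 300 K or 3 K, which illustrates why the assignment is not really meaningful for one particle.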
{ "domain": "physics.stackexchange", "id": 65304, "tags": "thermodynamics, statistical-mechanics, electrons, temperature" }
Effect of waters changing specific gravity on objects apparent weight placed in liquid
Question: My goal is to monitor the change in specific gravity of a liquid over a period of time. My question is: What are the appropriate formula for determining expected apparent weight of an object immersed in a liquid where the liquids specific gravity g/ml is expected to change? EG. If I were to take an object who's density is 2.6 (average for glass) weighing 100 grams and plunk it into distilled water I believe I should expect an apparent weight should be roughly 61.53 grams. Please let me know if I am just horridly wrong. So then if that distilled waters density/specific gravity were to change say to 1.010, would my new apparent weight of the object be 61.15 grams? My math is not solid in this. I'm basically using ratios in order to produce these answers. Please for the sake of simplicity if you are to choose to answer leave out extenuating circumstances such as temperature of the liquid/object and possible compression of the object due to pressure. If you do chose to add extenuating circumstances I would ask to add those concepts as tertiary answers. I'm sure that my question is probably very basic, but grasping the concepts has proven perplexing to me. I am probably not using the correct search. Your help in this simple question is greatly appreciated. Answer: Archimedes' principle tells us that the upwards force on an object immersed in a fluid is equal to the weight of fluid displaced. So let's take your initial experiment. You don't tell us the volume of your object, but you do give its weight, $W_g = 100$ g, and density, $\rho_g = 2.6$ g/cm$^3$, so the volume is: $$ V = \frac{W_g}{\rho_g} = 38.46 cm^3 $$ When you put this in water the volume of water displaced is the same as the above volume, so the mass of water displaced is: $$ M_w = V\rho_w = \frac{W_g}{\rho_g} \rho_w = 38.46g $$ and hence the effective mass of your glass object is 100 - 38.46 = 61.54g as you say. 
If the density of water changes to 1.010 g/cm$^{3}$ just use the equation above but set $\rho_w = 1.010$, and you will indeed get the effective weight of the glass equal to 61.15g. Though if you want to be really accurate you should note that changing the temperature will change the density of the glass as well, so $\rho_g$ would be slightly different as well.
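The ratio reasoning in the question collapses into one formula: apparent mass $= W_g \left(1 - \rho_w/\rho_g\right)$. A minimal sketch using the values from the question (the function name is just for illustration):

```python
def apparent_mass(mass_g, rho_object, rho_fluid):
    """Apparent (submerged) mass via Archimedes: subtract the mass of displaced fluid."""
    return mass_g * (1.0 - rho_fluid / rho_object)

m1 = apparent_mass(100.0, 2.6, 1.000)  # ~61.54 g in distilled water
m2 = apparent_mass(100.0, 2.6, 1.010)  # ~61.15 g after the density change
```

This confirms both numbers in the question without needing the object's volume explicitly.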
{ "domain": "physics.stackexchange", "id": 12427, "tags": "measurements, density, buoyancy, weight" }
How to get Euclidean Distance between feature maps
Question: I'm trying to find "keyframes" within a video using this paper; however, I'm a little new to machine learning and I'm stuck on the distance vector step. The goal is to calculate the distance vector between consecutive frames to detect a big context change indicating a keyframe. Here's what I've done: Using a GoogLeNet pre-trained model I extracted a (1024,7,7) feature map, so a vector of 7x7 matrices for every other frame in the video. Now I want to calculate the euclidean distance between two consecutive frames but I'm not sure how that would work. My intuition is that the distance vector would also be (1024,7,7). The last step would be to apply a "convolution of 4-window of last distance values with vector [0.1, 0.1, 0.1, 0.99]" on the distance vector?? I don't understand this step either. Any help or guidance would be appreciated! Answer: The Euclidean distance between two images $p$ and $q$ can be calculated as follows: $d(p, q) = \sqrt{(q_1 - p_1)^2 + (q_2 - p_2)^2 + ... + (q_{49} - p_{49})^2}$ which is the distance between the 49 (7x7) features of the two images. This should then give you a vector of shape (1024, 1) where each value is the Euclidean distance to the corresponding feature map of the previous image, with the first one being all NA since it's the first image. Then a convolution is applied with a window of 4 with the vector/kernel [0.1, 0.1, 0.1, 0.99], which basically multiplies 4 values of the (1024, 1) vector by the kernel values and adds them up. I.e. given the first 4 values of the (1024, 1) vector of [0.3, 0.5, 0.2, 0.4] the result of this multiplication would be: $0.3 * 0.1 + 0.5 * 0.1 + 0.2 * 0.1 + 0.4 * 0.99 = 0.03 + 0.05 + 0.02 + 0.396 = 0.496$
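Both steps, the per-channel Euclidean distance and the 4-window weighting, can be sketched in a few lines of numpy (the random feature maps here are placeholders standing in for two consecutive frames; shapes and example values follow the answer above):

```python
import numpy as np

rng = np.random.default_rng(0)
f_prev = rng.normal(size=(1024, 7, 7))  # feature map of the previous frame (placeholder)
f_curr = rng.normal(size=(1024, 7, 7))  # feature map of the current frame (placeholder)

# Per-channel Euclidean distance over the 49 (7x7) spatial values -> shape (1024,)
dist = np.sqrt(((f_curr - f_prev) ** 2).sum(axis=(1, 2)))

# Weighted 4-window: dot four consecutive values with the kernel
kernel = np.array([0.1, 0.1, 0.1, 0.99])
window = np.array([0.3, 0.5, 0.2, 0.4])  # the example values from the answer
score = window @ kernel                  # 0.03 + 0.05 + 0.02 + 0.396 = 0.496
```

Sliding that dot product along a sequence of distance values is exactly a 1-D convolution with the (reversed) kernel.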
{ "domain": "datascience.stackexchange", "id": 9168, "tags": "machine-learning, keras, tensorflow, cnn" }
Mirror that flips polarisation?
Question: Is it possible to build a mirror which not just reflects a photon but also flips its polarisation from horizontal to vertical (or vice versa)? The reason why I ask is the following: If I put an optical device in front of the mirror which flips the polarisation, this device will act twice on the photon (once on the way there and once on the way back). Thus the photon ends up with the same polarisation. Is there a common solution to this problem? Note that in my case the photon must take the same way back. So tilting the mirror is no solution... Answer: Yes, the device is called a Faraday mirror and it consists of a normal mirror following a Faraday rotator. The latter is a magneto-optical device that rotates the state of polarization of light passing through it in a non-reciprocal manner. The most well-known application of Faraday rotators is to provide optical isolation. The Faraday effect is wavelength dependent, so it may not work as per your expectations if your source is too broadband; commercially-available Faraday rotators have a bandwidth in the $10-50\,$nm range. Some rotators may allow you the possibility of tuning the central wavelength, but I am not sure if it is easy to order just the free-space rotator element alone (most vendors seem to offer buying only the complete isolator assembly).
{ "domain": "physics.stackexchange", "id": 16990, "tags": "quantum-mechanics, photons, quantum-optics, polarization" }
Prevent overfitting when decreasing model complexity is not possible
Question: I'm fairly new to machine learning and as an exercise for a more complicated task, I'm trying to do the following, which I thought was a trivial task. Suppose as an input I have population density maps. These are 2D images with one layer, in which each pixel is the count of persons living in that area. From that data, I'd like my model to "estimate" (in fact it would be possible to calculate the exact solution) the total number of persons living on that density map. Essentially, the task consists of just taking the sum of the 2D input. I have tried many architectures and I found that the simpler the better. In fact a model containing no hidden layers performed best: from keras.layers.core import Dense from keras.layers import Flatten, Input inputs = Input(shape=(225, 350, 1)) x = Flatten()(inputs) x = Dense(1)(x) While this performs very well on the training data, it fares very poorly on the validation data. I know that this is a sign of overfitting, but how can I prevent overfitting given that it is not possible to further decrease the complexity of the model? Or would another approach / architecture be better altogether? Note that I have performed the usual data pre-processing (normalising inputs and outputs). Thanks in advance for any hints. Answer: Is this about trying to have the model learn the sum function? Because if it is, you can always initialize the weights to be 1 and then make the entire model untrainable. If all weights are one and the model has linear activations it will just compute the sum of your inputs. If it's important to you to train the model to do this, I would add a Dropout layer. Your model is so simple it can be written down in closed form as $\sum_{i=1}^{225 \times 350} w_i x_i$ and therefore I would just set the $w_i$ to be 1. (In general, initializing the weights to 1, the ideal solution, should make training very fast.)
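The closed-form claim at the end can be checked without Keras at all: a flattened input dotted with all-ones weights is exactly the sum. A minimal numpy sketch (the density map here is random placeholder data):

```python
import numpy as np

rng = np.random.default_rng(42)
density_map = rng.uniform(0, 50, size=(225, 350, 1))  # placeholder population density map

w = np.ones(density_map.size)            # the "ideal" Dense weights: all ones, zero bias
model_output = density_map.ravel() @ w   # what Flatten + Dense(1) computes with those weights

assert np.isclose(model_output, density_map.sum())
```

This also makes clear why the trained model generalizes poorly: gradient descent has no reason to land on exactly $w_i = 1$ for pixels that were rarely populated in the training data.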
{ "domain": "datascience.stackexchange", "id": 6183, "tags": "keras" }
pH and materials selection
Question: I'm working on the Navy study guide for their nuclear engineering programs and I am not a chemist. Thus, I have come here to try and develop a better understanding of the subject matter. Why is pH important in materials selection? Answer: The pH scale is merely a way of keeping track of the concentration of a species (the hydrogen ion $\ce{H^+}$, $pH = -log\ a_{H^+} \simeq -log\ [H^+]$) in a medium (usually liquid, not necessarily aqueous). You can define arbitrary scales for other species, too. Analytical chemists are more familiar with scales such as pCl, with $pCl = -log\ a_{Cl^-} \simeq -log\ [Cl^-]$, and that's fine, but they're much rarer. The reason chemistry pays so much attention to hydrogen ions specifically is because the pH scale keeps track of what is arguably the most reactive species in all of chemistry (one could make a case for alpha particles, i.e. bare helium-4 nuclei $\ce{^4_2 He^{2+}}$, instead, but hydrogen ions are nevertheless comparable in strength and far more common). If you stop to think about it, a (non-solvated) hydrogen ion, $\ce{H^+_{(g)}}$, is a bare proton. Let me explain why this is an important realization. In chemistry, we talk a lot about how electrons get shoved around when different molecules meet (also known as chemical reactions), and the exact way in which the electrons shuffle about is dependent on complicated electromagnetism at the quantum level. However, to a simple but enlightening approximation, one can imagine how substances behave based on how much electrical charge they concentrate in a region; simply, concentrated charges will display stronger electromagnetic effects. In most of chemistry, we deal with charges of order unity relative to the fundamental electrical charge ($e=1.6022 \times 10^{-19}\ C$) spread in the volume of a few atoms (of order $10^{-3}\ nm^3$). The resulting charge density can be found by dividing the charge by the volume.
Now here's the thing - the nucleus of an atom is tiny compared to the size of the entire atom (with its electrons). A lone hydrogen atom has a radius of approximately $0.1\ nm$, but a lone hydrogen nucleus has a radius of about $1\ fm = 0.000001\ nm$, a difference of five orders of magnitude. The volume difference is thus fifteen orders of magnitude. Therefore, the bare nucleus which comprises a $\ce{H^+}$ ion has a charge density 15 orders of magnitude greater than a similarly-charged atomic ion, for example $\ce{Li^+}$. In effect, this means that a bare $\ce{H^+}$ has such a huge positive charge density that it will tear opposite charges (electrons) from absolutely anything. Helium and neon are notoriously unreactive, but both react quite happily (and rather exothermically) with gaseous $\ce{H^+}$, forming hydrohelium(+) and hydroneon(+) as the respective products. So why doesn't water (or any ready source of $\ce{H^+}$ ions, i.e. any protic substance) immediately dissolve absolutely everything it touches? It so happens that, in condensed phases, the molecules close to a hydrogen ion are so strongly attracted that they effectively encase it, turning hydrogen ions into solvated hydrogen ions and forming a shell which protects everything else from the aggressive reactivity of the hydrogen ion. For example, in water, $\ce{H^+}$ is actually not a very correct description of reality. Rather, one finds $\ce{H(H_2O)_{n}^+}$ (often alluded to by formulae such as $\ce{H3O^+}$, $\ce{H5O2^+}$, $\ce{H9O4^+}$ and others), or more concisely (but less distinguishably) $\ce{H^+_{(aq)}}$. After being surrounded by water molecules, the hydrogen ion loses almost all of its reactivity, as the original charge is now effectively dispersed over several molecules, greatly diminishing the charge density of the solvated hydrogen ion compared to the non-solvated hydrogen ion. Even after this massive handicap, solvated hydrogen ions are still quite reactive.
Thus, it is important to keep track of their amount, to figure out if certain materials are capable of withstanding the onslaught. After all I've said, it seems like it would be a good idea to minimize the amount of $\ce{H^+}$ in water, which can be done by going to high values on the pH scale. However, by doing so you begin to significantly increase the amount of hydroxide ions $\ce{OH^-}$ present, in order to obey the autodissociation reaction equilibrium of water. In some sense, hydrated $\ce{OH^-}$ ions in solution are about as reactive as solvated $\ce{H^+}$ (even though non-solvated $\ce{H^+}$ is much, much more reactive than non-solvated $\ce{OH^-}$). Thus, going to either end of the pH scale can be a bad idea. One could say that water solutions are least reactive* when their pH is close to 7 (including pure water), as one minimizes the sum of $\ce{H^+}$ and $\ce{OH^-}$ present. * Least reactive in the specific sense that if a material resists acidic aqueous solutions and basic aqueous solutions, then it must resist pure water and neutral aqueous solutions, assuming that anything else in the water incapable of altering the pH value is also unreactive; a pH 7 sodium chloride solution, for example, can be quite corrosive to metals, but not because of the water itself.
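The claim that the sum of $\ce{H^+}$ and $\ce{OH^-}$ is minimized near pH 7 follows directly from the water autodissociation equilibrium, and can be checked numerically (a sketch assuming $K_w = 10^{-14}$ at 25 °C; the function name is just for illustration):

```python
K_w = 1e-14  # water autodissociation constant at 25 C, mol^2/L^2

def reactive_ion_total(pH):
    """Total concentration of solvated H+ and OH- at a given pH."""
    h = 10.0 ** (-pH)        # [H+] from the pH definition
    return h + K_w / h       # [OH-] follows from the equilibrium [H+][OH-] = K_w

# The total is smallest at neutral pH and grows toward either end of the scale
totals = {pH: reactive_ion_total(pH) for pH in (3, 5, 7, 9, 11)}
```

At pH 7 the total is $2 \times 10^{-7}$ mol/L; moving two pH units in either direction increases it by roughly a factor of 50.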
{ "domain": "chemistry.stackexchange", "id": 1102, "tags": "physical-chemistry, ph, materials" }
If a spaceship were to be able to travel at light speed, would it pass through objects undamaged? Would it damage/destroy objects?
Question: We know, not just by scientific theory, but by practice (I have seen it with my own eyes), that an increase in velocity increases the mass of the given object proportionally. One day visiting a science demo years ago, there was a display of the effects of a standard drinking straw put through a block of wood simulating a tree trunk. The display was in regards to the velocity of thrown objects from a tornado. A tornado can increase the velocity of an otherwise harmless object to travel through almost anything. It can effectively send a drinking straw though a tree, I have seen it myself. And this is only at negligible speeds. An F-5 tornado can only go up to 318mph, which is a lot; but compared to the speed of light, it is practically standing still. It's also why a vehicle moving at 100mph is more dangerous to be hit by than a vehicle moving 1mph. Einstein said that in order to increase an object to the speed of light, it would gain infinite mass. Negating the plausibility to the also infinite energy needed to attain the speed of light, and negating the other complications like the ship being ripped apart due to uneven acceleration or instant death of the passengers; and also negating the scientifically plausible "warp drive", this is my question: If a spaceship is travelling almost speed of light, having almost infinite mass, it stands to reason that it should be, during travel, basically impregnable - entirely indestructible. Is this true? Also, if the spaceship were to pass through planets or stars during this journey, would it utterly obliterate anything in its path? Also, would the near infinite mass cause the ship to function effectively like a black hole while at that speed, consuming anything it passed nearby? Ideally, if science were to get so advanced, we'd hope that interstellar navigation would allow us to avoid such collisions. 
However, I would think, were near-light travel possible, that it would be practically impossible to detect a potential collision with 100% certainty. Thus, I wonder the answers to these questions. Answer: I believe anything with relativistic velocities and much mass (let alone a spaceship) would quickly be destroyed by "junk" in space (when I say "junk", I include stray hydrogen atoms.) To get an idea of the energy released when something like that hits something (like a planet), in Randall Monroe's latest book What if, he states that a baseball with a relativistic velocity would wipe out a small city. I'm not sure if GR changes the equation for kinetic energy, but if it still applies with near light-speed velocities, we can find the KE of an object with a mass of 1kg at 0.99999C with $\frac{1}{2}mv^2$. First, if we assume C to be $3.00 \times 10^8$ we get $300000000\text{ m/s}$. Plugging that into the KE equation, we get $4.5\times 10^{16} \text{ J}$. If we can believe this website, the bomb dropped on Hiroshima had an energy of 63TJ. Therefore, our little $4.5 \times 10^4 \text{ TJ}$ mass would be about 714 Hiroshima bombs. So to answer your questions: Yes, it would. Unless said object has no mass (photons etc.) Well yes, anything in its path would be in trouble, but then again at those speeds a stray hydrogen atom would put a huge hole in your ship. (That's why in Star Trek, the spaceships are always equipped with Bussard Collectors to "sweep up" threatening matter before it could harm the ships) I believe that is entirely dependent on the mass of the spaceship, and the fraction of C it is traveling at. (If it has enough velocity or mass, then yes. It would.) As stated in another answer, whether or not something has enough mass to be a black hole is entirely dependent on the rest mass, not the relativistic mass. Hope this helps :-). 
Edit: I should also say that if you are not accelerating an object to an appreciable percent of c, then the increase in mass will be negligible. For example: (from Wikipedia) Raising the temperature of an object (increasing its heat energy) increases its mass. For example, consider the world's primary mass standard for the kilogram, made of platinum/iridium. If its temperature is allowed to change by 1 °C, its mass will change by 1.5 picograms (1 pg = 1×10−12 g). 1.5 pg is really, really, really tiny. (0.0000000000015 g. A baseball is about 142 g.)
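On the "I'm not sure if GR changes the equation" aside in the answer: special relativity does change it, via $KE = (\gamma - 1)mc^2$. A sketch comparing the two formulas for the 1 kg mass at 0.99999c used above:

```python
import math

c = 2.998e8    # speed of light, m/s
m = 1.0        # mass, kg
beta = 0.99999 # fraction of c

gamma = 1.0 / math.sqrt(1.0 - beta ** 2)       # Lorentz factor, ~224 at this speed
ke_classical = 0.5 * m * (beta * c) ** 2       # ~4.5e16 J, the answer's estimate
ke_relativistic = (gamma - 1.0) * m * c ** 2   # ~2e19 J
```

So at this speed the classical formula undershoots by a factor of several hundred; the answer's "714 Hiroshima bombs" is, if anything, a large underestimate.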
{ "domain": "physics.stackexchange", "id": 27868, "tags": "special-relativity, energy, mass" }
How loud would the Sun be?
Question: Sound can't travel through outer space. But if it could, how loud would the Sun be? Would the sound be dangerous to life on Earth, or would we barely hear it from this distance? Answer: The Sun is immensely loud. The surface generates thousands to tens of thousands of watts of sound power for every square meter. That's something like 10x to 100x the power flux through the speakers at a rock concert, or out the front of a police siren. Except the "speaker surface" in this case is the entire surface of the Sun, some 10,000 times larger than the surface area of Earth. Despite what "user10094" said, we do in fact know what the Sun "sounds" like -- instruments like SDO's HMI or SOHO's MDI or the ground-based GONG observatory measure the Doppler shift everywhere on the visible surface of the Sun, and we can actually see sound waves (well, infrasound waves) resonating in the Sun as a whole! Pretty cool, eh? Since the Sun is large, the sound waves resonate at very deep frequencies -- typical resonant modes have 5 minute periods, and there are about a million of them going all at once. The resonant modes in the Sun are excited by something. That something is the tremendous broadband rushing of convective turbulence. Heat gets brought to the surface of the Sun by convection -- hot material rises through the outer layers, reaches the surface, cools off (by radiating sunlight), and sinks. The "typical" convection cell is about the size of Texas, and is called a "granule" because they look like little grains when viewed through a telescope. Each one (the size of Texas, remember) rises, disperses its light, and sinks in five minutes. That produces a heck of a racket. There are something like 10 million of those all over the surface of the Sun at any one time. Most of that sound energy just gets reflected right back down into the Sun, but some of it gets out into the solar chromosphere and corona. 
No one can be sure, yet, just how much of that sound energy gets out, but it's most likely between about 30 and about 300 watts per square meter of surface, on average. The uncertainty comes because the surface dynamics of the Sun are tricky. In the deep interior, we can pretend the solar magnetic field doesn't affect the physics much and use hydrodynamics, and in the exterior (corona) we can pretend the gas itself doesn't affect the physics much. At the boundary layers above the visible surface, neither approximation applies and the physics gets too tricky to be tractable (yet). In terms of dBA, if all that leaked sound could somehow propagate to Earth, well let's see... Sunlight at Earth is attenuated about 10,000 times by distance (i.e. it's 10,000 times brighter at the surface of the Sun), so if 200 W/m2 of sound at the Sun could somehow propagate out to Earth it would yield a sound intensity of about 20 mW/m2. 0dB is about 1pW/m2 , so that's about 100dB. At Earth, some 150,000,000 kilometers from the sound source. Good thing sound doesn't travel through space, eh? The good folks at the SOHO/MDI project created some sound files of resonant solar oscillations by speeding up the data from their instrument by 43,000 times. You can hear those here, at the Solar Center website. Someone else did the same thing with the SDO/HMI instrument, and superposed the sounds on first-light videos from SDO. Both of those sounds, which sound sort of like rubber bands twanging, are heavily filtered from the data -- a particular resonant spatial mode (shape of a resonant sound) is being extracted from the data, and so you hear mainly that particular resonant mode. The actual unfiltered sound is far more cacophonous, and to the ear would sound less like a resonant sound and more like noise.
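The decibel figure at the end follows from the standard intensity-level formula $10\log_{10}(I/I_0)$ with $I_0 = 1$ pW/m². A quick check of the numbers in the answer (taking the mid-range 200 W/m² leakage estimate):

```python
import math

I_sun_surface = 200.0                 # W/m^2, mid-range of the 30-300 estimate above
attenuation = 1.0e4                   # sunlight is ~10,000x brighter at the Sun than at Earth
I_earth = I_sun_surface / attenuation # 0.02 W/m^2 = 20 mW/m^2
I_ref = 1e-12                         # 0 dB reference intensity, W/m^2

level_db = 10.0 * math.log10(I_earth / I_ref)  # ~103 dB
```

So "about 100 dB" at Earth's distance checks out, roughly the level of a jackhammer, sustained forever.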
{ "domain": "astronomy.stackexchange", "id": 4023, "tags": "the-sun, earth, asteroseismology, sound" }
Slice rows in R based on column value
Question: I have multi-touch attribution data like: medium conversion 1 organic 0 2 (none) > referral > referral > (none) > (none) > referral 0,0,0,0,0,0 3 (none) 0 4 organic > referral > referral 0,1,0 5 referral > referral > referral > referral 0,0,1,0 6 organic > referral > referral > (none) > referral 0,1,0,1,0 I'd like to remove rows with no conversions (like rows 1, 2 and 3) and I tried grepl but couldn't make it work. For the remaining rows how to split rows so it ends with a conversion, e.g. row 4 will be organic > referral 0,1 and row 6 will split into organic > referral 0,1 and organic > referral > referral > (none) 0,1,0,1 Answer: This can be done with dplyr, which is part of the tidyverse made by Hadley Wickham. The stringr package (also made by Hadley) is really helpful when working with vectors of strings. Another package, purrr, is helpful for applying functions to lists. First, let's import the libraries and create the data: library(tidyverse) library(stringr) df = tibble( medium = c("organic", "(none) > referral > referral > (none) > (none) > referral", "(none)", "organic > referral > referral", "referral > referral > referral > referral", "organic > referral > referral > (none) > referral"), conversion = c("0", "0,0,0,0,0,0", "0", "0,1,0", "0,0,1,0", "0,1,0,1,0") ) You probably have the data as strings, while a list representation would be much easier to work with. The following code converts the strings to lists by splitting on " > " or "," (resp. for medium and conversion). Also read up on the %>% operator, which is really handy for working with data frames. After converting the strings to lists, the conversion column can be made boolean by mapping the == operator on each row. df <- df %>% mutate(medium = str_split(medium, " > "), conversion = str_split(conversion, ","), conversion = map(conversion, `==`, "1")) Removing the items after the first conversion is now a matter of simple indexing.
A little magic is done with dplyr::lag and cumsum to get a boolean list that indicates all the steps before and during conversion. Then map2 is used to get (using the [ operator) all the mediums by boolean indexing with no_conversion_yet. df <- df %>% mutate(no_conversion_yet = map(conversion, function(x) dplyr::lag(cumsum(x) < 1, default=TRUE))) %>% mutate(medium = map2(medium, no_conversion_yet, `[`)) Filtering the rows that do not have a conversion is now easy. Simply remove all the rows that do not have any TRUE value in the conversion column. df <- df %>% mutate(any_conversion = map_lgl(conversion, any)) %>% filter(any_conversion) Voila, life made easy by the superb packages of Hadley Wickham! (Also check his paper.)
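For readers outside R, the same logic, truncating each path at every conversion and dropping conversion-free rows (the full split into multiple rows that the question asks for), can be sketched in plain Python; the function name is made up for illustration:

```python
def split_at_conversions(medium, conversion):
    """Return one (path, flags) pair per conversion, truncated at that conversion."""
    steps = [s.strip() for s in medium.split(">")]
    flags = conversion.split(",")
    out = []
    for i, f in enumerate(flags):
        if f == "1":  # a conversion: emit the path up to and including this step
            out.append((" > ".join(steps[: i + 1]), ",".join(flags[: i + 1])))
    return out  # an empty list means the row had no conversions and is dropped

rows = split_at_conversions(
    "organic > referral > referral > (none) > referral", "0,1,0,1,0"
)
# -> [("organic > referral", "0,1"),
#     ("organic > referral > referral > (none)", "0,1,0,1")]
```

Applied to row 6 of the question this yields exactly the two truncated rows the asker describes.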
{ "domain": "datascience.stackexchange", "id": 1982, "tags": "r" }
Higher-order derivatives than second-order differential equations
Question: From https://doi.org/10.1063/1.2155755 he limited himself to second-order differential equations. Our experience in elementary-particle physics has taught us that any term in the field equations of physics that is allowed by fundamental principles is likely to be there in the equations I guess the author means from the effective field theory point of view. Namely, effective actions include non-renormalizable terms, which can lead to higher derivatives. I try to see an example beyond second-order differential equations. Let me start from $\phi^4$. The effective Lagrangian is, e.g., Peskin & Schroeder eq. (12.23) $$ \int d^d x \mathcal{L}_{\mathrm{eff}} = \int d^d x' \left[ \frac{1}{2} \left( \partial'_{\mu} \phi' \right)^2 + \frac{1}{2} m'^2 \phi'^2 + \frac{1}{4} \left( \lambda' \phi'^4 + C \left( \partial'_{\mu} \phi' \right)^4 + D' \phi'^6 +\cdots \right) \right] \tag{1} $$ I suppose $$ \mathcal{L}_{\mathrm{eff}} = \frac{1}{2} \left( \partial'_{\mu} \phi' \right)^2 + \frac{1}{2} m'^2 \phi'^2 + \frac{1}{4} \left( \lambda' \phi'^4 + C \left( \partial'_{\mu} \phi' \right)^4 + D' \phi'^6 +\cdots \right) \tag{2} $$ Try to cook a classical equation of motion. 
From the Euler-Lagrange equation, $$ \frac{ \partial \mathcal{L} }{ \partial \phi} - \partial_{\mu} \frac{ \partial \mathcal{L} }{ \partial \left( \partial_{\mu} \phi \right) } = 0\tag{3} $$ plugging in the effective Lagrangian, we should get some extra terms beyond the Klein-Gordon equation $$ \square \phi' - m'^2 \phi' + C \partial'_{\mu} \left[ \left( \partial'^{\mu} \phi' \right) \left( \partial'_{\mu} \phi' \right)^2 \right] +\cdots = 0.\tag{4} $$ So far the extra term with prefactor $C$ still looks like a second-order differential equation: it has one first-order derivative outside the square bracket, $\partial'_{\mu}$, acting on one first-order derivative term $\left( \partial'^{\mu} \phi' \right) $ times the other first-order derivative term (a first-order derivative times itself) $\left( \partial'_{\mu} \phi' \right)^2$, i.e., $(fg)' = f'g + fg'$. If I further reorganize the part inside the square brackets of the extra term using $f' g = (fg)' - f g' $, $$ C \partial'_{\mu} \left[ \left( \partial'^{\mu} \phi' \right) \left( \partial'_{\mu} \phi' \right)^2 \right] \\ \equiv C \partial'_{\mu} \left\{ \left( \partial'^{\mu} \phi' \right) \left( \partial'_{\mu} \phi' \right)^2 \right\} \\ = C \partial'_{\mu} \left\{ \partial'^{\mu} \left[ \phi' \left( \partial'_{\mu} \phi' \right)^2 \right] - \phi' \partial'^{\mu}\left[ \left( \partial'_{\mu} \phi' \right)^2 \right] \right\} \\ = C \underline{\partial'_{\mu}} \left\{ \partial'^{\mu} \left[ \phi' \left( \partial'_{\mu} \phi' \right)^2 \right] - 2 \phi' \left[ \left( \partial'^{\mu} \phi' \right) \left( \underline{ \partial'^{\mu} \partial'_{\mu}} \phi' \right) \right] \right\}.\tag{5} $$ It seems I get a third-order differential equation from the underlined parts of the above equation. Is my reasoning right? I think I did not impose any quantization in getting the equation of motion (except the effective action from path integrals), since I think the view in the Physics Today essay is not much about quantization. 
Or am I not even wrong? Or should the order of a differential equation be counted by the total number of derivatives in a term, rather than by a second-order derivative acting on a single factor? Answer: OP is right that if the Lagrangian density remains of 1st order, then the Euler-Lagrange (EL) equations will only be of 2nd order. See also e.g. this & this related Phys.SE posts. However, the Wilsonian effective action $$\begin{align} \exp&\left\{ -\frac{1}{\hbar}W_c[J^H,\phi_L] \right\}\cr ~:=~~~&\int \! {\cal D}\frac{\phi_H}{\sqrt{\hbar}}~\exp\left\{ \frac{1}{\hbar} \left(-S[\phi_L+\phi_H]+J^H_k \phi_H^k\right)\right\} \end{align}$$ is defined by integrating out heavy/high modes $\phi^k_H$ and leaving the light/low modes $\phi^k_L$. Here $J^H_k$ denotes sources for the heavy modes. The (possibly non-local!) Wilsonian effective action $W_c[J^H,\phi_L]$ is the generating functional of connected $\phi_H$ Feynman diagrams in a background $J^H,\phi_L$. Nevertheless, the heavy propagators are exponentially suppressed, so the non-locality is mild, and can be taken into account by a Taylor expansion, cf. e.g. my Phys.SE answer here. The upshot is that, in the Wilsonian renormalization group flow, the Wilsonian Lagrangian density will in principle contain all possible terms that are not excluded by symmetry, e.g. $$ \ldots + \ldots +\frac{E}{2} (\partial_{\mu}\partial_{\nu}\phi)(\partial^{\mu}\partial^{\nu}\phi) + \frac{F}{2} (\partial_{\mu}\phi)(\partial^{\mu}\partial^{\nu}\phi)(\partial_{\nu}\phi) + \ldots ,$$ i.e., the Lagrangian density becomes of higher order. For higher-order Lagrangian theories, the EL equations (3) become $$ 0~\approx~\frac{\delta S}{\delta \phi} ~=~\frac{\partial {\cal L}}{\partial \phi} -\sum_{\mu} \frac{d}{dx^{\mu}} \frac{\partial {\cal L}}{\partial (\partial_{\mu}\phi)} + \sum_{\mu\leq \nu} \frac{d}{dx^{\mu}} \frac{d}{dx^{\nu}} \frac{\partial {\cal L}}{\partial (\partial_{\mu}\partial_{\nu}\phi)} - \ldots. 
$$ Here the $\approx$ symbol means equality modulo equations of motion, and the ellipsis $\ldots$ denotes possible higher-derivative terms. In general, if the Lagrangian density is of $n$'th order, then the EL equations will be of $2n$'th order.
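The closing claim (an $n$'th-order Lagrangian gives $2n$'th-order EL equations) can be checked symbolically in a 1D toy model. Below is a minimal sketch with sympy, assuming a made-up Lagrangian density $\frac{1}{2}\phi'^2 + \frac{E}{2}\phi''^2$, where the $E$-term mimics a higher-derivative correction like the one quoted above:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x, E = sp.symbols('x E')
phi = sp.Function('phi')(x)

# Toy 1D higher-order Lagrangian density: L = 1/2 phi'^2 + (E/2) phi''^2
L = sp.Rational(1, 2) * phi.diff(x)**2 + E / 2 * phi.diff(x, 2)**2

# euler_equations applies the generalized EL equation, including the
# (+ d^2/dx^2) dL/d(phi'') term coming from the second-derivative dependence.
(eom,) = euler_equations(L, phi, x)

# Resulting equation of motion: E*phi'''' - phi'' = 0, i.e. 4th order,
# which is 2n-th order for this n = 2 Lagrangian.
```

The same mechanics reproduces the 2nd-order Klein-Gordon equation when the $E$-term is dropped, so the jump in order is entirely due to the higher-derivative term.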
{ "domain": "physics.stackexchange", "id": 84461, "tags": "lagrangian-formalism, field-theory, differential-equations, variational-calculus, effective-field-theory" }
Why is silver acetate insoluble in water?
Question: As seen in this video, $\ce{AgNO3}$ reacted with sodium acetate $(\ce{CH3CO2Na})$ to form silver acetate $(\ce{CH3CO2Ag})$. Why is it that $\ce{CH3CO2Na}$ is soluble but $\ce{CH3CO2Ag}$ forms a precipitate? Shouldn't both be polar and thus dissolve in water? Answer: Probably for the same reason that $\ce{AgCl}$ is insoluble compared to $\ce{NaCl}$ and $\ce{KCl}$. Trying to research this on the web gives all sorts of explanations about differences in lattice and solvation energies with no reasons why. My explanation is that, because the filled d orbitals shield the nuclear charge inefficiently, the silver ion's valence orbital is relatively exposed; the ionic bond therefore has some charge-transfer feedback that increases the lattice energy. This loosely explains the decreasing solubility and deepening color of $\ce{AgBr}$ and $\ce{AgI}$, and the relative solubilities of $\ce{PbCl2}$, $\ce{PbBr2}$ and $\ce{PbI2}$.
{ "domain": "chemistry.stackexchange", "id": 16613, "tags": "solubility, polarity" }
Are there star systems orbited by stars?
Question: I have never really heard about such occurrences, and I asked myself whether this could be possible. Could there be systems with a star (or black hole) that is so heavy that other, less heavy stars orbit it? I can imagine two reasons why the answer would be no. First, this isn't possible because such heavy objects would just affect each other, with neither being a stable center. Second, this wouldn't be possible within a galaxy, since it would just form a galaxy. So apart from these two scenarios, could this happen within a galaxy, or would it just form something different? Answer: There are binary stars (orbiting around their centre of mass) and there are stars orbiting around neutron stars or black holes (or rather, again, around the centre of mass of the system). I don't think many stars would orbit a black hole, except... There is the black hole at the centre of most galaxies, including our own. Lots of stars orbit around that - in fact the entire galaxy does. It is possible that some very small stars orbit a massive star or black hole, but I am not aware of the existence of such a system. The stars would have to be very small, possibly even brown dwarfs, as otherwise the centre of revolution of the system would be way outside the primary (as it is in the Pluto/Charon system), and your requirements would no longer be met. So yes, there are stars orbiting other things, be they stars, neutron stars, or black holes.
{ "domain": "physics.stackexchange", "id": 29731, "tags": "astrophysics, stars" }
Help understanding Einstein notation
Question: This is basically the same question as this one. I have the same problem with the sign. In the Dirac equation $(i\gamma^{\mu}\partial_{\mu}-m)\psi = 0$, the term $i\gamma^{\mu}\partial_{\mu}$ is: $$i\gamma^{\mu}\partial_{\mu} = \sum_{j=0}^{3}i\gamma^{j}\partial_{j}$$ However, the Einstein summation convention is being used. The answer to the linked question says that this is because $\partial_{\mu}$ is contravariant; however, I have seen $\partial_{\mu}$ used with this sign convention many times. For instance, the D'Alembertian is: $$\square^{2} = \partial_{0}^{2}-\partial_{1}^{2}-\partial_{2}^{2}-\partial_{3}^{2} = \partial^{\mu}\partial_{\mu} = \partial_{\mu}\partial^{\mu}$$ So, what's the difference here? When do I have to change the signs and when do I not? Answer: We use the metric $[\eta]=\mathrm{diag}(+,-,-,-).$ Note first that $$X^\mu Y_\mu=X^0Y_0+X^1 Y_1+X^2Y_2+X^3Y_3 \tag{1},$$ but also $$X^\mu Y_\mu=\eta^{\mu\nu}X_\mu Y_\nu=\eta^{00}X_0Y_0+\eta^{11}X_1Y_1+\eta^{22}X_2Y_2+\eta^{33}X_3Y_3, \tag{2}$$ which, using the components of the metric gives $$X^\mu Y_\mu=X_0Y_0-X_1Y_1-X_2Y_2-X_3Y_3. \tag{3}$$ Note the position of the indices in $(3)$ compared to $(1)$. We have both indices down in $(3)$ at the cost of introducing factors of $\pm1$ from the Minkowski metric.
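As a sanity check on the index gymnastics, here is a small numerical sketch (numpy, with arbitrary made-up component values) confirming that the contraction in eq. (1), with one index raised by the metric, equals the all-indices-down form of eq. (3):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # metric (+,-,-,-)
rng = np.random.default_rng(1)
X_lo = rng.standard_normal(4)            # covariant components X_mu
Y_lo = rng.standard_normal(4)            # covariant components Y_mu

X_hi = eta @ X_lo                        # raise the index: X^mu = eta^{mu nu} X_nu
contraction = X_hi @ Y_lo                # X^mu Y_mu, as in eq. (1)

# Eq. (3): the same contraction written with both indices down
explicit = X_lo[0]*Y_lo[0] - X_lo[1]*Y_lo[1] - X_lo[2]*Y_lo[2] - X_lo[3]*Y_lo[3]
```

Raising the index flips the sign of the spatial components, which is exactly where the minus signs in eq. (3) come from.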
{ "domain": "physics.stackexchange", "id": 79570, "tags": "metric-tensor, notation, dirac-equation" }
Identifying minimums in gravitational wells in General Relativity
Question: According to the Newtonian gravitational potential: $$\phi\left(r\right)=-\dfrac{GM}{r}.\tag{1}$$ We can find the minimum possible point in a gravitational well as the one for which the value of the potential is lowest ($r\rightarrow 0$ in this case). How can you do the same in General Relativity? Is it necessary to use the Ricci tensor instead of the usual gravitational potential, or is the latter sufficient? Do you lose relevant information using eq. (1) rather than the usual field equations? Answer: One can find the effective potential a particle experiences; consider, for instance, the most common case, the Schwarzschild metric. Outside of a non-rotating massive object the effective potential can be found by computing the square of the 4-momentum in that metric ($r_s$ is the Schwarzschild radius): $$(mc)^2 = p_\mu p^\mu = m^2 \frac{dx_\mu}{d\tau}\frac{dx^\mu}{d\tau} = m^2 \left[ \left(1-\frac{r_s}{r}\right)c^2\dot{t}^2 -\frac{\dot{r}^2}{1-\frac{r_s}{r}} -r^2 \dot{\varphi}^2\right] \tag{1}$$ The motion is governed by 2 constants of motion $E$ and $L_z$. EDIT: This can be seen by considering the expression $m^2 \frac{dx_\mu}{d\tau}\frac{dx^\mu}{d\tau}$ as a Lagrangian (more precisely, the Lagrangian multiplied by $-2m$): $$-2mL = m^2 \frac{dx_\mu}{d\tau}\frac{dx^\mu}{d\tau}$$ In order to find the canonical momenta the derivatives with respect to $\dot{t}$, $\dot{r}$ and $\dot{\varphi}$ are taken. 
Most interesting are $\dot{t}$ and $\dot{\varphi}$ since $t$ and $\varphi$ are cyclic variables ($r$ is not a cyclic variable since the Lagrangian depends explicitly on $r$): Then we get (where energy $E$ and angular momentum $L_z$ appear as constants of motion scaled by $2m$, like the Lagrangian): $$const =\frac{\partial (-2mL)}{\partial \dot{t} } =2m^2c^2 \dot{t} \left(1-\frac{r_s}{r}\right) =2mE$$ and $$const =\frac{\partial (-2m L)}{\partial \dot{\varphi} } =-2m^2 r^2\dot{\varphi} =-2mL_z$$ which turn into: EDIT END $$\dot{t} =\frac{E}{mc^2(1-\frac{r_s}{r})} \quad\text{and}\quad \dot{\varphi} =\frac{L_z}{mr^2}$$ After a bit of algebra one gets from (1): $$ \frac{E^2}{(mc^2)^2} = 1 + \frac{\dot{r}^2}{c^2} +V_{eff}$$ where the effective potential equals: $$ V_{eff}= -\frac{r_s}{r} + \frac{L_z^2}{m^2c^2}\left(\frac{1}{r^2} - \frac{r_s}{r^3}\right)$$ The minima and maxima of this potential provide you with the requested information. If the solution of Einstein's field equations (EFE) is already known, as assumed here, the task is rather easy. Of course, to find the rotationally symmetric EFE solution outside a non-rotating massive object one has to solve the EFE's in vacuum, i.e. $$ R_{\mu\nu}=0 \quad\text{with the corresponding symmetry conditions} $$ where $R_{\mu\nu}$ is the Ricci tensor.
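To actually locate the minima and maxima, set $dV_{eff}/dr = 0$; with $\ell^2 \equiv L_z^2/(m^2c^2)$ this reduces to the quadratic $r_s r^2 - 2\ell^2 r + 3\ell^2 r_s = 0$. A small numerical sketch (numpy, with illustrative values in units of $r_s$; the specific $\ell^2$ values are assumptions for demonstration):

```python
import numpy as np

def v_eff(r, rs, l2):
    # Effective potential from above, with l2 = L_z**2 / (m*c)**2
    return -rs / r + l2 * (1.0 / r**2 - rs / r**3)

def extrema(rs, l2):
    # dV/dr = 0  <=>  rs*r**2 - 2*l2*r + 3*l2*rs = 0
    return np.sort(np.roots([rs, -2.0 * l2, 3.0 * l2 * rs]).real)

rs = 1.0                                   # work in units of the Schwarzschild radius
r_max, r_min = extrema(rs, 4.0 * rs**2)    # inner maximum (barrier), outer minimum
# For l2 = 4 rs^2 the roots are r = 2 rs (barrier top) and r = 6 rs (stable minimum).

# At the critical value l2 = 3 rs^2 the two extrema merge at r = 3 rs,
# the innermost stable circular orbit (r = 6 GM / c^2).
r_isco = extrema(rs, 3.0 * rs**2)
```

The outer root is the stable circular-orbit radius (the "bottom of the well" for given $L_z$); below the critical $\ell^2$ the quadratic has no real roots and no stable orbit exists.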
{ "domain": "physics.stackexchange", "id": 95688, "tags": "general-relativity, gravity, potential, potential-energy, equilibrium" }
How to specify the cut-off frequency in cascaded biquad filters?
Question: As part of my effort to understand how a high-pass filter can be implemented, I'm reading this section of HandWiki. My question here is: How do the two simple formulas presented there implement a cut-off at 8 kHz? Do we just cleverly set the $a_{1,2}$ and $b_{0,1,2}$ coefficients? Do we not need another (set of) components? Answer: Do we just cleverly set the $a_{1,2}$ and $b_{0,1,2}$ coefficients? Yes. The coefficients determine entirely what the filter does: shape (lowpass, highpass, bandpass, peaking EQ, shelving, etc.), steepness, crossover frequency, Q, gain, etc. Calculating the coefficients is called "filter design" and many textbooks have been written on the topic. It's mathematically challenging and typically requires university level math/engineering skills. An example of an introductory course is https://ccrma.stanford.edu/~jos/filters/ This being said: most scientific programming languages (Matlab/Octave, Python, etc) have built-in libraries: you can input your filter requirements (including the cutoff frequency) and the filter coefficients pop out.
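To make the "coefficients pop out" point concrete, here is a hedged sketch using scipy's design routines (the 48 kHz sample rate is an assumption for illustration): one 2nd-order Butterworth high-pass biquad at 8 kHz, whose response is -3 dB at the cut-off, as expected for a Butterworth design.

```python
import numpy as np
from scipy import signal

fs = 48_000   # sample rate in Hz (assumed; the question doesn't fix one)
fc = 8_000    # desired cut-off frequency in Hz

# One 2nd-order Butterworth high-pass section; each row of `sos` holds the
# biquad coefficients [b0, b1, b2, a0, a1, a2], with a0 normalized to 1.
sos = signal.iirfilter(N=2, Wn=fc, btype="highpass", ftype="butter",
                       fs=fs, output="sos")

# Evaluate the magnitude response at the cut-off: Butterworth gives -3 dB there.
_, h = signal.sosfreqz(sos, worN=[fc], fs=fs)
gain_db = 20.0 * np.log10(abs(h[0]))
```

Steeper filters are made by asking for a higher `N`, which yields several cascaded biquad rows in `sos`; that is exactly the "cascaded biquad" structure the HandWiki section describes.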
{ "domain": "dsp.stackexchange", "id": 12379, "tags": "software-implementation, parameter-estimation, biquad" }
Can magnetic field change kinetic energy?
Question: An electron and a proton are moving under the influence of mutual forces. In calculating the change in the kinetic energy of the system during motion, one ignores the magnetic force of one on another. This is, because (a) the two magnetic forces are equal and opposite, so they produce no net effect (b) the magnetic forces do not work on each particle (c) the magnetic forces do equal and opposite (but non-zero) work on each particle (d) the magnetic forces are necessarily negligible Answer: Magnetic field does no work. The Lorentz force on a point charge is $$\vec{F} = q(\vec{E}+\vec{v}\times\vec{B}). $$ The force due to the magnetic field is $$ \vec{F}_{mag} = q(\vec{v}\times\vec{B}) .$$ The work done on $q$ due to the magnetic force per unit time is $$P_{mag} = \vec{F}_{mag}\cdot\vec{v} = q(\vec{v}\times\vec{B})\cdot\vec{v} = q(\vec{v}\times\vec{v})\cdot\vec{B} = 0. $$ Thus, quite generally, forces due to magnetic fields do no work, because $\vec{F}_{mag}$ is orthogonal to $\vec{v}$, so the correct choice is (b).
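The final step can be spot-checked numerically. A quick sketch (numpy, with arbitrary made-up vectors): the scalar triple product $(\vec{v}\times\vec{B})\cdot\vec{v}$ vanishes for any $\vec{v}$ and $\vec{B}$, so the delivered power is zero up to floating-point rounding.

```python
import numpy as np

rng = np.random.default_rng(0)
q = 1.6e-19                      # charge in coulombs (electron-scale)
v = rng.standard_normal(3)       # arbitrary velocity vector
B = rng.standard_normal(3)       # arbitrary magnetic field vector

F_mag = q * np.cross(v, B)       # magnetic part of the Lorentz force
P_mag = float(np.dot(F_mag, v))  # power delivered by the magnetic force

# F_mag is perpendicular to v, so P_mag vanishes (up to rounding error).
```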
{ "domain": "physics.stackexchange", "id": 60444, "tags": "homework-and-exercises, magnetic-fields, work" }
Describing pressure in incompressible fluids
Question: The canonical process of determining the pressure, velocity, and density of a fluid under the influence (or not) of external forces is through simultaneously solving conservation of mass, conservation of momentum, and an equation of state for the pressure (or conservation of energy). For an incompressible Navier-Stokes fluid with constant density $\rho_0$, this would look like: $$p = p(\rho, \vec{v}) = p(\rho_0,\vec{v})$$ $$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \vec{v}) = \nabla\cdot \vec{v} =0$$ $$\rho \left(\frac{\partial \vec{v}}{\partial t} + \vec{v}\cdot\nabla \vec{v}\right) = -\nabla p + \mu \nabla^2\vec{v} + \vec{f}$$ Given some external force, all three of the variables can be solved for with this system of equations. However, when discussing flows in introductory fluid mechanics like pipe flow, we ignore the equation of state; we arbitrarily prescribe a pressure gradient and then use conservation of mass/momentum to obtain the resulting velocity for a fluid with a given density. How do we not run the risk of prescribing a pressure gradient that violates the equation of state/conservation of energy? Surely the answer lies in the difference between thermodynamic and mechanical pressure, but is the mechanical pressure really just treated as a totally unconstrained variable? Answer: I think the role of pressure is to adjust itself immediately according to changes in the velocity field so that velocity is divergence-free at all times. In that sense, it does not have any thermodynamic meaning.
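The answer's point can be made explicit: taking the divergence of the momentum equation and using $\nabla\cdot\vec{v} = 0$ turns the pressure into the solution of a Poisson equation, $\nabla^2 p = \nabla\cdot\left(\vec{f} - \rho\,\vec{v}\cdot\nabla\vec{v}\right)$ in the steady case, which is exactly the constraint that keeps the velocity field divergence-free. A symbolic sketch with sympy for a steady, force-free, Taylor-Green-type field (the candidate pressure below is an assumption chosen to satisfy this balance):

```python
import sympy as sp

x, y, rho = sp.symbols('x y rho', real=True)

# Divergence-free (Taylor-Green-type) velocity field
u = sp.sin(x) * sp.cos(y)
v = -sp.cos(x) * sp.sin(y)
div_v = sp.simplify(sp.diff(u, x) + sp.diff(v, y))   # vanishes identically

# Divergence of the steady, force-free momentum equation gives
#   laplacian(p) = -rho * div((v . grad) v)
adv_u = u * sp.diff(u, x) + v * sp.diff(u, y)
adv_v = u * sp.diff(v, x) + v * sp.diff(v, y)
source = sp.simplify(-rho * (sp.diff(adv_u, x) + sp.diff(adv_v, y)))

# Candidate pressure (assumed for this particular field)
p = rho / 4 * (sp.cos(2 * x) + sp.cos(2 * y))
laplacian_p = sp.diff(p, x, 2) + sp.diff(p, y, 2)
```

Since the Poisson source is built entirely from the velocity field, the pressure is slaved to $\vec{v}$ rather than set by a thermodynamic equation of state, which is why prescribing a pressure gradient in pipe-flow problems does not conflict with one.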
{ "domain": "physics.stackexchange", "id": 61856, "tags": "thermodynamics, fluid-dynamics, pressure" }