Specific force (SF) is a mass-specific quantity, defined as force per unit mass:
SF = F/m
It is a physical quantity of kind acceleration, with dimension of length per time squared and units of metre per second squared (m·s−2).
It is normally applied to forces other than gravity, to emulate the relationship between gravitational acceleration and gravitational force.
It can also be called mass-specific weight (weight per unit mass), as the weight of an object is equal to the magnitude of the gravity force acting on it.
The g-force is an instance of specific force measured in units of the standard gravity (g) instead of m/s², i.e., in multiples of g (e.g., "3 g").
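The definition above can be sketched numerically; the force and mass values below are illustrative, not taken from the article:

```python
# Specific force SF = F/m, expressed both in m/s^2 and in multiples of
# standard gravity g0 (the "g-force" convention). Illustrative values.

G0 = 9.80665  # standard gravity, m/s^2

def specific_force(force_n: float, mass_kg: float) -> float:
    """Specific force in m/s^2 (force per unit mass)."""
    return force_n / mass_kg

def in_g(sf_ms2: float) -> float:
    """Express a specific force as a multiple of standard gravity."""
    return sf_ms2 / G0

sf = specific_force(force_n=2941.995, mass_kg=100.0)  # 2941.995 N on 100 kg
print(sf)        # 29.41995 m/s^2
print(in_g(sf))  # 3.0, i.e. "3 g"
```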
== Type of acceleration ==
The (mass-)specific force is not a coordinate acceleration, but rather a proper acceleration, which is the acceleration relative to free-fall. Forces, specific forces, and proper accelerations are the same in all reference frames, but coordinate accelerations are frame-dependent. For free bodies, the specific force is the cause of, and a measure of, the body's proper acceleration.
The acceleration of an object free falling towards the earth depends on the reference frame (it disappears in the free-fall frame, also called the inertial frame), but any g-force "acceleration" will be present in all frames. This specific force is zero for freely-falling objects, since gravity acting alone does not produce g-forces or specific forces.
Accelerometers on the surface of the Earth measure a constant 9.8 m/s² even when they are not accelerating (that is, when they do not undergo coordinate acceleration). This is because accelerometers measure the proper acceleration produced by the g-force exerted by the ground (gravity acting alone never produces g-force or specific force). Accelerometers measure specific force (proper acceleration), which is the acceleration relative to free-fall, not the "standard" acceleration that is relative to a coordinate system.
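A minimal one-axis sketch of this distinction, assuming an idealized accelerometer and an up-positive sign convention (the model is illustrative, not a real sensor model):

```python
# Idealized one-axis accelerometer: it reads specific force, i.e. the
# coordinate acceleration minus the gravitational acceleration vector
# (here gravity points down, so g_vector = -G).

G = 9.8  # m/s^2

def accelerometer_reading(coord_accel: float) -> float:
    """Specific force = a_coordinate - g_vector, with g_vector = -G."""
    return coord_accel - (-G)

print(accelerometer_reading(0.0))  # at rest on the ground: reads 9.8 (1 g)
print(accelerometer_reading(-G))   # free fall: reads 0.0
```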
== Hydraulics ==
In open channel hydraulics, specific force (Fs) has a different meaning:
Fs = Q²/(gA) + zA
where Q is the discharge, g is the acceleration due to gravity, A is the cross-sectional area of flow, and z is the depth of the centroid of flow area A.
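The hydraulic formula can be evaluated directly; the sketch below assumes a rectangular channel (so A = b·y and the centroid depth is z = y/2) with illustrative values:

```python
# Specific force in open-channel hydraulics: Fs = Q^2/(g*A) + z*A,
# evaluated for a rectangular channel of width b and flow depth y.

G = 9.81  # acceleration due to gravity, m/s^2

def specific_force_rect(Q: float, b: float, y: float) -> float:
    """Fs (m^3) for a rectangular channel; Q in m^3/s, b and y in m."""
    A = b * y        # cross-sectional area of flow
    z = y / 2.0      # depth of the centroid of A below the surface
    return Q**2 / (G * A) + z * A

print(specific_force_rect(Q=5.0, b=2.0, y=1.5))  # Fs in m^3
```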
== See also ==
Acceleration
Proper acceleration
== References ==
Source: Wikipedia/Force_per_unit_mass
Naïve physics or folk physics is the untrained human perception of basic physical phenomena. In the field of artificial intelligence the study of naïve physics is a part of the effort to formalize the common knowledge of human beings.
Many ideas of folk physics are simplifications, misunderstandings, or misperceptions of well-understood phenomena, incapable of giving useful predictions of detailed experiments, or simply are contradicted by more thorough observations. They may sometimes be true, be true in certain limited cases, be true as a good first approximation to a more complex effect, or predict the same effect but misunderstand the underlying mechanism.
Naïve physics is characterized by a mostly intuitive understanding humans have about objects in the physical world. Certain notions of the physical world may be innate.
== Examples ==
Some examples of naïve physics include commonly understood, intuitive, or everyday-observed rules of nature:
What goes up must come down
A dropped object falls straight down
A solid object cannot pass through another solid object
A vacuum sucks things towards it
An object is either at rest or moving, in an absolute sense
Two events are either simultaneous or they are not
Many of these and similar ideas formed the basis for the first works in formulating and systematizing physics by Aristotle and the medieval scholastics in Western civilization. In the modern science of physics, they were gradually contradicted by the work of Galileo, Newton, and others. The idea of absolute simultaneity survived until 1905, when the special theory of relativity and its supporting experiments discredited it.
== Psychological research ==
The increasing sophistication of technology makes possible more research on knowledge acquisition. Researchers measure physiological responses such as heart rate and eye movement in order to quantify the reaction to a particular stimulus. Concrete physiological data is helpful when observing infant behavior, because infants cannot use words to explain things (such as their reactions) the way most adults or older children can.
Research in naïve physics relies on technology to measure eye gaze and reaction time in particular. Through observation, researchers know that infants get bored looking at the same stimulus after a certain amount of time. That boredom is called habituation. When an infant is sufficiently habituated to a stimulus, he or she will typically look away, alerting the experimenter to his or her boredom. At this point, the experimenter will introduce another stimulus. The infant will then dishabituate by attending to the new stimulus. In each case, the experimenter measures the time it takes for the infant to habituate to each stimulus.
As an example of the use of this method, research by Susan Hespos and colleagues studied five-month-old infants' responses to the physics of liquids and solids. Infants in this research were shown liquid being poured from one glass to another until they were habituated to the event; that is, they spent less time looking at it. Then, the infants were shown an event in which the liquid turned to a solid, which tumbled from the glass rather than flowed. The infants looked longer at the new event; that is, they dishabituated.
Researchers infer that the longer the infant takes to habituate to a new stimulus, the more it violates his or her expectations of physical phenomena. When an adult observes an optical illusion that seems physically impossible, they will attend to it until it makes sense.
It is commonly believed that our understanding of physical laws emerges strictly from experience. But research shows that infants, who do not yet have such expansive knowledge of the world, have the same extended reaction to events that appear physically impossible. Such studies hypothesize that all people are born with an innate ability to understand the physical world.
Smith and Casati (1994) have reviewed the early history of naïve physics, and especially the role of the Italian psychologist Paolo Bozzi.
=== Types of experiments ===
The basic experimental procedure of a study on naïve physics involves three steps: prediction of the infant's expectation, violation of that expectation, and measurement of the results. As mentioned above, the physically impossible event holds the infant's attention longer, indicating surprise when expectations are violated.
==== Solidity ====
An experiment that tests an infant's knowledge of solidity involves the impossible event of one solid object passing through another. First, the infant is shown a flat, solid screen rotating through an arc from 0° to 180°. Next, a solid block is placed in the path of the screen, preventing it from completing its full range of motion. The infant habituates to this event, as it is what anyone would expect. Then, the experimenter creates the impossible event, and the solid screen passes through the solid block. The infant is confused by the event and attends longer than in the possible-event trials.
==== Occlusion ====
An occlusion event tests the knowledge that an object exists even if it is not immediately visible. Jean Piaget originally called this concept object permanence. When Piaget formed his developmental theory in the 1950s, he claimed that object permanence is learned, not innate. The children's game peek-a-boo is a classic example of this phenomenon, and one which obscures the true grasp infants have on permanence. To disprove this notion, an experimenter designs an impossible occlusion event. The infant is shown a block and a transparent screen. The infant habituates, then a solid panel is placed in front of the objects to block them from view. When the panel is removed, the block is gone, but the screen remains. The infant is confused because the block has disappeared, indicating an understanding that objects maintain their location in space and do not simply disappear.
==== Containment ====
A containment event tests the infant's recognition that an object that is bigger than a container cannot fit completely into that container. Elizabeth Spelke, one of the psychologists who founded the naïve physics movement, identified the continuity principle, which conveys an understanding that objects exist continuously in time and space. Both occlusion and containment experiments hinge on the continuity principle. In the experiment, the infant is shown a tall cylinder and a tall cylindrical container. The experimenter demonstrates that the tall cylinder fits into the tall container, and the infant is bored by the expected physical outcome. The experimenter then places the tall cylinder completely into a much shorter cylindrical container, and the impossible event confuses the infant. Extended attention demonstrates the infant's understanding that containers cannot hold objects that exceed them in height.
=== Baillargeon's research ===
The published findings of Renee Baillargeon brought innate knowledge to the forefront in psychological research. Her research method centered on the visual preference technique. Baillargeon and her followers studied how infants show preference to one stimulus over another. Experimenters judge preference by the length of time an infant will stare at a stimulus before habituating. Researchers believe that preference indicates the infant's ability to discriminate between the two events.
== See also ==
Cartoon physics
Common sense
Elizabeth Spelke
Folk psychology
Renee Baillargeon
Weak ontology
== References ==
Source: Wikipedia/Folk_physics
Centrifugal casting or rotocasting is a casting technique that is typically used to cast thin-walled cylinders. It is typically used to cast materials such as metals, glass, and concrete. A high quality is attainable by control of metallurgy and crystal structure. Unlike most other casting techniques, centrifugal casting is chiefly used to manufacture rotationally symmetric stock materials in standard sizes for further machining, rather than shaped parts tailored to a particular end-use.
== Materials ==
Typical materials that can be centrifugally cast are metals, cements, concretes, glass, and pottery materials. Typical metals cast are iron, steel, stainless steels, and alloys of nickel, aluminum, copper, and magnesium.
Two materials can be combined by introducing a second material during the process. A common example is cast iron pipe coated on the interior with cement.
== Process for casting metal ==
In centrifugal casting, a permanent mold is rotated continuously at high speeds (300 to 3000 rpm) as the molten metal is poured. The molten metal spreads along the inside mold wall, where it solidifies after cooling. The casting is usually a fine-grained casting with an especially fine-grained outer diameter, due to the rapid cooling at the surface of the mold. Lighter impurities and inclusions move towards the inside diameter and can be machined away following the casting.
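A quantity often used when specifying rotation speeds for this process is the G-factor: the centripetal acceleration at the mold wall expressed in multiples of gravity, G = rω²/g. The sketch below shows the calculation with an illustrative mold radius:

```python
# G-factor for centrifugal casting: centripetal acceleration at the mold
# wall, in multiples of gravity. Mold dimensions below are illustrative.

import math

g = 9.81  # m/s^2

def g_factor(rpm: float, radius_m: float) -> float:
    """G = r * omega^2 / g for a mold of the given radius spun at rpm."""
    omega = rpm * 2.0 * math.pi / 60.0  # angular speed, rad/s
    return radius_m * omega**2 / g

# e.g. a mold of 0.3 m radius spun at 300 rpm (low end of the range above):
print(g_factor(300.0, 0.3))  # ~30 g
```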
Casting machines may be either horizontal or vertical-axis. Horizontal axis machines are preferred for long, thin cylinders, vertical machines for rings and bearings.
Castings usually solidify from the outside in. This directional solidification improves some metallurgical properties. Often the inner and outermost layers are removed and only the intermediary columnar zone is used.
Centrifugal casting was the invention of Alfred Krupp, who used it to manufacture railway tyres (cast steel tyres for railway wheels) starting in 1852.
== Applications ==
Typical parts made by this process are pipes, flywheels, cylinder liners, and other parts that are axi-symmetric. It is notably used to cast cylinder liners and sleeve valves for piston engines, parts which could not be reliably manufactured otherwise. UFIP is notable for applying this process to the manufacture of Cymbals.
=== Features of centrifugal casting ===
Castings can be made in almost any length, thickness and diameter.
Different wall thicknesses can be produced from the same mold.
Eliminates the need for cores.
Good mechanical properties due to the grain structure formed by centrifugal action.
Typically cylindrical shapes are produced:
In sizes of up to 6 m (20 ft) diameter and 15 m (49 ft) length.
With a wall thickness range from 2.5 to 125 mm (0.098 to 4.921 in).
With tolerance limits of 2.5 mm (0.098 in) on the outer diameter and 3.8 mm (0.15 in) on the inner diameter.
In a surface finish from 2.5 to 12.5 µm (98 to 492 µin) rms.
=== Glass ===
The technique is known in the glass industry as "spinning". The centrifugal force pushes the molten glass against the mold wall, where it solidifies. The cooling process often takes between 16 and 72 hours depending on the impurities or volume of material. Typical products made using this process are television tubes and missile nose cones.
Spin casting is also used to manufacture large telescope mirrors, where the natural curve followed by the molten glass greatly reduces the amount of grinding required. Rather than pouring glass into a mold an entire turntable containing the peripheral mold and the back pattern (a honeycomb pattern to reduce the mass of the finished product) is contained within a furnace and charged with the glass material used. The assembly is then heated and spun at slow speed until the glass is liquid, then gradually cooled over a period of months.
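The "natural curve" of a spinning liquid is the paraboloid z = ω²r²/(2g), so the rotation rate alone fixes the mirror's focal length, f = g/(2ω²). A short sketch, with illustrative spin rates:

```python
# Focal length of a spin-cast mirror: a liquid spun at angular speed
# omega settles into z = omega^2 r^2 / (2g), a paraboloid with focal
# length f = g / (2 omega^2). Spin rates below are illustrative.

import math

g = 9.81  # m/s^2

def focal_length(rpm: float) -> float:
    omega = rpm * 2.0 * math.pi / 60.0  # rad/s
    return g / (2.0 * omega**2)

# Slower spin -> shallower paraboloid -> longer focal length:
print(focal_length(5.0))   # ~17.9 m
print(focal_length(10.0))  # ~4.5 m
```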
Centrifugal casting is also commonly used to shape glass into spherical objects such as marbles.
== Benefits ==
Cylinders and shapes with rotational symmetry are most commonly cast by this technique. Long castings are often produced with the long axis parallel to the ground rather than standing up in order to distribute the effect of gravity evenly.
Thin-walled cylinders are difficult to cast by other means. Centrifugal casting is particularly suited to them, as the rotating melt behaves in the manner of a shallow flat casting relative to the direction of the centrifugal force.
Centrifugal casting is also used to manufacture disk and cylinder shaped objects such as railway carriage wheels or machine fittings where grain, flow, and balance are important to the durability and utility of the finished product.
Noncircular shapes may also be cast, provided the shape is relatively constant in radius.
== See also ==
Rotating furnace – Device for making axially symmetric paraboloids
Spin casting – Method of utilizing centrifugal force to produce castings from a rubber mold
Spin casting (mirrors) – Technique for constructing large parabolic mirrors
== References ==
== Further reading ==
Kalpakjian, Serope; Schmid, Steven R. Manufacturing Engineering and Technology (5th ed.). p. 525.
== External links ==
animation of centrifugal casting process
Efunda site page with centrifugal casting fundamentals
Centrifugal Casting Video
Cylinder Liner Manufacturing (Centrifugal Casting Process Video)
Centrifugal Casting Ductile Iron Pipe
Source: Wikipedia/Centrifugal_casting_(industrial)
In physics, a gauge theory is a type of field theory in which the Lagrangian, and hence the dynamics of the system itself, does not change under local transformations according to certain smooth families of operations (Lie groups). Formally, the Lagrangian is invariant under these transformations.
The term "gauge" refers to any specific mathematical formalism to regulate redundant degrees of freedom in the Lagrangian of a physical system. The transformations between possible gauges, called gauge transformations, form a Lie group—referred to as the symmetry group or the gauge group of the theory. Associated with any Lie group is the Lie algebra of group generators. For each group generator there necessarily arises a corresponding field (usually a vector field) called the gauge field. Gauge fields are included in the Lagrangian to ensure its invariance under the local group transformations (called gauge invariance). When such a theory is quantized, the quanta of the gauge fields are called gauge bosons. If the symmetry group is non-commutative, then the gauge theory is referred to as non-abelian gauge theory, the usual example being the Yang–Mills theory.
Many powerful theories in physics are described by Lagrangians that are invariant under some symmetry transformation groups. When they are invariant under a transformation identically performed at every point in the spacetime in which the physical processes occur, they are said to have a global symmetry. Local symmetry, the cornerstone of gauge theories, is a stronger constraint. In fact, a global symmetry is just a local symmetry whose group's parameters are fixed in spacetime (the same way a constant value can be understood as a function of a certain parameter, the output of which is always the same).
Gauge theories are important as the successful field theories explaining the dynamics of elementary particles. Quantum electrodynamics is an abelian gauge theory with the symmetry group U(1) and has one gauge field, the electromagnetic four-potential, with the photon being the gauge boson. The Standard Model is a non-abelian gauge theory with the symmetry group U(1) × SU(2) × SU(3) and has a total of twelve gauge bosons: the photon, three weak bosons and eight gluons.
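The boson count follows from the dimensions of the gauge groups' Lie algebras (dim U(1) = 1, dim SU(n) = n² − 1), as this small sketch shows:

```python
# One gauge boson per group generator: dim U(1) = 1, dim SU(n) = n^2 - 1.

def dim_u1() -> int:
    return 1

def dim_su(n: int) -> int:
    return n * n - 1

# Standard Model gauge group: U(1) x SU(2) x SU(3)
bosons = dim_u1() + dim_su(2) + dim_su(3)
print(bosons)  # 12: the photon, three weak bosons, and eight gluons
```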
Gauge theories are also important in explaining gravitation in the theory of general relativity. Its case is somewhat unusual in that the gauge field is a tensor, the Lanczos tensor. Theories of quantum gravity, beginning with gauge gravitation theory, also postulate the existence of a gauge boson known as the graviton. Gauge symmetries can be viewed as analogues of the principle of general covariance of general relativity in which the coordinate system can be chosen freely under arbitrary diffeomorphisms of spacetime. Both gauge invariance and diffeomorphism invariance reflect a redundancy in the description of the system. An alternative theory of gravitation, gauge theory gravity, replaces the principle of general covariance with a true gauge principle with new gauge fields.
Historically, these ideas were first stated in the context of classical electromagnetism and later in general relativity. However, the modern importance of gauge symmetries appeared first in the relativistic quantum mechanics of electrons – quantum electrodynamics, elaborated on below. Today, gauge theories are useful in condensed matter, nuclear and high energy physics among other subfields.
== History ==
The concept and the name of gauge theory derives from the work of Hermann Weyl in 1918. Weyl, in an attempt to generalize the geometrical ideas of general relativity to include electromagnetism, conjectured that Eichinvarianz or invariance under the change of scale (or "gauge") might also be a local symmetry of general relativity. After the development of quantum mechanics, Weyl, Vladimir Fock and Fritz London replaced the simple scale factor with a complex quantity and turned the scale transformation into a change of phase, which is a U(1) gauge symmetry. This explained the electromagnetic field effect on the wave function of a charged quantum mechanical particle. Weyl's 1929 paper introduced the modern concept of gauge invariance subsequently popularized by Wolfgang Pauli in his 1941 review. In retrospect, James Clerk Maxwell's formulation, in 1864–65, of electrodynamics in "A Dynamical Theory of the Electromagnetic Field" suggested the possibility of invariance, when he stated that any vector field whose curl vanishes—and can therefore normally be written as a gradient of a function—could be added to the vector potential without affecting the magnetic field. Similarly unnoticed, David Hilbert had derived the Einstein field equations by postulating the invariance of the action under a general coordinate transformation. The importance of these symmetry invariances remained unnoticed until Weyl's work.
Inspired by Pauli's descriptions of the connection between charge conservation and invariance-driven field theory, Chen Ning Yang sought a field theory for atomic nuclei binding based on conservation of nuclear isospin. In 1954, Yang and Robert Mills generalized the gauge invariance of electromagnetism, constructing a theory based on the action of the (non-abelian) SU(2) symmetry group on the isospin doublet of protons and neutrons. This is similar to the action of the U(1) group on the spinor fields of quantum electrodynamics.
The Yang–Mills theory became the prototype theory to resolve some of the confusion in elementary particle physics.
This idea later found application in the quantum field theory of the weak force, and its unification with electromagnetism in the electroweak theory. Gauge theories became even more attractive when it was realized that non-abelian gauge theories reproduced a feature called asymptotic freedom. Asymptotic freedom was believed to be an important characteristic of strong interactions. This motivated searching for a strong force gauge theory. This theory, now known as quantum chromodynamics, is a gauge theory with the action of the SU(3) group on the color triplet of quarks. The Standard Model unifies the description of electromagnetism, weak interactions and strong interactions in the language of gauge theory.
In the 1970s, Michael Atiyah began studying the mathematics of solutions to the classical Yang–Mills equations. In 1983, Atiyah's student Simon Donaldson built on this work to show that the differentiable classification of smooth 4-manifolds is very different from their classification up to homeomorphism. Michael Freedman used Donaldson's work to exhibit exotic R4s, that is, exotic differentiable structures on Euclidean 4-dimensional space. This led to an increasing interest in gauge theory for its own sake, independent of its successes in fundamental physics. In 1994, Edward Witten and Nathan Seiberg invented gauge-theoretic techniques based on supersymmetry that enabled the calculation of certain topological invariants (the Seiberg–Witten invariants). These contributions to mathematics from gauge theory have led to a renewed interest in this area.
The importance of gauge theories in physics is exemplified in the success of the mathematical formalism in providing a unified framework to describe the quantum field theories of electromagnetism, the weak force and the strong force. This theory, known as the Standard Model, accurately describes experimental predictions regarding three of the four fundamental forces of nature, and is a gauge theory with the gauge group SU(3) × SU(2) × U(1). Modern theories like string theory, as well as general relativity, are, in one way or another, gauge theories.
See Jackson and Okun for early history of gauge and Pickering for more about the history of gauge and quantum field theories.
== Description ==
=== Global and local symmetries ===
==== Global symmetry ====
In physics, the mathematical description of any physical situation usually contains excess degrees of freedom; the same physical situation is equally well described by many equivalent mathematical configurations. For instance, in Newtonian dynamics, if two configurations are related by a Galilean transformation (an inertial change of reference frame) they represent the same physical situation. These transformations form a group of "symmetries" of the theory, and a physical situation corresponds not to an individual mathematical configuration but to a class of configurations related to one another by this symmetry group.
This idea can be generalized to include local as well as global symmetries, analogous to much more abstract "changes of coordinates" in a situation where there is no preferred "inertial" coordinate system that covers the entire physical system. A gauge theory is a mathematical model that has symmetries of this kind, together with a set of techniques for making physical predictions consistent with the symmetries of the model.
==== Example of global symmetry ====
When a quantity occurring in the mathematical configuration is not just a number but has some geometrical significance, such as a velocity or an axis of rotation, its representation as numbers arranged in a vector or matrix is also changed by a coordinate transformation. For instance, if one description of a pattern of fluid flow states that the fluid velocity in the neighborhood of (x = 1, y = 0) is 1 m/s in the positive x direction, then a description of the same situation in which the coordinate system has been rotated clockwise by 90 degrees states that the fluid velocity in the neighborhood of (x = 0, y= −1) is 1 m/s in the negative y direction. The coordinate transformation has affected both the coordinate system used to identify the location of the measurement and the basis in which its value is expressed. As long as this transformation is performed globally (affecting the coordinate basis in the same way at every point), the effect on values that represent the rate of change of some quantity along some path in space and time as it passes through point P is the same as the effect on values that are truly local to P.
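For the rotation described above, the components of any fixed vector transform as (x, y) → (y, −x); a minimal sketch of how the same flow is re-expressed:

```python
# Global coordinate rotation from the fluid-flow example: the rotation
# changes the numbers describing both the measurement point and the
# velocity vector, though the physical flow is unchanged.

def rotate_components_90(x: float, y: float) -> tuple:
    """Components of a fixed vector after the 90-degree axes rotation
    described in the text: (x, y) -> (y, -x)."""
    return (y, -x)

point = (1.0, 0.0)     # measurement location in the old coordinates
velocity = (1.0, 0.0)  # 1 m/s in the positive x direction

print(rotate_components_90(*point))     # (0.0, -1.0): new location
print(rotate_components_90(*velocity))  # (0.0, -1.0): now "negative y"
```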
==== Local symmetry ====
===== Use of fiber bundles to describe local symmetries =====
In order to adequately describe physical situations in more complex theories, it is often necessary to introduce a "coordinate basis" for some of the objects of the theory that do not have this simple relationship to the coordinates used to label points in space and time. (In mathematical terms, the theory involves a fiber bundle in which the fiber at each point of the base space consists of possible coordinate bases for use when describing the values of objects at that point.) In order to spell out a mathematical configuration, one must choose a particular coordinate basis at each point (a local section of the fiber bundle) and express the values of the objects of the theory (usually "fields" in the physicist's sense) using this basis. Two such mathematical configurations are equivalent (describe the same physical situation) if they are related by a transformation of this abstract coordinate basis (a change of local section, or gauge transformation).
In most gauge theories, the set of possible transformations of the abstract gauge basis at an individual point in space and time is a finite-dimensional Lie group. The simplest such group is U(1), which appears in the modern formulation of quantum electrodynamics (QED) via its use of complex numbers. QED is generally regarded as the first, and simplest, physical gauge theory. The set of possible gauge transformations of the entire configuration of a given gauge theory also forms a group, the gauge group of the theory. An element of the gauge group can be parameterized by a smoothly varying function from the points of spacetime to the (finite-dimensional) Lie group, such that the value of the function and its derivatives at each point represents the action of the gauge transformation on the fiber over that point.
A gauge transformation with constant parameter at every point in space and time is analogous to a rigid rotation of the geometric coordinate system; it represents a global symmetry of the gauge representation. As in the case of a rigid rotation, this gauge transformation affects expressions that represent the rate of change along a path of some gauge-dependent quantity in the same way as those that represent a truly local quantity. A gauge transformation whose parameter is not a constant function is referred to as a local symmetry; its effect on expressions that involve a derivative is qualitatively different from that on expressions that do not. (This is analogous to a non-inertial change of reference frame, which can produce a Coriolis effect.)
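The qualitative difference between global and local transformations can be seen numerically with a U(1) phase acting on a complex field sampled on a grid. In this NumPy sketch (field and phases chosen arbitrarily), a quantity built from a bare derivative is invariant only under the constant phase:

```python
# Global vs local U(1) phase transformations on a complex field psi(x):
# |psi|^2 is unchanged by any phase, but a "bare" derivative is only
# invariant under a constant (global) phase. Illustrative sketch.

import numpy as np

x = np.linspace(0.0, 1.0, 200)
psi = np.exp(2j * np.pi * x)            # some complex field on the grid

def grad_mag(f):
    return np.abs(np.gradient(f, x))    # |df/dx|, a bare derivative

global_phase = np.exp(1j * 0.7)         # constant phase: global U(1)
local_phase = np.exp(1j * 5.0 * x**2)   # x-dependent phase: local U(1)

# |psi|^2 is invariant under both kinds of transformation:
print(np.allclose(np.abs(psi)**2, np.abs(psi * local_phase)**2))  # True

# The bare derivative is invariant only under the global one:
print(np.allclose(grad_mag(psi), grad_mag(psi * global_phase)))   # True
print(np.allclose(grad_mag(psi), grad_mag(psi * local_phase)))    # False
```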
=== Gauge fields ===
The "gauge covariant" version of a gauge theory accounts for this effect by introducing a gauge field (in mathematical language, an Ehresmann connection) and formulating all rates of change in terms of the covariant derivative with respect to this connection. The gauge field becomes an essential part of the description of a mathematical configuration. A configuration in which the gauge field can be eliminated by a gauge transformation has the property that its field strength (in mathematical language, its curvature) is zero everywhere; a gauge theory is not limited to these configurations. In other words, the distinguishing characteristic of a gauge theory is that the gauge field does not merely compensate for a poor choice of coordinate system; there is generally no gauge transformation that makes the gauge field vanish.
When analyzing the dynamics of a gauge theory, the gauge field must be treated as a dynamical variable, similar to other objects in the description of a physical situation. In addition to its interaction with other objects via the covariant derivative, the gauge field typically contributes energy in the form of a "self-energy" term. One can obtain the equations for the gauge theory by:
starting from a naïve ansatz without the gauge field (in which the derivatives appear in a "bare" form);
listing those global symmetries of the theory that can be characterized by a continuous parameter (generally an abstract equivalent of a rotation angle);
computing the correction terms that result from allowing the symmetry parameter to vary from place to place; and
reinterpreting these correction terms as couplings to one or more gauge fields, and giving these fields appropriate self-energy terms and dynamical behavior.
This is the sense in which a gauge theory "extends" a global symmetry to a local symmetry, and closely resembles the historical development of the gauge theory of gravity known as general relativity.
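The recipe can be checked numerically in the U(1) case: with the covariant derivative Dψ = dψ/dx − iqAψ and the compensating shift A → A + (1/q)dα/dx, the magnitude |Dψ| is gauge invariant. A finite-difference NumPy sketch with arbitrarily chosen fields:

```python
# Covariant derivative under a local U(1) transformation: when psi picks
# up a local phase e^{i alpha(x)} and A shifts by (1/q) d(alpha)/dx,
# D psi = d(psi)/dx - i q A psi transforms covariantly, so |D psi| is
# unchanged (up to finite-difference error). Illustrative fields.

import numpy as np

x = np.linspace(0.0, 1.0, 2001)
q = 1.0
psi = np.exp(2j * np.pi * x)   # matter field
A = np.sin(2.0 * np.pi * x)    # some gauge field configuration

def d_dx(f):
    return np.gradient(f, x, edge_order=2)

def covariant_derivative(psi, A):
    return d_dx(psi) - 1j * q * A * psi

alpha = 3.0 * x**2                  # local gauge parameter alpha(x)
psi_new = np.exp(1j * alpha) * psi  # psi -> e^{i alpha} psi
A_new = A + d_dx(alpha) / q         # A  -> A + (1/q) d(alpha)/dx

print(np.allclose(np.abs(covariant_derivative(psi, A)),
                  np.abs(covariant_derivative(psi_new, A_new)),
                  atol=1e-3))  # True: |D psi| is gauge invariant
```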
=== Physical experiments ===
Gauge theories used to model the results of physical experiments engage in:
limiting the universe of possible configurations to those consistent with the information used to set up the experiment, and then
computing the probability distribution of the possible outcomes that the experiment is designed to measure.
We cannot express the mathematical descriptions of the "setup information" and the "possible measurement outcomes", or the "boundary conditions" of the experiment, without reference to a particular coordinate system, including a choice of gauge. Even the assumption that an experiment is adequately isolated from "external" influences is itself a gauge-dependent statement. Mishandled gauge dependence in boundary-condition calculations is a frequent source of anomalies, and approaches to anomaly avoidance classify gauge theories.
=== Continuum theories ===
The two gauge theories mentioned above, continuum electrodynamics and general relativity, are continuum field theories. The techniques of calculation in a continuum theory implicitly assume that:
given a completely fixed choice of gauge, the boundary conditions of an individual configuration are completely described
given a completely fixed gauge and a complete set of boundary conditions, the least action determines a unique mathematical configuration and therefore a unique physical situation consistent with these bounds
fixing the gauge introduces no anomalies in the calculation, due either to gauge dependence in describing partial information about boundary conditions or to incompleteness of the theory.
Determination of the likelihood of possible measurement outcomes proceeds by:
establishing a probability distribution over all physical situations determined by boundary conditions consistent with the setup information
establishing a probability distribution of measurement outcomes for each possible physical situation
convolving these two probability distributions to get a distribution of possible measurement outcomes consistent with the setup information
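The three steps above can be sketched as a toy discrete computation. This is purely illustrative: the two "situations" and the probabilities are invented, and the "convolution" here is just the law of total probability applied to finite distributions.

```python
# Step 1: probability of each physical situation given the setup information.
p_situation = {"s1": 0.7, "s2": 0.3}

# Step 2: probability of each measurement outcome for each situation.
p_outcome_given = {
    "s1": {"click": 0.9, "no_click": 0.1},
    "s2": {"click": 0.2, "no_click": 0.8},
}

# Step 3: combine the two distributions to get P(outcome | setup).
p_outcome = {}
for s, ps in p_situation.items():
    for o, po in p_outcome_given[s].items():
        p_outcome[o] = p_outcome.get(o, 0.0) + ps * po

print(p_outcome)  # the distribution of outcomes consistent with the setup
```

The resulting distribution sums to one by construction, mirroring how the continuum theory's predictions inherit normalization from the two ingredient distributions.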
These assumptions have enough validity across a wide range of energy scales and experimental conditions to allow these theories to make accurate predictions about almost all of the phenomena encountered in daily life: light, heat, and electricity, eclipses, spaceflight, etc. They fail only at the smallest and largest scales due to omissions in the theories themselves, and when the mathematical techniques themselves break down, most notably in the case of turbulence and other chaotic phenomena.
=== Quantum field theories ===
Other than these classical continuum field theories, the most widely known gauge theories are quantum field theories, including quantum electrodynamics and the Standard Model of elementary particle physics. The starting point of a quantum field theory is much like that of its continuum analog: a gauge-covariant action integral that characterizes "allowable" physical situations according to the principle of least action. However, continuum and quantum theories differ significantly in how they handle the excess degrees of freedom represented by gauge transformations. Continuum theories, and most pedagogical treatments of the simplest quantum field theories, use a gauge fixing prescription to reduce the orbit of mathematical configurations that represent a given physical situation to a smaller orbit related by a smaller gauge group (the global symmetry group, or perhaps even the trivial group).
More sophisticated quantum field theories, in particular those that involve a non-abelian gauge group, break the gauge symmetry within the techniques of perturbation theory by introducing additional fields (the Faddeev–Popov ghosts) and counterterms motivated by anomaly cancellation, in an approach known as BRST quantization. While these concerns are in one sense highly technical, they are also closely related to the nature of measurement, the limits on knowledge of a physical situation, and the interactions between incompletely specified experimental conditions and incompletely understood physical theory. The mathematical techniques that have been developed in order to make gauge theories tractable have found many other applications, from solid-state physics and crystallography to low-dimensional topology.
== Classical gauge theory ==
=== Classical electromagnetism ===
In electrostatics, one can either discuss the electric field, E, or its corresponding electric potential, V. Knowledge of one makes it possible to find the other, except that potentials differing by a constant,
{\displaystyle V\mapsto V+C}
, correspond to the same electric field. This is because the electric field relates to changes in the potential from one point in space to another, and the constant C would cancel out when subtracting to find the change in potential. In terms of vector calculus, the electric field is the gradient of the potential,
{\displaystyle \mathbf {E} =-\nabla V}
. Generalizing from static electricity to electromagnetism, we have a second potential, the vector potential A, with
{\displaystyle {\begin{aligned}\mathbf {E} &=-\nabla V-{\frac {\partial \mathbf {A} }{\partial t}}\\\mathbf {B} &=\nabla \times \mathbf {A} \end{aligned}}}
The general gauge transformations now become not just
{\displaystyle V\mapsto V+C}
but
{\displaystyle {\begin{aligned}\mathbf {A} &\mapsto \mathbf {A} +\nabla f\\V&\mapsto V-{\frac {\partial f}{\partial t}}\end{aligned}}}
where f is any twice continuously differentiable function that depends on position and time. The electromagnetic fields remain the same under the gauge transformation.
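This invariance can be checked numerically in a simple one-dimensional setting. The potentials V, A and the gauge function f below are arbitrary illustrative choices, and derivatives are approximated by central differences; the check shows that the electric field computed before and after the gauge transformation agrees.

```python
import math

# Arbitrary smooth potentials and gauge function (illustrative choices).
V = lambda x, t: x * x * math.sin(t)
A = lambda x, t: math.cos(x) * t          # x-component of the vector potential
f = lambda x, t: math.sin(x * t)          # gauge function

h = 1e-5
def d_dx(F, x, t): return (F(x + h, t) - F(x - h, t)) / (2 * h)
def d_dt(F, x, t): return (F(x, t + h) - F(x, t - h)) / (2 * h)

# E_x = -dV/dx - dA_x/dt, evaluated for a given pair of potentials.
def E(Vf, Af, x, t): return -d_dx(Vf, x, t) - d_dt(Af, x, t)

# The gauge transformation: V -> V - df/dt, A -> A + df/dx.
V2 = lambda x, t: V(x, t) - d_dt(f, x, t)
A2 = lambda x, t: A(x, t) + d_dx(f, x, t)

x0, t0 = 0.7, 1.3
assert abs(E(V, A, x0, t0) - E(V2, A2, x0, t0)) < 1e-4
```

The cancellation works because the mixed partial derivatives of f commute, which is exactly why f must be twice continuously differentiable.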
=== Example: scalar O(n) gauge theory ===
The remainder of this section requires some familiarity with classical or quantum field theory, and the use of Lagrangians.
Definitions in this section: gauge group, gauge field, interaction Lagrangian, gauge boson.
The following illustrates how local gauge invariance can be "motivated" heuristically starting from global symmetry properties, and how it leads to an interaction between originally non-interacting fields.
Consider a set of n non-interacting real scalar fields, with equal masses m. This system is described by an action that is the sum of the (usual) action for each scalar field {\displaystyle \varphi _{i}}:
{\displaystyle {\mathcal {S}}=\int \,\mathrm {d} ^{4}x\sum _{i=1}^{n}\left[{\frac {1}{2}}\partial _{\mu }\varphi _{i}\partial ^{\mu }\varphi _{i}-{\frac {1}{2}}m^{2}\varphi _{i}^{2}\right]}
The Lagrangian (density) can be compactly written as
{\displaystyle \ {\mathcal {L}}={\frac {1}{2}}(\partial _{\mu }\Phi )^{\mathsf {T}}\partial ^{\mu }\Phi -{\frac {1}{2}}m^{2}\Phi ^{\mathsf {T}}\Phi }
by introducing a vector of fields
{\displaystyle \ \Phi ^{\mathsf {T}}=(\varphi _{1},\varphi _{2},\ldots ,\varphi _{n})}
The term {\displaystyle \partial _{\mu }\Phi } is the partial derivative of {\displaystyle \Phi } along dimension {\displaystyle \mu }.
It is now transparent that the Lagrangian is invariant under the transformation
{\displaystyle \ \Phi \mapsto \Phi '=G\Phi }
whenever G is a constant matrix belonging to the n-by-n orthogonal group O(n). This is seen to preserve the Lagrangian, since the derivative of {\displaystyle \Phi '} transforms identically to {\displaystyle \Phi } and both quantities appear inside dot products in the Lagrangian (orthogonal transformations preserve the dot product).
{\displaystyle \ (\partial _{\mu }\Phi )\mapsto (\partial _{\mu }\Phi )'=G\partial _{\mu }\Phi }
This characterizes the global symmetry of this particular Lagrangian, and the symmetry group is often called the gauge group; the mathematical term is structure group, especially in the theory of G-structures. Incidentally, Noether's theorem implies that invariance under this group of transformations leads to the conservation of the currents
{\displaystyle \ J_{\mu }^{a}=i\partial _{\mu }\Phi ^{\mathsf {T}}T^{a}\Phi }
where the Ta matrices are generators of the SO(n) group. There is one conserved current for every generator.
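The invariance of the Lagrangian under a constant orthogonal matrix can be verified directly for n = 2, where G is a plain rotation. The field values and rotation angle below are arbitrary sample numbers; the point is that both terms of the Lagrangian are dot products, and rotations preserve dot products.

```python
import math

def rotate(theta, v):
    """Apply the 2x2 rotation matrix G(theta), an element of O(2), to v."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# Sample field values; dPhi stands in for the partial derivative of Phi.
Phi = (0.3, -1.2)
dPhi = (0.8, 0.5)

theta = 1.1  # a constant rotation angle (the "global" G)
Phi2, dPhi2 = rotate(theta, Phi), rotate(theta, dPhi)

# Both the kinetic term (dPhi^T dPhi) and the mass term (Phi^T Phi)
# are dot products, hence unchanged by the orthogonal transformation.
assert abs(dot(dPhi2, dPhi2) - dot(dPhi, dPhi)) < 1e-12
assert abs(dot(Phi2, Phi2) - dot(Phi, Phi)) < 1e-12
```

Because theta is a constant here, the derivative of the rotated field is just the rotation of the derivative; the next paragraph is precisely about what breaks when theta becomes a function of x.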
Now, demanding that this Lagrangian should have local O(n)-invariance requires that the G matrices (which were earlier constant) should be allowed to become functions of the spacetime coordinates x.
In this case, the G matrices do not "pass through" the derivatives, when G = G(x),
{\displaystyle \ \partial _{\mu }(G\Phi )\neq G(\partial _{\mu }\Phi )}
The failure of the derivative to commute with "G" introduces an additional term (in keeping with the product rule), which spoils the invariance of the Lagrangian. In order to rectify this we define a new derivative operator such that the derivative of {\displaystyle \Phi '} again transforms identically with {\displaystyle \Phi }:
{\displaystyle \ (D_{\mu }\Phi )'=GD_{\mu }\Phi }
This new "derivative" is called a (gauge) covariant derivative and takes the form
{\displaystyle \ D_{\mu }=\partial _{\mu }-igA_{\mu }}
where g is called the coupling constant, a quantity defining the strength of an interaction.
After a simple calculation we can see that the gauge field A(x) must transform as follows
{\displaystyle \ A'_{\mu }=GA_{\mu }G^{-1}-{\frac {i}{g}}(\partial _{\mu }G)G^{-1}}
The gauge field is an element of the Lie algebra, and can therefore be expanded as
{\displaystyle \ A_{\mu }=\sum _{a}A_{\mu }^{a}T^{a}}
There are therefore as many gauge fields as there are generators of the Lie algebra.
Finally, we now have a locally gauge invariant Lagrangian
{\displaystyle \ {\mathcal {L}}_{\mathrm {loc} }={\frac {1}{2}}(D_{\mu }\Phi )^{\mathsf {T}}D^{\mu }\Phi -{\frac {1}{2}}m^{2}\Phi ^{\mathsf {T}}\Phi }
Pauli uses the term gauge transformation of the first type to mean the transformation of {\displaystyle \Phi }, while the compensating transformation in {\displaystyle A} is called a gauge transformation of the second type.
The difference between this Lagrangian and the original globally gauge-invariant Lagrangian is seen to be the interaction Lagrangian
{\displaystyle \ {\mathcal {L}}_{\mathrm {int} }=i{\frac {g}{2}}\Phi ^{\mathsf {T}}A_{\mu }^{\mathsf {T}}\partial ^{\mu }\Phi +i{\frac {g}{2}}(\partial _{\mu }\Phi )^{\mathsf {T}}A^{\mu }\Phi -{\frac {g^{2}}{2}}(A_{\mu }\Phi )^{\mathsf {T}}A^{\mu }\Phi }
This term introduces interactions between the n scalar fields just as a consequence of the demand for local gauge invariance. However, to make this interaction physical and not completely arbitrary, the mediator A(x) needs to propagate in space. That is dealt with in the next section by adding yet another term,
{\displaystyle {\mathcal {L}}_{\mathrm {gf} }}, to the Lagrangian. In the quantized version of the obtained classical field theory, the quanta of the gauge field A(x) are called gauge bosons. The interpretation of the interaction Lagrangian in quantum field theory is of scalar bosons interacting by the exchange of these gauge bosons.
=== Yang–Mills Lagrangian for the gauge field ===
The picture of a classical gauge theory developed in the previous section is almost complete, except for the fact that to define the covariant derivatives D, one needs to know the value of the gauge field {\displaystyle A(x)} at all spacetime points. Instead of manually specifying the values of this field, it can be given as the solution to a field equation. Further requiring that the Lagrangian that generates this field equation is locally gauge invariant as well, one possible form for the gauge field Lagrangian is
{\displaystyle {\mathcal {L}}_{\text{gf}}=-{\frac {1}{2}}\operatorname {tr} \left(F^{\mu \nu }F_{\mu \nu }\right)=-{\frac {1}{4}}F^{a\mu \nu }F_{\mu \nu }^{a}}
where the {\displaystyle F_{\mu \nu }^{a}} are obtained from the potentials {\displaystyle A_{\mu }^{a}}, the components of {\displaystyle A(x)}, by
{\displaystyle F_{\mu \nu }^{a}=\partial _{\mu }A_{\nu }^{a}-\partial _{\nu }A_{\mu }^{a}+g\sum _{b,c}f^{abc}A_{\mu }^{b}A_{\nu }^{c}}
and the {\displaystyle f^{abc}} are the structure constants of the Lie algebra of the generators of the gauge group. This formulation of the Lagrangian is called a Yang–Mills action. Other gauge invariant actions also exist (e.g., nonlinear electrodynamics, Born–Infeld action, Chern–Simons model, theta term, etc.).
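As a concrete instance: for the gauge group SU(2) the generators can be taken as half the Pauli matrices, T^a = σ^a/2, and the structure constants in the convention [T^a, T^b] = i f^{abc} T^c come out as the Levi-Civita symbol, f^{abc} = ε^{abc}. A small numeric check using plain complex arithmetic:

```python
# Generators of SU(2): T^a = sigma^a / 2 (half the Pauli matrices).
# Verify the commutation relations [T^a, T^b] = i f^{abc} T^c with
# f^{abc} the Levi-Civita symbol.
sigma = [
    [[0, 1], [1, 0]],        # sigma_1
    [[0, -1j], [1j, 0]],     # sigma_2
    [[1, 0], [0, -1]],       # sigma_3
]
T = [[[x / 2 for x in row] for row in s] for s in sigma]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(a, b):
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

def eps(a, b, c):
    """Levi-Civita symbol for indices 0..2."""
    return ((a - b) * (b - c) * (c - a)) / 2

for a in range(3):
    for b in range(3):
        lhs = commutator(T[a], T[b])
        rhs = [[1j * sum(eps(a, b, c) * T[c][i][j] for c in range(3))
                for j in range(2)] for i in range(2)]
        assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
                   for i in range(2) for j in range(2))
```

Since SU(2) has three generators, this gauge group comes with three gauge fields A^a_μ, one per generator, exactly as the expansion above says.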
In this Lagrangian term there is no field whose transformation compensates that of {\displaystyle A}. Invariance of this term under gauge transformations is a particular case of a priori classical (geometrical) symmetry. This symmetry must be restricted in order to perform quantization, the procedure being called gauge fixing, but even after restriction, gauge transformations may be possible.
The complete Lagrangian for the gauge theory is now
{\displaystyle {\mathcal {L}}={\mathcal {L}}_{\text{loc}}+{\mathcal {L}}_{\text{gf}}={\mathcal {L}}_{\text{global}}+{\mathcal {L}}_{\text{int}}+{\mathcal {L}}_{\text{gf}}}
=== Example: electrodynamics ===
As a simple application of the formalism developed in the previous sections, consider the case of electrodynamics, with only the electron field. The bare-bones action that generates the electron field's Dirac equation is
{\displaystyle {\mathcal {S}}=\int {\bar {\psi }}\left(i\hbar c\,\gamma ^{\mu }\partial _{\mu }-mc^{2}\right)\psi \,\mathrm {d} ^{4}x}
The global symmetry for this system is
{\displaystyle \psi \mapsto e^{i\theta }\psi }
The gauge group here is U(1), just rotations of the phase angle of the field, with the particular rotation determined by the constant θ.
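For a single complex amplitude the invariance is easy to see numerically: multiplying ψ by the phase e^{iθ} leaves |ψ|² unchanged. The one-component complex number below is an illustrative stand-in for the Dirac field, not the four-component spinor itself.

```python
import cmath

psi = 0.6 - 0.8j   # a one-component stand-in for the field
theta = 2.1        # a constant phase angle

# The global U(1) transformation: psi -> e^{i theta} psi.
psi2 = cmath.exp(1j * theta) * psi

# The bilinear |psi|^2 is invariant under the phase rotation,
# because e^{-i theta} e^{i theta} = 1.
assert abs(abs(psi2) ** 2 - abs(psi) ** 2) < 1e-12
```

The "localising" step discussed next replaces the constant theta with a function theta(x), and the derivative term then picks up an extra ∂θ contribution that the covariant derivative must absorb.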
"Localising" this symmetry implies the replacement of θ by θ(x). An appropriate covariant derivative is then
{\displaystyle D_{\mu }=\partial _{\mu }-i{\frac {e}{\hbar }}A_{\mu }}
Identifying the "charge" e (not to be confused with the mathematical constant e in the symmetry description) with the usual electric charge (this is the origin of the usage of the term in gauge theories), and the gauge field A(x) with the four-vector potential of the electromagnetic field results in an interaction Lagrangian
{\displaystyle {\mathcal {L}}_{\text{int}}={\frac {e}{\hbar }}{\bar {\psi }}(x)\gamma ^{\mu }\psi (x)A_{\mu }(x)=J^{\mu }(x)A_{\mu }(x)}
where
{\displaystyle J^{\mu }(x)={\frac {e}{\hbar }}{\bar {\psi }}(x)\gamma ^{\mu }\psi (x)}
is the electric current four-vector in the Dirac field. The gauge principle is therefore seen to naturally introduce the so-called minimal coupling of the electromagnetic field to the electron field.
Adding a Lagrangian for the gauge field {\displaystyle A_{\mu }(x)} in terms of the field strength tensor, exactly as in electrodynamics, one obtains the Lagrangian used as the starting point in quantum electrodynamics:
{\displaystyle {\mathcal {L}}_{\text{QED}}={\bar {\psi }}\left(i\hbar c\,\gamma ^{\mu }D_{\mu }-mc^{2}\right)\psi -{\frac {1}{4\mu _{0}}}F_{\mu \nu }F^{\mu \nu }}
== Mathematical formalism ==
Gauge theories are usually discussed in the language of differential geometry. Mathematically, a gauge is just a choice of a (local) section of some principal bundle. A gauge transformation is just a transformation between two such sections.
Although gauge theory is dominated by the study of connections (primarily because it's mainly studied by high-energy physicists), the idea of a connection is not central to gauge theory in general. In fact, a result in general gauge theory shows that affine representations (i.e., affine modules) of the gauge transformations can be classified as sections of a jet bundle satisfying certain properties. There are representations that transform covariantly pointwise (called by physicists gauge transformations of the first kind), representations that transform as a connection form (called by physicists gauge transformations of the second kind, an affine representation)—and other more general representations, such as the B field in BF theory. There are more general nonlinear representations (realizations), but these are extremely complicated. Still, nonlinear sigma models transform nonlinearly, so there are applications.
If there is a principal bundle P whose base space is space or spacetime and structure group is a Lie group, then the sections of P form a principal homogeneous space of the group of gauge transformations.
Connections (gauge connection) define this principal bundle, yielding a covariant derivative ∇ in each associated vector bundle. If a local frame is chosen (a local basis of sections), then this covariant derivative is represented by the connection form A, a Lie algebra-valued 1-form, which is called the gauge potential in physics. This is evidently not an intrinsic but a frame-dependent quantity. The curvature form F, a Lie algebra-valued 2-form that is an intrinsic quantity, is constructed from a connection form by
{\displaystyle \mathbf {F} =\mathrm {d} \mathbf {A} +\mathbf {A} \wedge \mathbf {A} }
where d stands for the exterior derivative and {\displaystyle \wedge } stands for the wedge product. ({\displaystyle \mathbf {A} } is an element of the vector space spanned by the generators {\displaystyle T^{a}}, and so the components of {\displaystyle \mathbf {A} } do not commute with one another. Hence the wedge product {\displaystyle \mathbf {A} \wedge \mathbf {A} } does not vanish.)
Infinitesimal gauge transformations form a Lie algebra, which is characterized by a smooth Lie-algebra-valued scalar, ε. Under such an infinitesimal gauge transformation,
{\displaystyle \delta _{\varepsilon }\mathbf {A} =[\varepsilon ,\mathbf {A} ]-\mathrm {d} \varepsilon }
where {\displaystyle [\cdot ,\cdot ]} is the Lie bracket.
One nice thing is that if {\displaystyle \delta _{\varepsilon }X=\varepsilon X}, then {\displaystyle \delta _{\varepsilon }DX=\varepsilon DX}
where D is the covariant derivative
{\displaystyle DX\ {\stackrel {\mathrm {def} }{=}}\ \mathrm {d} X+\mathbf {A} X}
Also,
{\displaystyle \delta _{\varepsilon }\mathbf {F} =[\varepsilon ,\mathbf {F} ]}, which means {\displaystyle \mathbf {F} } transforms covariantly.
Not all gauge transformations can be generated by infinitesimal gauge transformations in general. An example is when the base manifold is a compact manifold without boundary such that the homotopy class of mappings from that manifold to the Lie group is nontrivial. See instanton for an example.
The Yang–Mills action is now given by
{\displaystyle {\frac {1}{4g^{2}}}\int \operatorname {Tr} [{\star }F\wedge F]}
where {\displaystyle {\star }} is the Hodge star operator and the integral is defined as in differential geometry.
A quantity which is gauge-invariant (i.e., invariant under gauge transformations) is the Wilson loop, which is defined over any closed path, γ, as follows:
{\displaystyle \chi ^{(\rho )}\left({\mathcal {P}}\left\{e^{\int _{\gamma }A}\right\}\right)}
where χ is the character of a complex representation ρ and {\displaystyle {\mathcal {P}}} represents the path-ordering operator.
The formalism of gauge theory carries over to a general setting. For example, it is sufficient to ask that a vector bundle have a metric connection; when one does so, one finds that the metric connection satisfies the Yang–Mills equations of motion.
== Quantization of gauge theories ==
Gauge theories may be quantized by specialization of methods which are applicable to any quantum field theory. However, because of the subtleties imposed by the gauge constraints (see section on Mathematical formalism, above) there are many technical problems to be solved which do not arise in other field theories. At the same time, the richer structure of gauge theories allows simplification of some computations: for example Ward identities connect different renormalization constants.
=== Methods and aims ===
The first gauge theory quantized was quantum electrodynamics (QED). The first methods developed for this involved gauge fixing and then applying canonical quantization. The Gupta–Bleuler method was also developed to handle this problem. Non-abelian gauge theories are now handled by a variety of means. Methods for quantization are covered in the article on quantization.
The main point to quantization is to be able to compute quantum amplitudes for various processes allowed by the theory. Technically, they reduce to the computations of certain correlation functions in the vacuum state. This involves a renormalization of the theory.
When the running coupling of the theory is small enough, then all required quantities may be computed in perturbation theory. Quantization schemes intended to simplify such computations (such as canonical quantization) may be called perturbative quantization schemes. At present some of these methods lead to the most precise experimental tests of gauge theories.
However, in most gauge theories, there are many interesting questions which are non-perturbative. Quantization schemes suited to these problems (such as lattice gauge theory) may be called non-perturbative quantization schemes. Precise computations in such schemes often require supercomputing, and are therefore less well-developed currently than other schemes.
=== Anomalies ===
Some of the symmetries of the classical theory are then seen not to hold in the quantum theory; a phenomenon called an anomaly. Among the most well known are:
The scale anomaly, which gives rise to a running coupling constant. In QED this gives rise to the phenomenon of the Landau pole. In quantum chromodynamics (QCD) this leads to asymptotic freedom.
The chiral anomaly in either chiral or vector field theories with fermions. This has close connection with topology through the notion of instantons. In QCD this anomaly causes the decay of a pion to two photons.
The gauge anomaly, which must cancel in any consistent physical theory. In the electroweak theory this cancellation requires an equal number of quarks and leptons.
== Pure gauge ==
A pure gauge is the set of field configurations obtained by a gauge transformation on the null-field configuration, i.e., a gauge transform of zero. So it is a particular "gauge orbit" in the space of field configurations.
Thus, in the abelian case, where {\displaystyle A_{\mu }(x)\rightarrow A'_{\mu }(x)=A_{\mu }(x)+\partial _{\mu }f(x)}, the pure gauge is just the set of field configurations {\displaystyle A'_{\mu }(x)=\partial _{\mu }f(x)} for all f(x).
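A pure-gauge configuration carries no field strength: in the abelian case F_{μν} = ∂_μA_ν − ∂_νA_μ vanishes when A_μ = ∂_μf, because mixed partial derivatives commute. A finite-difference sketch in two dimensions (t, x), with an arbitrarily chosen gauge function f:

```python
import math

f = lambda t, x: math.sin(t * x) + x ** 3   # arbitrary gauge function

h = 1e-5
def d(F, mu, t, x):
    """Central-difference partial derivative along coordinate mu (0=t, 1=x)."""
    if mu == 0:
        return (F(t + h, x) - F(t - h, x)) / (2 * h)
    return (F(t, x + h) - F(t, x - h)) / (2 * h)

# Pure-gauge potential: A_mu = d_mu f.
A = lambda mu, t, x: d(f, mu, t, x)

# Field strength F_{01} = d_0 A_1 - d_1 A_0 should vanish for pure gauge.
t0, x0 = 0.4, 1.2
F01 = (d(lambda t, x: A(1, t, x), 0, t0, x0)
       - d(lambda t, x: A(0, t, x), 1, t0, x0))
assert abs(F01) < 1e-4
```

Physically, such a configuration is gauge-equivalent to zero field, so it describes the same physical situation as the vacuum.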
== See also ==
== References ==
== Bibliography ==
General readers
Schumm, Bruce (2004) Deep Down Things. Johns Hopkins University Press. Esp. chpt. 8. A serious attempt by a physicist to explain gauge theory and the Standard Model with little formal mathematics.
Carroll, Sean (2024). The Biggest Ideas in the Universe : Quanta and Fields. Dutton. p. 193-234 (chap 9 : Gauge Theory, and chap 10 : Phases). ISBN 978-0-5931-8660-2.
Texts
Bailin, David; Love, Alexander (2019). Introduction to Gauge Field Theory. Taylor & Francis. ISBN 9780203750100.
Cheng, T.-P.; Li, L.-F. (1983). Gauge Theory of Elementary Particle Physics. Oxford University Press. ISBN 0-19-851961-3.
Frampton, P. (2008). Gauge Field Theories (3rd ed.). Wiley-VCH.
Kane, G.L. (1987). Modern Elementary Particle Physics. Perseus Books. ISBN 0-201-11749-5.
Quigg, Chris (1983). Gauge Theories of the Strong, Weak and Electromagnetic Interactions. Addison-Wesley. ISBN 0-8053-6021-2.
Articles
Becchi, C. (1997). "Introduction to Gauge Theories". arXiv:hep-ph/9705211.
Gross, D. (1992). "Gauge theory – Past, Present and Future". Retrieved 2009-04-23.
Jackson, J.D. (2002). "From Lorenz to Coulomb and other explicit gauge transformations". Am. J. Phys. 70 (9): 917–928. arXiv:physics/0204034. Bibcode:2002AmJPh..70..917J. doi:10.1119/1.1491265. S2CID 119652556.
Svetlichny, George (1999). "Preparation for Gauge Theory". arXiv:math-ph/9902027.
== External links ==
"Gauge transformation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Yang–Mills equations on DispersiveWiki
Gauge theories on Scholarpedia
Newton's law of universal gravitation describes gravity as a force by stating that every particle attracts every other particle in the universe with a force that is proportional to the product of their masses and inversely proportional to the square of the distance between their centers of mass. Separated objects attract and are attracted as if all their mass were concentrated at their centers. The publication of the law has become known as the "first great unification", as it marked the unification of the previously described phenomena of gravity on Earth with known astronomical behaviors.
This is a general physical law derived from empirical observations by what Isaac Newton called inductive reasoning. It is a part of classical mechanics and was formulated in Newton's work Philosophiæ Naturalis Principia Mathematica (Latin for 'Mathematical Principles of Natural Philosophy' (the Principia)), first published on 5 July 1687.
The equation for universal gravitation thus takes the form:
{\displaystyle F=G{\frac {m_{1}m_{2}}{r^{2}}},}
where F is the gravitational force acting between two objects, m1 and m2 are the masses of the objects, r is the distance between the centers of their masses, and G is the gravitational constant.
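As a worked example, the formula gives the mutual gravitational attraction between the Earth and the Moon. The masses and the mean separation below are approximate textbook values, so the result is a rough estimate, not a precise ephemeris calculation.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m_earth = 5.972e24   # mass of the Earth, kg
m_moon = 7.342e22    # mass of the Moon, kg
r = 3.844e8          # mean Earth-Moon distance, m

F = G * m_earth * m_moon / r ** 2
print(f"{F:.3e} N")  # roughly 2e20 N
```

By Newton's third law this single magnitude is the force on each body; only the directions differ.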
The first test of Newton's law of gravitation between masses in the laboratory was the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798. It took place 111 years after the publication of Newton's Principia and approximately 71 years after his death.
Newton's law of gravitation resembles Coulomb's law of electrical forces, which is used to calculate the magnitude of the electrical force arising between two charged bodies. Both are inverse-square laws, where force is inversely proportional to the square of the distance between the bodies. Coulomb's law has charge in place of mass and a different constant.
Newton's law was later superseded by Albert Einstein's theory of general relativity, but the universality of the gravitational constant is intact and the law still continues to be used as an excellent approximation of the effects of gravity in most applications. Relativity is required only when there is a need for extreme accuracy, or when dealing with very strong gravitational fields, such as those found near extremely massive and dense objects, or at small distances (such as Mercury's orbit around the Sun).
== History ==
Before Newton's law of gravity, there were many theories explaining gravity. Philosophers had made observations about things falling down, and developed theories about why they do, as early as Aristotle, who thought that rocks fall to the ground because seeking the ground was an essential part of their nature.
Around 1600, the scientific method began to take root. René Descartes started over with a more fundamental view, developing ideas of matter and action independent of theology. Galileo Galilei wrote about experimental measurements of falling and rolling objects. Johannes Kepler's laws of planetary motion summarized Tycho Brahe's astronomical observations.: 132
Around 1666 Isaac Newton developed the idea that Kepler's laws must also apply to the orbit of the Moon around the Earth and then to all objects on Earth. The analysis required assuming that the gravitational force acted as if all of the mass of the Earth were concentrated at its center, an unproven conjecture at that time. His calculation of the Moon's orbital period was within 16% of the known value. By 1680, new values for the diameter of the Earth improved his orbit time to within 1.6%, but more importantly Newton had found a proof of his earlier conjecture.: 201
In 1687 Newton published his Principia which combined his laws of motion with new mathematical analysis to explain Kepler's empirical results.: 134 His explanation was in the form of a law of universal gravitation: any two bodies are attracted by a force proportional to their mass and inversely proportional to their separation squared.: 28 Newton's original formula was:
{\displaystyle {\rm {Force\,of\,gravity}}\propto {\frac {\rm {mass\,of\,object\,1\,\times \,mass\,of\,object\,2}}{\rm {distance\,from\,centers^{2}}}}}
where the symbol {\displaystyle \propto } means "is proportional to". To make this into an equal-sided formula or equation, there needed to be a multiplying factor or constant that would give the correct force of gravity no matter the value of the masses or distance between them (the gravitational constant). Newton would need an accurate measure of this constant to prove his inverse-square law. When Newton presented Book 1 of the unpublished text in April 1686 to the Royal Society, Robert Hooke made a claim that Newton had obtained the inverse square law from him, ultimately a frivolous accusation.: 204
=== Newton's "causes hitherto unknown" ===
While Newton was able to formulate his law of gravity in his monumental work, he was deeply uncomfortable with the notion of "action at a distance" that his equations implied. In 1692, in his third letter to Bentley, he wrote: "That one body may act upon another at a distance through a vacuum without the mediation of anything else, by and through which their action and force may be conveyed from one another, is to me so great an absurdity that, I believe, no man who has in philosophic matters a competent faculty of thinking could ever fall into it.": 26
Newton's 1713 General Scholium in the second edition of Principia explains his model of gravity, translated in this case by Samuel Clarke:
I have explained the Phænomena of the Heavens and the Sea, by the Force of Gravity; but the Cause of Gravity I have not yet assigned. It is a Force arising from some Cause, which reaches to the very Centers of the Sun and Planets, without any diminution of its Force: And it acts, not proportionally to the Surfaces of the Particles it acts upon, as Mechanical Causes use to do; but proportionally to the Quantity of Solid Matter: And its Action reaches every way to immense Distances, decreasing always in a duplicate ratio of the Distances. But the Cause of these Properties of Gravity, I have not yet found deducible from Phænomena: And Hypotheses I make not.: 383
The last sentence is Newton's famous and highly debated Latin phrase Hypotheses non fingo. In other translations it comes out "I feign no hypotheses".
== Modern form ==
In modern language, the law states the following:
Assuming SI units, F is measured in newtons (N), m1 and m2 in kilograms (kg), r in meters (m), and the constant G is 6.67430(15)×10−11 m3⋅kg−1⋅s−2. The value of the constant G was first accurately determined from the results of the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798, although Cavendish did not himself calculate a numerical value for G. This experiment was also the first test of Newton's theory of gravitation between masses in the laboratory. It took place 111 years after the publication of Newton's Principia and 71 years after Newton's death, so none of Newton's calculations could use the value of G; instead he could only calculate a force relative to another force.
== Bodies with spatial extent ==
If the bodies in question have spatial extent (as opposed to being point masses), then the gravitational force between them is calculated by summing the contributions of the notional point masses that constitute the bodies. In the limit, as the component point masses become "infinitely small", this entails integrating the force (in vector form, see below) over the extents of the two bodies.
In this way, it can be shown that an object with a spherically symmetric distribution of mass exerts the same gravitational attraction on external bodies as if all the object's mass were concentrated at a point at its center. (This is not generally true for non-spherically symmetrical bodies.)
For points inside a spherically symmetric distribution of matter, Newton's shell theorem can be used to find the gravitational force. The theorem tells us how different parts of the mass distribution affect the gravitational force measured at a point located a distance r0 from the center of the mass distribution:
The portion of the mass that is located at radii r < r0 causes the same force at the radius r0 as if all of the mass enclosed within a sphere of radius r0 was concentrated at the center of the mass distribution (as noted above).
The portion of the mass that is located at radii r > r0 exerts no net gravitational force at the radius r0 from the center. That is, the individual gravitational forces exerted on a point at radius r0 by the elements of the mass outside the radius r0 cancel each other.
As a consequence, for example, within a shell of uniform thickness and density there is no net gravitational acceleration anywhere within the hollow sphere.
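The shell theorem can be checked numerically by slicing a thin shell into rings and summing their attractions directly; a minimal sketch with illustrative values (the shell's G·M is set to 1, and the ring count n is an arbitrary resolution choice):

```python
import math

def shell_force(r0, R=1.0, GM=1.0, n=200):
    """Net radial force per unit test mass at distance r0 from the centre
    of a thin uniform shell of radius R (GM = G * shell mass), summed
    directly over n rings of the shell."""
    f = 0.0
    for i in range(n):
        theta = (i + 0.5) * math.pi / n                   # polar angle of the ring
        dm = GM * math.sin(theta) * (math.pi / n) / 2.0   # ring's share of GM
        z = R * math.cos(theta)                           # ring's position on the axis
        s2 = (R * math.sin(theta)) ** 2 + (r0 - z) ** 2   # squared distance to the ring
        f += -dm * (r0 - z) / s2 ** 1.5                   # axial component only, by symmetry
    return f

inside = shell_force(0.5)    # test point inside the shell: contributions cancel
outside = shell_force(2.0)   # outside: same as a point mass, -GM/r0**2
```

The interior value comes out near zero and the exterior value near −GM/r0², matching both statements of the theorem.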
== Vector form ==
Newton's law of universal gravitation can be written as a vector equation to account for the direction of the gravitational force as well as its magnitude. In this formula, quantities in bold represent vectors.
{\displaystyle \mathbf {F} _{21}=-G{m_{1}m_{2} \over {|\mathbf {r} _{21}|}^{2}}{\hat {\mathbf {r} }}_{21}=-G{m_{1}m_{2} \over {|\mathbf {r} _{21}|}^{3}}\mathbf {r} _{21}}
where
F21 is the force on body 2 exerted by body 1,
G is the gravitational constant,
m1 and m2 are respectively the masses of bodies 1 and 2,
r21 = r2 − r1 is the displacement vector between bodies 1 and 2, and
{\displaystyle {\hat {\mathbf {r} }}_{21}\ {\stackrel {\mathrm {def} }{=}}\ {\frac {\mathbf {r_{2}-r_{1}} }{|\mathbf {r_{2}-r_{1}} |}}}
is the unit vector from body 1 to body 2.
It can be seen that the vector form of the equation is the same as the scalar form given earlier, except that F is now a vector quantity, and the right hand side is multiplied by the appropriate unit vector. Also, it can be seen that F12 = −F21.
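The vector form translates directly into code; a minimal sketch (the satellite mass and positions below are illustrative values, with Earth's mass rounded):

```python
import math

G = 6.67430e-11  # gravitational constant, m^3 kg^-1 s^-2

def grav_force(m1, r1, m2, r2):
    """Force on body 2 exerted by body 1: F21 = -G m1 m2 r21 / |r21|^3,
    with r21 = r2 - r1."""
    r21 = [b - a for a, b in zip(r1, r2)]
    d = math.sqrt(sum(c * c for c in r21))
    k = -G * m1 * m2 / d ** 3
    return [k * c for c in r21]

# Earth at the origin and a 1000 kg satellite 7.0e6 m away along x:
F21 = grav_force(5.972e24, (0.0, 0.0, 0.0), 1000.0, (7.0e6, 0.0, 0.0))
F12 = grav_force(1000.0, (7.0e6, 0.0, 0.0), 5.972e24, (0.0, 0.0, 0.0))
# F12 == -F21 (Newton's third law), and |F21| == G*m1*m2/d**2
```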
== Gravity field ==
The gravitational field is a vector field that describes the gravitational force per unit mass that would be exerted on an object at any given point in space. It is equal to the gravitational acceleration at that point.
It is a generalisation of the vector form, which becomes particularly useful if more than two objects are involved (such as a rocket between the Earth and the Moon). For two objects (e.g. object 2 is a rocket, object 1 the Earth), we simply write r instead of r12 and m instead of m2 and define the gravitational field g(r) as:
{\displaystyle \mathbf {g} (\mathbf {r} )=-G{m_{1} \over {{\vert \mathbf {r} \vert }^{2}}}\,\mathbf {\hat {r}} }
so that we can write:
{\displaystyle \mathbf {F} (\mathbf {r} )=m\mathbf {g} (\mathbf {r} ).}
This formulation is dependent on the objects causing the field. The field has units of acceleration; in SI, this is m/s2.
Gravitational fields are also conservative; that is, the work done by gravity from one position to another is path-independent. This has the consequence that there exists a gravitational potential field V(r) such that
{\displaystyle \mathbf {g} (\mathbf {r} )=-\nabla V(\mathbf {r} ).}
If m1 is a point mass or the mass of a sphere with homogeneous mass distribution, the force field g(r) outside the sphere is isotropic, i.e., depends only on the distance r from the center of the sphere. In that case
{\displaystyle V(r)=-G{\frac {m_{1}}{r}}.}
As per Gauss's law, the field of a symmetric body can be found from
{\displaystyle \iint _{\partial V}\mathbf {g} \cdot d\mathbf {S} =-4\pi GM_{\text{enc}}}
where {\displaystyle \partial V} is a closed surface and {\displaystyle M_{\text{enc}}} is the mass enclosed by the surface.
Hence, for a hollow sphere of radius {\displaystyle R} and total mass {\displaystyle M},
{\displaystyle |\mathbf {g(r)} |={\begin{cases}0,&{\text{if }}r<R\\\\{\dfrac {GM}{r^{2}}},&{\text{if }}r\geq R\end{cases}}}
For a uniform solid sphere of radius {\displaystyle R} and total mass {\displaystyle M},
{\displaystyle |\mathbf {g(r)} |={\begin{cases}{\dfrac {GMr}{R^{3}}},&{\text{if }}r<R\\\\{\dfrac {GM}{r^{2}}},&{\text{if }}r\geq R\end{cases}}}
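The two piecewise results translate directly into functions; a minimal sketch:

```python
def g_hollow(r, R, GM):
    """Field magnitude of a hollow sphere: zero inside, GM/r^2 outside."""
    return 0.0 if r < R else GM / r ** 2

def g_solid(r, R, GM):
    """Field magnitude of a uniform solid sphere: only the mass interior
    to r attracts, giving GM*r/R^3 inside and GM/r^2 outside."""
    return GM * r / R ** 3 if r < R else GM / r ** 2

# both profiles agree at the surface r = R, where each equals GM/R^2
```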
== Limitations ==
Newton's description of gravity is sufficiently accurate for many practical purposes and is therefore widely used. Deviations from it are small when the dimensionless quantities {\displaystyle \phi /c^{2}} and {\displaystyle (v/c)^{2}} are both much less than one, where {\displaystyle \phi } is the gravitational potential, {\displaystyle v} is the velocity of the objects being studied, and {\displaystyle c} is the speed of light in vacuum. For example, Newtonian gravity provides an accurate description of the Earth/Sun system, since
{\displaystyle {\frac {\phi }{c^{2}}}={\frac {GM_{\mathrm {sun} }}{r_{\mathrm {orbit} }c^{2}}}\sim 10^{-8},\quad \left({\frac {v_{\mathrm {Earth} }}{c}}\right)^{2}=\left({\frac {2\pi r_{\mathrm {orbit} }}{(1\ \mathrm {yr} )c}}\right)^{2}\sim 10^{-8},}
where {\displaystyle r_{\text{orbit}}} is the radius of the Earth's orbit around the Sun.
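Plugging in rough numbers for the Earth–Sun system reproduces the ~10⁻⁸ estimates; the constants below are standard approximate values:

```python
import math

G     = 6.67430e-11     # m^3 kg^-1 s^-2
c     = 2.99792458e8    # m/s
M_sun = 1.989e30        # kg (approximate)
r_orb = 1.496e11        # m, Earth's orbital radius (approximate)
year  = 3.156e7         # s (approximate)

phi_over_c2 = G * M_sun / (r_orb * c ** 2)                # ~1e-8
v_over_c_sq = (2 * math.pi * r_orb / (year * c)) ** 2     # ~1e-8
```

Both dimensionless parameters come out near 10⁻⁸, far below one, which is why the Newtonian treatment of the orbit is so accurate.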
In situations where either dimensionless parameter is large, then general relativity must be used to describe the system. General relativity reduces to Newtonian gravity in the limit of small potential and low velocities, so Newton's law of gravitation is often said to be the low-gravity limit of general relativity.
=== Observations conflicting with Newton's formula ===
Newton's theory does not fully explain the precession of the perihelion of the orbits of the planets, especially that of Mercury, which was detected long after the life of Newton. There is a 43 arcsecond per century discrepancy between the Newtonian calculation, which arises only from the gravitational attractions from the other planets, and the observed precession, made with advanced telescopes during the 19th century.
The predicted angular deflection of light rays by gravity (treated as particles travelling at the expected speed) that is calculated by using Newton's theory is only one-half of the deflection that is observed by astronomers. Calculations using general relativity are in much closer agreement with the astronomical observations.
In spiral galaxies, the orbiting of stars around their centers seems to strongly disobey both Newton's law of universal gravitation and general relativity. Astrophysicists, however, explain this marked phenomenon by assuming the presence of large amounts of dark matter.
=== Einstein's solution ===
The first two conflicts with observations above were explained by Einstein's theory of general relativity, in which gravitation is a manifestation of curved spacetime instead of being due to a force propagated between bodies. In Einstein's theory, energy and momentum distort spacetime in their vicinity, and other particles move in trajectories determined by the geometry of spacetime. This allowed a description of the motions of light and mass that was consistent with all available observations. In general relativity, the gravitational force is a fictitious force resulting from the curvature of spacetime, because the gravitational acceleration of a body in free fall is due to its world line being a geodesic of spacetime.
== Extensions ==
In recent years, quests for non-inverse square terms in the law of gravity have been carried out by neutron interferometry.
== Solutions ==
The two-body problem has been completely solved, as has the restricted three-body problem.
The n-body problem is an ancient, classical problem of predicting the individual motions of a group of celestial objects interacting with each other gravitationally. Solving this problem – from the time of the Greeks and on – has been motivated by the desire to understand the motions of the Sun, planets and the visible stars. The classical problem can be informally stated as: given the quasi-steady orbital properties (instantaneous position, velocity and time) of a group of celestial bodies, predict their interactive forces; and consequently, predict their true orbital motions for all future times.
In the 20th century, understanding the dynamics of globular cluster star systems became an important n-body problem too. The n-body problem in general relativity is considerably more difficult to solve.
== See also ==
Bentley's paradox – Cosmological paradox involving gravity
Gauss's law for gravity – Restatement of Newton's law of universal gravitation
Jordan and Einstein frames – different conventions for the metric tensor, in a theory of a dilaton coupled to gravity
Kepler orbit – Celestial orbit whose trajectory is a conic section in the orbital plane
Newton's cannonball – Thought experiment about gravity
Newton's laws of motion – Laws in physics about force and motion
Social gravity – Social theory
Static forces and virtual-particle exchange – Physical interaction in post-classical physics
== Notes ==
== References ==
== External links ==
Media related to Newton's law of universal gravitation at Wikimedia Commons
Feather and Hammer Drop on Moon on YouTube
Newton's Law of Universal Gravitation Javascript calculator
A classical field theory is a physical theory that predicts how one or more fields in physics interact with matter through field equations, without considering effects of quantization; theories that incorporate quantum mechanics are called quantum field theories. In most contexts, 'classical field theory' is specifically intended to describe electromagnetism and gravitation, two of the fundamental forces of nature.
A physical field can be thought of as the assignment of a physical quantity at each point of space and time. For example, in a weather forecast, the wind velocity during a day over a country is described by assigning a vector to each point in space. Each vector represents the direction of the movement of air at that point, so the set of all wind vectors in an area at a given point in time constitutes a vector field. As the day progresses, the directions in which the vectors point change as the directions of the wind change.
The first field theories, Newtonian gravitation and Maxwell's equations of electromagnetic fields, were developed in classical physics before the advent of relativity theory in 1905, and had to be revised to be consistent with that theory. Consequently, classical field theories are usually categorized as non-relativistic and relativistic. Modern field theories are usually expressed using the mathematics of tensor calculus. A more recent alternative mathematical formalism describes classical fields as sections of mathematical objects called fiber bundles.
== History ==
Michael Faraday coined the term "field" and introduced lines of force to explain electric and magnetic phenomena. Lord Kelvin in 1851 formalized the concept of a field in different areas of physics.
== Non-relativistic field theories ==
Some of the simplest physical fields are vector force fields. Historically, the first time that fields were taken seriously was with Faraday's lines of force when describing the electric field. The gravitational field was then similarly described.
=== Newtonian gravitation ===
The first field theory of gravity was Newton's theory of gravitation in which the mutual interaction between two masses obeys an inverse square law. This was very useful for predicting the motion of planets around the Sun.
Any massive body M has a gravitational field g which describes its influence on other massive bodies. The gravitational field of M at a point r in space is found by determining the force F that M exerts on a small test mass m located at r, and then dividing by m:
{\displaystyle \mathbf {g} (\mathbf {r} )={\frac {\mathbf {F} (\mathbf {r} )}{m}}.}
Stipulating that m is much smaller than M ensures that the presence of m has a negligible influence on the behavior of M.
According to Newton's law of universal gravitation, F(r) is given by
{\displaystyle \mathbf {F} (\mathbf {r} )=-{\frac {GMm}{r^{2}}}{\hat {\mathbf {r} }},}
where {\displaystyle {\hat {\mathbf {r} }}} is a unit vector pointing along the line from M to m, and G is Newton's gravitational constant. Therefore, the gravitational field of M is
{\displaystyle \mathbf {g} (\mathbf {r} )={\frac {\mathbf {F} (\mathbf {r} )}{m}}=-{\frac {GM}{r^{2}}}{\hat {\mathbf {r} }}.}
The experimental observation that inertial mass and gravitational mass are equal to unprecedented levels of accuracy leads to the identification of the gravitational field strength as identical to the acceleration experienced by a particle. This is the starting point of the equivalence principle, which leads to general relativity.
For a discrete collection of masses, Mi, located at points, ri, the gravitational field at a point r due to the masses is
{\displaystyle \mathbf {g} (\mathbf {r} )=-G\sum _{i}{\frac {M_{i}(\mathbf {r} -\mathbf {r_{i}} )}{|\mathbf {r} -\mathbf {r} _{i}|^{3}}}\,,}
If we have a continuous mass distribution ρ instead, the sum is replaced by an integral,
{\displaystyle \mathbf {g} (\mathbf {r} )=-G\iiint _{V}{\frac {\rho (\mathbf {x} )d^{3}\mathbf {x} (\mathbf {r} -\mathbf {x} )}{|\mathbf {r} -\mathbf {x} |^{3}}}\,,}
Note that the direction of the field points from the position r to the position of the masses ri; this is ensured by the minus sign. In a nutshell, this means all masses attract.
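The discrete superposition sum above can be sketched numerically; the point masses and test locations here are arbitrary illustrations, with G set to 1:

```python
def field_at(r, masses, G=1.0):
    """g(r) = -G * sum_i M_i (r - r_i) / |r - r_i|^3 for point masses
    given as (M_i, r_i) pairs in 3-D."""
    gx = gy = gz = 0.0
    for M, (x, y, z) in masses:
        dx, dy, dz = r[0] - x, r[1] - y, r[2] - z
        d3 = (dx * dx + dy * dy + dz * dz) ** 1.5
        gx -= G * M * dx / d3          # minus sign: all masses attract
        gy -= G * M * dy / d3
        gz -= G * M * dz / d3
    return (gx, gy, gz)

# midway between two equal masses the contributions cancel exactly:
g_mid = field_at((0.0, 0.0, 0.0),
                 [(1.0, (-1.0, 0.0, 0.0)), (1.0, (1.0, 0.0, 0.0))])
```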
In the integral form Gauss's law for gravity is
{\displaystyle \iint \mathbf {g} \cdot d\mathbf {S} =-4\pi GM}
while in differential form it is
{\displaystyle \nabla \cdot \mathbf {g} =-4\pi G\rho _{m}}
Therefore, the gravitational field g can be written in terms of the gradient of a gravitational potential φ(r):
{\displaystyle \mathbf {g} (\mathbf {r} )=-\nabla \phi (\mathbf {r} ).}
This is a consequence of the gravitational force F being conservative.
=== Electromagnetism ===
==== Electrostatics ====
A charged test particle with charge q experiences a force F based solely on its charge. We can similarly describe the electric field E generated by the source charge Q so that F = qE:
{\displaystyle \mathbf {E} (\mathbf {r} )={\frac {\mathbf {F} (\mathbf {r} )}{q}}.}
Using this and Coulomb's law the electric field due to a single charged particle is
{\displaystyle \mathbf {E} ={\frac {1}{4\pi \varepsilon _{0}}}{\frac {Q}{r^{2}}}{\hat {\mathbf {r} }}\,.}
The electric field is conservative, and hence is given by the gradient of a scalar potential, V(r)
{\displaystyle \mathbf {E} (\mathbf {r} )=-\nabla V(\mathbf {r} )\,.}
Gauss's law for electricity is in integral form
{\displaystyle \iint \mathbf {E} \cdot d\mathbf {S} ={\frac {Q}{\varepsilon _{0}}}}
while in differential form
{\displaystyle \nabla \cdot \mathbf {E} ={\frac {\rho _{e}}{\varepsilon _{0}}}\,.}
==== Magnetostatics ====
A steady current I flowing along a path ℓ will exert a force on nearby charged particles that is quantitatively different from the electric field force described above. The force exerted by I on a nearby charge q with velocity v is
{\displaystyle \mathbf {F} (\mathbf {r} )=q\mathbf {v} \times \mathbf {B} (\mathbf {r} ),}
where B(r) is the magnetic field, which is determined from I by the Biot–Savart law:
{\displaystyle \mathbf {B} (\mathbf {r} )={\frac {\mu _{0}I}{4\pi }}\int {\frac {d{\boldsymbol {\ell }}\times {\hat {\mathbf {r} }}}{r^{2}}}.}
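As a sketch, the Biot–Savart law can be applied to a circular loop, where symmetry makes the integral elementary: at the centre, each segment dℓ is perpendicular to the unit vector toward the centre, so every segment contributes μ0·I·dl/(4πR²) along the axis, and the sum reproduces the textbook result μ0·I/(2R). The current and radius below are arbitrary illustrative values:

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, T*m/A

def loop_B_center(I, R, n=1000):
    """|B| at the centre of a circular loop carrying current I, by summing
    the Biot-Savart contributions of n short segments.  Each segment's dl
    is perpendicular to r_hat here, so |dl x r_hat| = dl, and all
    contributions point along the loop axis."""
    dl = 2 * math.pi * R / n      # length of one segment
    B = 0.0
    for _ in range(n):
        B += MU0 * I * dl / (4 * math.pi * R ** 2)
    return B

# 1 A loop of radius 5 cm; the analytic result is mu0*I/(2R)
B = loop_B_center(1.0, 0.05)
```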
The magnetic field is not conservative in general, and hence cannot usually be written in terms of a scalar potential. However, it can be written in terms of a vector potential, A(r):
{\displaystyle \mathbf {B} (\mathbf {r} )=\nabla \times \mathbf {A} (\mathbf {r} )}
Gauss's law for magnetism in integral form is
{\displaystyle \iint \mathbf {B} \cdot d\mathbf {S} =0,}
while in differential form it is
{\displaystyle \nabla \cdot \mathbf {B} =0.}
The physical interpretation is that there are no magnetic monopoles.
==== Electrodynamics ====
In general, in the presence of both a charge density ρ(r, t) and current density J(r, t), there will be both an electric and a magnetic field, and both will vary in time. They are determined by Maxwell's equations, a set of differential equations which directly relate E and B to the electric charge density (charge per unit volume) ρ and current density (electric current per unit area) J.
Alternatively, one can describe the system in terms of its scalar and vector potentials V and A. A set of integral equations known as retarded potentials allow one to calculate V and A from ρ and J, and from there the electric and magnetic fields are determined via the relations
{\displaystyle \mathbf {E} =-\nabla V-{\frac {\partial \mathbf {A} }{\partial t}}}
{\displaystyle \mathbf {B} =\nabla \times \mathbf {A} .}
=== Continuum mechanics ===
==== Fluid dynamics ====
Fluid dynamics has fields of pressure, density, and flow rate that are connected by conservation laws for energy and momentum. The mass continuity equation is a continuity equation, representing the conservation of mass
{\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot (\rho \mathbf {u} )=0}
and the Navier–Stokes equations represent the conservation of momentum in the fluid, found from Newton's laws applied to the fluid,
{\displaystyle {\frac {\partial }{\partial t}}(\rho \mathbf {u} )+\nabla \cdot (\rho \mathbf {u} \otimes \mathbf {u} +p\mathbf {I} )=\nabla \cdot {\boldsymbol {\tau }}+\rho \mathbf {b} }
if the density ρ, pressure p, deviatoric stress tensor τ of the fluid, as well as external body forces b, are all given. The velocity field u is the vector field to solve for.
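The conservation structure of the continuity equation can be checked on a toy 1-D grid: a conservative finite-volume update keeps the total mass constant to rounding error. The grid size, time step, and velocity below are arbitrary choices (the scheme assumes a constant u > 0, a periodic domain, and a CFL number below one):

```python
def step_density(rho, u, dx, dt):
    """One conservative upwind step of d(rho)/dt + d(rho*u)/dx = 0 on a
    periodic 1-D grid with constant velocity u > 0."""
    n = len(rho)
    flux = [rho[i] * u for i in range(n)]   # mass flux rho*u in each cell
    # flux difference form: whatever leaves one cell enters the next
    return [rho[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]

n, dx, dt, u = 50, 1.0 / 50, 0.005, 1.0     # CFL = u*dt/dx = 0.25
rho = [1.0 + (0.5 if 10 <= i < 20 else 0.0) for i in range(n)]
total0 = sum(rho) * dx
for _ in range(100):
    rho = step_density(rho, u, dx, dt)
total1 = sum(rho) * dx   # equals total0: the update conserves mass
```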
=== Other examples ===
In 1839, James MacCullagh presented field equations to describe reflection and refraction in "An essay toward a dynamical theory of crystalline reflection and refraction".
== Potential theory ==
The term "potential theory" arises from the fact that, in 19th century physics, the fundamental forces of nature were believed to be derived from scalar potentials which satisfied Laplace's equation. Poisson addressed the question of the stability of the planetary orbits, which had already been settled by Lagrange to the first degree of approximation from the perturbation forces, and derived the Poisson's equation, named after him. The general form of this equation is
{\displaystyle \nabla ^{2}\phi =\sigma }
where σ is a source function (as a density, a quantity per unit volume) and ϕ the scalar potential to solve for.
In Newtonian gravitation, masses are the sources of the field so that field lines terminate at objects that have mass. Similarly, charges are the sources and sinks of electrostatic fields: positive charges emanate electric field lines, and field lines terminate at negative charges. These field concepts are also illustrated in the general divergence theorem, specifically Gauss's law's for gravity and electricity. For the cases of time-independent gravity and electromagnetism, the fields are gradients of corresponding potentials
{\displaystyle \mathbf {g} =-\nabla \phi _{g}\,,\quad \mathbf {E} =-\nabla \phi _{e}}
so substituting these into Gauss' law for each case obtains
{\displaystyle \nabla ^{2}\phi _{g}=4\pi G\rho _{g}\,,\quad \nabla ^{2}\phi _{e}=-4\pi k_{e}\rho _{e}=-{\rho _{e} \over \varepsilon _{0}}}
where ρg is the mass density, ρe the charge density, G the gravitational constant and ke = 1/4πε0 the electric force constant.
Incidentally, this similarity arises from the similarity between Newton's law of gravitation and Coulomb's law.
In the case where there is no source term (e.g. vacuum, or paired charges), these potentials obey Laplace's equation:
{\displaystyle \nabla ^{2}\phi =0.}
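A solution of Laplace's equation is fixed by its boundary values, which can be illustrated with Jacobi relaxation: repeatedly replacing each interior grid value by the average of its four neighbours converges to the discrete solution. The grid size, iteration count, and boundary data below are arbitrary illustrative choices:

```python
def solve_laplace(grid, iters=2000):
    """Jacobi relaxation for Laplace's equation on a square 2-D grid:
    interior points are repeatedly replaced by the average of their four
    neighbours; boundary values are held fixed."""
    n = len(grid)
    for _ in range(iters):
        new = [row[:] for row in grid]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                    + grid[i][j - 1] + grid[i][j + 1])
        grid = new
    return grid

# potential held at 1 on the top edge and 0 on the other three sides:
n = 12
phi = [[0.0] * n for _ in range(n)]
phi[0] = [1.0] * n
phi = solve_laplace(phi)
```

After convergence every interior value is the average of its neighbours, the discrete statement of ∇²ϕ = 0.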
For a distribution of mass (or charge), the potential can be expanded in a series of spherical harmonics, and the nth term in the series can be viewed as a potential arising from the 2n-moments (see multipole expansion). For many purposes only the monopole, dipole, and quadrupole terms are needed in calculations.
== Relativistic field theory ==
Modern formulations of classical field theories generally require Lorentz covariance as this is now recognised as a fundamental aspect of nature. A field theory tends to be expressed mathematically by using Lagrangians. This is a function that, when subjected to an action principle, gives rise to the field equations and a conservation law for the theory. The action is a Lorentz scalar, from which the field equations and symmetries can be readily derived.
Throughout we use units such that the speed of light in vacuum is 1, i.e. c = 1.
=== Lagrangian dynamics ===
Given a field tensor {\displaystyle \phi }, a scalar called the Lagrangian density {\displaystyle {\mathcal {L}}(\phi ,\partial \phi ,\partial \partial \phi ,\ldots ,x)} can be constructed from {\displaystyle \phi } and its derivatives.
From this density, the action functional can be constructed by integrating over spacetime,
{\displaystyle {\mathcal {S}}=\int {{\mathcal {L}}{\sqrt {-g}}\,\mathrm {d} ^{4}x}.}
where {\displaystyle {\sqrt {-g}}\,\mathrm {d} ^{4}x} is the volume form in curved spacetime, with {\displaystyle g\equiv \det(g_{\mu \nu })}.
Therefore, the Lagrangian itself is equal to the integral of the Lagrangian density over all space.
Then by enforcing the action principle, the Euler–Lagrange equations are obtained
{\displaystyle {\frac {\delta {\mathcal {S}}}{\delta \phi }}={\frac {\partial {\mathcal {L}}}{\partial \phi }}-\partial _{\mu }\left({\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi )}}\right)+\cdots +(-1)^{m}\partial _{\mu _{1}}\partial _{\mu _{2}}\cdots \partial _{\mu _{m-1}}\partial _{\mu _{m}}\left({\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu _{1}}\partial _{\mu _{2}}\cdots \partial _{\mu _{m-1}}\partial _{\mu _{m}}\phi )}}\right)=0.}
== Relativistic fields ==
Two of the most well-known Lorentz-covariant classical field theories are now described.
=== Electromagnetism ===
Historically, the first (classical) field theories were those describing the electric and magnetic fields (separately). After numerous experiments, it was found that these two fields were related, or, in fact, two aspects of the same field: the electromagnetic field. Maxwell's theory of electromagnetism describes the interaction of charged matter with the electromagnetic field. The first formulation of this field theory used vector fields to describe the electric and magnetic fields. With the advent of special relativity, a more complete formulation using tensor fields was found. Instead of using two vector fields describing the electric and magnetic fields, a tensor field representing these two fields together is used.
The electromagnetic four-potential is defined to be Aa = (−φ, A), and the electromagnetic four-current ja = (−ρ, j). The electromagnetic field at any point in spacetime is described by the antisymmetric (0,2)-rank electromagnetic field tensor
{\displaystyle F_{ab}=\partial _{a}A_{b}-\partial _{b}A_{a}.}
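As a sketch of how the field tensor follows from the 4-potential, F_ab = ∂_a A_b − ∂_b A_a can be evaluated numerically for a toy potential. The potential below is an arbitrary illustration (not a solution of any particular source configuration), and index placement and sign conventions vary between texts; the point is only that the construction is antisymmetric:

```python
def F_tensor(A, x, h=1e-5):
    """Field tensor F_ab = d_a A_b - d_b A_a of a 4-potential A(x),
    where A maps the 4 coordinates to 4 components, using central
    finite differences for the partial derivatives."""
    def dA(a, b):  # partial derivative of component A_b with respect to x^a
        xp, xm = list(x), list(x)
        xp[a] += h
        xm[a] -= h
        return (A(xp)[b] - A(xm)[b]) / (2 * h)
    return [[dA(a, b) - dA(b, a) for b in range(4)] for a in range(4)]

# toy linear potential: A_0 = -2*x^1 and A_3 = 3*x^1 (illustrative only)
A = lambda x: (-2.0 * x[1], 0.0, 0.0, 3.0 * x[1])
F = F_tensor(A, [0.0, 1.0, 2.0, 3.0])
# F is antisymmetric by construction: F[a][b] == -F[b][a]
```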
==== The Lagrangian ====
To obtain the dynamics for this field, we try to construct a scalar from the field. In the vacuum, we have
{\displaystyle {\mathcal {L}}=-{\frac {1}{4\mu _{0}}}F^{ab}F_{ab}\,.}
We can use gauge field theory to get the interaction term, and this gives us
{\displaystyle {\mathcal {L}}=-{\frac {1}{4\mu _{0}}}F^{ab}F_{ab}-j^{a}A_{a}\,.}
==== The equations ====
To obtain the field equations, the electromagnetic tensor in the Lagrangian density needs to be replaced by its definition in terms of the 4-potential A, and it's this potential which enters the Euler-Lagrange equations. The EM field F is not varied in the EL equations. Therefore,
{\displaystyle \partial _{b}\left({\frac {\partial {\mathcal {L}}}{\partial \left(\partial _{b}A_{a}\right)}}\right)={\frac {\partial {\mathcal {L}}}{\partial A_{a}}}\,.}
Evaluating the derivative of the Lagrangian density with respect to the field components
{\displaystyle {\frac {\partial {\mathcal {L}}}{\partial A_{a}}}=\mu _{0}j^{a}\,,}
and the derivatives of the field components
{\displaystyle {\frac {\partial {\mathcal {L}}}{\partial (\partial _{b}A_{a})}}=F^{ab}\,,}
obtains Maxwell's equations in vacuum. The source equations (Gauss' law for electricity and the Maxwell-Ampère law) are
{\displaystyle \partial _{b}F^{ab}=\mu _{0}j^{a}\,.}
while the other two (Gauss' law for magnetism and Faraday's law) are obtained from the fact that F is the 4-curl of A, or, in other words, from the fact that the Bianchi identity holds for the electromagnetic field tensor.
{\displaystyle 6F_{[ab,c]}\,=F_{ab,c}+F_{ca,b}+F_{bc,a}=0.}
where the comma indicates a partial derivative.
=== Gravitation ===
After Newtonian gravitation was found to be inconsistent with special relativity, Albert Einstein formulated a new theory of gravitation called general relativity. This treats gravitation as a geometric phenomenon ('curved spacetime') caused by masses and represents the gravitational field mathematically by a tensor field called the metric tensor. The Einstein field equations describe how this curvature is produced. Newtonian gravitation is now superseded by Einstein's theory of general relativity, in which gravitation is thought of as being due to a curved spacetime, caused by masses. The Einstein field equations,
{\displaystyle G_{ab}=\kappa T_{ab}}
describe how this curvature is produced by matter and radiation, where Gab is the Einstein tensor,
{\displaystyle G_{ab}\,=R_{ab}-{\frac {1}{2}}Rg_{ab}}
written in terms of the Ricci tensor Rab and Ricci scalar R = Rabgab, Tab is the stress–energy tensor and κ = 8πG/c⁴ is a constant. In the absence of matter and radiation (including sources) the vacuum field equations,
{\displaystyle G_{ab}=0}
can be derived by varying the Einstein–Hilbert action,
{\displaystyle S=\int R{\sqrt {-g}}\,d^{4}x}
with respect to the metric, where g is the determinant of the metric tensor gab. Solutions of the vacuum field equations are called vacuum solutions. An alternative interpretation, due to Arthur Eddington, is that
{\displaystyle R} is fundamental, {\displaystyle T} is merely one aspect of {\displaystyle R}, and {\displaystyle \kappa } is forced by the choice of units.
=== Further examples ===
Further examples of Lorentz-covariant classical field theories are
Klein-Gordon theory for real or complex scalar fields
Dirac theory for a Dirac spinor field
Yang–Mills theory for a non-abelian gauge field
== Unification attempts ==
Attempts to create a unified field theory based on classical physics are classical unified field theories. During the years between the two World Wars, the idea of unification of gravity with electromagnetism was actively pursued by several mathematicians and physicists like Albert Einstein, Theodor Kaluza, Hermann Weyl, Arthur Eddington, Gustav Mie and Ernst Reichenbacher.
Early attempts to create such a theory were based on incorporation of electromagnetic fields into the geometry of general relativity. The first geometrization of the electromagnetic field was proposed by Hermann Weyl in 1918.
In 1919, the idea of a five-dimensional approach was suggested by Theodor Kaluza. From that, a theory called Kaluza-Klein Theory was developed. It attempts to unify gravitation and electromagnetism, in a five-dimensional space-time.
There are several ways of extending the representational framework for a unified field theory which have been considered by Einstein and other researchers. These extensions in general are based in two options. The first option is based in relaxing the conditions imposed on the original formulation, and the second is based in introducing other mathematical objects into the theory. An example of the first option is relaxing the restrictions to four-dimensional space-time by considering higher-dimensional representations. That is used in Kaluza-Klein Theory. For the second, the most prominent example arises from the concept of the affine connection that was introduced into the theory of general relativity mainly through the work of Tullio Levi-Civita and Hermann Weyl.
Further development of quantum field theory changed the focus of searching for unified field theory from classical to quantum description. Because of that, many theoretical physicists gave up looking for a classical unified field theory. Quantum field theory would include unification of two other fundamental forces of nature, the strong and weak nuclear force which act on the subatomic level.
== See also ==
== Notes ==
== References ==
=== Citations ===
=== Sources ===
== External links ==
Thidé, Bo. "Electromagnetic Field Theory" (PDF). Archived from the original (PDF) on September 17, 2003. Retrieved February 14, 2006.
Carroll, Sean M. (1997). "Lecture Notes on General Relativity". arXiv:gr-qc/9712019. Bibcode:1997gr.qc....12019C. {{cite journal}}: Cite journal requires |journal= (help)
Binney, James J. "Lecture Notes on Classical Fields" (PDF). Retrieved April 30, 2007.
Sardanashvily, G. (November 2008). "Advanced Classical Field Theory". International Journal of Geometric Methods in Modern Physics. 5 (7): 1163–1189. arXiv:0811.0331. Bibcode:2008IJGMM..05.1163S. doi:10.1142/S0219887808003247. ISBN 978-981-283-895-7. S2CID 13884729.
In physics, Kaluza–Klein theory (KK theory) is a classical unified field theory of gravitation and electromagnetism built around the idea of a fifth dimension beyond the common 4D of space and time, and is considered an important precursor to string theory. In the Kaluza–Klein setup, the vacuum has the usual three dimensions of space and one dimension of time, but with a further microscopic spatial dimension in the shape of a tiny circle. Gunnar Nordström had an earlier, similar idea, but in that case a fifth component was added to the electromagnetic vector potential, representing the Newtonian gravitational potential, and the Maxwell equations were written in five dimensions.
The five-dimensional (5D) theory developed in three steps. The original hypothesis came from Theodor Kaluza, who sent his results to Albert Einstein in 1919 and published them in 1921. Kaluza presented a purely classical extension of general relativity to 5D, with a metric tensor of 15 components. Ten components are identified with the 4D spacetime metric, four components with the electromagnetic vector potential, and one component with an unidentified scalar field sometimes called the "radion" or the "dilaton". Correspondingly, the 5D Einstein equations yield the 4D Einstein field equations, the Maxwell equations for the electromagnetic field, and an equation for the scalar field. Kaluza also introduced the "cylinder condition" hypothesis, that no component of the five-dimensional metric depends on the fifth dimension. Without this restriction, terms are introduced that involve derivatives of the fields with respect to the fifth coordinate, and this extra degree of freedom makes the mathematics of the fully variable 5D relativity enormously complex. Standard 4D physics seems to manifest this "cylinder condition" and, along with it, simpler mathematics.
In 1926, Oskar Klein gave Kaluza's classical five-dimensional theory a quantum interpretation, to accord with the then-recent discoveries of Werner Heisenberg and Erwin Schrödinger. Klein introduced the hypothesis that the fifth dimension was curled up and microscopic, to explain the cylinder condition. Klein suggested that the geometry of the extra fifth dimension could take the form of a circle, with a radius of 10⁻³⁰ cm. More precisely, the radius of the circular dimension is 23 times the Planck length, which in turn is of the order of 10⁻³³ cm. Klein also made a contribution to the classical theory by providing a properly normalized 5D metric. Work continued on the Kaluza field theory during the 1930s by Einstein and colleagues at Princeton University.
In the 1940s, the classical theory was completed, and the full field equations including the scalar field were obtained by three independent research groups: Yves Thiry, working in France on his dissertation under André Lichnerowicz; Pascual Jordan, Günther Ludwig, and Claus Müller in Germany, with critical input from Wolfgang Pauli and Markus Fierz; and Paul Scherrer working alone in Switzerland. Jordan's work led to the scalar–tensor theory of Brans–Dicke; Carl H. Brans and Robert H. Dicke were apparently unaware of Thiry or Scherrer. The full Kaluza equations under the cylinder condition are quite complex, and most English-language reviews, as well as the English translations of Thiry, contain some errors. The curvature tensors for the complete Kaluza equations were evaluated using tensor-algebra software in 2015, verifying results of J. A. Ferrari and R. Coquereaux & G. Esposito-Farese. The 5D covariant form of the energy–momentum source terms is treated by L. L. Williams.
== Kaluza hypothesis ==
In his 1921 article, Kaluza established all the elements of the classical five-dimensional theory: the Kaluza–Klein metric, the Kaluza–Klein–Einstein field equations, the equations of motion, the stress–energy tensor, and the cylinder condition. With no free parameters, it merely extends general relativity to five dimensions. One starts by hypothesizing a form of the five-dimensional Kaluza–Klein metric
${\widetilde {g}}_{ab}$, where Latin indices span five dimensions. Let one also introduce the four-dimensional spacetime metric ${g}_{\mu \nu }$, where Greek indices span the usual four dimensions of space and time; a 4-vector $A^{\mu }$ identified with the electromagnetic vector potential; and a scalar field $\phi $. Then decompose the 5D metric so that the 4D metric is framed by the electromagnetic vector potential, with the scalar field at the fifth diagonal. This can be visualized as

$${\widetilde {g}}_{ab}\equiv {\begin{bmatrix}g_{\mu \nu }+\phi ^{2}A_{\mu }A_{\nu }&\phi ^{2}A_{\mu }\\\phi ^{2}A_{\nu }&\phi ^{2}\end{bmatrix}}.$$
One can write more precisely
$${\widetilde {g}}_{\mu \nu }\equiv g_{\mu \nu }+\phi ^{2}A_{\mu }A_{\nu },\qquad {\widetilde {g}}_{5\nu }\equiv {\widetilde {g}}_{\nu 5}\equiv \phi ^{2}A_{\nu },\qquad {\widetilde {g}}_{55}\equiv \phi ^{2},$$
where the index $5$ indicates the fifth coordinate by convention, even though the first four coordinates are indexed with 0, 1, 2, and 3. The associated inverse metric is
$${\widetilde {g}}^{ab}\equiv {\begin{bmatrix}g^{\mu \nu }&-A^{\mu }\\-A^{\nu }&g_{\alpha \beta }A^{\alpha }A^{\beta }+{\frac {1}{\phi ^{2}}}\end{bmatrix}}.$$
This decomposition is quite general, and all terms are dimensionless. Kaluza then applies the machinery of standard general relativity to this metric. The field equations are obtained from five-dimensional Einstein equations, and the equations of motion from the five-dimensional geodesic hypothesis. The resulting field equations provide both the equations of general relativity and of electrodynamics; the equations of motion provide the four-dimensional geodesic equation and the Lorentz force law, and one finds that electric charge is identified with motion in the fifth dimension.
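The claimed inverse can be checked by block multiplication. A minimal numerical sketch in Python (an illustration only: the 4D metric is taken to be Minkowski, and the potential and scalar field are given arbitrary values, since the identity holds for any choice):

```python
import random

# 4D Minkowski metric as a concrete choice for g_mu_nu (an assumption for
# illustration; the block identity holds for any 4D metric)
eta = [[-1.0 if i == j == 0 else (1.0 if i == j else 0.0) for j in range(4)] for i in range(4)]

random.seed(1)
phi = random.uniform(0.5, 2.0)
A_up = [random.uniform(-1, 1) for _ in range(4)]                        # A^mu
A_lo = [sum(eta[m][n] * A_up[n] for n in range(4)) for m in range(4)]   # A_mu

# 5D Kaluza metric: 4D block framed by the vector potential, phi^2 at the corner
g5 = [[0.0] * 5 for _ in range(5)]
for m in range(4):
    for n in range(4):
        g5[m][n] = eta[m][n] + phi**2 * A_lo[m] * A_lo[n]
    g5[m][4] = g5[4][m] = phi**2 * A_lo[m]
g5[4][4] = phi**2

# Claimed inverse: g^{mu nu} in the 4D block, -A^mu on the edges,
# A.A + 1/phi^2 at the corner
ginv = [[0.0] * 5 for _ in range(5)]
for m in range(4):
    for n in range(4):
        ginv[m][n] = eta[m][n]          # Minkowski is its own inverse
    ginv[m][4] = ginv[4][m] = -A_up[m]
ginv[4][4] = sum(A_lo[m] * A_up[m] for m in range(4)) + 1.0 / phi**2

# The product should be the 5x5 identity to machine precision
prod = [[sum(g5[a][c] * ginv[c][b] for c in range(5)) for b in range(5)] for a in range(5)]
assert all(abs(prod[a][b] - (1.0 if a == b else 0.0)) < 1e-12 for a in range(5) for b in range(5))
```

The off-diagonal blocks cancel because $\phi ^{2}A_{\mu }$ in the metric is paired against $-A^{\mu }$ in the inverse, which is exactly the structure exploited in the decomposition above.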
The hypothesis for the metric implies an invariant five-dimensional length element $ds$:
$$ds^{2}\equiv {\widetilde {g}}_{ab}\,dx^{a}\,dx^{b}=g_{\mu \nu }\,dx^{\mu }\,dx^{\nu }+\phi ^{2}(A_{\nu }\,dx^{\nu }+dx^{5})^{2}.$$
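Expanding the square makes the agreement with the block metric explicit, term by term:

$$\phi ^{2}(A_{\nu }\,dx^{\nu }+dx^{5})^{2}=\phi ^{2}A_{\mu }A_{\nu }\,dx^{\mu }\,dx^{\nu }+2\phi ^{2}A_{\nu }\,dx^{\nu }\,dx^{5}+\phi ^{2}\,(dx^{5})^{2},$$

so that, added to $g_{\mu \nu }\,dx^{\mu }\,dx^{\nu }$, the three terms reproduce ${\widetilde {g}}_{\mu \nu }\,dx^{\mu }\,dx^{\nu }+2{\widetilde {g}}_{5\nu }\,dx^{5}\,dx^{\nu }+{\widetilde {g}}_{55}\,(dx^{5})^{2}$.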
== Field equations from the Kaluza hypothesis ==
The Kaluza–Klein–Einstein field equations of the five-dimensional theory were never adequately provided by Kaluza or Klein because they ignored the scalar field. The full Kaluza field equations are generally attributed to Thiry, who obtained vacuum field equations, although Kaluza originally provided a stress–energy tensor for his theory, and Thiry included a stress–energy tensor in his thesis. But as described by Gonner, several independent groups worked on the field equations in the 1940s and earlier. Thiry is perhaps best known only because an English translation was provided by Appelquist, Chodos, & Freund in their review book. Appelquist et al. also provided an English translation of Kaluza's article. Translations of the three (1946, 1947, 1948) Jordan articles can be found on the ResearchGate and Academia.edu archives. The first correct English-language Kaluza field equations, including the scalar field, were provided by Williams.
To obtain the 5D Kaluza–Klein–Einstein field equations, the 5D Kaluza–Klein–Christoffel symbols ${\widetilde {\Gamma }}_{bc}^{a}$ are calculated from the 5D Kaluza–Klein metric ${\widetilde {g}}_{ab}$, and the 5D Kaluza–Klein–Ricci tensor ${\widetilde {R}}_{ab}$ is calculated from the 5D connections.
The classic results of Thiry and other authors presume the cylinder condition:
$${\frac {\partial {\widetilde {g}}_{ab}}{\partial x^{5}}}=0.$$
Without this assumption, the field equations become much more complex, providing many more degrees of freedom that can be identified with various new fields. Paul Wesson and colleagues have pursued relaxation of the cylinder condition to gain extra terms that can be identified with the matter fields, for which Kaluza otherwise inserted a stress–energy tensor by hand.
A standing objection to the original Kaluza hypothesis is that it invokes the fifth dimension only to negate its dynamics. But Thiry argued that the interpretation of the Lorentz force law in terms of a five-dimensional geodesic militates strongly for a fifth dimension irrespective of the cylinder condition. Most authors have therefore employed the cylinder condition in deriving the field equations. Furthermore, vacuum equations are typically assumed, for which
$${\widetilde {R}}_{ab}=0,$$
where
$${\widetilde {R}}_{ab}\equiv \partial _{c}{\widetilde {\Gamma }}_{ab}^{c}-\partial _{b}{\widetilde {\Gamma }}_{ca}^{c}+{\widetilde {\Gamma }}_{cd}^{c}{\widetilde {\Gamma }}_{ab}^{d}-{\widetilde {\Gamma }}_{bd}^{c}{\widetilde {\Gamma }}_{ac}^{d}$$
and
$${\widetilde {\Gamma }}_{bc}^{a}\equiv {\frac {1}{2}}{\widetilde {g}}^{ad}(\partial _{b}{\widetilde {g}}_{dc}+\partial _{c}{\widetilde {g}}_{db}-\partial _{d}{\widetilde {g}}_{bc}).$$
The vacuum field equations obtained in this way by Thiry and Jordan's group are as follows.
The field equation for $\phi $ is obtained from
$${\widetilde {R}}_{55}=0\Rightarrow \Box \phi ={\frac {1}{4}}\phi ^{3}F^{\alpha \beta }F_{\alpha \beta },$$
where
$$F_{\alpha \beta }\equiv \partial _{\alpha }A_{\beta }-\partial _{\beta }A_{\alpha },\qquad \Box \equiv g^{\mu \nu }\nabla _{\mu }\nabla _{\nu },$$

and $\nabla _{\mu }$ is a standard 4D covariant derivative. This equation shows that the electromagnetic field is a source for the scalar field. Note that the scalar field cannot be set to a constant without constraining the electromagnetic field. The earlier treatments by Kaluza and Klein did not have an adequate description of the scalar field and did not realize the implied constraint on the electromagnetic field by assuming the scalar field to be constant.
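To make the constraint explicit: for constant $\phi $ the left-hand side $\Box \phi $ vanishes identically, and the scalar field equation forces

$$0={\frac {1}{4}}\phi ^{3}F^{\alpha \beta }F_{\alpha \beta }\quad \Rightarrow \quad F^{\alpha \beta }F_{\alpha \beta }=0,$$

so a constant scalar field is consistent only with a vanishing electromagnetic invariant.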
The field equation for $A^{\nu }$ is obtained from
$${\widetilde {R}}_{5\alpha }=0={\frac {1}{2\phi }}g^{\beta \mu }\nabla _{\mu }(\phi ^{3}F_{\alpha \beta })-A_{\alpha }\phi \Box \phi .$$
It has the form of the vacuum Maxwell equations if the scalar field is constant.
The field equation for the 4D Ricci tensor $R_{\mu \nu }$ is obtained from
$$\begin{aligned}{\widetilde {R}}_{\mu \nu }-{\frac {1}{2}}{\widetilde {g}}_{\mu \nu }{\widetilde {R}}&=0\Rightarrow \\R_{\mu \nu }-{\frac {1}{2}}g_{\mu \nu }R&={\frac {1}{2}}\phi ^{2}\left(g^{\alpha \beta }F_{\mu \alpha }F_{\nu \beta }-{\frac {1}{4}}g_{\mu \nu }F_{\alpha \beta }F^{\alpha \beta }\right)+{\frac {1}{\phi }}(\nabla _{\mu }\nabla _{\nu }\phi -g_{\mu \nu }\Box \phi ),\end{aligned}$$
where $R$ is the standard 4D Ricci scalar.
This equation shows the remarkable result, called the "Kaluza miracle", that the precise form for the electromagnetic stress–energy tensor emerges from the 5D vacuum equations as a source in the 4D equations: field from the vacuum. This relation allows the definitive identification of
$A^{\mu }$ with the electromagnetic vector potential. Therefore, the field needs to be rescaled with a conversion constant $k$ such that $A^{\mu }\to kA^{\mu }$.
The relation above shows that we must have
$${\frac {k^{2}}{2}}={\frac {8\pi G}{c^{4}}}{\frac {1}{\mu _{0}}}={\frac {2G}{c^{2}}}4\pi \epsilon _{0},$$
where $G$ is the gravitational constant and $\mu _{0}$ is the permeability of free space. In the Kaluza theory, the gravitational constant can be understood as an electromagnetic coupling constant in the metric. There is also a stress–energy tensor for the scalar field. The scalar field behaves like a variable gravitational constant, in terms of modulating the coupling of electromagnetic stress–energy to spacetime curvature. The sign of $\phi ^{2}$ in the metric is fixed by correspondence with 4D theory so that electromagnetic energy densities are positive. It is often assumed that the fifth coordinate is spacelike in its signature in the metric.
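The two expressions for $k^{2}/2$ are equivalent because $\mu _{0}\epsilon _{0}=1/c^{2}$. A quick numerical check in SI units (a sketch with rounded constant values, so only approximate agreement is asserted):

```python
import math

G = 6.674e-11               # m^3 kg^-1 s^-2, gravitational constant
c = 2.998e8                 # m/s, speed of light
mu0 = 4 * math.pi * 1e-7    # N/A^2, vacuum permeability
eps0 = 1.0 / (mu0 * c**2)   # F/m, vacuum permittivity

lhs = 8 * math.pi * G / c**4 / mu0       # (8 pi G / c^4) (1 / mu0)
rhs = 2 * G / c**2 * 4 * math.pi * eps0  # (2 G / c^2) 4 pi eps0
assert math.isclose(lhs, rhs, rel_tol=1e-12)

# The conversion constant itself
k = math.sqrt(2 * lhs)
print(k)   # ~5.7e-19 in these SI units
```

The resulting $k$ of about $5.7\times 10^{-19}$ in these units quantifies how weakly the electromagnetic potential enters the metric.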
In the presence of matter, the 5D vacuum condition cannot be assumed. Indeed, Kaluza did not assume it. The full field equations require evaluation of the 5D Kaluza–Klein–Einstein tensor
$${\widetilde {G}}_{ab}\equiv {\widetilde {R}}_{ab}-{\frac {1}{2}}{\widetilde {g}}_{ab}{\widetilde {R}},$$
as seen in the recovery of the electromagnetic stress–energy tensor above. The 5D curvature tensors are complex, and most English-language reviews contain errors in either
${\widetilde {G}}_{ab}$ or ${\widetilde {R}}_{ab}$, as does the English translation of Thiry. In 2015, a complete set of 5D curvature tensors under the cylinder condition, evaluated using tensor-algebra software, was produced.
== Equations of motion from the Kaluza hypothesis ==
The equations of motion are obtained from the five-dimensional geodesic hypothesis in terms of a 5-velocity ${\widetilde {U}}^{a}\equiv dx^{a}/ds$:
$${\widetilde {U}}^{b}{\widetilde {\nabla }}_{b}{\widetilde {U}}^{a}={\frac {d{\widetilde {U}}^{a}}{ds}}+{\widetilde {\Gamma }}_{bc}^{a}{\widetilde {U}}^{b}{\widetilde {U}}^{c}=0.$$
This equation can be recast in several ways, and it has been studied in various forms by authors including Kaluza, Pauli, Gross & Perry, Gegenberg & Kunstatter, and Wesson & Ponce de Leon, but it is instructive to convert it back to the usual four-dimensional length element $c^{2}\,d\tau ^{2}\equiv g_{\mu \nu }\,dx^{\mu }\,dx^{\nu }$, which is related to the five-dimensional length element $ds$ as given above:
$$ds^{2}=c^{2}\,d\tau ^{2}+\phi ^{2}(kA_{\nu }\,dx^{\nu }+dx^{5})^{2}.$$
Then the 5D geodesic equation can be written for the spacetime components of the 4-velocity:
$$U^{\nu }\equiv {\frac {dx^{\nu }}{d\tau }},$$

$${\frac {dU^{\mu }}{d\tau }}+{\widetilde {\Gamma }}_{\alpha \beta }^{\mu }U^{\alpha }U^{\beta }+2{\widetilde {\Gamma }}_{5\alpha }^{\mu }U^{\alpha }U^{5}+{\widetilde {\Gamma }}_{55}^{\mu }(U^{5})^{2}+U^{\mu }{\frac {d}{d\tau }}\ln {\frac {c\,d\tau }{ds}}=0.$$
The term quadratic in $U^{\nu }$ provides the 4D geodesic equation plus some electromagnetic terms:
$${\widetilde {\Gamma }}_{\alpha \beta }^{\mu }=\Gamma _{\alpha \beta }^{\mu }+{\frac {1}{2}}g^{\mu \nu }k^{2}\phi ^{2}(A_{\alpha }F_{\beta \nu }+A_{\beta }F_{\alpha \nu }-A_{\alpha }A_{\beta }\partial _{\nu }\ln \phi ^{2}).$$
The term linear in $U^{\nu }$ provides the Lorentz force law:
$${\widetilde {\Gamma }}_{5\alpha }^{\mu }={\frac {1}{2}}g^{\mu \nu }k\phi ^{2}(F_{\alpha \nu }-A_{\alpha }\partial _{\nu }\ln \phi ^{2}).$$
This is another expression of the "Kaluza miracle". The same hypothesis for the 5D metric that provides electromagnetic stress–energy in the Einstein equations also provides the Lorentz force law in the equations of motion, along with the 4D geodesic equation. Yet correspondence with the Lorentz force law requires that we identify the component of the 5-velocity along the fifth dimension with electric charge:
$$kU^{5}=k{\frac {dx^{5}}{d\tau }}\to {\frac {q}{mc}},$$
where $m$ is the particle mass and $q$ is the particle electric charge. Thus electric charge is understood as motion along the fifth dimension. The fact that the Lorentz force law could be understood as a geodesic in five dimensions was to Kaluza a primary motivation for considering the five-dimensional hypothesis, even in the presence of the aesthetically unpleasing cylinder condition.
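Christoffel components such as ${\widetilde {\Gamma }}_{5\alpha }^{\mu }$ above can be checked against the raw Christoffel definition. A numerical sketch in Python using finite differences (an illustration only: $k$ is set to 1, $g_{\mu \nu }$ is taken to be Minkowski, and $\phi $ and $A_{\mu }$ are hypothetical smooth test fields):

```python
import math

# Hypothetical smooth test fields on 4D spacetime (cylinder condition:
# nothing depends on x^5). g_mu_nu is Minkowski; k is set to 1.
def phi(x):
    return 1.0 + 0.1 * math.sin(x[0]) + 0.05 * x[1]

def A(x):  # covariant components A_mu
    return [0.2 * x[1], 0.3 * math.cos(x[0]), 0.1 * x[3], 0.05 * x[2]]

eta = [[-1.0 if i == j == 0 else (1.0 if i == j else 0.0) for j in range(4)] for i in range(4)]

def metric5(x):
    p2, a = phi(x) ** 2, A(x)
    g = [[0.0] * 5 for _ in range(5)]
    for m in range(4):
        for n in range(4):
            g[m][n] = eta[m][n] + p2 * a[m] * a[n]
        g[m][4] = g[4][m] = p2 * a[m]
    g[4][4] = p2
    return g

x = [0.3, -0.7, 0.5, 1.1]

def d_metric5(c, h=1e-5):   # partial derivative of the 5D metric; d/dx^5 = 0
    if c == 4:
        return [[0.0] * 5 for _ in range(5)]
    xp, xm = list(x), list(x)
    xp[c] += h; xm[c] -= h
    gp, gm = metric5(xp), metric5(xm)
    return [[(gp[a][b] - gm[a][b]) / (2 * h) for b in range(5)] for a in range(5)]

dg = [d_metric5(c) for c in range(5)]
p2, a = phi(x) ** 2, A(x)
a_up = [sum(eta[m][n] * a[n] for n in range(4)) for m in range(4)]

# Inverse metric from the closed-form block expression
ginv = [[0.0] * 5 for _ in range(5)]
for m in range(4):
    for n in range(4):
        ginv[m][n] = eta[m][n]
    ginv[m][4] = ginv[4][m] = -a_up[m]
ginv[4][4] = sum(a[m] * a_up[m] for m in range(4)) + 1.0 / p2

def christoffel(mu, b, c):  # Gamma^mu_{bc} from the standard definition
    return 0.5 * sum(ginv[mu][d] * (dg[b][d][c] + dg[c][d][b] - dg[d][b][c]) for d in range(5))

def d_num(f, c, h=1e-5):    # central finite difference of a scalar function
    xp, xm = list(x), list(x)
    xp[c] += h; xm[c] -= h
    return (f(xp) - f(xm)) / (2 * h)

dlnp2 = [d_num(lambda y: math.log(phi(y) ** 2), c) for c in range(4)]
F = [[d_num(lambda y: A(y)[n], m) - d_num(lambda y: A(y)[m], n) for n in range(4)] for m in range(4)]

for mu in range(4):
    # Gamma^mu_55 = -1/2 g^{mu alpha} d_alpha phi^2  (note d_al phi^2 = phi^2 d_al ln phi^2)
    expect55 = -0.5 * sum(eta[mu][al] * p2 * dlnp2[al] for al in range(4))
    assert abs(christoffel(mu, 4, 4) - expect55) < 1e-6
    for al in range(4):
        # Gamma^mu_{5 alpha} = 1/2 g^{mu nu} phi^2 (F_{alpha nu} - A_alpha d_nu ln phi^2)
        expect5a = 0.5 * sum(eta[mu][nu] * p2 * (F[al][nu] - a[al] * dlnp2[nu]) for nu in range(4))
        assert abs(christoffel(mu, 4, al) - expect5a) < 1e-6
```

The check exercises both the mixed components that produce the Lorentz force term and the ${\widetilde {\Gamma }}_{55}^{\mu }$ component discussed next.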
Yet there is a problem: the term quadratic in $U^{5}$,
$${\widetilde {\Gamma }}_{55}^{\mu }=-{\frac {1}{2}}g^{\mu \alpha }\partial _{\alpha }\phi ^{2}.$$
If there is no gradient in the scalar field, the term quadratic in $U^{5}$ vanishes. But otherwise the expression above implies
$$U^{5}\sim c{\frac {q/m}{G^{1/2}}}.$$
For elementary particles, $U^{5}>10^{20}c$. The term quadratic in $U^{5}$ should dominate the equation, perhaps in contradiction to experience. This was the main shortfall of the five-dimensional theory as Kaluza saw it, and he gives it some discussion in his original article.
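The size of $U^{5}$ can be estimated numerically. In SI units the Gaussian-units expression $c\,(q/m)/G^{1/2}$ becomes $c\,(q/m)/{\sqrt {4\pi \epsilon _{0}G}}$; a sketch for the electron:

```python
import math

# Order-of-magnitude estimate of U^5/c ~ (q/m)/sqrt(4 pi eps0 G) for the
# electron (SI units; factors of order unity are ignored, as in the text)
G = 6.674e-11          # m^3 kg^-1 s^-2
eps0 = 8.854e-12       # F/m
q = 1.602e-19          # C, elementary charge
m = 9.109e-31          # kg, electron mass

u5_over_c = (q / m) / math.sqrt(4 * math.pi * eps0 * G)
print(f"U^5 ~ {u5_over_c:.1e} c")   # roughly 2e21 c for the electron
assert u5_over_c > 1e20
```

The enormous ratio reflects how much stronger the electromagnetic coupling of an electron is than its gravitational coupling.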
The equation of motion for $U^{5}$ is particularly simple under the cylinder condition. Start with the alternate form of the geodesic equation, written for the covariant 5-velocity:
$${\frac {d{\widetilde {U}}_{a}}{ds}}={\frac {1}{2}}{\widetilde {U}}^{b}{\widetilde {U}}^{c}{\frac {\partial {\widetilde {g}}_{bc}}{\partial x^{a}}}.$$
This means that under the cylinder condition, ${\widetilde {U}}_{5}$ is a constant of the five-dimensional motion:
$${\widetilde {U}}_{5}={\widetilde {g}}_{5a}{\widetilde {U}}^{a}=\phi ^{2}{\frac {c\,d\tau }{ds}}(kA_{\nu }U^{\nu }+U^{5})={\text{constant}}.$$
== Kaluza's hypothesis for the matter stress–energy tensor ==
Kaluza proposed a five-dimensional matter stress tensor ${\widetilde {T}}_{M}^{ab}$ of the form
$${\widetilde {T}}_{M}^{ab}=\rho {\frac {dx^{a}}{ds}}{\frac {dx^{b}}{ds}},$$
where $\rho $ is a density, and the length element $ds$ is as defined above.
Then the spacetime component gives a typical "dust" stress–energy tensor:
$${\widetilde {T}}_{M}^{\mu \nu }=\rho {\frac {dx^{\mu }}{ds}}{\frac {dx^{\nu }}{ds}}.$$
The mixed component provides a 4-current source for the Maxwell equations:
$${\widetilde {T}}_{M}^{5\mu }=\rho {\frac {dx^{\mu }}{ds}}{\frac {dx^{5}}{ds}}=\rho U^{\mu }{\frac {q}{kmc}}.$$
Just as the five-dimensional metric comprises the four-dimensional metric framed by the electromagnetic vector potential, the five-dimensional stress–energy tensor comprises the four-dimensional stress–energy tensor framed by the vector 4-current.
== Quantum interpretation of Klein ==
Kaluza's original hypothesis was purely classical and extended discoveries of general relativity. By the time of Klein's contribution, the discoveries of Heisenberg, Schrödinger, and Louis de Broglie were receiving a lot of attention. Klein's Nature article suggested that the fifth dimension is closed and periodic, and that the identification of electric charge with motion in the fifth dimension can be interpreted as standing waves of wavelength
$\lambda ^{5}$, much like the electrons around a nucleus in the Bohr model of the atom. The quantization of electric charge could then be nicely understood in terms of integer multiples of fifth-dimensional momentum. Combining the previous Kaluza result for $U^{5}$ in terms of electric charge, and a de Broglie relation for momentum $p^{5}=h/\lambda ^{5}$, Klein obtained an expression for the 0th mode of such waves:
$$mU^{5}={\frac {cq}{G^{1/2}}}={\frac {h}{\lambda ^{5}}}\quad \Rightarrow \quad \lambda ^{5}\sim {\frac {hG^{1/2}}{cq}},$$
where $h$ is the Planck constant. Klein found $\lambda ^{5}\sim 10^{-30}$ cm, and thereby an explanation for the cylinder condition in this small value.
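Klein's estimate can be reproduced from modern SI constants. A sketch, assuming the normalization $\lambda ^{5}=(hc/e){\sqrt {2\kappa }}$ with $\kappa =8\pi G/c^{4}$ (the order-unity factors depend on convention, and $e$ carries a factor ${\sqrt {4\pi \epsilon _{0}}}$ relative to the Gaussian-units formula above):

```python
import math

# Klein's fifth-dimensional circumference, lambda_5 ~ (hc/e) sqrt(2 kappa),
# with kappa = 8 pi G / c^4 (SI units; the sqrt(4 pi eps0) converts the
# elementary charge from the Gaussian-units expression in the text)
h = 6.626e-34          # J s
c = 2.998e8            # m/s
G = 6.674e-11          # m^3 kg^-1 s^-2
eps0 = 8.854e-12       # F/m
e = 1.602e-19          # C

kappa = 8 * math.pi * G / c**4
lam5 = (h * c * math.sqrt(4 * math.pi * eps0) / e) * math.sqrt(2 * kappa)
print(f"lambda_5 ~ {lam5 * 100:.1e} cm")   # ~0.8e-30 cm, Klein's value
```

The smallness of this length, a few hundred Planck lengths, is what hides the fifth dimension and enforces the cylinder condition.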
Klein's Zeitschrift für Physik article of the same year gave a more detailed treatment that explicitly invoked the techniques of Schrödinger and de Broglie. It recapitulated much of the classical theory of Kaluza described above and then departed into Klein's quantum interpretation. Klein solved a Schrödinger-like wave equation using an expansion in terms of fifth-dimensional waves resonating in the closed, compact fifth dimension.
== Quantum field theory interpretation ==
== Group theory interpretation ==
In 1926, Oskar Klein proposed that the fourth spatial dimension is curled up in a circle of a very small radius, so that a particle moving a short distance along that axis would return to where it began. The distance a particle can travel before reaching its initial position is said to be the size of the dimension. This extra dimension is a compact set, and construction of this compact dimension is referred to as compactification.
In modern geometry, the extra fifth dimension can be understood to be the circle group U(1), as electromagnetism can essentially be formulated as a gauge theory on a fiber bundle, the circle bundle, with gauge group U(1). In Kaluza–Klein theory this group suggests that gauge symmetry is the symmetry of circular compact dimensions. Once this geometrical interpretation is understood, it is relatively straightforward to replace U(1) by a general Lie group. Such generalizations are often called Yang–Mills theories. If a distinction is drawn, then it is that Yang–Mills theories occur on a flat spacetime, whereas Kaluza–Klein treats the more general case of curved spacetime. The base space of Kaluza–Klein theory need not be four-dimensional spacetime; it can be any (pseudo-)Riemannian manifold, or even a supersymmetric manifold or orbifold or even a noncommutative space.
The construction can be outlined, roughly, as follows. One starts by considering a principal fiber bundle P with gauge group G over a manifold M. Given a connection on the bundle, and a metric on the base manifold, and a gauge invariant metric on the tangent of each fiber, one can construct a bundle metric defined on the entire bundle. Computing the scalar curvature of this bundle metric, one finds that it is constant on each fiber: this is the "Kaluza miracle". One did not have to explicitly impose a cylinder condition, or to compactify: by assumption, the gauge group is already compact. Next, one takes this scalar curvature as the Lagrangian density, and, from this, constructs the Einstein–Hilbert action for the bundle, as a whole. The equations of motion, the Euler–Lagrange equations, can be then obtained by considering where the action is stationary with respect to variations of either the metric on the base manifold, or of the gauge connection. Variations with respect to the base metric gives the Einstein field equations on the base manifold, with the energy–momentum tensor given by the curvature (field strength) of the gauge connection. On the flip side, the action is stationary against variations of the gauge connection precisely when the gauge connection solves the Yang–Mills equations. Thus, by applying a single idea: the principle of least action, to a single quantity: the scalar curvature on the bundle (as a whole), one obtains simultaneously all of the needed field equations, for both the spacetime and the gauge field.
As an approach to the unification of the forces, it is straightforward to apply the Kaluza–Klein theory in an attempt to unify gravity with the strong and electroweak forces by using the symmetry group of the Standard Model, SU(3) × SU(2) × U(1). However, an attempt to convert this interesting geometrical construction into a bona-fide model of reality founders on a number of issues, including the fact that the fermions must be introduced in an artificial way (in nonsupersymmetric models). Nonetheless, KK theory remains an important touchstone in theoretical physics and is often embedded in more sophisticated theories. It is studied in its own right as an object of geometric interest in K-theory.
Even in the absence of a completely satisfying theoretical physics framework, the idea of exploring extra, compactified dimensions is of considerable interest in the experimental physics and astrophysics communities. A variety of predictions, with real experimental consequences, can be made (in the case of large extra dimensions and warped models). For example, on the simplest of principles, one might expect to have standing waves in the extra compactified dimension(s). If a spatial extra dimension is of radius R, the invariant mass of such standing waves would be $M_{n}=nh/Rc$, with n an integer, h the Planck constant, and c the speed of light. This set of possible mass values is often called the Kaluza–Klein tower. Similarly, in thermal quantum field theory a compactification of the Euclidean time dimension leads to the Matsubara frequencies and thus to a discretized thermal energy spectrum.
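The tower is easy to tabulate. A sketch for an illustrative (not measured) compact radius of $10^{-18}$ m:

```python
# Kaluza-Klein tower masses M_n = n h / (R c) for a hypothetical compact
# dimension of radius R = 1e-18 m (an illustrative choice, not a measured value)
h = 6.626e-34      # J s
c = 2.998e8        # m/s
R = 1e-18          # m, assumed radius of the extra dimension

GeV_per_kg = 1.0 / 1.783e-27     # 1 GeV/c^2 is about 1.783e-27 kg

tower = [n * h / (R * c) * GeV_per_kg for n in range(1, 4)]
print([f"{m:.0f} GeV" for m in tower])   # roughly 1240, 2480, 3720 GeV
```

Evenly spaced resonances of this kind, rather than any single mass, are the collider signature of a compact dimension.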
However, Klein's approach to a quantum theory is flawed and, for example, leads to a calculated electron mass on the order of the Planck mass.
Examples of experimental pursuits include work by the CDF collaboration, which has re-analyzed particle collider data for the signature of effects associated with large extra dimensions/warped models.
Robert Brandenberger and Cumrun Vafa have speculated that in the early universe, cosmic inflation caused three of the space dimensions to expand to cosmological size while the remaining dimensions of space remained microscopic.
== Space–time–matter theory ==
One particular variant of Kaluza–Klein theory is space–time–matter theory or induced matter theory, chiefly promulgated by Paul Wesson and other members of the Space–Time–Matter Consortium. In this version of the theory, it is noted that solutions to the equation
${\widetilde {R}}_{ab}=0$ may be re-expressed so that in four dimensions, these solutions satisfy Einstein's equations $G_{\mu \nu }=8\pi T_{\mu \nu }$, with the precise form of the $T_{\mu \nu }$ following from the Ricci-flat condition on the five-dimensional space. In other words, the cylinder condition of the previous development is dropped, and the stress–energy now comes from the derivatives of the 5D metric with respect to the fifth coordinate. Because the energy–momentum tensor is normally understood to be due to concentrations of matter in four-dimensional space, the above result is interpreted as saying that four-dimensional matter is induced from geometry in five-dimensional space.
In particular, the soliton solutions of ${\widetilde {R}}_{ab}=0$
can be shown to contain the Friedmann–Lemaître–Robertson–Walker metric in both radiation-dominated (early universe) and matter-dominated (later universe) forms. The general equations can be shown to be sufficiently consistent with classical tests of general relativity to be acceptable on physical principles, while still leaving considerable freedom to also provide interesting cosmological models.
== Geometric interpretation ==
The Kaluza–Klein theory has a particularly elegant presentation in terms of geometry. In a certain sense, it looks just like ordinary gravity in free space, except that it is phrased in five dimensions instead of four.
=== Einstein equations ===
The equations governing ordinary gravity in free space can be obtained by applying the variational principle to a suitable action. Let M be a (pseudo-)Riemannian manifold, which may be taken as the spacetime of general relativity. If g is the metric on this manifold, one defines the action S(g) as
$$S(g)=\int _{M}R(g)\operatorname {vol} (g),$$
where R(g) is the scalar curvature, and vol(g) is the volume element. By applying the variational principle to the action
$${\frac {\delta S(g)}{\delta g}}=0,$$
one obtains precisely the Einstein equations for free space:
$$R_{ij}-{\frac {1}{2}}g_{ij}R=0,$$
where Rij is the Ricci tensor.
=== Maxwell equations ===
By contrast, the Maxwell equations describing electromagnetism can be understood to be the Hodge equations of a principal U(1)-bundle or circle bundle $\pi :P\to M$ with fiber U(1). That is, the electromagnetic field $F$ is a harmonic 2-form in the space $\Omega ^{2}(M)$ of differentiable 2-forms on the manifold $M$. In the absence of charges and currents, the free-field Maxwell equations are
$$\mathrm {d} F=0\quad {\text{and}}\quad \mathrm {d} {\star }F=0,$$
where $\star $ is the Hodge star operator.
=== Kaluza–Klein geometry ===
To build the Kaluza–Klein theory, one picks an invariant metric on the circle $S^{1}$ that is the fiber of the U(1)-bundle of electromagnetism. In this discussion, an invariant metric is simply one that is invariant under rotations of the circle. Suppose that this metric gives the circle a total length $\Lambda $. One then considers metrics ${\widehat {g}}$ on the bundle $P$ that are consistent with both the fiber metric and the metric on the underlying manifold $M$. The consistency conditions are:
The projection of ${\widehat {g}}$ to the vertical subspace $\operatorname {Vert} _{p}P\subset T_{p}P$ needs to agree with the metric on the fiber over a point in the manifold $M$.
The projection of ${\widehat {g}}$ to the horizontal subspace $\operatorname {Hor} _{p}P\subset T_{p}P$ of the tangent space at point $p\in P$ must be isomorphic to the metric $g$ on $M$ at $\pi (p)$.
The Kaluza–Klein action for such a metric is given by
$$S({\widehat {g}})=\int _{P}R({\widehat {g}})\operatorname {vol} ({\widehat {g}}).$$
The scalar curvature, written in components, then expands to
$$R({\widehat {g}})=\pi ^{*}\left(R(g)-{\frac {\Lambda ^{2}}{2}}|F|^{2}\right),$$
where $\pi ^{*}$ is the pullback of the fiber bundle projection $\pi :P\to M$. The connection $A$ on the fiber bundle is related to the electromagnetic field strength as
$$\pi ^{*}F=dA.$$
That there always exists such a connection, even for fiber bundles of arbitrarily complex topology, is a result from homology and specifically, K-theory. Applying Fubini's theorem and integrating on the fiber, one gets
$$S({\widehat {g}})=\Lambda \int _{M}\left(R(g)-{\frac {1}{\Lambda ^{2}}}|F|^{2}\right)\operatorname {vol} (g).$$
Varying the action with respect to the component $A$, one regains the Maxwell equations. Applying the variational principle to the base metric $g$, one gets the Einstein equations
$$R_{ij}-{\frac {1}{2}}g_{ij}R={\frac {1}{\Lambda ^{2}}}T_{ij}$$
with the electromagnetic stress–energy tensor being given by
$$T^{ij}=F^{ik}F^{jl}g_{kl}-{\frac {1}{4}}g^{ij}|F|^{2}.$$
The original theory identifies $\Lambda $ with the fiber metric $g_{55}$ and allows $\Lambda $ to vary from fiber to fiber. In this case, the coupling between gravity and the electromagnetic field is not constant, but has its own dynamical field, the radion.
=== Generalizations ===
In the above, the size of the loop $\Lambda $ acts as a coupling constant between the gravitational field and the electromagnetic field. If the base manifold is four-dimensional, the Kaluza–Klein manifold P is five-dimensional. The fifth dimension is a compact space and is called the compact dimension. The technique of introducing compact dimensions to obtain a higher-dimensional manifold is referred to as compactification. Compactification does not produce group actions on chiral fermions except in very specific cases: the dimension of the total space must be 2 mod 8, and the G-index of the Dirac operator of the compact space must be nonzero.
The above development generalizes in a more-or-less straightforward fashion to general principal G-bundles for some arbitrary Lie group G taking the place of U(1). In such a case, the theory is often referred to as a Yang–Mills theory and is sometimes taken to be synonymous. If the underlying manifold is supersymmetric, the resulting theory is a supersymmetric Yang–Mills theory.
== Empirical tests ==
No experimental or observational signs of extra dimensions have been officially reported. Many theoretical search techniques for detecting Kaluza–Klein resonances have been proposed using the mass couplings of such resonances with the top quark. An analysis of results from the LHC in December 2010 severely constrains theories with large extra dimensions.
The observation of a Higgs-like boson at the LHC establishes a new empirical test which can be applied to the search for Kaluza–Klein resonances and supersymmetric particles.
The loop Feynman diagrams that exist in the Higgs interactions allow any particle with electric charge and mass to run in such a loop. Standard Model particles besides the top quark and W boson do not make significant contributions to the cross-section observed in the H → γγ decay, but if there are new particles beyond the Standard Model, they could potentially change the ratio of the predicted Standard Model H → γγ cross-section to the experimentally observed cross-section. Hence a measurement of any dramatic change to the H → γγ cross-section predicted by the Standard Model is crucial in probing the physics beyond it.
An article from July 2018 gives some hope for this theory; in the article they dispute that gravity is leaking into higher dimensions as in brane theory. However, the article does demonstrate that electromagnetism and gravity share the same number of dimensions, and this fact lends support to Kaluza–Klein theory; whether the number of dimensions is really 3 + 1 or in fact 4 + 1 is the subject of further debate.
== See also ==
== Notes ==
== References ==
Kaluza, Theodor (1921). "Zum Unitätsproblem in der Physik". Sitzungsber. Preuss. Akad. Wiss. Berlin. (Math. Phys.): 966–972. Bibcode:1921SPAW.......966K. https://archive.org/details/sitzungsberichte1921preussi
Klein, Oskar (1926). "Quantentheorie und fünfdimensionale Relativitätstheorie". Zeitschrift für Physik A. 37 (12): 895–906. Bibcode:1926ZPhy...37..895K. doi:10.1007/BF01397481.
Witten, Edward (1981). "Search for a realistic Kaluza–Klein theory". Nuclear Physics B. 186 (3): 412–428. Bibcode:1981NuPhB.186..412W. doi:10.1016/0550-3213(81)90021-3.
Appelquist, Thomas; Chodos, Alan; Freund, Peter G. O. (1987). Modern Kaluza–Klein Theories. Menlo Park, Cal.: Addison–Wesley. ISBN 978-0-201-09829-7. (Includes reprints of the above articles as well as those of other important papers relating to Kaluza–Klein theory.)
Duff, M. J. (1994). "Kaluza–Klein Theory in Perspective". In Lindström, Ulf (ed.). Proceedings of the Symposium 'The Oskar Klein Centenary'. Singapore: World Scientific. pp. 22–35. ISBN 978-981-02-2332-8.
Overduin, J. M.; Wesson, P. S. (1997). "Kaluza–Klein Gravity". Physics Reports. 283 (5): 303–378. arXiv:gr-qc/9805018. Bibcode:1997PhR...283..303O. doi:10.1016/S0370-1573(96)00046-4. S2CID 119087814.
Wesson, Paul S. (2006). Five-Dimensional Physics: Classical and Quantum Consequences of Kaluza–Klein Cosmology. Singapore: World Scientific. Bibcode:2006fdpc.book.....W. ISBN 978-981-256-661-4.
== Further reading ==
The CDF Collaboration, Search for Extra Dimensions using Missing Energy at CDF, (2004) (A simplified presentation of the search made for extra dimensions at the Collider Detector at Fermilab (CDF) particle physics facility.)
John M. Pierre, SUPERSTRINGS! Extra Dimensions, (2003).
Chris Pope, Lectures on Kaluza–Klein Theory.
Edward Witten (2014). "A Note On Einstein, Bergmann, and the Fifth Dimension", arXiv:1401.8048
In the general theory of relativity, the Einstein field equations (EFE; also known as Einstein's equations) relate the geometry of spacetime to the distribution of matter within it.
The equations were published by Albert Einstein in 1915 in the form of a tensor equation which related the local spacetime curvature (expressed by the Einstein tensor) with the local energy, momentum and stress within that spacetime (expressed by the stress–energy tensor).
Analogously to the way that electromagnetic fields are related to the distribution of charges and currents via Maxwell's equations, the EFE relate the spacetime geometry to the distribution of mass–energy, momentum and stress, that is, they determine the metric tensor of spacetime for a given arrangement of stress–energy–momentum in the spacetime. The relationship between the metric tensor and the Einstein tensor allows the EFE to be written as a set of nonlinear partial differential equations when used in this way. The solutions of the EFE are the components of the metric tensor. The inertial trajectories of particles and radiation (geodesics) in the resulting geometry are then calculated using the geodesic equation.
As well as implying local energy–momentum conservation, the EFE reduce to Newton's law of gravitation in the limit of a weak gravitational field and velocities that are much less than the speed of light.
Exact solutions for the EFE can only be found under simplifying assumptions such as symmetry. Special classes of exact solutions are most often studied since they model many gravitational phenomena, such as rotating black holes and the expanding universe. Further simplification is achieved in approximating the spacetime as having only small deviations from flat spacetime, leading to the linearized EFE. These equations are used to study phenomena such as gravitational waves.
== Mathematical form ==
The Einstein field equations (EFE) may be written in the form:
{\displaystyle G_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu },}
where Gμν is the Einstein tensor, gμν is the metric tensor, Tμν is the stress–energy tensor, Λ is the cosmological constant and κ is the Einstein gravitational constant.
The Einstein tensor is defined as
{\displaystyle G_{\mu \nu }=R_{\mu \nu }-{\frac {1}{2}}Rg_{\mu \nu },}
where Rμν is the Ricci curvature tensor, and R is the scalar curvature. This is a symmetric second-degree tensor that depends on only the metric tensor and its first and second derivatives.
The Einstein gravitational constant is defined as
{\displaystyle \kappa ={\frac {8\pi G}{c^{4}}}\approx 2.07665\times 10^{-43}\,{\textrm {N}}^{-1},}
where G is the Newtonian constant of gravitation and c is the speed of light in vacuum.
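As a quick numerical sketch of the definition above (the CODATA values of G and c used here are assumptions of this example, not part of the article), the Einstein gravitational constant can be evaluated directly:

```python
import math

G = 6.67430e-11    # Newtonian constant of gravitation, m^3 kg^-1 s^-2 (CODATA)
c = 299_792_458.0  # speed of light in vacuum, m/s (exact by definition)

# kappa = 8*pi*G / c^4, with units of N^-1 (= s^2 kg^-1 m^-1)
kappa = 8 * math.pi * G / c**4
print(f"kappa = {kappa:.5e} N^-1")  # ~2.07665e-43 N^-1
```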
The EFE can thus also be written as
{\displaystyle R_{\mu \nu }-{\frac {1}{2}}Rg_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu }.}
In standard units, each term on the left has quantity dimension of L−2.
The expression on the left represents the curvature of spacetime as determined by the metric; the expression on the right represents the stress–energy–momentum content of spacetime. The EFE can then be interpreted as a set of equations dictating how stress–energy–momentum determines the curvature of spacetime.
These equations, together with the geodesic equation, which dictates how freely falling matter moves through spacetime, form the core of the mathematical formulation of general relativity.
The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
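The component counting in the paragraph above is elementary arithmetic and can be sketched as follows (a minimal illustration, not part of the theory itself):

```python
def sym_components(n: int) -> int:
    """Independent components of a symmetric n x n tensor."""
    return n * (n + 1) // 2

n = 4
total = sym_components(n)      # 10 independent equations for a symmetric 4x4 tensor
bianchi = n                    # four contracted Bianchi identities
print(total, total - bianchi)  # 10 equations, 6 independent after the identities
```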
Although the Einstein field equations were initially formulated in the context of a four-dimensional theory, some theorists have explored their consequences in n dimensions. The equations in contexts outside of general relativity are still referred to as the Einstein field equations. The vacuum field equations (obtained when Tμν is everywhere zero) define Einstein manifolds.
The equations are more complex than they appear. Given a specified distribution of matter and energy in the form of a stress–energy tensor, the EFE are understood to be equations for the metric tensor gμν, since both the Ricci tensor and scalar curvature depend on the metric in a complicated nonlinear manner. When fully written out, the EFE are a system of ten coupled, nonlinear, hyperbolic-elliptic partial differential equations.
=== Sign convention ===
The above form of the EFE is the standard established by Misner, Thorne, and Wheeler (MTW). The authors analyzed conventions that exist and classified these according to three signs ([S1] [S2] [S3]):
{\displaystyle {\begin{aligned}g_{\mu \nu }&=[S1]\times \operatorname {diag} (-1,+1,+1,+1)\\[6pt]{R^{\mu }}_{\alpha \beta \gamma }&=[S2]\times \left(\Gamma _{\alpha \gamma ,\beta }^{\mu }-\Gamma _{\alpha \beta ,\gamma }^{\mu }+\Gamma _{\sigma \beta }^{\mu }\Gamma _{\gamma \alpha }^{\sigma }-\Gamma _{\sigma \gamma }^{\mu }\Gamma _{\beta \alpha }^{\sigma }\right)\\[6pt]G_{\mu \nu }&=[S3]\times \kappa T_{\mu \nu }\end{aligned}}}
The third sign above is related to the choice of convention for the Ricci tensor:
{\displaystyle R_{\mu \nu }=[S2]\times [S3]\times {R^{\alpha }}_{\mu \alpha \nu }}
With these definitions Misner, Thorne, and Wheeler classify themselves as (+ + +), whereas Weinberg (1972) is (+ − −), Peebles (1980) and Efstathiou et al. (1990) are (− + +), Rindler (1977), Atwater (1974), Collins Martin & Squires (1989) and Peacock (1999) are (− + −).
Authors including Einstein have used a different sign in their definition for the Ricci tensor which results in the sign of the constant on the right side being negative:
{\displaystyle R_{\mu \nu }-{\frac {1}{2}}Rg_{\mu \nu }-\Lambda g_{\mu \nu }=-\kappa T_{\mu \nu }.}
The sign of the cosmological term would change in both these versions if the (+ − − −) metric sign convention is used rather than the MTW (− + + +) metric sign convention adopted here.
=== Equivalent formulations ===
Taking the trace with respect to the metric of both sides of the EFE one gets
{\displaystyle R-{\frac {D}{2}}R+D\Lambda =\kappa T,}
where D is the spacetime dimension. Solving for R and substituting this in the original EFE, one gets the following equivalent "trace-reversed" form:
{\displaystyle R_{\mu \nu }-{\frac {2}{D-2}}\Lambda g_{\mu \nu }=\kappa \left(T_{\mu \nu }-{\frac {1}{D-2}}Tg_{\mu \nu }\right).}
In D = 4 dimensions this reduces to
{\displaystyle R_{\mu \nu }-\Lambda g_{\mu \nu }=\kappa \left(T_{\mu \nu }-{\frac {1}{2}}T\,g_{\mu \nu }\right).}
Reversing the trace again would restore the original EFE. The trace-reversed form may be more convenient in some cases (for example, when one is interested in weak-field limit and can replace gμν in the expression on the right with the Minkowski metric without significant loss of accuracy).
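The trace-reversal algebra above can be checked symbolically. The sketch below treats tensor components as scalar symbols (an assumption that is harmless here, since the manipulation is purely algebraic): it solves the trace equation for R, substitutes it into the original EFE, and confirms the result matches the trace-reversed form.

```python
import sympy as sp

D, Lam, kap, R, T, Rmn, Tmn, gmn = sp.symbols(
    'D Lambda kappa R T R_mn T_mn g_mn')

# Trace of the EFE: R - (D/2) R + D*Lambda = kappa*T, solved for R
R_sol = sp.solve(sp.Eq(R - sp.Rational(1, 2) * D * R + D * Lam, kap * T), R)[0]

# Original EFE with the scalar curvature substituted
lhs = Rmn - sp.Rational(1, 2) * R_sol * gmn + Lam * gmn - kap * Tmn

# The claimed trace-reversed form, rearranged to zero
trace_reversed = Rmn - 2 * Lam / (D - 2) * gmn - kap * (Tmn - T / (D - 2) * gmn)

assert sp.simplify(lhs - trace_reversed) == 0
print("trace-reversed form verified")
```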
== Cosmological constant ==
In the Einstein field equations
{\displaystyle G_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu }\,,}
the term containing the cosmological constant Λ was absent from the version in which Einstein originally published them. He then included the term with the cosmological constant to allow for a universe that is not expanding or contracting. This effort was unsuccessful because:
any desired steady state solution described by this equation is unstable, and
observations by Edwin Hubble showed that our universe is expanding.
Einstein then abandoned Λ, remarking to George Gamow "that the introduction of the cosmological term was the biggest blunder of his life".
The inclusion of this term does not create inconsistencies. For many years the cosmological constant was almost universally assumed to be zero. More recent astronomical observations have shown an accelerating expansion of the universe, and to explain this a positive value of Λ is needed. The effect of the cosmological constant is negligible at the scale of a galaxy or smaller.
Einstein thought of the cosmological constant as an independent parameter, but its term in the field equation can also be moved algebraically to the other side and incorporated as part of the stress–energy tensor:
{\displaystyle T_{\mu \nu }^{\mathrm {(vac)} }=-{\frac {\Lambda }{\kappa }}g_{\mu \nu }\,.}
This tensor describes a vacuum state with an energy density ρvac and isotropic pressure pvac that are fixed constants and given by
{\displaystyle \rho _{\mathrm {vac} }=-p_{\mathrm {vac} }={\frac {\Lambda }{\kappa }},}
where it is assumed that Λ has SI unit m−2 and κ is defined as above.
The existence of a cosmological constant is thus equivalent to the existence of a vacuum energy and a pressure of opposite sign. This has led to the terms "cosmological constant" and "vacuum energy" being used interchangeably in general relativity.
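To give the relation ρvac = Λ/κ a sense of scale, the sketch below plugs in an illustrative observed value Λ ≈ 1.1 × 10⁻⁵² m⁻² (this value, and the constants, are assumptions of the example, not figures from the article):

```python
import math

G = 6.67430e-11    # m^3 kg^-1 s^-2
c = 299_792_458.0  # m/s
Lam = 1.1e-52      # cosmological constant in m^-2 (illustrative observed value)

kappa = 8 * math.pi * G / c**4   # Einstein gravitational constant, N^-1
rho_vac = Lam / kappa            # vacuum energy density, J/m^3
p_vac = -rho_vac                 # isotropic pressure of the opposite sign
print(f"rho_vac ~ {rho_vac:.2e} J/m^3")  # on the order of 1e-10 J/m^3
```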
== Features ==
=== Conservation of energy and momentum ===
General relativity is consistent with the local conservation of energy and momentum expressed as
{\displaystyle \nabla _{\beta }T^{\alpha \beta }={T^{\alpha \beta }}_{;\beta }=0.}
This expresses the local conservation of stress–energy. This conservation law is a physical requirement. With his field equations Einstein ensured that general relativity is consistent with this conservation condition.
=== Nonlinearity ===
The nonlinearity of the EFE distinguishes general relativity from many other fundamental physical theories. For example, Maxwell's equations of electromagnetism are linear in the electric and magnetic fields, and charge and current distributions (i.e. the sum of two solutions is also a solution); another example is the Schrödinger equation of quantum mechanics, which is linear in the wavefunction.
=== Correspondence principle ===
The EFE reduce to Newton's law of gravity by using both the weak-field approximation and the low-velocity approximation. The constant G appearing in the EFE is determined by making these two approximations.
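In that weak-field, low-velocity limit the EFE recover the familiar Newtonian surface gravity g = GM/r². A quick numeric sanity check for the Earth (the mass and radius values here are rounded assumptions of this sketch):

```python
G = 6.67430e-11      # m^3 kg^-1 s^-2
M_earth = 5.9722e24  # Earth's mass, kg
R_earth = 6.371e6    # Earth's mean radius, m

# Newtonian surface gravity, the value general relativity must reproduce
# in the weak-field, low-velocity limit
g_surface = G * M_earth / R_earth**2
print(f"g ~ {g_surface:.2f} m/s^2")  # ~9.82 m/s^2
```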
== Vacuum field equations ==
If the energy–momentum tensor Tμν is zero in the region under consideration, then the field equations are also referred to as the vacuum field equations. By setting Tμν = 0 in the trace-reversed field equations, the vacuum field equations, also known as 'Einstein vacuum equations' (EVE), can be written as
{\displaystyle R_{\mu \nu }=0\,.}
In the case of nonzero cosmological constant, the equations are
{\displaystyle R_{\mu \nu }={\frac {\Lambda }{{\frac {D}{2}}-1}}g_{\mu \nu }\,.}
The solutions to the vacuum field equations are called vacuum solutions. Flat Minkowski space is the simplest example of a vacuum solution. Nontrivial examples include the Schwarzschild solution and the Kerr solution.
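The claim that the Schwarzschild metric solves Rμν = 0 can be verified by direct computation. The sketch below builds the Christoffel symbols and Ricci tensor from the standard Schwarzschild line element in (t, r, θ, φ) coordinates with signature (− + + +); the symbol r_s stands for the Schwarzschild radius.

```python
import sympy as sp

t, r, th, ph, rs = sp.symbols('t r theta phi r_s', positive=True)
x = [t, r, th, ph]
f = 1 - rs / r

# Schwarzschild metric, signature (- + + +)
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)
ginv = g.inv()
n = 4

# Christoffel symbols: Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sum(ginv[a, d] * (sp.diff(g[d, c], x[b]) + sp.diff(g[d, b], x[c])
                             - sp.diff(g[b, c], x[d])) for d in range(n)) / 2
           for c in range(n)] for b in range(n)] for a in range(n)]

def ricci(b, c):
    """Ricci tensor component R_{bc} from the Christoffel symbols."""
    expr = sum(sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][b][a], x[c])
               + sum(Gamma[a][a][d] * Gamma[d][b][c]
                     - Gamma[a][c][d] * Gamma[d][b][a] for d in range(n))
               for a in range(n))
    return sp.simplify(expr)

# Every component vanishes: the Schwarzschild solution is a vacuum solution
assert all(ricci(b, c) == 0 for b in range(n) for c in range(n))
print("Schwarzschild metric is Ricci-flat: R_mn = 0")
```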
Manifolds with a vanishing Ricci tensor, Rμν = 0, are referred to as Ricci-flat manifolds and manifolds with a Ricci tensor proportional to the metric as Einstein manifolds.
== Einstein–Maxwell equations ==
If the energy–momentum tensor Tμν is that of an electromagnetic field in free space, i.e. if the electromagnetic stress–energy tensor
{\displaystyle T^{\alpha \beta }=\,-{\frac {1}{\mu _{0}}}\left({F^{\alpha }}^{\psi }{F_{\psi }}^{\beta }+{\tfrac {1}{4}}g^{\alpha \beta }F_{\psi \tau }F^{\psi \tau }\right)}
is used, then the Einstein field equations are called the Einstein–Maxwell equations (with cosmological constant Λ, taken to be zero in conventional relativity theory):
{\displaystyle G^{\alpha \beta }+\Lambda g^{\alpha \beta }={\frac {\kappa }{\mu _{0}}}\left({F^{\alpha }}^{\psi }{F_{\psi }}^{\beta }+{\tfrac {1}{4}}g^{\alpha \beta }F_{\psi \tau }F^{\psi \tau }\right).}
Additionally, the covariant Maxwell equations are also applicable in free space:
{\displaystyle {\begin{aligned}{F^{\alpha \beta }}_{;\beta }&=0\\F_{[\alpha \beta ;\gamma ]}&={\tfrac {1}{3}}\left(F_{\alpha \beta ;\gamma }+F_{\beta \gamma ;\alpha }+F_{\gamma \alpha ;\beta }\right)={\tfrac {1}{3}}\left(F_{\alpha \beta ,\gamma }+F_{\beta \gamma ,\alpha }+F_{\gamma \alpha ,\beta }\right)=0,\end{aligned}}}
where the semicolon represents a covariant derivative, and the brackets denote anti-symmetrization. The first equation asserts that the 4-divergence of the 2-form F is zero, and the second that its exterior derivative is zero. From the latter, it follows by the Poincaré lemma that in a coordinate chart it is possible to introduce an electromagnetic field potential Aα such that
{\displaystyle F_{\alpha \beta }=A_{\alpha ;\beta }-A_{\beta ;\alpha }=A_{\alpha ,\beta }-A_{\beta ,\alpha }}
in which the comma denotes a partial derivative. This is often taken as equivalent to the covariant Maxwell equation from which it is derived. However, there are global solutions of the equation that may lack a globally defined potential.
== Solutions ==
The solutions of the Einstein field equations are metrics of spacetime. These metrics describe the structure of the spacetime including the inertial motion of objects in the spacetime. As the field equations are non-linear, they cannot always be completely solved (i.e. without making approximations). For example, there is no known complete solution for a spacetime with two massive bodies in it (which is a theoretical model of a binary star system, for example). However, approximations are usually made in these cases. These are commonly referred to as post-Newtonian approximations. Even so, there are several cases where the field equations have been solved completely, and those are called exact solutions.
The study of exact solutions of Einstein's field equations is one of the activities of cosmology. It leads to the prediction of black holes and to different models of evolution of the universe.
One can also discover new solutions of the Einstein field equations via the method of orthonormal frames as pioneered by Ellis and MacCallum. In this approach, the Einstein field equations are reduced to a set of coupled, nonlinear, ordinary differential equations. As discussed by Hsu and Wainwright, self-similar solutions to the Einstein field equations are fixed points of the resulting dynamical system. New solutions have been discovered using these methods by LeBlanc and Kohli and Haslam.
== Linearized EFE ==
The nonlinearity of the EFE makes finding exact solutions difficult. One way of solving the field equations is to make an approximation, namely, that far from the source(s) of gravitating matter, the gravitational field is very weak and the spacetime approximates that of Minkowski space. The metric is then written as the sum of the Minkowski metric and a term representing the deviation of the true metric from the Minkowski metric, ignoring higher-power terms. This linearization procedure can be used to investigate the phenomena of gravitational radiation.
== Polynomial form ==
Despite the EFE as written containing the inverse of the metric tensor, they can be arranged in a form that contains the metric tensor in polynomial form and without its inverse. First, the determinant of the metric in 4 dimensions can be written
{\displaystyle \det(g)={\tfrac {1}{24}}\varepsilon ^{\alpha \beta \gamma \delta }\varepsilon ^{\kappa \lambda \mu \nu }g_{\alpha \kappa }g_{\beta \lambda }g_{\gamma \mu }g_{\delta \nu }}
using the Levi-Civita symbol; and the inverse of the metric in 4 dimensions can be written as:
{\displaystyle g^{\alpha \kappa }={\frac {{\tfrac {1}{6}}\varepsilon ^{\alpha \beta \gamma \delta }\varepsilon ^{\kappa \lambda \mu \nu }g_{\beta \lambda }g_{\gamma \mu }g_{\delta \nu }}{\det(g)}}\,.}
Substituting this expression of the inverse of the metric into the equations then multiplying both sides by a suitable power of det(g) to eliminate it from the denominator results in polynomial equations in the metric tensor and its first and second derivatives. The Einstein–Hilbert action from which the equations are derived can also be written in polynomial form by suitable redefinitions of the fields.
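The two Levi-Civita identities above can be checked numerically on a random symmetric "metric" (the test matrix below is an arbitrary well-conditioned example, not a physical metric):

```python
import itertools
import numpy as np

def levi_civita(n: int = 4) -> np.ndarray:
    """Rank-n Levi-Civita symbol as a dense array."""
    eps = np.zeros((n,) * n)
    for perm in itertools.permutations(range(n)):
        # Parity from the number of inversions in the permutation
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        eps[perm] = -1.0 if inv % 2 else 1.0
    return eps

eps = levi_civita()
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
g = A + A.T + 8 * np.eye(4)  # random symmetric, well-conditioned matrix

# det(g) = (1/24) eps^{abcd} eps^{klmn} g_ak g_bl g_cm g_dn
det_g = np.einsum('abcd,klmn,ak,bl,cm,dn', eps, eps, g, g, g, g) / 24

# g^{ak} = (1/6) eps^{abcd} eps^{klmn} g_bl g_cm g_dn / det(g)
g_inv = np.einsum('abcd,klmn,bl,cm,dn', eps, eps, g, g, g) / (6 * det_g)

assert np.isclose(det_g, np.linalg.det(g))
assert np.allclose(g_inv, np.linalg.inv(g))
print("Levi-Civita determinant and inverse formulas verified")
```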
== See also ==
== Notes ==
== References ==
See General relativity resources.
Misner, Charles W.; Thorne, Kip S.; Wheeler, John Archibald (1973). Gravitation. San Francisco: W. H. Freeman. ISBN 978-0-7167-0344-0.
Weinberg, Steven (1972). Gravitation and Cosmology. John Wiley & Sons. ISBN 0-471-92567-5.
Peacock, John A. (1999). Cosmological Physics. Cambridge University Press. ISBN 978-0521410724.
== External links ==
"Einstein equations", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Caltech Tutorial on Relativity — A simple introduction to Einstein's Field Equations.
The Meaning of Einstein's Equation — An explanation of Einstein's field equation, its derivation, and some of its consequences
Video Lecture on Einstein's Field Equations by MIT Physics Professor Edmund Bertschinger.
Arch and scaffold: How Einstein found his field equations Physics Today November 2015, History of the Development of the Field Equations
=== External images ===
The Einstein field equation on the wall of the Museum Boerhaave in downtown Leiden
Suzanne Imber, "The impact of general relativity on the Atacama Desert", Einstein field equation on the side of a train in Bolivia.
The theory of relativity usually encompasses two interrelated physics theories by Albert Einstein: special relativity and general relativity, proposed and published in 1905 and 1915, respectively. Special relativity applies to all physical phenomena in the absence of gravity. General relativity explains the law of gravitation and its relation to the forces of nature. It applies to the cosmological and astrophysical realm, including astronomy.
The theory transformed theoretical physics and astronomy during the 20th century, superseding a 200-year-old theory of mechanics created primarily by Isaac Newton. It introduced concepts including 4-dimensional spacetime as a unified entity of space and time, relativity of simultaneity, kinematic and gravitational time dilation, and length contraction. In the field of physics, relativity improved the science of elementary particles and their fundamental interactions, along with ushering in the nuclear age. With relativity, cosmology and astrophysics predicted extraordinary astronomical phenomena such as neutron stars, black holes, and gravitational waves.
== Development and acceptance ==
Albert Einstein published the theory of special relativity in 1905, building on many theoretical results and empirical findings obtained by Albert A. Michelson, Hendrik Lorentz, Henri Poincaré and others. Max Planck, Hermann Minkowski and others did subsequent work.
Einstein developed general relativity between 1907 and 1915, with contributions by many others after 1915. The final form of general relativity was published in 1916.
The term "theory of relativity" was based on the expression "relative theory" (German: Relativtheorie) used in 1906 by Planck, who emphasized how the theory uses the principle of relativity. In the discussion section of the same paper, Alfred Bucherer used for the first time the expression "theory of relativity" (German: Relativitätstheorie).
By the 1920s, the physics community understood and accepted special relativity. It rapidly became a significant and necessary tool for theorists and experimentalists in the new fields of atomic physics, nuclear physics, and quantum mechanics.
By comparison, general relativity did not appear to be as useful, beyond making minor corrections to predictions of Newtonian gravitation theory. It seemed to offer little potential for experimental test, as most of its assertions were on an astronomical scale. Its mathematics seemed difficult and fully understandable only by a small number of people. Around 1960, general relativity became central to physics and astronomy. New mathematical techniques to apply to general relativity streamlined calculations and made its concepts more easily visualized. As astronomical phenomena were discovered, such as quasars (1963), the 3-kelvin microwave background radiation (1965), pulsars (1967), and the first black hole candidates (1981), the theory explained their attributes, and measurement of them further confirmed the theory.
== Special relativity ==
Special relativity is a theory of the structure of spacetime. It was introduced in Einstein's 1905 paper "On the Electrodynamics of Moving Bodies" (for the contributions of many other physicists and mathematicians, see History of special relativity). Special relativity is based on two postulates which are contradictory in classical mechanics:
The laws of physics are the same for all observers in any inertial frame of reference relative to one another (principle of relativity).
The speed of light in vacuum is the same for all observers, regardless of their relative motion or of the motion of the light source.
The resultant theory copes with experiment better than classical mechanics. For instance, postulate 2 explains the results of the Michelson–Morley experiment. Moreover, the theory has many surprising and counterintuitive consequences. Some of these are:
Relativity of simultaneity: Two events, simultaneous for one observer, may not be simultaneous for another observer if the observers are in relative motion.
Time dilation: Moving clocks are measured to tick more slowly than an observer's "stationary" clock.
Length contraction: Objects are measured to be shortened in the direction that they are moving with respect to the observer.
Maximum speed is finite: No physical object, message or field line can travel faster than the speed of light in vacuum.
The effect of gravity can only travel through space at the speed of light, not faster or instantaneously.
Mass–energy equivalence: E = mc2, energy and mass are equivalent and transmutable.
Relativistic mass, an idea used by some researchers.
The defining feature of special relativity is the replacement of the Galilean transformations of classical mechanics by the Lorentz transformations. (See Maxwell's equations of electromagnetism.)
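Time dilation and length contraction above are both governed by the Lorentz factor γ = 1/√(1 − v²/c²). A small numeric illustration (the choice of v = 0.8c is arbitrary):

```python
import math

c = 299_792_458.0  # speed of light in vacuum, m/s

def lorentz_gamma(v: float) -> float:
    """Lorentz factor for speed v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

v = 0.8 * c
gamma = lorentz_gamma(v)  # 1/sqrt(1 - 0.64) = 1/0.6
print(gamma)              # ~1.6667: moving clocks tick slower by this factor

# Length contraction: a metre stick moving at 0.8c measures 1/gamma metres
print(1.0 / gamma)        # ~0.6

# Mass-energy equivalence: rest energy of 1 kg, E = m c^2
print(1.0 * c**2)         # ~8.99e16 J
```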
== General relativity ==
General relativity is a theory of gravitation developed by Einstein in the years 1907–1915. The development of general relativity began with the equivalence principle, under which the states of accelerated motion and being at rest in a gravitational field (for example, when standing on the surface of the Earth) are physically identical. The upshot of this is that free fall is inertial motion: an object in free fall is falling because that is how objects move when there is no force being exerted on them, instead of this being due to the force of gravity as is the case in classical mechanics. This is incompatible with classical mechanics and special relativity because in those theories inertially moving objects cannot accelerate with respect to each other, but objects in free fall do so. To resolve this difficulty Einstein first proposed that spacetime is curved. Einstein discussed his idea with mathematician Marcel Grossmann and they concluded that general relativity could be formulated in the context of Riemannian geometry which had been developed in the 1800s.
In 1915, he devised the Einstein field equations which relate the curvature of spacetime with the mass, energy, and any momentum within it.
Some of the consequences of general relativity are:
Gravitational time dilation: Clocks run slower in deeper gravitational wells.
Precession: Orbits precess in a way unexpected in Newton's theory of gravity. (This has been observed in the orbit of Mercury and in binary pulsars).
Light deflection: Rays of light bend in the presence of a gravitational field.
Frame-dragging: Rotating masses "drag along" the spacetime around them.
Expansion of the universe: The universe is expanding, and certain components within the universe can accelerate the expansion.
Technically, general relativity is a theory of gravitation whose defining feature is its use of the Einstein field equations. The solutions of the field equations are metric tensors which define the topology of the spacetime and how objects move inertially.
== Experimental evidence ==
Einstein stated that the theory of relativity belongs to a class of "principle-theories". As such, it employs an analytic method, which means that the elements of this theory are not based on hypothesis but on empirical discovery. By observing natural processes, we understand their general characteristics, devise mathematical models to describe what we observed, and by analytical means we deduce the necessary conditions that have to be satisfied. Measurement of separate events must satisfy these conditions and match the theory's conclusions.
=== Tests of special relativity ===
Relativity is a falsifiable theory: It makes predictions that can be tested by experiment. In the case of special relativity, these include the principle of relativity, the constancy of the speed of light, and time dilation. The predictions of special relativity have been confirmed in numerous tests since Einstein published his paper in 1905, but three experiments conducted between 1881 and 1938 were critical to its validation. These are the Michelson–Morley experiment, the Kennedy–Thorndike experiment, and the Ives–Stilwell experiment. Einstein derived the Lorentz transformations from first principles in 1905, but these three experiments allow the transformations to be induced from experimental evidence.
Maxwell's equations—the foundation of classical electromagnetism—describe light as a wave that moves with a characteristic velocity. The modern view is that light needs no medium of transmission, but Maxwell and his contemporaries were convinced that light waves were propagated in a medium, analogous to sound propagating in air, and ripples propagating on the surface of a pond. This hypothetical medium was called the luminiferous aether, at rest relative to the "fixed stars" and through which the Earth moves. Fresnel's partial ether dragging hypothesis ruled out the measurement of first-order (v/c) effects, and although observations of second-order effects (v2/c2) were possible in principle, Maxwell thought they were too small to be detected with then-current technology.
The Michelson–Morley experiment was designed to detect second-order effects of the "aether wind"—the motion of the aether relative to the Earth. Michelson designed an instrument called the Michelson interferometer to accomplish this. The apparatus was sufficiently accurate to detect the expected effects, but he obtained a null result when the first experiment was conducted in 1881, and again in 1887. Although the failure to detect an aether wind was a disappointment, the results were accepted by the scientific community. In an attempt to salvage the aether paradigm, FitzGerald and Lorentz independently created an ad hoc hypothesis in which the length of material bodies changes according to their motion through the aether. This was the origin of FitzGerald–Lorentz contraction, and their hypothesis had no theoretical basis. The interpretation of the null result of the Michelson–Morley experiment is that the round-trip travel time for light is isotropic (independent of direction), but the result alone is not enough to discount the theory of the aether or validate the predictions of special relativity.
While the Michelson–Morley experiment showed that the velocity of light is isotropic, it said nothing about how the magnitude of the velocity changed (if at all) in different inertial frames. The Kennedy–Thorndike experiment was designed to do that, and was first performed in 1932 by Roy Kennedy and Edward Thorndike. They obtained a null result, and concluded that "there is no effect ... unless the velocity of the solar system in space is no more than about half that of the earth in its orbit". That possibility was thought to be too coincidental to provide an acceptable explanation, so from the null result of their experiment it was concluded that the round-trip time for light is the same in all inertial reference frames.
The Ives–Stilwell experiment was carried out by Herbert Ives and G.R. Stilwell first in 1938 and with better accuracy in 1941. It was designed to test the transverse Doppler effect, the redshift of light from a moving source in a direction perpendicular to its velocity, which had been predicted by Einstein in 1905. The strategy was to compare observed Doppler shifts with what was predicted by classical theory, and look for a Lorentz factor correction. Such a correction was observed, from which it was concluded that the frequency of a moving atomic clock is altered according to special relativity.
Those classic experiments have been repeated many times with increased precision. Other experiments include, for instance, relativistic energy and momentum increase at high velocities, experimental testing of time dilation, and modern searches for Lorentz violations.
=== Tests of general relativity ===
General relativity has also been confirmed many times, the classic experiments being the perihelion precession of Mercury's orbit, the deflection of light by the Sun, and the gravitational redshift of light. Other tests confirmed the equivalence principle and frame dragging.
== Modern applications ==
Far from being simply of theoretical interest, relativistic effects are important practical engineering concerns. Satellite-based measurement needs to take into account relativistic effects, as each satellite is in motion relative to an Earth-bound user, and is thus in a different frame of reference under the theory of relativity. Global positioning systems such as GPS, GLONASS, and Galileo must account for relativistic effects, including those of the Earth's gravitational field, in order to work with precision. This is also the case in the high-precision measurement of time. Instruments ranging from electron microscopes to particle accelerators would not work if relativistic considerations were omitted.
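The size of the relativistic corrections for a satellite clock can be estimated with a back-of-envelope calculation. The sketch below uses assumed round-number orbital parameters for a GPS-like satellite (not official system values): special relativity makes the orbiting clock run slow, gravitational blueshift makes it run fast, and the net drift is a few tens of microseconds per day.

```python
import math

# Back-of-envelope estimate of relativistic clock corrections for a
# GPS-like satellite. All parameters are assumed approximate values.
c = 2.998e8        # speed of light, m/s
GM = 3.986e14      # Earth's gravitational parameter, m^3/s^2
r_earth = 6.371e6  # Earth's mean radius, m
r_orbit = 2.657e7  # orbital radius (~20,200 km altitude), m

v = math.sqrt(GM / r_orbit)  # circular orbital speed, ~3.9 km/s

# Special-relativistic time dilation: the orbiting clock runs slow.
sr_rate = -v**2 / (2 * c**2)
# Gravitational blueshift: the orbiting clock runs fast relative to ground.
gr_rate = (GM / c**2) * (1 / r_earth - 1 / r_orbit)

seconds_per_day = 86400
net_us_per_day = (sr_rate + gr_rate) * seconds_per_day * 1e6
print(f"net drift: {net_us_per_day:+.1f} microseconds/day")
```

Left uncorrected, a drift of this size would translate into kilometres of position error per day, which is why the system clocks are adjusted for both effects.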
== See also ==
Doubly special relativity
Galilean invariance
List of textbooks on relativity
== References ==
== Further reading ==
== External links ==
The dictionary definition of theory of relativity at Wiktionary
Media related to Theory of relativity at Wikimedia Commons | Wikipedia/Relativity_theory |
In the general theory of relativity, the Einstein field equations (EFE; also known as Einstein's equations) relate the geometry of spacetime to the distribution of matter within it.
The equations were published by Albert Einstein in 1915 in the form of a tensor equation which related the local spacetime curvature (expressed by the Einstein tensor) with the local energy, momentum and stress within that spacetime (expressed by the stress–energy tensor).
Analogously to the way that electromagnetic fields are related to the distribution of charges and currents via Maxwell's equations, the EFE relate the spacetime geometry to the distribution of mass–energy, momentum and stress, that is, they determine the metric tensor of spacetime for a given arrangement of stress–energy–momentum in the spacetime. The relationship between the metric tensor and the Einstein tensor allows the EFE to be written as a set of nonlinear partial differential equations when used in this way. The solutions of the EFE are the components of the metric tensor. The inertial trajectories of particles and radiation (geodesics) in the resulting geometry are then calculated using the geodesic equation.
As well as implying local energy–momentum conservation, the EFE reduce to Newton's law of gravitation in the limit of a weak gravitational field and velocities that are much less than the speed of light.
Exact solutions for the EFE can only be found under simplifying assumptions such as symmetry. Special classes of exact solutions are most often studied since they model many gravitational phenomena, such as rotating black holes and the expanding universe. Further simplification is achieved in approximating the spacetime as having only small deviations from flat spacetime, leading to the linearized EFE. These equations are used to study phenomena such as gravitational waves.
== Mathematical form ==
The Einstein field equations (EFE) may be written in the form:
{\displaystyle G_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu },}
where Gμν is the Einstein tensor, gμν is the metric tensor, Tμν is the stress–energy tensor, Λ is the cosmological constant and κ is the Einstein gravitational constant.
The Einstein tensor is defined as
{\displaystyle G_{\mu \nu }=R_{\mu \nu }-{\frac {1}{2}}Rg_{\mu \nu },}
where Rμν is the Ricci curvature tensor, and R is the scalar curvature. This is a symmetric second-degree tensor that depends on only the metric tensor and its first and second derivatives.
The Einstein gravitational constant is defined as
{\displaystyle \kappa ={\frac {8\pi G}{c^{4}}}\approx 2.07665\times 10^{-43}\,{\textrm {N}}^{-1},}
where G is the Newtonian constant of gravitation and c is the speed of light in vacuum.
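The numerical value quoted above follows directly from the CODATA-style values of G and c; a quick sketch:

```python
import math

# Numerical check of the Einstein gravitational constant kappa = 8*pi*G/c^4.
G = 6.674e-11  # Newtonian constant of gravitation, m^3 kg^-1 s^-2
C = 2.998e8    # speed of light in vacuum, m/s

kappa = 8 * math.pi * G / C**4
print(f"kappa ~ {kappa:.5e} N^-1")  # ~2.0766e-43, matching the value above
```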
The EFE can thus also be written as
{\displaystyle R_{\mu \nu }-{\frac {1}{2}}Rg_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu }.}
In standard units, each term on the left has quantity dimension of L−2.
The expression on the left represents the curvature of spacetime as determined by the metric; the expression on the right represents the stress–energy–momentum content of spacetime. The EFE can then be interpreted as a set of equations dictating how stress–energy–momentum determines the curvature of spacetime.
These equations, together with the geodesic equation, which dictates how freely falling matter moves through spacetime, form the core of the mathematical formulation of general relativity.
The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
Although the Einstein field equations were initially formulated in the context of a four-dimensional theory, some theorists have explored their consequences in n dimensions. The equations in contexts outside of general relativity are still referred to as the Einstein field equations. The vacuum field equations (obtained when Tμν is everywhere zero) define Einstein manifolds.
The equations are more complex than they appear. Given a specified distribution of matter and energy in the form of a stress–energy tensor, the EFE are understood to be equations for the metric tensor gμν, since both the Ricci tensor and scalar curvature depend on the metric in a complicated nonlinear manner. When fully written out, the EFE are a system of ten coupled, nonlinear, hyperbolic-elliptic partial differential equations.
=== Sign convention ===
The above form of the EFE is the standard established by Misner, Thorne, and Wheeler (MTW). The authors analyzed the existing conventions and classified them according to three signs ([S1] [S2] [S3]):
{\displaystyle {\begin{aligned}g_{\mu \nu }&=[S1]\times \operatorname {diag} (-1,+1,+1,+1)\\[6pt]{R^{\mu }}_{\alpha \beta \gamma }&=[S2]\times \left(\Gamma _{\alpha \gamma ,\beta }^{\mu }-\Gamma _{\alpha \beta ,\gamma }^{\mu }+\Gamma _{\sigma \beta }^{\mu }\Gamma _{\gamma \alpha }^{\sigma }-\Gamma _{\sigma \gamma }^{\mu }\Gamma _{\beta \alpha }^{\sigma }\right)\\[6pt]G_{\mu \nu }&=[S3]\times \kappa T_{\mu \nu }\end{aligned}}}
The third sign above is related to the choice of convention for the Ricci tensor:
{\displaystyle R_{\mu \nu }=[S2]\times [S3]\times {R^{\alpha }}_{\mu \alpha \nu }}
With these definitions Misner, Thorne, and Wheeler classify themselves as (+ + +), whereas Weinberg (1972) is (+ − −), Peebles (1980) and Efstathiou et al. (1990) are (− + +), Rindler (1977), Atwater (1974), Collins Martin & Squires (1989) and Peacock (1999) are (− + −).
Authors including Einstein have used a different sign in their definition for the Ricci tensor which results in the sign of the constant on the right side being negative:
{\displaystyle R_{\mu \nu }-{\frac {1}{2}}Rg_{\mu \nu }-\Lambda g_{\mu \nu }=-\kappa T_{\mu \nu }.}
The sign of the cosmological term would change in both these versions if the (+ − − −) metric sign convention is used rather than the MTW (− + + +) metric sign convention adopted here.
=== Equivalent formulations ===
Taking the trace with respect to the metric of both sides of the EFE one gets
{\displaystyle R-{\frac {D}{2}}R+D\Lambda =\kappa T,}
where D is the spacetime dimension. Solving for R and substituting this in the original EFE, one gets the following equivalent "trace-reversed" form:
{\displaystyle R_{\mu \nu }-{\frac {2}{D-2}}\Lambda g_{\mu \nu }=\kappa \left(T_{\mu \nu }-{\frac {1}{D-2}}Tg_{\mu \nu }\right).}
In D = 4 dimensions this reduces to
{\displaystyle R_{\mu \nu }-\Lambda g_{\mu \nu }=\kappa \left(T_{\mu \nu }-{\frac {1}{2}}T\,g_{\mu \nu }\right).}
Reversing the trace again would restore the original EFE. The trace-reversed form may be more convenient in some cases (for example, when one is interested in weak-field limit and can replace gμν in the expression on the right with the Minkowski metric without significant loss of accuracy).
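The equivalence of the two forms in D = 4 can be checked numerically. The sketch below uses an arbitrary symmetric matrix as a stand-in "Ricci tensor" and the Minkowski metric; the values of κ and Λ are placeholders, since the identity is purely algebraic:

```python
import numpy as np

# Numerical check (D = 4) that the trace-reversed form is equivalent to
# the original EFE, for an arbitrary symmetric "Ricci tensor".
rng = np.random.default_rng(42)
g = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, (- + + +) signature
g_inv = np.linalg.inv(g)
kappa, Lam = 2.0e-43, 1.1e-52        # placeholder constants

M = rng.standard_normal((4, 4))
Ric = M + M.T                          # arbitrary symmetric Ricci tensor
R = np.einsum('mn,mn->', g_inv, Ric)   # scalar curvature R = g^{mn} R_{mn}

# Original EFE solved for T: kappa T_{mn} = R_{mn} - (1/2) R g_{mn} + Lam g_{mn}
T = (Ric - 0.5 * R * g + Lam * g) / kappa
T_trace = np.einsum('mn,mn->', g_inv, T)

# Trace-reversed form: R_{mn} - Lam g_{mn} = kappa (T_{mn} - (1/2) T g_{mn})
lhs = Ric - Lam * g
rhs = kappa * (T - 0.5 * T_trace * g)
assert np.allclose(lhs, rhs)
```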
== Cosmological constant ==
In the Einstein field equations
{\displaystyle G_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu }\,,}
the term containing the cosmological constant Λ was absent from the version in which Einstein originally published them. Einstein later included the term with the cosmological constant to allow for a static universe, one that is neither expanding nor contracting. This effort was unsuccessful because:
any desired steady state solution described by this equation is unstable, and
observations by Edwin Hubble showed that our universe is expanding.
Einstein then abandoned Λ, remarking to George Gamow "that the introduction of the cosmological term was the biggest blunder of his life".
The inclusion of this term does not create inconsistencies. For many years the cosmological constant was almost universally assumed to be zero. More recent astronomical observations have shown an accelerating expansion of the universe, and to explain this a positive value of Λ is needed. The effect of the cosmological constant is negligible at the scale of a galaxy or smaller.
Einstein thought of the cosmological constant as an independent parameter, but its term in the field equation can also be moved algebraically to the other side and incorporated as part of the stress–energy tensor:
{\displaystyle T_{\mu \nu }^{\mathrm {(vac)} }=-{\frac {\Lambda }{\kappa }}g_{\mu \nu }\,.}
This tensor describes a vacuum state with an energy density ρvac and isotropic pressure pvac that are fixed constants and given by
{\displaystyle \rho _{\mathrm {vac} }=-p_{\mathrm {vac} }={\frac {\Lambda }{\kappa }},}
where it is assumed that Λ has SI unit m−2 and κ is defined as above.
The existence of a cosmological constant is thus equivalent to the existence of a vacuum energy and a pressure of opposite sign. This has led to the terms "cosmological constant" and "vacuum energy" being used interchangeably in general relativity.
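The scale of this vacuum energy density can be estimated from the relation ρvac = Λ/κ. The value of Λ used below is an assumed observational estimate (roughly 1.1 × 10⁻⁵² m⁻²), not a value stated in this article:

```python
import math

# Vacuum energy density implied by the cosmological constant,
# rho_vac = Lambda / kappa.  The value of Lambda is an assumed
# observational estimate, ~1.1e-52 m^-2.
G = 6.674e-11                    # m^3 kg^-1 s^-2
C = 2.998e8                      # m/s
kappa = 8 * math.pi * G / C**4   # Einstein gravitational constant, N^-1
Lam = 1.1e-52                    # cosmological constant, m^-2 (assumed)

rho_vac = Lam / kappa            # energy density, J/m^3
print(f"rho_vac ~ {rho_vac:.2e} J/m^3")  # a few times 1e-10 J/m^3
```

This tiny energy density is why the effect of Λ is negligible on galactic and smaller scales, as noted above, yet dominant over cosmological distances.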
== Features ==
=== Conservation of energy and momentum ===
General relativity is consistent with the local conservation of energy and momentum expressed as
{\displaystyle \nabla _{\beta }T^{\alpha \beta }={T^{\alpha \beta }}_{;\beta }=0.}
This conservation law is a physical requirement; with his field equations, Einstein ensured that general relativity is consistent with it.
=== Nonlinearity ===
The nonlinearity of the EFE distinguishes general relativity from many other fundamental physical theories. For example, Maxwell's equations of electromagnetism are linear in the electric and magnetic fields, and charge and current distributions (i.e. the sum of two solutions is also a solution); another example is the Schrödinger equation of quantum mechanics, which is linear in the wavefunction.
=== Correspondence principle ===
The EFE reduce to Newton's law of gravity by using both the weak-field approximation and the low-velocity approximation. The constant G appearing in the EFE is determined by making these two approximations.
== Vacuum field equations ==
If the energy–momentum tensor Tμν is zero in the region under consideration, then the field equations are also referred to as the vacuum field equations. By setting Tμν = 0 in the trace-reversed field equations, the vacuum field equations, also known as 'Einstein vacuum equations' (EVE), can be written as
{\displaystyle R_{\mu \nu }=0\,.}
In the case of nonzero cosmological constant, the equations are
{\displaystyle R_{\mu \nu }={\frac {\Lambda }{{\frac {D}{2}}-1}}g_{\mu \nu }\,.}
The solutions to the vacuum field equations are called vacuum solutions. Flat Minkowski space is the simplest example of a vacuum solution. Nontrivial examples include the Schwarzschild solution and the Kerr solution.
Manifolds with a vanishing Ricci tensor, Rμν = 0, are referred to as Ricci-flat manifolds and manifolds with a Ricci tensor proportional to the metric as Einstein manifolds.
== Einstein–Maxwell equations ==
If the energy–momentum tensor Tμν is that of an electromagnetic field in free space, i.e. if the electromagnetic stress–energy tensor
{\displaystyle T^{\alpha \beta }=\,-{\frac {1}{\mu _{0}}}\left({F^{\alpha }}^{\psi }{F_{\psi }}^{\beta }+{\tfrac {1}{4}}g^{\alpha \beta }F_{\psi \tau }F^{\psi \tau }\right)}
is used, then the Einstein field equations are called the Einstein–Maxwell equations (with cosmological constant Λ, taken to be zero in conventional relativity theory):
{\displaystyle G^{\alpha \beta }+\Lambda g^{\alpha \beta }={\frac {\kappa }{\mu _{0}}}\left({F^{\alpha }}^{\psi }{F_{\psi }}^{\beta }+{\tfrac {1}{4}}g^{\alpha \beta }F_{\psi \tau }F^{\psi \tau }\right).}
Additionally, the covariant Maxwell equations are also applicable in free space:
{\displaystyle {\begin{aligned}{F^{\alpha \beta }}_{;\beta }&=0\\F_{[\alpha \beta ;\gamma ]}&={\tfrac {1}{3}}\left(F_{\alpha \beta ;\gamma }+F_{\beta \gamma ;\alpha }+F_{\gamma \alpha ;\beta }\right)={\tfrac {1}{3}}\left(F_{\alpha \beta ,\gamma }+F_{\beta \gamma ,\alpha }+F_{\gamma \alpha ,\beta }\right)=0,\end{aligned}}}
where the semicolon represents a covariant derivative, and the brackets denote anti-symmetrization. The first equation asserts that the 4-divergence of the 2-form F is zero, and the second that its exterior derivative is zero. From the latter, it follows by the Poincaré lemma that in a coordinate chart it is possible to introduce an electromagnetic field potential Aα such that
{\displaystyle F_{\alpha \beta }=A_{\alpha ;\beta }-A_{\beta ;\alpha }=A_{\alpha ,\beta }-A_{\beta ,\alpha }}
in which the comma denotes a partial derivative. This is often taken as equivalent to the covariant Maxwell equation from which it is derived. However, there are global solutions of the equation that may lack a globally defined potential.
== Solutions ==
The solutions of the Einstein field equations are metrics of spacetime. These metrics describe the structure of the spacetime including the inertial motion of objects in the spacetime. As the field equations are non-linear, they cannot always be completely solved (i.e. without making approximations). For example, there is no known complete solution for a spacetime with two massive bodies in it (which is a theoretical model of a binary star system, for example). However, approximations are usually made in these cases. These are commonly referred to as post-Newtonian approximations. Even so, there are several cases where the field equations have been solved completely, and those are called exact solutions.
The study of exact solutions of Einstein's field equations is one of the activities of cosmology. It leads to the prediction of black holes and to different models of evolution of the universe.
One can also discover new solutions of the Einstein field equations via the method of orthonormal frames as pioneered by Ellis and MacCallum. In this approach, the Einstein field equations are reduced to a set of coupled, nonlinear, ordinary differential equations. As discussed by Hsu and Wainwright, self-similar solutions to the Einstein field equations are fixed points of the resulting dynamical system. New solutions have been discovered using these methods by LeBlanc and Kohli and Haslam.
== Linearized EFE ==
The nonlinearity of the EFE makes finding exact solutions difficult. One way of solving the field equations is to make an approximation, namely, that far from the source(s) of gravitating matter, the gravitational field is very weak and the spacetime approximates that of Minkowski space. The metric is then written as the sum of the Minkowski metric and a term representing the deviation of the true metric from the Minkowski metric, ignoring higher-power terms. This linearization procedure can be used to investigate the phenomena of gravitational radiation.
== Polynomial form ==
Despite the EFE as written containing the inverse of the metric tensor, they can be arranged in a form that contains the metric tensor in polynomial form and without its inverse. First, the determinant of the metric in 4 dimensions can be written
{\displaystyle \det(g)={\tfrac {1}{24}}\varepsilon ^{\alpha \beta \gamma \delta }\varepsilon ^{\kappa \lambda \mu \nu }g_{\alpha \kappa }g_{\beta \lambda }g_{\gamma \mu }g_{\delta \nu }}
using the Levi-Civita symbol; and the inverse of the metric in 4 dimensions can be written as:
{\displaystyle g^{\alpha \kappa }={\frac {{\tfrac {1}{6}}\varepsilon ^{\alpha \beta \gamma \delta }\varepsilon ^{\kappa \lambda \mu \nu }g_{\beta \lambda }g_{\gamma \mu }g_{\delta \nu }}{\det(g)}}\,.}
Substituting this expression of the inverse of the metric into the equations then multiplying both sides by a suitable power of det(g) to eliminate it from the denominator results in polynomial equations in the metric tensor and its first and second derivatives. The Einstein–Hilbert action from which the equations are derived can also be written in polynomial form by suitable redefinitions of the fields.
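The two Levi-Civita identities above can be verified numerically. The sketch below builds the 4D Levi-Civita symbol from permutation parity and compares the formulas against NumPy's determinant and inverse for an arbitrary symmetric metric:

```python
import itertools
import numpy as np

# Numerical check of the determinant and inverse-metric identities above,
# using the 4D Levi-Civita symbol and an arbitrary symmetric matrix.
def levi_civita_4d():
    eps = np.zeros((4, 4, 4, 4))
    for perm in itertools.permutations(range(4)):
        # sign of the permutation from its inversion count
        inversions = sum(
            1 for i in range(4) for j in range(i + 1, 4) if perm[i] > perm[j]
        )
        eps[perm] = (-1) ** inversions
    return eps

eps = levi_civita_4d()
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
g = A + A.T + 4.0 * np.eye(4)   # random symmetric, well-conditioned "metric"

# det(g) = (1/24) eps^{abcd} eps^{klmn} g_{ak} g_{bl} g_{cm} g_{dn}
det_g = np.einsum('abcd,klmn,ak,bl,cm,dn->', eps, eps, g, g, g, g) / 24.0
# g^{ak} = (1/6) eps^{abcd} eps^{klmn} g_{bl} g_{cm} g_{dn} / det(g)
g_inv = np.einsum('abcd,klmn,bl,cm,dn->ak', eps, eps, g, g, g) / (6.0 * det_g)

assert np.isclose(det_g, np.linalg.det(g))
assert np.allclose(g_inv, np.linalg.inv(g))
```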
== See also ==
== Notes ==
== References ==
See General relativity resources.
Misner, Charles W.; Thorne, Kip S.; Wheeler, John Archibald (1973). Gravitation. San Francisco: W. H. Freeman. ISBN 978-0-7167-0344-0.
Weinberg, Steven (1972). Gravitation and Cosmology. John Wiley & Sons. ISBN 0-471-92567-5.
Peacock, John A. (1999). Cosmological Physics. Cambridge University Press. ISBN 978-0521410724.
== External links ==
"Einstein equations", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Caltech Tutorial on Relativity — A simple introduction to Einstein's Field Equations.
The Meaning of Einstein's Equation — An explanation of Einstein's field equation, its derivation, and some of its consequences
Video Lecture on Einstein's Field Equations by MIT Physics Professor Edmund Bertschinger.
Arch and scaffold: How Einstein found his field equations Physics Today November 2015, History of the Development of the Field Equations
=== External images ===
The Einstein field equation on the wall of the Museum Boerhaave in downtown Leiden
Suzanne Imber, "The impact of general relativity on the Atacama Desert", Einstein field equation on the side of a train in Bolivia. | Wikipedia/Vacuum_field_equations |
In physics, equations of motion are equations that describe the behavior of a physical system in terms of its motion as a function of time. More specifically, the equations of motion describe the behavior of a physical system as a set of mathematical functions in terms of dynamic variables. These variables are usually spatial coordinates and time, but may include momentum components. The most general choice are generalized coordinates, which can be any convenient variables characteristic of the physical system. The functions are defined in a Euclidean space in classical mechanics, but are replaced by curved spaces in relativity. If the dynamics of a system is known, the solutions of the corresponding differential equations describe the motion of the system as a function of time.
== Types ==
There are two main descriptions of motion: dynamics and kinematics. Dynamics is general, since the momenta, forces and energy of the particles are taken into account. In this instance, sometimes the term dynamics refers to the differential equations that the system satisfies (e.g., Newton's second law or Euler–Lagrange equations), and sometimes to the solutions to those equations.
However, kinematics is simpler. It concerns only variables derived from the positions of objects and time. In circumstances of constant acceleration, these simpler equations of motion are usually referred to as the SUVAT equations, arising from the definitions of kinematic quantities: displacement (s), initial velocity (u), final velocity (v), acceleration (a), and time (t).
A differential equation of motion, usually identified as some physical law (for example, F = ma), and applying definitions of physical quantities, is used to set up an equation to solve a kinematics problem. Solving the differential equation will lead to a general solution with arbitrary constants, the arbitrariness corresponding to a set of solutions. A particular solution can be obtained by setting the initial values, which fixes the values of the constants.
Stated formally, in general, an equation of motion M is a function of the position r of the object, its velocity (the first time derivative of r, v = dr/dt), and its acceleration (the second derivative of r, a = d2r/dt2), and time t. Euclidean vectors in 3D are denoted throughout in bold. This is equivalent to saying an equation of motion in r is a second-order ordinary differential equation (ODE) in r,
{\displaystyle M\left[\mathbf {r} (t),\mathbf {\dot {r}} (t),\mathbf {\ddot {r}} (t),t\right]=0\,,}
where t is time, and each overdot denotes one time derivative. The initial conditions are given by the constant values at t = 0,
{\displaystyle \mathbf {r} (0)\,,\quad \mathbf {\dot {r}} (0)\,.}
The solution r(t) to the equation of motion, with specified initial values, describes the system for all times t after t = 0. Other dynamical variables like the momentum p of the object, or quantities derived from r and p like angular momentum, can be used in place of r as the quantity to solve for from some equation of motion, although the position of the object at time t is by far the most sought-after quantity.
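The process described above, stating the differential equation, supplying r(0) and ṙ(0), and solving forward in time, can be sketched numerically. The example below (with assumed initial values) integrates one-dimensional free fall, r̈ = −g, and compares against the closed-form solution:

```python
# Sketch: integrating an equation of motion from initial conditions,
# using 1D free fall r'' = -g, and comparing with the exact solution
# r(t) = r0 + v0*t - (1/2)*g*t^2.  Initial values are assumed examples.
g = 9.81
r0, v0 = 100.0, 0.0       # initial position (m) and velocity (m/s) at t = 0
dt, steps = 0.001, 3000   # integrate to t = 3 s

r, v = r0, v0
for _ in range(steps):
    # velocity-Verlet step (exact for constant acceleration)
    r += v * dt - 0.5 * g * dt**2
    v += -g * dt

t = steps * dt
analytic = r0 + v0 * t - 0.5 * g * t**2
assert abs(r - analytic) < 1e-6
```

For a constant force the integrator reproduces the analytic solution to floating-point accuracy; for a general nonlinear force law the same loop structure applies, but only an approximate solution is obtained.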
Sometimes, the equation will be linear and is more likely to be exactly solvable. In general, the equation will be non-linear, and cannot be solved exactly so a variety of approximations must be used. The solutions to nonlinear equations may show chaotic behavior depending on how sensitive the system is to the initial conditions.
== History ==
Kinematics, dynamics and the mathematical models of the universe developed incrementally over three millennia, thanks to many thinkers, only some of whose names we know. In antiquity, priests, astrologers and astronomers predicted solar and lunar eclipses, the solstices and the equinoxes of the Sun and the period of the Moon. But they had nothing other than a set of algorithms to guide them. Equations of motion were not written down for another thousand years.
Medieval scholars in the thirteenth century — for example at the relatively new universities in Oxford and Paris — drew on ancient mathematicians (Euclid and Archimedes) and philosophers (Aristotle) to develop a new body of knowledge, now called physics.
At Oxford, Merton College sheltered a group of scholars devoted to natural science, mainly physics, astronomy and mathematics, who were of similar stature to the intellectuals at the University of Paris. Thomas Bradwardine extended Aristotelian quantities such as distance and velocity, and assigned intensity and extension to them. Bradwardine suggested an exponential law involving force, resistance, distance, velocity and time. Nicholas Oresme further extended Bradwardine's arguments. The Merton school proved that the quantity of motion of a body undergoing a uniformly accelerated motion is equal to the quantity of a uniform motion at the speed achieved halfway through the accelerated motion.
For writers on kinematics before Galileo, since small time intervals could not be measured, the affinity between time and motion was obscure. They used time as a function of distance, and in free fall, greater velocity as a result of greater elevation. Only Domingo de Soto, a Spanish theologian, in his commentary on Aristotle's Physics published in 1545, after defining "uniform difform" motion (which is uniformly accelerated motion) – the word velocity was not used – as proportional to time, declared correctly that this kind of motion was identifiable with freely falling bodies and projectiles, without his proving these propositions or suggesting a formula relating time, velocity and distance. De Soto's comments are remarkably correct regarding the definition of acceleration as a rate of change of velocity in time, and the observation that acceleration would be negative during ascent.
Discourses such as these spread throughout Europe, shaping the work of Galileo Galilei and others, and helped in laying the foundation of kinematics. Galileo deduced the equation s = 1/2gt2 in his work geometrically, using the Merton rule, now known as a special case of one of the equations of kinematics.
Galileo was the first to show that the path of a projectile is a parabola. Galileo had an understanding of centrifugal force and gave a correct definition of momentum. This emphasis of momentum as a fundamental quantity in dynamics is of prime importance. He measured momentum by the product of velocity and weight; mass is a later concept, developed by Huygens and Newton. In the swinging of a simple pendulum, Galileo says in Discourses that "every momentum acquired in the descent along an arc is equal to that which causes the same moving body to ascend through the same arc." His analysis on projectiles indicates that Galileo had grasped the first law and the second law of motion. He did not generalize and make them applicable to bodies not subject to the earth's gravitation. That step was Newton's contribution.
The term "inertia" was used by Kepler who applied it to bodies at rest. (The first law of motion is now often called the law of inertia.)
Galileo did not fully grasp the third law of motion, the law of the equality of action and reaction, though he corrected some errors of Aristotle. With Stevin and others Galileo also wrote on statics. He formulated the principle of the parallelogram of forces, but he did not fully recognize its scope.
Galileo also was interested in the laws of the pendulum, his first observations of which were made as a young man. In 1583, while he was praying in the cathedral at Pisa, his attention was arrested by the motion of the great lamp, which had been lit and left swinging; he used his own pulse to keep time. The period appeared to him to be the same even after the motion had greatly diminished, which led him to discover the isochronism of the pendulum.
More careful experiments carried out by him later, and described in his Discourses, revealed that the period of oscillation varies with the square root of the length but is independent of the mass of the pendulum.
Thus we arrive at René Descartes, Isaac Newton, Gottfried Leibniz, et al.; and the evolved forms of the equations of motion that begin to be recognized as the modern ones.
Later the equations of motion also appeared in electrodynamics: when describing the motion of charged particles in electric and magnetic fields, the Lorentz force is the general equation which serves as the definition of what is meant by an electric field and magnetic field. With the advent of special relativity and general relativity, the theoretical modifications to spacetime meant the classical equations of motion were also modified to account for the finite speed of light and the curvature of spacetime. In all these cases the differential equations were in terms of a function describing the particle's trajectory in terms of space and time coordinates, as influenced by forces or energy transformations.
However, the equations of quantum mechanics can also be considered "equations of motion", since they are differential equations of the wavefunction, which describes how a quantum state behaves analogously using the space and time coordinates of the particles. There are analogs of equations of motion in other areas of physics, for collections of physical phenomena that can be considered waves, fluids, or fields.
== Kinematic equations for one particle ==
=== Kinematic quantities ===
From the instantaneous position r = r(t) (the position at an instant of time t), the instantaneous velocity v = v(t) and acceleration a = a(t) have the general, coordinate-independent definitions:
{\displaystyle \mathbf {v} ={\frac {d\mathbf {r} }{dt}}\,,\quad \mathbf {a} ={\frac {d\mathbf {v} }{dt}}={\frac {d^{2}\mathbf {r} }{dt^{2}}}}
Notice that velocity always points in the direction of motion, in other words for a curved path it is the tangent vector. Loosely speaking, first order derivatives are related to tangents of curves. Still for curved paths, the acceleration is directed towards the center of curvature of the path. Again, loosely speaking, second order derivatives are related to curvature.
The rotational analogues are the "angular vector" (angle the particle rotates about some axis) θ = θ(t), angular velocity ω = ω(t), and angular acceleration α = α(t):
{\displaystyle {\boldsymbol {\theta }}=\theta {\hat {\mathbf {n} }}\,,\quad {\boldsymbol {\omega }}={\frac {d{\boldsymbol {\theta }}}{dt}}\,,\quad {\boldsymbol {\alpha }}={\frac {d{\boldsymbol {\omega }}}{dt}}\,,}
where n̂ is a unit vector in the direction of the axis of rotation, and θ is the angle the object turns through about the axis.
The following relation holds for a point-like particle, orbiting about some axis with angular velocity ω:
{\displaystyle \mathbf {v} ={\boldsymbol {\omega }}\times \mathbf {r} }
where r is the position vector of the particle (radial from the rotation axis) and v the tangential velocity of the particle. For a rotating continuum rigid body, these relations hold for each point in the rigid body.
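The relation v = ω × r is easy to verify for a concrete case; here a particle 3 m from the z-axis rotating at 2 rad/s (assumed example values):

```python
import numpy as np

# Check of v = omega x r for a point particle circling the z-axis.
omega = np.array([0.0, 0.0, 2.0])   # angular velocity, rad/s, about z
r = np.array([3.0, 0.0, 0.0])       # position radial from the axis, m

v = np.cross(omega, r)
# v is tangential (here along +y) with magnitude |omega| * |r| = 6 m/s
assert np.allclose(v, [0.0, 6.0, 0.0])
```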
=== Uniform acceleration ===
The differential equation of motion for a particle of constant or uniform acceleration in a straight line is simple: the acceleration is constant, so the second derivative of the position of the object is constant. The results of this case are summarized below.
==== Constant translational acceleration in a straight line ====
These equations apply to a particle moving in a straight line in three dimensions with constant acceleration. Since the position, velocity, and acceleration are collinear (parallel, and lie on the same line), only the magnitudes of these vectors are necessary, and because the motion is along a straight line, the problem effectively reduces from three dimensions to one.
{\displaystyle {\begin{aligned}v&=v_{0}+at&[1]\\r&=r_{0}+v_{0}t+{\tfrac {1}{2}}{a}t^{2}&[2]\\r&=r_{0}+{\tfrac {1}{2}}\left(v+v_{0}\right)t&[3]\\v^{2}&=v_{0}^{2}+2a\left(r-r_{0}\right)&[4]\\r&=r_{0}+vt-{\tfrac {1}{2}}{a}t^{2}&[5]\\\end{aligned}}}
where:
r0 is the particle's initial position
r is the particle's final position
v0 is the particle's initial velocity
v is the particle's final velocity
a is the particle's acceleration
t is the time interval
Here a is constant acceleration, or in the case of bodies moving under the influence of gravity, the standard gravity g is used. Note that each of the equations contains four of the five variables, so in this situation it is sufficient to know three out of the five variables to calculate the remaining two.
In some programs, such as the IGCSE Physics and IB DP Physics programs (international programs but especially popular in the UK and Europe), the same formulae would be written with a different set of preferred variables. There u replaces v0 and s replaces r - r0. They are often referred to as the SUVAT equations, where "SUVAT" is an acronym from the variables: s = displacement, u = initial velocity, v = final velocity, a = acceleration, t = time. In these variables, the equations of motion would be written
{\displaystyle {\begin{aligned}v&=u+at&[1]\\s&=ut+{\tfrac {1}{2}}at^{2}&[2]\\s&={\tfrac {1}{2}}(u+v)t&[3]\\v^{2}&=u^{2}+2as&[4]\\s&=vt-{\tfrac {1}{2}}at^{2}&[5]\\\end{aligned}}}
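As a numerical illustration (a sketch, not part of the standard presentation), the redundancy of the SUVAT set can be checked in code: given three of the five variables, two of the equations determine the rest, and the remaining relations serve as consistency checks. The values u = 3 m/s, a = 2 m/s², t = 4 s below are arbitrary.

```python
import math

def suvat_from_uat(u, a, t):
    """Given initial velocity u, acceleration a and time t,
    return final velocity v and displacement s via equations [1] and [2]."""
    v = u + a * t                 # [1] v = u + at
    s = u * t + 0.5 * a * t**2    # [2] s = ut + (1/2)at^2
    return v, s

u, a, t = 3.0, 2.0, 4.0
v, s = suvat_from_uat(u, a, t)

# Cross-check with equation [4]: v^2 = u^2 + 2as
assert math.isclose(v**2, u**2 + 2 * a * s)
# ...and with equation [3]: s = (u + v) t / 2
assert math.isclose(s, 0.5 * (u + v) * t)
```

Any three known variables pick out the two equations that omit the unknowns, which is why the set of five equations is closed.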
==== Constant linear acceleration in any direction ====
The initial position, initial velocity, and acceleration vectors need not be collinear, and the equations of motion take an almost identical form. The only difference is that the square magnitudes of the velocities require the dot product. The derivations are essentially the same as in the collinear case,
{\displaystyle {\begin{aligned}\mathbf {v} &=\mathbf {a} t+\mathbf {v} _{0}&[1]\\\mathbf {r} &=\mathbf {r} _{0}+\mathbf {v} _{0}t+{\tfrac {1}{2}}\mathbf {a} t^{2}&[2]\\\mathbf {r} &=\mathbf {r} _{0}+{\tfrac {1}{2}}\left(\mathbf {v} +\mathbf {v} _{0}\right)t&[3]\\\mathbf {v} ^{2}&=\mathbf {v} _{0}^{2}+2\mathbf {a} \cdot \left(\mathbf {r} -\mathbf {r} _{0}\right)&[4]\\\mathbf {r} &=\mathbf {r} _{0}+\mathbf {v} t-{\tfrac {1}{2}}\mathbf {a} t^{2}&[5]\\\end{aligned}}}
although the Torricelli equation [4] can be derived using the distributive property of the dot product as follows:
{\displaystyle v^{2}=\mathbf {v} \cdot \mathbf {v} =(\mathbf {v} _{0}+\mathbf {a} t)\cdot (\mathbf {v} _{0}+\mathbf {a} t)=v_{0}^{2}+2t(\mathbf {a} \cdot \mathbf {v} _{0})+a^{2}t^{2}}
{\displaystyle (2\mathbf {a} )\cdot (\mathbf {r} -\mathbf {r} _{0})=(2\mathbf {a} )\cdot \left(\mathbf {v} _{0}t+{\tfrac {1}{2}}\mathbf {a} t^{2}\right)=2t(\mathbf {a} \cdot \mathbf {v} _{0})+a^{2}t^{2}=v^{2}-v_{0}^{2}}
{\displaystyle \therefore v^{2}=v_{0}^{2}+2(\mathbf {a} \cdot (\mathbf {r} -\mathbf {r} _{0}))}
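The dot-product form of the Torricelli equation can be verified numerically for a genuinely non-collinear case. In the sketch below (the initial velocity, acceleration, and time are arbitrary choices), the velocity and displacement are computed from equations [1] and [2] and then substituted into [4]:

```python
import math

def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

# 2D projectile: v0 is not parallel to a, so the dot-product form is needed.
v0 = (10.0, 10.0)      # initial velocity (m/s); arbitrary
a = (0.0, -9.81)       # constant acceleration (m/s^2)
t = 1.3                # arbitrary time (s)

v = tuple(v0i + ai * t for v0i, ai in zip(v0, a))                # [1] v = v0 + a t
dr = tuple(v0i * t + 0.5 * ai * t**2 for v0i, ai in zip(v0, a))  # [2] r - r0

# Torricelli [4]: v.v = v0.v0 + 2 a.(r - r0)
assert math.isclose(dot(v, v), dot(v0, v0) + 2 * dot(a, dr))
```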
==== Applications ====
Elementary and frequent examples in kinematics involve projectiles, for example a ball thrown upwards into the air. Given the initial speed u, one can calculate how high the ball will travel before it begins to fall. The acceleration is the local acceleration of gravity g. While these quantities appear to be scalars, the direction of displacement, speed and acceleration is important; they can be treated as unidirectional vectors. Choosing s to measure up from the ground, the acceleration a must in fact be −g, since the force of gravity acts downwards and therefore so does the acceleration of the ball due to it.
At the highest point, the ball will be at rest: therefore v = 0. Using equation [4] in the set above, we have:
{\displaystyle s={\frac {v^{2}-u^{2}}{-2g}}.}
Substituting and cancelling minus signs gives:
{\displaystyle s={\frac {u^{2}}{2g}}.}
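This closed form can be sanity-checked against equations [1] and [2] directly; the launch speed u = 14 m/s is an arbitrary choice:

```python
import math

g = 9.81           # local gravitational acceleration (m/s^2)
u = 14.0           # launch speed, straight up (m/s); arbitrary

s_max = u**2 / (2 * g)            # the closed form derived above

# Independent check: at the top v = 0, so by [1] t = u/g, and the
# displacement there, from [2] with a = -g, should equal s_max.
t_top = u / g
s_top = u * t_top - 0.5 * g * t_top**2
assert math.isclose(s_top, s_max)
```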
==== Constant circular acceleration ====
The analogues of the above equations can be written for rotation. Again these axial vectors must all be parallel to the axis of rotation, so only the magnitudes of the vectors are necessary,
{\displaystyle {\begin{aligned}\omega &=\omega _{0}+\alpha t\\\theta &=\theta _{0}+\omega _{0}t+{\tfrac {1}{2}}\alpha t^{2}\\\theta &=\theta _{0}+{\tfrac {1}{2}}(\omega _{0}+\omega )t\\\omega ^{2}&=\omega _{0}^{2}+2\alpha (\theta -\theta _{0})\\\theta &=\theta _{0}+\omega t-{\tfrac {1}{2}}\alpha t^{2}\\\end{aligned}}}
where α is the constant angular acceleration, ω is the angular velocity, ω0 is the initial angular velocity, θ is the angle turned through (angular displacement), θ0 is the initial angle, and t is the time taken to rotate from the initial state to the final state.
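These rotational equations can be checked in exactly the same way as their translational counterparts; the flywheel spin-up numbers below are arbitrary:

```python
import math

# A flywheel spins up from rest with constant angular acceleration.
alpha = 2.0                  # rad/s^2; arbitrary
t = 5.0                      # s
omega0, theta0 = 0.0, 0.0    # starts at rest

omega = omega0 + alpha * t                       # rotational analogue of [1]
theta = theta0 + omega0 * t + 0.5 * alpha * t**2 # rotational analogue of [2]

# Consistency with omega^2 = omega0^2 + 2 alpha (theta - theta0)
assert math.isclose(omega**2, omega0**2 + 2 * alpha * (theta - theta0))
```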
=== General planar motion ===
These are the kinematic equations for a particle traversing a path in a plane, described by position r = r(t). They are simply the time derivatives of the position vector in plane polar coordinates using the definitions of physical quantities above for angular velocity ω and angular acceleration α. These are instantaneous quantities which change with time.
The position of the particle is
{\displaystyle \mathbf {r} =\mathbf {r} \left(r(t),\theta (t)\right)=r\mathbf {\hat {e}} _{r}}
where êr and êθ are the polar unit vectors. Differentiating with respect to time gives the velocity
{\displaystyle \mathbf {v} =\mathbf {\hat {e}} _{r}{\frac {dr}{dt}}+r\omega \mathbf {\hat {e}} _{\theta }}
with radial component dr/dt and an additional component rω due to the rotation. Differentiating with respect to time again obtains the acceleration
{\displaystyle \mathbf {a} =\left({\frac {d^{2}r}{dt^{2}}}-r\omega ^{2}\right)\mathbf {\hat {e}} _{r}+\left(r\alpha +2\omega {\frac {dr}{dt}}\right)\mathbf {\hat {e}} _{\theta }}
which breaks into the radial acceleration d²r/dt², the centripetal acceleration −rω², the Coriolis acceleration 2ω dr/dt, and the angular acceleration rα.
Special cases of motion described by these equations are summarized qualitatively in the table below. Two have already been discussed above, in the cases that either the radial components or the angular components are zero, and the non-zero component of motion describes uniform acceleration.
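The polar decomposition of the acceleration can be verified numerically by finite-differencing a concrete path. The spiral r(t) = 1 + 0.1t, θ(t) = 2t used in this sketch is an arbitrary example (dr/dt = 0.1, d²r/dt² = 0, ω = 2, α = 0):

```python
import math

def pos(t):
    """Cartesian position on the spiral r = 1 + 0.1 t, theta = 2 t."""
    r, th = 1.0 + 0.1 * t, 2.0 * t
    return r * math.cos(th), r * math.sin(th)

t0, h = 0.7, 1e-4
# Second derivative of the Cartesian position by central differences
(x0, y0), (xp, yp), (xm, ym) = pos(t0), pos(t0 + h), pos(t0 - h)
ax = (xp - 2 * x0 + xm) / h**2
ay = (yp - 2 * y0 + ym) / h**2

# Project the acceleration onto the polar unit vectors at t0
th0 = 2.0 * t0
er = (math.cos(th0), math.sin(th0))
eth = (-math.sin(th0), math.cos(th0))
a_r = ax * er[0] + ay * er[1]
a_th = ax * eth[0] + ay * eth[1]

r0, drdt, omega, alpha = 1.0 + 0.1 * t0, 0.1, 2.0, 0.0
assert abs(a_r - (0.0 - r0 * omega**2)) < 1e-5           # d2r/dt2 - r omega^2
assert abs(a_th - (r0 * alpha + 2 * omega * drdt)) < 1e-5  # r alpha + 2 omega dr/dt
```

The radial component reproduces the centripetal term and the transverse component reproduces the Coriolis term, matching the formula above.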
=== General 3D motions ===
In 3D space, the equations in spherical coordinates (r, θ, φ) with corresponding unit vectors êr, êθ and êφ, the position, velocity, and acceleration generalize respectively to
{\displaystyle {\begin{aligned}\mathbf {r} &=\mathbf {r} \left(t\right)=r\mathbf {\hat {e}} _{r}\\\mathbf {v} &=v\mathbf {\hat {e}} _{r}+r\,{\frac {d\theta }{dt}}\mathbf {\hat {e}} _{\theta }+r\,{\frac {d\varphi }{dt}}\,\sin \theta \mathbf {\hat {e}} _{\varphi }\\\mathbf {a} &=\left(a-r\left({\frac {d\theta }{dt}}\right)^{2}-r\left({\frac {d\varphi }{dt}}\right)^{2}\sin ^{2}\theta \right)\mathbf {\hat {e}} _{r}\\&+\left(r{\frac {d^{2}\theta }{dt^{2}}}+2v{\frac {d\theta }{dt}}-r\left({\frac {d\varphi }{dt}}\right)^{2}\sin \theta \cos \theta \right)\mathbf {\hat {e}} _{\theta }\\&+\left(r{\frac {d^{2}\varphi }{dt^{2}}}\,\sin \theta +2v\,{\frac {d\varphi }{dt}}\,\sin \theta +2r\,{\frac {d\theta }{dt}}\,{\frac {d\varphi }{dt}}\,\cos \theta \right)\mathbf {\hat {e}} _{\varphi }\end{aligned}}\,\!}
In the case of a constant φ this reduces to the planar equations above.
== Dynamic equations of motion ==
=== Newtonian mechanics ===
The first general equation of motion developed was Newton's second law of motion. In its most general form it states that the rate of change of momentum p = p(t) = mv(t) of an object equals the force F = F(x(t), v(t), t) acting on it:
{\displaystyle \mathbf {F} ={\frac {d\mathbf {p} }{dt}}}
The force in the equation is the net force acting on the object, not a force the object exerts on other bodies. Replacing momentum by mass times velocity, the law is also written more famously as
{\displaystyle \mathbf {F} =m\mathbf {a} }
since m is a constant in Newtonian mechanics.
Newton's second law applies to point-like particles, and to all points in a rigid body. It also applies to each point in a mass continuum, like deformable solids or fluids, but the motion of the system must be accounted for; see material derivative. In the case that the mass is not constant, it is not sufficient to use the product rule for the time derivative of the mass and velocity, and Newton's second law requires some modification consistent with conservation of momentum; see variable-mass system.
It may be simple to write down the equations of motion in vector form using Newton's laws of motion, but the components may vary in complicated ways with spatial coordinates and time, and solving them is not easy. Often there is an excess of variables to solve for the problem completely, so Newton's laws are not always the most efficient way to determine the motion of a system. In simple cases of rectangular geometry, Newton's laws work fine in Cartesian coordinates, but in other coordinate systems can become dramatically complex.
The momentum form is preferable since it is readily generalized to more complex systems, such as special and general relativity (see four-momentum). It can also be used together with momentum conservation. However, Newton's laws are not more fundamental than momentum conservation, because Newton's laws are merely consistent with the fact that zero resultant force acting on an object implies constant momentum, while a resultant force implies the momentum is not constant. Momentum conservation is always true for an isolated system not subject to resultant forces.
For a number of particles (see many body problem), the equation of motion for one particle i influenced by other particles is
{\displaystyle {\frac {d\mathbf {p} _{i}}{dt}}=\mathbf {F} _{E}+\sum _{i\neq j}\mathbf {F} _{ij}}
where pi is the momentum of particle i, Fij is the force on particle i by particle j, and FE is the resultant external force due to any agent not part of system. Particle i does not exert a force on itself.
Euler's laws of motion are similar to Newton's laws, but they are applied specifically to the motion of rigid bodies. The Newton–Euler equations combine the forces and torques acting on a rigid body into a single equation.
Newton's second law for rotation takes a similar form to the translational case,
{\displaystyle {\boldsymbol {\tau }}={\frac {d\mathbf {L} }{dt}}\,,}
by equating the torque acting on the body to the rate of change of its angular momentum L. Analogous to mass times acceleration, the moment of inertia tensor I depends on the distribution of mass about the axis of rotation, and the angular acceleration is the rate of change of angular velocity,
{\displaystyle {\boldsymbol {\tau }}=\mathbf {I} {\boldsymbol {\alpha }}.}
Again, these equations apply to point-like particles, or to each point of a rigid body.
Likewise, for a number of particles, the equation of motion for one particle i is
{\displaystyle {\frac {d\mathbf {L} _{i}}{dt}}={\boldsymbol {\tau }}_{E}+\sum _{i\neq j}{\boldsymbol {\tau }}_{ij}\,,}
where Li is the angular momentum of particle i, τij the torque on particle i by particle j, and τE is resultant external torque (due to any agent not part of system). Particle i does not exert a torque on itself.
=== Applications ===
Some examples of Newton's law include describing the motion of a simple pendulum,
{\displaystyle -mg\sin \theta =m{\frac {d^{2}(\ell \theta )}{dt^{2}}}\quad \Rightarrow \quad {\frac {d^{2}\theta }{dt^{2}}}=-{\frac {g}{\ell }}\sin \theta \,,}
and a damped, sinusoidally driven harmonic oscillator,
{\displaystyle F_{0}\sin(\omega t)=m\left({\frac {d^{2}x}{dt^{2}}}+2\zeta \omega _{0}{\frac {dx}{dt}}+\omega _{0}^{2}x\right)\,.}
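The pendulum equation above has no elementary closed-form solution, but it is easy to integrate numerically. The sketch below (arbitrary parameters, a simple semi-implicit Euler step) checks that the small-amplitude period approaches 2π√(ℓ/g):

```python
import math

# Integrate d2theta/dt2 = -(g/l) sin(theta) and measure the period from
# two successive zero crossings of theta (half a period apart).
g, l = 9.81, 1.0
theta, omega = 0.05, 0.0          # released from rest at a small angle (rad)
dt, t = 1e-4, 0.0
prev_theta, crossing_times = theta, []

while len(crossing_times) < 2:
    omega += -(g / l) * math.sin(theta) * dt   # semi-implicit (symplectic) Euler
    theta += omega * dt
    t += dt
    if prev_theta * theta < 0:                 # theta changed sign
        crossing_times.append(t)
    prev_theta = theta

period = 2 * (crossing_times[1] - crossing_times[0])
assert abs(period - 2 * math.pi * math.sqrt(l / g)) < 1e-2
```

For larger amplitudes the measured period grows beyond 2π√(ℓ/g), reflecting the nonlinearity of sin θ.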
For describing the motion of masses due to gravity, Newton's law of gravity can be combined with Newton's second law. For example, consider a ball of mass m thrown in the air, in air currents (such as wind) described by a vector field of resistive forces R = R(r, t),
{\displaystyle -{\frac {GmM}{|\mathbf {r} |^{2}}}\mathbf {\hat {e}} _{r}+\mathbf {R} =m{\frac {d^{2}\mathbf {r} }{dt^{2}}}\quad \Rightarrow \quad {\frac {d^{2}\mathbf {r} }{dt^{2}}}=-{\frac {GM}{|\mathbf {r} |^{2}}}\mathbf {\hat {e}} _{r}+\mathbf {A} }
where G is the gravitational constant, M the mass of the Earth, and A = R/m is the acceleration of the projectile due to the air currents at position r and time t.
The classical N-body problem for N particles each interacting with each other due to gravity is a set of N nonlinear coupled second order ODEs,
{\displaystyle {\frac {d^{2}\mathbf {r} _{i}}{dt^{2}}}=G\sum _{i\neq j}{\frac {m_{j}}{|\mathbf {r} _{j}-\mathbf {r} _{i}|^{3}}}(\mathbf {r} _{j}-\mathbf {r} _{i})}
where i = 1, 2, ..., N labels the quantities (mass, position, etc.) associated with each particle.
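For N = 2 these coupled ODEs can be integrated directly. The sketch below (units with G = 1 and a simple symplectic Euler step; all values arbitrary) follows two equal masses started on a circular orbit about their common centre of mass and checks that their separation stays nearly constant:

```python
import math

G, m, d = 1.0, 1.0, 2.0
v = math.sqrt(G * m / (2 * d))      # circular-orbit speed for each body

r1, r2 = [-d / 2, 0.0], [d / 2, 0.0]
v1, v2 = [0.0, -v], [0.0, v]

def acc(ri, rj, mj):
    """Acceleration on body i due to body j (the i != j term of the sum)."""
    dx, dy = rj[0] - ri[0], rj[1] - ri[1]
    s3 = (dx * dx + dy * dy) ** 1.5
    return (G * mj * dx / s3, G * mj * dy / s3)

dt = 1e-3
for _ in range(5000):
    a1, a2 = acc(r1, r2, m), acc(r2, r1, m)
    v1 = [v1[k] + a1[k] * dt for k in range(2)]   # kick
    v2 = [v2[k] + a2[k] * dt for k in range(2)]
    r1 = [r1[k] + v1[k] * dt for k in range(2)]   # drift
    r2 = [r2[k] + v2[k] * dt for k in range(2)]

sep = math.hypot(r2[0] - r1[0], r2[1] - r1[1])
assert abs(sep - d) < 1e-2          # the orbit stays (nearly) circular
```

For N ≥ 3 the same loop applies with the full sum over j, but the system is chaotic in general and has no closed-form solution.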
== Analytical mechanics ==
Using all three coordinates of 3D space is unnecessary if there are constraints on the system. If the system has N degrees of freedom, then one can use a set of N generalized coordinates q(t) = [q1(t), q2(t) ... qN(t)] to define the configuration of the system. They can be in the form of arc lengths or angles. Generalized coordinates considerably simplify the description of motion, since they take advantage of the intrinsic constraints that limit the system's motion, and the number of coordinates is reduced to a minimum. The time derivatives of the generalized coordinates are the generalized velocities
{\displaystyle \mathbf {\dot {q}} ={\frac {d\mathbf {q} }{dt}}\,.}
The Euler–Lagrange equations are
{\displaystyle {\frac {d}{dt}}\left({\frac {\partial L}{\partial \mathbf {\dot {q}} }}\right)={\frac {\partial L}{\partial \mathbf {q} }}\,,}
where the Lagrangian is a function of the configuration q and its time rate of change dq/dt (and possibly time t)
{\displaystyle L=L\left[\mathbf {q} (t),\mathbf {\dot {q}} (t),t\right]\,.}
Setting up the Lagrangian of the system, then substituting into the equations and evaluating the partial derivatives and simplifying, a set of N coupled second-order ODEs in the coordinates is obtained.
Hamilton's equations are
{\displaystyle \mathbf {\dot {p}} =-{\frac {\partial H}{\partial \mathbf {q} }}\,,\quad \mathbf {\dot {q}} =+{\frac {\partial H}{\partial \mathbf {p} }}\,,}
where the Hamiltonian
{\displaystyle H=H\left[\mathbf {q} (t),\mathbf {p} (t),t\right]\,,}
is a function of the configuration q and conjugate "generalized" momenta
{\displaystyle \mathbf {p} ={\frac {\partial L}{\partial \mathbf {\dot {q}} }}\,,}
in which ∂/∂q = (∂/∂q1, ∂/∂q2, …, ∂/∂qN) is a shorthand notation for a vector of partial derivatives with respect to the indicated variables (see for example matrix calculus for this denominator notation), and possibly of time t.
Setting up the Hamiltonian of the system, then substituting into the equations and evaluating the partial derivatives and simplifying, a set of 2N coupled first-order ODEs in the coordinates qi and momenta pi is obtained.
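As an illustration of this procedure, for a harmonic oscillator with H = p²/2m + ½kq², Hamilton's equations give q̇ = p/m and ṗ = −kq. These two first-order ODEs can be stepped numerically; the sketch below (arbitrary parameters; the symplectic Euler step is a standard choice, not specific to this article) checks that the energy is conserved and the solution matches q(t) = cos(ωt):

```python
import math

m, k = 1.0, 4.0          # arbitrary mass and spring constant
q, p = 1.0, 0.0          # initial conditions: q(0) = 1, p(0) = 0
dt = 1e-3
H0 = p**2 / (2 * m) + 0.5 * k * q**2

for _ in range(10000):
    p += -k * q * dt      # pdot = -dH/dq
    q += (p / m) * dt     # qdot = +dH/dp (using the updated p: symplectic Euler)

H = p**2 / (2 * m) + 0.5 * k * q**2
assert abs(H - H0) < 1e-2       # energy is well conserved by the symplectic step

t, omega = 10000 * dt, math.sqrt(k / m)
assert abs(q - math.cos(omega * t)) < 1e-2   # exact solution q(t) = cos(omega t)
```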
The Hamilton–Jacobi equation is
{\displaystyle -{\frac {\partial S(\mathbf {q} ,t)}{\partial t}}=H\left(\mathbf {q} ,\mathbf {p} ,t\right)\,.}
where
{\displaystyle S[\mathbf {q} ,t]=\int _{t_{1}}^{t_{2}}L(\mathbf {q} ,\mathbf {\dot {q}} ,t)\,dt\,,}
is Hamilton's principal function, also called the classical action, a functional of L. In this case, the momenta are given by
{\displaystyle \mathbf {p} ={\frac {\partial S}{\partial \mathbf {q} }}\,.}
Although the equation has a simple general form, for a given Hamiltonian it is actually a single first order non-linear PDE, in N + 1 variables. The action S allows identification of conserved quantities for mechanical systems, even when the mechanical problem itself cannot be solved fully, because any differentiable symmetry of the action of a physical system has a corresponding conservation law, a theorem due to Emmy Noether.
All classical equations of motion can be derived from the variational principle known as Hamilton's principle of least action
{\displaystyle \delta S=0\,,}
stating the path the system takes through the configuration space is the one with the least action S.
== Electrodynamics ==
In electrodynamics, the force on a charged particle of charge q is the Lorentz force:
{\displaystyle \mathbf {F} =q\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right)}
Combining with Newton's second law gives a second-order differential equation of motion in terms of the position of the particle:
{\displaystyle m{\frac {d^{2}\mathbf {r} }{dt^{2}}}=q\left(\mathbf {E} +{\frac {d\mathbf {r} }{dt}}\times \mathbf {B} \right)}
or its momentum:
{\displaystyle {\frac {d\mathbf {p} }{dt}}=q\left(\mathbf {E} +{\frac {\mathbf {p} \times \mathbf {B} }{m}}\right)}
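These equations can be integrated numerically. The sketch below uses the Boris rotation step (a standard particle-pushing scheme, not specific to this article) for a particle in a uniform magnetic field with E = 0; since the magnetic force is always perpendicular to v, it does no work, and the speed should be conserved:

```python
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

# Particle of charge q and mass m in a uniform field B = B z-hat, E = 0.
q, m, B = 1.0, 1.0, 2.0          # arbitrary values
v = [1.0, 0.0, 0.5]              # initial velocity
x = [0.0, 0.0, 0.0]              # initial position
dt = 1e-3

tvec = [0.0, 0.0, q * B * dt / (2 * m)]             # t = qB dt / 2m
svec = [0.0, 0.0, 2 * tvec[2] / (1 + tvec[2]**2)]   # s = 2t / (1 + |t|^2)

speed0 = math.sqrt(sum(c * c for c in v))
for _ in range(10000):
    vp = [v[k] + cross(v, tvec)[k] for k in range(3)]   # v' = v + v x t
    v = [v[k] + cross(vp, svec)[k] for k in range(3)]   # v+ = v + v' x s
    x = [x[k] + v[k] * dt for k in range(3)]            # dr/dt = v

speed = math.sqrt(sum(c * c for c in v))
assert abs(speed - speed0) < 1e-10   # |v| conserved: magnetic force does no work
assert abs(v[2] - 0.5) < 1e-12       # motion along B is unaffected
```

The in-plane components trace a circle (cyclotron motion) while the component along B drifts freely, giving the familiar helical trajectory.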
The same equation can be obtained using the Lagrangian (and applying Lagrange's equations above) for a charged particle of mass m and charge q:
{\displaystyle L={\tfrac {1}{2}}m\mathbf {\dot {r}} \cdot \mathbf {\dot {r}} +q\mathbf {A} \cdot {\dot {\mathbf {r} }}-q\phi }
where ϕ and A are the electromagnetic scalar and vector potential fields, respectively. The Lagrangian indicates an additional detail: the canonical momentum in Lagrangian mechanics is given by:
{\displaystyle \mathbf {P} ={\frac {\partial L}{\partial {\dot {\mathbf {r} }}}}=m{\dot {\mathbf {r} }}+q\mathbf {A} }
instead of just mv, implying the motion of a charged particle is fundamentally determined by the mass and charge of the particle. The Lagrangian expression was first used to derive the force equation.
Alternatively, the Hamiltonian (substituted into Hamilton's equations above):
{\displaystyle H={\frac {\left(\mathbf {P} -q\mathbf {A} \right)^{2}}{2m}}+q\phi }
can be used to derive the Lorentz force equation.
== General relativity ==
=== Geodesic equation of motion ===
The above equations are valid in flat spacetime. In curved spacetime, things become mathematically more complicated since there is no straight line; this concept is generalized and replaced by a geodesic of the curved spacetime (the shortest length of curve between two points). For curved manifolds with a metric tensor g, the metric provides the notion of arc length (see line element for details). The differential arc length is given by:
{\displaystyle ds={\sqrt {g_{\alpha \beta }dx^{\alpha }dx^{\beta }}}}
and the geodesic equation is a second-order differential equation in the coordinates. The general solution is a family of geodesics:
{\displaystyle {\frac {d^{2}x^{\mu }}{ds^{2}}}=-\Gamma ^{\mu }{}_{\alpha \beta }{\frac {dx^{\alpha }}{ds}}{\frac {dx^{\beta }}{ds}}}
where Γ μαβ is a Christoffel symbol of the second kind, which contains the metric (with respect to the coordinate system).
Given the mass-energy distribution provided by the stress–energy tensor T αβ, the Einstein field equations are a set of non-linear second-order partial differential equations in the metric, and imply the curvature of spacetime is equivalent to a gravitational field (see equivalence principle). Mass falling in curved spacetime is equivalent to a mass falling in a gravitational field, because gravity is a fictitious force. The relative acceleration of one geodesic to another in curved spacetime is given by the geodesic deviation equation:
{\displaystyle {\frac {D^{2}\xi ^{\alpha }}{ds^{2}}}=-R^{\alpha }{}_{\beta \gamma \delta }{\frac {dx^{\beta }}{ds}}\xi ^{\gamma }{\frac {dx^{\delta }}{ds}}}
where ξα = x2α − x1α is the separation vector between two geodesics, D/ds (not just d/ds) is the covariant derivative, and Rαβγδ is the Riemann curvature tensor, containing the Christoffel symbols. In other words, the geodesic deviation equation is the equation of motion for masses in curved spacetime, analogous to the Lorentz force equation for charges in an electromagnetic field.
For flat spacetime in Cartesian coordinates, the metric is a constant tensor so the Christoffel symbols vanish, and the geodesic equation has straight lines as solutions. This is also the limiting case when masses move according to Newton's law of gravity.
=== Spinning objects ===
In general relativity, rotational motion is described by the relativistic angular momentum tensor, including the spin tensor, which enter the equations of motion under covariant derivatives with respect to proper time. The Mathisson–Papapetrou–Dixon equations describe the motion of spinning objects moving in a gravitational field.
== Analogues for waves and fields ==
Unlike the equations of motion for describing particle mechanics, which are systems of coupled ordinary differential equations, the analogous equations governing the dynamics of waves and fields are always partial differential equations, since the waves or fields are functions of space and time. For a particular solution, boundary conditions along with initial conditions need to be specified.
Sometimes in the following contexts, the wave or field equations are also called "equations of motion".
=== Field equations ===
Equations that describe the spatial dependence and time evolution of fields are called field equations. These include
Maxwell's equations for the electromagnetic field,
Poisson's equation for Newtonian gravitational or electrostatic field potentials,
the Einstein field equation for gravitation (Newton's law of gravity is a special case for weak gravitational fields and low velocities of particles).
This terminology is not universal: for example although the Navier–Stokes equations govern the velocity field of a fluid, they are not usually called "field equations", since in this context they represent the momentum of the fluid and are called the "momentum equations" instead.
=== Wave equations ===
Equations of wave motion are called wave equations. The solutions to a wave equation give the time-evolution and spatial dependence of the amplitude. Boundary conditions determine if the solutions describe traveling waves or standing waves.
From the classical equations of motion and field equations, mechanical, gravitational wave, and electromagnetic wave equations can be derived. The general linear wave equation in 3D is:
{\displaystyle {\frac {1}{v^{2}}}{\frac {\partial ^{2}X}{\partial t^{2}}}=\nabla ^{2}X}
where X = X(r, t) is any mechanical or electromagnetic field amplitude, say:
the transverse or longitudinal displacement of a vibrating rod, wire, cable, membrane etc.,
the fluctuating pressure of a medium, sound pressure,
the electric fields E or D, or the magnetic fields B or H,
the voltage V or current I in an alternating current circuit,
and v is the phase velocity. Nonlinear equations model the dependence of phase velocity on amplitude, replacing v by v(X). There are other linear and nonlinear wave equations for very specific applications, see for example the Korteweg–de Vries equation.
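A travelling wave X(x, t) = sin(k(x − vt)) satisfies the linear wave equation in one dimension, which can be confirmed by finite differences (a numerical sketch; the values of k and v are arbitrary):

```python
import math

# Check by central differences that X(x, t) = sin(k(x - vt)) satisfies
# (1/v^2) d2X/dt2 = d2X/dx2.
v, k = 3.0, 2.0
X = lambda x, t: math.sin(k * (x - v * t))

x0, t0, h = 0.4, 0.1, 1e-4
d2t = (X(x0, t0 + h) - 2 * X(x0, t0) + X(x0, t0 - h)) / h**2
d2x = (X(x0 + h, t0) - 2 * X(x0, t0) + X(x0 - h, t0)) / h**2
assert abs(d2t / v**2 - d2x) < 1e-6
```

The same check passes for any twice-differentiable profile f(x − vt), which is d'Alembert's observation that arbitrary shapes propagate at the phase velocity v.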
=== Quantum theory ===
In quantum theory, the wave and field concepts both appear.
In quantum mechanics the analogue of the classical equations of motion (Newton's law, Euler–Lagrange equation, Hamilton–Jacobi equation, etc.) is the Schrödinger equation in its most general form:
{\displaystyle i\hbar {\frac {\partial \Psi }{\partial t}}={\hat {H}}\Psi \,,}
where Ψ is the wavefunction of the system, Ĥ is the quantum Hamiltonian operator, rather than a function as in classical mechanics, and ħ is the Planck constant divided by 2π. Setting up the Hamiltonian and inserting it into the equation results in a wave equation; the solution is the wavefunction as a function of space and time. The Schrödinger equation itself reduces to the Hamilton–Jacobi equation in the limit that ħ becomes zero, in accordance with the correspondence principle. To compare to measurements, operators for observables must be applied to the quantum wavefunction according to the experiment performed, leading to either wave-like or particle-like results.
Throughout all aspects of quantum theory, relativistic or non-relativistic, there are various formulations alternative to the Schrödinger equation that govern the time evolution and behavior of a quantum system, for instance:
the Heisenberg equation of motion resembles the time evolution of classical observables as functions of position, momentum, and time, if one replaces dynamical observables by their quantum operators and the classical Poisson bracket by the commutator,
the phase space formulation closely follows classical Hamiltonian mechanics, placing position and momentum on equal footing,
the Feynman path integral formulation extends the principle of least action to quantum mechanics and field theory, placing emphasis on the use of Lagrangians rather than Hamiltonians.
== See also ==
== References ==
In mathematics, specifically in the calculus of variations, a variation δf of a function f can be concentrated on an arbitrarily small interval, but not a single point.
Accordingly, the necessary condition of extremum (the functional derivative equal to zero) appears in a weak formulation (variational form) integrated with an arbitrary function δf. The fundamental lemma of the calculus of variations is typically used to transform this weak formulation into the strong formulation (differential equation), free of the integration with an arbitrary function. The proof usually exploits the possibility of choosing δf concentrated on an interval on which f keeps its sign (positive or negative). Several versions of the lemma are in use. Basic versions are easy to formulate and prove. More powerful versions are used when needed.
== Basic version ==
If a continuous function f on an open interval (a, b) satisfies the equality
{\displaystyle \int _{a}^{b}f(x)h(x)\,\mathrm {d} x=0}
for all compactly supported smooth functions h on (a, b), then f is identically zero.
Here "smooth" may be interpreted as "infinitely differentiable", but often is interpreted as "twice continuously differentiable" or "continuously differentiable" or even just "continuous", since these weaker statements may be strong enough for a given task. "Compactly supported" means "vanishes outside [c, d] for some c, d such that a < c < d < b"; but often a weaker statement suffices, assuming only that h (or h and a number of its derivatives) vanishes at the endpoints a, b; in this case the closed interval [a, b] is used.
== Proof ==
Suppose f(x̄) ≠ 0 for some x̄ ∈ (a, b). Since f is continuous, it is nonzero with the same sign on some interval (c, d) such that a < c < x̄ < d < b. Without loss of generality, assume f(x̄) > 0. Then take an h that is positive on (c, d) and zero elsewhere, for example
{\displaystyle h(x)={\begin{cases}\exp \left(-{\frac {1}{(x-c)(d-x)}}\right),&c<x<d\\0,&\mathrm {otherwise} \end{cases}}}
Note this bump function satisfies the properties in the statement, including C^∞ smoothness. Since
{\displaystyle \int _{a}^{b}f(x)h(x)dx>0,}
we reach a contradiction.
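The bump function used in the proof can be examined numerically; in the sketch below the choices c = 0.2, d = 0.8 and the positive test function f(x) = 1 + x are arbitrary:

```python
import math

def h(x, c=0.2, d=0.8):
    """The bump function from the proof: smooth, positive on (c, d), zero outside."""
    if not (c < x < d):
        return 0.0
    return math.exp(-1.0 / ((x - c) * (d - x)))

assert h(0.1) == 0.0 and h(0.9) == 0.0    # vanishes outside (c, d)
assert h(0.5) > 0.0                        # strictly positive inside

# For an f that is positive on (c, d) -- here f(x) = 1 + x -- the integral
# of f*h is strictly positive: the contradiction in the proof.
n = 10000
integral = sum((1 + (i + 0.5) / n) * h((i + 0.5) / n) for i in range(n)) / n
assert integral > 0.0
```

Because h and all its derivatives decay to zero faster than any polynomial as x approaches c or d, the piecewise definition is infinitely differentiable everywhere.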
== Version for two given functions ==
If a pair of continuous functions f, g on an interval (a,b) satisfies the equality
{\displaystyle \int _{a}^{b}(f(x)\,h(x)+g(x)\,h'(x))\,\mathrm {d} x=0}
for all compactly supported smooth functions h on (a,b), then g is differentiable, and g' = f everywhere.
The special case for g = 0 is just the basic version.
Here is the special case for f = 0 (often sufficient).
If a continuous function g on an interval (a,b) satisfies the equality
{\displaystyle \int _{a}^{b}g(x)\,h'(x)\,\mathrm {d} x=0}
for all smooth functions h on (a,b) such that h(a) = h(b) = 0, then g is constant.
If, in addition, continuous differentiability of g is assumed, then integration by parts reduces both statements to the basic version; this case is attributed to Joseph-Louis Lagrange, while the proof of differentiability of g is due to Paul du Bois-Reymond.
== Versions for discontinuous functions ==
The given functions (f, g) may be discontinuous, provided that they are locally integrable (on the given interval). In this case, Lebesgue integration is meant, the conclusions hold almost everywhere (thus, in all continuity points), and differentiability of g is interpreted as local absolute continuity (rather than continuous differentiability). Sometimes the given functions are assumed to be piecewise continuous, in which case Riemann integration suffices, and the conclusions are stated everywhere except the finite set of discontinuity points.
== Higher derivatives ==
If a tuple of continuous functions f0, f1, …, fn on an interval (a,b) satisfies the equality
{\displaystyle \int _{a}^{b}(f_{0}(x)\,h(x)+f_{1}(x)\,h'(x)+\dots +f_{n}(x)\,h^{(n)}(x))\,\mathrm {d} x=0}
for all compactly supported smooth functions h on (a,b), then there exist continuously differentiable functions u0, u1, …, un−1 on (a,b) such that
{\displaystyle {\begin{aligned}f_{0}&=u'_{0},\\f_{1}&=u_{0}+u'_{1},\\f_{2}&=u_{1}+u'_{2}\\\vdots \\f_{n-1}&=u_{n-2}+u'_{n-1},\\f_{n}&=u_{n-1}\end{aligned}}}
everywhere.
This necessary condition is also sufficient, since the integrand becomes {\displaystyle (u_{0}h)'+(u_{1}h')'+\dots +(u_{n-1}h^{(n-1)})'.}
The case n = 1 is just the version for two given functions, since {\displaystyle f=f_{0}=u'_{0}} and {\displaystyle f_{1}=u_{0},} thus, {\displaystyle f_{0}-f'_{1}=0.}
In contrast, the case n = 2 does not lead to the relation {\displaystyle f_{0}-f'_{1}+f''_{2}=0,} since the function {\displaystyle f_{2}=u_{1}} need not be twice differentiable. The sufficient condition {\displaystyle f_{0}-f'_{1}+f''_{2}=0} is not necessary. Rather, the necessary and sufficient condition may be written as {\displaystyle f_{0}-(f_{1}-f'_{2})'=0} for n = 2, as {\displaystyle f_{0}-(f_{1}-(f_{2}-f'_{3})')'=0} for n = 3, and so on; in general, the brackets cannot be opened because of non-differentiability.
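The sufficiency direction for n = 2 can be checked numerically (an illustrative sketch: the functions u0 = sin, u1 = x², the interval [0, 1], and the test function x²(1−x)² are arbitrary choices made here). With f0 = u0′, f1 = u0 + u1′, f2 = u1, the integrand is the total derivative (u0·h + u1·h′)′, so the integral vanishes for any h whose value and first derivative vanish at the endpoints.

```python
# Check that f0 = u0', f1 = u0 + u1', f2 = u1 makes
#   integral of (f0 h + f1 h' + f2 h'') over [0, 1]
# vanish, since the integrand equals (u0*h + u1*h')'.
import math

def integral(f, a=0.0, b=1.0, n=20_000):
    # trapezoidal rule
    step = (b - a) / n
    ys = [f(a + i * step) for i in range(n + 1)]
    return step * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

u0, du0 = math.sin, math.cos                     # u0' = cos
u1, du1 = (lambda x: x * x), (lambda x: 2 * x)   # u1' = 2x

f0 = du0                              # f0 = u0'
f1 = lambda x: u0(x) + du1(x)         # f1 = u0 + u1'
f2 = u1                               # f2 = u1

# h(x) = x^2 (1 - x)^2: both h and h' vanish at 0 and 1.
h   = lambda x: x**2 * (1 - x)**2
dh  = lambda x: 2*x - 6*x**2 + 4*x**3
d2h = lambda x: 2 - 12*x + 12*x**2

value = integral(lambda x: f0(x)*h(x) + f1(x)*dh(x) + f2(x)*d2h(x))
print(f"{value:.2e}")  # ~ 0, as the decomposition predicts
```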
== Vector-valued functions ==
Generalization to vector-valued functions {\displaystyle (a,b)\to \mathbb {R} ^{d}} is straightforward; one applies the results for scalar functions to each coordinate separately, or treats the vector-valued case from the beginning.
== Multivariable functions ==
If a continuous multivariable function f on an open set {\displaystyle \Omega \subset \mathbb {R} ^{d}} satisfies the equality {\displaystyle \int _{\Omega }f(x)\,h(x)\,\mathrm {d} x=0} for all compactly supported smooth functions h on Ω, then f is identically zero.
Similarly to the basic version, one may consider a continuous function f on the closure of Ω, assuming that h vanishes on the boundary of Ω (rather than compactly supported).
Here is a version for discontinuous multivariable functions.
Let {\displaystyle \Omega \subset \mathbb {R} ^{d}} be an open set, and let {\displaystyle f\in L^{2}(\Omega )} satisfy the equality {\displaystyle \int _{\Omega }f(x)\,h(x)\,\mathrm {d} x=0} for all compactly supported smooth functions h on Ω. Then f = 0 (in L2, that is, almost everywhere).
== Applications ==
This lemma is used to prove that extrema of the functional {\displaystyle J[y]=\int _{x_{0}}^{x_{1}}L(t,y(t),{\dot {y}}(t))\,\mathrm {d} t} are weak solutions {\displaystyle y:[x_{0},x_{1}]\to V} (for an appropriate vector space {\displaystyle V}) of the Euler–Lagrange equation {\displaystyle {\partial L(t,y(t),{\dot {y}}(t)) \over \partial y}={\mathrm {d} \over \mathrm {d} t}{\partial L(t,y(t),{\dot {y}}(t)) \over \partial {\dot {y}}}.}
The Euler–Lagrange equation plays a prominent role in classical mechanics and differential geometry.
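A small numerical sketch can make the variational statement concrete (the free particle with L = ½·ẏ², unit mass, the endpoints y(0) = 0, y(1) = 1, and the sine perturbation are all choices made here for illustration): the straight line satisfies the Euler–Lagrange equation ÿ = 0, and any endpoint-fixed perturbation increases the discretized action.

```python
# Compare the discretized action S = integral of (1/2) y'(t)^2 dt
# for the straight-line path y(t) = t against a perturbed path with
# the same endpoints.
import math

def action(path, t0=0.0, t1=1.0):
    n = len(path) - 1
    dt = (t1 - t0) / n
    # finite-difference velocity on each subinterval
    return sum(0.5 * ((path[i + 1] - path[i]) / dt) ** 2 * dt
               for i in range(n))

n = 1000
ts = [i / n for i in range(n + 1)]
straight = ts[:]                                        # y(t) = t
bumped = [t + 0.1 * math.sin(math.pi * t) for t in ts]  # same endpoints

s0, s1 = action(straight), action(bumped)
print(s0)       # 0.5 for the straight line
print(s1 > s0)  # True: the perturbed path has strictly larger action
```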
== Notes ==
== References ==
Jost, Jürgen; Li-Jost, Xianqing (1998), Calculus of variations, Cambridge University Press
Gelfand, I.M.; Fomin, S.V. (1963), Calculus of variations, Prentice-Hall (transl. from Russian).
Hestenes, Magnus R. (1966), Calculus of variations and optimal control theory, John Wiley
Giaquinta, Mariano; Hildebrandt, Stefan (1996), Calculus of Variations I, Springer
Liberzon, Daniel (2012), Calculus of Variations and Optimal Control Theory, Princeton University Press
The law of conservation of energy states that the total energy of an isolated system remains constant; it is said to be conserved over time. In the case of a closed system, the principle says that the total amount of energy within the system can only be changed through energy entering or leaving the system. Energy can neither be created nor destroyed; rather, it can only be transformed or transferred from one form to another. For instance, chemical energy is converted to kinetic energy when a stick of dynamite explodes. If one adds up all forms of energy that were released in the explosion, such as the kinetic energy and potential energy of the pieces, as well as heat and sound, one will get the exact decrease of chemical energy in the combustion of the dynamite.
Classically, the conservation of energy was distinct from the conservation of mass. However, special relativity shows that mass is related to energy and vice versa by {\displaystyle E=mc^{2}}, the equation representing mass–energy equivalence, and science now takes the view that mass–energy as a whole is conserved. This implies that mass can be converted to energy, and vice versa. This is observed in the nuclear binding energy of atomic nuclei, where a mass defect is measured. It is believed that mass–energy equivalence becomes important in extreme physical conditions, such as those that likely existed in the universe very shortly after the Big Bang or when black holes emit Hawking radiation.
Given the stationary-action principle, the conservation of energy can be rigorously proven by Noether's theorem as a consequence of continuous time translation symmetry; that is, from the fact that the laws of physics do not change over time.
A consequence of the law of conservation of energy is that a perpetual motion machine of the first kind cannot exist; that is to say, no system without an external energy supply can deliver an unlimited amount of energy to its surroundings. Depending on the definition of energy, the conservation of energy can arguably be violated by general relativity on the cosmological scale. In quantum mechanics, Noether's theorem applies to the expectation value of the energy, so a consistent violation of conservation on average is impossible; whether individual conservation-violating events could ever exist or be observed, however, is subject to some debate.
== History ==
Ancient philosophers as far back as Thales of Miletus c. 550 BCE had inklings of the conservation of some underlying substance of which everything is made. However, there is no particular reason to identify their theories with what we know today as "mass-energy" (for example, Thales thought it was water). Empedocles (490–430 BCE) wrote that in his universal system, composed of four roots (earth, air, water, fire), "nothing comes to be or perishes"; instead, these elements suffer continual rearrangement. Epicurus (c. 350 BCE) on the other hand believed everything in the universe to be composed of indivisible units of matter—the ancient precursor to 'atoms'—and he too had some idea of the necessity of conservation, stating that "the sum total of things was always such as it is now, and such it will ever remain."
In 1605, the Flemish scientist Simon Stevin was able to solve a number of problems in statics based on the principle that perpetual motion was impossible.
In 1639, Galileo published his analysis of several situations—including the celebrated "interrupted pendulum"—which can be described (in modern language) as conservatively converting potential energy to kinetic energy and back again. Essentially, he pointed out that the height a moving body rises is equal to the height from which it falls, and used this observation to infer the idea of inertia. The remarkable aspect of this observation is that the height to which a moving body ascends on a frictionless surface does not depend on the shape of the surface.
In 1669, Christiaan Huygens published a brief account on his laws of collision. Among the quantities he listed as being invariant before and after the collision of bodies were both the sum of their linear momenta as well as the sum of their kinetic energies. However, the difference between elastic and inelastic collision was not understood at the time. This led to the dispute among later researchers as to which of these conserved quantities was the more fundamental. In his Horologium Oscillatorium, Huygens gave a much clearer statement regarding the height of ascent of a moving body, and connected this idea with the impossibility of perpetual motion. His study of the dynamics of pendulum motion was based on a single principle, known as Torricelli's Principle: that the center of gravity of a heavy object, or collection of objects, cannot lift itself. Using this principle, Huygens was able to derive the formula for the center of oscillation by an "energy" method, without dealing with forces or torques.
Between 1676 and 1689, Gottfried Leibniz first attempted a mathematical formulation of the kind of energy that is associated with motion (kinetic energy). Using Huygens's work on collision, Leibniz noticed that in many mechanical systems (of several masses mi, each with velocity vi),
{\displaystyle \sum _{i}m_{i}v_{i}^{2}}
was conserved so long as the masses did not interact. He called this quantity the vis viva or living force of the system. The principle represents an accurate statement of the approximate conservation of kinetic energy in situations where there is no friction. Many physicists at that time, including Isaac Newton, held instead that the momentum, which is conserved even in systems with friction and is defined by {\displaystyle \sum _{i}m_{i}v_{i}}
was the conserved vis viva. It was later shown that both quantities are conserved simultaneously given the proper conditions, such as in an elastic collision.
In 1687, Isaac Newton published his Principia, which set out his laws of motion. It was organized around the concept of force and momentum. However, the researchers were quick to recognize that the principles set out in the book, while fine for point masses, were not sufficient to tackle the motions of rigid and fluid bodies. Some other principles were also required.
By the 1690s, Leibniz was arguing that conservation of vis viva and conservation of momentum undermined the then-popular philosophical doctrine of interactionist dualism. (During the 19th century, when conservation of energy was better understood, Leibniz's basic argument would gain widespread acceptance. Some modern scholars continue to champion specifically conservation-based attacks on dualism, while others subsume the argument into a more general argument about causal closure.)
The law of conservation of vis viva was championed by the father and son duo, Johann and Daniel Bernoulli. The former enunciated the principle of virtual work as used in statics in its full generality in 1715, while the latter based his Hydrodynamica, published in 1738, on this single vis viva conservation principle. Daniel's study of loss of vis viva of flowing water led him to formulate Bernoulli's principle, which asserts the loss to be proportional to the change in hydrodynamic pressure. Daniel also formulated the notion of work and efficiency for hydraulic machines, gave a kinetic theory of gases, and linked the kinetic energy of gas molecules with the temperature of the gas.
This focus on the vis viva by the continental physicists eventually led to the discovery of stationarity principles governing mechanics, such as d'Alembert's principle and the Lagrangian and Hamiltonian formulations of mechanics.
Émilie du Châtelet (1706–1749) proposed and tested the hypothesis of the conservation of total energy, as distinct from momentum. Inspired by the theories of Gottfried Leibniz, she repeated and publicized an experiment originally devised by Willem 's Gravesande in 1722 in which balls were dropped from different heights into a sheet of soft clay. Each ball's kinetic energy—as indicated by the quantity of material displaced—was shown to be proportional to the square of the velocity. The deformation of the clay was found to be directly proportional to the height from which the balls were dropped, and hence to the initial potential energy. Some earlier workers, including Newton and Voltaire, had believed that "energy" was not distinct from momentum and therefore proportional to velocity. According to this understanding, the deformation of the clay should have been proportional to the square root of the height from which the balls were dropped. In classical physics, the correct formula is
{\displaystyle E_{k}={\frac {1}{2}}mv^{2}}, where {\displaystyle E_{k}} is the kinetic energy of an object, {\displaystyle m} its mass and {\displaystyle v} its speed. On this basis, du Châtelet proposed that energy must always have the same dimensions in any form, which is necessary to be able to consider it in different forms (kinetic, potential, heat, ...).
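The reasoning behind 's Gravesande's experiment can be sketched in a few lines (the standard-gravity value and unit ball mass are assumed figures for illustration): a ball dropped from height h lands with v = √(2gh), so its kinetic energy ½mv² = mgh grows linearly with h, while its momentum mv grows only as √h, matching the observed clay deformation.

```python
# Scaling of kinetic energy vs. momentum with drop height:
# E_k is proportional to h, while p is proportional to sqrt(h).
import math

g, m = 9.81, 1.0  # standard gravity (m/s^2) and ball mass (kg), assumed

for h in (1.0, 2.0, 4.0):
    v = math.sqrt(2 * g * h)    # impact speed
    ek = 0.5 * m * v ** 2       # equals m*g*h exactly
    p = m * v
    print(f"h={h:.0f} m  v={v:5.2f} m/s  E_k={ek:6.2f} J  p={p:5.2f} kg*m/s")
```

Doubling the height doubles the kinetic energy but multiplies the momentum only by √2, which is why the clay could distinguish the two hypotheses.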
Engineers such as John Smeaton, Peter Ewart, Carl Holtzmann, Gustave-Adolphe Hirn, and Marc Seguin recognized that conservation of momentum alone was not adequate for practical calculation and made use of Leibniz's principle. The principle was also championed by some chemists such as William Hyde Wollaston. Academics such as John Playfair were quick to point out that kinetic energy is clearly not conserved. This is obvious to a modern analysis based on the second law of thermodynamics, but in the 18th and 19th centuries, the fate of the lost energy was still unknown.
Gradually it came to be suspected that the heat inevitably generated by motion under friction was another form of vis viva. In 1783, Antoine Lavoisier and Pierre-Simon Laplace reviewed the two competing theories of vis viva and caloric theory. Count Rumford's 1798 observations of heat generation during the boring of cannons added more weight to the view that mechanical motion could be converted into heat and, importantly, that the conversion was quantitative and could be predicted (allowing for a universal conversion constant between kinetic energy and heat). Vis viva then started to be known as energy, after the term was first used in that sense by Thomas Young in 1807.
The recalibration of vis viva to {\displaystyle {\frac {1}{2}}\sum _{i}m_{i}v_{i}^{2}},
which can be understood as converting kinetic energy to work, was largely the result of Gaspard-Gustave Coriolis and Jean-Victor Poncelet over the period 1819–1839. The former called the quantity quantité de travail (quantity of work) and the latter, travail mécanique (mechanical work), and both championed its use in engineering calculations.
In the paper Über die Natur der Wärme (German "On the Nature of Heat/Warmth"), published in the Zeitschrift für Physik in 1837, Karl Friedrich Mohr gave one of the earliest general statements of the doctrine of the conservation of energy: "besides the 54 known chemical elements there is in the physical world one agent only, and this is called Kraft [energy or work]. It may appear, according to circumstances, as motion, chemical affinity, cohesion, electricity, light and magnetism; and from any one of these forms it can be transformed into any of the others."
=== Mechanical equivalent of heat ===
A key stage in the development of the modern conservation principle was the demonstration of the mechanical equivalent of heat. The caloric theory maintained that heat could neither be created nor destroyed, whereas conservation of energy entails the contrary principle that heat and mechanical work are interchangeable.
In the middle of the eighteenth century, Mikhail Lomonosov, a Russian scientist, postulated his corpusculo-kinetic theory of heat, which rejected the idea of a caloric. Through the results of empirical studies, Lomonosov came to the conclusion that heat was not transferred through the particles of the caloric fluid.
In 1798, Count Rumford (Benjamin Thompson) performed measurements of the frictional heat generated in boring cannons and developed the idea that heat is a form of kinetic energy; his measurements refuted caloric theory, but were imprecise enough to leave room for doubt.
The mechanical equivalence principle was first stated in its modern form by the German surgeon Julius Robert von Mayer in 1842. Mayer reached his conclusion on a voyage to the Dutch East Indies, where he found that his patients' blood was a deeper red because they were consuming less oxygen, and therefore less energy, to maintain their body temperature in the hotter climate. He discovered that heat and mechanical work were both forms of energy, and in 1845, after improving his knowledge of physics, he published a monograph that stated a quantitative relationship between them.
Meanwhile, in 1843, James Prescott Joule independently discovered the mechanical equivalent in a series of experiments. In one of them, now called the "Joule apparatus", a descending weight attached to a string caused a paddle immersed in water to rotate. He showed that the gravitational potential energy lost by the weight in descending was equal to the internal energy gained by the water through friction with the paddle.
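An order-of-magnitude sketch of the Joule apparatus bookkeeping (all figures below are assumed, illustrative values, not Joule's actual data): the potential energy m·g·h lost by the descending weight appears as internal energy of the water, raising its temperature by ΔT = mgh / (m_water·c_water).

```python
# Energy bookkeeping for a paddle-wheel experiment: gravitational
# potential energy lost by the weight becomes internal energy of
# the water, raising its temperature slightly.
M_WEIGHT = 10.0   # descending mass, kg (assumed)
HEIGHT = 2.0      # total descent, m (assumed)
G = 9.81          # standard gravity, m/s^2
M_WATER = 1.0     # mass of water in the vessel, kg (assumed)
C_WATER = 4186.0  # specific heat of water, J/(kg*K)

work = M_WEIGHT * G * HEIGHT            # energy delivered to the water
delta_T = work / (M_WATER * C_WATER)    # resulting temperature rise
print(f"work done = {work:.1f} J, temperature rise = {delta_T:.3f} K")
```

The tiny temperature rise (a few hundredths of a kelvin here) shows why Joule's measurements demanded such care.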
Over the period 1840–1843, similar work was carried out by engineer Ludwig A. Colding, although it was little known outside his native Denmark.
Both Joule's and Mayer's work suffered from resistance and neglect but it was Joule's that eventually drew the wider recognition.
In 1844, the Welsh scientist William Robert Grove postulated a relationship between mechanics, heat, light, electricity, and magnetism by treating them all as manifestations of a single "force" (energy in modern terms). In 1846, Grove published his theories in his book The Correlation of Physical Forces. In 1847, drawing on the earlier work of Joule, Sadi Carnot, and Émile Clapeyron, Hermann von Helmholtz arrived at conclusions similar to Grove's and published his theories in his book Über die Erhaltung der Kraft (On the Conservation of Force, 1847). The general modern acceptance of the principle stems from this publication.
In 1850, the Scottish mathematician William Rankine first used the phrase the law of the conservation of energy for the principle.
In 1877, Peter Guthrie Tait claimed that the principle originated with Sir Isaac Newton, based on a creative reading of propositions 40 and 41 of the Philosophiae Naturalis Principia Mathematica. This is now regarded as an example of Whig history.
=== Mass–energy equivalence ===
Matter is composed of atoms and what makes up atoms. Matter has intrinsic or rest mass. In the limited range of recognized experience of the nineteenth century, it was found that such rest mass is conserved. Einstein's 1905 theory of special relativity showed that rest mass corresponds to an equivalent amount of rest energy. This means that rest mass can be converted to or from equivalent amounts of (non-material) forms of energy, for example, kinetic energy, potential energy, and electromagnetic radiant energy. When this happens, as recognized in twentieth-century experience, rest mass is not conserved, unlike the total mass or total energy. All forms of energy contribute to the total mass and total energy.
For example, an electron and a positron each have rest mass. They can perish together, converting their combined rest energy into photons which have electromagnetic radiant energy but no rest mass. If this occurs within an isolated system that does not release the photons or their energy into the external surroundings, then neither the total mass nor the total energy of the system will change. The produced electromagnetic radiant energy contributes just as much to the inertia (and to any weight) of the system as did the rest mass of the electron and positron before their demise. Likewise, non-material forms of energy can perish into matter, which has rest mass.
Thus, conservation of energy (total, including material or rest energy) and conservation of mass (total, not just rest) are one (equivalent) law. In the 18th century, these had appeared as two seemingly-distinct laws.
=== Conservation of energy in beta decay ===
The discovery in 1911 that electrons emitted in beta decay have a continuous rather than a discrete spectrum appeared to contradict conservation of energy, under the then-current assumption that beta decay is the simple emission of an electron from a nucleus. This problem was eventually resolved in 1933 by Enrico Fermi who proposed the correct description of beta-decay as the emission of both an electron and an antineutrino, which carries away the apparently missing energy.
== First law of thermodynamics ==
For a closed thermodynamic system, the first law of thermodynamics may be stated as:
{\displaystyle \delta Q=\mathrm {d} U+\delta W}, or equivalently, {\displaystyle \mathrm {d} U=\delta Q-\delta W,}
where {\displaystyle \delta Q} is the quantity of energy added to the system by a heating process, {\displaystyle \delta W} is the quantity of energy lost by the system due to work done by the system on its surroundings, and {\displaystyle \mathrm {d} U} is the change in the internal energy of the system.
The δ's before the heat and work terms are used to indicate that they describe an increment of energy which is to be interpreted somewhat differently than the {\displaystyle \mathrm {d} U} increment of internal energy (see Inexact differential). Work and heat refer to kinds of process which add or subtract energy to or from a system, while the internal energy {\displaystyle U} is a property of a particular state of the system when it is in unchanging thermodynamic equilibrium. Thus the term "heat energy" for {\displaystyle \delta Q} means "that amount of energy added as a result of heating" rather than referring to a particular form of energy. Likewise, the term "work energy" for {\displaystyle \delta W} means "that amount of energy lost as a result of work". Thus one can state the amount of internal energy possessed by a thermodynamic system that one knows is presently in a given state, but one cannot tell, just from knowledge of the given present state, how much energy has in the past flowed into or out of the system as a result of its being heated or cooled, nor as a result of work being performed on or by the system.
Entropy is a function of the state of a system which tells of limitations of the possibility of conversion of heat into work.
For a simple compressible system, the work performed by the system may be written: {\displaystyle \delta W=P\,\mathrm {d} V,} where {\displaystyle P} is the pressure and {\displaystyle dV}
is a small change in the volume of the system, each of which are system variables. In the fictive case in which the process is idealized and infinitely slow, so as to be called quasi-static, and regarded as reversible, the heat being transferred from a source with temperature infinitesimally above the system temperature, the heat energy may be written
{\displaystyle \delta Q=T\,\mathrm {d} S,} where {\displaystyle T} is the temperature and {\displaystyle \mathrm {d} S}
is a small change in the entropy of the system. Temperature and entropy are variables of the state of a system.
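The first-law bookkeeping can be illustrated with the textbook special case of a quasi-static isothermal expansion of an ideal gas (the amount of gas, temperature, and volume ratio below are assumed values chosen for the sketch): the internal energy of an ideal gas depends only on temperature, so dU = 0 and the heat absorbed equals the work done, Q = W = nRT·ln(V2/V1).

```python
# First-law balance for an isothermal ideal-gas expansion:
# dU = Q - W with dU = 0, hence Q = W = n R T ln(V2/V1).
import math

R = 8.314          # gas constant, J/(mol*K)
n_mol = 1.0        # amount of gas, mol (assumed)
T = 300.0          # temperature, K (assumed)
V1, V2 = 1.0, 2.0  # the volume doubles; only the ratio matters

W = n_mol * R * T * math.log(V2 / V1)  # work done by the gas
Q = W                                  # heat absorbed, since dU = 0
dU = Q - W
print(f"W = Q = {W:.1f} J, dU = {dU} J")
```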
If an open system (in which mass may be exchanged with the environment) has several walls such that the mass transfer is through rigid walls separate from the heat and work transfers, then the first law may be written as
{\displaystyle \mathrm {d} U=\delta Q-\delta W+\sum _{i}h_{i}\,dM_{i},} where {\displaystyle dM_{i}} is the added mass of species {\displaystyle i} and {\displaystyle h_{i}}
is the corresponding enthalpy per unit mass. Note that generally
{\displaystyle dS\neq \delta Q/T} in this case, as matter carries its own entropy. Instead, {\displaystyle dS=\delta Q/T+\textstyle {\sum _{i}}s_{i}\,dM_{i}}, where {\displaystyle s_{i}} is the entropy per unit mass of type {\displaystyle i}
, from which we recover the fundamental thermodynamic relation
{\displaystyle \mathrm {d} U=T\,dS-P\,dV+\sum _{i}\mu _{i}\,dN_{i}} because the chemical potential {\displaystyle \mu _{i}} is the partial molar Gibbs free energy of species {\displaystyle i} and the Gibbs free energy {\displaystyle G\equiv H-TS}.
== Noether's theorem ==
The conservation of energy is a common feature in many physical theories. From a mathematical point of view it is understood as a consequence of Noether's theorem, developed by Emmy Noether in 1915 and first published in 1918. In any physical theory that obeys the stationary-action principle, the theorem states that every continuous symmetry has an associated conserved quantity; if the theory's symmetry is time invariance, then the conserved quantity is called "energy". The energy conservation law is a consequence of the shift symmetry of time; energy conservation is implied by the empirical fact that the laws of physics do not change with time itself. Philosophically this can be stated as "nothing depends on time per se". In other words, if the physical system is invariant under the continuous symmetry of time translation, then its energy (which is the canonical conjugate quantity to time) is conserved. Conversely, systems that are not invariant under shifts in time (e.g. systems with time-dependent potential energy) do not exhibit conservation of energy – unless we consider them to exchange energy with another, external system so that the theory of the enlarged system becomes time-invariant again. Conservation of energy for finite systems is valid in physical theories such as special relativity and quantum theory (including QED) in the flat space-time.
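The Noether connection can be illustrated numerically (a sketch, not the theorem itself; the unit-mass harmonic oscillator, the step size, and the time-dependent stiffness are all choices made here): with a time-independent Hamiltonian, a symplectic-style integrator keeps the energy (nearly) constant, while making the spring stiffness time-dependent breaks time-translation symmetry and the "energy" drifts.

```python
# Energy tracking for x'' = -k(t) x with a velocity-Verlet step.
# Time-independent k: energy stays (nearly) constant.
# Time-dependent k: no conservation law, energy drifts.
import math

def simulate(k_of_t, steps=10_000, dt=1e-3):
    x, v = 1.0, 0.0                 # unit mass, released from rest
    t = 0.0
    energies = []
    for _ in range(steps):
        a = -k_of_t(t) * x
        x += v * dt + 0.5 * a * dt * dt
        a_new = -k_of_t(t + dt) * x
        v += 0.5 * (a + a_new) * dt
        t += dt
        energies.append(0.5 * v * v + 0.5 * k_of_t(t) * x * x)
    return energies

static = simulate(lambda t: 1.0)                          # time-symmetric
driven = simulate(lambda t: 1.0 + 0.5 * math.sin(3 * t))  # not symmetric

drift_static = max(static) - min(static)
drift_driven = max(driven) - min(driven)
print(drift_static)  # tiny: energy is conserved
print(drift_driven)  # far larger: no conserved energy
```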
== Special relativity ==
With the discovery of special relativity by Henri Poincaré and Albert Einstein, the energy was proposed to be a component of an energy-momentum 4-vector. Each of the four components (one of energy and three of momentum) of this vector is separately conserved across time, in any closed system, as seen from any given inertial reference frame. Also conserved is the vector length (Minkowski norm), which is the rest mass for single particles, and the invariant mass for systems of particles (where momenta and energy are separately summed before the length is calculated).
The relativistic energy of a single massive particle contains a term related to its rest mass in addition to its kinetic energy of motion. In the limit of zero kinetic energy (or equivalently in the rest frame) of a massive particle, or else in the center of momentum frame for objects or systems which retain kinetic energy, the total energy of a particle or object (including internal kinetic energy in systems) is proportional to the rest mass or invariant mass, as described by the equation
E
=
m
c
2
{\displaystyle E=mc^{2}}
.
Thus, the rule of conservation of energy over time in special relativity continues to hold, so long as the reference frame of the observer is unchanged. This applies to the total energy of systems, although different observers disagree as to the energy value. Also conserved, and invariant to all observers, is the invariant mass, which is the minimal system mass and energy that can be seen by any observer, and which is defined by the energy–momentum relation.
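The frame-independence of the invariant mass can be sketched numerically (natural units with c = 1, a particle of mass 2, and the listed speeds are all assumptions of this illustration): energy and momentum each change with the particle's speed, but the Minkowski length √(E² − (pc)²) always returns the same rest mass.

```python
# Energy-momentum relation E^2 = (pc)^2 + (m c^2)^2 in natural units.
import math

C = 1.0  # speed of light set to 1 (natural units)

def energy_momentum(m, v):
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * m * C * C, gamma * m * v   # (E, p)

m = 2.0
for v in (0.0, 0.3, 0.6, 0.9):
    E, p = energy_momentum(m, v)
    invariant = math.sqrt(E * E - (p * C) ** 2) / C ** 2
    print(f"v={v:.1f}c  E={E:7.4f}  p={p:7.4f}  invariant mass={invariant:.4f}")
```

Each line prints a different E and p but the same invariant mass, which is what "invariant to all observers" means operationally.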
== General relativity ==
General relativity introduces new phenomena. In an expanding universe, photons spontaneously redshift and tethers spontaneously gain tension; if vacuum energy is positive, the total vacuum energy of the universe appears to spontaneously increase as the volume of space increases. Some scholars claim that energy is no longer meaningfully conserved in any identifiable form.
John Baez's view is that energy–momentum conservation is not well-defined except in certain special cases. Energy-momentum is typically expressed with the aid of a stress–energy–momentum pseudotensor. However, since pseudotensors are not tensors, they do not transform cleanly between reference frames. If the metric under consideration is static (that is, does not change with time) or asymptotically flat (that is, at an infinite distance away spacetime looks empty), then energy conservation holds without major pitfalls. In practice, some metrics, notably the Friedmann–Lemaître–Robertson–Walker metric that appears to govern the universe, do not satisfy these constraints and energy conservation is not well defined. Besides being dependent on the coordinate system, pseudotensor energy is dependent on the type of pseudotensor in use; for example, the energy exterior to a Kerr–Newman black hole is twice as large when calculated from Møller's pseudotensor as it is when calculated using the Einstein pseudotensor.
For asymptotically flat universes, Einstein and others salvage conservation of energy by introducing a specific global gravitational potential energy that cancels out mass-energy changes triggered by spacetime expansion or contraction. This global energy has no well-defined density and cannot technically be applied to a non-asymptotically flat universe; however, for practical purposes this can be finessed, and so by this view, energy is conserved in our universe. Alan Guth stated that the universe might be "the ultimate free lunch", and theorized that, when accounting for gravitational potential energy, the net energy of the Universe is zero.
== Quantum theory ==
In quantum mechanics, the energy of a quantum system is described by a self-adjoint (or Hermitian) operator called the Hamiltonian, which acts on the Hilbert space (or a space of wave functions) of the system. If the Hamiltonian is a time-independent operator, emergence probability of the measurement result does not change in time over the evolution of the system. Thus the expectation value of energy is also time independent. The local energy conservation in quantum field theory is ensured by the quantum Noether's theorem for the energy-momentum tensor operator. Thus energy is conserved by the normal unitary evolution of a quantum system.
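A minimal two-level sketch of this time-independence (the Hamiltonian H = diag(1, −1) in units of ħ = 1 and the chosen superposition are assumptions of this example): under unitary evolution each energy eigen-amplitude only acquires a phase, so the outcome probabilities, and hence the expected energy, never change.

```python
# Expected energy of a two-level system under unitary evolution
# generated by a time-independent Hamiltonian (hbar = 1).
import cmath
import math

E0, E1 = 1.0, -1.0                             # energy eigenvalues
psi = (1 / math.sqrt(3), math.sqrt(2 / 3))     # amplitudes in the energy basis

def expected_energy(t):
    # each eigen-amplitude picks up the phase exp(-i E t)
    a0 = psi[0] * cmath.exp(-1j * E0 * t)
    a1 = psi[1] * cmath.exp(-1j * E1 * t)
    return E0 * abs(a0) ** 2 + E1 * abs(a1) ** 2

print(expected_energy(0.0))   # 1/3 - 2/3 = -1/3
print(expected_energy(5.0))   # identical: phases cancel in |a|^2
```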
However, when the non-unitary Born rule is applied, the measured energy can fall below or above the expectation value if the system was not in an energy eigenstate. (For macroscopic systems, this effect is usually too small to measure.) The disposition of this energy gap is not well understood; most physicists believe that the energy is transferred to or from the macroscopic environment in the course of the measurement process, while others believe that the observable energy is only conserved "on average". No experiment has been confirmed as definitive evidence of violations of the conservation of energy principle in quantum mechanics, but that does not rule out that newly proposed experiments may yet find such evidence.
== Status ==
In the context of perpetual motion machines such as the Orbo, Professor Eric Ash has argued at the BBC: "Denying [conservation of energy] would undermine not just little bits of science - the whole edifice would be no more. All of the technology on which we built the modern world would lie in ruins". It is because of conservation of energy that "we know - without having to examine details of a particular device - that Orbo cannot work."
Energy conservation has been a foundational physical principle for about two hundred years. From the point of view of modern general relativity, the lab environment can be well approximated by Minkowski spacetime, where energy is exactly conserved. The entire Earth can be well approximated by the Schwarzschild metric, where again energy is exactly conserved. Given all the experimental evidence, any new theory (such as quantum gravity), in order to be successful, will have to explain why energy has appeared to always be exactly conserved in terrestrial experiments. In some speculative theories, corrections to quantum mechanics are too small to be detected at anywhere near the current TeV level accessible through particle accelerators. Doubly special relativity models may argue for a breakdown in energy-momentum conservation for sufficiently energetic particles; such models are constrained by observations that cosmic rays appear to travel for billions of years without displaying anomalous non-conservation behavior. Some interpretations of quantum mechanics claim that observed energy tends to increase when the Born rule is applied due to localization of the wave function. If true, objects could be expected to spontaneously heat up; thus, such models are constrained by observations of large, cool astronomical objects as well as the observation of (often supercooled) laboratory experiments.
Milton A. Rothman wrote that the law of conservation of energy has been verified by nuclear physics experiments to an accuracy of one part in a thousand million million (10^15). He then describes its precision as "perfect for all practical purposes".
== See also ==
== References ==
== Bibliography ==
=== Modern accounts ===
Goldstein, Martin; Goldstein, Inge F. (1993). The Refrigerator and the Universe. Harvard Univ. Press. A gentle introduction.
Kroemer, Herbert; Kittel, Charles (1980). Thermal Physics (2nd ed.). W. H. Freeman Company. ISBN 978-0-7167-1088-2.
Nolan, Peter J. (1996). Fundamentals of College Physics, 2nd ed. William C. Brown Publishers.
Oxtoby & Nachtrieb (1996). Principles of Modern Chemistry, 3rd ed. Saunders College Publishing.
Papineau, D. (2002). Thinking about Consciousness. Oxford: Oxford University Press.
Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 978-0-534-40842-8.
Stenger, Victor J. (2000). Timeless Reality. Prometheus Books. Especially chpt. 12. Nontechnical.
Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W. H. Freeman. ISBN 978-0-7167-0809-4.
Lanczos, Cornelius (1970). The Variational Principles of Mechanics. Toronto: University of Toronto Press. ISBN 978-0-8020-1743-7.
=== History of ideas ===
Brown, T.M. (1965). "Resource letter EEC-1 on the evolution of energy concepts from Galileo to Helmholtz". American Journal of Physics. 33 (10): 759–765. Bibcode:1965AmJPh..33..759B. doi:10.1119/1.1970980.
Cardwell, D.S.L. (1971). From Watt to Clausius: The Rise of Thermodynamics in the Early Industrial Age. London: Heinemann. ISBN 978-0-435-54150-7.
Guillen, M. (1999). Five Equations That Changed the World. New York: Abacus. ISBN 978-0-349-11064-6.
Hiebert, E.N. (1981). Historical Roots of the Principle of Conservation of Energy. Madison, Wis.: Ayer Co Pub. ISBN 978-0-405-13880-5.
Kuhn, T.S. (1957) "Energy conservation as an example of simultaneous discovery", in M. Clagett (ed.) Critical Problems in the History of Science pp.321–56
Sarton, G.; Joule, J. P.; Carnot, Sadi (1929). "The discovery of the law of conservation of energy". Isis. 13: 18–49. doi:10.1086/346430. S2CID 145585492.
Smith, C. (1998). The Science of Energy: Cultural History of Energy Physics in Victorian Britain. London: Heinemann. ISBN 978-0-485-11431-7.
Mach, E. (1872). History and Root of the Principles of the Conservation of Energy. Open Court Pub. Co., Illinois.
Poincaré, H. (1905). Science and Hypothesis. Walter Scott Publishing Co. Ltd; Dover reprint, 1952. ISBN 978-0-486-60221-9. Chapter 8, "Energy and Thermo-dynamics".
== External links ==
MISN-0-158 The First Law of Thermodynamics (PDF file) by Jerzy Borysowicz for Project PHYSNET.
In fracture mechanics, the energy release rate, G, is the rate at which energy is transformed as a material undergoes fracture. Mathematically, the energy release rate is expressed as the decrease in total potential energy per increase in fracture surface area, and is thus expressed in terms of energy per unit area. Various energy balances can be constructed relating the energy released during fracture to the energy of the resulting new surface, as well as other dissipative processes such as plasticity and heat generation. The energy release rate is central to the field of fracture mechanics when solving problems and estimating material properties related to fracture and fatigue.
== Definition ==
The energy release rate G is defined as the instantaneous loss of total potential energy Π per unit crack growth area s,
{\displaystyle G\equiv -{\frac {\partial \Pi }{\partial s}},}
where the total potential energy is written in terms of the total strain energy Ω, surface traction t, displacement u, and body force b by
{\displaystyle \Pi =\Omega -\left\{\int _{{\mathcal {S}}_{t}}\mathbf {t} \cdot \mathbf {u} \,dS+\int _{\mathcal {V}}\mathbf {b} \cdot \mathbf {u} \,dV\right\}.}
The first integral is over the surface S_t of the material, and the second is over its volume V.
The figure on the right shows the plot of an external force P vs. the load-point displacement q, in which the area under the curve is the strain energy. The white area between the curve and the P-axis is referred to as the complementary energy. In the case of a linearly-elastic material, P(q) is a straight line and the strain energy is equal to the complementary energy.
=== Prescribed displacement ===
In the case of prescribed displacement, the strain energy can be expressed in terms of the specified displacement and the crack surface, Ω(q, s), and the change in this strain energy is only affected by the change in fracture surface area: δΩ = (∂Ω/∂s) δs. Correspondingly, the energy release rate in this case is expressed as
{\displaystyle G=-\left.{\frac {\partial \Omega }{\partial s}}\right|_{q}.}
Here is where one can accurately refer to G as the strain energy release rate.
=== Prescribed loads ===
When the load is prescribed instead of the displacement, the strain energy needs to be modified as Ω(q(P, s), s). The energy release rate is then computed as
{\displaystyle G=-\left.{\frac {\partial }{\partial s}}\right|_{P}\left(\Omega -Pq\right).}
If the material is linearly-elastic, then Ω = Pq/2 and one may instead write
{\displaystyle G=\left.{\frac {\partial \Omega }{\partial s}}\right|_{P}.}
=== G in two-dimensional cases ===
In the case of two-dimensional problems, the change in crack growth area is simply the change in crack length times the thickness of the specimen, i.e. ∂s = B ∂a. Therefore, the equation for computing G can be modified for the 2D case:
Prescribed displacement:
{\displaystyle G=-\left.{\frac {1}{B}}{\frac {\partial \Omega }{\partial a}}\right|_{q}.}
Prescribed load:
{\displaystyle G=-\left.{\frac {1}{B}}{\frac {\partial }{\partial a}}\right|_{P}\left(\Omega -Pq\right).}
Prescribed load, linear elastic:
{\displaystyle G=\left.{\frac {1}{B}}{\frac {\partial \Omega }{\partial a}}\right|_{P}.}
One can refer to the example calculations embedded in the next section for further information. Sometimes, the strain energy is written using U = Ω/B, an energy per unit thickness. This gives
Prescribed displacement:
{\displaystyle G=-\left.{\frac {\partial U}{\partial a}}\right|_{q}.}
Prescribed load:
{\displaystyle G=-\left.{\frac {\partial }{\partial a}}\right|_{P}\left(U-{\frac {Pq}{B}}\right).}
Prescribed load, linear elastic:
{\displaystyle G=\left.{\frac {\partial U}{\partial a}}\right|_{P}.}
=== Relation to stress intensity factors ===
The energy release rate is directly related to the stress intensity factor associated with a given two-dimensional loading mode (Mode-I, Mode-II, or Mode-III) when the crack grows straight ahead. This is applicable to cracks under plane stress, plane strain, and antiplane shear.
For Mode-I, the energy release rate G is related to the Mode-I stress intensity factor K_I for a linearly-elastic material by
{\displaystyle G={\frac {K_{I}^{2}}{E'}},}
where E' is related to Young's modulus E and Poisson's ratio ν depending on whether the material is under plane stress or plane strain:
{\displaystyle E'={\begin{cases}E,&\mathrm {plane~stress} ,\\\\{\dfrac {E}{1-\nu ^{2}}},&\mathrm {plane~strain} .\end{cases}}}
For Mode-II, the energy release rate is similarly written as
{\displaystyle G={\frac {K_{II}^{2}}{E'}}.}
For Mode-III (antiplane shear), the energy release rate is instead a function of the shear modulus μ:
{\displaystyle G={\frac {K_{III}^{2}}{2\mu }}.}
For an arbitrary combination of all loading modes, these linear elastic solutions may be superposed as
{\displaystyle G={\frac {K_{I}^{2}}{E'}}+{\frac {K_{II}^{2}}{E'}}+{\frac {K_{III}^{2}}{2\mu }}.}
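These mode relations are simple to evaluate in code. The sketch below (function names and example numbers are my own choices, not from the source) computes G for a plane-strain combination of modes in a homogeneous, isotropic, linearly elastic material:

```python
def effective_modulus(E, nu, plane_strain=False):
    """E' = E for plane stress, E / (1 - nu^2) for plane strain."""
    return E / (1.0 - nu**2) if plane_strain else E

def energy_release_rate(K_I, K_II, K_III, E, nu):
    """G = K_I^2/E' + K_II^2/E' + K_III^2/(2*mu), valid for a crack
    growing straight ahead in a linearly elastic material (plane strain here)."""
    E_eff = effective_modulus(E, nu, plane_strain=True)
    mu = E / (2.0 * (1.0 + nu))  # shear modulus of an isotropic material
    return (K_I**2 + K_II**2) / E_eff + K_III**2 / (2.0 * mu)

# Pure Mode-I, plane strain: G = K_I^2 (1 - nu^2) / E
# Steel-like numbers: K_I = 1 MPa*sqrt(m), E = 200 GPa, nu = 0.3
G = energy_release_rate(K_I=1.0e6, K_II=0.0, K_III=0.0, E=200e9, nu=0.3)
# G comes out to about 4.55 J/m^2
```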
==== Relation to fracture toughness ====
Crack growth is initiated when the energy release rate exceeds a critical value G_c, which is a material property:
{\displaystyle G\geq G_{c}.}
Under Mode-I loading, the critical energy release rate G_c is then related to the Mode-I fracture toughness K_IC, another material property, by
{\displaystyle G_{c}={\frac {K_{IC}^{2}}{E'}}.}
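As an illustration, converting a Mode-I fracture toughness into a critical energy release rate and testing the initiation criterion takes only a few lines. The material numbers below are rough, aluminium-like values chosen purely for illustration:

```python
def critical_energy_release_rate(K_IC, E, nu, plane_strain=True):
    """G_c = K_IC^2 / E', the Mode-I fracture toughness in energy terms."""
    E_eff = E / (1.0 - nu**2) if plane_strain else E
    return K_IC**2 / E_eff

def crack_will_grow(G, G_c):
    """Growth initiates once the energy release rate reaches the critical value."""
    return G >= G_c

# Illustrative aluminium-like values: K_IC = 24 MPa*sqrt(m), E = 70 GPa, nu = 0.33
G_c = critical_energy_release_rate(24e6, 70e9, 0.33)  # roughly 7.3 kJ/m^2
```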
== Calculating G ==
There are a variety of methods available for calculating the energy release rate given material properties, specimen geometry, and loading conditions. Some depend on certain criteria being satisfied, such as the material being entirely elastic or even linearly-elastic, and/or the crack growing straight ahead. The only presented method that works for arbitrary conditions is the one using the total potential energy. If two methods are both applicable, they should yield identical energy release rates.
=== Total potential energy ===
The only method to calculate G for arbitrary conditions is to calculate the total potential energy and differentiate it with respect to the crack surface area. This is typically done by:
calculating the stress field resulting from the loading,
calculating the strain energy in the material resulting from the stress field,
calculating the work done by the external loads,
all in terms of the crack surface area.
=== Compliance method ===
If the material is linearly elastic, the computation of its energy release rate can be much simplified. In this case, the load vs. load-point displacement curve is linear with a positive slope, and the displacement per unit force applied is defined as the compliance, C:
{\displaystyle C={\frac {q}{P}}.}
The corresponding strain energy Ω (area under the curve) is equal to
{\displaystyle \Omega ={\frac {1}{2}}Pq={\frac {1}{2}}{\frac {q^{2}}{C}}={\frac {1}{2}}P^{2}C.}
Using the compliance method, one can show that the energy release rate for both prescribed load and prescribed displacement comes out to be
{\displaystyle G={\frac {1}{2}}P^{2}{\frac {\partial C}{\partial s}}.}
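A classic application of the compliance method is the double cantilever beam (DCB) specimen, whose compliance from simple beam theory is C(a) = 2a^3/(3EI); that formula is standard beam theory, not stated in this article. The sketch below differentiates C numerically with respect to the crack area s = Ba and checks the result against the analytic derivative:

```python
def dcb_compliance(a, E, I):
    """DCB compliance from simple beam theory (assumed form): C = 2 a^3 / (3 E I)."""
    return 2.0 * a**3 / (3.0 * E * I)

def energy_release_rate(P, a, B, E, I, da=1e-6):
    """G = (1/2) P^2 dC/ds with s = B*a, using a central finite difference for dC/da."""
    dC_da = (dcb_compliance(a + da, E, I) - dcb_compliance(a - da, E, I)) / (2.0 * da)
    return 0.5 * P**2 * dC_da / B  # dC/ds = (dC/da) / B

# Illustrative numbers: 10 mm x 20 mm arms, 50 mm crack, 100 N load
P, a, B, E = 100.0, 0.05, 0.02, 70e9
I = B * 0.01**3 / 12  # second moment of area of one rectangular arm
G_num = energy_release_rate(P, a, B, E, I)
G_ref = P**2 * a**2 / (B * E * I)  # analytic: dC/da = 2 a^2 / (E I)
```

The finite-difference step makes the sketch applicable to any compliance function, not just this closed-form one.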
=== Multiple specimen methods for nonlinear materials ===
In the case of prescribed displacement, holding the crack length fixed, the energy release rate can be computed by
{\displaystyle G=-\int _{0}^{q}{\frac {\partial P}{\partial s}}\,dq,}
while in the case of prescribed load,
{\displaystyle G=\int _{0}^{P}{\frac {\partial q}{\partial s}}\,dP.}
As one can see, in both cases, the energy release rate G times the change in surface ds returns the area between the curves, which indicates the energy dissipated for the new surface area, as illustrated in the figure on the right:
{\displaystyle Gds=-ds\int _{0}^{q}{\frac {\partial P}{\partial s}}\,dq=ds\int _{0}^{P}{\frac {\partial q}{\partial s}}\,dP.}
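For a linearly elastic specimen, the prescribed-load integral above should reproduce the compliance result G = (1/2) P^2 ∂C/∂s, since q = C(s) P gives ∂q/∂s = P ∂C/∂s. A quick numerical check of that consistency (the compliance function is invented for illustration):

```python
def compliance(s):
    """Invented linear compliance growth with crack area s (arbitrary units)."""
    return 1.0e-6 + 2.0e-4 * s

def q(P, s):
    """Load-point displacement of a linear-elastic specimen: q = C(s) * P."""
    return compliance(s) * P

def G_prescribed_load(P, s, ds=1e-8, n=2000):
    """G = integral_0^P (dq/ds) dP', via midpoint rule + central finite difference."""
    dP = P / n
    total = 0.0
    for i in range(n):
        Pm = (i + 0.5) * dP
        dq_ds = (q(Pm, s + ds) - q(Pm, s - ds)) / (2.0 * ds)
        total += dq_ds * dP
    return total

P, s = 50.0, 0.01
G_num = G_prescribed_load(P, s)
G_ref = 0.5 * P**2 * 2.0e-4  # (1/2) P^2 dC/ds with dC/ds = 2e-4
```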
=== Crack closure integral ===
Since the energy release rate is defined as the negative derivative of the total potential energy with respect to crack surface growth, the energy release rate may be written as the difference between the potential energy before and after the crack grows. After some careful derivation, this leads one to the crack closure integral
{\displaystyle G=\lim _{\Delta s\to 0}-{\frac {1}{\Delta s}}\int _{\Delta s}{\frac {1}{2}}\,t_{i}^{0}\left(\Delta u_{i}^{+}-\Delta u_{i}^{-}\right)\,dS,}
where Δs is the new fracture surface area, t_i^0 are the components of the traction released on the top fracture surface as the crack grows, Δu_i^+ - Δu_i^- are the components of the crack opening displacement (the difference in displacement increments between the top and bottom crack surfaces), and the integral is over the surface of the material S.
The crack closure integral holds only for elastic materials, but it remains valid for cracks that grow in any direction. Nevertheless, for a two-dimensional crack that does grow straight ahead, the crack closure integral simplifies to
{\displaystyle G=\lim _{\Delta a\to 0}{\frac {1}{\Delta a}}\int _{0}^{\Delta a}\sigma _{i2}(x_{1},0)u_{i}(\Delta a-x_{1},\pi )\,dx_{1},}
where Δa is the new crack length, and the displacement components are written as a function of the polar coordinates r = Δa - x_1 and θ = π.
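For Mode-I in plane strain, the simplified closure integral can be evaluated with the textbook asymptotic crack-tip fields σ22(x, 0) = K_I/sqrt(2πx) ahead of the tip and u2(r, π) = 2(1 - ν) K_I sqrt(r/(2π))/μ on the upper crack face behind it (these fields are standard results, not given in this article); the integral should then reproduce G = K_I^2 (1 - ν^2)/E. The substitution x = Δa sin^2(θ) removes the integrable endpoint singularities:

```python
import math

def closure_integral_mode_I(K_I, E, nu, delta_a=1e-4, n=20000):
    """(1/da) * integral_0^da sigma22(x) u2(da - x) dx with Mode-I
    plane-strain asymptotic tip fields; midpoint rule after x = da*sin(t)^2."""
    mu = E / (2.0 * (1.0 + nu))  # shear modulus
    dt = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        x = delta_a * math.sin(t)**2
        sigma22 = K_I / math.sqrt(2.0 * math.pi * x)
        u2 = 2.0 * (1.0 - nu) * K_I / mu * math.sqrt((delta_a - x) / (2.0 * math.pi))
        dx = 2.0 * delta_a * math.sin(t) * math.cos(t) * dt  # dx under substitution
        total += sigma22 * u2 * dx
    return total / delta_a

K_I, E, nu = 1.0e6, 200e9, 0.3
G = closure_integral_mode_I(K_I, E, nu)
G_exact = K_I**2 * (1.0 - nu**2) / E  # = K_I^2 / E' in plane strain
```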
=== J-integral ===
In certain situations, the energy release rate G can be calculated using the J-integral, i.e. G = J, using
{\displaystyle J=\int _{\Gamma }\left(Wn_{1}-t_{i}\,{\frac {\partial u_{i}}{\partial x_{1}}}\right)\,d\Gamma ,}
where W is the elastic strain energy density, n_1 is the x_1 component of the unit vector normal to Γ (the curve used for the line integral), t_i are the components of the traction vector t = σ · n (σ being the stress tensor), and u_i are the components of the displacement vector.
This integral is zero over a simple closed path and is path independent, allowing any simple path starting and ending on the crack faces to be used to calculate J.
In order to equate the energy release rate to the J-integral, G = J, the following conditions must be met:
the crack must be growing straight ahead, and
the deformation near the crack (enclosed by Γ) must be elastic (not plastic).
The J-integral may be calculated with these conditions violated, but then G ≠ J. When they are not violated, one can relate the energy release rate and the J-integral to the elastic moduli and the stress intensity factors using
{\displaystyle G=J={\frac {K_{I}^{2}}{E'}}+{\frac {K_{II}^{2}}{E'}}+{\frac {K_{III}^{2}}{2\mu }}.}
== Computational methods in fracture mechanics ==
A handful of methods exist for calculating G with finite elements. Although a direct calculation of the J-integral is possible (using the strains and stresses output by FEA), approximate approaches for some types of crack growth exist and provide reasonable accuracy with straightforward calculations. This section elaborates on some relatively simple methods for fracture analysis utilizing numerical simulations.
=== Nodal release method ===
If the crack is growing straight ahead, the energy release rate can be decomposed as a sum of three terms G_i associated with the energy in each of the three modes. As a result, the nodal release method (NR) can be used to determine G_i from FEA results. The energy release rate is calculated at the nodes of the finite element mesh for the crack at an initial length and extended by a small distance Δa. First, we calculate the displacement variation at the node of interest, Δu = u^(t+1) - u^(t) (before and after the crack tip node is released). Secondly, we keep track of the nodal force F output by FEA. Finally, we can find each component of G using the following formulas:
{\displaystyle G_{1}^{\text{NR}}={\frac {1}{\Delta a}}F_{2}{\frac {\Delta u_{2}}{2}}}
{\displaystyle G_{2}^{\text{NR}}={\frac {1}{\Delta a}}F_{1}{\frac {\Delta u_{1}}{2}}}
{\displaystyle G_{3}^{\text{NR}}={\frac {1}{\Delta a}}F_{3}{\frac {\Delta u_{3}}{2}}}
where Δa is the width of the element bounding the crack tip. The accuracy of the method depends strongly on the mesh refinement, both because the displacements and forces depend on it, and because G = lim_{Δa→0} G^NR. Note that the equations above are derived using the crack closure integral.
If the energy release rate exceeds a critical value, the crack will grow. In this case, a new FEA simulation is performed (for the next time step) where the node at the crack tip is released. For a bounded substrate, we may simply stop enforcing fixed Dirichlet boundary conditions at the crack tip node of the previous time step (i.e. displacements are no longer restrained). For a symmetric crack, we would need to update the geometry of the domain with a longer crack opening (and therefore generate a new mesh).
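In code, the nodal release bookkeeping reduces to a few products once the crack-tip nodal force components and the displacement variation have been extracted from two FEA solves. The numbers below are invented; note that G_1 (opening) pairs the x_2 force with the x_2 displacement variation, and G_2 (sliding) pairs the x_1 components, as in the formulas above:

```python
def nodal_release_G(F, du, delta_a):
    """Return (G1, G2, G3) from crack-tip nodal forces F = (F1, F2, F3)
    and displacement variations du = (du1, du2, du3), for element width delta_a.
    G1 pairs F2 with du2 (opening), G2 pairs F1 with du1 (sliding),
    and G3 pairs F3 with du3 (tearing)."""
    G1 = F[1] * du[1] / (2.0 * delta_a)
    G2 = F[0] * du[0] / (2.0 * delta_a)
    G3 = F[2] * du[2] / (2.0 * delta_a)
    return G1, G2, G3

# Invented nodal data from two consecutive FEA steps (1 mm element width):
G1, G2, G3 = nodal_release_G(F=(10.0, 200.0, 0.0), du=(1e-5, 4e-4, 0.0), delta_a=1e-3)
```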
=== Modified crack closure integral ===
Similar to the nodal release method, the modified crack closure integral (MCCI) is a method for calculating the energy release rate utilizing FEA nodal displacements u_i^j and forces F_i^j, where i represents the direction corresponding to the Cartesian basis vectors with origin at the crack tip, and j represents the nodal index. MCCI is more computationally efficient than the nodal release method because it only requires one analysis for each increment of crack growth.
A necessary condition for the MCCI method is uniform element length Δa along the crack face in the x_1-direction. Additionally, this method requires sufficient discretization such that the stress fields are self-similar over the length of one element. This implies that K(a + Δa) ≈ K(a) as the crack propagates. Below are examples of the MCCI method with two types of common finite elements.
==== 4-node elements ====
The 4-node square linear elements seen in Figure 2 have a distance between nodes j and j+1 equal to Δa. Consider a crack with its tip located at node j. Similar to the nodal release method, if the crack were to propagate one element length along the line of symmetry (parallel to the x_1-axis), the crack opening displacement would be the displacement at the previous crack tip, i.e. u^j, and the force at the new crack tip (j+1) would be F^{j+1}. Since the crack growth is assumed to be self-similar, the displacement at node j after the crack propagates is equal to the displacement at node j-1 before the crack propagates. The same concept applies to the forces at nodes j+1 and j. Utilizing the same method shown in the nodal release section, we recover the following equations for the energy release rate:
{\displaystyle G_{1}^{\text{MCCI}}={\frac {1}{2\Delta a}}F_{2}^{j}{\Delta u_{2}^{j-1}}}
{\displaystyle G_{2}^{\text{MCCI}}={\frac {1}{2\Delta a}}F_{1}^{j}{\Delta u_{1}^{j-1}}}
{\displaystyle G_{3}^{\text{MCCI}}={\frac {1}{2\Delta a}}F_{3}^{j}{\Delta u_{3}^{j-1}}}
where Δu_i^{j-1} = u_i^{(+)j-1} - u_i^{(-)j-1} (the displacements above and below the crack face, respectively). Because there is a line of symmetry parallel to the crack, we can assume u_i^{(+)j-1} = -u_i^{(-)j-1}. Thus, Δu_i^{j-1} = 2u_i^{(+)j-1}.
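A minimal sketch of the 4-node MCCI formulas, using the symmetry assumption that the crack opening is twice the upper-face displacement at the node behind the tip (the nodal values below are invented):

```python
def mcci_4node_G(F_tip, u_plus_prev, delta_a):
    """MCCI for 4-node elements: G_i combines the force at the crack-tip node j
    with the opening at node j-1 behind it. With a symmetry line along the crack,
    du^{j-1} = 2 * u^{(+)j-1}. Component order: (x1, x2, x3)."""
    du = [2.0 * u for u in u_plus_prev]       # crack opening displacement
    G1 = F_tip[1] * du[1] / (2.0 * delta_a)   # Mode I: F2 with du2
    G2 = F_tip[0] * du[0] / (2.0 * delta_a)   # Mode II: F1 with du1
    G3 = F_tip[2] * du[2] / (2.0 * delta_a)   # Mode III: F3 with du3
    return G1, G2, G3

# Invented nodal values: pure opening load, 1 mm elements
G1, G2, G3 = mcci_4node_G(F_tip=(0.0, 150.0, 0.0),
                          u_plus_prev=(0.0, 2e-4, 0.0),
                          delta_a=1e-3)
```

Unlike the nodal release sketch, everything here comes from a single FEA solve, which is the point of MCCI.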
==== 8-node elements ====
The 8-node rectangular elements seen in Figure 3 have quadratic basis functions. The process for calculating G is the same as for the 4-node elements, with the exception that Δa (the crack growth over one element) is now the distance from node j to j+2. Once again, making the assumption of self-similar straight crack growth, the energy release rate can be calculated utilizing the following equations:
{\displaystyle G_{1}^{\text{MCCI}}={\frac {1}{2\Delta a}}\left(F_{2}^{j}{\Delta u_{2}^{j-2}}+F_{2}^{j+1}{\Delta u_{2}^{j-1}}\right)}
{\displaystyle G_{2}^{\text{MCCI}}={\frac {1}{2\Delta a}}\left(F_{1}^{j}{\Delta u_{1}^{j-2}}+F_{1}^{j+1}{\Delta u_{1}^{j-1}}\right)}
{\displaystyle G_{3}^{\text{MCCI}}={\frac {1}{2\Delta a}}\left(F_{3}^{j}{\Delta u_{3}^{j-2}}+F_{3}^{j+1}{\Delta u_{3}^{j-1}}\right)}
Like with the nodal release method, the accuracy of MCCI is highly dependent on the level of discretization along the crack tip, i.e. G = lim_{Δa→0} G^MCCI.
Accuracy also depends on element choice. A mesh of 8-node quadratic elements can produce more accurate results than a mesh of 4-node linear elements with the same number of degrees of freedom in the mesh.
=== Domain integral approach for J ===
The J-integral may be calculated directly using the finite element mesh and shape functions. We consider a domain contour as shown in Figure 4 and choose an arbitrary smooth function {\tilde {q}}(x_{1},x_{2})=\sum _{i}N_{i}(x_{1},x_{2}){\tilde {q}}_{i} such that {\tilde {q}}=1 on Γ and {\tilde {q}}=0 on C_1.
For linear elastic cracks growing straight ahead, G = J. The energy release rate can then be calculated over the area bounded by the contour using an updated formulation:
{\displaystyle J=\int _{\mathcal {A}}(\sigma _{ij}u_{i,1}{\tilde {q}}_{,j}-W{\tilde {q}}_{,1})d{\mathcal {A}}}
The formula above may be applied to any annular area surrounding the crack tip (in particular, a set of neighboring elements can be used). This method is very accurate, even with a coarse mesh around the crack tip (one may choose an integration domain located far away, with stresses and displacements less sensitive to mesh refinement).
=== 2-D crack tip singular elements ===
The above-mentioned methods for calculating energy release rate asymptotically approach the actual solution with increased discretization but fail to fully capture the crack tip singularity. More accurate simulations can be performed by utilizing quarter-point elements around the crack tip. These elements have a built-in singularity which more accurately produces stress fields around the crack tip. The advantage of the quarter-point method is that it allows for coarser finite element meshes and greatly reduces computational cost. Furthermore, these elements are derived from small modifications to common finite elements without requiring special computational programs for analysis. For the purposes of this section elastic materials will be examined, although this method can be extended to elastic-plastic fracture mechanics. Assuming perfect elasticity the stress fields will experience a
1/√r crack tip singularity.
==== 8-node isoparametric element ====
The 8-node quadratic element is described by Figure 5 both in parent space, with local coordinates ξ and η, and as the mapped element in physical/global space, with coordinates x and y. The parent element is mapped from the local space to the physical space by the shape functions N_i(ξ, η) and the degree-of-freedom coordinates (x_i, y_i). The crack tip is located at ξ = -1, η = -1, or x = 0, y = 0:
{\displaystyle x(\xi ,\eta )=\sum _{i=1}^{8}N_{i}(\xi ,\eta )x_{i}}
{\displaystyle y(\xi ,\eta )=\sum _{i=1}^{8}N_{i}(\xi ,\eta )y_{i}}
In a similar way, displacements (defined as u ≡ u_1, v ≡ u_2) can also be mapped:
{\displaystyle u(\xi ,\eta )=\sum _{i=1}^{8}N_{i}(\xi ,\eta )u_{i}}
{\displaystyle v(\xi ,\eta )=\sum _{i=1}^{8}N_{i}(\xi ,\eta )v_{i}}
A property of shape functions in the finite element method is compact support, specifically the Kronecker delta property (i.e. N_i = 1 at node i and zero at all other nodes). This results in the following shape functions for the 8-node quadratic elements:
{\displaystyle N_{1}={\frac {-(\xi -1)(\eta -1)(1+\eta +\xi )}{4}}}
{\displaystyle N_{2}={\frac {(\xi +1)(\eta -1)(1+\eta -\xi )}{4}}}
{\displaystyle N_{3}={\frac {(\xi +1)(\eta +1)(-1+\eta +\xi )}{4}}}
{\displaystyle N_{4}={\frac {-(\xi -1)(\eta +1)(-1+\eta -\xi )}{4}}}
{\displaystyle N_{5}={\frac {(1-\xi ^{2})(1-\eta )}{2}}}
{\displaystyle N_{6}={\frac {(1+\xi )(1-\eta ^{2})}{2}}}
{\displaystyle N_{7}={\frac {(1-\xi ^{2})(1+\eta )}{2}}}
{\displaystyle N_{8}={\frac {(1-\xi )(1-\eta ^{2})}{2}}}
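The Kronecker delta property is easy to verify numerically: each N_i should equal 1 at its own node and 0 at the other seven, and the functions should sum to 1 everywhere (partition of unity). The node ordering below (corners 1-4 counterclockwise from (-1, -1), then mid-side nodes 5-8) is inferred from the formulas above:

```python
def shape_functions(xi, eta):
    """8-node serendipity element shape functions, in node order 1..8."""
    return [
        -(xi - 1) * (eta - 1) * (1 + eta + xi) / 4,
        (xi + 1) * (eta - 1) * (1 + eta - xi) / 4,
        (xi + 1) * (eta + 1) * (-1 + eta + xi) / 4,
        -(xi - 1) * (eta + 1) * (-1 + eta - xi) / 4,
        (1 - xi**2) * (1 - eta) / 2,
        (1 + xi) * (1 - eta**2) / 2,
        (1 - xi**2) * (1 + eta) / 2,
        (1 - xi) * (1 - eta**2) / 2,
    ]

# Corner nodes 1-4, then mid-side nodes 5-8
nodes = [(-1, -1), (1, -1), (1, 1), (-1, 1), (0, -1), (1, 0), (0, 1), (-1, 0)]

# N_i(node j) should be the Kronecker delta delta_ij
kronecker = all(
    abs(shape_functions(*nodes[j])[i] - (1.0 if i == j else 0.0)) < 1e-12
    for i in range(8) for j in range(8)
)
# The shape functions should sum to 1 at an arbitrary interior point
partition_of_unity = abs(sum(shape_functions(0.3, -0.7)) - 1.0) < 1e-12
```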
When considering a line in front of the crack that is collinear with the x-axis (i.e. N_i(ξ, η = -1)), all basis functions are zero except for N_1, N_2, and N_5:
{\displaystyle N_{1}(\xi ,-1)=-{\frac {\xi (1-\xi )}{2}}}
{\displaystyle N_{2}(\xi ,-1)={\frac {\xi (1+\xi )}{2}}}
{\displaystyle N_{5}(\xi ,-1)=(1-\xi ^{2})}
Calculating the normal strain involves using the chain rule to take the derivative of displacement with respect to x:
{\displaystyle \gamma _{xx}={\frac {\partial u}{\partial x}}=\sum _{i=1,2,5}{\frac {\partial N_{i}}{\partial \xi }}{\frac {\partial \xi }{\partial x}}u_{i}}
If the nodes are spaced evenly on the rectangular element, then the strain will not contain the singularity. By moving nodes 5 and 8 to a position a quarter of the element length (L/4) closer to the crack tip, as seen in Figure 5, the mapping from ξ to x becomes:
{\displaystyle x(\xi )={\frac {\xi (1+\xi )}{2}}L+(1-\xi ^{2}){\frac {L}{4}}}
Solving for ξ and taking the derivative results in:
{\displaystyle \xi (x)=-1+2{\sqrt {\frac {x}{L}}}}
{\displaystyle {\frac {\partial \xi }{\partial x}}={\frac {1}{\sqrt {xL}}}}
Plugging this result into the equation for strain, the final result is obtained:
{\displaystyle \gamma _{xx}={\frac {4}{L}}\left({\frac {u_{2}}{2}}-u_{5}\right)+{\frac {1}{\sqrt {xL}}}\left(2u_{5}-{\frac {u_{2}}{2}}\right)}
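Both the quarter-point mapping and the resulting strain formula can be checked numerically: the inverse mapping should round-trip, and a finite-difference derivative of the mapped edge displacement should match the closed-form strain (here u_1 = 0 is taken as the crack-tip displacement reference, matching the expression above; the nodal values are invented):

```python
import math

L = 1.0                       # element edge length
u2, u5 = 3.0e-3, 1.0e-3       # invented nodal displacements; u1 = 0 at the tip

def x_of_xi(xi):
    """Quarter-point mapping x(xi) along the element edge."""
    return xi * (1 + xi) / 2 * L + (1 - xi**2) * L / 4

def u_of_xi(xi):
    """Edge displacement built from N2 and N5 along eta = -1, with u1 = 0."""
    return xi * (1 + xi) / 2 * u2 + (1 - xi**2) * u5

def gamma_xx(x):
    """Closed-form strain from the text (with u1 = 0)."""
    return 4 / L * (u2 / 2 - u5) + (2 * u5 - u2 / 2) / math.sqrt(x * L)

x = 0.2
xi = -1 + 2 * math.sqrt(x / L)              # inverse mapping xi(x)

# Central finite difference of u(x) = u(xi(x)) for comparison with gamma_xx
h = 1e-6
xi_p = -1 + 2 * math.sqrt((x + h) / L)
xi_m = -1 + 2 * math.sqrt((x - h) / L)
dudx = (u_of_xi(xi_p) - u_of_xi(xi_m)) / (2 * h)
```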
Moving the mid-side nodes to the quarter-point position thus produces the correct 1/√r crack tip singularity.
==== Other element types ====
The rectangular element method does not allow singular elements to be easily meshed around the crack tip. This impedes the ability to capture the angular dependence of the stress fields, which is critical in determining the crack path. Also, except along the element edges, the 1/√r singularity exists only in a very small region near the crack tip. Figure 6 shows another quarter-point method for modeling this singularity. The 8-node rectangular element can be mapped into a triangle. This is done by collapsing the nodes on the line ξ = -1 to the mid-node location and shifting the mid-side nodes on η = ±1 to the quarter-point location. The collapsed rectangle can more easily surround the crack tip, but requires that the element edges be straight, or the accuracy of calculating the stress intensity factor will be reduced.
A better candidate for the quarter-point method is the natural triangle, as seen in Figure 7. The element's geometry allows the crack tip to be easily surrounded and simplifies meshing. Following the same procedure described above, the displacement and strain fields for the triangular elements are:
{\displaystyle u=u_{3}+{\sqrt {\frac {x}{L}}}\left[4u_{6}-3u_{3}-u_{1}\right]+{\frac {x}{L}}\left[2u_{1}+2u_{3}-4u_{6}\right]}
{\displaystyle \gamma _{xx}={\frac {\partial u}{\partial x}}={\frac {1}{\sqrt {xL}}}\left[-{\frac {u_{1}}{2}}-{\frac {3u_{3}}{2}}+2u_{6}\right]+{\frac {1}{L}}\left[2u_{1}+2u_{3}-4u_{6}\right]}
This method reproduces the first two terms of the Williams solutions with a constant and singular term.
An advantage of the quarter-point method is that it can be easily generalized to 3-dimensional models. This can greatly reduce computation compared to other 3-dimensional methods, but can lead to errors if the crack tip propagates with a large degree of curvature.
== See also ==
Fracture mechanics
Stress intensity factor
Fracture toughness
J-integral
== References ==
== External links ==
Nonlinear Fracture Mechanics Notes by Prof. John Hutchinson (from Harvard University)
Griffith's Strain Energy Release Rate on www.fracturemechanics.org
Molecular dynamics (MD) is a computer simulation method for analyzing the physical movements of atoms and molecules. The atoms and molecules are allowed to interact for a fixed period of time, giving a view of the dynamic "evolution" of the system. In the most common version, the trajectories of atoms and molecules are determined by numerically solving Newton's equations of motion for a system of interacting particles, where forces between the particles and their potential energies are often calculated using interatomic potentials or molecular mechanical force fields. The method is applied mostly in chemical physics, materials science, and biophysics.
Because molecular systems typically consist of a vast number of particles, it is impossible to determine the properties of such complex systems analytically; MD simulation circumvents this problem by using numerical methods. However, long MD simulations are mathematically ill-conditioned, generating cumulative errors in numerical integration that can be minimized with proper selection of algorithms and parameters, but not eliminated.
For systems that obey the ergodic hypothesis, the evolution of one molecular dynamics simulation may be used to determine the macroscopic thermodynamic properties of the system: the time averages of an ergodic system correspond to microcanonical ensemble averages. MD has also been termed "statistical mechanics by numbers" and "Laplace's vision of Newtonian mechanics" of predicting the future by animating nature's forces and allowing insight into molecular motion on an atomic scale.
== History ==
MD was originally developed in the early 1950s, following earlier successes with Monte Carlo simulations—which themselves date back to the eighteenth century, in the Buffon's needle problem for example—but was popularized for statistical mechanics at Los Alamos National Laboratory by Marshall Rosenbluth and Nicholas Metropolis in what is known today as the Metropolis–Hastings algorithm. Interest in the time evolution of N-body systems dates much earlier to the seventeenth century, beginning with Isaac Newton, and continued into the following century largely with a focus on celestial mechanics and issues such as the stability of the Solar System. Many of the numerical methods used today were developed during this time period, which predates the use of computers; for example, the most common integration algorithm used today, the Verlet integration algorithm, was used as early as 1791 by Jean Baptiste Joseph Delambre. Numerical calculations with these algorithms can be considered to be MD done "by hand".
As early as 1941, integration of the many-body equations of motion was carried out with analog computers. Some undertook the labor-intensive work of modeling atomic motion by constructing physical models, e.g., using macroscopic spheres. The aim was to arrange them in such a way as to replicate the structure of a liquid and use this to examine its behavior. J. D. Bernal described this process in 1962, writing: "... I took a number of rubber balls and stuck them together with rods of a selection of different lengths ranging from 2.75 to 4 inches. I tried to do this in the first place as casually as possible, working in my own office, being interrupted every five minutes or so and not remembering what I had done before the interruption." Following the discovery of microscopic particles and the development of computers, interest expanded beyond the proving ground of gravitational systems to the statistical properties of matter. In an attempt to understand the origin of irreversibility, Enrico Fermi proposed in 1953, and published in 1955, the use of the early computer MANIAC I, also at Los Alamos National Laboratory, to solve the time evolution of the equations of motion for a many-body system subject to several choices of force laws. Today, this seminal work is known as the Fermi–Pasta–Ulam–Tsingou problem. The time evolution of the energy from the original work is shown in the figure to the right.
In 1957, Berni Alder and Thomas Wainwright used an IBM 704 computer to simulate perfectly elastic collisions between hard spheres. In 1960, in perhaps the first realistic simulation of matter, J.B. Gibson et al. simulated radiation damage of solid copper by using a Born–Mayer type of repulsive interaction along with a cohesive surface force. In 1964, Aneesur Rahman published simulations of liquid argon that used a Lennard-Jones potential; calculations of system properties, such as the coefficient of self-diffusion, compared well with experimental data. Today, the Lennard-Jones potential is still one of the most frequently used intermolecular potentials. It is used for describing simple substances (a.k.a. Lennard-Jonesium) for conceptual and model studies and as a building block in many force fields of real substances.
== Areas of application and limits ==
First used in theoretical physics, the molecular dynamics method gained popularity in materials science soon afterward, and since the 1970s it has also been commonly used in biochemistry and biophysics. MD is frequently used to refine 3-dimensional structures of proteins and other macromolecules based on experimental constraints from X-ray crystallography or NMR spectroscopy. In physics, MD is used to examine the dynamics of atomic-level phenomena that cannot be observed directly, such as thin film growth and ion subplantation, and to examine the physical properties of nanotechnological devices that have not or cannot yet be created. In biophysics and structural biology, the method is frequently applied to study the motions of macromolecules such as proteins and nucleic acids, which can be useful for interpreting the results of certain biophysical experiments and for modeling interactions with other molecules, as in ligand docking. In principle, MD can be used for ab initio prediction of protein structure by simulating folding of the polypeptide chain from a random coil. MD can also be used to compute other thermodynamic properties such as drug solubilities and free energies of solvation including in polymers.
The results of MD simulations can be tested through comparison to experiments that measure molecular dynamics, of which a popular method is NMR spectroscopy. MD-derived structure predictions can be tested through community-wide experiments in Critical Assessment of Protein Structure Prediction (CASP), although the method has historically had limited success in this area. Michael Levitt, who shared the Nobel Prize partly for the application of MD to proteins, wrote in 1999 that CASP participants usually did not use the method due to "... a central embarrassment of molecular mechanics, namely that energy minimization or molecular dynamics generally leads to a model that is less like the experimental structure". Improvements in computational resources permitting more and longer MD trajectories, combined with modern improvements in the quality of force field parameters, have yielded some improvements in both structure prediction and homology model refinement, without reaching the point of practical utility in these areas; many identify force field parameters as a key area for further development.
MD simulation has been reported for pharmacophore development and drug design. For example, Pinto et al. implemented MD simulations of Bcl-xL complexes to calculate average positions of critical amino acids involved in ligand binding. Carlson et al. implemented molecular dynamics simulations to identify compounds that complement a receptor while causing minimal disruption to the conformation and flexibility of the active site. Snapshots of the protein at constant time intervals during the simulation were overlaid to identify conserved binding regions (conserved in at least three out of eleven frames) for pharmacophore development. Spyrakis et al. relied on a workflow of MD simulations, fingerprints for ligands and proteins (FLAP) and linear discriminant analysis (LDA) to identify the best ligand-protein conformations to act as pharmacophore templates based on retrospective ROC analysis of the resulting pharmacophores. In an attempt to ameliorate structure-based drug discovery modeling, vis-à-vis the need for many modeled compounds, Hatmal et al. proposed a combination of MD simulation and ligand-receptor intermolecular contacts analysis to discern critical intermolecular contacts (binding interactions) from redundant ones in a single ligand–protein complex. Critical contacts can then be converted into pharmacophore models that can be used for virtual screening.
An important factor is intramolecular hydrogen bonds, which are not explicitly included in modern force fields, but described as Coulomb interactions of atomic point charges. This is a crude approximation because hydrogen bonds have a partially quantum mechanical and chemical nature. Furthermore, electrostatic interactions are usually calculated using the dielectric constant of a vacuum, even though the surrounding aqueous solution has a much higher dielectric constant. Thus, using the macroscopic dielectric constant at short interatomic distances is questionable. Finally, van der Waals interactions in MD are usually described by Lennard-Jones potentials based on the Fritz London theory that is only applicable in a vacuum. However, all types of van der Waals forces are ultimately of electrostatic origin and therefore depend on dielectric properties of the environment. The direct measurement of attraction forces between different materials (as Hamaker constant) shows that "the interaction between hydrocarbons across water is about 10% of that across vacuum". The environment-dependence of van der Waals forces is neglected in standard simulations, but can be included by developing polarizable force fields.
== Design constraints ==
The design of a molecular dynamics simulation should account for the available computational power. Simulation size (n = number of particles), timestep, and total time duration must be selected so that the calculation can finish within a reasonable time period. However, the simulations should be long enough to be relevant to the time scales of the natural processes being studied. To make statistically valid conclusions from the simulations, the time span simulated should match the kinetics of the natural process. Otherwise, it is analogous to making conclusions about how a human walks when only looking at less than one footstep. Most scientific publications about the dynamics of proteins and DNA use data from simulations spanning nanoseconds (10−9 s) to microseconds (10−6 s). To obtain these simulations, several CPU-days to CPU-years are needed. Parallel algorithms allow the load to be distributed among CPUs; an example is the spatial or force decomposition algorithm.
During a classical MD simulation, the most CPU-intensive task is the evaluation of the potential as a function of the particles' internal coordinates. Within that energy evaluation, the most expensive part is the non-bonded or non-covalent one. In big O notation, common molecular dynamics simulations scale by O(n²) if all pair-wise electrostatic and van der Waals interactions must be accounted for explicitly. This computational cost can be reduced by employing electrostatics methods such as particle mesh Ewald summation (O(n log n)), particle–particle–particle–mesh (P3M), or good spherical cutoff methods (O(n)).
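To make the scaling argument concrete, the following sketch compares a brute-force O(n²) pair search against a cell-list search, which is the standard way cutoff methods reach roughly O(n) at fixed density. It is a simplified illustration (no periodic wrapping, pure Python, invented test data), not production MD code.

```python
import itertools
import random

def brute_force_pairs(pos, rc):
    """O(n^2): test every particle pair against the cutoff rc."""
    n, rc2 = len(pos), rc * rc
    pairs = set()
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((pos[i][k] - pos[j][k]) ** 2 for k in range(3))
            if d2 < rc2:
                pairs.add((i, j))
    return pairs

def cell_list_pairs(pos, rc, box):
    """Roughly O(n) at fixed density: bin particles into cells of side
    >= rc, then test only pairs in the same or adjacent cells.
    (No periodic wrap in this sketch.)"""
    ncell = max(1, int(box / rc))
    side = box / ncell
    cells = {}
    for i, p in enumerate(pos):
        key = tuple(min(int(c / side), ncell - 1) for c in p)
        cells.setdefault(key, []).append(i)
    rc2, pairs = rc * rc, set()
    for key, members in cells.items():
        for d in itertools.product((-1, 0, 1), repeat=3):
            nb = tuple(key[k] + d[k] for k in range(3))
            if nb not in cells:
                continue
            for i in members:
                for j in cells[nb]:
                    if j <= i:
                        continue
                    d2 = sum((pos[i][k] - pos[j][k]) ** 2 for k in range(3))
                    if d2 < rc2:
                        pairs.add((i, j))
    return pairs

random.seed(0)
box, rc = 10.0, 2.5
pos = [[random.uniform(0, box) for _ in range(3)] for _ in range(200)]
# Both searches find exactly the same interacting pairs
assert cell_list_pairs(pos, rc, box) == brute_force_pairs(pos, rc)
```

The cell list touches only a constant number of neighbor cells per particle, which is why its cost grows linearly with particle count when the density is fixed.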
Another factor that impacts total CPU time needed by a simulation is the size of the integration timestep. This is the time length between evaluations of the potential. The timestep must be chosen small enough to avoid discretization errors (i.e., smaller than the period related to fastest vibrational frequency in the system). Typical timesteps for classical MD are on the order of 1 femtosecond (10−15 s). This value may be extended by using algorithms such as the SHAKE constraint algorithm, which fix the vibrations of the fastest atoms (e.g., hydrogens) into place. Multiple time scale methods have also been developed, which allow extended times between updates of slower long-range forces.
For simulating molecules in a solvent, a choice should be made between an explicit and implicit solvent. Explicit solvent particles (such as the TIP3P, SPC/E and SPC-f water models) must be calculated expensively by the force field, while implicit solvents use a mean-field approach. Using an explicit solvent is computationally expensive, requiring inclusion of roughly ten times more particles in the simulation. But the granularity and viscosity of explicit solvent is essential to reproduce certain properties of the solute molecules. This is especially important to reproduce chemical kinetics.
In all kinds of molecular dynamics simulations, the simulation box size must be large enough to avoid boundary condition artifacts. Boundary conditions are often treated by choosing fixed values at the edges (which may cause artifacts), or by employing periodic boundary conditions in which one side of the simulation loops back to the opposite side, mimicking a bulk phase (which may cause artifacts too).
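The arithmetic behind periodic boundary conditions is simple; a minimal sketch of the two standard operations (coordinate wrapping and the minimum-image convention for separations) in one dimension, with illustrative numbers:

```python
def wrap(x, box):
    """Map a coordinate back into [0, box): a particle leaving one side
    of the simulation box re-enters on the opposite side."""
    return x % box

def minimum_image(dx, box):
    """Replace a separation component by its nearest periodic image,
    so no measured separation exceeds box/2 per dimension."""
    return dx - box * round(dx / box)

box = 10.0
assert abs(wrap(10.3, box) - 0.3) < 1e-9        # re-enters near the origin
assert minimum_image(9.0, box) == -1.0          # 0.5 and 9.5 are only 1.0 apart
```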
=== Microcanonical ensemble (NVE) ===
In the microcanonical ensemble, the system is isolated from changes in moles (N), volume (V), and energy (E). It corresponds to an adiabatic process with no heat exchange. A microcanonical molecular dynamics trajectory may be seen as an exchange of potential and kinetic energy, with total energy being conserved. For a system of N particles with coordinates
X and velocities V, the following pair of first-order differential equations may be written in Newton's notation as
{\displaystyle F(X)=-\nabla U(X)=M{\dot {V}}(t)}
{\displaystyle V(t)={\dot {X}}(t).}
The potential energy function
U(X) of the system is a function of the particle coordinates X. It is referred to simply as the potential in physics, or the force field in chemistry. The first equation comes from Newton's laws of motion; the force F acting on each particle in the system can be calculated as the negative gradient of U(X).
For every time step, each particle's position
X and velocity V may be integrated with a symplectic integrator method such as Verlet integration. The time evolution of X and V is called a trajectory. Given the initial positions (e.g., from theoretical knowledge) and velocities (e.g., randomized Gaussian), we can calculate all future (or past) positions and velocities.
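The scheme above can be sketched in a few lines. The following is a minimal velocity-Verlet integrator (a common symplectic Verlet variant) applied to a 1D harmonic oscillator; the potential and all parameters are illustrative stand-ins for a real force field, and the check at the end shows the NVE hallmark that total energy stays conserved.

```python
def velocity_verlet(x, v, force, mass, dt, steps):
    """Symplectic velocity-Verlet integration of one degree of freedom."""
    a = force(x) / mass
    traj = [(x, v)]
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x) / mass              # force at the new position
        v = v + 0.5 * (a + a_new) * dt       # velocity update (averaged a)
        a = a_new
        traj.append((x, v))
    return traj

# Illustrative 1D harmonic oscillator: U(x) = 0.5*k*x^2, F(x) = -k*x
k, m, dt = 1.0, 1.0, 0.01
traj = velocity_verlet(1.0, 0.0, lambda x: -k * x, m, dt, 10_000)
energy = lambda x, v: 0.5 * m * v * v + 0.5 * k * x * x
# Total (kinetic + potential) energy is conserved over many periods
assert abs(energy(*traj[-1]) - energy(*traj[0])) < 1e-4
```

The same position/velocity update pattern carries over to N particles in 3D, with the force array coming from the gradient of the force field.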
One frequent source of confusion is the meaning of temperature in MD. Commonly we have experience with macroscopic temperatures, which involve a huge number of particles, but temperature is a statistical quantity. If there is a large enough number of atoms, statistical temperature can be estimated from the instantaneous temperature, which is found by equating the kinetic energy of the system to nkBT/2, where n is the number of degrees of freedom of the system.
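The instantaneous-temperature estimate described above is a one-liner once the kinetic energy is known. The sketch below assumes n = 3N degrees of freedom (no constraints or removed center-of-mass motion) and tests it against velocities drawn from a Maxwell–Boltzmann distribution for argon; the element and target temperature are illustrative choices.

```python
import random

K_B = 1.380649e-23  # Boltzmann constant, J/K

def instantaneous_temperature(masses, velocities):
    """T_inst from equating kinetic energy to (n/2) k_B T,
    with n = 3N degrees of freedom assumed here."""
    ke = 0.5 * sum(m * sum(c * c for c in v)
                   for m, v in zip(masses, velocities))
    n_dof = 3 * len(masses)
    return 2.0 * ke / (n_dof * K_B)

# Maxwell-Boltzmann velocities for argon at 300 K (illustrative test)
random.seed(42)
m_ar = 39.948 * 1.66054e-27          # argon mass in kg
sigma = (K_B * 300.0 / m_ar) ** 0.5  # per-component velocity spread
masses = [m_ar] * 2000
vels = [[random.gauss(0.0, sigma) for _ in range(3)] for _ in masses]
T_inst = instantaneous_temperature(masses, vels)
# With 6000 degrees of freedom the estimate sits close to 300 K,
# fluctuating as expected for a finite system
assert 250.0 < T_inst < 350.0
```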
A temperature-related phenomenon arises due to the small number of atoms that are used in MD simulations. For example, consider simulating the growth of a copper film starting with a substrate containing 500 atoms and a deposition energy of 100 eV. In the real world, the 100 eV from the deposited atom would rapidly be transported through and shared among a large number of atoms (
10¹⁰ or more) with no big change in temperature. When there are only 500 atoms, however, the substrate is almost immediately vaporized by the deposition. Something similar happens in biophysical simulations. The temperature of the system in NVE is naturally raised when macromolecules such as proteins undergo exothermic conformational changes and binding.
=== Canonical ensemble (NVT) ===
In the canonical ensemble, amount of substance (N), volume (V) and temperature (T) are conserved. It is also sometimes called constant temperature molecular dynamics (CTMD). In NVT, the energy of endothermic and exothermic processes is exchanged with a thermostat.
A variety of thermostat algorithms are available to add and remove energy from the boundaries of an MD simulation in a more or less realistic way, approximating the canonical ensemble. Popular methods to control temperature include velocity rescaling, the Nosé–Hoover thermostat, Nosé–Hoover chains, the Berendsen thermostat, the Andersen thermostat and Langevin dynamics. The Berendsen thermostat might introduce the flying ice cube effect, which leads to unphysical translations and rotations of the simulated system.
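As a concrete example of the weak-coupling idea, here is a minimal sketch of the Berendsen rescaling factor; in a real code the factor multiplies every particle velocity each step, and all numbers below are illustrative (arbitrary time units). The toy loop shows the characteristic exponential relaxation toward the bath temperature.

```python
def berendsen_scale(T_inst, T_bath, dt, tau):
    """Berendsen weak-coupling factor lambda: scaling velocities by lambda
    each step relaxes the instantaneous temperature toward T_bath with
    time constant tau (approximates, but does not exactly sample, NVT)."""
    return (1.0 + (dt / tau) * (T_bath / T_inst - 1.0)) ** 0.5

# Toy relaxation: kinetic energy, hence T, scales as lambda^2 per step
T, T_bath, dt, tau = 100.0, 300.0, 0.001, 0.1
for _ in range(2000):
    T *= berendsen_scale(T, T_bath, dt, tau) ** 2
assert abs(T - T_bath) < 1e-3   # relaxed onto the bath temperature
```

Because the rescaling suppresses fluctuations, the Berendsen scheme does not generate a true canonical distribution; that is one motivation for Nosé–Hoover-type thermostats.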
It is not trivial to obtain a canonical ensemble distribution of conformations and velocities using these algorithms. How this depends on system size, thermostat choice, thermostat parameters, time step and integrator is the subject of many articles in the field.
=== Isothermal–isobaric (NPT) ensemble ===
In the isothermal–isobaric ensemble, amount of substance (N), pressure (P) and temperature (T) are conserved. In addition to a thermostat, a barostat is needed. It corresponds most closely to laboratory conditions with a flask open to ambient temperature and pressure.
In the simulation of biological membranes, isotropic pressure control is not appropriate. For lipid bilayers, pressure control occurs under constant membrane area (NPAT) or constant surface tension γ (NPγT).
=== Generalized ensembles ===
The replica exchange method is a generalized ensemble. It was originally created to deal with the slow dynamics of disordered spin systems. It is also called parallel tempering. The replica exchange MD (REMD) formulation tries to overcome the multiple-minima problem by exchanging the temperature of non-interacting replicas of the system running at several temperatures.
== Potentials in MD simulations ==
A molecular dynamics simulation requires the definition of a potential function, or a description of the terms by which the particles in the simulation will interact. In chemistry and biology this is usually referred to as a force field and in materials physics as an interatomic potential. Potentials may be defined at many levels of physical accuracy; those most commonly used in chemistry are based on molecular mechanics and embody a classical mechanics treatment of particle-particle interactions that can reproduce structural and conformational changes but usually cannot reproduce chemical reactions.
The reduction from a fully quantum description to a classical potential entails two main approximations. The first one is the Born–Oppenheimer approximation, which states that the dynamics of electrons are so fast that they can be considered to react instantaneously to the motion of their nuclei. As a consequence, they may be treated separately. The second one treats the nuclei, which are much heavier than electrons, as point particles that follow classical Newtonian dynamics. In classical molecular dynamics, the effect of the electrons is approximated as one potential energy surface, usually representing the ground state.
When finer levels of detail are needed, potentials based on quantum mechanics are used; some methods attempt to create hybrid classical/quantum potentials where the bulk of the system is treated classically but a small region is treated as a quantum system, usually undergoing a chemical transformation.
=== Empirical potentials ===
Empirical potentials used in chemistry are frequently called force fields, while those used in materials physics are called interatomic potentials.
Most force fields in chemistry are empirical and consist of a summation of bonded forces associated with chemical bonds, bond angles, and bond dihedrals, and non-bonded forces associated with van der Waals forces and electrostatic charge. Empirical potentials represent quantum-mechanical effects in a limited way through ad hoc functional approximations. These potentials contain free parameters such as atomic charge, van der Waals parameters reflecting estimates of atomic radius, and equilibrium bond length, angle, and dihedral; these are obtained by fitting against detailed electronic calculations (quantum chemical simulations) or experimental physical properties such as elastic constants, lattice parameters and spectroscopic measurements.
Because of the non-local nature of non-bonded interactions, they involve at least weak interactions between all particles in the system. Their calculation is normally the bottleneck in the speed of MD simulations. To lower the computational cost, force fields employ numerical approximations such as shifted cutoff radii, reaction field algorithms, particle mesh Ewald summation, or the newer particle–particle–particle–mesh (P3M).
Chemistry force fields commonly employ preset bonding arrangements (an exception being ab initio dynamics), and thus are unable to model the process of chemical bond breaking and reactions explicitly. On the other hand, many of the potentials used in physics, such as those based on the bond order formalism, can describe several different coordinations of a system and bond breaking. Examples of such potentials include the Brenner potential for hydrocarbons and its further developments for the C-Si-H and C-O-H systems. The ReaxFF potential can be considered a fully reactive hybrid between bond order potentials and chemistry force fields.
=== Pair potentials versus many-body potentials ===
The potential functions representing the non-bonded energy are formulated as a sum over interactions between the particles of the system. The simplest choice, employed in many popular force fields, is the "pair potential", in which the total potential energy can be calculated from the sum of energy contributions between pairs of atoms. Therefore, these force fields are also called "additive force fields". An example of such a pair potential is the non-bonded Lennard-Jones potential (also termed the 6–12 potential), used for calculating van der Waals forces.
{\displaystyle U(r)=4\varepsilon \left[\left({\frac {\sigma }{r}}\right)^{12}-\left({\frac {\sigma }{r}}\right)^{6}\right]}
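The 6–12 form is simple enough to verify directly. The sketch below evaluates it in reduced units (ε = σ = 1, an illustrative convention, not tied to any particular substance) and checks its two defining features: the potential crosses zero at r = σ and reaches its minimum of −ε at r = 2^(1/6)·σ.

```python
def lennard_jones(r, epsilon, sigma):
    """6-12 Lennard-Jones pair potential:
    U(r) = 4*epsilon*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

eps, sig = 1.0, 1.0                   # reduced units (illustrative)
r_min = 2 ** (1 / 6) * sig            # location of the potential minimum
assert abs(lennard_jones(sig, eps, sig)) < 1e-12          # U(sigma) = 0
assert abs(lennard_jones(r_min, eps, sig) + eps) < 1e-12  # U(r_min) = -eps
assert lennard_jones(0.9 * sig, eps, sig) > 0.0           # steep repulsion
```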
Another example is the Born (ionic) model of the ionic lattice. The first term in the next equation is Coulomb's law for a pair of ions, the second term is the short-range repulsion explained by the Pauli exclusion principle, and the final term is the dispersion interaction term. Usually, a simulation only includes the dipolar term, although sometimes the quadrupolar term is also included. When n_l = 6, this potential is also called the Coulomb–Buckingham potential.
{\displaystyle U_{ij}(r_{ij})={\frac {z_{i}z_{j}}{4\pi \epsilon _{0}}}{\frac {1}{r_{ij}}}+A_{l}\exp {\frac {-r_{ij}}{p_{l}}}+C_{l}r_{ij}^{-n_{l}}+\cdots }
In many-body potentials, the potential energy includes the effects of three or more particles interacting with each other. In simulations with pairwise potentials, global interactions in the system also exist, but they occur only through pairwise terms. In many-body potentials, the potential energy cannot be found by a sum over pairs of atoms, as these interactions are calculated explicitly as a combination of higher-order terms. In the statistical view, the dependency between the variables cannot in general be expressed using only pairwise products of the degrees of freedom. For example, the Tersoff potential, which was originally used to simulate carbon, silicon, and germanium, and has since been used for a wide range of other materials, involves a sum over groups of three atoms, with the angles between the atoms being an important factor in the potential. Other examples are the embedded-atom method (EAM), the EDIP, and the Tight-Binding Second Moment Approximation (TBSMA) potentials, where the electron density of states in the region of an atom is calculated from a sum of contributions from surrounding atoms, and the potential energy contribution is then a function of this sum.
=== Semi-empirical potentials ===
Semi-empirical potentials make use of the matrix representation from quantum mechanics. However, the values of the matrix elements are found through empirical formulae that estimate the degree of overlap of specific atomic orbitals. The matrix is then diagonalized to determine the occupancy of the different atomic orbitals, and empirical formulae are used once again to determine the energy contributions of the orbitals.
There are a wide variety of semi-empirical potentials, termed tight-binding potentials, which vary according to the atoms being modeled.
=== Polarizable potentials ===
Most classical force fields implicitly include the effect of polarizability, e.g., by scaling up the partial charges obtained from quantum chemical calculations. These partial charges are stationary with respect to the mass of the atom. But molecular dynamics simulations can explicitly model polarizability with the introduction of induced dipoles through different methods, such as Drude particles or fluctuating charges. This allows for a dynamic redistribution of charge between atoms which responds to the local chemical environment.
For many years, polarizable MD simulations have been touted as the next generation. For homogenous liquids such as water, increased accuracy has been achieved through the inclusion of polarizability. Some promising results have also been achieved for proteins. However, it is still uncertain how to best approximate polarizability in a simulation. The point becomes more important when a particle experiences different environments during its simulation trajectory, e.g. translocation of a drug through a cell membrane.
=== Potentials in ab initio methods ===
In classical molecular dynamics, one potential energy surface (usually the ground state) is represented in the force field. This is a consequence of the Born–Oppenheimer approximation. In excited states, chemical reactions or when a more accurate representation is needed, electronic behavior can be obtained from first principles using a quantum mechanical method, such as density functional theory. This is named Ab Initio Molecular Dynamics (AIMD). Due to the cost of treating the electronic degrees of freedom, the computational burden of these simulations is far higher than classical molecular dynamics. For this reason, AIMD is typically limited to smaller systems and shorter times.
Ab initio quantum mechanical and chemical methods may be used to calculate the potential energy of a system on the fly, as needed for conformations in a trajectory. This calculation is usually made in the close neighborhood of the reaction coordinate. Although various approximations may be used, these are based on theoretical considerations, not on empirical fitting. Ab initio calculations produce a vast amount of information that is not available from empirical methods, such as density of electronic states or other electronic properties. A significant advantage of using ab initio methods is the ability to study reactions that involve breaking or formation of covalent bonds, which correspond to multiple electronic states. Moreover, ab initio methods also allow recovering effects beyond the Born–Oppenheimer approximation using approaches like mixed quantum-classical dynamics.
=== Hybrid QM/MM ===
QM (quantum-mechanical) methods are very powerful. However, they are computationally expensive, while the MM (classical or molecular mechanics) methods are fast but suffer from several limits (require extensive parameterization; energy estimates obtained are not very accurate; cannot be used to simulate reactions where covalent bonds are broken/formed; and are limited in their abilities for providing accurate details regarding the chemical environment). A new class of method has emerged that combines the good points of QM (accuracy) and MM (speed) calculations. These methods are termed mixed or hybrid quantum-mechanical and molecular mechanics methods (hybrid QM/MM).
The most important advantage of hybrid QM/MM methods is speed. The cost of doing classical molecular dynamics (MM) in the most straightforward case scales O(n²), where n is the number of atoms in the system. This is mainly due to the electrostatic interactions term (every particle interacts with every other particle). However, the use of a cutoff radius, periodic pair-list updates and, more recently, variations of the particle-mesh Ewald (PME) method have reduced this to between O(n) and O(n²). In other words, if a system with twice as many atoms is simulated, then it would take between two and four times as much computing power. On the other hand, the simplest ab initio calculations typically scale O(n³) or worse (restricted Hartree–Fock calculations have been suggested to scale ~O(n^2.7)). To overcome the limit, a small part of the system is treated quantum-mechanically (typically the active site of an enzyme) and the remaining system is treated classically.
In more sophisticated implementations, QM/MM methods exist to treat both light nuclei susceptible to quantum effects (such as hydrogens) and electronic states. This allows generating hydrogen wave-functions (similar to electronic wave-functions). This methodology has been useful in investigating phenomena such as hydrogen tunneling. One example where QM/MM methods have provided new discoveries is the calculation of hydride transfer in the enzyme liver alcohol dehydrogenase. In this case, quantum tunneling is important for the hydrogen, as it determines the reaction rate.
=== Coarse-graining and reduced representations ===
At the other end of the detail scale are coarse-grained and lattice models. Instead of explicitly representing every atom of the system, one uses "pseudo-atoms" to represent groups of atoms. MD simulations on very large systems may require such large computer resources that they cannot easily be studied by traditional all-atom methods. Similarly, simulations of processes on long timescales (beyond about 1 microsecond) are prohibitively expensive, because they require so many time steps. In these cases, one can sometimes tackle the problem by using reduced representations, which are also called coarse-grained models.
Examples of coarse-graining (CG) methods are discontinuous molecular dynamics (CG-DMD) and Go-models. Coarse-graining is sometimes done using larger pseudo-atoms. Such united-atom approximations have been used in MD simulations of biological membranes. Implementation of such an approach on systems where electrical properties are of interest can be challenging owing to the difficulty of assigning a proper charge distribution to the pseudo-atoms. The aliphatic tails of lipids are represented by a few pseudo-atoms by gathering two to four methylene groups into each pseudo-atom.
The parameterization of these very coarse-grained models must be done empirically, by matching the behavior of the model to appropriate experimental data or all-atom simulations. Ideally, these parameters should account for both enthalpic and entropic contributions to free energy in an implicit way. When coarse-graining is done at higher levels, the accuracy of the dynamic description may be less reliable. But very coarse-grained models have been used successfully to examine a wide range of questions in structural biology, liquid crystal organization, and polymer glasses.
Examples of applications of coarse-graining:
protein folding and protein structure prediction studies are often carried out using one, or a few, pseudo-atoms per amino acid;
liquid crystal phase transitions have been examined in confined geometries and/or during flow using the Gay-Berne potential, which describes anisotropic species;
polymer glasses during deformation have been studied using simple harmonic or FENE springs to connect spheres described by the Lennard-Jones potential;
DNA supercoiling has been investigated using 1–3 pseudo-atoms per basepair, and at even lower resolution;
Packaging of double-helical DNA into bacteriophage has been investigated with models where one pseudo-atom represents one turn (about 10 basepairs) of the double helix;
RNA structure in the ribosome and other large systems has been modeled with one pseudo-atom per nucleotide.
The simplest form of coarse-graining is the united atom (sometimes called extended atom) and was used in most early MD simulations of proteins, lipids, and nucleic acids. For example, instead of treating all four atoms of a CH3 methyl group explicitly (or all three atoms of CH2 methylene group), one represents the whole group with one pseudo-atom. It must, of course, be properly parameterized so that its van der Waals interactions with other groups have the proper distance-dependence. Similar considerations apply to the bonds, angles, and torsions in which the pseudo-atom participates. In this kind of united atom representation, one typically eliminates all explicit hydrogen atoms except those that have the capability to participate in hydrogen bonds (polar hydrogens). An example of this is the CHARMM 19 force-field.
The polar hydrogens are usually retained in the model, because proper treatment of hydrogen bonds requires a reasonably accurate description of the directionality and the electrostatic interactions between the donor and acceptor groups. A hydroxyl group, for example, can be both a hydrogen bond donor, and a hydrogen bond acceptor, and it would be impossible to treat this with one OH pseudo-atom. About half the atoms in a protein or nucleic acid are non-polar hydrogens, so the use of united atoms can provide a substantial savings in computer time.
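The computational payoff of united atoms can be estimated with simple counting. The sketch below is a back-of-the-envelope illustration only (the atom count is an arbitrary assumption, and real MD codes use neighbour lists rather than an all-pairs loop): halving the particle count cuts the number of nonbonded pairs roughly fourfold.

```python
# Rough sketch (illustrative assumption: a 10,000-atom all-atom system, with
# about half the atoms being non-polar hydrogens that a united-atom model folds
# into heavy-atom pseudo-atoms). The nonbonded pair loop scales as ~N^2.

n_all_atom = 10_000                  # assumed atom count of a small protein system
n_united = n_all_atom // 2           # non-polar hydrogens merged into pseudo-atoms

pairs_all = n_all_atom * (n_all_atom - 1) // 2
pairs_united = n_united * (n_united - 1) // 2
speedup = pairs_all / pairs_united   # roughly 4x fewer pair interactions
```

This counting argument is why the text above notes that united atoms can provide substantial savings in computer time.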
=== Machine Learning Force Fields ===
Machine Learning Force Fields (MLFFs) represent one approach to modeling interatomic interactions in molecular dynamics simulations. MLFFs can achieve accuracy close to that of ab initio methods; once trained, they are much faster than direct quantum mechanical calculations. MLFFs address the limitations of traditional force fields by learning complex potential energy surfaces directly from high-level quantum mechanical data. Several software packages now support MLFFs, including VASP and open-source libraries such as DeePMD-kit and SchNetPack.
== Incorporating solvent effects ==
In many simulations of a solute–solvent system the main focus is on the behavior of the solute, with little interest in the behavior of the solvent, particularly of those solvent molecules residing in regions far from the solute molecule. Solvents may influence the dynamic behavior of solutes via random collisions and by imposing a frictional drag on the motion of the solute through the solvent. The use of non-rectangular periodic boundary conditions, stochastic boundaries and solvent shells can all help reduce the number of solvent molecules required and enable a larger proportion of the computing time to be spent instead on simulating the solute. It is also possible to incorporate the effects of a solvent without needing any explicit solvent molecules present. One example of this approach is to use a potential of mean force (PMF), which describes how the free energy changes as a particular coordinate is varied. The free energy change described by the PMF contains the averaged effects of the solvent.
Without incorporating the effects of solvent, simulations of macromolecules (such as proteins) may yield unrealistic behavior, and even small molecules may adopt more compact conformations due to favourable van der Waals forces and electrostatic interactions that would be dampened in the presence of a solvent.
== Long-range forces ==
A long-range interaction is an interaction in which the spatial interaction falls off no faster than {\displaystyle r^{-d}}, where {\displaystyle d} is the dimensionality of the system. Examples include charge–charge interactions between ions and dipole–dipole interactions between molecules. Modelling these forces presents quite a challenge, as they are significant over a distance which may be larger than half the box length in simulations of many thousands of particles. Though one solution would be to significantly increase the box length, this brute-force approach is less than ideal, as the simulation would become computationally very expensive. Spherically truncating the potential is also out of the question, as unrealistic behaviour may be observed when the distance is close to the cut-off distance.
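The {\displaystyle r^{-d}} criterion can be illustrated numerically. The sketch below is illustrative only (the cut-off radius and the integration ranges are arbitrary assumptions): it estimates the three-dimensional tail contribution to the energy beyond a cut-off, which converges for a dispersion-like r⁻⁶ potential but keeps growing for a Coulomb-like r⁻¹ potential. This divergence is why spherical truncation fails for long-range forces.

```python
# Sketch: in 3D, the energy contribution from beyond a cut-off scales as the
# integral of r^2 * u(r). For u ~ r^-6 (short-range dispersion) this tail
# converges; for u ~ r^-1 (Coulomb, with d = 3 > 1) it diverges.

def tail_integral(exponent, r_cut, r_max, n=100_000):
    """Trapezoidal estimate of the integral of r^2 * r^(-exponent) over [r_cut, r_max]."""
    dr = (r_max - r_cut) / n
    total = 0.0
    for i in range(n + 1):
        r = r_cut + i * dr
        w = 0.5 if i in (0, n) else 1.0
        total += w * r ** 2 * r ** (-exponent) * dr
    return total

# Dispersion-like tail: nearly independent of how far we integrate.
tail_lj_1e3 = tail_integral(6, 2.5, 1e3)
tail_lj_1e4 = tail_integral(6, 2.5, 1e4)
# Coulomb-like tail: grows roughly as r_max^2 / 2, so truncation is unsafe.
tail_coul_1e3 = tail_integral(1, 2.5, 1e3)
tail_coul_1e4 = tail_integral(1, 2.5, 1e4)
```

Methods such as Ewald summation exist precisely to handle the divergent Coulomb-like case correctly under periodic boundary conditions.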
== Steered molecular dynamics (SMD) ==
Steered molecular dynamics (SMD) simulations, or force probe simulations, apply forces to a protein in order to manipulate its structure by pulling it along desired degrees of freedom. These experiments can be used to reveal structural changes in a protein at the atomic level. SMD is often used to simulate events such as mechanical unfolding or stretching.
There are two typical protocols of SMD: one in which pulling velocity is held constant, and one in which applied force is constant. Typically, part of the studied system (e.g., an atom in a protein) is restrained by a harmonic potential. Forces are then applied to specific atoms at either a constant velocity or a constant force. Umbrella sampling is used to move the system along the desired reaction coordinate by varying, for example, the forces, distances, and angles manipulated in the simulation. Through umbrella sampling, all of the system's configurations—both high-energy and low-energy—are adequately sampled. Then, each configuration's change in free energy can be calculated as the potential of mean force. A popular method of computing PMF is through the weighted histogram analysis method (WHAM), which analyzes a series of umbrella sampling simulations.
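A minimal numerical sketch of the constant-velocity protocol is given below. All parameters and the one-dimensional double-well potential are illustrative assumptions, not a published SMD setup: a particle is restrained by a harmonic spring whose anchor moves at constant speed, and the spring force is recorded as the pulling proceeds.

```python
import math, random

# Toy 1-D constant-velocity SMD (all values are illustrative assumptions):
# a Brownian particle in the double well U(x) = (x^2 - 1)^2 is dragged by a
# harmonic restraint whose centre ("anchor") moves at constant velocity v.

def smd_constant_velocity(k=5.0, v=0.05, dt=1e-3, steps=20_000,
                          gamma=1.0, kT=0.1, seed=0):
    rng = random.Random(seed)
    x = -1.0                                   # start in the left well
    forces = []
    for step in range(steps):
        anchor = -1.0 + v * step * dt          # moving restraint centre
        f_spring = k * (anchor - x)            # pulling force from the restraint
        f_pot = -4.0 * x * (x * x - 1.0)       # -dU/dx for the double well
        noise = math.sqrt(2.0 * kT * dt / gamma) * rng.gauss(0.0, 1.0)
        x += (f_pot + f_spring) * dt / gamma + noise   # overdamped Langevin step
        forces.append(f_spring)
    return x, max(forces)

x_final, f_peak = smd_constant_velocity()
# The pull drags the particle away from x = -1; the peak spring force is a
# crude proxy for a rupture/barrier-crossing force in a real SMD experiment.
```

In real applications this force-versus-time record, collected over many umbrella windows, is what methods such as WHAM post-process into a potential of mean force.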
Many important applications of SMD lie in the fields of drug discovery and the biomolecular sciences. For example, SMD has been used to investigate the stability of Alzheimer's protofibrils, to study protein–ligand interaction in cyclin-dependent kinase 5, and even to show the effect of an electric field on a thrombin (protein) and aptamer (nucleotide) complex, among many other studies.
== Examples of applications ==
Molecular dynamics is used in many fields of science.
The first MD simulation of a simplified biological folding process was published in 1975. This simulation, published in Nature, paved the way for the vast area of modern computational protein folding.
The first MD simulation of a biological process was published in 1976. This simulation, published in Nature, paved the way for understanding protein motion as essential in function and not just accessory.
MD is the standard method to treat collision cascades in the heat spike regime, i.e., the effects that energetic neutron and ion irradiation have on solids and solid surfaces.
The following biophysical examples illustrate notable efforts to produce simulations of systems of very large size (a complete virus) or very long simulation times (up to 1.112 milliseconds):
MD simulation of the full satellite tobacco mosaic virus (STMV) (2006, Size: 1 million atoms, Simulation time: 50 ns, program: NAMD) This virus is a small, icosahedral plant virus that worsens the symptoms of infection by Tobacco Mosaic Virus (TMV). Molecular dynamics simulations were used to probe the mechanisms of viral assembly. The entire STMV particle consists of 60 identical copies of one protein that make up the viral capsid (coating), and a 1063-nucleotide single-stranded RNA genome. One key finding is that the capsid is very unstable when there is no RNA inside. The simulation would take one 2006 desktop computer around 35 years to complete. It was thus run on many processors in parallel with continuous communication between them.
Folding simulations of the Villin Headpiece in all-atom detail (2006, Size: 20,000 atoms; Simulation time: 500 μs = 500,000 ns, Program: Folding@home) This simulation was run on 200,000 CPUs of participating personal computers around the world. These computers had the Folding@home program installed, a large-scale distributed computing effort coordinated by Vijay Pande at Stanford University. The kinetic properties of the Villin Headpiece protein were probed by using many independent, short trajectories run on CPUs without continuous real-time communication. One method employed was the Pfold value analysis, which measures the probability of folding before unfolding of a specific starting conformation. Pfold gives information about transition state structures and an ordering of conformations along the folding pathway. Each trajectory in a Pfold calculation can be relatively short, but many independent trajectories are needed.
Long continuous-trajectory simulations have been performed on Anton, a massively parallel supercomputer designed and built around custom application-specific integrated circuits (ASICs) and interconnects by D. E. Shaw Research. The longest published result of a simulation performed using Anton is a 1.112-millisecond simulation of NTL9 at 355 K; a second, independent 1.073-millisecond simulation of this configuration was also performed (and many other simulations of over 250 μs continuous chemical time). In How Fast-Folding Proteins Fold, researchers Kresten Lindorff-Larsen, Stefano Piana, Ron O. Dror, and David E. Shaw discuss "the results of atomic-level molecular dynamics simulations, over periods ranging between 100 μs and 1 ms, that reveal a set of common principles underlying the folding of 12 structurally diverse proteins." Examination of these diverse long trajectories, enabled by specialized, custom hardware, allow them to conclude that "In most cases, folding follows a single dominant route in which elements of the native structure appear in an order highly correlated with their propensity to form in the unfolded state." In a separate study, Anton was used to conduct a 1.013-millisecond simulation of the native-state dynamics of bovine pancreatic trypsin inhibitor (BPTI) at 300 K.
Another important application of the MD method benefits from its ability to perform three-dimensional characterization and analysis of microstructural evolution at the atomic scale.
MD simulations are used in the characterization of grain size evolution, for example, when describing wear and friction of nanocrystalline Al and Al(Zr) materials. Dislocation evolution and grain size evolution are analyzed during the friction process in this simulation. Since the MD method provides the full information of the microstructure, the grain size evolution was calculated in 3D using the Polyhedral Template Matching, Grain Segmentation, and Graph clustering methods. In such a simulation, the MD method provided an accurate measurement of grain size. Making use of this information, the actual grain structures were extracted, measured, and presented. Compared to the traditional method of using SEM with a single two-dimensional slice of the material, MD provides a three-dimensional and accurate way to characterize the microstructural evolution at the atomic scale.
== Molecular dynamics algorithms ==
Screened Coulomb potentials implicit solvent model
=== Integrators ===
Symplectic integrator
Verlet–Stoermer integration
Runge–Kutta integration
Beeman's algorithm
Constraint algorithms (for constrained systems)
=== Short-range interaction algorithms ===
Cell lists
Verlet list
Bonded interactions
=== Long-range interaction algorithms ===
Ewald summation
Particle mesh Ewald summation (PME)
Particle–particle-particle–mesh (P3M)
Shifted force method
=== Parallelization strategies ===
Domain decomposition method (Distribution of system data for parallel computing)
=== Ab-initio molecular dynamics ===
Car–Parrinello molecular dynamics
== Specialized hardware for MD simulations ==
Anton – A specialized, massively parallel supercomputer designed to execute MD simulations
MDGRAPE – A special purpose system built for molecular dynamics simulations, especially protein structure prediction
== Graphics card as a hardware for MD simulations ==
== See also ==
== References ==
=== General references ===
== External links ==
The GPUGRID.net Project (GPUGRID.net)
The Blue Gene Project (IBM)
Materials modelling and computer simulation codes
A few tips on molecular dynamics
Movie of MD simulation of water (YouTube)
Similitude is a concept applicable to the testing of engineering models. A model is said to have similitude with the real application if the two share geometric similarity, kinematic similarity and dynamic similarity. Similarity and similitude are interchangeable in this context.
The term dynamic similitude is often used as a catch-all because it implies that geometric and kinematic similitude have already been met.
Similitude's main application is in hydraulic and aerospace engineering to test fluid flow conditions with scaled models. It is also the primary theory behind many textbook formulas in fluid mechanics.
The concept of similitude is strongly tied to dimensional analysis.
== Overview ==
Engineering models are used to study complex fluid dynamics problems where calculations and computer simulations are not reliable. Models are usually smaller than the final design, but not always. Scale models allow testing of a design prior to building, and in many cases are a critical step in the development process.
Construction of a scale model, however, must be accompanied by an analysis to determine what conditions it is tested under. While the geometry may be simply scaled, other parameters, such as pressure, temperature or the velocity and type of fluid may need to be altered. Similitude is achieved when testing conditions are created such that the test results are applicable to the real design.
The following criteria are required to achieve similitude:
Geometric similarity – the model is the same shape as the application, usually scaled.
Kinematic similarity – the fluid flow of both the model and the real application must undergo similar time rates of change of motion (fluid streamlines are similar).
Dynamic similarity – ratios of all forces acting on corresponding fluid particles and boundary surfaces in the two systems are constant.
To satisfy the above conditions the application is analyzed:
All parameters required to describe the system are identified using principles from continuum mechanics.
Dimensional analysis is used to express the system with as few independent variables and as many dimensionless parameters as possible.
The values of the dimensionless parameters are held to be the same for both the scale model and application. This can be done because they are dimensionless and will ensure dynamic similitude between the model and the application. The resulting equations are used to derive scaling laws which dictate model testing conditions.
It is often impossible to achieve strict similitude during a model test. The greater the departure from the application's operating conditions, the more difficult achieving similitude is. In these cases some aspects of similitude may be neglected, focusing on only the most important parameters.
The design of marine vessels remains more of an art than a science in large part because dynamic similitude is especially difficult to attain for a vessel that is partially submerged: a ship is affected by wind forces in the air above it, by hydrodynamic forces within the water under it, and especially by wave motions at the interface between the water and the air. The scaling requirements for each of these phenomena differ, so models cannot replicate what happens to a full-sized vessel nearly so well as can be done for an aircraft or submarine—each of which operates entirely within one medium.
Similitude is also a term used widely in fracture mechanics, relating to the strain–life approach. Under given loading conditions, the fatigue damage in an un-notched specimen is comparable to that of a notched specimen. Similitude suggests that the component fatigue life of the two objects will also be similar.
== An example ==
Consider a submarine modeled at 1/40th scale. The application operates in sea water at 0.5 °C, moving at 5 m/s. The model will be tested in fresh water at 20 °C. Find the power required for the submarine to operate at the stated speed.
A free body diagram is constructed and the relevant relationships of force and velocity are formulated using techniques from continuum mechanics. The variables which describe the system are:
This example has five independent variables and three fundamental units. The fundamental units are: meter, kilogram, second.
Invoking the Buckingham π theorem shows that the system can be described with two dimensionless numbers and one independent variable.
Dimensional analysis is used to rearrange the units to form the Reynolds number ({\displaystyle R_{e}}) and pressure coefficient ({\displaystyle C_{p}}). These dimensionless numbers account for all the variables listed above except F, which will be the test measurement. Since the dimensionless parameters will stay constant for both the test and the real application, they will be used to formulate scaling laws for the test.
Scaling laws:
{\displaystyle {\begin{aligned}&R_{e}=\left({\frac {\rho VL}{\mu }}\right)&\longrightarrow &V_{\text{model}}=V_{\text{application}}\times \left({\frac {\rho _{a}}{\rho _{m}}}\right)\times \left({\frac {L_{a}}{L_{m}}}\right)\times \left({\frac {\mu _{m}}{\mu _{a}}}\right)\\&C_{p}=\left({\frac {2\Delta p}{\rho V^{2}}}\right),F=\Delta pL^{2}&\longrightarrow &F_{\text{application}}=F_{\text{model}}\times \left({\frac {\rho _{a}}{\rho _{m}}}\right)\times \left({\frac {V_{a}}{V_{m}}}\right)^{2}\times \left({\frac {L_{a}}{L_{m}}}\right)^{2}.\end{aligned}}}
The pressure ({\displaystyle p}) is not one of the five variables, but the force ({\displaystyle F}) is. The pressure difference ({\displaystyle \Delta p}) has thus been replaced with ({\displaystyle F/L^{2}}) in the pressure coefficient. This gives a required test velocity of:
{\displaystyle V_{\text{model}}=V_{\text{application}}\times 21.9}.
A model test is then conducted at that velocity, and the force that is measured in the model ({\displaystyle F_{\text{model}}}) is then scaled to find the force that can be expected for the real application ({\displaystyle F_{\text{application}}}):
{\displaystyle F_{\text{application}}=F_{\text{model}}\times 3.44}
The power {\displaystyle P} in watts required by the submarine is then:
{\displaystyle P[\mathrm {W} ]=F_{\text{application}}\times V_{\text{application}}=F_{\text{model}}[\mathrm {N} ]\times 17.2\ \mathrm {m/s} }
Note that even though the model is scaled down, the water velocity must be increased for testing. This remarkable result shows how similitude analysis is often counterintuitive.
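The numbers in this example can be reproduced with a few lines of code. The fluid property values below are rounded handbook figures and should be treated as assumptions of this sketch rather than exact data:

```python
# Sketch of the submarine scaling example above (property values are rounded,
# assumed handbook figures): sea water at ~0.5 °C (application) vs fresh water
# at 20 °C (model), at 1/40 scale.

rho_a, mu_a = 1028.0, 1.88e-3   # kg/m^3, Pa*s (sea water, ~0.5 °C; assumed)
rho_m, mu_m = 998.0, 1.00e-3    # kg/m^3, Pa*s (fresh water, 20 °C; assumed)
scale = 40.0                    # L_a / L_m
V_a = 5.0                       # m/s, application speed

# Matching the Reynolds number fixes the required test velocity:
V_m = V_a * (rho_a / rho_m) * scale * (mu_m / mu_a)        # ~ 21.9 * V_a

# Matching the pressure coefficient fixes the force ratio:
force_ratio = (rho_a / rho_m) * (V_a / V_m) ** 2 * scale ** 2   # ~ 3.44

# Power per newton of measured model force: P = F_model * force_ratio * V_a
power_per_newton = force_ratio * V_a                        # ~ 17.2 W per N
```

Small differences from the quoted 21.9 and 3.44 reflect the rounding of the assumed fluid properties.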
== Typical applications ==
=== Fluid mechanics ===
Similitude has been well documented for a large number of engineering problems and is the basis of many textbook formulas and dimensionless quantities. These formulas and quantities are easy to use without having to repeat the laborious task of dimensional analysis and formula derivation. Simplification of the formulas (by neglecting some aspects of similitude) is common, and needs to be reviewed by the engineer for each application.
Similitude can be used to predict the performance of a new design based on data from an existing, similar design. In this case, the model is the existing design. Another use of similitude and models is in validation of computer simulations with the ultimate goal of eliminating the need for physical models altogether.
Another application of similitude is to replace the operating fluid with a different test fluid. Wind tunnels, for example, have trouble with air liquefying in certain conditions so helium is sometimes used. Other applications may operate in dangerous or expensive fluids so the testing is carried out in a more convenient substitute.
Some common applications of similitude and associated dimensionless numbers:
=== Solid mechanics: structural similitude ===
Similitude analysis is a powerful engineering tool for the design of scaled-down structures. Although both dimensional analysis and direct use of the governing equations may be used to derive scaling laws, the latter results in more specific scaling laws. The design of scaled-down composite structures can be successfully carried out using complete and partial similarities. In the design of scaled structures under the complete similarity condition, all the derived scaling laws must be satisfied between the model and prototype, which yields perfect similarity between the two scales. However, the design of a scaled-down structure which is perfectly similar to its prototype has practical limitations, especially for laminated structures. Relaxing some of the scaling laws may eliminate the limitations of the design under the complete similarity condition and yields scaled models that are partially similar to their prototype. However, the design of scaled structures under the partial similarity condition must follow a deliberate methodology to ensure the accuracy of the scaled structure in predicting the structural response of the prototype. Scaled models can be designed to replicate the dynamic characteristics (e.g. frequencies, mode shapes and damping ratios) of their full-scale counterparts. However, appropriate response scaling laws need to be derived to predict the dynamic response of the full-scale prototype from the experimental data of the scaled model.
== See also ==
Similitude of ship models
== References ==
== Further reading ==
Binder, Raymond C. (1973). Fluid Mechanics. Prentice-Hall. ISBN 978-0-13-322594-5. OCLC 393400.
Howarth, L., ed. (1953). Modern Developments in Fluid Mechanics, High Speed Flow. Clarendon Press. OCLC 572735435 – via HathiTrust.
Kline, Stephen J. (1986). Similitude and Approximation Theory. Springer. ISBN 0-387-16518-5.
Chanson, Hubert (2009). "Turbulent Air-water Flows in Hydraulic Structures: Dynamic Similarity and Scale Effects". Environmental Fluid Mechanics. 9 (2): 125–142. Bibcode:2009EFM.....9..125C. doi:10.1007/s10652-008-9078-3. S2CID 121960118.
Heller, V. (2011). "Scale Effects in Physical Hydraulic Engineering Models". Journal of Hydraulic Research. 49 (3): 293–306. Bibcode:2011JHydR..49..293H. doi:10.1080/00221686.2011.578914. S2CID 121563448.
De Rosa, S.; Franco, F. (2015). "Analytical similitudes applied to thin cylindrical shells". Advances in Aircraft and Spacecraft Science. 2 (4): 403–425. doi:10.12989/aas.2015.2.4.403.
Emori, Richard I.; Schuring, Dieterich J. (2016). Scale models in engineering : fundamentals and applications (2nd ed.). Elsevier. ISBN 978-0-08-020860-2.
== External links ==
MIT open courseware lecture notes on Similitude for marine engineering
The cohesive zone model (CZM) is a model in fracture mechanics where fracture formation is regarded as a gradual phenomenon and separation of the crack surfaces takes place across an extended crack tip, or cohesive zone, and is resisted by cohesive tractions.
The origin of this model can be traced back to the early 1960s, when Dugdale (1960) and Barenblatt (1962) used it to represent nonlinear processes located at the front of a pre-existing crack.
== Description ==
The major advantages of the CZM over conventional methods in fracture mechanics, such as LEFM (Linear Elastic Fracture Mechanics) and CTOD (Crack Tip Opening Displacement), are:
It is able to adequately predict the behaviour of uncracked structures, including those with blunt notches.
In the CZM, the size of the non-linear zone need not be negligible in comparison with other dimensions of the cracked geometry, whereas the other conventional methods require this.
Even for brittle materials, the presence of an initial crack is needed for LEFM to be applicable.
Another important advantage of CZM falls in the conceptual framework for interfaces.
The Cohesive Zone Model does not represent any physical material, but describes the cohesive forces which occur when material elements are being pulled apart.
As the surfaces (known as cohesive surfaces) separate, traction first increases until a maximum is reached, and then subsequently reduces to zero which results in complete separation. The variation in traction in relation to displacement is plotted on a curve and is called the traction-displacement curve. The area under this curve is equal to the energy needed for separation.
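As a concrete illustration, a bilinear traction–separation law is one commonly used curve shape (the peak traction and separation values below are arbitrary illustrative assumptions, not material data); the area under the curve gives the work of separation:

```python
# Sketch: a bilinear traction-separation law, a common CZM shape. Traction rises
# linearly to a peak t_max at separation d_peak, then falls linearly to zero at
# d_fail; the area under the curve is the separation energy G_c.
# All numerical values are illustrative assumptions.

def traction(d, t_max=10e6, d_peak=1e-6, d_fail=10e-6):
    """Traction (Pa) as a function of separation d (m)."""
    if d <= 0.0 or d >= d_fail:
        return 0.0
    if d <= d_peak:
        return t_max * d / d_peak              # rising branch
    return t_max * (d_fail - d) / (d_fail - d_peak)   # softening branch

t_max, d_fail = 10e6, 10e-6
n = 100_000
dd = d_fail / n
# Midpoint-rule integration of the curve:
G_numeric = sum(traction((i + 0.5) * dd) * dd for i in range(n))
G_exact = 0.5 * t_max * d_fail    # triangle area = 50 J/m^2
```

The softening branch is what distinguishes the CZM from a simple spring: once the peak is passed, the interface progressively loses its load-carrying capacity until complete separation.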
CZM maintains continuity conditions mathematically despite physical separation. It eliminates the singularity of stress and limits it to the cohesive strength of the material.
The traction-displacement curve gives the constitutive behavior of the fracture. Guidelines are formulated and modelling is carried out individually for each material system. This is how the CZM works.
The amount of fracture energy dissipated in the work region depends on the shape of the model considered. Also, the ratio between the maximum stress and the yield stress affects the length of the fracture process zone. The smaller the ratio, the longer is the process zone. The CZM allows the energy to flow into the fracture process zone, where a part of it is spent in the forward region and the rest in the wake region.
Thus, the CZM provides an effective methodology to study and simulate fracture in solids.
== Dugdale and Barenblatt models ==
=== Dugdale Model ===
The Dugdale model (named after Donald S. Dugdale), sometimes referred to as the strip yield model, assumes thin plastic strips of length {\displaystyle r_{p}} at the forefront of two Mode I crack tips in a thin elastic–perfectly plastic plate.
==== Plastic zone size ====
In the case where {\displaystyle \sigma ^{\infty }\ll \sigma _{y}}, and therefore {\displaystyle r_{p}\ll a}, the plastic zone size is:
{\displaystyle r_{p}={\frac {\pi }{8}}\left({\frac {K_{I}}{\sigma _{y}}}\right)^{2}}
which is similar to, but slightly smaller than Irwin's predicted plastic zone diameter.
==== Crack-tip opening displacement ====
The general form of the crack tip opening displacement according to the Dugdale model, at the points {\displaystyle x=\pm a} and {\displaystyle y=0}, is:
{\displaystyle \delta _{t}={\frac {8\sigma _{y}a}{\pi E}}\ln \left[\sec \left({\frac {\pi \sigma ^{\infty }}{2\sigma _{y}}}\right)\right]}
This can be simplified for cases where {\displaystyle \sigma ^{\infty }\ll \sigma _{y}} to:
{\displaystyle \delta _{t}={\begin{cases}{\cfrac {K^{2}}{\sigma _{y}E}}&{\text{plane stress}}\\{\cfrac {K^{2}}{2\sigma _{y}E}}&{\text{plane strain}}\end{cases}}}
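To get a feel for the magnitudes, the Dugdale formulas can be evaluated for assumed material properties. The values below are illustrative, roughly mild-steel-like assumptions, not data from the source:

```python
import math

# Sketch: Dugdale strip-yield estimates for a Mode I crack.
# Material numbers are illustrative assumptions (roughly mild steel).

K_I = 50e6        # Pa*sqrt(m), applied stress intensity factor (assumed)
sigma_y = 350e6   # Pa, yield strength (assumed)
E = 210e9         # Pa, Young's modulus (assumed)

# Plastic zone size: r_p = (pi/8) * (K_I / sigma_y)^2  -> ~8 mm here
r_p = (math.pi / 8.0) * (K_I / sigma_y) ** 2

# Crack-tip opening displacement for sigma_inf << sigma_y:
delta_plane_stress = K_I ** 2 / (sigma_y * E)          # tens of micrometres
delta_plane_strain = K_I ** 2 / (2.0 * sigma_y * E)    # half the plane-stress value
```

Note that the plane-strain CTOD is exactly half the plane-stress value, as the case distinction above states.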
=== Barenblatt model ===
The Barenblatt model (after G. I. Barenblatt) is analogous to the Dugdale model, but is applied to brittle solids. This approach considers the interatomic stresses involved in cracking, but considers a large enough area to apply to continuum fracture mechanics. Barenblatt's model assumes that "the width of the edge [cohesive] region of a crack is small compared to the size of the whole crack", in addition to the assumption, common to most fracture mechanics models, that the stress fields of all cracks are the same for a given specimen geometry regardless of the remote applied stress. In the Barenblatt model, the traction, {\displaystyle \sigma _{yy}}, is equal to the theoretical bond rupture strength of a brittle solid. This allows the strain energy release rate, {\displaystyle G}, to be defined by the critical crack opening displacement, {\displaystyle \delta _{c}=2v_{c}}, or the critical cohesive zone size, {\displaystyle r_{co}}, as follows:
{\displaystyle G_{c}=2\int _{0}^{\nu _{c}}\sigma _{yy}d\nu ={\frac {8\sigma _{th}^{2}r_{co}}{\pi E}}=2\gamma _{s}}
where {\displaystyle \gamma _{s}} is the surface energy.
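Inverting the relation {\displaystyle G_{c}=8\sigma _{th}^{2}r_{co}/(\pi E)=2\gamma _{s}} gives the critical cohesive zone size directly from the surface energy. The sketch below evaluates it for rough, assumed values of a silica-like brittle solid (all numbers are illustrative assumptions):

```python
import math

# Sketch: Barenblatt critical cohesive zone size from the surface energy,
#   r_co = pi * E * gamma_s / (4 * sigma_th^2),
# obtained by inverting G_c = 8 * sigma_th^2 * r_co / (pi * E) = 2 * gamma_s.
# All numbers are rough, assumed values for a silica-like brittle solid.

E = 70e9            # Pa, Young's modulus (assumed)
gamma_s = 1.0       # J/m^2, surface energy (assumed)
sigma_th = E / 10   # Pa, common order-of-magnitude estimate of bond strength

r_co = math.pi * E * gamma_s / (4.0 * sigma_th ** 2)   # ~ nanometre scale
G_c = 8.0 * sigma_th ** 2 * r_co / (math.pi * E)        # recovers 2 * gamma_s
```

The nanometre-scale result is consistent with Barenblatt's assumption that the cohesive region is small compared to the whole crack.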
== References ==
The fracture of soft materials involves large deformations and crack blunting before propagation of the crack can occur. Consequently, the stress field close to the crack tip is significantly different from the traditional formulation encountered in the Linear elastic fracture mechanics. Therefore, fracture analysis for these applications requires a special attention.
Linear Elastic Fracture Mechanics (LEFM) and the K-field (see Fracture Mechanics) are based on the assumption of infinitesimal deformation, and as a result are not suitable for describing the fracture of soft materials. However, the general approach of LEFM can be applied to understand the basics of fracture in soft materials.
The solution for the deformation and crack stress field in soft materials considers large deformation and is derived from the finite strain elastostatics framework and hyperelastic material models.
Soft materials (soft matter) include, for example, soft biological tissues as well as synthetic elastomers, and are very sensitive to thermal variations. Hence, soft materials can become highly deformed before crack propagation occurs.
== Hyperelastic material models ==
Hyperelastic material models are utilized to obtain the stress–strain relationship through a strain energy density function. Relevant models for deriving stress-strain relations for soft materials are: Mooney-Rivlin solid, Neo-Hookean, Exponentially hardening material and Gent hyperelastic models. On this page, the results will be primarily derived from the Neo-Hookean model.
=== Generalized neo-Hookean (GNH) ===
The neo-Hookean model is generalized to account for the hardening factor:
{\displaystyle W={\frac {\mu }{2b}}\left\{\left[1+{\frac {b}{n}}(I-3)\right]^{n}-1\right\},}
where b > 0 and n > 1/2 are material parameters, and {\displaystyle I=I_{1}} is the first invariant of the Cauchy–Green deformation tensor:
{\displaystyle I_{1}=\lambda _{1}^{2}+\lambda _{2}^{2}+\lambda _{3}^{2},}
where {\displaystyle \lambda _{\alpha }} are the principal stretches.
==== Specific Neo-Hookean model ====
Setting n = 1, the specific strain energy function for the neo-Hookean model is recovered:
{\displaystyle W={\frac {\mu }{2}}(I-3)}.
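The reduction of the GNH model to the neo-Hookean form at n = 1 can be checked numerically. The stretch value and modulus below are arbitrary illustrative assumptions:

```python
# Sketch: evaluate the generalized neo-Hookean (GNH) energy density and check
# that n = 1 recovers the neo-Hookean form W = (mu/2)(I - 3).
# Stretch and modulus values are arbitrary illustrative assumptions.

def I1(l1, l2, l3):
    """First invariant of the Cauchy-Green deformation tensor."""
    return l1 ** 2 + l2 ** 2 + l3 ** 2

def W_gnh(l1, l2, l3, mu, b, n):
    I = I1(l1, l2, l3)
    return (mu / (2.0 * b)) * ((1.0 + (b / n) * (I - 3.0)) ** n - 1.0)

def W_neo_hookean(l1, l2, l3, mu):
    return (mu / 2.0) * (I1(l1, l2, l3) - 3.0)

# Incompressible uniaxial stretch: l1 = L, l2 = l3 = 1/sqrt(L)
L, mu = 1.5, 1.0
w1 = W_gnh(L, L ** -0.5, L ** -0.5, mu, b=1.0, n=1.0)
w2 = W_neo_hookean(L, L ** -0.5, L ** -0.5, mu)
# At n = 1 the hardening factor cancels for any b, so w1 equals w2.
```

For n > 1 the GNH energy grows faster with I, which is how the model captures strain hardening.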
== Finite strain crack tip solutions (under large deformation) ==
Since LEFM is no longer applicable, alternative methods are adapted to capture large deformations in the calculation of stress and deformation fields. In this context the method of asymptotic analysis is of relevance.
=== Method of asymptotic analysis ===
The method of asymptotic analysis consists of analyzing the crack tip asymptotically to find a series expansion of the deformed coordinates capable of characterizing the solution near the crack tip. The analysis is reducible to a nonlinear eigenvalue problem.
The problem is formulated based on a crack in an infinite solid, loaded at infinity with uniform uni-axial tension under the condition of plane strain (see Fig. 1). As the crack deforms and progresses, the coordinates in the current configuration are represented by {\displaystyle y_{1}} and {\displaystyle y_{2}} in a Cartesian basis and by {\displaystyle \rho } and {\displaystyle \phi } in a polar basis. The coordinates {\displaystyle y_{1}} and {\displaystyle y_{2}} are functions of the undeformed coordinates ({\displaystyle r,\theta }) and, near the crack tip as r → 0, can be specified as:
{\displaystyle y_{\alpha }(r,\theta )=r^{m_{\alpha }}\upsilon _{\alpha }(\theta )+r^{p_{\alpha }}q_{\alpha }(\theta )+...}
{\displaystyle m_{\alpha }<p_{\alpha },\quad \alpha =1,2,}
where {\displaystyle m_{\alpha }}, {\displaystyle p_{\alpha }} are unknown exponents, and {\displaystyle \upsilon _{\alpha }(\theta )}, {\displaystyle q_{\alpha }(\theta )} are unknown functions describing the angular variation.
In order to obtain the eigenvalues, the equation above is substituted into the constitutive model, which yields the corresponding nominal stress components. Then, the stresses are substituted into the equilibrium equations (the same formulation as in LEFM theory) and the boundary conditions are applied. The most dominant terms are retained, resulting in an eigenvalue problem for {\displaystyle \upsilon _{\alpha }(\theta )} and {\displaystyle m_{\alpha }}.
==== Deformation and stress field in a plane strain crack ====
For the case of a homogeneous neo-Hookean solid (n = 1) under Mode I conditions, the deformed coordinates for a plane strain configuration are given by
{\displaystyle y_{1}=-b_{0}r\sin ^{2}(\theta /2),\quad y_{2}=ar^{1/2}\sin(\theta /2),}
where a and
b
0
{\displaystyle b_{0}}
are unknown positive amplitudes that depends on the applied loading and specimen geometry.
The leading terms for the nominal stress (or first Piola–Kirchhoff stress, denoted by
σ
{\displaystyle \sigma }
on this page) are:
σ
11
=
μ
b
0
,
σ
12
=
o
(
1
)
,
{\displaystyle \sigma _{11}=\mu b_{0},\quad \sigma _{12}=o(1),}
σ
21
=
−
μ
a
2
r
−
1
/
2
sin
(
θ
/
2
)
,
{\displaystyle \sigma _{21}=-{\frac {\mu a}{2}}r^{-1/2}\sin(\theta /2),}
σ
22
=
μ
a
2
r
−
1
/
2
cos
(
θ
/
2
)
.
{\displaystyle \sigma _{22}={\frac {\mu a}{2}}r^{-1/2}\cos(\theta /2).}
Thus,
σ
11
{\displaystyle \sigma _{11}}
and
σ
21
{\displaystyle \sigma _{21}}
are bounded at the crack tip and
σ
21
{\displaystyle \sigma _{21}}
and
σ
22
{\displaystyle \sigma _{22}}
have the same singularity.
The leading terms for the true stress (or Cauchy stress, denoted by
τ
{\displaystyle \tau }
on this page),
τ
11
=
μ
b
0
2
sin
2
(
θ
/
2
)
,
{\displaystyle \tau _{11}=\mu b_{0}^{2}\sin ^{2}(\theta /2),}
τ
12
=
τ
21
=
−
μ
2
a
b
0
r
−
1
/
2
sin
2
(
θ
/
2
)
,
{\displaystyle \tau _{12}=\tau _{21}=-{\frac {\mu }{2}}ab_{0}r^{-1/2}\sin ^{2}(\theta /2),}
τ
22
=
μ
4
a
2
r
−
1
.
{\displaystyle \tau _{22}={\frac {\mu }{4}}a^{2}r^{-1}.}
The only true stress component completely defined by a is
τ
22
{\displaystyle \tau _{22}}
. It also presents the most severe singularity. With that, it is clear that the singularity differs if the stress is given in the current or reference configuration. Additionally, in LEFM, the true stress field under Mode I has a singularity of
r
−
1
/
2
{\displaystyle {r}^{-1/2}}
, which is weaker than the singularity in
τ
22
{\displaystyle \tau _{22}}
.
While in LEFM the near tip displacement field depends only on the Mode I stress intensity factor, it is shown here that for large deformations, the displacement depends on two parameters (a and
b
0
{\displaystyle b_{0}}
for a plane strain condition).
==== Deformation and stress field in a plane stress crack ====
The crack-tip deformation field for a Mode I configuration in a homogeneous neo-Hookean solid (n = 1) is given by
y_{1}=c\,r\cos \theta ,\qquad y_{2}=a{\sqrt {r}}\sin(\theta /2),
where a and c are positive independent amplitudes determined by the far-field boundary conditions.
The dominant terms of the nominal stress are
\sigma _{11}=\mu c,\qquad \sigma _{12}=o(1),
\sigma _{21}=-{\frac {\mu a}{2}}r^{-1/2}\sin(\theta /2),
\sigma _{22}={\frac {\mu a}{2}}r^{-1/2}\cos(\theta /2),
and the true stress components are
\tau _{11}=\mu c^{2},\qquad \tau _{12}=\tau _{21}=-{\frac {\mu }{2}}ac\,r^{-1/2}\sin(\theta /2),
\tau _{22}={\frac {\mu }{4}}a^{2}r^{-1}.
Analogously, the displacement depends on two parameters (a and c for a plane stress condition), and the strongest singularity occurs in the τ_22 term.
The distribution of the true stress in the deformed coordinates (as shown in Fig. 1B) can be relevant when analyzing crack propagation and blunting phenomena. Additionally, it is useful when verifying experimental measurements of the crack deformation.
== J-integral ==
The J-integral represents the energy that flows to the crack; hence it is used to calculate the energy release rate, G, and it can also serve as a fracture criterion. This integral is path independent as long as the material is elastic and no damage to the microstructure occurs.
Evaluating J on a circular path in the reference configuration yields
J=\pi A\left({\frac {2n-1}{2n}}\right)^{2n-1}n^{2-n}a^{2n},
for plane strain Mode I, where a is the amplitude of the leading-order term of y_2, and A and n are material parameters of the strain-energy function.
For plane stress Mode I in a GNH material, J is given by
J={\frac {\mu \pi }{2}}\left({\frac {b}{n}}\right)^{n-1}\left({\frac {2n-1}{2n}}\right)^{2n-1}n^{1-n}a^{2n},
where b and n are material parameters of GNH solids. For the specific case of a neo-Hookean model, where n = 1, b = 1 and A = μ/2, the J-integrals for plane stress and plane strain in Mode I coincide:
J={\frac {\mu \pi a^{2}}{4}}.
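As a quick numerical sanity check, the two closed-form J expressions above can be evaluated for the neo-Hookean special case (n = 1, b = 1, A = μ/2); the values of μ and a below are arbitrary placeholders:

```python
import math

# Sketch: confirm numerically that both J-integral expressions reduce to
# J = mu*pi*a**2/4 for the neo-Hookean special case n=1, b=1, A=mu/2.
mu, a = 1.7, 0.35  # arbitrary shear modulus and amplitude

def J_plane_strain(A, n, a):
    # J = pi*A*((2n-1)/(2n))**(2n-1) * n**(2-n) * a**(2n)
    return math.pi * A * ((2*n - 1) / (2*n))**(2*n - 1) * n**(2 - n) * a**(2*n)

def J_plane_stress(mu, b, n, a):
    # J = (mu*pi/2)*(b/n)**(n-1)*((2n-1)/(2n))**(2n-1)*n**(1-n)*a**(2n)
    return (mu * math.pi / 2) * (b / n)**(n - 1) \
        * ((2*n - 1) / (2*n))**(2*n - 1) * n**(1 - n) * a**(2*n)

J_ref = mu * math.pi * a**2 / 4
assert math.isclose(J_plane_strain(mu / 2, 1, a), J_ref)
assert math.isclose(J_plane_stress(mu, 1, 1, a), J_ref)
```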
=== J-integral in the pure-shear experiment ===
The J-integral can be determined experimentally. One common experiment is pure shear of an infinitely long strip, as shown in Fig. 2. The upper and lower edges are clamped by grips, and the loading is applied by pulling the grips vertically apart by ±Δ. This setup generates a condition of plane stress.
Under these conditions, the J-integral therefore evaluates to
J=2h_{0}W(I_{1},I_{2})=2h_{0}\Psi (\lambda ),
where
I_{1}=I_{2}=\lambda ^{2}+\lambda ^{-2}+1,\qquad \lambda =1+{\frac {\Delta }{h_{0}}},
and h_0 is the height of the strip in the undeformed state. The function Ψ(λ) is determined by measuring the nominal stress acting on the strip stretched by λ:
\Psi =\int _{1}^{\lambda }\sigma (\lambda )\,d\lambda .
Therefore, from the imposed displacement of each grip, ±Δ, it is possible to determine the J-integral for the corresponding nominal stress. With the J-integral, the amplitude (parameter a) of some true stress components can be found. The amplitudes of other stress components, however, depend on additional parameters such as c (e.g. σ_11 under plane stress conditions) and cannot be determined by the pure-shear experiment. Nevertheless, the pure-shear experiment is very important because it allows the characterization of the fracture toughness of soft materials.
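The procedure above needs only the measured stress–stretch curve. A minimal sketch, assuming (purely to generate synthetic "measurements") an incompressible neo-Hookean strip, for which the pure-shear nominal stress is σ(λ) = μ(λ − λ⁻³):

```python
import math

# Sketch of the pure-shear evaluation J = 2*h0*Psi(lambda).  The material
# model below is an assumption used only to synthesize sigma(lam); a real
# experiment would use the measured stress data directly.
mu, h0, delta = 0.6, 5.0, 2.0   # shear modulus, strip half-height, grip displacement
lam = 1 + delta / h0            # stretch imposed by the grips

def sigma(l):                   # nominal stress in pure shear (neo-Hookean)
    return mu * (l - l**-3)

# Psi(lam) = integral from 1 to lam of sigma dl, by the trapezoidal rule
N = 10_000
ls = [1 + (lam - 1) * k / N for k in range(N + 1)]
psi = sum((sigma(ls[k]) + sigma(ls[k + 1])) / 2 * (ls[k + 1] - ls[k])
          for k in range(N))

J = 2 * h0 * psi                # J-integral from the pure-shear experiment
# closed form of the same integral for the assumed model
J_exact = 2 * h0 * mu * (lam**2 / 2 + lam**-2 / 2 - 1)
assert math.isclose(J, J_exact, rel_tol=1e-6)
```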
== Interface cracks ==
To approach the adhesive interaction between soft adhesives and rigid substrates, the asymptotic solution for an interface crack problem between a GNH material and a rigid substrate is specified. The interface crack configuration considered here is shown in Fig. 3, where lateral slip is disregarded.
For the special neo-Hookean case with n = 1 and {\overline {v_{1}}}={\overline {v_{2}}}=\cos \theta , the solution for the deformed coordinates is
y_{1}=a_{1}r^{1/2}\sin \left({\frac {\theta }{2}}\right)+r\cos \theta ,
y_{2}=a_{2}r^{1/2}\sin \left({\frac {\theta }{2}}\right),
which is equivalent to
y_{1}={\frac {a_{1}}{a_{2}}}y_{2}-\left({\frac {y_{2}}{a_{2}}}\right)^{2}.
According to the above equation, the crack on this type of interface opens with a parabolic shape. This is confirmed by plotting the normalized coordinates y_1/a_2^2 versus y_2/a_2^2 for different a_1/a_2 ratios (see Fig. 4).
For the analysis of the interface between two GNH sheets with the same hardening characteristics, refer to the model described by Geubelle and Knauss.
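A small check of the parabolic crack shape: on the crack face (θ = π) the parametric coordinates above reproduce the closed-form parabola exactly. The amplitudes a₁, a₂ are hypothetical:

```python
import math

# Sketch: on the crack face (theta = pi) the parametric crack opening must
# agree with the closed-form parabola y1 = (a1/a2)*y2 - (y2/a2)**2.
a1, a2 = 0.8, 1.3   # hypothetical amplitudes

for r in (1e-4, 1e-2, 0.5):
    th = math.pi
    y1 = a1 * math.sqrt(r) * math.sin(th / 2) + r * math.cos(th)
    y2 = a2 * math.sqrt(r) * math.sin(th / 2)
    y1_parabola = (a1 / a2) * y2 - (y2 / a2)**2
    assert math.isclose(y1, y1_parabola, rel_tol=1e-9, abs_tol=1e-12)
```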
== See also ==
Fracture mechanics
Soft matter
J-integral
Neo-Hookean solid
Gent (hyperelastic model)
Mooney–Rivlin solid
Fracture of Biological Materials
== References == | Wikipedia/Fracture_of_Soft_Materials |
In physics, the Hamilton–Jacobi equation, named after William Rowan Hamilton and Carl Gustav Jacob Jacobi, is an alternative formulation of classical mechanics, equivalent to other formulations such as Newton's laws of motion, Lagrangian mechanics and Hamiltonian mechanics.
The Hamilton–Jacobi equation is a formulation of mechanics in which the motion of a particle can be represented as a wave. In this sense, it fulfilled a long-held goal of theoretical physics (dating at least to Johann Bernoulli in the eighteenth century) of finding an analogy between the propagation of light and the motion of a particle. The wave equation followed by mechanical systems is similar to, but not identical with, the Schrödinger equation, as described below; for this reason, the Hamilton–Jacobi equation is considered the "closest approach" of classical mechanics to quantum mechanics. The qualitative form of this connection is called Hamilton's optico-mechanical analogy.
In mathematics, the Hamilton–Jacobi equation is a necessary condition describing extremal geometry in generalizations of problems from the calculus of variations. It can be understood as a special case of the Hamilton–Jacobi–Bellman equation from dynamic programming.
== Overview ==
The Hamilton–Jacobi equation is a first-order, non-linear partial differential equation for a system of particles at coordinates q. The function H is the system's Hamiltonian, giving the system's energy. The solution of this equation is the action S, called Hamilton's principal function.
The solution can be related to the system Lagrangian \mathcal{L} by an indefinite integral of the form used in the principle of least action:
S=\int {\mathcal {L}}\,\mathrm {d} t+{\text{some constant}}.
Geometrical surfaces of constant action are perpendicular to system trajectories, creating a wavefront-like view of the system dynamics. This property of the Hamilton–Jacobi equation connects classical mechanics to quantum mechanics.
== Mathematical formulation ==
=== Notation ===
Boldface variables such as q represent a list of N generalized coordinates,
\mathbf {q} =(q_{1},q_{2},\ldots ,q_{N-1},q_{N}).
A dot over a variable or list signifies the time derivative (see Newton's notation). For example,
{\dot {\mathbf {q} }}={\frac {d\mathbf {q} }{dt}}.
The dot product notation between two lists of the same number of coordinates is shorthand for the sum of the products of corresponding components, such as
\mathbf {p} \cdot \mathbf {q} =\sum _{k=1}^{N}p_{k}q_{k}.
=== The action functional (a.k.a. Hamilton's principal function) ===
==== Definition ====
Let the Hessian matrix H_{\mathcal {L}}(\mathbf {q} ,\mathbf {\dot {q}} ,t)=\left\{\partial ^{2}{\mathcal {L}}/\partial {\dot {q}}^{i}\partial {\dot {q}}^{j}\right\}_{ij} be invertible. The relation
{\frac {d}{dt}}{\frac {\partial {\mathcal {L}}}{\partial {\dot {q}}^{i}}}=\sum _{j=1}^{n}\left({\frac {\partial ^{2}{\mathcal {L}}}{\partial {\dot {q}}^{i}\partial {\dot {q}}^{j}}}{\ddot {q}}^{j}+{\frac {\partial ^{2}{\mathcal {L}}}{\partial {\dot {q}}^{i}\partial {q}^{j}}}{\dot {q}}^{j}\right)+{\frac {\partial ^{2}{\mathcal {L}}}{\partial {\dot {q}}^{i}\partial t}},\qquad i=1,\ldots ,n,
shows that the Euler–Lagrange equations form an n × n system of second-order ordinary differential equations. Inverting the matrix H_{\mathcal {L}} transforms this system into
{\ddot {q}}^{i}=F_{i}(\mathbf {q} ,\mathbf {\dot {q}} ,t),\qquad i=1,\ldots ,n.
Let a time instant t_0 and a point q_0 ∈ M in the configuration space be fixed. The existence and uniqueness theorems guarantee that, for every v_0, the initial value problem with the conditions γ|_{τ=t_0} = q_0 and γ̇|_{τ=t_0} = v_0 has a locally unique solution γ = γ(τ; t_0, q_0, v_0). Additionally, let there be a sufficiently small time interval (t_0, t_1) such that extremals with different initial velocities v_0 do not intersect in M × (t_0, t_1). The latter means that, for any q ∈ M and any t ∈ (t_0, t_1), there can be at most one extremal γ = γ(τ; t, t_0, q, q_0) for which γ|_{τ=t_0} = q_0 and γ|_{τ=t} = q.
Substituting γ = γ(τ; t, t_0, q, q_0) into the action functional results in Hamilton's principal function (HPF),
S(\mathbf {q} ,t;\mathbf {q} _{0},t_{0})=\int _{t_{0}}^{t}{\mathcal {L}}{\bigl (}\gamma (\tau ),{\dot {\gamma }}(\tau ),\tau {\bigr )}\,d\tau ,
where γ = γ(τ; t, t_0, q, q_0), γ|_{τ=t_0} = q_0, and γ|_{τ=t} = q.
=== Formula for the momenta ===
The momenta are defined as the quantities p_{i}(\mathbf {q} ,\mathbf {\dot {q}} ,t)=\partial {\mathcal {L}}/\partial {\dot {q}}^{i}. This section shows that the dependency of p_i on q̇ disappears once the HPF is known.
Indeed, let a time instant t_0 and a point q_0 in the configuration space be fixed. For every time instant t and point q, let γ = γ(τ; t, t_0, q, q_0) be the (unique) extremal from the definition of Hamilton's principal function S. Call \mathbf {v} \,{\stackrel {\text{def}}{=}}\,{\dot {\gamma }}(\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0})|_{\tau =t} the velocity at τ = t. Then
p_{i}(\mathbf {q} ,t)={\frac {\partial S}{\partial q^{i}}}.
=== Formula ===
Given the Hamiltonian H(q, p, t) of a mechanical system, the Hamilton–Jacobi equation is a first-order, non-linear partial differential equation for Hamilton's principal function S:
{\frac {\partial S}{\partial t}}+H{\left(\mathbf {q} ,{\frac {\partial S}{\partial \mathbf {q} }},t\right)}=0.
Alternatively, as described below, the Hamilton–Jacobi equation may be derived from Hamiltonian mechanics by treating S as the generating function for a canonical transformation of the classical Hamiltonian
H=H(q_{1},q_{2},\ldots ,q_{N};p_{1},p_{2},\ldots ,p_{N};t).
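As a concrete illustration, a known solution can be checked against this equation numerically. For a single free particle with H = p²/(2m), the function S(q, t) = mq²/(2t) (the action of a particle travelling from the origin at time 0 to q at time t) satisfies the HJE:

```python
import math

# Sketch: numerically verify that S(q, t) = m*q**2 / (2*t) solves the
# free-particle Hamilton-Jacobi equation  dS/dt + (1/(2m))*(dS/dq)**2 = 0.
m = 2.0

def S(q, t):
    return m * q**2 / (2 * t)

def residual(q, t, h=1e-6):
    # central finite differences for dS/dt and dS/dq
    dS_dt = (S(q, t + h) - S(q, t - h)) / (2 * h)
    dS_dq = (S(q + h, t) - S(q - h, t)) / (2 * h)
    return dS_dt + dS_dq**2 / (2 * m)

for q, t in [(0.3, 1.0), (2.0, 0.5), (-1.2, 3.0)]:
    assert abs(residual(q, t)) < 1e-6
```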
The conjugate momenta correspond to the first derivatives of S with respect to the generalized coordinates:
p_{k}={\frac {\partial S}{\partial q_{k}}}.
As a solution to the Hamilton–Jacobi equation, the principal function contains N + 1 undetermined constants, the first N of them denoted α_1, α_2, …, α_N, and the last one coming from the integration of ∂S/∂t.
The relationship between p and q then describes the orbit in phase space in terms of these constants of motion. Furthermore, the quantities
\beta _{k}={\frac {\partial S}{\partial \alpha _{k}}},\qquad k=1,2,\ldots ,N,
are also constants of motion, and these equations can be inverted to find q as a function of all the α and β constants and time.
== Comparison with other formulations of mechanics ==
The Hamilton–Jacobi equation is a single, first-order partial differential equation for the function S of the N generalized coordinates q_1, q_2, …, q_N and the time t. The generalized momenta do not appear, except as derivatives of S, the classical action.
For comparison, in the equivalent Euler–Lagrange equations of motion of Lagrangian mechanics, the conjugate momenta also do not appear; however, those equations are a system of N, generally second-order, equations for the time evolution of the generalized coordinates. Similarly, Hamilton's equations of motion are another system of 2N first-order equations for the time evolution of the generalized coordinates and their conjugate momenta p_1, p_2, …, p_N.
Since the HJE is an equivalent expression of an integral minimization problem such as Hamilton's principle, it can be useful in other problems of the calculus of variations and, more generally, in other branches of mathematics and physics, such as dynamical systems, symplectic geometry and quantum chaos. For example, the Hamilton–Jacobi equations can be used to determine the geodesics on a Riemannian manifold, an important variational problem in Riemannian geometry. However, as a computational tool, the partial differential equations are notoriously complicated to solve except when it is possible to separate the independent variables; in this case the HJE becomes computationally useful.
== Derivation using a canonical transformation ==
Any canonical transformation involving a type-2 generating function G_2(q, P, t) leads to the relations
\mathbf {p} ={\frac {\partial G_{2}}{\partial \mathbf {q} }},\qquad \mathbf {Q} ={\frac {\partial G_{2}}{\partial \mathbf {P} }},\qquad K(\mathbf {Q} ,\mathbf {P} ,t)=H(\mathbf {q} ,\mathbf {p} ,t)+{\frac {\partial G_{2}}{\partial t}},
and Hamilton's equations in terms of the new variables P, Q and the new Hamiltonian K have the same form:
{\dot {\mathbf {P} }}=-{\partial K \over \partial \mathbf {Q} },\qquad {\dot {\mathbf {Q} }}=+{\partial K \over \partial \mathbf {P} }.
To derive the HJE, a generating function G_2(q, P, t) is chosen so as to make the new Hamiltonian K = 0. Hence all of its derivatives are also zero, and the transformed Hamilton's equations become trivial:
{\dot {\mathbf {P} }}={\dot {\mathbf {Q} }}=0,
so the new generalized coordinates and momenta are constants of motion. As they are constants, the new generalized momenta P are usually denoted α_1, α_2, …, α_N, i.e. P_m = α_m, and the new generalized coordinates Q are typically denoted β_1, β_2, …, β_N, so Q_m = β_m.
Setting the generating function equal to Hamilton's principal function plus an arbitrary constant A,
G_{2}(\mathbf {q} ,{\boldsymbol {\alpha }},t)=S(\mathbf {q} ,t)+A,
the HJE automatically arises:
\mathbf {p} ={\frac {\partial G_{2}}{\partial \mathbf {q} }}={\frac {\partial S}{\partial \mathbf {q} }}\;\Rightarrow \;H(\mathbf {q} ,\mathbf {p} ,t)+{\partial G_{2} \over \partial t}=0\;\Rightarrow \;H{\left(\mathbf {q} ,{\frac {\partial S}{\partial \mathbf {q} }},t\right)}+{\partial S \over \partial t}=0.
When solved for S(q, α, t), these also give the useful equations
\mathbf {Q} ={\boldsymbol {\beta }}={\partial S \over \partial {\boldsymbol {\alpha }}},
or, written in components for clarity,
Q_{m}=\beta _{m}={\frac {\partial S(\mathbf {q} ,{\boldsymbol {\alpha }},t)}{\partial \alpha _{m}}}.
Ideally, these N equations can be inverted to find the original generalized coordinates q as a function of the constants α, β and t, thus solving the original problem.
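A minimal worked example of this inversion, for the free particle with complete integral S(q, α, t) = αq − α²t/(2m): the constant β = ∂S/∂α = q − αt/m inverts to the familiar uniform motion q(t) = β + (α/m)t.

```python
import math

# Sketch: invert beta = dS/dalpha for the free-particle complete integral
# S(q, alpha, t) = alpha*q - alpha**2 * t / (2*m), recovering the trajectory
# q(t) = beta + (alpha/m)*t with constant momentum p = dS/dq = alpha.
m, alpha, beta = 1.5, 0.8, -0.2   # arbitrary constants of motion

def q_of_t(t):
    # beta = q - alpha*t/m, solved for q
    return beta + alpha * t / m

for t in (0.0, 1.0, 4.0):
    p = alpha                     # p = dS/dq = alpha, a constant of motion
    # check Hamilton's equation q_dot = p/m along the recovered trajectory
    h = 1e-6
    q_dot = (q_of_t(t + h) - q_of_t(t - h)) / (2 * h)
    assert math.isclose(q_dot, p / m, rel_tol=1e-6)
```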
== Separation of variables ==
When the problem allows an additive separation of variables, the HJE leads directly to constants of motion. For example, the time t can be separated if the Hamiltonian does not depend on time explicitly. In that case, the time derivative ∂S/∂t in the HJE must be a constant, usually denoted −E, giving the separated solution
S=W(q_{1},q_{2},\ldots ,q_{N})-Et,
where the time-independent function W(q) is sometimes called the abbreviated action or Hamilton's characteristic function, and sometimes written S_0 (see action principle names). The reduced Hamilton–Jacobi equation can then be written
H{\left(\mathbf {q} ,{\frac {\partial S}{\partial \mathbf {q} }}\right)}=E.
To illustrate separability for other variables, a certain generalized coordinate q_k and its derivative ∂S/∂q_k are assumed to appear together in the Hamiltonian only as a single function \psi {\left(q_{k},{\frac {\partial S}{\partial q_{k}}}\right)}:
H=H(q_{1},q_{2},\ldots ,q_{k-1},q_{k+1},\ldots ,q_{N};p_{1},p_{2},\ldots ,p_{k-1},p_{k+1},\ldots ,p_{N};\psi ;t).
In that case, the function S can be partitioned into two functions, one that depends only on q_k and another that depends only on the remaining generalized coordinates:
S=S_{k}(q_{k})+S_{\text{rem}}(q_{1},\ldots ,q_{k-1},q_{k+1},\ldots ,q_{N},t).
Substitution of these formulae into the Hamilton–Jacobi equation shows that the function ψ must be a constant (denoted here as Γ_k), yielding a first-order ordinary differential equation for S_k(q_k):
\psi {\left(q_{k},{\frac {dS_{k}}{dq_{k}}}\right)}=\Gamma _{k}.
In fortunate cases, the function S can be separated completely into N functions S_m(q_m):
S=S_{1}(q_{1})+S_{2}(q_{2})+\cdots +S_{N}(q_{N})-Et.
In such a case, the problem devolves to N ordinary differential equations.
The separability of S depends both on the Hamiltonian and on the choice of generalized coordinates. For orthogonal coordinates and Hamiltonians that have no time dependence and are quadratic in the generalized momenta, S will be completely separable if the potential energy is additively separable in each coordinate, where the potential energy term for each coordinate is multiplied by the coordinate-dependent factor in the corresponding momentum term of the Hamiltonian (the Staeckel conditions). For illustration, several examples in orthogonal coordinates are worked in the next sections.
=== Examples in various coordinate systems ===
==== Spherical coordinates ====
In spherical coordinates the Hamiltonian of a particle moving in a conservative potential U can be written
H={\frac {1}{2m}}\left[p_{r}^{2}+{\frac {p_{\theta }^{2}}{r^{2}}}+{\frac {p_{\phi }^{2}}{r^{2}\sin ^{2}\theta }}\right]+U(r,\theta ,\phi ).
The Hamilton–Jacobi equation is completely separable in these coordinates provided that there exist functions U_r(r), U_θ(θ), U_φ(φ) such that U can be written in the analogous form
U(r,\theta ,\phi )=U_{r}(r)+{\frac {U_{\theta }(\theta )}{r^{2}}}+{\frac {U_{\phi }(\phi )}{r^{2}\sin ^{2}\theta }}.
Substitution of the completely separated solution
S=S_{r}(r)+S_{\theta }(\theta )+S_{\phi }(\phi )-Et
into the HJE yields
{\frac {1}{2m}}\left({\frac {dS_{r}}{dr}}\right)^{2}+U_{r}(r)+{\frac {1}{2mr^{2}}}\left[\left({\frac {dS_{\theta }}{d\theta }}\right)^{2}+2mU_{\theta }(\theta )\right]+{\frac {1}{2mr^{2}\sin ^{2}\theta }}\left[\left({\frac {dS_{\phi }}{d\phi }}\right)^{2}+2mU_{\phi }(\phi )\right]=E.
This equation may be solved by successive integrations of ordinary differential equations, beginning with the equation for φ:
\left({\frac {dS_{\phi }}{d\phi }}\right)^{2}+2mU_{\phi }(\phi )=\Gamma _{\phi },
where Γ_φ is a constant of the motion that eliminates the φ dependence from the Hamilton–Jacobi equation:
{\frac {1}{2m}}\left({\frac {dS_{r}}{dr}}\right)^{2}+U_{r}(r)+{\frac {1}{2mr^{2}}}\left[\left({\frac {dS_{\theta }}{d\theta }}\right)^{2}+2mU_{\theta }(\theta )+{\frac {\Gamma _{\phi }}{\sin ^{2}\theta }}\right]=E.
The next ordinary differential equation involves the θ generalized coordinate:
\left({\frac {dS_{\theta }}{d\theta }}\right)^{2}+2mU_{\theta }(\theta )+{\frac {\Gamma _{\phi }}{\sin ^{2}\theta }}=\Gamma _{\theta },
where Γ_θ is again a constant of the motion that eliminates the θ dependence and reduces the HJE to the final ordinary differential equation
{\frac {1}{2m}}\left({\frac {dS_{r}}{dr}}\right)^{2}+U_{r}(r)+{\frac {\Gamma _{\theta }}{2mr^{2}}}=E,
whose integration completes the solution for S.
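The separation chain can be spot-checked numerically: pick arbitrary potential pieces and separation constants, solve each ordinary differential equation algebraically for the corresponding derivative of S, and confirm that the full spherical HJE is satisfied. The potential functions below are hypothetical choices made only for this test.

```python
import math

# Sketch: verify the spherical separation chain at one point (r, theta, phi).
m, E = 1.0, 5.0
Gphi, Gth = 0.3, 2.0                      # separation constants

def U_r(r):   return -1.0 / r             # hypothetical potential pieces
def U_th(th): return 0.1 * math.cos(th)
def U_phi(p): return 0.05 * math.sin(p)

r, th, ph = 1.7, 1.1, 0.4

# dS_phi/dphi from  (dS_phi/dphi)^2 + 2m*U_phi = Gamma_phi
dSphi = math.sqrt(Gphi - 2 * m * U_phi(ph))
# dS_th/dth from  (dS_th/dth)^2 + 2m*U_th + Gamma_phi/sin^2(th) = Gamma_th
dSth = math.sqrt(Gth - 2 * m * U_th(th) - Gphi / math.sin(th)**2)
# dS_r/dr from  (1/2m)(dS_r/dr)^2 + U_r + Gamma_th/(2m r^2) = E
dSr = math.sqrt(2 * m * (E - U_r(r)) - Gth / r**2)

# plug everything back into the full Hamilton-Jacobi equation
lhs = (dSr**2 / (2 * m) + U_r(r)
       + (dSth**2 + 2 * m * U_th(th)) / (2 * m * r**2)
       + (dSphi**2 + 2 * m * U_phi(ph)) / (2 * m * r**2 * math.sin(th)**2))
assert math.isclose(lhs, E, rel_tol=1e-12)
```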
==== Elliptic cylindrical coordinates ====
The Hamiltonian in elliptic cylindrical coordinates can be written
H={\frac {p_{\mu }^{2}+p_{\nu }^{2}}{2ma^{2}\left(\sinh ^{2}\mu +\sin ^{2}\nu \right)}}+{\frac {p_{z}^{2}}{2m}}+U(\mu ,\nu ,z),
where the foci of the ellipses are located at ±a on the x-axis. The Hamilton–Jacobi equation is completely separable in these coordinates provided that U has the analogous form
U(\mu ,\nu ,z)={\frac {U_{\mu }(\mu )+U_{\nu }(\nu )}{\sinh ^{2}\mu +\sin ^{2}\nu }}+U_{z}(z),
where U_μ(μ), U_ν(ν) and U_z(z) are arbitrary functions. Substitution of the completely separated solution
S=S_{\mu }(\mu )+S_{\nu }(\nu )+S_{z}(z)-Et
into the HJE yields
{\frac {1}{2m}}\left({\frac {dS_{z}}{dz}}\right)^{2}+{\frac {1}{2ma^{2}\left(\sinh ^{2}\mu +\sin ^{2}\nu \right)}}\left[\left({\frac {dS_{\mu }}{d\mu }}\right)^{2}+\left({\frac {dS_{\nu }}{d\nu }}\right)^{2}\right]+U_{z}(z)+{\frac {U_{\mu }(\mu )+U_{\nu }(\nu )}{\sinh ^{2}\mu +\sin ^{2}\nu }}=E.
Separating the first ordinary differential equation
{\frac {1}{2m}}\left({\frac {dS_{z}}{dz}}\right)^{2}+U_{z}(z)=\Gamma _{z}
yields the reduced Hamilton–Jacobi equation (after rearrangement and multiplication of both sides by the denominator)
\left({\frac {dS_{\mu }}{d\mu }}\right)^{2}+\left({\frac {dS_{\nu }}{d\nu }}\right)^{2}+2ma^{2}U_{\mu }(\mu )+2ma^{2}U_{\nu }(\nu )=2ma^{2}\left(\sinh ^{2}\mu +\sin ^{2}\nu \right)\left(E-\Gamma _{z}\right),
which itself may be separated into two independent ordinary differential equations,
\left({\frac {dS_{\mu }}{d\mu }}\right)^{2}+2ma^{2}U_{\mu }(\mu )+2ma^{2}\left(\Gamma _{z}-E\right)\sinh ^{2}\mu =\Gamma _{\mu },
\left({\frac {dS_{\nu }}{d\nu }}\right)^{2}+2ma^{2}U_{\nu }(\nu )+2ma^{2}\left(\Gamma _{z}-E\right)\sin ^{2}\nu =\Gamma _{\nu },
that, when solved, provide a complete solution for S.
==== Parabolic cylindrical coordinates ====
The Hamiltonian in parabolic cylindrical coordinates can be written
H={\frac {p_{\sigma }^{2}+p_{\tau }^{2}}{2m\left(\sigma ^{2}+\tau ^{2}\right)}}+{\frac {p_{z}^{2}}{2m}}+U(\sigma ,\tau ,z).
The Hamilton–Jacobi equation is completely separable in these coordinates provided that U has the analogous form
U(\sigma ,\tau ,z)={\frac {U_{\sigma }(\sigma )+U_{\tau }(\tau )}{\sigma ^{2}+\tau ^{2}}}+U_{z}(z),
where U_σ(σ), U_τ(τ) and U_z(z) are arbitrary functions. Substitution of the completely separated solution
S=S_{\sigma }(\sigma )+S_{\tau }(\tau )+S_{z}(z)-Et+{\text{constant}}
into the HJE yields
{\frac {1}{2m}}\left({\frac {dS_{z}}{dz}}\right)^{2}+{\frac {1}{2m\left(\sigma ^{2}+\tau ^{2}\right)}}\left[\left({\frac {dS_{\sigma }}{d\sigma }}\right)^{2}+\left({\frac {dS_{\tau }}{d\tau }}\right)^{2}\right]+U_{z}(z)+{\frac {U_{\sigma }(\sigma )+U_{\tau }(\tau )}{\sigma ^{2}+\tau ^{2}}}=E.
Separating the first ordinary differential equation
{\displaystyle {\frac {1}{2m}}\left({\frac {dS_{z}}{dz}}\right)^{2}+U_{z}(z)=\Gamma _{z}}
yields the reduced Hamilton–Jacobi equation (after re-arrangement and multiplication of both sides by the denominator)
{\displaystyle \left({\frac {dS_{\sigma }}{d\sigma }}\right)^{2}+\left({\frac {dS_{\tau }}{d\tau }}\right)^{2}+2m\left[U_{\sigma }(\sigma )+U_{\tau }(\tau )\right]=2m\left(\sigma ^{2}+\tau ^{2}\right)\left(E-\Gamma _{z}\right)}
which itself may be separated into two independent ordinary differential equations
{\displaystyle {\begin{alignedat}{4}\left({\frac {dS_{\sigma }}{d\sigma }}\right)^{2}&+\,&2mU_{\sigma }(\sigma )&+\,&2m\sigma ^{2}\left(\Gamma _{z}-E\right)&=\,&\Gamma _{\sigma }\\\left({\frac {dS_{\tau }}{d\tau }}\right)^{2}&+\,&2mU_{\tau }(\tau )&+\,&2m\tau ^{2}\left(\Gamma _{z}-E\right)&=\,&\Gamma _{\tau }\end{alignedat}}}
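A consistency check on the separation: adding the two ODEs must reproduce the reduced (σ, τ) equation, which forces the separation constants to satisfy Γσ + Γτ = 0. A minimal numeric sketch for a free particle (Uσ = Uτ = 0; all values illustrative):

```python
# Numeric spot-check of the parabolic-cylindrical separation for a free
# particle (U_sigma = U_tau = 0).  All parameter values are illustrative.
m, E, Gamma_z, Gamma_sigma = 1.0, 2.0, 0.5, 3.0
Gamma_tau = -Gamma_sigma  # summing the two ODEs forces Gamma_sigma + Gamma_tau = 0

def check(sigma, tau):
    dSs2 = Gamma_sigma - 2*m*sigma**2*(Gamma_z - E)   # (dS_sigma/dsigma)^2
    dSt2 = Gamma_tau   - 2*m*tau**2*(Gamma_z - E)     # (dS_tau/dtau)^2
    rhs  = 2*m*(sigma**2 + tau**2)*(E - Gamma_z)      # reduced HJE right side
    return abs(dSs2 + dSt2 - rhs)

assert all(check(s, t) < 1e-12 for s in (0.3, 1.7) for t in (0.2, 2.5))
```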
that, when solved, provide a complete solution for S.
== Waves and particles ==
=== Optical wave fronts and trajectories ===
The HJE establishes a duality between trajectories and wavefronts. For example, in geometrical optics, light can be considered either as "rays" or as waves. The wave front can be defined as the surface C_t that the light emitted at time t = 0 has reached at time t. Light rays and wave fronts are dual: if one is known, the other can be deduced.
More precisely, geometrical optics is a variational problem where the "action" is the travel time T along a path,
{\displaystyle T={\frac {1}{c}}\int _{A}^{B}n\,ds}
where n is the medium's index of refraction and ds is an infinitesimal arc length. From the above formulation, one can compute the ray paths using the Euler–Lagrange formulation; alternatively, one can compute the wave fronts by solving the Hamilton–Jacobi equation. Knowing one leads to knowing the other.
The above duality is very general and applies to all systems that derive from a variational principle: either compute the trajectories using the Euler–Lagrange equations, or compute the wave fronts using the Hamilton–Jacobi equation.
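On the variational side, minimizing the travel time numerically across a flat interface between two media recovers Snell's law, n₁ sin θ₁ = n₂ sin θ₂. A brute-force sketch (indices and endpoints are illustrative, not from the text):

```python
# Fermat's principle spot-check: minimize the travel time
# T = (1/c) * integral of n ds over a two-segment path crossing a flat
# interface at y = 0, then verify Snell's law at the optimum.
import math

n1, n2 = 1.0, 1.5                # refractive indices (illustrative)
A, B = (0.0, 1.0), (1.0, -1.0)   # start above, end below the interface

def travel_time(x):              # path broken at the crossing point (x, 0)
    s1 = math.hypot(x - A[0], A[1])
    s2 = math.hypot(B[0] - x, B[1])
    return n1*s1 + n2*s2         # the constant 1/c factor is omitted

# crude minimization by fine scan over candidate crossing points
xs = [i/100000 for i in range(100001)]
x_best = min(xs, key=travel_time)

# Snell's law: n1*sin(theta1) == n2*sin(theta2) at the optimal crossing
sin1 = (x_best - A[0]) / math.hypot(x_best - A[0], A[1])
sin2 = (B[0] - x_best) / math.hypot(B[0] - x_best, B[1])
assert abs(n1*sin1 - n2*sin2) < 1e-3
```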
The wave front at time t, for a system initially at q_0 at time t_0, is defined as the collection of points q such that S(q, t) = const. If S(q, t) is known, the momentum is immediately deduced:
{\displaystyle \mathbf {p} ={\frac {\partial S}{\partial \mathbf {q} }}.}
Once p is known, the tangents to the trajectories, q̇, are computed by solving the equation
{\displaystyle {\frac {\partial {\mathcal {L}}}{\partial {\dot {\mathbf {q} }}}}={\boldsymbol {p}}}
for q̇, where L is the Lagrangian. The trajectories are then recovered from the knowledge of q̇.
=== Relationship to the Schrödinger equation ===
The isosurfaces of the function S(q, t) can be determined at any time t. The motion of an S-isosurface as a function of time is defined by the motions of the particles beginning at the points q on the isosurface. The motion of such an isosurface can be thought of as a wave moving through q-space, although it does not obey the wave equation exactly. To show this, let S represent the phase of a wave
{\displaystyle \psi =\psi _{0}e^{iS/\hbar }}
where ℏ is a constant (the reduced Planck constant) introduced to make the exponential argument dimensionless; changes in the amplitude of the wave can be represented by having S be a complex number. The Hamilton–Jacobi equation is then rewritten as
{\displaystyle {\frac {\hbar ^{2}}{2m}}\nabla ^{2}\psi -U\psi ={\frac {\hbar }{i}}{\frac {\partial \psi }{\partial t}}}
which is the Schrödinger equation.
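As a spot-check of the form used above, a free plane wave ψ = e^{i(px − Et)/ℏ} with E = p²/2m satisfies ℏ²/2m ∇²ψ = (ℏ/i) ∂ψ/∂t. A finite-difference sketch (illustrative values, U = 0):

```python
# Plane-wave check: psi = exp(i*(p*x - E*t)/hbar) with E = p^2/(2m) solves
# the free (U = 0) Schrodinger equation in the form used in the text.
import cmath

hbar, m, p = 1.0, 1.0, 2.0   # illustrative values in natural units
E = p*p/(2*m)                # free-particle dispersion relation
h = 1e-4                     # finite-difference step

def psi(x, t):
    return cmath.exp(1j*(p*x - E*t)/hbar)

x, t = 0.3, 0.7
lap = (psi(x+h, t) - 2*psi(x, t) + psi(x-h, t)) / h**2   # second x-derivative
dpsidt = (psi(x, t+h) - psi(x, t-h)) / (2*h)             # time derivative

# hbar^2/(2m) * laplacian(psi)  ==  (hbar/i) * dpsi/dt
assert abs(hbar**2/(2*m)*lap - (hbar/1j)*dpsidt) < 1e-3
```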
Conversely, starting with the Schrödinger equation and our ansatz for ψ, it can be deduced that
{\displaystyle {\frac {1}{2m}}\left(\nabla S\right)^{2}+U+{\frac {\partial S}{\partial t}}={\frac {i\hbar }{2m}}{\frac {\nabla ^{2}\psi _{0}}{\psi _{0}}}.}
The classical limit (ℏ → 0) of the Schrödinger equation above becomes identical to the following variant of the Hamilton–Jacobi equation,
{\displaystyle {\frac {1}{2m}}\left(\nabla S\right)^{2}+U+{\frac {\partial S}{\partial t}}=0.}
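For a free particle the ansatz S = px − (p²/2m)t solves this variant with U = 0; a finite-difference sketch (illustrative values):

```python
# Check that S(x, t) = p*x - (p**2/(2*m))*t satisfies the free-particle
# Hamilton-Jacobi equation (1/2m)(dS/dx)^2 + dS/dt = 0 (i.e. U = 0).
m, p = 2.0, 3.0   # illustrative mass and momentum
h = 1e-6          # finite-difference step

def S(x, t):
    return p*x - (p*p/(2*m))*t

def dSdx(x, t):
    return (S(x+h, t) - S(x-h, t)) / (2*h)

def dSdt(x, t):
    return (S(x, t+h) - S(x, t-h)) / (2*h)

res = dSdx(1.0, 0.5)**2/(2*m) + dSdt(1.0, 0.5)
assert abs(res) < 1e-6
```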
== Applications ==
=== HJE in a gravitational field ===
Using the energy–momentum relation in the form
{\displaystyle g^{\alpha \beta }P_{\alpha }P_{\beta }-(mc)^{2}=0}
for a particle of rest mass m travelling in curved space, where g^{αβ} are the contravariant coordinates of the metric tensor (i.e., the inverse metric) solved from the Einstein field equations, and c is the speed of light. Setting the four-momentum P_α equal to the four-gradient of the action S,
{\displaystyle P_{\alpha }=-{\frac {\partial S}{\partial x^{\alpha }}}}
gives the Hamilton–Jacobi equation in the geometry determined by the metric g:
{\displaystyle g^{\alpha \beta }{\frac {\partial S}{\partial x^{\alpha }}}{\frac {\partial S}{\partial x^{\beta }}}-(mc)^{2}=0,}
in other words, in a gravitational field.
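In flat spacetime, with g = diag(1, −1, −1, −1) and a free-particle action, the equation above reduces to the familiar energy–momentum relation (E/c)² − p² = (mc)². A numeric sketch (illustrative values):

```python
# Flat-spacetime check: with the Minkowski metric diag(1,-1,-1,-1) and a
# free particle moving along x, the relativistic HJE reduces to
# (E/c)^2 - px^2 = (m*c)^2, the usual energy-momentum relation.
import math

m, c, px = 1.0, 1.0, 0.75   # illustrative values (c = 1 units)
E = math.sqrt((m*c**2)**2 + (px*c)**2)   # on-shell energy
lhs = (E/c)**2 - px**2
assert abs(lhs - (m*c)**2) < 1e-12
```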
=== HJE in electromagnetic fields ===
For a particle of rest mass m and electric charge e moving in an electromagnetic field with four-potential A_i = (ϕ, A) in vacuum, the Hamilton–Jacobi equation in the geometry determined by the metric tensor g^{ik} = g_{ik} has the form
{\displaystyle g^{ik}\left({\frac {\partial S}{\partial x^{i}}}+{\frac {e}{c}}A_{i}\right)\left({\frac {\partial S}{\partial x^{k}}}+{\frac {e}{c}}A_{k}\right)=m^{2}c^{2}}
and can be solved for Hamilton's principal function S to obtain the particle trajectory and momentum:
{\displaystyle {\begin{aligned}x&=-{\frac {e}{c\gamma }}\int A_{x}\,d\xi ,&y&=-{\frac {e}{c\gamma }}\int A_{y}\,d\xi ,\\[1ex]z&=-{\frac {e^{2}}{2c^{2}\gamma ^{2}}}\int \left(\mathrm {A} ^{2}-{\overline {\mathrm {A} ^{2}}}\right)\,d\xi ,&\xi &=ct-{\frac {e^{2}}{2\gamma ^{2}c^{2}}}\int \left(\mathrm {A} ^{2}-{\overline {\mathrm {A} ^{2}}}\right)\,d\xi ,\\[1ex]p_{x}&=-{\frac {e}{c}}A_{x},&p_{y}&=-{\frac {e}{c}}A_{y},\\[1ex]p_{z}&={\frac {e^{2}}{2\gamma c}}\left(\mathrm {A} ^{2}-{\overline {\mathrm {A} ^{2}}}\right),&{\mathcal {E}}&=c\gamma +{\frac {e^{2}}{2\gamma c}}\left(\mathrm {A} ^{2}-{\overline {\mathrm {A} ^{2}}}\right),\end{aligned}}}
where ξ = ct − z and γ² = m²c² + (e²/c²)Ā², with Ā the cycle average of the vector potential.
==== A circularly polarized wave ====
In the case of circular polarization,
{\displaystyle {\begin{aligned}E_{x}&=E_{0}\sin \omega \xi _{1},&E_{y}&=E_{0}\cos \omega \xi _{1},\\[1ex]A_{x}&={\frac {cE_{0}}{\omega }}\cos \omega \xi _{1},&A_{y}&=-{\frac {cE_{0}}{\omega }}\sin \omega \xi _{1}.\end{aligned}}}
Hence
{\displaystyle {\begin{aligned}x&=-{\frac {ecE_{0}}{\gamma \omega ^{2}}}\sin \omega \xi _{1},&y&=-{\frac {ecE_{0}}{\gamma \omega ^{2}}}\cos \omega \xi _{1},\\[1ex]p_{x}&=-{\frac {eE_{0}}{\omega }}\cos \omega \xi _{1},&p_{y}&={\frac {eE_{0}}{\omega }}\sin \omega \xi _{1},\end{aligned}}}
where ξ₁ = ξ/c, implying that the particle moves along a circular trajectory of constant radius ecE₀/γω², with momentum of constant magnitude eE₀/ω directed along the magnetic field vector.
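The constancy of the transverse momentum magnitude can be read off from p_x and p_y directly: their quadrature sum is eE₀/ω at every phase. A numeric sketch (illustrative values):

```python
# The circularly polarized solution gives transverse momentum components
# p_x = -(e*E0/w)*cos(w*xi1), p_y = (e*E0/w)*sin(w*xi1); the magnitude
# e*E0/w is the same at every phase.  Parameter values are illustrative.
import math

e, E0, w = 1.0, 2.0, 3.0
mags = {round(math.hypot(-(e*E0/w)*math.cos(w*x), (e*E0/w)*math.sin(w*x)), 12)
        for x in (0.0, 0.4, 1.1, 2.7)}
assert mags == {round(e*E0/w, 12)}   # one magnitude for all sampled phases
```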
==== A monochromatic linearly polarized plane wave ====
For the flat, monochromatic, linearly polarized wave with a field E directed along the y axis,
{\displaystyle {\begin{aligned}E_{y}&=E_{0}\cos \omega \xi _{1},&A_{y}&=-{\frac {cE_{0}}{\omega }}\sin \omega \xi _{1},\end{aligned}}}
hence
{\displaystyle {\begin{aligned}x&={\text{const}},\\[1ex]y&=y_{0}\cos \omega \xi _{1},&y_{0}&=-{\frac {ecE_{0}}{\gamma \omega ^{2}}},\\[1ex]z&=C_{z}y_{0}\sin 2\omega \xi _{1},&C_{z}&={\frac {eE_{0}}{8\gamma \omega }},\\[1ex]\gamma ^{2}&=m^{2}c^{2}+{\frac {e^{2}E_{0}^{2}}{2\omega ^{2}}},\end{aligned}}}
{\displaystyle {\begin{aligned}p_{x}&=0,\\[1ex]p_{y}&=p_{y,0}\sin \omega \xi _{1},&p_{y,0}&={\frac {eE_{0}}{\omega }},\\[1ex]p_{z}&=-2C_{z}p_{y,0}\cos 2\omega \xi _{1}\end{aligned}}}
implying that the particle moves in a figure-8 trajectory whose long axis is oriented along the electric field vector E.
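That these parametric equations trace a figure-8 (a 1 : 2 Lissajous curve) can be seen from the implicit relation z² = 4C_z² y² (1 − (y/y₀)²); a numeric spot-check (amplitudes are illustrative):

```python
# The linearly polarized solution y = y0*cos(th), z = Cz*y0*sin(2*th)
# satisfies z^2 = 4*Cz^2*y^2*(1 - (y/y0)^2), the implicit equation of a
# figure-8 (1:2 Lissajous) curve.  Amplitudes here are illustrative.
import math

y0, Cz = -0.8, 0.3
residuals = []
for th in (0.1, 0.9, 2.0, 4.4):   # th plays the role of omega*xi_1
    y = y0*math.cos(th)
    z = Cz*y0*math.sin(2*th)
    residuals.append(abs(z**2 - 4*Cz**2*y**2*(1 - (y/y0)**2)))
assert max(residuals) < 1e-12
```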
==== An electromagnetic wave with a solenoidal magnetic field ====
For the electromagnetic wave with axial (solenoidal) magnetic field:
{\displaystyle E=E_{\phi }={\frac {\omega \rho _{0}}{c}}B_{0}\cos \omega \xi _{1},}
{\displaystyle A_{\phi }=-\rho _{0}B_{0}\sin \omega \xi _{1}=-{\frac {L_{s}}{\pi \rho _{0}N_{s}}}I_{0}\sin \omega \xi _{1},}
hence
{\displaystyle {\begin{aligned}x&={\text{constant}},\\y&=y_{0}\cos \omega \xi _{1},&y_{0}&=-{\frac {e\rho _{0}B_{0}}{\gamma \omega }},\\z&=C_{z}y_{0}\sin 2\omega \xi _{1},&C_{z}&={\frac {e\rho _{0}B_{0}}{8c\gamma }},\\\gamma ^{2}&=m^{2}c^{2}+{\frac {e^{2}\rho _{0}^{2}B_{0}^{2}}{2c^{2}}},\end{aligned}}}
{\displaystyle {\begin{aligned}p_{x}&=0,\\p_{y}&=p_{y,0}\sin \omega \xi _{1},&p_{y,0}&={\frac {e\rho _{0}B_{0}}{c}},\\p_{z}&=-2C_{z}p_{y,0}\cos 2\omega \xi _{1},\end{aligned}}}
where B₀ is the magnetic field magnitude in a solenoid with effective radius ρ₀, inductance L_s, number of windings N_s, and electric current magnitude I₀ through the solenoid windings. The particle motion occurs along a figure-8 trajectory in the yz plane, set perpendicular to the solenoid axis at an arbitrary azimuth angle φ, due to the axial symmetry of the solenoidal magnetic field.
== Further reading ==
Arnold, V.I. (1989). Mathematical Methods of Classical Mechanics (2 ed.). New York: Springer. ISBN 0-387-96890-3.
Hamilton, W. (1833). "On a General Method of Expressing the Paths of Light, and of the Planets, by the Coefficients of a Characteristic Function" (PDF). Dublin University Review: 795–826.
Hamilton, W. (1834). "On the Application to Dynamics of a General Mathematical Method previously Applied to Optics" (PDF). British Association Report: 513–518.
Fetter, A. & Walecka, J. (2003). Theoretical Mechanics of Particles and Continua. Dover Books. ISBN 978-0-486-43261-8.
Landau, L. D.; Lifshitz, E. M. (1975). Mechanics. Amsterdam: Elsevier.
Sakurai, J. J. (1985). Modern Quantum Mechanics. Benjamin/Cummings Publishing. ISBN 978-0-8053-7501-5.
Jacobi, C. G. J. (1884), Vorlesungen über Dynamik, C. G. J. Jacobi's Gesammelte Werke (in German), Berlin: G. Reimer, OL 14009561M
Nakane, Michiyo; Fraser, Craig G. (2002). "The Early History of Hamilton-Jacobi Dynamics". Centaurus. 44 (3–4): 161–227. doi:10.1111/j.1600-0498.2002.tb00613.x. PMID 17357243.
The Discourses and Mathematical Demonstrations Relating to Two New Sciences (Italian: Discorsi e dimostrazioni matematiche intorno a due nuove scienze pronounced [diˈskorsi e ddimostratˈtsjoːni mateˈmaːtike inˈtorno a dˈduːe ˈnwɔːve ʃˈʃɛntse]) published in 1638 was Galileo Galilei's final book and a scientific testament covering much of his work in physics over the preceding thirty years. It was written partly in Italian and partly in Latin.
After his Dialogue Concerning the Two Chief World Systems, the Roman Inquisition had banned the publication of any of Galileo's works, including any he might write in the future. After the failure of his initial attempts to publish Two New Sciences in France, Germany, and Poland, it was published by Lodewijk Elzevir who was working in Leiden, South Holland, where the writ of the Inquisition was of less consequence (see House of Elzevir). Fra Fulgenzio Micanzio, the official theologian of the Republic of Venice, had initially offered to help Galileo publish the new work there, but he pointed out that publishing the Two New Sciences in Venice might cause Galileo unnecessary trouble; thus, the book was eventually published in Holland. Galileo did not seem to suffer any harm from the Inquisition for publishing this book since in January 1639, the book reached Rome's bookstores, and all available copies (about fifty) were quickly sold.
Discourses was written in a style similar to Dialogues, in which three men (Simplicio, Sagredo, and Salviati) discuss and debate the various questions Galileo is seeking to answer. There is a notable change in the men, however; Simplicio, in particular, is no longer quite as simple-minded, stubborn and Aristotelian as his name implies. His arguments are representative of Galileo's own early beliefs, as Sagredo represents his middle period, and Salviati proposes Galileo's newest models.
== Introduction ==
The book is divided into four days, each addressing different areas of physics. Galileo dedicates Two New Sciences to Lord Count of Noailles.
In the First Day, Galileo addressed topics that were discussed in Aristotle's Physics and the Aristotelian school's Mechanics. It also provides an introduction to the discussion of both of the new sciences. The likeness between the topics discussed, the specific questions hypothesized, and the style and sources throughout give the First Day its backbone. The First Day introduces the speakers in the dialogue: Salviati, Sagredo, and Simplicio, the same as in the Dialogue. All three represent Galileo at different stages of his life: Simplicio the youngest, and Salviati Galileo's closest counterpart. The Second Day addresses the question of the strength of materials.
The Third and Fourth days address the science of motion. The Third day discusses uniform and naturally accelerated motion, the issue of terminal velocity having been addressed in the First day. The Fourth day discusses projectile motion.
In Two Sciences, uniform motion is defined as motion that, over any equal periods of time, covers equal distances. With the use of the quantifier "any", uniformity is introduced and expressed more explicitly than in previous definitions.
Galileo had started an additional day on the force of percussion, but was not able to complete it to his own satisfaction. This section was referenced frequently in the first four days of discussion. It finally appeared only in the 1718 edition of Galileo's works, and it is often quoted as the "Sixth Day", following the numbering of the 1898 edition. During this additional day, Simplicio was replaced by Aproino, a former scholar and assistant of Galileo in Padua.
== Summary ==
Page numbers at the start of each paragraph are from the 1898 version, presently adopted as standard, and are found in the Crew and Drake translations.
=== Day one: Resistance of bodies to separation ===
[50] Preliminary discussions.
Sagredo (taken to be the younger Galileo) cannot understand why with machines one cannot argue from the small to the large: "I do not see that the properties of circles, triangles and...solid figures should change with their size". Salviati (speaking for Galileo) says the common opinion is wrong. Scale matters: a horse falling from a height of 3 or 4 cubits will break its bones whereas a cat falling from twice the height won't, nor will a grasshopper falling from a tower.
[56] The first example is a hemp rope which is constructed from small fibres which bind together in the same way as a rope round a windlass to produce something much stronger. Then the vacuum that prevents two highly polished plates from separating even though they slide easily gives rise to an experiment to test whether water can be expanded or whether a vacuum is caused. In fact, Sagredo had observed that a suction pump could not lift more than 18 cubits of water and Salviati observes that the weight of this is the amount of resistance to a void. The discussion turns to the strength of a copper wire and whether there are minute void spaces inside the metal or whether there is some other explanation for its strength.
[68] This leads into a discussion of infinites and the continuum, and thence to the observation that the number of squares equals the number of roots. He comes eventually to the view that "if any number can be said to be infinite, it must be unity", and demonstrates a construction in which an infinite circle is approached and another to divide a line.
[85] The difference between a fine dust and a liquid leads to a discussion of light and how the concentrated power of the sun can melt metals. He deduces that light has motion and describes an (unsuccessful) attempt to measure its speed.
[106] Aristotle believed that bodies fell at a speed proportional to weight but Salviati doubts that Aristotle ever tested this. He also did not believe that motion in a void was possible, but since air is much less dense than water Salviati asserts that in a medium devoid of resistance (a vacuum) all bodies—a lock of wool or a bit of lead—would fall at the same speed. Large and small bodies fall at the same speed through air or water providing they are of the same density. Since ebony weighs a thousand times as much as air (which he had measured), it will fall only a very little more slowly than lead which weighs ten times as much. But shape also matters—even a piece of gold leaf (the densest of all substances [asserts Salviati]) floats through the air and a bladder filled with air falls much more slowly than lead.
[128] Measuring the speed of a fall is difficult because of the small time intervals involved and his first way round this used pendulums of the same length but with lead or cork weights. The period of oscillation was the same, even when the cork was swung more widely to compensate for the fact that it soon stopped.
[139] This leads to a discussion of the vibration of strings and he suggests that not only the length of the string is important for pitch but also the tension and the weight of the string.
=== Day two: Cause of cohesion ===
[151] Salviati proves that a balance can be used not only with equal arms but with unequal arms with weights inversely proportional to the distances from the fulcrum. Following this he shows that the moment of a weight suspended by a beam supported at one end is proportional to the square of the length. The resistance to fracture of beams of various sizes and thicknesses is demonstrated, supported at one or both ends.
[169] He shows that animal bones have to be proportionately larger for larger animals and the length of a cylinder that will break under its own weight. He proves that the best place to break a stick placed upon the knee is the middle and shows how far along a beam that a larger weight can be placed without breaking it.
[178] He proves that the optimum shape for a beam supported at one end and bearing a load at the other is parabolic. He also shows that hollow cylinders are stronger than solid ones of the same weight.
=== Day three: Naturally accelerated motion ===
[191] He first defines uniform (steady) motion and shows the relationship between speed, time and distance. He then defines uniformly accelerated motion where the speed increases by the same amount in increments of time. Falling bodies start very slowly and he sets out to show that their velocity increases in simple proportionality to time, not to distance which he shows is impossible.
[208] He shows that the distance travelled in naturally accelerated motion is proportional to the square of the time. He describes an experiment in which a steel ball was rolled down a groove in a piece of wooden moulding 12 cubits long (about 5.5m) with one end raised by one or two cubits. This was repeated, measuring times by accurately weighing the amount of water that came out of a thin pipe in a jet from the bottom of a large jug of water. By this means he was able to verify the uniformly accelerated motion. He then shows that whatever the inclination of the plane, the square of the time taken to fall a given vertical height is proportional to the inclined distance.
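The proportionality to the square of the time can be checked with simple arithmetic: under uniform acceleration from rest, total distances at equal time marks stand as 1 : 4 : 9 : 16, and the distances covered in successive equal intervals stand as the odd numbers 1 : 3 : 5 : 7. A minimal sketch (the acceleration value is arbitrary):

```python
# Galileo's times-squared law: for uniform acceleration from rest the
# distances at t = 1, 2, 3, ... are as 1 : 4 : 9 : ...; equivalently the
# distances covered in successive equal intervals are as 1 : 3 : 5 : ...
a = 2.0                                       # any constant acceleration
d = [0.5*a*t**2 for t in range(5)]            # distance at t = 0..4
increments = [d[i+1] - d[i] for i in range(4)]

assert [x/d[1] for x in d[1:]] == [1.0, 4.0, 9.0, 16.0]        # squares
assert [x/increments[0] for x in increments] == [1.0, 3.0, 5.0, 7.0]
```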
[221] He next considers descent along the chords of a circle, showing that the time is the same as that falling from the vertex, and various other combinations of planes. He gives an erroneous solution to the brachistochrone problem, claiming to prove that the arc of the circle is the fastest descent. 16 problems with solutions are given.
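The equal-time property for chords from the top of a vertical circle follows because both the chord length and the acceleration along it scale with the cosine of the chord's angle from the vertical; a numeric sketch (assuming frictionless sliding, illustrative diameter):

```python
# Law of chords: sliding from rest down any chord drawn from the top of a
# vertical circle takes the same time as free fall down the diameter.
import math

g, D = 9.8, 2.0                      # gravity and circle diameter (illustrative)

def chord_time(theta):               # theta: chord angle from the vertical
    L = D*math.cos(theta)            # chord length from the top point
    a = g*math.cos(theta)            # acceleration component along the chord
    return math.sqrt(2*L/a)          # time from rest over distance L

t_diam = math.sqrt(2*D/g)            # free-fall time down the diameter
assert all(abs(chord_time(th) - t_diam) < 1e-12 for th in (0.1, 0.7, 1.3))
```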
=== Day four: The motion of projectiles ===
[268] The motion of projectiles consists of a combination of uniform horizontal motion and a naturally accelerated vertical motion which produces a parabolic curve. Two motions at right angles can be calculated using the sum of the squares. He shows in detail how to construct the parabolas in various situations and gives tables for altitude and range depending on the projected angle.
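The composition described here is easy to verify numerically: uniform horizontal motion plus uniformly accelerated vertical motion gives a parabola, and the speed combines the two components as the square root of the sum of their squares. A minimal sketch (illustrative launch speed):

```python
# Projectile motion as Galileo composes it: uniform horizontal motion plus
# naturally accelerated vertical motion; the path is a parabola and the
# speed is the root of the sum of the squares of the two components.
import math

vx, g = 10.0, 9.8                    # illustrative horizontal speed, gravity
residuals, speeds = [], []
for t in (0.0, 0.5, 1.2):
    x, y = vx*t, 0.5*g*t**2          # y measured downward from the launch point
    vy = g*t
    speeds.append(math.sqrt(vx**2 + vy**2))        # "sum of the squares"
    residuals.append(abs(y - (g/(2*vx**2))*x**2))  # parabola y = k*x^2

assert max(residuals) < 1e-9
assert all(s >= vx for s in speeds)
```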
[274] Air resistance shows itself in two ways: by affecting less dense bodies more and by offering greater resistance to faster bodies. A lead ball will fall slightly faster than an oak ball, but the difference with a stone ball is negligible. However the speed does not go on increasing indefinitely but reaches a maximum. Though at small speeds the effect of air resistance is small, it is greater when considering, say, a ball fired from a cannon.
[292] The effect of a projectile hitting a target is reduced if the target is free to move. The velocity of a moving body can overcome that of a larger body if its speed is proportionately greater than the resistance.
[310] A cord or chain stretched out is never level but approximates a parabola. (But see also catenary.)
=== Additional day: The force of percussion ===
[323] What is the weight of water falling from a bucket hanging on a balance arm onto another bucket suspended to the same arm?
[325] Piling of wooden poles for foundations; hammers and the force of percussion.
[336] Speed of fall along inclined planes; again on the principle of inertia.
== Methodology ==
Many contemporary scientists, such as Gassendi, disputed Galileo's methodology for conceptualizing his law of falling bodies. Two of the main arguments are that his epistemology followed Platonist thought or was hypothetico-deductivist. It is now considered to be ex suppositione: knowing the how and why of effects from past events in order to determine the requirements for the production of similar effects in the future. Galilean methodology mirrored Aristotelian and Archimedean epistemology. Following a letter from Cardinal Bellarmine in 1615, Galileo distinguished his and Copernicus' arguments as natural suppositions, as opposed to the "fictive" kind that are "introduced only for the sake of astronomical computations," such as Ptolemy's hypothesis on eccentrics and equants.
Galileo's earlier writings, considered Juvenilia or youthful writings, are regarded as his first attempts at creating lecture notes for his course "hypothesis of the celestial motions" while teaching at the University of Padua. These notes mirrored those of his contemporaries at the Collegio and contained an "Aristotelian context with decided Thomistic (St. Thomas Aquinas) overtones." These earlier papers are believed to have encouraged him to apply demonstrative proof in order to give validity to his discoveries on motion.
Discovery of folio 116v gives evidence of experiments that had previously not been reported and therefore demonstrated Galileo's actual calculations for the Law of Falling Bodies.
His methods of experimentation have been confirmed by the recordings and recreations of scholars such as James MacLachlan, Stillman Drake, R. H. Taylor and others, showing that he did not merely imagine his ideas, as historian Alexandre Koyré argued, but sought to prove them mathematically.
Galileo believed that knowledge could be acquired through reason, and reinforced through observation and experimentation. Thus, it can be argued that Galileo was a rationalist, and also that he was an empiricist.
== The two new sciences ==
The two sciences mentioned in the title are the strength of materials and the motion of objects (the forebears of modern material engineering and kinematics). In the title of the book "mechanics" and "motion" are separate, since at Galileo's time "mechanics" meant only statics and strength of materials.
=== The science of materials ===
The discussion begins with a demonstration of the reasons that a large structure, proportioned in exactly the same way as a smaller one, must necessarily be weaker, a relationship known as the square–cube law. Later in the discussion this principle is applied to the thickness required of the bones of a large animal, possibly the first quantitative result in biology, anticipating J. B. S. Haldane's work On Being the Right Size, and other essays, edited by John Maynard Smith.
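The square–cube scaling can be made concrete: scaling all lengths by s multiplies cross-sections (and hence strength) by s², but volume (and hence weight) by s³, so strength per unit weight falls as 1/s. A minimal sketch (unit base values are illustrative):

```python
# Square-cube law sketch: scaling a structure by a factor s multiplies
# cross-sectional strength by s**2 but weight by s**3, so the
# strength-to-weight ratio falls as 1/s; larger structures are weaker.
def strength_to_weight(s, base_strength=1.0, base_weight=1.0):
    return (base_strength*s**2) / (base_weight*s**3)

r1 = strength_to_weight(1.0)
r10 = strength_to_weight(10.0)
assert abs(r10 - r1/10) < 1e-12   # ten times larger, ten times weaker per weight
```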
=== The motion of objects ===
Galileo expresses clearly for the first time the constant acceleration of a falling body which he was able to measure accurately by slowing it down using an inclined plane.
In Two New Sciences, Galileo (Salviati speaks for him) used a wood molding, "12 cubits long, half a cubit wide and three finger-breadths thick" as a ramp with a straight, smooth, polished groove to study rolling balls ("a hard, smooth and very round bronze ball"). He lined the groove with "parchment, also smooth and polished as possible". He inclined the ramp at various angles, effectively slowing down the acceleration enough so that he could measure the elapsed time. He would let the ball roll a known distance down the ramp, and use a water clock to measure the time taken to move the known distance. This clock was
a large vessel of water placed in an elevated position; to the bottom of this vessel was soldered a pipe of small diameter giving a thin jet of water, which we collected in a small glass during the time of each descent, whether for the whole length of the channel or for a part of its length. The water thus collected was weighed, after each descent, on a very accurate balance; the differences and ratios of these weights gave him the differences and ratios of the times. This was done with such accuracy that although the operation was repeated many, many times, there was no appreciable discrepancy in the results.
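The timing scheme converts weight of water into elapsed time: at a steady flow, weight ratios are time ratios, and with distance proportional to the square of the time, the distance ratios are the squared weight ratios. A minimal numeric sketch (the flow rate is an illustrative value):

```python
# The water clock measures time by the weight of water at a steady flow,
# so weight ratios equal time ratios; with distance proportional to the
# square of the time, distance ratios are the squared weight ratios.
flow = 3.5                        # grams per second (illustrative)
times = [1.0, 2.0, 3.0]
weights = [flow*t for t in times] # what the balance actually measures
dist = [t*t for t in times]       # units chosen so that d = t**2

for w, d in zip(weights, dist):
    assert abs((w/weights[0])**2 - d/dist[0]) < 1e-12
```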
==== The law of falling bodies ====
While Aristotle had observed that heavier objects fall more quickly than lighter ones, in Two New Sciences Galileo postulated that this was due not to inherently stronger forces acting on the heavier objects, but to the countervailing forces of air resistance and friction. To compensate, he conducted experiments using a shallowly inclined ramp, smoothed so as to eliminate as much friction as possible, on which he rolled down balls of different weights. In this manner, he was able to provide empirical evidence that matter accelerates vertically downward at a constant rate, regardless of mass, due to the effects of gravity.
The unreported experiment found in folio 116v tested the constant rate of acceleration in falling bodies due to gravity. This experiment consisted of dropping a ball from specified heights onto a deflector in order to transfer its motion from vertical to horizontal. The data from the inclined plane experiments were used to calculate the expected horizontal motion. However, discrepancies were found in the results of the experiment: the observed horizontal distances disagreed with the calculated distances expected for a constant rate of acceleration. Galileo attributed the discrepancies to air resistance in the unreported experiment, and friction in the inclined plane experiment. These discrepancies forced Galileo to assert that the postulate held only under "ideal conditions", i.e., in the absence of friction and/or air resistance.
==== Bodies in motion ====
Aristotelian physics argued that the Earth must not move as humans are unable to perceive the effects of this motion. A popular justification of this is the experiment of an archer shooting an arrow straight up into the air. If the Earth were moving, Aristotle argued, the arrow should fall in a different location than the launch point. Galileo refuted this argument in Dialogues Concerning the Two Chief World Systems. He provided the example of sailors aboard a boat at sea. The boat is obviously in motion, but the sailors are unable to perceive this motion. If a sailor were to drop a weighted object from the mast, this object would fall at the base of the mast rather than behind it (due to the ship's forward motion). This was the result of the simultaneous horizontal and vertical motions of the ship, sailors, and ball.
==== Relativity of motions ====
One of Galileo's experiments regarding falling bodies was that describing the relativity of motions, explaining that, under the right circumstances, "one motion may be superimposed upon another without effect upon either...". In Two New Sciences, Galileo made his case for this argument and it would become the basis of Newton's first law, the law of inertia.
He poses the question of what happens to a ball dropped from the mast of a sailing ship or an arrow fired into the air on the deck. According to Aristotle's physics, the ball dropped should land at the stern of the ship as it falls straight down from the point of origin. Likewise the arrow when fired straight up should not land in the same spot if the ship is in motion. Galileo offers that there are two independent motions at play. One is the accelerating vertical motion caused by gravity while the other is the uniform horizontal motion caused by the moving ship which continues to influence the trajectory of the ball through the principle of inertia. The combination of these two motions results in a parabolic curve. The observer cannot identify this parabolic curve because the ball and observer share the horizontal movement imparted to them by the ship, meaning only the perpendicular, vertical motion is perceivable. Surprisingly, nobody had tested this theory with the simple experiments needed to gain a conclusive result until Pierre Gassendi published the results of said experiments in his letters entitled De Motu Impresso a Motore Translato (1642).
== Infinity ==
The book also contains a discussion of infinity. Galileo considers the example of numbers and their squares. He starts by noting that:
It cannot be denied that there are as many [squares] as there are numbers because every number is a [square] root of some square:
1 ↔ 1, 2 ↔ 4, 3 ↔ 9, 4 ↔ 16, and so on.
But he notes what appears to be a contradiction:
Yet at the outset we said there are many more numbers than squares, since the larger portion of them are not squares. Not only so, but the proportionate number of squares diminishes as we pass to larger numbers.
(In modern language, there is a bijection between the set of positive integers N and the set of squares S, and S is a proper subset of N of density zero.) He resolves the contradiction by denying the possibility of comparing infinite numbers (and of comparing infinite and finite numbers):
We can only infer that the totality of all numbers is infinite, that the number of squares is infinite, and that the number of their roots is infinite; neither is the number of squares less than the totality of all numbers, nor the latter greater than the former; and finally the attributes "equal", "greater", and "less" are not applicable to infinite, but only to finite, quantities.
This conclusion, that ascribing sizes to infinite sets should be ruled impossible, owing to the contradictory results obtained from these two ostensibly natural ways of attempting to do so, is a resolution to the problem that is consistent with, but less powerful than, the methods used in modern mathematics. The resolution to the problem may be generalized by considering Galileo's first definition of what it means for sets to have equal sizes, that is, the ability to put them in one-to-one correspondence. This turns out to yield a way of comparing the sizes of infinite sets that is free from contradictory results.
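Galileo's one-to-one pairing of the integers with their squares, and the diminishing density of the squares, can be illustrated with a short numerical sketch (Python is our choice here, not the source's):

```python
import math

# Galileo's pairing: every positive integer n corresponds to exactly one
# square n^2, so there are "as many" squares as there are numbers...
pairs = [(n, n * n) for n in range(1, 5)]
assert pairs == [(1, 1), (2, 4), (3, 9), (4, 16)]

# ...yet the proportion of squares among 1..N shrinks as N grows,
# which is the apparent contradiction Galileo points out.
def square_density(limit):
    """Fraction of the integers 1..limit that are perfect squares."""
    return math.isqrt(limit) / limit

densities = [square_density(10 ** k) for k in (2, 4, 6)]
# densities shrink toward zero: 0.1, 0.01, 0.001
```

The modern resolution, as the text notes, is that the bijection n ↔ n² is exactly what it means for the two sets to have the same cardinality, while the vanishing density captures the sense in which the squares are "fewer".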
These issues of infinity arise from problems of rolling circles. If two concentric circles of different radii roll along lines, then if the larger does not slip it appears clear that the smaller must slip. But in what way? Galileo attempts to clarify the matter by considering hexagons and then extending to rolling 100 000-gons, or n-gons, where he shows that a finite number of finite slips occur on the inner shape. Eventually, he concludes "the line traversed by the larger circle consists then of an infinite number of points which completely fill it; while that which is traced by the smaller circle consists of an infinite number of points which leave empty spaces and only partly fill the line," which would not be considered satisfactory now.
== Reactions by commentators ==
So great a contribution to physics was Two New Sciences that scholars have long maintained that the book anticipated Isaac Newton's laws of motion.
Galileo ... is the father of modern physics—indeed of modern science
Part of Two New Sciences was pure mathematics, as has been pointed out by the mathematician Alfréd Rényi, who said that it was the most significant book on mathematics in over 2000 years: Greek mathematics did not deal with motion, and so the Greeks never formulated mathematical laws of motion, even though Archimedes developed differentiation and integration. Two New Sciences opened the way to treating physics mathematically by treating motion mathematically for the first time. The Greek mathematician Zeno had designed his paradoxes to prove that motion could not be treated mathematically, and that any attempt to do so would lead to paradoxes. (He regarded this as an inevitable limitation of mathematics.) Aristotle reinforced this belief, saying that mathematics could only deal with abstract objects that were immutable. Galileo used the very methods of the Greeks to show that motion could indeed be treated mathematically. His idea was to separate out the paradoxes of the infinite from Zeno's paradoxes. He did this in several steps. First, he showed that the infinite sequence S of the squares 1, 4, 9, 16, ... contained as many elements as the sequence N of all positive integers (infinity); this is now referred to as Galileo's paradox. Then, using Greek-style geometry, he showed that a short line interval contained as many points as a longer interval. At some point he formulates the general principle that a smaller infinite set can have just as many points as a larger infinite set containing it. It was then clear that Zeno's paradoxes on motion resulted entirely from this paradoxical behavior of infinite quantities. Rényi said that, having removed this 2000-year-old stumbling block, Galileo went on to introduce his mathematical laws of motion, anticipating Newton.
=== Gassendi's thoughts ===
Pierre Gassendi defended Galileo's opinions in his book, De Motu Impresso a Motore Translato. In Howard Jones' article, Gassendi's Defence of Galileo: The Politics of Discretion, Jones says Gassendi displayed an understanding of Galileo's arguments and a clear grasp of their implications for the physical objections to the earth's motion.
=== Koyré's thoughts ===
The law of falling bodies was published by Galileo in 1638. But in the 20th century some authorities challenged the reality of Galileo's experiments. In particular, the French historian of science Alexandre Koyré based his doubt on the fact that the experiments reported in Two New Sciences to determine the law of acceleration of falling bodies required accurate measurements of time which appeared to be impossible with the technology of 1600. According to Koyré, the law was arrived at deductively, and the experiments were merely illustrative thought experiments. In fact, Galileo's water clock (described above) provided sufficiently accurate measurements of time to confirm his conjectures.
Later research, however, has validated the experiments. The experiments on falling bodies (actually rolling balls) were replicated using the methods described by Galileo, and the precision of the results was consistent with Galileo's report. Later research into Galileo's unpublished working papers from 1604 clearly showed the reality of the experiments and even indicated the particular results that led to the time-squared law.
== See also ==
De Motu Antiquiora (Galileo's earliest investigations of the motion of falling bodies)
== Notes ==
== References ==
Drake, Stillman, translator (1974). Two New Sciences, University of Wisconsin Press, 1974. ISBN 0-299-06404-2. A new translation including sections on centers of gravity and the force of percussion.
Drake, Stillman (1978). Galileo At Work. Chicago: University of Chicago Press. ISBN 978-0-226-16226-3.
Henry Crew and Alfonso de Salvio, translators, [1914] (1954). Dialogues Concerning Two New Sciences, Dover Publications Inc., New York, NY. ISBN 978-0-486-60099-4. The classic source in English, originally published by Macmillan (1914).
Jones, Howard, "Gassendi's Defense of Galileo: The Politics of Discretion", Medieval Renaissance Texts and Studies 58, 1988.
Titles of the first editions taken from Leonard C. Bruno 1989, The Landmarks of Science: from the Collections of the Library of Congress. ISBN 0-8160-2137-6 Q125.B87
Galileo Galilei, Discorsi e dimostrazioni matematiche intorno a due nuove scienze attinenti la meccanica e i movimenti locali (pag.664, of Claudio Pierini) publication Cierre, Simeoni Arti Grafiche, Verona, 2011, ISBN 9788895351049.
Wallace, William A. Galileo and Reasoning Ex Suppositione: The Methodology of the Two New Sciences. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, Vol. 1974, (1974), pp. 79–104
Salvia, Stefano (2014). "'Galileo's Machine': Late Notes on Free Fall, Projectile Motion, and the Force of Percussion (ca. 1638–1639)". Physics in Perspective. 16 (4): 440–460. Bibcode:2014PhP....16..440S. doi:10.1007/s00016-014-0149-1. S2CID 122967350.
De Angelis, Alessandro (2021). Discorsi e Dimostrazioni Matematiche di Galileo Galilei per il Lettore Moderno (in Italian). Torino: Codice. ISBN 978-8875789305.
De Angelis, Alessandro (2021). Galilei's Two New Sciences for Modern Readers. Heidelberg: Springer Nature. ISBN 978-3030719524. With prefaces by Ugo Amaldi and Telmo Pievani.
== External links ==
(in Italian) Italian text with figures
English translation by Crew and de Salvio, with original figures | Wikipedia/Dialogues_Concerning_Two_New_Sciences |
The Course of Theoretical Physics is a ten-volume series of books covering theoretical physics that was initiated by Lev Landau and written in collaboration with his student Evgeny Lifshitz starting in the late 1930s.
It is said that Landau composed much of the series in his head while in an NKVD prison in 1938–1939. However, almost all of the actual writing of the early volumes was done by Lifshitz, giving rise to the witticism, "not a word of Landau and not a thought of Lifshitz". The first eight volumes were finished in the 1950s, written in Russian and translated into English in the late 1950s by John Stewart Bell, together with John Bradbury Sykes, M. J. Kearsley, and W. H. Reid. The last two volumes were written in the early 1980s. Vladimir Berestetskii and Lev Pitaevskii also contributed to the series. The series is often referred to as "Landau and Lifshitz", "Landafshitz" (Russian: "Ландафшиц"), or "Lanlifshitz" (Russian: "Ланлифшиц") in informal settings.
== Impact ==
The presentation of material is advanced and typically considered suitable for graduate-level study. Despite this specialized character, it is estimated that a million volumes of the Course were sold by 2005.
The series has been called "renowned" in Science and "celebrated" in American Scientist. A note in Mathematical Reviews states, "The usefulness and the success of this course have been proved by the great number of successive editions in Russian, English, French, German and other languages." At a centenary celebration of Landau's career, it was observed that the Course had shown "unprecedented longevity."
In 1962, Landau and Lifshitz were awarded the Lenin Prize for their work on the Course. This was the first occasion on which the Lenin Prize had been awarded for the teaching of physics.
== English editions ==
The following list does not include reprints and revised editions.
=== Volume 1 ===
Landau, Lev D.; Lifshitz, Evgeny M. (1960). Mechanics. Vol. 1 (1st ed.). Pergamon Press. ASIN B0006AWV88.
Landau, Lev D.; Lifshitz, Evgeny M. (1969). Mechanics. Vol. 1 (2nd ed.). Pergamon Press. ISBN 978-0-201-04146-0.
Landau, Lev D.; Lifshitz, Evgeny M. (1976). Mechanics. Vol. 1 (3rd ed.). Butterworth-Heinemann. ISBN 978-0-7506-2896-9.
Volume 1 covers classical mechanics without special or general relativity, in the Lagrangian and Hamiltonian formalisms.
=== Volume 2 ===
Landau, Lev D.; Lifshitz, Evgeny M. (1951). The Classical Theory of Fields. Vol. 2 (1st ed.). Addison-Wesley. ASIN B0007G5B42.
Landau, Lev D.; Lifshitz, Evgeny M. (1959). The Classical Theory of Fields. Vol. 2 (2nd ed.). Pergamon Press.
Landau, Lev D.; Lifshitz, Evgeny M. (1971). The Classical Theory of Fields. Vol. 2 (3rd ed.). Pergamon Press. ISBN 978-0-08-016019-1.
Landau, Lev D.; Lifshitz, Evgeny M. (1975). The Classical Theory of Fields. Vol. 2 (4th ed.). Butterworth-Heinemann. ISBN 978-0-7506-2768-9.
Volume 2 covers relativistic mechanics of particles, and classical field theory for fields, specifically special relativity and electromagnetism, general relativity and gravitation.
=== Volume 3 ===
Landau, Lev D.; Lifshitz, Evgeny M. (1958). Quantum Mechanics: Non-Relativistic Theory. Vol. 3 (1st ed.). Pergamon Press.
Landau, Lev D.; Lifshitz, Evgeny M. (1965). Quantum Mechanics: Non-Relativistic Theory. Vol. 3 (2nd ed.). Pergamon Press.
Landau, Lev D.; Lifshitz, Evgeny M. (1977). Quantum Mechanics: Non-Relativistic Theory. Vol. 3 (3rd ed.). Pergamon Press. ISBN 978-0-08-020940-1.
Volume 3 covers quantum mechanics without special relativity.
=== Volume 4 ===
Berestetskii, Vladimir B.; Lifshitz, Evgeny M.; Pitaevskii, Lev P. (1971). Relativistic Quantum Theory. Vol. 4 (1st ed.). Pergamon Press. ISBN 978-0-08-017175-3.
Berestetskii, Vladimir B.; Lifshitz, Evgeny M.; Pitaevskii, Lev P. (1982). Quantum Electrodynamics. Vol. 4 (2nd ed.). Butterworth-Heinemann. ISBN 978-0-7506-3371-0.
The original edition comprised two books, labelled part 1 and part 2. The first covered general aspects of relativistic quantum mechanics and relativistic quantum field theory, leading onto quantum electrodynamics. The second continued with quantum electrodynamics and what was then known about the strong and weak interactions. These books were published in the early 1970s, at a time when the strong and weak forces were still not well understood. In the second edition, the corresponding sections were scrapped and replaced with more topics in the well-established quantum electrodynamics, and the two parts were unified into one, thus providing a one-volume exposition on relativistic quantum field theory with the electromagnetic interaction as the prototype of a quantum field theory.
=== Volume 5 ===
Statistical Physics. Vol. 5 (1st ed.). 1951.
Early version: Landau, Lev D. (1938). Statistical Physics. Clarendon Press. ASIN B00085BKZG.
Statistical Physics. Vol. 5 (2nd ed.). 1968.
Landau, Lev D.; Lifshitz, Evgeny M. (1980). Statistical Physics. Vol. 5 (3rd ed.). Butterworth-Heinemann. ISBN 978-0-7506-3372-7.
Volume 5 covers general statistical mechanics and thermodynamics and applications, including chemical reactions, phase transitions, and condensed matter physics.
=== Volume 6 ===
Landau, Lev D.; Lifshitz, Evgeny M. (1959). Fluid Mechanics. Vol. 6 (1st ed.). Pergamon Press. ISBN 978-0-08-009104-4. {{cite book}}: ISBN / Date incompatibility (help)
Landau, Lev D.; Lifshitz, Evgeny M. (1987). Fluid Mechanics. Vol. 6 (2nd ed.). Butterworth-Heinemann. ISBN 978-0-08-033933-7.
Volume 6 covers fluid mechanics in a condensed but varied exposition, from ideal to viscous fluids, includes a chapter on relativistic fluid mechanics, and another on superfluids.
=== Volume 7 ===
Landau, Lev D.; Lifshitz, Evgeny M. (1959). Theory of Elasticity. Vol. 7 (1st ed.). Pergamon Press.
Landau, Lev D.; Lifshitz, Evgeny M. (1970). Theory of Elasticity. Vol. 7 (2nd ed.). Pergamon Press. ISBN 978-0-08-006465-9.
Landau, Lev D.; Lifshitz, Evgeny M. (1986). Theory of Elasticity. Vol. 7 (3rd ed.). Butterworth-Heinemann. ISBN 978-0-7506-2633-0.
Volume 7 covers elasticity theory of solids, including viscous solids, vibrations and waves in crystals with dislocations, and a chapter on the mechanics of liquid crystals.
=== Volume 8 ===
Landau, Lev D.; Lifshitz, Evgeny M. (1960). Electrodynamics of Continuous Media. Vol. 8 (1st ed.). Pergamon Press. ISBN 978-0-08-009105-1. {{cite book}}: ISBN / Date incompatibility (help)
Landau, Lev D.; Lifshitz, Evgeny M.; Pitaevskii, Lev P. (1984). Electrodynamics of Continuous Media. Vol. 8 (2nd ed.). Butterworth-Heinemann. ISBN 978-0-7506-2634-7.
Volume 8 covers electromagnetism in materials, and includes a variety of topics in condensed matter physics, a chapter on magnetohydrodynamics, and another on nonlinear optics.
=== Volume 9 ===
Lifshitz, Evgeny M.; Pitaevskii, Lev P. (1980). Statistical Physics, Part 2: Theory of the Condensed State. Vol. 9 (1st ed.). Butterworth-Heinemann. ISBN 978-0-7506-2636-1.
Volume 9 builds on the original statistical physics book, with more applications to condensed matter theory.
=== Volume 10 ===
Lifshitz, Evgeny M.; Pitaevskii, Lev P. (1981). Physical Kinetics. Vol. 10 (1st ed.). Pergamon Press. ISBN 978-0-7506-2635-4.
Volume 10 presents various applications of kinetic theory to condensed matter theory, and to metals, insulators, and phase transitions.
== See also ==
Lectures on Theoretical Physics
List of textbooks on classical and quantum mechanics
List of textbooks in thermodynamics and statistical mechanics
List of textbooks in electromagnetism
The Theoretical Minimum
== Notes ==
== External links ==
Internet Archive: "Internet Archive". Retrieved 2013-11-02. (for volumes 1, 2, 3, 6, 7, 8) and "Internet Archive". Retrieved 2013-11-02. (for volume 4), and "Internet Archive". Internet Archive. 1969. Retrieved 2016-08-10. (for volume 5).
Britannica Online: Course of Theoretical Physics
Internet Archive: Landau-Lifschitz Vol. 1-10 | Wikipedia/Course_of_theoretical_physics |
A pendulum is a body suspended from a fixed support such that it freely swings back and forth under the influence of gravity. When a pendulum is displaced sideways from its resting, equilibrium position, it is subject to a restoring force due to gravity that will accelerate it back towards the equilibrium position. When released, the restoring force acting on the pendulum's mass causes it to oscillate about the equilibrium position, swinging it back and forth. The mathematics of pendulums are in general quite complicated. Simplifying assumptions can be made, which in the case of a simple pendulum allow the equations of motion to be solved analytically for small-angle oscillations.
== Simple gravity pendulum ==
A simple gravity pendulum is an idealized mathematical model of a real pendulum. It is a weight (or bob) on the end of a massless cord suspended from a pivot, without friction. Since in the model there is no frictional energy loss, when given an initial displacement it swings back and forth with a constant amplitude. The model is based on the assumptions:
The rod or cord is massless, inextensible and always remains under tension.
The bob is a point mass.
The motion occurs in two dimensions.
The motion does not lose energy to external friction or air resistance.
The gravitational field is uniform.
The support is immobile.
The differential equation which governs the motion of a simple pendulum is (Eq. 1)
{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+{\frac {g}{\ell }}\sin \theta =0,}
where g is the magnitude of the gravitational field, ℓ is the length of the rod or cord, and θ is the angle from the vertical to the pendulum.
== Small-angle approximation ==
The differential equation given above is not easily solved, and there is no solution that can be written in terms of elementary functions. However, adding a restriction to the size of the oscillation's amplitude gives a form whose solution can be easily obtained. If it is assumed that the angle is much less than 1 radian (often cited as less than 0.1 radians, about 6°), or
{\displaystyle \theta \ll 1,}
then substituting for sin θ into Eq. 1 using the small-angle approximation,
{\displaystyle \sin \theta \approx \theta ,}
yields the equation for a harmonic oscillator,
{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+{\frac {g}{\ell }}\theta =0.}
The error due to the approximation is of order θ3 (from the Taylor expansion for sin θ).
Let the starting angle be θ0. If it is assumed that the pendulum is released with zero angular velocity, the solution becomes
{\displaystyle \theta (t)=\theta _{0}\cos \left({\sqrt {\frac {g}{\ell }}}\,t\right).}
The motion is simple harmonic motion where θ0 is the amplitude of the oscillation (that is, the maximum angle between the rod of the pendulum and the vertical). The corresponding approximate period of the motion is then
{\displaystyle T_{0}=2\pi {\sqrt {\frac {\ell }{g}}},}
which is known as Christiaan Huygens's law for the period. Note that under the small-angle approximation, the period is independent of the amplitude θ0; this is the property of isochronism that Galileo discovered.
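In the small-angle regime the motion is fully determined by ℓ and g; a minimal numerical sketch (Python, our choice):

```python
import math

def small_angle_period(length, g=9.80665):
    """Huygens's law: T0 = 2*pi*sqrt(l/g), independent of amplitude."""
    return 2 * math.pi * math.sqrt(length / g)

def theta(t, theta0, length, g=9.80665):
    """Small-angle solution for a pendulum released from rest at angle theta0."""
    return theta0 * math.cos(math.sqrt(g / length) * t)

T0 = small_angle_period(1.0)   # 1 m pendulum on Earth, about 2.006 s
# After one full period the bob is back at its starting angle:
assert abs(theta(T0, 0.1, 1.0) - 0.1) < 1e-9
```

Isochronism shows up here as the absence of `theta0` from `small_angle_period`: doubling the amplitude leaves the computed period unchanged.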
=== Rule of thumb for pendulum length ===
{\displaystyle T_{0}=2\pi {\sqrt {\frac {\ell }{g}}}}
gives
{\displaystyle \ell ={\frac {g}{\pi ^{2}}}{\frac {T_{0}^{2}}{4}}.}
If SI units are used (i.e. measuring in metres and seconds), and assuming the measurement takes place on the Earth's surface, then g ≈ 9.81 m/s², and g/π² ≈ 1 m/s² (0.994 to 3 decimal places).
Therefore, relatively reasonable approximations for the length and period are:
{\displaystyle {\begin{aligned}\ell &\approx {\frac {T_{0}^{2}}{4}},\\T_{0}&\approx 2{\sqrt {\ell }}\end{aligned}}}
where T0 is the number of seconds between two beats (one beat for each side of the swing), and ℓ is measured in metres.
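As a quick check of the rule of thumb (a sketch in Python, our choice): a "seconds pendulum", which beats once per second so that T0 = 2 s, should come out roughly one metre long.

```python
import math

g = 9.80665  # m/s^2, standard gravity

def length_exact(T0):
    """l = (g / pi^2) * T0^2 / 4, from inverting Huygens's law."""
    return (g / math.pi ** 2) * T0 ** 2 / 4

def length_rule_of_thumb(T0):
    """Approximation using g / pi^2 ~ 1 m/s^2."""
    return T0 ** 2 / 4

exact = length_exact(2.0)          # ~0.9936 m
rough = length_rule_of_thumb(2.0)  # exactly 1.0 m under the approximation
```

The rule of thumb is off by well under 1% at the Earth's surface, which is why a metre-long pendulum is a convenient two-second timekeeper.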
== Arbitrary-amplitude period ==
For amplitudes beyond the small angle approximation, one can compute the exact period by first inverting the equation for the angular velocity obtained from the energy method (Eq. 2),
{\displaystyle {\frac {dt}{d\theta }}={\sqrt {\frac {\ell }{2g}}}{\frac {1}{\sqrt {\cos \theta -\cos \theta _{0}}}}}
and then integrating over one complete cycle,
{\displaystyle T=t(\theta _{0}\rightarrow 0\rightarrow -\theta _{0}\rightarrow 0\rightarrow \theta _{0}),}
or twice the half-cycle
{\displaystyle T=2t(\theta _{0}\rightarrow 0\rightarrow -\theta _{0}),}
or four times the quarter-cycle
{\displaystyle T=4t(\theta _{0}\rightarrow 0),}
which leads to
{\displaystyle T=4{\sqrt {\frac {\ell }{2g}}}\int _{0}^{\theta _{0}}{\frac {d\theta }{\sqrt {\cos \theta -\cos \theta _{0}}}}.}
Note that this integral diverges as θ0 approaches the vertical
{\displaystyle \lim _{\theta _{0}\to \pi }T=\infty ,}
so that a pendulum with just the right energy to go vertical will never actually get there. (Conversely, a pendulum close to its maximum can take an arbitrarily long time to fall down.)
This integral can be rewritten in terms of elliptic integrals as
{\displaystyle T=4{\sqrt {\frac {\ell }{g}}}F\left({\frac {\pi }{2}},\sin {\frac {\theta _{0}}{2}}\right)}
where F is the incomplete elliptic integral of the first kind defined by
{\displaystyle F(\varphi ,k)=\int _{0}^{\varphi }{\frac {du}{\sqrt {1-k^{2}\sin ^{2}u}}}\,.}
Or more concisely by the substitution
{\displaystyle \sin {u}={\frac {\sin {\frac {\theta }{2}}}{\sin {\frac {\theta _{0}}{2}}}}}
expressing θ in terms of u, which gives (Eq. 3)
{\displaystyle T=4{\sqrt {\frac {\ell }{g}}}K\left(\sin {\frac {\theta _{0}}{2}}\right).}
Here K is the complete elliptic integral of the first kind defined by
{\displaystyle K(k)=F\left({\frac {\pi }{2}},k\right)=\int _{0}^{\frac {\pi }{2}}{\frac {du}{\sqrt {1-k^{2}\sin ^{2}u}}}\,.}
For comparison of the approximation to the full solution, the period of a pendulum of length 1 m on Earth (g = 9.80665 m/s²) at an initial angle of 10 degrees is
{\displaystyle 4{\sqrt {\frac {1{\text{ m}}}{g}}}\ K\left(\sin {\frac {10^{\circ }}{2}}\right)\approx 2.0102{\text{ s}}.}
The linear approximation gives
{\displaystyle 2\pi {\sqrt {\frac {1{\text{ m}}}{g}}}\approx 2.0064{\text{ s}}.}
The difference between the two values, less than 0.2%, is much less than that caused by the variation of g with geographical location.
From here there are many ways to proceed to calculate the elliptic integral.
=== Legendre polynomial solution for the elliptic integral ===
Given Eq. 3 and the Legendre polynomial solution for the elliptic integral:
{\displaystyle K(k)={\frac {\pi }{2}}\sum _{n=0}^{\infty }\left({\frac {(2n-1)!!}{(2n)!!}}k^{n}\right)^{2}}
where n!! denotes the double factorial, an exact solution to the period of a simple pendulum is:
{\displaystyle {\begin{alignedat}{2}T&=2\pi {\sqrt {\frac {\ell }{g}}}\left(1+\left({\frac {1}{2}}\right)^{2}\sin ^{2}{\frac {\theta _{0}}{2}}+\left({\frac {1\cdot 3}{2\cdot 4}}\right)^{2}\sin ^{4}{\frac {\theta _{0}}{2}}+\left({\frac {1\cdot 3\cdot 5}{2\cdot 4\cdot 6}}\right)^{2}\sin ^{6}{\frac {\theta _{0}}{2}}+\cdots \right)\\&=2\pi {\sqrt {\frac {\ell }{g}}}\cdot \sum _{n=0}^{\infty }\left(\left({\frac {(2n)!}{(2^{n}\cdot n!)^{2}}}\right)^{2}\cdot \sin ^{2n}{\frac {\theta _{0}}{2}}\right).\end{alignedat}}}
Figure 4 shows the relative errors using the power series. T0 is the linear approximation, and T2 to T10 include respectively the terms up to the 2nd to the 10th powers.
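A truncated version of this series is straightforward to evaluate numerically; a sketch (Python, our choice) reproducing the 1 m, 10° example above:

```python
import math

def pendulum_period_series(length, theta0, g=9.80665, terms=10):
    """Period via the Legendre-series expansion of K(sin(theta0/2)).

    Each series coefficient is ((2n-1)!!/(2n)!!)^2; the ratio of double
    factorials is built up incrementally as n increases.
    """
    k2 = math.sin(theta0 / 2) ** 2   # sin^2(theta0/2)
    coeff = 1.0                      # ((2n-1)!!/(2n)!!)^2, starting at n = 0
    total = 0.0
    for n in range(terms):
        if n > 0:
            coeff *= ((2 * n - 1) / (2 * n)) ** 2
        total += coeff * k2 ** n
    return 2 * math.pi * math.sqrt(length / g) * total

T = pendulum_period_series(1.0, math.radians(10))
# Matches the elliptic-integral value quoted above, ~2.0102 s
```

At 10° the series converges after only a couple of terms, since successive terms are suppressed by powers of sin²(θ0/2) ≈ 0.0076.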
=== Power series solution for the elliptic integral ===
Another formulation of the above solution can be found if the following Maclaurin series:
{\displaystyle \sin {\frac {\theta _{0}}{2}}={\frac {1}{2}}\theta _{0}-{\frac {1}{48}}\theta _{0}^{3}+{\frac {1}{3\,840}}\theta _{0}^{5}-{\frac {1}{645\,120}}\theta _{0}^{7}+\cdots .}
is used in the Legendre polynomial solution above.
The resulting power series is:
{\displaystyle T=2\pi {\sqrt {\frac {\ell }{g}}}\left(1+{\frac {1}{16}}\theta _{0}^{2}+{\frac {11}{3\,072}}\theta _{0}^{4}+{\frac {173}{737\,280}}\theta _{0}^{6}+{\frac {22\,931}{1\,321\,205\,760}}\theta _{0}^{8}+{\frac {1\,319\,183}{951\,268\,147\,200}}\theta _{0}^{10}+{\frac {233\,526\,463}{2\,009\,078\,326\,886\,400}}\theta _{0}^{12}+\cdots \right),}
with more coefficients available in the On-Line Encyclopedia of Integer Sequences: OEIS: A223067 gives the numerators and OEIS: A223068 the denominators.
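The leading coefficients of this series can be checked directly; a small sketch (Python, our choice) using only the terms quoted above:

```python
import math

# Coefficients of theta0^(2n) in T/T0, n = 0..3, as listed above
COEFFS = [1, 1 / 16, 11 / 3072, 173 / 737280]

def period_power_series(length, theta0, g=9.80665):
    """Period from the power series in theta0, truncated at theta0^6."""
    T0 = 2 * math.pi * math.sqrt(length / g)
    factor = sum(c * theta0 ** (2 * n) for n, c in enumerate(COEFFS))
    return T0 * factor

T = period_power_series(1.0, math.radians(10))
# Agrees with the elliptic-integral value ~2.0102 s for small amplitudes
```

Because the expansion variable is now θ0 itself rather than sin(θ0/2), this form is convenient when the amplitude is already known in radians.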
=== Arithmetic-geometric mean solution for elliptic integral ===
Given Eq. 3 and the arithmetic–geometric mean solution of the elliptic integral:
{\displaystyle K(k)={\frac {\pi }{2M(1-k,1+k)}},}
where M(x,y) is the arithmetic-geometric mean of x and y.
This yields an alternative and faster-converging formula for the period:
{\displaystyle T={\frac {2\pi }{M\left(1,\cos {\frac {\theta _{0}}{2}}\right)}}{\sqrt {\frac {\ell }{g}}}.}
The first iteration of this algorithm gives
{\displaystyle T_{1}={\frac {2T_{0}}{1+\cos {\frac {\theta _{0}}{2}}}}.}
This approximation has a relative error of less than 1% for angles up to 96.11 degrees. Since
{\textstyle {\frac {1}{2}}\left(1+\cos \left({\frac {\theta _{0}}{2}}\right)\right)=\cos ^{2}{\frac {\theta _{0}}{4}},}
the expression can be written more concisely as
{\displaystyle T_{1}=T_{0}\sec ^{2}{\frac {\theta _{0}}{4}}.}
The second-order expansion of {\displaystyle \sec ^{2}(\theta _{0}/4)} reduces to
{\textstyle T\approx T_{0}\left(1+{\frac {\theta _{0}^{2}}{16}}\right).}
A second iteration of this algorithm gives
{\displaystyle T_{2}={\frac {4T_{0}}{1+\cos {\frac {\theta _{0}}{2}}+2{\sqrt {\cos {\frac {\theta _{0}}{2}}}}}}={\frac {4T_{0}}{\left(1+{\sqrt {\cos {\frac {\theta _{0}}{2}}}}\right)^{2}}}.}
This second approximation has a relative error of less than 1% for angles up to 163.10 degrees.
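The arithmetic–geometric mean itself converges quadratically, so a handful of iterations already give machine precision; a sketch (Python, our choice):

```python
import math

def agm(x, y, tol=1e-15):
    """Arithmetic-geometric mean M(x, y) by repeated averaging."""
    while abs(x - y) > tol * max(abs(x), abs(y)):
        x, y = (x + y) / 2, math.sqrt(x * y)
    return (x + y) / 2

def pendulum_period_agm(length, theta0, g=9.80665):
    """Exact period T = 2*pi / M(1, cos(theta0/2)) * sqrt(l/g)."""
    return 2 * math.pi / agm(1.0, math.cos(theta0 / 2)) * math.sqrt(length / g)

T = pendulum_period_agm(1.0, math.radians(10))
# Reproduces the exact elliptic-integral value, ~2.0102 s
```

The closed-form approximations T1 and T2 above are exactly the results of stopping this iteration after one and two steps respectively.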
== Approximate formulae for the nonlinear pendulum period ==
Though the exact period T can be determined, for any finite amplitude θ0 < π rad, by evaluating the corresponding complete elliptic integral K(k), where k ≡ sin(θ0/2), this is often avoided in applications because it is not possible to express this integral in a closed form in terms of elementary functions. This has made way for research on simple approximate formulae for the increase of the pendulum period with amplitude (useful in introductory physics labs, classical mechanics, electromagnetism, acoustics, electronics, superconductivity, etc.). The approximate formulae found by different authors can be classified as follows:
‘Not so large-angle’ formulae, i.e. those yielding good estimates for amplitudes below π/2 rad (a natural limit for a bob on the end of a flexible string), though the deviation with respect to the exact period increases monotonically with amplitude, making them unsuitable for amplitudes near π rad. One of the simplest formulae found in the literature is the following one by Lima (2006): {\textstyle T\approx -\,T_{0}\,{\frac {\ln {a}}{1-a}}}, where {\displaystyle a\equiv \cos {(\theta _{0}/2)}}.
‘Very large-angle’ formulae, i.e. those which approximate the exact period asymptotically for amplitudes near π rad, with an error that increases monotonically for smaller amplitudes (i.e., unsuitable for small amplitudes). One of the better such formulae is that by Cromer, namely: {\textstyle T\approx {\frac {2}{\pi }}\,T_{0}\,\ln {(4/a)}}.
Of course, the increase of T with amplitude is more apparent when π/2 < θ0 < π, as has been observed in many experiments using either a rigid rod or a disc. As accurate timers and sensors are currently available even in introductory physics labs, the experimental errors found in ‘very large-angle’ experiments are already small enough for a comparison with the exact period, and a very good agreement between theory and experiments in which friction is negligible has been found. Since this activity has been encouraged by many instructors, a simple approximate formula for the pendulum period valid for all possible amplitudes, to which experimental data could be compared, was sought. In 2008, Lima derived a weighted-average formula with this characteristic:
{\displaystyle T\approx {\frac {r\,a^{2}\,T_{\text{Lima}}+k^{2}\,T_{\text{Cromer}}}{r\,a^{2}+k^{2}}},}
where r = 7.17, and TLima and TCromer are the estimates from the two formulae above; this presents a maximum error of only 0.6% (at θ0 = 95°).
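These approximations are simple enough to compare side by side; a sketch (Python, our choice; the AGM-based exact period serves as the reference):

```python
import math

def agm(x, y):
    """Arithmetic-geometric mean, used here as the exact reference."""
    for _ in range(30):  # quadratic convergence; 30 steps is overkill
        x, y = (x + y) / 2, math.sqrt(x * y)
    return x

def exact_factor(theta0):
    """T/T0 = 1/M(1, cos(theta0/2)), from the AGM formula for the period."""
    return 1 / agm(1.0, math.cos(theta0 / 2))

def lima_factor(theta0):
    """Lima (2006): T/T0 = -ln(a) / (1 - a), with a = cos(theta0/2)."""
    a = math.cos(theta0 / 2)
    return -math.log(a) / (1 - a)

def cromer_factor(theta0):
    """Cromer: T/T0 = (2/pi) * ln(4/a), asymptotic near theta0 = pi."""
    a = math.cos(theta0 / 2)
    return (2 / math.pi) * math.log(4 / a)

theta0 = math.pi / 2
# At 90 degrees Lima's 'not so large-angle' formula stays within ~0.3%
err = abs(lima_factor(theta0) - exact_factor(theta0)) / exact_factor(theta0)
```

Repeating the comparison across amplitudes shows the crossover the text describes: Lima's formula wins below roughly π/2 rad, Cromer's near π rad, and the weighted average interpolates between them.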
== Arbitrary-amplitude angular displacement ==
The Fourier series expansion of θ(t) is given by
{\displaystyle \theta (t)=8\sum _{n\geq 1{\text{ odd}}}{\frac {(-1)^{\left\lfloor {n/2}\right\rfloor }}{n}}{\frac {q^{n/2}}{1+q^{n}}}\cos(n\omega t)}
where q is the elliptic nome,
{\displaystyle q=\exp \left({-\pi K{\bigl (}{\sqrt {\textstyle 1-k^{2}}}{\bigr )}{\big /}K(k)}\right),}
with k = sin(θ0/2), and ω = 2π/T the angular frequency.
If one defines
{\displaystyle \varepsilon ={\frac {1}{2}}\cdot {\frac {1-{\sqrt {\cos(\theta _{0}/2)}}}{1+{\sqrt {\cos(\theta _{0}/2)}}}}}
then q can be approximated using the expansion
{\displaystyle q=\varepsilon +2\varepsilon ^{5}+15\varepsilon ^{9}+150\varepsilon ^{13}+1707\varepsilon ^{17}+20910\varepsilon ^{21}+\cdots }
(see OEIS: A002103). Note that ε < 1/2 for θ0 < π, thus the approximation is applicable even for large amplitudes.
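The nome expansion and the Fourier series together give a practical way to evaluate θ(t); a sketch (Python, our choice), checking that the series reproduces the initial angle at t = 0:

```python
import math

def nome(theta0):
    """Elliptic nome q from the truncated epsilon expansion quoted above."""
    c = math.sqrt(math.cos(theta0 / 2))
    eps = 0.5 * (1 - c) / (1 + c)
    return eps + 2 * eps**5 + 15 * eps**9 + 150 * eps**13

def theta_series(t, theta0, omega, n_terms=25):
    """Fourier series for theta(t): odd harmonics n = 1, 3, 5, ... only."""
    q = nome(theta0)
    total = 0.0
    for n in range(1, 2 * n_terms, 2):
        sign = (-1) ** (n // 2)
        total += (sign / n) * q ** (n / 2) / (1 + q ** n) * math.cos(n * omega * t)
    return 8 * total

theta0 = 1.0  # 1 rad amplitude, about 57 degrees
# The pendulum starts at its maximum angle, so theta(0) must equal theta0
residual = abs(theta_series(0.0, theta0, omega=1.0) - theta0)
```

For this amplitude the nome is q ≈ 0.016, so the harmonics decay like q^(n/2) and the series converges after only a few terms; `omega` would in practice be 2π/T with T from the elliptic-integral period.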
Equivalently, the angle can be given in terms of the Jacobi elliptic function cd with modulus k:
{\displaystyle \theta (t)=2\arcsin \left(k\operatorname {cd} \left({\sqrt {\frac {g}{\ell }}}t;k\right)\right),\quad k=\sin {\frac {\theta _{0}}{2}}.}
For small x, sin x ≈ x, arcsin x ≈ x and cd(t; 0) = cos t, so the solution is well-approximated by the solution given in Pendulum (mechanics)#Small-angle approximation.
== Examples ==
The animations below depict the motion of a simple (frictionless) pendulum with increasing amounts of initial displacement of the bob, or equivalently increasing initial velocity. The small graph above each pendulum is the corresponding phase plane diagram; the horizontal axis is displacement and the vertical axis is velocity. With a large enough initial velocity the pendulum does not oscillate back and forth but rotates completely around the pivot.
== Compound pendulum ==
A compound pendulum (or physical pendulum) is one where the rod is not massless, and may have extended size; that is, an arbitrarily shaped rigid body swinging about a pivot $O$. In this case the pendulum's period depends on its moment of inertia $I_{O}$ around the pivot point.
The equation of torque gives:

$$\tau =I\alpha $$

where $\alpha$ is the angular acceleration and $\tau$ is the torque.
The torque is generated by gravity so:

$$\tau =-mgr_{\oplus }\sin \theta $$

where $m$ is the total mass of the rigid body (rod and bob), $r_{\oplus }$ is the distance from the pivot point to the system's centre of mass, and $\theta$ is the angle from the vertical.
Hence, under the small-angle approximation $\sin \theta \approx \theta$ (or equivalently when $\theta _{\mathrm {max} }\ll 1$),

$$\alpha ={\ddot {\theta }}=-{\frac {mgr_{\oplus }}{I_{O}}}\sin \theta \approx -{\frac {mgr_{\oplus }}{I_{O}}}\theta $$

where $I_{O}$ is the moment of inertia of the body about the pivot point $O$.
The expression for $\alpha$ is of the same form as for the conventional simple pendulum and gives a period of

$$T=2\pi {\sqrt {\frac {I_{O}}{mgr_{\oplus }}}}$$

and a frequency of

$$f={\frac {1}{T}}={\frac {1}{2\pi }}{\sqrt {\frac {mgr_{\oplus }}{I_{O}}}}$$
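The period formula is easy to apply once $I_{O}$ and $r_{\oplus}$ are known. A minimal sketch (the thin-ring example and all numbers are illustrative assumptions, not from the text): a ring of mass $m$ and radius $R$ pivoted on its rim has $I_{O}=2mR^{2}$ and $r_{\oplus}=R$, so $T$ reduces to $2\pi\sqrt{2R/g}$:

```python
import math

g = 9.81

def period(I_O, m, r):
    # T = 2*pi*sqrt(I_O / (m g r)) for a compound pendulum
    return 2 * math.pi * math.sqrt(I_O / (m * g * r))

# Thin ring pivoted on its rim: I_O = 2 m R^2 about the pivot, r = R
m, R = 0.4, 0.25
T = period(2 * m * R**2, m, R)        # algebraically 2*pi*sqrt(2R/g)
f = 1 / T                             # frequency is the reciprocal
```

Note that the mass cancels, exactly as in the simple pendulum.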
If the initial angle is taken into consideration (for large amplitudes), then the expression for $\alpha$ becomes:

$$\alpha ={\ddot {\theta }}=-{\frac {mgr_{\oplus }}{I_{O}}}\sin \theta $$

and gives a period of:

$$T=4\operatorname {K} \left(\sin ^{2}{\frac {\theta _{\mathrm {max} }}{2}}\right){\sqrt {\frac {I_{O}}{mgr_{\oplus }}}}$$

where $\theta _{\mathrm {max} }$ is the maximum angle of oscillation (with respect to the vertical) and $\operatorname {K} (k)$ is the complete elliptic integral of the first kind.
An important concept is the equivalent length, $\ell ^{\mathrm {eq} }$, the length of a simple pendulum that has the same angular frequency $\omega _{0}$ as the compound pendulum:

$${\omega _{0}}^{2}={\frac {g}{\ell ^{\mathrm {eq} }}}:={\frac {mgr_{\oplus }}{I_{O}}}\implies \ell ^{\mathrm {eq} }={\frac {I_{O}}{mr_{\oplus }}}$$
Consider the following cases:

The simple pendulum is the special case where all the mass is located at the bob swinging at a distance $\ell$ from the pivot. Thus, $r_{\oplus }=\ell$ and $I_{O}=m\ell ^{2}$, so the expression reduces to:

$${\omega _{0}}^{2}={\frac {mgr_{\oplus }}{I_{O}}}={\frac {mg\ell }{m\ell ^{2}}}={\frac {g}{\ell }}$$

Notice $\ell ^{\mathrm {eq} }=\ell$, as expected (the definition of equivalent length).
A homogeneous rod of mass $m$ and length $\ell$ swinging from its end has $r_{\oplus }={\frac {1}{2}}\ell$ and $I_{O}={\frac {1}{3}}m\ell ^{2}$, so the expression reduces to:

$${\omega _{0}}^{2}={\frac {mgr_{\oplus }}{I_{O}}}={\frac {mg\,{\frac {1}{2}}\ell }{{\frac {1}{3}}m\ell ^{2}}}={\frac {g}{{\frac {2}{3}}\ell }}$$

Notice $\ell ^{\mathrm {eq} }={\frac {2}{3}}\ell$: a homogeneous rod oscillates as if it were a simple pendulum of two-thirds its length.
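The rod case can be verified directly from the definitions above. A minimal sketch (the numerical values are illustrative assumptions): compute $\ell^{\mathrm{eq}}=I_{O}/(mr_{\oplus})$ for a uniform rod and confirm it equals $2\ell/3$, and that the compound pendulum shares its angular frequency with a simple pendulum of that length:

```python
import math

# Uniform rod pivoted at one end: r = l/2 and I_O = m l^2 / 3
g, m, l = 9.81, 1.0, 0.9
r, I_O = l / 2, m * l**2 / 3

l_eq = I_O / (m * r)                      # equivalent simple-pendulum length
w_compound = math.sqrt(m * g * r / I_O)   # omega_0 of the compound pendulum
w_simple = math.sqrt(g / l_eq)            # omega_0 of the equivalent simple one
```

Both frequencies coincide by construction of the equivalent length.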
A heavy simple pendulum: the combination of a homogeneous rod of mass $m_{\mathrm {rod} }$ and length $\ell$ swinging from its end, and a bob $m_{\mathrm {bob} }$ at the other end. The system has a total mass of $m_{\mathrm {bob} }+m_{\mathrm {rod} }$, the other parameters being $mr_{\oplus }=m_{\mathrm {bob} }\ell +m_{\mathrm {rod} }{\frac {\ell }{2}}$ (by definition of centre of mass) and $I_{O}=m_{\mathrm {bob} }\ell ^{2}+{\frac {1}{3}}m_{\mathrm {rod} }\ell ^{2}$, so the expression reduces to:
$${\omega _{0}}^{2}={\frac {mgr_{\oplus }}{I_{O}}}={\frac {\left(m_{\mathrm {bob} }\ell +m_{\mathrm {rod} }{\frac {\ell }{2}}\right)g}{m_{\mathrm {bob} }\ell ^{2}+{\frac {1}{3}}m_{\mathrm {rod} }\ell ^{2}}}={\frac {g}{\ell }}\,{\frac {m_{\mathrm {bob} }+{\frac {m_{\mathrm {rod} }}{2}}}{m_{\mathrm {bob} }+{\frac {m_{\mathrm {rod} }}{3}}}}={\frac {g}{\ell }}\,{\frac {1+{\frac {m_{\mathrm {rod} }}{2m_{\mathrm {bob} }}}}{1+{\frac {m_{\mathrm {rod} }}{3m_{\mathrm {bob} }}}}}$$
where

$$\ell ^{\mathrm {eq} }=\ell \,{\frac {1+{\frac {m_{\mathrm {rod} }}{3m_{\mathrm {bob} }}}}{1+{\frac {m_{\mathrm {rod} }}{2m_{\mathrm {bob} }}}}}.$$

Notice that these formulae reduce to the two previous cases by taking the mass of the rod or of the bob to be zero, respectively. Also notice that the formula does not depend on the masses of the bob and the rod separately, but only on their ratio ${\frac {m_{\mathrm {rod} }}{m_{\mathrm {bob} }}}$. An approximation can be made for ${\frac {m_{\mathrm {rod} }}{m_{\mathrm {bob} }}}\ll 1$:
$${\omega _{0}}^{2}\approx {\frac {g}{\ell }}\left(1+{\frac {1}{6}}{\frac {m_{\mathrm {rod} }}{m_{\mathrm {bob} }}}+\cdots \right)$$

Notice how similar it is to the angular frequency in a spring–mass system with effective mass.
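The first-order expansion follows from $(1+r/2)/(1+r/3)\approx 1+r/6$ for a small mass ratio $r$. A minimal sketch comparing the exact and approximate $\omega_0^2$ (the parameter values are illustrative assumptions):

```python
# Exact omega_0^2 for the rod-plus-bob pendulum versus its first-order
# expansion g/l * (1 + ratio/6) in the rod-to-bob mass ratio.
g, l = 9.81, 1.0

def w2_exact(ratio):                  # ratio = m_rod / m_bob
    return g / l * (1 + ratio / 2) / (1 + ratio / 3)

def w2_approx(ratio):
    return g / l * (1 + ratio / 6)

ratio = 0.01                          # a light rod
err = abs(w2_exact(ratio) - w2_approx(ratio))
```

The discrepancy is of second order in the ratio, so for a rod one hundredth of the bob's mass it is on the order of $10^{-5}$.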
== Damped, driven pendulum ==
The above discussion focuses on a pendulum bob only acted upon by the force of gravity. Suppose a damping force, e.g. air resistance, as well as a sinusoidal driving force acts on the body. This system is a damped, driven oscillator, and is chaotic.
Equation (1) can be written as

$$ml^{2}{\frac {d^{2}\theta }{dt^{2}}}=-mgl\sin \theta $$

(see the torque derivation of Equation (1) above).
A damping term and forcing term can be added to the right hand side to get

$$ml^{2}{\frac {d^{2}\theta }{dt^{2}}}=-mgl\sin \theta -b{\frac {d\theta }{dt}}+a\cos(\Omega t)$$

where the damping is assumed to be directly proportional to the angular velocity (this is true for low-speed air resistance; see also Drag (physics)).
$a$ and $b$ are constants defining the amplitude of forcing and the degree of damping respectively, and $\Omega$ is the angular frequency of the driving oscillations.

Dividing through by $ml^{2}$:
$${\frac {d^{2}\theta }{dt^{2}}}+{\frac {b}{ml^{2}}}{\frac {d\theta }{dt}}+{\frac {g}{l}}\sin \theta -{\frac {a}{ml^{2}}}\cos(\Omega t)=0.$$
For a physical pendulum:

$${\frac {d^{2}\theta }{dt^{2}}}+{\frac {b}{I}}{\frac {d\theta }{dt}}+{\frac {mgr_{\oplus }}{I}}\sin \theta -{\frac {a}{I}}\cos(\Omega t)=0.$$
This equation exhibits chaotic behaviour. The exact motion of this pendulum can only be found numerically and is highly dependent on initial conditions, e.g. the initial velocity and the starting amplitude. However, the small angle approximation outlined above can still be used under the required conditions to give an approximate analytical solution.
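A numerical integration along these lines is straightforward. A sketch under illustrative assumptions (all parameter values are invented for the demonstration): integrate the damped equation with classical RK4 and, with the driving switched off ($a=0$), observe that the damping term drains the total mechanical energy:

```python
import math

# theta'' + (b/(m l^2)) theta' + (g/l) sin(theta) - (a/(m l^2)) cos(Omega t) = 0
m, l, g, b, a, Omega = 1.0, 1.0, 9.81, 0.5, 0.0, 2.0

def deriv(t, th, w):
    return w, (-(g / l) * math.sin(th) - (b / (m * l * l)) * w
               + (a / (m * l * l)) * math.cos(Omega * t))

def energy(th, w):
    # kinetic + gravitational potential energy of the bob
    return 0.5 * m * l * l * w * w + m * g * l * (1 - math.cos(th))

dt, t, th, w = 1e-3, 0.0, 2.0, 0.0
E0 = energy(th, w)
for _ in range(20000):                      # integrate 20 seconds with RK4
    k1 = deriv(t, th, w)
    k2 = deriv(t + dt / 2, th + dt / 2 * k1[0], w + dt / 2 * k1[1])
    k3 = deriv(t + dt / 2, th + dt / 2 * k2[0], w + dt / 2 * k2[1])
    k4 = deriv(t + dt, th + dt * k3[0], w + dt * k3[1])
    th += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    w += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    t += dt
```

With a nonzero driving amplitude the same loop produces the driven trajectories whose sensitivity to initial conditions signals chaos; that regime can only be explored numerically, as the text notes.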
== Physical interpretation of the imaginary period ==
The Jacobian elliptic function that expresses the position of a pendulum as a function of time is a doubly periodic function with a real period and an imaginary period. The real period is, of course, the time it takes the pendulum to go through one full cycle. Paul Appell pointed out a physical interpretation of the imaginary period: if θ0 is the maximum angle of one pendulum and 180° − θ0 is the maximum angle of another, then the real period of each is the magnitude of the imaginary period of the other.
== Coupled pendula ==
Coupled pendulums can affect each other's motion, either through a direct connection (such as a spring connecting the bobs) or through motions in a supporting structure (such as a tabletop). The equations of motion for two identical simple pendulums coupled by a spring connecting the bobs can be obtained using Lagrangian mechanics.
The kinetic energy of the system is:

$$E_{\text{K}}={\frac {1}{2}}mL^{2}\left({\dot {\theta }}_{1}^{2}+{\dot {\theta }}_{2}^{2}\right)$$

where $m$ is the mass of the bobs, $L$ is the length of the strings, and $\theta _{1}$, $\theta _{2}$ are the angular displacements of the two bobs from equilibrium.
The potential energy of the system is:

$$E_{\text{p}}=mgL(2-\cos \theta _{1}-\cos \theta _{2})+{\frac {1}{2}}kL^{2}(\theta _{2}-\theta _{1})^{2}$$

where $g$ is the gravitational acceleration, and $k$ is the spring constant. The displacement $L(\theta _{2}-\theta _{1})$ of the spring from its equilibrium position assumes the small angle approximation.
The Lagrangian is then

$${\mathcal {L}}={\frac {1}{2}}mL^{2}\left({\dot {\theta }}_{1}^{2}+{\dot {\theta }}_{2}^{2}\right)-mgL(2-\cos \theta _{1}-\cos \theta _{2})-{\frac {1}{2}}kL^{2}(\theta _{2}-\theta _{1})^{2}$$
which leads to the following set of coupled differential equations:

$${\begin{aligned}{\ddot {\theta }}_{1}+{\frac {g}{L}}\sin \theta _{1}+{\frac {k}{m}}(\theta _{1}-\theta _{2})&=0\\{\ddot {\theta }}_{2}+{\frac {g}{L}}\sin \theta _{2}-{\frac {k}{m}}(\theta _{1}-\theta _{2})&=0\end{aligned}}$$
Adding and subtracting these two equations in turn, and applying the small angle approximation, gives two harmonic oscillator equations in the variables $\theta _{1}+\theta _{2}$ and $\theta _{1}-\theta _{2}$:

$${\begin{aligned}{\ddot {\theta }}_{1}+{\ddot {\theta }}_{2}+{\frac {g}{L}}(\theta _{1}+\theta _{2})&=0\\{\ddot {\theta }}_{1}-{\ddot {\theta }}_{2}+\left({\frac {g}{L}}+2{\frac {k}{m}}\right)(\theta _{1}-\theta _{2})&=0\end{aligned}}$$
with the corresponding solutions

$${\begin{aligned}\theta _{1}+\theta _{2}&=A\cos(\omega _{1}t+\alpha )\\\theta _{1}-\theta _{2}&=B\cos(\omega _{2}t+\beta )\end{aligned}}$$

where

$$\omega _{1}={\sqrt {\frac {g}{L}}},\qquad \omega _{2}={\sqrt {{\frac {g}{L}}+2{\frac {k}{m}}}}$$

and $A$, $B$, $\alpha$, $\beta$ are constants of integration.
Expressing the solutions in terms of $\theta _{1}$ and $\theta _{2}$ alone:

$${\begin{aligned}\theta _{1}&={\frac {1}{2}}A\cos(\omega _{1}t+\alpha )+{\frac {1}{2}}B\cos(\omega _{2}t+\beta )\\\theta _{2}&={\frac {1}{2}}A\cos(\omega _{1}t+\alpha )-{\frac {1}{2}}B\cos(\omega _{2}t+\beta )\end{aligned}}$$
If the bobs are not given an initial push, then the condition ${\dot {\theta }}_{1}(0)={\dot {\theta }}_{2}(0)=0$ requires $\alpha =\beta =0$, which gives (after some rearranging):

$${\begin{aligned}A&=\theta _{1}(0)+\theta _{2}(0)\\B&=\theta _{1}(0)-\theta _{2}(0)\end{aligned}}$$
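The normal-mode solution can be checked against the linearized equations of motion by finite differences. A sketch under illustrative assumptions (all parameter values and the test time are invented): build $\theta_1(t)$, $\theta_2(t)$ from the constants $A$, $B$ above, approximate $\ddot\theta_1$ with a central difference, and confirm the residual of $\ddot\theta_1+(g/L)\theta_1+(k/m)(\theta_1-\theta_2)=0$ is tiny:

```python
import math

g, L, k, m = 9.81, 1.0, 3.0, 1.0
w1 = math.sqrt(g / L)
w2 = math.sqrt(g / L + 2 * k / m)
th10, th20 = 0.2, 0.0            # small initial angles, released from rest
A, B = th10 + th20, th10 - th20  # constants of integration from above

def th1(t):
    return 0.5 * A * math.cos(w1 * t) + 0.5 * B * math.cos(w2 * t)

def th2(t):
    return 0.5 * A * math.cos(w1 * t) - 0.5 * B * math.cos(w2 * t)

# Central second difference approximates the second time derivative
h, t = 1e-4, 0.73                 # arbitrary test time
dd1 = (th1(t + h) - 2 * th1(t) + th1(t - h)) / h**2
residual = dd1 + (g / L) * th1(t) + (k / m) * (th1(t) - th2(t))
```

Starting one bob displaced and the other at rest excites both modes equally ($A = B$), which produces the familiar beating exchange of energy between the pendulums.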
== See also ==
Harmonograph
Conical pendulum
Cycloidal pendulum
Double pendulum
Inverted pendulum
Kapitza's pendulum
Rayleigh–Lorentz pendulum
Elastic pendulum
Mathieu function
Pendulum equations (software)
== References ==
== Further reading ==
Baker, Gregory L.; Blackburn, James A. (2005). The Pendulum: A Physics Case Study (PDF). Oxford University Press.
Ochs, Karlheinz (2011). "A comprehensive analytical solution of the nonlinear pendulum". European Journal of Physics. 32 (2): 479–490. Bibcode:2011EJPh...32..479O. doi:10.1088/0143-0807/32/2/019. S2CID 53621685.
Sala, Kenneth L. (1989). "Transformations of the Jacobian Amplitude Function and its Calculation via the Arithmetic-Geometric Mean". SIAM J. Math. Anal. 20 (6): 1514–1528. doi:10.1137/0520100.
== External links ==
Mathworld article on Mathieu Function
In the calculus of variations and classical mechanics, the Euler–Lagrange equations are a system of second-order ordinary differential equations whose solutions are stationary points of the given action functional. The equations were discovered in the 1750s by Swiss mathematician Leonhard Euler and Italian mathematician Joseph-Louis Lagrange.
Because a differentiable functional is stationary at its local extrema, the Euler–Lagrange equation is useful for solving optimization problems in which, given some functional, one seeks the function minimizing or maximizing it. This is analogous to Fermat's theorem in calculus, stating that at any point where a differentiable function attains a local extremum its derivative is zero.
In Lagrangian mechanics, according to Hamilton's principle of stationary action, the evolution of a physical system is described by the solutions to the Euler equation for the action of the system. In this context the Euler equations are usually called Lagrange equations. In classical mechanics, this formulation is equivalent to Newton's laws of motion; indeed, the Euler–Lagrange equations produce the same equations as Newton's laws. This is particularly useful when analyzing systems whose force vectors are complicated. It has the advantage that it takes the same form in any system of generalized coordinates, and it is better suited to generalizations. In classical field theory there is an analogous equation to calculate the dynamics of a field.
== History ==
The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point.
Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. Their correspondence ultimately led to the calculus of variations, a term coined by Euler himself in 1766.
== Statement ==
Let $(X,L)$ be a real dynamical system with $n$ degrees of freedom. Here $X$ is the configuration space and $L=L(t,{\boldsymbol {q}}(t),{\boldsymbol {v}}(t))$ the Lagrangian, i.e. a smooth real-valued function such that ${\boldsymbol {q}}(t)\in X$, and ${\boldsymbol {v}}(t)$ is an $n$-dimensional "vector of speed". (For those familiar with differential geometry, $X$ is a smooth manifold, and $L:{\mathbb {R} }_{t}\times X\times TX\to {\mathbb {R} }$, where $TX$ is the tangent bundle of $X$.)
Let ${\cal {P}}(a,b,{\boldsymbol {x}}_{a},{\boldsymbol {x}}_{b})$ be the set of smooth paths ${\boldsymbol {q}}:[a,b]\to X$ for which ${\boldsymbol {q}}(a)={\boldsymbol {x}}_{a}$ and ${\boldsymbol {q}}(b)={\boldsymbol {x}}_{b}$. The action functional $S:{\cal {P}}(a,b,{\boldsymbol {x}}_{a},{\boldsymbol {x}}_{b})\to \mathbb {R}$ is defined via

$$S[{\boldsymbol {q}}]=\int _{a}^{b}L(t,{\boldsymbol {q}}(t),{\dot {\boldsymbol {q}}}(t))\,dt.$$
A path ${\boldsymbol {q}}\in {\cal {P}}(a,b,{\boldsymbol {x}}_{a},{\boldsymbol {x}}_{b})$ is a stationary point of $S$ if and only if it satisfies the Euler–Lagrange equations

$${\frac {\partial L}{\partial q^{i}}}(t,{\boldsymbol {q}}(t),{\dot {\boldsymbol {q}}}(t))-{\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}^{i}}}(t,{\boldsymbol {q}}(t),{\dot {\boldsymbol {q}}}(t))=0,\qquad i=1,\dots ,n.$$

Here, ${\dot {\boldsymbol {q}}}(t)$ is the time derivative of ${\boldsymbol {q}}(t)$. When we say stationary point, we mean a stationary point of $S$ with respect to any small perturbation in ${\boldsymbol {q}}$. See proofs below for more rigorous detail.
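Stationarity can be illustrated numerically. A sketch under illustrative assumptions (the Lagrangian, interval, and perturbation are all chosen for the demonstration): for $L=\tfrac12 \dot q^2-\tfrac12 q^2$ the Euler–Lagrange equation is $\ddot q=-q$, solved by $q(t)=\cos t$. Perturbing by $\varepsilon\,\eta(t)$ with $\eta$ vanishing at the endpoints should change the discretized action only at second order in $\varepsilon$:

```python
import math

N = 2000
h = 1.0 / N

def action(eps):
    # Midpoint-rule discretization of S = integral of (q'^2 - q^2)/2 on [0,1],
    # along q(t) = cos(t) + eps*sin(pi t); eta = sin(pi t) vanishes at t=0,1.
    S = 0.0
    for i in range(N):
        t = (i + 0.5) * h
        q = math.cos(t) + eps * math.sin(math.pi * t)
        qd = -math.sin(t) + eps * math.pi * math.cos(math.pi * t)
        S += (0.5 * qd * qd - 0.5 * q * q) * h
    return S

S0 = action(0.0)
d1 = action(1e-3) - S0     # should scale like eps^2
d2 = action(2e-3) - S0     # doubling eps should quadruple the change
```

The change in the action quadruples when $\varepsilon$ doubles, confirming the first variation vanishes along the extremal path.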
== Example ==
A standard example is finding the real-valued function y(x) on the interval [a, b], such that y(a) = c and y(b) = d, for which the path length along the curve traced by y is as short as possible.
$$s=\int _{a}^{b}{\sqrt {\mathrm {d} x^{2}+\mathrm {d} y^{2}}}=\int _{a}^{b}{\sqrt {1+y'^{2}}}\,\mathrm {d} x,$$

the integrand function being $L(x,y,y')={\sqrt {1+y'^{2}}}$.
The partial derivatives of L are:

$${\frac {\partial L(x,y,y')}{\partial y'}}={\frac {y'}{\sqrt {1+y'^{2}}}}\quad {\text{and}}\quad {\frac {\partial L(x,y,y')}{\partial y}}=0.$$
By substituting these into the Euler–Lagrange equation, we obtain

$${\begin{aligned}{\frac {\mathrm {d} }{\mathrm {d} x}}{\frac {y'(x)}{\sqrt {1+(y'(x))^{2}}}}&=0\\{\frac {y'(x)}{\sqrt {1+(y'(x))^{2}}}}&=C={\text{constant}}\\\Rightarrow y'(x)&={\frac {C}{\sqrt {1-C^{2}}}}=:A\\\Rightarrow y(x)&=Ax+B\end{aligned}}$$

that is, the function must have a constant first derivative, and thus its graph is a straight line.
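The conclusion can be sanity-checked by quadrature. A sketch under illustrative assumptions (endpoints and the perturbation shape are invented): evaluate the arc-length functional for the straight line through the two endpoints and for a bent path through the same endpoints; the straight line should be shorter:

```python
import math

a, b, c, d = 0.0, 1.0, 0.0, 2.0       # endpoints (a, c) and (b, d)
A = (d - c) / (b - a)                 # slope of the extremal y = A x + B

def length(bend):
    # y(x) = c + A x + bend * sin(pi x): the sine term keeps endpoints fixed.
    # Midpoint-rule quadrature of integral sqrt(1 + y'^2) dx.
    N = 4000
    h = (b - a) / N
    s = 0.0
    for i in range(N):
        x = a + (i + 0.5) * h
        yp = A + bend * math.pi * math.cos(math.pi * x)
        s += math.sqrt(1 + yp * yp) * h
    return s

L_line = length(0.0)                  # the Euler-Lagrange extremal
L_bent = length(0.3)                  # a competing path
```

The straight-line length also matches the closed form $\sqrt{(b-a)^2+(d-c)^2}$.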
== Generalizations ==
=== Single function of single variable with higher derivatives ===
The stationary values of the functional

$$I[f]=\int _{x_{0}}^{x_{1}}{\mathcal {L}}(x,f,f',f'',\dots ,f^{(k)})~\mathrm {d} x;\qquad f':={\frac {\mathrm {d} f}{\mathrm {d} x}},\quad f'':={\frac {\mathrm {d} ^{2}f}{\mathrm {d} x^{2}}},\quad f^{(k)}:={\frac {\mathrm {d} ^{k}f}{\mathrm {d} x^{k}}}$$

can be obtained from the Euler–Lagrange equation

$${\frac {\partial {\mathcal {L}}}{\partial f}}-{\frac {\mathrm {d} }{\mathrm {d} x}}\left({\frac {\partial {\mathcal {L}}}{\partial f'}}\right)+{\frac {\mathrm {d} ^{2}}{\mathrm {d} x^{2}}}\left({\frac {\partial {\mathcal {L}}}{\partial f''}}\right)-\dots +(-1)^{k}{\frac {\mathrm {d} ^{k}}{\mathrm {d} x^{k}}}\left({\frac {\partial {\mathcal {L}}}{\partial f^{(k)}}}\right)=0$$

under fixed boundary conditions for the function itself as well as for the first $k-1$ derivatives (i.e. for all $f^{(i)},\ i\in \{0,\dots ,k-1\}$). The endpoint values of the highest derivative $f^{(k)}$ remain flexible.
=== Several functions of single variable with single derivative ===
If the problem involves finding several functions ($f_{1},f_{2},\dots ,f_{m}$) of a single independent variable ($x$) that define an extremum of the functional

$$I[f_{1},f_{2},\dots ,f_{m}]=\int _{x_{0}}^{x_{1}}{\mathcal {L}}(x,f_{1},f_{2},\dots ,f_{m},f_{1}',f_{2}',\dots ,f_{m}')~\mathrm {d} x;\qquad f_{i}':={\frac {\mathrm {d} f_{i}}{\mathrm {d} x}}$$

then the corresponding Euler–Lagrange equations are

$${\frac {\partial {\mathcal {L}}}{\partial f_{i}}}-{\frac {\mathrm {d} }{\mathrm {d} x}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{i}'}}\right)=0;\quad i=1,2,\dots ,m$$
=== Single function of several variables with single derivative ===
A multi-dimensional generalization comes from considering a function of n variables. If $\Omega$ is some surface, then

$$I[f]=\int _{\Omega }{\mathcal {L}}(x_{1},\dots ,x_{n},f,f_{1},\dots ,f_{n})\,\mathrm {d} \mathbf {x};\qquad f_{j}:={\frac {\partial f}{\partial x_{j}}}$$

is extremized only if f satisfies the partial differential equation

$${\frac {\partial {\mathcal {L}}}{\partial f}}-\sum _{j=1}^{n}{\frac {\partial }{\partial x_{j}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{j}}}\right)=0.$$
When n = 2 and the functional ${\mathcal {I}}$ is the energy functional, this leads to the soap-film minimal surface problem.
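A concrete instance: for the Dirichlet energy ${\mathcal {L}}=\tfrac12 (f_{1}^{2}+f_{2}^{2})$ the Euler–Lagrange equation above reduces to Laplace's equation $f_{11}+f_{22}=0$, so its extremals are harmonic functions. A minimal sketch (test point and functions are illustrative assumptions) checking this with a finite-difference Laplacian:

```python
def laplacian(f, x, y, h=1e-4):
    # five-point central-difference approximation of f_xx + f_yy
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4 * f(x, y)) / h**2

harmonic = lambda x, y: x * x - y * y       # satisfies Laplace's equation
non_harmonic = lambda x, y: x * x + y * y   # Laplacian is 4, not 0

r1 = laplacian(harmonic, 0.3, 0.7)
r2 = laplacian(non_harmonic, 0.3, 0.7)
```

Only the harmonic candidate has (numerically) vanishing Laplacian, as the Euler–Lagrange condition demands.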
=== Several functions of several variables with single derivative ===
If there are several unknown functions to be determined and several variables such that

$$I[f_{1},f_{2},\dots ,f_{m}]=\int _{\Omega }{\mathcal {L}}(x_{1},\dots ,x_{n},f_{1},\dots ,f_{m},f_{1,1},\dots ,f_{1,n},\dots ,f_{m,1},\dots ,f_{m,n})\,\mathrm {d} \mathbf {x};\qquad f_{i,j}:={\frac {\partial f_{i}}{\partial x_{j}}}$$

the system of Euler–Lagrange equations is

$${\begin{aligned}{\frac {\partial {\mathcal {L}}}{\partial f_{1}}}-\sum _{j=1}^{n}{\frac {\partial }{\partial x_{j}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{1,j}}}\right)&=0\\{\frac {\partial {\mathcal {L}}}{\partial f_{2}}}-\sum _{j=1}^{n}{\frac {\partial }{\partial x_{j}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{2,j}}}\right)&=0\\&\;\;\vdots \\{\frac {\partial {\mathcal {L}}}{\partial f_{m}}}-\sum _{j=1}^{n}{\frac {\partial }{\partial x_{j}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{m,j}}}\right)&=0.\end{aligned}}$$
=== Single function of two variables with higher derivatives ===
If there is a single unknown function f to be determined that is dependent on two variables x1 and x2 and if the functional depends on higher derivatives of f up to n-th order such that

$$I[f]=\int _{\Omega }{\mathcal {L}}(x_{1},x_{2},f,f_{1},f_{2},f_{11},f_{12},f_{22},\dots ,f_{22\dots 2})\,\mathrm {d} \mathbf {x};\qquad f_{i}:={\frac {\partial f}{\partial x_{i}}},\quad f_{ij}:={\frac {\partial ^{2}f}{\partial x_{i}\partial x_{j}}},\;\dots $$
then the Euler–Lagrange equation is

$${\begin{aligned}{\frac {\partial {\mathcal {L}}}{\partial f}}&-{\frac {\partial }{\partial x_{1}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{1}}}\right)-{\frac {\partial }{\partial x_{2}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{2}}}\right)+{\frac {\partial ^{2}}{\partial x_{1}^{2}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{11}}}\right)+{\frac {\partial ^{2}}{\partial x_{1}\partial x_{2}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{12}}}\right)+{\frac {\partial ^{2}}{\partial x_{2}^{2}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{22}}}\right)\\&-\dots +(-1)^{n}{\frac {\partial ^{n}}{\partial x_{2}^{n}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{22\dots 2}}}\right)=0\end{aligned}}$$
which can be represented shortly as:

$${\frac {\partial {\mathcal {L}}}{\partial f}}+\sum _{j=1}^{n}\sum _{\mu _{1}\leq \ldots \leq \mu _{j}}(-1)^{j}{\frac {\partial ^{j}}{\partial x_{\mu _{1}}\dots \partial x_{\mu _{j}}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{\mu _{1}\dots \mu _{j}}}}\right)=0$$
wherein $\mu _{1}\dots \mu _{j}$ are indices that span the number of variables, that is, here they go from 1 to 2. Here summation over the $\mu _{1}\dots \mu _{j}$ indices is only over $\mu _{1}\leq \mu _{2}\leq \ldots \leq \mu _{j}$ in order to avoid counting the same partial derivative multiple times; for example $f_{12}=f_{21}$ appears only once in the previous equation.
=== Several functions of several variables with higher derivatives ===
If there are p unknown functions fi to be determined that are dependent on m variables x1 ... xm and if the functional depends on higher derivatives of the fi up to n-th order such that

$${\begin{aligned}I[f_{1},\ldots ,f_{p}]&=\int _{\Omega }{\mathcal {L}}(x_{1},\ldots ,x_{m};f_{1},\ldots ,f_{p};f_{1,1},\ldots ,f_{p,m};f_{1,11},\ldots ,f_{p,mm};\ldots ;f_{p,1\ldots 1},\ldots ,f_{p,m\ldots m})\,\mathrm {d} \mathbf {x} \\&\qquad \quad f_{i,\mu }:={\frac {\partial f_{i}}{\partial x_{\mu }}},\quad f_{i,\mu _{1}\mu _{2}}:={\frac {\partial ^{2}f_{i}}{\partial x_{\mu _{1}}\partial x_{\mu _{2}}}},\;\;\dots \end{aligned}}$$
where $\mu _{1}\dots \mu _{j}$ are indices that span the number of variables, that is they go from 1 to m. Then the Euler–Lagrange equation is

$${\frac {\partial {\mathcal {L}}}{\partial f_{i}}}+\sum _{j=1}^{n}\sum _{\mu _{1}\leq \ldots \leq \mu _{j}}(-1)^{j}{\frac {\partial ^{j}}{\partial x_{\mu _{1}}\dots \partial x_{\mu _{j}}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{i,\mu _{1}\dots \mu _{j}}}}\right)=0$$
where the summation over the $\mu _{1}\dots \mu _{j}$ avoids counting the same derivative $f_{i,\mu _{1}\mu _{2}}=f_{i,\mu _{2}\mu _{1}}$ several times, just as in the previous subsection. This can be expressed more compactly as

$$\sum _{j=0}^{n}\sum _{\mu _{1}\leq \ldots \leq \mu _{j}}(-1)^{j}\partial _{\mu _{1}\ldots \mu _{j}}^{j}\left({\frac {\partial {\mathcal {L}}}{\partial f_{i,\mu _{1}\dots \mu _{j}}}}\right)=0$$
=== Field theories ===
== Generalization to manifolds ==
Let $M$ be a smooth manifold, and let $C^{\infty }([a,b])$ denote the space of smooth functions $f\colon [a,b]\to M$. Then, for functionals $S\colon C^{\infty }([a,b])\to \mathbb {R}$ of the form

$$S[f]=\int _{a}^{b}(L\circ {\dot {f}})(t)\,\mathrm {d} t$$
where $L\colon TM\to \mathbb {R}$ is the Lagrangian, the statement $\mathrm {d} S_{f}=0$ is equivalent to the statement that, for all $t\in [a,b]$, each coordinate frame trivialization $(x^{i},X^{i})$ of a neighborhood of ${\dot {f}}(t)$ yields the following $\dim M$ equations:

$$\forall i:{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial X^{i}}}{\bigg |}_{{\dot {f}}(t)}={\frac {\partial L}{\partial x^{i}}}{\bigg |}_{{\dot {f}}(t)}.$$
Euler–Lagrange equations can also be written in a coordinate-free form as

$${\mathcal {L}}_{\Delta }\theta _{L}=dL$$

where $\theta _{L}$ is the canonical momentum 1-form corresponding to the Lagrangian $L$. The vector field generating time translations is denoted by $\Delta$ and the Lie derivative is denoted by ${\mathcal {L}}$. One can use local charts $(q^{\alpha },{\dot {q}}^{\alpha })$ in which

$$\theta _{L}={\frac {\partial L}{\partial {\dot {q}}^{\alpha }}}dq^{\alpha }\qquad {\text{and}}\qquad \Delta :={\frac {d}{dt}}={\dot {q}}^{\alpha }{\frac {\partial }{\partial q^{\alpha }}}+{\ddot {q}}^{\alpha }{\frac {\partial }{\partial {\dot {q}}^{\alpha }}}$$

and use coordinate expressions for the Lie derivative to see the equivalence with the coordinate expressions of the Euler–Lagrange equation. The coordinate-free form is particularly suitable for the geometrical interpretation of the Euler–Lagrange equations.
== See also ==
Lagrangian mechanics
Hamiltonian mechanics
Analytical mechanics
Beltrami identity
Functional derivative
== Notes ==
== References ==
"Lagrange equations (in mechanics)", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Euler-Lagrange Differential Equation". MathWorld.
Calculus of Variations at PlanetMath.
Gelfand, Izrail Moiseevich (1963). Calculus of Variations. Dover. ISBN 0-486-41448-5.
Roubicek, T.: Calculus of variations. Chap. 17 in: Mathematical Tools for Physicists. (Ed. M. Grinfeld) J. Wiley, Weinheim, 2014, ISBN 978-3-527-41188-7, pp. 551–588.
In the physical science of dynamics, rigid-body dynamics studies the movement of systems of interconnected bodies under the action of external forces. The assumption that the bodies are rigid (i.e. they do not deform under the action of applied forces) simplifies analysis, by reducing the parameters that describe the configuration of the system to the translation and rotation of reference frames attached to each body. This excludes bodies that display fluid, highly elastic, and plastic behavior.
The dynamics of a rigid body system is described by the laws of kinematics and by the application of Newton's second law (kinetics) or their derivative form, Lagrangian mechanics. The solution of these equations of motion provides a description of the position, the motion and the acceleration of the individual components of the system, and overall the system itself, as a function of time. The formulation and solution of rigid body dynamics is an important tool in the computer simulation of mechanical systems.
== Planar rigid body dynamics ==
If a system of particles moves parallel to a fixed plane, the system is said to be constrained to planar movement. In this case, Newton's laws (kinetics) for a rigid system of N particles, Pi, i=1,...,N, simplify because there is no movement in the k direction. Determine the resultant force and torque at a reference point R, to obtain
{\displaystyle \mathbf {F} =\sum _{i=1}^{N}m_{i}\mathbf {A} _{i},\quad \mathbf {T} =\sum _{i=1}^{N}(\mathbf {r} _{i}-\mathbf {R} )\times m_{i}\mathbf {A} _{i},}
where ri denotes the planar trajectory of each particle.
The kinematics of a rigid body yields the formula for the acceleration of the particle Pi in terms of the position R and acceleration A of the reference particle as well as the angular velocity vector ω and angular acceleration vector α of the rigid system of particles as,
{\displaystyle \mathbf {A} _{i}={\boldsymbol {\alpha }}\times (\mathbf {r} _{i}-\mathbf {R} )+{\boldsymbol {\omega }}\times ({\boldsymbol {\omega }}\times (\mathbf {r} _{i}-\mathbf {R} ))+\mathbf {A} .}
For systems that are constrained to planar movement, the angular velocity and angular acceleration vectors are directed along k perpendicular to the plane of movement, which simplifies this acceleration equation. In this case, the acceleration vectors can be simplified by introducing the unit vectors ei from the reference point R to a point ri and the unit vectors
{\textstyle \mathbf {t} _{i}=\mathbf {k} \times \mathbf {e} _{i}}
, so
{\displaystyle \mathbf {A} _{i}=\alpha (\Delta r_{i}\mathbf {t} _{i})-\omega ^{2}(\Delta r_{i}\mathbf {e} _{i})+\mathbf {A} .}
This yields the resultant force on the system as
{\displaystyle \mathbf {F} =\alpha \sum _{i=1}^{N}m_{i}\left(\Delta r_{i}\mathbf {t} _{i}\right)-\omega ^{2}\sum _{i=1}^{N}m_{i}\left(\Delta r_{i}\mathbf {e} _{i}\right)+\left(\sum _{i=1}^{N}m_{i}\right)\mathbf {A} ,}
and torque as
{\displaystyle {\begin{aligned}\mathbf {T} ={}&\sum _{i=1}^{N}(m_{i}\Delta r_{i}\mathbf {e} _{i})\times \left(\alpha (\Delta r_{i}\mathbf {t} _{i})-\omega ^{2}(\Delta r_{i}\mathbf {e} _{i})+\mathbf {A} \right)\\{}={}&\left(\sum _{i=1}^{N}m_{i}\Delta r_{i}^{2}\right)\alpha \mathbf {k} +\left(\sum _{i=1}^{N}m_{i}\Delta r_{i}\mathbf {e} _{i}\right)\times \mathbf {A} ,\end{aligned}}}
where
{\textstyle \mathbf {e} _{i}\times \mathbf {e} _{i}=0}
and
{\textstyle \mathbf {e} _{i}\times \mathbf {t} _{i}=\mathbf {k} }
is the unit vector perpendicular to the plane for all of the particles Pi.
Use the center of mass C as the reference point, so these equations for Newton's laws simplify to become
{\displaystyle \mathbf {F} =M\mathbf {A} ,\quad \mathbf {T} =I_{\textbf {C}}\alpha \mathbf {k} ,}
where M is the total mass and IC is the moment of inertia about an axis perpendicular to the movement of the rigid system and through the center of mass.
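As a numerical sketch of these planar equations, the snippet below (with illustrative, assumed masses and positions) computes the total mass M, the center of mass C, and the moment of inertia I_C about the perpendicular axis through C, then evaluates F = MA and T = I_C α for assumed values of A and α:

```python
import numpy as np

# Illustrative planar rigid body modeled as N point masses (values assumed).
masses = np.array([1.0, 2.0, 3.0])                       # kg
points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # m, in the plane

M = masses.sum()
C = (masses[:, None] * points).sum(axis=0) / M           # center of mass

# Moment of inertia about the axis through C perpendicular to the plane:
# I_C = sum_i m_i |r_i - C|^2
r = points - C
I_C = (masses * (r ** 2).sum(axis=1)).sum()

# With A the acceleration of C and alpha the angular acceleration (assumed),
# the planar equations reduce to F = M A and T = I_C * alpha (about k).
A = np.array([0.5, -0.2])   # m/s^2
alpha = 0.3                 # rad/s^2
F = M * A
T = I_C * alpha
```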
== Rigid body in three dimensions ==
=== Orientation or attitude descriptions ===
Several methods to describe orientations of a rigid body in three dimensions have been developed. They are summarized in the following sections.
==== Euler angles ====
The first attempt to represent an orientation is attributed to Leonhard Euler. He imagined three reference frames that could rotate one around the other, and realized that by starting with a fixed reference frame and performing three rotations, he could get any other reference frame in the space (using two rotations to fix the vertical axis and another to fix the other two axes). The values of these three rotations are called Euler angles. Commonly,
{\displaystyle \psi } is used to denote precession, {\displaystyle \theta } nutation, and {\displaystyle \phi } intrinsic rotation.
==== Tait–Bryan angles ====
These are three angles, also known as yaw, pitch and roll, navigation angles, or Cardan angles. Mathematically they constitute a set of six possibilities out of the twelve possible sets of Euler angles, the ordering being the one best suited for describing the orientation of a vehicle such as an airplane. In aerospace engineering they are usually referred to as Euler angles.
==== Orientation vector ====
Euler also realized that the composition of two rotations is equivalent to a single rotation about a different fixed axis (Euler's rotation theorem). Therefore, the composition of the former three angles has to be equal to only one rotation, whose axis was complicated to calculate until matrices were developed.
Based on this fact he introduced a vectorial way to describe any rotation, with a vector on the rotation axis and module equal to the value of the angle. Therefore, any orientation can be represented by a rotation vector (also called Euler vector) that leads to it from the reference frame. When used to represent an orientation, the rotation vector is commonly called orientation vector, or attitude vector.
A similar method, called axis-angle representation, describes a rotation or orientation using a unit vector aligned with the rotation axis, and a separate value to indicate the angle (see figure).
==== Orientation matrix ====
With the introduction of matrices the Euler theorems were rewritten. The rotations were described by orthogonal matrices referred to as rotation matrices or direction cosine matrices. When used to represent an orientation, a rotation matrix is commonly called orientation matrix, or attitude matrix.
The above-mentioned Euler vector is the eigenvector of a rotation matrix (a rotation matrix has a unique real eigenvalue).
The product of two rotation matrices is the composition of rotations. Therefore, as before, the orientation can be given as the rotation from the initial frame to achieve the frame that we want to describe.
The configuration space of a non-symmetrical object in n-dimensional space is SO(n) × Rn. Orientation may be visualized by attaching a basis of tangent vectors to an object. The direction in which each vector points determines its orientation.
==== Orientation quaternion ====
Another way to describe rotations is using rotation quaternions, also called versors. They are equivalent to rotation matrices and rotation vectors. With respect to rotation vectors, they can be more easily converted to and from matrices. When used to represent orientations, rotation quaternions are typically called orientation quaternions or attitude quaternions.
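The equivalence between quaternions and rotation matrices can be illustrated with the standard conversion from a unit quaternion q = (w, x, y, z) to its rotation matrix; the example below (values assumed) builds the matrix for a 90° rotation about the z-axis:

```python
import numpy as np

def quat_to_matrix(q):
    """Convert a rotation quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)   # normalize to a versor first
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# A 90-degree rotation about z: q = (cos 45deg, 0, 0, sin 45deg).
R = quat_to_matrix(np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)]))
```

The resulting matrix is orthogonal and sends the x-axis to the y-axis, as a quarter turn about z should.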
=== Newton's second law in three dimensions ===
To consider rigid body dynamics in three-dimensional space, Newton's second law must be extended to define the relationship between the movement of a rigid body and the system of forces and torques that act on it.
Newton formulated his second law for a particle as, "The change of motion of an object is proportional to the force impressed and is made in the direction of the straight line in which the force is impressed." Because Newton generally referred to mass times velocity as the "motion" of a particle, the phrase "change of motion" refers to the mass times acceleration of the particle, and so this law is usually written as
{\displaystyle \mathbf {F} =m\mathbf {a} ,}
where F is understood to be the only external force acting on the particle, m is the mass of the particle, and a is its acceleration vector. The extension of Newton's second law to rigid bodies is achieved by considering a rigid system of particles.
=== Rigid system of particles ===
If a system of N particles, Pi, i=1,...,N, are assembled into a rigid body, then Newton's second law can be applied to each of the particles in the body. If Fi is the external force applied to particle Pi with mass mi, then
{\displaystyle \mathbf {F} _{i}+\sum _{j=1}^{N}\mathbf {F} _{ij}=m_{i}\mathbf {a} _{i},\quad i=1,\ldots ,N,}
where Fij is the internal force of particle Pj acting on particle Pi that maintains the constant distance between these particles.
An important simplification to these force equations is obtained by introducing the resultant force and torque that acts on the rigid system. This resultant force and torque is obtained by choosing one of the particles in the system as a reference point, R, where each of the external forces are applied with the addition of an associated torque. The resultant force F and torque T are given by the formulas,
{\displaystyle \mathbf {F} =\sum _{i=1}^{N}\mathbf {F} _{i},\quad \mathbf {T} =\sum _{i=1}^{N}(\mathbf {R} _{i}-\mathbf {R} )\times \mathbf {F} _{i},}
where Ri is the vector that defines the position of particle Pi.
Newton's second law for a particle combines with these formulas for the resultant force and torque to yield,
{\displaystyle \mathbf {F} =\sum _{i=1}^{N}m_{i}\mathbf {a} _{i},\quad \mathbf {T} =\sum _{i=1}^{N}(\mathbf {R} _{i}-\mathbf {R} )\times (m_{i}\mathbf {a} _{i}),}
where the internal forces Fij cancel in pairs. The kinematics of a rigid body yields the formula for the acceleration of the particle Pi in terms of the position R and acceleration a of the reference particle as well as the angular velocity vector ω and angular acceleration vector α of the rigid system of particles as,
{\displaystyle \mathbf {a} _{i}=\alpha \times (\mathbf {R} _{i}-\mathbf {R} )+\omega \times (\omega \times (\mathbf {R} _{i}-\mathbf {R} ))+\mathbf {a} .}
=== Mass properties ===
The mass properties of the rigid body are represented by its center of mass and inertia matrix. Choose the reference point R so that it satisfies the condition
{\displaystyle \sum _{i=1}^{N}m_{i}(\mathbf {R} _{i}-\mathbf {R} )=0,}
then it is known as the center of mass of the system.
The inertia matrix [IR] of the system relative to the reference point R is defined by
{\displaystyle [I_{R}]=\sum _{i=1}^{N}m_{i}\left(\mathbf {I} \left(\mathbf {S} _{i}^{\textsf {T}}\mathbf {S} _{i}\right)-\mathbf {S} _{i}\mathbf {S} _{i}^{\textsf {T}}\right),}
where {\displaystyle \mathbf {S} _{i}} is the column vector Ri − R; {\displaystyle \mathbf {S} _{i}^{\textsf {T}}} is its transpose, and {\displaystyle \mathbf {I} } is the 3 by 3 identity matrix. {\displaystyle \mathbf {S} _{i}^{\textsf {T}}\mathbf {S} _{i}} is the scalar product of {\displaystyle \mathbf {S} _{i}} with itself, while {\displaystyle \mathbf {S} _{i}\mathbf {S} _{i}^{\textsf {T}}} is the tensor product of {\displaystyle \mathbf {S} _{i}} with itself.
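The inertia-matrix definition above translates directly into code. The sketch below (particle data assumed) accumulates m_i ( I (SᵀS) − S Sᵀ ) over the particles and checks it on a dumbbell of two unit masses on the x-axis, which should have zero inertia about x and 2md² about y and z:

```python
import numpy as np

def inertia_matrix(masses, points, R):
    """Inertia matrix [I_R] = sum_i m_i ( I (S.S) - S S^T ), S = R_i - R."""
    I = np.zeros((3, 3))
    for m, Ri in zip(masses, points):
        S = Ri - R
        I += m * (np.eye(3) * S.dot(S) - np.outer(S, S))
    return I

# Illustrative data: two unit masses at +/-1 on the x-axis, reference at origin.
masses = [1.0, 1.0]
points = [np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])]
I_R = inertia_matrix(masses, points, np.zeros(3))
```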
=== Force-torque equations ===
Using the center of mass and inertia matrix, the force and torque equations for a single rigid body take the form
{\displaystyle \mathbf {F} =m\mathbf {a} ,\quad \mathbf {T} =[I_{R}]\alpha +\omega \times [I_{R}]\omega ,}
and are known as Newton's second law of motion for a rigid body.
The dynamics of an interconnected system of rigid bodies, Bj, j = 1, ..., M, is formulated by isolating each rigid body and introducing the interaction forces. The resultant of the external and interaction forces on each body yields the force-torque equations
{\displaystyle \mathbf {F} _{j}=m_{j}\mathbf {a} _{j},\quad \mathbf {T} _{j}=[I_{R}]_{j}\alpha _{j}+\omega _{j}\times [I_{R}]_{j}\omega _{j},\quad j=1,\ldots ,M.}
Newton's formulation yields 6M equations that define the dynamics of a system of M rigid bodies.
=== Rotation in three dimensions ===
A rotating object, whether under the influence of torques or not, may exhibit the behaviours of precession and nutation.
The fundamental equation describing the behavior of a rotating solid body is Euler's equation of motion:
{\displaystyle {\boldsymbol {\tau }}={\frac {D\mathbf {L} }{Dt}}={\frac {d\mathbf {L} }{dt}}+{\boldsymbol {\omega }}\times \mathbf {L} ={\frac {d(I{\boldsymbol {\omega }})}{dt}}+{\boldsymbol {\omega }}\times {I{\boldsymbol {\omega }}}=I{\boldsymbol {\alpha }}+{\boldsymbol {\omega }}\times {I{\boldsymbol {\omega }}}}
where the pseudovectors τ and L are, respectively, the torques on the body and its angular momentum, the scalar I is its moment of inertia, the vector ω is its angular velocity, the vector α is its angular acceleration, D is the differential in an inertial reference frame and d is the differential in a relative reference frame fixed with the body.
The solution to this equation when there is no applied torque is discussed in the articles Euler's equation of motion and Poinsot's ellipsoid.
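The torque-free case can be checked numerically: in the body frame Euler's equation reduces to ω̇ = I⁻¹(−ω × Iω), and the magnitude of the angular momentum |L| should stay constant. The sketch below (step size, duration, and initial conditions are arbitrary assumptions) integrates this with a simple explicit Euler step:

```python
import numpy as np

I = np.diag([1.0, 2.0, 3.0])        # assumed principal moments of inertia
I_inv = np.linalg.inv(I)
omega = np.array([0.1, 1.0, 0.1])   # initial body-frame angular velocity

dt, steps = 1e-4, 10000
L0 = np.linalg.norm(I @ omega)      # |L| at the start
for _ in range(steps):
    # Torque-free Euler equation: I alpha = -omega x (I omega)
    alpha = I_inv @ (-np.cross(omega, I @ omega))
    omega = omega + dt * alpha
L1 = np.linalg.norm(I @ omega)      # |L| after integration
```

With this crude integrator |L| drifts only at O(dt²) per step, so it remains nearly constant over the run even though the components of ω tumble (the initial spin is near the unstable intermediate axis).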
It follows from Euler's equation that a torque τ applied perpendicular to the axis of rotation, and therefore perpendicular to L, results in a rotation about an axis perpendicular to both τ and L. This motion is called precession. The angular velocity of precession ΩP is given by the cross product:
{\displaystyle {\boldsymbol {\tau }}={\boldsymbol {\Omega }}_{\mathrm {P} }\times \mathbf {L} .}
Precession can be demonstrated by placing a spinning top with its axis horizontal and supported loosely (frictionless toward precession) at one end. Instead of falling over as might be expected, the top appears to defy gravity: its axis remains horizontal even though one end is unsupported, while the free end of the axis slowly describes a circle in a horizontal plane; this turning is the precession. The effect is explained by the above equations. The torque on the top is supplied by a couple of forces: gravity acting downward on the device's centre of mass, and an equal force acting upward to support one end of the device. The rotation resulting from this torque is not downward, as might be intuitively expected (which would cause the device to fall), but perpendicular to both the gravitational torque (horizontal and perpendicular to the axis of rotation) and the axis of rotation (horizontal and outwards from the point of support), i.e., about a vertical axis, causing the device to rotate slowly about the supporting point.
Under a constant torque of magnitude τ, the speed of precession ΩP is inversely proportional to L, the magnitude of its angular momentum:
{\displaystyle \tau ={\mathit {\Omega }}_{\mathrm {P} }L\sin \theta ,}
where θ is the angle between the vectors ΩP and L. Thus, if the top's spin slows down (for example, due to friction), its angular momentum decreases and so the rate of precession increases. This continues until the device is unable to rotate fast enough to support its own weight, when it stops precessing and falls off its support, largely because friction against the precession causes a secondary precession that drives the fall.
By convention, these three vectors – torque, spin, and precession – are all oriented with respect to each other according to the right-hand rule.
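Reading τ = Ω_P L sin θ numerically: under a constant torque, Ω_P = τ / (L sin θ), and halving the spin angular momentum doubles the precession rate. The numbers below are purely illustrative assumptions for a small top with a horizontal axis:

```python
import math

m, g, d = 0.5, 9.81, 0.05        # top mass (kg), gravity (m/s^2), lever arm (m)
I_spin, spin_rate = 2e-4, 300.0  # spin moment of inertia (kg m^2), spin (rad/s)

tau = m * g * d                  # gravitational torque about the support point
L = I_spin * spin_rate           # spin angular momentum
theta = math.pi / 2              # horizontal axis: Omega_P (vertical) vs L
Omega_P = tau / (L * math.sin(theta))

# As the top slows (L decreases), Omega_P = tau / L grows, matching the text.
Omega_P_slower = tau / (I_spin * (spin_rate / 2))
```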
== Virtual work of forces acting on a rigid body ==
An alternate formulation of rigid body dynamics that has a number of convenient features is obtained by considering the virtual work of forces acting on a rigid body.
The virtual work of forces acting at various points on a single rigid body can be calculated using the velocities of their point of application and the resultant force and torque. To see this, let the forces F1, F2 ... Fn act on the points R1, R2 ... Rn in a rigid body.
The trajectories of Ri, i = 1, ..., n are defined by the movement of the rigid body. The velocity of the points Ri along their trajectories are
{\displaystyle \mathbf {V} _{i}={\boldsymbol {\omega }}\times (\mathbf {R} _{i}-\mathbf {R} )+\mathbf {V} ,}
where ω is the angular velocity vector of the body.
=== Virtual work ===
Work is computed from the dot product of each force with the displacement of its point of contact
{\displaystyle \delta W=\sum _{i=1}^{n}\mathbf {F} _{i}\cdot \delta \mathbf {r} _{i}.}
If the trajectory of a rigid body is defined by a set of generalized coordinates qj, j = 1, ..., m, then the virtual displacements δri are given by
{\displaystyle \delta \mathbf {r} _{i}=\sum _{j=1}^{m}{\frac {\partial \mathbf {r} _{i}}{\partial q_{j}}}\delta q_{j}=\sum _{j=1}^{m}{\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{j}}}\delta q_{j}.}
The virtual work of this system of forces acting on the body in terms of the generalized coordinates becomes
{\displaystyle \delta W=\mathbf {F} _{1}\cdot \left(\sum _{j=1}^{m}{\frac {\partial \mathbf {V} _{1}}{\partial {\dot {q}}_{j}}}\delta q_{j}\right)+\dots +\mathbf {F} _{n}\cdot \left(\sum _{j=1}^{m}{\frac {\partial \mathbf {V} _{n}}{\partial {\dot {q}}_{j}}}\delta q_{j}\right)}
or collecting the coefficients of δqj
{\displaystyle \delta W=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{1}}}\right)\delta q_{1}+\dots +\left(\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{m}}}\right)\delta q_{m}.}
=== Generalized forces ===
For simplicity consider a trajectory of a rigid body that is specified by a single generalized coordinate q, such as a rotation angle, then the formula becomes
{\displaystyle \delta W=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}}}\right)\delta q=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial ({\boldsymbol {\omega }}\times (\mathbf {R} _{i}-\mathbf {R} )+\mathbf {V} )}{\partial {\dot {q}}}}\right)\delta q.}
Introduce the resultant force F and torque T so this equation takes the form
{\displaystyle \delta W=\left(\mathbf {F} \cdot {\frac {\partial \mathbf {V} }{\partial {\dot {q}}}}+\mathbf {T} \cdot {\frac {\partial {\boldsymbol {\omega }}}{\partial {\dot {q}}}}\right)\delta q.}
The quantity Q defined by
{\displaystyle Q=\mathbf {F} \cdot {\frac {\partial \mathbf {V} }{\partial {\dot {q}}}}+\mathbf {T} \cdot {\frac {\partial {\boldsymbol {\omega }}}{\partial {\dot {q}}}},}
is known as the generalized force associated with the virtual displacement δq. This formula generalizes to the movement of a rigid body defined by more than one generalized coordinate, that is
{\displaystyle \delta W=\sum _{j=1}^{m}Q_{j}\delta q_{j},}
where
{\displaystyle Q_{j}=\mathbf {F} \cdot {\frac {\partial \mathbf {V} }{\partial {\dot {q}}_{j}}}+\mathbf {T} \cdot {\frac {\partial {\boldsymbol {\omega }}}{\partial {\dot {q}}_{j}}},\quad j=1,\ldots ,m.}
It is useful to note that conservative forces such as gravity and spring forces are derivable from a potential function V(q1, ..., qn), known as a potential energy. In this case the generalized forces are given by
{\displaystyle Q_{j}=-{\frac {\partial V}{\partial q_{j}}},\quad j=1,\ldots ,m.}
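The relation Q_j = −∂V/∂q_j can be checked numerically. For a pendulum of assumed mass m and length l with angle q measured from the vertical, V(q) = −m g l cos q, so the generalized force conjugate to q is the gravity torque −m g l sin q; a central finite difference of V recovers it:

```python
import math

m, g, l = 1.0, 9.81, 2.0   # assumed pendulum parameters

def V(q):
    """Potential energy of a pendulum, angle q from the vertical."""
    return -m * g * l * math.cos(q)

def Q_numeric(q, h=1e-6):
    """Generalized force Q = -dV/dq via central finite difference."""
    return -(V(q + h) - V(q - h)) / (2 * h)

q = 0.7
Q_exact = -m * g * l * math.sin(q)   # analytic gravity torque
```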
== D'Alembert's form of the principle of virtual work ==
The equations of motion for a mechanical system of rigid bodies can be determined using D'Alembert's form of the principle of virtual work. The principle of virtual work is used to study the static equilibrium of a system of rigid bodies, however by introducing acceleration terms in Newton's laws this approach is generalized to define dynamic equilibrium.
=== Static equilibrium ===
The static equilibrium of a mechanical system of rigid bodies is defined by the condition that the virtual work of the applied forces is zero for any virtual displacement of the system. This is known as the principle of virtual work. It is equivalent to the requirement that the generalized forces for any virtual displacement are zero, that is Qi = 0.
Let a mechanical system be constructed from n rigid bodies, Bi, i = 1, ..., n, and let the resultant of the applied forces on each body be the force-torque pairs, Fi and Ti, i = 1, ..., n. Notice that these applied forces do not include the reaction forces where the bodies are connected. Finally, assume that the velocity Vi and angular velocities ωi, i = 1, ..., n, for each rigid body, are defined by a single generalized coordinate q. Such a system of rigid bodies is said to have one degree of freedom.
The virtual work of the forces and torques, Fi and Ti, applied to this one degree of freedom system is given by
{\displaystyle \delta W=\sum _{i=1}^{n}\left(\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}}}+\mathbf {T} _{i}\cdot {\frac {\partial {\boldsymbol {\omega }}_{i}}{\partial {\dot {q}}}}\right)\delta q=Q\delta q,}
where
{\displaystyle Q=\sum _{i=1}^{n}\left(\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}}}+\mathbf {T} _{i}\cdot {\frac {\partial {\boldsymbol {\omega }}_{i}}{\partial {\dot {q}}}}\right),}
is the generalized force acting on this one degree of freedom system.
If the mechanical system is defined by m generalized coordinates, qj, j = 1, ..., m, then the system has m degrees of freedom and the virtual work is given by,
{\displaystyle \delta W=\sum _{j=1}^{m}Q_{j}\delta q_{j},}
where
{\displaystyle Q_{j}=\sum _{i=1}^{n}\left(\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{j}}}+\mathbf {T} _{i}\cdot {\frac {\partial {\boldsymbol {\omega }}_{i}}{\partial {\dot {q}}_{j}}}\right),\quad j=1,\ldots ,m.}
is the generalized force associated with the generalized coordinate qj. The principle of virtual work states that static equilibrium occurs when these generalized forces acting on the system are zero, that is
{\displaystyle Q_{j}=0,\quad j=1,\ldots ,m.}
These m equations define the static equilibrium of the system of rigid bodies.
=== Generalized inertia forces ===
Consider a single rigid body which moves under the action of a resultant force F and torque T, with one degree of freedom defined by the generalized coordinate q. Assume the reference point for the resultant force and torque is the center of mass of the body, then the generalized inertia force Q* associated with the generalized coordinate q is given by
{\displaystyle Q^{*}=-(M\mathbf {A} )\cdot {\frac {\partial \mathbf {V} }{\partial {\dot {q}}}}-\left([I_{R}]{\boldsymbol {\alpha }}+{\boldsymbol {\omega }}\times [I_{R}]{\boldsymbol {\omega }}\right)\cdot {\frac {\partial {\boldsymbol {\omega }}}{\partial {\dot {q}}}}.}
This inertia force can be computed from the kinetic energy of the rigid body,
{\displaystyle T={\tfrac {1}{2}}M\mathbf {V} \cdot \mathbf {V} +{\tfrac {1}{2}}{\boldsymbol {\omega }}\cdot [I_{R}]{\boldsymbol {\omega }},}
by using the formula
{\displaystyle Q^{*}=-\left({\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}}}-{\frac {\partial T}{\partial q}}\right).}
A system of n rigid bodies with m generalized coordinates has the kinetic energy
{\displaystyle T=\sum _{i=1}^{n}\left({\tfrac {1}{2}}M\mathbf {V} _{i}\cdot \mathbf {V} _{i}+{\tfrac {1}{2}}{\boldsymbol {\omega }}_{i}\cdot [I_{R}]{\boldsymbol {\omega }}_{i}\right),}
which can be used to calculate the m generalized inertia forces
{\displaystyle Q_{j}^{*}=-\left({\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}\right),\quad j=1,\ldots ,m.}
=== Dynamic equilibrium ===
D'Alembert's form of the principle of virtual work states that a system of rigid bodies is in dynamic equilibrium when the virtual work of the sum of the applied forces and the inertial forces is zero for any virtual displacement of the system. Thus, dynamic equilibrium of a system of n rigid bodies with m generalized coordinates requires that
{\displaystyle \delta W=\left(Q_{1}+Q_{1}^{*}\right)\delta q_{1}+\dots +\left(Q_{m}+Q_{m}^{*}\right)\delta q_{m}=0,}
for any set of virtual displacements δqj. This condition yields m equations,
{\displaystyle Q_{j}+Q_{j}^{*}=0,\quad j=1,\ldots ,m,}
which can also be written as
{\displaystyle {\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}=Q_{j},\quad j=1,\ldots ,m.}
The result is a set of m equations of motion that define the dynamics of the rigid body system.
=== Lagrange's equations ===
If the generalized forces Qj are derivable from a potential energy V(q1, ..., qm), then these equations of motion take the form
{\displaystyle {\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}=-{\frac {\partial V}{\partial q_{j}}},\quad j=1,\ldots ,m.}
In this case, introduce the Lagrangian, L = T − V, so these equations of motion become
{\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}-{\frac {\partial L}{\partial q_{j}}}=0,\quad j=1,\ldots ,m.}
These are known as Lagrange's equations of motion.
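Lagrange's equations can be derived symbolically for a concrete system. The sketch below uses SymPy for a simple pendulum (assumed mass m, length l, angle q from the vertical), with L = T − V, T = ½ m l² q̇² and V = −m g l cos q; the expected equation of motion is m l² q̈ + m g l sin q = 0:

```python
import sympy as sp

t = sp.symbols('t')
m, g, l = sp.symbols('m g l', positive=True)
q = sp.Function('q')(t)

T = sp.Rational(1, 2) * m * l**2 * sp.diff(q, t)**2  # kinetic energy
V = -m * g * l * sp.cos(q)                           # potential energy
L = T - V                                            # Lagrangian

# Lagrange's equation: d/dt(dL/dq') - dL/dq = 0
eom = sp.diff(sp.diff(L, sp.diff(q, t)), t) - sp.diff(L, q)
```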
== Linear and angular momentum ==
=== System of particles ===
The linear and angular momentum of a rigid system of particles is formulated by measuring the position and velocity of the particles relative to the center of mass. Let the system of particles Pi, i = 1, ..., n be located at the coordinates ri and velocities vi. Select a reference point R and compute the relative position and velocity vectors,
{\displaystyle \mathbf {r} _{i}=\left(\mathbf {r} _{i}-\mathbf {R} \right)+\mathbf {R} ,\quad \mathbf {v} _{i}={\frac {d}{dt}}(\mathbf {r} _{i}-\mathbf {R} )+\mathbf {V} .}
The total linear and angular momentum vectors relative to the reference point R are
{\displaystyle \mathbf {p} ={\frac {d}{dt}}\left(\sum _{i=1}^{n}m_{i}\left(\mathbf {r} _{i}-\mathbf {R} \right)\right)+\left(\sum _{i=1}^{n}m_{i}\right)\mathbf {V} ,}
and
{\displaystyle \mathbf {L} =\sum _{i=1}^{n}m_{i}\left(\mathbf {r} _{i}-\mathbf {R} \right)\times {\frac {d}{dt}}\left(\mathbf {r} _{i}-\mathbf {R} \right)+\left(\sum _{i=1}^{n}m_{i}\left(\mathbf {r} _{i}-\mathbf {R} \right)\right)\times \mathbf {V} .}
If R is chosen as the center of mass these equations simplify to
{\displaystyle \mathbf {p} =M\mathbf {V} ,\quad \mathbf {L} =\sum _{i=1}^{n}m_{i}\left(\mathbf {r} _{i}-\mathbf {R} \right)\times {\frac {d}{dt}}\left(\mathbf {r} _{i}-\mathbf {R} \right).}
=== Rigid system of particles ===
To specialize these formulas to a rigid body, assume the particles are rigidly connected to each other so Pi, i=1,...,n are located by the coordinates ri and velocities vi. Select a reference point R and compute the relative position and velocity vectors,
{\displaystyle \mathbf {r} _{i}=(\mathbf {r} _{i}-\mathbf {R} )+\mathbf {R} ,\quad \mathbf {v} _{i}=\omega \times (\mathbf {r} _{i}-\mathbf {R} )+\mathbf {V} ,}
where ω is the angular velocity of the system.
The linear momentum and angular momentum of this rigid system measured relative to the center of mass R is
{\displaystyle \mathbf {p} =\left(\sum _{i=1}^{n}m_{i}\right)\mathbf {V} ,\quad \mathbf {L} =\sum _{i=1}^{n}m_{i}(\mathbf {r} _{i}-\mathbf {R} )\times \mathbf {v} _{i}=\sum _{i=1}^{n}m_{i}(\mathbf {r} _{i}-\mathbf {R} )\times (\omega \times (\mathbf {r} _{i}-\mathbf {R} )).}
These equations simplify to become,
{\displaystyle \mathbf {p} =M\mathbf {V} ,\quad \mathbf {L} =[I_{R}]\omega ,}
where M is the total mass of the system and [IR] is the moment of inertia matrix defined by
{\displaystyle [I_{R}]=-\sum _{i=1}^{n}m_{i}[r_{i}-R][r_{i}-R],}
where [ri − R] is the skew-symmetric matrix constructed from the vector ri − R.
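This skew-symmetric form of the inertia matrix agrees with the earlier definition Σ m_i ( I (s·s) − s sᵀ ), since −[s][s] = (s·s)I − s sᵀ for the cross-product matrix [s]. The sketch below (particle data assumed) computes both and compares them:

```python
import numpy as np

def skew(s):
    """Cross-product (skew-symmetric) matrix: skew(s) @ v == np.cross(s, v)."""
    return np.array([[0.0, -s[2], s[1]],
                     [s[2], 0.0, -s[0]],
                     [-s[1], s[0], 0.0]])

masses = [1.0, 2.0]
points = [np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.5, 0.0])]
R = np.array([0.1, 0.0, -0.2])

# Skew-symmetric form: [I_R] = -sum_i m_i [s_i][s_i]
I_skew = -sum(m * skew(p - R) @ skew(p - R) for m, p in zip(masses, points))

# Direct form: [I_R] = sum_i m_i ( I (s.s) - s s^T )
I_direct = sum(m * (np.eye(3) * (p - R).dot(p - R) - np.outer(p - R, p - R))
               for m, p in zip(masses, points))
```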
== Applications ==
For the analysis of robotic systems
For the biomechanical analysis of animals, humans or humanoid systems
For the analysis of space objects
For the understanding of strange motions of rigid bodies.
For the design and development of dynamics-based sensors, such as gyroscopic sensors.
For the design and development of various stability enhancement applications in automobiles.
For improving the graphics of video games which involve rigid bodies
== See also ==
== References ==
== Further reading ==
E. Leimanis (1965). The General Problem of the Motion of Coupled Rigid Bodies about a Fixed Point. (Springer, New York).
W. B. Heard (2006). Rigid Body Mechanics: Mathematics, Physics and Applications. (Wiley-VCH).
== External links ==
Chris Hecker's Rigid Body Dynamics Information Archived 12 March 2007 at the Wayback Machine
Physically Based Modeling: Principles and Practice
DigitalRune Knowledge Base Archived 20 November 2008 at the Wayback Machine contains a master thesis and a collection of resources about rigid body dynamics.
F. Klein, "Note on the connection between line geometry and the mechanics of rigid bodies" (English translation)
F. Klein, "On Sir Robert Ball's theory of screws" (English translation)
E. Cotton, "Application of Cayley geometry to the geometric study of the displacement of a solid around a fixed point" (English translation)
The following outline is provided as an overview of and topical guide to fluid dynamics:
In physics, physical chemistry and engineering, fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids – liquids and gases. It has several subdisciplines, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of water and other liquids in motion). Fluid dynamics has a wide range of applications, including calculating forces and moments on aircraft, determining the mass flow rate of petroleum through pipelines, predicting weather patterns, understanding nebulae in interstellar space, understanding large scale geophysical flows involving oceans/atmosphere and modelling fission weapon detonation.
Below is a structured list of topics in fluid dynamics.
== What type of thing is fluid dynamics? ==
Fluid dynamics can be described as all of the following:
An academic discipline – one with academic departments, curricula and degrees; national and international societies; and specialized journals.
A scientific field (a branch of science) – widely recognized category of specialized expertise within science, and typically embodies its own terminology and nomenclature. Such a field will usually be represented by one or more scientific journals, where peer-reviewed research is published.
A natural science – one that seeks to elucidate the rules that govern the natural world using empirical and scientific methods.
A physical science – one that studies non-living systems.
A branch of physics – study of matter, its fundamental constituents, its motion and behavior through space and time, and the related entities of energy and force.
A branch of mechanics – area of mathematics and physics concerned with the relationships between force, matter, and motion among physical objects.
A branch of continuum mechanics – subject that models matter without using the information that it is made out of atoms; that is, it models matter from a macroscopic viewpoint rather than from microscopic.
A subdiscipline of fluid mechanics – branch of physics concerned with the mechanics of fluids (liquids, gases, and plasmas) and the forces on them, which also includes hydrostatics as a subdiscipline
A branch of dynamics (mechanics) – subject that studies forces and motion.
== Branches of fluid dynamics ==
Acoustic theory – Theory of sound waves
Aerodynamics – Branch of dynamics concerned with studying the motion of air
Aeroelasticity – Interactions among inertial, elastic, and aerodynamic forces
Computational fluid dynamics – Analysis and solving of problems that involve fluid flows
Flow measurement – Quantification of bulk fluid movement
Electrohydrodynamics – Study of electrically conducting fluids in the presence of electric fields
Magnetohydrodynamics – Model of electrically conducting fluids
Topological fluid dynamics
Quantum hydrodynamics – Study of hydrodynamic-like systems which demonstrate quantum mechanical behavior
== History of fluid dynamics ==
History of fluid dynamics
== Mathematical equations and concepts ==
Airy wave theory – Fluid dynamics theory on the propagation of gravity waves
Benjamin–Bona–Mahony equation
Boussinesq approximation (water waves) – Approximation valid for weakly non-linear and fairly long waves
Boundary conditions in fluid dynamics
Boundary conditions in computational fluid dynamics
Elementary flow – Collection of basic flows from which more complex flows can be constructed by superposition
Euler equations (fluid dynamics) – Set of quasilinear hyperbolic equations governing adiabatic and inviscid flow
Relativistic Euler equations – Generalization of the Euler equations that accounts for the effects of general relativity
Helmholtz's theorems – 3D motion of fluid near vortex lines
Kirchhoff equations – Motion of rigid body in ideal fluid
Knudsen equation – Description of gas flow in free molecular flow
Manning equation – Estimate of velocity in open channel flows
Mild-slope equation – Physics phenomenon and formula
Morison equation – Equation for force on an object in sea waves
Navier–Stokes equations – Equations describing the motion of viscous fluid substances
Oseen flow – Formulae for viscous and incompressible fluid flow at small Reynolds numbers
Poiseuille's law – Law describing the pressure drop in an incompressible and Newtonian fluid
Pressure head – In fluid mechanics, the height of a liquid column
Rayleigh's equation (fluid dynamics)
Stokes stream function – Describes the streamlines and flow velocity in a three-dimensional incompressible flow with axisymmetry
Stream function – Function for incompressible divergence-free flows in two dimensions
Streamlines, streaklines and pathlines – Field lines in a fluid flow
Torricelli's Law – Theorem in fluid mechanics
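As a minimal illustrative sketch of one relation in the list above (not part of the outline itself), the Hagen–Poiseuille form of Poiseuille's law gives the pressure drop for laminar flow of a Newtonian fluid through a circular pipe, Δp = 8μLQ/(πr⁴); the example values are illustrative assumptions:

```python
from math import pi

def poiseuille_pressure_drop(mu, length, flow_rate, radius):
    """Hagen-Poiseuille pressure drop (Pa) for laminar flow in a circular pipe.

    mu: dynamic viscosity (Pa*s); length: pipe length (m);
    flow_rate: volumetric flow rate (m^3/s); radius: pipe radius (m).
    """
    return 8.0 * mu * length * flow_rate / (pi * radius ** 4)

# Illustrative values: water-like viscosity (1e-3 Pa*s) through a 1 m pipe
# of 1 cm radius at a flow rate of 1e-4 m^3/s.
dp = poiseuille_pressure_drop(1e-3, 1.0, 1e-4, 0.01)  # ~25.5 Pa
```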
== Types of fluid flow ==
Aerodynamic force – Force exerted on a body as it moves through air or gas
Convection – Fluid flow that occurs due to heterogeneous fluid properties and body forces
Cavitation – Low-pressure voids formed in liquids
Compressible flow – Branch of fluid mechanics
Couette flow – Model of viscous fluid flow between two surfaces moving relative to each other
Effusive limit
Free molecular flow – Gas flow with a relatively large mean free molecular path
Incompressible flow – Fluid flow in which density remains constant
Inviscid flow – Flow of fluids with zero viscosity (superfluids)
Isothermal flow – Model of fluid flow
Open channel flow – Type of liquid flow within a conduit
Pipe flow – Type of liquid flow within a closed conduit
Pressure-driven flow
Secondary flow – Relatively minor flow superimposed on the primary flow by inviscid assumptions
Stream thrust averaging – Process to convert 3D flow into 1D
Superfluidity – Fluid which flows without losing kinetic energy
Transient flow – Flow in which velocity and pressure vary with time
Two-phase flow – Flow of gas and liquid in the same conduit
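Several of the regimes above are conventionally distinguished by the Reynolds number, Re = ρUL/μ. A minimal sketch; the pipe-flow thresholds of 2300 and 4000 are the conventional illustrative values, not universal constants:

```python
def reynolds_number(density, velocity, length, mu):
    """Re = rho * U * L / mu (dimensionless)."""
    return density * velocity * length / mu

def flow_regime(re, laminar_limit=2300.0, turbulent_limit=4000.0):
    """Classify a pipe flow by Reynolds number (conventional thresholds)."""
    if re < laminar_limit:
        return "laminar"
    if re > turbulent_limit:
        return "turbulent"
    return "transitional"

# Illustrative values: water (1000 kg/m^3, mu = 1e-3 Pa*s) in a 2 cm pipe
# at 0.1 m/s gives Re = 2000, in the laminar range.
re = reynolds_number(1000.0, 0.1, 0.02, 1e-3)
```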
== Fluid properties ==
List of hydrodynamic instabilities
Newtonian fluid – Type of fluid
Non-Newtonian fluid – Fluid whose viscosity varies with the amount of force/stress applied to it
Surface tension – Tendency of a liquid surface to shrink to reduce surface area
Vapour pressure – Pressure exerted by a vapor in thermodynamic equilibrium
== Fluid phenomena ==
Balanced flow – Model of atmospheric motion
Boundary layer – Layer of fluid in the immediate vicinity of a bounding surface
Coanda effect – Tendency of a fluid jet to stay attached to a surface of any form
Convection cell – Cyclic flow of convection currents in a fluid
Convergence/Bifurcation – Linear mapping permuting rectangles of the same area
Darwin drift – Phenomenon in fluid dynamics where a fluid parcel is permanently displaced after the passage of a body through a fluid
Drag (force) – Retarding force on a body moving in a fluid
Droplet vaporization – Phenomenon in fluid dynamics
Hydrodynamic stability – Subfield of fluid dynamics
Kaye effect – Property of complex liquids
Lift (force) – Force perpendicular to flow of surrounding fluid
Magnus effect – Deflection in the path of a spinning object moving through a fluid
Ocean current – Directional mass flow of oceanic water
Ocean surface waves – Surface waves generated by wind on open water
Rossby wave – Inertial wave occurring in rotating fluids
Shock wave – Propagating disturbance
Soliton – Self-reinforcing single wave packet
Stokes drift – Average velocity of a fluid parcel in a gravity wave
Teapot effect – Phenomenon in fluid dynamics
Thread breakup
Turbulent jet breakup
Upstream contamination – Contaminants moving opposite of flow
Venturi effect – Reduced pressure caused by a flow restriction in a tube or pipe
Vortex – Fluid flow revolving around an axis of rotation
Water hammer – Pressure surge when a fluid is forced to stop or change direction suddenly
Wave drag – Aircraft aerodynamic drag at transonic and supersonic speeds due to the presence of shock waves
Wind – Natural movement of air or other gases relative to a planet's surface
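The Venturi effect listed above follows from continuity (A₁v₁ = A₂v₂) combined with Bernoulli's principle. A minimal sketch, assuming steady incompressible flow; the numbers in the example are illustrative:

```python
def venturi_pressure_drop(rho, v1, a1, a2):
    """Pressure drop (Pa) between the wide and narrow sections of a Venturi
    tube, from continuity (A1*v1 = A2*v2) and Bernoulli's principle.

    rho: fluid density (kg/m^3); v1: upstream velocity (m/s);
    a1, a2: upstream and throat cross-sectional areas (m^2).
    """
    v2 = v1 * a1 / a2          # continuity: flow speeds up in the throat
    return 0.5 * rho * (v2 ** 2 - v1 ** 2)  # Bernoulli: pressure falls

# Illustrative values: water at 1 m/s entering a constriction of half the
# area doubles its speed, dropping the pressure by 1500 Pa.
dp = venturi_pressure_drop(1000.0, 1.0, 2.0, 1.0)
```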
== Concepts in aerodynamics ==
Aileron – Aircraft control surface used to induce roll
Airplane – Powered aircraft with wings
Angle of attack – Angle between the chord of a wing and the undisturbed airflow
Banked turn – Inclination of road or surface other than flat
Bernoulli's principle – Principle relating to fluid dynamics
Bilgeboard
Boomerang – Thrown tool and weapon
Centerboard – Retractable keel which pivots out of a slot in the hull of a sailboat
Chord (aircraft) – Imaginary straight line joining the leading and trailing edges of an aerofoil
Circulation control wing – Aircraft high-lift device
Currentology – Science that studies the internal movements of water masses
Diving plane – Control surface on a submarine
Downforce – Downwards lift force created by the aerodynamic characteristics of a vehicle
Drag coefficient – Dimensionless parameter to quantify fluid resistance
Fin – Thin component or appendage attached to a larger body or structure
Flipper (anatomy) – Flattened limb adapted for propulsion and maneuvering in water
Flow separation – Detachment of a boundary layer from a surface into a wake
Foil (fluid mechanics) – Solid object used in fluid mechanics
Fluid coupling – Device used to transmit rotating mechanical power
Gas kinetics – Study of the motion of gases
Hydrofoil – Type of fast watercraft and the name of the technology it uses
Keel – Lower centreline structural element of a ship or boat hull (hydrodynamic)
Küssner effect – Unsteady aerodynamic forces on an airfoil or hydrofoil caused by encountering a transverse gust
Kutta condition – Fluid dynamics principle regarding bodies with sharp corners
Kutta–Joukowski theorem – Formula relating lift on an airfoil to fluid speed, density, and circulation
Lift coefficient – Dimensionless quantity relating lift to fluid density and velocity over an area
Lift-induced drag – Type of aerodynamic resistance against the motion of a wing or other airfoil
Lift-to-drag ratio – Measure of aerodynamic efficiency
Lifting-line theory – Mathematical model to quantify lift
NACA airfoil – Wing shape
Newton's third law – Laws in physics about force and motion
Propeller – Device that transmits rotational power into air movement thrust on a fluid
Pump – Device that imparts energy to the fluids by mechanical action
Rudder – Control surface for fluid-dynamic steering in the yaw axis
Sail – Fabric or other surface supported by a mast to allow wind propulsion (aerodynamics)
Skeg – Extension of a boat's keel at the back, also a surfboard's fin
Sound barrier – Sudden increase of undesirable effects when an aircraft approaches the speed of sound
Spoiler (automotive) – Device for reducing aerodynamic drag
Stall (flight) – Abrupt reduction in lift due to flow separation
Supersonic flow over a flat plate
Surfboard fin – Part of a surfboard
Surface science – Study of physical and chemical phenomena that occur at the interface of two phases
Torque converter – Fluid coupling that transfers rotating power from a prime mover to a rotating driven load
Trim tab – Boat or aircraft component
Wing – Appendage used for flight
Wingtip vortices – Turbulence caused by difference in air pressure on either side of wing
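Several of the quantities above (lift coefficient, drag coefficient, lift-to-drag ratio) combine in the standard lift equation L = ½ρV²SC_L. A minimal illustrative sketch; the example values are assumptions, not data from the outline:

```python
def lift_force(rho, velocity, area, cl):
    """Lift L = 0.5 * rho * V^2 * S * C_L (N).

    rho: air density (kg/m^3); velocity: airspeed (m/s);
    area: wing reference area (m^2); cl: lift coefficient.
    """
    return 0.5 * rho * velocity ** 2 * area * cl

def lift_to_drag(cl, cd):
    """Lift-to-drag ratio, a measure of aerodynamic efficiency."""
    return cl / cd

# Illustrative values: sea-level air (1.225 kg/m^3), 50 m/s, a 16 m^2 wing
# at C_L = 0.5 produces 12,250 N of lift.
L = lift_force(1.225, 50.0, 16.0, 0.5)
```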
== Fluid dynamics research ==
Fluid dynamics journals
=== Methods used in fluid dynamics research ===
Finite volume method for unsteady flow
Flow visualization – Visualization technique in fluid dynamics
Immersed boundary method
Projection method (fluid dynamics) – Method for numerically solving time-dependent incompressible fluid-flow problems
Seeding (fluid dynamics) – Process of introducing tracer particles into a fluid to evaluate its flow
=== Tools used in fluid dynamics research ===
Peniche (fluid dynamics)
Rotating tank – Fluid dynamics
== Applications of fluid dynamics ==
Acoustics – Branch of physics involving mechanical waves
Aeronautics – Science involved with the study, design, and manufacturing of airflight-capable machines
Astrophysical fluid dynamics – Modern branch of astronomy involving fluid mechanics
Cryosphere science – Earth's surface where water is frozen
Geophysical fluid dynamics – Dynamics of naturally occurring flows
Hemodynamics – Dynamics of blood flow
Hydraulics – Applied engineering involving liquids
Hydrology – Science of the movement, distribution, and quality of water on Earth
Fluidics – Use of a fluid to perform analog or digital operations
Fluid power – Use of fluids under pressure to generate, control, and transmit power
Geodynamics – Study of dynamics of the Earth
Hydraulic machinery – Type of machine that uses liquid fluid power to perform work
Meteorology – Interdisciplinary scientific study of the atmosphere focusing on weather forecasting
Naval architecture – Engineering discipline of marine vessels
Oceanography – Study of physical, chemical, and biological processes in the ocean
Plasma physics – Study of ionized states of matter
Pneumatics – Use of pressurised gas in mechanical systems
Ice-sheet dynamics – Motion within large bodies of glacial ice
== Fluid dynamics organizations ==
Von Karman Institute for Fluid Dynamics
Max Planck Institute for Dynamics and Self-Organization
== Fluid dynamics publications ==
=== Books on fluid dynamics ===
Publications in fluid dynamics throughout history
An Album of Fluid Motion (1982)
=== Journals pertaining to fluid dynamics ===
Annual Review of Fluid Mechanics
Journal of Fluid Mechanics
Physics of Fluids
Physical Review Fluids
Experiments in Fluids
European Journal of Mechanics B: Fluids
Theoretical and Computational Fluid Dynamics
Computers and Fluids
International Journal for Numerical Methods in Fluids
Flow, Turbulence and Combustion
== Persons influential in fluid dynamics ==
Contributors to the field of fluid dynamics come from a wide array of disciplines; in addition to their other titles, each is also a fluid dynamicist. Following is a list of notable fluid dynamicists:
Snezhana Abarzhi – Applied mathematician and mathematical physicist
John Abraham – American professor
H. Norman Abramson – American engineer (1926–2022)
David Acheson – British mathematician
Andreas Acrivos – Greek–American physicist (1928–2025)
Noreen Sher Akbar – Pakistani applied mathematician
Silas D. Alben – American mathematician
Jean le Rond d'Alembert – French mathematician, mechanician, physicist, philosopher and music theorist (1717–1783)
Hannes Alfvén – Swedish electrical engineer, plasma physicist and Nobel laureate (1908–1995)
John D. Anderson – American curator (born 1937)
Elephter Andronikashvili – Georgian physicist
Shelley Anna – American chemical engineer
Archimedes – Greek mathematician and physicist (c. 287 – 212 BC)
Hassan Aref – Professor of fluid dynamics
Vladimir Arnold – Russian mathematician (1937–2010)
Amedeo Avogadro – Italian scientist (1776–1856)
Ralph Bagnold – British Army officer
Boris Bakhmeteff – Russian diplomat (1880–1951)
Donát Bánki – Hungarian mechanical engineer and inventor (1859–1922)
Grigory Barenblatt – Russian mathematician (1927–2018)
Dwight Barkley – British researcher
Adhémar Jean Claude Barré de Saint-Venant – French mathematician (1797–1886)
Alfred Barnard Basset – British mathematician (1854–1930)
George Batchelor – Australian mathematician and physicist
Harry Bateman – British-American mathematician
Francine Battaglia – American computational fluid dynamicist
Jurjen Battjes – Dutch civil engineer (born 1939)
Henri-Émile Bazin – French hydraulic engineer
James Thomas Beale – American mathematician
Adrian Bejan – Romanian-American professor
Josette Bellan – Romanian-French-American fluid dynamicist
Henri Bénard – French physicist (1874–1939)
Brooke Benjamin – English mathematical physicist and mathematician
David Benney – New Zealand applied mathematician
Frank H. Berkshire – British mathematician
Natalia Berloff – Russian mathematician
Daniel Bernoulli – Swiss mathematician and physicist (1700–1782)
Johann Bernoulli – Swiss mathematician (1667–1748)
Andrea Bertozzi – American mathematician
W. H. Besant – British mathematician
Albert Betz – German physicist (1885–1968)
Eugene C. Bingham – American chemist (1878–1945)
Jean-Baptiste Biot – French physicist (1774–1862)
Robert Byron Bird – American chemical engineer (1924–2020)
Garrett Birkhoff – American mathematician (1911–1996)
Paul Richard Heinrich Blasius – German physicist
Tobias de Boer – Dutch scientist
Ludwig Boltzmann – Austrian mathematician and theoretical physicist (1844–1906)
Wilfrid Noel Bond – English physicist (1897–1937)
Joseph Valentin Boussinesq – French mathematician and physicist (1842–1929)
Robert Boyle – Anglo-Irish scientist (1627–1691)
Peter Bradshaw (aeronautical engineer) – British engineer (1935–2024)
Francis Bretherton – American mathematician, oceanographer and engineer (1935–2021)
John D. Buckmaster – British aerospace engineer
Gerald Bull – Canadian artillery engineer and entrepreneur (1928–1990)
Jan Burgers – Dutch physicist (1895–1981)
Adolf Busemann – German aerospace engineer
Sébastien Candel – French physicist (born 1946)
Isabelle Cantat – French physicist
Silvana Cardoso – Portuguese fluid dynamicist
Nicolas Léonard Sadi Carnot – French physicist and engineer (1796–1832)
George F. Carrier – American mathematician
Claudia Cenedese – Italian oceanographer
Subrahmanyan Chandrasekhar – Indian-American physicist (1910–1995)
Hubert Chanson – Australian engineering academic (born 1961)
Jacques Charles – French inventor, scientist and mathematician (1746–1823)
Jean-Yves Chemin – French mathematician (born 1959)
Thomas H. Chilton – American chemical engineer
Alexandre Chorin – American mathematician
Demetrios Christodoulou – Greek mathematician and physicist (born 1951)
Chia-Kun Chu – Chinese-American mathematician (1927–2023)
Émile Clapeyron – French engineer and physicist
John Frederick Clarke – British scientist (1927–2013)
Rudolf Clausius – German physicist and mathematician (1822–1888)
Paul Clavin – French scientist
Nicolas Clément – French physicist and chemist (1779–1841)
Julian Cole – American mathematician
Adrian Constantin – Romanian-Austrian mathematician
Stanley Corrsin – American physicist and engineer
Maurice Couette – French physicist
Richard Courant – German-American mathematician (1888–1972)
David Crighton – British mathematician and physicist
Mimi Dai – Mathematician
Stuart Dalziel – British and New Zealand fluid dynamicist
Gerhard Damköhler – German chemist (1908–1944)
Henry Darcy – French engineer (1803–1858)
Georges Jean Marie Darrieus – French aerospace and electrical engineer
Stephen H. Davis – American mathematician (1939–2021)
William Reginald Dean – British mathematician (1896–1973)
Lokenath Debnath – Indian American mathematician (1935–2023)
Subhasish Dey – Indian hydraulician and educator
Satish Dhawan – Indian mathematician and engineer (1920–2002)
Rudolf Diesel – German inventor and engineer (1858–1913)
Ronald DiPerna – American mathematician
Charles R. Doering – American mathematician (1956–2021)
David Dolidze – Georgian and Soviet mathematician
Philip Drazin – British mathematician (1934–2002)
Hugh Latimer Dryden – American aeronautical scientist and civil servant (1898–1965)
Elizabeth B. Dussan V. – American mathematician
Ernst R. G. Eckert – American aerospace engineer
Vagn Walfrid Ekman – Swedish oceanographer (1874–1954)
Simen Ådnøy Ellingsen – Norwegian Professor
Loránd Eötvös – Hungarian physicist (1848–1919)
Jerald Ericksen – American mathematician (1924–2021)
R. Cengiz Ertekin – Turkish marine engineer
Leonhard Euler – Swiss mathematician (1707–1783)
David Evans (mathematician) – British mathematician
Amir Faghri – American mechanical engineering professor (born 1951)
Gino Girolamo Fanno – Italian mechanical engineer (1882–1962)
Eduard Feireisl – Czech mathematician
Antonio Ferri – Italian scientist (1912–1975)
John Ffowcs Williams – British engineer-scientist (1935–2020)
Bruce A. Finlayson – American chemical engineer
Irmgard Flügge-Lotz – German mathematician
Emanuele Foà – Italian engineer and physicist (1892–1949)
Hermann Föttinger – German engineer (1877–1945)
Joseph Fourier – French mathematician and physicist (1768–1830)
James B. Francis – British-American civil engineer (1815–1892)
David A. Frank-Kamenetskii – Soviet scientist (1910–1970)
François Frenkiel – Physicist
Uriel Frisch – French mathematical physicist
Robert Edmund Froude – British engineer and naval architect
William Froude – British engineer and naval architect
Mohamed Gad-el-Hak – Professor of Biomedical Engineering
Joseph Louis Gay-Lussac – French chemist and physicist (1778–1850)
Israel Gelfand – Soviet mathematician (1913–2009)
William K. George – American fluid dynamicist
Morteza Gharib – Iranian American professor of biomechanical engineering
Alan Jeffrey Giacomin – Canadian editor
Josiah Willard Gibbs – American scientist (1839–1903)
Adrian Gill (meteorologist) – Australian meteorologist
Pierre-Simon Girard – French mathematician and engineer (1765–1836)
Hermann Glauert – British aerodynamicist
James Glimm – American mathematician
Sergei Godunov – Russian mathematician (1929–2023)
Sydney Goldstein – British mathematician (1903–1989)
Alexander Gorlov – American scientist and inventor (1931–2016)
Leo Graetz – German physicist
Franz Grashof – German engineer (1826–1893)
Albert E. Green – British mathematician
Harvey P. Greenspan – American mathematician
Marina Guenza – Italian chemist
Max Gunzburger – American mathematician
Wolfgang Haack – German mathematician (1902–1994)
Gotthilf Hagen – German physicist
Georg Hamel – German mathematician (1877–1954)
Thomas Henry Havelock – English mathematician
Wallace D. Hayes – American mechanical and aerospace engineer (1918–2001)
Peter H. Haynes – British mathematician
Werner Heisenberg – German theoretical physicist (1901–1976)
Henry Selby Hele-Shaw – British engineer (1854–1941)
Hermann von Helmholtz – German physicist and physiologist (1821–1894)
John Hinch (mathematician) – British mathematician
Julius Oscar Hinze – Dutch scientist (1907–1993)
Hans G. Hornung – American engineer
Leslie Howarth – British mathematician
Pierre Henri Hugoniot – French military engineer (1851–1887)
Herbert Huppert – British geophysicist
Fazle Hussain – American physicist
M. Yousuff Hussaini – American academic
Caius Iacob – Romanian mathematician and politician
Antony Jameson – British aerospace engineer (born 1934)
James Jeans – English physicist, astronomer and mathematician (1877–1946)
George Barker Jeffery – British mathematical physicist (1891–1957)
Daniel D. Joseph – American mechanical engineer
James Prescott Joule – English physicist (1818–1889)
Viktor Kaplan – Austrian engineer
Béla Karlovitz – Hungarian-American engineer, inventor
Theodore von Kármán – Hungarian-American mathematician, aerospace engineer and physicist (1881–1963)
Lord Kelvin – British physicist, engineer and mathematician (1824–1907)
Earle Hesse Kennard – Theoretical physicist
Gustav Kirchhoff – German chemist, mathematician, physicist, and spectroscopist (1824–1887)
Alexander Kiselev (mathematician) – American mathematician
Martin Knudsen – Danish physicist
Andrey Kolmogorov – Soviet mathematician (1903–1987)
Ludwig Kort
Diederik Korteweg – Dutch mathematician (1848–1941)
Leslie Stephen George Kovasznay – Hungarian-American engineer
Robert Kraichnan – American theoretical physicist (1928–2008)
Martin Kutta – German mathematician (1867–1944)
Olga Ladyzhenskaya – Russian mathematician (1922–2004)
Paco Lagerstrom – Swedish American mathematician
Horace Lamb – English mathematician (1849–1934)
Lev Landau – Soviet theoretical physicist (1908–1968)
Pierre-Simon Laplace – French polymath (1749–1827)
Boris Laschka – German fluid dynamics scientist and aeronautical engineer
Brian Launder – British academic
Gustaf de Laval – Swedish engineer and inventor (1845–1913)
Chung K. Law – Engineering researcher
Peter Lax – Hungarian-born American mathematician (1926–2025)
L. Gary Leal – American chemical engineer and academic
Leonid Leibenson – Soviet physicist (1879–1951)
Leonardo da Vinci – Italian Renaissance polymath (1452–1519)
Tullio Levi-Civita – Italian mathematician (1873–1941)
Veniamin Levich – Ukrainian physicist (1917–1988)
Bernard Lewis (scientist) – Scientist (1899–1993)
Warren K. Lewis – American chemical engineer (1882–1975)
Paul A. Libby – American scientist (1921–2021)
Wolfgang Liebe – German aeronautical engineer (1911–2005)
Hans W. Liepmann – American engineer and academic (1914–2009)
Evgeny Lifshitz – Soviet physicist (1915–1985)
Edwin N. Lightfoot – American chemical engineer
James Lighthill – British applied mathematician (1924–1998)
Chia-Chiao Lin – Chinese-born American mathematician
Amable Liñán – Spanish aeronautical engineer
Paul Linden – Mathematician specialising in fluid dynamics
Anke Lindner – German physicist
Michael S. Longuet-Higgins – British mathematician (1925–2016)
Lu Shijia – Chinese physicist
Geoffrey S. S. Ludford – American scientist (1921–2021)
John L. Lumley – American professor of mechanical and aerospace engineering (1930–2015)
Thomas S. Lundgren – American academic
Ernst Mach – Austrian physicist, philosopher and university educator (1838–1916)
Charles L. Mader – American physical chemist
Andrew Majda – American mathematician (1949–2021)
Carlo Marangoni – Italian physicist (1840–1925)
Frank E. Marble – American scientist
Moshe Matalon (engineer) – Israeli-American engineer and mathematician (born 1949)
Tony Maxworthy – British-American physicist (1933–2013)
John B. McCormick – American mechanical engineer (1834–1924)
Trevor McDougall – Oceanographer
Beverley McKeon – Physicist and aerospace engineer
Chiang C. Mei – Taiwanese-American physicist
Charles Meneveau – French-Chilean born American fluid dynamicist
Theodor Meyer – German physicist (1882–1972)
Anthony Michell – Australian mechanical engineer
John W. Miles – American research professor of applied mechanics and geophysics
Laura Miller (mathematical biologist) – American mathematical biologist
L. M. Milne-Thomson – English applied mathematician
Richard von Mises – Austrian physicist and mathematician (1883–1953)
Keith Moffatt – British mathematician and physicist
Parviz Moin – American engineer
Andrei Monin – Soviet and Russian physicist, applied mathematician, and oceanographer (1921–2007)
Lewis Ferry Moody – American engineer and professor
Rose Morton – American mathematician
Samar Mubarakmand – Pakistani nuclear physicist (born 1942)
Walter Munk – American oceanographer (1917–2019)
Morris Muskat – American petroleum engineer
Roddam Narasimha – Indian scientist (1933–2020)
Claude-Louis Navier – French engineer and physicist (1785–1836)
Paul Neményi – Hungarian mathematician and physicist (1895–1952)
John von Neumann – Hungarian and American mathematician and physicist (1903–1957)
Isaac Newton – English polymath (1642–1726)
Nhan Phan-Thien – Researcher
Wilhelm Nusselt – German engineer (1882–1957)
Morrough Parker O'Brien – American hydraulic engineering professor (1902–1988)
John Ockendon – British mathematician, Emeritus Professor at the University of Oxford
Hisashi Okamoto – Japanese mathematician
Steven Orszag – American mathematician (1943–2011)
Carl Wilhelm Oseen – Swedish theoretical physicist (1879–1944)
Simon Ostrach – American aerodynamics engineer (1923–2017)
Mariolina Padula – Italian mathematical physicist
Stoycho Panchev – Bulgarian meteorologist and fluid dynamicist
Blaise Pascal – French mathematician, physicist, inventor, writer, and Christian philosopher (1623–1662)
Jean Claude Eugène Péclet – French physicist (1793–1857)
Tim Pedley – British mathematician and a former G. I. Taylor Professor of Fluid Mechanics
Joseph Pedlosky – American physical oceanographer (born 1938)
Lester Allan Pelton – American mechanical engineer
Stanford S. Penner – German-American professor of engineering physics
Howell Peregrine – British mathematician
Adriana Pesci – Argentine mathematician and physicist
Charles S. Peskin – American mathematician
Norbert Peters (engineer) – German combustion engineer (1942–2015)
Henri Pitot – French hydraulic engineer (1695–1771)
Joseph Plateau – Belgian physicist (1801–1883)
Milton S. Plesset – American physicist (1908–1991)
Henri Poincaré – French mathematician, physicist and engineer (1854–1912)
Jean Léonard Marie Poiseuille – French physicist and physiologist (1797–1869)
Siméon Denis Poisson – French mathematician and physicist (1781–1840)
Stephen B. Pope – Cornell University professor of mechanical engineering
Constantine Pozrikidis – American chemical engineer
Ludwig Prandtl – German physicist (1875–1953)
Ronald F. Probstein – American engineer (1928–2021)
Andrea Prosperetti – American scientist
Joseph Proudman – British mathematician and oceanographer
Seth Putterman – American physicist
William Rankine – Scottish mechanical engineer (1820–1872)
John William Strutt, 3rd Baron Rayleigh – English physicist (1842–1919)
Theodor Rehbock – German professor of hydraulics and hydraulic engineer (1864–1950)
Markus Reiner – Israeli scientist and engineer
Osborne Reynolds – Anglo-Irish innovator (1842–1912)
William Craig Reynolds – American fluid dynamicist (1933–2004)
Dimitri Riabouchinsky – Russian physicist (1882–1962)
Lewis Fry Richardson – English meteorologist and mathematician (1881–1953)
Robert D. Richtmyer – American mathematician
Norman Riley (professor) – British mathematician
Petre Roman – Prime Minister of Romania between 1989 and 1991
Louis Rosenhead – British mathematician
Anatol Roshko – Canadian-American physicist and engineer
Carl-Gustaf Rossby – Swedish-born American meteorologist
Hunter Rouse – American physicist
John Scott Russell – Naval engineer
Philip Saffman – British mathematician (1931–2008)
Stephen Salter – South African-born Scottish academic and inventor
Ralph Allan Sampson – British astronomer
Hermann Schlichting – German fluid dynamics engineer
James Serrin – American mathematician
Tasneem M. Shah – Pakistani scientist and mathematician
P. N. Shankar – Indian scientist (1944–2019)
Ascher H. Shapiro – American author and professor of mechanical engineering and fluid mechanics
Beverley Shenstone – Canadian aerodynamicist (1906–1979)
Thomas Kilgore Sherwood – American chemical engineer
Albert F. Shields – American engineer
Max Shiffman – American mathematician
Wei Shyy – Chinese aerospace engineer (born 1955)
Gregory Sivashinsky – scientist
Apollo M. O. Smith – American aerospace engineer (1911–1997)
Frank T. Smith – English applied mathematician
Arnold Sommerfeld – German theoretical physicist (1868–1951)
Andrew Soward – British fluid dynamicist
Brian Spalding – British academic (1923–2016)
Ephraim M. Sparrow – American academic
Charles Speziale – American scientist (1948–1999)
Herbert Squire – British aerospace engineer (1909–1961)
K. R. Sreenivasan – Indian-American scientist and physicist
Paul H. Steen – American engineer
Josef Stefan – Carinthian Slovene physicist, mathematician and poet (1835–1893)
Keith Stewartson – British mathematician (1925–1983)
Sir George Stokes, 1st Baronet – Irish mathematician and physicist (1819–1903)
Yvonne Stokes – Australian mathematician
Howard A. Stone – American engineer (born 1960)
Vincenc Strouhal – Czech physicist
John Trevor Stuart – British mathematician (1929–2023)
G. I. Taylor – British physicist and mathematician (1886–1975)
Roger Temam – French mathematician
Hendrik Tennekes – Dutch scientist (1936–2021)
Walter Tollmien – German fluid dynamicist
Albert Alan Townsend – Fluid dynamics physicist
David Tritton – English physicist (1935–1998)
Viktor Trkal – Czech physicist and mathematician
Clifford Truesdell – American mathematician (1919–2000)
Gretar Tryggvason – American fluid dynamicist (born 1956)
Ernie Tuck – Australian mathematician
Laurette Tuckerman – American mathematical physicist
Stewart Turner – Australian geophysicist (1930–2022)
Fritz Ursell – British mathematician (1923–2012)
Victor Vâlcovici – Romanian mechanician and mathematician
Milton Van Dyke – American fluid dynamicist
Henri Villat – French mathematician
Ricardo Vinuesa – Spanish-Swedish fluid dynamicist and machine-learning researcher
Gustav de Vries – Dutch mathematician (1866–1934)
John V. Wehausen – American applied mathematician
Julius Weisbach – German mathematician and engineer
Karl Weissenberg – Austrian mathematician and physicist
Richard T. Whitcomb – American aeronautical engineer (1921–2009)
Frank M. White – American mechanical engineer (1933–2022)
Gerald B. Whitham – American mathematician (1927–2014)
Forman A. Williams – American academic
John R. Womersley – British mathematician, computer scientist and biophysicist
Theodore Y. Wu – American engineer (1924–2023)
Akiva Yaglom – Russian physicist, mathematician, statistician, and meteorologist
Chia-Shun Yih – American engineer
Z. Jane Wang – Chinese and American physicist
Yakov Zeldovich – Soviet physicist, physical chemist and cosmologist (1914–1987)
Yuwen Zhang – Chinese-American academic
Nikolay Zhukovsky (scientist) – Russian scientist (1847–1921)
== Miscellaneous concepts ==
These topics need placement in the sections above, or in new sections.
Beta plane – Approximation whereby the Coriolis parameter, f, is set to vary linearly in space
Bridge scour – Erosion of sediment near bridge foundations by water
Isosurface – Surface representing points of constant value within a volume
Keulegan–Carpenter number – Dimensionless quantity used in fluid dynamics
Entrance length (fluid dynamics) – Distance a flow travels after entering a pipe before fully developed
Modon (fluid dynamics) – Sea eddies
Shock (fluid dynamics) – term in fluid dynamics
Eddy (fluid dynamics) – Swirling of a fluid and the reverse current created when the fluid is in a turbulent flow regime
Non ideal compressible fluid dynamics
Plume (fluid dynamics) – Column of one fluid moving through another
Stall (fluid dynamics) – Abrupt reduction in lift due to flow separation
== References ==
== External links ==
In the physical science of dynamics, rigid-body dynamics studies the movement of systems of interconnected bodies under the action of external forces. The assumption that the bodies are rigid (i.e. they do not deform under the action of applied forces) simplifies analysis, by reducing the parameters that describe the configuration of the system to the translation and rotation of reference frames attached to each body. This excludes bodies that display fluid, highly elastic, and plastic behavior.
The dynamics of a rigid body system is described by the laws of kinematics and by the application of Newton's second law (kinetics) or their derivative form, Lagrangian mechanics. The solution of these equations of motion provides a description of the position, the motion and the acceleration of the individual components of the system, and overall the system itself, as a function of time. The formulation and solution of rigid body dynamics is an important tool in the computer simulation of mechanical systems.
== Planar rigid body dynamics ==
If a system of particles moves parallel to a fixed plane, the system is said to be constrained to planar movement. In this case, Newton's laws (kinetics) for a rigid system of N particles, Pi, i=1,...,N, simplify because there is no movement in the k direction. Determine the resultant force and torque at a reference point R, to obtain
{\displaystyle \mathbf {F} =\sum _{i=1}^{N}m_{i}\mathbf {A} _{i},\quad \mathbf {T} =\sum _{i=1}^{N}(\mathbf {r} _{i}-\mathbf {R} )\times m_{i}\mathbf {A} _{i},}
where ri denotes the planar trajectory of each particle.
The kinematics of a rigid body yields the formula for the acceleration of the particle Pi in terms of the position R and acceleration A of the reference particle as well as the angular velocity vector ω and angular acceleration vector α of the rigid system of particles as,
{\displaystyle \mathbf {A} _{i}={\boldsymbol {\alpha }}\times (\mathbf {r} _{i}-\mathbf {R} )+{\boldsymbol {\omega }}\times ({\boldsymbol {\omega }}\times (\mathbf {r} _{i}-\mathbf {R} ))+\mathbf {A} .}
For systems that are constrained to planar movement, the angular velocity and angular acceleration vectors are directed along k perpendicular to the plane of movement, which simplifies this acceleration equation. In this case, the acceleration vectors can be simplified by introducing the unit vectors ei from the reference point R to a point ri and the unit vectors
{\textstyle \mathbf {t} _{i}=\mathbf {k} \times \mathbf {e} _{i}}, so
{\displaystyle \mathbf {A} _{i}=\alpha (\Delta r_{i}\mathbf {t} _{i})-\omega ^{2}(\Delta r_{i}\mathbf {e} _{i})+\mathbf {A} .}
This yields the resultant force on the system as
{\displaystyle \mathbf {F} =\alpha \sum _{i=1}^{N}m_{i}\left(\Delta r_{i}\mathbf {t} _{i}\right)-\omega ^{2}\sum _{i=1}^{N}m_{i}\left(\Delta r_{i}\mathbf {e} _{i}\right)+\left(\sum _{i=1}^{N}m_{i}\right)\mathbf {A} ,}
and torque as
{\displaystyle {\begin{aligned}\mathbf {T} ={}&\sum _{i=1}^{N}(m_{i}\Delta r_{i}\mathbf {e} _{i})\times \left(\alpha (\Delta r_{i}\mathbf {t} _{i})-\omega ^{2}(\Delta r_{i}\mathbf {e} _{i})+\mathbf {A} \right)\\{}={}&\left(\sum _{i=1}^{N}m_{i}\Delta r_{i}^{2}\right)\alpha \mathbf {k} +\left(\sum _{i=1}^{N}m_{i}\Delta r_{i}\mathbf {e} _{i}\right)\times \mathbf {A} ,\end{aligned}}}
where
{\textstyle \mathbf {e} _{i}\times \mathbf {e} _{i}=0} and {\textstyle \mathbf {e} _{i}\times \mathbf {t} _{i}=\mathbf {k} } is the unit vector perpendicular to the plane for all of the particles Pi.
Use the center of mass C as the reference point, so these equations for Newton's laws simplify to become
{\displaystyle \mathbf {F} =M\mathbf {A} ,\quad \mathbf {T} =I_{\textbf {C}}\alpha \mathbf {k} ,}
where M is the total mass and IC is the moment of inertia about an axis perpendicular to the movement of the rigid system and through the center of mass.
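The planar equations above can be checked numerically. Below is a minimal Python sketch (all masses, positions, and rates are illustrative choices, not from the text): it sums the per-particle terms of the force and torque directly and compares the result with F = MA and T = I_C α k.

```python
# Numerical check of the planar rigid-body equations F = M A and T = I_C alpha k
# for a hypothetical two-particle system, with the center of mass as reference point R.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(s, a):
    return tuple(s * x for x in a)

masses = [1.0, 2.0]
positions = [(1.0, 0.0, 0.0), (-0.5, 0.5, 0.0)]   # planar: z = 0

# Center of mass as the reference point R.
M = sum(masses)
R = scale(1.0 / M, add(scale(masses[0], positions[0]), scale(masses[1], positions[1])))

omega = (0.0, 0.0, 3.0)      # angular velocity along k
alpha = (0.0, 0.0, 0.7)      # angular acceleration along k
A = (0.2, -0.1, 0.0)         # acceleration of the reference point

# Per-particle acceleration: A_i = alpha x (r_i - R) + omega x (omega x (r_i - R)) + A
F = (0.0, 0.0, 0.0)
T = (0.0, 0.0, 0.0)
I_C = 0.0
for m, r in zip(masses, positions):
    d = add(r, scale(-1.0, R))
    A_i = add(add(cross(alpha, d), cross(omega, cross(omega, d))), A)
    F = add(F, scale(m, A_i))
    T = add(T, cross(d, scale(m, A_i)))
    I_C += m * (d[0]**2 + d[1]**2)   # moment of inertia about k through the COM
```

Because R is the center of mass, the α and ω² sums vanish in F, and the torque reduces to the single I_C α k term, as the simplified equations state.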
== Rigid body in three dimensions ==
=== Orientation or attitude descriptions ===
Several methods to describe orientations of a rigid body in three dimensions have been developed. They are summarized in the following sections.
==== Euler angles ====
The first attempt to represent an orientation is attributed to Leonhard Euler. He imagined three reference frames that could rotate one around the other, and realized that by starting with a fixed reference frame and performing three rotations, he could get any other reference frame in the space (using two rotations to fix the vertical axis and another to fix the other two axes). The values of these three rotations are called Euler angles. Commonly,
{\displaystyle \psi } is used to denote precession, {\displaystyle \theta } nutation, and {\displaystyle \phi } intrinsic rotation.
==== Tait–Bryan angles ====
These are three angles, also known as yaw, pitch and roll, navigation angles, or Cardan angles. Mathematically they constitute a set of six possibilities out of the twelve possible sets of Euler angles, with the ordering being the one best suited to describing the orientation of a vehicle such as an airplane. In aerospace engineering they are usually referred to as Euler angles.
==== Orientation vector ====
Euler also realized that the composition of two rotations is equivalent to a single rotation about a different fixed axis (Euler's rotation theorem). Therefore, the composition of the former three angles has to be equal to only one rotation, whose axis was complicated to calculate until matrices were developed.
Based on this fact he introduced a vectorial way to describe any rotation, with a vector on the rotation axis whose magnitude equals the value of the angle. Therefore, any orientation can be represented by a rotation vector (also called Euler vector) that leads to it from the reference frame. When used to represent an orientation, the rotation vector is commonly called orientation vector, or attitude vector.
A similar method, called axis-angle representation, describes a rotation or orientation using a unit vector aligned with the rotation axis, and a separate value to indicate the angle (see figure).
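The axis-angle description can be made concrete with the Rodrigues rotation formula, which rotates a vector v by an angle θ about a unit axis k as v′ = v cos θ + (k × v) sin θ + k (k · v)(1 − cos θ). A short Python sketch (function name and values are illustrative):

```python
import math

def rotate(v, axis, theta):
    """Rotate vector v by angle theta (radians) about `axis` via Rodrigues' formula."""
    norm = math.sqrt(sum(a * a for a in axis))
    k = tuple(a / norm for a in axis)              # unit rotation axis
    kv = sum(a * b for a, b in zip(k, v))          # k . v
    kxv = (k[1]*v[2] - k[2]*v[1],                  # k x v
           k[2]*v[0] - k[0]*v[2],
           k[0]*v[1] - k[1]*v[0])
    c, s = math.cos(theta), math.sin(theta)
    return tuple(v[i]*c + kxv[i]*s + k[i]*kv*(1 - c) for i in range(3))
```

For example, rotating (1, 0, 0) by 90° about the z-axis yields (0, 1, 0), and the formula preserves vector length, as any rotation must.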
==== Orientation matrix ====
With the introduction of matrices the Euler theorems were rewritten. The rotations were described by orthogonal matrices referred to as rotation matrices or direction cosine matrices. When used to represent an orientation, a rotation matrix is commonly called orientation matrix, or attitude matrix.
The above-mentioned Euler vector is an eigenvector of the rotation matrix, corresponding to the eigenvalue 1 (every three-dimensional rotation matrix has this eigenvalue, with eigenvector along the rotation axis).
The product of two rotation matrices is the composition of rotations. Therefore, as before, the orientation can be given as the rotation from the initial frame to achieve the frame that we want to describe.
The configuration space of a non-symmetrical object in n-dimensional space is SO(n) × Rn. Orientation may be visualized by attaching a basis of tangent vectors to an object. The direction in which each vector points determines its orientation.
==== Orientation quaternion ====
Another way to describe rotations is using rotation quaternions, also called versors. They are equivalent to rotation matrices and rotation vectors. With respect to rotation vectors, they can be more easily converted to and from matrices. When used to represent orientations, rotation quaternions are typically called orientation quaternions or attitude quaternions.
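A rotation quaternion q = (cos(θ/2), n sin(θ/2)) encodes a rotation of angle θ about unit axis n, and rotates a vector via the sandwich product v′ = q (0, v) q*. The sketch below (conventions and values are illustrative; a scalar-first (w, x, y, z) layout is assumed) shows composition by quaternion multiplication:

```python
import math

def qmul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qrotate(q, v):
    """Rotate 3-vector v by unit quaternion q: v' = q * (0, v) * conj(q)."""
    qc = (q[0], -q[1], -q[2], -q[3])
    return qmul(qmul(q, (0.0,) + tuple(v)), qc)[1:]

def axis_angle_quat(axis, theta):
    """Unit quaternion for rotation by theta about `axis`."""
    n = math.sqrt(sum(a * a for a in axis))
    s = math.sin(theta / 2) / n
    return (math.cos(theta / 2), axis[0]*s, axis[1]*s, axis[2]*s)
```

Multiplying two quaternions composes the rotations, mirroring the product of rotation matrices in the previous section.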
=== Newton's second law in three dimensions ===
To consider rigid body dynamics in three-dimensional space, Newton's second law must be extended to define the relationship between the movement of a rigid body and the system of forces and torques that act on it.
Newton formulated his second law for a particle as, "The change of motion of an object is proportional to the force impressed and is made in the direction of the straight line in which the force is impressed." Because Newton generally referred to mass times velocity as the "motion" of a particle, the phrase "change of motion" refers to the mass times acceleration of the particle, and so this law is usually written as
{\displaystyle \mathbf {F} =m\mathbf {a} ,}
where F is understood to be the only external force acting on the particle, m is the mass of the particle, and a is its acceleration vector. The extension of Newton's second law to rigid bodies is achieved by considering a rigid system of particles.
=== Rigid system of particles ===
If a system of N particles, Pi, i=1,...,N, are assembled into a rigid body, then Newton's second law can be applied to each of the particles in the body. If Fi is the external force applied to particle Pi with mass mi, then
{\displaystyle \mathbf {F} _{i}+\sum _{j=1}^{N}\mathbf {F} _{ij}=m_{i}\mathbf {a} _{i},\quad i=1,\ldots ,N,}
where Fij is the internal force of particle Pj acting on particle Pi that maintains the constant distance between these particles.
An important simplification to these force equations is obtained by introducing the resultant force and torque that acts on the rigid system. This resultant force and torque is obtained by choosing one of the particles in the system as a reference point, R, where each of the external forces are applied with the addition of an associated torque. The resultant force F and torque T are given by the formulas,
{\displaystyle \mathbf {F} =\sum _{i=1}^{N}\mathbf {F} _{i},\quad \mathbf {T} =\sum _{i=1}^{N}(\mathbf {R} _{i}-\mathbf {R} )\times \mathbf {F} _{i},}
where Ri is the vector that defines the position of particle Pi.
Newton's second law for a particle combines with these formulas for the resultant force and torque to yield,
{\displaystyle \mathbf {F} =\sum _{i=1}^{N}m_{i}\mathbf {a} _{i},\quad \mathbf {T} =\sum _{i=1}^{N}(\mathbf {R} _{i}-\mathbf {R} )\times (m_{i}\mathbf {a} _{i}),}
where the internal forces Fij cancel in pairs. The kinematics of a rigid body yields the formula for the acceleration of the particle Pi in terms of the position R and acceleration a of the reference particle as well as the angular velocity vector ω and angular acceleration vector α of the rigid system of particles as,
{\displaystyle \mathbf {a} _{i}=\alpha \times (\mathbf {R} _{i}-\mathbf {R} )+\omega \times (\omega \times (\mathbf {R} _{i}-\mathbf {R} ))+\mathbf {a} .}
=== Mass properties ===
The mass properties of the rigid body are represented by its center of mass and inertia matrix. Choose the reference point R so that it satisfies the condition
{\displaystyle \sum _{i=1}^{N}m_{i}(\mathbf {R} _{i}-\mathbf {R} )=0,}
then it is known as the center of mass of the system.
The inertia matrix [IR] of the system relative to the reference point R is defined by
{\displaystyle [I_{R}]=\sum _{i=1}^{N}m_{i}\left(\mathbf {I} \left(\mathbf {S} _{i}^{\textsf {T}}\mathbf {S} _{i}\right)-\mathbf {S} _{i}\mathbf {S} _{i}^{\textsf {T}}\right),}
where
{\displaystyle \mathbf {S} _{i}} is the column vector Ri − R; {\displaystyle \mathbf {S} _{i}^{\textsf {T}}} is its transpose, and {\displaystyle \mathbf {I} } is the 3 by 3 identity matrix. {\displaystyle \mathbf {S} _{i}^{\textsf {T}}\mathbf {S} _{i}} is the scalar product of {\displaystyle \mathbf {S} _{i}} with itself, while {\displaystyle \mathbf {S} _{i}\mathbf {S} _{i}^{\textsf {T}}} is the tensor product of {\displaystyle \mathbf {S} _{i}} with itself.
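The inertia-matrix definition translates directly into code. A minimal Python sketch (function name and the point masses in the test are illustrative):

```python
def inertia_matrix(masses, points, R):
    """Inertia matrix [I_R] = sum_i m_i * (I * (S_i . S_i) - S_i S_i^T), S_i = R_i - R,
    for a system of point masses, about reference point R."""
    I = [[0.0] * 3 for _ in range(3)]
    for m, p in zip(masses, points):
        S = [p[k] - R[k] for k in range(3)]
        s2 = sum(c * c for c in S)                      # S_i^T S_i (scalar product)
        for a in range(3):
            for b in range(3):
                # (identity * s2) minus the tensor product S_i S_i^T
                I[a][b] += m * ((s2 if a == b else 0.0) - S[a] * S[b])
    return I
```

For two unit masses at (±1, 0, 0) about the origin this gives diag(0, 2, 2): no inertia about the axis through both masses, and 2 about the two perpendicular axes.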
=== Force-torque equations ===
Using the center of mass and inertia matrix, the force and torque equations for a single rigid body take the form
{\displaystyle \mathbf {F} =m\mathbf {a} ,\quad \mathbf {T} =[I_{R}]\alpha +\omega \times [I_{R}]\omega ,}
and are known as Newton's second law of motion for a rigid body.
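The torque equation can be evaluated directly when [I_R] is diagonal (principal axes). A short Python sketch, with illustrative numbers:

```python
# Newton-Euler torque for a single rigid body with diagonal (principal-axis)
# inertia: T = [I_R] alpha + omega x [I_R] omega. All values are illustrative.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

I_R = (0.1, 0.2, 0.3)        # principal moments of inertia, kg m^2
omega = (1.0, -2.0, 0.5)     # angular velocity, rad/s
alpha = (0.0, 0.0, 4.0)      # angular acceleration, rad/s^2

I_omega = tuple(I_R[k] * omega[k] for k in range(3))           # [I_R] omega
T = tuple(I_R[k] * alpha[k] + c for k, c in enumerate(cross(omega, I_omega)))
```

Note the gyroscopic term ω × [I_R]ω contributes torque components even about axes with zero angular acceleration.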
The dynamics of an interconnected system of rigid bodies, Bj, j = 1, ..., M, is formulated by isolating each rigid body and introducing the interaction forces. The resultant of the external and interaction forces on each body yields the force-torque equations
{\displaystyle \mathbf {F} _{j}=m_{j}\mathbf {a} _{j},\quad \mathbf {T} _{j}=[I_{R}]_{j}\alpha _{j}+\omega _{j}\times [I_{R}]_{j}\omega _{j},\quad j=1,\ldots ,M.}
Newton's formulation yields 6M equations that define the dynamics of a system of M rigid bodies.
=== Rotation in three dimensions ===
A rotating object, whether under the influence of torques or not, may exhibit the behaviours of precession and nutation.
The fundamental equation describing the behavior of a rotating solid body is Euler's equation of motion:
{\displaystyle {\boldsymbol {\tau }}={\frac {D\mathbf {L} }{Dt}}={\frac {d\mathbf {L} }{dt}}+{\boldsymbol {\omega }}\times \mathbf {L} ={\frac {d(I{\boldsymbol {\omega }})}{dt}}+{\boldsymbol {\omega }}\times {I{\boldsymbol {\omega }}}=I{\boldsymbol {\alpha }}+{\boldsymbol {\omega }}\times {I{\boldsymbol {\omega }}}}
where the pseudovectors τ and L are, respectively, the torques on the body and its angular momentum, the scalar I is its moment of inertia, the vector ω is its angular velocity, the vector α is its angular acceleration, D is the differential in an inertial reference frame and d is the differential in a relative reference frame fixed with the body.
The solution to this equation when there is no applied torque is discussed in the articles Euler's equation of motion and Poinsot's ellipsoid.
It follows from Euler's equation that a torque τ applied perpendicular to the axis of rotation, and therefore perpendicular to L, results in a rotation about an axis perpendicular to both τ and L. This motion is called precession. The angular velocity of precession ΩP is given by the cross product:
{\displaystyle {\boldsymbol {\tau }}={\boldsymbol {\Omega }}_{\mathrm {P} }\times \mathbf {L} .}
Precession can be demonstrated by placing a spinning top with its axis horizontal and supported loosely (frictionless toward precession) at one end. Instead of falling, as might be expected, the top appears to defy gravity: with the other end of the axis left unsupported, it remains with its axis horizontal while the free end of the axis slowly describes a circle in a horizontal plane; this turning is the precession. This effect is explained by the above equations. The torque on the top is supplied by a couple of forces: gravity acting downward on the device's centre of mass, and an equal force acting upward to support one end of the device. The rotation resulting from this torque is not downward, as might be intuitively expected (which would cause the device to fall), but perpendicular to both the gravitational torque (horizontal and perpendicular to the axis of rotation) and the axis of rotation (horizontal and outwards from the point of support), that is, about a vertical axis, causing the device to rotate slowly about the supporting point.
Under a constant torque of magnitude τ, the speed of precession ΩP is inversely proportional to L, the magnitude of its angular momentum:
{\displaystyle \tau ={\mathit {\Omega }}_{\mathrm {P} }L\sin \theta ,}
where θ is the angle between the vectors ΩP and L. Thus, if the top's spin slows down (for example, due to friction), its angular momentum decreases and so the rate of precession increases. This continues until the device is unable to rotate fast enough to support its own weight, when it stops precessing and falls off its support, mostly because friction against the precession induces a second precession that tips the device over.
By convention, these three vectors – torque, spin, and precession – are all oriented with respect to each other according to the right-hand rule.
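The precession relation τ = ΩP L sin θ can be illustrated numerically for the horizontal top described above, where the gravity torque is τ = Mgr and the spin angular momentum is L = Iω. All numbers in this Python sketch are illustrative:

```python
import math

# Steady precession of a fast top supported at one end of a horizontal axis.
# Gravity supplies torque tau = M*g*r; the spin angular momentum is L = I*omega_spin;
# with theta = 90 degrees between Omega_P and L, Omega_P = tau / (L*sin(theta)).
M = 0.5              # mass of the top, kg (illustrative)
g = 9.81             # gravitational acceleration, m/s^2
r = 0.05             # support-to-center-of-mass distance, m
I = 2.0e-4           # spin moment of inertia, kg m^2
omega_spin = 200.0   # spin rate, rad/s
theta = math.pi / 2  # angle between Omega_P and L

tau = M * g * r
L = I * omega_spin
Omega_P = tau / (L * math.sin(theta))
```

Doubling the spin rate halves the precession rate, the inverse proportionality stated in the text; as the spin decays the precession speeds up.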
== Virtual work of forces acting on a rigid body ==
An alternate formulation of rigid body dynamics that has a number of convenient features is obtained by considering the virtual work of forces acting on a rigid body.
The virtual work of forces acting at various points on a single rigid body can be calculated using the velocities of their point of application and the resultant force and torque. To see this, let the forces F1, F2 ... Fn act on the points R1, R2 ... Rn in a rigid body.
The trajectories of Ri, i = 1, ..., n are defined by the movement of the rigid body. The velocities of the points Ri along their trajectories are
{\displaystyle \mathbf {V} _{i}={\boldsymbol {\omega }}\times (\mathbf {R} _{i}-\mathbf {R} )+\mathbf {V} ,}
where ω is the angular velocity vector of the body.
=== Virtual work ===
Work is computed from the dot product of each force with the displacement of its point of contact
{\displaystyle \delta W=\sum _{i=1}^{n}\mathbf {F} _{i}\cdot \delta \mathbf {r} _{i}.}
If the trajectory of a rigid body is defined by a set of generalized coordinates qj, j = 1, ..., m, then the virtual displacements δri are given by
{\displaystyle \delta \mathbf {r} _{i}=\sum _{j=1}^{m}{\frac {\partial \mathbf {r} _{i}}{\partial q_{j}}}\delta q_{j}=\sum _{j=1}^{m}{\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{j}}}\delta q_{j}.}
The virtual work of this system of forces acting on the body in terms of the generalized coordinates becomes
{\displaystyle \delta W=\mathbf {F} _{1}\cdot \left(\sum _{j=1}^{m}{\frac {\partial \mathbf {V} _{1}}{\partial {\dot {q}}_{j}}}\delta q_{j}\right)+\dots +\mathbf {F} _{n}\cdot \left(\sum _{j=1}^{m}{\frac {\partial \mathbf {V} _{n}}{\partial {\dot {q}}_{j}}}\delta q_{j}\right)}
or collecting the coefficients of δqj
{\displaystyle \delta W=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{1}}}\right)\delta q_{1}+\dots +\left(\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{m}}}\right)\delta q_{m}.}
=== Generalized forces ===
For simplicity consider a trajectory of a rigid body that is specified by a single generalized coordinate q, such as a rotation angle, then the formula becomes
{\displaystyle \delta W=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}}}\right)\delta q=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial ({\boldsymbol {\omega }}\times (\mathbf {R} _{i}-\mathbf {R} )+\mathbf {V} )}{\partial {\dot {q}}}}\right)\delta q.}
Introduce the resultant force F and torque T so this equation takes the form
{\displaystyle \delta W=\left(\mathbf {F} \cdot {\frac {\partial \mathbf {V} }{\partial {\dot {q}}}}+\mathbf {T} \cdot {\frac {\partial {\boldsymbol {\omega }}}{\partial {\dot {q}}}}\right)\delta q.}
The quantity Q defined by
{\displaystyle Q=\mathbf {F} \cdot {\frac {\partial \mathbf {V} }{\partial {\dot {q}}}}+\mathbf {T} \cdot {\frac {\partial {\boldsymbol {\omega }}}{\partial {\dot {q}}}},}
is known as the generalized force associated with the virtual displacement δq. This formula generalizes to the movement of a rigid body defined by more than one generalized coordinate, that is
{\displaystyle \delta W=\sum _{j=1}^{m}Q_{j}\delta q_{j},}
where
{\displaystyle Q_{j}=\mathbf {F} \cdot {\frac {\partial \mathbf {V} }{\partial {\dot {q}}_{j}}}+\mathbf {T} \cdot {\frac {\partial {\boldsymbol {\omega }}}{\partial {\dot {q}}_{j}}},\quad j=1,\ldots ,m.}
It is useful to note that conservative forces such as gravity and spring forces are derivable from a potential function V(q1, ..., qm), known as a potential energy. In this case the generalized forces are given by
{\displaystyle Q_{j}=-{\frac {\partial V}{\partial q_{j}}},\quad j=1,\ldots ,m.}
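For a concrete case, consider a planar pendulum (a hypothetical example, not from the text): a point mass m on a massless rod of length l, with the generalized coordinate q measured from the downward vertical. Gravity gives the potential V(q) = −mgl cos q, so the generalized force is Q = −dV/dq = −mgl sin q. A Python sketch comparing the closed form with a numerical derivative of the potential:

```python
import math

m, g, l = 2.0, 9.81, 0.75   # illustrative pendulum parameters

def Q_from_potential(q, h=1e-6):
    """Generalized force as -dV/dq, by central finite difference."""
    V = lambda x: -m * g * l * math.cos(x)
    return -(V(q + h) - V(q - h)) / (2 * h)

def Q_closed_form(q):
    """Generalized force Q = -m*g*l*sin(q) for the gravity potential."""
    return -m * g * l * math.sin(q)
```

The two agree to the accuracy of the finite difference, illustrating that a conservative generalized force is just the negative gradient of the potential in the generalized coordinate.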
== D'Alembert's form of the principle of virtual work ==
The equations of motion for a mechanical system of rigid bodies can be determined using D'Alembert's form of the principle of virtual work. The principle of virtual work is used to study the static equilibrium of a system of rigid bodies; however, by introducing acceleration terms in Newton's laws, this approach is generalized to define dynamic equilibrium.
=== Static equilibrium ===
The static equilibrium of a mechanical system of rigid bodies is defined by the condition that the virtual work of the applied forces is zero for any virtual displacement of the system. This is known as the principle of virtual work. This is equivalent to the requirement that the generalized forces for any virtual displacement are zero, that is, Qj = 0.
Let a mechanical system be constructed from n rigid bodies, Bi, i = 1, ..., n, and let the resultant of the applied forces on each body be the force-torque pairs, Fi and Ti, i = 1, ..., n. Notice that these applied forces do not include the reaction forces where the bodies are connected. Finally, assume that the velocity Vi and angular velocities ωi, i = 1, ..., n, for each rigid body, are defined by a single generalized coordinate q. Such a system of rigid bodies is said to have one degree of freedom.
The virtual work of the forces and torques, Fi and Ti, applied to this one degree of freedom system is given by
{\displaystyle \delta W=\sum _{i=1}^{n}\left(\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}}}+\mathbf {T} _{i}\cdot {\frac {\partial {\boldsymbol {\omega }}_{i}}{\partial {\dot {q}}}}\right)\delta q=Q\delta q,}
where
{\displaystyle Q=\sum _{i=1}^{n}\left(\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}}}+\mathbf {T} _{i}\cdot {\frac {\partial {\boldsymbol {\omega }}_{i}}{\partial {\dot {q}}}}\right),}
is the generalized force acting on this one degree of freedom system.
If the mechanical system is defined by m generalized coordinates, qj, j = 1, ..., m, then the system has m degrees of freedom and the virtual work is given by,
{\displaystyle \delta W=\sum _{j=1}^{m}Q_{j}\delta q_{j},}
where
{\displaystyle Q_{j}=\sum _{i=1}^{n}\left(\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{j}}}+\mathbf {T} _{i}\cdot {\frac {\partial {\boldsymbol {\omega }}_{i}}{\partial {\dot {q}}_{j}}}\right),\quad j=1,\ldots ,m,}
is the generalized force associated with the generalized coordinate qj. The principle of virtual work states that static equilibrium occurs when these generalized forces acting on the system are zero, that is
{\displaystyle Q_{j}=0,\quad j=1,\ldots ,m.}
These m equations define the static equilibrium of the system of rigid bodies.
=== Generalized inertia forces ===
Consider a single rigid body which moves under the action of a resultant force F and torque T, with one degree of freedom defined by the generalized coordinate q. Assume the reference point for the resultant force and torque is the center of mass of the body, then the generalized inertia force Q* associated with the generalized coordinate q is given by
{\displaystyle Q^{*}=-(M\mathbf {A} )\cdot {\frac {\partial \mathbf {V} }{\partial {\dot {q}}}}-\left([I_{R}]{\boldsymbol {\alpha }}+{\boldsymbol {\omega }}\times [I_{R}]{\boldsymbol {\omega }}\right)\cdot {\frac {\partial {\boldsymbol {\omega }}}{\partial {\dot {q}}}}.}
This inertia force can be computed from the kinetic energy of the rigid body,
{\displaystyle T={\tfrac {1}{2}}M\mathbf {V} \cdot \mathbf {V} +{\tfrac {1}{2}}{\boldsymbol {\omega }}\cdot [I_{R}]{\boldsymbol {\omega }},}
by using the formula
{\displaystyle Q^{*}=-\left({\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}}}-{\frac {\partial T}{\partial q}}\right).}
A system of n rigid bodies with m generalized coordinates has the kinetic energy
{\displaystyle T=\sum _{i=1}^{n}\left({\tfrac {1}{2}}M\mathbf {V} _{i}\cdot \mathbf {V} _{i}+{\tfrac {1}{2}}{\boldsymbol {\omega }}_{i}\cdot [I_{R}]{\boldsymbol {\omega }}_{i}\right),}
which can be used to calculate the m generalized inertia forces
{\displaystyle Q_{j}^{*}=-\left({\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}\right),\quad j=1,\ldots ,m.}
=== Dynamic equilibrium ===
D'Alembert's form of the principle of virtual work states that a system of rigid bodies is in dynamic equilibrium when the virtual work of the sum of the applied forces and the inertial forces is zero for any virtual displacement of the system. Thus, dynamic equilibrium of a system of n rigid bodies with m generalized coordinates requires that
{\displaystyle \delta W=\left(Q_{1}+Q_{1}^{*}\right)\delta q_{1}+\dots +\left(Q_{m}+Q_{m}^{*}\right)\delta q_{m}=0,}
for any set of virtual displacements δqj. This condition yields m equations,
{\displaystyle Q_{j}+Q_{j}^{*}=0,\quad j=1,\ldots ,m,}
which can also be written as
{\displaystyle {\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}=Q_{j},\quad j=1,\ldots ,m.}
The result is a set of m equations of motion that define the dynamics of the rigid body system.
=== Lagrange's equations ===
If the generalized forces Qj are derivable from a potential energy V(q1, ..., qm), then these equations of motion take the form
{\displaystyle {\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}=-{\frac {\partial V}{\partial q_{j}}},\quad j=1,\ldots ,m.}
In this case, introduce the Lagrangian, L = T − V, so these equations of motion become
{\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}-{\frac {\partial L}{\partial q_{j}}}=0,\quad j=1,\ldots ,m.}
These are known as Lagrange's equations of motion.
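For the pendulum used earlier as an example (a hypothetical system, not from the text), T = ½ m l² q̇² and V = −mgl cos q, and Lagrange's equation reduces to q̈ = −(g/l) sin q. The Python sketch below integrates this equation of motion with a simple semi-implicit Euler step and checks that total energy is approximately conserved:

```python
import math

# Pendulum from Lagrange's equation: qddot = -(g/l)*sin(q).
# Parameters and initial conditions are illustrative.
m, g, l = 1.0, 9.81, 1.0

def energy(q, qdot):
    """Total energy T + V = (1/2) m l^2 qdot^2 - m g l cos(q)."""
    return 0.5 * m * l**2 * qdot**2 - m * g * l * math.cos(q)

q, qdot = 0.5, 0.0       # released from rest at 0.5 rad
dt = 1e-4
E0 = energy(q, qdot)
for _ in range(20000):   # integrate for 2 seconds
    qddot = -(g / l) * math.sin(q)
    qdot += qddot * dt   # semi-implicit (symplectic) Euler
    q += qdot * dt
E1 = energy(q, qdot)
```

The semi-implicit update keeps the energy drift bounded, so the amplitude stays near its initial 0.5 rad over the run.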
== Linear and angular momentum ==
=== System of particles ===
The linear and angular momentum of a rigid system of particles is formulated by measuring the position and velocity of the particles relative to the center of mass. Let the system of particles Pi, i = 1, ..., n be located at the coordinates ri and velocities vi. Select a reference point R and compute the relative position and velocity vectors,
{\displaystyle \mathbf {r} _{i}=\left(\mathbf {r} _{i}-\mathbf {R} \right)+\mathbf {R} ,\quad \mathbf {v} _{i}={\frac {d}{dt}}(\mathbf {r} _{i}-\mathbf {R} )+\mathbf {V} .}
The total linear and angular momentum vectors relative to the reference point R are
{\displaystyle \mathbf {p} ={\frac {d}{dt}}\left(\sum _{i=1}^{n}m_{i}\left(\mathbf {r} _{i}-\mathbf {R} \right)\right)+\left(\sum _{i=1}^{n}m_{i}\right)\mathbf {V} ,}
and
{\displaystyle \mathbf {L} =\sum _{i=1}^{n}m_{i}\left(\mathbf {r} _{i}-\mathbf {R} \right)\times {\frac {d}{dt}}\left(\mathbf {r} _{i}-\mathbf {R} \right)+\left(\sum _{i=1}^{n}m_{i}\left(\mathbf {r} _{i}-\mathbf {R} \right)\right)\times \mathbf {V} .}
If R is chosen as the center of mass these equations simplify to
{\displaystyle \mathbf {p} =M\mathbf {V} ,\quad \mathbf {L} =\sum _{i=1}^{n}m_{i}\left(\mathbf {r} _{i}-\mathbf {R} \right)\times {\frac {d}{dt}}\left(\mathbf {r} _{i}-\mathbf {R} \right).}
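A quick numerical illustration of these center-of-mass formulas, for a hypothetical two-particle "dumbbell" whose particle velocities happen to be a rigid rotation plus a common translation (NumPy assumed available; all values are invented for the example):

```python
import numpy as np

# Two unit masses at +/-1 on the x-axis, spinning about z at 2 rad/s
# while the whole system translates with velocity V along x.
m = np.array([1.0, 1.0])
r = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
omega = np.array([0.0, 0.0, 2.0])
V = np.array([3.0, 0.0, 0.0])
R = (m[:, None] * r).sum(axis=0) / m.sum()   # center of mass (the origin here)

v = np.cross(omega, r - R) + V               # particle velocities
p = (m[:, None] * v).sum(axis=0)             # should equal M V
L = (m[:, None] * np.cross(r - R, v - V)).sum(axis=0)  # about the COM
```

For this configuration the angular momentum reduces to L = Iω with I = Σ mᵢ dᵢ² = 2 about the z-axis, i.e. L = (0, 0, 4).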
=== Rigid system of particles ===
To specialize these formulas to a rigid body, assume the particles are rigidly connected to each other so Pi, i=1,...,n are located by the coordinates ri and velocities vi. Select a reference point R and compute the relative position and velocity vectors,
{\displaystyle \mathbf {r} _{i}=(\mathbf {r} _{i}-\mathbf {R} )+\mathbf {R} ,\quad \mathbf {v} _{i}=\omega \times (\mathbf {r} _{i}-\mathbf {R} )+\mathbf {V} ,}
where ω is the angular velocity of the system.
The linear momentum and angular momentum of this rigid system measured relative to the center of mass R are
{\displaystyle \mathbf {p} =\left(\sum _{i=1}^{n}m_{i}\right)\mathbf {V} ,\quad \mathbf {L} =\sum _{i=1}^{n}m_{i}(\mathbf {r} _{i}-\mathbf {R} )\times \mathbf {v} _{i}=\sum _{i=1}^{n}m_{i}(\mathbf {r} _{i}-\mathbf {R} )\times (\omega \times (\mathbf {r} _{i}-\mathbf {R} )).}
These equations simplify to become,
{\displaystyle \mathbf {p} =M\mathbf {V} ,\quad \mathbf {L} =[I_{R}]\omega ,}
where M is the total mass of the system and [IR] is the moment of inertia matrix defined by
{\displaystyle [I_{R}]=-\sum _{i=1}^{n}m_{i}[r_{i}-R][r_{i}-R],}
where [ri − R] is the skew-symmetric matrix constructed from the vector ri − R.
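The construction of [I_R] from skew-symmetric matrices can be sketched directly (NumPy assumed; the two-particle test case is an illustrative choice):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v] such that [v] w = v x w."""
    x, y, z = v
    return np.array([[0.0, -z, y],
                     [z, 0.0, -x],
                     [-y, x, 0.0]])

def inertia_matrix(masses, positions, R):
    """[I_R] = -sum_i m_i [r_i - R][r_i - R], as in the formula above."""
    I = np.zeros((3, 3))
    for m, r in zip(masses, positions):
        S = skew(np.asarray(r, dtype=float) - np.asarray(R, dtype=float))
        I -= m * (S @ S)
    return I

# Two unit masses at +/-1 on the x-axis about the origin: the moment of
# inertia is 0 about x and 2 about y and z.
I = inertia_matrix([1.0, 1.0], [(1, 0, 0), (-1, 0, 0)], (0, 0, 0))
```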
== Applications ==
For the analysis of robotic systems
For the biomechanical analysis of animals, humans or humanoid systems
For the analysis of space objects
For the understanding of strange motions of rigid bodies.
For the design and development of dynamics-based sensors, such as gyroscopic sensors.
For the design and development of various stability enhancement applications in automobiles.
For improving the graphics of video games which involves rigid bodies
== See also ==
== References ==
== Further reading ==
E. Leimanis (1965). The General Problem of the Motion of Coupled Rigid Bodies about a Fixed Point. (Springer, New York).
W. B. Heard (2006). Rigid Body Mechanics: Mathematics, Physics and Applications. (Wiley-VCH).
== External links ==
Chris Hecker's Rigid Body Dynamics Information Archived 12 March 2007 at the Wayback Machine
Physically Based Modeling: Principles and Practice
DigitalRune Knowledge Base Archived 20 November 2008 at the Wayback Machine contains a master thesis and a collection of resources about rigid body dynamics.
F. Klein, "Note on the connection between line geometry and the mechanics of rigid bodies" (English translation)
F. Klein, "On Sir Robert Ball's theory of screws" (English translation)
E. Cotton, "Application of Cayley geometry to the geometric study of the displacement of a solid around a fixed point" (English translation)
In physics, Brownian dynamics is a mathematical approach for describing the dynamics of molecular systems in the diffusive regime. It is a simplified version of Langevin dynamics and corresponds to the limit where no average acceleration takes place. This approximation is also known as overdamped Langevin dynamics or as Langevin dynamics without inertia.
== Definition ==
In Brownian dynamics, the following equation of motion is used to describe the dynamics of a stochastic system with coordinates
X = X(t):
{\displaystyle {\dot {X}}=-{\frac {D}{k_{\text{B}}T}}\nabla U(X)+{\sqrt {2D}}R(t).}
where:
Ẋ is the velocity, the dot being a time derivative
U(X) is the particle interaction potential
∇ is the gradient operator, such that −∇U(X) is the force calculated from the particle interaction potential
k_B is the Boltzmann constant
T is the temperature
D is a diffusion coefficient
R(t) is a white noise term, satisfying ⟨R(t)⟩ = 0 and ⟨R(t)R(t′)⟩ = δ(t − t′)
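In discrete time this equation is typically integrated with the Euler–Maruyama scheme, X_{n+1} = X_n − (D/k_BT)∇U(X_n)Δt + √(2DΔt)ξ_n. The sketch below (NumPy assumed; the harmonic potential and all parameter values are illustrative choices) checks that the stationary variance approaches the Boltzmann value k_BT/k:

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_dynamics(x0, grad_U, D, kBT, dt, n_steps):
    """Euler-Maruyama integration of the overdamped Langevin equation (1-D)."""
    x = x0
    traj = np.empty(n_steps + 1)
    traj[0] = x
    for n in range(n_steps):
        x = x - (D / kBT) * grad_U(x) * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
        traj[n + 1] = x
    return traj

# Harmonic potential U(x) = k x^2 / 2, so grad U = k x.
k = 1.0
traj = brownian_dynamics(0.0, lambda x: k * x, D=1.0, kBT=1.0, dt=1e-3,
                         n_steps=500_000)
# After equilibration, <x^2> should approach kBT / k = 1.
var = traj[50_000:].var()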
== Derivation ==
In Langevin dynamics, the equation of motion using the same notation as above is as follows:
{\displaystyle M{\ddot {X}}=-\nabla U(X)-\zeta {\dot {X}}+{\sqrt {2\zeta k_{\text{B}}T}}R(t)}
where:
M is the mass of the particle
Ẍ is the acceleration
ζ is the friction constant or tensor, in units of mass/time. It is often of the form ζ = γM, where γ is the collision frequency with the solvent, a damping constant in units of time⁻¹. For spherical particles of radius r in the limit of low Reynolds number, Stokes' law gives ζ = 6πηr.
The above equation may be rewritten as
{\displaystyle \underbrace {M{\ddot {X}}} _{\text{inertial force}}+\underbrace {\nabla U(X)} _{\text{potential force}}+\underbrace {\zeta {\dot {X}}} _{\text{viscous force}}-\underbrace {{\sqrt {2\zeta k_{\text{B}}T}}R(t)} _{\text{random force}}=0}
In Brownian dynamics, the inertial force term MẌ(t) is so much smaller than the other three that it is considered negligible. In this case, the equation is approximately
{\displaystyle 0=-\nabla U(X)-\zeta {\dot {X}}+{\sqrt {2\zeta k_{\text{B}}T}}R(t)}
For spherical particles of radius r in the limit of low Reynolds number, we can use the Stokes–Einstein relation, D = k_BT/ζ, and the equation reads:
{\displaystyle {\dot {X}}(t)=-{\frac {D}{k_{\text{B}}T}}\nabla U(X)+{\sqrt {2D}}R(t).}
As the magnitude of the friction tensor ζ increases, the damping effect of the viscous force becomes dominant relative to the inertial force, and the system transitions from the inertial to the diffusive (Brownian) regime. For this reason, Brownian dynamics is also known as overdamped Langevin dynamics or Langevin dynamics without inertia.
== Inclusion of hydrodynamic interaction ==
In 1978, Ermak and McCammon suggested an algorithm for efficiently computing Brownian dynamics with hydrodynamic interactions. Hydrodynamic interactions occur when the particles interact indirectly by generating and reacting to local velocities in the solvent. For a system of N three-dimensional particles diffusing subject to a force vector F(X), the derived Brownian dynamics scheme becomes:
{\displaystyle X_{i}(t+\Delta t)=X_{i}(t)+\sum _{j}^{N}{\frac {\Delta tD_{ij}}{k_{\text{B}}T}}F[X_{j}(t)]+R_{i}(t)}
where D_ij is a diffusion matrix specifying the hydrodynamic interactions (the Oseen tensor, for example) whose off-diagonal entries couple the target particle i to the surrounding particles j, F is the force exerted on particle j, and R(t) is a Gaussian noise vector with zero mean and a standard deviation of √(2DΔt) in each vector entry. The subscripts i and j denote particle indices, and N is the total number of particles. This equation applies to dilute systems, where near-field effects are ignored.
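A minimal sketch of one such step (assumptions: a constant diffusion matrix with vanishing divergence, as holds for the Oseen tensor, so that term is dropped; the unregularized Oseen form, particle radii, and all parameter values are illustrative, not from the original paper). Correlated displacements with covariance 2DΔt are drawn through a Cholesky factor of D:

```python
import numpy as np

rng = np.random.default_rng(1)

def oseen_matrix(X, a, kBT, eta):
    """3N x 3N diffusion matrix: Stokes-Einstein kBT/(6 pi eta a) on the
    diagonal blocks, Oseen tensor kBT/(8 pi eta r)(I + r r^T / r^2) off it."""
    N = len(X)
    D = np.zeros((3 * N, 3 * N))
    D0 = kBT / (6 * np.pi * eta * a)
    for i in range(N):
        D[3*i:3*i+3, 3*i:3*i+3] = D0 * np.eye(3)
        for j in range(i + 1, N):
            r = X[i] - X[j]
            d = np.linalg.norm(r)
            block = kBT / (8 * np.pi * eta * d) * (np.eye(3) + np.outer(r, r) / d**2)
            D[3*i:3*i+3, 3*j:3*j+3] = block
            D[3*j:3*j+3, 3*i:3*i+3] = block
    return D

def ermak_mccammon_step(X, F, a, kBT, eta, dt):
    """One position update X_i <- X_i + dt D_ij F_j / kBT + R_i."""
    N = len(X)
    D = oseen_matrix(X, a, kBT, eta)
    drift = dt * D @ F.ravel() / kBT
    # Correlated Gaussian displacements with covariance 2 D dt.
    noise = np.linalg.cholesky(2 * dt * D) @ rng.standard_normal(3 * N)
    return X + (drift + noise).reshape(N, 3)

# Two well-separated force-free particles (illustrative values).
X = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
F = np.zeros((2, 3))
X_new = ermak_mccammon_step(X, F, a=1.0, kBT=1.0, eta=1.0, dt=1e-4)
```

In production codes the Cholesky factorization is often replaced by cheaper approximations (e.g. Chebyshev expansions), since it scales as O(N³).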
== See also ==
Brownian motion
Immersed boundary method
== References ==
Orbital mechanics or astrodynamics is the application of ballistics and celestial mechanics to rockets, satellites, and other spacecraft. The motion of these objects is usually calculated from Newton's laws of motion and the law of universal gravitation. Astrodynamics is a core discipline within space-mission design and control.
Celestial mechanics treats more broadly the orbital dynamics of systems under the influence of gravity, including both spacecraft and natural astronomical bodies such as star systems, planets, moons, and comets. Orbital mechanics focuses on spacecraft trajectories, including orbital maneuvers, orbital plane changes, and interplanetary transfers, and is used by mission planners to predict the results of propulsive maneuvers.
General relativity is a more exact theory than Newton's laws for calculating orbits, and it is sometimes necessary to use it for greater accuracy or in high-gravity situations (e.g. orbits near the Sun).
== History ==
Until the rise of space travel in the twentieth century, there was little distinction between orbital and celestial mechanics. At the time of Sputnik, the field was termed 'space dynamics'. The fundamental techniques, such as those used to solve the Keplerian problem (determining position as a function of time), are therefore the same in both fields. Furthermore, the history of the fields is almost entirely shared.
Johannes Kepler was the first to successfully model planetary orbits to a high degree of accuracy, publishing his laws in 1609. Isaac Newton published more general laws of celestial motion in the first edition of Philosophiæ Naturalis Principia Mathematica (1687), which gave a method for finding the orbit of a body following a parabolic path from three observations. This was used by Edmond Halley to establish the orbits of various comets, including that which bears his name. Newton's method of successive approximation was formalised into an analytic method by Leonhard Euler in 1744, whose work was in turn generalised to elliptical and hyperbolic orbits by Johann Lambert in 1761–1777.
Another milestone in orbit determination was Carl Friedrich Gauss's assistance in the "recovery" of the dwarf planet Ceres in 1801. Gauss's method was able to use just three observations (in the form of pairs of right ascension and declination), to find the six orbital elements that completely describe an orbit. The theory of orbit determination has subsequently been developed to the point where today it is applied in GPS receivers as well as the tracking and cataloguing of newly observed minor planets. Modern orbit determination and prediction are used to operate all types of satellites and space probes, as it is necessary to know their future positions to a high degree of accuracy.
Astrodynamics was developed by astronomer Samuel Herrick beginning in the 1930s. He consulted the rocket scientist Robert Goddard and was encouraged to continue his work on space navigation techniques, as Goddard believed they would be needed in the future. Numerical techniques of astrodynamics were coupled with new powerful computers in the 1960s, and humans were ready to travel to the Moon and return.
== Practical techniques ==
=== Rules of thumb ===
The following rules of thumb are useful for situations approximated by classical mechanics under the standard assumptions of astrodynamics outlined below. The specific example discussed is of a satellite orbiting a planet, but the rules of thumb could also apply to other situations, such as orbits of small bodies around a star such as the Sun.
Kepler's laws of planetary motion:
Orbits are elliptical, with the heavier body at one focus of the ellipse. A special case of this is a circular orbit (a circle is a special case of ellipse) with the planet at the center.
A line drawn from the planet to the satellite sweeps out equal areas in equal times no matter which portion of the orbit is measured.
The square of a satellite's orbital period is proportional to the cube of its average distance from the planet.
Without applying force (such as firing a rocket engine), the period and shape of the satellite's orbit will not change.
A satellite in a low orbit (or a low part of an elliptical orbit) moves more quickly with respect to the surface of the planet than a satellite in a higher orbit (or a high part of an elliptical orbit), due to the stronger gravitational attraction closer to the planet.
If thrust is applied at only one point in the satellite's orbit, it will return to that same point on each subsequent orbit, though the rest of its path will change. Thus one cannot move from one circular orbit to another with only one brief application of thrust.
From a circular orbit, thrust applied in a direction opposite to the satellite's motion changes the orbit to an elliptical one; the satellite will descend and reach the lowest orbital point (the periapse) at 180 degrees away from the firing point; then it will ascend back. The period of the resultant orbit will be less than that of the original circular orbit. Thrust applied in the direction of the satellite's motion creates an elliptical orbit with its highest point (apoapse) 180 degrees away from the firing point. The period of the resultant orbit will be longer than that of the original circular orbit.
The consequences of the rules of orbital mechanics are sometimes counter-intuitive. For example, if two spacecraft are in the same circular orbit and wish to dock, the trailing craft cannot simply fire its engines to accelerate towards the leading craft. This will change the shape of its orbit, causing it to gain altitude and slow down relative to the leading craft, thus moving away from the target. The space rendezvous before docking normally takes multiple precisely calculated engine firings over multiple orbital periods, requiring hours or even days to complete.
To the extent that the standard assumptions of astrodynamics do not hold, actual trajectories will vary from those calculated. For example, simple atmospheric drag is another complicating factor for objects in low Earth orbit.
These rules of thumb are decidedly inaccurate when describing two or more bodies of similar mass, such as a binary star system (see n-body problem). Celestial mechanics uses more general rules applicable to a wider variety of situations. Kepler's laws of planetary motion, which can be mathematically derived from Newton's laws, hold strictly only in describing the motion of two gravitating bodies in the absence of non-gravitational forces; they also describe parabolic and hyperbolic trajectories. In the close proximity of large objects like stars the differences between classical mechanics and general relativity also become important.
== Laws of astrodynamics ==
The fundamental laws of astrodynamics are Newton's law of universal gravitation and Newton's laws of motion, while the fundamental mathematical tool is differential calculus.
In a Newtonian framework, the laws governing orbits and trajectories are in principle time-symmetric.
Standard assumptions in astrodynamics include non-interference from outside bodies, negligible mass for one of the bodies, and negligible other forces (such as from the solar wind, atmospheric drag, etc.). More accurate calculations can be made without these simplifying assumptions, but they are more complicated. The increased accuracy often does not make enough of a difference in the calculation to be worthwhile.
Kepler's laws of planetary motion may be derived from Newton's laws, when it is assumed that the orbiting body is subject only to the gravitational force of the central attractor. When an engine thrust or propulsive force is present, Newton's laws still apply, but Kepler's laws are invalidated. When the thrust stops, the resulting orbit will be different but will once again be described by Kepler's laws which have been set out above. The three laws are:
The orbit of every planet is an ellipse with the Sun at one of the foci.
A line joining a planet and the Sun sweeps out equal areas during equal intervals of time.
The squares of the orbital periods of planets are directly proportional to the cubes of the semi-major axis of the orbits.
=== Escape velocity ===
The formula for an escape velocity is derived as follows. The specific energy (energy per unit mass) of any space vehicle is composed of two components, the specific potential energy and the specific kinetic energy. The specific potential energy associated with a planet of mass M is given by
{\displaystyle \epsilon _{p}=-{\frac {GM}{r}}\,}
where G is the gravitational constant and r is the distance between the two bodies;
while the specific kinetic energy of an object is given by
{\displaystyle \epsilon _{k}={\frac {v^{2}}{2}}\,}
where v is its velocity;
and so the total specific orbital energy is
{\displaystyle \epsilon =\epsilon _{k}+\epsilon _{p}={\frac {v^{2}}{2}}-{\frac {GM}{r}}\,}
Since energy is conserved, ϵ cannot depend on the distance r from the center of the central body to the space vehicle in question; i.e., v must vary with r to keep the specific orbital energy constant. Therefore, the object can reach infinite r only if this quantity is nonnegative, which implies
{\displaystyle v\geq {\sqrt {\frac {2GM}{r}}}.}
The escape velocity from the Earth's surface is about 11 km/s, but that is insufficient to send the body an infinite distance because of the gravitational pull of the Sun. To escape the Solar System from a location at a distance from the Sun equal to the distance Sun–Earth, but not close to the Earth, requires around 42 km/s velocity, but there will be "partial credit" for the Earth's orbital velocity for spacecraft launched from Earth, if their further acceleration (due to the propulsion system) carries them in the same direction as Earth travels in its orbit.
=== Formulae for free orbits ===
Orbits are conic sections, so the formula for the distance of a body for a given angle corresponds to the formula for that curve in polar coordinates, which is:
{\displaystyle r={\frac {p}{1+e\cos \theta }}}
{\displaystyle \mu =G(m_{1}+m_{2})\,}
{\displaystyle p=h^{2}/\mu \,}
μ is called the gravitational parameter. m₁ and m₂ are the masses of objects 1 and 2, and h is the specific angular momentum of object 2 with respect to object 1. The parameter θ is known as the true anomaly, p is the semi-latus rectum, while e is the orbital eccentricity, all obtainable from the various forms of the six independent orbital elements.
=== Circular orbits ===
All bounded orbits where the gravity of a central body dominates are elliptical in nature. A special case of this is the circular orbit, which is an ellipse of zero eccentricity. The formula for the velocity of a body in a circular orbit at distance r from the center of gravity of mass M can be derived as follows:
Centrifugal acceleration matches the acceleration due to gravity.
So,
{\displaystyle {\frac {v^{2}}{r}}={\frac {GM}{r^{2}}}}
Therefore,
{\displaystyle \ v={\sqrt {{\frac {GM}{r}}\ }}}
where G is the gravitational constant, equal to 6.6743 × 10⁻¹¹ m³/(kg·s²).
To properly use this formula, the units must be consistent; for example, M must be in kilograms, and r must be in meters. The answer will be in meters per second.
The quantity GM is often termed the standard gravitational parameter, which has a different value for every planet or moon in the Solar System.
Once the circular orbital velocity is known, the escape velocity is easily found by multiplying by √2:
{\displaystyle \ v={\sqrt {2}}{\sqrt {{\frac {GM}{r}}\ }}={\sqrt {{\frac {2GM}{r}}\ }}.}
To escape from gravity, the kinetic energy must at least match the negative potential energy. Therefore,
{\displaystyle {\frac {1}{2}}mv^{2}={\frac {GMm}{r}}}
{\displaystyle v={\sqrt {{\frac {2GM}{r}}\ }}.}
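A numerical sanity check of these relations at the Earth's surface (the values of G and the Earth's mass and mean radius are standard reference figures assumed here, not given in the text):

```python
import math

G = 6.6743e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24          # mass of the Earth, kg
r = 6.371e6           # mean radius of the Earth, m

v_circ = math.sqrt(G * M / r)    # circular-orbit speed, ~7.9 km/s
v_esc = math.sqrt(2) * v_circ    # escape speed, ~11.2 km/s
```

The √2 factor is exactly the ratio between the escape and circular speeds at any radius, independent of the central body.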
=== Elliptical orbits ===
If 0 < e < 1, then the denominator of the equation of free orbits varies with the true anomaly θ, but remains positive, never becoming zero. Therefore, the relative position vector remains bounded, having its smallest magnitude at periapsis r_p, which is given by:
{\displaystyle r_{p}={\frac {p}{1+e}}}
The maximum value of r is reached when θ = 180°. This point is called the apoapsis, and its radial coordinate, denoted r_a, is
{\displaystyle r_{a}={\frac {p}{1-e}}}
Let 2a be the distance measured along the apse line from periapsis P to apoapsis A, as illustrated in the equation below:
{\displaystyle 2a=r_{p}+r_{a}}
Substituting the equations above, we get:
{\displaystyle a={\frac {p}{1-e^{2}}}}
a is the semimajor axis of the ellipse. Solving for p, and substituting the result into the conic section curve formula above, we get:
{\displaystyle r={\frac {a(1-e^{2})}{1+e\cos \theta }}}
==== Orbital period ====
Under standard assumptions the orbital period (T) of a body traveling along an elliptic orbit can be computed as:
{\displaystyle T=2\pi {\sqrt {a^{3} \over {\mu }}}}
where μ is the standard gravitational parameter and a is the length of the semi-major axis.
Conclusions:
The orbital period is equal to that for a circular orbit with the orbit radius equal to the semi-major axis (a),
For a given semi-major axis the orbital period does not depend on the eccentricity (See also: Kepler's third law).
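These conclusions can be checked numerically; for instance, the geostationary semi-major axis of about 42164 km should reproduce one sidereal day (the value of μ for the Earth is a standard reference figure assumed here):

```python
import math

mu_earth = 3.986004418e14   # Earth's standard gravitational parameter, m^3/s^2
a_geo = 4.2164e7            # geostationary semi-major axis, m

# T = 2 pi sqrt(a^3 / mu): should be ~86164 s, one sidereal day.
T = 2 * math.pi * math.sqrt(a_geo**3 / mu_earth)
```

Note that eccentricity does not appear: an elliptical orbit with the same semi-major axis has exactly the same period.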
==== Velocity ====
Under standard assumptions the orbital speed (v) of a body traveling along an elliptic orbit can be computed from the vis-viva equation as:
{\displaystyle v={\sqrt {\mu \left({2 \over {r}}-{1 \over {a}}\right)}}}
where μ is the standard gravitational parameter, r is the distance between the orbiting bodies, and a is the length of the semi-major axis.
The velocity equation for a hyperbolic trajectory is
{\displaystyle v={\sqrt {\mu \left({2 \over {r}}+\left\vert {1 \over {a}}\right\vert \right)}}}.
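A short vis-viva sketch for an illustrative transfer-style ellipse about the Earth (the perigee and apogee radii below are assumed example values). The product r·v at the two apsides should agree, since at both points the velocity is perpendicular to the radius and r·v equals the specific angular momentum h:

```python
import math

mu = 3.986004418e14          # Earth's gravitational parameter, m^3/s^2
r_p, r_a = 6.678e6, 4.2164e7 # perigee and apogee radii, m (illustrative)
a = (r_p + r_a) / 2          # semi-major axis from 2a = r_p + r_a

def vis_viva(r, a):
    """Speed from v = sqrt(mu (2/r - 1/a))."""
    return math.sqrt(mu * (2 / r - 1 / a))

v_p = vis_viva(r_p, a)   # fastest, at perigee
v_a = vis_viva(r_a, a)   # slowest, at apogee
```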
==== Energy ====
Under standard assumptions, the specific orbital energy (ϵ) of an elliptic orbit is negative, and the orbital energy conservation equation (the vis-viva equation) for this orbit can take the form:
{\displaystyle {v^{2} \over {2}}-{\mu \over {r}}=-{\mu \over {2a}}=\epsilon <0}
where v is the speed of the orbiting body, r is the distance of the orbiting body from the center of mass of the central body, a is the semi-major axis, and μ is the standard gravitational parameter.
Conclusions:
For a given semi-major axis the specific orbital energy is independent of the eccentricity.
Using the virial theorem we find:
the time-average of the specific potential energy is equal to 2ϵ
the time-average of r⁻¹ is a⁻¹
the time-average of the specific kinetic energy is equal to −ϵ
=== Parabolic orbits ===
If the eccentricity equals 1, then the orbit equation becomes:
{\displaystyle r={{h^{2}} \over {\mu }}{{1} \over {1+\cos \theta }}}
where r is the radial distance of the orbiting body from the mass center of the central body, h is the specific angular momentum of the orbiting body, θ is the true anomaly of the orbiting body, and μ is the standard gravitational parameter.
As the true anomaly θ approaches 180°, the denominator approaches zero, so that r tends towards infinity. Hence, the energy of the trajectory for which e=1 is zero, and is given by:
{\displaystyle \epsilon ={v^{2} \over 2}-{\mu \over {r}}=0}
where v is the speed of the orbiting body.
In other words, the speed anywhere on a parabolic path is:
{\displaystyle v={\sqrt {2\mu \over {r}}}}
=== Hyperbolic orbits ===
If e > 1, the orbit formula,
{\displaystyle r={{h^{2}} \over {\mu }}{{1} \over {1+e\cos \theta }}}
describes the geometry of the hyperbolic orbit. The system consists of two symmetric curves. The orbiting body occupies one of them; the other one is its empty mathematical image. Clearly, the denominator of the equation above goes to zero when
cos θ = −1/e. We denote this value of true anomaly
{\displaystyle \theta _{\infty }=\cos ^{-1}\left(-{\frac {1}{e}}\right)}
since the radial distance approaches infinity as the true anomaly approaches θ∞, known as the true anomaly of the asymptote. Observe that θ∞ lies between 90° and 180°. From the trigonometric identity
{\displaystyle \sin ^{2}\theta +\cos ^{2}\theta =1}
it follows that:
{\displaystyle \sin \theta _{\infty }={\frac {1}{e}}{\sqrt {e^{2}-1}}}
==== Energy ====
Under standard assumptions, the specific orbital energy (ϵ) of a hyperbolic trajectory is greater than zero, and the orbital energy conservation equation for this kind of trajectory takes the form:
{\displaystyle \epsilon ={v^{2} \over 2}-{\mu \over {r}}={\mu \over {-2a}}}
where v is the orbital velocity of the orbiting body, r is the radial distance of the orbiting body from the central body, a is the negative semi-major axis of the orbit's hyperbola, and μ is the standard gravitational parameter.
==== Hyperbolic excess velocity ====
Under standard assumptions, a body traveling along a hyperbolic trajectory will attain, as r approaches infinity, an orbital velocity called the hyperbolic excess velocity (v∞), which can be computed as:
{\displaystyle v_{\infty }={\sqrt {\mu \over {-a}}}\,\!}
where μ is the standard gravitational parameter and a is the negative semi-major axis of the orbit's hyperbola.
The hyperbolic excess velocity is related to the specific orbital energy or characteristic energy by
{\displaystyle 2\epsilon =C_{3}=v_{\infty }^{2}\,\!}
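A small numerical illustration of these relations for an assumed Earth-escape hyperbola with a = −20000 km (both the orbit and the value of μ are illustrative reference choices):

```python
import math

mu = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
a = -2.0e7            # semi-major axis, m (negative for a hyperbola)

v_inf = math.sqrt(mu / -a)   # hyperbolic excess velocity, ~4.5 km/s
C3 = v_inf**2                # characteristic energy, equal to 2*epsilon
```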
== Calculating trajectories ==
=== Kepler's equation ===
One approach to calculating orbits (mainly used historically) is to use Kepler's equation:
{\displaystyle M=E-\epsilon \cdot \sin E}.
where M is the mean anomaly, E is the eccentric anomaly, and ϵ is the eccentricity.
With Kepler's formula, finding the time-of-flight to reach an angle (true anomaly) of θ from periapsis is broken into two steps:
Compute the eccentric anomaly E from the true anomaly θ
Compute the time-of-flight t from the eccentric anomaly E
Finding the eccentric anomaly at a given time (the inverse problem) is more difficult. Kepler's equation is transcendental in E, meaning it cannot be solved for E algebraically. It can, however, be solved for E analytically by inversion.
A solution of Kepler's equation, valid for all real values of ϵ, is:
{\displaystyle E={\begin{cases}\displaystyle \sum _{n=1}^{\infty }{\frac {M^{\frac {n}{3}}}{n!}}\lim _{\theta \to 0}\left({\frac {\mathrm {d} ^{\,n-1}}{\mathrm {d} \theta ^{\,n-1}}}\left[\left({\frac {\theta }{\sqrt[{3}]{\theta -\sin(\theta )}}}\right)^{n}\right]\right),&\epsilon =1\\\displaystyle \sum _{n=1}^{\infty }{\frac {M^{n}}{n!}}\lim _{\theta \to 0}\left({\frac {\mathrm {d} ^{\,n-1}}{\mathrm {d} \theta ^{\,n-1}}}\left[\left({\frac {\theta }{\theta -\epsilon \cdot \sin(\theta )}}\right)^{n}\right]\right),&\epsilon \neq 1\end{cases}}}
Evaluating this yields:
{\displaystyle E={\begin{cases}\displaystyle x+{\frac {1}{60}}x^{3}+{\frac {1}{1400}}x^{5}+{\frac {1}{25200}}x^{7}+{\frac {43}{17248000}}x^{9}+{\frac {1213}{7207200000}}x^{11}+{\frac {151439}{12713500800000}}x^{13}\cdots \ |\ x=(6M)^{\frac {1}{3}},&\epsilon =1\\\\\displaystyle {\frac {1}{1-\epsilon }}M-{\frac {\epsilon }{(1-\epsilon )^{4}}}{\frac {M^{3}}{3!}}+{\frac {(9\epsilon ^{2}+\epsilon )}{(1-\epsilon )^{7}}}{\frac {M^{5}}{5!}}-{\frac {(225\epsilon ^{3}+54\epsilon ^{2}+\epsilon )}{(1-\epsilon )^{10}}}{\frac {M^{7}}{7!}}+{\frac {(11025\epsilon ^{4}+4131\epsilon ^{3}+243\epsilon ^{2}+\epsilon )}{(1-\epsilon )^{13}}}{\frac {M^{9}}{9!}}\cdots ,&\epsilon \neq 1\end{cases}}}
Alternatively, Kepler's equation can be solved numerically. First one must guess a value of E and solve for time-of-flight; then adjust E as necessary to bring the computed time-of-flight closer to the desired value until the required precision is achieved. Usually, Newton's method is used to achieve relatively fast convergence.
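A minimal Newton iteration for Kepler's equation might look like the following (the tolerance, iteration cap, and starting guess are conventional choices, not prescribed by the text):

```python
import math

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Solve M = E - e sin E for E (elliptic case, 0 <= e < 1) by Newton's
    method: E <- E - (E - e sin E - M) / (1 - e cos E)."""
    E = M if e < 0.8 else math.pi   # common starting guess
    for _ in range(max_iter):
        dE = (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

E = solve_kepler(M=1.0, e=0.3)
```

For near-parabolic eccentricities the derivative 1 − e cos E approaches zero near E = 0, which is one face of the convergence trouble discussed below.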
The main difficulty with this approach is that it can take prohibitively long to converge for extreme elliptical orbits. For near-parabolic orbits, the eccentricity ϵ is nearly 1, and substituting e = 1 into the formula for mean anomaly, E − sin E, we find ourselves subtracting two nearly equal values, so accuracy suffers. For near-circular orbits, it is hard to find the periapsis in the first place (and truly circular orbits have no periapsis at all). Furthermore, the equation was derived on the assumption of an elliptical orbit, and so it does not hold for parabolic or hyperbolic orbits. These difficulties are what led to the development of the universal variable formulation, described below.
=== Conic orbits ===
For simple procedures, such as computing the delta-v for coplanar transfer ellipses, traditional approaches are fairly effective. Others, such as time-of-flight, are far more complicated, especially for near-circular and hyperbolic orbits.
=== The patched conic approximation ===
The Hohmann transfer orbit alone is a poor approximation for interplanetary trajectories because it neglects the planets' own gravity. Planetary gravity dominates the behavior of the spacecraft in the vicinity of a planet, and in most cases Hohmann severely overestimates delta-v and produces highly inaccurate prescriptions for burn timings. A relatively simple way to get a first-order approximation of delta-v is based on the 'Patched Conic Approximation' technique. One must choose the one dominant gravitating body in each region of space through which the trajectory will pass, and model only that body's effects in that region. For instance, on a trajectory from the Earth to Mars, one would begin by considering only the Earth's gravity until the trajectory reaches a distance where the Earth's gravity no longer dominates that of the Sun. The spacecraft would be given escape velocity to send it on its way to interplanetary space. Next, one would consider only the Sun's gravity until the trajectory reaches the neighborhood of Mars. During this stage, the transfer orbit model is appropriate. Finally, only Mars's gravity is considered during the final portion of the trajectory where Mars's gravity dominates the spacecraft's behavior. The spacecraft would approach Mars on a hyperbolic orbit, and a final retrograde burn would slow the spacecraft enough to be captured by Mars. Friedrich Zander was one of the first to apply the patched-conics approach for astrodynamics purposes, when proposing the use of intermediary bodies' gravity for interplanetary travels, in what is known today as a gravity assist.
The size of the "neighborhoods" (or spheres of influence) varies with the radius {\displaystyle r_{SOI}}:
{\displaystyle r_{SOI}=a_{p}\left({\frac {m_{p}}{m_{s}}}\right)^{2/5}}
where {\displaystyle a_{p}} is the semimajor axis of the planet's orbit relative to the Sun; {\displaystyle m_{p}} and {\displaystyle m_{s}} are the masses of the planet and Sun, respectively.
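The sphere-of-influence formula is straightforward to evaluate. The sketch below uses approximate values for the Earth (the function name and constants are illustrative):

```python
def sphere_of_influence(a_p, m_p, m_s):
    """Radius of a planet's sphere of influence, in the same units as a_p."""
    return a_p * (m_p / m_s) ** (2.0 / 5.0)

# Approximate values for Earth
a_earth = 1.496e8   # semimajor axis of Earth's orbit, km
m_earth = 5.972e24  # kg
m_sun = 1.989e30    # kg

# Earth's SOI comes out near 9.2e5 km, roughly 2.4 times the lunar distance
r_soi = sphere_of_influence(a_earth, m_earth, m_sun)
```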
This simplification is sufficient to compute rough estimates of fuel requirements, and rough time-of-flight estimates, but it is not generally accurate enough to guide a spacecraft to its destination. For that, numerical methods are required.
=== The universal variable formulation ===
To address computational shortcomings of traditional approaches for solving the 2-body problem, the universal variable formulation was developed. It works equally well for the circular, elliptical, parabolic, and hyperbolic cases, the differential equations converging well when integrated for any orbit. It also generalizes well to problems incorporating perturbation theory.
=== Perturbations ===
The universal variable formulation works well with the variation of parameters technique, except now, instead of the six Keplerian orbital elements, we use a different set of orbital elements: namely, the satellite's initial position and velocity vectors {\displaystyle x_{0}} and {\displaystyle v_{0}} at a given epoch {\displaystyle t=0}. In a two-body simulation, these elements are sufficient to compute the satellite's position and velocity at any time in the future, using the universal variable formulation. Conversely, at any moment in the satellite's orbit, we can measure its position and velocity, and then use the universal variable approach to determine what its initial position and velocity would have been at the epoch. In perfect two-body motion, these orbital elements would be invariant (just like the Keplerian elements would be).
However, perturbations cause the orbital elements to change over time. Hence, the position element is written as {\displaystyle x_{0}(t)} and the velocity element as {\displaystyle v_{0}(t)}, indicating that they vary with time. The technique to compute the effect of perturbations becomes one of finding expressions, either exact or approximate, for the functions {\displaystyle x_{0}(t)} and {\displaystyle v_{0}(t)}.
The following are some effects which make real orbits differ from the simple models based on a spherical Earth. Most of them can be handled on short timescales (perhaps less than a few thousand orbits) by perturbation theory because they are small relative to the corresponding two-body effects.
Equatorial bulges cause precession of the node and the perigee
Tesseral harmonics of the gravity field introduce additional perturbations
Lunar and solar gravity perturbations alter the orbits
Atmospheric drag reduces the semi-major axis unless make-up thrust is used
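As a sketch of the first effect in the list above, the standard first-order J2 formula for the secular regression of the ascending node can be evaluated directly (the constants below are approximate WGS84/EGM96 values, and the function name is illustrative):

```python
import math

def nodal_precession_rate(a, e, i_deg, mu=398600.4418, r_eq=6378.137, j2=1.08263e-3):
    """Secular rate of the ascending node dOmega/dt in rad/s (first-order J2 only).
    a in km, e dimensionless, i_deg in degrees."""
    n = math.sqrt(mu / a ** 3)     # mean motion, rad/s
    p = a * (1.0 - e ** 2)         # semi-latus rectum, km
    i = math.radians(i_deg)
    return -1.5 * n * j2 * (r_eq / p) ** 2 * math.cos(i)

# An ISS-like orbit (400 km altitude, 51.6 deg inclination): the node
# regresses by roughly 5 degrees per day
rate = nodal_precession_rate(a=6778.0, e=0.0, i_deg=51.6)
deg_per_day = math.degrees(rate) * 86400.0
```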
Over very long timescales (perhaps millions of orbits), even small perturbations can dominate, and the behavior can become chaotic. On the other hand, the various perturbations can be orchestrated by clever astrodynamicists to assist with orbit maintenance tasks, such as station-keeping, ground track maintenance or adjustment, or phasing of perigee to cover selected targets at low altitude.
== Orbital maneuver ==
In spaceflight, an orbital maneuver is the use of propulsion systems to change the orbit of a spacecraft. For spacecraft far from Earth—for example those in orbits around the Sun—an orbital maneuver is called a deep-space maneuver (DSM).
=== Orbital transfer ===
Transfer orbits are usually elliptical orbits that allow spacecraft to move from one (usually substantially circular) orbit to another. Usually they require a burn at the start, a burn at the end, and sometimes one or more burns in the middle.
The Hohmann transfer orbit requires a minimal delta-v.
A bi-elliptic transfer can require less delta-v than the Hohmann transfer if the ratio of the final to the initial orbital radius is 11.94 or greater, but it comes at the cost of increased trip time over the Hohmann transfer.
Faster transfers may use any orbit that intersects both the original and destination orbits, at the cost of higher delta-v.
Using low thrust engines (such as electrical propulsion), if the initial orbit is supersynchronous to the final desired circular orbit then the optimal transfer orbit is achieved by thrusting continuously in the direction of the velocity at apogee. This method however takes much longer due to the low thrust.
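The two burns of a Hohmann transfer between coplanar circular orbits follow directly from the vis-viva equation. This is a minimal sketch with illustrative LEO-to-GEO numbers:

```python
import math

MU_EARTH = 398600.4418  # km^3/s^2

def hohmann_delta_v(r1, r2, mu=MU_EARTH):
    """Delta-v (km/s) of the two impulses of a Hohmann transfer
    from a circular orbit of radius r1 to one of radius r2 (km)."""
    a_t = (r1 + r2) / 2.0  # semimajor axis of the transfer ellipse
    dv1 = math.sqrt(mu / r1) * (math.sqrt(r2 / a_t) - 1.0)  # injection burn
    dv2 = math.sqrt(mu / r2) * (1.0 - math.sqrt(r1 / a_t))  # circularization burn
    return dv1, dv2

# LEO (300 km altitude) to GEO: total is close to the well-known ~3.9 km/s
dv1, dv2 = hohmann_delta_v(6678.0, 42164.0)
```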
For the case of orbital transfer between non-coplanar orbits, the change-of-plane thrust must be made at the point where the orbital planes intersect (the "node"). As the objective is to change the direction of the velocity vector by an angle equal to the angle between the planes, almost all of this thrust should be made when the spacecraft is at the node near the apoapse, when the magnitude of the velocity vector is at its lowest. However, a small fraction of the orbital inclination change can be made at the node near the periapse, by slightly angling the transfer orbit injection thrust in the direction of the desired inclination change. This works because the cosine of a small angle is very nearly one, resulting in the small plane change being effectively "free" despite the high velocity of the spacecraft near periapse, as the Oberth Effect due to the increased, slightly angled thrust exceeds the cost of the thrust in the orbit-normal axis.
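The cost of a pure plane change, and why it is cheapest where the velocity is lowest, can be seen from the standard relation dv = 2 v sin(Δi/2). A sketch (values are illustrative):

```python
import math

def plane_change_dv(v, delta_i_deg):
    """Delta-v needed to rotate a velocity vector of magnitude v
    by delta_i_deg degrees without changing its magnitude."""
    return 2.0 * v * math.sin(math.radians(delta_i_deg) / 2.0)

# Rotating a 7.7 km/s LEO velocity by 30 degrees costs nearly 4 km/s,
# while the same rotation at a slow 1.6 km/s apoapse costs far less.
dv_leo = plane_change_dv(7.7, 30.0)
dv_apo = plane_change_dv(1.6, 30.0)
```

For small angles sin(Δi/2) ≈ Δi/2, which is why a slight angling of the injection burn buys a small inclination change almost for free, as described above.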
=== Gravity assist and the Oberth effect ===
In a gravity assist, a spacecraft swings by a planet and leaves in a different direction, at a different speed. This is useful to speed or slow a spacecraft instead of carrying more fuel.
This maneuver can be approximated by an elastic collision at large distances, though the flyby does not involve any physical contact. Due to Newton's third law (equal and opposite reaction), any momentum gained by a spacecraft must be lost by the planet, or vice versa. However, because the planet is much, much more massive than the spacecraft, the effect on the planet's orbit is negligible.
The Oberth effect can be employed, particularly during a gravity assist operation. This effect is that a propulsion system generates more useful energy from the same propellant when fired at high speed, and hence course changes are best done when close to a gravitating body; this can multiply the effective delta-v.
=== Interplanetary Transport Network and fuzzy orbits ===
It is now possible to use computers to search for routes using the nonlinearities in the gravity of the planets and moons of the Solar System. For example, it is possible to plot an orbit from high Earth orbit to Mars, passing close to one of the Earth's trojan points. Collectively referred to as the Interplanetary Transport Network, these highly perturbative, even chaotic, orbital trajectories in principle need no fuel beyond that needed to reach the Lagrange point (in practice keeping to the trajectory requires some course corrections). The biggest problem with them is they can be exceedingly slow, taking many years. In addition launch windows can be very far apart.
They have, however, been employed on projects such as Genesis. This spacecraft visited the Earth-Sun L1 point and returned using very little propellant.
== See also ==
Celestial mechanics
Chaos theory
Kepler orbit
Lagrange point
Mechanical engineering
N-body problem
Roche limit
Spacecraft propulsion
Universal variable formulation
== References ==
== Further reading ==
Lynnane George. Introduction to Orbital Mechanics.
Sellers, Jerry J.; Astore, William J.; Giffen, Robert B.; Larson, Wiley J. (2004). Kirkpatrick, Douglas H. (ed.). Understanding Space: An Introduction to Astronautics (2 ed.). McGraw Hill. p. 228. ISBN 0-07-242468-0.
"Air University Space Primer, Chapter 8 - Orbital Mechanics" (PDF). USAF. Archived from the original (PDF) on 2013-02-14. Retrieved 2007-10-13.
Bate, R.R.; Mueller, D.D.; White, J.E. (1971). Fundamentals of Astrodynamics. Dover Publications, New York. ISBN 978-0-486-60061-1.
Vallado, D. A. (2001). Fundamentals of Astrodynamics and Applications (2nd ed.). Springer. ISBN 978-0-7923-6903-5.
Battin, R.H. (1999). An Introduction to the Mathematics and Methods of Astrodynamics. American Institute of Aeronautics & Ast, Washington, D.C. ISBN 978-1-56347-342-5.
Chobotov, V.A., ed. (2002). Orbital Mechanics (3rd ed.). American Institute of Aeronautics & Ast, Washington, D.C. ISBN 978-1-56347-537-5.
Herrick, S. (1971). Astrodynamics: Orbit Determination, Space Navigation, Celestial Mechanics, Volume 1. Van Nostrand Reinhold, London. ISBN 978-0-442-03370-5.
Herrick, S. (1972). Astrodynamics: Orbit Correction, Perturbation Theory, Integration, Volume 2. Van Nostrand Reinhold, London. ISBN 978-0-442-03371-2.
Kaplan, M.H. (1976). Modern Spacecraft Dynamics and Controls. Wiley, New York. ISBN 978-0-471-45703-9.
Tom Logsdon (1997). Orbital Mechanics. Wiley-Interscience, New York. ISBN 978-0-471-14636-0.
John E. Prussing & Bruce A. Conway (1993). Orbital Mechanics. Oxford University Press, New York. ISBN 978-0-19-507834-3.
M.J. Sidi (2000). Spacecraft Dynamics and Control. Cambridge University Press, New York. ISBN 978-0-521-78780-2.
W.E. Wiesel (1996). Spaceflight Dynamics (2nd ed.). McGraw-Hill, New York. ISBN 978-0-07-070110-6.
J.P. Vinti (1998). Orbital and Celestial Mechanics. American Institute of Aeronautics & Ast, Reston, Virginia. ISBN 978-1-56347-256-5.
P. Gurfil (2006). Modern Astrodynamics. Butterworth-Heinemann. ISBN 978-0-12-373562-1.
== External links ==
ORBITAL MECHANICS (Rocket and Space Technology)
Java Astrodynamics Toolkit
Astrodynamics-based Space Traffic and Event Knowledge Graph | Wikipedia/Orbital_dynamics |
For classical dynamics at relativistic speeds, see relativistic mechanics.
Relativistic dynamics refers to a combination of relativistic and quantum concepts to describe the relationships between the motion and properties of a relativistic system and the forces acting on the system. What distinguishes relativistic dynamics from other physical theories is the use of an invariant scalar evolution parameter to monitor the historical evolution of space-time events.
Twentieth century experiments showed that the physical description of microscopic and submicroscopic objects moving at or near the speed of light raised questions about such fundamental concepts as space, time, mass, and energy. The theoretical description of the physical phenomena required the integration of concepts from relativity and quantum theory.
Vladimir Fock was the first to propose an evolution parameter theory for describing relativistic quantum phenomena, but the evolution parameter theory introduced by Ernst Stueckelberg is more closely aligned with recent work. Evolution parameter theories were used by Feynman, Schwinger and others to formulate quantum field theory in the late 1940s and early 1950s. Silvan S. Schweber wrote a historical exposition of Feynman's investigation of such a theory. A resurgence of interest in evolution parameter theories began in the 1970s with the work of Horwitz and Piron, and Fanchi and Collins.
== Invariant Evolution Parameter Concept ==
Some researchers view the evolution parameter as a mathematical artifact while others view the parameter as a physically measurable quantity. To understand the role of an evolution parameter and the fundamental difference between the standard theory and evolution parameter theories, it is necessary to review the concept of time.
Time t played the role of a monotonically increasing evolution parameter in classical Newtonian mechanics, as in the force law F = dP/dt for a non-relativistic, classical object with momentum P. To Newton, time was an “arrow” that parameterized the direction of evolution of a system.
Albert Einstein rejected the Newtonian concept and identified t as the fourth coordinate of a space-time four-vector. Einstein's view of time requires a physical equivalence between coordinate time and coordinate space. In this view, time should be a reversible coordinate in the same manner as space. In Feynman diagrams, antiparticles are often depicted as particles moving backward in time, but this is usually regarded as a notational convenience rather than literal motion backward in time. Some, however, interpret the diagrams literally and take them as evidence for time reversibility.
The development of non-relativistic quantum mechanics in the early twentieth century preserved the Newtonian concept of time in the Schrödinger equation. The ability of non-relativistic quantum mechanics and special relativity to successfully describe observations motivated efforts to extend quantum concepts to the relativistic domain. Physicists had to decide what role time should play in relativistic quantum theory. The role of time was a key difference between Einsteinian and Newtonian views of classical theory. Two hypotheses that were consistent with special relativity were possible:
=== Hypothesis I ===
Assume t = Einsteinian time and reject Newtonian time.
=== Hypothesis II ===
Introduce two temporal variables:
A coordinate time in the sense of Einstein
An invariant evolution parameter in the sense of Newton
Hypothesis I led to a relativistic probability conservation equation that is essentially a re-statement of the non-relativistic continuity equation. Time in the relativistic probability conservation equation is Einstein's time and is a consequence of implicitly adopting Hypothesis I. By adopting Hypothesis I, the standard paradigm has at its foundation a temporal paradox: motion relative to a single temporal variable must be reversible even though the second law of thermodynamics establishes an “arrow of time” for evolving systems, including relativistic systems. Thus, even though Einstein's time is reversible in the standard theory, the evolution of a system is not time reversal invariant. From the perspective of Hypothesis I, time must be both an irreversible arrow tied to entropy and a reversible coordinate in the Einsteinian sense. The development of relativistic dynamics is motivated in part by the concern that Hypothesis I was too restrictive.
The problems associated with the standard formulation of relativistic quantum mechanics provide a clue to the validity of Hypothesis I. These problems included negative probabilities, hole theory, the Klein paradox, non-covariant expectation values, and so forth. Most of these problems were never solved; they were avoided when quantum field theory (QFT) was adopted as the standard paradigm. The QFT perspective, particularly its formulation by Schwinger, is a subset of the more general Relativistic Dynamics.
Relativistic Dynamics is based on Hypothesis II and employs two temporal variables: a coordinate time, and an evolution parameter. The evolution parameter, or parameterized time, may be viewed as a physically measurable quantity, and a procedure has been presented for designing evolution parameter clocks. By recognizing the existence of a distinct parameterized time and a distinct coordinate time, the conflict between a universal direction of time and a time that may proceed as readily from future to past as from past to future is resolved. The distinction between parameterized time and coordinate time removes ambiguities in the properties associated with the two temporal concepts in Relativistic Dynamics.
== See also ==
Ernst Stueckelberg
== References ==
== External links ==
Relativistic dynamics of stars near a supermassive black hole (2014)
International Association for Relativistic Dynamics (IARD) | Wikipedia/Relativistic_dynamics |
Multibody system is the study of the dynamic behavior of interconnected rigid or flexible bodies, each of which may undergo large translational and rotational displacements.
== Introduction ==
The systematic treatment of the dynamic behavior of interconnected bodies has led to a large number of important multibody formalisms in the field of mechanics. The simplest bodies or elements of a multibody system were treated by Newton (free particle) and Euler (rigid body). Euler introduced reaction forces between bodies. Later, a series of formalisms were derived, notably Lagrange's formalisms based on minimal coordinates and a second formulation that introduces constraints.
Basically, the motion of bodies is described by their kinematic behavior. The dynamic behavior results from the equilibrium of applied forces and the rate of change of momentum.
Nowadays, the term multibody system is related to a large number of engineering fields of research, especially in robotics and vehicle dynamics. As an important feature, multibody system formalisms usually offer an algorithmic, computer-aided way to model, analyze, simulate and optimize the arbitrary motion of possibly thousands of interconnected bodies.
== Applications ==
While single bodies or parts of a mechanical system are studied in detail with finite element methods, the behavior of the whole multibody system is usually studied with multibody system methods within the following areas:
Aerospace engineering (helicopter, landing gears, behavior of machines under different gravity conditions)
Biomechanics
Combustion engine, gears and transmissions, chain drive, belt drive
Dynamic simulation
Hoist, conveyor, paper mill
Military applications
Particle simulation (granular media, sand, molecules)
Physics engine
Robotics
Vehicle simulation (vehicle dynamics, rapid prototyping of vehicles, improvement of stability, comfort optimization, improvement of efficiency, ...)
== Example ==
The following example shows a typical multibody system. It is usually denoted as slider-crank mechanism. The mechanism is used to transform rotational motion into translational motion by means of a rotating driving beam, a connection rod and a sliding body. In the present example, a flexible body is used for the connection rod. The sliding mass is not allowed to rotate and three revolute joints are used to connect the bodies. While each body has six degrees of freedom in space, the kinematical conditions lead to one degree of freedom for the whole system.
The motion of the mechanism can be viewed in the following gif animation:
== Concept ==
A body is usually considered to be a rigid or flexible part of a mechanical system (not to be confused with the human body). An example of a body is the arm of a robot, a wheel or axle in a car or the human forearm. A link is the connection of two or more bodies, or a body with the ground. The link is defined by certain (kinematical) constraints that restrict the relative motion of the bodies. Typical constraints are:
cardan joint or Universal Joint; 4 kinematical constraints
prismatic joint; relative displacement along one axis is allowed, constrains relative rotation; implies 5 kinematical constraints
revolute joint; only one relative rotation is allowed; implies 5 kinematical constraints; see the example above
spherical joint; constrains relative displacements in one point, relative rotation is allowed; implies 3 kinematical constraints
There are two important terms in multibody systems: degree of freedom and
constraint condition.
=== Degree of freedom ===
The degrees of freedom denote the number of independent kinematical possibilities to move. In other words, degrees of freedom are the minimum number of parameters required to completely define the position of an entity in space.
A rigid body has six degrees of freedom in the case of general spatial motion, three of them translational degrees of freedom and three rotational degrees of freedom. In the case of planar motion, a body has only three degrees of freedom with only one rotational and two translational degrees of freedom.
The degrees of freedom in planar motion can be easily demonstrated using a computer mouse. The degrees of freedom are: left-right, forward-backward and the rotation about the vertical axis.
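For planar linkages, the degree-of-freedom count can be computed with the Chebychev–Grübler–Kutzbach criterion; the slider-crank numbers below match the example in this article (four links including the ground, four one-DOF joints). The function name is illustrative:

```python
def planar_dof(n_links, j1, j2=0):
    """Chebychev-Gruebler-Kutzbach mobility of a planar linkage.
    n_links counts the ground link; j1 = number of one-DOF joints
    (revolute/prismatic), j2 = number of two-DOF joints."""
    return 3 * (n_links - 1) - 2 * j1 - j2

# Slider-crank: ground, crank, connecting rod, slider;
# three revolute joints plus one prismatic joint -> one degree of freedom
dof = planar_dof(n_links=4, j1=4)
```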
=== Constraint condition ===
A constraint condition implies a restriction in the kinematical degrees of freedom of one or more bodies. The classical constraint is usually an algebraic equation that defines the relative translation or rotation between two bodies. There are furthermore possibilities to constrain the relative velocity between two bodies or a body and the ground. This is for example the case of a rolling disc, where the point of the disc that contacts the ground has always zero relative velocity with respect to the ground. In the case that the velocity constraint condition cannot be integrated in time in order to form a position constraint, it is called non-holonomic. This is the case for the general rolling constraint.
In addition to that there are non-classical constraints that might even introduce a new unknown coordinate, such as a sliding joint, where a point of a body is allowed to move along the surface of another body. In the case of contact, the constraint condition is based on inequalities and therefore such a constraint does not permanently restrict the degrees of freedom of bodies.
== Equations of motion ==
The equations of motion are used to describe the dynamic behavior of a multibody system. Each multibody system formulation may lead to a different mathematical appearance of the equations of motion while the physics behind is the same. The motion of the constrained bodies is described by means of equations that result basically from Newton’s second law. The equations are written for general motion of the single bodies with the addition of constraint conditions. Usually the equations of motions are derived from the Newton-Euler equations or Lagrange’s equations.
The motion of rigid bodies is described by means of
{\displaystyle \mathbf {M(q)} {\ddot {\mathbf {q} }}-\mathbf {Q} _{v}+\mathbf {C_{q}} ^{T}\mathbf {\lambda } =\mathbf {F} ,}
(1)
{\displaystyle \mathbf {C} (\mathbf {q} ,{\dot {\mathbf {q} }})=0}
(2)
These types of equations of motion are based on so-called redundant coordinates, because the equations use more coordinates than degrees of freedom of the underlying system. The generalized coordinates are denoted by {\displaystyle \mathbf {q} }; the mass matrix is represented by {\displaystyle \mathbf {M} (\mathbf {q} )}, which may depend on the generalized coordinates. {\displaystyle \mathbf {C} } represents the constraint conditions, and the matrix {\displaystyle \mathbf {C_{q}} } (sometimes termed the Jacobian) is the derivative of the constraint conditions with respect to the coordinates. This matrix is used to apply constraint forces {\displaystyle \mathbf {\lambda } } to the corresponding equations of the bodies. The components of the vector {\displaystyle \mathbf {\lambda } } are also denoted as Lagrange multipliers. In a rigid body, possible coordinates could be split into two parts, {\displaystyle \mathbf {q} =\left[\mathbf {u} \quad \mathbf {\Psi } \right]^{T}} where {\displaystyle \mathbf {u} } represents translations and {\displaystyle \mathbf {\Psi } } describes the rotations.
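Equations (1) and (2) can be made concrete with a tiny example: a planar pendulum in redundant coordinates q = (x, y), with constraint C = x² + y² − L² = 0 and gravity as the applied force, solved at the acceleration level. This is a minimal sketch (the function names and the hand-rolled 3×3 solver are illustrative, not from any multibody library):

```python
def pendulum_accelerations(x, y, vx, vy, m=1.0, g=9.81):
    """Accelerations and Lagrange multiplier for a planar pendulum.
    Differentiating C = x^2 + y^2 - L^2 = 0 twice gives the
    acceleration-level constraint Cq*qdd = -2*(vx^2 + vy^2),
    with Cq = [2x, 2y]. With Qv = 0, equations (1)-(2) become
    the linear system  [[M, Cq^T], [Cq, 0]] [qdd; lam] = [F; rhs]."""
    A = [[m, 0.0, 2.0 * x],
         [0.0, m, 2.0 * y],
         [2.0 * x, 2.0 * y, 0.0]]
    b = [0.0, -m * g, -2.0 * (vx ** 2 + vy ** 2)]
    return solve3(A, b)  # returns (ax, ay, lam)

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    A = [row[:] for row in A]
    b = b[:]
    for k in range(3):
        p = max(range(k, 3), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, 3):
            f = A[r][k] / A[k][k]
            for c in range(k, 3):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    x = [0.0, 0.0, 0.0]
    for k in (2, 1, 0):
        x[k] = (b[k] - sum(A[k][c] * x[c] for c in range(k + 1, 3))) / A[k][k]
    return tuple(x)
```

At the hanging rest position the solver reproduces zero acceleration and a multiplier corresponding to a rope tension of m·g, while releasing the pendulum horizontally gives free-fall acceleration and zero tension, as expected.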
=== Quadratic velocity vector ===
In the case of rigid bodies, the so-called quadratic velocity vector {\displaystyle \mathbf {Q} _{v}} is used to describe Coriolis and centrifugal terms in the equations of motion. The name comes from the fact that {\displaystyle \mathbf {Q} _{v}} includes quadratic terms of velocities, and it results from partial derivatives of the kinetic energy of the body.
=== Lagrange multipliers ===
The Lagrange multiplier {\displaystyle \lambda _{i}} is related to a constraint condition {\displaystyle C_{i}=0} and usually represents a force or a moment, which acts in the "direction" of the constrained degree of freedom. The Lagrange multipliers do no "work", in contrast to external forces, which change the potential energy of a body.
=== Minimal coordinates ===
The equations of motion (1,2) are represented by means of redundant coordinates, meaning that the coordinates are not independent. This can be exemplified by the slider-crank mechanism shown above, where each body has six degrees of freedom while most of the coordinates are dependent on the motion of the other bodies. For example, 18 coordinates and 17 constraints could be used to describe the motion of the slider-crank with rigid bodies. However, as there is only one degree of freedom, the equation of motion could also be represented by means of one equation and one degree of freedom, using e.g. the angle of the driving link as the degree of freedom. The latter formulation then has the minimum number of coordinates required to describe the motion of the system and can thus be called a minimal coordinates formulation. The transformation of redundant coordinates to minimal coordinates is sometimes cumbersome, and it is only possible in the case of holonomic constraints and without kinematical loops. Several algorithms have been developed for the derivation of minimal coordinate equations of motion, notably the so-called recursive formulation. The resulting equations are easier to solve because, in the absence of constraint conditions, standard time integration methods can be used to integrate the equations of motion in time. While the reduced system might be solved more efficiently, the transformation of the coordinates might be computationally expensive. In very general multibody system formulations and software systems, redundant coordinates are used in order to make the systems user-friendly and flexible.
== Flexible multibody ==
There are several cases in which it is necessary to consider the flexibility of the bodies, for example in cases where flexibility plays a fundamental role in the kinematics, as well as in compliant mechanisms.
Flexibility can be taken into account in different ways. There are three main approaches:
Discrete flexible multibody: the flexible body is divided into a set of rigid bodies connected by elastic stiffnesses representing the body's elasticity
Modal condensation: elasticity is described through a finite number of vibration modes of the body, using degrees of freedom linked to the amplitudes of the modes
Full flex: all of the body's flexibility is taken into account by discretizing the body into sub-elements whose individual displacements are governed by the elastic material properties
== See also ==
Dynamic simulation
Multibody simulation (solution techniques)
Physics engine
== References ==
J. Wittenburg, Dynamics of Systems of Rigid Bodies, Teubner, Stuttgart (1977).
J. Wittenburg, Dynamics of Multibody Systems, Berlin, Springer (2008).
K. Magnus, Dynamics of multibody systems, Springer Verlag, Berlin (1978).
P.E. Nikravesh, Computer-Aided Analysis of Mechanical Systems, Prentice-Hall (1988).
E.J. Haug, Computer-Aided Kinematics and Dynamics of Mechanical Systems, Allyn and Bacon, Boston (1989).
H. Bremer and F. Pfeiffer, Elastische Mehrkörpersysteme, B. G. Teubner, Stuttgart, Germany (1992).
J. García de Jalón, E. Bayo, Kinematic and Dynamic Simulation of Multibody Systems - The Real-Time Challenge, Springer-Verlag, New York (1994).
A.A. Shabana, Dynamics of multibody systems, Second Edition, John Wiley & Sons (1998).
M. Géradin, A. Cardona, Flexible multibody dynamics – A finite element approach, Wiley, New York (2001).
E. Eich-Soellner, C. Führer, Numerical Methods in Multibody Dynamics, Teubner, Stuttgart, 1998 (reprint Lund, 2008).
T. Wasfy and A. Noor, "Computational strategies for flexible multibody systems," ASME. Appl. Mech. Rev. 2003;56(6):553-613. doi:10.1115/1.1590354.
== External links ==
http://real.uwaterloo.ca/~mbody/ Collected links of John McPhee | Wikipedia/Multibody_dynamics |
Electromyography (EMG) is a technique for evaluating and recording the electrical activity produced by skeletal muscles. EMG is performed using an instrument called an electromyograph to produce a record called an electromyogram. An electromyograph detects the electric potential generated by muscle cells when these cells are electrically or neurologically activated. The signals can be analyzed to detect abnormalities, activation level, or recruitment order, or to analyze the biomechanics of human or animal movement. Needle EMG is an electrodiagnostic medicine technique commonly used by neurologists. Surface EMG is a non-medical procedure used to assess muscle activation by several professionals, including physiotherapists, kinesiologists and biomedical engineers. In computer science, EMG is also used as middleware in gesture recognition towards allowing the input of physical action to a computer as a form of human-computer interaction.
== Clinical uses ==
EMG testing has a variety of clinical and biomedical applications. Needle EMG is used as a diagnostics tool for identifying neuromuscular diseases, or as a research tool for studying kinesiology, and disorders of motor control. EMG signals are sometimes used to guide botulinum toxin or phenol injections into muscles. Surface EMG is used for functional diagnosis and during instrumental motion analysis. EMG signals are also used as a control signal for prosthetic devices such as prosthetic hands, arms and lower limbs.
An acceleromyograph may be used for neuromuscular monitoring in general anesthesia with neuromuscular-blocking drugs, in order to avoid postoperative residual curarization.
Except in the case of some purely primary myopathic conditions, EMG is usually performed together with another electrodiagnostic medicine test that measures the conducting function of nerves, called a nerve conduction study (NCS). Needle EMG and NCSs are typically indicated when there is pain in the limbs, weakness from spinal nerve compression, or concern about some other neurologic injury or disorder. Spinal nerve injury does not cause neck pain, mid-back pain, or low back pain, and for this reason, evidence has not shown EMG or NCS to be helpful in diagnosing causes of axial lumbar pain, thoracic pain, or cervical spine pain. Needle EMG may aid in the diagnosis of nerve compression or injury (such as carpal tunnel syndrome), nerve root injury (such as sciatica), and other problems of the muscles or nerves. Less common medical conditions include amyotrophic lateral sclerosis, myasthenia gravis, and muscular dystrophy.
== Technique ==
=== Skin preparation and risks ===
The first step before insertion of the needle electrode is skin preparation. This typically involves simply cleaning the skin with an alcohol pad.
The actual placement of the needle electrode can be difficult and depends on a number of factors, such as specific muscle selection and the size of that muscle. Proper needle EMG placement is very important for accurate representation of the muscle of interest, although EMG is more effective on superficial muscles, since the action potentials of overlying superficial muscles mask those of deeper muscles. Also, the more body fat an individual has, the weaker the EMG signal. When placing the EMG sensor, the ideal location is at the belly of the muscle: the longitudinal midline. The belly of the muscle can also be thought of as lying between the motor point (middle) of the muscle and the tendinous insertion point.
Cardiac pacemakers and implanted cardiac defibrillators (ICDs) are used increasingly in clinical practice, and no evidence exists indicating that performing routine electrodiagnostic studies on patients with these devices poses a safety hazard. However, there are theoretical concerns that electrical impulses of nerve conduction studies (NCS) could be erroneously sensed by devices and result in unintended inhibition or triggering of output or reprogramming of the device. In general, the closer the stimulation site is to the pacemaker and pacing leads, the greater the chance of inducing a voltage of sufficient amplitude to inhibit the pacemaker. Despite such concerns, no immediate or delayed adverse effects have been reported with routine NCS.
No known contraindications exist for performing needle EMG or NCS on pregnant patients. Additionally, no complications from these procedures have been reported in the literature. Evoked potential testing, likewise, has not been reported to cause any problems when it is performed during pregnancy.
Patients with lymphedema or patients at risk for lymphedema are routinely cautioned to avoid percutaneous procedures in the affected extremity, namely venipuncture, to prevent development or worsening of lymphedema or cellulitis. Despite the potential risk, the evidence for such complications subsequent to venipuncture is limited. No published reports exist of cellulitis, infection, or other complications related to EMG performed in the setting of lymphedema or prior lymph node dissection. However, given the unknown risk of cellulitis in patients with lymphedema, reasonable caution should be exercised in performing needle examinations in lymphedematous regions to avoid complications. In patients with gross edema and taut skin, skin puncture by needle electrodes may result in chronic weeping of serous fluid. The potential bacterial media of such serous fluid and the violation of skin integrity may increase the risk of cellulitis. Before proceeding, the physician should weigh the potential risks of performing the study with the need to obtain the information gained.
=== Surface and intramuscular EMG recording electrodes ===
There are two kinds of EMG: surface EMG and intramuscular EMG. Surface EMG assesses muscle function by recording muscle activity from the surface above the muscle on the skin. Surface EMG can be recorded by a pair of electrodes or by a more complex array of multiple electrodes. More than one electrode is needed because EMG recordings display the potential difference (voltage difference) between two separate electrodes. Limitations of this approach are the fact that surface electrode recordings are restricted to superficial muscles, are influenced by the depth of the subcutaneous tissue at the site of the recording which can be highly variable depending on the weight of a patient, and cannot reliably discriminate between the discharges of adjacent muscles. Specific electrode placements and functional tests have been developed to minimize this risk, thus providing reliable examinations.
Intramuscular EMG can be performed using a variety of different types of recording electrodes. The simplest approach is a monopolar needle electrode. This can be a fine wire inserted into a muscle with a surface electrode as a reference; or two fine wires inserted into muscle referenced to each other. Most commonly fine wire recordings are for research or kinesiology studies. Diagnostic monopolar EMG electrodes are typically insulated and stiff enough to penetrate skin, with only the tip exposed using a surface electrode for reference. Needles for injecting therapeutic botulinum toxin or phenol are typically monopolar electrodes that use a surface reference, in this case, however, the metal shaft of a hypodermic needle, insulated so that only the tip is exposed, is used both to record signals and to inject. Slightly more complex in design is the concentric needle electrode. These needles have a fine wire, embedded in a layer of insulation that fills the barrel of a hypodermic needle, that has an exposed shaft, and the shaft serves as the reference electrode. The exposed tip of the fine wire serves as the active electrode. As a result of this configuration, signals tend to be smaller when recorded from a concentric electrode than when recorded from a monopolar electrode and they are more resistant to electrical artifacts from tissue and measurements tend to be somewhat more reliable. However, because the shaft is exposed throughout its length, superficial muscle activity can contaminate the recording of deeper muscles. Single fiber EMG needle electrodes are designed to have very tiny recording areas, and allow for the discharges of individual muscle fibers to be discriminated.
To perform intramuscular EMG, typically either a monopolar or concentric needle electrode is inserted through the skin into the muscle tissue. The needle is then moved to multiple spots within a relaxed muscle to evaluate both insertional activity and resting activity in the muscle. Normal muscles exhibit a brief burst of muscle fiber activation when stimulated by needle movement, but this rarely lasts more than 100 ms. The two most common pathologic types of resting activity in muscle are fasciculation and fibrillation potentials. A fasciculation potential is an involuntary activation of a motor unit within the muscle, sometimes visible with the naked eye as a muscle twitch or by surface electrodes. Fibrillations, however, are detected only by needle EMG, and represent the isolated activation of individual muscle fibers, usually as the result of nerve or muscle disease. Often, fibrillations are triggered by needle movement (insertional activity) and persist for several seconds or more after the movement ceases.
After assessing resting and insertional activity, the electromyographer assesses the activity of muscle during voluntary contraction. The shape, size, and frequency of the resulting electrical signals are judged. Then the electrode is retracted a few millimetres, and again the activity is analyzed. This is repeated, sometimes until data on 10–20 motor units have been collected in order to draw conclusions about motor unit function. Each electrode track gives only a very local picture of the activity of the whole muscle. Because skeletal muscles differ in their inner structure, the electrode has to be placed at various locations to obtain an accurate study. For the interpretation of an EMG study, it is important to evaluate the parameters of the tested muscle's motor units. This process may well be partially automated using appropriate software.
Single fiber electromyography assesses the delay between the contractions of individual muscle fibers within a motor unit and is a sensitive test for dysfunction of the neuromuscular junction caused by drugs, poisons, or diseases such as myasthenia gravis. The technique is complicated and typically performed only by individuals with special advanced training.
Surface EMG is used in a number of settings; for example, in the physiotherapy clinic, muscle activation is monitored using surface EMG and patients have an auditory or visual stimulus to help them know when they are activating the muscle (biofeedback). A review of the literature on surface EMG published in 2008, concluded that surface EMG may be useful to detect the presence of neuromuscular disease (level C rating, class III data), but there are insufficient data to support its utility for distinguishing between neuropathic and myopathic conditions or for the diagnosis of specific neuromuscular diseases. EMGs may be useful for additional study of fatigue associated with post-poliomyelitis syndrome and electromechanical function in myotonic dystrophy (level C rating, class III data). Recently, with the rise of technology in sports, sEMG has become an area of focus for coaches to reduce the incidence of soft tissue injury and improve player performance.
Certain US states limit the performance of needle EMG by nonphysicians. New Jersey declared that it cannot be delegated to a physician's assistant. Michigan has passed legislation saying needle EMG is the practice of medicine. Special training in diagnosing medical diseases with EMG is required only in residency and fellowship programs in neurology, clinical neurophysiology, neuromuscular medicine, and physical medicine and rehabilitation. There are certain subspecialists in otolaryngology who have had selective training in performing EMG of the laryngeal muscles, and subspecialists in urology, obstetrics and gynecology who have had selective training in performing EMG of muscles controlling bowel and bladder function.
=== Maximal voluntary contraction ===
One basic function of EMG is to see how well a muscle can be activated. The most common way that can be determined is by performing a maximal voluntary contraction (MVC) of the muscle that is being tested. Each muscle group type has different characteristics, and MVC positions are varied for different muscle group types. Therefore, the researcher should be very careful when choosing the MVC position type to elicit the greatest muscle activity level from the subjects.
The types of MVC positions can vary among muscle types, contingent upon the specific muscle group being considered, including trunk muscles, lower limb muscles, and others.
Muscle force, which is measured mechanically, typically correlates highly with measures of EMG activation of muscle. Most commonly this is assessed with surface electrodes, but it should be recognized that these typically record only from muscle fibers in close proximity to the surface.
Several analytical methods for determining muscle activation are commonly used depending on the application. The use of mean EMG activation or the peak contraction value is a debated topic. Most studies commonly use the maximal voluntary contraction as a means of analyzing peak force and force generated by target muscles. According to the article "Peak and average rectified EMG measures: Which method of data reduction should be used for assessing core training exercises?", it was concluded that the "average rectified EMG data (ARV) is significantly less variable when measuring the muscle activity of the core musculature compared to the peak EMG variable." Therefore, these researchers would suggest that "ARV EMG data should be recorded alongside the peak EMG measure when assessing core exercises." Providing the reader with both sets of data would result in enhanced validity of the study and potentially eradicate the contradictions within the research.
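As a rough illustration of the two data-reduction approaches discussed above, the sketch below (in Python, with illustrative function names and a synthetic decaying-oscillation signal rather than a real recording) computes both the average rectified value (ARV) and the peak of a rectified analysis window:

```python
import math

def rectify(signal):
    """Full-wave rectification: take the absolute value of each sample."""
    return [abs(x) for x in signal]

def average_rectified_value(signal):
    """ARV: mean of the rectified signal over the analysis window."""
    r = rectify(signal)
    return sum(r) / len(r)

def peak_value(signal):
    """Peak of the rectified signal over the analysis window."""
    return max(rectify(signal))

# Toy EMG-like burst: a decaying oscillation (purely illustrative values).
emg = [math.exp(-0.01 * n) * math.sin(0.8 * n) for n in range(200)]

arv = average_rectified_value(emg)
peak = peak_value(emg)
```

Because the ARV averages over the whole window while the peak depends on a single extreme sample, the ARV is typically the less variable of the two measures, which is consistent with the recommendation quoted above to report both.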
=== Other measurements ===
EMG can also be used for indicating the amount of fatigue in a muscle. The following changes in the EMG signal can signify muscle fatigue: an increase in the mean absolute value of the signal, an increase in the amplitude and duration of the muscle action potential, and an overall shift to lower frequencies. Monitoring the changes of different frequency components is the most common way of using EMG to determine levels of fatigue. The lower conduction velocities enable the slower motor neurons to remain active.
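The spectral shift toward lower frequencies is often quantified by tracking the median frequency of the EMG power spectrum over successive windows. The following Python sketch (a plain direct DFT applied to synthetic sine-wave "signals"; real EMG analysis would use proper windowing and an FFT library) illustrates the idea:

```python
import cmath
import math

def power_spectrum(signal, fs):
    """Single-sided power spectrum via a direct DFT (fine for short windows)."""
    n = len(signal)
    freqs, powers = [], []
    for k in range(n // 2):
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        freqs.append(k * fs / n)
        powers.append(abs(s) ** 2)
    return freqs, powers

def median_frequency(signal, fs):
    """Frequency that splits the spectral power into two equal halves."""
    freqs, powers = power_spectrum(signal, fs)
    total = sum(powers)
    cumulative = 0.0
    for f, p in zip(freqs, powers):
        cumulative += p
        if cumulative >= total / 2:
            return f
    return freqs[-1]

fs = 1000.0  # sampling rate in Hz (illustrative)
# A "fresh" window with higher-frequency content, and a "fatigued" window
# whose spectrum has shifted lower (synthetic stand-ins for real EMG).
fresh = [math.sin(2 * math.pi * 120 * t / fs) for t in range(256)]
fatigued = [math.sin(2 * math.pi * 60 * t / fs) for t in range(256)]
```

A drop in median frequency from one analysis window to the next is then read as a sign of developing fatigue.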
A motor unit is defined as one motor neuron and all of the muscle fibers it innervates. When a motor unit fires, the impulse (called an action potential) is carried down the motor neuron to the muscle. The area where the nerve contacts the muscle is called the neuromuscular junction, or the motor end plate. After the action potential is transmitted across the neuromuscular junction, an action potential is elicited in all of the innervated muscle fibers of that particular motor unit. The sum of all this electrical activity is known as a motor unit action potential (MUAP). This electrophysiologic activity from multiple motor units is the signal typically evaluated during an EMG. The composition of the motor unit, the number of muscle fibres per motor unit, the metabolic type of muscle fibres and many other factors affect the shape of the motor unit potentials in the myogram.
Nerve conduction testing is also often done at the same time as an EMG to diagnose neurological diseases.
Some patients can find the procedure somewhat painful, whereas others experience only a small amount of discomfort when the needle is inserted. The muscle or muscles being tested may be slightly sore for a day or two after the procedure.
=== EMG signal decomposition ===
EMG signals are essentially made up of superimposed motor unit action potentials (MUAPs) from several motor units. For a thorough analysis, the measured EMG signals can be decomposed into their constituent MUAPs. MUAPs from different motor units tend to have different characteristic shapes, while MUAPs recorded by the same electrode from the same motor unit are typically similar. Notably, MUAP size and shape depend on where the electrode is located with respect to the fibers, and so can appear different if the electrode moves position. EMG decomposition is non-trivial, although many methods have been proposed.
=== EMG signal processing ===
Rectification is the translation of the raw EMG signal to a signal with a single polarity, usually positive. The purpose of rectifying the signal is to ensure the signal does not average to zero, due to the raw EMG signal having positive and negative components. Two types of rectification are used: full-wave and half-wave rectification. Full-wave rectification adds the EMG signal below the baseline to the signal above the baseline to make a conditioned signal that is all positive. If the baseline is zero, this is equivalent to taking the absolute value of the signal. This is the preferred method of rectification because it conserves all of the signal energy for analysis. Half-wave rectification discards the portion of the EMG signal that is below the baseline. In doing so, the average of the data is no longer zero, and the signal can therefore be used in statistical analyses.
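A minimal Python sketch of the two rectification schemes (taking the baseline as zero, with made-up sample values):

```python
def full_wave_rectify(signal):
    """Keep all signal energy: flip samples below the baseline (absolute value)."""
    return [abs(x) for x in signal]

def half_wave_rectify(signal):
    """Discard samples below the baseline (here taken as zero)."""
    return [x if x > 0 else 0.0 for x in signal]

# Made-up raw EMG samples that average to (approximately) zero.
raw = [0.4, -0.7, 0.1, -0.2, 0.5, -0.1]

full = full_wave_rectify(raw)
half = half_wave_rectify(raw)
```

The raw signal averages to about zero; both rectified versions have a positive mean, but full-wave rectification retains the energy of the negative-going samples rather than discarding it.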
=== Limitations ===
Needle EMG used in clinical settings has practical applications such as helping to discover disease. Needle EMG has limitations, however, in that it does involve voluntary activation of muscle, and as such is less informative in patients unwilling or unable to cooperate, in children and infants, and in individuals with paralysis. Surface EMG can have limited applications due to inherent problems associated with surface EMG. Adipose tissue (fat) can affect EMG recordings: studies show that as adipose tissue increases, the amplitude of the surface EMG signal directly above the center of the active muscle decreases. EMG signal recordings are typically more accurate in individuals with lower body fat and more compliant skin, such as young people compared to old. Muscle cross talk occurs when the EMG signal from one muscle interferes with that of another, limiting the reliability of the signal from the muscle being tested. Surface EMG is also limited by its lack of reliability for deep muscles, which require intrusive and painful intramuscular wires to obtain an EMG signal. Surface EMG can measure only superficial muscles, and even then it is hard to narrow down the signal to a single muscle.
=== Electrical characteristics ===
The electrical source is the muscle membrane potential of about –90 mV. Measured EMG potentials range between less than 50 μV and up to 30 mV, depending on the muscle under observation.
Typical repetition rate of muscle motor unit firing is about 7–20 Hz, depending on the size of the muscle (eye muscles versus seat (gluteal) muscles), previous axonal damage and other factors. Damage to motor units can be expected at ranges between 450 and 780 mV.
== Procedure outcomes ==
=== Normal results ===
Muscle tissue at rest is normally electrically inactive. After the electrical activity caused by the irritation of needle insertion subsides, the electromyograph should detect no abnormal spontaneous activity (i.e., a muscle at rest should be electrically silent, with the exception of the area of the neuromuscular junction, which is, under normal circumstances, very spontaneously active). When the muscle is voluntarily contracted, action potentials begin to appear. As the strength of the muscle contraction is increased, more and more muscle fibers produce action potentials. When the muscle is fully contracted, there should appear a disorderly group of action potentials of varying rates and amplitudes (a complete recruitment); this can be described as an interference pattern.
=== Abnormal results ===
EMG findings vary with the type of disorder, the duration of the problem, the age of the patient, the degree to which the patient can be cooperative, the type of needle electrode used to study the patient, and sampling error in terms of the number of areas studied within a single muscle and the number of muscles studied overall. Interpreting EMG findings is usually best done by an individual informed by a focused history and physical examination of the patient, and in conjunction with the results of other relevant diagnostic studies performed including most importantly, nerve conduction studies, but also, where appropriate, imaging studies such as MRI and ultrasound, muscle and nerve biopsy, muscle enzymes, and serologic studies.
Abnormal results may be caused by the following medical conditions (please note this is not an exhaustive list of conditions that can result in abnormal EMG studies):
== History ==
The first documented experiments dealing with EMG started with Francesco Redi's works in 1666. Redi discovered that a highly specialized muscle of the electric ray generated electricity. By 1773, Walsh had been able to demonstrate that the eel's muscle tissue could generate a spark of electricity. In 1792, a publication entitled De Viribus Electricitatis in Motu Musculari Commentarius appeared, written by Luigi Galvani, in which the author demonstrated that electricity could initiate muscle contraction. Six decades later, in 1849, Emil du Bois-Reymond discovered that it was also possible to record electrical activity during a voluntary muscle contraction. The first actual recording of this activity was made by Marey in 1890, who also introduced the term electromyography. In 1922, Gasser and Erlanger used an oscilloscope to show the electrical signals from muscles. Because of the stochastic nature of the myoelectric signal, only rough information could be obtained from its observation. The capability of detecting electromyographic signals improved steadily from the 1930s through the 1950s, and researchers began to use improved electrodes more widely for the study of muscles. The AANEM was formed in 1953 as one of several currently active medical societies with a special interest in advancing the science and clinical use of the technique. Clinical use of surface EMG (sEMG) for the treatment of more specific disorders began in the 1960s. Hardyck and his researchers were the first (1966) practitioners to use sEMG. In the early 1980s, Cram and Steger introduced a clinical method for scanning a variety of muscles using an EMG sensing device.
Research began at the Mayo Clinic in Rochester, Minnesota under the guidance of Edward H. Lambert, MD, PhD (1915–2003) in the early 1950s. Lambert, known as the "Father of EMG", with the assistance of his research technician Ervin L. Schmidt, a self-taught electrical engineer, developed a machine that could be moved from the EMG lab and was relatively easy to use. Oscilloscopes at that time were not able to store or print results, so a Polaroid camera was affixed to the front on a hinge and synchronized to photograph the scan. Fellows studying at Mayo soon learned that this was a tool they wanted, too. As Mayo had no interest in marketing its inventions, Schmidt continued to develop the machines in his basement for decades, selling them under the name ErMel Inc.
It was not until the middle of the 1980s that integration techniques in electrodes had sufficiently advanced to allow batch production of the required small and lightweight instrumentation and amplifiers. At present, a number of suitable amplifiers are commercially available. In the early 1980s, cables that produced signals in the desired microvolt range became available. Recent research has resulted in a better understanding of the properties of surface EMG recording. Surface electromyography is increasingly used for recording from superficial muscles in clinical or kinesiological protocols, while intramuscular electrodes are used for investigating deep muscles or localized muscle activity.
There are many applications for the use of EMG. EMG is used clinically for the diagnosis of neurological and neuromuscular problems. It is used diagnostically by gait laboratories and by clinicians trained in the use of biofeedback or ergonomic assessment. EMG is also used in many types of research laboratories, including those involved in biomechanics, motor control, neuromuscular physiology, movement disorders, postural control, and physical therapy.
== Research ==
EMG can be used to sense isometric muscular activity where no movement is produced. This enables definition of a class of subtle motionless gestures to control interfaces without being noticed and without disrupting the surrounding environment. These signals can be used to control a prosthesis or as a control signal for an electronic device such as a mobile phone or PDA.
EMG signals have been targeted as control for flight systems. The Human Senses Group at the NASA Ames Research Center at Moffett Field, CA seeks to advance man-machine interfaces by directly connecting a person to a computer. In this project, an EMG signal is used to substitute for mechanical joysticks and keyboards. EMG has also been used in research towards a "wearable cockpit", which employs EMG-based gestures to manipulate switches and control sticks necessary for flight in conjunction with a goggle-based display.
Unvoiced or silent speech recognition recognizes speech by observing the EMG activity of muscles associated with speech. It is targeted for use in noisy environments, and may be helpful for people without vocal cords, with aphasia, with dysphonia, and more.
EMG has also been used as a control signal for computers and other devices. An interface device based on an EMG Switch can be used to control moving objects, such as mobile robots or an electric wheelchair. This may be helpful for individuals that cannot operate a joystick-controlled wheelchair. Surface EMG recordings may also be a suitable control signal for some interactive video games.
A joint project involving Microsoft, the University of Washington in Seattle, and the University of Toronto in Canada has explored using muscle signals from hand gestures as an interface device. A patent based on this research was submitted on June 26, 2008.
In 2016 a startup called Emteq Labs launched a virtual reality headset with embedded EMG sensors for measuring facial expressions. In September 2019 Facebook, later renamed Meta Platforms, bought a startup called CTRL-labs that was working on EMG. In 2024, Meta unveiled augmented reality glasses that were paired with a wristband that reads a user's hand gestures using electromyography.
== See also ==
Chronaxie
Compound muscle action potential
Electrical impedance myography
Electrical muscle stimulation
Electrodiagnostic medicine
Electromyoneurography
Magnetomyography
Nerve conduction study
Neuromuscular ultrasound
Phonomyography
== References ==
== Further reading ==
Piper, H.: Elektrophysiologie menschlicher Muskeln. Berlin, Germany: J. Springer, 1912.
== External links ==
MedlinePlus entry on EMG describes EMG
American Association of Neuromuscular & Electrodiagnostic Medicine
EmedicineHealth page on EMG
Risks in Electrodiagnostic Medicine | Wikipedia/Electromyography |
Soft-body dynamics is a field of computer graphics that focuses on visually realistic physical simulations of the motion and properties of deformable objects (or soft bodies). The applications are mostly in video games and films. Unlike in simulation of rigid bodies, the shape of soft bodies can change, meaning that the relative distance of two points on the object is not fixed. While the relative distances of points are not fixed, the body is expected to retain its shape to some degree (unlike a fluid). The scope of soft body dynamics is quite broad, including simulation of soft organic materials such as muscle, fat, hair and vegetation, as well as other deformable materials such as clothing and fabric. Generally, these methods only provide visually plausible emulations rather than accurate scientific/engineering simulations, though there is some crossover with scientific methods, particularly in the case of finite element simulations. Several physics engines currently provide software for soft-body simulation.
== Deformable solids ==
The simulation of volumetric solid soft bodies can be realised by using a variety of approaches.
=== Spring/mass models ===
In this approach, the body is modeled as a set of point masses (nodes) connected by ideal weightless elastic springs obeying some variant of Hooke's law. The nodes may either derive from the edges of a two-dimensional polygonal mesh representation of the surface of the object, or from a three-dimensional network of nodes and edges modeling the internal structure of the object (or even a one-dimensional system of links, if for example a rope or hair strand is being simulated). Additional springs between nodes can be added, or the force law of the springs modified, to achieve desired effects. Applying Newton's second law to the point masses, including the forces applied by the springs and any external forces (due to contact, gravity, air resistance, wind, and so on), gives a system of differential equations for the motion of the nodes, which is solved by standard numerical schemes for solving ODEs. Rendering of a three-dimensional mass-spring lattice is often done using free-form deformation, in which the rendered mesh is embedded in the lattice and distorted to conform to the shape of the lattice as it evolves. Assuming all point masses are equal to zero, one obtains the stretched grid method, aimed at solving several engineering problems relating to elastic grid behavior. These are sometimes known as mass-spring-damper models. In pressurized soft bodies, the spring-mass model is combined with a pressure force based on the ideal gas law.
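The approach above can be sketched in a few lines of Python: two point masses joined by one Hooke spring, with Newton's second law integrated by semi-implicit (symplectic) Euler. All constants are illustrative, and a simple velocity-damping term stands in for a full mass-spring-damper model:

```python
# Minimal 1-D mass-spring sketch; all values are illustrative.
k = 50.0       # spring stiffness (N/m)
rest = 1.0     # spring rest length (m)
m = 1.0        # node mass (kg)
damping = 0.5  # simple velocity damping coefficient
dt = 0.01      # timestep (s)

x = [0.0, 1.5]  # node positions: the spring starts stretched past rest length
v = [0.0, 0.0]  # node velocities

for _ in range(2000):
    stretch = (x[1] - x[0]) - rest
    f = k * stretch                    # Hooke's law: force magnitude on each node
    forces = [f - damping * v[0], -f - damping * v[1]]
    for i in range(2):
        v[i] += (forces[i] / m) * dt   # Newton's second law: a = F/m
        x[i] += v[i] * dt              # semi-implicit Euler position update
```

With damping, the oscillation decays and the spring settles close to its rest length; in a real engine the same update runs over every spring of a mesh or lattice, plus external forces such as gravity.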
=== Finite element simulation ===
This is a more physically accurate approach, which uses the widely used finite element method to solve the partial differential equations which govern the dynamics of an elastic material. The body is modeled as a three-dimensional elastic continuum by breaking it into a large number of solid elements which fit together, and solving for the stresses and strains in each element using a model of the material. The elements are typically tetrahedral, the nodes being the vertices of the tetrahedra (relatively simple methods exist to tetrahedralize a three dimensional region bounded by a polygon mesh into tetrahedra, similarly to how a two-dimensional polygon may be triangulated into triangles). The strain (which measures the local deformation of the points of the material from their rest state) is quantified by the strain tensor
ε
{\displaystyle {\boldsymbol {\varepsilon }}}
. The stress (which measures the local forces per-unit area in all directions acting on the material) is quantified by the Cauchy stress tensor
σ
{\displaystyle {\boldsymbol {\sigma }}}
. Given the current local strain, the local stress can be computed via the generalized form of Hooke's law:
σ = Cε
{\displaystyle {\boldsymbol {\sigma }}={\mathsf {C}}{\boldsymbol {\varepsilon }}\,}
where
C
{\displaystyle {\mathsf {C}}}
is the elasticity tensor, which encodes the material properties (parametrized in linear elasticity for an isotropic material by the Poisson ratio and Young's modulus).
The equation of motion of the element nodes is obtained by integrating the stress field over each element and relating this, via Newton's second law, to the node accelerations.
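For an isotropic material, the generalized Hooke's law above reduces to σ = λ tr(ε)I + 2με, where the Lamé parameters λ and μ are computed from Young's modulus and the Poisson ratio. A Python sketch with illustrative material constants:

```python
# Isotropic linear elasticity: sigma = lam * trace(eps) * I + 2 * mu * eps.
E = 1.0e6  # Young's modulus (Pa) -- illustrative value
nu = 0.3   # Poisson ratio

lam = E * nu / ((1 + nu) * (1 - 2 * nu))  # first Lame parameter
mu = E / (2 * (1 + nu))                   # shear modulus (second Lame parameter)

def stress_from_strain(eps):
    """Map a 3x3 strain tensor (list of lists) to the Cauchy stress tensor."""
    trace = eps[0][0] + eps[1][1] + eps[2][2]
    return [[lam * trace * (1.0 if i == j else 0.0) + 2.0 * mu * eps[i][j]
             for j in range(3)] for i in range(3)]

# Uniaxial stretch of 1% along x, with no lateral strain:
eps = [[0.01, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
sigma = stress_from_strain(eps)
```

In a finite element solver, this stress evaluation runs per element; the resulting stress field is then integrated over each element to produce the node forces, as described above.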
Pixelux (developers of the Digital Molecular Matter system) use a finite-element-based approach for their soft bodies, using a tetrahedral mesh and converting the stress tensor directly into node forces. Rendering is done via a form of free-form deformation.
=== Energy minimization methods ===
This approach is motivated by variational principles and the physics of surfaces, which dictate that a constrained surface will
assume the shape which minimizes the total energy of deformation (analogous to a soap bubble). Expressing the energy of a surface in terms of its local deformation (the energy is due to a combination of stretching and bending), the local force on the surface is given by differentiating the energy with respect to position, yielding an equation of motion which can be solved in the standard ways.
=== Shape matching ===
In this scheme, penalty forces or constraints are applied to the model to drive it towards its original shape (i.e. the material behaves as if it has shape memory). To conserve momentum the rotation of the body must be estimated properly, for example via polar decomposition. To approximate finite element simulation, shape matching can be applied to three dimensional lattices and multiple shape matching constraints blended.
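The rotation-estimation step can be illustrated in two dimensions, where the polar decomposition of the covariance matrix between rest and current configurations has a simple closed form. The following Python sketch (hypothetical function name; centroids are assumed already removed from both point sets) recovers the best-fit rotation angle:

```python
import math

def best_rotation(rest, current):
    """rest, current: lists of (x, y) points with centroids already removed.
    Returns the angle of the rotation from the polar decomposition of the
    covariance matrix A = sum(current_i * rest_i^T)."""
    a = sum(c[0] * r[0] for c, r in zip(current, rest))
    b = sum(c[0] * r[1] for c, r in zip(current, rest))
    c_ = sum(c[1] * r[0] for c, r in zip(current, rest))
    d = sum(c[1] * r[1] for c, r in zip(current, rest))
    return math.atan2(c_ - b, a + d)  # closed-form 2-D polar decomposition

# Rest shape: a square about the origin; current: the same square rotated 30 deg.
rest = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
ang = math.radians(30)
current = [(x * math.cos(ang) - y * math.sin(ang),
            x * math.sin(ang) + y * math.cos(ang)) for x, y in rest]
```

The penalty forces then pull each node toward the rotated (and translated) rest shape; in three dimensions the rotation is typically extracted via a full polar decomposition or SVD instead of this closed form.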
=== Rigid-body based deformation ===
Deformation can also be handled by a traditional rigid-body physics engine, modeling the soft-body motion using a network of multiple rigid bodies connected by constraints, and using (for example) matrix-palette skinning to generate a surface mesh for rendering. This is the approach used for deformable objects in Havok Destruction.
== Cloth simulation ==
In the context of computer graphics, cloth simulation refers to the simulation of soft bodies in the form of two dimensional continuum elastic membranes, that is, for this purpose, the actual structure of real cloth on the yarn level can be ignored (though modeling cloth on the yarn level has been tried). Via rendering effects, this can produce a visually plausible emulation of textiles and clothing, used in a variety of contexts in video games, animation, and film. It can also be used to simulate two dimensional sheets of materials other than textiles, such as deformable metal panels or vegetation. In video games it is often used to enhance the realism of clothed animated characters.
Cloth simulators are generally based on mass-spring models, but a distinction must be made between force-based and position-based solvers.
=== Force-based cloth ===
The mass-spring model (obtained from a polygonal mesh representation of the cloth) determines the internal spring forces acting on the nodes at each timestep (in combination with gravity and applied forces). Newton's second law gives equations of motion which can be solved via standard ODE solvers. Creating high-resolution cloth with realistic stiffness is not possible, however, with simple explicit solvers (such as forward Euler integration), unless the timestep is made too small for interactive applications, since explicit integrators are numerically unstable for sufficiently stiff systems. Therefore, implicit solvers must be used, requiring solution of a large sparse matrix system (via e.g. the conjugate gradient method), which itself may also be difficult to achieve at interactive frame rates. An alternative is to use an explicit method with low stiffness, with ad hoc methods to avoid instability and excessive stretching (e.g. strain limiting corrections).
=== Position-based dynamics ===
To avoid needing to do an expensive implicit solution of a system of ODEs, many real-time cloth simulators (notably PhysX, Havok Cloth, and Maya nCloth) use position-based dynamics (PBD), an approach based on constraint relaxation. The mass-spring model is converted into a system of constraints, which demands that the distance between connected nodes equal their initial distance. This system is solved sequentially and iteratively, by directly moving nodes to satisfy each constraint, until sufficiently stiff cloth is obtained. This is similar to a Gauss-Seidel solution of the implicit matrix system for the mass-spring model. Care must be taken, though, to solve the constraints in the same sequence each timestep to avoid spurious oscillations, and to make sure that the constraints do not violate linear and angular momentum conservation. Additional position constraints can be applied, for example to keep the nodes within desired regions of space (sufficiently close to an animated model, for example), or to maintain the body's overall shape via shape matching.
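The constraint-relaxation loop can be sketched as follows; the function name and fixed iteration count are assumptions for illustration, not the API of PhysX or any other engine:

```python
import numpy as np

def project_distance_constraints(pos, inv_mass, constraints, iterations=10):
    """Gauss-Seidel-style relaxation of PBD distance constraints.
    constraints: list of (i, j, rest) triples; inv_mass[i] == 0 pins node i.
    Nodes are moved directly until each pair sits at its rest distance."""
    for _ in range(iterations):
        for i, j, rest in constraints:
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            w = inv_mass[i] + inv_mass[j]
            if dist == 0.0 or w == 0.0:
                continue                       # degenerate or both pinned
            corr = (dist - rest) * d / (dist * w)
            pos[i] += inv_mass[i] * corr       # split the correction by
            pos[j] -= inv_mass[j] * corr       # inverse mass
    return pos
```

More iterations give stiffer cloth; pinned nodes (zero inverse mass) never move, which is how attachment constraints fall out for free.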
== Collision detection for deformable objects ==
Realistic interaction of simulated soft objects with their environment may be important for obtaining visually realistic results. Cloth self-intersection is important in some applications for acceptably realistic simulated garments. This is challenging to achieve at interactive frame rates, particularly in the case of detecting and resolving self collisions and mutual collisions between two or more deformable objects.
Collision detection may be discrete/a posteriori (meaning objects are advanced in time through a pre-determined interval, and then any penetrations detected and resolved), or continuous/a priori (objects are advanced only until a collision occurs, and the collision is handled before proceeding). The former is easier to implement and faster, but leads to failure to detect collisions (or detection of spurious collisions) if objects move fast enough. Real-time systems generally have to use discrete collision detection, with other ad hoc ways to avoid failing to detect collisions.
Detection of collisions between cloth and environmental objects with a well-defined "inside" is straightforward, since the system can detect unambiguously whether the cloth mesh vertices and faces are intersecting the body and resolve them accordingly. If a well-defined "inside" does not exist (e.g. in the case of collision with a mesh which does not form a closed boundary), an "inside" may be constructed via extrusion. Mutual or self-collisions of soft bodies defined by tetrahedra are straightforward, since the problem reduces to detection of collisions between solid tetrahedra.
However, detection of collisions between two polygonal cloths (or collision of a cloth with itself) via discrete collision detection is much more difficult, since there is no unambiguous way to locally detect after a timestep whether a cloth node which has penetrated is on the "wrong" side or not. Solutions involve either using the history of the cloth motion to determine if an intersection event has occurred, or doing a global analysis of the cloth state to detect and resolve self-intersections. Pixar has presented a method which uses a global topological analysis of mesh intersections in configuration space to detect and resolve self-interpenetration of cloth. Currently, this is generally too computationally expensive for real-time cloth systems.
To do collision detection efficiently, primitives which are certainly not colliding must be identified as soon as possible and discarded from consideration to avoid wasting time.
To do this, some form of spatial subdivision scheme is essential, to avoid a brute-force test of O(n²) primitive collisions. Approaches used include:
Bounding volume hierarchies (AABB trees, OBB trees, sphere trees)
Grids, either uniform (using hashing for memory efficiency) or hierarchical (e.g. Octree, kd-tree)
Coherence-exploiting schemes, such as sweep and prune with insertion sort, or tree-tree collisions with front tracking.
Hybrid methods involving a combination of several of these schemes, e.g. a coarse AABB tree plus sweep-and-prune with coherence between colliding leaves.
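A minimal hashed uniform grid might look like the sketch below. For brevity it reports only same-cell candidate pairs (a full implementation would also test neighbouring cells, or insert each primitive into every cell its bounds overlap), and all names are illustrative:

```python
from collections import defaultdict
from itertools import combinations

def broad_phase_pairs(centers, cell_size):
    """Uniform-grid broad phase with hashing: primitives are binned by
    cell, and only primitives sharing a cell become candidate pairs,
    avoiding the brute-force O(n^2) all-pairs test."""
    grid = defaultdict(list)
    for idx, (x, y, z) in enumerate(centers):
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        grid[key].append(idx)
    candidates = set()
    for bucket in grid.values():
        for a, b in combinations(bucket, 2):   # only within-bucket pairs
            candidates.add((a, b))
    return candidates
```

Hashing the integer cell keys (here via Python's dict) keeps memory proportional to the number of occupied cells rather than the whole grid.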
== Other applications ==
Other effects which may be simulated via the methods of soft-body dynamics are:
Destructible materials: fracture of brittle solids, cutting of soft bodies, and tearing of cloth. The finite element method is especially suited to modelling fracture as it includes a realistic model of the distribution of internal stresses in the material, which physically is what determines when fracture occurs, according to fracture mechanics.
Plasticity (permanent deformation) and melting
Simulated hair, fur, and feathers
Simulated organs for biomedical applications
Simulating fluids in the context of computer graphics would not normally be considered soft-body dynamics, which is usually restricted to mean simulation of materials which have a tendency to retain their shape and form. In contrast, a fluid assumes the shape of whatever vessel contains it, as the particles are bound together by relatively weak forces.
== Software supporting soft body physics ==
=== Simulation engines ===
=== Games ===
== See also ==
Deformable body
Dynamical simulation
Rigid body dynamics
Strength of materials
Cloth modeling
Breast physics
== References ==
== External links ==
"The Animation of Natural Phenomena", CMU course on physically based animation, including deformable bodies
Soft body dynamics video example
Introductory article
Article by Thomas Jakobsen which explains the basics of the PBD method | Wikipedia/Soft_body_dynamics |
A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and is determined by different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism.
== Organ and tissue systems ==
These specific systems are widely studied in human anatomy and are also present in many other animals.
Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm.
Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus.
Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels.
Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine.
Integumentary system: skin, hair, fat, and nails.
Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons.
Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands.
Exocrine system: various functions including lubrication and protection by exocrine glands such as sweat glands, mucous glands, lacrimal glands and mammary glands.
Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph, lymph nodes and lymph vessels. Its functions include immune responses and the development of antibodies.
Immune system: protects the organism from foreign bodies.
Nervous system: collecting, transferring and processing information with brain, spinal cord, peripheral nervous system and sense organs.
Sensory systems: visual system, auditory system, olfactory system, gustatory system, somatosensory system, vestibular system.
Muscular system: allows for manipulation of the environment, provides locomotion, maintains posture, and produces heat. Includes skeletal muscles, smooth muscles and cardiac muscle.
Reproductive system: the sex organs, such as ovaries, fallopian tubes, uterus, vagina, mammary glands, testes, vas deferens, seminal vesicles and prostate.
== History ==
The notion of system (or apparatus) relies upon the concept of vital or organic function: a system is a set of organs with a definite function. This idea was already present in Antiquity (Galen, Aristotle), but the application of the term "system" is more recent. For example, the nervous system was named by Monro (1783), but Rufus of Ephesus (c. 90–120) had already viewed the brain, spinal cord, and craniospinal nerves as an anatomical unit, although he wrote little about its function and gave no name to this unit.
The enumeration of the principal functions (and consequently of the systems) has remained almost the same since Antiquity, but their classification has varied considerably; compare, e.g., Aristotle, Bichat, and Cuvier.
The notion of physiological division of labor, introduced in the 1820s by the French physiologist Henri Milne-Edwards, made it possible to "compare and study living things as if they were machines created by the industry of man." Inspired by the work of Adam Smith, Milne-Edwards wrote that the "body of all living beings, whether animal or plant, resembles a factory ... where the organs, comparable to workers, work incessantly to produce the phenomena that constitute the life of the individual." In more differentiated organisms, the functional labor could be apportioned between different instruments or systems (which he called appareils).
== Cellular organelle systems ==
The exact components of a cell are determined by whether the cell is a eukaryote or prokaryote.
Nucleus (eukaryotic only): storage of genetic material; control center of the cell.
Cytosol: component of the cytoplasm consisting of jelly-like fluid in which organelles are suspended
Cell membrane (plasma membrane):
Endoplasmic reticulum: outer part of the nuclear envelope forming a continuous channel used for transportation; consists of the rough endoplasmic reticulum and the smooth endoplasmic reticulum
Rough endoplasmic reticulum (RER): considered "rough" due to the ribosomes attached to the channeling; made up of cisternae that allow for protein production
Smooth endoplasmic reticulum (SER): storage and synthesis of lipids and steroid hormones as well as detoxification
Ribosome: site of biological protein synthesis essential for internal activity and cannot be reproduced in other organs
Mitochondrion (mitochondria): powerhouse of the cell; site of cellular respiration producing ATP (adenosine triphosphate)
Lysosome: center of breakdown for unwanted/unneeded material within the cell
Peroxisome: breaks down toxic materials such as H2O2 (hydrogen peroxide) using the digestive enzymes it contains
Golgi apparatus (eukaryotic only): folded network involved in modification, transport, and secretion
Chloroplast: site of photosynthesis; storage of chlorophyll
== See also ==
== External links ==
Systems Biology: An Overview by Mario Jardon: A review from the Science Creative Quarterly, 2005.
Synthesis and Analysis of a Biological System, by Hiroyuki Kurata, 1999.
It from bit and fit from bit. On the origin and impact of information in the average evolution. Includes how life forms and biological systems originate and from there evolve to become more and more complex, including evolution of genes and memes, into the complex memetics from organisations and multinational corporations and a "global brain", (Yves Decadt, 2000). Book published in Dutch with English paper summary in The Information Philosopher, http://www.informationphilosopher.com/solutions/scientists/decadt/
Schmidt-Rhaesa, A. 2007. The Evolution of Organ Systems. Oxford University Press, Oxford, [2].
== References == | Wikipedia/Biological_systems |
The mechanics of human sexuality or mechanics of sex, or more formally the biomechanics of human sexuality, is the study of the mechanics related to human sexual activity. Examples of topics include the biomechanical study of the strength of vaginal tissues and the biomechanics of male erectile function. The mechanics of sex under limit circumstances, such as sexual activity at zero-gravity in outer space, are also being studied.
Pioneering researchers studied the male and female genitals during coitus (penile-vaginal penetration) with ultrasound technology in 1992 and magnetic resonance imaging (MRI) in 1999, mapping the anatomy of the activity and taking images illustrating the fit of male and female genitals. In the research using MRI, researchers imaged couples performing coitus inside an MRI machine. The magnetic resonance images also showed that the penis has the shape of a boomerang, that one third of its length consists of the root of the penis, and that the vaginal walls wrap snugly around it. Moreover, MRI during coitus indicates that the internal part of the clitoris is stimulated by penile-vaginal movements. These studies highlight the role of the clitoris and indicate that what is termed the G-spot may only exist because the highly innervated clitoris is pulled closely to the anterior wall of the vagina when the woman is sexually aroused and during vaginal penetration.
== References ==
== Further reading ==
Bondil, P; Costa, P; Daures, JP; Louis, JF; Navratil, H (1992). "Clinical study of the longitudinal deformation of the flaccid penis and of its variations with aging". European Urology. 21 (4): 284–6. doi:10.1159/000474858. PMID 1459150.
Traxer, Olivier; Haab, François; Anidjar, Maurice; Gattegno, Bernard; Cussenot, Olivier; Thibault, Philippe (1999). "Comparaison des propriétés biomécaniques de l'ancrage vaginal dans les suspensions du col vésical selon la technique de Burch et une technique percutanée" [Comparison of biomechanical properties of the vaginal fixation in bladder neck suspensions according to the Burch technique and a percutaneous technique] (PDF). Progrès en Urologie (in French). 9 (4): 727–30. PMID 10555228. Archived from the original (PDF) on 2013-06-03. Retrieved 2010-10-05. | Wikipedia/Mechanics_of_human_sexuality |
Forensic biomechanics is the application of biomechanical engineering science to litigation, where biomechanical experts determine whether an accident was the cause of an alleged injury. (See "New York State Bar Association Bar Journal November/December 2010 - The Rise of Biomechanical Experts at Trial" by Robert Glick, Esq. and Sean O'Loughlin, Esq.) Application of biomechanics to the analysis of an accident involves an accident reconstruction coupled with an analysis of the motions and forces affecting the people involved in the accident. (See id.) A biomechanical expert's testimony on the motions and forces involved in an accident may be both relevant and probative on the issue of injury causation. (See id.)
== History ==
During the years 2005 to 2019, the Courts of New York City witnessed the innovation and widespread use of biomechanical experts. Soon after the innovation of biomechanical experts in the Courts of New York City, prominent trial attorneys and the New York State Bar Association began offering scholarly articles and educational seminars on the use of biomechanical experts. Notable articles on biomechanical experts include: "New York State Bar Association Bar Journal November/December 2010 - The Rise of Biomechanical Experts at Trial" by Robert Glick, Esq. and Sean O'Loughlin, Esq.; "The Role of Biomechanics Engineering: an Explanation in Accident Reconstruction" by Richard Sands, Esq.; "New York Law Journal - Using Biomechanical Science in Labor Law and Premises Cases" by Richard Sands, Esq.; "New York Law Journal - Winning the Biomechanical 'Frye' Hearing" by Steven Balson-Cohen, Esq.; and "PropertyCasualty360 - Insurers Tap Biomechanics To Fix Blame For Injuries Claimed In Crashes."
On December 28, 2018, the New York Law Journal published an article by prominent trial attorney Steven Balson-Cohen titled "Requiem for the Biomechanical ‘Frye’ Hearing?" revealing that the current state of the law in New York was that all four appellate divisions had accepted the legitimacy of biomechanical science in the courtroom.
Prominent trial attorneys in biomechanics include but are not limited to: Stephen B. Toner, Francis J. Scahill, Richard M. Sands, Claire F. Rush, Steven Balson-Cohen, John J. Komar, Howard Greenwald, Maurice J. Recchia, Cecil E. Floyd, Philip J. Rizzuto, Milene Mansouri, Joseph Jednak, Paul Koors, Anthony E. Graziani, Kristen N. Reed, and John Corring.
== References == | Wikipedia/Forensic_biomechanics |
Neuromechanics is an interdisciplinary field that combines biomechanics and neuroscience to understand how the nervous system interacts with the skeletal and muscular systems to enable animals to move. In a motor task, like reaching for an object, neural commands are sent to motor neurons to activate a set of muscles, called muscle synergies. Given which muscles are activated and how they are connected to the skeleton, there will be a corresponding and specific movement of the body. In addition to participating in reflexes, neuromechanical processes may also be shaped through motor adaptation and learning.
== Neuromechanics underlying behavior ==
=== Walking ===
The inverted pendulum theory of gait is a neuromechanical approach to understand how humans walk. As the name of the theory implies, a walking human is modeled as an inverted pendulum consisting of a center of mass (COM) suspended above the ground via a support leg (Fig. 2). As the inverted pendulum swings forward, ground reaction forces occur between the modeled leg and the ground. Importantly, the magnitude of the ground reaction forces depends on the COM position and size. The velocity vector of the center of mass is always perpendicular to the ground reaction force.
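The single-support phase can be sketched as a point mass pivoting about the stance foot. The step function below integrates the pendulum equation with illustrative parameter names; it is a toy model under those assumptions, not a full gait simulator:

```python
import math

def pendulum_step(theta, omega, leg_len, dt, g=9.81):
    """One Euler step of an inverted pendulum pivoting about the foot.
    theta: stance-leg angle from vertical (rad); omega: angular velocity.
    Gravity torque accelerates the fall once the COM leans past vertical."""
    alpha = (g / leg_len) * math.sin(theta)   # angular acceleration
    omega += alpha * dt
    theta += omega * dt
    return theta, omega
```

In this idealised model, kinetic and potential energy of the COM exchange freely over the arc, which is why the theory predicts near-zero work for walking.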
Walking consists of alternating single-support and double-support phases. The single-support phase occurs when one leg is in contact with the ground while the double-support phase occurs when two legs are in contact with the ground.
==== Neurological influences ====
The inverted pendulum is stabilized by constant feedback from the brain and can operate even in the presence of sensory loss. In animals who have lost all sensory input to the moving limb, the variables produced by gait (center of mass acceleration, velocity of animal, and position of the animal) remain constant between both groups.
During postural control, delayed feedback mechanisms are used in the temporal reproduction of task-level functions such as walking. The nervous system takes into account feedback from the center of mass acceleration, velocity, and position of an individual and utilizes the information to predict and plan future movements. Center of mass acceleration is essential in the feedback mechanism as this feedback takes place before any significant displacement data can be determined.
==== Controversy ====
The inverted pendulum theory directly contradicts the six determinants of gait, another theory for gait analysis. The six determinants of gait predict very high energy expenditure for the sinusoidal motion of the Center of Mass during gait, while the inverted pendulum theory offers the possibility that energy expenditure can be near zero; the inverted pendulum theory predicts that little to no work is required for walking.
== Measuring the neural control of muscles - Electromyography ==
Electromyography (EMG) is a tool used to measure the electrical outputs produced by skeletal muscles upon activation. Motor nerves innervate skeletal muscles and cause contraction upon command from the central nervous system. This contraction is measured by EMG and is typically measured on the scale of millivolts (mV). Another form of EMG data that is analyzed is integrated EMG (iEMG) data. iEMG measures the area under the EMG signal which corresponds to the overall muscle effort rather than the effort at a specific instant.
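For example, iEMG can be approximated from a sampled signal with the trapezoidal rule after full-wave rectification; the units and names below are assumptions for illustration:

```python
def integrated_emg(samples_mv, dt):
    """Integrated EMG (iEMG): area under the rectified EMG signal,
    approximated with the trapezoidal rule.
    samples_mv: raw EMG samples in mV; dt: sampling interval in seconds.
    Returns overall muscle effort over the window, in mV*s."""
    rect = [abs(s) for s in samples_mv]        # full-wave rectification
    return sum(0.5 * (a + b) * dt for a, b in zip(rect, rect[1:]))
```

Rectification matters because raw EMG oscillates around zero; without it, positive and negative phases would cancel and the integral would understate effort.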
=== Equipment ===
There are four instrumentation components used to detect these signals: (1) the signal source, (2) the transducer used to detect the signal, (3) the amplifier, and (4) the signal processing circuit. The signal source refers to the location at which the EMG electrode is place. EMG signal acquisition is dependent on distance from the electrode to the muscle fiber, so placement is imperative. The transducer used to detect the signal is an EMG electrode than transforms the bioelectric signal from the muscle to a readable electric signal. The amplifier reproduces an undistorted bioelectric signal and also allows for noise reduction in the signal. Signal processing involves taking the recorded electrical impulses, filtering them, and enveloping the data.
=== Latency ===
Latency is a measure of the time span between the activation of a muscle and its peak EMG value. Latency is used as a means to diagnose disorders of the nervous system such as a herniated disc, amyotrophic lateral sclerosis (ALS), or myasthenia gravis (MG). These disorders may cause a disruption of the signal at the muscle, the nerve, or the junction between the muscle and the nerve.
The use of EMG to identify nervous system disorders is known as a nerve conduction study (NCS). Nerve conduction studies can only diagnose diseases on the muscular and nerve level; they cannot detect disease in the spinal cord or the brain. In most disorders of the muscle, nerve, or neuromuscular junction, the latency time is increased. This is a result of decreased nerve conduction or electrical stimulation at the site of the muscle. In 50% of patients with cerebral atrophy, the M3 spinal reflex latency was increased and on occasion separated from the M2 spinal reflex response. The separation between the M2 and M3 spinal reflex responses is typically 20 milliseconds, but in patients with cerebral atrophy the separation was increased to 50 ms. In some cases, however, other muscles can compensate for the muscle suffering from decreased electrical stimulation. In the compensatory muscle, the latency time is actually decreased in order to substitute for the function of the diseased muscle. These kinds of studies are used in neuromechanics to identify motor disorders and their effects on a cellular and electrical level rather than on the level of system motion.
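As a toy illustration of the latency measure (the span between muscle activation and peak EMG), the sketch below uses a simple amplitude threshold for onset detection; real NCS protocols use standardized onset criteria, so treat the threshold rule and names as assumptions:

```python
def emg_latency(samples_mv, dt, onset_threshold_mv):
    """Latency: time from EMG onset (first rectified sample above a
    threshold) to the peak rectified value. Returns seconds."""
    rect = [abs(s) for s in samples_mv]
    onset = next(i for i, s in enumerate(rect) if s > onset_threshold_mv)
    peak = max(range(len(rect)), key=lambda i: rect[i])
    return (peak - onset) * dt
```

An increased return value relative to a healthy baseline would correspond to the slowed conduction described above.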
== Coordinated movements enabled through muscle synergies ==
A muscle synergy is a group of synergistic muscles and agonists that work together to perform a motor task. A muscle synergy is composed of agonist and synergistic muscles. An agonist muscle is a muscle that contracts individually, and it can cause a cascade of motion in neighboring muscles. Synergistic muscles aid the agonist muscles in motor control tasks, but they act against excess motion that the agonists may create.
=== Muscle synergy hypothesis ===
The muscle synergy hypothesis is based on the assumption that the central nervous system controls muscle groups rather than individual muscles. The muscle synergy hypothesis presents motor control as a three-tiered hierarchy. In tier one, a motor task vector is created by the central nervous system. The central nervous system then transforms the motor task vector to act upon a group of muscle synergies in tier two. In tier three, the muscle synergies define a specific ratio of the motor task for each muscle and assign it to its respective muscle to act upon the joint to perform the motor task.
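In linear synergy models, this hierarchy is often written as m = W h: a few synergy activations h are mapped through fixed weight columns of W onto many muscles. A minimal sketch, with shapes and names as illustrative assumptions:

```python
import numpy as np

def muscle_activations(W, h):
    """Combine synergy drives into per-muscle activations.
    W: (n_muscles, n_synergies) fixed weights, one column per synergy.
    h: (n_synergies,) activation level of each synergy set by the CNS.
    Each muscle's activation is a weighted sum of the active synergies."""
    return np.asarray(W) @ np.asarray(h)
```

Because n_synergies is typically much smaller than n_muscles, the model reduces the dimensionality of the control problem, which is the hypothesis's answer to the redundancy issue discussed below.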
=== Redundancy ===
Redundancy plays a large role in muscle synergy. Muscle redundancy is a degrees of freedom problem on the muscular level. The central nervous system is presented with the opportunity to coordinate muscle movements, and it must choose one out of many. The muscle redundancy problem is a result of more muscle vectors than dimensions in the task space. Muscles can only generate tension by pulling, not pushing. This results in many muscle force vectors in multiple directions rather than a push and pull in the same direction.
One debate on muscle synergies is between the prime mover strategy and the cooperation strategy. The prime mover strategy arises when a muscle's vector can act in the same direction as the mechanical action vector, the vector of the limb's motion. The cooperation strategy, however, takes place when no muscle can act directly in the vector direction of the mechanical action resulting in a coordination of multiple muscles to achieve the task. The prime mover strategy over time has declined in popularity as it has been found through electromyography studies that no one muscle consistently provides more force than other muscles that are acting to move about a joint.
=== Criticisms ===
The muscle synergy theory is difficult to falsify. Though experimentation has shown that groups of muscles indeed work together to control motor tasks, neural connections allow for individual muscles to be activated. Though individual muscle activation may contradict muscle synergy, it also obscures it. Activation of individual muscles may override or block the input from and overall effect of muscle synergies.
== Motor adaptation ==
Adaptation in the neuromechanical sense is the body's ability to change an action to better suit the situation or environment in which it is acting. Adaptation can be a result of injury, fatigue, or practice. Adaptation can be measured in a variety of ways: electromyography, three-dimensional reconstruction of joints, and changes in other variables pertaining to the specific adaptation being studied.
=== Injury ===
Injury can cause adaptation in a number of ways. Compensation is a large factor in injury adaptation. Compensation is a result of one or more weakened muscles. The brain is given the task to perform a certain motor task, and once a muscle has been weakened, the brain computes energy ratios to send to other muscles to perform the original task in the desired fashion. Change in muscle contribution is not the only byproduct of a muscle-related injury. Change in loading of the joint is another result which, if prolonged, can be harmful for the individual.
=== Fatigue ===
Muscle fatigue is the neuromuscular adaptation to challenges over a period of time. The use of motor units over a period of time can result in changes in the motor command from the brain. Since the force of contraction cannot be changed, the brain instead recruits more motor units to achieve maximal muscle contraction. Recruitment of motor units varies from muscle to muscle depending on the upper limit of motor recruitment in the muscle.
=== Practice ===
Adaptation due to practice can be a result of intended practice such as sports or unintended practice such as wearing an orthosis. In athletes, repetition results in muscle memory. The motor task becomes a long-term memory that can be repeated without much conscious effort. This allows the athlete to focus on fine-tuning their motor task strategy. Resistance to fatigue also comes with practice as the muscle is strengthened, but the speed at which an athlete can complete a motor task is also increased with practice. Volleyball players compared to non-jumpers show more repeatable control of muscles surrounding the knee that is controlled by co-activation in the single jump condition. In the repeated jump condition, both volleyball players and non-jumpers have a linear decrease in normalized jump flight time. Though the normalized linear decrease is the same for athletes and non-athletes, athletes consistently have higher flight times.
There is also adaptation associated with use of a prosthesis or an orthosis. This operates similarly to adaptation due to fatigue; however, muscles can actually be fatigued or alter their mechanical contribution to a motor task as a result of wearing the orthosis. An ankle foot orthosis is a common solution to injury of the lower limb, specifically around the ankle joint. An ankle foot orthosis can be assistive or resistive. An assistive ankle orthosis encourages ankle movement, and a resistive ankle orthosis inhibits ankle movement. Upon wearing an assistive ankle foot orthosis, individuals have decreased EMG amplitude and joint stiffness over time while the opposite occurs for resistive ankle foot orthoses. Additionally, not only can electromyography readings differ, but the physical path that joints travel along can be altered as well.
== References == | Wikipedia/Neuromechanics |
In osteology, bone remodeling or bone metabolism is a lifelong process where mature bone tissue is removed from the skeleton (a process called bone resorption) and new bone tissue is formed (a process called ossification or new bone formation). Recent research has identified a specialised subset of blood vessels, termed Type R endothelial cells, in the bone microenvironment. These blood vessels play a crucial role in adult bone remodelling by mediating interactions between bone-resorbing osteoclasts and bone-forming osteoblasts. Type R blood vessels are characterised by their association with post-arterial capillaries and exhibit unique remodelling properties crucial for bone homeostasis. These processes also control the reshaping or replacement of bone following injuries such as fractures, as well as micro-damage, which occurs during normal activity. Remodeling also responds to the functional demands of mechanical loading.
In the first year of life, almost 100% of the skeleton is replaced. In adults, remodeling proceeds at about 10% per year.
An imbalance in the regulation of bone remodeling's two sub-processes, bone resorption and bone formation, results in many metabolic bone diseases, such as osteoporosis.
== Physiology ==
Bone homeostasis involves multiple but coordinated cellular and molecular events. Two main types of cells are responsible for bone metabolism: osteoblasts (which secrete new bone), and osteoclasts (which break bone down). The structure of bones as well as adequate supply of calcium requires close cooperation between these two cell types and other cell populations present at the bone remodeling sites (e.g. immune cells). Bone metabolism relies on complex signaling pathways and control mechanisms to achieve proper rates of growth and differentiation. These controls include the action of several hormones, including parathyroid hormone (PTH), vitamin D, growth hormone, steroids, and calcitonin, as well as several bone marrow-derived membrane and soluble cytokines and growth factors (e.g. M-CSF, RANKL, VEGF and IL-6 family). It is in this way that the body is able to maintain proper levels of calcium required for physiological processes. Thus bone remodeling is not just occasional "repair of bone damage" but rather an active, continual process that is always happening in a healthy body.
Subsequent to appropriate signaling, osteoclasts move to resorb the surface of the bone, followed by deposition of bone by osteoblasts. Together, the cells that are responsible for bone remodeling are known as the basic multicellular unit (BMU), and the temporal duration (i.e. lifespan) of the BMU is referred to as the bone remodeling period.
== Gallery ==
== See also ==
Biomineralization, the general class of forming and maintaining mineralized tissues
Tissue remodeling
Wolff's law
== References == | Wikipedia/Bone_remodeling |
Force platforms or force plates are measuring instruments that measure the ground reaction forces generated by a body standing on or moving across them, to quantify balance, gait and other parameters of biomechanics. Most common areas of application are medicine and sports.
== Operation ==
The simplest force platform is a plate with a single pedestal, instrumented as a load cell. Better designs use a pair of plates, usually rectangular although triangular designs also work, mounted one over the other with load cells or triaxial force transducers between them at the corners.
Like single-force platforms, dual-force platforms can be used to assess performance in double leg tests and strength and power asymmetries in unilateral jump and isometric tests. However, they also provide an additional level of intelligence on neuromuscular status by evaluating the force distribution between limbs during double-limb tests, revealing critical information on strength asymmetries and compensatory strategies.
The simplest force plates measure only the vertical component of the force in the geometric center of the platform. More advanced models measure the three-dimensional components of the single equivalent force applied to the surface and its point of application, usually called the centre of pressure (CoP), as well as the vertical moment of force. Cylindrical force plates have also been constructed for studying arboreal locomotion, including brachiation.
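As a rough illustration of how the CoP follows from the transducer readings, the centre of pressure of the equivalent vertical force can be recovered from four corner load cells by a moment balance. The corner geometry and function name below are assumptions of this sketch, not any manufacturer's convention:

```python
# Sketch: centre of pressure (CoP) from four corner load cells on a
# rectangular plate of half-width a and half-length b. Corner order and
# names are illustrative assumptions.
def centre_of_pressure(f1, f2, f3, f4, a, b):
    """f1..f4: vertical forces (N) at corners (+a,+b), (-a,+b), (-a,-b), (+a,-b).

    Returns (x_cp, y_cp, fz): CoP coordinates from the moment balance
    M = sum(F_i * r_i) about each axis, divided by the total vertical force.
    """
    fz = f1 + f2 + f3 + f4                   # total vertical force
    x_cp = a * (f1 - f2 - f3 + f4) / fz      # moment about the y-axis / Fz
    y_cp = b * (f1 + f2 - f3 - f4) / fz      # moment about the x-axis / Fz
    return x_cp, y_cp, fz
```

Equal corner loads give a CoP at the plate centre; loading a single corner moves the CoP to that corner.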
Force platforms may be classified as single-pedestal or multi-pedestal and by the transducer (force and moment transducer) type: strain gauge, piezoelectric sensors, capacitance gauge, piezoresistive, etc., each with its advantages and drawbacks. Single pedestal models, sometimes called load cells, are suitable for forces that are applied over a small area. For studies of movements, such as gait analysis, force platforms with at least three pedestals and usually four are used to permit forces that migrate across the plate. For example, during walking ground reaction forces start at the heel and finish near the big toe.
Force platforms should be distinguished from pressure measuring systems that, although they too quantify centre of pressure, do not directly measure the applied force vector. Pressure measuring plates are useful for quantifying the pressure patterns under a foot over time but cannot quantify horizontal or shear components of the applied forces.
The measurements from a force platform can be either studied in isolation, or combined with other data, such as limb kinematics to understand the principles of locomotion. If an organism makes a standing jump from a force plate, the data from the plate alone is sufficient to calculate acceleration, work, power output, jump angle, and jump distance using basic physics. Simultaneous video measurements of leg joint angles and force plate output can allow the determination of torque, work and power at each joint using a method called inverse dynamics.
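The standing-jump calculation described above can be sketched with the impulse-momentum theorem: integrating the net acceleration (GRF divided by body mass, minus g) up to take-off gives the take-off velocity, and projectile motion then gives the jump height. The function name and sample data are illustrative assumptions:

```python
import numpy as np

# Sketch: jump height from the vertical ground reaction force alone,
# assuming the record ends exactly at take-off.
def jump_height_from_grf(t, fz, mass, g=9.81):
    """t: sample times (s) ending at take-off; fz: vertical GRF (N)."""
    a = fz / mass - g                                       # net vertical acceleration
    v_takeoff = np.sum(0.5 * (a[1:] + a[:-1]) * np.diff(t)) # trapezoidal integration
    return v_takeoff**2 / (2.0 * g)                         # projectile motion: h = v^2 / (2g)

# Illustrative data: a constant 5 m/s^2 net push for 0.3 s -> v = 1.5 m/s.
t = np.linspace(0.0, 0.3, 301)
fz = np.full_like(t, 70.0 * (9.81 + 5.0))  # 70 kg subject
h = jump_height_from_grf(t, fz, mass=70.0)
```

With simultaneous joint-angle data, the same force record feeds the inverse-dynamics calculation mentioned above; the plate alone suffices only for whole-body quantities.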
== Recent developments in technology ==
Advancements in technology have allowed force platforms to take on a new role within the kinetics field. The cost of traditional laboratory-grade force plates (usually in the thousands of dollars) has made them impractical for the everyday clinician. However, Nintendo introduced the Wii Balance Board (WBB) (Nintendo, Kyoto, Japan) in 2007 and changed the structure of what a force plate can be. By 2010, it was found that the WBB is a valid and reliable instrument to measure weight distribution when directly compared to the "gold-standard" laboratory-grade force plate, while costing less than $100. Moreover, this has been verified in both healthy and clinical populations. This is possible due to the four force transducers found in the corners of the WBB. These studies are conducted using customized software, such as LabVIEW (National Instruments, Austin, TX, USA), that can be integrated with the board to measure the amount of body sway or the CoP path length during timed trials. The other benefit of a posturography system such as the WBB is that it is portable, so clinicians around the world are able to measure body sway quantitatively instead of relying on the subjective clinical balance assessments currently in use.
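The CoP path length used as a sway measure is typically computed by summing the distances between successive CoP samples; a minimal sketch (function name assumed):

```python
import numpy as np

# Sketch: CoP path length, a common posturography summary statistic --
# the total distance travelled by the centre of pressure during a trial.
def cop_path_length(x, y):
    """x, y: CoP coordinate samples (same units, e.g. mm)."""
    return float(np.sum(np.hypot(np.diff(x), np.diff(y))))
```

A longer path over a fixed trial duration indicates greater postural sway.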
According to Digital Trends, Nintendo's Wii and its Wii U successor have both been discontinued as of March 2016. This exemplifies one of the issues arising from the adoption of inexpensive off-the-shelf consumer products re-purposed for medical measurements. Further issues with such adoption arise from the regulatory and standards bodies around the world. Force platforms used for measuring a patient's balance and mobility performance are classified by the U.S. FDA (United States Food and Drug Administration) as Class I medical devices. As such they must be manufactured to certain quality standards as established by the ISO (International Organization for Standardization), such as ISO 9001 Quality Management Principles or ISO 13485 Medical Device Quality Management Systems. The European Union's MDD (Medical Device Directive) also classifies force platforms used for medical measurements as Class I medical devices and requires medical CE certification for importation and use in the European Union for such medical applications. A notable recent standard, ASTM F3109-16 Standard Test Method for Verification of Multi-Axis Force Measuring Platforms, presents a framework for manufacturers and users to verify the performance of force platforms across the extents of their working surface. Standards such as these are used by manufacturers of medical-grade force platforms to ensure that measurements made on a patient population are accurate, repeatable and reliable. In short, inexpensive consumer-grade entertainment components may be a poor choice for medical measurements given the lack of continuity of such products and their legal, regulatory and perhaps quality unsuitability for such applications.
== Use in sport ==
Force plates are commonly used in sport to assess an athlete's force-producing capabilities, strength and imbalances [1]. A practitioner can use a force plate to assess training needs, readiness to train, and progress during the return-to-play process.
Typical force plate assessments in sport include the countermovement jump (CMJ), squat jump (SJ), drop jump (DJ), countermovement rebound jump, and isometric mid thigh pull (IMTP).
Practitioners often have trouble deciding which metrics to track when using force plates. Dr. Jason Lake, a biomechanist at the University of Chichester, has created a system, called the 'ODSF System', for easily selecting force plate metrics.
== History ==
Chronology
•1976• Advanced Mechanical Technology, Inc. (AMTI) constructed the first commercially available strain gauge force plate for gait analysis at the biomechanics laboratory of the Boston Children's Hospital.
•2017• Hawkin Dynamics created the first wireless force platform and mobile app.
== See also ==
Gait analysis
Posturography
== References ==
| Wikipedia/Force_platform
Computational Mechanics is a monthly scientific journal focused on computational mechanics. It is published by Springer and was founded in 1986. The journal reports original research in computational mechanics. It focuses on areas that involve the rational application of mechanics, mathematics, and numerical methods in the practice of modern engineering.
Areas covered include solid and structural mechanics, multi-body system dynamics, constitutive modeling, inelastic and finite deformation response, and structural control. The journal also covers fluid mechanics and fluid-structure interactions, biomechanics, free-surface and two-fluid flows, aerodynamics, fracture mechanics and structural integrity, multi-scale mechanics, particle and meshfree methods, transport phenomena, and heat transfer. Lastly, the journal publishes modern variational methods in mechanics in general.
According to the Journal Citation Reports, the journal has a 2020 impact factor of 4.014.
== References ==
== External links ==
Official website
| Wikipedia/Computational_Mechanics_(journal)
In theoretical physics, the eikonal approximation (Greek εἰκών for likeness, icon or image) is an approximative method useful in wave scattering equations, which occur in optics, seismology, quantum mechanics, quantum electrodynamics, and partial wave expansion.
== Informal description ==
The main advantage that the eikonal approximation offers is that the equations reduce to a differential equation in a single variable. This reduction into a single variable is the result of the straight line approximation or the eikonal approximation, which allows us to choose the straight line as a special direction.
== Relation to the WKB approximation ==
The early steps involved in the eikonal approximation in quantum mechanics are very closely related to the WKB approximation for one-dimensional waves. The WKB method, like the eikonal approximation, reduces the equations into a differential equation in a single variable. But the difficulty with the WKB approximation is that this variable is described by the trajectory of the particle which, in general, is complicated.
== Formal description ==
Making use of the WKB approximation, we can write the wave function of the scattered system in terms of the action S:
{\displaystyle \Psi =e^{iS/{\hbar }}}
Inserting the wavefunction Ψ into the Schrödinger equation in the absence of a magnetic field, we obtain
{\displaystyle -{\frac {{\hbar }^{2}}{2m}}{\nabla }^{2}\Psi =(E-V)\Psi }
{\displaystyle -{\frac {{\hbar }^{2}}{2m}}{\nabla }^{2}{e^{iS/{\hbar }}}=(E-V)e^{iS/{\hbar }}}
{\displaystyle {\frac {1}{2m}}{(\nabla S)}^{2}-{\frac {i\hbar }{2m}}{\nabla }^{2}S=E-V}
We write S as a power series in ħ
{\displaystyle S=S_{0}+{\frac {\hbar }{i}}S_{1}+\ldots }
For the zeroth order:
{\displaystyle {\frac {1}{2m}}{(\nabla S_{0})}^{2}=E-V}
If we consider the one-dimensional case then
{\displaystyle {\nabla }^{2}\rightarrow {\partial _{z}}^{2}}.
We obtain a differential equation with the boundary condition:
{\displaystyle {\frac {S(z=z_{0})}{\hbar }}=kz_{0}}
for {\displaystyle V\rightarrow 0}, {\displaystyle z\rightarrow -\infty }.
{\displaystyle {\frac {d}{dz}}{\frac {S_{0}}{\hbar }}={\sqrt {k^{2}-2mV/{\hbar }^{2}}}}
{\displaystyle {\frac {S_{0}(z)}{\hbar }}=kz-{\frac {m}{{\hbar }^{2}k}}\int _{-\infty }^{z}{Vdz'}}
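The zeroth-order phase above can be checked numerically: in units ħ = m = 1 it reduces to S₀(z)/ħ = kz − (1/k)∫V dz′. The following sketch evaluates it for an illustrative square barrier; all parameter values are assumptions of the example:

```python
import numpy as np

# Sketch: eikonal phase S0(z)/hbar = k z - (m / (hbar^2 k)) * int V dz'
# in units hbar = m = 1, for an illustrative square barrier.
def eikonal_phase(z, k, V, z_min=-10.0, n=20001):
    zs = np.linspace(z_min, z, n)
    # trapezoidal quadrature of the potential along the straight-line path
    integral = np.sum(0.5 * (V(zs[1:]) + V(zs[:-1])) * np.diff(zs))
    return k * z - integral / k

V0, L = 0.5, 2.0                                   # assumed barrier height and width
V = lambda z: np.where((z >= 0) & (z <= L), V0, 0.0)
k = 3.0
phase = eikonal_phase(5.0, k, V)
exact = k * 5.0 - V0 * L / k                       # the integral is just V0 * L here
```

For this piecewise-constant potential the integral is exactly V0·L, so the numerical phase should match kz − V0·L/k up to quadrature error.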
== See also ==
Eikonal equation
Correspondence principle
Principle of least action
== References ==
=== Notes ===
K. V. Shajesh, "Eikonal Approximation", Department of Physics and Astronomy, University of Oklahoma.
=== Further reading ===
R.R. Dubey (1995). Comparison of exact solution with Eikonal approximation for elastic heavy ion scattering (3rd ed.). NASA.
W. Qian; H. Narumi; N. Daigaku. P. Kenkyūjo (1989). Eikonal approximation in partial wave version (3rd ed.). Nagoya.{{cite book}}: CS1 maint: location missing publisher (link)
M. Lévy; J. Sucher (1969). "Eikonal Approximation in Quantum Field Theory". Phys. Rev. 186 (5). Maryland, USA: 1656–1670. Bibcode:1969PhRv..186.1656L. doi:10.1103/PhysRev.186.1656.
I. T. Todorov (1970). "Quasipotential Equation Corresponding to the Relativistic Eikonal Approximation". Phys. Rev. D. 3 (10). New Jersey, USA: 2351–2356. Bibcode:1971PhRvD...3.2351T. doi:10.1103/PhysRevD.3.2351. Archived from the original on 2013-02-23.
D.R. Harrington (1969). "Multiple Scattering, the Glauber Approximation, and the Off-Shell Eikonal Approximation". Phys. Rev. 184 (5). New Jersey, USA: 1745–1749. Bibcode:1969PhRv..184.1745H. doi:10.1103/PhysRev.184.1745.
| Wikipedia/Eikonal_approximation
Classical Mechanics is a textbook written by Herbert Goldstein, a professor at Columbia University. Intended for advanced undergraduate and beginning graduate students, it has been one of the standard references on its subject around the world since its first publication in 1950.
== Overview ==
In the second edition, Goldstein corrected all the errors that had been pointed out, added a new chapter on perturbation theory, a new section on Bertrand's theorem, and another on Noether's theorem. Other arguments and proofs were simplified and supplemented.
Before the death of its primary author in 2005, a new (third) edition of the book was released, with the collaboration of Charles P. Poole and John L. Safko from the University of South Carolina. In the third edition, the book discusses at length various mathematically sophisticated reformulations of Newtonian mechanics, namely analytical mechanics, as applied to particles, rigid bodies, and continua. In addition, it covers in some detail classical electromagnetism, special relativity, and field theory, both classical and relativistic. There is an appendix on group theory. New to the third edition are a chapter on nonlinear dynamics and chaos, a section on the exact solutions to the three-body problem obtained by Euler and Lagrange, and a discussion of the damped driven pendulum that explains Josephson junctions. This is counterbalanced by the reduction of several existing chapters, motivated by the desire to prevent this edition from exceeding the previous one in length. For example, the discussions of Hermitian and unitary matrices were omitted because they are more relevant to quantum mechanics than to classical mechanics, while those of Routh's procedure and time-independent perturbation theory were reduced.
== Table of Contents (3rd Edition) ==
== Editions ==
Goldstein, Herbert (1950). Classical Mechanics (1st ed.). Addison-Wesley.
Goldstein, Herbert (1951). Classical Mechanics (1st ed.). Addison-Wesley. ASIN B000OL8LOM.
Goldstein, Herbert (1980). Classical Mechanics (2nd ed.). Addison-Wesley. ISBN 978-0-201-02918-5.
Goldstein, Herbert; Poole, C. P.; Safko, J. L. (2001). Classical Mechanics (3rd ed.). Addison-Wesley. ISBN 978-0-201-65702-9.
== Reception ==
=== First edition ===
S.L. Quimby of Columbia University noted that the first half of the first edition of the book is dedicated to the development of Lagrangian mechanics with the treatment of velocity-dependent potentials, which are important in electromagnetism, and the use of the Cayley-Klein parameters and matrix algebra for rigid-body dynamics. This is followed by a comprehensive and clear discussion of Hamiltonian mechanics. End-of-chapter references improve the value of the book. Quimby pointed out that although this book is suitable for students preparing for quantum mechanics, it is not helpful for those interested in analytical mechanics because its treatment omits too much. Quimby praised the quality of printing and binding which make the book attractive.
In the Journal of the Franklin Institute, Rupen Eskergian noted that the first edition of Classical Mechanics offers a mature take on the subject using vector and tensor notations and with a welcome emphasis on variational methods. This book begins with a review of elementary concepts, then introduces the principle of virtual work, constraints, generalized coordinates, and Lagrangian mechanics. Scattering is treated in the same chapter as central forces and the two-body problem. Unlike most other books on mechanics, this one elaborates upon the virial theorem. The discussion of canonical and contact transformations, the Hamilton-Jacobi theory, and action-angle coordinates is followed by a presentation of geometric optics and wave mechanics. Eskergian believed this book serves as a bridge to modern physics.
Writing for The Mathematical Gazette on the first edition, L. Rosenhead congratulated Goldstein for a lucid account of classical mechanics leading to modern theoretical physics, which he believed would stand the test of time alongside acknowledged classics such as E.T. Whittaker's Analytical Dynamics and Arnold Sommerfeld's Lectures on Theoretical Physics. This book is self-contained and is suitable for students who have completed courses in mathematics and physics of the first two years of university. End-of-chapter references with comments and some example problems enhance the book. Rosenhead also liked the diagrams, index, and printing.
Concerning the second printing of the first edition, Vic Twersky of the Mathematical Research Group at New York University considered the book to be of pedagogical merit because it explains things in a clear and simple manner, and its humor is not forced. Published in the 1950s, this book replaced the outdated and fragmented treatises and supplements typically assigned to beginning graduate students as a modern text on classical mechanics with exercises and examples demonstrating the link between this and other branches of physics, including acoustics, electrodynamics, thermodynamics, geometric optics, and quantum mechanics. It also has a chapter on the mechanics of fields and continua. At the end of each chapter, there is a list of references with the author's candid reviews of each. Twersky said that Goldstein's Classical Mechanics is more suitable for physicists compared to the much older treatise Analytical Dynamics by E.T. Whittaker, which he deemed more appropriate for mathematicians.
E. W. Banhagel, an instructor from Detroit, Michigan, observed that despite requiring no more than multivariable and vector calculus, the first edition of Classical Mechanics successfully introduces some sophisticated new ideas in physics to students. Mathematical tools are introduced as needed. He believed that the annotated references at the end of each chapter are of great value.
=== Third edition ===
Stephen R. Addison from the University of Central Arkansas commented that while the first edition of Classical Mechanics was essentially a treatise with exercises, the third has become less scholarly and more of a textbook. This book is most useful for students who are interested in learning the necessary material in preparation for quantum mechanics. The presentation of most materials in the third edition remain unchanged compared to that of the second, though many of the old references and footnotes were removed. Sections on the relations between the action-angle coordinates and the Hamilton-Jacobi equation with the old quantum theory, wave mechanics, and geometric optics were removed. Chapter 7, which deals with special relativity, has been heavily revised and could prove to be more useful to students who want to study general relativity than its equivalent in previous editions. Chapter 11 provides a clear, if somewhat dated, survey of classical chaos. Appendix B could help advanced students refresh their memories but may be too short to learn from. In all, Addison believed that this book remains a classic text on the eighteenth- and nineteenth-century approaches to theoretical mechanics; those interested in a more modern approach – expressed in the language of differential geometry and Lie groups – should refer to Mathematical Methods of Classical Mechanics by Vladimir Arnold.
Martin Tiersten from the City University of New York pointed out a serious error in the book that persisted in all three editions and even made it onto the front cover: a closed orbit, depicted in a diagram on page 80 (as Figure 3.7), that is impossible for an attractive central force because the path cannot be concave away from the center of force. A similarly erroneous diagram appears on page 91 (as Figure 3.13). Tiersten suggested that the reason why this error remained unnoticed for so long is that advanced mechanics texts typically do not use vectors in their treatment of central-force problems, in particular the tangential and normal components of the acceleration vector. He wrote, "Because an attractive force is always directed in toward the center of force, the direction toward the center of curvature at the turning points must be toward the center of force." In response, Poole and Safko acknowledged the error and stated that they were working on a list of errata.
== See also ==
Newtonian mechanics
Classical Mechanics (Kibble and Berkshire)
Course of Theoretical Physics (Landau and Lifshitz)
List of textbooks on classical and quantum mechanics
Introduction to Electrodynamics (Griffiths)
Classical Electrodynamics (Jackson)
== References ==
== External links ==
Errata, corrections, and comments on the third edition. John L. Safko and Charles P. Poole. University of South Carolina.
| Wikipedia/Classical_Mechanics_(Goldstein)
In mathematics, Mathieu functions, sometimes called angular Mathieu functions, are solutions of Mathieu's differential equation
{\displaystyle {\frac {d^{2}y}{dx^{2}}}+(a-2q\cos(2x))y=0,}
where a, q are real-valued parameters. Since we may add π/2 to x to change the sign of q, it is a usual convention to set q ≥ 0.
They were first introduced by Émile Léonard Mathieu, who encountered them while studying vibrating elliptical drumheads. They have applications in many fields of the physical sciences, such as optics, quantum mechanics, and general relativity. They tend to occur in problems involving periodic motion, or in the analysis of partial differential equation (PDE) boundary value problems possessing elliptic symmetry.
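Although Mathieu's equation has no closed-form solutions in elementary functions, it is straightforward to integrate numerically. A minimal sketch using SciPy's general-purpose ODE solver follows; the function name and parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: numerically integrate y'' + (a - 2 q cos 2x) y = 0 as a
# first-order system s = (y, y').
def mathieu_solve(a, q, y0, dy0, x_end, n=200):
    rhs = lambda x, s: [s[1], -(a - 2 * q * np.cos(2 * x)) * s[0]]
    xs = np.linspace(0.0, x_end, n)
    sol = solve_ivp(rhs, (0.0, x_end), [y0, dy0], t_eval=xs,
                    rtol=1e-10, atol=1e-12)
    return sol.t, sol.y[0]

# Sanity check: for q = 0 the equation reduces to y'' + a y = 0, so with
# a = 1 and y(0) = 1, y'(0) = 0 the solution is cos(x).
xs, ys = mathieu_solve(a=1.0, q=0.0, y0=1.0, dy0=0.0, x_end=np.pi)
```

For nonzero q the same routine produces the (generally non-periodic) solutions discussed below; the periodic ce and se functions appear only at the characteristic values of a.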
== Definition ==
=== Mathieu functions ===
In some usages, Mathieu function refers to solutions of the Mathieu differential equation for arbitrary values of a and q. When no confusion can arise, other authors use the term to refer specifically to π- or 2π-periodic solutions, which exist only for special values of a and q. More precisely, for given (real) q such periodic solutions exist for an infinite number of values of a, called characteristic numbers, conventionally indexed as two separate sequences a_n(q) and b_n(q), for n = 1, 2, 3, …. The corresponding functions are denoted ce_n(x, q) and se_n(x, q), respectively. They are sometimes also referred to as cosine-elliptic and sine-elliptic, or Mathieu functions of the first kind.
As a result of assuming that q is real, both the characteristic numbers and associated functions are real-valued. The functions ce_n(x, q) and se_n(x, q) can be further classified by parity and periodicity (both with respect to x), as follows:
The indexing with the integer n, besides serving to arrange the characteristic numbers in ascending order, is convenient in that ce_n(x, q) and se_n(x, q) become proportional to cos nx and sin nx as q → 0. With n being an integer, this gives rise to the classification of ce_n and se_n as Mathieu functions (of the first kind) of integral order. For general a and q, solutions besides these can be defined, including Mathieu functions of fractional order as well as non-periodic solutions.
=== Modified Mathieu functions ===
Closely related are the modified Mathieu functions, also known as radial Mathieu functions, which are solutions of Mathieu's modified differential equation
{\displaystyle {\frac {d^{2}y}{dx^{2}}}-(a-2q\cosh 2x)y=0,}
which can be related to the original Mathieu equation by taking x → ±ix. Accordingly, the modified Mathieu functions of the first kind of integral order, denoted by Ce_n(x, q) and Se_n(x, q), are defined from
{\displaystyle {\begin{aligned}{\text{Ce}}_{n}(x,q)&={\text{ce}}_{n}({\rm {i}}x,q).\\{\text{Se}}_{n}(x,q)&=-{\rm {i}}\,{\text{se}}_{n}({\rm {i}}x,q).\end{aligned}}}
These functions are real-valued when x is real.
=== Normalization ===
A common normalization, which will be adopted throughout this article, is to demand
{\displaystyle \int _{0}^{2\pi }{\text{ce}}_{n}(x,q)^{2}dx=\int _{0}^{2\pi }{\text{se}}_{n}(x,q)^{2}dx=\pi }
as well as to require ce_n(x, q) → +cos nx and se_n(x, q) → +sin nx as q → 0.
== Floquet theory ==
Many properties of the Mathieu differential equation can be deduced from the general theory of ordinary differential equations with periodic coefficients, called Floquet theory. The central result is Floquet's theorem:
It is natural to associate the characteristic numbers a(q) with those values of a which result in σ = ±1. Note, however, that the theorem only guarantees the existence of at least one solution satisfying y(x + π) = σy(x), whereas Mathieu's equation in fact has two independent solutions for any given a, q. Indeed, it turns out that with a equal to one of the characteristic numbers, Mathieu's equation has only one periodic solution (that is, with period π or 2π), and this solution is one of the ce_n(x, q), se_n(x, q). The other solution is nonperiodic, denoted fe_n(x, q) or ge_n(x, q), respectively, and referred to as a Mathieu function of the second kind. This result can be formally stated as Ince's theorem:
An equivalent statement of Floquet's theorem is that Mathieu's equation admits a complex-valued solution of the form
{\displaystyle F(a,q,x)=\exp(i\mu \,x)\,P(a,q,x),}
where μ is a complex number, the Floquet exponent (or sometimes Mathieu exponent), and P is a complex-valued function periodic in x with period π. An example P(a, q, x) is plotted to the right.
=== Stability in parameter space ===
The Mathieu equation has two parameters. For almost all choices of these parameters, Floquet theory says that any solution either remains bounded for all time (stable) or diverges to infinity (unstable).
If the Mathieu equation is parameterized as
{\displaystyle {\ddot {x}}+k(1-m\cos(t))x=0}
where k ∈ ℝ and m ≥ 0, then the regions of stability and instability are separated by the following curves:
{\displaystyle m(k)={\begin{cases}2{\sqrt {\frac {k(k-1)(k-4)}{3k-8}}},&k<0;\\[4pt]{\frac {1}{4}}\left[{\sqrt {(9-4k)(13-20k)}}-(9-4k)\right],&k<{\frac {1}{4}};\\[10pt]{\frac {1}{4}}\left[9-4k\mp {\sqrt {(9-4k)(13-20k)}}\right],&{\frac {1}{4}}<k<{\frac {13}{20}};\\[6pt]{\sqrt {\frac {2(k-1)(k-4)(k-9)}{k-5}}},&{\frac {13}{20}}<k<1;\\[2pt]2{\sqrt {\frac {k(k-1)(k-4)}{3k-8}}},&k>1.\end{cases}}}
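Stability for a given (k, m) can also be tested numerically with Floquet theory: integrating the two fundamental solutions of ẍ + k(1 − m cos t)x = 0 over one period 2π gives the monodromy matrix, and the motion is bounded exactly when the magnitude of its trace is at most 2. A sketch under these assumptions (function name illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: Floquet stability test for x'' + k(1 - m cos t) x = 0.
def mathieu_stable(k, m):
    """Integrate the two fundamental solutions over one period T = 2*pi;
    the motion is bounded when |trace of the monodromy matrix| < 2."""
    rhs = lambda t, s: [s[1], -k * (1 - m * np.cos(t)) * s[0]]
    T = 2 * np.pi
    cols = []
    for s0 in ([1.0, 0.0], [0.0, 1.0]):
        sol = solve_ivp(rhs, (0.0, T), s0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    M = np.array(cols).T          # monodromy matrix
    return bool(abs(np.trace(M)) < 2.0)
```

For m = 0 this reduces to simple harmonic motion (bounded for k > 0 away from resonance), while k = 1/4 with m > 0 sits inside the principal parametric-resonance tongue and is unstable.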
== Other types of Mathieu functions ==
=== Second kind ===
Since Mathieu's equation is a second-order differential equation, one can construct two linearly independent solutions. Floquet theory says that if a is equal to a characteristic number, one of these solutions can be taken to be periodic, and the other nonperiodic. The periodic solution is one of the ce_n(x, q) and se_n(x, q), called a Mathieu function of the first kind of integral order. The nonperiodic one is denoted fe_n(x, q) or ge_n(x, q), respectively, and is called a Mathieu function of the second kind (of integral order). The nonperiodic solutions are unstable, that is, they diverge as z → ±∞.
The second solutions corresponding to the modified Mathieu functions Ce_n(x, q) and Se_n(x, q) are naturally defined as Fe_n(x, q) = −i fe_n(ix, q) and Ge_n(x, q) = ge_n(ix, q).
=== Fractional order ===
Mathieu functions of fractional order can be defined as those solutions ce_p(x, q) and se_p(x, q), with p a non-integer, which turn into cos px and sin px as q → 0. If p is irrational, they are non-periodic; however, they remain bounded as x → ∞.
An important property of the solutions ce_p(x, q) and se_p(x, q), for p non-integer, is that they exist for the same value of a. In contrast, when p is an integer, ce_p(x, q) and se_p(x, q) never occur for the same value of a. (See Ince's theorem above.)
These classifications are summarized in the table below. The modified Mathieu function counterparts are defined similarly.
== Explicit representation and computation ==
=== First kind ===
Mathieu functions of the first kind can be represented as Fourier series:
{\displaystyle {\begin{aligned}{\text{ce}}_{2n}(x,q)&=\sum _{r=0}^{\infty }A_{2r}^{(2n)}(q)\cos(2rx)\\{\text{ce}}_{2n+1}(x,q)&=\sum _{r=0}^{\infty }A_{2r+1}^{(2n+1)}(q)\cos \left[(2r+1)x\right]\\{\text{se}}_{2n+1}(x,q)&=\sum _{r=0}^{\infty }B_{2r+1}^{(2n+1)}(q)\sin \left[(2r+1)x\right]\\{\text{se}}_{2n+2}(x,q)&=\sum _{r=0}^{\infty }B_{2r+2}^{(2n+2)}(q)\sin \left[(2r+2)x\right]\\\end{aligned}}}
The expansion coefficients A_j^(i)(q) and B_j^(i)(q) are functions of q but independent of x. By substitution into the Mathieu equation, they can be shown to obey three-term recurrence relations in the lower index. For instance, for each ce_2n one finds
{\displaystyle {\begin{aligned}aA_{0}-qA_{2}&=0\\(a-4)A_{2}-q(A_{4}+2A_{0})&=0\\(a-4r^{2})A_{2r}-q(A_{2r+2}+A_{2r-2})&=0,\quad r\geq 2\end{aligned}}}
Being a second-order recurrence in the index
{\displaystyle 2r}
, one can always find two independent solutions
{\displaystyle X_{2r}}
and
{\displaystyle Y_{2r}}
such that the general solution can be expressed as a linear combination of the two:
{\displaystyle A_{2r}=c_{1}X_{2r}+c_{2}Y_{2r}}
. Moreover, in this particular case, an asymptotic analysis shows that one possible choice of fundamental solutions has the property
{\displaystyle {\begin{aligned}X_{2r}&=r^{-2r-1}\left(-{\frac {e^{2}q}{4}}\right)^{r}\left[1+{\mathcal {O}}(r^{-1})\right]\\Y_{2r}&=r^{2r-1}\left(-{\frac {4}{e^{2}q}}\right)^{r}\left[1+{\mathcal {O}}(r^{-1})\right]\end{aligned}}}
In particular,
{\displaystyle X_{2r}}
is finite whereas
{\displaystyle Y_{2r}}
diverges. Writing
{\displaystyle A_{2r}=c_{1}X_{2r}+c_{2}Y_{2r}}
, we therefore see that in order for the Fourier series representation of
{\displaystyle {\text{ce}}_{2n}}
to converge,
{\displaystyle a}
must be chosen such that
{\displaystyle c_{2}=0.}
These choices of
{\displaystyle a}
correspond to the characteristic numbers.
In general, however, the solution of a three-term recurrence with variable coefficients cannot be represented in a simple manner, and hence there is no simple way to determine
{\displaystyle a}
from the condition
{\displaystyle c_{2}=0}
. Moreover, even if the approximate value of a characteristic number is known, it cannot be used to obtain the coefficients
{\displaystyle A_{2r}}
by numerically iterating the recurrence towards increasing
{\displaystyle r}
. The reason is that as long as
{\displaystyle a}
only approximates a characteristic number,
{\displaystyle c_{2}}
is not identically
{\displaystyle 0}
and the divergent solution
{\displaystyle Y_{2r}}
eventually dominates for large enough
{\displaystyle r}
.
To overcome these issues, more sophisticated semi-analytical/numerical approaches are required, for instance using a continued fraction expansion, casting the recurrence as a matrix eigenvalue problem, or implementing a backwards recurrence algorithm. The complexity of the three-term recurrence relation is one of the reasons there are few simple formulas and identities involving Mathieu functions.
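The matrix-eigenvalue route mentioned above can be sketched in a few lines of Python (a minimal illustration under assumed truncation size, not a production implementation): truncating the three-term recurrence for the A_{2r} coefficients of ce_{2n} turns the convergence condition c_2 = 0 into a finite eigenvalue problem whose eigenvalues approximate the characteristic numbers a_0, a_2, a_4, ….

```python
import numpy as np

def even_a_characteristic_numbers(q, size=30):
    """Approximate the characteristic numbers a_0, a_2, a_4, ... of the even
    pi-periodic Mathieu functions ce_{2n}(x, q) by truncating the three-term
    recurrence for the Fourier coefficients A_{2r} and diagonalizing the
    resulting (nearly tridiagonal) matrix."""
    M = np.zeros((size, size))
    # Row 0: a A_0 - q A_2 = 0
    M[0, 1] = q
    # Row 1: (a - 4) A_2 - q (A_4 + 2 A_0) = 0  (note the factor 2 on A_0)
    M[1, 0] = 2.0 * q
    M[1, 1] = 4.0
    M[1, 2] = q
    # Rows r >= 2: (a - 4 r^2) A_{2r} - q (A_{2r+2} + A_{2r-2}) = 0
    for r in range(2, size):
        M[r, r] = 4.0 * r ** 2
        M[r, r - 1] = q
        if r + 1 < size:
            M[r, r + 1] = q
    return np.sort(np.linalg.eigvals(M).real)

a_even = even_a_characteristic_numbers(1.0)
# a_even[0] approximates a_0(1) ~ -0.4551386
```

The matrix is similar to a symmetric tridiagonal one (rescale the first coefficient by sqrt(2)), so its eigenvalues are real; the low-lying eigenvalues converge rapidly with the truncation size because the discarded coefficients decay factorially.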
In practice, Mathieu functions and the corresponding characteristic numbers can be calculated using pre-packaged software, such as Mathematica, Maple, MATLAB, and SciPy. For small values of
{\displaystyle q}
and low order
{\displaystyle n}
, they can also be expressed perturbatively as power series of
{\displaystyle q}
, which can be useful in physical applications.
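As a concrete illustration of the SciPy route, the functions below are from `scipy.special`; note that the angular argument of the periodic Mathieu functions is expected in degrees.

```python
from scipy.special import mathieu_a, mathieu_b, mathieu_cem

# Characteristic numbers a_n(q) and b_n(q); as q -> 0 they reduce to n**2.
a1 = mathieu_a(1, 0.0)
b2 = mathieu_b(2, 0.0)

# ce_1(x, q) and its derivative; as q -> 0, ce_1 reduces to cos(x).
y, yp = mathieu_cem(1, 0.0, 60.0)   # x is given in degrees, so this is cos(pi/3)
```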
=== Second kind ===
There are several ways to represent Mathieu functions of the second kind. One representation is in terms of Bessel functions:
{\displaystyle {\begin{aligned}{\text{fe}}_{2n}(x,q)&=-{\frac {\pi \gamma _{n}}{2}}\sum _{r=0}^{\infty }(-1)^{r+n}A_{2r}^{(2n)}(-q)\ {\text{Im}}[J_{r}({\sqrt {q}}e^{ix})Y_{r}({\sqrt {q}}e^{-ix})],\quad {\text{where }}\gamma _{n}=\left\{{\begin{array}{cc}{\sqrt {2}},&{\text{ if }}n=0\\2n,&{\text{ if }}n\geq 1\end{array}}\right.\\{\text{fe}}_{2n+1}(x,q)&={\frac {\pi {\sqrt {q}}}{2}}\sum _{r=0}^{\infty }(-1)^{r+n}A_{2r+1}^{(2n+1)}(-q)\ {\text{Im}}[J_{r}({\sqrt {q}}e^{ix})Y_{r+1}({\sqrt {q}}e^{-ix})+J_{r+1}({\sqrt {q}}e^{ix})Y_{r}({\sqrt {q}}e^{-ix})]\\{\text{ge}}_{2n+1}(x,q)&=-{\frac {\pi {\sqrt {q}}}{2}}\sum _{r=0}^{\infty }(-1)^{r+n}B_{2r+1}^{(2n+1)}(-q)\ {\text{Re}}[J_{r}({\sqrt {q}}e^{ix})Y_{r+1}({\sqrt {q}}e^{-ix})-J_{r+1}({\sqrt {q}}e^{ix})Y_{r}({\sqrt {q}}e^{-ix})]\\{\text{ge}}_{2n+2}(x,q)&=-{\frac {\pi q}{4(n+1)}}\sum _{r=0}^{\infty }(-1)^{r+n}B_{2r+2}^{(2n+2)}(-q)\ {\text{Re}}[J_{r}({\sqrt {q}}e^{ix})Y_{r+2}({\sqrt {q}}e^{-ix})-J_{r+2}({\sqrt {q}}e^{ix})Y_{r}({\sqrt {q}}e^{-ix})]\end{aligned}}}
where
{\displaystyle n,q>0}
, and
{\displaystyle J_{r}(x)}
and
{\displaystyle Y_{r}(x)}
are Bessel functions of the first and second kind.
=== Modified functions ===
A traditional approach for numerical evaluation of the modified Mathieu functions is through Bessel function product series. For large
{\displaystyle n}
and
{\displaystyle q}
, the form of the series must be chosen carefully to avoid subtraction errors.
== Properties ==
There are relatively few analytic expressions and identities involving Mathieu functions. Moreover, unlike many other special functions, the solutions of Mathieu's equation cannot in general be expressed in terms of hypergeometric functions. This can be seen by transformation of Mathieu's equation to algebraic form, using the change of variable
{\displaystyle t=\cos(x)}
:
{\displaystyle (1-t^{2}){\frac {d^{2}y}{dt^{2}}}-t\,{\frac {dy}{dt}}+(a+2q(1-2t^{2}))\,y=0.}
Since this equation has an irregular singular point at infinity, it cannot be transformed into an equation of the hypergeometric type.
=== Qualitative behavior ===
For small
{\displaystyle q}
,
{\displaystyle {\text{ce}}_{n}}
and
{\displaystyle {\text{se}}_{n}}
behave similarly to
{\displaystyle \cos nx}
and
{\displaystyle \sin nx}
. For arbitrary
{\displaystyle q}
, they may deviate significantly from their trigonometric counterparts; however, they remain periodic in general. Moreover, for any real
{\displaystyle q}
,
{\displaystyle {\text{ce}}_{m}(x,q)}
and
{\displaystyle {\text{se}}_{m+1}(x,q)}
have exactly
{\displaystyle m}
simple zeros in
{\displaystyle 0<x<\pi }
, and as
{\displaystyle q\rightarrow \infty }
the zeros cluster about
{\displaystyle x=\pi /2}
.
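The zero count stated above is easy to confirm numerically. The sketch below (using `scipy.special.mathieu_cem`, whose angular argument is in degrees, with an arbitrarily chosen q) counts the sign changes of ce_3 on the open interval (0, π):

```python
import numpy as np
from scipy.special import mathieu_cem

q = 5.0
x_deg = np.linspace(0.0, 180.0, 2001)[1:-1]   # open interval (0, pi), in degrees
vals = mathieu_cem(3, q, x_deg)[0]
sign_changes = int(np.sum(np.abs(np.diff(np.sign(vals))) > 0))
# For m = 3 the theorem predicts exactly 3 simple zeros, hence 3 sign changes.
```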
For
{\displaystyle q>0}
and as
{\displaystyle x\rightarrow \infty }
the modified Mathieu functions tend to behave as damped periodic functions.
In the following, the
{\displaystyle A}
and
{\displaystyle B}
factors from the Fourier expansions for
{\displaystyle {\text{ce}}_{n}}
and
{\displaystyle {\text{se}}_{n}}
may be referenced (see Explicit representation and computation). They depend on
{\displaystyle q}
and
{\displaystyle n}
but are independent of
{\displaystyle x}
.
=== Reflections and translations ===
Due to their parity and periodicity,
{\displaystyle {\text{ce}}_{n}}
and
{\displaystyle {\text{se}}_{n}}
have simple properties under reflections and translations by multiples of
{\displaystyle \pi }
:
{\displaystyle {\begin{aligned}&{\text{ce}}_{n}(x+\pi )=(-1)^{n}{\text{ce}}_{n}(x)\\&{\text{se}}_{n}(x+\pi )=(-1)^{n}{\text{se}}_{n}(x)\\&{\text{ce}}_{n}(x+\pi /2)=(-1)^{n}{\text{ce}}_{n}(-x+\pi /2)\\&{\text{se}}_{n+1}(x+\pi /2)=(-1)^{n}{\text{se}}_{n+1}(-x+\pi /2)\end{aligned}}}
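A quick numerical spot-check of the first two translation identities, with an arbitrarily chosen q and x (SciPy's `mathieu_cem`/`mathieu_sem` take the angle in degrees, so a shift by π is a shift by 180):

```python
from scipy.special import mathieu_cem, mathieu_sem

q, x = 2.0, 37.0                                # x in degrees
lhs_ce = mathieu_cem(3, q, x + 180.0)[0]        # ce_3(x + pi)
rhs_ce = (-1) ** 3 * mathieu_cem(3, q, x)[0]    # (-1)^3 ce_3(x)
lhs_se = mathieu_sem(4, q, x + 180.0)[0]        # se_4(x + pi)
rhs_se = (-1) ** 4 * mathieu_sem(4, q, x)[0]    # (-1)^4 se_4(x)
```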
One can also write functions with negative
{\displaystyle q}
in terms of those with positive
{\displaystyle q}
:
{\displaystyle {\begin{aligned}&{\text{ce}}_{2n+1}(x,-q)=(-1)^{n}{\text{se}}_{2n+1}(-x+\pi /2,q)\\&{\text{ce}}_{2n+2}(x,-q)=(-1)^{n}{\text{ce}}_{2n+2}(-x+\pi /2,q)\\&{\text{se}}_{2n+1}(x,-q)=(-1)^{n}{\text{ce}}_{2n+1}(-x+\pi /2,q)\\&{\text{se}}_{2n+2}(x,-q)=(-1)^{n}{\text{se}}_{2n+2}(-x+\pi /2,q)\end{aligned}}}
Moreover,
{\displaystyle {\begin{aligned}&a_{2n+1}(q)=b_{2n+1}(-q)\\&b_{2n+2}(q)=b_{2n+2}(-q)\end{aligned}}}
=== Orthogonality and completeness ===
Like their trigonometric counterparts
{\displaystyle \cos nx}
and
{\displaystyle \sin nx}
, the periodic Mathieu functions
{\displaystyle {\text{ce}}_{n}(x,q)}
and
{\displaystyle {\text{se}}_{n}(x,q)}
satisfy orthogonality relations
{\displaystyle {\begin{aligned}&\int _{0}^{2\pi }{\text{ce}}_{n}{\text{ce}}_{m}\,dx=\int _{0}^{2\pi }{\text{se}}_{n}{\text{se}}_{m}\,dx=\delta _{nm}\pi \\&\int _{0}^{2\pi }{\text{ce}}_{n}{\text{se}}_{m}\,dx=0\end{aligned}}}
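These relations can be verified by direct quadrature; a minimal sketch using `scipy.special.mathieu_cem` (angular argument in degrees, so the grid is converted to radians for the integral) and an arbitrarily chosen q:

```python
import numpy as np
from scipy.special import mathieu_cem
from scipy.integrate import trapezoid

q = 2.0
x_deg = np.linspace(0.0, 360.0, 40001)   # one full period, 0..2*pi
x_rad = np.deg2rad(x_deg)
ce1 = mathieu_cem(1, q, x_deg)[0]
ce2 = mathieu_cem(2, q, x_deg)[0]
norm = trapezoid(ce1 * ce1, x_rad)       # delta_{nm} * pi with n = m = 1, i.e. pi
cross = trapezoid(ce1 * ce2, x_rad)      # n != m, so the integral vanishes
```

The trapezoidal rule is spectrally accurate here because the integrand is smooth and periodic over the integration interval.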
Moreover, with
{\displaystyle q}
fixed and
{\displaystyle a}
treated as the eigenvalue, the Mathieu equation is of Sturm–Liouville form. This implies that the eigenfunctions
{\displaystyle {\text{ce}}_{n}(x,q)}
and
{\displaystyle {\text{se}}_{n}(x,q)}
form a complete set, i.e. any
{\displaystyle \pi }
- or
{\displaystyle 2\pi }
-periodic function of
{\displaystyle x}
can be expanded as a series in
{\displaystyle {\text{ce}}_{n}(x,q)}
and
{\displaystyle {\text{se}}_{n}(x,q)}
.
=== Integral identities ===
Solutions of Mathieu's equation satisfy a class of integral identities with respect to kernels
{\displaystyle \chi (x,x')}
that are solutions of
{\displaystyle {\frac {\partial ^{2}\chi }{\partial x^{2}}}-{\frac {\partial ^{2}\chi }{\partial x'^{2}}}=2q\left(\cos 2x-\cos 2x'\right)\chi }
More precisely, if
{\displaystyle \phi (x)}
solves Mathieu's equation with given
{\displaystyle a}
and
{\displaystyle q}
, then the integral
{\displaystyle \psi (x)\equiv \int _{C}\chi (x,x')\phi (x')dx'}
where
{\displaystyle C}
is a path in the complex plane, also solves Mathieu's equation with the same
{\displaystyle a}
and
{\displaystyle q}
, provided the following conditions are met:
{\displaystyle \chi (x,x')}
solves
{\displaystyle {\frac {\partial ^{2}\chi }{\partial x^{2}}}-{\frac {\partial ^{2}\chi }{\partial x'^{2}}}=2q\left(\cos 2x-\cos 2x'\right)\chi }
In the regions under consideration,
{\displaystyle \psi (x)}
exists and
{\displaystyle \chi (x,x')}
is analytic
{\displaystyle \left(\phi {\frac {\partial \chi }{\partial x'}}-{\frac {\partial \phi }{\partial x'}}\chi \right)}
has the same value at the endpoints of
{\displaystyle C}
Using an appropriate change of variables, the equation for
{\displaystyle \chi }
can be transformed into the wave equation and solved. For instance, one solution is
{\displaystyle \chi (x,x')=\sinh(2q^{1/2}\sin x\sin x')}
. Examples of identities obtained in this way are
{\displaystyle {\begin{aligned}{\text{se}}_{2n+1}(x,q)&={\frac {{\text{se}}'_{2n+1}(0,q)}{\pi q^{1/2}B_{1}^{(2n+1)}}}\int _{0}^{\pi }\sinh(2q^{1/2}\sin x\sin x'){\text{se}}_{2n+1}(x',q)dx'\qquad (q>0)\\{\text{Ce}}_{2n}(x,q)&={\frac {{\text{ce}}_{2n}(\pi /2,q)}{\pi A_{0}^{(2n)}}}\int _{0}^{\pi }\cos(2q^{1/2}\cosh x\cos x'){\text{ce}}_{2n}(x',q)dx'\qquad \ \ \ (q>0)\end{aligned}}}
Identities of the latter type are useful for studying asymptotic properties of the modified Mathieu functions.
There also exist integral relations between functions of the first and second kind, for instance:
{\displaystyle {\text{fe}}_{2n}(x,q)=2n\int _{0}^{x}{\text{ce}}_{2n}(\tau ,-q)\ J_{0}\left({\sqrt {2q(\cos 2x-\cos 2\tau )}}\right)d\tau ,\qquad n\geq 1}
valid for any complex
{\displaystyle x}
and real
{\displaystyle q}
.
=== Asymptotic expansions ===
The following asymptotic expansions hold for
{\displaystyle q>0}
,
{\displaystyle {\text{Im}}(x)=0}
,
{\displaystyle {\text{Re}}(x)\rightarrow \infty }
, and
{\displaystyle 2q^{1/2}\cosh x\simeq q^{1/2}e^{x}}
:
{\displaystyle {\begin{aligned}{\text{Ce}}_{2n}(x,q)&\sim \left({\frac {2}{\pi q^{1/2}}}\right)^{1/2}{\frac {{\text{ce}}_{2n}(0,q){\text{ce}}_{2n}(\pi /2,q)}{A_{0}^{(2n)}}}\cdot e^{-x/2}\sin \left(q^{1/2}e^{x}+{\frac {\pi }{4}}\right)\\{\text{Ce}}_{2n+1}(x,q)&\sim \left({\frac {2}{\pi q^{3/2}}}\right)^{1/2}{\frac {{\text{ce}}_{2n+1}(0,q){\text{ce}}'_{2n+1}(\pi /2,q)}{A_{1}^{(2n+1)}}}\cdot e^{-x/2}\cos \left(q^{1/2}e^{x}+{\frac {\pi }{4}}\right)\\{\text{Se}}_{2n+1}(x,q)&\sim -\left({\frac {2}{\pi q^{3/2}}}\right)^{1/2}{\frac {{\text{se}}'_{2n+1}(0,q){\text{se}}_{2n+1}(\pi /2,q)}{B_{1}^{(2n+1)}}}\cdot e^{-x/2}\cos \left(q^{1/2}e^{x}+{\frac {\pi }{4}}\right)\\{\text{Se}}_{2n+2}(x,q)&\sim \left({\frac {2}{\pi q^{5/2}}}\right)^{1/2}{\frac {{\text{se}}'_{2n+2}(0,q){\text{se}}'_{2n+2}(\pi /2,q)}{B_{2}^{(2n+2)}}}\cdot e^{-x/2}\sin \left(q^{1/2}e^{x}+{\frac {\pi }{4}}\right)\end{aligned}}}
Thus, the modified Mathieu functions decay exponentially for large real argument. Similar asymptotic expansions can be written down for
{\displaystyle {\text{Fe}}_{n}}
and
{\displaystyle {\text{Ge}}_{n}}
; these also decay exponentially for large real argument.
For the even and odd periodic Mathieu functions
{\displaystyle ce,se}
and the associated characteristic numbers
{\displaystyle a}
one can also derive asymptotic expansions for large
{\displaystyle q}
. For the characteristic numbers in particular, one has with
{\displaystyle N}
approximately an odd integer, i.e.
{\displaystyle N\approx N_{0}=2n+1,n=1,2,3,...,}
{\displaystyle {\begin{aligned}a(N)={}&-2q+2q^{1/2}N-{\frac {1}{2^{3}}}(N^{2}+1)-{\frac {1}{2^{7}q^{1/2}}}N(N^{2}+3)-{\frac {1}{2^{12}q}}(5N^{4}+34N^{2}+9)\\&-{\frac {1}{2^{17}q^{3/2}}}N(33N^{4}+410N^{2}+405)-{\frac {1}{2^{20}q^{2}}}(63N^{6}+1260N^{4}+2943N^{2}+41807)+{\mathcal {O}}(q^{-5/2})\end{aligned}}}
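As a numerical sanity check, the expansion (truncated here after the q^{-2} term) can be compared against SciPy's `mathieu_a`. For N = N0 = 1 the expansion approximates a_0(q) at large q, since the a_0/b_1 splitting is of order e^{-4 sqrt(q)} and hence negligible:

```python
from scipy.special import mathieu_a

def a_asymptotic(N, q):
    """Large-q expansion of a(N), truncated after the q**-2 term."""
    s = q ** 0.5
    return (-2 * q + 2 * s * N
            - (N ** 2 + 1) / 2 ** 3
            - N * (N ** 2 + 3) / (2 ** 7 * s)
            - (5 * N ** 4 + 34 * N ** 2 + 9) / (2 ** 12 * q)
            - N * (33 * N ** 4 + 410 * N ** 2 + 405) / (2 ** 17 * s ** 3)
            - (63 * N ** 6 + 1260 * N ** 4 + 2943 * N ** 2 + 41807)
              / (2 ** 20 * q ** 2))

q = 100.0
approx = a_asymptotic(1, q)   # N0 = 1 corresponds to ce_0 (and se_1)
exact = mathieu_a(0, q)       # a_0(q) computed by SciPy
```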
Observe the symmetry here in replacing
{\displaystyle q^{1/2}}
and
{\displaystyle N}
by
{\displaystyle -q^{1/2}}
and
{\displaystyle -N}
, which is a significant feature of the expansion. Terms of this expansion have been obtained explicitly up to and including the term of order
{\displaystyle |q|^{-7/2}}
. Here
{\displaystyle N}
is only approximately an odd integer because in the limit of
{\displaystyle q\to \infty }
all minimum segments of the periodic potential
{\displaystyle \cos 2x}
become effectively independent harmonic oscillators (hence
{\displaystyle N_{0}}
an odd integer). By decreasing
{\displaystyle q}
, tunneling through the barriers becomes possible (in physical language), leading to a splitting of the characteristic numbers
{\displaystyle a\to a_{\mp }}
(in quantum mechanics called eigenvalues) corresponding to even and odd periodic Mathieu functions. This splitting is obtained with boundary conditions (in quantum mechanics this provides the splitting of the eigenvalues into energy bands). The boundary conditions are:
{\displaystyle \left({\frac {dce_{N_{0}-1}}{dx}}\right)_{\pi /2}=0,\;\;ce_{N_{0}}(\pi /2)=0,\;\;\left({\frac {dse_{N_{0}}}{dx}}\right)_{\pi /2}=0,\;\;se_{N_{0}+1}(\pi /2)=0.}
Imposing these boundary conditions on the asymptotic periodic Mathieu functions associated with the above expansion for
{\displaystyle a}
one obtains
{\displaystyle N-N_{0}=\mp 2\left({\frac {2}{\pi }}\right)^{1/2}{\frac {(16q^{1/2})^{N_{0}/2}e^{-4q^{1/2}}}{[{\frac {1}{2}}(N_{0}-1)]!}}\left[1-{\frac {3(N_{0}^{2}+1)}{2^{6}q^{1/2}}}+{\frac {1}{2^{13}q}}(9N_{0}^{4}-40N_{0}^{3}+18N_{0}^{2}-136N_{0}+9)+\dots \right].}
The corresponding characteristic numbers or eigenvalues then follow by expansion, i.e.
{\displaystyle a(N)=a(N_{0})+(N-N_{0})\left({\frac {\partial a}{\partial N}}\right)_{N_{0}}+\cdots .}
Insertion of the appropriate expressions above yields the result
{\displaystyle {\begin{aligned}a(N)\to a_{\mp }(N_{0})={}&-2q+2q^{1/2}N_{0}-{\frac {1}{2^{3}}}(N_{0}^{2}+1)-{\frac {1}{2^{7}q^{1/2}}}N_{0}(N_{0}^{2}+3)-{\frac {1}{2^{12}q}}(5N_{0}^{4}+34N_{0}^{2}+9)-\cdots \\&\mp {\frac {(16q^{1/2})^{N_{0}/2+1}e^{-4q^{1/2}}}{(8\pi )^{1/2}[{\frac {1}{2}}(N_{0}-1)]!}}{\bigg [}1-{\frac {N_{0}}{2^{6}q^{1/2}}}(3N_{0}^{2}+8N_{0}+3)+\cdots {\bigg ]}.\end{aligned}}}
For
{\displaystyle N_{0}=1,3,5,\dots }
these are the eigenvalues associated with the even Mathieu eigenfunctions
{\displaystyle ce_{N_{0}}}
or
{\displaystyle ce_{N_{0}-1}}
(i.e. with upper, minus sign) and odd Mathieu eigenfunctions
{\displaystyle se_{N_{0}+1}}
or
{\displaystyle se_{N_{0}}}
(i.e. with lower, plus sign). The explicit and normalised expansions of the eigenfunctions can be found in the literature.
Similar asymptotic expansions can be obtained for the solutions of other periodic differential equations, as for Lamé functions and prolate and oblate spheroidal wave functions.
== Applications ==
Mathieu's differential equations appear in a wide range of contexts in engineering, physics, and applied mathematics. Many of these applications fall into one of two general categories: 1) the analysis of partial differential equations in elliptic geometries, and 2) dynamical problems which involve forces that are periodic in either space or time. Examples within both categories are discussed below.
=== Partial differential equations ===
Mathieu functions arise when separation of variables in elliptic coordinates is applied to 1) the Laplace equation in 3 dimensions, and 2) the Helmholtz equation in either 2 or 3 dimensions. Since the Helmholtz equation is a prototypical equation for modeling the spatial variation of classical waves, Mathieu functions can be used to describe a variety of wave phenomena. For instance, in computational electromagnetics they can be used to analyze the scattering of electromagnetic waves off elliptic cylinders, and wave propagation in elliptic waveguides. In general relativity, an exact plane wave solution to the Einstein field equation can be given in terms of Mathieu functions.
More recently, Mathieu functions have been used to solve a special case of the Smoluchowski equation, describing the steady-state statistics of self-propelled particles.
The remainder of this section details the analysis for the two-dimensional Helmholtz equation. In rectangular coordinates, the Helmholtz equation is
{\displaystyle \left({\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {\partial ^{2}}{\partial y^{2}}}\right)\psi +k^{2}\psi =0,}
Elliptic coordinates are defined by
{\displaystyle {\begin{aligned}x&=c\cosh \mu \cos \nu \\y&=c\sinh \mu \sin \nu \end{aligned}}}
where
{\displaystyle 0\leq \mu <\infty }
,
{\displaystyle 0\leq \nu <2\pi }
, and
{\displaystyle c}
is a positive constant. The Helmholtz equation in these coordinates is
{\displaystyle {\frac {1}{c^{2}(\sinh ^{2}\mu +\sin ^{2}\nu )}}\left({\frac {\partial ^{2}}{\partial \mu ^{2}}}+{\frac {\partial ^{2}}{\partial \nu ^{2}}}\right)\psi +k^{2}\psi =0}
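A small check of the coordinate geometry (with arbitrarily chosen values of c and μ for illustration): points with constant μ satisfy the standard equation of an ellipse with semi-axes c·cosh μ and c·sinh μ and foci at (±c, 0).

```python
import numpy as np

c, mu = 2.0, 0.8
nu = np.linspace(0.0, 2.0 * np.pi, 101)
x = c * np.cosh(mu) * np.cos(nu)
y = c * np.sinh(mu) * np.sin(nu)

semi_major = c * np.cosh(mu)
semi_minor = c * np.sinh(mu)
# x^2/a^2 + y^2/b^2 = 1 along the whole curve of constant mu
on_ellipse = x ** 2 / semi_major ** 2 + y ** 2 / semi_minor ** 2
# sqrt(a^2 - b^2) = c, since cosh^2 - sinh^2 = 1: the foci are at (+/- c, 0)
focal_distance = np.sqrt(semi_major ** 2 - semi_minor ** 2)
```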
The constant
{\displaystyle \mu }
curves are confocal ellipses with focal length
{\displaystyle c}
; hence, these coordinates are convenient for solving the Helmholtz equation on domains with elliptic boundaries. Separation of variables via
{\displaystyle \psi (\mu ,\nu )=F(\mu )G(\nu )}
yields the Mathieu equations
{\displaystyle {\begin{aligned}&{\frac {d^{2}F}{d\mu ^{2}}}-\left(a-{\frac {c^{2}k^{2}}{2}}\cosh 2\mu \right)F=0\\&{\frac {d^{2}G}{d\nu ^{2}}}+\left(a-{\frac {c^{2}k^{2}}{2}}\cos 2\nu \right)G=0\\\end{aligned}}}
where
{\displaystyle a}
is a separation constant.
As a specific physical example, the Helmholtz equation can be interpreted as describing normal modes of an elastic membrane under uniform tension. In this case, the following physical conditions are imposed:
Periodicity with respect to
{\displaystyle \nu }
, i.e.
{\displaystyle \psi (\mu ,\nu )=\psi (\mu ,\nu +2\pi )}
Continuity of displacement across the interfocal line:
{\displaystyle \psi (0,\nu )=\psi (0,-\nu )}
Continuity of derivative across the interfocal line:
{\displaystyle \psi _{\mu }(0,\nu )=-\psi _{\mu }(0,-\nu )}
For given
{\displaystyle k}
, this restricts the solutions to those of the form
{\displaystyle {\text{Ce}}_{n}(\mu ,q){\text{ce}}_{n}(\nu ,q)}
and
{\displaystyle {\text{Se}}_{n}(\mu ,q){\text{se}}_{n}(\nu ,q)}
, where
{\displaystyle q=c^{2}k^{2}/4}
. This is the same as restricting allowable values of
{\displaystyle a}
, for given
{\displaystyle k}
. Restrictions on
{\displaystyle k}
then arise due to imposition of physical conditions on some bounding surface, such as an elliptic boundary defined by
{\displaystyle \mu =\mu _{0}>0}
. For instance, clamping the membrane at
{\displaystyle \mu =\mu _{0}}
imposes
{\displaystyle \psi (\mu _{0},\nu )=0}
, which in turn requires
{\displaystyle {\begin{aligned}{\text{Ce}}_{n}(\mu _{0},q)=0\\{\text{Se}}_{n}(\mu _{0},q)=0\end{aligned}}}
These conditions define the normal modes of the system.
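For illustration, the clamped-boundary condition can be solved for q numerically. The sketch below is assumption-laden: it uses `scipy.special.mathieu_modcem1`, the even modified Mathieu function of the first kind, whose zeros in q coincide with those of Ce_n up to normalization, and arbitrarily chosen values of c, μ0, and n. It scans for a sign change and refines the root with Brent's method; the mode wavenumber then follows from q = c²k²/4.

```python
import numpy as np
from scipy.special import mathieu_modcem1
from scipy.optimize import brentq

c, mu0, n = 1.0, 1.0, 0

def boundary(q):
    # value of the even modified Mathieu function at the clamped boundary mu = mu0
    return mathieu_modcem1(n, q, mu0)[0]

# scan for the first sign change in q, then refine with Brent's method
qs = np.linspace(0.1, 40.0, 400)
vals = [boundary(q) for q in qs]
idx = next(i for i in range(len(vals) - 1) if vals[i] * vals[i + 1] < 0)
q_root = brentq(boundary, qs[idx], qs[idx + 1])
k_mode = 2.0 * np.sqrt(q_root) / c     # from q = c**2 k**2 / 4
```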
=== Dynamical problems ===
In dynamical problems with periodically varying forces, the equation of motion sometimes takes the form of Mathieu's equation. In such cases, knowledge of the general properties of Mathieu's equation— particularly with regard to stability of the solutions—can be essential for understanding qualitative features of the physical dynamics. A classic example along these lines is the inverted pendulum. Other examples are
vibrations of a string with periodically varying tension
stability of railroad rails as trains drive over them
seasonally forced population dynamics
the phenomenon of parametric resonance in forced oscillators
motion of ions in a quadrupole ion trap
the Stark effect for a rotating electric dipole
the Floquet theory of the stability of limit cycles
analytic traveling-wave solutions of the Kardar-Parisi-Zhang interface growing equation with periodic noise term
=== Quantum mechanics ===
Mathieu functions play a role in certain quantum mechanical systems, particularly those with spatially periodic potentials such as the quantum pendulum and crystalline lattices.
The modified Mathieu equation also arises when describing the quantum mechanics of singular potentials. For the particular singular potential
{\displaystyle V(r)=g^{2}/r^{4}}
the radial Schrödinger equation
{\displaystyle {\frac {d^{2}y}{dr^{2}}}+\left[k^{2}-{\frac {\ell (\ell +1)}{r^{2}}}-{\frac {g^{2}}{r^{4}}}\right]y=0}
can be converted into the equation
{\displaystyle {\frac {d^{2}\varphi }{dz^{2}}}+\left[2h^{2}\cosh 2z-\left(\ell +{\frac {1}{2}}\right)^{2}\right]\varphi =0.}
The transformation is achieved with the following substitutions
{\displaystyle y=r^{1/2}\varphi ,\quad r=\gamma e^{z},\quad \gamma ={\frac {ig}{h}},\quad h^{2}=ikg,\quad h=e^{i\pi /4}(kg)^{1/2}.}
By solving the Schrödinger equation (for this particular potential) in terms of solutions of the modified Mathieu equation, scattering properties such as the S-matrix and the absorptivity can be obtained.
The Schrödinger equation with a cosine potential was first solved in 1928 by Strutt.
== See also ==
Almost Mathieu operator
Bessel function
Hill differential equation
Inverted pendulum
Lamé function
List of mathematical functions
Monochromatic electromagnetic plane wave
== Notes ==
== References ==
== External links ==
Weisstein, Eric W. "Mathieu function". MathWorld.
List of equations and identities for Mathieu Functions functions.wolfram.com
"Mathieu functions", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Timothy Jones, Mathieu's Equations and the Ideal rf-Paul Trap (2006)
Mathieu equation, EqWorld
NIST Digital Library of Mathematical Functions: Mathieu Functions and Hill's Equation
In mathematics, the Jacobi elliptic functions are a set of basic elliptic functions. They are found in the description of the motion of a pendulum, as well as in the design of electronic elliptic filters. While trigonometric functions are defined with reference to a circle, the Jacobi elliptic functions are a generalization which refer to other conic sections, the ellipse in particular. The relation to trigonometric functions is contained in the notation, for example, by the matching notation
{\displaystyle \operatorname {sn} }
for
{\displaystyle \sin }
. The Jacobi elliptic functions are used more often in practical problems than the Weierstrass elliptic functions as they do not require notions of complex analysis to be defined and/or understood. They were introduced by Carl Gustav Jakob Jacobi (1829). Carl Friedrich Gauss had already studied special Jacobi elliptic functions in 1797, the lemniscate elliptic functions in particular, but his work was published much later.
== Overview ==
There are twelve Jacobi elliptic functions denoted by
{\displaystyle \operatorname {pq} (u,m)}
, where
{\displaystyle \mathrm {p} }
and
{\displaystyle \mathrm {q} }
are any of the letters
{\displaystyle \mathrm {c} }
,
{\displaystyle \mathrm {s} }
,
{\displaystyle \mathrm {n} }
, and
{\displaystyle \mathrm {d} }
. (Functions of the form
{\displaystyle \operatorname {pp} (u,m)}
are trivially set to unity for notational completeness.)
{\displaystyle u}
is the argument, and
{\displaystyle m}
is the parameter, both of which may be complex. In fact, the Jacobi elliptic functions are meromorphic in both
{\displaystyle u}
and
{\displaystyle m}
. The distribution of the zeros and poles in the
{\displaystyle u}
-plane is well-known. However, questions of the distribution of the zeros and poles in the
{\displaystyle m}
-plane remain to be investigated.
In the complex plane of the argument
{\displaystyle u}
, the twelve functions form a repeating lattice of simple poles and zeroes. Depending on the function, one repeating parallelogram, or unit cell, will have sides of length
{\displaystyle 2K}
or
{\displaystyle 4K}
on the real axis, and
{\displaystyle 2K'}
or
{\displaystyle 4K'}
on the imaginary axis, where
{\displaystyle K=K(m)}
and
{\displaystyle K'=K(1-m)}
are known as the quarter periods with
{\displaystyle K(\cdot )}
being the elliptic integral of the first kind. The nature of the unit cell can be determined by inspecting the "auxiliary rectangle" (generally a parallelogram), which is a rectangle formed by the origin
{\displaystyle (0,0)}
at one corner, and
{\displaystyle (K,K')}
as the diagonally opposite corner. As in the diagram, the four corners of the auxiliary rectangle are named
{\displaystyle \mathrm {s} }
,
{\displaystyle \mathrm {c} }
,
{\displaystyle \mathrm {d} }
, and
{\displaystyle \mathrm {n} }
, going counter-clockwise from the origin. The function
{\displaystyle \operatorname {pq} (u,m)}
will have a zero at the
{\displaystyle \mathrm {p} }
corner and a pole at the
{\displaystyle \mathrm {q} }
corner. The twelve functions correspond to the twelve ways of arranging these poles and zeroes in the corners of the rectangle.
When the argument
{\displaystyle u}
and parameter
{\displaystyle m}
are real, with
{\displaystyle 0<m<1}
,
{\displaystyle K}
and
{\displaystyle K'}
will be real and the auxiliary parallelogram will in fact be a rectangle, and the Jacobi elliptic functions will all be real valued on the real line.
Since the Jacobi elliptic functions are doubly periodic in
{\displaystyle u}
, they factor through a torus – in effect, their domain can be taken to be a torus, just as cosine and sine are in effect defined on a circle. Instead of having only one circle, we now have the product of two circles, one real and the other imaginary. The complex plane can be replaced by a complex torus. The circumference of the first circle is
{\displaystyle 4K}
and the second
{\displaystyle 4K'}
, where
{\displaystyle K}
and
{\displaystyle K'}
are the quarter periods. Each function has two zeroes and two poles at opposite positions on the torus. Among the points
{\displaystyle 0}
,
{\displaystyle K}
,
{\displaystyle K+iK'}
,
{\displaystyle iK'}
there is one zero and one pole.
The Jacobi elliptic functions are then doubly periodic, meromorphic functions satisfying the following properties:
There is a simple zero at the corner
{\displaystyle \mathrm {p} }
, and a simple pole at the corner
{\displaystyle \mathrm {q} }
.
The complex number
{\displaystyle \mathrm {p} -\mathrm {q} }
is equal to half the period of the function
{\displaystyle \operatorname {pq} u}
; that is, the function
{\displaystyle \operatorname {pq} u}
is periodic in the direction
{\displaystyle \operatorname {pq} }
, with the period being
{\displaystyle 2(\mathrm {p} -\mathrm {q} )}
. The function
{\displaystyle \operatorname {pq} u}
is also periodic in the other two directions
{\displaystyle \mathrm {pp} '}
and
{\displaystyle \mathrm {pq} '}
, with periods such that
{\displaystyle \mathrm {p} -\mathrm {p} '}
and
{\displaystyle \mathrm {p} -\mathrm {q} '}
are quarter periods.
== Notation ==
The elliptic functions can be given in a variety of notations, which can make the subject unnecessarily confusing. Elliptic functions are functions of two variables. The first variable might be given in terms of the amplitude $\varphi$, or more commonly, in terms of $u$ given below. The second variable might be given in terms of the parameter $m$, or as the elliptic modulus $k$, where $k^2=m$, or in terms of the modular angle $\alpha$, where $m=\sin^2\alpha$. The complements of $k$ and $m$ are defined as $m'=1-m$ and $k'=\sqrt{m'}$. These four terms are used below without comment to simplify various expressions.
The twelve Jacobi elliptic functions are generally written as $\operatorname{pq}(u,m)$, where $\mathrm{p}$ and $\mathrm{q}$ are any of the letters $\mathrm{c}$, $\mathrm{s}$, $\mathrm{n}$, and $\mathrm{d}$. Functions of the form $\operatorname{pp}(u,m)$ are trivially set to unity for notational completeness. The "major" functions are generally taken to be $\operatorname{cn}(u,m)$, $\operatorname{sn}(u,m)$ and $\operatorname{dn}(u,m)$, from which all the other functions can be derived, and expressions are often written solely in terms of these three functions; however, various symmetries and generalizations are often most conveniently expressed using the full set. (This notation is due to Gudermann and Glaisher and is not Jacobi's original notation.)
Throughout this article, $\operatorname{pq}(u,t^2)=\operatorname{pq}(u;t)$.
The functions are notationally related to each other by the multiplication rule (arguments suppressed):
$$\operatorname{pq}\cdot\operatorname{p'q'}=\operatorname{pq'}\cdot\operatorname{p'q}$$
from which other commonly used relationships can be derived:
$$\frac{\operatorname{pr}}{\operatorname{qr}}=\operatorname{pq}$$
$$\operatorname{pr}\cdot\operatorname{rq}=\operatorname{pq}$$
$$\frac{1}{\operatorname{qp}}=\operatorname{pq}$$
The multiplication rule follows immediately from the identification of the elliptic functions with the Neville theta functions
$$\operatorname{pq}(u,m)=\frac{\theta_{\operatorname{p}}(u,m)}{\theta_{\operatorname{q}}(u,m)}$$
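The multiplication rule and its derived relations can be checked numerically. The sketch below assumes SciPy is available; `scipy.special.ellipj` returns the three major functions for real arguments, and the remaining nine follow from the quotient convention $\operatorname{pq}=\operatorname{pn}/\operatorname{qn}$ (with $\operatorname{nn}=1$).

```python
import numpy as np
from scipy.special import ellipj

u, m = 0.8, 0.7
sn, cn, dn, _ = ellipj(u, m)  # the three major functions

# Build all twelve pq functions from sn, cn, dn via pq = pn/qn (nn = 1).
vals = {"s": sn, "c": cn, "d": dn, "n": 1.0}
pq = lambda p, q: vals[p] / vals[q]

# Multiplication rule pq * p'q' = pq' * p'q, e.g. sc * nd = sd * nc:
assert np.isclose(pq("s", "c") * pq("n", "d"), pq("s", "d") * pq("n", "c"))
# Derived relations pr/qr = pq and pr * rq = pq, with p=s, q=c, r=d:
assert np.isclose(pq("s", "d") / pq("c", "d"), pq("s", "c"))
assert np.isclose(pq("s", "d") * pq("d", "c"), pq("s", "c"))
```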
Also note that:
$$K(m)=K(k^2)=\int_0^1\frac{dt}{\sqrt{(1-t^2)(1-mt^2)}}=\int_0^1\frac{dt}{\sqrt{(1-t^2)(1-k^2t^2)}}.$$
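The two argument conventions ($m$ versus $k$) are a common source of bugs in practice. As a quick check, assuming SciPy is available: `scipy.special.ellipk` takes the parameter $m=k^2$, not the modulus $k$, and agrees with the integral above (evaluated here in the equivalent Legendre form $\int_0^{\pi/2}d\theta/\sqrt{1-m\sin^2\theta}$, obtained by $t=\sin\theta$, which is smoother numerically).

```python
import numpy as np
from scipy.special import ellipk
from scipy.integrate import quad

m = 0.5
k = np.sqrt(m)  # modulus, so k**2 == m

# K(m) by direct quadrature of the Legendre form of the integral above
K_int, _ = quad(lambda t: 1.0 / np.sqrt(1 - m * np.sin(t)**2), 0, np.pi/2)

# scipy's ellipk expects the *parameter* m = k^2, not the modulus k
assert np.isclose(ellipk(m), K_int)
assert np.isclose(ellipk(k**2), K_int)
```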
== Definition in terms of inverses of elliptic integrals ==
There is a definition, relating the elliptic functions to the inverse of the incomplete elliptic integral of the first kind $F$. These functions take the parameters $u$ and $m$ as inputs. The $\varphi$ that satisfies
$$u=F(\varphi,m)=\int_0^\varphi\frac{\mathrm{d}\theta}{\sqrt{1-m\sin^2\theta}}$$
is called the Jacobi amplitude:
$$\operatorname{am}(u,m)=\varphi.$$
In this framework, the elliptic sine sn u (Latin: sinus amplitudinis) is given by
$$\operatorname{sn}(u,m)=\sin\operatorname{am}(u,m)$$
and the elliptic cosine cn u (Latin: cosinus amplitudinis) is given by
$$\operatorname{cn}(u,m)=\cos\operatorname{am}(u,m)$$
and the delta amplitude dn u (Latin: delta amplitudinis) by
$$\operatorname{dn}(u,m)=\frac{\mathrm{d}}{\mathrm{d}u}\operatorname{am}(u,m).$$
In the above, the value $m$ is a free parameter, usually taken to be real such that $0\leq m\leq 1$ (but it can be complex in general), and so the elliptic functions can be thought of as being given by two variables, $u$ and the parameter $m$. The remaining nine elliptic functions are easily built from the above three ($\operatorname{sn}$, $\operatorname{cn}$, $\operatorname{dn}$), and are given in a section below. Note that when $\varphi=\pi/2$, $u$ equals the quarter period $K$.
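These defining relations can be verified numerically. A sketch assuming SciPy is available: `scipy.special.ellipkinc` computes $F(\varphi,m)$, and `scipy.special.ellipj` returns $\operatorname{sn}$, $\operatorname{cn}$, $\operatorname{dn}$ and the amplitude for real arguments. The check for $\operatorname{dn}$ uses the equivalent closed form $\operatorname{dn}=\sqrt{1-m\sin^2\varphi}$, which follows from differentiating the integral for $u$.

```python
import numpy as np
from scipy.special import ellipj, ellipkinc, ellipk

m, phi = 0.7, 0.9
u = ellipkinc(phi, m)            # u = F(phi, m), so am(u, m) = phi

sn, cn, dn, am = ellipj(u, m)    # scipy returns sn, cn, dn and the amplitude
assert np.isclose(am, phi)                              # am inverts F
assert np.isclose(sn, np.sin(phi))                      # sn = sin am
assert np.isclose(cn, np.cos(phi))                      # cn = cos am
assert np.isclose(dn, np.sqrt(1 - m * np.sin(phi)**2))  # dn = d(am)/du

# phi = pi/2 gives u = K(m), the quarter period
assert np.isclose(ellipkinc(np.pi/2, m), ellipk(m))
```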
In the most general setting, $\operatorname{am}(u,m)$ is a multivalued function (in $u$) with infinitely many logarithmic branch points (the branches differ by integer multiples of $2\pi$), namely the points $2sK(m)+(4t+1)K(1-m)i$ and $2sK(m)+(4t+3)K(1-m)i$, where $s,t\in\mathbb{Z}$. This multivalued function can be made single-valued by cutting the complex plane along the line segments joining these branch points (the cutting can be done in non-equivalent ways, giving non-equivalent single-valued functions), thus making $\operatorname{am}(u,m)$ analytic everywhere except on the branch cuts. In contrast, $\sin\operatorname{am}(u,m)$ and other elliptic functions have no branch points, give consistent values for every branch of $\operatorname{am}$, and are meromorphic in the whole complex plane. Since every elliptic function is meromorphic in the whole complex plane (by definition), $\operatorname{am}(u,m)$ (when considered as a single-valued function) is not an elliptic function.
However, a particular cutting for $\operatorname{am}(u,m)$ can be made in the $u$-plane by line segments from $2sK(m)+(4t+1)K(1-m)i$ to $2sK(m)+(4t+3)K(1-m)i$ with $s,t\in\mathbb{Z}$; then it only remains to define $\operatorname{am}(u,m)$ at the branch cuts by continuity from some direction. Then $\operatorname{am}(u,m)$ becomes single-valued and singly-periodic in $u$ with the minimal period $4iK(1-m)$, and it has singularities at the logarithmic branch points mentioned above. If $m\in\mathbb{R}$ and $m\leq 1$, $\operatorname{am}(u,m)$ is continuous in $u$ on the real line. When $m>1$, the branch cuts of $\operatorname{am}(u,m)$ in the $u$-plane cross the real line at $2(2s+1)K(1/m)/\sqrt{m}$ for $s\in\mathbb{Z}$; therefore for $m>1$, $\operatorname{am}(u,m)$ is not continuous in $u$ on the real line and jumps by $2\pi$ at the discontinuities.
But defining $\operatorname{am}(u,m)$ this way gives rise to very complicated branch cuts in the $m$-plane (not the $u$-plane); they have not yet been fully described.
Let
$$E(\varphi,m)=\int_0^\varphi\sqrt{1-m\sin^2\theta}\,\mathrm{d}\theta$$
be the incomplete elliptic integral of the second kind with parameter $m$.
Then the Jacobi epsilon function can be defined as
$$\mathcal{E}(u,m)=E(\operatorname{am}(u,m),m)$$
for $u\in\mathbb{R}$ and $0<m<1$, and by analytic continuation in each of the variables otherwise: the Jacobi epsilon function is meromorphic in the whole complex plane (in both $u$ and $m$). Alternatively, throughout both the $u$-plane and the $m$-plane,
$$\mathcal{E}(u,m)=\int_0^u\operatorname{dn}^2(t,m)\,\mathrm{d}t;$$
$\mathcal{E}$ is well-defined in this way because all residues of $t\mapsto\operatorname{dn}(t,m)^2$ are zero, so the integral is path-independent. So the Jacobi epsilon function relates the incomplete elliptic integral of the first kind to the incomplete elliptic integral of the second kind:
$$E(\varphi,m)=\mathcal{E}(F(\varphi,m),m).$$
The Jacobi epsilon function is not an elliptic function, but it appears when differentiating the Jacobi elliptic functions with respect to the parameter.
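The two characterizations of $\mathcal{E}$ agree numerically for real $u$ and $0<m<1$. A sketch assuming SciPy is available: `scipy.special.ellipeinc` computes $E(\varphi,m)$, and the integral of $\operatorname{dn}^2$ is evaluated by quadrature.

```python
import numpy as np
from scipy.special import ellipj, ellipeinc
from scipy.integrate import quad

m, u = 0.6, 1.1

# Definition via the amplitude: Eps(u, m) = E(am(u, m), m)
_, _, _, am = ellipj(u, m)
eps = ellipeinc(am, m)

# Equivalent integral form: integral of dn^2 from 0 to u
eps_int, _ = quad(lambda t: ellipj(t, m)[2]**2, 0, u)
assert np.isclose(eps, eps_int)
```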
The Jacobi zn function is defined by
$$\operatorname{zn}(u,m)=\mathcal{E}(u,m)-\frac{E(m)}{K(m)}u.$$
It is a singly periodic function which is meromorphic in $u$, but not in $m$ (due to the branch cuts of $E$ and $K$). Its minimal period in $u$ is $2K(m)$. It is related to the Jacobi zeta function by
$$Z(\varphi,m)=\operatorname{zn}(F(\varphi,m),m).$$
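The period $2K(m)$ can be seen directly: since $\mathcal{E}(u+2K,m)=\mathcal{E}(u,m)+2E(m)$, the linear term $-\bigl(E(m)/K(m)\bigr)u$ exactly cancels the increment. A numerical sketch, assuming SciPy is available ($\mathcal{E}$ is computed here by quadrature of $\operatorname{dn}^2$):

```python
import numpy as np
from scipy.special import ellipj, ellipe, ellipk
from scipy.integrate import quad

m = 0.5
K, E = ellipk(m), ellipe(m)   # complete integrals of the first/second kind

def zn(u):
    # zn(u, m) = Eps(u, m) - (E(m)/K(m)) u, with Eps as integral of dn^2
    eps, _ = quad(lambda t: ellipj(t, m)[2]**2, 0, u)
    return eps - (E / K) * u

# zn is singly periodic in u with period 2K(m)
u = 0.4
assert np.isclose(zn(u + 2*K), zn(u))
```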
Historically, the Jacobi elliptic functions were first defined by using the amplitude. In more modern texts on elliptic functions, the Jacobi elliptic functions are defined by other means, for example by ratios of theta functions (see below), and the amplitude is ignored.
In modern terms, the relation to elliptic integrals would be expressed by $\operatorname{sn}(F(\varphi,m),m)=\sin\varphi$ (or $\operatorname{cn}(F(\varphi,m),m)=\cos\varphi$) instead of $\operatorname{am}(F(\varphi,m),m)=\varphi$.
== Definition as trigonometry: the Jacobi ellipse ==
$\cos\varphi,\sin\varphi$ are defined on the unit circle, with radius r = 1 and angle $\varphi=$ arc length of the unit circle measured from the positive x-axis. Similarly, Jacobi elliptic functions are defined on the unit ellipse, with a = 1. Let
$$x^2+\frac{y^2}{b^2}=1,\quad b>1,\qquad m=1-\frac{1}{b^2},\quad 0<m<1,\qquad x=r\cos\varphi,\quad y=r\sin\varphi;$$
then:
$$r(\varphi,m)=\frac{1}{\sqrt{1-m\sin^2\varphi}}\,.$$
For each angle $\varphi$ the parameter
$$u=u(\varphi,m)=\int_0^\varphi r(\theta,m)\,d\theta$$
(the incomplete elliptic integral of the first kind) is computed. On the unit circle ($a=b=1$), $u$ would be an arc length. However, the relation of $u$ to the arc length of an ellipse is more complicated.
Let $P=(x,y)=(r\cos\varphi,r\sin\varphi)$ be a point on the ellipse, and let $P'=(x',y')=(\cos\varphi,\sin\varphi)$ be the point where the unit circle intersects the line between $P$ and the origin $O$.
Then the familiar relations from the unit circle:
$$x'=\cos\varphi,\quad y'=\sin\varphi$$
read for the ellipse:
$$x'=\operatorname{cn}(u,m),\quad y'=\operatorname{sn}(u,m).$$
So the projections of the intersection point $P'$ of the line $OP$ with the unit circle on the x- and y-axes are simply $\operatorname{cn}(u,m)$ and $\operatorname{sn}(u,m)$. These projections may be interpreted as 'definition as trigonometry'. In short:
$$\operatorname{cn}(u,m)=\frac{x}{r(\varphi,m)},\quad\operatorname{sn}(u,m)=\frac{y}{r(\varphi,m)},\quad\operatorname{dn}(u,m)=\frac{1}{r(\varphi,m)}.$$
For the $x$ and $y$ values of the point $P$ with $u$ and parameter $m$ we get, after inserting the relation
$$r(\varphi,m)=\frac{1}{\operatorname{dn}(u,m)}$$
into $x=r(\varphi,m)\cos(\varphi),\ y=r(\varphi,m)\sin(\varphi)$, that:
$$x=\frac{\operatorname{cn}(u,m)}{\operatorname{dn}(u,m)},\quad y=\frac{\operatorname{sn}(u,m)}{\operatorname{dn}(u,m)}.$$
The latter relations for the x- and y-coordinates of points on the unit ellipse may be considered as a generalization of the relations $x=\cos\varphi,\ y=\sin\varphi$ for the coordinates of points on the unit circle.
The following table summarizes the expressions for all Jacobi elliptic functions pq(u,m) in the variables (x,y,r) and (φ,dn) with $r=\sqrt{x^2+y^2}$.
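The ellipse construction above can be checked end to end: pick $b$ and $\varphi$, compute $u$ by quadrature, and compare the projections against `scipy.special.ellipj` (a sketch assuming SciPy is available).

```python
import numpy as np
from scipy.special import ellipj
from scipy.integrate import quad

b = 2.0
m = 1 - 1/b**2                 # parameter from the ellipse, 0 < m < 1
phi = 0.8

r = 1.0 / np.sqrt(1 - m * np.sin(phi)**2)                      # r(phi, m)
u, _ = quad(lambda t: 1.0 / np.sqrt(1 - m * np.sin(t)**2), 0, phi)

sn, cn, dn, _ = ellipj(u, m)
x, y = r * np.cos(phi), r * np.sin(phi)   # the point P on the ellipse

assert np.isclose(x**2 + y**2 / b**2, 1.0)  # P lies on the unit ellipse
assert np.isclose(cn, x / r)                # cn = x/r (= cos phi)
assert np.isclose(sn, y / r)                # sn = y/r (= sin phi)
assert np.isclose(dn, 1 / r)                # dn = 1/r
```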
== Definition in terms of the Jacobi theta functions ==
=== Using elliptic integrals ===
Equivalently, Jacobi's elliptic functions can be defined in terms of the theta functions. With $z,\tau\in\mathbb{C}$ such that $\operatorname{Im}\tau>0$, let
$$\theta_1(z|\tau)=\sum_{n=-\infty}^{\infty}(-1)^{n-\frac{1}{2}}e^{(2n+1)iz+\pi i\tau\left(n+\frac{1}{2}\right)^2},$$
$$\theta_2(z|\tau)=\sum_{n=-\infty}^{\infty}e^{(2n+1)iz+\pi i\tau\left(n+\frac{1}{2}\right)^2},$$
$$\theta_3(z|\tau)=\sum_{n=-\infty}^{\infty}e^{2niz+\pi i\tau n^2},$$
$$\theta_4(z|\tau)=\sum_{n=-\infty}^{\infty}(-1)^n e^{2niz+\pi i\tau n^2}$$
and let $\theta_2(\tau)=\theta_2(0|\tau)$, $\theta_3(\tau)=\theta_3(0|\tau)$, $\theta_4(\tau)=\theta_4(0|\tau)$. Then with $K=K(m)$, $K'=K(1-m)$, $\zeta=\pi u/(2K)$ and $\tau=iK'/K$,
$$\begin{aligned}\operatorname{sn}(u,m)&=\frac{\theta_3(\tau)\theta_1(\zeta|\tau)}{\theta_2(\tau)\theta_4(\zeta|\tau)},\\\operatorname{cn}(u,m)&=\frac{\theta_4(\tau)\theta_2(\zeta|\tau)}{\theta_2(\tau)\theta_4(\zeta|\tau)},\\\operatorname{dn}(u,m)&=\frac{\theta_4(\tau)\theta_3(\zeta|\tau)}{\theta_3(\tau)\theta_4(\zeta|\tau)}.\end{aligned}$$
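For real $0<m<1$ the nome $q=e^{i\pi\tau}=e^{-\pi K'/K}$ is real and small, so truncated theta series converge very fast. The sketch below (assuming SciPy is available) uses the standard real trigonometric forms of the four series and checks the $\operatorname{sn}$ quotient against `scipy.special.ellipj`; the truncation length `N` is an arbitrary choice.

```python
import numpy as np
from scipy.special import ellipj, ellipk

def theta1(z, q, N=20):
    n = np.arange(N)
    return 2 * np.sum((-1)**n * q**((n + 0.5)**2) * np.sin((2*n + 1) * z))

def theta2(z, q, N=20):
    n = np.arange(N)
    return 2 * np.sum(q**((n + 0.5)**2) * np.cos((2*n + 1) * z))

def theta3(z, q, N=20):
    n = np.arange(1, N)
    return 1 + 2 * np.sum(q**(n**2) * np.cos(2 * n * z))

def theta4(z, q, N=20):
    n = np.arange(1, N)
    return 1 + 2 * np.sum((-1)**n * q**(n**2) * np.cos(2 * n * z))

m, u = 0.5, 0.7
K, Kp = ellipk(m), ellipk(1 - m)
q = np.exp(-np.pi * Kp / K)     # nome q = e^{i pi tau}, tau = iK'/K
zeta = np.pi * u / (2 * K)

sn_theta = theta3(0, q) * theta1(zeta, q) / (theta2(0, q) * theta4(zeta, q))
sn_ref, _, _, _ = ellipj(u, m)
assert np.isclose(sn_theta, sn_ref)
```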
The Jacobi zn function can be expressed by theta functions as well:
$$\begin{aligned}\operatorname{zn}(u,m)&=\frac{\pi}{2K}\frac{\theta_4'(\zeta|\tau)}{\theta_4(\zeta|\tau)}\\&=\frac{\pi}{2K}\frac{\theta_3'(\zeta|\tau)}{\theta_3(\zeta|\tau)}+m\frac{\operatorname{sn}(u,m)\operatorname{cn}(u,m)}{\operatorname{dn}(u,m)}\\&=\frac{\pi}{2K}\frac{\theta_2'(\zeta|\tau)}{\theta_2(\zeta|\tau)}+\frac{\operatorname{dn}(u,m)\operatorname{sn}(u,m)}{\operatorname{cn}(u,m)}\\&=\frac{\pi}{2K}\frac{\theta_1'(\zeta|\tau)}{\theta_1(\zeta|\tau)}-\frac{\operatorname{cn}(u,m)\operatorname{dn}(u,m)}{\operatorname{sn}(u,m)}\end{aligned}$$
where $'$ denotes the partial derivative with respect to the first variable.
=== Using modular inversion ===
In fact, the definition of the Jacobi elliptic functions in Whittaker & Watson is stated a little differently than the one given above (but is equivalent to it) and relies on modular inversion: the function $\lambda$, defined by
$$\lambda(\tau)=\frac{\theta_2(\tau)^4}{\theta_3(\tau)^4},$$
assumes every value in $\mathbb{C}-\{0,1\}$ once and only once in
$$F_1-(\partial F_1\cap\{\tau\in\mathbb{H}:\operatorname{Re}\tau<0\})$$
where $\mathbb{H}$ is the upper half-plane in the complex plane, $\partial F_1$ is the boundary of $F_1$ and
$$F_1=\{\tau\in\mathbb{H}:\left|\operatorname{Re}\tau\right|\leq 1,\left|\operatorname{Re}(1/\tau)\right|\leq 1\}.$$
In this way, each $m\,\overset{\text{def}}{=}\,\lambda(\tau)\in\mathbb{C}-\{0,1\}$ can be associated with one and only one $\tau$. Then Whittaker & Watson define the Jacobi elliptic functions by
$$\begin{aligned}\operatorname{sn}(u,m)&=\frac{\theta_3(\tau)\theta_1(\zeta|\tau)}{\theta_2(\tau)\theta_4(\zeta|\tau)},\\\operatorname{cn}(u,m)&=\frac{\theta_4(\tau)\theta_2(\zeta|\tau)}{\theta_2(\tau)\theta_4(\zeta|\tau)},\\\operatorname{dn}(u,m)&=\frac{\theta_4(\tau)\theta_3(\zeta|\tau)}{\theta_3(\tau)\theta_4(\zeta|\tau)}\end{aligned}$$
where $\zeta=u/\theta_3(\tau)^2$.
In the book, they place an additional restriction on $m$ (that $m\notin(-\infty,0)\cup(1,\infty)$), but it is in fact not a necessary restriction (see the Cox reference). Also, if $m=0$ or $m=1$, the Jacobi elliptic functions degenerate to non-elliptic functions, as described below.
== Definition in terms of Neville theta functions ==
The Jacobi elliptic functions can be defined very simply using the Neville theta functions:
$$\operatorname{pq}(u,m)=\frac{\theta_{\operatorname{p}}(u,m)}{\theta_{\operatorname{q}}(u,m)}$$
Simplifications of complicated products of the Jacobi elliptic functions are often made easier using these identities.
== Jacobi transformations ==
=== The Jacobi imaginary transformations ===
The Jacobi imaginary transformations relate various functions of the imaginary variable i u or, equivalently, relations between various values of the m parameter. In terms of the major functions (p. 506):
$$\operatorname{cn}(u,m)=\operatorname{nc}(i\,u,1-m)$$
$$\operatorname{sn}(u,m)=-i\operatorname{sc}(i\,u,1-m)$$
$$\operatorname{dn}(u,m)=\operatorname{dc}(i\,u,1-m)$$
Using the multiplication rule, all other functions may be expressed in terms of the above three. The transformations may be generally written as
$$\operatorname{pq}(u,m)=\gamma_{\operatorname{pq}}\operatorname{pq}'(i\,u,1-m).$$
The following table gives the $\gamma_{\operatorname{pq}}\operatorname{pq}'(i\,u,1-m)$ for the specified pq(u,m). (The arguments $(i\,u,1-m)$ are suppressed.)
Since the hyperbolic trigonometric functions are proportional to the circular trigonometric functions with imaginary arguments, it follows that the Jacobi functions will yield the hyperbolic functions for m = 1 (p. 249). In the figure, the Jacobi curve has degenerated to two vertical lines at x = 1 and x = −1.
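The m = 1 degeneration is easy to confirm numerically: sn → tanh while cn and dn both → sech. A sketch assuming SciPy is available (`scipy.special.ellipj` accepts m = 1 for real arguments):

```python
import numpy as np
from scipy.special import ellipj

# At m = 1 the Jacobi functions degenerate to hyperbolic functions:
# sn -> tanh, cn -> sech, dn -> sech
u = np.linspace(-2, 2, 9)
sn, cn, dn, _ = ellipj(u, 1.0)
assert np.allclose(sn, np.tanh(u))
assert np.allclose(cn, 1 / np.cosh(u))
assert np.allclose(dn, 1 / np.cosh(u))
```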
=== The Jacobi real transformations ===
The Jacobi real transformations (p. 308) yield expressions for the elliptic functions in terms of alternate values of m. The transformations may be generally written as
$$\operatorname{pq}(u,m)=\gamma_{\operatorname{pq}}\operatorname{pq}'(k\,u,1/m).$$
The following table gives the $\gamma_{\operatorname{pq}}\operatorname{pq}'(k\,u,1/m)$ for the specified pq(u,m). (The arguments $(k\,u,1/m)$ are suppressed.)
=== Other Jacobi transformations ===
Jacobi's real and imaginary transformations can be combined in various ways to yield three more simple transformations (p. 214). The real and imaginary transformations are two transformations in a group (D3, or the anharmonic group) of six transformations. If
$$\mu_R(m)=1/m$$
is the transformation for the m parameter in the real transformation, and
$$\mu_I(m)=1-m=m'$$
is the transformation of m in the imaginary transformation, then the other transformations can be built up by successive application of these two basic transformations, yielding only three more possibilities:
$$\begin{aligned}\mu_{IR}(m)&=\mu_I(\mu_R(m))&&=-m'/m\\\mu_{RI}(m)&=\mu_R(\mu_I(m))&&=1/m'\\\mu_{RIR}(m)&=\mu_R(\mu_I(\mu_R(m)))&&=-m/m'\end{aligned}$$
These five transformations, along with the identity transformation (μU(m) = m), yield the six-element group. With regard to the Jacobi elliptic functions, the general transformation can be expressed using just three functions:
$$\operatorname{cs}(u,m)=\gamma_i\operatorname{cs'}(\gamma_i u,\mu_i(m))$$
$$\operatorname{ns}(u,m)=\gamma_i\operatorname{ns'}(\gamma_i u,\mu_i(m))$$
$$\operatorname{ds}(u,m)=\gamma_i\operatorname{ds'}(\gamma_i u,\mu_i(m))$$
where i = U, I, IR, R, RI, or RIR, identifying the transformation, γi is a multiplication factor common to these three functions, and the prime indicates the transformed function. The other nine transformed functions can be built up from the above three. The reason the cs, ns, ds functions were chosen to represent the transformation is that the other functions will be ratios of these three (except for their inverses) and the multiplication factors will cancel.
The following table lists the multiplication factors for the three ps functions, the transformed m's, and the transformed function names for each of the six transformations (p. 214). (As usual, k2 = m, 1 − k2 = k12 = m′ and the arguments ($\gamma_i u,\ \mu_i(m)$) are suppressed.)
Thus, for example, we may build the following table for the RIR transformation. The transformation is generally written
$$\operatorname{pq}(u,m)=\gamma_{\operatorname{pq}}\,\operatorname{pq'}(k'\,u,-m/m')$$
(The arguments $(k'\,u,-m/m')$ are suppressed.)
The value of the Jacobi transformations is that any set of Jacobi elliptic functions with any real-valued parameter m can be converted into another set for which $0<m\leq 1/2$ and, for real values of u, the function values will be real (p. 215).
=== Amplitude transformations ===
In the following, the second variable is suppressed and is equal to $m$:
$$\sin(\operatorname{am}(u+v)+\operatorname{am}(u-v))=\frac{2\operatorname{sn}u\operatorname{cn}u\operatorname{dn}v}{1-m\operatorname{sn}^2u\operatorname{sn}^2v},$$
$$\cos(\operatorname{am}(u+v)-\operatorname{am}(u-v))=\frac{\operatorname{cn}^2v-\operatorname{sn}^2v\operatorname{dn}^2u}{1-m\operatorname{sn}^2u\operatorname{sn}^2v}$$
where both identities are valid for all $u,v,m\in\mathbb{C}$ such that both sides are well-defined.
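Both amplitude identities can be spot-checked numerically. A sketch assuming SciPy is available; for $|w|<K(m)$ the amplitude lies in $(-\pi/2,\pi/2)$, so it can be recovered as $\operatorname{am}(w)=\arcsin\operatorname{sn}(w)$, and the test arguments below are chosen small enough for that to hold.

```python
import numpy as np
from scipy.special import ellipj

m = 0.3
u, v = 0.5, 0.2   # |u+v| and |u-v| stay below K(m) ~ 1.71

def am(w):
    # valid branch of the amplitude for |w| < K(m)
    return np.arcsin(ellipj(w, m)[0])

snu, cnu, dnu, _ = ellipj(u, m)
snv, cnv, dnv, _ = ellipj(v, m)

lhs = np.sin(am(u + v) + am(u - v))
rhs = 2 * snu * cnu * dnv / (1 - m * snu**2 * snv**2)
assert np.isclose(lhs, rhs)

lhs2 = np.cos(am(u + v) - am(u - v))
rhs2 = (cnv**2 - snv**2 * dnu**2) / (1 - m * snu**2 * snv**2)
assert np.isclose(lhs2, rhs2)
```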
With
$$m_1=\left(\frac{1-\sqrt{m'}}{1+\sqrt{m'}}\right)^2,$$
we have
$$\cos(\operatorname{am}(u,m)+\operatorname{am}(K-u,m))=-\operatorname{sn}((1-\sqrt{m'})u,1/m_1),$$
$$\sin(\operatorname{am}(\sqrt{m'}u,-m/m')+\operatorname{am}((1-\sqrt{m'})u,1/m_1))=\operatorname{sn}(u,m),$$
$$\sin(\operatorname{am}((1+\sqrt{m'})u,m_1)+\operatorname{am}((1-\sqrt{m'})u,1/m_1))=\sin(2\operatorname{am}(u,m))$$
where all the identities are valid for all $u,m\in\mathbb{C}$ such that both sides are well-defined.
== The Jacobi hyperbola ==
Introducing complex numbers, our ellipse has an associated hyperbola:
$$x^2-\frac{y^2}{b^2}=1$$
from applying Jacobi's imaginary transformation to the elliptic functions in the above equation for x and y:
$$x=\frac{1}{\operatorname{dn}(u,1-m)},\quad y=\frac{\operatorname{sn}(u,1-m)}{\operatorname{dn}(u,1-m)}.$$
It follows that we can put $x=\operatorname{dn}(u,1-m),\ y=\operatorname{sn}(u,1-m)$. So our ellipse has a dual ellipse with m replaced by 1 − m. This leads to the complex torus mentioned in the introduction. Generally, m may be a complex number, but when m is real and m < 0, the curve is an ellipse with major axis in the x direction. At m = 0 the curve is a circle, and for 0 < m < 1, the curve is an ellipse with major axis in the y direction. At m = 1, the curve degenerates into two vertical lines at x = ±1. For m > 1, the curve is a hyperbola. When m is complex but not real, x or y or both are complex and the curve cannot be described on a real x–y diagram.
== Minor functions ==
Reversing the order of the two letters of the function name results in the reciprocals of the three functions above:
$$\operatorname{ns}(u)=\frac{1}{\operatorname{sn}(u)},\qquad\operatorname{nc}(u)=\frac{1}{\operatorname{cn}(u)},\qquad\operatorname{nd}(u)=\frac{1}{\operatorname{dn}(u)}.$$
Similarly, the ratios of the three primary functions correspond to the first letter of the numerator followed by the first letter of the denominator:
$$\operatorname{sc}(u)=\frac{\operatorname{sn}(u)}{\operatorname{cn}(u)},\qquad\operatorname{sd}(u)=\frac{\operatorname{sn}(u)}{\operatorname{dn}(u)},\qquad\operatorname{dc}(u)=\frac{\operatorname{dn}(u)}{\operatorname{cn}(u)},\qquad\operatorname{ds}(u)=\frac{\operatorname{dn}(u)}{\operatorname{sn}(u)},\qquad\operatorname{cs}(u)=\frac{\operatorname{cn}(u)}{\operatorname{sn}(u)},\qquad\operatorname{cd}(u)=\frac{\operatorname{cn}(u)}{\operatorname{dn}(u)}.$$
More compactly, we have
$$\operatorname{pq}(u)=\frac{\operatorname{pn}(u)}{\operatorname{qn}(u)}$$
where p and q are any of the letters s, c, d.
== Periodicity, poles, and residues ==
In the complex plane of the argument u, the Jacobi elliptic functions form a repeating pattern of poles (and zeroes). The residues of the poles all have the same absolute value, differing only in sign. Each function pq(u,m) has an "inverse function" (in the multiplicative sense) qp(u,m) in which the positions of the poles and zeroes are exchanged. The periods of repetition are generally different in the real and imaginary directions, hence the use of the term "doubly periodic" to describe them.
For the Jacobi amplitude and the Jacobi epsilon function:
$$\operatorname{am}(u+2K,m)=\operatorname{am}(u,m)+\pi,$$
$$\operatorname{am}(u+4iK',m)=\operatorname{am}(u,m),$$
$$\mathcal{E}(u+2K,m)=\mathcal{E}(u,m)+2E,$$
$$\mathcal{E}(u+2iK',m)=\mathcal{E}(u,m)+2iE\frac{K'}{K}-\frac{\pi i}{K}$$
where $E(m)$ is the complete elliptic integral of the second kind with parameter $m$.
The double periodicity of the Jacobi elliptic functions may be expressed as:
$$\operatorname{pq}(u+2\alpha K(m)+2i\beta K(1-m)\,,\,m)=(-1)^{\gamma}\operatorname{pq}(u,m)$$
where α and β are any pair of integers. K(⋅) is the complete elliptic integral of the first kind, also known as the quarter period. The power of negative unity (γ) is given in the following table:
When the factor (−1)γ is equal to −1, the equation expresses quasi-periodicity. When it is equal to unity, it expresses full periodicity. It can be seen, for example, that for the entries containing only α when α is even, full periodicity is expressed by the above equation, and the function has full periods of 4K(m) and 2iK(1 − m). Likewise, functions with entries containing only β have full periods of 2K(m) and 4iK(1 − m), while those with α + β have full periods of 4K(m) and 4iK(1 − m).
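The quasi-periodicity over a real shift of 2K is easy to verify: sn and cn flip sign (γ odd), dn does not (γ even), and sn recovers its value over the full real period 4K. A sketch assuming SciPy is available:

```python
import numpy as np
from scipy.special import ellipj, ellipk

m = 0.4
K = ellipk(m)
u = 0.35

sn0, cn0, dn0, _ = ellipj(u, m)
sn1, cn1, dn1, _ = ellipj(u + 2*K, m)

# Shift by 2K (alpha = 1, beta = 0): sn and cn change sign, dn is periodic
assert np.isclose(sn1, -sn0)
assert np.isclose(cn1, -cn0)
assert np.isclose(dn1, dn0)

# Full real period of sn is 4K
sn2, _, _, _ = ellipj(u + 4*K, m)
assert np.isclose(sn2, sn0)
```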
In the diagram on the right, which plots one repeating unit for each function, indicating phase along with the location of poles and zeroes, a number of regularities can be noted: The inverse of each function is opposite the diagonal, and has the same size unit cell, with poles and zeroes exchanged. The pole and zero arrangement in the auxiliary rectangle formed by (0,0), (K,0), (0,K′) and (K,K′) are in accordance with the description of the pole and zero placement described in the introduction above. Also, the size of the white ovals indicating poles are a rough measure of the absolute value of the residue for that pole. The residues of the poles closest to the origin in the figure (i.e. in the auxiliary rectangle) are listed in the following table:
When applicable, poles displaced above by 2K or displaced to the right by 2K′ have the same value but with signs reversed, while those diagonally opposite have the same value. Note that poles and zeroes on the left and lower edges are considered part of the unit cell, while those on the upper and right edges are not.
The information about poles can in fact be used to characterize the Jacobi elliptic functions:
The function {\displaystyle u\mapsto \operatorname {sn} (u,m)} is the unique elliptic function having simple poles at {\displaystyle 2rK+(2s+1)iK'} (with {\displaystyle r,s\in \mathbb {Z} }), with residues {\displaystyle (-1)^{r}/{\sqrt {m}}}, taking the value {\displaystyle 0} at {\displaystyle 0}.
The function {\displaystyle u\mapsto \operatorname {cn} (u,m)} is the unique elliptic function having simple poles at {\displaystyle 2rK+(2s+1)iK'} (with {\displaystyle r,s\in \mathbb {Z} }), with residues {\displaystyle (-1)^{r+s-1}i/{\sqrt {m}}}, taking the value {\displaystyle 1} at {\displaystyle 0}.
The function {\displaystyle u\mapsto \operatorname {dn} (u,m)} is the unique elliptic function having simple poles at {\displaystyle 2rK+(2s+1)iK'} (with {\displaystyle r,s\in \mathbb {Z} }), with residues {\displaystyle (-1)^{s-1}i}, taking the value {\displaystyle 1} at {\displaystyle 0}.
== Special values ==
Setting {\displaystyle m=-1} gives the lemniscate elliptic functions {\displaystyle \operatorname {sl} } and {\displaystyle \operatorname {cl} }:
{\displaystyle \operatorname {sl} u=\operatorname {sn} (u,-1),\quad \operatorname {cl} u=\operatorname {cd} (u,-1)={\frac {\operatorname {cn} (u,-1)}{\operatorname {dn} (u,-1)}}.}
When {\displaystyle m=0} or {\displaystyle m=1}, the Jacobi elliptic functions are reduced to non-elliptic functions:
For the Jacobi amplitude, {\displaystyle \operatorname {am} (u,0)=u} and {\displaystyle \operatorname {am} (u,1)=\operatorname {gd} u} where {\displaystyle \operatorname {gd} } is the Gudermannian function.
In general, if neither p nor q is d, then {\displaystyle \operatorname {pq} (u,1)=\operatorname {pq} (\operatorname {gd} (u),0)}.
== Identities ==
=== Half angle formula ===
{\displaystyle \operatorname {sn} \left({\frac {u}{2}},m\right)=\pm {\sqrt {\frac {1-\operatorname {cn} (u,m)}{1+\operatorname {dn} (u,m)}}}}
{\displaystyle \operatorname {cn} \left({\frac {u}{2}},m\right)=\pm {\sqrt {\frac {\operatorname {cn} (u,m)+\operatorname {dn} (u,m)}{1+\operatorname {dn} (u,m)}}}}
{\displaystyle \operatorname {dn} \left({\frac {u}{2}},m\right)=\pm {\sqrt {\frac {m'+\operatorname {dn} (u,m)+m\operatorname {cn} (u,m)}{1+\operatorname {dn} (u,m)}}}}
=== K formulas ===
Half K formula
{\displaystyle \operatorname {sn} \left[{\tfrac {1}{2}}K(k);k\right]={\frac {\sqrt {2}}{{\sqrt {1+k}}+{\sqrt {1-k}}}}}
{\displaystyle \operatorname {cn} \left[{\tfrac {1}{2}}K(k);k\right]={\frac {{\sqrt {2}}\,{\sqrt[{4}]{1-k^{2}}}}{{\sqrt {1+k}}+{\sqrt {1-k}}}}}
{\displaystyle \operatorname {dn} \left[{\tfrac {1}{2}}K(k);k\right]={\sqrt[{4}]{1-k^{2}}}}
Third K formula
{\displaystyle \operatorname {sn} \left[{\frac {1}{3}}K\left({\frac {x^{3}}{{\sqrt {x^{6}+1}}+1}}\right);{\frac {x^{3}}{{\sqrt {x^{6}+1}}+1}}\right]={\frac {{\sqrt {2{\sqrt {x^{4}-x^{2}+1}}-x^{2}+2}}+{\sqrt {x^{2}+1}}-1}{{\sqrt {2{\sqrt {x^{4}-x^{2}+1}}-x^{2}+2}}+{\sqrt {x^{2}+1}}+1}}}
Here x³ equals the tangent of twice the arctangent of the modulus; that is, x³ = tan(2 arctan k) where k = x³/(√(x⁶+1)+1) is the modulus in the formula above.
This equation also leads to the sn-value of one third of K:
{\displaystyle k^{2}s^{4}-2k^{2}s^{3}+2s-1=0}
{\displaystyle s=\operatorname {sn} \left[{\tfrac {1}{3}}K(k);k\right]}
These equations lead to the other values of the Jacobi functions:
{\displaystyle \operatorname {cn} \left[{\tfrac {2}{3}}K(k);k\right]=1-\operatorname {sn} \left[{\tfrac {1}{3}}K(k);k\right]}
{\displaystyle \operatorname {dn} \left[{\tfrac {2}{3}}K(k);k\right]=1/\operatorname {sn} \left[{\tfrac {1}{3}}K(k);k\right]-1}
Fifth K formula
The following equation has the following solution:
{\displaystyle 4k^{2}x^{6}+8k^{2}x^{5}+2(1-k^{2})^{2}x-(1-k^{2})^{2}=0}
{\displaystyle x={\frac {1}{2}}-{\frac {1}{2}}k^{2}\operatorname {sn} \left[{\tfrac {2}{5}}K(k);k\right]^{2}\operatorname {sn} \left[{\tfrac {4}{5}}K(k);k\right]^{2}={\frac {\operatorname {sn} \left[{\frac {4}{5}}K(k);k\right]^{2}-\operatorname {sn} \left[{\frac {2}{5}}K(k);k\right]^{2}}{2\operatorname {sn} \left[{\frac {2}{5}}K(k);k\right]\operatorname {sn} \left[{\frac {4}{5}}K(k);k\right]}}}
To get the sn-values, we put the solution x into the following expressions:
{\displaystyle \operatorname {sn} \left[{\tfrac {2}{5}}K(k);k\right]=(1+k^{2})^{-1/2}{\sqrt {2(1-x-x^{2})(x^{2}+1-x{\sqrt {x^{2}+1}})}}}
{\displaystyle \operatorname {sn} \left[{\tfrac {4}{5}}K(k);k\right]=(1+k^{2})^{-1/2}{\sqrt {2(1-x-x^{2})(x^{2}+1+x{\sqrt {x^{2}+1}})}}}
=== Relations between squares of the functions ===
Relations between squares of the functions can be derived from two basic relationships (Arguments (u,m) suppressed):
{\displaystyle \operatorname {cn} ^{2}+\operatorname {sn} ^{2}=1}
{\displaystyle \operatorname {cn} ^{2}+m'\operatorname {sn} ^{2}=\operatorname {dn} ^{2}}
where m + m' = 1. Multiplying by any function of the form nq yields more general equations:
{\displaystyle \operatorname {cq} ^{2}+\operatorname {sq} ^{2}=\operatorname {nq} ^{2}}
{\displaystyle \operatorname {cq} ^{2}{}+m'\operatorname {sq} ^{2}=\operatorname {dq} ^{2}}
With q = d, these correspond trigonometrically to the equations for the unit circle ({\displaystyle x^{2}+y^{2}=r^{2}}) and the unit ellipse ({\displaystyle x^{2}{}+m'y^{2}=1}), with x = cd, y = sd and r = nd. Using the multiplication rule, other relationships may be derived. For example:
{\displaystyle -\operatorname {dn} ^{2}{}+m'=-m\operatorname {cn} ^{2}=m\operatorname {sn} ^{2}-m}
{\displaystyle -m'\operatorname {nd} ^{2}{}+m'=-mm'\operatorname {sd} ^{2}=m\operatorname {cd} ^{2}-m}
{\displaystyle m'\operatorname {sc} ^{2}{}+m'=m'\operatorname {nc} ^{2}=\operatorname {dc} ^{2}-m}
{\displaystyle \operatorname {cs} ^{2}{}+m'=\operatorname {ds} ^{2}=\operatorname {ns} ^{2}-m}
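These square relations are easy to spot-check numerically. The sketch below assumes SciPy is available and uses `scipy.special.ellipj` for sn, cn, dn at real arguments:

```python
# Numerical spot-check of cs^2 + m' = ds^2 = ns^2 - m, which follows from
# dividing cn^2 + m' sn^2 = dn^2 and cn^2 + sn^2 = 1 by sn^2.
from scipy.special import ellipj

m, u = 0.3, 0.8
sn, cn, dn, _ = ellipj(u, m)
mp = 1.0 - m                       # complementary parameter m'
cs, ds, ns = cn / sn, dn / sn, 1.0 / sn
assert abs(cs**2 + mp - ds**2) < 1e-10
assert abs(ds**2 - (ns**2 - m)) < 1e-10
```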
=== Addition theorems ===
The functions satisfy the two square relations (dependence on m suppressed)
{\displaystyle \operatorname {cn} ^{2}(u)+\operatorname {sn} ^{2}(u)=1,\,}
{\displaystyle \operatorname {dn} ^{2}(u)+m\operatorname {sn} ^{2}(u)=1.\,}
From this we see that (cn, sn, dn) parametrizes an elliptic curve which is the intersection of the two quadrics defined by the above two equations. We now may define a group law for points on this curve by the addition formulas for the Jacobi functions
From this we see that (cn, sn, dn) parametrizes an elliptic curve which is the intersection of the two quadrics defined by the above two equations. We now may define a group law for points on this curve by the addition formulas for the Jacobi functions
{\displaystyle {\begin{aligned}\operatorname {cn} (x+y)&={\operatorname {cn} (x)\operatorname {cn} (y)-\operatorname {sn} (x)\operatorname {sn} (y)\operatorname {dn} (x)\operatorname {dn} (y) \over {1-m\operatorname {sn} ^{2}(x)\operatorname {sn} ^{2}(y)}},\\[8pt]\operatorname {sn} (x+y)&={\operatorname {sn} (x)\operatorname {cn} (y)\operatorname {dn} (y)+\operatorname {sn} (y)\operatorname {cn} (x)\operatorname {dn} (x) \over {1-m\operatorname {sn} ^{2}(x)\operatorname {sn} ^{2}(y)}},\\[8pt]\operatorname {dn} (x+y)&={\operatorname {dn} (x)\operatorname {dn} (y)-m\operatorname {sn} (x)\operatorname {sn} (y)\operatorname {cn} (x)\operatorname {cn} (y) \over {1-m\operatorname {sn} ^{2}(x)\operatorname {sn} ^{2}(y)}}.\end{aligned}}}
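The addition formulas can be verified numerically; the sketch below assumes SciPy is available and compares both sides using `scipy.special.ellipj`:

```python
# Numerical check of the sn/cn/dn addition theorems at real arguments.
from scipy.special import ellipj

m = 0.7
x, y = 0.4, 0.9
snx, cnx, dnx, _ = ellipj(x, m)
sny, cny, dny, _ = ellipj(y, m)
den = 1.0 - m * snx**2 * sny**2    # common denominator of all three formulas

sn_xy = (snx * cny * dny + sny * cnx * dnx) / den
cn_xy = (cnx * cny - snx * sny * dnx * dny) / den
dn_xy = (dnx * dny - m * snx * sny * cnx * cny) / den

sn_ref, cn_ref, dn_ref, _ = ellipj(x + y, m)
assert abs(sn_xy - sn_ref) < 1e-10
assert abs(cn_xy - cn_ref) < 1e-10
assert abs(dn_xy - dn_ref) < 1e-10
```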
The Jacobi epsilon and zn functions satisfy a quasi-addition theorem:
{\displaystyle {\begin{aligned}{\mathcal {E}}(x+y,m)&={\mathcal {E}}(x,m)+{\mathcal {E}}(y,m)-m\operatorname {sn} (x,m)\operatorname {sn} (y,m)\operatorname {sn} (x+y,m),\\\operatorname {zn} (x+y,m)&=\operatorname {zn} (x,m)+\operatorname {zn} (y,m)-m\operatorname {sn} (x,m)\operatorname {sn} (y,m)\operatorname {sn} (x+y,m).\end{aligned}}}
Double angle formulae can be easily derived from the above equations by setting x = y. Half angle formulae are all of the form:
{\displaystyle \operatorname {pq} ({\tfrac {1}{2}}u,m)^{2}=f_{\mathrm {p} }/f_{\mathrm {q} }}
where:
{\displaystyle f_{\mathrm {c} }=\operatorname {cn} (u,m)+\operatorname {dn} (u,m)}
{\displaystyle f_{\mathrm {s} }=1-\operatorname {cn} (u,m)}
{\displaystyle f_{\mathrm {n} }=1+\operatorname {dn} (u,m)}
{\displaystyle f_{\mathrm {d} }=(1+\operatorname {dn} (u,m))-m(1-\operatorname {cn} (u,m))}
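The general half-argument pattern pq(u/2)² = f_p/f_q can be spot-checked numerically; this sketch assumes SciPy's `ellipj` is available:

```python
# Check sn(u/2)^2 = f_s/f_n, cn(u/2)^2 = f_c/f_n and dn(u/2)^2 = f_d/f_n.
from scipy.special import ellipj

m, u = 0.6, 1.1
sn, cn, dn, _ = ellipj(u, m)
f_c = cn + dn
f_s = 1.0 - cn
f_n = 1.0 + dn
f_d = (1.0 + dn) - m * (1.0 - cn)

sn_h, cn_h, dn_h, _ = ellipj(u / 2.0, m)
assert abs(sn_h**2 - f_s / f_n) < 1e-10
assert abs(cn_h**2 - f_c / f_n) < 1e-10
assert abs(dn_h**2 - f_d / f_n) < 1e-10
```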
== Jacobi elliptic functions as solutions of nonlinear ordinary differential equations ==
=== Derivatives with respect to the first variable ===
The derivatives of the three basic Jacobi elliptic functions (with respect to the first variable, with {\displaystyle m} fixed) are:
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} z}}\operatorname {sn} (z)=\operatorname {cn} (z)\operatorname {dn} (z),}
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} z}}\operatorname {cn} (z)=-\operatorname {sn} (z)\operatorname {dn} (z),}
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} z}}\operatorname {dn} (z)=-m\operatorname {sn} (z)\operatorname {cn} (z).}
These can be used to derive the derivatives of all other functions as shown in the table below (arguments (u,m) suppressed):
Also
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} z}}{\mathcal {E}}(z)=\operatorname {dn} (z)^{2}.}
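The derivative formulas are easy to verify numerically by central differences; this sketch assumes SciPy's `ellipj` is available:

```python
# Central-difference check of d/dz sn(z) = cn(z) dn(z) at a real point.
from scipy.special import ellipj

m, z, h = 0.5, 0.9, 1e-5
sn_plus = ellipj(z + h, m)[0]
sn_minus = ellipj(z - h, m)[0]
deriv_fd = (sn_plus - sn_minus) / (2.0 * h)   # O(h^2) approximation

_, cn, dn, _ = ellipj(z, m)
assert abs(deriv_fd - cn * dn) < 1e-8
```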
With the addition theorems above and for a given m with 0 < m < 1 the major functions are therefore solutions to the following nonlinear ordinary differential equations:
{\displaystyle \operatorname {am} (x)} solves the differential equations
{\displaystyle {\frac {\mathrm {d} ^{2}y}{\mathrm {d} x^{2}}}+m\sin(y)\cos(y)=0}
and
{\displaystyle \left({\frac {\mathrm {d} y}{\mathrm {d} x}}\right)^{2}=1-m\sin(y)^{2}}
(for {\displaystyle x} not on a branch cut)
{\displaystyle \operatorname {sn} (x)} solves the differential equations
{\displaystyle {\frac {\mathrm {d} ^{2}y}{\mathrm {d} x^{2}}}+(1+m)y-2my^{3}=0}
and
{\displaystyle \left({\frac {\mathrm {d} y}{\mathrm {d} x}}\right)^{2}=(1-y^{2})(1-my^{2})}
{\displaystyle \operatorname {cn} (x)} solves the differential equations
{\displaystyle {\frac {\mathrm {d} ^{2}y}{\mathrm {d} x^{2}}}+(1-2m)y+2my^{3}=0}
and
{\displaystyle \left({\frac {\mathrm {d} y}{\mathrm {d} x}}\right)^{2}=(1-y^{2})(1-m+my^{2})}
{\displaystyle \operatorname {dn} (x)} solves the differential equations
{\displaystyle {\frac {\mathrm {d} ^{2}y}{\mathrm {d} x^{2}}}-(2-m)y+2y^{3}=0}
and
{\displaystyle \left({\frac {\mathrm {d} y}{\mathrm {d} x}}\right)^{2}=(y^{2}-1)(1-m-y^{2})}
The function which exactly solves the pendulum differential equation,
{\displaystyle {\frac {\mathrm {d} ^{2}\theta }{\mathrm {d} t^{2}}}+c\sin \theta =0,}
with initial angle {\displaystyle \theta _{0}} and zero initial angular velocity is
{\displaystyle {\begin{aligned}\theta &=2\arcsin({\sqrt {m}}\operatorname {cd} ({\sqrt {c}}t,m))\\&=2\operatorname {am} \left({\frac {1+{\sqrt {m}}}{2}}({\sqrt {c}}t+K),{\frac {4{\sqrt {m}}}{(1+{\sqrt {m}})^{2}}}\right)-2\operatorname {am} \left({\frac {1+{\sqrt {m}}}{2}}({\sqrt {c}}t-K),{\frac {4{\sqrt {m}}}{(1+{\sqrt {m}})^{2}}}\right)-\pi \end{aligned}}}
where {\displaystyle m=\sin(\theta _{0}/2)^{2}}, {\displaystyle c>0} and {\displaystyle t\in \mathbb {R} }.
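As a numerical sanity check, one can integrate the pendulum equation directly and compare against the closed form θ = 2 arcsin(√m cd(√c t, m)). The sketch below assumes SciPy is available (`solve_ivp` for the ODE, `ellipj` for cd = cn/dn):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipj

c = 1.0
theta0 = 1.0                        # initial angle, zero initial velocity
m = np.sin(theta0 / 2.0) ** 2       # m = sin^2(theta0/2)

def rhs(t, y):                      # y = (theta, dtheta/dt)
    return [y[1], -c * np.sin(y[0])]

ts = np.linspace(0.0, 10.0, 201)
sol = solve_ivp(rhs, (0.0, 10.0), [theta0, 0.0], t_eval=ts,
                rtol=1e-10, atol=1e-12)

sn, cn, dn, _ = ellipj(np.sqrt(c) * ts, m)
theta_exact = 2.0 * np.arcsin(np.sqrt(m) * cn / dn)   # cd = cn/dn
assert np.max(np.abs(sol.y[0] - theta_exact)) < 1e-6
```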
=== Derivatives with respect to the second variable ===
With the first argument {\displaystyle z} fixed, the derivatives with respect to the second variable {\displaystyle m} are as follows:
{\displaystyle {\begin{aligned}{\frac {\mathrm {d} }{\mathrm {d} m}}\operatorname {sn} (z)&={\frac {\operatorname {dn} (z)\operatorname {cn} (z)((1-m)z-{\mathcal {E}}(z)+m\operatorname {cd} (z)\operatorname {sn} (z))}{2m(1-m)}},\\{\frac {\mathrm {d} }{\mathrm {d} m}}\operatorname {cn} (z)&={\frac {\operatorname {sn} (z)\operatorname {dn} (z)((m-1)z+{\mathcal {E}}(z)-m\operatorname {sn} (z)\operatorname {cd} (z))}{2m(1-m)}},\\{\frac {\mathrm {d} }{\mathrm {d} m}}\operatorname {dn} (z)&={\frac {\operatorname {sn} (z)\operatorname {cn} (z)((m-1)z+{\mathcal {E}}(z)-\operatorname {dn} (z)\operatorname {sc} (z))}{2(1-m)}},\\{\frac {\mathrm {d} }{\mathrm {d} m}}{\mathcal {E}}(z)&={\frac {\operatorname {cn} (z)(\operatorname {sn} (z)\operatorname {dn} (z)-\operatorname {cn} (z){\mathcal {E}}(z))}{2(1-m)}}-{\frac {z}{2}}\operatorname {sn} (z)^{2}.\end{aligned}}}
== Expansion in terms of the nome ==
Let the nome be {\displaystyle q=\exp(-\pi K'(m)/K(m))=e^{i\pi \tau }}, {\displaystyle \operatorname {Im} (\tau )>0}, {\displaystyle m=k^{2}}, and let {\displaystyle v=\pi u/(2K(m))}. Then the functions have expansions as Lambert series
{\displaystyle \operatorname {am} (u,m)={\frac {\pi u}{2K(m)}}+2\sum _{n=1}^{\infty }{\frac {q^{n}}{n(1+q^{2n})}}\sin(2nv),}
{\displaystyle \operatorname {sn} (u,m)={\frac {2\pi }{kK(m)}}\sum _{n=0}^{\infty }{\frac {q^{n+1/2}}{1-q^{2n+1}}}\sin((2n+1)v),}
{\displaystyle \operatorname {cn} (u,m)={\frac {2\pi }{kK(m)}}\sum _{n=0}^{\infty }{\frac {q^{n+1/2}}{1+q^{2n+1}}}\cos((2n+1)v),}
{\displaystyle \operatorname {dn} (u,m)={\frac {\pi }{2K(m)}}+{\frac {2\pi }{K(m)}}\sum _{n=1}^{\infty }{\frac {q^{n}}{1+q^{2n}}}\cos(2nv),}
{\displaystyle \operatorname {zn} (u,m)={\frac {2\pi }{K(m)}}\sum _{n=1}^{\infty }{\frac {q^{n}}{1-q^{2n}}}\sin(2nv)}
when {\displaystyle \left|\operatorname {Im} (u/K)\right|<\operatorname {Im} (iK'/K).}
Bivariate power series expansions have been published by Schett.
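Because the nome is small for moderate m, these series converge quickly. The sketch below (assuming SciPy's `ellipk` and `ellipj`) truncates the sn series after 20 terms and compares against a direct evaluation:

```python
import numpy as np
from scipy.special import ellipk, ellipj

m, u = 0.5, 0.7
k = np.sqrt(m)
K, Kp = ellipk(m), ellipk(1.0 - m)     # K(m) and K'(m) = K(1-m)
q = np.exp(-np.pi * Kp / K)            # the nome
v = np.pi * u / (2.0 * K)

n = np.arange(20)
sn_series = (2.0 * np.pi / (k * K)) * np.sum(
    q ** (n + 0.5) / (1.0 - q ** (2 * n + 1)) * np.sin((2 * n + 1) * v))

assert abs(sn_series - ellipj(u, m)[0]) < 1e-10
```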
== Fast computation ==
The theta function ratios provide an efficient way of computing the Jacobi elliptic functions. There is an alternative method, based on the arithmetic-geometric mean and Landen's transformations:
Initialize {\displaystyle a_{0}=1,\,b_{0}={\sqrt {1-m}}} where {\displaystyle 0<m<1}.
Define {\displaystyle a_{n}={\frac {a_{n-1}+b_{n-1}}{2}},\,b_{n}={\sqrt {a_{n-1}b_{n-1}}},\,c_{n}={\frac {a_{n-1}-b_{n-1}}{2}}} where {\displaystyle n\geq 1}.
Then define {\displaystyle \varphi _{N}=2^{N}a_{N}u} for {\displaystyle u\in \mathbb {R} } and a fixed {\displaystyle N\in \mathbb {N} }. If
{\displaystyle \varphi _{n-1}={\frac {1}{2}}\left(\varphi _{n}+\arcsin \left({\frac {c_{n}}{a_{n}}}\sin \varphi _{n}\right)\right)}
for {\displaystyle n\geq 1}, then
{\displaystyle \operatorname {am} (u,m)=\varphi _{0},\quad \operatorname {zn} (u,m)=\sum _{n=1}^{N}c_{n}\sin \varphi _{n}}
as {\displaystyle N\to \infty }. This is notable for its rapid convergence. It is then trivial to compute all Jacobi elliptic functions from the Jacobi amplitude {\displaystyle \operatorname {am} } on the real line.
In conjunction with the addition theorems for elliptic functions (which hold for complex numbers in general) and the Jacobi transformations, the method of computation described above can be used to compute all Jacobi elliptic functions in the whole complex plane.
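A minimal sketch of this amplitude recursion, using only the Python standard library (the iteration depth `N` is a fixed cutoff; the sequence converges quadratically, so a small `N` suffices):

```python
import math

def am_zn(u, m, N=12):
    """Jacobi amplitude am(u, m) and zn(u, m) for real u and 0 < m < 1,
    via the arithmetic-geometric-mean / Landen descent described above."""
    a = [1.0]
    c = [0.0]                        # c[0] unused; indices follow the text
    b = math.sqrt(1.0 - m)
    for n in range(1, N + 1):
        a.append((a[n - 1] + b) / 2.0)
        c.append((a[n - 1] - b) / 2.0)
        b = math.sqrt(a[n - 1] * b)
    phi = 2.0 ** N * a[N] * u        # phi_N
    zn = 0.0
    for n in range(N, 0, -1):        # descend phi_n -> phi_{n-1}
        zn += c[n] * math.sin(phi)
        phi = (phi + math.asin(c[n] / a[n] * math.sin(phi))) / 2.0
    return phi, zn                   # phi_0 = am(u, m)

# am(K(m), m) = pi/2; for m = 1/2, K ~ 1.8540746773013719
print(am_zn(1.8540746773013719, 0.5)[0])
```

From the amplitude, sn = sin(am), cn = cos(am), dn = sqrt(1 − m sin²(am)), and the remaining nine functions follow as ratios.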
Another method of fast computation of the Jacobi elliptic functions via the arithmetic–geometric mean, avoiding the computation of the Jacobi amplitude, is due to Herbert E. Salzer:
Let {\displaystyle 0\leq m\leq 1,\,0\leq u\leq K(m),\,a_{0}=1,\,b_{0}={\sqrt {1-m}},}
{\displaystyle a_{n+1}={\frac {a_{n}+b_{n}}{2}},\,b_{n+1}={\sqrt {a_{n}b_{n}}},\,c_{n+1}={\frac {a_{n}-b_{n}}{2}}.}
Set
{\displaystyle {\begin{aligned}y_{N}&={\frac {a_{N}}{\sin(a_{N}u)}}\\y_{N-1}&=y_{N}+{\frac {a_{N}c_{N}}{y_{N}}}\\y_{N-2}&=y_{N-1}+{\frac {a_{N-1}c_{N-1}}{y_{N-1}}}\\\vdots &=\vdots \\y_{0}&=y_{1}+{\frac {m}{4y_{1}}}.\end{aligned}}}
Then
{\displaystyle {\begin{aligned}\operatorname {sn} (u,m)&={\frac {1}{y_{0}}}\\\operatorname {cn} (u,m)&={\sqrt {1-{\frac {1}{y_{0}^{2}}}}}\\\operatorname {dn} (u,m)&={\sqrt {1-{\frac {m}{y_{0}^{2}}}}}\end{aligned}}}
as {\displaystyle N\to \infty }.
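A sketch of Salzer's recursion in pure standard-library Python (note that the final step y₀ = y₁ + m/(4y₁) is the n = 1 case of the general step, since a₁c₁ = (1 − b₀²)/4 = m/4):

```python
import math

def jacobi_sn_cn_dn(u, m, N=12):
    """sn, cn, dn for 0 < u <= K(m) and 0 <= m <= 1 via Salzer's AGM
    recursion above, which avoids computing the Jacobi amplitude."""
    a = [1.0]
    c = [0.0]
    b = math.sqrt(1.0 - m)
    for n in range(1, N + 1):
        a.append((a[n - 1] + b) / 2.0)
        c.append((a[n - 1] - b) / 2.0)
        b = math.sqrt(a[n - 1] * b)
    y = a[N] / math.sin(a[N] * u)          # y_N
    for n in range(N, 0, -1):              # y_{n-1} = y_n + a_n c_n / y_n
        y = y + a[n] * c[n] / y
    sn = 1.0 / y
    return sn, math.sqrt(1.0 - sn * sn), math.sqrt(1.0 - m * sn * sn)
```

For m = 0 every cₙ vanishes and the routine reduces to sn = sin(u), as it should.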
Yet another rapidly converging method for computing the Jacobi elliptic sine, found in the literature, is shown below.
Let:
{\displaystyle {\begin{aligned}&a_{0}=u&b_{0}={\frac {1-{\sqrt {1-m}}}{1+{\sqrt {1-m}}}}\\&a_{1}={\frac {a_{0}}{1+b_{0}}}&b_{1}={\frac {1-{\sqrt {1-b_{0}^{2}}}}{1+{\sqrt {1-b_{0}^{2}}}}}\\&\vdots =\vdots &\vdots =\vdots \\&a_{n}={\frac {a_{n-1}}{1+b_{n-1}}}&b_{n}={\frac {1-{\sqrt {1-b_{n-1}^{2}}}}{1+{\sqrt {1-b_{n-1}^{2}}}}}\\\end{aligned}}}
Then set:
{\displaystyle {\begin{aligned}y_{n+1}&=\sin(a_{n})\\y_{n}&={\frac {y_{n+1}(1+b_{n})}{1+y_{n+1}^{2}b_{n}}}\\\vdots &=\vdots \\y_{0}&={\frac {y_{1}(1+b_{0})}{1+y_{1}^{2}b_{0}}}\\\end{aligned}}}
Then:
{\displaystyle \operatorname {sn} (u,m)=y_{0}{\text{ as }}n\rightarrow \infty }.
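This is a descending Landen transformation: the moduli bₙ tend to zero quadratically, at which point sn reduces to the sine, and the back-substitution restores the original modulus. A sketch in standard-library Python:

```python
import math

def sn_landen(u, m, N=10):
    """Jacobi sn(u, m) via the descending-Landen recursion above."""
    a = u
    b = (1.0 - math.sqrt(1.0 - m)) / (1.0 + math.sqrt(1.0 - m))
    bs = [b]                          # b_0, b_1, ..., b_N
    for _ in range(N):
        a = a / (1.0 + b)             # a_n = a_{n-1} / (1 + b_{n-1})
        r = math.sqrt(1.0 - b * b)
        b = (1.0 - r) / (1.0 + r)
        bs.append(b)
    y = math.sin(a)                   # y_{N+1} = sin(a_N), since b_N ~ 0
    for b_n in reversed(bs):          # back-substitute b_N, ..., b_0
        y = y * (1.0 + b_n) / (1.0 + y * y * b_n)
    return y                          # y_0 = sn(u, m)
```

For m = 0 all bₙ vanish and sn_landen returns sin(u) exactly, matching sn(u, 0) = sin(u).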
== Approximation in terms of hyperbolic functions ==
The Jacobi elliptic functions can be expanded in terms of the hyperbolic functions. When {\displaystyle m} is close to unity, such that {\displaystyle m'^{2}} and higher powers of {\displaystyle m'} can be neglected, we have:
sn(u):
{\displaystyle \operatorname {sn} (u,m)\approx \tanh(u)+{\frac {1}{4}}m'(\sinh(u)\cosh(u)-u)\operatorname {sech} ^{2}(u).}
cn(u):
{\displaystyle \operatorname {cn} (u,m)\approx \operatorname {sech} (u)-{\frac {1}{4}}m'(\sinh(u)\cosh(u)-u)\tanh(u)\operatorname {sech} (u).}
dn(u):
{\displaystyle \operatorname {dn} (u,m)\approx \operatorname {sech} (u)+{\frac {1}{4}}m'(\sinh(u)\cosh(u)+u)\tanh(u)\operatorname {sech} (u).}
For the Jacobi amplitude,
{\displaystyle \operatorname {am} (u,m)\approx \operatorname {gd} (u)+{\frac {1}{4}}m'(\sinh(u)\cosh(u)-u)\operatorname {sech} (u).}
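The size of the neglected terms is O(m′²), so with m′ = 0.01 the approximations should agree with the exact functions to roughly four decimal places. A sketch assuming SciPy's `ellipj`:

```python
# Compare the near-unity-m hyperbolic approximations with exact values.
import math
from scipy.special import ellipj

m = 0.99
mp = 1.0 - m                       # m' = 1 - m
u = 1.2
sn, cn, dn, _ = ellipj(u, m)

sech = 1.0 / math.cosh(u)
corr = 0.25 * mp * (math.sinh(u) * math.cosh(u) - u)
sn_approx = math.tanh(u) + corr * sech**2
cn_approx = sech - corr * math.tanh(u) * sech
dn_approx = sech + 0.25 * mp * (math.sinh(u) * math.cosh(u) + u) \
            * math.tanh(u) * sech

assert abs(sn - sn_approx) < 1e-3
assert abs(cn - cn_approx) < 1e-3
assert abs(dn - dn_approx) < 1e-3
```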
== Continued fractions ==
Assume real numbers {\displaystyle a,p} with {\displaystyle 0<a<p}, the nome {\displaystyle q=e^{\pi i\tau }} with {\displaystyle \operatorname {Im} (\tau )>0}, and the elliptic modulus {\textstyle k(\tau )={\sqrt {1-k'(\tau )^{2}}}=(\vartheta _{10}(0;\tau )/\vartheta _{00}(0;\tau ))^{2}}. If {\displaystyle K[\tau ]=K(k(\tau ))}, where {\displaystyle K(x)=\pi /2\cdot {}_{2}F_{1}(1/2,1/2;1;x^{2})} is the complete elliptic integral of the first kind, then the following continued fraction expansion holds:
{\displaystyle {\begin{aligned}&{\frac {{\textrm {dn}}\left((p/2-a)\tau K\left[{\frac {p\tau }{2}}\right];k\left({\frac {p\tau }{2}}\right)\right)}{\sqrt {k'\left({\frac {p\tau }{2}}\right)}}}={\frac {\sum _{n=-\infty }^{\infty }q^{p/2n^{2}+(p/2-a)n}}{\sum _{n=-\infty }^{\infty }(-1)^{n}q^{p/2n^{2}+(p/2-a)n}}}\\[4pt]={}&-1+{\frac {2}{1-{}}}\,{\frac {q^{a}+q^{p-a}}{1-q^{p}+{}}}\,{\frac {(q^{a}+q^{2p-a})(q^{a+p}+q^{p-a})}{1-q^{3p}+{}}}\,{\frac {q^{p}(q^{a}+q^{3p-a})(q^{a+2p}+q^{p-a})}{1-q^{5p}+{}}}\,{\frac {q^{2p}(q^{a}+q^{4p-a})(q^{a+3p}+q^{p-a})}{1-q^{7p}+{}}}\cdots \end{aligned}}}
Known continued fractions involving {\displaystyle {\textrm {sn}}(t),{\textrm {cn}}(t)} and {\displaystyle {\textrm {dn}}(t)} with elliptic modulus {\displaystyle k} are:
For {\displaystyle z\in \mathbb {C} }, {\displaystyle |k|<1}: pg. 374
{\displaystyle \int _{0}^{\infty }{\textrm {sn}}(t)e^{-tz}\,\mathrm {d} t={\frac {1}{1^{2}(1+k^{2})+z^{2}-{}}}\,{\frac {1\cdot 2^{2}\cdot 3k^{2}}{3^{2}(1+k^{2})+z^{2}-{}}}\,{\frac {3\cdot 4^{2}\cdot 5k^{2}}{5^{2}(1+k^{2})+z^{2}-{}}}\cdots }
For {\displaystyle z\in \mathbb {C} \setminus \{0\}}, {\displaystyle |k|<1}: pg. 375
{\displaystyle \int _{0}^{\infty }{\textrm {sn}}^{2}(t)e^{-tz}\,\mathrm {d} t={\frac {2z^{-1}}{2^{2}(1+k^{2})+z^{2}-{}}}\,{\frac {2\cdot 3^{2}\cdot 4k^{2}}{4^{2}(1+k^{2})+z^{2}-{}}}\,{\frac {4\cdot 5^{2}\cdot 6k^{2}}{6^{2}(1+k^{2})+z^{2}-{}}}\cdots }
For {\displaystyle z\in \mathbb {C} \setminus \{0\}}, {\displaystyle |k|<1}: pg. 220
{\displaystyle \int _{0}^{\infty }{\textrm {cn}}(t)e^{-tz}\,\mathrm {d} t={\frac {1}{z+{}}}\,{\frac {1^{2}}{z+{}}}\,{\frac {2^{2}k^{2}}{z+{}}}\,{\frac {3^{2}}{z+{}}}\,{\frac {4^{2}k^{2}}{z+{}}}\,{\frac {5^{2}}{z+{}}}\cdots }
For {\displaystyle z\in \mathbb {C} \setminus \{0\}}, {\displaystyle |k|<1}: pg. 374
{\displaystyle \int _{0}^{\infty }{\textrm {dn}}(t)e^{-tz}\,\mathrm {d} t={\frac {1}{z+{}}}\,{\frac {1^{2}k^{2}}{z+{}}}\,{\frac {2^{2}}{z+{}}}\,{\frac {3^{2}k^{2}}{z+{}}}\,{\frac {4^{2}}{z+{}}}\,{\frac {5^{2}k^{2}}{z+{}}}\cdots }
For {\displaystyle z\in \mathbb {C} }, {\displaystyle |k|<1}: pg. 375
{\displaystyle \int _{0}^{\infty }{\frac {{\textrm {sn}}(t){\textrm {cn}}(t)}{{\textrm {dn}}(t)}}e^{-tz}\,\mathrm {d} t={\frac {1}{2\cdot 1^{2}(2-k^{2})+z^{2}-{}}}\,{\frac {1\cdot 2^{2}\cdot 3k^{4}}{2\cdot 3^{2}(2-k^{2})+z^{2}-{}}}\,{\frac {3\cdot 4^{2}\cdot 5k^{4}}{2\cdot 5^{2}(2-k^{2})+z^{2}-{}}}\cdots }
== Inverse functions ==
The inverses of the Jacobi elliptic functions can be defined similarly to the inverse trigonometric functions; if {\displaystyle x=\operatorname {sn} (\xi ,m)}, then {\displaystyle \xi =\operatorname {arcsn} (x,m)}. They can be represented as elliptic integrals, and power series representations have been found.
{\displaystyle \operatorname {arcsn} (x,m)=\int _{0}^{x}{\frac {\mathrm {d} t}{\sqrt {(1-t^{2})(1-mt^{2})}}}}
{\displaystyle \operatorname {arccn} (x,m)=\int _{x}^{1}{\frac {\mathrm {d} t}{\sqrt {(1-t^{2})(1-m+mt^{2})}}}}
{\displaystyle \operatorname {arcdn} (x,m)=\int _{x}^{1}{\frac {\mathrm {d} t}{\sqrt {(1-t^{2})(t^{2}+m-1)}}}}
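The inverse relation can be checked by evaluating the arcsn integral with numerical quadrature and feeding the result back into sn; this sketch assumes SciPy is available:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipj

m, x = 0.5, 0.6
# arcsn(x, m) as the elliptic integral above, evaluated by quadrature
arcsn, err = quad(lambda t: 1.0 / np.sqrt((1.0 - t * t) * (1.0 - m * t * t)),
                  0.0, x)
# sn(arcsn(x, m), m) should recover x
assert abs(ellipj(arcsn, m)[0] - x) < 1e-6
```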
== Map projection ==
The Peirce quincuncial projection is a map projection based on Jacobian elliptic functions.
== See also ==
Elliptic curve
Schwarz–Christoffel mapping
Carlson symmetric form
Jacobi theta function
Ramanujan theta function
Dixon elliptic functions
Abel elliptic functions
Weierstrass elliptic function
Lemniscate elliptic functions
== Notes ==
== Citations ==
== References ==
Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 16". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 569. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253.
N. I. Akhiezer, Elements of the Theory of Elliptic Functions (1970) Moscow, translated into English as AMS Translations of Mathematical Monographs Volume 79 (1990) AMS, Rhode Island ISBN 0-8218-4532-2
A. C. Dixon The elementary properties of the elliptic functions, with examples (Macmillan, 1894)
Alfred George Greenhill The applications of elliptic functions (London, New York, Macmillan, 1892)
Edmund T. Whittaker, George Neville Watson: A Course in Modern Analysis. 4th ed. Cambridge, England: Cambridge University Press, 1990. pp. 469–470.
H. Hancock Lectures on the theory of elliptic functions (New York, J. Wiley & sons, 1910)
Jacobi, C. G. J. (1829), Fundamenta nova theoriae functionum ellipticarum (in Latin), Königsberg, ISBN 978-1-108-05200-9. Reprinted by Cambridge University Press 2012.
Reinhardt, William P.; Walker, Peter L. (2010), "Jacobian Elliptic Functions", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248.
(in French) P. Appell and E. Lacour Principes de la théorie des fonctions elliptiques et applications (Paris, Gauthier Villars, 1897)
(in French) G. H. Halphen Traité des fonctions elliptiques et de leurs applications (vol. 1) (Paris, Gauthier-Villars, 1886–1891)
(in French) G. H. Halphen Traité des fonctions elliptiques et de leurs applications (vol. 2) (Paris, Gauthier-Villars, 1886–1891)
(in French) G. H. Halphen Traité des fonctions elliptiques et de leurs applications (vol. 3) (Paris, Gauthier-Villars, 1886–1891)
(in French) J. Tannery and J. Molk Eléments de la théorie des fonctions elliptiques. Tome I, Introduction. Calcul différentiel. Ire partie (Paris : Gauthier-Villars et fils, 1893)
(in French) J. Tannery and J. Molk Eléments de la théorie des fonctions elliptiques. Tome II, Calcul différentiel. IIe partie (Paris : Gauthier-Villars et fils, 1893)
(in French) J. Tannery and J. Molk Eléments de la théorie des fonctions elliptiques. Tome III, Calcul intégral. Ire partie, Théorèmes généraux. Inversion (Paris : Gauthier-Villars et fils, 1893)
(in French) J. Tannery and J. Molk Eléments de la théorie des fonctions elliptiques. Tome IV, Calcul intégral. IIe partie, Applications (Paris : Gauthier-Villars et fils, 1893)
(in French) C. Briot and J. C. Bouquet Théorie des fonctions elliptiques ( Paris : Gauthier-Villars, 1875)
Toshio Fukushima: Fast Computation of Complete Elliptic Integrals and Jacobian Elliptic Functions. 2012, National Astronomical Observatory of Japan (国立天文台)
Lowan, Blanch and Horenstein: On the Inversion of the q-Series Associated with Jacobian Elliptic Functions. Bull. Amer. Math. Soc. 48, 1942
H. Ferguson, D. E. Nielsen, G. Cook: A partition formula for the integer coefficients of the theta function nome. Mathematics of Computation, Volume 29, Number 131, July 1975
J. D. Fenton and R. S. Gardiner-Garden: Rapidly-convergent methods for evaluating elliptic integrals and theta and elliptic functions. J. Austral. Math. Soc. (Series B) 24, 1982, p. 57
Adolf Kneser: Neue Untersuchung einer Reihe aus der Theorie der elliptischen Funktionen. J. reine u. angew. Math. 157, 1927, pp. 209–218
== External links ==
"Jacobi elliptic functions", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Jacobi Elliptic Functions". MathWorld. | Wikipedia/Jacobi's_elliptic_functions |
In mathematics, the Jacobi elliptic functions are a set of basic elliptic functions. They are found in the description of the motion of a pendulum, as well as in the design of electronic elliptic filters. While trigonometric functions are defined with reference to a circle, the Jacobi elliptic functions are a generalization which refer to other conic sections, the ellipse in particular. The relation to trigonometric functions is contained in the notation, for example, by the matching notation sn for sin. The Jacobi elliptic functions are used more often in practical problems than the Weierstrass elliptic functions as they do not require notions of complex analysis to be defined or understood. They were introduced by Carl Gustav Jakob Jacobi (1829). Carl Friedrich Gauss had already studied special Jacobi elliptic functions in 1797, the lemniscate elliptic functions in particular, but his work was published much later.
== Overview ==
There are twelve Jacobi elliptic functions denoted by pq(u, m), where p and q are any of the letters c, s, n, and d. (Functions of the form pp(u, m) are trivially set to unity for notational completeness.) Here u is the argument and m is the parameter, both of which may be complex. In fact, the Jacobi elliptic functions are meromorphic in both u and m. The distribution of the zeros and poles in the u-plane is well known. However, questions of the distribution of the zeros and poles in the m-plane remain to be investigated.
In the complex plane of the argument u, the twelve functions form a repeating lattice of simple poles and zeroes. Depending on the function, one repeating parallelogram, or unit cell, will have sides of length 2K or 4K on the real axis, and 2K′ or 4K′ on the imaginary axis, where K = K(m) and K′ = K(1 − m) are known as the quarter periods, K(·) being the complete elliptic integral of the first kind. The nature of the unit cell can be determined by inspecting the "auxiliary rectangle" (generally a parallelogram), formed by the origin (0, 0) at one corner and (K, K′) as the diagonally opposite corner. As in the diagram, the four corners of the auxiliary rectangle are named s, c, d, and n, going counter-clockwise from the origin. The function pq(u, m) will have a zero at the p corner and a pole at the q corner. The twelve functions correspond to the twelve ways of arranging these poles and zeroes in the corners of the rectangle.
When the argument u and parameter m are real, with 0 < m < 1, K and K′ will be real and the auxiliary parallelogram will in fact be a rectangle, and the Jacobi elliptic functions will all be real valued on the real line.
Since the Jacobi elliptic functions are doubly periodic in u, they factor through a torus – in effect, their domain can be taken to be a torus, just as cosine and sine are in effect defined on a circle. Instead of having only one circle, we now have the product of two circles, one real and the other imaginary. The complex plane can be replaced by a complex torus. The circumference of the first circle is 4K and the second 4K′, where K and K′ are the quarter periods. Each function has two zeroes and two poles at opposite positions on the torus. Among the points 0, K, K + iK′, iK′ there is one zero and one pole.
The Jacobi elliptic functions are then doubly periodic, meromorphic functions satisfying the following properties:

There is a simple zero at the corner p, and a simple pole at the corner q.
The complex number p − q is equal to half the period of the function pq u; that is, the function pq u is periodic in the direction pq, with the period being 2(p − q). The function pq u is also periodic in the other two directions pp′ and pq′, with periods such that p − p′ and p − q′ are quarter periods.
== Notation ==
The elliptic functions can be given in a variety of notations, which can make the subject unnecessarily confusing. Elliptic functions are functions of two variables. The first variable might be given in terms of the amplitude φ, or more commonly, in terms of u given below. The second variable might be given in terms of the parameter m, or as the elliptic modulus k, where k² = m, or in terms of the modular angle α, where m = sin²α. The complements of k and m are defined as m′ = 1 − m and k′ = √m′. These four terms are used below without comment to simplify various expressions.
The twelve Jacobi elliptic functions are generally written as pq(u, m) where p and q are any of the letters c, s, n, and d. Functions of the form pp(u, m) are trivially set to unity for notational completeness. The "major" functions are generally taken to be cn(u, m), sn(u, m) and dn(u, m), from which all other functions can be derived, and expressions are often written solely in terms of these three functions; however, various symmetries and generalizations are often most conveniently expressed using the full set. (This notation is due to Gudermann and Glaisher and is not Jacobi's original notation.)
Throughout this article, pq(u, t²) = pq(u; t).
The functions are notationally related to each other by the multiplication rule (arguments suppressed):

pq · p′q′ = pq′ · p′q

from which other commonly used relationships can be derived:

pr / qr = pq
pr · rq = pq
1 / qp = pq
The multiplication rule follows immediately from the identification of the elliptic functions with the Neville theta functions:

pq(u, m) = θ_p(u, m) / θ_q(u, m)
Also note that:

K(m) = K(k²) = ∫₀¹ dt / √((1 − t²)(1 − mt²)) = ∫₀¹ dt / √((1 − t²)(1 − k²t²)).
== Definition in terms of inverses of elliptic integrals ==
There is a definition, relating the elliptic functions to the inverse of the incomplete elliptic integral of the first kind F. These functions take the parameters u and m as inputs. The φ that satisfies

u = F(φ, m) = ∫₀^φ dθ / √(1 − m sin²θ)

is called the Jacobi amplitude:

am(u, m) = φ.
In this framework, the elliptic sine sn u (Latin: sinus amplitudinis) is given by

sn(u, m) = sin am(u, m)

and the elliptic cosine cn u (Latin: cosinus amplitudinis) is given by

cn(u, m) = cos am(u, m)

and the delta amplitude dn u (Latin: delta amplitudinis) by

dn(u, m) = (d/du) am(u, m).
In the above, the value m is a free parameter, usually taken to be real such that 0 ≤ m ≤ 1 (but it can be complex in general), and so the elliptic functions can be thought of as being given by two variables, u and the parameter m. The remaining nine elliptic functions are easily built from the above three (sn, cn, dn), and are given in a section below. Note that when φ = π/2, u equals the quarter period K.
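Numerically, these definitions can be checked with SciPy's scipy.special.ellipj, which returns sn, cn, dn and the amplitude together (a minimal sketch under the assumption that SciPy is available; the article itself prescribes no software):

```python
import numpy as np
from scipy.special import ellipj, ellipk

m = 0.7
u = 1.3
# ellipj returns (sn, cn, dn, ph), where ph = am(u, m) is the Jacobi amplitude
sn, cn, dn, ph = ellipj(u, m)

# sn and cn are the sine and cosine of the amplitude
assert np.isclose(sn, np.sin(ph))
assert np.isclose(cn, np.cos(ph))
# dn^2 = 1 - m sn^2, which follows from dn = d(am)/du
assert np.isclose(dn**2, 1 - m * sn**2)

# at u = K(m) the amplitude reaches pi/2, so sn(K, m) = 1
K = ellipk(m)
snK, cnK, dnK, phK = ellipj(K, m)
assert np.isclose(snK, 1.0) and np.isclose(phK, np.pi / 2)
```

The parameter convention matters: SciPy's second argument is the parameter m = k², not the modulus k.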
In the most general setting, am(u, m) is a multivalued function (in u) with infinitely many logarithmic branch points (the branches differ by integer multiples of 2π), namely the points 2sK(m) + (4t + 1)K(1 − m)i and 2sK(m) + (4t + 3)K(1 − m)i where s, t ∈ ℤ. This multivalued function can be made single-valued by cutting the complex plane along the line segments joining these branch points (the cutting can be done in non-equivalent ways, giving non-equivalent single-valued functions), thus making am(u, m) analytic everywhere except on the branch cuts. In contrast, sin am(u, m) and other elliptic functions have no branch points, give consistent values for every branch of am, and are meromorphic in the whole complex plane. Since every elliptic function is meromorphic in the whole complex plane (by definition), am(u, m) (when considered as a single-valued function) is not an elliptic function.
However, a particular cutting for am(u, m) can be made in the u-plane by line segments from 2sK(m) + (4t + 1)K(1 − m)i to 2sK(m) + (4t + 3)K(1 − m)i with s, t ∈ ℤ; then it only remains to define am(u, m) at the branch cuts by continuity from some direction. Then am(u, m) becomes single-valued and singly periodic in u with the minimal period 4iK(1 − m), and it has singularities at the logarithmic branch points mentioned above. If m ∈ ℝ and m ≤ 1, am(u, m) is continuous in u on the real line. When m > 1, the branch cuts of am(u, m) in the u-plane cross the real line at 2(2s + 1)K(1/m)/√m for s ∈ ℤ; therefore for m > 1, am(u, m) is not continuous in u on the real line and jumps by 2π at the discontinuities.
But defining am(u, m) this way gives rise to very complicated branch cuts in the m-plane (not the u-plane); they have not yet been fully described.
Let

E(φ, m) = ∫₀^φ √(1 − m sin²θ) dθ

be the incomplete elliptic integral of the second kind with parameter m.
Then the Jacobi epsilon function can be defined as

ℰ(u, m) = E(am(u, m), m)

for u ∈ ℝ and 0 < m < 1, and by analytic continuation in each of the variables otherwise: the Jacobi epsilon function is meromorphic in the whole complex plane (in both u and m). Alternatively, throughout both the u-plane and the m-plane,

ℰ(u, m) = ∫₀^u dn²(t, m) dt;

ℰ is well-defined in this way because all residues of t ↦ dn(t, m)² are zero, so the integral is path-independent. So the Jacobi epsilon function relates the incomplete elliptic integral of the first kind to the incomplete elliptic integral of the second kind:

E(φ, m) = ℰ(F(φ, m), m).
The Jacobi epsilon function is not an elliptic function, but it appears when differentiating the Jacobi elliptic functions with respect to the parameter.
The Jacobi zn function is defined by

zn(u, m) = ℰ(u, m) − (E(m)/K(m)) u.
It is a singly periodic function which is meromorphic in u, but not in m (due to the branch cuts of E and K). Its minimal period in u is 2K(m). It is related to the Jacobi zeta function by

Z(φ, m) = zn(F(φ, m), m).
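The epsilon and zn functions can be sketched numerically from the incomplete integrals (assuming SciPy is available; ellipj, ellipk, ellipe, ellipeinc and ellipkinc are its standard routines):

```python
import numpy as np
from scipy.special import ellipj, ellipk, ellipe, ellipeinc, ellipkinc

m = 0.4
K, E = ellipk(m), ellipe(m)

def epsilon(u):
    # Jacobi epsilon: E(am(u, m), m), with am taken from ellipj
    # (valid here, where the amplitude stays in its principal range)
    return ellipeinc(ellipj(u, m)[3], m)

def zn(u):
    # Jacobi zn: epsilon(u, m) - (E(m)/K(m)) u
    return epsilon(u) - E / K * u

# epsilon turns the first-kind integral into the second-kind one:
# E(phi, m) = epsilon(F(phi, m), m)
phi = 0.9
assert np.isclose(epsilon(ellipkinc(phi, m)), ellipeinc(phi, m))

# zn vanishes at the quarter period: zn(K) = E - (E/K) K = 0
assert np.isclose(zn(K), 0.0)
```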
Historically, the Jacobi elliptic functions were first defined by using the amplitude. In more modern texts on elliptic functions, the Jacobi elliptic functions are defined by other means, for example by ratios of theta functions (see below), and the amplitude is ignored.
In modern terms, the relation to elliptic integrals would be expressed by sn(F(φ, m), m) = sin φ (or cn(F(φ, m), m) = cos φ) instead of am(F(φ, m), m) = φ.
== Definition as trigonometry: the Jacobi ellipse ==
cos φ and sin φ are defined on the unit circle, with radius r = 1 and angle φ equal to the arc length of the unit circle measured from the positive x-axis. Similarly, the Jacobi elliptic functions are defined on the unit ellipse, with a = 1. Let

x² + y²/b² = 1,  b > 1,
m = 1 − 1/b²,  0 < m < 1,
x = r cos φ,  y = r sin φ
then:

r(φ, m) = 1 / √(1 − m sin²φ).
For each angle φ the parameter

u = u(φ, m) = ∫₀^φ r(θ, m) dθ

(the incomplete elliptic integral of the first kind) is computed.
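That this parameter is exactly the incomplete elliptic integral of the first kind can be confirmed by direct quadrature of r(θ, m) (a small sketch, assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipkinc

m, phi = 0.6, 1.1
# r(theta, m) = 1/sqrt(1 - m sin^2(theta)), the radius on the unit ellipse
r = lambda theta: 1.0 / np.sqrt(1 - m * np.sin(theta) ** 2)

# u(phi, m) by direct numerical integration of r
u_quad, _ = quad(r, 0, phi)

# agrees with the incomplete elliptic integral F(phi, m)
assert np.isclose(u_quad, ellipkinc(phi, m))
```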
On the unit circle (a = b = 1), u would be an arc length. However, the relation of u to the arc length of an ellipse is more complicated.
Let P = (x, y) = (r cos φ, r sin φ) be a point on the ellipse, and let P′ = (x′, y′) = (cos φ, sin φ) be the point where the unit circle intersects the line between P and the origin O.
Then the familiar relations from the unit circle:

x′ = cos φ,  y′ = sin φ

read for the ellipse:

x′ = cn(u, m),  y′ = sn(u, m).
So the projections of the intersection point P′ of the line OP with the unit circle onto the x- and y-axes are simply cn(u, m) and sn(u, m). These projections may be interpreted as a 'definition as trigonometry'. In short:

cn(u, m) = x / r(φ, m),  sn(u, m) = y / r(φ, m),  dn(u, m) = 1 / r(φ, m).
For the x and y values of the point P with u and parameter m we get, after inserting the relation

r(φ, m) = 1 / dn(u, m)

into x = r(φ, m) cos φ, y = r(φ, m) sin φ, that:

x = cn(u, m) / dn(u, m),  y = sn(u, m) / dn(u, m).
The latter relations for the x- and y-coordinates of points on the unit ellipse may be considered as a generalization of the relations x = cos φ, y = sin φ for the coordinates of points on the unit circle.
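A quick numerical sanity check (assuming SciPy is available) that the point (cn/dn, sn/dn) really lies on the unit ellipse and has radius r(φ, m):

```python
import numpy as np
from scipy.special import ellipj

m = 0.5
b2 = 1.0 / (1.0 - m)          # from m = 1 - 1/b^2
u = 0.8
sn, cn, dn, ph = ellipj(u, m)

# the point P = (cn/dn, sn/dn) on the ellipse x^2 + y^2/b^2 = 1
x, y = cn / dn, sn / dn
assert np.isclose(x**2 + y**2 / b2, 1.0)

# its radius equals r(phi, m) = 1/sqrt(1 - m sin^2 phi) = 1/dn
assert np.isclose(np.hypot(x, y), 1.0 / np.sqrt(1 - m * np.sin(ph) ** 2))
```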
The following table summarizes the expressions for all Jacobi elliptic functions pq(u, m) in the variables (x, y, r) and (φ, dn), with r = √(x² + y²).
== Definition in terms of the Jacobi theta functions ==
=== Using elliptic integrals ===
Equivalently, Jacobi's elliptic functions can be defined in terms of the theta functions. With z, τ ∈ ℂ such that Im τ > 0, let

θ₁(z|τ) = Σ_{n=−∞}^{∞} (−1)^{n−1/2} e^{(2n+1)iz + πiτ(n+1/2)²},
θ₂(z|τ) = Σ_{n=−∞}^{∞} e^{(2n+1)iz + πiτ(n+1/2)²},
θ₃(z|τ) = Σ_{n=−∞}^{∞} e^{2niz + πiτn²},
θ₄(z|τ) = Σ_{n=−∞}^{∞} (−1)ⁿ e^{2niz + πiτn²}
and let θ₂(τ) = θ₂(0|τ), θ₃(τ) = θ₃(0|τ), θ₄(τ) = θ₄(0|τ). Then with K = K(m), K′ = K(1 − m), ζ = πu/(2K) and τ = iK′/K,
sn(u, m) = θ₃(τ) θ₁(ζ|τ) / (θ₂(τ) θ₄(ζ|τ)),
cn(u, m) = θ₄(τ) θ₂(ζ|τ) / (θ₂(τ) θ₄(ζ|τ)),
dn(u, m) = θ₄(τ) θ₃(ζ|τ) / (θ₃(τ) θ₄(ζ|τ)).
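These theta quotients can be verified numerically by truncating the series above. The sketch below (assuming SciPy is available for K(m) and a reference sn value) uses the fact that, on the principal branch, the prefactor (−1)^{n−1/2} in θ₁ equals −i(−1)ⁿ:

```python
import numpy as np
from scipy.special import ellipj, ellipk

m = 0.3
K, Kp = ellipk(m), ellipk(1 - m)
tau = 1j * Kp / K
N = np.arange(-30, 31)       # truncation range; the series converge very fast

def th1(z):
    # (-1)^(n-1/2) = -i (-1)^n on the principal branch
    return np.sum(-1j * (-1.0) ** N
                  * np.exp((2 * N + 1) * 1j * z + np.pi * 1j * tau * (N + 0.5) ** 2))

def th2(z):
    return np.sum(np.exp((2 * N + 1) * 1j * z + np.pi * 1j * tau * (N + 0.5) ** 2))

def th3(z):
    return np.sum(np.exp(2 * N * 1j * z + np.pi * 1j * tau * N ** 2))

def th4(z):
    return np.sum((-1.0) ** N * np.exp(2 * N * 1j * z + np.pi * 1j * tau * N ** 2))

u = 0.7
zeta = np.pi * u / (2 * K)
# sn(u, m) = theta3 theta1(zeta) / (theta2 theta4(zeta))
sn_theta = (th3(0) * th1(zeta)) / (th2(0) * th4(zeta))
sn_ref = ellipj(u, m)[0]
assert np.isclose(sn_theta.real, sn_ref) and abs(sn_theta.imag) < 1e-10
```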
The Jacobi zn function can be expressed by theta functions as well:

zn(u, m) = (π/(2K)) θ₄′(ζ|τ)/θ₄(ζ|τ)
         = (π/(2K)) θ₃′(ζ|τ)/θ₃(ζ|τ) + m sn(u, m) cn(u, m) / dn(u, m)
         = (π/(2K)) θ₂′(ζ|τ)/θ₂(ζ|τ) + dn(u, m) sn(u, m) / cn(u, m)
         = (π/(2K)) θ₁′(ζ|τ)/θ₁(ζ|τ) − cn(u, m) dn(u, m) / sn(u, m)
where ′ denotes the partial derivative with respect to the first variable.
=== Using modular inversion ===
In fact, the definition of the Jacobi elliptic functions in Whittaker & Watson is stated a little differently than the one given above (though equivalent to it) and relies on modular inversion: the function λ, defined by

λ(τ) = θ₂(τ)⁴ / θ₃(τ)⁴,

assumes every value in ℂ − {0, 1} once and only once in

F₁ − (∂F₁ ∩ {τ ∈ ℍ : Re τ < 0})

where ℍ is the upper half-plane of the complex plane, ∂F₁ is the boundary of F₁ and

F₁ = {τ ∈ ℍ : |Re τ| ≤ 1, |Re(1/τ)| ≤ 1}.
In this way, each m := λ(τ) ∈ ℂ − {0, 1} can be associated with one and only one τ. Then Whittaker & Watson define the Jacobi elliptic functions by
sn(u, m) = θ₃(τ) θ₁(ζ|τ) / (θ₂(τ) θ₄(ζ|τ)),
cn(u, m) = θ₄(τ) θ₂(ζ|τ) / (θ₂(τ) θ₄(ζ|τ)),
dn(u, m) = θ₄(τ) θ₃(ζ|τ) / (θ₃(τ) θ₄(ζ|τ))

where ζ = u/θ₃(τ)².
In the book, they place an additional restriction on m (that m ∉ (−∞, 0) ∪ (1, ∞)), but it is in fact not a necessary restriction (see the Cox reference). Also, if m = 0 or m = 1, the Jacobi elliptic functions degenerate to non-elliptic functions, as described below.
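The modular inversion can be illustrated numerically: at τ = iK′/K the function λ(τ) = θ₂(τ)⁴/θ₃(τ)⁴ recovers the parameter m (a sketch assuming SciPy is available for K; the theta constants are summed directly in the nome q = e^{iπτ}):

```python
import numpy as np
from scipy.special import ellipk

m = 0.3
tau = 1j * ellipk(1 - m) / ellipk(m)   # tau = i K'/K
q = np.exp(1j * np.pi * tau)           # the nome; real and positive here

n = np.arange(-30, 31)                 # truncated theta-constant series
theta2 = np.sum(q ** ((n + 0.5) ** 2))
theta3 = np.sum(q ** (n ** 2))

# lambda(tau) = theta2^4 / theta3^4 should give back m
lam = (theta2 / theta3) ** 4
assert np.isclose(lam.real, m) and abs(lam.imag) < 1e-12
```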
== Definition in terms of Neville theta functions ==
The Jacobi elliptic functions can be defined very simply using the Neville theta functions:
pq(u, m) = θ_p(u, m) / θ_q(u, m)
Simplifications of complicated products of the Jacobi elliptic functions are often made easier using these identities.
== Jacobi transformations ==
=== The Jacobi imaginary transformations ===
The Jacobi imaginary transformations relate functions of the imaginary variable iu to functions of u or, equivalently, relate various values of the parameter m. In terms of the major functions:
cn(u, m) = nc(iu, 1 − m)
sn(u, m) = −i sc(iu, 1 − m)
dn(u, m) = dc(iu, 1 − m)
Using the multiplication rule, all other functions may be expressed in terms of the above three. The transformations may be generally written as pq(u, m) = γ_pq pq′(iu, 1 − m). The following table gives the γ_pq pq′(iu, 1 − m) for the specified pq(u, m). (The arguments (iu, 1 − m) are suppressed.)
Since the hyperbolic trigonometric functions are proportional to the circular trigonometric functions with imaginary arguments, it follows that the Jacobi functions yield the hyperbolic functions for m = 1. In the figure, the Jacobi curve has degenerated to two vertical lines at x = 1 and x = −1.
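This degeneration is easy to confirm numerically (a sketch assuming SciPy is available): at m = 1 the functions reduce to hyperbolic functions, and at m = 0 to circular ones:

```python
import numpy as np
from scipy.special import ellipj

u = 1.234

# m = 1: sn -> tanh, cn and dn -> sech
sn1, cn1, dn1, _ = ellipj(u, 1.0)
assert np.isclose(sn1, np.tanh(u))
assert np.isclose(cn1, 1 / np.cosh(u)) and np.isclose(dn1, 1 / np.cosh(u))

# m = 0: sn -> sin, cn -> cos, dn -> 1 (ordinary trigonometry)
sn0, cn0, dn0, _ = ellipj(u, 0.0)
assert np.isclose(sn0, np.sin(u)) and np.isclose(cn0, np.cos(u))
assert np.isclose(dn0, 1.0)
```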
=== The Jacobi real transformations ===
The Jacobi real transformations yield expressions for the elliptic functions in terms of alternate values of m. The transformations may be generally written as pq(u, m) = γ_pq pq′(ku, 1/m). The following table gives the γ_pq pq′(ku, 1/m) for the specified pq(u, m). (The arguments (ku, 1/m) are suppressed.)
=== Other Jacobi transformations ===
Jacobi's real and imaginary transformations can be combined in various ways to yield three more simple transformations. The real and imaginary transformations are two transformations in a group (D3, or anharmonic, group) of six transformations. If μ_R(m) = 1/m is the transformation of the m parameter in the real transformation, and μ_I(m) = 1 − m = m′ is the transformation of m in the imaginary transformation, then the other transformations can be built up by successive application of these two basic transformations, yielding only three more possibilities:
μ_IR(m) = μ_I(μ_R(m)) = −m′/m
μ_RI(m) = μ_R(μ_I(m)) = 1/m′
μ_RIR(m) = μ_R(μ_I(μ_R(m))) = −m/m′
These five transformations, along with the identity transformation (μU(m) = m) yield the six-element group. With regard to the Jacobi elliptic functions, the general transformation can be expressed using just three functions:
cs(u, m) = γᵢ cs′(γᵢu, μᵢ(m))
ns(u, m) = γᵢ ns′(γᵢu, μᵢ(m))
ds(u, m) = γᵢ ds′(γᵢu, μᵢ(m))
where i = U, I, IR, R, RI, or RIR, identifying the transformation, γi is a multiplication factor common to these three functions, and the prime indicates the transformed function. The other nine transformed functions can be built up from the above three. The reason the cs, ns, ds functions were chosen to represent the transformation is that the other functions will be ratios of these three (except for their inverses) and the multiplication factors will cancel.
The following table lists the multiplication factors for the three ps functions, the transformed m's, and the transformed function names for each of the six transformations. (As usual, k² = m, 1 − k² = k₁² = m′, and the arguments (γᵢu, μᵢ(m)) are suppressed.)
Thus, for example, we may build the following table for the RIR transformation. The transformation is generally written

pq(u, m) = γ_pq pq′(k′u, −m/m′)

(The arguments (k′u, −m/m′) are suppressed.)
The value of the Jacobi transformations is that any set of Jacobi elliptic functions with any real-valued parameter m can be converted into another set for which 0 < m ≤ 1/2 and, for real values of u, the function values will be real.
=== Amplitude transformations ===
In the following, the second variable is suppressed and is equal to m:
sin(am(u + v) + am(u − v)) = 2 sn u cn u dn v / (1 − m sn²u sn²v),
cos(am(u + v) − am(u − v)) = (cn²v − sn²v dn²u) / (1 − m sn²u sn²v)

where both identities are valid for all u, v, m ∈ ℂ such that both sides are well-defined.
With {\displaystyle m_{1}=\left({\frac {1-{\sqrt {m'}}}{1+{\sqrt {m'}}}}\right)^{2},} we have
{\displaystyle \cos(\operatorname {am} (u,m)+\operatorname {am} (K-u,m))=-\operatorname {sn} ((1-{\sqrt {m'}})u,1/m_{1}),}
{\displaystyle \sin(\operatorname {am} ({\sqrt {m'}}u,-m/m')+\operatorname {am} ((1-{\sqrt {m'}})u,1/m_{1}))=\operatorname {sn} (u,m),}
{\displaystyle \sin(\operatorname {am} ((1+{\sqrt {m'}})u,m_{1})+\operatorname {am} ((1-{\sqrt {m'}})u,1/m_{1}))=\sin(2\operatorname {am} (u,m))}
where all the identities are valid for all {\displaystyle u,m\in \mathbb {C} } such that both sides are well-defined.
== The Jacobi hyperbola ==
Introducing complex numbers, our ellipse has an associated hyperbola:
{\displaystyle x^{2}-{\frac {y^{2}}{b^{2}}}=1}
from applying Jacobi's imaginary transformation to the elliptic functions in the above equation for x and y.
{\displaystyle x={\frac {1}{\operatorname {dn} (u,1-m)}},\quad y={\frac {\operatorname {sn} (u,1-m)}{\operatorname {dn} (u,1-m)}}}
It follows that we can put
{\displaystyle x=\operatorname {dn} (u,1-m),y=\operatorname {sn} (u,1-m)}.
So our ellipse has a dual ellipse with m replaced by 1 − m. This leads to the complex torus mentioned in the Introduction. Generally, m may be a complex number, but when m is real and m < 0, the curve is an ellipse with major axis in the x direction. At m = 0 the curve is a circle, and for 0 < m < 1, the curve is an ellipse with major axis in the y direction. At m = 1, the curve degenerates into two vertical lines at x = ±1. For m > 1, the curve is a hyperbola. When m is complex but not real, x or y or both are complex and the curve cannot be described on a real x–y diagram.
== Minor functions ==
Reversing the order of the two letters of the function name results in the reciprocals of the three functions above:
{\displaystyle \operatorname {ns} (u)={\frac {1}{\operatorname {sn} (u)}},\qquad \operatorname {nc} (u)={\frac {1}{\operatorname {cn} (u)}},\qquad \operatorname {nd} (u)={\frac {1}{\operatorname {dn} (u)}}.}
Similarly, the ratios of the three primary functions correspond to the first letter of the numerator followed by the first letter of the denominator:
{\displaystyle {\begin{aligned}\operatorname {sc} (u)={\frac {\operatorname {sn} (u)}{\operatorname {cn} (u)}},\qquad \operatorname {sd} (u)={\frac {\operatorname {sn} (u)}{\operatorname {dn} (u)}},\qquad \operatorname {dc} (u)={\frac {\operatorname {dn} (u)}{\operatorname {cn} (u)}},\qquad \operatorname {ds} (u)={\frac {\operatorname {dn} (u)}{\operatorname {sn} (u)}},\qquad \operatorname {cs} (u)={\frac {\operatorname {cn} (u)}{\operatorname {sn} (u)}},\qquad \operatorname {cd} (u)={\frac {\operatorname {cn} (u)}{\operatorname {dn} (u)}}.\end{aligned}}}
More compactly, we have
{\displaystyle \operatorname {pq} (u)={\frac {\operatorname {pn} (u)}{\operatorname {qn} (u)}}}
where p and q are any of the letters s, c, d.
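As an illustrative sketch (not part of the original notation), the rule pq = pn/qn can be implemented directly on top of SciPy's `ellipj`, which returns the three primary functions and the amplitude; the function name `pq` here is our own choice:

```python
# Sketch: generate any Jacobi function pq(u, m) from the primary sn, cn, dn
# returned by scipy.special.ellipj, using pq = pn/qn with nn := 1.
from scipy.special import ellipj

def pq(p, q, u, m):
    sn, cn, dn, _am = ellipj(u, m)
    val = {"s": sn, "c": cn, "d": dn, "n": 1.0}
    return val[p] / val[q]

u, m = 0.7, 0.3
sc_val = pq("s", "c", u, m)   # sn(u)/cn(u)
ns_val = pq("n", "s", u, m)   # 1/sn(u)
```

Note that SciPy's `ellipj` takes the parameter m = k², matching this article's convention.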
== Periodicity, poles, and residues ==
In the complex plane of the argument u, the Jacobi elliptic functions form a repeating pattern of poles (and zeroes). The residues of the poles all have the same absolute value, differing only in sign. Each function pq(u,m) has an "inverse function" (in the multiplicative sense) qp(u,m) in which the positions of the poles and zeroes are exchanged. The periods of repetition are generally different in the real and imaginary directions, hence the use of the term "doubly periodic" to describe them.
For the Jacobi amplitude and the Jacobi epsilon function:
{\displaystyle \operatorname {am} (u+2K,m)=\operatorname {am} (u,m)+\pi ,}
{\displaystyle \operatorname {am} (u+4iK',m)=\operatorname {am} (u,m),}
{\displaystyle {\mathcal {E}}(u+2K,m)={\mathcal {E}}(u,m)+2E,}
{\displaystyle {\mathcal {E}}(u+2iK',m)={\mathcal {E}}(u,m)+2iE{\frac {K'}{K}}-{\frac {\pi i}{K}}}
where {\displaystyle E(m)} is the complete elliptic integral of the second kind with parameter {\displaystyle m}.
The double periodicity of the Jacobi elliptic functions may be expressed as:
{\displaystyle \operatorname {pq} (u+2\alpha K(m)+2i\beta K(1-m)\,,\,m)=(-1)^{\gamma }\operatorname {pq} (u,m)}
where α and β are any pair of integers. K(⋅) is the complete elliptic integral of the first kind, also known as the quarter period. The power of negative unity (γ) is given in the following table:
When the factor (−1)γ is equal to −1, the equation expresses quasi-periodicity. When it is equal to unity, it expresses full periodicity. It can be seen, for example, that for the entries containing only α when α is even, full periodicity is expressed by the above equation, and the function has full periods of 4K(m) and 2iK(1 − m). Likewise, functions with entries containing only β have full periods of 2K(m) and 4iK(1 − m), while those with α + β have full periods of 4K(m) and 4iK(1 − m).
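The real-axis periods stated above can be checked numerically; the following sketch (using SciPy, with illustrative values of u and m chosen here) confirms that sn and cn have real period 4K(m), that dn has real period 2K(m), and that 2K(m) is only a quasi-period for sn:

```python
# Numerical check of the real periods: sn and cn repeat after 4K(m),
# dn repeats after 2K(m), and sn(u + 2K) = -sn(u).
from scipy.special import ellipj, ellipk

m = 0.6
K = ellipk(m)          # complete elliptic integral of the first kind, K(m)
u = 0.9

sn0, cn0, dn0, _ = ellipj(u, m)
sn4, cn4, dn4, _ = ellipj(u + 4 * K, m)
sn2, cn2, dn2, _ = ellipj(u + 2 * K, m)
```

Here `sn4 == sn0`, `cn4 == cn0` and `dn2 == dn0` up to rounding, while `sn2 == -sn0`.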
In the diagram on the right, which plots one repeating unit for each function, indicating phase along with the location of poles and zeroes, a number of regularities can be noted: The inverse of each function is opposite the diagonal, and has the same size unit cell, with poles and zeroes exchanged. The pole and zero arrangement in the auxiliary rectangle formed by (0,0), (K,0), (0,K′) and (K,K′) matches the pole and zero placement described in the introduction above. Also, the size of the white ovals indicating poles is a rough measure of the absolute value of the residue for that pole. The residues of the poles closest to the origin in the figure (i.e. in the auxiliary rectangle) are listed in the following table:
When applicable, poles displaced above by 2K or displaced to the right by 2K′ have the same value but with signs reversed, while those diagonally opposite have the same value. Note that poles and zeroes on the left and lower edges are considered part of the unit cell, while those on the upper and right edges are not.
The information about poles can in fact be used to characterize the Jacobi elliptic functions:
The function {\displaystyle u\mapsto \operatorname {sn} (u,m)} is the unique elliptic function having simple poles at {\displaystyle 2rK+(2s+1)iK'} (with {\displaystyle r,s\in \mathbb {Z} }) with residues {\displaystyle (-1)^{r}/{\sqrt {m}}}, taking the value {\displaystyle 0} at {\displaystyle 0}.
The function {\displaystyle u\mapsto \operatorname {cn} (u,m)} is the unique elliptic function having simple poles at {\displaystyle 2rK+(2s+1)iK'} (with {\displaystyle r,s\in \mathbb {Z} }) with residues {\displaystyle (-1)^{r+s-1}i/{\sqrt {m}}}, taking the value {\displaystyle 1} at {\displaystyle 0}.
The function {\displaystyle u\mapsto \operatorname {dn} (u,m)} is the unique elliptic function having simple poles at {\displaystyle 2rK+(2s+1)iK'} (with {\displaystyle r,s\in \mathbb {Z} }) with residues {\displaystyle (-1)^{s-1}i}, taking the value {\displaystyle 1} at {\displaystyle 0}.
== Special values ==
Setting {\displaystyle m=-1} gives the lemniscate elliptic functions {\displaystyle \operatorname {sl} } and {\displaystyle \operatorname {cl} }:
{\displaystyle \operatorname {sl} u=\operatorname {sn} (u,-1),\quad \operatorname {cl} u=\operatorname {cd} (u,-1)={\frac {\operatorname {cn} (u,-1)}{\operatorname {dn} (u,-1)}}.}
When {\displaystyle m=0} or {\displaystyle m=1}, the Jacobi elliptic functions are reduced to non-elliptic functions:
For the Jacobi amplitude,
{\displaystyle \operatorname {am} (u,0)=u} and {\displaystyle \operatorname {am} (u,1)=\operatorname {gd} u} where {\displaystyle \operatorname {gd} } is the Gudermannian function.
In general, if neither p nor q is d, then {\displaystyle \operatorname {pq} (u,1)=\operatorname {pq} (\operatorname {gd} (u),0)}.
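The m = 1 degeneration can be checked with elementary functions alone; a minimal sketch (the helper `gd` and the test values are our own):

```python
# Sketch checking the m = 1 degeneration against the Gudermannian function:
# pq(u, 1) = pq(gd(u), 0), so sn(u,1) = sin(gd(u)) = tanh(u)
# and sc(u,1) = tan(gd(u)) = sinh(u).
import math

def gd(u):
    """Gudermannian function, gd(u) = 2 atan(tanh(u/2))."""
    return 2.0 * math.atan(math.tanh(u / 2.0))

u = 1.3
sn_at_1 = math.sin(gd(u))   # sn(v, 0) = sin v evaluated at v = gd(u)
sc_at_1 = math.tan(gd(u))   # sc(v, 0) = tan v evaluated at v = gd(u)
```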
== Identities ==
=== Half angle formula ===
{\displaystyle \operatorname {sn} \left({\frac {u}{2}},m\right)=\pm {\sqrt {\frac {1-\operatorname {cn} (u,m)}{1+\operatorname {dn} (u,m)}}}}
{\displaystyle \operatorname {cn} \left({\frac {u}{2}},m\right)=\pm {\sqrt {\frac {\operatorname {cn} (u,m)+\operatorname {dn} (u,m)}{1+\operatorname {dn} (u,m)}}}}
{\displaystyle \operatorname {dn} \left({\frac {u}{2}},m\right)=\pm {\sqrt {\frac {m'+\operatorname {dn} (u,m)+m\operatorname {cn} (u,m)}{1+\operatorname {dn} (u,m)}}}}
=== K formulas ===
Half K formula
{\displaystyle \operatorname {sn} \left[{\tfrac {1}{2}}K(k);k\right]={\frac {\sqrt {2}}{{\sqrt {1+k}}+{\sqrt {1-k}}}}}
{\displaystyle \operatorname {cn} \left[{\tfrac {1}{2}}K(k);k\right]={\frac {{\sqrt {2}}\,{\sqrt[{4}]{1-k^{2}}}}{{\sqrt {1+k}}+{\sqrt {1-k}}}}}
{\displaystyle \operatorname {dn} \left[{\tfrac {1}{2}}K(k);k\right]={\sqrt[{4}]{1-k^{2}}}}
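A quick numerical sketch of the half-K values (SciPy's `ellipj`/`ellipk` take the parameter m = k²; the modulus k = 0.6 is an arbitrary choice):

```python
# Verify sn(K/2) and dn(K/2) against the closed forms above.
import math
from scipy.special import ellipj, ellipk

k = 0.6
m = k * k
K = ellipk(m)
sn_h, cn_h, dn_h, _ = ellipj(K / 2.0, m)

sn_formula = math.sqrt(2) / (math.sqrt(1 + k) + math.sqrt(1 - k))
dn_formula = (1 - k * k) ** 0.25
```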
Third K formula
{\displaystyle \operatorname {sn} \left[{\frac {1}{3}}K\left({\frac {x^{3}}{{\sqrt {x^{6}+1}}+1}}\right);{\frac {x^{3}}{{\sqrt {x^{6}+1}}+1}}\right]={\frac {{\sqrt {2{\sqrt {x^{4}-x^{2}+1}}-x^{2}+2}}+{\sqrt {x^{2}+1}}-1}{{\sqrt {2{\sqrt {x^{4}-x^{2}+1}}-x^{2}+2}}+{\sqrt {x^{2}+1}}+1}}}
To get x³, we take the tangent of twice the arctangent of the modulus.
This equation also leads to the sn value at one third of K:
{\displaystyle k^{2}s^{4}-2k^{2}s^{3}+2s-1=0}
{\displaystyle s=\operatorname {sn} \left[{\tfrac {1}{3}}K(k);k\right]}
These equations lead to the other values of the Jacobi functions:
{\displaystyle \operatorname {cn} \left[{\tfrac {2}{3}}K(k);k\right]=1-\operatorname {sn} \left[{\tfrac {1}{3}}K(k);k\right]}
{\displaystyle \operatorname {dn} \left[{\tfrac {2}{3}}K(k);k\right]=1/\operatorname {sn} \left[{\tfrac {1}{3}}K(k);k\right]-1}
Fifth K formula
The following equation has the following solution:
{\displaystyle 4k^{2}x^{6}+8k^{2}x^{5}+2(1-k^{2})^{2}x-(1-k^{2})^{2}=0}
{\displaystyle x={\frac {1}{2}}-{\frac {1}{2}}k^{2}\operatorname {sn} \left[{\tfrac {2}{5}}K(k);k\right]^{2}\operatorname {sn} \left[{\tfrac {4}{5}}K(k);k\right]^{2}={\frac {\operatorname {sn} \left[{\frac {4}{5}}K(k);k\right]^{2}-\operatorname {sn} \left[{\frac {2}{5}}K(k);k\right]^{2}}{2\operatorname {sn} \left[{\frac {2}{5}}K(k);k\right]\operatorname {sn} \left[{\frac {4}{5}}K(k);k\right]}}}
To get the sn values, we put the solution x into the following expressions:
{\displaystyle \operatorname {sn} \left[{\tfrac {2}{5}}K(k);k\right]=(1+k^{2})^{-1/2}{\sqrt {2(1-x-x^{2})(x^{2}+1-x{\sqrt {x^{2}+1}})}}}
{\displaystyle \operatorname {sn} \left[{\tfrac {4}{5}}K(k);k\right]=(1+k^{2})^{-1/2}{\sqrt {2(1-x-x^{2})(x^{2}+1+x{\sqrt {x^{2}+1}})}}}
=== Relations between squares of the functions ===
Relations between squares of the functions can be derived from two basic relationships (Arguments (u,m) suppressed):
{\displaystyle \operatorname {cn} ^{2}+\operatorname {sn} ^{2}=1}
{\displaystyle \operatorname {cn} ^{2}+m'\operatorname {sn} ^{2}=\operatorname {dn} ^{2}}
where m + m' = 1. Multiplying by any function of the form nq yields more general equations:
{\displaystyle \operatorname {cq} ^{2}+\operatorname {sq} ^{2}=\operatorname {nq} ^{2}}
{\displaystyle \operatorname {cq} ^{2}{}+m'\operatorname {sq} ^{2}=\operatorname {dq} ^{2}}
With q = d, these correspond trigonometrically to the equations for the unit circle ({\displaystyle x^{2}+y^{2}=r^{2}}) and the unit ellipse ({\displaystyle x^{2}{}+m'y^{2}=1}), with x = cd, y = sd and r = nd. Using the multiplication rule, other relationships may be derived. For example:
{\displaystyle -\operatorname {dn} ^{2}{}+m'=-m\operatorname {cn} ^{2}=m\operatorname {sn} ^{2}-m}
{\displaystyle -m'\operatorname {nd} ^{2}{}+m'=-mm'\operatorname {sd} ^{2}=m\operatorname {cd} ^{2}-m}
{\displaystyle m'\operatorname {sc} ^{2}{}+m'=m'\operatorname {nc} ^{2}=\operatorname {dc} ^{2}-m}
{\displaystyle \operatorname {cs} ^{2}{}+m'=\operatorname {ds} ^{2}=\operatorname {ns} ^{2}-m}
=== Addition theorems ===
The functions satisfy the two square relations (dependence on m suppressed)
{\displaystyle \operatorname {cn} ^{2}(u)+\operatorname {sn} ^{2}(u)=1,\,}
{\displaystyle \operatorname {dn} ^{2}(u)+m\operatorname {sn} ^{2}(u)=1.\,}
From this we see that (cn, sn, dn) parametrizes an elliptic curve which is the intersection of the two quadrics defined by the above two equations. We now may define a group law for points on this curve by the addition formulas for the Jacobi functions
{\displaystyle {\begin{aligned}\operatorname {cn} (x+y)&={\operatorname {cn} (x)\operatorname {cn} (y)-\operatorname {sn} (x)\operatorname {sn} (y)\operatorname {dn} (x)\operatorname {dn} (y) \over {1-m\operatorname {sn} ^{2}(x)\operatorname {sn} ^{2}(y)}},\\[8pt]\operatorname {sn} (x+y)&={\operatorname {sn} (x)\operatorname {cn} (y)\operatorname {dn} (y)+\operatorname {sn} (y)\operatorname {cn} (x)\operatorname {dn} (x) \over {1-m\operatorname {sn} ^{2}(x)\operatorname {sn} ^{2}(y)}},\\[8pt]\operatorname {dn} (x+y)&={\operatorname {dn} (x)\operatorname {dn} (y)-m\operatorname {sn} (x)\operatorname {sn} (y)\operatorname {cn} (x)\operatorname {cn} (y) \over {1-m\operatorname {sn} ^{2}(x)\operatorname {sn} ^{2}(y)}}.\end{aligned}}}
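The addition formulas are easy to check numerically; a sketch using SciPy (the values of x, y, m are arbitrary illustrative choices):

```python
# Check the sn addition formula against direct evaluation at x + y.
from scipy.special import ellipj

m = 0.4
x, y = 0.5, 0.8
snx, cnx, dnx, _ = ellipj(x, m)
sny, cny, dny, _ = ellipj(y, m)
sn_sum_direct = ellipj(x + y, m)[0]

denom = 1 - m * snx**2 * sny**2
sn_sum_formula = (snx * cny * dny + sny * cnx * dnx) / denom
```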
The Jacobi epsilon and zn functions satisfy a quasi-addition theorem:
{\displaystyle {\begin{aligned}{\mathcal {E}}(x+y,m)&={\mathcal {E}}(x,m)+{\mathcal {E}}(y,m)-m\operatorname {sn} (x,m)\operatorname {sn} (y,m)\operatorname {sn} (x+y,m),\\\operatorname {zn} (x+y,m)&=\operatorname {zn} (x,m)+\operatorname {zn} (y,m)-m\operatorname {sn} (x,m)\operatorname {sn} (y,m)\operatorname {sn} (x+y,m).\end{aligned}}}
Double angle formulae can be easily derived from the above equations by setting x = y. Half angle formulae are all of the form:
{\displaystyle \operatorname {pq} ({\tfrac {1}{2}}u,m)^{2}=f_{\mathrm {p} }/f_{\mathrm {q} }}
where:
{\displaystyle f_{\mathrm {c} }=\operatorname {cn} (u,m)+\operatorname {dn} (u,m)}
{\displaystyle f_{\mathrm {s} }=1-\operatorname {cn} (u,m)}
{\displaystyle f_{\mathrm {n} }=1+\operatorname {dn} (u,m)}
{\displaystyle f_{\mathrm {d} }=(1+\operatorname {dn} (u,m))-m(1-\operatorname {cn} (u,m))}
== Jacobi elliptic functions as solutions of nonlinear ordinary differential equations ==
=== Derivatives with respect to the first variable ===
The derivatives of the three basic Jacobi elliptic functions (with respect to the first variable, with {\displaystyle m} fixed) are:
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} z}}\operatorname {sn} (z)=\operatorname {cn} (z)\operatorname {dn} (z),}
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} z}}\operatorname {cn} (z)=-\operatorname {sn} (z)\operatorname {dn} (z),}
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} z}}\operatorname {dn} (z)=-m\operatorname {sn} (z)\operatorname {cn} (z).}
These can be used to derive the derivatives of all other functions as shown in the table below (arguments (u,m) suppressed):
Also {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} z}}{\mathcal {E}}(z)=\operatorname {dn} (z)^{2}.}
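The derivative formulas can be sketched numerically by central differences (SciPy for the function values; step size and sample point are arbitrary choices):

```python
# Finite-difference check of d/dz sn = cn dn and d/dz dn = -m sn cn.
from scipy.special import ellipj

m, z, h = 0.5, 0.7, 1e-6
sn_p, cn_p, dn_p, _ = ellipj(z + h, m)
sn_m, cn_m, dn_m, _ = ellipj(z - h, m)
sn, cn, dn, _ = ellipj(z, m)

d_sn = (sn_p - sn_m) / (2 * h)   # central difference approximates sn'(z)
d_dn = (dn_p - dn_m) / (2 * h)   # central difference approximates dn'(z)
```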
With the addition theorems above and for a given m with 0 < m < 1 the major functions are therefore solutions to the following nonlinear ordinary differential equations:
{\displaystyle \operatorname {am} (x)} solves the differential equations
{\displaystyle {\frac {\mathrm {d} ^{2}y}{\mathrm {d} x^{2}}}+m\sin(y)\cos(y)=0}
and
{\displaystyle \left({\frac {\mathrm {d} y}{\mathrm {d} x}}\right)^{2}=1-m\sin(y)^{2}} (for {\displaystyle x} not on a branch cut)
{\displaystyle \operatorname {sn} (x)} solves the differential equations
{\displaystyle {\frac {\mathrm {d} ^{2}y}{\mathrm {d} x^{2}}}+(1+m)y-2my^{3}=0}
and
{\displaystyle \left({\frac {\mathrm {d} y}{\mathrm {d} x}}\right)^{2}=(1-y^{2})(1-my^{2})}
{\displaystyle \operatorname {cn} (x)} solves the differential equations
{\displaystyle {\frac {\mathrm {d} ^{2}y}{\mathrm {d} x^{2}}}+(1-2m)y+2my^{3}=0}
and
{\displaystyle \left({\frac {\mathrm {d} y}{\mathrm {d} x}}\right)^{2}=(1-y^{2})(1-m+my^{2})}
{\displaystyle \operatorname {dn} (x)} solves the differential equations
{\displaystyle {\frac {\mathrm {d} ^{2}y}{\mathrm {d} x^{2}}}-(2-m)y+2y^{3}=0}
and
{\displaystyle \left({\frac {\mathrm {d} y}{\mathrm {d} x}}\right)^{2}=(y^{2}-1)(1-m-y^{2})}
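As a sketch, the second-order equation for cn can be verified by a finite-difference estimate of y″ (sample point and step are arbitrary; the residual should be near zero up to discretization error):

```python
# Check that y = cn(x, m) satisfies y'' + (1 - 2m) y + 2m y^3 = 0.
from scipy.special import ellipj

m, x, h = 0.3, 0.6, 1e-4
y_p = ellipj(x + h, m)[1]
y_0 = ellipj(x, m)[1]
y_m = ellipj(x - h, m)[1]

y_dd = (y_p - 2 * y_0 + y_m) / h**2          # second central difference
residual = y_dd + (1 - 2 * m) * y_0 + 2 * m * y_0**3
```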
The function which exactly solves the pendulum differential equation,
{\displaystyle {\frac {\mathrm {d} ^{2}\theta }{\mathrm {d} t^{2}}}+c\sin \theta =0,}
with initial angle {\displaystyle \theta _{0}} and zero initial angular velocity is
{\displaystyle {\begin{aligned}\theta &=2\arcsin({\sqrt {m}}\operatorname {cd} ({\sqrt {c}}t,m))\\&=2\operatorname {am} \left({\frac {1+{\sqrt {m}}}{2}}({\sqrt {c}}t+K),{\frac {4{\sqrt {m}}}{(1+{\sqrt {m}})^{2}}}\right)-2\operatorname {am} \left({\frac {1+{\sqrt {m}}}{2}}({\sqrt {c}}t-K),{\frac {4{\sqrt {m}}}{(1+{\sqrt {m}})^{2}}}\right)-\pi \end{aligned}}}
where {\displaystyle m=\sin(\theta _{0}/2)^{2}}, {\displaystyle c>0} and {\displaystyle t\in \mathbb {R} }.
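As a sketch, the closed-form solution (in its cd form) can be compared against a direct numerical integration of the pendulum equation; the RK4 integrator, step size, and initial angle below are our own illustrative choices:

```python
# Compare theta(t) = 2 arcsin(sqrt(m) cd(sqrt(c) t, m)), m = sin^2(theta0/2),
# with RK4 integration of theta'' = -c sin(theta), starting from rest.
import math
from scipy.special import ellipj

c, theta0 = 1.0, 1.0
m = math.sin(theta0 / 2) ** 2

def theta_exact(t):
    sn, cn, dn, _ = ellipj(math.sqrt(c) * t, m)
    return 2 * math.asin(math.sqrt(m) * cn / dn)   # cd = cn/dn

def deriv(th, om):
    return om, -c * math.sin(th)

th, om, t, dt = theta0, 0.0, 0.0, 1e-3
while t < 5.0 - 1e-12:
    k1 = deriv(th, om)
    k2 = deriv(th + dt / 2 * k1[0], om + dt / 2 * k1[1])
    k3 = deriv(th + dt / 2 * k2[0], om + dt / 2 * k2[1])
    k4 = deriv(th + dt * k3[0], om + dt * k3[1])
    th += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    om += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    t += dt
err = abs(th - theta_exact(t))
```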
=== Derivatives with respect to the second variable ===
With the first argument {\displaystyle z} fixed, the derivatives with respect to the second variable {\displaystyle m} are as follows:
{\displaystyle {\begin{aligned}{\frac {\mathrm {d} }{\mathrm {d} m}}\operatorname {sn} (z)&={\frac {\operatorname {dn} (z)\operatorname {cn} (z)((1-m)z-{\mathcal {E}}(z)+m\operatorname {cd} (z)\operatorname {sn} (z))}{2m(1-m)}},\\{\frac {\mathrm {d} }{\mathrm {d} m}}\operatorname {cn} (z)&={\frac {\operatorname {sn} (z)\operatorname {dn} (z)((m-1)z+{\mathcal {E}}(z)-m\operatorname {sn} (z)\operatorname {cd} (z))}{2m(1-m)}},\\{\frac {\mathrm {d} }{\mathrm {d} m}}\operatorname {dn} (z)&={\frac {\operatorname {sn} (z)\operatorname {cn} (z)((m-1)z+{\mathcal {E}}(z)-\operatorname {dn} (z)\operatorname {sc} (z))}{2(1-m)}},\\{\frac {\mathrm {d} }{\mathrm {d} m}}{\mathcal {E}}(z)&={\frac {\operatorname {cn} (z)(\operatorname {sn} (z)\operatorname {dn} (z)-\operatorname {cn} (z){\mathcal {E}}(z))}{2(1-m)}}-{\frac {z}{2}}\operatorname {sn} (z)^{2}.\end{aligned}}}
== Expansion in terms of the nome ==
Let the nome be {\displaystyle q=\exp(-\pi K'(m)/K(m))=e^{i\pi \tau }}, {\displaystyle \operatorname {Im} (\tau )>0}, {\displaystyle m=k^{2}}, and let {\displaystyle v=\pi u/(2K(m))}. Then the functions have expansions as Lambert series
{\displaystyle \operatorname {am} (u,m)={\frac {\pi u}{2K(m)}}+2\sum _{n=1}^{\infty }{\frac {q^{n}}{n(1+q^{2n})}}\sin(2nv),}
{\displaystyle \operatorname {sn} (u,m)={\frac {2\pi }{kK(m)}}\sum _{n=0}^{\infty }{\frac {q^{n+1/2}}{1-q^{2n+1}}}\sin((2n+1)v),}
{\displaystyle \operatorname {cn} (u,m)={\frac {2\pi }{kK(m)}}\sum _{n=0}^{\infty }{\frac {q^{n+1/2}}{1+q^{2n+1}}}\cos((2n+1)v),}
{\displaystyle \operatorname {dn} (u,m)={\frac {\pi }{2K(m)}}+{\frac {2\pi }{K(m)}}\sum _{n=1}^{\infty }{\frac {q^{n}}{1+q^{2n}}}\cos(2nv),}
{\displaystyle \operatorname {zn} (u,m)={\frac {2\pi }{K(m)}}\sum _{n=1}^{\infty }{\frac {q^{n}}{1-q^{2n}}}\sin(2nv)}
when {\displaystyle \left|\operatorname {Im} (u/K)\right|<\operatorname {Im} (iK'/K).}
Bivariate power series expansions have been published by Schett.
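A sketch evaluating the dn series above and comparing with SciPy (truncation at 24 terms and the sample point are arbitrary choices; for m = 1/2 the nome is q = e^{−π}, so the series converges very fast):

```python
# Evaluate the Lambert-series expansion of dn with nome
# q = exp(-pi K(1-m)/K(m)) and v = pi u/(2 K(m)).
import math
from scipy.special import ellipj, ellipk

m, u = 0.5, 0.3
K = ellipk(m)
Kp = ellipk(1 - m)
q = math.exp(-math.pi * Kp / K)
v = math.pi * u / (2 * K)

dn_series = math.pi / (2 * K) + (2 * math.pi / K) * sum(
    q**n / (1 + q**(2 * n)) * math.cos(2 * n * v) for n in range(1, 25)
)
dn_scipy = ellipj(u, m)[2]
```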
== Fast computation ==
The theta function ratios provide an efficient way of computing the Jacobi elliptic functions. There is an alternative method, based on the arithmetic-geometric mean and Landen's transformations:
Initialize {\displaystyle a_{0}=1,\,b_{0}={\sqrt {1-m}}} where {\displaystyle 0<m<1}.
Define {\displaystyle a_{n}={\frac {a_{n-1}+b_{n-1}}{2}},\,b_{n}={\sqrt {a_{n-1}b_{n-1}}},\,c_{n}={\frac {a_{n-1}-b_{n-1}}{2}}} where {\displaystyle n\geq 1}.
Then define {\displaystyle \varphi _{N}=2^{N}a_{N}u} for {\displaystyle u\in \mathbb {R} } and a fixed {\displaystyle N\in \mathbb {N} }. If {\displaystyle \varphi _{n-1}={\frac {1}{2}}\left(\varphi _{n}+\arcsin \left({\frac {c_{n}}{a_{n}}}\sin \varphi _{n}\right)\right)} for {\displaystyle n\geq 1}, then
{\displaystyle \operatorname {am} (u,m)=\varphi _{0},\quad \operatorname {zn} (u,m)=\sum _{n=1}^{N}c_{n}\sin \varphi _{n}}
as {\displaystyle N\to \infty }. This is notable for its rapid convergence. It is then trivial to compute all Jacobi elliptic functions from the Jacobi amplitude {\displaystyle \operatorname {am} } on the real line.
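The scheme above translates almost line for line into code; a minimal pure-Python sketch (N = 12 and the test arguments are our own choices, valid for 0 < m < 1 and real u):

```python
# AGM/Landen scheme for the Jacobi amplitude am(u, m) and zn(u, m).
import math

def am_agm(u, m, N=12):
    a, b, c = [1.0], [math.sqrt(1.0 - m)], [0.0]   # c[0] is a placeholder
    for n in range(1, N + 1):
        a.append((a[n - 1] + b[n - 1]) / 2)
        b.append(math.sqrt(a[n - 1] * b[n - 1]))
        c.append((a[n - 1] - b[n - 1]) / 2)
    phi = 2.0**N * a[N] * u                        # phi_N
    zn = 0.0
    for n in range(N, 0, -1):                      # descend phi_N -> phi_0
        zn += c[n] * math.sin(phi)
        phi = (phi + math.asin(c[n] / a[n] * math.sin(phi))) / 2
    return phi, zn                                 # am(u, m), zn(u, m)

phi0, zn0 = am_agm(0.8, 0.7)
```

Since sn = sin(am), cn = cos(am) and dn = √(1 − m sn²), this single routine yields all twelve functions on the real line.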
In conjunction with the addition theorems for elliptic functions (which hold for complex numbers in general) and the Jacobi transformations, the method of computation described above can be used to compute all Jacobi elliptic functions in the whole complex plane.
Another method of fast computation of the Jacobi elliptic functions via the arithmetic–geometric mean, avoiding the computation of the Jacobi amplitude, is due to Herbert E. Salzer:
Let {\displaystyle 0\leq m\leq 1,\,0\leq u\leq K(m),\,a_{0}=1,\,b_{0}={\sqrt {1-m}},}
{\displaystyle a_{n+1}={\frac {a_{n}+b_{n}}{2}},\,b_{n+1}={\sqrt {a_{n}b_{n}}},\,c_{n+1}={\frac {a_{n}-b_{n}}{2}}.}
Set
{\displaystyle {\begin{aligned}y_{N}&={\frac {a_{N}}{\sin(a_{N}u)}}\\y_{N-1}&=y_{N}+{\frac {a_{N}c_{N}}{y_{N}}}\\y_{N-2}&=y_{N-1}+{\frac {a_{N-1}c_{N-1}}{y_{N-1}}}\\\vdots &=\vdots \\y_{0}&=y_{1}+{\frac {m}{4y_{1}}}.\end{aligned}}}
Then
{\displaystyle {\begin{aligned}\operatorname {sn} (u,m)&={\frac {1}{y_{0}}}\\\operatorname {cn} (u,m)&={\sqrt {1-{\frac {1}{y_{0}^{2}}}}}\\\operatorname {dn} (u,m)&={\sqrt {1-{\frac {m}{y_{0}^{2}}}}}\end{aligned}}}
as {\displaystyle N\to \infty }.
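Salzer's recursion can be sketched compactly in pure Python (the final step y₀ = y₁ + m/(4y₁) is the general rule y_{n−1} = y_n + a_n c_n/y_n at n = 1, since a₁c₁ = (1 − b₀²)/4 = m/4; N = 10 is our own choice):

```python
# Salzer's AGM scheme for sn(u, m), 0 <= m <= 1, 0 < u <= K(m).
import math

def sn_salzer(u, m, N=10):
    a, b, c = [1.0], [math.sqrt(1.0 - m)], [0.0]
    for n in range(1, N + 1):
        a.append((a[n - 1] + b[n - 1]) / 2)
        b.append(math.sqrt(a[n - 1] * b[n - 1]))
        c.append((a[n - 1] - b[n - 1]) / 2)
    y = a[N] / math.sin(a[N] * u)          # y_N
    for n in range(N, 0, -1):
        y = y + a[n] * c[n] / y            # y_{n-1} = y_n + a_n c_n / y_n
    return 1.0 / y                         # sn(u, m) = 1/y_0

s = sn_salzer(0.8, 0.7)
```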
Yet another method for rapidly converging computation of the Jacobi elliptic sine function, found in the literature, is shown below.
Let:
{\displaystyle {\begin{aligned}&a_{0}=u&b_{0}={\frac {1-{\sqrt {1-m}}}{1+{\sqrt {1-m}}}}\\&a_{1}={\frac {a_{0}}{1+b_{0}}}&b_{1}={\frac {1-{\sqrt {1-b_{0}^{2}}}}{1+{\sqrt {1-b_{0}^{2}}}}}\\&\vdots =\vdots &\vdots =\vdots \\&a_{n}={\frac {a_{n-1}}{1+b_{n-1}}}&b_{n}={\frac {1-{\sqrt {1-b_{n-1}^{2}}}}{1+{\sqrt {1-b_{n-1}^{2}}}}}\\\end{aligned}}}
Then set:
{\displaystyle {\begin{aligned}y_{n+1}&=\sin(a_{n})\\y_{n}&={\frac {y_{n+1}(1+b_{n})}{1+y_{n+1}^{2}b_{n}}}\\\vdots &=\vdots \\y_{0}&={\frac {y_{1}(1+b_{0})}{1+y_{1}^{2}b_{0}}}\\\end{aligned}}}
Then:
{\displaystyle \operatorname {sn} (u,m)=y_{0}{\text{ as }}n\rightarrow \infty }.
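This is the repeated descending Landen (Gauss) transformation: b₀ = (1 − k′)/(1 + k′) is the transformed modulus, and the sequence b_n tends to zero quadratically, so a small fixed depth suffices. A pure-Python sketch (N = 8 is our own choice):

```python
# sn(u, m) via repeated descending Landen transformation; b_n -> 0
# quadratically, so sin(a_N) seeds the back-substitution.
import math

def sn_landen(u, m, N=8):
    a, b = [u], [(1 - math.sqrt(1 - m)) / (1 + math.sqrt(1 - m))]
    for n in range(1, N + 1):
        a.append(a[n - 1] / (1 + b[n - 1]))
        r = math.sqrt(1 - b[n - 1] ** 2)
        b.append((1 - r) / (1 + r))
    y = math.sin(a[N])                           # y_{N+1} = sin(a_N)
    for n in range(N, -1, -1):
        y = y * (1 + b[n]) / (1 + y * y * b[n])  # y_n from y_{n+1}
    return y                                     # sn(u, m) = y_0

s = sn_landen(0.8, 0.7)
```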
== Approximation in terms of hyperbolic functions ==
The Jacobi elliptic functions can be expanded in terms of the hyperbolic functions. When {\displaystyle m} is close to unity, such that {\displaystyle m'^{2}} and higher powers of {\displaystyle m'} can be neglected, we have:
sn(u): {\displaystyle \operatorname {sn} (u,m)\approx \tanh(u)+{\frac {1}{4}}m'(\sinh(u)\cosh(u)-u)\operatorname {sech} ^{2}(u).}
cn(u): {\displaystyle \operatorname {cn} (u,m)\approx \operatorname {sech} (u)-{\frac {1}{4}}m'(\sinh(u)\cosh(u)-u)\tanh(u)\operatorname {sech} (u).}
dn(u): {\displaystyle \operatorname {dn} (u,m)\approx \operatorname {sech} (u)+{\frac {1}{4}}m'(\sinh(u)\cosh(u)+u)\tanh(u)\operatorname {sech} (u).}
For the Jacobi amplitude, {\displaystyle \operatorname {am} (u,m)\approx \operatorname {gd} (u)+{\frac {1}{4}}m'(\sinh(u)\cosh(u)-u)\operatorname {sech} (u).}
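A sketch of the accuracy of the sn expansion: with m′ = 10⁻³ (our illustrative choice) the corrected approximation should agree with the exact value to roughly O(m′²), and in particular it should beat the bare tanh(u) approximation:

```python
# Accuracy check of the m -> 1 expansion for sn with m' = 1 - m = 1e-3.
import math
from scipy.special import ellipj

mp = 1e-3                 # m' = 1 - m
m = 1.0 - mp
u = 1.2

sn_exact = ellipj(u, m)[0]
sech = 1.0 / math.cosh(u)
sn_approx = (math.tanh(u)
             + 0.25 * mp * (math.sinh(u) * math.cosh(u) - u) * sech**2)
```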
== Continued fractions ==
Assuming real numbers {\displaystyle a,p} with {\displaystyle 0<a<p} and the nome {\displaystyle q=e^{\pi i\tau }}, {\displaystyle \operatorname {Im} (\tau )>0}, with elliptic modulus {\textstyle k(\tau )={\sqrt {1-k'(\tau )^{2}}}=(\vartheta _{10}(0;\tau )/\vartheta _{00}(0;\tau ))^{2}}. If {\displaystyle K[\tau ]=K(k(\tau ))}, where {\displaystyle K(x)=\pi /2\cdot {}_{2}F_{1}(1/2,1/2;1;x^{2})} is the complete elliptic integral of the first kind, then the following continued fraction expansion holds:
{\displaystyle {\begin{aligned}&{\frac {{\textrm {dn}}\left((p/2-a)\tau K\left[{\frac {p\tau }{2}}\right];k\left({\frac {p\tau }{2}}\right)\right)}{\sqrt {k'\left({\frac {p\tau }{2}}\right)}}}={\frac {\sum _{n=-\infty }^{\infty }q^{p/2n^{2}+(p/2-a)n}}{\sum _{n=-\infty }^{\infty }(-1)^{n}q^{p/2n^{2}+(p/2-a)n}}}\\[4pt]={}&-1+{\frac {2}{1-{}}}\,{\frac {q^{a}+q^{p-a}}{1-q^{p}+{}}}\,{\frac {(q^{a}+q^{2p-a})(q^{a+p}+q^{p-a})}{1-q^{3p}+{}}}\,{\frac {q^{p}(q^{a}+q^{3p-a})(q^{a+2p}+q^{p-a})}{1-q^{5p}+{}}}\,{\frac {q^{2p}(q^{a}+q^{4p-a})(q^{a+3p}+q^{p-a})}{1-q^{7p}+{}}}\cdots \end{aligned}}}
Known continued fractions involving {\displaystyle {\textrm {sn}}(t),{\textrm {cn}}(t)} and {\displaystyle {\textrm {dn}}(t)} with elliptic modulus {\displaystyle k} are:
For {\displaystyle z\in \mathbb {C} }, {\displaystyle |k|<1} (p. 374):
{\displaystyle \int _{0}^{\infty }{\textrm {sn}}(t)e^{-tz}\,\mathrm {d} t={\frac {1}{1^{2}(1+k^{2})+z^{2}-{}}}\,{\frac {1\cdot 2^{2}\cdot 3k^{2}}{3^{2}(1+k^{2})+z^{2}-{}}}\,{\frac {3\cdot 4^{2}\cdot 5k^{2}}{5^{2}(1+k^{2})+z^{2}-{}}}\cdots }
For {\displaystyle z\in \mathbb {C} \setminus \{0\}}, {\displaystyle |k|<1} (p. 375):
{\displaystyle \int _{0}^{\infty }{\textrm {sn}}^{2}(t)e^{-tz}\,\mathrm {d} t={\frac {2z^{-1}}{2^{2}(1+k^{2})+z^{2}-{}}}\,{\frac {2\cdot 3^{2}\cdot 4k^{2}}{4^{2}(1+k^{2})+z^{2}-{}}}\,{\frac {4\cdot 5^{2}\cdot 6k^{2}}{6^{2}(1+k^{2})+z^{2}-{}}}\cdots }
For {\displaystyle z\in \mathbb {C} \setminus \{0\}}, {\displaystyle |k|<1} (p. 220):
{\displaystyle \int _{0}^{\infty }{\textrm {cn}}(t)e^{-tz}\,\mathrm {d} t={\frac {1}{z+{}}}\,{\frac {1^{2}}{z+{}}}\,{\frac {2^{2}k^{2}}{z+{}}}\,{\frac {3^{2}}{z+{}}}\,{\frac {4^{2}k^{2}}{z+{}}}\,{\frac {5^{2}}{z+{}}}\cdots }
For z ∈ ℂ ∖ {0}, |k| < 1 (p. 374):
{\displaystyle \int _{0}^{\infty }{\textrm {dn}}(t)e^{-tz}\,\mathrm {d} t={\frac {1}{z+{}}}\,{\frac {1^{2}k^{2}}{z+{}}}\,{\frac {2^{2}}{z+{}}}\,{\frac {3^{2}k^{2}}{z+{}}}\,{\frac {4^{2}}{z+{}}}\,{\frac {5^{2}k^{2}}{z+{}}}\cdots }
For z ∈ ℂ, |k| < 1 (p. 375):
{\displaystyle \int _{0}^{\infty }{\frac {{\textrm {sn}}(t){\textrm {cn}}(t)}{{\textrm {dn}}(t)}}e^{-tz}\,\mathrm {d} t={\frac {1}{2\cdot 1^{2}(2-k^{2})+z^{2}-{}}}\,{\frac {1\cdot 2^{2}\cdot 3k^{4}}{2\cdot 3^{2}(2-k^{2})+z^{2}-{}}}\,{\frac {3\cdot 4^{2}\cdot 5k^{4}}{2\cdot 5^{2}(2-k^{2})+z^{2}-{}}}\cdots }
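These Laplace transforms can be evaluated numerically by truncating the continued fraction and running a backward recurrence. A minimal sketch for the cn(t) expansion above (the truncation depth is an assumption; at k = 0, cn(t) = cos(t), so the transform should reduce to z/(z² + 1)):

```python
# Evaluate the continued fraction for the Laplace transform of cn(t),
#   1/(z+) 1^2/(z+) 2^2 k^2/(z+) 3^2/(z+) 4^2 k^2/(z+) 5^2/(z+) ...
# by backward recurrence from an assumed finite depth.

def cn_laplace_cf(z, k, depth=60):
    """Truncated continued fraction for the Laplace transform of cn(t; k)."""
    def a(n):
        # Partial numerators: a1 = 1; thereafter, with m = n - 1,
        # the numerator is m^2 for odd m and m^2 k^2 for even m.
        if n == 1:
            return 1.0
        m = n - 1
        return float(m * m) if m % 2 == 1 else m * m * k * k

    tail = 0.0
    for n in range(depth, 0, -1):
        tail = a(n) / (z + tail)
    return tail

# Sanity check: at k = 0, cn(t) = cos(t), whose Laplace transform is z/(z^2 + 1).
print(cn_laplace_cf(2.0, 0.0))  # ≈ 2/5 = 0.4
print(cn_laplace_cf(3.0, 0.0))  # ≈ 3/10 = 0.3
```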
== Inverse functions ==
The inverses of the Jacobi elliptic functions can be defined similarly to the inverse trigonometric functions: if x = sn(ξ, m), then ξ = arcsn(x, m). They can be represented as elliptic integrals, and power series representations have been found.
{\displaystyle \operatorname {arcsn} (x,m)=\int _{0}^{x}{\frac {\mathrm {d} t}{\sqrt {(1-t^{2})(1-mt^{2})}}}}
{\displaystyle \operatorname {arccn} (x,m)=\int _{x}^{1}{\frac {\mathrm {d} t}{\sqrt {(1-t^{2})(1-m+mt^{2})}}}}
{\displaystyle \operatorname {arcdn} (x,m)=\int _{x}^{1}{\frac {\mathrm {d} t}{\sqrt {(1-t^{2})(t^{2}+m-1)}}}}
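The inverse-function integrals above can be evaluated by ordinary numerical quadrature when the upper limit stays away from the singularity at t = 1. A sketch using Simpson's rule (the step count is an assumption), checked against the special cases sn(u, 0) = sin u and sn(u, 1) = tanh u:

```python
import math

def arcsn(x, m, n=2000):
    """arcsn(x, m) = integral from 0 to x of dt / sqrt((1 - t^2)(1 - m t^2)),
    computed by Simpson's rule; valid here for 0 <= x < 1, since the
    integrand is singular only at t = 1."""
    if n % 2:
        n += 1  # Simpson's rule needs an even number of subintervals
    h = x / n

    def f(t):
        return 1.0 / math.sqrt((1.0 - t * t) * (1.0 - m * t * t))

    s = f(0.0) + f(x)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3.0

# Checks against known special cases:
#   m = 0: sn(u, 0) = sin u,  so arcsn(x, 0) = arcsin(x)
#   m = 1: sn(u, 1) = tanh u, so arcsn(x, 1) = artanh(x)
print(arcsn(0.5, 0.0), math.asin(0.5))
print(arcsn(0.5, 1.0), math.atanh(0.5))
```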
== Map projection ==
The Peirce quincuncial projection is a map projection based on Jacobian elliptic functions.
== See also ==
Elliptic curve
Schwarz–Christoffel mapping
Carlson symmetric form
Jacobi theta function
Ramanujan theta function
Dixon elliptic functions
Abel elliptic functions
Weierstrass elliptic function
Lemniscate elliptic functions
== Notes ==
== Citations ==
== References ==
Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 16". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 569. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253.
N. I. Akhiezer, Elements of the Theory of Elliptic Functions (1970) Moscow, translated into English as AMS Translations of Mathematical Monographs Volume 79 (1990) AMS, Rhode Island ISBN 0-8218-4532-2
A. C. Dixon The elementary properties of the elliptic functions, with examples (Macmillan, 1894)
Alfred George Greenhill The applications of elliptic functions (London, New York, Macmillan, 1892)
Edmund T. Whittaker, George Neville Watson: A Course in Modern Analysis. 4th ed. Cambridge, England: Cambridge University Press, 1990, pp. 469–470.
H. Hancock Lectures on the theory of elliptic functions (New York, J. Wiley & sons, 1910)
Jacobi, C. G. J. (1829), Fundamenta nova theoriae functionum ellipticarum (in Latin), Königsberg, ISBN 978-1-108-05200-9. Reprinted by Cambridge University Press, 2012.
Reinhardt, William P.; Walker, Peter L. (2010), "Jacobian Elliptic Functions", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248.
(in French) P. Appell and E. Lacour Principes de la théorie des fonctions elliptiques et applications (Paris, Gauthier Villars, 1897)
(in French) G. H. Halphen Traité des fonctions elliptiques et de leurs applications (vol. 1) (Paris, Gauthier-Villars, 1886–1891)
(in French) G. H. Halphen Traité des fonctions elliptiques et de leurs applications (vol. 2) (Paris, Gauthier-Villars, 1886–1891)
(in French) G. H. Halphen Traité des fonctions elliptiques et de leurs applications (vol. 3) (Paris, Gauthier-Villars, 1886–1891)
(in French) J. Tannery and J. Molk Eléments de la théorie des fonctions elliptiques. Tome I, Introduction. Calcul différentiel. Ire partie (Paris : Gauthier-Villars et fils, 1893)
(in French) J. Tannery and J. Molk Eléments de la théorie des fonctions elliptiques. Tome II, Calcul différentiel. IIe partie (Paris : Gauthier-Villars et fils, 1893)
(in French) J. Tannery and J. Molk Eléments de la théorie des fonctions elliptiques. Tome III, Calcul intégral. Ire partie, Théorèmes généraux. Inversion (Paris : Gauthier-Villars et fils, 1893)
(in French) J. Tannery and J. Molk Eléments de la théorie des fonctions elliptiques. Tome IV, Calcul intégral. IIe partie, Applications (Paris : Gauthier-Villars et fils, 1893)
(in French) C. Briot and J. C. Bouquet Théorie des fonctions elliptiques ( Paris : Gauthier-Villars, 1875)
Toshio Fukushima: Fast Computation of Complete Elliptic Integrals and Jacobian Elliptic Functions. 2012, National Astronomical Observatory of Japan (国立天文台)
Lowan, Blanch and Horenstein: On the Inversion of the q-Series Associated with Jacobian Elliptic Functions. Bull. Amer. Math. Soc. 48, 1942
H. Ferguson, D. E. Nielsen, G. Cook: A partition formula for the integer coefficients of the theta function nome. Mathematics of Computation, Volume 29, Number 131, July 1975
J. D. Fenton and R. S. Gardiner-Garden: Rapidly-convergent methods for evaluating elliptic integrals and theta and elliptic functions. J. Austral. Math. Soc. (Series B) 24, 1982, p. 57
Adolf Kneser: Neue Untersuchung einer Reihe aus der Theorie der elliptischen Funktionen. J. reine u. angew. Math. 157, 1927, pp. 209–218
== External links ==
"Jacobi elliptic functions", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Jacobi Elliptic Functions". MathWorld. | Wikipedia/Jacobi_elliptic_functions |
In physics, Lagrangian mechanics is a formulation of classical mechanics founded on d'Alembert's principle of virtual work. It was introduced by the Italian-French mathematician and astronomer Joseph-Louis Lagrange in his presentation to the Turin Academy of Science in 1760, culminating in his 1788 grand opus, Mécanique analytique.
Lagrangian mechanics describes a mechanical system as a pair (M, L) consisting of a configuration space M and a smooth function L within that space called a Lagrangian. For many systems, L = T − V, where T and V are the kinetic and potential energy of the system, respectively.
The stationary action principle requires that the action functional of the system derived from L must remain at a stationary point (specifically, a maximum, minimum, or saddle point) throughout the time evolution of the system. This constraint allows the calculation of the equations of motion of the system using Lagrange's equations.
== Introduction ==
Newton's laws and the concept of forces are the usual starting point for teaching about mechanical systems. This method works well for many problems, but for others the approach becomes nightmarishly complicated. For example, in calculating the motion of a torus rolling on a horizontal surface with a pearl sliding inside, the time-varying constraint forces and quantities such as the angular velocity of the torus and the motion of the pearl relative to the torus make it difficult to determine the motion of the torus with Newton's equations. Lagrangian mechanics adopts energy rather than force as its basic ingredient, leading to more abstract equations capable of tackling more complex problems.
In particular, Lagrange's approach was to set up independent generalized coordinates for the position and speed of every object, which allows a general form of the Lagrangian (total kinetic energy minus potential energy of the system) to be written down. Integrating the Lagrangian over time along a candidate path of motion of the particles yields the 'action'; the path the particles actually take is the one along which this action is stationary. This choice eliminates the need for the constraint force to enter the resultant generalized system of equations. There are fewer equations since one is not directly calculating the influence of the constraint on the particle at a given moment.
For a wide variety of physical systems, if the size and shape of a massive object are negligible, it is a useful simplification to treat it as a point particle. For a system of N point particles with masses m1, m2, ..., mN, each particle has a position vector, denoted r1, r2, ..., rN. Cartesian coordinates are often sufficient, so r1 = (x1, y1, z1), r2 = (x2, y2, z2) and so on. In three-dimensional space, each position vector requires three coordinates to uniquely define the location of a point, so there are 3N coordinates to uniquely define the configuration of the system. These are all specific points in space to locate the particles; a general point in space is written r = (x, y, z). The velocity of each particle is how fast the particle moves along its path of motion, and is the time derivative of its position, thus
{\displaystyle \mathbf {v} _{1}={\frac {d\mathbf {r} _{1}}{dt}},\mathbf {v} _{2}={\frac {d\mathbf {r} _{2}}{dt}},\ldots ,\mathbf {v} _{N}={\frac {d\mathbf {r} _{N}}{dt}}.}
In Newtonian mechanics, the equations of motion are given by Newton's laws. The second law "net force equals mass times acceleration",
{\displaystyle \sum \mathbf {F} =m{\frac {d^{2}\mathbf {r} }{dt^{2}}},}
applies to each particle. For an N-particle system in 3 dimensions, there are 3N second-order ordinary differential equations in the positions of the particles to solve for.
=== Lagrangian ===
Instead of forces, Lagrangian mechanics uses the energies in the system. The central quantity of Lagrangian mechanics is the Lagrangian, a function which summarizes the dynamics of the entire system. Overall, the Lagrangian has units of energy, but no single expression for all physical systems. Any function which generates the correct equations of motion, in agreement with physical laws, can be taken as a Lagrangian. It is nevertheless possible to construct general expressions for large classes of applications. The non-relativistic Lagrangian for a system of particles in the absence of an electromagnetic field is given by
{\displaystyle L=T-V,}
where
{\displaystyle T={\frac {1}{2}}\sum _{k=1}^{N}m_{k}v_{k}^{2}}
is the total kinetic energy of the system, equaling the sum Σ of the kinetic energies of the N particles. Each particle labeled k has mass mk, and vk2 = vk · vk is the magnitude squared of its velocity, equivalent to the dot product of the velocity with itself.
Kinetic energy T is the energy of the system's motion and is a function only of the velocities vk, not the positions rk, nor time t, so T = T(v1, v2, ...).
V, the potential energy of the system, reflects the energy of interaction between the particles, i.e. how much energy any one particle has due to all the others, together with any external influences. For conservative forces (e.g. Newtonian gravity), it is a function of the position vectors of the particles only, so V = V(r1, r2, ...). For those non-conservative forces which can be derived from an appropriate potential (e.g. electromagnetic potential), the velocities will appear also, V = V(r1, r2, ..., v1, v2, ...). If there is some external field or external driving force changing with time, the potential changes with time, so most generally V = V(r1, r2, ..., v1, v2, ..., t).
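As a concrete illustration of L = T − V, a minimal sketch for a single projectile in uniform gravity, where the potential is the conservative V = mgy (the masses and values below are arbitrary):

```python
# Lagrangian L = T - V for one point particle of mass m in uniform gravity.
# Here T = (1/2) m |v|^2 and V = m g y, with v = (vx, vy).

def lagrangian(m, v, y, g=9.81):
    """Return L = T - V for a projectile at height y with velocity v."""
    T = 0.5 * m * (v[0] ** 2 + v[1] ** 2)  # kinetic energy
    V = m * g * y                           # potential energy
    return T - V

# m = 2 kg, v = (3, 4) m/s, y = 1 m: T = 25 J, V = 19.62 J, L = 5.38 J
print(lagrangian(2.0, (3.0, 4.0), 1.0))  # → 5.38
```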
As already noted, this form of L is applicable to many important classes of system, but not everywhere. For relativistic Lagrangian mechanics it must be replaced as a whole by a function consistent with special relativity (scalar under Lorentz transformations) or general relativity (4-scalar). Where a magnetic field is present, the expression for the potential energy needs restating. And for dissipative forces (e.g., friction), another function must be introduced alongside the Lagrangian, often referred to as a "Rayleigh dissipation function", to account for the loss of energy.
One or more of the particles may each be subject to one or more holonomic constraints; such a constraint is described by an equation of the form f(r, t) = 0. If the number of constraints in the system is C, then each constraint has an equation f1(r, t) = 0, f2(r, t) = 0, ..., fC(r, t) = 0, each of which could apply to any of the particles. If particle k is subject to constraint i, then fi(rk, t) = 0. At any instant of time, the coordinates of a constrained particle are linked together and not independent. The constraint equations determine the allowed paths the particles can move along, but not where they are or how fast they go at every instant of time. Nonholonomic constraints depend on the particle velocities, accelerations, or higher derivatives of position. Lagrangian mechanics can only be applied to systems whose constraints, if any, are all holonomic. Three examples of nonholonomic constraints are: when the constraint equations are non-integrable, when the constraints have inequalities, or when the constraints involve complicated non-conservative forces like friction. Nonholonomic constraints require special treatment, and one may have to revert to Newtonian mechanics or use other methods.
If T or V or both depend explicitly on time due to time-varying constraints or external influences, the Lagrangian L(r1, r2, ... v1, v2, ... t) is explicitly time-dependent. If neither the potential nor the kinetic energy depend on time, then the Lagrangian L(r1, r2, ... v1, v2, ...) is explicitly independent of time. In either case, the Lagrangian always has implicit time dependence through the generalized coordinates.
With these definitions, Lagrange's equations of the first kind are
{\displaystyle {\frac {\partial L}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {\mathbf {r} }}_{k}}}+\sum _{i=1}^{C}\lambda _{i}{\frac {\partial f_{i}}{\partial \mathbf {r} _{k}}}=0,}
where k = 1, 2, ..., N labels the particles, there is a Lagrange multiplier λi for each constraint equation fi, and
{\displaystyle {\frac {\partial }{\partial \mathbf {r} _{k}}}\equiv \left({\frac {\partial }{\partial x_{k}}},{\frac {\partial }{\partial y_{k}}},{\frac {\partial }{\partial z_{k}}}\right),\quad {\frac {\partial }{\partial {\dot {\mathbf {r} }}_{k}}}\equiv \left({\frac {\partial }{\partial {\dot {x}}_{k}}},{\frac {\partial }{\partial {\dot {y}}_{k}}},{\frac {\partial }{\partial {\dot {z}}_{k}}}\right)}
are each shorthands for a vector of partial derivatives ∂/∂ with respect to the indicated variables (not a derivative with respect to the entire vector). Each overdot is a shorthand for a time derivative. This procedure does increase the number of equations to solve compared to Newton's laws, from 3N to 3N + C, because there are 3N coupled second-order differential equations in the position coordinates and multipliers, plus C constraint equations. However, when solved alongside the position coordinates of the particles, the multipliers can yield information about the constraint forces. The coordinates do not need to be eliminated by solving the constraint equations.
In the Lagrangian, the position coordinates and velocity components are all independent variables, and derivatives of the Lagrangian are taken with respect to these separately according to the usual differentiation rules (e.g. the partial derivative of L with respect to the z velocity component of particle 2, defined by vz,2 = dz2/dt, is just ∂L/∂vz,2; no awkward chain rules or total derivatives need to be used to relate the velocity component to the corresponding coordinate z2).
In each constraint equation, one coordinate is redundant because it is determined from the other coordinates. The number of independent coordinates is therefore n = 3N − C. We can transform each position vector to a common set of n generalized coordinates, conveniently written as an n-tuple q = (q1, q2, ... qn), by expressing each position vector, and hence the position coordinates, as functions of the generalized coordinates and time:
{\displaystyle \mathbf {r} _{k}=\mathbf {r} _{k}(\mathbf {q} ,t)={\big (}x_{k}(\mathbf {q} ,t),y_{k}(\mathbf {q} ,t),z_{k}(\mathbf {q} ,t),t{\big )}.}
The vector q is a point in the configuration space of the system. The time derivatives of the generalized coordinates are called the generalized velocities, and for each particle the transformation of its velocity vector, the total derivative of its position with respect to time, is
{\displaystyle {\dot {q}}_{j}={\frac {\mathrm {d} q_{j}}{\mathrm {d} t}},\quad \mathbf {v} _{k}=\sum _{j=1}^{n}{\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}}{\dot {q}}_{j}+{\frac {\partial \mathbf {r} _{k}}{\partial t}}.}
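The velocity transformation can be checked numerically. A sketch for a plane pendulum of length l, whose single generalized coordinate is the angle θ, with the assumed time-independent constraint r(θ) = (l sin θ, −l cos θ) (so the ∂r/∂t term vanishes):

```python
import math

l = 2.0  # pendulum length (arbitrary)

def r(theta):
    """Position of the bob: r(θ) = (l sin θ, -l cos θ)."""
    return (l * math.sin(theta), -l * math.cos(theta))

def v_chain_rule(theta, theta_dot):
    """v = (∂r/∂θ) θ̇, with ∂r/∂θ = (l cos θ, l sin θ)."""
    return (l * math.cos(theta) * theta_dot, l * math.sin(theta) * theta_dot)

def v_finite_difference(theta, theta_dot, h=1e-6):
    """v ≈ [r(θ + θ̇ h) - r(θ)] / h, differentiating along the actual motion."""
    x0, y0 = r(theta)
    x1, y1 = r(theta + theta_dot * h)
    return ((x1 - x0) / h, (y1 - y0) / h)

va = v_chain_rule(0.7, 1.3)
vb = v_finite_difference(0.7, 1.3)
print(va, vb)  # the two agree to roughly 1e-5
```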
Given this vk, the kinetic energy in generalized coordinates depends on the generalized velocities, generalized coordinates, and time if the position vectors depend explicitly on time due to time-varying constraints, so
{\displaystyle T=T(\mathbf {q} ,{\dot {\mathbf {q} }},t).}
With these definitions, the Euler–Lagrange equations, or Lagrange's equations of the second kind,
{\displaystyle {\frac {\partial L}{\partial q_{j}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}=0,}
are mathematical results from the calculus of variations, which can also be used in mechanics. Substituting in the Lagrangian L(q, dq/dt, t) gives the equations of motion of the system. The number of equations has decreased compared to Newtonian mechanics, from 3N to n = 3N − C coupled second-order differential equations in the generalized coordinates. These equations do not include constraint forces at all, only non-constraint forces need to be accounted for.
Although the equations of motion include partial derivatives, the results of the partial derivatives are still ordinary differential equations in the position coordinates of the particles. The total time derivative denoted d/dt often involves implicit differentiation. Both equations are linear in the Lagrangian, but generally are nonlinear coupled equations in the coordinates.
== From Newtonian to Lagrangian mechanics ==
=== Newton's laws ===
For simplicity, Newton's laws can be illustrated for one particle without much loss of generality (for a system of N particles, all of these equations apply to each particle in the system). The equation of motion for a particle of constant mass m is Newton's second law of 1687, in modern vector notation
{\displaystyle \mathbf {F} =m\mathbf {a} ,}
where a is its acceleration and F the resultant force acting on it. Where the mass is varying, the equation needs to be generalised to take the time derivative of the momentum. In three spatial dimensions, this is a system of three coupled second-order ordinary differential equations to solve, since there are three components in this vector equation. The solution is the position vector r of the particle at time t, subject to the initial conditions of r and v when t = 0.
Newton's laws are easy to use in Cartesian coordinates, but Cartesian coordinates are not always convenient, and for other coordinate systems the equations of motion can become complicated. In a set of curvilinear coordinates ξ = (ξ1, ξ2, ξ3), the law in tensor index notation is the "Lagrangian form"
{\displaystyle F^{a}=m\left({\frac {\mathrm {d} ^{2}\xi ^{a}}{\mathrm {d} t^{2}}}+\Gamma ^{a}{}_{bc}{\frac {\mathrm {d} \xi ^{b}}{\mathrm {d} t}}{\frac {\mathrm {d} \xi ^{c}}{\mathrm {d} t}}\right)=g^{ak}\left({\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial T}{\partial {\dot {\xi }}^{k}}}-{\frac {\partial T}{\partial \xi ^{k}}}\right),\quad {\dot {\xi }}^{a}\equiv {\frac {\mathrm {d} \xi ^{a}}{\mathrm {d} t}},}
where Fa is the a-th contravariant component of the resultant force acting on the particle, Γabc are the Christoffel symbols of the second kind,
{\displaystyle T={\frac {1}{2}}mg_{bc}{\frac {\mathrm {d} \xi ^{b}}{\mathrm {d} t}}{\frac {\mathrm {d} \xi ^{c}}{\mathrm {d} t}}}
is the kinetic energy of the particle, and gbc the covariant components of the metric tensor of the curvilinear coordinate system. All the indices a, b, c, each take the values 1, 2, 3. Curvilinear coordinates are not the same as generalized coordinates.
It may seem like an overcomplication to cast Newton's law in this form, but there are advantages. The acceleration components in terms of the Christoffel symbols can be avoided by evaluating derivatives of the kinetic energy instead. If there is no resultant force acting on the particle, F = 0, it does not accelerate, but moves with constant velocity in a straight line. Mathematically, the solutions of the differential equation are geodesics, the curves of extremal length between two points in space (these may end up being minimal, that is the shortest paths, but not necessarily). In flat 3D real space the geodesics are simply straight lines. So for a free particle, Newton's second law coincides with the geodesic equation and states that free particles follow geodesics, the extremal trajectories it can move along. If the particle is subject to forces F ≠ 0, the particle accelerates due to forces acting on it and deviates away from the geodesics it would follow if free. With appropriate extensions of the quantities given here in flat 3D space to 4D curved spacetime, the above form of Newton's law also carries over to Einstein's general relativity, in which case free particles follow geodesics in curved spacetime that are no longer "straight lines" in the ordinary sense.
However, we still need to know the total resultant force F acting on the particle, which in turn requires the resultant non-constraint force N plus the resultant constraint force C,
{\displaystyle \mathbf {F} =\mathbf {C} +\mathbf {N} .}
The constraint forces can be complicated, since they generally depend on time. Also, if there are constraints, the curvilinear coordinates are not independent but related by one or more constraint equations.
The constraint forces can either be eliminated from the equations of motion, so only the non-constraint forces remain, or included by including the constraint equations in the equations of motion.
=== D'Alembert's principle ===
A fundamental result in analytical mechanics is D'Alembert's principle, introduced in 1708 by Jacques Bernoulli to understand static equilibrium, and developed by D'Alembert in 1743 to solve dynamical problems. The principle asserts for N particles the virtual work, i.e. the work along a virtual displacement, δrk, is zero:
{\displaystyle \sum _{k=1}^{N}(\mathbf {N} _{k}+\mathbf {C} _{k}-m_{k}\mathbf {a} _{k})\cdot \delta \mathbf {r} _{k}=0.}
The virtual displacements, δrk, are by definition infinitesimal changes in the configuration of the system consistent with the constraint forces acting on the system at an instant of time, i.e. in such a way that the constraint forces maintain the constrained motion. They are not the same as the actual displacements in the system, which are caused by the resultant constraint and non-constraint forces acting on the particle to accelerate and move it. Virtual work is the work done along a virtual displacement for any force (constraint or non-constraint).
Since the constraint forces act perpendicular to the motion of each particle in the system to maintain the constraints, the total virtual work by the constraint forces acting on the system is zero:
{\displaystyle \sum _{k=1}^{N}\mathbf {C} _{k}\cdot \delta \mathbf {r} _{k}=0,}
so that
{\displaystyle \sum _{k=1}^{N}(\mathbf {N} _{k}-m_{k}\mathbf {a} _{k})\cdot \delta \mathbf {r} _{k}=0.}
Thus D'Alembert's principle allows us to concentrate on only the applied non-constraint forces, and exclude the constraint forces in the equations of motion. The form shown is also independent of the choice of coordinates. However, it cannot be readily used to set up the equations of motion in an arbitrary coordinate system since the displacements δrk might be connected by a constraint equation, which prevents us from setting the N individual summands to 0. We will therefore seek a system of mutually independent coordinates for which the total sum will be 0 if and only if the individual summands are 0. Setting each of the summands to 0 will eventually give us our separated equations of motion.
=== Equations of motion from D'Alembert's principle ===
If there are constraints on particle k, then since the coordinates of the position rk = (xk, yk, zk) are linked together by a constraint equation, so are those of the virtual displacements δrk = (δxk, δyk, δzk). Since the generalized coordinates are independent, we can avoid the complications with the δrk by converting to virtual displacements in the generalized coordinates. These are related in the same form as a total differential,
{\displaystyle \delta \mathbf {r} _{k}=\sum _{j=1}^{n}{\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}}\delta q_{j}.}
There is no partial time derivative with respect to time multiplied by a time increment, since this is a virtual displacement, one along the constraints in an instant of time.
The first term in D'Alembert's principle above is the virtual work done by the non-constraint forces Nk along the virtual displacements δrk, and can without loss of generality be converted into the generalized analogues by the definition of generalized forces
{\displaystyle Q_{j}=\sum _{k=1}^{N}\mathbf {N} _{k}\cdot {\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}},}
so that
{\displaystyle \sum _{k=1}^{N}\mathbf {N} _{k}\cdot \delta \mathbf {r} _{k}=\sum _{k=1}^{N}\mathbf {N} _{k}\cdot \sum _{j=1}^{n}{\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}}\delta q_{j}=\sum _{j=1}^{n}Q_{j}\delta q_{j}.}
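The definition of the generalized forces can be illustrated with a plane pendulum (an assumed example): the only non-constraint force is gravity N = (0, −mg), and projecting it onto ∂r/∂θ for r(θ) = (l sin θ, −l cos θ) gives Q_θ = −mgl sin θ. A sketch with arbitrary parameter values:

```python
import math

m, g, l = 1.5, 9.81, 2.0  # arbitrary mass, gravity, pendulum length

def Q_theta(theta):
    """Generalized force Q_θ = N · ∂r/∂θ for the plane pendulum."""
    dr_dtheta = (l * math.cos(theta), l * math.sin(theta))  # ∂r/∂θ
    N = (0.0, -m * g)                                       # gravity only
    return N[0] * dr_dtheta[0] + N[1] * dr_dtheta[1]

theta = 0.6
# Both expressions equal -m g l sin θ, the torque-like generalized force:
print(Q_theta(theta), -m * g * l * math.sin(theta))
```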
This is half of the conversion to generalized coordinates. It remains to convert the acceleration term into generalized coordinates, which is not immediately obvious. Recalling the Lagrange form of Newton's second law, the partial derivatives of the kinetic energy with respect to the generalized coordinates and velocities can be found to give the desired result:
{\displaystyle \sum _{k=1}^{N}m_{k}\mathbf {a} _{k}\cdot {\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}}={\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}.}
Now D'Alembert's principle is in the generalized coordinates as required,
{\displaystyle \sum _{j=1}^{n}\left[Q_{j}-\left({\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}\right)\right]\delta q_{j}=0,}
and since these virtual displacements δqj are independent and nonzero, the coefficients can be equated to zero, resulting in Lagrange's equations or the generalized equations of motion,
{\displaystyle Q_{j}={\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}}
These equations are equivalent to Newton's laws for the non-constraint forces. The generalized forces in this equation are derived from the non-constraint forces only – the constraint forces have been excluded from D'Alembert's principle and do not need to be found. The generalized forces may be non-conservative, provided they satisfy D'Alembert's principle.
=== Euler–Lagrange equations and Hamilton's principle ===
For a non-conservative force which depends on velocity, it may be possible to find a potential energy function V that depends on positions and velocities. If the generalized forces Qi can be derived from a potential V such that
{\displaystyle Q_{j}={\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial V}{\partial {\dot {q}}_{j}}}-{\frac {\partial V}{\partial q_{j}}},}
equating to Lagrange's equations and defining the Lagrangian as L = T − V obtains Lagrange's equations of the second kind or the Euler–Lagrange equations of motion
{\displaystyle {\frac {\partial L}{\partial q_{j}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}=0.}
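As a worked check: for L = ½mẋ² − ½kx², the Euler–Lagrange equation gives mẍ = −kx, whose solution with x(0) = x₀ and ẋ(0) = 0 is x(t) = x₀ cos(ωt), ω = √(k/m). A sketch integrating this equation of motion with a fourth-order Runge–Kutta step (the step size is an assumption):

```python
import math

m, k = 1.0, 4.0               # so ω = sqrt(k/m) = 2
omega = math.sqrt(k / m)

def step(x, v, dt):
    """One RK4 step for the first-order system ẋ = v, v̇ = -(k/m) x."""
    def f(x, v):
        return v, -(k / m) * x
    k1x, k1v = f(x, v)
    k2x, k2v = f(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
    k3x, k3v = f(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
    k4x, k4v = f(x + dt * k3x, v + dt * k3v)
    x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
    v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return x, v

x, v, dt = 1.0, 0.0, 0.001
for _ in range(1000):          # integrate from t = 0 to t = 1
    x, v = step(x, v, dt)
print(x, math.cos(omega * 1.0))  # both ≈ cos(2) ≈ -0.416
```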
However, the Euler–Lagrange equations can only account for non-conservative forces if a potential can be found as shown. This may not always be possible for non-conservative forces, and Lagrange's equations do not involve any potential, only generalized forces; therefore they are more general than the Euler–Lagrange equations.
The Euler–Lagrange equations also follow from the calculus of variations. The variation of the Lagrangian is
{\displaystyle \delta L=\sum _{j=1}^{n}\left({\frac {\partial L}{\partial q_{j}}}\delta q_{j}+{\frac {\partial L}{\partial {\dot {q}}_{j}}}\delta {\dot {q}}_{j}\right),\quad \delta {\dot {q}}_{j}\equiv \delta {\frac {\mathrm {d} q_{j}}{\mathrm {d} t}}\equiv {\frac {\mathrm {d} (\delta q_{j})}{\mathrm {d} t}},}
which has a form similar to the total differential of L, but the virtual displacements and their time derivatives replace differentials, and there is no time increment in accordance with the definition of the virtual displacements. An integration by parts with respect to time can transfer the time derivative of δqj to the ∂L/∂(dqj/dt), in the process exchanging d(δqj)/dt for δqj, allowing the independent virtual displacements to be factorized from the derivatives of the Lagrangian,
{\displaystyle {\begin{aligned}\int _{t_{1}}^{t_{2}}\delta L\,\mathrm {d} t&=\int _{t_{1}}^{t_{2}}\sum _{j=1}^{n}\left({\frac {\partial L}{\partial q_{j}}}\delta q_{j}+{\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {\partial L}{\partial {\dot {q}}_{j}}}\delta q_{j}\right)-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}\delta q_{j}\right)\,\mathrm {d} t\\&=\sum _{j=1}^{n}\left[{\frac {\partial L}{\partial {\dot {q}}_{j}}}\delta q_{j}\right]_{t_{1}}^{t_{2}}+\int _{t_{1}}^{t_{2}}\sum _{j=1}^{n}\left({\frac {\partial L}{\partial q_{j}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}\right)\delta q_{j}\,\mathrm {d} t.\end{aligned}}}
Now, if the condition δqj(t1) = δqj(t2) = 0 holds for all j, the terms not integrated are zero. If in addition the entire time integral of δL is zero, then because the δqj are independent, and the only way for a definite integral to be zero is if the integrand equals zero, each of the coefficients of δqj must also be zero. Then we obtain the equations of motion. This can be summarized by Hamilton's principle:
{\displaystyle \int _{t_{1}}^{t_{2}}\delta L\,\mathrm {d} t=0.}
The time integral of the Lagrangian is another quantity called the action, defined as
{\displaystyle S=\int _{t_{1}}^{t_{2}}L\,\mathrm {d} t,}
which is a functional; it takes in the Lagrangian function for all times between t1 and t2 and returns a scalar value. Its dimensions are the same as [angular momentum], [energy]·[time], or [length]·[momentum]. With this definition Hamilton's principle is
{\displaystyle \delta S=0.}
Instead of thinking about particles accelerating in response to applied forces, one might think of them picking out the path with a stationary action, with the end points of the path in configuration space held fixed at the initial and final times. Hamilton's principle is one of several action principles.
Historically, the idea of finding the shortest path a particle can follow subject to a force motivated the first applications of the calculus of variations to mechanical problems, such as the Brachistochrone problem solved by Johann Bernoulli in 1696, as well as Leibniz, Daniel Bernoulli, L'Hôpital around the same time, and Newton the following year. Newton himself was thinking along the lines of the variational calculus, but did not publish. These ideas in turn led to the variational principles of mechanics, of Fermat, Maupertuis, Euler, Hamilton, and others.
Hamilton's principle can be applied to nonholonomic constraints if the constraint equations can be put into a certain form, a linear combination of first order differentials in the coordinates. The resulting constraint equation can be rearranged into a first-order differential equation. This will not be given here.
=== Lagrange multipliers and constraints ===
The Lagrangian L can be varied in the Cartesian rk coordinates, for N particles,
{\displaystyle \int _{t_{1}}^{t_{2}}\sum _{k=1}^{N}\left({\frac {\partial L}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {\mathbf {r} }}_{k}}}\right)\cdot \delta \mathbf {r} _{k}\,\mathrm {d} t=0.}
Hamilton's principle is still valid even if the coordinates L is expressed in are not independent, here rk, but the constraints are still assumed to be holonomic. As always the end points are fixed δrk(t1) = δrk(t2) = 0 for all k. What cannot be done is to simply equate the coefficients of δrk to zero because the δrk are not independent. Instead, the method of Lagrange multipliers can be used to include the constraints. Multiplying each constraint equation fi(rk, t) = 0 by a Lagrange multiplier λi for i = 1, 2, ..., C, and adding the results to the original Lagrangian, gives the new Lagrangian
{\displaystyle L'=L(\mathbf {r} _{1},\mathbf {r} _{2},\ldots ,{\dot {\mathbf {r} }}_{1},{\dot {\mathbf {r} }}_{2},\ldots ,t)+\sum _{i=1}^{C}\lambda _{i}(t)f_{i}(\mathbf {r} _{k},t).}
The Lagrange multipliers are arbitrary functions of time t, but not functions of the coordinates rk, so the multipliers are on equal footing with the position coordinates. Varying this new Lagrangian and integrating with respect to time gives
{\displaystyle \int _{t_{1}}^{t_{2}}\delta L'\mathrm {d} t=\int _{t_{1}}^{t_{2}}\sum _{k=1}^{N}\left({\frac {\partial L}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {\mathbf {r} }}_{k}}}+\sum _{i=1}^{C}\lambda _{i}{\frac {\partial f_{i}}{\partial \mathbf {r} _{k}}}\right)\cdot \delta \mathbf {r} _{k}\,\mathrm {d} t=0.}
The introduced multipliers can be found so that the coefficients of δrk are zero, even though the rk are not independent. The equations of motion follow. From the preceding analysis, obtaining the solution to this integral is equivalent to the statement
{\displaystyle {\frac {\partial L'}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L'}{\partial {\dot {\mathbf {r} }}_{k}}}=0\quad \Rightarrow \quad {\frac {\partial L}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {\mathbf {r} }}_{k}}}+\sum _{i=1}^{C}\lambda _{i}{\frac {\partial f_{i}}{\partial \mathbf {r} _{k}}}=0,}
which are Lagrange's equations of the first kind. Also, the λi Euler-Lagrange equations for the new Lagrangian return the constraint equations
{\displaystyle {\frac {\partial L'}{\partial \lambda _{i}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L'}{\partial {\dot {\lambda }}_{i}}}=0\quad \Rightarrow \quad f_{i}(\mathbf {r} _{k},t)=0.}
For the case of a conservative force given by the gradient of some potential energy V, a function of the rk coordinates only, substituting the Lagrangian L = T − V gives
{\displaystyle \underbrace {{\frac {\partial T}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial T}{\partial {\dot {\mathbf {r} }}_{k}}}} _{-\mathbf {F} _{k}}+\underbrace {-{\frac {\partial V}{\partial \mathbf {r} _{k}}}} _{\mathbf {N} _{k}}+\sum _{i=1}^{C}\lambda _{i}{\frac {\partial f_{i}}{\partial \mathbf {r} _{k}}}=0,}
and identifying the derivatives of kinetic energy as the (negative of the) resultant force, and the derivatives of the potential as the non-constraint force, it follows that the constraint forces are
{\displaystyle \mathbf {C} _{k}=\sum _{i=1}^{C}\lambda _{i}{\frac {\partial f_{i}}{\partial \mathbf {r} _{k}}},}
thus giving the constraint forces explicitly in terms of the constraint equations and the Lagrange multipliers.
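As a minimal sketch of this result (our own example, not taken from the text above, and assuming SymPy is available): for a particle of mass m resting on a table, the single constraint f = z = 0 together with the potential V = mgz forces the multiplier to be λ = mg, so the constraint force Ck = λ ∂f/∂z recovers the familiar normal force.

```python
import sympy as sp

# Hypothetical example: a static particle of mass m on a table z = 0.
# Constraint f(z) = z = 0, potential V = m*g*z, kinetic energy zero.
m, g, lam = sp.symbols('m g lambda', positive=True)
z = sp.symbols('z')

f = z                   # holonomic constraint, f = 0 on the table
V = m * g * z
L_prime = -V + lam * f  # L' = L + lambda*f with T = 0 for a static particle

# Lagrange's equation of the first kind (no velocity dependence here):
# dL'/dz = 0  =>  -m*g + lambda = 0
lam_solution = sp.solve(sp.Eq(sp.diff(L_prime, z), 0), lam)[0]

# Constraint force C_z = lambda * df/dz: the normal force m*g.
C_z = lam_solution * sp.diff(f, z)
print(C_z)  # m*g
```

The same recipe, with nontrivial kinetic terms, produces the string tension of a pendulum or the normal force on a bead on a wire.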
== Properties of the Lagrangian ==
=== Non-uniqueness ===
The Lagrangian of a given system is not unique. A Lagrangian L can be multiplied by a nonzero constant a and shifted by an arbitrary constant b, and the new Lagrangian L′ = aL + b will describe the same motion as L. If one restricts as above to trajectories q over a given time interval [tst, tfin] and fixed end points Pst = q(tst) and Pfin = q(tfin), then two Lagrangians describing the same system can differ by the "total time derivative" of a function f(q, t):
{\displaystyle L'(\mathbf {q} ,{\dot {\mathbf {q} }},t)=L(\mathbf {q} ,{\dot {\mathbf {q} }},t)+{\frac {\mathrm {d} f(\mathbf {q} ,t)}{\mathrm {d} t}},}
where
{\textstyle {\frac {\mathrm {d} f(\mathbf {q} ,t)}{\mathrm {d} t}}}
means
{\textstyle {\frac {\partial f(\mathbf {q} ,t)}{\partial t}}+\sum _{i}{\frac {\partial f(\mathbf {q} ,t)}{\partial q_{i}}}{\dot {q}}_{i}.}
Both Lagrangians L and L′ produce the same equations of motion since the corresponding actions S and S′ are related via
{\displaystyle {\begin{aligned}S'[\mathbf {q} ]&=\int _{t_{\text{st}}}^{t_{\text{fin}}}L'(\mathbf {q} (t),{\dot {\mathbf {q} }}(t),t)\,dt\\&=\int _{t_{\text{st}}}^{t_{\text{fin}}}L(\mathbf {q} (t),{\dot {\mathbf {q} }}(t),t)\,dt+\int _{t_{\text{st}}}^{t_{\text{fin}}}{\frac {\mathrm {d} f(\mathbf {q} (t),t)}{\mathrm {d} t}}\,dt\\&=S[\mathbf {q} ]+f(P_{\text{fin}},t_{\text{fin}})-f(P_{\text{st}},t_{\text{st}}),\end{aligned}}}
with the last two components f(Pfin, tfin) and f(Pst, tst) independent of q.
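This can be checked symbolically. The sketch below (our own example, assuming SymPy is available) takes L = q̇²/2 and the hypothetical choice f(q, t) = q², and confirms that L and L′ = L + df/dt produce the same Euler–Lagrange equation.

```python
import sympy as sp

# Assumed example: L = q'^2/2 (free particle), f(q, t) = q^2.
t = sp.symbols('t')
q = sp.Function('q')
qd = sp.diff(q(t), t)

L = qd**2 / 2
f = q(t)**2
L_prime = L + sp.diff(f, t)   # L + df/dt = L + 2*q*q'

def euler_lagrange(lagr):
    """d/dt (dL/dq') - dL/dq for the single coordinate q."""
    return sp.simplify(sp.diff(lagr, qd).diff(t) - sp.diff(lagr, q(t)))

print(euler_lagrange(L))        # both reduce to q'' = 0
print(euler_lagrange(L_prime))
```

The 2qq̇ term added to L′ changes the canonical momentum but cancels out of the equation of motion, exactly as the boundary-term argument above predicts.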
=== Invariance under point transformations ===
Given a set of generalized coordinates q, if we change these variables to a new set of generalized coordinates Q according to a point transformation Q = Q(q, t) which is invertible as q = q(Q, t), the new Lagrangian L′ is a function of the new coordinates and similarly for the constraints
{\displaystyle {\begin{aligned}L'(\mathbf {Q} ,{\dot {\mathbf {Q} }},t)&=L(\mathbf {q} (\mathbf {Q} ,t),{\dot {\mathbf {q} }}(\mathbf {Q} ,{\dot {\mathbf {Q} }},t),t),\\\phi _{j}'(\mathbf {Q} ,t)&=\phi _{j}(\mathbf {q} (\mathbf {Q} ,t),t)\end{aligned}}}
and by the chain rule for partial differentiation, Lagrange's equations are invariant under this transformation;
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L'}{\partial {\dot {Q}}_{i}}}={\frac {\partial L'}{\partial Q_{i}}}+\sum _{j}\lambda _{j}{\frac {\partial \phi '_{j}}{\partial Q_{i}}}.}
=== Cyclic coordinates and conserved momenta ===
An important property of the Lagrangian is that conserved quantities can easily be read off from it. The generalized momentum "canonically conjugate to" the coordinate qi is defined by
{\displaystyle p_{i}={\frac {\partial L}{\partial {\dot {q}}_{i}}}.}
If the Lagrangian L does not depend on some coordinate qi, it follows immediately from the Euler–Lagrange equations that
{\displaystyle {\dot {p}}_{i}={\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {q}}_{i}}}={\frac {\partial L}{\partial q_{i}}}=0}
and integrating shows the corresponding generalized momentum equals a constant, a conserved quantity. This is a special case of Noether's theorem. Such coordinates are called "cyclic" or "ignorable".
For example, a system may have a Lagrangian
{\displaystyle L(r,\theta ,{\dot {s}},{\dot {z}},{\dot {r}},{\dot {\theta }},{\dot {\phi }},t),}
where r and z are lengths along straight lines, s is an arc length along some curve, and θ and φ are angles. Notice z, s, and φ are all absent in the Lagrangian even though their velocities are not. Then the momenta
{\displaystyle p_{z}={\frac {\partial L}{\partial {\dot {z}}}},\quad p_{s}={\frac {\partial L}{\partial {\dot {s}}}},\quad p_{\phi }={\frac {\partial L}{\partial {\dot {\phi }}}},}
are all conserved quantities. The units and nature of each generalized momentum will depend on the corresponding coordinate; in this case pz is a translational momentum in the z direction, ps is also a translational momentum along the curve along which s is measured, and pφ is an angular momentum in the plane in which the angle φ is measured. However complicated the motion of the system is, all the coordinates and velocities will vary in such a way that these momenta are conserved.
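A numeric illustration of a cyclic coordinate (our own example; the initial conditions are assumed): for a particle in a central potential V(r) = −1/r, the azimuthal angle is cyclic, so the conjugate angular momentum pφ = m(xẏ − yẋ) should stay constant along the motion, however the coordinates themselves vary.

```python
import math

# Assumed setup: unit mass in the central potential V(r) = -1/r.
m = 1.0
x, y = 1.0, 0.0
vx, vy = 0.0, 1.2          # chosen so the orbit is a bound ellipse
dt = 1e-4

def accel(x, y):
    # F = -grad V = -r_hat / r^2 for V = -1/r (with m = 1)
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

L0 = m * (x * vy - y * vx)  # initial angular momentum
for _ in range(100_000):    # velocity-Verlet steps over ~10 time units
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    x += dt * vx; y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
L1 = m * (x * vy - y * vx)
print(abs(L1 - L0))         # essentially zero: p_phi is conserved
```

The radius and angular velocity both change continuously around the orbit, but only in the compensating way the text describes.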
=== Energy ===
Given a Lagrangian L, the Hamiltonian of the corresponding mechanical system is, by definition,
{\displaystyle H={\biggl (}\sum _{i=1}^{n}{\dot {q}}_{i}{\frac {\partial L}{\partial {\dot {q}}_{i}}}{\biggr )}-L.}
This quantity will be equivalent to energy if the generalized coordinates are natural coordinates, i.e., they have no explicit time dependence when expressing the position vector:
{\displaystyle \mathbf {r} =\mathbf {r} (q_{1},\cdots ,q_{n}).}
From:
{\displaystyle T={\frac {m}{2}}v^{2}={\frac {m}{2}}\sum _{i,j}\left({\frac {\partial {\vec {r}}}{\partial q_{i}}}{\dot {q}}_{i}\right)\cdot \left({\frac {\partial {\vec {r}}}{\partial q_{j}}}{\dot {q}}_{j}\right)={\frac {m}{2}}\sum _{i,j}a_{ij}{\dot {q}}_{i}{\dot {q}}_{j}}
{\displaystyle \sum _{k=1}^{n}{\dot {q}}_{k}{\frac {\partial L}{\partial {\dot {q}}_{k}}}=\sum _{k=1}^{n}{\dot {q}}_{k}{\frac {\partial T}{\partial {\dot {q}}_{k}}}={\frac {m}{2}}\left(2\sum _{i,j}a_{ij}{\dot {q}}_{i}{\dot {q}}_{j}\right)=2T}
{\displaystyle H=\left(\sum _{i=1}^{n}{\dot {q}}_{i}{\frac {\partial L}{\partial {\dot {q}}_{i}}}\right)-L=2T-(T-V)=T+V=E}
where
{\displaystyle a_{ij}={\frac {\partial \mathbf {r} }{\partial q_{i}}}\cdot {\frac {\partial \mathbf {r} }{\partial q_{j}}}}
is a symmetric matrix introduced for the derivation.
==== Invariance under coordinate transformations ====
At every time instant t, the energy is invariant under configuration space coordinate changes q → Q, i.e. (using natural coordinates)
{\displaystyle E(\mathbf {q} ,{\dot {\mathbf {q} }},t)=E(\mathbf {Q} ,{\dot {\mathbf {Q} }},t).}
Besides this result, the proof below shows that, under such change of coordinates, the derivatives
{\displaystyle \partial L/\partial {\dot {q}}_{i}}
change as coefficients of a linear form.
==== Conservation ====
In Lagrangian mechanics, the system is closed if and only if its Lagrangian L does not explicitly depend on time. The energy conservation law states that the energy E of a closed system is an integral of motion.
More precisely, let q = q(t) be an extremal. (In other words, q satisfies the Euler–Lagrange equations). Taking the total time-derivative of L along this extremal and using the EL equations leads to
{\displaystyle {\begin{aligned}{\frac {dL}{dt}}&={\dot {\mathbf {q} }}{\frac {\partial L}{\partial \mathbf {q} }}+{\ddot {\mathbf {q} }}{\frac {\partial L}{\partial \mathbf {\dot {q}} }}+{\frac {\partial L}{\partial t}}\\-{\frac {\partial L}{\partial t}}&={\frac {d}{dt}}\left({\frac {\partial L}{\partial \mathbf {\dot {q}} }}\right){\dot {\mathbf {q} }}+{\ddot {\mathbf {q} }}{\frac {\partial L}{\partial \mathbf {\dot {q}} }}-{\dot {L}}\\-{\frac {\partial L}{\partial t}}&={\frac {d}{dt}}\left({\frac {\partial L}{\partial \mathbf {\dot {q}} }}\mathbf {\dot {q}} -L\right)={\frac {dH}{dt}}\end{aligned}}}
If the Lagrangian L does not explicitly depend on time, then ∂L/∂t = 0, so H does not vary with the time evolution of the particle; it is indeed an integral of motion, meaning that
{\displaystyle H(\mathbf {q} (t),{\dot {\mathbf {q} }}(t),t)={\text{constant of time}}.}
Hence, if the chosen coordinates were natural coordinates, the energy is conserved.
==== Kinetic and potential energies ====
Under all these circumstances, the constant
{\displaystyle E=T+V}
is the total energy of the system. The kinetic and potential energies still change as the system evolves, but the motion of the system will be such that their sum, the total energy, is constant. This is a valuable simplification, since the energy E is a constant of integration that counts as an arbitrary constant for the problem, and it may be possible to integrate the velocities from this energy relation to solve for the coordinates.
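A small numeric check of this trade-off (our own example): for a unit-mass harmonic oscillator in the natural coordinate q, the Lagrangian q̇²/2 − q²/2 has no explicit time dependence, so along the exact solution q(t) = cos t the kinetic and potential energies oscillate while their sum stays fixed at E = 1/2.

```python
import math

# Assumed example: unit-mass, unit-stiffness oscillator, q(t) = cos(t).
def energy(t):
    q, q_dot = math.cos(t), -math.sin(t)
    return 0.5 * q_dot ** 2 + 0.5 * q ** 2   # T + V

# Sample E = T + V at many times; it never leaves E = 1/2.
deviation = max(abs(energy(0.1 * k) - 0.5) for k in range(100))
print(deviation)  # ~1e-16, i.e. E is constant up to rounding
```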
=== Mechanical similarity ===
If the potential energy is a homogeneous function of the coordinates and independent of time, and all position vectors are scaled by the same nonzero constant α, rk′ = αrk, so that
{\displaystyle V(\alpha \mathbf {r} _{1},\alpha \mathbf {r} _{2},\ldots ,\alpha \mathbf {r} _{N})=\alpha ^{N}V(\mathbf {r} _{1},\mathbf {r} _{2},\ldots ,\mathbf {r} _{N})}
and time is scaled by a factor β, t′ = βt, then the velocities vk are scaled by a factor of α/β and the kinetic energy T by (α/β)². The entire Lagrangian has been scaled by the same factor if
{\displaystyle {\frac {\alpha ^{2}}{\beta ^{2}}}=\alpha ^{N}\quad \Rightarrow \quad \beta =\alpha ^{1-{\frac {N}{2}}}.}
Since the lengths and times have been scaled, the trajectories of the particles in the system follow geometrically similar paths differing in size. The length l traversed in time t in the original trajectory corresponds to a new length l′ traversed in time t′ in the new trajectory, given by the ratios
{\displaystyle {\frac {t'}{t}}=\left({\frac {l'}{l}}\right)^{1-{\frac {N}{2}}}.}
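The classic corollary (a standard example, not spelled out above): for the gravitational potential V ∝ 1/r, homogeneous of degree N = −1, the scaling law becomes t′/t = (l′/l)^{3/2}, which is Kepler's third law.

```python
# Assumed example: exponent N = -1 for a 1/r potential.
N = -1

def time_ratio(length_ratio):
    """Ratio of traversal times for geometrically similar trajectories."""
    return length_ratio ** (1 - N / 2)

# Doubling the linear size of an orbit multiplies the period by 2**1.5.
print(time_ratio(2.0))
```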
=== Interacting particles ===
For a given system, if two subsystems A and B are non-interacting, the Lagrangian L of the overall system is the sum of the Lagrangians LA and LB for the subsystems:
{\displaystyle L=L_{A}+L_{B}.}
If they do interact this is not possible. In some situations, it may be possible to separate the Lagrangian of the system L into the sum of non-interacting Lagrangians, plus another Lagrangian LAB containing information about the interaction,
{\displaystyle L=L_{A}+L_{B}+L_{AB}.}
This may be physically motivated by taking the non-interacting Lagrangians to be kinetic energies only, while the interaction Lagrangian is the system's total potential energy. Also, in the limiting case of negligible interaction, LAB tends to zero reducing to the non-interacting case above.
The extension to more than two non-interacting subsystems is straightforward – the overall Lagrangian is the sum of the separate Lagrangians for each subsystem. If there are interactions, then interaction Lagrangians may be added.
=== Consequences of singular Lagrangians ===
From the Euler-Lagrange equations, it follows that:
{\displaystyle {\begin{aligned}&{\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}_{i}}}-{\frac {\partial L}{\partial q_{i}}}=0\\&\sum _{j}{\frac {\partial ^{2}L}{\partial q_{j}\partial {\dot {q}}_{i}}}{\frac {dq_{j}}{dt}}+\sum _{j}{\frac {\partial ^{2}L}{\partial {\dot {q}}_{j}\partial {\dot {q}}_{i}}}{\frac {d{\dot {q}}_{j}}{dt}}+{\frac {\partial ^{2}L}{\partial t\,\partial {\dot {q}}_{i}}}-{\frac {\partial L}{\partial q_{i}}}=0\\&\sum _{j}W_{ij}(q,{\dot {q}},t){\ddot {q}}_{j}={\frac {\partial L}{\partial q_{i}}}-{\frac {\partial ^{2}L}{\partial t\,\partial {\dot {q}}_{i}}}-\sum _{j}{\frac {\partial ^{2}L}{\partial {\dot {q}}_{i}\partial q_{j}}}{\dot {q}}_{j},\end{aligned}}}
where the matrix is defined as
{\displaystyle W_{ij}={\frac {\partial ^{2}L}{\partial {\dot {q}}_{i}\partial {\dot {q}}_{j}}}}
If the matrix {\displaystyle W} is non-singular, the above equations can be solved to represent {\displaystyle {\ddot {q}}} as a function of {\displaystyle ({\dot {q}},q,t)}. If the matrix is non-invertible, it is not possible to represent all the {\displaystyle {\ddot {q}}} as functions of {\displaystyle ({\dot {q}},q,t)}; moreover, the Hamiltonian equations of motion will not take the standard form.
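A tiny concrete instance (an example Lagrangian chosen by us, not from the text): L = (q̇₁ − q̇₂)²/2 yields W = [[1, −1], [−1, 1]], whose determinant vanishes, so the two accelerations cannot be solved for independently.

```python
# Hypothetical singular Lagrangian L = (q1' - q2')^2 / 2:
# W_ij = d^2 L / (dq_i' dq_j') is constant here.
W = [[1.0, -1.0],
     [-1.0, 1.0]]

det = W[0][0] * W[1][1] - W[0][1] * W[1][0]
print(det)  # 0.0 -> W is singular; only q1'' - q2'' is determined
```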
== Examples ==
The following examples apply Lagrange's equations of the second kind to mechanical problems.
=== Conservative force ===
A particle of mass m moves under the influence of a conservative force derived from the gradient ∇ of a scalar potential,
{\displaystyle \mathbf {F} =-{\boldsymbol {\nabla }}V(\mathbf {r} ).}
If there are more particles, in accordance with the above results, the total kinetic energy is a sum over all the particle kinetic energies, and the potential is a function of all the coordinates.
==== Cartesian coordinates ====
The Lagrangian of the particle can be written
{\displaystyle L(x,y,z,{\dot {x}},{\dot {y}},{\dot {z}})={\frac {1}{2}}m({\dot {x}}^{2}+{\dot {y}}^{2}+{\dot {z}}^{2})-V(x,y,z).}
The equations of motion for the particle are found by applying the Euler–Lagrange equation, for the x coordinate
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {\partial L}{\partial {\dot {x}}}}\right)={\frac {\partial L}{\partial x}},}
with derivatives
{\displaystyle {\frac {\partial L}{\partial x}}=-{\frac {\partial V}{\partial x}},\quad {\frac {\partial L}{\partial {\dot {x}}}}=m{\dot {x}},\quad {\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {\partial L}{\partial {\dot {x}}}}\right)=m{\ddot {x}},}
hence
{\displaystyle m{\ddot {x}}=-{\frac {\partial V}{\partial x}},}
and similarly for the y and z coordinates. Collecting the equations in vector form we find
{\displaystyle m{\ddot {\mathbf {r} }}=-{\boldsymbol {\nabla }}V}
which is Newton's second law of motion for a particle subject to a conservative force.
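This derivation can be reproduced symbolically. The sketch below (assuming SymPy is available; the general potential V(x) is left unspecified) applies the Euler–Lagrange equation to the one-dimensional Lagrangian and recovers mẍ + V′(x) = 0.

```python
import sympy as sp

# One Cartesian coordinate x(t), mass m, unspecified potential V(x).
t = sp.symbols('t')
m = sp.symbols('m', positive=True)
x = sp.Function('x')
V = sp.Function('V')

xd = sp.diff(x(t), t)
L = sp.Rational(1, 2) * m * xd**2 - V(x(t))

# Euler-Lagrange expression: d/dt (dL/dx') - dL/dx
euler_lagrange = sp.diff(L, xd).diff(t) - sp.diff(L, x(t))
print(sp.simplify(euler_lagrange))  # m*x'' + V'(x), i.e. m*x'' = -dV/dx
```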
==== Polar coordinates in 2D and 3D ====
Using the spherical coordinates (r, θ, φ) as commonly used in physics (ISO 80000-2:2019 convention), where r is the radial distance to origin, θ is polar angle (also known as colatitude, zenith angle, normal angle, or inclination angle), and φ is the azimuthal angle, the Lagrangian for a central potential is
{\displaystyle L={\frac {m}{2}}({\dot {r}}^{2}+r^{2}{\dot {\theta }}^{2}+r^{2}\sin ^{2}\theta \,{\dot {\varphi }}^{2})-V(r).}
So, in spherical coordinates, the Euler–Lagrange equations are
{\displaystyle m{\ddot {r}}-mr({\dot {\theta }}^{2}+\sin ^{2}\theta \,{\dot {\varphi }}^{2})+{\frac {\partial V}{\partial r}}=0,}
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}(mr^{2}{\dot {\theta }})-mr^{2}\sin \theta \cos \theta \,{\dot {\varphi }}^{2}=0,}
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}(mr^{2}\sin ^{2}\theta \,{\dot {\varphi }})=0.}
The φ coordinate is cyclic since it does not appear in the Lagrangian, so the conserved momentum in the system is the angular momentum
{\displaystyle p_{\varphi }={\frac {\partial L}{\partial {\dot {\varphi }}}}=mr^{2}\sin ^{2}\theta {\dot {\varphi }},}
in which r, θ and dφ/dt can all vary with time, but only in such a way that pφ is constant.
The Lagrangian in two-dimensional polar coordinates is recovered by fixing θ to the constant value π/2.
=== Pendulum on a movable support ===
Consider a pendulum of mass m and length ℓ, which is attached to a support with mass M, which can move along a line in the x-direction. Let x be the coordinate along the line of the support, and let us denote the position of the pendulum by the angle θ from the vertical. The coordinates and velocity components of the pendulum bob are
{\displaystyle {\begin{array}{rll}&x_{\mathrm {pend} }=x+\ell \sin \theta &\quad \Rightarrow \quad {\dot {x}}_{\mathrm {pend} }={\dot {x}}+\ell {\dot {\theta }}\cos \theta \\&y_{\mathrm {pend} }=-\ell \cos \theta &\quad \Rightarrow \quad {\dot {y}}_{\mathrm {pend} }=\ell {\dot {\theta }}\sin \theta .\end{array}}}
The generalized coordinates can be taken to be x and θ. The kinetic energy of the system is then
{\displaystyle T={\frac {1}{2}}M{\dot {x}}^{2}+{\frac {1}{2}}m\left({\dot {x}}_{\mathrm {pend} }^{2}+{\dot {y}}_{\mathrm {pend} }^{2}\right)}
and the potential energy is
{\displaystyle V=mgy_{\mathrm {pend} }}
giving the Lagrangian
{\displaystyle {\begin{array}{rcl}L&=&T-V\\&=&{\frac {1}{2}}M{\dot {x}}^{2}+{\frac {1}{2}}m\left[\left({\dot {x}}+\ell {\dot {\theta }}\cos \theta \right)^{2}+\left(\ell {\dot {\theta }}\sin \theta \right)^{2}\right]+mg\ell \cos \theta \\&=&{\frac {1}{2}}\left(M+m\right){\dot {x}}^{2}+m{\dot {x}}\ell {\dot {\theta }}\cos \theta +{\frac {1}{2}}m\ell ^{2}{\dot {\theta }}^{2}+mg\ell \cos \theta .\end{array}}}
Since x is absent from the Lagrangian, it is a cyclic coordinate. The conserved momentum is
{\displaystyle p_{x}={\frac {\partial L}{\partial {\dot {x}}}}=(M+m){\dot {x}}+m\ell {\dot {\theta }}\cos \theta ,}
and the Lagrange equation for the support coordinate x is
{\displaystyle (M+m){\ddot {x}}+m\ell {\ddot {\theta }}\cos \theta -m\ell {\dot {\theta }}^{2}\sin \theta =0.}
The Lagrange equation for the angle θ is
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left[m({\dot {x}}\ell \cos \theta +\ell ^{2}{\dot {\theta }})\right]+m\ell ({\dot {x}}{\dot {\theta }}+g)\sin \theta =0;}
and simplifying
{\displaystyle {\ddot {\theta }}+{\frac {\ddot {x}}{\ell }}\cos \theta +{\frac {g}{\ell }}\sin \theta =0.}
These equations may look quite complicated, but finding them with Newton's laws would have required carefully identifying all forces, which would have been much more laborious and prone to errors. By considering limit cases, the correctness of this system can be verified: For example,
{\displaystyle {\ddot {x}}\to 0}
should give the equations of motion for a simple pendulum that is at rest in some inertial frame, while
{\displaystyle {\ddot {\theta }}\to 0}
should give the equations for a pendulum in a constantly accelerating system, etc. Furthermore, it is trivial to obtain the results numerically, given suitable starting conditions and a chosen time step, by stepping through the results iteratively.
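Such a stepping scheme can be sketched in a few lines. In the sketch below (parameters, initial conditions, and step size are our assumptions, not from the text), the two Lagrange equations are solved algebraically for ẍ and θ̈ at each step; since x is cyclic and the system starts at rest, the momentum pₓ should remain at zero throughout.

```python
import math

# Assumed parameters: support mass M, bob mass m, length ell, gravity g.
M, m, ell, g = 2.0, 1.0, 1.0, 9.81
x, theta = 0.0, 0.5            # start at rest with the bob displaced
x_dot, theta_dot = 0.0, 0.0
dt = 1e-4

def accelerations(theta, theta_dot):
    # Solve the two Lagrange equations simultaneously for x'' and theta''.
    x_ddot = (m * math.sin(theta) * (g * math.cos(theta) + ell * theta_dot**2)
              / (M + m * math.sin(theta)**2))
    theta_ddot = -(x_ddot * math.cos(theta) + g * math.sin(theta)) / ell
    return x_ddot, theta_ddot

for _ in range(20_000):        # ~2 seconds of semi-implicit Euler stepping
    x_ddot, theta_ddot = accelerations(theta, theta_dot)
    x_dot += dt * x_ddot; theta_dot += dt * theta_ddot
    x += dt * x_dot; theta += dt * theta_dot

# The cyclic momentum p_x = (M+m) x' + m*ell*theta'*cos(theta) started at
# zero and should stay near zero up to integration error.
p_x = (M + m) * x_dot + m * ell * theta_dot * math.cos(theta)
print(abs(p_x))
```

The support and bob trade momentum back and forth as the pendulum swings, which is the limit-case behavior the paragraph above describes.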
=== Two-body central force problem ===
Two bodies of masses m1 and m2 with position vectors r1 and r2 are in orbit about each other due to an attractive central potential V. We may write down the Lagrangian in terms of the position coordinates as they are, but it is an established procedure to convert the two-body problem into a one-body problem as follows. Introduce the Jacobi coordinates; the separation of the bodies r = r2 − r1 and the location of the center of mass R = (m1r1 + m2r2)/(m1 + m2). The Lagrangian is then
{\displaystyle L=\underbrace {{\frac {1}{2}}M{\dot {\mathbf {R} }}^{2}} _{L_{\text{cm}}}+\underbrace {{\frac {1}{2}}\mu {\dot {\mathbf {r} }}^{2}-V(|\mathbf {r} |)} _{L_{\text{rel}}}}
where M = m1 + m2 is the total mass, μ = m1m2/(m1 + m2) is the reduced mass, and V the potential of the radial force, which depends only on the magnitude of the separation |r| = |r2 − r1|. The Lagrangian splits into a center-of-mass term Lcm and a relative motion term Lrel.
The Euler–Lagrange equation for R is simply
{\displaystyle M{\ddot {\mathbf {R} }}=0,}
which states the center of mass moves in a straight line at constant velocity.
Since the relative motion only depends on the magnitude of the separation, it is ideal to use polar coordinates (r, θ) and take r = |r|,
{\displaystyle L_{\text{rel}}={\frac {1}{2}}\mu \left({\dot {r}}^{2}+r^{2}{\dot {\theta }}^{2}\right)-V(r),}
so θ is a cyclic coordinate with the corresponding conserved (angular) momentum
{\displaystyle p_{\theta }={\frac {\partial L_{\text{rel}}}{\partial {\dot {\theta }}}}=\mu r^{2}{\dot {\theta }}=\ell .}
The radial coordinate r and angular velocity dθ/dt can vary with time, but only in such a way that ℓ is constant. The Lagrange equation for r is
{\displaystyle \mu r{\dot {\theta }}^{2}-{\frac {dV}{dr}}=\mu {\ddot {r}}.}
This equation is identical to the radial equation obtained using Newton's laws in a co-rotating reference frame, that is, a frame rotating with the reduced mass so it appears stationary. Eliminating the angular velocity dθ/dt from this radial equation,
{\displaystyle \mu {\ddot {r}}=-{\frac {\mathrm {d} V}{\mathrm {d} r}}+{\frac {\ell ^{2}}{\mu r^{3}}}.}
which is the equation of motion for a one-dimensional problem in which a particle of mass μ is subjected to the inward central force −dV/dr and a second outward force, called in this context the (Lagrangian) centrifugal force (see centrifugal force#Other uses of the term):
{\displaystyle F_{\mathrm {cf} }=\mu r{\dot {\theta }}^{2}={\frac {\ell ^{2}}{\mu r^{3}}}.}
Of course, if one remains entirely within the one-dimensional formulation, ℓ enters only as some imposed parameter of the external outward force, and its interpretation as angular momentum depends upon the more general two-dimensional problem from which the one-dimensional problem originated.
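A quick numeric sanity check of the effective one-dimensional equation (values of μ and ℓ are assumed, with V(r) = −1/r): a circular orbit sits exactly where the inward force −dV/dr and the outward centrifugal term ℓ²/μr³ cancel, namely at r = ℓ²/μ.

```python
# Assumed values: reduced mass mu, angular momentum ell_am, V(r) = -1/r.
mu, ell_am = 1.0, 1.3
r = ell_am**2 / mu   # candidate circular-orbit radius

# Right-hand side of mu*r'' = -dV/dr + ell^2/(mu*r^3) with dV/dr = 1/r^2:
radial_force = -1.0 / r**2 + ell_am**2 / (mu * r**3)
print(radial_force)  # ~0: the two terms balance, so r'' = 0
```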
If one arrives at this equation using Newtonian mechanics in a co-rotating frame, the interpretation is evident as the centrifugal force in that frame due to the rotation of the frame itself. If one arrives at this equation directly by using the generalized coordinates (r, θ) and simply following the Lagrangian formulation without thinking about frames at all, the interpretation is that the centrifugal force is an outgrowth of using polar coordinates. As Hildebrand says:
"Since such quantities are not true physical forces, they are often called inertia forces. Their presence or absence depends, not upon the particular problem at hand, but upon the coordinate system chosen." In particular, if Cartesian coordinates are chosen, the centrifugal force disappears, and the formulation involves only the central force itself, which provides the centripetal force for a curved motion.
This viewpoint, that fictitious forces originate in the choice of coordinates, often is expressed by users of the Lagrangian method. This view arises naturally in the Lagrangian approach, because the frame of reference is (possibly unconsciously) selected by the choice of coordinates. For example, compare Lagrangians in an inertial and in a noninertial frame of reference; see also the discussion of "total" and "updated" Lagrangian formulations. Unfortunately, this usage of "inertial force" conflicts with the Newtonian idea of an inertial force. In the Newtonian view, an inertial force originates in the acceleration of the frame of observation (the fact that it is not an inertial frame of reference), not in the choice of coordinate system. To keep matters clear, it is safest to refer to the Lagrangian inertial forces as generalized inertial forces, to distinguish them from the Newtonian vector inertial forces. That is, one should avoid following Hildebrand when he says (p. 155) "we deal always with generalized forces, velocities, accelerations, and momenta. For brevity, the adjective "generalized" will be omitted frequently."
It is known that the Lagrangian of a system is not unique. Within the Lagrangian formalism the Newtonian fictitious forces can be identified by the existence of alternative Lagrangians in which the fictitious forces disappear, sometimes found by exploiting the symmetry of the system.
== Extensions to include non-conservative forces ==
=== Dissipative forces ===
Dissipation (i.e. non-conservative systems) can also be treated with an effective Lagrangian formulated by a certain doubling of the degrees of freedom.
In a more general formulation, the forces could be both conservative and viscous. If an appropriate transformation can be found from the Fi, Rayleigh suggests using a dissipation function, D, of the following form:
{\displaystyle D={\frac {1}{2}}\sum _{j=1}^{m}\sum _{k=1}^{m}C_{jk}{\dot {q}}_{j}{\dot {q}}_{k},}
where Cjk are constants that are related to the damping coefficients in the physical system, though not necessarily equal to them. If D is defined this way, then
{\displaystyle Q_{j}=-{\frac {\partial V}{\partial q_{j}}}-{\frac {\partial D}{\partial {\dot {q}}_{j}}}}
and
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {\partial L}{\partial {\dot {q}}_{j}}}\right)-{\frac {\partial L}{\partial q_{j}}}+{\frac {\partial D}{\partial {\dot {q}}_{j}}}=0.}
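As a concrete check (the one-dimensional example below is ours, not from the text): for a single coordinate with L = ½mq̇² − ½kq² and Rayleigh dissipation D = ½cq̇², the modified Euler–Lagrange equation reduces to the familiar damped oscillator mq̈ + cq̇ + kq = 0, whose mechanical energy decays monotonically at the rate 2D:

```python
import math

# Hypothetical example (not from the text): a mass on a spring with
# linear viscous damping.  With L = (1/2) m qdot^2 - (1/2) k q^2 and
# Rayleigh dissipation D = (1/2) c qdot^2, the modified Euler-Lagrange
# equation  d/dt(dL/dqdot) - dL/dq + dD/dqdot = 0  becomes
#     m qddot + c qdot + k q = 0.
m, k, c = 1.0, 4.0, 0.5
q, qdot = 1.0, 0.0           # initial displacement, at rest
dt = 1e-4

energies = []
for step in range(200000):   # 20 s of simulated time
    qddot = -(k * q + c * qdot) / m
    qdot += qddot * dt       # semi-implicit Euler: update velocity first
    q += qdot * dt
    if step % 50000 == 0:    # sample E = K + V every 5 s
        energies.append(0.5 * m * qdot**2 + 0.5 * k * q**2)

# The dissipation function removes energy at rate 2D = c qdot^2 >= 0,
# so the sampled mechanical energy decreases monotonically.
assert all(e2 < e1 for e1, e2 in zip(energies, energies[1:]))
```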
=== Electromagnetism ===
A test particle is a particle whose mass and charge are assumed to be so small that its effect on the external system is insignificant. It is often a hypothetical simplified point particle with no properties other than mass and charge. Real particles like electrons and up quarks are more complex and have additional terms in their Lagrangians. Not only can the fields form non-conservative potentials, these potentials can also be velocity dependent.
The Lagrangian for a charged particle with electrical charge q, interacting with an electromagnetic field, is the prototypical example of a velocity-dependent potential. The electric scalar potential ϕ = ϕ(r, t) and magnetic vector potential A = A(r, t) are defined from the electric field E = E(r, t) and magnetic field B = B(r, t) as follows:
{\displaystyle \mathbf {E} =-{\boldsymbol {\nabla }}\phi -{\frac {\partial \mathbf {A} }{\partial t}},\quad \mathbf {B} ={\boldsymbol {\nabla }}\times \mathbf {A} .}
The Lagrangian of a massive charged test particle in an electromagnetic field
{\displaystyle L={\tfrac {1}{2}}m{\dot {\mathbf {r} }}^{2}+q\,{\dot {\mathbf {r} }}\cdot \mathbf {A} -q\phi ,}
is called minimal coupling. This is a good example of a case where the common rule of thumb, that the Lagrangian is the kinetic energy minus the potential energy, is incorrect. Combined with the Euler–Lagrange equation, it produces the Lorentz force law
{\displaystyle m{\ddot {\mathbf {r} }}=q\mathbf {E} +q{\dot {\mathbf {r} }}\times \mathbf {B} }
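A minimal numerical sketch (all values below are illustrative choices of ours): integrating the Lorentz force law for a particle in a uniform magnetic field with E = 0 produces circular motion at constant speed, since the magnetic force does no work; the orbit radius is the cyclotron radius mv/(qB):

```python
import math

# Hypothetical numbers (not from the text): a particle in a uniform
# magnetic field B = B z-hat with E = 0.  The Lorentz force q v x B does
# no work, so the speed is constant and the orbit is a circle of
# cyclotron radius r = m v / (q B) = 0.5 here, centred at (0, -0.5).
q, m, B = 1.0, 1.0, 2.0
vx, vy = 1.0, 0.0
x, y = 0.0, 0.0
dt = 1e-5
for _ in range(100000):      # one second of motion
    # a = (q/m) v x B with B along z:  ax = q vy B / m,  ay = -q vx B / m
    ax = q * vy * B / m
    ay = -q * vx * B / m
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt

assert abs(math.hypot(vx, vy) - 1.0) < 1e-3      # speed unchanged
assert abs(math.hypot(x, y + 0.5) - 0.5) < 1e-2  # stays on the circle
```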
Under gauge transformation:
{\displaystyle \mathbf {A} \rightarrow \mathbf {A} +{\boldsymbol {\nabla }}f,\quad \phi \rightarrow \phi -{\dot {f}},}
where f(r,t) is any scalar function of space and time, the aforementioned Lagrangian transforms like:
L
→
L
+
q
(
r
˙
⋅
∇
+
∂
∂
t
)
f
=
L
+
q
d
f
d
t
,
{\displaystyle L\rightarrow L+q\left({\dot {\mathbf {r} }}\cdot {\boldsymbol {\nabla }}+{\frac {\partial }{\partial t}}\right)f=L+q{\frac {df}{dt}},}
which still produces the same Lorentz force law.
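This can be checked numerically: because the added term is a total time derivative, the action changes only by the boundary term q[f(end) − f(start)], independent of the path, so the equations of motion are unaffected. The gauge function f and the trajectory below are arbitrary illustrative choices, not from the text:

```python
import math

# The added term q*df/dt is a total time derivative, so it shifts the
# action only by a boundary term: integral of q (df/dt) dt equals
# q [f(end) - f(start)] for ANY path.  f and the path are our choices.
q = 2.0

def f(x, t):                      # arbitrary gauge function f(r, t)
    return x * x * math.sin(t)

def path(t):                      # some trajectory x(t); any one will do
    return 1.0 + 0.5 * t

T, N = 1.0, 20000
dt = T / N
S_extra = 0.0
for i in range(N):
    t = (i + 0.5) * dt            # midpoint rule
    h = 1e-6                      # total derivative df/dt along the path
    dfdt = (f(path(t + h), t + h) - f(path(t - h), t - h)) / (2 * h)
    S_extra += q * dfdt * dt

boundary = q * (f(path(T), T) - f(path(0), 0))
assert abs(S_extra - boundary) < 1e-4
```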
Note that the canonical momentum (conjugate to position r) is the kinetic momentum plus a contribution from the A field (known as the potential momentum):
{\displaystyle \mathbf {p} ={\frac {\partial L}{\partial {\dot {\mathbf {r} }}}}=m{\dot {\mathbf {r} }}+q\mathbf {A} .}
This relation is also used in the minimal coupling prescription in quantum mechanics and quantum field theory. From this expression, we can see that the canonical momentum p is not gauge invariant, and therefore not a measurable physical quantity. However, if r is cyclic (i.e. the Lagrangian is independent of position r), which happens if the ϕ and A fields are uniform, then this canonical momentum p is the conserved momentum, while the measurable physical kinetic momentum mv is not.
== Other contexts and formulations ==
The ideas in Lagrangian mechanics have numerous applications in other areas of physics, and can adopt generalized results from the calculus of variations.
=== Alternative formulations of classical mechanics ===
A closely related formulation of classical mechanics is Hamiltonian mechanics. The Hamiltonian is defined by
{\displaystyle H=\sum _{i=1}^{n}{\dot {q}}_{i}{\frac {\partial L}{\partial {\dot {q}}_{i}}}-L}
and can be obtained by performing a Legendre transformation on the Lagrangian, which introduces new variables canonically conjugate to the original variables. For example, given a set of generalized coordinates, the variables canonically conjugate are the generalized momenta. This doubles the number of variables, but makes differential equations first order. The Hamiltonian is a particularly ubiquitous quantity in quantum mechanics (see Hamiltonian (quantum mechanics)).
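A sketch of the idea (the example system is ours, not from the text): for L = ½mq̇² − V(q), the Legendre transform gives H = p²/2m + V(q) with canonical momentum p = mq̇, and Hamilton's first-order equations conserve H along the motion:

```python
import math

# Hypothetical example: for L = (1/2) m qdot^2 - V(q) the Legendre
# transform gives  H = qdot * dL/dqdot - L = p^2/(2m) + V(q)  with
# canonical momentum p = m qdot.  Hamilton's equations are first order:
#     qdot = dH/dp,   pdot = -dH/dq.
m, k = 1.0, 1.0                 # unit mass on a unit spring: V(q) = k q^2 / 2
q, p = 1.0, 0.0

def H(q, p):
    return p * p / (2 * m) + 0.5 * k * q * q

H0 = H(q, p)
dt = 1e-3
for _ in range(10000):          # several oscillation periods
    p += -k * q * dt            # pdot = -dH/dq  (semi-implicit Euler)
    q += p / m * dt             # qdot =  dH/dp

assert abs(H(q, p) - H0) < 1e-3   # H is conserved along the motion
```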
Routhian mechanics is a hybrid formulation of Lagrangian and Hamiltonian mechanics, which is not often used in practice but is an efficient formulation for cyclic coordinates.
=== Momentum space formulation ===
The Euler–Lagrange equations can also be formulated in terms of the generalized momenta rather than generalized coordinates. Performing a Legendre transformation on the generalized coordinate Lagrangian L(q, dq/dt, t) obtains the generalized momenta Lagrangian L′(p, dp/dt, t) in terms of the original Lagrangian, as well the EL equations in terms of the generalized momenta. Both Lagrangians contain the same information, and either can be used to solve for the motion of the system. In practice generalized coordinates are more convenient to use and interpret than generalized momenta.
=== Higher derivatives of generalized coordinates ===
There is no mathematical reason to restrict the derivatives of generalized coordinates to first order only. It is possible to derive modified EL equations for a Lagrangian containing higher-order derivatives; see Euler–Lagrange equation for details. However, from the physical point of view there is an obstacle to including time derivatives higher than first order, which is implied by Ostrogradsky's construction of a canonical formalism for nondegenerate higher-derivative Lagrangians; see Ostrogradsky instability.
=== Optics ===
Lagrangian mechanics can be applied to geometrical optics, by applying variational principles to rays of light in a medium, and solving the EL equations gives the equations of the paths the light rays follow.
=== Relativistic formulation ===
Lagrangian mechanics can be formulated in special relativity and general relativity. Some features of Lagrangian mechanics are retained in the relativistic theories, but difficulties quickly appear in other respects. In particular, the EL equations take the same form, and the connection between cyclic coordinates and conserved momenta still applies; however, the Lagrangian must be modified and is not simply the kinetic minus the potential energy of a particle. Also, it is not straightforward to handle multiparticle systems in a manifestly covariant way; it may be possible if a particular frame of reference is singled out.
=== Quantum mechanics ===
In quantum mechanics, action and quantum-mechanical phase are related via the Planck constant, and the principle of stationary action can be understood in terms of constructive interference of wave functions.
In 1948, Feynman discovered the path integral formulation extending the principle of least action to quantum mechanics for electrons and photons. In this formulation, particles travel every possible path between the initial and final states; the probability of a specific final state is obtained by summing over all possible trajectories leading to it. In the classical regime, the path integral formulation cleanly reproduces Hamilton's principle, and Fermat's principle in optics.
=== Classical field theory ===
In Lagrangian mechanics, the generalized coordinates form a discrete set of variables that define the configuration of a system. In classical field theory, the physical system is not a set of discrete particles, but rather a continuous field ϕ(r, t) defined over a region of 3D space. Associated with the field is a Lagrangian density
{\displaystyle {\mathcal {L}}(\phi ,\nabla \phi ,{\dot {\phi }},\mathbf {r} ,t)}
defined in terms of the field and its space and time derivatives at a location r and time t. Analogous to the particle case, for non-relativistic applications the Lagrangian density is also the kinetic energy density of the field, minus its potential energy density (this is not true in general, and the Lagrangian density has to be "reverse engineered"). The Lagrangian is then the volume integral of the Lagrangian density over 3D space
{\displaystyle L(t)=\int {\mathcal {L}}\,\mathrm {d} ^{3}\mathbf {r} }
where d3r is a 3D differential volume element. The Lagrangian is a function of time since the Lagrangian density has implicit space dependence via the fields, and may have explicit spatial dependence, but these are removed in the integral, leaving only time as the variable for the Lagrangian.
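As a rough numerical illustration (the field and the discretization are our choices, not from the text), a 1D scalar field with Lagrangian density ℒ = ½ϕ̇² − ½(∂ϕ/∂x)² can be sampled on a grid and the integral over space approximated by a sum:

```python
import math

# Hypothetical 1D illustration: a scalar field with Lagrangian density
#   Ldens = (1/2) phidot^2 - (1/2) (dphi/dx)^2.
# Sampling phi(x) = sin(x) with phidot = 0 on [0, 2*pi], the Lagrangian
# L(t) = integral of Ldens dx should equal -(1/2) * integral cos^2 x dx,
# i.e. -pi/2.
N = 10000
xs = [2 * math.pi * i / N for i in range(N + 1)]
phi = [math.sin(x) for x in xs]
dx = xs[1] - xs[0]

# forward differences approximate dphi/dx at cell centres (midpoint rule)
grads = [(phi[i + 1] - phi[i]) / dx for i in range(N)]
L = sum(-0.5 * g * g for g in grads) * dx        # phidot = 0 here

assert abs(L - (-math.pi / 2)) < 1e-4
```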
=== Noether's theorem ===
The action principle, and the Lagrangian formalism, are tied closely to Noether's theorem, which connects physical conserved quantities to continuous symmetries of a physical system.
If the Lagrangian is invariant under a symmetry, then the resulting equations of motion are also invariant under that symmetry. This characteristic is very helpful in showing that theories are consistent with either special relativity or general relativity.
== See also ==
== Footnotes ==
== Notes ==
== References ==
== Further reading ==
Gupta, Kiran Chandra, Classical mechanics of particles and rigid bodies (Wiley, 1988).
Cassel, Kevin (2013). Variational methods with applications in science and engineering. Cambridge: Cambridge University Press. ISBN 978-1-107-02258-4.
Goldstein, Herbert, et al. Classical Mechanics. 3rd ed., Pearson, 2002.
== External links ==
David Tong. "Cambridge Lecture Notes on Classical Dynamics". DAMTP. Retrieved 2017-06-08.
Principle of least action interactive Excellent interactive explanation/webpage
Joseph Louis de Lagrange - Œuvres complètes (Gallica-Math)
Constrained motion and generalized coordinates, page 4 | Wikipedia/Lagrangian_Mechanics |
A bob is a heavy object (also called a "weight" or "mass") on the end of a pendulum found most commonly, but not exclusively, in pendulum clocks.
== Reason for use ==
Although a pendulum can theoretically be any shape (any rigid object swinging on a pivot), clock pendulums are usually made of a weight or bob attached to the bottom end of a rod, with the top attached to a pivot so it can swing. The advantage of this construction is that it positions the centre of mass close to the physical end of the pendulum, farthest from the pivot. This maximizes the moment of inertia, and minimises the length of pendulum required for a given period. Shorter pendulums allow the clock case to be made smaller, and also minimize the pendulum's air resistance. Since most of the energy loss in clocks is due to air friction of the pendulum, this allows clocks to run longer on a given power source.
== Use in clocks ==
Traditionally, a clock pendulum bob is a round flat disk, lens-shaped in section, to reduce its aerodynamic drag, but bobs in older clocks often have decorative carving and shapes characteristic of the type of clock. They are usually made of a dense metal such as iron or brass. Lead is denser, but is usually avoided because of its softness, which would result in the bob being dented during its inevitable collisions with the inside of the clock case when the clock is moved.
In most pendulum clocks the rate is adjusted by moving the bob up or down on the pendulum rod. Moving it up shortens the pendulum, making it beat more quickly, and causing the clock to gain time. In the most common arrangement, the bob is attached to the pendulum with an adjustment nut at the bottom, on the threaded end of the pendulum rod. Turning the nut adjusts the height of the bob. But some bobs have levers or dials to adjust the height. In some precision clocks there is a smaller auxiliary weight on a threaded shaft to allow more fine adjustment. Tower clocks sometimes have a tray mounted on the pendulum rod, to which small weights can be added or removed, to adjust the rate without stopping the clock.
The weight of the bob itself has little effect on the period of the pendulum. However, a heavier bob helps to keep the pendulum moving smoothly until it receives its next push from the clock's escapement mechanism. That increases the pendulum's Q factor, making the motion of the pendulum more independent of the escapement and the errors it introduces, leading to increased accuracy. On the other hand, the heavier the bob, the more energy must be supplied by the clock's power source, and the more friction and wear occur in the clock's movement. Pendulum bobs in quality clocks are usually made as heavy as the clock's movement can drive. A common weight for the bob of a one second pendulum, widely used in grandfather clocks and many others, is around 2 kilograms.
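The dependence of rate on length can be sketched with the small-angle period formula T = 2π√(L/g); the lengths below are illustrative values of ours, chosen near the length of a seconds pendulum:

```python
import math

# Small-angle period of a simple pendulum, T = 2*pi*sqrt(L/g).
# Example numbers are ours, not from the text.
g = 9.81                                  # m/s^2

def period(L):
    return 2 * math.pi * math.sqrt(L / g)

T1 = period(0.994)   # roughly the length of a seconds pendulum
T2 = period(0.984)   # bob raised 1 cm: pendulum beats faster, clock gains
assert T2 < T1
assert abs(T1 - 2.0) < 0.01              # full period ~2 s (1 s per swing)
```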
== See also ==
Plumb-bob
== References == | Wikipedia/Bob_(physics) |
A harmonograph is a mechanical apparatus that employs pendulums to create a geometric image. The drawings created typically are Lissajous curves or related drawings of greater complexity. The devices, which began to appear in the mid-19th century and peaked in popularity in the 1890s, cannot be conclusively attributed to a single person, although Hugh Blackburn, a professor of mathematics at the University of Glasgow, is commonly believed to be the official inventor.
A simple, so-called "lateral" harmonograph uses two pendulums to control the movement of a pen relative to a drawing surface. One pendulum moves the pen back and forth along one axis, and the other pendulum moves the drawing surface back and forth along a perpendicular axis. By varying the frequency and phase of the pendulums relative to one another, different patterns are created. Even a simple harmonograph as described can create ellipses, spirals, figure eights and other Lissajous figures.
More complex harmonographs incorporate three or more pendulums or linked pendulums together (for example, hanging one pendulum off another), or involve rotary motion, in which one or more pendulums is mounted on gimbals to allow movement in any direction.
A particular type of harmonograph, a pintograph, is based on the relative motion of two rotating disks, as illustrated in the links below. (A pintograph is not to be confused with a pantograph, which is a mechanical device used to enlarge figures.)
== History ==
In the 1870s, the term harmonograph is attested in connection with A. E. Donkin and devices built by Samuel Charles Tisley.
== Blackburn pendulum ==
A Blackburn pendulum is a device for illustrating simple harmonic motion. It is named after Hugh Blackburn, who described it in 1844, although it was first discussed by James Dean in 1815 and analyzed mathematically by Nathaniel Bowditch in the same year. A bob is suspended from a string that in turn hangs from a V-shaped pair of strings, so that the pendulum oscillates simultaneously in two perpendicular directions with different periods. The bob consequently follows a path resembling a Lissajous curve; it belongs to the family of mechanical devices known as harmonographs.
Mid-20th century physics textbooks sometimes refer to this type of pendulum as a double pendulum.
== Computer-generated harmonograph figure ==
A harmonograph creates its figures using the movements of damped pendulums. The movement of a damped pendulum is described by the equation
{\displaystyle x(t)=A\sin(tf+p)e^{-dt},}
in which f represents frequency, p represents phase, A represents amplitude, d represents damping and t represents time. If that pendulum can move about two axes (in a circular or elliptical shape), due to the principle of superposition, the motion of a rod connected to the bottom of the pendulum along one axis will be described by the equation
{\displaystyle x(t)=A_{1}\sin(tf_{1}+p_{1})e^{-d_{1}t}+A_{2}\sin(tf_{2}+p_{2})e^{-d_{2}t}.}
A typical harmonograph has two pendulums that move in such a fashion, and a pen that is moved by two perpendicular rods connected to these pendulums. Therefore, the path of the harmonograph figure is described by the parametric equations
{\displaystyle {\begin{aligned}x(t)&=A_{1}\sin(tf_{1}+p_{1})e^{-d_{1}t}+A_{2}\sin(tf_{2}+p_{2})e^{-d_{2}t},\\y(t)&=A_{3}\sin(tf_{3}+p_{3})e^{-d_{3}t}+A_{4}\sin(tf_{4}+p_{4})e^{-d_{4}t}.\end{aligned}}}
An appropriate computer program can translate these equations into a graph that emulates a harmonograph. Applying the first equation a second time to each equation can emulate a moving piece of paper (see the figure below).
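For instance, a short Python sketch (the coefficient values are arbitrary illustrative choices) can evaluate the parametric equations above and confirm that the damping terms make the pen spiral toward the centre over time:

```python
import math

# Direct translation of the parametric harmonograph equations.  Each of
# x(t) and y(t) is a sum of damped sinusoids A*sin(t*f + p)*exp(-d*t);
# the parameter values below are arbitrary choices for illustration.
def harmonograph(t, params):
    return sum(A * math.sin(t * f + p) * math.exp(-d * t)
               for A, f, p, d in params)

x_params = [(1.0, 2.0, 0.0, 0.02), (1.0, 3.0, math.pi / 2, 0.02)]
y_params = [(1.0, 3.0, 0.0, 0.02), (1.0, 2.0, 0.0, 0.02)]

points = [(harmonograph(t, x_params), harmonograph(t, y_params))
          for t in (i * 0.01 for i in range(50000))]

# Damping shrinks the figure: late points lie much closer to the centre.
early = max(abs(x) + abs(y) for x, y in points[:1000])
late = max(abs(x) + abs(y) for x, y in points[-1000:])
assert late < early
```

Feeding the `points` list to any plotting library draws the Lissajous-like figure.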
== Gallery ==
== See also ==
Spirograph
== Notes ==
== External links ==
A complex harmonograph with a unique single pendulum design
Harmonograph background, equations, and illustrations
How to build a 3-pendulum rotary harmonograph
Interactive JavaScript simulation of a 3-pendulum rotary harmonograph
HTML5 Animated Harmonograph
Virtual Harmonograph web application
An Animated Harmonograph Model in MS Excel
An interactive Pintograph for iOS
Harmonographs, pintographs, and Excel models | Wikipedia/Harmonograph |
Comptes rendus de l'Académie des Sciences (French pronunciation: [kɔ̃t ʁɑ̃dy də lakademi de sjɑ̃s], Proceedings of the Academy of Sciences), or simply Comptes rendus, is a French scientific journal published since 1835. It is the proceedings of the French Academy of Sciences. It is currently split into seven sections, published on behalf of the Academy until 2020 by Elsevier: Mathématique, Mécanique, Physique, Géoscience, Palévol, Chimie, and Biologies. As of 2020, the Comptes Rendus journals are published by the Academy with a diamond open access model.
== Naming history ==
The journal has had several name changes and splits over the years.
=== 1835–1965 ===
Comptes rendus was initially established in 1835 as Comptes rendus hebdomadaires des séances de l'Académie des Sciences. It began as an alternative publication pathway for more prompt publication than the Mémoires de l'Académie des Sciences, which had been published since 1666. The Mémoires, which continued to be published alongside the Comptes rendus throughout the nineteenth century, had a publication cycle which resulted in memoirs being published years after they had been presented to the Academy. Some academicians continued to publish in the Mémoires because of the strict page limits in the Comptes rendus.
=== 1966–1980 ===
After 1965 this title was split into five sections:
Série A (Sciences mathématiques) – mathematics
Série B (Sciences physiques) – physics and geosciences
Série C (Sciences chimiques) – chemistry
Série D (Sciences naturelles) – life sciences
Vie académique – academy notices and miscellanea (between 1968 and 1970, and again between 1979 and 1983)
Series A and B were published together in one volume except in 1974.
=== 1981–1993 ===
The areas were rearranged as follows:
Série I - (Sciences Mathématiques) - mathematics
Série II (Mécanique-physique, Chimie, Sciences de l'univers, Sciences de la Terre) - physics, chemistry, astronomy and geosciences
Série III - (Sciences de la vie) - life sciences
Vie académique – academy notices and miscellanea (the last 3 volumes of the second edition, between 1981 and 1983)
Vie des sciences – A renamed Vie académique (from 1984 to 1996)
=== 1994–2001 ===
These publications remained the same:
Série I (Sciences mathématiques) – mathematics
Série III (Sciences de la Vie) – life sciences
Vie des sciences – A renamed Vie académique (until 1996)
The areas published in Série II were slowly split into other publications in ways that caused some confusion.
In 1994, Série II, which covered physics, chemistry, astronomy and geosciences, was replaced by Série IIA and Série IIB. Série IIA was exclusive to geosciences, and Série IIB covered chemistry and astronomy and the now-distinct mechanics and physics.
In 1998, Série IIB covered mechanics, physics and astronomy; chemistry got its separate publication, Série IIC.
In 2000, Série IIB became dedicated exclusively to mechanics in May. Astronomy was redefined as astrophysics, which along with physics was covered by the new Série IV. Série IV began publishing in March; however, Série IIB published two more issues on physics and astrophysics in April and May before starting the new run.
=== 2002 onwards ===
The present naming and subject assignment was established in 2002:
Comptes Rendus Biologies – life sciences except paleontology and evolutionary biology. Continues in part Série IIC (biochemistry) and III.
Comptes Rendus Chimie – chemistry. Continues in part Série IIC.
Comptes Rendus Géoscience – geosciences. Continues in part Série IIA.
Comptes Rendus Mathématique – mathematics. Continues Série I.
Comptes Rendus Mécanique – mechanics. Continues Série IIB.
Comptes Rendus Palévol – paleontology and evolutionary biology. Continues in part Série IIA and III.
Comptes Rendus Physique – topical issues in physics (mainly optics, astrophysics and particle physics). Continues Série IV.
== Online open archives ==
The Comptes rendus de l'Académie des Sciences publications are available through the National Library of France as part of its free online library and archive of other historical documents and works of art, Gallica. The publications available online are:
Comptes rendus hebdomadaires des séances de l'Académie des science (1835–1965)
Séries A et B, Sciences Mathématiques et Sciences Physiques (1966–1973)
Série A, Sciences Mathématiques, (1974)
Série B, Sciences Physiques, (1974)
Séries A et B, Sciences Mathématiques et Sciences Physiques (1975–1980)
Besides the material for this timeframe, this collection also has a separate set of scans of all the material of Série I - Mathématique from 1981 to 1990
Série C, Sciences Chimique
Série D, Sciences Naturelle
Vie Académique (1968–1970)
Vie Académique (1979–1983)
Série I - Mathématique
Séries A et B, Sciences Mathématiques et Sciences Physiques (1975–1980) has a different set of scans for all of this material.
Série II - Mécanique-physique, Chimie, Sciences de l'univers, Sciences de la Terr
The link to Série I - Mathématique (1984–1996) includes a different set of scans for the first 3 issues of 1981 of this series.
Série III - Sciences de la vie
Série I - Mathématique
Séries A et B, Sciences Mathématiques et Sciences Physiques (1975–1980) has a different set of scans for this series' material until 1990.
This collection contains a different set of scans of the 1981 material of Série II - Mécanique-physique, Chimie, Sciences de l'univers, Sciences de la Terr (1981–1983).
Série II - Mécanique-physique, Chimie, Sciences de l'univers, Sciences de la Terre (1984–1994)
The first year (1994) of material of Série IIb - Mécanique, physique, chimie, astronomie (1995–1996) is misfiled in this collection.
Série IIa - Sciences de la terre et des planètes (1994–1996)
Série IIb - Mécanique, physique, chimie, astronomie (1995–1996)
The first year of material (1994) is misfiled together with Série II - Mécanique-physique, Chimie, Sciences de l'univers, Sciences de la Terre (1994–1996).
Série III - Sciences de la vie
Vie des sciences
All publications from 1997 to 2019 were published commercially by Elsevier. From 2020 on, the Comptes Rendus Palevol have been published by the Muséum National d'Histoire Naturelle (Paris) for the Académie des Sciences. All other series of the Comptes Rendus of the Académie des Sciences have been published (from 2020 on) by Mersenne under a Diamond Open Access model.
== References ==
== External links ==
"Comptes Rendus official website". French Academy of Sciences. Retrieved 23 May 2024.
Comptes Rendus de l'Académie des sciences numérisés sur le site de la Bibliothèque nationale de France
Scholarly Societies project: French Academy of Sciences page; provides information on naming and publication history up to 1980, as well as on previous journals of the Academy. Retrieved 2006-DEC-10.
Bibliothèque nationale de France: Catalog record and full-text scans of Comptes rendus. Retrieved 2009-JUN-22.
Comptes rendus series: [1]
ScienceDirect list of titles (from 1997 onwards): https://www.sciencedirect.com/browse/journals-and-books?searchPhrase=comptes | Wikipedia/Comptes_Rendus_Hebdomadaires_des_Séances_de_l'Académie_des_Sciences |
In physical sciences, mechanical energy is the sum of macroscopic potential and kinetic energies. The principle of conservation of mechanical energy states that if an isolated system is subject only to conservative forces, then the mechanical energy is constant. If an object moves in the opposite direction of a conservative net force, the potential energy will increase; and if the speed (not the velocity) of the object changes, the kinetic energy of the object also changes. In all real systems, however, nonconservative forces, such as frictional forces, will be present, but if they are of negligible magnitude, the mechanical energy changes little and its conservation is a useful approximation. In elastic collisions, the kinetic energy is conserved, but in inelastic collisions some mechanical energy may be converted into thermal energy. The equivalence between lost mechanical energy and an increase in temperature was discovered by James Prescott Joule.
Many devices are used to convert mechanical energy to or from other forms of energy, e.g. an electric motor converts electrical energy to mechanical energy, an electric generator converts mechanical energy into electrical energy and a heat engine converts heat to mechanical energy.
== General ==
Energy is a scalar quantity, and the mechanical energy of a system is the sum of the potential energy (which is measured by the position of the parts of the system) and the kinetic energy (which is also called the energy of motion):
{\displaystyle E_{\text{mechanical}}=U+K}
The potential energy, U, depends on the position of an object subjected to gravity or some other conservative force. The gravitational potential energy of an object is equal to the weight W of the object multiplied by the height h of the object's center of gravity relative to an arbitrary datum:
{\displaystyle U=Wh}
Potential energy is the energy stored in an object due to its position relative to a conservative force field, such as gravity or a spring. It increases when work is done against the force—meaning when the object is moved in the direction opposite to that of the force. If F represents the conservative force and x the position, the potential energy of the force between the two positions x1 and x2 is defined as the negative integral of F from x1 to x2:
{\displaystyle U=-\int _{x_{1}}^{x_{2}}{\vec {F}}\cdot d{\vec {x}}}
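A quick numerical check (the spring force F = −kx and the numbers are our illustrative choices): evaluating the negative integral of the force reproduces the closed-form spring potential difference ½k(x2² − x1²):

```python
# Numeric check of U = -integral of F dx for a hypothetical spring force
# F(x) = -k x.  The result should match the closed form
#   U = (1/2) k (x2^2 - x1^2).
k = 3.0
x1, x2 = 0.5, 2.0
N = 100000
dx = (x2 - x1) / N

U = 0.0
for i in range(N):
    x = x1 + (i + 0.5) * dx          # midpoint rule
    F = -k * x                       # conservative restoring force
    U += -F * dx                     # negative integral of F dx

assert abs(U - 0.5 * k * (x2 * x2 - x1 * x1)) < 1e-6
```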
The kinetic energy, K, depends on the speed of an object and is the ability of a moving object to do work on other objects when it collides with them. It is defined as one half the product of the object's mass with the square of its speed, and the total kinetic energy of a system of objects is the sum of the kinetic energies of the respective objects:
{\displaystyle K={1 \over 2}mv^{2}}
The principle of conservation of mechanical energy states that if a body or system is subjected only to conservative forces, the mechanical energy of that body or system remains constant. The difference between a conservative and a non-conservative force is that when a conservative force moves an object from one point to another, the work done by the conservative force is independent of the path. On the contrary, when a non-conservative force acts upon an object, the work done by the non-conservative force is dependent on the path.
== Conservation of mechanical energy ==
According to the principle of conservation of mechanical energy, the mechanical energy of an isolated system remains constant in time, as long as the system is free of friction and other non-conservative forces. In any real situation, frictional forces and other non-conservative forces are present, but in many cases their effects on the system are so small that the principle of conservation of mechanical energy can be used as a fair approximation. Though energy cannot be created or destroyed, it can be converted to another form of energy.
=== Swinging pendulum ===
In a mechanical system like a swinging pendulum subjected to the conservative gravitational force where frictional forces like air drag and friction at the pivot are negligible, energy passes back and forth between kinetic and potential energy but never leaves the system. The pendulum reaches greatest kinetic energy and least potential energy when in the vertical position, because it will have the greatest speed and be nearest the Earth at this point. On the other hand, it will have its least kinetic energy and greatest potential energy at the extreme positions of its swing, because it has zero speed and is farthest from Earth at these points. However, when taking the frictional forces into account, the system loses mechanical energy with each swing because of the negative work done on the pendulum by these non-conservative forces.
=== Irreversibilities ===
That the loss of mechanical energy in a system always resulted in an increase of the system's temperature has been known for a long time, but it was the amateur physicist James Prescott Joule who first experimentally demonstrated how a certain amount of work done against friction resulted in a definite quantity of heat which should be conceived as the random motions of the particles that comprise matter. This equivalence between mechanical energy and heat is especially important when considering colliding objects. In an elastic collision, mechanical energy is conserved – the sum of the mechanical energies of the colliding objects is the same before and after the collision. After an inelastic collision, however, the mechanical energy of the system will have changed. Usually, the mechanical energy before the collision is greater than the mechanical energy after the collision. In inelastic collisions, some of the mechanical energy of the colliding objects is transformed into kinetic energy of the constituent particles. This increase in kinetic energy of the constituent particles is perceived as an increase in temperature. The collision can be described by saying some of the mechanical energy of the colliding objects has been converted into an equal amount of heat. Thus, the total energy of the system remains unchanged though the mechanical energy of the system has reduced.
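The elastic/inelastic distinction is easy to quantify (the example numbers below are ours): in a perfectly inelastic head-on collision, momentum conservation fixes the final velocity, and the kinetic energy that disappears is the part converted to thermal energy:

```python
# Perfectly inelastic head-on collision (illustrative numbers): momentum
# is conserved, but some kinetic energy is converted into heat.
m1, v1 = 2.0, 3.0
m2, v2 = 1.0, 0.0

v_after = (m1 * v1 + m2 * v2) / (m1 + m2)        # momentum conservation
K_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
K_after = 0.5 * (m1 + m2) * v_after**2

assert abs((m1 + m2) * v_after - (m1 * v1 + m2 * v2)) < 1e-12
assert K_after < K_before
# The difference K_before - K_after appears as thermal energy.
```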
=== Satellite ===
A satellite of mass m at a distance r from the centre of Earth possesses both kinetic energy, K (by virtue of its motion), and gravitational potential energy, U (by virtue of its position within the Earth's gravitational field; Earth's mass is M).
Hence, the mechanical energy E_mechanical of the satellite-Earth system is given by
{\displaystyle E_{\text{mechanical}}=U+K}
{\displaystyle E_{\text{mechanical}}=-G{\frac {Mm}{r}}\ +{\frac {1}{2}}\,mv^{2}}
If the satellite is in circular orbit, the energy conservation equation can be further simplified into
{\displaystyle E_{\text{mechanical}}=-G{\frac {Mm}{2r}}}
since in circular motion, Newton's 2nd Law of motion can be taken to be
{\displaystyle G{\frac {Mm}{r^{2}}}\ ={\frac {mv^{2}}{r}}}
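Substituting v² = GM/r from this relation into E = U + K reproduces the simplified expression numerically. The values below are roughly Earth's mass and a low orbit, chosen for illustration:

```python
# For a circular orbit, Newton's 2nd law gives v^2 = G M / r, so
# K = G M m / (2 r) and E = U + K = -G M m / (2 r).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # kg, roughly Earth's mass
m = 1000.0           # kg, illustrative satellite mass
r = 7.0e6            # m, a low orbital radius

v2 = G * M / r                           # from the circular-orbit relation
E = -G * M * m / r + 0.5 * m * v2        # U + K

assert E < 0                             # bound orbit
assert abs(E - (-G * M * m / (2 * r))) < 1e-3 * abs(E)
```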
== Conversion ==
Today, many technological devices convert mechanical energy into other forms of energy or vice versa. These devices can be placed in these categories:
An electric motor converts electrical energy into mechanical energy.
A generator converts mechanical energy into electrical energy.
A hydroelectric powerplant converts the mechanical energy of water in a storage dam into electrical energy.
An internal combustion engine is a heat engine that obtains mechanical energy from chemical energy by burning fuel. From this mechanical energy, the internal combustion engine often generates electricity.
A steam engine converts the internal energy of steam into mechanical energy.
A turbine converts the kinetic energy of a stream of gas or liquid into mechanical energy.
== Distinction from other types ==
The classification of energy into different types often follows the boundaries of the fields of study in the natural sciences.
Chemical energy is the kind of potential energy "stored" in chemical bonds and is studied in chemistry.
Nuclear energy is energy stored in interactions between the particles in the atomic nucleus and is studied in nuclear physics.
Electromagnetic energy is in the form of electric charges, magnetic fields, and photons. It is studied in electromagnetism.
Various forms of energy in quantum mechanics; e.g., the energy levels of electrons in an atom.
A reaction control system (RCS) is a spacecraft system that uses thrusters to provide attitude control and translation. Alternatively, reaction wheels can be used for attitude control instead of an RCS. Use of diverted engine thrust to provide stable attitude control of a short or vertical takeoff and landing aircraft below conventional winged flight speeds, such as with the Harrier "jump jet", may also be referred to as a reaction control system.
Reaction control systems are capable of providing small amounts of thrust in any desired direction or combination of directions. An RCS is also capable of providing torque to allow control of rotation (roll, pitch, and yaw).
Reaction control systems often use combinations of large and small (vernier) thrusters, to allow different levels of response.
== Uses ==
Spacecraft reaction control systems are used for:
attitude control during different stages of a mission;
station keeping in orbit;
close maneuvering during docking procedures;
control of orientation, or "pointing the nose" of the craft;
a backup means of deorbiting;
ullage motors to prime the fuel system for a main engine burn.
Because spacecraft only contain a finite amount of fuel and there is little chance to refill them, alternative reaction control systems have been developed so that fuel can be conserved. For stationkeeping, some spacecraft (particularly those in geosynchronous orbit) use high-specific impulse engines such as arcjets, ion thrusters, or Hall effect thrusters. To control orientation, a few spacecraft, including the ISS, use momentum wheels which spin to control rotational rates on the vehicle.
== Location of thrusters on spacecraft ==
The Mercury space capsule and Gemini reentry module both used groupings of nozzles to provide attitude control. The thrusters were located off their center of mass, thus providing a torque to rotate the capsule. The Gemini capsule was also capable of adjusting its reentry course by rolling, which directed its off-center lifting force. The Mercury thrusters used a hydrogen peroxide monopropellant which turned to steam when forced through a tungsten screen, and the Gemini thrusters used hypergolic mono-methyl hydrazine fuel oxidized with nitrogen tetroxide.
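The rotation produced by an off-center thruster follows from the torque it generates about the center of mass, $\tau = F d$, and the resulting angular acceleration $\alpha = \tau / I$. A minimal sketch with purely illustrative numbers (not actual Mercury or Gemini data):

```python
def angular_acceleration(thrust_n, lever_arm_m, moment_of_inertia):
    """Angular acceleration from an off-center thruster.

    A thruster mounted a distance d from the center of mass produces a
    torque tau = F * d; the craft responds with alpha = tau / I.
    """
    torque = thrust_n * lever_arm_m       # N*m
    return torque / moment_of_inertia     # rad/s^2

# Illustrative values only: a 100 N thruster, 1.5 m from the
# center of mass, on a capsule with I = 1000 kg*m^2.
alpha = angular_acceleration(100.0, 1.5, 1000.0)   # 0.15 rad/s^2
```

Firing thrusters in opposed pairs on opposite sides of the center of mass, as the later clustered designs did, doubles the torque while cancelling any net translation.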
The Gemini spacecraft was also equipped with a hypergolic Orbit Attitude and Maneuvering System, which made it the first crewed spacecraft with translation as well as rotation capability. In-orbit attitude control was achieved by firing pairs of eight 25-pound-force (110 N) thrusters located around the circumference of its adapter module at the extreme aft end. Lateral translation control was provided by four 100-pound-force (440 N) thrusters around the circumference at the forward end of the adapter module (close to the spacecraft's center of mass). Two forward-pointing 85-pound-force (380 N) thrusters at the same location provided aft translation, and two 100-pound-force (440 N) thrusters located in the aft end of the adapter module provided forward thrust, which could be used to change the craft's orbit. The Gemini reentry module also had a separate Reentry Control System of sixteen thrusters located at the base of its nose, to provide rotational control during reentry.
The Apollo Command Module had a set of twelve hypergolic thrusters for attitude control, and directional reentry control similar to Gemini.
The Apollo Service Module and Lunar Module each had a set of sixteen R-4D hypergolic thrusters, grouped into external clusters of four, to provide both translation and attitude control. The clusters were located near the craft's average centers of mass, and were fired in pairs in opposite directions for attitude control.
A pair of translation thrusters are located at the rear of the Soyuz spacecraft; the counter-acting thrusters are similarly paired in the middle of the spacecraft (near the center of mass) pointing outwards and forward. These act in pairs to prevent the spacecraft from rotating. The thrusters for the lateral directions are mounted close to the center of mass of the spacecraft, in pairs as well.
=== Location of thrusters on spaceplanes ===
The suborbital X-15 and a companion training aero-spacecraft, the NF-104 AST, both intended to travel to an altitude that rendered their aerodynamic control surfaces unusable, established a convention for locations for thrusters on winged vehicles not intended to dock in space; that is, those that only have attitude control thrusters. Those for pitch and yaw are located in the nose, forward of the cockpit, and replace a standard radar system. Those for roll are located at the wingtips. The X-20, which would have gone into orbit, continued this pattern.
Unlike these, the Space Shuttle Orbiter had many more thrusters, which were required to control vehicle attitude in both orbital flight and during the early part of atmospheric entry, as well as carry out rendezvous and docking maneuvers in orbit. Shuttle thrusters were grouped in the nose of the vehicle and on each of the two aft Orbital Maneuvering System pods. No nozzles interrupted the heat shield on the underside of the craft; instead, the nose RCS nozzles which control positive pitch were mounted on the side of the vehicle, and were canted downward. The downward-facing negative pitch thrusters were located in the OMS pods mounted in the tail/afterbody.
== International Space Station systems ==
The International Space Station uses electrically powered control moment gyroscopes (CMG) for primary attitude control, with RCS thruster systems as backup and augmentation systems.
== References ==
== External links ==
NASA.gov
Space Shuttle RCS Archived 2009-05-24 at the Wayback Machine
Flywheel energy storage (FES) works by accelerating a rotor (flywheel) to a very high speed and maintaining the energy in the system as rotational energy. When energy is extracted from the system, the flywheel's rotational speed is reduced as a consequence of the principle of conservation of energy; adding energy to the system correspondingly results in an increase in the speed of the flywheel.
Most FES systems use electricity to accelerate and decelerate the flywheel, but devices that directly use mechanical energy are being developed.
Advanced FES systems have rotors made of high strength carbon-fiber composites, suspended by magnetic bearings, and spinning at speeds from 20,000 to over 50,000 rpm in a vacuum enclosure. Such flywheels can come up to speed in a matter of minutes – reaching their energy capacity much more quickly than some other forms of storage.
== Main components ==
A typical system consists of a flywheel supported by rolling-element bearings connected to a motor–generator. The flywheel, and sometimes the motor–generator, may be enclosed in a vacuum chamber to reduce friction and energy loss.
First-generation flywheel energy-storage systems use a large steel flywheel rotating on mechanical bearings. Newer systems use carbon-fiber composite rotors that have a higher tensile strength than steel and can store much more energy for the same mass.
To reduce friction, magnetic bearings are sometimes used instead of mechanical bearings.
=== Possible future use of superconducting bearings ===
The expense of refrigeration led to the early dismissal of low-temperature superconductors for use in magnetic bearings. However, high-temperature superconductor (HTSC) bearings may be economical and could possibly extend the time energy could be stored economically. Hybrid bearing systems are most likely to see use first. High-temperature superconductor bearings have historically had problems providing the lifting forces necessary for the larger designs but can easily provide a stabilizing force. Therefore, in hybrid bearings, permanent magnets support the load and high-temperature superconductors are used to stabilize it. Superconductors can work well at stabilizing the load because they are perfect diamagnets. If the rotor tries to drift off-center, a restoring force due to flux pinning restores it. This is known as the magnetic stiffness of the bearing. Rotational axis vibration can occur due to low stiffness and damping, which are inherent problems of superconducting magnets, preventing the use of completely superconducting magnetic bearings for flywheel applications.
Since flux pinning is an important factor for providing the stabilizing and lifting force, the HTSC can be made much more easily for flywheel energy storage than for other uses. HTSC powders can be formed into arbitrary shapes so long as flux pinning is strong. An ongoing challenge that has to be overcome before superconductors can provide the full lifting force for an FES system is finding a way to suppress the decrease of levitation force and the gradual fall of rotor during operation caused by the flux creep of the superconducting material.
== Physical characteristics ==
=== General ===
Compared with other ways to store electricity, FES systems have long lifetimes (lasting decades with little or no maintenance; full-cycle lifetimes quoted for flywheels range from in excess of 10⁵, up to 10⁷, cycles of use), high specific energy (100–130 W·h/kg, or 360–500 kJ/kg), and large maximum power output. The energy efficiency (ratio of energy out per energy in) of flywheels, also known as round-trip efficiency, can be as high as 90%. Typical capacities range from 3 kWh to 133 kWh. Rapid charging of a system occurs in less than 15 minutes. The high specific energies often cited for flywheels can be a little misleading, as commercial systems built have much lower specific energy, for example 11 W·h/kg, or 40 kJ/kg.
=== Form of energy storage ===
A flywheel stores energy as rotational kinetic energy,

$$E = \frac{1}{2}I\omega^{2}$$

Here $I = \int r^{2}\,\mathrm{d}m$ is the flywheel's moment of inertia, an integral over the flywheel's mass $m$, and $\omega = 2\pi n_{m}$ is the angular velocity, where $n_{m}$ is the rotational speed (number of revolutions per second).
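The stored rotational kinetic energy $E = \tfrac{1}{2}I\omega^{2}$ can be evaluated directly. A minimal sketch for a solid-cylinder rotor (the mass, radius, and speed below are arbitrary example values, and the solid-cylinder moment of inertia $I = \tfrac{1}{2}mr^{2}$ is an assumption about the rotor's shape):

```python
import math

def flywheel_energy(mass_kg, radius_m, rev_per_s):
    """Rotational kinetic energy E = (1/2) I w^2 of a solid-cylinder flywheel.

    For a solid cylinder, I = (1/2) m r^2; the angular velocity is
    w = 2*pi*n_m, with n_m the rotational speed in revolutions per second.
    """
    I = 0.5 * mass_kg * radius_m ** 2
    omega = 2 * math.pi * rev_per_s
    return 0.5 * I * omega ** 2

# Illustrative: a 100 kg, 0.5 m radius cylinder at 20,000 rpm
E = flywheel_energy(100.0, 0.5, 20_000 / 60)   # ~27 MJ (~7.6 kWh)
```

Because the speed enters squared, doubling the rotational speed quadruples the stored energy.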
=== Specific energy ===
The maximal specific energy of a flywheel rotor is mainly dependent on two factors: the first being the rotor's geometry, and the second being the properties of the material being used. For single-material, isotropic rotors this relationship can be expressed as
$$\frac{E}{m} = K\left(\frac{\sigma}{\rho}\right),$$

where
$E$ is the kinetic energy of the rotor [J],
$m$ is the rotor's mass [kg],
$K$ is the rotor's geometric shape factor [dimensionless],
$\sigma$ is the tensile strength of the material [Pa],
$\rho$ is the material's density [kg/m³].
==== Geometry (shape factor) ====
The highest possible value for the shape factor of a flywheel rotor is $K = 1$, which can be achieved only by the theoretical constant-stress disc geometry. A constant-thickness disc geometry has a shape factor of $K = 0.606$, while for a rod of constant thickness the value is $K = 0.333$. A thin cylinder has a shape factor of $K = 0.5$. For most flywheels with a shaft, the shape factor is below or about $K = 0.333$. A shaft-less design has a shape factor similar to that of a constant-thickness disc ($K \approx 0.6$), which enables a doubled energy density.
==== Material properties ====
For energy storage, materials with high strength and low density are desirable. For this reason, composite materials are frequently used in advanced flywheels. The strength-to-density ratio of a material can be expressed in Wh/kg (or Nm/kg); values greater than 400 Wh/kg can be achieved by certain composite materials.
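A quick calculation illustrates these magnitudes via the relation $E/m = K\,\sigma/\rho$. The material figures below are typical order-of-magnitude values for a carbon-fiber composite, not data for any specific product:

```python
def strength_to_density_wh_per_kg(sigma_pa, rho_kg_m3):
    """Strength-to-density ratio sigma/rho, converted from J/kg to Wh/kg (1 Wh = 3600 J)."""
    return sigma_pa / rho_kg_m3 / 3600.0

def max_specific_energy_wh_per_kg(K, sigma_pa, rho_kg_m3):
    """Maximal specific energy of a rotor, E/m = K * (sigma / rho), in Wh/kg."""
    return K * strength_to_density_wh_per_kg(sigma_pa, rho_kg_m3)

# Assumed illustrative material: sigma ~ 4 GPa, rho ~ 1800 kg/m^3
ratio = strength_to_density_wh_per_kg(4.0e9, 1800.0)          # ~617 Wh/kg, above 400 Wh/kg
rotor = max_specific_energy_wh_per_kg(0.606, 4.0e9, 1800.0)   # constant-thickness disc rotor
```

The shape factor then reduces the material's raw strength-to-density ratio to the energy a practical rotor geometry can actually hold.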
==== Rotor materials ====
Several modern flywheel rotors are made from composite materials. Examples include the carbon-fiber composite flywheel from Beacon Power Corporation and the PowerThru flywheel from Phillips Service Industries. Alternatively, Calnetix utilizes aerospace-grade high-performance steel in their flywheel construction.
For these rotors, the relationship between material properties, geometry and energy density can be expressed by using a weighted-average approach.
=== Tensile strength and failure modes ===
One of the primary limits to flywheel design is the tensile strength of the rotor. Generally speaking, the stronger the disc, the faster it may be spun, and the more energy the system can store. (Making the flywheel heavier without a corresponding increase in strength will slow the maximum speed the flywheel can spin without rupturing, hence will not increase the total amount of energy the flywheel can store.)
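Why adding mass alone does not help can be seen from the thin-rim approximation: the hoop stress in a thin rotating ring is $\sigma = \rho v^{2}$, so the tensile strength caps the rim speed at $v = \sqrt{\sigma/\rho}$ regardless of the rotor's size or mass. A small sketch with typical (assumed) material values:

```python
import math

def max_rim_speed(sigma_pa, rho_kg_m3):
    """Burst-limited rim speed of a thin-rim flywheel, v = sqrt(sigma/rho).

    Hoop stress in a thin rotating ring is sigma = rho * v^2, so the
    limit depends only on the material, not on the rotor's size:
    a heavier rotor of the same material cannot be spun any faster.
    """
    return math.sqrt(sigma_pa / rho_kg_m3)

# Assumed illustrative values:
steel_v = max_rim_speed(1.5e9, 7800.0)   # ~440 m/s
cfrp_v  = max_rim_speed(4.0e9, 1800.0)   # ~1490 m/s
```

The much higher permissible rim speed is why carbon-fiber rotors store far more energy per kilogram than steel ones.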
When the tensile strength of a composite flywheel's outer binding cover is exceeded, the binding cover will fracture, and the wheel will shatter as the outer wheel compression is lost around the entire circumference, releasing all of its stored energy at once; this is commonly referred to as "flywheel explosion" since wheel fragments can reach kinetic energy comparable to that of a bullet. Composite materials that are wound and glued in layers tend to disintegrate quickly, first into small-diameter filaments that entangle and slow each other, and then into red-hot powder; a cast metal flywheel throws off large chunks of high-speed shrapnel.
For a cast metal flywheel, the failure limit is the binding strength of the grain boundaries of the polycrystalline molded metal. Aluminum in particular suffers from fatigue and can develop microfractures from repeated low-energy stretching. Angular forces may cause portions of a metal flywheel to bend outward and begin dragging on the outer containment vessel, or to separate completely and bounce randomly around the interior. The rest of the flywheel is now severely unbalanced, which may lead to rapid bearing failure from vibration, and sudden shock fracturing of large segments of the flywheel.
Traditional flywheel systems require strong containment vessels as a safety precaution, which increases the total mass of the device. The energy release from failure can be dampened with a gelatinous or encapsulated liquid inner housing lining, which will boil and absorb the energy of destruction. Still, many customers of large-scale flywheel energy-storage systems prefer to have them embedded in the ground to halt any material that might escape the containment vessel.
=== Energy storage efficiency ===
Flywheel energy storage systems using mechanical bearings can lose 20% to 50% of their energy in two hours. Much of the friction responsible for this energy loss results from the flywheel changing orientation due to the rotation of the earth (an effect similar to that shown by a Foucault pendulum). This change in orientation is resisted by the gyroscopic forces exerted by the flywheel's angular momentum, thus exerting a force against the mechanical bearings. This force increases friction. This can be avoided by aligning the flywheel's axis of rotation parallel to that of the earth's axis of rotation.
Conversely, flywheels with magnetic bearings and high vacuum can maintain 97% mechanical efficiency, and 85% round trip efficiency.
=== Effects of angular momentum in vehicles ===
When used in vehicles, flywheels also act as gyroscopes, since their angular momentum is typically of a similar order of magnitude as the forces acting on the moving vehicle. This property may be detrimental to the vehicle's handling characteristics while turning or driving on rough ground; driving onto the side of a sloped embankment may cause wheels to partially lift off the ground as the flywheel opposes sideways tilting forces. On the other hand, this property could be utilized to keep the car balanced so as to keep it from rolling over during sharp turns.
When a flywheel is used entirely for its effects on the attitude of a vehicle, rather than for energy storage, it is called a reaction wheel or a control moment gyroscope.
The resistance of angular tilting can be almost completely removed by mounting the flywheel within an appropriately applied set of gimbals, allowing the flywheel to retain its original orientation without affecting the vehicle (see Properties of a gyroscope). This does not avoid the complication of gimbal lock, and so a compromise between the number of gimbals and the angular freedom is needed.
The center axle of the flywheel acts as a single gimbal, and if aligned vertically, allows for 360 degrees of yaw in a horizontal plane. However, driving uphill, for instance, requires a second pitch gimbal, and driving on the side of a sloped embankment requires a third roll gimbal.
==== Full-motion gimbals ====
Although the flywheel itself may be of a flat ring shape, a free-movement gimbal mounting inside a vehicle requires a spherical volume for the flywheel to freely rotate within. Left to its own, a spinning flywheel in a vehicle would slowly precess following the Earth's rotation, and precess further yet in vehicles that travel long distances over the Earth's curved spherical surface.
A full-motion gimbal has additional problems of how to communicate power into and out of the flywheel, since the flywheel could potentially flip completely over once a day, precessing as the Earth rotates. Full free rotation would require slip rings around each gimbal axis for power conductors, further adding to the design complexity.
==== Limited-motion gimbals ====
To reduce space usage, the gimbal system may be of a limited-movement design, using shock absorbers to cushion sudden rapid motions within a certain number of degrees of out-of-plane angular rotation, and then gradually forcing the flywheel to adopt the vehicle's current orientation. This reduces the gimbal movement space around a ring-shaped flywheel from a full sphere, to a short thickened cylinder, encompassing for example ± 30 degrees of pitch and ± 30 degrees of roll in all directions around the flywheel.
==== Counterbalancing of angular momentum ====
An alternative solution to the problem is to have two joined flywheels spinning synchronously in opposite directions. Together they have a total angular momentum of zero and no net gyroscopic effect. A problem with this solution is that whenever the angular momenta of the two flywheels differ, their housing experiences a torque. Both wheels must be maintained at the same speed to keep the net angular momentum at zero. Strictly speaking, the two flywheels would exert a huge torquing moment at the central point, trying to bend the axle. However, if the axle were sufficiently strong, no gyroscopic forces would have a net effect on the sealed container, so no torque would be noticed.
To further balance the forces and spread out strain, a single large flywheel can be balanced by two half-size flywheels on each side, or the flywheels can be reduced in size to be a series of alternating layers spinning in opposite directions. However this increases housing and bearing complexity.
== Applications ==
=== Transportation ===
==== Automotive ====
In the 1950s, flywheel-powered buses, known as gyrobuses, were used in Yverdon (Switzerland) and Ghent (Belgium), and there is ongoing research to make flywheel systems that are smaller, lighter, cheaper and of greater capacity. It is hoped that flywheel systems can replace conventional chemical batteries for mobile applications, such as electric vehicles. Proposed flywheel systems would eliminate many of the disadvantages of existing battery power systems, such as low capacity, long charge times, heavy weight and short usable lifetimes. Flywheels may have been used in the experimental Chrysler Patriot, though that has been disputed.
Flywheels have also been proposed for use in continuously variable transmissions. Punch Powertrain is currently working on such a device.
During the 1990s, Rosen Motors developed a gas turbine powered series hybrid automotive powertrain using a 55,000 rpm flywheel to provide bursts of acceleration which the small gas turbine engine could not provide. The flywheel also stored energy through regenerative braking. The flywheel was composed of a titanium hub with a carbon fiber cylinder and was gimbal-mounted to minimize adverse gyroscopic effects on vehicle handling. The prototype vehicle was successfully road tested in 1997 but was never mass-produced.
In 2013, Volvo announced a flywheel system fitted to the rear axle of its S60 sedan. Braking action spins the flywheel at up to 60,000 rpm and stops the front-mounted engine. Flywheel energy is applied via a special transmission to partially or completely power the vehicle. The 20-centimetre (7.9 in), 6-kilogram (13 lb) carbon fiber flywheel spins in a vacuum to eliminate friction. When partnered with a four-cylinder engine, it offers up to a 25 percent reduction in fuel consumption versus a comparably performing turbo six-cylinder, providing an 80 horsepower (60 kW) boost and allowing it to reach 100 kilometres per hour (62 mph) in 5.5 seconds. The company did not announce specific plans to include the technology in its product line.
In July 2014, GKN acquired the Williams Hybrid Power (WHP) division and intended to supply 500 carbon fiber Gyrodrive electric flywheel systems to urban bus operators over the following two years. As the former developer's name implies, these were originally designed for Formula One motor racing applications. In September 2014, Oxford Bus Company announced that it was introducing 14 Gyrodrive hybrid buses by Alexander Dennis on its Brookes Bus operation.
==== Rail vehicles ====
Flywheel systems have been used experimentally in small electric locomotives for shunting or switching, e.g. the Sentinel-Oerlikon Gyro Locomotive. Larger electric locomotives, e.g. British Rail Class 70, have sometimes been fitted with flywheel boosters to carry them over gaps in the third rail. Advanced flywheels, such as the 133 kWh pack of the University of Texas at Austin, can take a train from a standing start up to cruising speed.
The Parry People Mover is a railcar which is powered by a flywheel. It was trialled on Sundays for 12 months on the Stourbridge Town Branch Line in the West Midlands, England, during 2006 and 2007, and was intended to enter full service with the train operator London Midland in December 2008 once two units had been ordered. As of January 2010, both units were in operation.
==== Rail electrification ====
FES can be used at the lineside of electrified railways to help regulate the line voltage thus improving the acceleration of unmodified electric trains and the amount of energy recovered back to the line during regenerative braking, thus lowering energy bills. Trials have taken place in London, New York, Lyon and Tokyo, and New York MTA's Long Island Rail Road is now investing $5.2m in a pilot project on LIRR's West Hempstead Branch line.
These trials and systems store kinetic energy in rotors consisting of a carbon-glass composite cylinder packed with neodymium-iron-boron powder that forms a permanent magnet. These spin at up to 37,800 rpm, and each 100 kW (130 hp) unit can store 11 megajoules (3.1 kWh) of re-usable energy, approximately enough to accelerate a weight of 200 metric tons (220 short tons; 197 long tons) from zero to 38 km/h (24 mph).
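The quoted 11 MJ figure is consistent with the translational kinetic energy $\tfrac{1}{2}mv^{2}$ of the stated load; a quick check:

```python
def kinetic_energy_mj(mass_tonnes, speed_kmh):
    """Translational kinetic energy (1/2) m v^2, returned in megajoules."""
    m = mass_tonnes * 1000.0          # metric tons -> kg
    v = speed_kmh / 3.6               # km/h -> m/s
    return 0.5 * m * v ** 2 / 1e6

# 200 t accelerated from rest to 38 km/h:
E = kinetic_energy_mj(200.0, 38.0)    # ~11.1 MJ, matching the quoted figure
```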
=== Uninterruptible power supplies ===
Flywheel power storage systems in production as of 2001 had storage capacities comparable to batteries and faster discharge rates. They are mainly used to provide load leveling for large battery systems, such as an uninterruptible power supply for data centers as they save a considerable amount of space compared to battery systems.
Flywheel maintenance in general runs about one-half the cost of traditional battery UPS systems. The only maintenance is a basic annual preventive maintenance routine and replacing the bearings every five to ten years, which takes about four hours. Newer flywheel systems completely levitate the spinning mass using maintenance-free magnetic bearings, thus reducing bearing maintenance and failures.
Costs of a fully installed flywheel UPS (including power conditioning) were (in 2009) about $330 per kilowatt (for 15 seconds full-load capacity).
=== Test laboratories ===
A long-standing niche market for flywheel power systems are facilities where circuit breakers and similar devices are tested: even a small household circuit breaker may be rated to interrupt a current of 10,000 or more amperes, and larger units may have interrupting ratings of 100,000 or 1,000,000 amperes. The enormous transient loads produced by deliberately forcing such devices to demonstrate their ability to interrupt simulated short circuits would have unacceptable effects on the local grid if these tests were done directly from building power. Typically such a laboratory will have several large motor–generator sets, which can be spun up to speed over several minutes; then the motor is disconnected before a circuit breaker is tested.
=== Physics laboratories ===
Tokamak fusion experiments need very high currents for brief intervals (mainly to power large electromagnets for a few seconds).
JET (the Joint European Torus) has two 9 meter (29.53 feet) diameter, 775 t (854 short tons; 763 long tons) flywheels (installed in 1981) that spin up to 225 rpm. Each flywheel stores 3.75 GJ and can deliver up to 400 MW (540,000 hp).
The Helically Symmetric Experiment at the University of Wisconsin-Madison has 18 one-ton flywheels, which are spun to 10,000 rpm using repurposed electric train motors.
ASDEX Upgrade has 3 flywheel generators.
DIII-D (tokamak) at General Atomics
the Princeton Large Torus (PLT) at the Princeton Plasma Physics Laboratory
Also the non-tokamak: Nimrod synchrotron at the Rutherford Appleton Laboratory had two 30 ton flywheels.
=== Aircraft launching systems ===
The Gerald R. Ford-class aircraft carrier will use flywheels to accumulate energy from the ship's power supply, for rapid release into the electromagnetic aircraft launch system. The shipboard power system cannot on its own supply the high power transients necessary to launch aircraft. Each of four rotors will store 121 MJ (34 kWh) at 6400 rpm. They can store 122 MJ (34 kWh) in 45 seconds and release it in 2–3 seconds. The flywheel energy densities are 28 kJ/kg (8 W·h/kg); including the stators and cases this comes down to 18.1 kJ/kg (5 W·h/kg), excluding the torque frame.
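The quoted figures imply an average discharge power of roughly 50 MW per rotor and a rotor mass of a few tonnes. A back-of-the-envelope check (the 2.5 s release time is a mid-range assumption within the quoted 2–3 seconds):

```python
def average_power_mw(energy_mj, release_time_s):
    """Average power during discharge: MJ released over seconds gives MW."""
    return energy_mj / release_time_s

def implied_rotor_mass_kg(energy_mj, specific_energy_kj_per_kg):
    """Rotor mass implied by stored energy and rotor-level specific energy."""
    return energy_mj * 1000.0 / specific_energy_kj_per_kg

power = average_power_mw(121.0, 2.5)         # ~48 MW per rotor during a launch
mass  = implied_rotor_mass_kg(121.0, 28.0)   # ~4.3 t of rotor per flywheel
```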
=== NASA G2 flywheel for spacecraft energy storage ===
This was a design funded by NASA's Glenn Research Center and intended for component testing in a laboratory environment. It used a carbon fiber rim with a titanium hub designed to spin at 60,000 rpm, mounted on magnetic bearings. Weight was limited to 250 pounds (110 kilograms). Storage was 525 Wh (1.89 MJ) and could be charged or discharged at 1 kW (1.3 hp), leading to a specific energy of 5.31 W⋅h/kg and power density of 10.11 W/kg. The working model shown in the photograph at the top of the page ran at 41,000 rpm on September 2, 2004.
=== Amusement rides ===
The Montezooma's Revenge roller coaster at Knott's Berry Farm was the first flywheel-launched roller coaster in the world and is the last ride of its kind still operating in the United States. The ride uses a 7.6-tonne flywheel to accelerate the train to 55 miles per hour (89 km/h) in 4.5 seconds.
The Incredible Hulk roller coaster at Universal's Islands of Adventure features a rapidly accelerating uphill launch as opposed to the typical gravity drop. This is achieved through powerful traction motors that throw the car up the track. To achieve the brief very high current required to accelerate a full coaster train to full speed uphill, the park utilizes several motor-generator sets with large flywheels. Without these stored energy units, the park would have to invest in a new substation or risk browning-out the local energy grid every time the ride launches.
=== Pulse power ===
The compensated pulsed alternator (compulsator) is one of the most popular choices of pulsed power supplies for fusion reactors, high-power pulsed lasers, and hypervelocity electromagnetic launchers because of its high energy density and power density.
Instead of having a separate flywheel and generator, only the large rotor of the low-inductance alternator stores energy. (See also Homopolar generator.)
=== Motor sports ===
Using a continuously variable transmission (CVT), energy is recovered from the drive train during braking and stored in a flywheel. This stored energy is then used during acceleration by altering the ratio of the CVT. In motor sports applications this energy is used to improve acceleration rather than reduce carbon dioxide emissions – although the same technology can be applied to road cars to improve fuel efficiency.
Automobile Club de l'Ouest, the organizer behind the annual 24 Hours of Le Mans event and the Le Mans Series, is currently "studying specific rules for LMP1 which will be equipped with a kinetic energy recovery system."
Williams Hybrid Power, a subsidiary of the Williams F1 racing team, has supplied Porsche and Audi with flywheel-based hybrid systems for Porsche's 911 GT3 R Hybrid and Audi's R18 e-tron quattro. Audi's victory in the 2012 24 Hours of Le Mans was the first for a hybrid (diesel-electric) vehicle.
=== Grid energy storage ===
Flywheels are sometimes used as short-term spinning reserve for momentary grid frequency regulation and for balancing sudden changes between supply and consumption. No carbon emissions, faster response times and the ability to buy power at off-peak hours are among the advantages of using flywheels instead of traditional sources of energy such as natural gas turbines. Operation is very similar to batteries in the same application; the differences are primarily economic.
Beacon Power opened a 5 MWh (20 MW over 15 mins) flywheel energy storage plant in Stephentown, New York in 2011 using 200 flywheels and a similar 20 MW system at Hazle Township, Pennsylvania in 2014.
A 0.5 MWh (2 MW for 15 min) flywheel storage facility in Minto, Ontario, Canada opened in 2014. The flywheel system (developed by NRStor) uses 10 spinning steel flywheels on magnetic bearings.
Amber Kinetics, Inc. has an agreement with Pacific Gas and Electric (PG&E) for a 20 MW / 80 MWh flywheel energy storage facility located in Fresno, CA with a four-hour discharge duration.
A 30 MW flywheel grid system started operating in China in 2024.
=== Wind turbines ===
Flywheels may be used to store energy generated by wind turbines during off-peak periods or during high wind speeds.
In 2010, Beacon Power began testing of their Smart Energy 25 (Gen 4) flywheel energy storage system at a wind farm in Tehachapi, California. The system was part of a wind power and flywheel demonstration project being carried out for the California Energy Commission.
=== Toys ===
Friction motors used to power many toy cars, trucks, trains, action toys and such, are simple flywheel motors.
=== Toggle action presses ===
In industry, toggle action presses are still popular. The usual arrangement involves a very strong crankshaft and a heavy duty connecting rod which drives the press. Large and heavy flywheels are driven by electric motors but the flywheels turn the crankshaft only when clutches are activated.
== Comparison to electric batteries ==
Flywheels are not as adversely affected by temperature changes, can operate at a much wider temperature range, and are not subject to many of the common failures of chemical rechargeable batteries. They are also less potentially damaging to the environment, being largely made of inert or benign materials. Another advantage of flywheels is that by a simple measurement of the rotation speed it is possible to know the exact amount of energy stored.
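As a sketch of that last point, the stored energy follows directly from the rotor's moment of inertia and its measured angular speed (E = ½Iω²). The figures below are illustrative, not data for any real product, and a uniform solid-cylinder rotor is assumed:

```python
import math

def flywheel_energy(mass_kg, radius_m, rpm):
    """Kinetic energy stored in a spinning rotor: E = (1/2) * I * omega^2.

    Assumes a uniform solid cylinder, for which I = (1/2) * m * r^2.
    """
    inertia = 0.5 * mass_kg * radius_m ** 2      # kg*m^2
    omega = rpm * 2.0 * math.pi / 60.0           # rad/s
    return 0.5 * inertia * omega ** 2            # joules

# Hypothetical 100 kg, 0.5 m radius rotor at 20,000 rpm:
energy_j = flywheel_energy(100.0, 0.5, 20000)
print(f"{energy_j / 3.6e6:.1f} kWh")             # prints "7.6 kWh"
```

Because the angular speed enters squared, a rotation-speed reading converts directly into the energy content without any chemistry-dependent state estimation.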
Unlike most batteries which operate only for a finite period (for example roughly 10 years in the case of lithium iron phosphate batteries), a flywheel potentially has an indefinite working lifespan. Flywheels built as part of James Watt steam engines have been continuously working for more than two hundred years. Working examples of ancient flywheels used mainly in milling and pottery can be found in many locations in Africa, Asia, and Europe.
Most modern flywheels are typically sealed devices that need minimal maintenance throughout their service lives. Magnetic bearing flywheels in vacuum enclosures, such as the NASA model depicted above, do not need any bearing maintenance and are therefore superior to batteries both in terms of total lifetime and energy storage capacity, since their effective service lifespan is still unknown. Flywheel systems with mechanical bearings will have limited lifespans due to wear.
High performance flywheels can explode, injuring bystanders with high-speed fragments. Flywheels can be installed below-ground to reduce this risk. While batteries can catch fire and release toxins, there is generally time for bystanders to flee and escape injury.
The physical arrangement of batteries can be designed to match a wide variety of configurations, whereas a flywheel at a minimum must occupy a certain area and volume, because the energy it stores is proportional to its rotational inertia and to the square of its rotational speed. As a flywheel gets smaller, its mass also decreases, so the speed must increase, and so the stress on the materials increases. Where dimensions are a constraint, (e.g. under the chassis of a train), a flywheel may not be a viable solution.
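A rough numerical sketch of that size–speed trade-off, under stated assumptions: an illustrative steel solid-cylinder rotor, with material stress approximated as ρ·v_rim² (this captures only the scaling, not an exact stress analysis). Shrinking every dimension by half while keeping the stored energy fixed multiplies the required stress roughly eightfold:

```python
import math

def required_omega(energy_j, mass_kg, radius_m):
    """Angular speed a solid cylinder (I = m r^2 / 2) needs to hold energy E."""
    inertia = 0.5 * mass_kg * radius_m ** 2
    return math.sqrt(2.0 * energy_j / inertia)

E_TARGET = 1.0e7    # fixed 10 MJ of stored energy
RHO = 7800.0        # steel density, kg/m^3 (illustrative)

for scale in (1.0, 0.5):                   # shrink every linear dimension by half
    r = 0.5 * scale
    h = 0.3 * scale
    mass = RHO * math.pi * r ** 2 * h      # mass falls with the cube of the scale
    omega = required_omega(E_TARGET, mass, r)
    hoop = RHO * (omega * r) ** 2          # stress ~ rho * v_rim^2
    print(f"scale {scale}: omega {omega:.0f} rad/s, stress {hoop / 1e6:.0f} MPa")
```

Mass falls as the cube of the scale and inertia as the fifth power, so the stress grows as the inverse cube: at half scale it is eight times higher, which is why compact flywheels quickly run into material limits.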
== See also ==
Beacon Power
Compensated pulsed alternator – Form of power supply
Electric double-layer capacitor – High-capacity electrochemical capacitor
Energy storage – Captured energy for later usage
Grid energy storage – Large scale electricity supply management
Inverter – Device that changes direct current (DC) to alternating current (AC)
Launch loop – Proposed system for launching objects into orbit
List of energy storage projects
List of energy topics – Overview of and topical guide to energy
Plug-in hybrid – Hybrid vehicle whose battery may be externally charged
Rechargeable battery – Type of electrical battery
Regenerative brake – Energy recovery mechanism
Rotational energy – Kinetic energy of rotating body with moment of inertia and angular velocity
STATCOM – Regulating device used on transmission networks
United States Department of Energy International Energy Storage Database
== References ==
== Further reading ==
Beacon Power Applies for DOE Grants to Fund up to 50% of Two 20 MW Energy Storage Plants, Sep. 1, 2009
Sheahen, Thomas P. (1994). Introduction to High-Temperature Superconductivity. New York: Plenum Press. pp. 76–78, 425–431. ISBN 978-0-306-44793-8.
El-Wakil, M. M. (1984). Powerplant Technology. McGraw-Hill. pp. 685–689. ISBN 9780070192881.
Koshizuka, N.; Ishikawa, F.; Nasu, H.; Murakami, M.; et al. (2003). "Progress of superconducting bearing technologies for flywheel energy storage systems". Physica C. 386 (386): 444–450. Bibcode:2003PhyC..386..444K. doi:10.1016/S0921-4534(02)02206-2.
Wolsky, A. M. (2002). "The status and prospects for flywheels and SMES that incorporate HTS". Physica C. 372 (372–376): 1495–1499. Bibcode:2002PhyC..372.1495W. doi:10.1016/S0921-4534(02)01057-2.
Sung, T. H.; Han, S. C.; Han, Y. H.; Lee, J. S.; et al. (2002). "Designs and analyses of flywheel energy storage systems using high-Tc superconductor bearings". Cryogenics. 42 (6–7): 357–362. Bibcode:2002Cryo...42..357S. doi:10.1016/S0011-2275(02)00057-7.
Akhil, Abbas; Swaminathan, Shiva; Sen, Rajat K. (February 2007). "Cost Analysis of Energy Storage Systems for Electric Utility Applications" (PDF). Sandia National laboratories. Archived from the original (PDF) on 2007-06-21.
Larbalestier, David; Blaugher, Richard D.; Schwall, Robert E.; Sokolowski, Robert S.; et al. (September 1997). "Flywheels". Power Applications of Superconductivity in Japan and Germany. World Technology Evaluation Center.
"A New Look at an Old Idea: The Electromechanical Battery" (PDF). Science & Technology Review: 12–19. April 1996. Archived from the original (PDF) on 2008-04-05. Retrieved 2006-07-21.
Janse van Rensburg, P.J. (December 2011). Energy storage in composite flywheel rotors (Thesis). University of Stellenbosch, South Africa. hdl:10019.1/17864.
Devitt, Drew (March 2010). "Making a case for flywheel energy storage". Renewable Energy World Magazine North America.
Li, X.; Palazzolo, A. (2022). "A review of flywheel energy storage systems: State of the art and opportunities". Journal of Energy Storage. 46: 103576. doi:10.1016/j.est.2021.103576.
== External links ==
Federal Technology Alert, Flywheel Energy Storage
Magnetal Whitepaper for its Green Energy Storage System – GESS
Magnetal analysis on gyro forces induced by flywheel energy storage
A kinetic energy penetrator (KEP), also known as long-rod penetrator (LRP), is a type of ammunition designed to penetrate vehicle armour using a flechette-like, high-sectional density projectile. Like a bullet or kinetic energy weapon, this type of ammunition does not contain explosive payloads and uses purely kinetic energy to penetrate the target. Modern KEP munitions are typically of the armour-piercing fin-stabilized discarding sabot (APFSDS) type.
== History ==
Early cannons fired kinetic energy ammunition, initially consisting of heavy balls of worked stone and later of dense metals. From the beginning, combining high muzzle energy with projectile weight and hardness have been the foremost factors in the design of such weapons. Similarly, the foremost purpose of such weapons has generally been to defeat protective shells of armored vehicles or other defensive structures, whether it is stone walls, sailship timbers, or modern tank armour. Kinetic energy ammunition, in its various forms, has consistently been the choice for those weapons due to the highly focused terminal ballistics.
The development of the modern KE penetrator combines two aspects of artillery design, high muzzle velocity and concentrated force. High muzzle velocity is achieved by using a projectile with a low mass and large base area in the gun barrel. Firing a small-diameter projectile wrapped in a lightweight outer shell, called a sabot, raises the muzzle velocity. Once the shell clears the barrel, the sabot is no longer needed and falls off in pieces. This leaves the projectile traveling at high velocity with a smaller cross-sectional area and reduced aerodynamic drag during the flight to the target (see external ballistics and terminal ballistics). Germany developed modern sabots under the name "treibspiegel" ("thrust mirror") to give extra altitude to its anti-aircraft guns during the Second World War. Before this, primitive wooden sabots had been used for centuries in the form of a wooden plug attached to or breech loaded before cannonballs in the barrel, placed between the propellant charge and the projectile. The name "sabot" (pronounced SAB-oh in English usage) is the French word for clog (a wooden shoe traditionally worn in some European countries).
Concentration of force into a smaller area was initially attained by replacing the single metal (usually steel) shot with a composite shot using two metals, a heavy core (based on tungsten) inside a lighter metal outer shell. These designs were known as armour-piercing composite rigid (APCR) by the British, high-velocity armor-piercing (HVAP) by the US, and hartkern (hard core) by the Germans. On impact, the core had a much more concentrated effect than plain metal shot of the same weight and size. The air resistance and other effects were the same as for the shell of identical size. High-velocity armor-piercing (HVAP) rounds were primarily used by tank destroyers in the US Army and were relatively uncommon as the tungsten core was expensive and prioritized for other applications.
Between 1941 and 1943, the British combined the two techniques in the armour-piercing discarding sabot (APDS) round. The sabot replaced the outer metal shell of the APCR. While in the gun, the shot had a large base area to get maximum acceleration from the propelling charge but once outside, the sabot fell away to reveal a heavy shot with a small cross-sectional area. APDS rounds served as the primary kinetic energy weapon of most tanks during the early-Cold War period, though they suffered the primary drawback of inaccuracy. This was resolved with the introduction of the armour-piercing fin-stabilized discarding sabot (APFSDS) round during the 1970s, which added stabilising fins to the penetrator, greatly increasing accuracy.
== Design ==
The principle of the kinetic energy penetrator is that it uses its kinetic energy, which is a function of its mass and velocity, to force its way through armor. If the armor is defeated, the heat and spalling (particle spray) generated by the penetrator going through the armor, and the pressure wave that develops, ideally destroys the target.
The modern kinetic energy weapon maximizes the stress (kinetic energy divided by impact area) delivered to the target by:
maximizing the mass – that is, using the densest metals practical, which is one of the reasons depleted uranium or tungsten carbide is often used – and muzzle velocity of the projectile, as kinetic energy scales with the mass m and the square of the velocity v of the projectile {\displaystyle (mv^{2}/2).}
minimizing the width, since if the projectile does not tumble, it will hit the target face first. As most modern projectiles have circular cross-sectional areas, their impact area will scale with the square of the radius r (the impact area being {\displaystyle \pi r^{2}}). For the same reason, "self-sharpening" through the generation of adiabatic shear bands is also a desired feature for the projectile material.
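As a numerical illustration of the stress term (kinetic energy divided by impact area), consider delivering the same projectile energy through a narrower rod. The masses, velocities, and diameters below are invented for illustration and are not real ammunition data:

```python
import math

def impact_stress(mass_kg, velocity_ms, radius_m):
    """Kinetic energy divided by the circular impact area (Pa)."""
    kinetic_energy = 0.5 * mass_kg * velocity_ms ** 2
    return kinetic_energy / (math.pi * radius_m ** 2)

# Same mass and velocity, delivered by a 60 mm full-calibre shot
# versus a 12 mm long-rod penetrator (illustrative figures):
full_calibre = impact_stress(7.0, 1500.0, 0.030)
long_rod = impact_stress(7.0, 1500.0, 0.006)
print(long_rod / full_calibre)   # area scales with r^2, so stress rises 25-fold
```

Narrowing the impactor by a factor of five concentrates the same energy on one twenty-fifth of the area, which is the essence of the long-rod design.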
The penetrator length plays a large role in determining the ultimate depth of penetration. Generally, a penetrator is incapable of penetrating deeper than its own length, as the shear stress of impact and perforation ablates it. This has led to the current designs, which resemble a long metal arrow.
For monobloc penetrators made of a single material, a perforation formula devised by Wili Odermatt and W. Lanz can calculate the penetration depth of an APFSDS round.
In 1982, an analytical investigation drawing on concepts of gas dynamics and on target-penetration experiments concluded that impactors with unconventional three-dimensional shapes penetrate more deeply.
== See also ==
== Notes ==
== References ==
In physics, the energy–momentum relation, or relativistic dispersion relation, is the relativistic equation relating total energy (which is also called relativistic energy) to invariant mass (which is also called rest mass) and momentum. It is the extension of mass–energy equivalence for bodies or systems with non-zero momentum.
It can be formulated as:
{\displaystyle E^{2}=\left(pc\right)^{2}+\left(m_{0}c^{2}\right)^{2}} (1)
This equation holds for a body or system, such as one or more particles, with total energy E, invariant mass m0, and momentum of magnitude p; the constant c is the speed of light. It assumes the special relativity case of flat spacetime and that the particles are free. Total energy is the sum of rest energy
{\displaystyle E_{0}=m_{0}c^{2}}
and relativistic kinetic energy:
{\displaystyle E_{K}=E-E_{0}={\sqrt {(pc)^{2}+\left(m_{0}c^{2}\right)^{2}}}-m_{0}c^{2}}
Invariant mass is mass measured in a center-of-momentum frame.
For bodies or systems with zero momentum, it simplifies to the mass–energy equation
{\displaystyle E_{0}=m_{0}c^{2}}, where total energy in this case is equal to rest energy.
The Dirac sea model, which was used to predict the existence of antimatter, is closely related to the energy–momentum relation.
== Connection to E = mc2 ==
The energy–momentum relation is consistent with the familiar mass–energy relation in both its interpretations: E = mc2 relates total energy E to the (total) relativistic mass m (alternatively denoted mrel or mtot), while E0 = m0c2 relates rest energy E0 to (invariant) rest mass m0.
Unlike either of those equations, the energy–momentum equation (1) relates the total energy to the rest mass m0. All three equations hold true simultaneously.
== Special cases ==
If the body is a massless particle (m0 = 0), then (1) reduces to E = pc. For photons, this is the relation, discovered in 19th century classical electromagnetism, between radiant momentum (causing radiation pressure) and radiant energy.
If the body's speed v is much less than c, then (1) reduces to E = 1/2m0v2 + m0c2; that is, the body's total energy is simply its classical kinetic energy (1/2m0v2) plus its rest energy.
If the body is at rest (v = 0), i.e. in its center-of-momentum frame (p = 0), we have E = E0 and m = m0; thus the energy–momentum relation and both forms of the mass–energy relation (mentioned above) all become the same.
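These special cases can be checked numerically from the general relation. A minimal sketch in SI units; the momentum value and the use of the electron mass are arbitrary illustrative choices:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def total_energy(m0_kg, p_kg_ms):
    """E = sqrt((p c)^2 + (m0 c^2)^2), the energy-momentum relation."""
    return math.hypot(p_kg_ms * C, m0_kg * C ** 2)

# Massless particle: the relation collapses to E = p c
p = 1.0e-27
assert math.isclose(total_energy(0.0, p), p * C)

# Body at rest: the relation collapses to E = m0 c^2
m0 = 9.109e-31  # electron rest mass, kg
assert math.isclose(total_energy(m0, 0.0), m0 * C ** 2)
```

`math.hypot` computes the square root of the sum of squares, which is exactly the form of relation (1).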
A more general form of relation (1) holds for general relativity.
The invariant mass (or rest mass) is an invariant for all frames of reference (hence the name), not just in inertial frames in flat spacetime, but also in accelerated frames traveling through curved spacetime (see below). However, the total energy of the particle E and its relativistic momentum p are frame-dependent; relative motion between two frames causes the observers in those frames to measure different values of the particle's energy and momentum. One frame measures E and p, while the other frame measures E′ and p′, where E′ ≠ E and p′ ≠ p, unless there is no relative motion between the observers, in which case each observer measures the same energy and momenta. Although we still have, in flat spacetime:
{\displaystyle {E'}^{2}-\left(p'c\right)^{2}=\left(m_{0}c^{2}\right)^{2}\,.}
The quantities E, p, E′, p′ are all related by a Lorentz transformation. The relation allows one to sidestep Lorentz transformations when determining only the magnitudes of the energy and momenta by equating the relations in the different frames. Again in flat spacetime, this translates to;
{\displaystyle {E}^{2}-\left(pc\right)^{2}={E'}^{2}-\left(p'c\right)^{2}=\left(m_{0}c^{2}\right)^{2}\,.}
Since m0 does not change from frame to frame, the energy–momentum relation is used in relativistic mechanics and particle physics calculations, as energy and momentum are given in a particle's rest frame (that is, E′ and p′, as an observer moving with the particle would measure them) and measured in the lab frame (i.e. E and p as determined by particle physicists in a lab, and not moving with the particles).
In relativistic quantum mechanics, it is the basis for constructing relativistic wave equations, since if the relativistic wave equation describing the particle is consistent with this equation – it is consistent with relativistic mechanics, and is Lorentz invariant. In relativistic quantum field theory, it is applicable to all particles and fields.
== Origins and derivation of the equation ==
The energy–momentum relation goes back to Max Planck's article published in 1906.
It was used by Walter Gordon in 1926 and then by Paul Dirac in 1928 under the form
{\textstyle E={\sqrt {c^{2}p^{2}+(m_{0}c^{2})^{2}}}+V}, where V is the amount of potential energy.
The equation can be derived in a number of ways, two of the simplest include:
From the relativistic dynamics of a massive particle,
By evaluating the norm of the four-momentum of the system. This method applies to both massive and massless particles, and can be extended to multi-particle systems with relatively little effort (see § Many-particle systems below).
=== Heuristic approach for massive particles ===
For a massive object moving at three-velocity u = (ux, uy, uz) with magnitude |u| = u in the lab frame:
{\displaystyle E=\gamma _{(\mathbf {u} )}m_{0}c^{2}}
is the total energy of the moving object in the lab frame,
{\displaystyle \mathbf {p} =\gamma _{(\mathbf {u} )}m_{0}\mathbf {u} }
is the three dimensional relativistic momentum of the object in the lab frame with magnitude |p| = p. The relativistic energy E and momentum p include the Lorentz factor defined by:
{\displaystyle \gamma _{(\mathbf {u} )}={\frac {1}{\sqrt {1-{\frac {\mathbf {u} \cdot \mathbf {u} }{c^{2}}}}}}={\frac {1}{\sqrt {1-\left({\frac {u}{c}}\right)^{2}}}}}
Some authors use relativistic mass defined by:
{\displaystyle m=\gamma _{(\mathbf {u} )}m_{0}}
although rest mass m0 has a more fundamental significance, and will be used primarily over relativistic mass m in this article.
Squaring the 3-momentum gives:
{\displaystyle p^{2}=\mathbf {p} \cdot \mathbf {p} ={\frac {m_{0}^{2}\mathbf {u} \cdot \mathbf {u} }{1-{\frac {\mathbf {u} \cdot \mathbf {u} }{c^{2}}}}}={\frac {m_{0}^{2}u^{2}}{1-\left({\frac {u}{c}}\right)^{2}}}}
then solving for u2 and substituting into the Lorentz factor one obtains its alternative form in terms of 3-momentum and mass, rather than 3-velocity:
{\displaystyle \gamma ={\sqrt {1+\left({\frac {p}{m_{0}c}}\right)^{2}}}}
Inserting this form of the Lorentz factor into the energy equation gives:
{\displaystyle E=m_{0}c^{2}{\sqrt {1+\left({\frac {p}{m_{0}c}}\right)^{2}}}}
Further rearrangement yields (1). Eliminating the Lorentz factor also eliminates the implicit velocity dependence of the particle in (1), as well as any inferences to the "relativistic mass" of a massive particle. This approach is not general, as massless particles are not considered: naively setting m0 = 0 would mean that E = 0 and p = 0, and no energy–momentum relation could be derived, which is not correct.
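The equivalence of the two forms of the Lorentz factor used above (one in terms of velocity, one in terms of momentum) is easy to verify numerically; a sketch for a particle at 0.8c, using the proton rest mass as an illustrative value:

```python
import math

C = 299_792_458.0     # speed of light, m/s
m0 = 1.6726e-27       # proton rest mass, kg (illustrative choice)

u = 0.8 * C                                     # lab-frame speed
gamma_u = 1.0 / math.sqrt(1.0 - (u / C) ** 2)   # Lorentz factor from velocity
p = gamma_u * m0 * u                            # relativistic momentum

# Alternative form used in the derivation: gamma from momentum and mass
gamma_p = math.sqrt(1.0 + (p / (m0 * C)) ** 2)
assert math.isclose(gamma_u, gamma_p)

# The resulting total energy satisfies relation (1)
E = gamma_u * m0 * C ** 2
assert math.isclose(E ** 2, (p * C) ** 2 + (m0 * C ** 2) ** 2)
```

At u = 0.8c both routes give γ = 5/3, and the squared-energy identity holds to floating-point precision.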
=== Norm of the four-momentum ===
==== Special relativity ====
In Minkowski space, energy (divided by c) and momentum are two components of a Minkowski four-vector, namely the four-momentum;
{\displaystyle \mathbf {P} =\left({\frac {E}{c}},\mathbf {p} \right)\,,}
(these are the contravariant components).
The Minkowski inner product ⟨ , ⟩ of this vector with itself gives the square of the norm of this vector; it is proportional to the square of the rest mass m0 of the body:
{\displaystyle \left\langle \mathbf {P} ,\mathbf {P} \right\rangle =|\mathbf {P} |^{2}=\left(m_{0}c\right)^{2}\,,}
a Lorentz invariant quantity, and therefore independent of the frame of reference. Using the Minkowski metric η with metric signature (− + + +), the inner product is
{\displaystyle \left\langle \mathbf {P} ,\mathbf {P} \right\rangle =|\mathbf {P} |^{2}=-\left(m_{0}c\right)^{2}\,,}
and
{\displaystyle \left\langle \mathbf {P} ,\mathbf {P} \right\rangle =P^{\alpha }\eta _{\alpha \beta }P^{\beta }={\begin{pmatrix}{\frac {E}{c}}&p_{x}&p_{y}&p_{z}\end{pmatrix}}{\begin{pmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\\\end{pmatrix}}{\begin{pmatrix}{\frac {E}{c}}\\p_{x}\\p_{y}\\p_{z}\end{pmatrix}}=-\left({\frac {E}{c}}\right)^{2}+p^{2}\,,}
so
{\displaystyle -\left(m_{0}c\right)^{2}=-\left({\frac {E}{c}}\right)^{2}+p^{2}}
or, in natural units where c = 1,
{\displaystyle |\mathbf {P} |^{2}+(m_{0})^{2}=0.}
==== General relativity ====
In general relativity, the 4-momentum is a four-vector defined in a local coordinate frame, although by definition the inner product is similar to that of special relativity,
{\displaystyle \left\langle \mathbf {P} ,\mathbf {P} \right\rangle =|\mathbf {P} |^{2}=\left(m_{0}c\right)^{2}\,,}
in which the Minkowski metric η is replaced by the metric tensor field g:
{\displaystyle \left\langle \mathbf {P} ,\mathbf {P} \right\rangle =|\mathbf {P} |^{2}=P^{\alpha }g_{\alpha \beta }P^{\beta }\,,}
solved from the Einstein field equations. Then:
{\displaystyle P^{\alpha }g_{\alpha \beta }P^{\beta }=\left(m_{0}c\right)^{2}\,.}
== Units of energy, mass and momentum ==
In natural units where c = 1, the energy–momentum equation reduces to
{\displaystyle E^{2}=p^{2}+m_{0}^{2}\,.}
In particle physics, energy is typically given in units of electron volts (eV), momentum in units of eV·c−1, and mass in units of eV·c−2. In electromagnetism, and because of relativistic invariance, it is useful to have the electric field E and the magnetic field B in the same unit (Gauss), using the cgs (Gaussian) system of units, where energy is given in units of erg, mass in grams (g), and momentum in g·cm·s−1.
Energy may also in theory be expressed in units of grams, though in practice it requires a large amount of energy to be equivalent to masses in this range. For example, the first atomic bomb liberated energy equivalent to about 1 gram of mass, and the largest thermonuclear bombs have generated a kilogram or more. Energies of thermonuclear bombs are usually given in tens of kilotons and megatons, referring to the energy liberated by exploding that amount of trinitrotoluene (TNT).
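For a rough check of those magnitudes, using the conventional 4.184×10¹² J per kiloton of TNT:

```python
C = 299_792_458.0  # speed of light, m/s

# Rest energy of one gram of mass: E = m c^2
energy_j = 0.001 * C ** 2          # about 9.0e13 J

# Expressed in TNT equivalent (1 kiloton of TNT = 4.184e12 J by convention)
kilotons = energy_j / 4.184e12
print(round(kilotons, 1))          # prints 21.5
```

About 21 kilotons, roughly the yield of a first-generation atomic bomb, consistent with the "1 gram" statement above.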
== Special cases ==
=== Centre-of-momentum frame (one particle) ===
For a body in its rest frame, the momentum is zero, so the equation simplifies to
{\displaystyle E_{0}=m_{0}c^{2}\,,}
where m0 is the rest mass of the body.
=== Massless particles ===
If the object is massless, as is the case for a photon, then the equation reduces to
{\displaystyle E=pc\,.}
This is a useful simplification. It can be rewritten in other ways using the de Broglie relations:
{\displaystyle E={\frac {hc}{\lambda }}=\hbar ck\,,}
if the wavelength λ or wavenumber k is given.
=== Correspondence principle ===
Rewriting the relation for massive particles as:
{\displaystyle E=m_{0}c^{2}{\sqrt {1+\left({\frac {p}{m_{0}c}}\right)^{2}}}\,,}
and expanding into power series by the binomial theorem (or a Taylor series):
{\displaystyle E=m_{0}c^{2}\left[1+{\frac {1}{2}}\left({\frac {p}{m_{0}c}}\right)^{2}-{\frac {1}{8}}\left({\frac {p}{m_{0}c}}\right)^{4}+\cdots \right]\,,}
in the limit that u ≪ c, we have γ(u) ≈ 1 so the momentum has the classical form p ≈ m0u, then to first order in (p/m0c)2 (i.e. retain the term (p/m0c)2n for n = 1 and neglect all terms for n ≥ 2) we have
{\displaystyle E\approx m_{0}c^{2}\left[1+{\frac {1}{2}}\left({\frac {m_{0}u}{m_{0}c}}\right)^{2}\right]\,,}
or
{\displaystyle E\approx m_{0}c^{2}+{\frac {1}{2}}m_{0}u^{2}\,,}
where the second term is the classical kinetic energy, and the first is the rest energy of the particle. This approximation is not valid for massless particles, since the expansion required the division of momentum by mass. Incidentally, there are no massless particles in classical mechanics.
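A quick numerical check of this correspondence, using an illustrative 1 kg body at one-thousandth of c, where the classical formula should agree with the exact relativistic kinetic energy to better than one part in a million:

```python
import math

C = 299_792_458.0  # speed of light, m/s
m0 = 1.0           # 1 kg, for clarity

u = 0.001 * C      # one-thousandth of c; u << c
gamma = 1.0 / math.sqrt(1.0 - (u / C) ** 2)

exact_ke = (gamma - 1.0) * m0 * C ** 2    # relativistic kinetic energy E - E0
classical_ke = 0.5 * m0 * u ** 2          # Newtonian kinetic energy

# The leading correction is (3/4)(u/c)^2 ~ 7.5e-7, so agreement is ~1 ppm
assert abs(exact_ke - classical_ke) / classical_ke < 1e-6
assert exact_ke > classical_ke            # relativistic KE always exceeds classical
```

At everyday speeds the discrepancy shrinks further still, which is why Newtonian mechanics works so well in ordinary life.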
== Many-particle systems ==
=== Addition of four momenta ===
In the case of many particles with relativistic momenta pn and energy En, where n = 1, 2, ... (up to the total number of particles) simply labels the particles, as measured in a particular frame, the four-momenta in this frame can be added;
{\displaystyle \sum _{n}\mathbf {P} _{n}=\sum _{n}\left({\frac {E_{n}}{c}},\mathbf {p} _{n}\right)=\left(\sum _{n}{\frac {E_{n}}{c}},\sum _{n}\mathbf {p} _{n}\right)\,,}
and then take the norm; to obtain the relation for a many particle system:
{\displaystyle \left|\left(\sum _{n}\mathbf {P} _{n}\right)\right|^{2}=\left(\sum _{n}{\frac {E_{n}}{c}}\right)^{2}-\left(\sum _{n}\mathbf {p} _{n}\right)^{2}=\left(M_{0}c\right)^{2}\,,}
where M0 is the invariant mass of the whole system, and is not equal to the sum of the rest masses of the particles unless all particles are at rest (see Mass in special relativity § The mass of composite systems for more detail). Substituting and rearranging gives the generalization of (1);
{\displaystyle \left(\sum _{n}E_{n}\right)^{2}-\left(\sum _{n}\mathbf {p} _{n}c\right)^{2}=\left(M_{0}c^{2}\right)^{2}} (2)
The energies and momenta in the equation are all frame-dependent, while M0 is frame-independent.
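A sketch of the many-particle relation for a two-particle system, using one-dimensional momenta and arbitrary illustrative values; with equal and opposite momenta (a centre-of-momentum frame) the invariant mass exceeds the sum of the rest masses:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def invariant_mass(particles):
    """M0 from (sum E / c)^2 - (sum p)^2 = (M0 c)^2; momenta are 1-D for brevity."""
    e_total = sum(e for e, p in particles)
    p_total = sum(p for e, p in particles)
    return math.sqrt((e_total / C) ** 2 - p_total ** 2) / C

m0 = 1.0e-27   # illustrative rest mass of each particle, kg
p = 4.0e-19    # illustrative momentum magnitude, kg*m/s
E = math.hypot(p * C, m0 * C ** 2)   # each particle's energy from relation (1)

# Two identical particles with opposite momenta: momenta cancel,
# so M0 c^2 equals the summed energies -- more than the summed rest masses
M0 = invariant_mass([(E, p), (E, -p)])
assert math.isclose(M0 * C ** 2, 2 * E)
assert M0 > 2 * m0
```

The kinetic energy of relative motion shows up as extra invariant mass of the system, even though each particle's rest mass is unchanged.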
=== Center-of-momentum frame ===
In the center-of-momentum frame (COM frame), by definition we have:
{\displaystyle \sum _{n}\mathbf {p} _{n}={\boldsymbol {0}}\,,}
with the implication from (2) that the invariant mass is also the centre of momentum (COM) mass–energy, aside from the c2 factor:
{\displaystyle \left(\sum _{n}E_{n}\right)^{2}=\left(M_{0}c^{2}\right)^{2}\Rightarrow \sum _{n}E_{\mathrm {COM} \,n}=E_{\mathrm {COM} }=M_{0}c^{2}\,,}
and this is true for all frames since M0 is frame-independent. The energies ECOM n are those in the COM frame, not the lab frame. However, many familiar bound systems have the lab frame as COM frame, since the system itself is not in motion and so the momenta all cancel to zero. An example would be a simple object (where vibrational momenta of atoms cancel) or a container of gas where the container is at rest. In such systems, all the energies of the system are measured as mass. For example, the heat in an object on a scale, or the total of kinetic energies in a container of gas on the scale, all are measured by the scale as the mass of the system.
=== Rest masses and the invariant mass ===
Either the energies or momenta of the particles, as measured in some frame, can be eliminated using the energy momentum relation for each particle:
{\displaystyle E_{n}^{2}-\left(\mathbf {p} _{n}c\right)^{2}=\left(m_{n}c^{2}\right)^{2}\,,}
allowing M0 to be expressed in terms of the energies and rest masses, or momenta and rest masses. In a particular frame, the squares of sums can be rewritten as sums of squares (and products):
{\displaystyle \left(\sum _{n}E_{n}\right)^{2}=\left(\sum _{n}E_{n}\right)\left(\sum _{k}E_{k}\right)=\sum _{n,k}E_{n}E_{k}=2\sum _{n<k}E_{n}E_{k}+\sum _{n}E_{n}^{2}\,,}
{\displaystyle \left(\sum _{n}\mathbf {p} _{n}\right)^{2}=\left(\sum _{n}\mathbf {p} _{n}\right)\cdot \left(\sum _{k}\mathbf {p} _{k}\right)=\sum _{n,k}\mathbf {p} _{n}\cdot \mathbf {p} _{k}=2\sum _{n<k}\mathbf {p} _{n}\cdot \mathbf {p} _{k}+\sum _{n}\mathbf {p} _{n}^{2}\,,}
so substituting the sums, we can introduce their rest masses mn in (2):
{\displaystyle \sum _{n}\left(m_{n}c^{2}\right)^{2}+2\sum _{n<k}\left(E_{n}E_{k}-c^{2}\mathbf {p} _{n}\cdot \mathbf {p} _{k}\right)=\left(M_{0}c^{2}\right)^{2}\,.}
The energies can be eliminated by:
{\displaystyle E_{n}={\sqrt {\left(\mathbf {p} _{n}c\right)^{2}+\left(m_{n}c^{2}\right)^{2}}}\,,\quad E_{k}={\sqrt {\left(\mathbf {p} _{k}c\right)^{2}+\left(m_{k}c^{2}\right)^{2}}}\,,}
similarly the momenta can be eliminated by:
{\displaystyle \mathbf {p} _{n}\cdot \mathbf {p} _{k}=\left|\mathbf {p} _{n}\right|\left|\mathbf {p} _{k}\right|\cos \theta _{nk}\,,\quad |\mathbf {p} _{n}|={\frac {1}{c}}{\sqrt {E_{n}^{2}-\left(m_{n}c^{2}\right)^{2}}}\,,\quad |\mathbf {p} _{k}|={\frac {1}{c}}{\sqrt {E_{k}^{2}-\left(m_{k}c^{2}\right)^{2}}}\,,}
where θnk is the angle between the momentum vectors pn and pk.
Rearranging:
{\displaystyle \left(M_{0}c^{2}\right)^{2}-\sum _{n}\left(m_{n}c^{2}\right)^{2}=2\sum _{n<k}\left(E_{n}E_{k}-c^{2}\mathbf {p} _{n}\cdot \mathbf {p} _{k}\right)\,.}
Since the invariant mass of the system and the rest masses of each particle are frame-independent, the right hand side is also an invariant (even though the energies and momenta are all measured in a particular frame).
== Matter waves ==
Using the de Broglie relations for energy and momentum for matter waves,
{\displaystyle E=\hbar \omega \,,\quad \mathbf {p} =\hbar \mathbf {k} \,,}
where ω is the angular frequency and k is the wavevector with magnitude |k| = k, equal to the wave number, the energy–momentum relation can be expressed in terms of wave quantities:
{\displaystyle \left(\hbar \omega \right)^{2}=\left(c\hbar k\right)^{2}+\left(m_{0}c^{2}\right)^{2}\,,}
and tidying up by dividing by (ħc)2 throughout:
{\displaystyle \left({\frac {\omega }{c}}\right)^{2}=k^{2}+\left({\frac {m_{0}c}{\hbar }}\right)^{2}\,.}
This can also be derived from the magnitude of the four-wavevector
{\displaystyle \mathbf {K} =\left({\frac {\omega }{c}},\mathbf {k} \right)\,,}
in a similar way to the four-momentum above.
Since the reduced Planck constant ħ and the speed of light c both appear and clutter this equation, natural units are especially helpful here. Normalizing them so that ħ = c = 1, we have:
{\displaystyle \omega ^{2}=k^{2}+m_{0}^{2}\,.}
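In these natural units the dispersion relation is a simple Pythagorean sum, which a few lines can illustrate (the values of k and m0 below are arbitrary):

```python
import math

def omega(k, m0):
    """Relativistic dispersion in natural units (hbar = c = 1):
    omega^2 = k^2 + m0^2."""
    return math.hypot(k, m0)

# Massless wave: omega = k (the light cone)
assert math.isclose(omega(2.0, 0.0), 2.0)

# Massive wave: omega -> m0 as k -> 0, so the rest energy
# sets a floor on the wave's angular frequency
assert math.isclose(omega(0.0, 3.0), 3.0)
assert omega(1.0, 3.0) > 3.0
```

The mass term acts as a frequency cutoff: no matter-wave mode of a massive particle oscillates slower than ω = m0 in these units.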
== Tachyon and exotic matter ==
The velocity of a bradyon with the relativistic energy–momentum relation
{\displaystyle E^{2}=p^{2}c^{2}+m_{0}^{2}c^{4}\,.}
can never exceed c. On the contrary, it is always greater than c for a tachyon whose energy–momentum equation is
{\displaystyle E^{2}=p^{2}c^{2}-m_{0}^{2}c^{4}\,.}
By contrast, the hypothetical exotic matter has a negative mass and the energy–momentum equation is
{\displaystyle E^{2}=-p^{2}c^{2}+m_{0}^{2}c^{4}\,.}
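Since v = pc²/E, the first two relations fix the speed ranges directly; a small check in natural units (c = 1, illustrative momenta and a hypothetical unit mass):

```python
import math

def bradyon_speed(p, m0):
    """v = p/E with E = sqrt(p^2 + m0^2): always below c."""
    return p / math.sqrt(p * p + m0 * m0)

def tachyon_speed(p, m0):
    """v = p/E with E = sqrt(p^2 - m0^2) (defined for p > m0): always above c."""
    return p / math.sqrt(p * p - m0 * m0)

for p in (1.5, 5.0, 100.0):
    assert bradyon_speed(p, 1.0) < 1.0   # bradyon never reaches c
    assert tachyon_speed(p, 1.0) > 1.0   # tachyon never drops to c
```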
== See also ==
Mass–energy equivalence
Four-momentum
Mass in special relativity
== References ==
A. Halpern (1988). 3000 Solved Problems in Physics, Schaum Series. McGraw-Hill. pp. 704–705. ISBN 978-0-07-025734-4.
G. Woan (2010). The Cambridge Handbook of Physics Formulas. Cambridge University Press. p. 65. ISBN 978-0-521-57507-2.
C.B. Parker (1994). McGraw-Hill Encyclopaedia of Physics (2nd ed.). McGraw-Hill. pp. 1192, 1193. ISBN 0-07-051400-3.
R.G. Lerner; G.L. Trigg (1991). Encyclopaedia of Physics (2nd ed.). VHC Publishers. p. 1052. ISBN 0-89573-752-3.
Inertia is the natural tendency of objects in motion to stay in motion and objects at rest to stay at rest, unless a force causes the velocity to change. It is one of the fundamental principles of classical physics and is described by Isaac Newton in his first law of motion (also known as the principle of inertia). It is one of the primary manifestations of mass, one of the core quantitative properties of physical systems. Newton writes:
LAW I. Every object perseveres in its state of rest, or of uniform motion in a right line, except insofar as it is compelled to change that state by forces impressed thereon.
In his 1687 work Philosophiæ Naturalis Principia Mathematica, Newton defined inertia as a property:
DEFINITION III. The vis insita, or innate force of matter, is a power of resisting by which every body, as much as in it lies, endeavours to persevere in its present state, whether it be of rest or of moving uniformly forward in a right line.
== History and development ==
=== Early understanding of inertial motion ===
Professor John H. Lienhard points to the Mozi, a Chinese text from the Warring States period (475–221 BCE), as giving the first description of inertia. Before the European Renaissance, the prevailing theory of motion in western philosophy was that of Aristotle (384–322 BCE). On the surface of the Earth, the inertia of physical objects is often masked by gravity and the effects of friction and air resistance, both of which tend to decrease the speed of moving objects (commonly to the point of rest). This misled the philosopher Aristotle to believe that objects would move only as long as force was applied to them. Aristotle held that all moving objects (on Earth) eventually come to rest unless an external power (force) continues to move them. Aristotle explained the continued motion of projectiles, after being separated from their projector, as an (itself unexplained) action of the surrounding medium continuing to move the projectile.
Despite its general acceptance, Aristotle's concept of motion was disputed on several occasions by notable philosophers over nearly two millennia. For example, Lucretius (following, presumably, Epicurus) stated that the "default state" of the matter was motion, not stasis (stagnation). In the 6th century, John Philoponus criticized the inconsistency between Aristotle's discussion of projectiles, where the medium keeps projectiles going, and his discussion of the void, where the medium would hinder a body's motion. Philoponus proposed that motion was not maintained by the action of a surrounding medium, but by some property imparted to the object when it was set in motion. Although this was not the modern concept of inertia, for there was still the need for a power to keep a body in motion, it proved a fundamental step in that direction. This view was strongly opposed by Averroes and by many scholastic philosophers who supported Aristotle. However, this view did not go unchallenged in the Islamic world, where Philoponus had several supporters who further developed his ideas.
In the 11th century, Persian polymath Ibn Sina (Avicenna) claimed that a projectile in a vacuum would not stop unless acted upon.
=== Theory of impetus ===
In the 14th century, Jean Buridan rejected the notion that a motion-generating property, which he named impetus, dissipated spontaneously. Buridan's position was that a moving object would be arrested by the resistance of the air and the weight of the body which would oppose its impetus. Buridan also maintained that impetus increased with speed; thus, his initial idea of impetus was similar in many ways to the modern concept of momentum. Despite the obvious similarities to more modern ideas of inertia, Buridan saw his theory as only a modification to Aristotle's basic philosophy, maintaining many other peripatetic views, including the belief that there was still a fundamental difference between an object in motion and an object at rest. Buridan also believed that impetus could be not only linear but also circular in nature, causing objects (such as celestial bodies) to move in a circle. Buridan's theory was followed up by his pupil Albert of Saxony (1316–1390) and the Oxford Calculators, who performed various experiments which further undermined the Aristotelian model. Their work in turn was elaborated by Nicole Oresme who pioneered the practice of illustrating the laws of motion with graphs.
Shortly before Galileo's theory of inertia, Giambattista Benedetti modified the growing theory of impetus to involve linear motion alone:
[Any] portion of corporeal matter which moves by itself when an impetus has been impressed on it by any external motive force has a natural tendency to move on a rectilinear, not a curved, path.
Benedetti cites the motion of a rock in a sling as an example of the inherent linear motion of objects, forced into circular motion.
=== Classical inertia ===
According to science historian Charles Coulston Gillispie, inertia "entered science as a physical consequence of Descartes' geometrization of space-matter, combined with the immutability of God." The first physicist to completely break away from the Aristotelian model of motion was Isaac Beeckman in 1614.
The term "inertia" was first introduced by Johannes Kepler in his Epitome Astronomiae Copernicanae (published in three parts from 1617 to 1621). However, the meaning of Kepler's term, which he derived from the Latin word for "idleness" or "laziness", was not quite the same as its modern interpretation. Kepler defined inertia only in terms of resistance to movement, once again based on the axiomatic assumption that rest was a natural state which did not need explanation. It was not until the later work of Galileo and Newton unified rest and motion in one principle that the term "inertia" could be applied to those concepts as it is today.
The principle of inertia, as formulated by Aristotle for "motions in a void", holds that a mundane object tends to resist a change in motion. The Aristotelian division of motion into mundane and celestial became increasingly problematic in the face of the conclusions of Nicolaus Copernicus in the 16th century, who argued that the Earth is never at rest, but is actually in constant motion around the Sun.
Galileo, in his further development of the Copernican model, recognized these problems with the then-accepted nature of motion and, at least partially, as a result, included a restatement of Aristotle's description of motion in a void as a basic physical principle:
A body moving on a level surface will continue in the same direction at a constant speed unless disturbed.
Galileo writes that "all external impediments removed, a heavy body on a spherical surface concentric with the earth will maintain itself in that state in which it has been; if placed in a movement towards the west (for example), it will maintain itself in that movement."
This notion, which is termed "circular inertia" or "horizontal circular inertia" by historians of science, is a precursor to, but is distinct from, Newton's notion of rectilinear inertia. For Galileo, a motion is "horizontal" if it does not carry the moving body towards or away from the center of the Earth, and for him, "a ship, for instance, having once received some impetus through the tranquil sea, would move continually around our globe without ever stopping." It is also worth noting that Galileo later (in 1632) concluded that based on this initial premise of inertia, it is impossible to tell the difference between a moving object and a stationary one without some outside reference to compare it against. This observation ultimately came to be the basis for Albert Einstein to develop the theory of special relativity.
Concepts of inertia in Galileo's writings would later come to be refined, modified, and codified by Isaac Newton as the first of his laws of motion (first published in Newton's work, Philosophiæ Naturalis Principia Mathematica, in 1687):
Every body perseveres in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed thereon.
Despite having defined the concept in his laws of motion, Newton did not actually use the term "inertia". In fact, he originally viewed the respective phenomena as being caused by "innate forces" inherent in matter which resist any acceleration. Given this perspective, and borrowing from Kepler, Newton conceived of "inertia" as "the innate force possessed by an object which resists changes in motion", thus defining "inertia" to mean the cause of the phenomenon, rather than the phenomenon itself.
However, Newton's original ideas of "innate resistive force" were ultimately problematic for a variety of reasons, and thus most physicists no longer think in these terms. As no alternate mechanism has been readily accepted, and it is now generally accepted that there may not be one that we can know, the term "inertia" has come to mean simply the phenomenon itself, rather than any inherent mechanism. Thus, ultimately, "inertia" in modern classical physics has come to be a name for the same phenomenon as described by Newton's first law of motion, and the two concepts are now considered to be equivalent.
=== Relativity ===
Albert Einstein's theory of special relativity, as proposed in his 1905 paper entitled "On the Electrodynamics of Moving Bodies", was built on the understanding of inertial reference frames developed by Galileo, Huygens and Newton. While this revolutionary theory did significantly change the meaning of many Newtonian concepts such as mass, energy, and distance, Einstein's concept of inertia remained at first unchanged from Newton's original meaning. However, this resulted in a limitation inherent in special relativity: the principle of relativity could only apply to inertial reference frames. To address this limitation, Einstein developed his general theory of relativity ("The Foundation of the General Theory of Relativity", 1916), which provided a theory including noninertial (accelerated) reference frames.
In general relativity, the concept of inertial motion got a broader meaning. Taking into account general relativity, inertial motion is any movement of a body that is not affected by forces of electrical, magnetic, or other origin, but that is only under the influence of gravitational masses. Physically speaking, this happens to be exactly what a properly functioning three-axis accelerometer is indicating when it does not detect any proper acceleration.
== Etymology ==
The term inertia comes from the Latin word iners, meaning idle or sluggish.
== Rotational inertia ==
A quantity related to inertia is rotational inertia (see moment of inertia), the property that a rotating rigid body maintains its state of uniform rotational motion. Its angular momentum remains unchanged unless an external torque is applied; this is called conservation of angular momentum. Rotational inertia is often considered in relation to a rigid body. For example, a gyroscope uses the property that it resists any change in its axis of rotation.
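Conservation of angular momentum (L = Iω) can be sketched in a few lines; the numbers are purely illustrative:

```python
# a spinning body with moment of inertia I1 and angular speed w1
I1, w1 = 4.0, 2.0                 # kg*m^2 and rad/s (hypothetical values)
L = I1 * w1                       # angular momentum, conserved without torque

# the mass distribution contracts (e.g. a skater pulling in their arms):
# the moment of inertia drops, so the angular speed must rise
I2 = 1.0
w2 = L / I2
assert I2 * w2 == L               # angular momentum unchanged
assert w2 == 8.0                  # four times smaller I -> four times faster spin
```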
== See also ==
== References ==
== Further reading ==
== External links ==
Quotations related to Inertia at Wikiquote
Why Does the Earth Spin? (YouTube)
The linear molecular geometry describes the geometry around a central atom bonded to two other atoms (or ligands) placed at a bond angle of 180°. Linear organic molecules, such as acetylene (HC≡CH), are often described by invoking sp orbital hybridization for their carbon centers.
According to the VSEPR model (Valence Shell Electron Pair Repulsion model), linear geometry occurs at central atoms with two bonded atoms and zero or three lone pairs (AX2 or AX2E3) in the AXE notation. Neutral AX2 molecules with linear geometry include beryllium fluoride (F−Be−F) with two single bonds, carbon dioxide (O=C=O) with two double bonds, and hydrogen cyanide (H−C≡N) with one single and one triple bond. The most important linear molecule with more than three atoms is acetylene (H−C≡C−H), in which each of its carbon atoms is considered to be a central atom with a single bond to one hydrogen and a triple bond to the other carbon atom. Linear anions include azide (N−=N+=N−) and thiocyanate (S=C=N−), and a linear cation is the nitronium ion (O=N+=O).
Linear geometry also occurs in AX2E3 molecules, such as xenon difluoride (XeF2) and the triiodide ion (I−3) with one iodide bonded to the two others. As described by the VSEPR model, the five valence electron pairs on the central atom form a trigonal bipyramid in which the three lone pairs occupy the less crowded equatorial positions and the two bonded atoms occupy the two axial positions at the opposite ends of an axis, forming a linear molecule.
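The AXE bookkeeping for the two linear cases above can be captured in a toy helper (a sketch of the rule as stated in the text, not a general VSEPR predictor; the function name is ours):

```python
def is_linear_axe(bonded_atoms: int, lone_pairs: int) -> bool:
    """Linear geometry per the VSEPR cases in the text: AX2 (no lone
    pairs, e.g. CO2) or AX2E3 (three lone pairs, e.g. XeF2)."""
    return bonded_atoms == 2 and lone_pairs in (0, 3)

assert is_linear_axe(2, 0)       # CO2: AX2
assert is_linear_axe(2, 3)       # XeF2: AX2E3
assert not is_linear_axe(2, 2)   # H2O: AX2E2 is bent, not linear
```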
== See also ==
AXE method
Molecular geometry
== References ==
== External links ==
Indiana University Molecular Structure Center
Interactive molecular examples for point groups
Molecular Modeling
Animated Trigonal Planar Visual
Agricultural science (or agriscience for short) is a broad multidisciplinary field of biology that encompasses the parts of exact, natural, economic and social sciences used in the practice and understanding of agriculture. Professionals of agricultural science are called agricultural scientists or agriculturists.
== History ==
In the 18th century, Johann Friedrich Mayer conducted experiments on the use of gypsum (hydrated calcium sulfate) as a fertilizer.
In 1843, John Bennet Lawes and Joseph Henry Gilbert began a set of long-term field experiments at Rothamsted Research in England, some of which are still running as of 2018.
In the United States, a scientific revolution in agriculture began with the Hatch Act of 1887, which used the term "agricultural science". The Hatch Act was driven by farmers' interest in knowing the constituents of early artificial fertilizer. The Smith–Hughes Act of 1917 shifted agricultural education back to its vocational roots, but the scientific foundation had been built. For the next 44 years after 1906, federal expenditures on agricultural research in the United States outpaced private expenditures.
== Prominent agricultural scientists ==
Wilbur Olin Atwater
Robert Bakewell
Norman Borlaug
Luther Burbank
George Washington Carver
Carl Henry Clerk
George C. Clerk
René Dumont
Sir Albert Howard
Kailas Nath Kaul
Thomas Lecky
Justus von Liebig
Jay Laurence Lush
Gregor Mendel
Louis Pasteur
M. S. Swaminathan
Jethro Tull
Artturi Ilmari Virtanen
Sewall Wright
== Fields or related disciplines ==
== Scope ==
Agriculture, agricultural science, and agronomy are closely related. However, they cover different concepts:
Agriculture is the set of activities that transform the environment for the production of animals and plants for human use. Agriculture concerns techniques, including the application of agronomic research.
Agronomy is research and development related to studying and improving plant-based crops.
Geoponics is the science of cultivating the earth.
Hydroponics involves growing plants without soil, by using water-based mineral nutrient solutions in an artificial environment.
== Research topics ==
Agricultural sciences include research and development on:
Improving agricultural productivity in terms of quantity and quality (e.g., selection of drought-resistant crops and animals, development of new pesticides, yield-sensing technologies, simulation models of crop growth, in-vitro cell culture techniques)
Minimizing the effects of pests (weeds, insects, pathogens, mollusks, nematodes) on crop or animal production systems.
Transformation of primary products into end-consumer products (e.g., production, preservation, and packaging of dairy products)
Prevention and correction of adverse environmental effects (e.g., soil degradation, waste management, bioremediation)
Theoretical production ecology, relating to crop production modeling
Traditional agricultural systems, sometimes termed subsistence agriculture, which feed most of the poorest people in the world. These systems are of interest as they sometimes retain a level of integration with natural ecological systems greater than that of industrial agriculture, which may be more sustainable than some modern agricultural systems.
Food production and demand globally, with particular attention paid to the primary producers, such as China, India, Brazil, the US, and the EU.
Various sciences relating to agricultural resources and the environment (e.g. soil science, agroclimatology); biology of agricultural crops and animals (e.g. crop science, animal science and their included sciences, e.g. ruminant nutrition, farm animal welfare); such fields as agricultural economics and rural sociology; various disciplines encompassed in agricultural engineering.
== See also ==
Agricultural Research Council
Agricultural sciences basic topics
Agriculture ministry
Agroecology
American Society of Agronomy
Consultative Group on International Agricultural Research (CGIAR)
Crop Science Society of America
Genomics of domestication
History of agricultural science
Indian Council of Agricultural Research
Institute of Food and Agricultural Sciences
International Assessment of Agricultural Science and Technology for Development
International Food Policy Research Institute, IFPRI
International Institute of Tropical Agriculture
International Livestock Research Institute
List of agriculture topics
National Agricultural Library (NAL)
National FFA Organization
Research Institute of Crop Production (RICP) (in the Czech Republic)
Soil Science Society of America
USDA Agricultural Research Service
University of Agricultural Sciences
== References ==
== Further reading ==
Agricultural Research, Livelihoods, and Poverty: Studies of Economic and Social Impacts in Six Countries Edited by Michelle Adato and Ruth Meinzen-Dick (2007), Johns Hopkins University Press Food Policy Report
Claude Bourguignon, Regenerating the Soil: From Agronomy to Agrology, Other India Press, 2005
Pimentel David, Pimentel Marcia, Computer les kilocalories, Cérès, n. 59, sept-oct. 1977
Russell E. Walter, Soil conditions and plant growth, Longman group, London, New York 1973
Salamini, Francesco; Özkan, Hakan; Brandolini, Andrea; Schäfer-Pregl, Ralf; Martin, William (2002). "Genetics and geography of wild cereal domestication in the near east". Nature Reviews Genetics. 3 (6): 429–441. doi:10.1038/nrg817. PMID 12042770. S2CID 25166879.
Saltini Antonio, Storia delle scienze agrarie, 4 vols, Bologna 1984–89, ISBN 88-206-2412-5, ISBN 88-206-2413-3, ISBN 88-206-2414-1, ISBN 88-206-2415-X
Vavilov Nicolai I. (Starr Chester K. editor), The Origin, Variation, Immunity and Breeding of Cultivated Plants. Selected Writings, in Chronica botanica, 13: 1–6, Waltham, Mass., 1949–50
Vavilov Nicolai I., World Resources of Cereals, Leguminous Seed Crops and Flax, Academy of Sciences of Urss, National Science Foundation, Washington, Israel Program for Scientific Translations, Jerusalem 1960
Winogradsky Serge, Microbiologie du sol. Problèmes et methodes. Cinquante ans de recherches, Masson & c.ie, Paris 1949
Soil salinity control refers to controlling the process and progress of soil salinity to prevent soil degradation by salination and reclamation of already salty (saline) soils. Soil reclamation is also known as soil improvement, rehabilitation, remediation, recuperation, or amelioration.
The primary man-made cause of salinization is irrigation. River water or groundwater used in irrigation contains salts, which remain in the soil after the water has evaporated.
The primary method of controlling soil salinity is to permit 10–20% of the irrigation water to leach the soil, so that it is drained and discharged through an appropriate drainage system. The salt concentration of the drainage water is normally 5 to 10 times higher than that of the irrigation water, so salt export closely matches salt import and salt does not accumulate.
== Problems with soil salinity ==
Salty (saline) soils have a high salt content. The predominant salt is normally sodium chloride (NaCl, "table salt"). Saline soils are therefore also sodic soils, though there are sodic soils that are not saline but alkaline.
According to a study by UN University, about 62 million hectares (240 thousand square miles; 150 million acres), representing 20% of the world's irrigated lands are affected, up from 45 million ha (170 thousand sq mi; 110 million acres) in the early 1990s. In the Indo-Gangetic Plain, home to over 10% of the world's population, crop yield losses for wheat, rice, sugarcane and cotton grown on salt-affected lands could be 40%, 45%, 48%, and 63%, respectively.
Salty soils are a common feature and an environmental problem in irrigated lands in arid and semi-arid regions, resulting in poor or little crop production. The causes of salty soils are often associated with high water tables, which are caused by a lack of natural subsurface drainage to the underground. Poor subsurface drainage may be caused by insufficient transport capacity of the aquifer or because water cannot exit the aquifer, for instance, if the aquifer is situated in a topographical depression.
Worldwide, the major factor in the development of saline soils is a lack of precipitation. Most naturally saline soils are found in (semi) arid regions and climates of the earth.
=== Primary cause ===
Man-made salinization is primarily caused by salt found in irrigation water. All irrigation water derived from rivers or groundwater, regardless of water purity, contains salts that remain behind in the soil after the water has evaporated.
For example, assuming irrigation water with a low salt concentration of 0.3 g/L (equal to 0.3 kg/m3, corresponding to an electric conductivity of about 0.5 dS/m) and a modest annual supply of irrigation water of 10,000 m3/ha (almost 3 mm/day), the irrigation brings 3,000 kg of salt per hectare each year. In the absence of sufficient natural drainage (as in waterlogged soils) and of a proper leaching and drainage program to remove salts, this would lead to high soil salinity and reduced crop yields in the long run.
Much of the water used in irrigation has a higher salt content than 0.3 g/L, compounded by irrigation projects using a far greater annual supply of water. Sugar cane, for example, needs about 20,000 m3/ha of water per year. As a result, irrigated areas often receive more than 3,000 kg/ha of salt per year, with some receiving as much as 10,000 kg/ha/year.
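The arithmetic behind these figures is direct, since 1 g/L equals 1 kg/m³: salt import in kg/ha is simply volume times concentration. A sketch with the numbers quoted in the text:

```python
# salt import (kg/ha/year) = irrigation water (m3/ha/year) * concentration (kg/m3)
water_m3_per_ha = 10_000     # modest annual supply, almost 3 mm/day
salt_kg_per_m3 = 0.3         # low-salinity irrigation water, 0.3 g/L

salt_import = water_m3_per_ha * salt_kg_per_m3
assert salt_import == 3_000.0     # kg of salt per hectare per year

# sugar cane at ~20,000 m3/ha/year and a (hypothetical) 0.5 g/L
assert 20_000 * 0.5 == 10_000.0   # kg/ha/year, the upper figure in the text
```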
=== Secondary cause ===
The secondary cause of salinization is waterlogging in irrigated land. Irrigation changes the natural water balance of irrigated lands. Large quantities of water in irrigation projects are not consumed by plants and must go somewhere. In irrigation projects, it is impossible to achieve 100% irrigation efficiency, where all the irrigation water would be consumed by the plants. The maximum attainable irrigation efficiency is about 70%, but usually it is less than 60%. This means that at least 30%, and usually more than 40%, of the irrigation water is not consumed by the crop and must go somewhere.
Most of the water lost this way is stored underground which can change the original hydrology of local aquifers considerably. Many aquifers cannot absorb and transport these quantities of water, and so the water table rises leading to waterlogging.
Waterlogging causes three problems:
The shallow water table and lack of oxygenation of the root zone reduces the yield of most crops.
It leads to an accumulation of salts brought in with the irrigation water as their removal through the aquifer is blocked.
With the upward seepage of groundwater, more salts are brought into the soil and the salination is aggravated.
Aquifer conditions in irrigated land and the groundwater flow have an important role in soil salinization, as illustrated here:
Illustration of the influence of aquifer conditions on soil salinization in irrigated land
=== Salt affected area ===
Normally, salinization affects a considerable area, 20% to 30% of the agricultural land, in irrigation projects. When agriculture in such a fraction of the land is abandoned, a new salt and water balance is attained, a new equilibrium is reached, and the situation becomes stable.
In India alone, thousands of square kilometers have been severely salinized. China and Pakistan do not lag far behind (perhaps China has even more salt affected land than India). A regional distribution of the 3,230,000 km2 of saline land worldwide is shown in the following table derived from the FAO/UNESCO Soil Map of the World.
=== Spatial variation ===
Although the principles of the processes of salinization are fairly easy to understand, it is more difficult to explain why certain parts of the land suffer from the problems and other parts do not, or to predict accurately which part of the land will fall victim. The main reason for this is the variation of natural conditions in time and space, the usually uneven distribution of the irrigation water, and the seasonal or yearly changes of agricultural practices. Only in lands with undulating topography is the prediction simple: the depressional areas will degrade the most.
The preparation of salt and water balances for distinguishable sub-areas in the irrigation project, or the use of agro-hydro-salinity models, can be helpful in explaining or predicting the extent and severity of the problems.
== Diagnosis ==
=== Measurement ===
Soil salinity is measured as the salt concentration of the soil solution in terms of g/L or as electric conductivity (EC) in dS/m. The relation between these two units is about 5/3: y g/L ≈ 5y/3 dS/m. Seawater may have a salt concentration of 30 g/L (3%) and an EC of 50 dS/m.
The standard for the determination of soil salinity is from an extract of a saturated paste of the soil, and the EC is then written as ECe. The extract is obtained by centrifugation. The salinity can more easily be measured, without centrifugation, in a 2:1 or 5:1 water:soil mixture (in terms of g water per g dry soil) than from a saturated paste. The relation between ECe and EC2:1 is about 4, hence: ECe = 4 EC2:1.
=== Classification ===
Soils are considered saline when the ECe > 4. When 4 < ECe < 8, the soil is called slightly saline, when 8 < ECe < 16 it is called (moderately) saline, and when ECe > 16 severely saline.
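The unit conversion and the classification thresholds above can be captured in a small sketch (the function names are ours):

```python
def ec_from_g_per_l(y):
    """Approximate EC in dS/m from a salt concentration of y g/L (factor ~5/3)."""
    return 5.0 * y / 3.0

def salinity_class(ece):
    """Classify a soil by its ECe (dS/m) using the thresholds in the text."""
    if ece <= 4:
        return "non-saline"
    if ece <= 8:
        return "slightly saline"
    if ece <= 16:
        return "(moderately) saline"
    return "severely saline"

assert ec_from_g_per_l(30) == 50.0           # seawater: 30 g/L -> ~50 dS/m
assert salinity_class(6) == "slightly saline"
assert salinity_class(20) == "severely saline"
```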
=== Crop tolerance ===
Sensitive crops lose their vigor even in slightly saline soils; most crops are negatively affected by (moderately) saline soils, and only salinity-resistant crops thrive in severely saline soils. The University of Wyoming and the Government of Alberta report data on the salt tolerance of plants.
== Principles of salinity control ==
Drainage is the primary method of controlling soil salinity. The system should permit a small fraction of the irrigation water (about 10 to 20 percent, the drainage or leaching fraction) to be drained and discharged out of the irrigation project.
In irrigated areas where salinity is stable, the salt concentration of the drainage water is normally 5 to 10 times higher than that of the irrigation water. Salt export matches salt import and salt will not accumulate.
When reclaiming already salinized soils, the salt concentration of the drainage water will initially be much higher than that of the irrigation water (for example 50 times higher). Salt export will greatly exceed salt import, so that with the same drainage fraction a rapid desalinization occurs. After one or two years, the soil salinity is decreased so much, that the salinity of the drainage water has come down to a normal value and a new, favorable, equilibrium is reached.
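In steady state, the leaching fraction and the drain-water salinity are tied together: if a fraction f of the applied water leaves as drainage, its concentration must be Ci/f for salt export to balance salt import. A sketch with illustrative figures:

```python
irrigation = 10_000       # m3/ha/year of irrigation water (hypothetical)
ci = 0.5                  # salt concentration of that water, g/L (hypothetical)

for percent in (10, 20):                      # the 10-20% leaching fraction
    drain_volume = irrigation * percent // 100
    cp = irrigation * ci / drain_volume       # drain salinity needed for balance
    assert cp / ci == 100 / percent           # 10x at 10%, 5x at 20%
```

This reproduces the "5 to 10 times higher" range quoted above for the normal 10–20% drainage fraction.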
In regions with pronounced dry and wet seasons, the drainage system may be operated in the wet season only, and closed during the dry season. This practice of checked or controlled drainage saves irrigation water.
The discharge of salty drainage water may pose environmental problems to downstream areas. The environmental hazards must be considered very carefully and, if necessary mitigating measures must be taken. If possible, the drainage must be limited to wet seasons only, when the salty effluent inflicts the least harm.
== Drainage systems ==
Land drainage for soil salinity control is usually by horizontal drainage system (figure left), but vertical systems (figure right) are also employed.
The drainage system designed to evacuate salty water also lowers the water table. To reduce the cost of the system, the lowering must be reduced to a minimum. The highest permissible level of the water table (or the shallowest permissible depth) depends on the irrigation and agricultural practices and kind of crops.
In many cases a seasonal average water table depth of 0.6 to 0.8 m is deep enough. This means that the water table may occasionally be less than 0.6 m (say 0.2 m just after an irrigation or a rain storm). This automatically implies that, on other occasions, the water table will be deeper than 0.8 m (say 1.2 m). The fluctuation of the water table aids the breathing function of the soil: it promotes the expulsion of carbon dioxide (CO2) produced by the plant roots and the inhalation of fresh oxygen (O2).
Maintaining a not-too-deep water table offers the additional advantage that excessive field irrigation is discouraged, as the crop yield would be negatively affected by the resulting elevated water table, and irrigation water may be saved.
The statements made above on the optimum depth of the water table are very general, because in some instances the required water table may be still shallower than indicated (for example in rice paddies), while in other instances it must be considerably deeper (for example in some orchards). The establishment of the optimum depth of the water table is in the realm of agricultural drainage criteria.
== Soil leaching ==
The vadose zone of the soil, between the soil surface and the water table, is subject to four main hydrological inflow and outflow factors:
Infiltration of rain and irrigation water (Irr) into the soil through the soil surface (Inf):
Inf = Rain + Irr
Evaporation of soil water through plants and directly into the air through the soil surface (Evap)
Percolation of water from the unsaturated zone soil into the groundwater through the watertable (Perc)
Capillary rise of groundwater moving by capillary suction forces into the unsaturated zone (Cap)
In steady state (i.e. the amount of water stored in the unsaturated zone does not change in the long run) the water balance of the unsaturated zone reads: Inflow = Outflow, thus:
Inf + Cap = Evap + Perc or:
Irr + Rain + Cap = Evap + Perc
and the salt balance is
Irr.Ci + Cap.Cc = Evap.Fc.Ce + Perc.Cp + Ss
where Ci is the salt concentration of the irrigation water, Cc is the salt concentration of the capillary rise, equal to the salt concentration of the upper part of the groundwater body, Fc is the fraction of the total evaporation transpired by plants, Ce is the salt concentration of the water taken up by the plant roots, Cp is the salt concentration of the percolation water, and Ss is the increase of salt storage in the unsaturated soil. This assumes that the rainfall contains no salts; only along the coast may this not be true. Further, it is assumed that no runoff or surface drainage occurs. The amount of salt removed by plants (Evap.Fc.Ce) is usually negligibly small: Evap.Fc.Ce = 0
The salt concentration Cp can be taken as a part of the salt concentration of the soil in the unsaturated zone (Cu) giving: Cp = Le.Cu, where Le is the leaching efficiency. The leaching efficiency is often in the order of 0.7 to 0.8, but in poorly structured, heavy clay soils it may be less. In the Leziria Grande polder in the delta of the Tagus river in Portugal it was found that the leaching efficiency was only 0.15.
Assuming that one wishes to prevent the soil salinity from increasing and to maintain the soil salinity Cu at a desired level Cd, we have:
Ss = 0, Cu = Cd and Cp = Le.Cd. Hence the salt balance can be simplified to:
Perc.Le.Cd = Irr.Ci + Cap.Cc
Setting the amount of percolation water required to fulfill this salt balance equal to Lr (the leaching requirement), it is found that:
Lr = (Irr.Ci + Cap.Cc) / (Le.Cd)
Substituting herein Irr = Evap + Perc − Rain − Cap and re-arranging gives:
Lr = [ (Evap−Rain).Ci + Cap(Cc−Ci) ] / (Le.Cd − Ci)
With this the irrigation and drainage requirements for salinity control can be computed too.
In irrigation projects in (semi)arid zones and climates it is important to check the leaching requirement, whereby the field irrigation efficiency (indicating the fraction of irrigation water percolating to the underground) is to be taken into account.
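The water and salt balances above can be combined in a short numerical sketch. All input values below are illustrative assumptions (seasonal water depths in mm, salt concentrations in dS/m), not data taken from the text:

```python
# Leaching requirement Lr from the steady-state water and salt balances.
# All numbers are illustrative assumptions: Evap, Rain, Cap in mm/season,
# concentrations (Ci, Cc, Cd) in dS/m, Le dimensionless.

def leaching_requirement(evap, rain, ci, cap, cc, le, cd):
    """Lr = [(Evap - Rain).Ci + Cap.(Cc - Ci)] / (Le.Cd - Ci)."""
    return ((evap - rain) * ci + cap * (cc - ci)) / (le * cd - ci)

def irrigation_requirement(evap, rain, cap, perc):
    """Water balance rearranged: Irr = Evap + Perc - Rain - Cap."""
    return evap + perc - rain - cap

lr = leaching_requirement(evap=700, rain=300, ci=1.5, cap=100, cc=6.0,
                          le=0.8, cd=4.0)
irr = irrigation_requirement(evap=700, rain=300, cap=100, perc=lr)
# Consistency check of the salt balance: Perc.Le.Cd = Irr.Ci + Cap.Cc
assert abs(lr * 0.8 * 4.0 - (irr * 1.5 + 100 * 6.0)) < 1e-6
```

With these assumed figures the sketch gives a leaching requirement of roughly 620 mm per season, and the closing assertion confirms that the derived Lr formula satisfies the original salt balance.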
The desired soil salinity level Cd depends on the crop tolerance to salt. The University of Wyoming, US, and the Government of Alberta, Canada, report crop tolerance data.
== Strip cropping: an alternative ==
In irrigated lands with scarce water resources that suffer from drainage (high water table) and soil salinity problems, strip cropping is sometimes practiced: every other strip of land is irrigated, while the strips in between are left permanently fallow.
Owing to the water applied, the irrigated strips have a higher water table, which induces flow of groundwater towards the unirrigated strips. This flow functions as subsurface drainage for the irrigated strips, so that the water table is maintained at a not-too-shallow depth, leaching of the soil is possible, and the soil salinity can be controlled at an acceptably low level.
In the unirrigated (sacrificial) strips the soil is dry and the groundwater comes up by capillary rise and evaporates, leaving the salts behind, so that the soil there salinizes. Nevertheless, these strips can still have some use for livestock when sown with salt-resistant grasses or weeds. Moreover, useful salt-resistant trees such as Casuarina, Eucalyptus, or Atriplex can be planted, keeping in mind that trees have deep root systems and that the salinity of the wet subsoil is less than that of the topsoil. In these ways wind erosion can be controlled. The unirrigated strips can also be used for salt harvesting.
== Soil salinity models ==
The majority of the computer models available for water and solute transport in the soil (e.g. SWAP, DrainMod-S, UnSatChem, and Hydrus) are based on Richards' differential equation for the movement of water in unsaturated soil, in combination with Fick's convection–diffusion equation for the advection and dispersion of salts.
The models require the input of soil characteristics like the relations between variable unsaturated soil moisture content, water tension, the water retention curve, unsaturated hydraulic conductivity, dispersivity, and diffusivity. These relations vary greatly from place to place and from time to time, and are not easy to measure. Further, the models are difficult to calibrate under farmers' field conditions because the soil salinity there is spatially very variable. The models use short time steps and need at least a daily, if not hourly, database of hydrological phenomena. Altogether, this makes applying a model to a fairly large project the job of a team of specialists with ample facilities.
Simpler models, like SaltMod, based on monthly or seasonal water and soil balances and an empirical capillary rise function, are also available. They are useful for long-term salinity predictions in relation to irrigation and drainage practices.
LeachMod, which uses the SaltMod principles, helps in analyzing leaching experiments in which the soil salinity is monitored in various root-zone layers; the model optimizes the value of the leaching efficiency of each layer so that a fit is obtained of observed with simulated soil salinity values.
Spatial variations owing to variations in topography can be simulated and predicted using salinity cum groundwater models, like SahysMod.
== See also ==
Alkali soils – Soil type with pH > 8.5
Biosalinity – Use of salty water for irrigation
Crop tolerance to seawater – Ability of an agricultural crop to withstand the high salinity induced by irrigation with seawater
Desalination – Removal of salts from water
Halophyte – Salt-tolerant plant
Halotolerance – Adaptation to high salinity
Salt tolerance of crops
Sodium in biology – Use of sodium by organisms
== References ==
== External links ==
Food and Agriculture Organization of the United Nations on soil salinity
US Salinity Laboratory at Riverside, California
A trial pit (or test pit) is an excavation of ground in order to study or sample the composition and structure of the subsurface, usually dug during a site investigation, a soil survey or a geological survey. Trial pits are dug before construction to determine the geology and the depth of the water table of the site.
Trial pits are usually between 1 and 4 metres deep, and are dug either by hand or using a mechanical digger.
Building and construction regulations state that any trial pit deeper than 1.2 metres must be secured against structural collapse if it is to be entered by people.
== References ==
Soil physics is the study of soil's physical properties and processes. It is applied to management and prediction under natural and managed ecosystems. Soil physics deals with the dynamics of physical soil components and their phases as solids, liquids, and gases. It draws on the principles of physics, physical chemistry, engineering, and meteorology. Soil physics applies these principles to address practical problems of agriculture, ecology, and engineering.
== Prominent soil physicists ==
Edgar Buckingham (1867–1940)
The theory of gas diffusion in soil and vadose zone water flow in soil.
Willard Gardner (1883–1964)
First to use porous cups and manometers for capillary potential measurements and accurately predicted the moisture distribution above a water table.
Lorenzo A. Richards (1904–1993)
General transport of water in unsaturated soil, measurement of soil water potential using tensiometer.
John R. Philip (1927–1999)
Analytical solution to general soil water transport, Environmental Mechanics.
== See also ==
Agrophysics
Bulk density
Capacitance probe
Frequency domain sensor
Geotechnical engineering
Irrigation
Irrigation scheduling
Neutron probe
Soil porosity
Soil thermal properties
Time domain reflectometer
Water content
== Notes ==
Horton, Horn, Bachmann & Peth (eds.), 2016: Essential Soil Physics. Schweizerbart, ISBN 978-3-510-65288-4
Chesworth, Ward (ed.), 2008: Encyclopedia of Soil Science. University of Guelph, Canada; Springer, ISBN 978-1-4020-3994-2
== External links ==
Media related to Soil physics at Wikimedia Commons
SSSA Soil Physics Division
Wave equation analysis is a numerical method of analysis for the behavior of driven foundation piles. It predicts the pile capacity versus blow count relationship (bearing graph) and pile driving stress. The model mathematically represents the pile driving hammer and all its accessories (ram, cap, and cap block), as well as the pile, as a series of lumped masses and springs in a one-dimensional analysis. The soil response for each pile segment is modeled as viscoelastic-plastic. The method was first developed in the 1950s by E.A. Smith of the Raymond Pile Driving Company.
Wave equation analysis of piles has seen many improvements since the 1950s such as including a thermodynamic diesel hammer model and residual stress. Commercial software packages (such as AllWave-PDP and GRLWEAP) are now available to perform the analysis.
One of the principal uses of this method is a driveability analysis, performed to select the parameters for safe pile installation, including recommendations on cushion stiffness, hammer stroke and other driving system parameters that optimize blow counts and pile stresses during pile driving, for example when a soft or hard layer would otherwise cause excessive stresses or unacceptable blow counts.
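The lumped-mass idealization described above can be sketched in a few lines of code. Everything below (masses, stiffness, Smith soil parameters, the single tip resistance) is an illustrative assumption, not data for any real hammer-pile system; the tip soil follows a Smith-type elastic-plastic slider with a velocity multiplier:

```python
# Minimal sketch of a Smith-type lumped mass-and-spring pile model
# (after E.A. Smith, 1960). Every numerical value is an illustrative
# assumption, not data for a real hammer or pile.

def smith_pile_blow(n_seg=10, dt=1e-4, steps=2000):
    m_ram = 2000.0            # ram mass, kg (assumed)
    v_ram = 4.0               # ram impact velocity, m/s (assumed)
    m_seg = 300.0             # pile segment mass, kg (assumed)
    k = 4.0e8                 # inter-segment spring stiffness, N/m (assumed)
    ru, quake, j = 5.0e5, 2.5e-3, 0.16  # tip: resistance N, quake m, damping s/m

    m = [m_ram] + [m_seg] * n_seg       # ram lumped on top of the pile chain
    u = [0.0] * (n_seg + 1)             # downward displacements, m
    v = [v_ram] + [0.0] * n_seg         # only the ram moves at t = 0
    set_p = 0.0                         # permanent (plastic) set of the tip
    for _ in range(steps):
        f = [0.0] * (n_seg + 1)
        for i in range(n_seg):
            s = k * max(u[i] - u[i + 1], 0.0)   # interfaces carry no tension
            f[i] -= s
            f[i + 1] += s
        # Smith tip model: elastic up to the quake, then a plastic slider,
        # scaled by the viscous factor (1 + J * v).
        elastic = u[-1] - set_p
        if elastic > quake:
            set_p = u[-1] - quake
            elastic = quake
        r_static = ru * max(min(elastic / quake, 1.0), -1.0)
        f[-1] -= r_static * (1.0 + j * max(v[-1], 0.0))
        for i in range(n_seg + 1):      # semi-implicit Euler time stepping
            v[i] += f[i] / m[i] * dt
            u[i] += v[i] * dt
    return set_p                        # permanent set per blow, m
```

Running one blow returns the permanent set; its inverse gives the blow count per unit penetration, the quantity plotted against capacity in a bearing graph.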
== References ==
Smith, E.A.L. (1960) Pile-Driving Analysis by the Wave Equation. Journal of the Engineering Mechanics Division, Proceedings of the American Society of Civil Engineers. Vol. 86, No. EM 4, August.
== External links ==
The Wave Equation Page for Piling
Land development is the alteration of landscape in any number of ways such as:
Changing landforms from a natural or semi-natural state for a purpose such as agriculture or housing
Subdividing real estate into lots, typically for the purpose of building homes
Real estate development or changing its purpose, for example by converting an unused factory complex into a condominium.
== History ==
Land development has a history dating to Neolithic times, around 8,000 BC. From the dawn of civilization, land development has involved the progressive improvement of a piece of land, governed by codes and regulations, particularly for housing complexes.
== Economic aspects ==
In an economic context, land development is also sometimes advertised as land improvement or land amelioration. It refers to investment making land more usable by humans. For accounting purposes, it refers to any variety of projects that increase the value of the property. Most land improvements are depreciable, but some are not because a useful life cannot be determined. Home building and containment are two of the most common and the oldest types of development.
In an urban context, land development furthermore includes:
Road construction
Access roads, walkways, and parking lots
Bridges
Landscaping
Clearing, terracing, or land levelling
Land preparation (development) for gardens
Setup of fences and, to a lesser degree, hedges
Service connections to municipal services and public utilities
Drainage, canal systems
External lighting (street lamps etc.)
A landowner or developer of a project of any size will often want to maximise profits, minimise risk, and control cash flow. This "profitable energy" means identifying and developing the best scheme for the local marketplace, whilst satisfying the local planning process.
Development analysis puts development prospects and the development process itself under the microscope, identifying where enhancements and improvements can be introduced. These improvements aim to align with best design practice, political sensitivities, and the inevitable social requirements of a project, with the overarching objective of increasing land values and profit margins on behalf of the landowner or developer.
Development analysis can add significantly to the value of land and development, and as such is a crucial tool for landowners and developers. It is an essential step in Kevin A. Lynch's 1960 book The Image of the City, and is considered central to realizing the value potential of land. The landowner can share in additional planning gain (significant value uplift) via an awareness of the land's development potential. This is done via a residual development appraisal or residual valuation. The residual appraisal calculates the sale value of the end product (the gross development value or GDV) and hypothetically deducts costs, including planning and construction costs, finance costs and developer's profit. The "residue", or leftover proportion, represents the land value. Therefore, in maximizing the GDV (that which one could build on the land), land value is concurrently enhanced.
Land value is highly sensitive to supply and demand (for the end product), build costs, planning and affordable housing contributions, and so on. Understanding the intricacies of the development system and the effect of "value drivers" can result in massive differences in the landowner's sale value.
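The residual appraisal described above reduces to simple arithmetic. The figures below are purely illustrative assumptions, including the 20% profit margin:

```python
# Residual land valuation: land value = GDV minus all development costs
# and the developer's profit. All figures are illustrative assumptions.

def residual_land_value(gdv, build_costs, planning_fees, finance_costs,
                        profit_margin=0.20):
    developer_profit = profit_margin * gdv   # profit taken as a share of GDV
    return gdv - build_costs - planning_fees - finance_costs - developer_profit

land = residual_land_value(gdv=10_000_000, build_costs=5_500_000,
                           planning_fees=700_000, finance_costs=300_000)
# 10.0m GDV - 5.5m build - 0.7m fees - 0.3m finance - 2.0m profit
# leaves a residual of 1.5m attributable to the land
```

The sketch also shows why land value is so sensitive to its inputs: every unit added to GDV (net of the profit share) or removed from costs flows straight through to the residual.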
== Conversion of landforms ==
Land development puts more emphasis on the expected economic development as a result of the process; "land conversion" tries to focus on the general physical and biological aspects of the land use change. "Land improvement" in the economic sense can often lead to land degradation from the ecological perspective. Land development and the change in land value does not usually take into account changes in the ecology of the developed area. While conversion of (rural) land with a vegetation carpet to building land may result in a rise in economic growth and rising land prices, the irreversibility of lost flora and fauna because of habitat destruction, the loss of ecosystem services and resulting decline in environmental value is only considered a priori in environmental full-cost accounting.
=== Conversion to building land ===
Conversion to building land is as a rule associated with road building, which in itself already brings topsoil abrasion, soil compaction and modification of the soil's chemical composition through soil stabilization, creation of impervious surfaces and, subsequently, (polluted) surface runoff water.
Construction activity often effectively seals off a larger part of the soil from rainfall and the nutrient cycle, so that the soil below buildings and roads is effectively "consumed" and made infertile.
With the notable exception of attempts at rooftop gardening and hanging gardens in green buildings (possibly as constituents of green urbanism), vegetative cover of higher plants is lost to concrete and asphalt surfaces, complementary interspersed garden and park areas notwithstanding.
=== Conversion to farmland ===
New creation of farmland (or 'agricultural land conversion') will rely on the conversion and development of previous forests, savannas or grassland. Recreation of farmland from wasteland, deserts or previous impervious surfaces is considerably less frequent because of the degraded or missing fertile soil in the latter. Starting from forests, land is made arable by assarting or slash-and-burn.
Agricultural development furthermore includes:
Hydrological measures (land levelling, drainage, irrigation, sometimes landslide and flood control)
Soil improvement (fertilization, establishment of a productive chemical balance).
Road construction
Because the newly created farmland is more prone to erosion than soil stabilized by tree roots, such a conversion may mean irreversible crossing of an ecological threshold.
The resulting deforestation is also not easily compensated for by reforestation or afforestation, because plantations of other trees, as a rule, lack the biodiversity of the lost forest, especially when realized as monocultures, even where they serve water conservation or protection against wind erosion (shelterbelts). Deforestation can therefore have lasting effects on the environment: soil stabilization and erosion control measures may not preserve topsoil as effectively as the previous intact vegetation.
=== Restoration ===
Massive land conversion without proper consideration of ecological and geological consequences may lead to disastrous results, such as:
General soil degradation
Catastrophic soil salination and solonchak formation, e.g., in Central Asia, as a consequence of irrigation by saline groundwater
Desertification, soil erosion and ecological shifts due to drainage
Leaching of saline soils
Habitat loss for the wildlife.
While deleterious effects can be particularly visible when land is developed for industrial or mining usage, agro-industrial and settlement use can also have a massive and sometimes irreversible impact on the affected ecosystem.
Examples of land restoration/land rehabilitation counted as land development in the strict sense are still rare. However, renaturation, reforestation, stream restoration may all contribute to a healthier environment and quality of life, especially in densely populated regions. The same is true for planned vegetation like parks and gardens, but restoration plays a particular role, because it reverses previous conversions to built and agricultural areas.
=== Environmental issues ===
The environmental impact of land use and development is a substantial consideration for land development projects. On the local level an environmental impact report (EIR) may be necessary. In the United States, federally funded projects typically require preparation of an environmental impact statement (EIS). The concerns of private citizens or political action committees (PACs) can influence the scope, or even cancel, a project based on concerns like the loss of an endangered species’ habitat.
In most cases, the land development project will be allowed to proceed if mitigation requirements are met. Mitigation banking is the most prevalent example, and necessitates that the habitat will have to be replaced at a greater rate than it is removed. This increase in total area helps to establish the new ecosystem, though it will require time to reach maturity.
=== Biodiversity impacts ===
The extent, and type of land use directly affects wildlife habitat and thereby impacts local and global biodiversity. Human alteration of landscapes from natural vegetation (e.g. wilderness) to any other use can result in habitat loss, degradation, and fragmentation, all of which can have devastating effects on biodiversity. Land conversion is the single greatest cause of extinction of terrestrial species. An example of land conversion being a chief cause of the critically endangered status of a carnivore is the reduction in habitat for the African wild dog, Lycaon pictus.
Deforestation is also the reason for loss of a natural habitat, with large numbers of trees being cut down for residential and commercial use. Urban growth has become a problem for forests and agriculture, the expansion of structures prevents natural resources from producing in their environment. To prevent the loss of wildlife the forests must maintain a stable climate and the land must remain unaffected by development. Furthermore, forests can be sustained by different forest management techniques such as reforestation and preservation. Reforestation is a reactive approach designed to replant previously logged trees within the forest boundary in attempts to re-stabilize this ecosystem. Preservation, on the other hand, is a proactive idea that promotes the concept of leaving the forest without using this area for its ecosystem goods and services. Both of these methods to mitigate deforestation are being used throughout the world.
The U.S. Forest Service predicts that urban and developing terrain in the U.S. will expand by 41 percent in 2060. These conditions cause displacement for the wildlife and limited resources for the environment to maintain a sustainable balance.
== See also ==
== References ==
Critical state soil mechanics is the area of soil mechanics that encompasses the conceptual models representing the mechanical behavior of saturated remoulded soils based on the critical state concept. At the critical state, the relationship between the forces applied in the soil (stress) and the resulting deformation (strain) becomes constant. The soil will continue to deform, but the stress will no longer increase.
Forces are applied to soils in a number of ways, for example when they are loaded by foundations, or unloaded by excavations. The critical state concept is used to predict the behaviour of soils under various loading conditions, and geotechnical engineers use the critical state model to estimate how soil will behave under different stresses.
The basic concept is that soil and other granular materials, if continuously distorted until they flow as a frictional fluid, will come into a well-defined critical state. In practical terms, the critical state can be considered a failure condition for the soil. It is the point at which the soil cannot sustain any additional load without undergoing continuous deformation, in a manner similar to the behaviour of fluids.
Certain properties of the soil, like porosity, shear strength, and volume, reach characteristic values. These properties are intrinsic to the type of soil and its initial conditions.
== Formulation ==
The Critical State concept is an idealization of the observed behavior of saturated remoulded clays in triaxial compression tests, and it is assumed to apply to undisturbed soils. It states that soils and other granular materials, if continuously distorted (sheared) until they flow as a frictional fluid, will come into a well-defined critical state. At the onset of the critical state, shear distortions ε_s occur without any further changes in mean effective stress p′, deviatoric stress q (or yield stress σ_y in uniaxial tension according to the von Mises yield criterion), or specific volume ν:
{\displaystyle \ {\frac {\partial p'}{\partial \varepsilon _{s}}}={\frac {\partial q}{\partial \varepsilon _{s}}}={\frac {\partial \nu }{\partial \varepsilon _{s}}}=0}
where:
{\displaystyle \ \nu =1+e}
{\displaystyle \ p'={\frac {1}{3}}(\sigma _{1}'+\sigma _{2}'+\sigma _{3}')}
{\displaystyle \ q={\sqrt {\frac {(\sigma _{1}'-\sigma _{2}')^{2}+(\sigma _{2}'-\sigma _{3}')^{2}+(\sigma _{1}'-\sigma _{3}')^{2}}{2}}}}
However, for triaxial conditions {\displaystyle \ \sigma _{2}'=\sigma _{3}'}. Thus,
{\displaystyle \ p'={\frac {1}{3}}(\sigma _{1}'+2\sigma _{3}')}
{\displaystyle \ q=(\sigma _{1}'-\sigma _{3}')}
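The invariants above are straightforward to compute directly. A minimal sketch, with symbols following the definitions in the text:

```python
import math

# Mean effective stress p' and deviatoric stress q computed from the
# three principal effective stresses, as defined above.

def p_mean(s1, s2, s3):
    return (s1 + s2 + s3) / 3.0

def q_dev(s1, s2, s3):
    return math.sqrt(((s1 - s2)**2 + (s2 - s3)**2 + (s1 - s3)**2) / 2.0)

# Triaxial conditions: s2 == s3, so p' = (s1 + 2*s3)/3 and q = s1 - s3.
assert abs(p_mean(300.0, 100.0, 100.0) - (300.0 + 2 * 100.0) / 3.0) < 1e-12
assert abs(q_dev(300.0, 100.0, 100.0) - 200.0) < 1e-12
```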
All critical states, for a given soil, form a unique line called the Critical State Line (CSL), defined by the following equations in the space (p′, q, ν):
{\displaystyle \ q=Mp'}
{\displaystyle \ \nu =\Gamma -\lambda \ln(p')}
where M, Γ, and λ are soil constants. The first equation determines the magnitude of the deviatoric stress q needed to keep the soil flowing continuously as the product of a frictional constant M (capital μ) and the mean effective stress p′. The second equation states that the specific volume ν occupied by unit volume of flowing particles will decrease as the logarithm of the mean effective stress increases.
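The two CSL equations can be evaluated in a couple of lines. The default constants below (M, Γ, λ) are typical-order assumptions for illustration, not values from the text:

```python
import math

# Critical State Line in (p', q, v) space: q = M p' and
# v = Gamma - lambda * ln(p'). The default soil constants are
# illustrative assumptions of typical magnitude.

def csl_point(p_eff, m=0.9, gamma=2.76, lam=0.16):
    q = m * p_eff                      # deviatoric stress on the CSL
    v = gamma - lam * math.log(p_eff)  # specific volume on the CSL
    return q, v

q, v = csl_point(100.0)   # at p' = 100 kPa (assumed): q = 90, v ≈ 2.02
```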
== History ==
In an attempt to advance soil testing techniques, Kenneth Harry Roscoe of Cambridge University, in the late forties and early fifties, developed a simple shear apparatus in which his successive students attempted to study the changes in conditions in the shear zone both in sand and in clay soils. In 1958, a study of the yielding of soil, based on some Cambridge data from the simple shear apparatus tests and on much more extensive data from triaxial tests at Imperial College London from research led by Professor Sir Alec Skempton, led to the publication of the critical state concept (Roscoe, Schofield & Wroth 1958).
Roscoe obtained his undergraduate degree in mechanical engineering and his experiences trying to create tunnels to escape when held as a prisoner of war by the Nazis during WWII introduced him to soil mechanics. Subsequent to this 1958 paper, concepts of plasticity were introduced by Schofield and published in his textbook. Schofield was taught at Cambridge by Prof. John Baker, a structural engineer who was a strong believer in designing structures that would fail "plastically". Prof. Baker's theories strongly influenced Schofield's thinking on soil shear. Prof. Baker's views were developed from his pre-war work on steel structures and further informed by his wartime experiences assessing blast-damaged structures and with the design of the "Morrison Shelter", an air-raid shelter which could be located indoors (Schofield 2006).
== Original Cam-Clay Model ==
The name Cam-Clay asserts that the plastic volume change typical of clay soil behaviour is due to mechanical stability of an aggregate of small, rough, frictional, interlocking hard particles.
The Original Cam-Clay model is based on the assumption that the soil is isotropic, elasto-plastic, deforms as a continuum, and it is not affected by creep. The yield surface of the Cam clay model is described by the equation
{\displaystyle f(p,q,p_{c})=q+M\,p\,\ln \left[{\frac {p}{p_{c}}}\right]\leq 0}
where q is the equivalent stress, p is the pressure, p_c is the pre-consolidation pressure, and M is the slope of the critical state line in p–q space.
The pre-consolidation pressure evolves as the void ratio e (and therefore the specific volume v) of the soil changes. A commonly used relation is
{\displaystyle e=e_{0}-\lambda \ln \left[{\frac {p_{c}}{p_{c0}}}\right]}
where λ is the virgin compression index of the soil. A limitation of this model is the possibility of negative specific volumes at realistic values of stress.
An improvement to the above model for p_c is the bilogarithmic form
{\displaystyle \ln \left[{\frac {1+e}{1+e_{0}}}\right]=\ln \left[{\frac {v}{v_{0}}}\right]=-{\tilde {\lambda }}\ln \left[{\frac {p_{c}}{p_{c0}}}\right]}
where λ̃ is the appropriate compressibility index of the soil.
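The original Cam-Clay yield function can be checked numerically: f < 0 marks elastic states inside the surface and f = 0 the surface itself. The parameter values below are illustrative assumptions:

```python
import math

# Yield function of the original Cam-Clay model:
# f(p, q, pc) = q + M p ln(p / pc) <= 0, with f < 0 elastic and
# f = 0 on the yield surface. Parameter values are illustrative.

def cam_clay_f(p, q, pc, m=1.0):
    return q + m * p * math.log(p / pc)

pc = 100.0                                      # pre-consolidation pressure
q_yield = -1.0 * 50.0 * math.log(50.0 / pc)     # q on the surface at p = 50
assert abs(cam_clay_f(50.0, q_yield, pc)) < 1e-9  # lies on the yield surface
assert cam_clay_f(50.0, 20.0, pc) < 0.0           # inside: elastic state
```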
== Modified Cam-Clay Model ==
Professor John Burland of Imperial College, who worked with Professor Roscoe, is credited with the development of the modified version of the original model. The difference between the Cam Clay and the Modified Cam Clay (MCC) is that the yield surface of the MCC is described by an ellipse; the plastic strain increment vector (which is perpendicular to the yield surface) for the largest value of the mean effective stress is therefore horizontal, and hence no incremental deviatoric plastic strain takes place for a change in mean effective stress (for purely hydrostatic states of stress). This is very convenient for constitutive modelling in numerical analysis, especially finite element analysis, where numerical stability is important (as the yield surface must be continuously differentiable).
The yield surface of the modified Cam-clay model has the form
{\displaystyle f(p,q,p_{c})=\left[{\frac {q}{M}}\right]^{2}+p\,(p-p_{c})\leq 0}
where p is the pressure, q is the equivalent stress, p_c is the pre-consolidation pressure, and M is the slope of the critical state line.
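The elliptical MCC yield surface is equally easy to probe; at p = p_c/2 the apex of the ellipse gives q = M·p_c/2. The values below are illustrative assumptions:

```python
# Yield function of the Modified Cam-Clay model (an ellipse in p-q space):
# f(p, q, pc) = (q / M)^2 + p (p - pc) <= 0. Values are illustrative.

def mcc_f(p, q, pc, m=1.0):
    return (q / m) ** 2 + p * (p - pc)

pc = 100.0
# At p = pc / 2 the apex of the ellipse gives q = M * pc / 2 on the surface.
assert abs(mcc_f(50.0, 50.0, pc)) < 1e-9
assert mcc_f(50.0, 40.0, pc) < 0.0   # inside the ellipse: elastic
assert mcc_f(50.0, 60.0, pc) > 0.0   # outside: plastic loading
```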
== Critique ==
The basic concepts of the elasto-plastic approach were first proposed by two mathematicians, Daniel C. Drucker and William Prager (Drucker and Prager, 1952), in a short eight-page note. In their note, Drucker and Prager also demonstrated how to use their approach to calculate the critical height of a vertical bank using either a plane or a log-spiral failure surface. Their yield criterion is today called the Drucker-Prager yield criterion. Their approach was subsequently extended by Kenneth H. Roscoe and others in the soil mechanics department of Cambridge University.
Critical state and elasto-plastic soil mechanics have been the subject of criticism ever since they were first introduced. The key factor driving the criticism is primarily the implicit assumption that soils are made of isotropic point particles. Real soils are composed of finite-size particles with anisotropic properties that strongly determine observed behavior. Consequently, models based on a metals-based theory of plasticity are unable to model behavior of soils that results from anisotropic particle properties, one example of which is the drop in shear strength past peak strength, i.e., strain-softening behavior. Because of this, elasto-plastic soil models are only able to model "simple stress-strain curves", such as those of isotropic normally or lightly overconsolidated "fat" clays, i.e., CL-ML type soils constituted of very fine-grained particles.
Also, in general, volume change is governed by considerations from elasticity, an assumption that is largely untrue for real soils and that results in very poor matches of these models to observed volume changes or pore pressure changes. Further, elasto-plastic models describe the element as a whole and not specifically the conditions directly on the failure plane; as a consequence, they do not model the stress-strain curve post failure, particularly for soils that exhibit strain-softening past peak. Finally, most models separate out the effects of hydrostatic stress and shear stress, each assumed to cause only volume change and shear change respectively. In reality, soil structure, being analogous to a "house of cards", shows both shear deformations on the application of pure compression, and volume changes on the application of pure shear.
Additional criticisms are that the theory is "only descriptive", i.e., it only describes known behavior, and lacks the ability either to explain or to predict standard soil behaviors, such as why the void ratio in a one-dimensional compression test varies linearly with the logarithm of the vertical effective stress. Critical state soil mechanics simply assumes this behavior as a given.
For these reasons, critical-state and elasto-plastic soil mechanics have been subject to charges of scholasticism; the tests used to demonstrate their validity are usually "conformation tests" in which only simple stress-strain curves are shown to be modeled satisfactorily. The critical-state concept and the concepts surrounding it have a long history of being called "scholastic": Sir Alec Skempton, the "founding father" of British soil mechanics, attributed the scholastic nature of CSSM to Roscoe, of whom he said: "…he did little field work and was, I believe, never involved in a practical engineering job." In the 1960s and 1970s, Prof. Alan Bishop at Imperial College used to routinely demonstrate the inability of these theories to match the stress-strain curves of real soils. Joseph (2013) has suggested that critical-state and elasto-plastic soil mechanics meet the criterion of a "degenerate research program", a concept proposed by the philosopher of science Imre Lakatos for theories in which excuses are used to justify the inability of a theory to match empirical data.
=== Response ===
The claims that critical state soil mechanics is only descriptive and meets the criterion of a degenerate research program have not been settled. Andrew Jenike used a logarithmic-logarithmic relation to describe the compression test in his theory of critical state and admitted decreases in stress during converging flow and increases in stress during diverging flow. Chris Szalwinski has defined a critical state as a multi-phase state at which the specific volume is the same in both solid and fluid phases. Under his definition the linear-logarithmic relation of the original theory and Jenike's logarithmic-logarithmic relation are special cases of a more general physical phenomenon.
== Stress tensor formulations ==
=== Plane stress ===
{\displaystyle \sigma =\left[{\begin{matrix}\sigma _{xx}&0&\tau _{xz}\\0&0&0\\\tau _{zx}&0&\sigma _{zz}\\\end{matrix}}\right]=\left[{\begin{matrix}\sigma _{xx}&\tau _{xz}\\\tau _{zx}&\sigma _{zz}\\\end{matrix}}\right]}
==== Drained conditions ====
===== Plane Strain State of Stress =====
Separation of Plane Strain Stress State Matrix into Distortional and Volumetric Parts:
{\displaystyle \sigma =\left[{\begin{matrix}\sigma _{xx}&0&\tau _{xz}\\0&0&0\\\tau _{zx}&0&\sigma _{zz}\\\end{matrix}}\right]=\left[{\begin{matrix}\sigma _{xx}&\tau _{xz}\\\tau _{zx}&\sigma _{zz}\\\end{matrix}}\right]=\left[{\begin{matrix}\sigma _{xx}-\sigma _{hydrostatic}&\tau _{xz}\\\tau _{zx}&\sigma _{zz}-\sigma _{hydrostatic}\\\end{matrix}}\right]+\left[{\begin{matrix}\sigma _{hydrostatic}&0\\0&\sigma _{hydrostatic}\\\end{matrix}}\right]}
{\displaystyle \sigma _{hydrostatic}=p_{mean}={\frac {\sigma _{xx}+\sigma _{zz}}{2}}}
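As a numerical check of this separation (all stress values here are assumed purely for illustration, not taken from any test), the 2×2 plane-strain tensor can be split into its volumetric and distortional parts:

```python
import numpy as np

# Assumed plane-strain stress state (kPa): [[σxx, τxz], [τzx, σzz]]
sigma = np.array([[120.0,  30.0],
                  [ 30.0, 180.0]])

p_mean = np.trace(sigma) / 2.0       # σ_hydrostatic = (σxx + σzz)/2
hydrostatic = p_mean * np.eye(2)     # volumetric part
deviatoric = sigma - hydrostatic     # distortional part

# The two parts must sum back to the original tensor.
assert np.allclose(deviatoric + hydrostatic, sigma)
print(p_mean)       # 150.0
print(deviatoric)   # [[-30.  30.]
                    #  [ 30.  30.]]
```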
After {\displaystyle \delta \sigma _{z}} loading:
{\displaystyle \left[{\begin{matrix}\sigma _{xx}-\sigma _{hydrostatic}&\tau _{xz}\\\tau _{zx}&\sigma _{zz}-\sigma _{hydrostatic}\\\end{matrix}}\right]+\left[{\begin{matrix}\sigma _{hydrostatic}&0\\0&\sigma _{hydrostatic}\\\end{matrix}}\right]+\left[{\begin{matrix}0&0\\0&\sigma _{z}\ \\\end{matrix}}\right]}
==== Drained state of stress ====
{\displaystyle \left[{\begin{matrix}\sigma _{xx}-\sigma _{hydrostatic}&\tau _{xz}\\\tau _{zx}&\sigma _{zz}-\sigma _{hydrostatic}\\\end{matrix}}\right]+\left[{\begin{matrix}\sigma _{hydrostatic}&0\\0&\sigma _{hydrostatic}\\\end{matrix}}\right]}
{\displaystyle +\left[{\begin{matrix}0&0\\0&\mathbf {\delta z} \ \\\end{matrix}}\right]=\left[{\begin{matrix}\sigma _{xx}-\sigma _{hydrostatic}&\tau _{xz}\\\tau _{zx}&\sigma _{zz}-\sigma _{hydrostatic}\\\end{matrix}}\right]+\left[{\begin{matrix}\sigma _{hydrostatic}&0\\0&\sigma _{hydrostatic}\\\end{matrix}}\right]}
{\displaystyle +\left[{\begin{matrix}{\frac {-{\delta p}_{w}}{2}}\ &0\\0&\sigma _{z}-{\frac {{\delta p}_{w}}{2}}\ \\\end{matrix}}\right]}
{\displaystyle +\left[{\begin{matrix}{\frac {{\delta p}_{w}}{2}}&0\\0&{\frac {{\delta p}_{w}}{2}}\ \\\end{matrix}}\right]}
===== Drained Plane Strain State =====
{\displaystyle \varepsilon _{z}={\frac {\Delta h}{h_{0}}};\ \varepsilon _{x}=\varepsilon _{y}=0}
{\displaystyle \varepsilon _{z}={\frac {1}{E}}(\sigma _{z}-\nu )(\sigma _{x}+\sigma _{z})={\frac {1}{E}}\sigma _{z}(1-2\nu \varepsilon )}
{\displaystyle \varepsilon ={\frac {\nu }{1-\nu }};\ \nu ={\frac {\varepsilon }{1+\varepsilon }}}
By matrix:
{\displaystyle \varepsilon _{z}={\frac {1}{E}}(1-2\nu \varepsilon )\ \left[\left[{\begin{matrix}\sigma _{xx}-\rho _{w}&\tau _{xz}\\\tau _{zx}&\sigma _{zz}-\rho _{w}\\\end{matrix}}\right]+\left[{\begin{matrix}\rho _{w}&0\\0&\rho _{w}\\\end{matrix}}\right]\right]}
==== Undrained conditions ====
===== Undrained state of stress =====
{\displaystyle \left[{\begin{matrix}\sigma _{xx}-\rho _{w}&\tau _{xz}\\\tau _{zx}&\sigma _{zz}-\rho _{w}\\\end{matrix}}\right]+}
{\displaystyle \left[{\begin{matrix}\rho _{w}&0\\0&\rho _{w}\\\end{matrix}}\right]+\left[{\begin{matrix}0&0\\0&\delta \sigma _{z}\ \\\end{matrix}}\right]=}
{\displaystyle =\left[{\begin{matrix}\sigma _{xx}-\rho _{w}&\tau _{xz}\\\tau _{zx}&\sigma _{zz}-\rho _{w}\\\end{matrix}}\right]+}
{\displaystyle \left[{\begin{matrix}\rho _{w}&0\\0&\rho _{w}\\\end{matrix}}\right]}
{\displaystyle +\ \ \left[{\begin{matrix}-{p}_{w}\ /\mathbf {2} &0\\0&\sigma _{z}-{p}_{w}/\mathbf {2} \ \\\end{matrix}}\right]+\left[{\begin{matrix}\delta p_{w}/2&0\\0&\delta p_{w}/\mathbf {2} \ \\\end{matrix}}\right]=}
{\displaystyle =\left[{\begin{matrix}\sigma _{xx}-\rho _{w}&\tau _{xz}\\\tau _{zx}&\sigma _{zz}-\rho _{w}\\\end{matrix}}\right]+}
{\displaystyle \left[{\begin{matrix}\rho _{w}&0\\0&\rho _{w}\\\end{matrix}}\right]}
{\displaystyle +\ \ \left[{\begin{matrix}-{p}_{w}\ /\mathbf {2} &0\\0&\sigma _{z}-{p}_{w}/\mathbf {2} \ \\\end{matrix}}\right]+\left[{\begin{matrix}\delta p_{w}/2&0\\0&\delta p_{w}/\mathbf {2} \ \\\end{matrix}}\right]+}
{\displaystyle \left[{\begin{matrix}0&\tau _{xz}\\{\tau }_{zx}&0\\\end{matrix}}\right]-\left[{\begin{matrix}0&{\delta p}_{w,int}\\{\delta p}_{w,int}&0\\\end{matrix}}\right]}
===== Undrained Strain State of Stress =====
=== Undrained state of Plane Strain State ===
{\displaystyle \varepsilon _{z}={\frac {1}{E}}\left(1-2\nu \varepsilon \right)=}
{\displaystyle =\left[\left[{\begin{matrix}\sigma _{xx}-\rho _{w}&\tau _{xz}\\\tau _{zx}&\sigma _{zz}-\rho _{w}\\\end{matrix}}\right]+\left[{\begin{matrix}\rho _{w}&0\\0&\rho _{w}\\\end{matrix}}\right]+\left[{\begin{matrix}0&\delta \tau _{xz}\\{\delta \tau }_{zx}&0\\\end{matrix}}\right]-\left[{\begin{matrix}0&{\delta p}_{w,int}\\{\delta p}_{w,int}&0\\\end{matrix}}\right]\right]=}
{\displaystyle ={\frac {1}{E}}\left(1-2\nu \varepsilon \right)\left[\rho _{u}+\rho _{w}+p\right]}
{\displaystyle \rho _{u}=K_{u}\Delta \varepsilon _{z};\ \ \rho _{w}={\frac {K_{w}}{n}}\Delta \varepsilon _{z};\ \ \rho =K\Delta \varepsilon _{z};}
=== Triaxial State of Stress ===
Separation of the Matrix into Distortional and Volumetric Parts:
{\displaystyle \sigma =\left[{\begin{matrix}\sigma _{r}&0&0\\0&\sigma _{r}&0\\0&0&\sigma _{z}\\\end{matrix}}\right]=\left[{\begin{matrix}\sigma _{r}-\sigma _{hydrostatic}&0&0\\0&\sigma _{r}-\sigma _{hydrostatic}&0\\0&0&\sigma _{z}-\sigma _{hydrostatic}\\\end{matrix}}\right]+\left[{\begin{matrix}\sigma _{hydrostatic}&0&0\\0&\sigma _{hydrostatic}&0\\0&0&\sigma _{hydrostatic}\\\end{matrix}}\right]}
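The same separation can be checked numerically for the triaxial case. The radial and axial stresses below are assumed for illustration only:

```python
import numpy as np

# Assumed triaxial stress state (kPa): σr on the two radial axes, σz axial.
sigma_r, sigma_z = 10.0, 14.0
sigma = np.diag([sigma_r, sigma_r, sigma_z])

p_hydro = np.trace(sigma) / 3.0        # mean stress = (2σr + σz)/3
volumetric = p_hydro * np.eye(3)
distortional = sigma - volumetric

assert np.allclose(distortional + volumetric, sigma)
assert abs(np.trace(distortional)) < 1e-12   # distortional part is traceless
```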
=== Undrained state of Triaxial stress ===
{\displaystyle \left[{\begin{matrix}\sigma _{r}-\sigma _{hydrostatic}&0&0\\0&\sigma _{r}-\sigma _{hydrostatic}&0\\0&0&\sigma _{z}-\sigma _{hydrostatic}\\\end{matrix}}\right]}
{\displaystyle +\left[{\begin{matrix}\sigma _{hydrostatic}&0&0\\0&\sigma _{hydrostatic}&0\\0&0&\sigma _{hydrostatic}\\\end{matrix}}\right]}
{\displaystyle +\left[{\begin{matrix}-\left({\frac {r}{2H\ast 3}}\right)p_{w}&0&0\\0&-\left({\frac {r}{2H\ast 3}}\right)p_{w}&0\\0&0&\sigma _{z}-\left({\frac {r}{2H\ast 3}}\right)p_{w}\\\end{matrix}}\right]}
{\displaystyle -\ \ \left[{\begin{matrix}{\left({\frac {r}{2H\ast 3}}\right)p}_{w}&0&0\\0&{\left({\frac {r}{2H\ast 3}}\right)p}_{w}&0\\0&0&\left({\frac {r}{2H\ast 3}}\right)p_{w}\\\end{matrix}}\right]+}
{\displaystyle \left[{\begin{matrix}0&0&\delta \tau _{xz}\\0&0&0\\\delta \tau _{zx}&0&0\\\end{matrix}}\right]}
{\displaystyle +\left[{\begin{matrix}{\delta p_{w,int}}&0&0\\0&{\delta p_{w,int}}&0\\0&0&{\delta p_{w,int}}\\\end{matrix}}\right]+}
{\displaystyle \left[{\begin{matrix}{-\delta p_{w,int}}&0&0\\0&{-\delta p_{w,int}}&0\\0&0&{-\delta p_{w,int}}\\\end{matrix}}\right]+}
{\displaystyle \left[{\begin{matrix}0&0&-\delta \tau _{xz}\\0&0&0\\-\delta \tau _{zx}&0&0\\\end{matrix}}\right]}
=== Drained state of Triaxial stress ===
In the case of drainage, only the volumetric part remains:
{\displaystyle \left[{\begin{matrix}\sigma _{r}-\sigma _{hydrostatic}&0&0\\0&\sigma _{r}-\sigma _{hydrostatic}&0\\0&0&\sigma _{z}-\sigma _{hydrostatic}\\\end{matrix}}\right]}
{\displaystyle +\left[{\begin{matrix}\sigma _{hydrostatic}&0&0\\0&\sigma _{hydrostatic}&0\\0&0&\sigma _{hydrostatic}\\\end{matrix}}\right]}
{\displaystyle +\left[{\begin{matrix}-\left({\frac {r}{2H\ast 3}}\right)p_{w}&0&0\\0&-\left({\frac {r}{2H\ast 3}}\right)p_{w}&0\\0&0&\sigma _{z}-\left({\frac {r}{2H\ast 3}}\right)p_{w}\\\end{matrix}}\right]}
{\displaystyle -\ \ \left[{\begin{matrix}{\left({\frac {r}{2H\ast 3}}\right)p}_{w}&0&0\\0&{\left({\frac {r}{2H\ast 3}}\right)p}_{w}&0\\0&0&\left({\frac {r}{2H\ast 3}}\right)p_{w}\\\end{matrix}}\right]+}
== Example solution in matrix form ==
The following data were obtained from a conventional triaxial compression test on a saturated (B=1), normally consolidated simple clay (Ladd, 1964). The cell pressure was held constant at 10 kPa, while the axial stress was increased to failure (axial compression test).
Initial phase:
{\displaystyle \sigma =\left[{\begin{matrix}\sigma _{r}&0&0\\0&\sigma _{r}&0\\0&0&\sigma _{z}\\\end{matrix}}\right]=\left[{\begin{matrix}0&0&0\\0&10&0\\0&0&10\\\end{matrix}}\right]}
Step one:
{\displaystyle \sigma _{1}=\left[{\begin{matrix}0&0&0\\0&10&0\\0&0&10\\\end{matrix}}\right]+\mathbf {\sigma } =\left[{\begin{matrix}0&0&0\\0&10&0\\0&0&10\\\end{matrix}}\right]+\left[{\begin{matrix}1&0&0\\0&0&3.5\\0&-1&0\\\end{matrix}}\right]}
{\displaystyle \left[{\begin{matrix}1-1.9&0&0\\0&10-1.9&3.5\\0&-1\ &10-1.9\\\end{matrix}}\right]+\left[{\begin{matrix}1.9&0&0\\0&1.9&0\\0&0&1.9\\\end{matrix}}\right]}
Steps 2–9 follow the same procedure as step one.
Step seven:
{\displaystyle \sigma _{7}=\left[{\begin{matrix}12-4.4\ \ \ &0&0\\0&10-4.4&2.9\\0&-2\ &10-4.4\\\end{matrix}}\right]+\left[{\begin{matrix}4.4&0&0\\0&4.4&0\\0&0&4.4\\\end{matrix}}\right]}
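As a quick sketch, the two step-seven matrices above can be recombined numerically to recover the total stress state (values in kPa, as quoted in the example):

```python
import numpy as np

# The distortional and volumetric parts quoted for step seven.
deviatoric = np.array([[12 - 4.4, 0.0, 0.0],
                       [0.0, 10 - 4.4, 2.9],
                       [0.0, -2.0, 10 - 4.4]])
hydrostatic = 4.4 * np.eye(3)

# Summing the parts recovers the total stress tensor.
total = deviatoric + hydrostatic
assert np.allclose(total, [[12.0, 0.0, 0.0],
                           [0.0, 10.0, 2.9],
                           [0.0, -2.0, 10.0]])
```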
== Notes ==
== References == | Wikipedia/Critical_state_soil_mechanics |
Erosion control is the practice of preventing or controlling wind or water erosion in agriculture, land development, coastal areas, river banks and construction. Effective erosion controls handle surface runoff and are important techniques in preventing water pollution, soil loss, wildlife habitat loss and human property loss.
== Usage ==
Erosion controls are used in natural areas, agricultural settings or urban environments. In urban areas erosion controls are often part of stormwater runoff management programs required by local governments. The controls often involve the creation of a physical barrier, such as vegetation or rock, to absorb some of the energy of the wind or water that is causing the erosion. They also involve building and maintaining storm drains. On construction sites they are often implemented in conjunction with sediment controls such as sediment basins and silt fences.
Bank erosion is a natural process: without it, rivers would not meander and change course. However, land management patterns that change the hydrograph and/or vegetation cover can act to increase or decrease channel migration rates. In many places, whether or not the banks are unstable due to human activities, people try to keep a river in a single place. This can be done for environmental reclamation or to prevent a river from changing course into land that is being used by people. One way that this is done is by placing riprap or gabions along the bank.
== Examples ==
Examples of erosion control methods include the following:
== Mathematical modeling ==
Since the 1920s and 1930s scientists have been creating mathematical models for understanding the mechanisms of soil erosion and resulting sediment surface runoff, including an early paper by Albert Einstein applying Baer's law. These models have addressed both gully and sheet erosion. Earliest models were a simple set of linked equations which could be employed by manual calculation. By the 1970s the models had expanded to complex computer models addressing nonpoint source pollution with thousands of lines of computer code. The more complex models were able to address nuances in micrometeorology, soil particle size distributions and micro-terrain variation.
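One of the best known of these simple linked equations is the Universal Soil Loss Equation (USLE), A = R·K·LS·C·P. A minimal sketch follows; all factor values are placeholders chosen for illustration, not measured field data:

```python
# Universal Soil Loss Equation: A = R * K * LS * C * P
def usle(R, K, LS, C, P):
    """Predicted long-term average annual soil loss A (e.g. tons/acre/year)."""
    return R * K * LS * C * P

A = usle(R=170,    # rainfall erosivity factor (placeholder)
         K=0.32,   # soil erodibility factor (placeholder)
         LS=1.2,   # slope length-steepness factor (placeholder)
         C=0.10,   # cover-management factor (placeholder)
         P=0.5)    # support-practice factor (placeholder)
print(round(A, 3))  # 3.264
```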
== See also ==
Bridge scour
Burned area emergency response
Certified Professional in Erosion and Sediment Control
Coastal management
Dust Bowl
Natural Resources Conservation Service (United States)
Tillage erosion
Universal Soil Loss Equation
Vetiver System
== Notes ==
== References ==
== External links ==
"Saving Runaway Farm Land", November 1930, Popular Mechanics One of the first articles on the problem of soil erosion control
Erosion Control Technology Council - a trade organization for the erosion control industry
International Erosion Control Association - Professional Association, Publications, Training
Soil Bioengineering and Biotechnical Slope Stabilization - Erosion Control subsection of a website on Riparian Habitat Restoration | Wikipedia/Erosion_control |
In fluid mechanics, materials science and Earth sciences, the permeability of porous media (often, a rock or soil) is a measure of the ability for fluids (gas or liquid) to flow through the media; it is commonly symbolized as k.
Fluids can more easily flow through a material with high permeability than one with low permeability.
The permeability of a medium is related to the porosity, but also to the shapes of the pores in the medium and their level of connectedness.
Fluid flows can also be influenced in different lithological settings by brittle deformation of rocks in fault zones; the mechanisms by which this occurs are the subject of fault zone hydrogeology. Permeability is also affected by the pressure inside a material.
The SI unit for permeability is the square metre (m2). A practical unit for permeability is the darcy (d), or more commonly the millidarcy (md) (1 d ≈ 10−12 m2). The name honors the French engineer Henry Darcy, who first described the flow of water through sand filters for potable water supply. Permeability values for most materials typically range from a fraction of a millidarcy to several thousand millidarcys. The unit of square centimetre (cm2) is also sometimes used (1 cm2 = 10−4 m2 ≈ 108 d).
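A small conversion sketch, using the more precise value 1 d ≈ 9.869233×10−13 m2 rather than the rounded 10−12 m2:

```python
# 1 darcy ≈ 9.869233e-13 m² (often rounded to 1e-12 m²);
# 1 millidarcy (md) = 1e-3 darcy.
DARCY_M2 = 9.869233e-13

def md_to_m2(md: float) -> float:
    """Convert millidarcys to square metres."""
    return md * 1e-3 * DARCY_M2

print(md_to_m2(100.0))   # ≈ 9.87e-14 m², a typical reservoir-rock value
```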
== Applications ==
The concept of permeability is of importance in determining the flow characteristics of hydrocarbons in oil and gas reservoirs, and of groundwater in aquifers.
For a rock to be considered as an exploitable hydrocarbon reservoir without stimulation, its permeability must be greater than approximately 100 md (depending on the nature of the hydrocarbon – gas reservoirs with lower permeabilities are still exploitable because of the lower viscosity of gas in comparison with oil). Rocks with permeabilities significantly lower than 100 md can form efficient seals (see petroleum geology). Unconsolidated sands may have permeabilities of over 5000 md.
The concept also has many practical applications outside of geology, for example in chemical engineering (e.g., filtration), as well as in Civil Engineering when determining whether the ground conditions of a site are suitable for construction.
The concept of permeability is also useful in computational fluid dynamics (CFD) for modeling flow through complex geometries such as packed beds, filter papers, or tube banks. When the size of individual components—such as particle diameter in packed beds or tube diameter in tube bundles—are significantly smaller than the overall flow domain, direct modeling becomes computationally intensive due to the fine mesh resolution required. In such cases, the domain can be approximated as a porous medium, with permeability estimated using correlations, experimental data, or separate fluid flow simulations.
== Description ==
Permeability is part of the proportionality constant in Darcy's law which relates discharge (flow rate) and fluid physical properties (e.g. dynamic viscosity), to a pressure gradient applied to the porous media:
{\displaystyle v={\frac {k}{\eta }}{\frac {\Delta P}{\Delta x}}}
(for linear flow)
Therefore:
{\displaystyle k=v{\frac {\eta \,\Delta x}{\Delta P}}}
where:
{\displaystyle v} is the fluid velocity through the porous medium (i.e., the average flow velocity calculated as if the fluid were the only phase present in the porous medium) (m/s)
{\displaystyle k} is the permeability of a medium (m2)
{\displaystyle \eta } is the dynamic viscosity of the fluid (Pa·s)
{\displaystyle \Delta P} is the applied pressure difference (Pa)
{\displaystyle \Delta x} is the thickness of the bed of the porous medium (m)
In naturally occurring materials, the permeability values range over many orders of magnitude (see table below for an example of this range).
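As a numerical illustration of the rearranged relation k = v·η·Δx/ΔP, here is a sketch with an invented steady-state flow measurement (none of these numbers come from a real test):

```python
# Assumed steady-state flow test through a water-saturated sample.
eta = 1.0e-3   # dynamic viscosity of water, Pa·s
dx = 0.10      # thickness of the porous bed, m
dP = 2.0e5     # applied pressure difference, Pa
v = 2.0e-6     # measured Darcy (superficial) velocity, m/s

# Rearranged Darcy relation.
k = v * eta * dx / dP
print(k)       # 1e-15 m², i.e. roughly one millidarcy
```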
=== Relation to hydraulic conductivity ===
The global proportionality constant for the flow of water through a porous medium is called the hydraulic conductivity (K, unit: m/s). Permeability, or intrinsic permeability, (k, unit: m2) is a part of this, and is a specific property characteristic of the solid skeleton and the microstructure of the porous medium itself, independently of the nature and properties of the fluid flowing through the pores of the medium. This makes it possible to take into account the effect of temperature on the viscosity of the fluid flowing through the porous medium, and to address fluids other than pure water, e.g., concentrated brines, petroleum, or organic solvents. Given the value of hydraulic conductivity for a studied system, the permeability can be calculated as follows:
{\displaystyle k=K{\frac {\eta }{\rho g}}}
where
{\displaystyle k} is the permeability, m2
{\displaystyle K} is the hydraulic conductivity, m/s
{\displaystyle \eta } is the dynamic viscosity of the fluid, Pa·s
{\displaystyle \rho } is the density of the fluid, kg/m3
{\displaystyle g} is the acceleration due to gravity, m/s2.
=== Anisotropic permeability ===
Tissues such as brain, liver, and muscle can be treated as heterogeneous porous media. Describing the flow of biofluids (blood, cerebrospinal fluid, etc.) within such a medium requires a full 3-dimensional anisotropic treatment of the tissue. In this case the scalar hydraulic permeability is replaced with the hydraulic permeability tensor so that Darcy's Law reads
{\displaystyle {\boldsymbol {q}}=-{\frac {1}{\eta }}{\boldsymbol {\kappa }}\cdot \nabla P}
{\displaystyle {\boldsymbol {q}}} is the Darcy flux, or filtration velocity, which describes the bulk (not microscopic) velocity field of the fluid, {\displaystyle [{\text{Length}}][{\text{Time}}]^{-1}}
{\displaystyle \eta } is the dynamic viscosity of the fluid, {\displaystyle [{\text{Mass}}][{\text{L}}]^{-1}[T]^{-1}}
{\displaystyle {\boldsymbol {\kappa }}} is the hydraulic permeability tensor, {\displaystyle [{\text{L}}]^{2}}
{\displaystyle \nabla } is the gradient operator, {\displaystyle [{\text{L}}]^{-1}}
{\displaystyle P} is the pressure field in the fluid, {\displaystyle [{\text{M}}][{\text{L}}]^{-1}[{\text{T}}]^{-2}}
Connecting this expression to the isotropic case, {\displaystyle {\boldsymbol {\kappa }}=k\mathbb {1} }, where k is the scalar hydraulic permeability, and 1 is the identity tensor.
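The tensor form of Darcy's law can be sketched numerically. The permeability tensor and pressure gradient below are assumed for illustration only, not measured tissue values:

```python
import numpy as np

# Darcy flux q = -(1/η) κ · ∇P with an assumed diagonal permeability tensor.
eta = 1.0e-3                                    # dynamic viscosity, Pa·s
kappa = np.diag([2.0e-14, 2.0e-14, 5.0e-15])    # m²; less permeable along z
grad_P = np.array([0.0, 0.0, 1.0e4])            # Pa/m, pressure rises with z

q = -(1.0 / eta) * kappa @ grad_P
print(q)   # flux of magnitude 5e-8 m/s in the -z direction,
           # opposing the pressure gradient
```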
== Determination ==
Permeability is typically determined in the lab by application of Darcy's law under steady state conditions or, more generally, by application of various solutions to the diffusion equation for unsteady flow conditions.
Permeability needs to be measured, either directly (using Darcy's law), or through estimation using empirically derived formulas. However, for some simple models of porous media, permeability can be calculated (e.g., random close packing of identical spheres).
=== Permeability model based on conduit flow ===
Based on the Hagen–Poiseuille equation for viscous flow in a pipe, permeability can be expressed as:
{\displaystyle k_{I}=C\cdot d^{2}}
where:
{\displaystyle k_{I}} is the intrinsic permeability [length2]
{\displaystyle C} is a dimensionless constant that is related to the configuration of the flow-paths
{\displaystyle d} is the average, or effective pore diameter [length].
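An order-of-magnitude sketch of the conduit-flow form; both the configuration constant C and the pore diameter d are assumed values, not derived from any particular medium:

```python
# Conduit-flow estimate k_I = C * d² with assumed parameters.
C = 1.0e-3     # dimensionless flow-path configuration constant (assumed)
d = 1.0e-4     # effective pore diameter, m (fine-sand scale, assumed)

k_I = C * d ** 2
print(k_I)     # 1e-11 m², within the range quoted for unconsolidated sands
```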
== Absolute permeability (aka intrinsic or specific permeability) ==
Absolute permeability denotes the permeability in a porous medium that is 100% saturated with a single-phase fluid. This may also be called the intrinsic permeability or specific permeability. These terms refer to the quality that the permeability value in question is an intensive property of the medium, not a spatial average of a heterogeneous block of material, and that it is a function of the material structure only (and not of the fluid). They explicitly distinguish the value from that of relative permeability.
== Permeability to gases ==
Sometimes, permeability to gases can be somewhat different from that for liquids in the same media. One difference is attributable to the "slippage" of gas at the interface with the solid when the gas mean free path is comparable to the pore size (about 0.01 to 0.1 μm at standard temperature and pressure). See also Knudsen diffusion and constrictivity. For example, measurement of permeability through sandstones and shales yielded values from 9.0×10−19 m2 to 2.4×10−12 m2 for water and between 1.7×10−17 m2 to 2.6×10−12 m2 for nitrogen gas. Gas permeability of reservoir rock and source rock is important in petroleum engineering, when considering the optimal extraction of gas from unconventional sources such as shale gas, tight gas, or coalbed methane.
== Permeability tensor ==
To model permeability in anisotropic media, a permeability tensor is needed. Pressure can be applied in three directions, and for each direction, permeability can be measured (via Darcy's law in 3D) in three directions, thus leading to a 3 by 3 tensor. The tensor is realised using a 3 by 3 matrix being both symmetric and positive definite (SPD matrix):
The tensor is symmetric by the Onsager reciprocal relations
The tensor is positive definite because the energy being expended (the inner product of fluid flow and negative pressure gradient) is always positive
The permeability tensor is always diagonalizable (being both symmetric and positive definite). The eigenvectors will yield the principal directions of flow where flow is parallel to the pressure gradient, and the eigenvalues represent the principal permeabilities.
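The diagonalization can be sketched with a symmetric eigendecomposition; the tensor values below are assumed for illustration:

```python
import numpy as np

# An assumed symmetric positive-definite (SPD) permeability tensor, m².
kappa = np.array([[3.0e-14, 1.0e-14, 0.0],
                  [1.0e-14, 3.0e-14, 0.0],
                  [0.0,     0.0,     1.0e-14]])

# np.linalg.eigh is for symmetric matrices and returns eigenvalues in
# ascending order: the principal permeabilities. The columns of `vecs`
# are the corresponding principal directions of flow.
vals, vecs = np.linalg.eigh(kappa)
assert np.all(vals > 0)   # positive definiteness check
print(vals)               # [1.e-14 2.e-14 4.e-14]
```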
== Ranges of common intrinsic permeabilities ==
These values do not depend on the fluid properties; see the table derived from the same source for values of hydraulic conductivity, which are specific to the material through which the fluid is flowing.
== See also ==
Fault zone hydrogeology
Hydraulic conductivity
Hydrogeology
Permeation
Petroleum geology
Relative permeability
Klinkenberg correction
Electrical resistivity measurement of concrete
Permeability of soils
== References ==
== Further reading ==
Wang, H. F., 2000. Theory of Linear Poroelasticity with Applications to Geomechanics and Hydrogeology, Princeton University Press. ISBN 0-691-03746-9
== External links ==
Defining Permeability
Tailoring porous media to control permeability
Permeability of Porous Media
Graphical depiction of different flow rates through materials of differing permeability
Web-based porosity and permeability calculator given flow characteristics
Multiphase fluid flow in porous media
Florida Method of Test For Concrete Resistivity as an Electrical Indicator of its Permeability Archived 2011-06-16 at the Wayback Machine | Wikipedia/Permeability_(earth_sciences) |
Earth materials include minerals, rocks, soil and water. These are the naturally occurring materials found on Earth that constitute the raw materials upon which our global society exists. Earth materials are vital resources that provide the basic components for life, agriculture and industry.
== Definitions ==
The type of materials available locally will vary depending on the conditions in the area of the building site. The considerations explained below should be taken into account.
In many areas, indigenous stone is available from the local region, such as limestone, marble, granite, and sandstone. It may be cut in quarries or removed from the surface of the ground (flag and fieldstone). Ideally, stone from the building site can be utilized. Depending on the stone type, it can be used for structural block, facing block, pavers, and crushed stone.
Most brick plants are located near the clay source they use to make brick. Bricks are molded and baked blocks of clay. Brick products come in many forms, including structural brick, face brick, roof tile, structural tile, paving brick, and floor tile.
Caliche is a soft limestone material which is mined from areas with calcium-carbonate soils and limestone bedrock. It is best known as a road bed material, but it can be processed into an unfired building block, stabilized with an additive such as cement. Other earth materials include soil blocks typically stabilized with a cement additive and produced with forms or compression.
Rammed earth consists of walls made from moist, sandy soil, or stabilized soil, which is tamped into formwork. Walls are a minimum of 12 inches (30 cm) thick. Soils should contain about 30% clay and 70% sand.
== Considerations ==
The use of locally available and indigenous earth materials has several advantages in terms of sustainability. They are:
Reduction of energy costs related to transportation.
Reduction of material costs due to reduced transportation costs, especially for well-established industries.
Support of local businesses and resource bases.
Care must be taken to ensure that non-renewable earth materials are not over-extracted. Ecological balance within the region needs to be maintained while efficiently utilizing its resources. Many local suppliers carry materials that have been shipped in from out of the area, so it is important to ask for locally produced/quarried materials.
Both brick and stone materials are aesthetically pleasing, durable, and low maintenance. Exterior walls weather well, eliminating the need for constant refinishing and sealing. Interior use of brick and stone can also provide excellent thermal mass, or be used to provide radiant heat. Some stone and brick makes an ideal flooring or exterior paving material, cool in summer and possessing good thermal properties for passive solar heating. Caliche block has been produced for applications similar to stone and brick mentioned above. Caliche or earth material block has special structural and finishing characteristics.
Rammed earth is more often considered for use in walls, although it can also be used for floors. Rammed earth and caliche block can be used for structural walls, and offer great potential as low-cost material alternatives with low embodied energy. In addition, such materials are fireproof.
Caliche block and rammed earth can be produced on-site. It is very important to have soils tested for construction material use. Some soils, such as highly expansive or bentonite soils, are not suitable for structural use. Testing labs are available in most areas to determine material suitability for structural use and meeting codes.
Soils for traditional adobe construction are not found in some areas, but other soils for earth building options are available. Many areas have a high percentage of soils suitable for ramming.
(Official areas are approximately 19,610 acres in the Austin, TX area, according to the U.S. Department of Agriculture).
Caliche is also abundant in many areas (covering 14% of the Austin geographic area, for instance) and is readily available locally.
== See also ==
Structure of Earth
== References ==
== External links ==
BGS Open Data Archived 2013-09-21 at the Wayback Machine Earth Materials Ontologies | Wikipedia/Earth_materials |
Environmental soil science is the study of the interaction of humans with the pedosphere as well as critical aspects of the biosphere, the lithosphere, the hydrosphere, and the atmosphere. Environmental soil science addresses both the fundamental and applied aspects of the field including: buffers and surface water quality, vadose zone functions, septic drain field site assessment and function, land treatment of wastewater, stormwater, erosion control, soil contamination with metals and pesticides, remediation of contaminated soils, restoration of wetlands, soil degradation, nutrient management, movement of viruses and bacteria in soils and waters, bioremediation, application of molecular biology and genetic engineering to development of soil microbes that can degrade hazardous pollutants, land use, global warming, acid rain, and the study of anthropogenic soils, such as terra preta. Much of the research done in environmental soil science is produced through the use of models.
== See also ==
Soil functions
== References ==
== Bibliography ==
Hillel, D., J.L. Hatfield, D.S. Powlson, C. Rosenweig, K.M. Scow, M.J. Singer and D.L. Sparks, eds. (2004) Encyclopedia of Soils in the Environment, Four-Volume Set, Volumes 1–4, ISBN 0-12-348530-4
== External links ==
Media related to Environmental soil science at Wikimedia Commons | Wikipedia/Environmental_soil_science |
In fluid mechanics, materials science and Earth sciences, the permeability of porous media (often, a rock or soil) is a measure of the ability for fluids (gas or liquid) to flow through the media; it is commonly symbolized as k.
Fluids can more easily flow through a material with high permeability than one with low permeability.
The permeability of a medium is related to the porosity, but also to the shapes of the pores in the medium and their level of connectedness.
Fluid flows can also be influenced in different lithological settings by brittle deformation of rocks in fault zones; the mechanisms by which this occurs are the subject of fault zone hydrogeology. Permeability is also affected by the pressure inside a material.
The SI unit for permeability is the square metre (m2). A practical unit for permeability is the darcy (d), or more commonly the millidarcy (md) (1 d ≈ 10−12 m2). The name honors the French engineer Henry Darcy, who first described the flow of water through sand filters for potable water supply. Permeability values for most materials commonly range from a fraction of a millidarcy to several thousand millidarcys. The unit of square centimetre (cm2) is also sometimes used (1 cm2 = 10−4 m2 ≈ 108 d).
== Applications ==
The concept of permeability is of importance in determining the flow characteristics of hydrocarbons in oil and gas reservoirs, and of groundwater in aquifers.
For a rock to be considered as an exploitable hydrocarbon reservoir without stimulation, its permeability must be greater than approximately 100 md (depending on the nature of the hydrocarbon – gas reservoirs with lower permeabilities are still exploitable because of the lower viscosity of gas in comparison with oil). Rocks with permeabilities significantly lower than 100 md can form efficient seals (see petroleum geology). Unconsolidated sands may have permeabilities of over 5000 md.
The concept also has many practical applications outside of geology, for example in chemical engineering (e.g., filtration), as well as in civil engineering when determining whether the ground conditions of a site are suitable for construction.
The concept of permeability is also useful in computational fluid dynamics (CFD) for modeling flow through complex geometries such as packed beds, filter papers, or tube banks. When the size of individual components—such as particle diameter in packed beds or tube diameter in tube bundles—are significantly smaller than the overall flow domain, direct modeling becomes computationally intensive due to the fine mesh resolution required. In such cases, the domain can be approximated as a porous medium, with permeability estimated using correlations, experimental data, or separate fluid flow simulations.
== Description ==
Permeability is part of the proportionality constant in Darcy's law which relates discharge (flow rate) and fluid physical properties (e.g. dynamic viscosity), to a pressure gradient applied to the porous media:
{\displaystyle v={\frac {k}{\eta }}{\frac {\Delta P}{\Delta x}}}
(for linear flow)
Therefore:
{\displaystyle k=v{\frac {\eta \,\Delta x}{\Delta P}}}
where:
{\displaystyle v} is the fluid velocity through the porous medium (i.e., the average flow velocity calculated as if the fluid were the only phase present in the porous medium) (m/s)
{\displaystyle k} is the permeability of the medium (m2)
{\displaystyle \eta } is the dynamic viscosity of the fluid (Pa·s)
{\displaystyle \Delta P} is the applied pressure difference (Pa)
{\displaystyle \Delta x} is the thickness of the bed of the porous medium (m)
In naturally occurring materials, the permeability values range over many orders of magnitude (see table below for an example of this range).
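As a sketch of how Darcy's law and the darcy unit combine in practice, the following Python snippet computes the flow velocity of water through a 100 md medium; the permeability, viscosity, and pressure-gradient values are illustrative assumptions, not figures from the article:

```python
# Darcy's law for linear flow: v = (k / eta) * (dP / dx).
# Illustrative values (not from the article): 100 md rock, water at ~20 C.
MILLIDARCY_TO_M2 = 9.869233e-16  # 1 darcy ~ 0.987e-12 m^2, so 1 md ~ 1e-15 m^2

def darcy_velocity(k_m2, eta_pa_s, dP_pa, dx_m):
    """Average flow velocity (m/s) through the porous medium."""
    return (k_m2 / eta_pa_s) * (dP_pa / dx_m)

k = 100 * MILLIDARCY_TO_M2   # permeability, m^2
eta = 1.0e-3                 # dynamic viscosity of water, Pa*s
v = darcy_velocity(k, eta, dP_pa=1.0e5, dx_m=1.0)  # on the order of 1e-5 m/s
```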
=== Relation to hydraulic conductivity ===
The global proportionality constant for the flow of water through a porous medium is called the hydraulic conductivity (K, unit: m/s). Permeability, or intrinsic permeability, (k, unit: m2) is a part of this, and is a specific property characteristic of the solid skeleton and the microstructure of the porous medium itself, independent of the nature and properties of the fluid flowing through the pores of the medium. This makes it possible to take into account the effect of temperature on the viscosity of the fluid flowing through the porous medium, and to address fluids other than pure water, e.g., concentrated brines, petroleum, or organic solvents. Given the value of hydraulic conductivity for a studied system, the permeability can be calculated as follows:
{\displaystyle k=K{\frac {\eta }{\rho g}}}
where
{\displaystyle k} is the permeability, m2
{\displaystyle K} is the hydraulic conductivity, m/s
{\displaystyle \eta } is the dynamic viscosity of the fluid, Pa·s
{\displaystyle \rho } is the density of the fluid, kg/m3
{\displaystyle g} is the acceleration due to gravity, m/s2.
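The conversion can be sketched numerically; the values below (water at about 20 °C, and a hydraulic conductivity typical of clean sand) are illustrative assumptions:

```python
# k = K * eta / (rho * g): intrinsic permeability from hydraulic conductivity.
# Illustrative assumptions: water at ~20 C (eta, rho) and K = 1e-5 m/s,
# a hydraulic conductivity typical of clean sand.
def intrinsic_permeability(K_m_per_s, eta=1.0e-3, rho=998.0, g=9.81):
    """Permeability k (m^2) from hydraulic conductivity K (m/s)."""
    return K_m_per_s * eta / (rho * g)

k = intrinsic_permeability(1.0e-5)  # ~1e-12 m^2, i.e. about one darcy
```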
=== Anisotropic permeability ===
Tissue such as brain, liver, muscle, etc. can be treated as a heterogeneous porous medium. Describing the flow of biofluids (blood, cerebrospinal fluid, etc.) within such a medium requires a full 3-dimensional anisotropic treatment of the tissue. In this case the scalar hydraulic permeability is replaced with the hydraulic permeability tensor, so that Darcy's law reads
{\displaystyle {\boldsymbol {q}}=-{\frac {1}{\eta }}{\boldsymbol {\kappa }}\cdot \nabla P}
where:
{\displaystyle {\boldsymbol {q}}} is the Darcy flux, or filtration velocity, which describes the bulk (not microscopic) velocity field of the fluid, {\displaystyle [{\text{L}}][{\text{T}}]^{-1}}
{\displaystyle \eta } is the dynamic viscosity of the fluid, {\displaystyle [{\text{M}}][{\text{L}}]^{-1}[{\text{T}}]^{-1}}
{\displaystyle {\boldsymbol {\kappa }}} is the hydraulic permeability tensor, {\displaystyle [{\text{L}}]^{2}}
{\displaystyle \nabla } is the gradient operator, {\displaystyle [{\text{L}}]^{-1}}
{\displaystyle P} is the pressure field in the fluid, {\displaystyle [{\text{M}}][{\text{L}}]^{-1}[{\text{T}}]^{-2}}
Connecting this expression to the isotropic case, {\displaystyle {\boldsymbol {\kappa }}=k\mathbb {1} }, where k is the scalar hydraulic permeability and 1 is the identity tensor.
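A minimal sketch of the tensor form of Darcy's law, with an assumed diagonal permeability tensor (principal axes aligned with the coordinate axes) and an illustrative pressure gradient:

```python
# Tensor form of Darcy's law: q = -(1/eta) * kappa . grad(P).
# kappa here is an assumed diagonal (principal-axis-aligned) SPD tensor in m^2;
# the pressure gradient and viscosity values are illustrative.
def darcy_flux(kappa, grad_p, eta):
    """Bulk (filtration) velocity vector, one component per axis."""
    return [-sum(kappa[i][j] * grad_p[j] for j in range(3)) / eta
            for i in range(3)]

kappa = [[2.0e-12, 0.0, 0.0],
         [0.0, 1.0e-12, 0.0],
         [0.0, 0.0, 5.0e-13]]
q = darcy_flux(kappa, grad_p=[-1.0e5, 0.0, 0.0], eta=1.0e-3)
# Setting kappa = k * identity recovers the scalar (isotropic) Darcy law.
```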
== Determination ==
Permeability is typically determined in the lab by application of Darcy's law under steady state conditions or, more generally, by application of various solutions to the diffusion equation for unsteady flow conditions.
Permeability needs to be measured, either directly (using Darcy's law), or through estimation using empirically derived formulas. However, for some simple models of porous media, permeability can be calculated (e.g., random close packing of identical spheres).
=== Permeability model based on conduit flow ===
Based on the Hagen–Poiseuille equation for viscous flow in a pipe, permeability can be expressed as:
{\displaystyle k_{I}=C\cdot d^{2}}
where:
{\displaystyle k_{I}} is the intrinsic permeability [length2]
{\displaystyle C} is a dimensionless constant related to the configuration of the flow-paths
{\displaystyle d} is the average, or effective, pore diameter [length].
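As a rough numerical sketch, assuming C = 1/32 (the value for a single idealized straight circular tube, which follows from the Hagen–Poiseuille equation; real media have much smaller effective constants):

```python
# Conduit-flow estimate k_I = C * d^2. The constant C = 1/32 below is the
# value for a single idealized straight circular tube (from Hagen-Poiseuille);
# real porous media have much smaller effective constants, so this is only
# an upper-bound style sketch.
def conduit_permeability(d_m, C=1.0 / 32.0):
    """Intrinsic permeability (m^2) from an effective pore diameter (m)."""
    return C * d_m ** 2

k = conduit_permeability(1.0e-6)  # 1 micrometre pores -> ~3.1e-14 m^2
```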
== Absolute permeability (aka intrinsic or specific permeability) ==
Absolute permeability denotes the permeability in a porous medium that is 100% saturated with a single-phase fluid. This may also be called the intrinsic permeability or specific permeability. These terms refer to the quality that the permeability value in question is an intensive property of the medium, not a spatial average of a heterogeneous block of material, and that it is a function of the material structure only (and not of the fluid). They explicitly distinguish the value from that of relative permeability.
== Permeability to gases ==
Sometimes, permeability to gases can be somewhat different from that for liquids in the same media. One difference is attributable to the "slippage" of gas at the interface with the solid when the gas mean free path is comparable to the pore size (about 0.01 to 0.1 μm at standard temperature and pressure). See also Knudsen diffusion and constrictivity. For example, measurement of permeability through sandstones and shales yielded values from 9.0×10−19 m2 to 2.4×10−12 m2 for water and between 1.7×10−17 m2 and 2.6×10−12 m2 for nitrogen gas. Gas permeability of reservoir rock and source rock is important in petroleum engineering, when considering the optimal extraction of gas from unconventional sources such as shale gas, tight gas, or coalbed methane.
== Permeability tensor ==
To model permeability in anisotropic media, a permeability tensor is needed. Pressure can be applied in three directions, and for each direction, permeability can be measured (via Darcy's law in 3D) in three directions, thus leading to a 3 by 3 tensor. The tensor is realised as a 3 by 3 matrix that is both symmetric and positive definite (an SPD matrix):
The tensor is symmetric by the Onsager reciprocal relations
The tensor is positive definite because the energy being expended (the inner product of fluid flow and negative pressure gradient) is always positive
The permeability tensor is always diagonalizable (being both symmetric and positive definite). The eigenvectors will yield the principal directions of flow where flow is parallel to the pressure gradient, and the eigenvalues represent the principal permeabilities.
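A sketch of this diagonalization in the 2D case, using the closed-form eigenvalues of a symmetric 2 × 2 matrix; the tensor components are illustrative values:

```python
import math

# Principal permeabilities of a symmetric 2x2 permeability tensor, via the
# closed-form eigenvalues; for an SPD tensor both values are positive.
# The component values (m^2), including the off-diagonal coupling kxy,
# are illustrative.
def principal_permeabilities(kxx, kxy, kyy):
    mean = 0.5 * (kxx + kyy)
    radius = math.hypot(0.5 * (kxx - kyy), kxy)
    return mean + radius, mean - radius  # (major, minor) eigenvalues

k_major, k_minor = principal_permeabilities(2.0e-12, 0.5e-12, 1.0e-12)
```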
== Ranges of common intrinsic permeabilities ==
These values do not depend on the fluid properties; see the table derived from the same source for values of hydraulic conductivity, which are specific to the material through which the fluid is flowing.
== See also ==
Fault zone hydrogeology
Hydraulic conductivity
Hydrogeology
Permeation
Petroleum geology
Relative permeability
Klinkenberg correction
Electrical resistivity measurement of concrete
Permeability of soils
== References ==
== Further reading ==
Wang, H. F., 2000. Theory of Linear Poroelasticity with Applications to Geomechanics and Hydrogeology, Princeton University Press. ISBN 0-691-03746-9
== External links ==
Defining Permeability
Tailoring porous media to control permeability
Permeability of Porous Media
Graphical depiction of different flow rates through materials of differing permeability
Web-based porosity and permeability calculator given flow characteristics
Multiphase fluid flow in porous media
Florida Method of Test For Concrete Resistivity as an Electrical Indicator of its Permeability Archived 2011-06-16 at the Wayback Machine | Wikipedia/Permeability_(Earth_sciences) |
Agricultural soil science is a branch of soil science that deals with the study of edaphic conditions as they relate to the production of food and fiber. In this context, it is also a constituent of the field of agronomy and is thus also described as soil agronomy.
== History ==
Prior to the development of pedology in the 19th century, agricultural soil science (or edaphology) was the only branch of soil science. The bias of early soil science toward viewing soils only in terms of their agricultural potential continues to define the soil science profession in both academic and popular settings as of 2006. (Baveye, 2006)
== Current status ==
Agricultural soil science follows the holistic method. Soil is investigated in relation to, and as an integral part of, terrestrial ecosystems, but is also recognized as a manageable natural resource.
Agricultural soil science studies the chemical, physical, biological, and mineralogical composition of soils as they relate to agriculture. Agricultural soil scientists develop methods that will improve the use of soil and increase the production of food and fiber crops. Emphasis continues to grow on the importance of soil sustainability. Soil degradation such as erosion, compaction, lowered fertility, and contamination continue to be serious concerns. They conduct research in irrigation and drainage, tillage, soil classification, plant nutrition, soil fertility, and other areas.
Although maximizing plant (and thus animal) production is a valid goal, sometimes it may come at high cost which can be readily evident (e.g. massive crop disease stemming from monoculture) or long-term (e.g. impact of chemical fertilizers and pesticides on human health). An agricultural soil scientist may come up with a plan that can maximize production using sustainable methods and solutions, and in order to do that they must look into a number of science fields including agricultural science, physics, chemistry, biology, meteorology and geology.
== Kinds of soil and their variables ==
Some soil variables of special interest to agricultural soil science are
Soil texture or soil composition: Soils are composed of solid particles of various sizes. In decreasing order, these particles are sand, silt and clay. Every soil can be classified according to the relative percentage of sand, silt and clay it contains.
Aeration and porosity: Atmospheric air contains elements such as oxygen, nitrogen, carbon and others. These elements are prerequisites for life on Earth. Particularly, all cells (including root cells) require oxygen to function and if conditions become anaerobic they fail to respire and metabolize. Aeration in this context refers to the mechanisms by which air is delivered to the soil. In natural ecosystems soil aeration is chiefly accomplished through the vibrant activity of the biota. Humans commonly aerate the soil by tilling and plowing, yet such practice may cause degradation. Porosity refers to the air-holding capacity of the soil. See also characterisation of pore space in soil.
Drainage: In poorly drained soils the water delivered through rain or irrigation may pool and stagnate. As a result, anaerobic conditions prevail and plant roots suffocate. Stagnant water also favors plant-attacking water molds. In excessively drained soils, on the other hand, plants cannot absorb adequate water, and nutrients are washed from the porous medium to end up in groundwater reserves.
Water content: Without soil moisture there is no transpiration, no growth and plants wilt. Technically, plant cells lose their pressure (see osmotic pressure and turgor pressure). Plants contribute directly to soil moisture. For instance, they create a leafy cover that minimizes the evaporative effects of solar radiation. But even when plants or parts of plants die, the decaying plant matter produces a thick organic cover that protects the soil from evaporation, erosion and compaction. For more on this subject see mulch.
Water potential: Water potential describes the tendency of the water to flow from one area of the soil to another. While water delivered to the soil surface normally flows downward due to gravity, at some point it meets increased pressure which causes a reverse upward flow. This effect is known as water suction.
Horizonation: Typically found in advanced and mature soils, horizonation refers to the creation of soil layers with differing characteristics. It affects almost all soil variables.
Fertility: A fertile soil is one rich in nutrients and organic matter. Modern agricultural methods have rendered much of the arable land infertile. In such cases, soil can no longer, on its own, support plants with high nutritional demands and thus needs an external source of nutrients. However, there are cases where human activity is thought to be responsible for transforming rather ordinary soils into super-fertile ones (see terra preta).
Biota and soil biota: Organisms interact with the soil and contribute to its quality in innumerable ways. Sometimes the nature of interaction may be unclear, yet a rule is becoming evident: The amount and diversity of the biota is "proportional" to the quality of the soil. Clades of interest include bacteria, fungi, nematodes, annelids and arthropods.
Soil acidity or soil pH and cation-exchange capacity: Root cells act as hydrogen pumps and the surrounding concentration of hydrogen ions affects their ability to absorb nutrients. pH is a measure of this concentration. Each plant species achieves maximum growth in a particular pH range, yet the vast majority of edible plants can grow in soil pH between 5.0 and 7.5.
Soil scientists use a soil classification system to describe soil qualities. The International Union of Soil Sciences endorses the World Reference Base as the international standard.
== Soil fertility ==
Agricultural soil scientists study ways to make soils more productive. They classify soils and test them to determine whether they contain nutrients vital to plant growth. Such nutritional substances include compounds of nitrogen, phosphorus, and potassium. If a certain soil is deficient in these substances, fertilizers may provide them. Agricultural soil scientists investigate the movement of nutrients through the soil, and the amount of nutrients absorbed by a plant's roots. Agricultural soil scientists also examine the development of roots and their relation to the soil. Some agricultural soil scientists try to understand the structure and function of soils in relation to soil fertility. They view the soil as a porous solid: the solid framework consists of minerals derived from rocks and of organic matter originating from the dead bodies of various organisms. The pore space of the soil is essential for the soil to become productive. Small pores serve as a water reservoir, supplying water to plants and other organisms in the soil during rain-less periods. The water held in the small pores is not pure water; it is called the soil solution, and it carries various plant nutrients derived from the minerals and organic matter in the soil. The availability of these nutrients is measured through the cation exchange capacity. Large pores act as drains, allowing excess water to pass through the soil during heavy rains, and as air reservoirs, supplying oxygen to plant roots and other living beings in the soil.
== Soil preservation ==
In addition, agricultural soil scientists develop methods to preserve the agricultural productivity of soil and to decrease the effects on productivity of erosion by wind and water. For example, a technique called contour plowing may be used to prevent soil erosion and conserve rainfall. Researchers in agricultural soil science also seek ways to use the soil more effectively in addressing associated challenges. Such challenges include the beneficial reuse of human and animal wastes using agricultural crops; agricultural soil management aspects of preventing water pollution and the build-up in agricultural soil of chemical pesticides. Regenerative agriculture practices can be used to address these challenges and rebuild soil health.
== Employment of agricultural soil scientists ==
Most agricultural soil scientists are consultants, researchers, or teachers. Many in the developed world work as farm advisors or at agricultural experiment stations, federal, state or local government agencies, industrial firms, or universities. Within the USA they may be trained through the USDA's Cooperative Extension Service offices, although other countries may use universities, research institutes or research agencies. Elsewhere, agricultural soil scientists may serve in international organizations such as the Agency for International Development and the Food and Agriculture Organization of the United Nations.
== Quotations ==
[The key objective of the soil science discipline is that of] finding ways to meet growing human needs for food and fiber while maintaining environmental stability and conserving resources for future generations
Many people have the vague notion that soil science is merely a phase of agronomy and deals only with practical soil management for field crops. Whether we like it or not this is the image many have of us
== See also ==
Agricultural science
Agrogeology
Agrology
Compost
Potting soil
Soil biology
Soil conditioner
Soil science
Regenerative agriculture
== References ==
== External links ==
The Soil Science Society of America (SSSA)
British Society of Soil Science | Wikipedia/Agricultural_soil_science |
Geotechnical centrifuge modeling is a technique for testing physical scale models of geotechnical engineering systems such as natural and man-made slopes and earth retaining structures and building or bridge foundations.
The scale model is typically constructed in the laboratory and then loaded onto the end of the centrifuge, which is typically between 0.2 and 10 metres (0.7 and 32.8 ft) in radius. The purpose of spinning the models on the centrifuge is to increase the g-forces on the model so that stresses in the model are equal to stresses in the prototype. For example, the stress beneath a 0.1-metre-deep (0.3 ft) layer of model soil spun at a centrifugal acceleration of 50 g produces stresses equivalent to those beneath a 5-metre-deep (16 ft) prototype layer of soil in earth's gravity.
The idea to use centrifugal acceleration to simulate increased gravitational acceleration was first proposed by Phillips (1869). Pokrovsky and Fedorov (1936) in the Soviet Union and Bucky (1931) in the United States were the first to implement the idea. Andrew N. Schofield (e.g. Schofield 1980) played a key role in modern development of centrifuge modeling.
== Principles of centrifuge modeling ==
=== Typical applications ===
A geotechnical centrifuge is used to test models of geotechnical problems such as the strength, stiffness and capacity of foundations for bridges and buildings, settlement of embankments, stability of slopes, earth retaining structures, tunnel stability and seawalls. Other applications include explosive cratering, contaminant migration in ground water, frost heave and sea ice. The centrifuge may be useful for scale modeling of any large-scale nonlinear problem for which gravity is a primary driving force.
=== Reason for model testing on the centrifuge ===
Geotechnical materials such as soil and rock have non-linear mechanical properties that depend on the effective confining stress and stress history. The centrifuge applies an increased "gravitational" acceleration to physical models in order to produce identical self-weight stresses in the model and prototype. The one to one scaling of stress enhances the similarity of geotechnical models and makes it possible to obtain accurate data to help solve complex problems such as earthquake-induced liquefaction, soil-structure interaction and underground transport of pollutants such as dense non-aqueous phase liquids. Centrifuge model testing provides data to improve our understanding of basic mechanisms of deformation and failure and provides benchmarks useful for verification of numerical models.
=== Scaling laws ===
Note that in this article, the asterisk on any quantity represents the scale factor for that quantity. For example, in {\displaystyle x^{*}={\frac {x_{m}}{x_{p}}}}, the subscript m represents "model" and the subscript p represents "prototype", and {\displaystyle x^{*}} represents the scale factor for the quantity {\displaystyle x}.
The reason for spinning a model on a centrifuge is to enable small scale models to feel the same effective stresses as a full-scale prototype. This goal can be stated mathematically as
{\displaystyle \sigma '^{*}={\frac {\sigma '_{m}}{\sigma '_{p}}}=1}
where the asterisk represents the scaling factor for the quantity, {\displaystyle \sigma '_{m}} is the effective stress in the model and {\displaystyle \sigma '_{p}} is the effective stress in the prototype.
In soil mechanics the vertical effective stress, {\displaystyle \sigma '}, for example, is typically calculated by
{\displaystyle \sigma '=\sigma ^{t}-u}
where {\displaystyle \sigma ^{t}} is the total stress and {\displaystyle u} is the pore pressure. For a uniform layer with no pore pressure, the total vertical stress at a depth {\displaystyle H} may be calculated by:
{\displaystyle \sigma ^{t}=\rho gH}
where {\displaystyle \rho } represents the density of the layer and {\displaystyle g} represents gravity. In the conventional form of centrifuge modeling, it is typical that the same materials are used in the model and prototype; therefore the densities are the same in model and prototype, i.e.,
{\displaystyle \rho ^{*}=1}
Furthermore, in conventional centrifuge modeling all lengths are scaled by the same factor {\displaystyle L^{*}}. To produce the same stress in the model as in the prototype, we thus require
{\displaystyle \rho ^{*}g^{*}H^{*}=(1)g^{*}L^{*}=1}
which may be rewritten as
{\displaystyle g^{*}={\frac {1}{L^{*}}}}
The above scaling law states that if lengths in the model are reduced by some factor, n, then gravitational accelerations must be increased by the same factor, n in order to preserve equal stresses in model and prototype.
==== Dynamic problems ====
For dynamic problems where gravity and accelerations are important, all accelerations must scale as gravity is scaled, i.e.
{\displaystyle a^{*}=g^{*}={\frac {1}{L^{*}}}}
Since acceleration has units of {\displaystyle {\frac {L}{T^{2}}}}, it is required that
{\displaystyle a^{*}={\frac {L^{*}}{T^{*2}}}}
Hence it is required that
{\displaystyle {\frac {1}{L^{*}}}={\frac {L^{*}}{T^{*2}}}}, or
{\displaystyle T^{*}=L^{*}}
Frequency has units of inverse of time and velocity has units of length per time, so for dynamic problems we also obtain
{\displaystyle f^{*}={\frac {1}{L^{*}}}}
{\displaystyle v^{*}={\frac {L^{*}}{T^{*}}}=1}
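The scaling relations above can be collected in a small helper function; the 1/50 scale is an illustrative choice, and the function name is hypothetical:

```python
# Conventional centrifuge scaling: all lengths scaled by L* = 1/n, gravity by
# g* = n, same soil in model and prototype (rho* = 1), so self-weight stresses
# match: sigma* = rho* g* L* = 1. For dynamic problems time scales as T* = L*.
def scale_factors(n):
    """Model/prototype scale factors for a 1/n-scale centrifuge model."""
    L_star, g_star, rho_star = 1.0 / n, float(n), 1.0
    return {
        "length": L_star,
        "gravity": g_star,
        "stress": rho_star * g_star * L_star,  # equals 1: stress similarity
        "time_dynamic": L_star,
        "frequency": 1.0 / L_star,
        "velocity": 1.0,
    }

f = scale_factors(50)  # e.g. a 0.1 m model layer at 50 g mimics 5 m of soil
```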
==== Diffusion problems ====
For diffusion problems such as consolidation, time scales with the square of the length scale:
{\displaystyle T^{*}=L^{*2}}
For model tests involving both dynamics and diffusion, the conflict in time scale factors may be resolved by scaling the permeability of the soil.
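A numeric sketch of this conflict, assuming a 1/50-scale model; the factor-of-n mismatch is what scaling the soil's permeability (for example, by using a more viscous pore fluid) is meant to absorb:

```python
# Time-scale conflict at 1/n scale: dynamic events scale by T* = L* = 1/n,
# but diffusion (e.g. consolidation) scales by T* = L*^2 = 1/n^2.
# The ratio of the two is a factor of n, to be reconciled by scaling the
# soil permeability (illustrative n = 50).
def time_scales(n):
    L_star = 1.0 / n
    return {"dynamic": L_star, "diffusion": L_star ** 2}

t = time_scales(50)
mismatch = t["dynamic"] / t["diffusion"]  # factor of n between the two scales
```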
==== Scaling of other quantities ====
Scale factors for energy, force, pressure, acceleration, velocity and other quantities can be derived from the relationships above. For example, stress has units of pressure, or force per unit area; substituting F = ma (Newton's second law) and ρ = m/L3 (the definition of mass density) shows how the stress scale factor follows from the length, acceleration and density scale factors. A table of common scale factors for centrifuge model tests is given by Garnier et al. (2007).
== Value of centrifuge in geotechnical earthquake engineering ==
Large earthquakes are infrequent and unrepeatable, but they can be devastating. All of these factors make it difficult to obtain the required data to study their effects by post-earthquake field investigations. Instrumentation of full scale structures is expensive to maintain over the long periods of time that may elapse between major temblors, and the instrumentation may not be placed in the most scientifically useful locations. Even if engineers are lucky enough to obtain timely recordings of data from real failures, there is no guarantee that the instrumentation is providing repeatable data. In addition, scientifically educational failures from real earthquakes come at the expense of the safety of the public. Understandably, after a real earthquake, most of the interesting data is rapidly cleared away before engineers have an opportunity to adequately study the failure modes.
Centrifuge modeling is a valuable tool for studying the effects of ground shaking on critical structures without risking the safety of the public. The efficacy of alternative designs or seismic retrofitting techniques can be compared in a repeatable series of scientific tests.
== Verification of numerical models ==
Centrifuge tests can also be used to obtain experimental data to verify a design procedure or a computer model. The rapid development of computational power over recent decades has revolutionized engineering analysis. Many computer models have been developed to predict the behavior of geotechnical structures during earthquakes and other loads. Before a computer model can be used with confidence, it must be proven to be valid based on evidence. The meager and unrepeatable data provided by natural earthquakes, for example, is usually insufficient for this purpose. Verification of the validity of assumptions made by a computational algorithm is especially important in the area of geotechnical engineering due to the complexity of soil behavior. Soils exhibit highly non-linear behavior, their strength and stiffness depend on their stress history and on the water pressure in the pore fluid, all of which may evolve during the loading caused by an earthquake. The computer models which are intended to simulate these phenomena are very complex and require extensive verification. Experimental data from centrifuge tests is useful for verifying assumptions made by a computational algorithm. If the results show the computer model to be inaccurate, the centrifuge test data provides insight into the physical processes which in turn stimulates the development of better computer models.
== See also ==
== References ==
== External links ==
Technical committee on physical modelling in geotechnics
International Society for Soil Mechanics and Geotechnical Engineering
American Society of Civil Engineers | Wikipedia/Geotechnical_centrifuge_modeling |
In physics, Torricelli's equation, or Torricelli's formula, is an equation created by Evangelista Torricelli to find the final velocity of a moving object with constant acceleration along an axis (for example, the x axis) without having a known time interval.
The equation itself is:
{\displaystyle v_{f}^{2}=v_{i}^{2}+2a\Delta x}
where
{\displaystyle v_{f}} is the object's final velocity along the x axis on which the acceleration is constant.
{\displaystyle v_{i}} is the object's initial velocity along the x axis.
{\displaystyle a} is the object's acceleration along the x axis, which is given as a constant.
{\displaystyle \Delta x} is the object's change in position along the x axis, also called displacement.
In this and all subsequent equations in this article, the subscript x (as in {\displaystyle {v_{f}}_{x}}) is implied, but is not expressed explicitly, for clarity in presenting the equations.
This equation is valid along any axis on which the acceleration is constant.
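The equation can be checked numerically by stepping a constant-acceleration motion and comparing the final speed with the prediction; the initial velocity, acceleration, and duration below are arbitrary test values:

```python
# Check Torricelli's equation v_f^2 = v_i^2 + 2*a*dx by stepping the motion.
# Each step uses the exact constant-acceleration update, so the identity
# holds to floating-point accuracy. v_i, a, t_end are arbitrary test values.
def step_motion(v_i, a, t_end, steps=10000):
    dt = t_end / steps
    x, v = 0.0, v_i
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt  # displacement over one step
        v += a * dt                      # velocity at end of step
    return v, x

v_f, dx = step_motion(v_i=3.0, a=2.0, t_end=4.0)
# Torricelli predicts v_f**2 == v_i**2 + 2 * a * dx
```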
== Derivation ==
=== Without differentials and integration ===
Begin with the following relations for the case of uniform acceleration:
{\displaystyle \Delta x={\frac {v_{i}+v_{f}}{2}}t} (1)
{\displaystyle v_{f}=v_{i}+at} (2)
Take (1), and multiply both sides with acceleration {\textstyle a}:
{\displaystyle a\,\Delta x={\frac {v_{i}+v_{f}}{2}}\,at}
Use (2) to substitute the product {\textstyle at} with {\textstyle v_{f}-v_{i}}:
{\displaystyle a\,\Delta x={\frac {(v_{f}+v_{i})(v_{f}-v_{i})}{2}}}
Working out the multiplication, the cross terms {\textstyle v_{i}v_{f}} drop away against each other, leaving only squared terms:
{\displaystyle 2a\,\Delta x=v_{f}^{2}-v_{i}^{2}}
This rearranges to the form of Torricelli's equation as presented at the start of the article.
=== Using differentials and integration ===
Begin with the definitions of velocity as the derivative of the position, and acceleration as the derivative of the velocity:
{\displaystyle v={\frac {dx}{dt}},\qquad a={\frac {dv}{dt}}}
Set up the integration of the acceleration from initial position {\textstyle x_{i}} to final position {\textstyle x_{f}}. Substituting {\textstyle dx} with {\textstyle v\,dt} (with corresponding change of limits), and then {\textstyle a\,dt} with {\textstyle dv} (again with corresponding change of limits), gives:
{\displaystyle \int _{x_{i}}^{x_{f}}a\,dx=\int _{t_{i}}^{t_{f}}a\,v\,dt=\int _{v_{i}}^{v_{f}}v\,dv}
Since the acceleration is constant, it can be factored out of the integration. Evaluating the integrals:
{\displaystyle a(x_{f}-x_{i})={\tfrac {1}{2}}v_{f}^{2}-{\tfrac {1}{2}}v_{i}^{2}}
The factor {\textstyle x_{f}-x_{i}} is the displacement {\textstyle \Delta x}:
{\displaystyle 2a\,\Delta x=v_{f}^{2}-v_{i}^{2}}
== Application ==
Combining Torricelli's equation with {\textstyle F=ma} gives the work-energy theorem.
Torricelli's equation and the generalization to non-uniform acceleration have the same form. For position-dependent acceleration, the result of the integration above generalizes to:
{\displaystyle \int _{s_{0}}^{s_{1}}a\,ds={\tfrac {1}{2}}v_{1}^{2}-{\tfrac {1}{2}}v_{0}^{2}}
For constant acceleration this reduces to Torricelli's equation, {\displaystyle 2a\,\Delta x=v_{f}^{2}-v_{i}^{2}}.
To derive the work-energy theorem: start with {\textstyle F=ma} and on both sides state the integral with respect to the position coordinate. If both sides are integrable then the resulting expression is valid:
{\displaystyle \int _{s_{0}}^{s_{1}}F\,ds=\int _{s_{0}}^{s_{1}}ma\,ds=m\left({\tfrac {1}{2}}v_{1}^{2}-{\tfrac {1}{2}}v_{0}^{2}\right)}
The left hand side is the work done on the object, and the right hand side is the change in its kinetic energy.
The reason the generalization to non-uniform acceleration holds is as follows:
First consider the case with two consecutive stages of different uniform acceleration, first from {\textstyle s_{0}} to {\textstyle s_{1}}, and then from {\textstyle s_{1}} to {\textstyle s_{2}}.
Expressions for each of the two stages:
{\displaystyle a_{1}(s_{1}-s_{0})={\tfrac {1}{2}}v_{1}^{2}-{\tfrac {1}{2}}v_{0}^{2}}
{\displaystyle a_{2}(s_{2}-s_{1})={\tfrac {1}{2}}v_{2}^{2}-{\tfrac {1}{2}}v_{1}^{2}}
Since these expressions are for consecutive intervals they can be added; the result is a valid expression.
Upon addition the intermediate term {\textstyle {\tfrac {1}{2}}v_{1}^{2}} drops out; only the outer terms {\textstyle {\tfrac {1}{2}}v_{2}^{2}} and {\textstyle {\tfrac {1}{2}}v_{0}^{2}} remain:
The above result generalizes: the total distance can be subdivided into any number of subdivisions; after adding everything together only the outer terms remain; all of the intermediate terms drop out.
The generalization of (26) to an arbitrary number of subdivisions of the total interval from {\textstyle s_{0}} to {\textstyle s_{n}} can be expressed as a summation:
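The telescoping argument can be checked numerically: applying Torricelli's equation per segment of a piecewise-constant acceleration profile, the sum of the {\textstyle a_{k}(s_{k}-s_{k-1})} terms collapses to the outer kinetic-energy terms. A sketch with arbitrary random segments:

```python
import random

# Piecewise-constant acceleration over 10 segments: the sum of a_k*(s_k - s_(k-1))
# telescopes to (1/2)*v_n**2 - (1/2)*v_0**2, as in (26) and its generalization.
random.seed(1)
v0 = 2.0                               # initial velocity v_0
v = v0
total = 0.0
for _ in range(10):
    a = random.uniform(0.5, 2.0)       # acceleration on this segment
    ds = random.uniform(0.1, 1.0)      # segment length s_k - s_(k-1)
    total += a * ds                    # accumulate a_k * (s_k - s_(k-1))
    v = (v**2 + 2 * a * ds) ** 0.5     # Torricelli's equation on each segment
assert abs(total - (0.5 * v**2 - 0.5 * v0**2)) < 1e-9
```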
== See also ==
Equation of motion
== References ==
== External links ==
Torricelli's theorem
In general relativity, if two objects are set in motion along two initially parallel trajectories, the presence of a tidal gravitational force will cause the trajectories to bend towards or away from each other, producing a relative acceleration between the objects.
Mathematically, the tidal force in general relativity is described by the Riemann curvature tensor, and the trajectory of an object solely under the influence of gravity is called a geodesic. The geodesic deviation equation relates the Riemann curvature tensor to the relative acceleration of two neighboring geodesics. In differential geometry, the geodesic deviation equation is more commonly known as the Jacobi equation.
== Mathematical definition ==
To quantify geodesic deviation, one begins by setting up a family of closely spaced geodesics indexed by a continuous variable s and parametrized by an affine parameter τ. That is, for each fixed s, the curve swept out by γs(τ) as τ varies is a geodesic. When considering the geodesic of a massive object, it is often convenient to choose τ to be the object's proper time. If xμ(s,τ) are the coordinates of the geodesic γs(τ), then the tangent vector of this geodesic is
{\displaystyle T^{\mu }={\frac {\partial x^{\mu }(s,\tau )}{\partial \tau }}.}
If τ is the proper time, then Tμ is the four-velocity of the object traveling along the geodesic.
One can also define a deviation vector, which is the displacement of two objects travelling along two infinitesimally separated geodesics:
{\displaystyle X^{\mu }={\frac {\partial x^{\mu }(s,\tau )}{\partial s}}.}
The relative acceleration Aμ of the two objects is defined, roughly, as the second derivative of the separation vector Xμ as the objects advance along their respective geodesics. Specifically, Aμ is found by taking the directional covariant derivative of X along T twice:
{\displaystyle A^{\mu }=T^{\alpha }\nabla _{\alpha }\left(T^{\beta }\nabla _{\beta }X^{\mu }\right).}
The geodesic deviation equation relates Aμ, Tμ, Xμ, and the Riemann tensor Rμνρσ:
{\displaystyle A^{\mu }={R^{\mu }}_{\nu \rho \sigma }T^{\nu }T^{\rho }X^{\sigma }.}
An alternate notation for the directional covariant derivative {\displaystyle T^{\alpha }\nabla _{\alpha }} is {\displaystyle D/d\tau }, so the geodesic deviation equation may also be written as
{\displaystyle {\frac {D^{2}X^{\mu }}{d\tau ^{2}}}={R^{\mu }}_{\nu \rho \sigma }T^{\nu }T^{\rho }X^{\sigma }.}
The geodesic deviation equation can be derived from the second variation of the point particle Lagrangian along geodesics, or from the first variation of a combined Lagrangian. The Lagrangian approach has two advantages. First it allows various formal approaches of quantization to be applied to the geodesic deviation system. Second it allows deviation to be formulated for much more general objects than geodesics (any dynamical system which has a one spacetime indexed momentum appears to have a corresponding generalization of geodesic deviation).
== Weak-field limit ==
The connection between geodesic deviation and tidal acceleration can be seen more explicitly by examining geodesic deviation in the weak-field limit, where the metric is approximately Minkowski, and the velocities of test particles are assumed to be much less than c. Then the tangent vector Tμ is approximately (1, 0, 0, 0); i.e., only the timelike component is nonzero.
The spatial components of the relative acceleration are then given by
{\displaystyle A^{i}={R^{i}}_{0j0}X^{j},}
where i and j run only over the spatial indices 1, 2, and 3.
In the particular case of a metric corresponding to the Newtonian potential Φ(x, y, z) of a massive object at x = y = z = 0, we have
{\displaystyle {R^{i}}_{0j0}=-{\frac {\partial ^{2}\Phi }{\partial x^{i}\partial x^{j}}},}
which is the tidal tensor of the Newtonian potential.
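As a numeric illustration (a sketch under the stated Newtonian assumptions), the tidal tensor −∂²Φ/∂xⁱ∂xʲ for the point-mass potential Φ = −GM/r can be computed by finite differences; away from the source it is traceless (Laplace's equation) and it stretches along the radial direction while squeezing transversally:

```python
GM, h = 1.0, 1e-4                      # units chosen so GM = 1; finite-difference step

def phi(p):
    # Newtonian potential of a point mass at the origin
    x, y, z = p
    return -GM / (x*x + y*y + z*z) ** 0.5

def shift(p, k, s):
    q = list(p)
    q[k] += s
    return tuple(q)

def d2(p, i, j):
    # central finite difference for the second partial d2(phi)/dx_i dx_j
    return (phi(shift(shift(p, i, h), j, h)) - phi(shift(shift(p, i, h), j, -h))
            - phi(shift(shift(p, i, -h), j, h)) + phi(shift(shift(p, i, -h), j, -h))) / (4 * h * h)

p = (2.0, 0.0, 0.0)                    # field point on the x-axis, r = 2
tidal = [[-d2(p, i, j) for j in range(3)] for i in range(3)]
trace = sum(tidal[i][i] for i in range(3))
assert abs(trace) < 1e-5               # traceless: vacuum Laplace equation
assert tidal[0][0] > 0 > tidal[1][1]   # radial stretching, transverse squeezing
```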
== See also ==
Bernhard Riemann
Curvature
Glossary of Riemannian and metric geometry
== References ==
Stephani, Hans (1982), General relativity - an introduction to the theory of the gravitation field, Cambridge University Press, ISBN 0-521-37066-3.
Wald, Robert M. (1984), General Relativity, University of Chicago Press, ISBN 978-0-226-87033-5.
== External links ==
General Relativity and Quantum Cosmology
Tensors and Relativity: Geodesic deviation Archived 2011-11-16 at the Wayback Machine
In physics, mathematics, engineering, and related fields, a wave is a propagating dynamic disturbance (change from equilibrium) of one or more quantities. Periodic waves oscillate repeatedly about an equilibrium (resting) value at some frequency. When the entire waveform moves in one direction, it is said to be a travelling wave; by contrast, a pair of superimposed periodic waves traveling in opposite directions makes a standing wave. In a standing wave, the amplitude of vibration has nulls at some positions where the wave amplitude appears smaller or even zero.
There are two types of waves that are most commonly studied in classical physics: mechanical waves and electromagnetic waves. In a mechanical wave, stress and strain fields oscillate about a mechanical equilibrium. A mechanical wave is a local deformation (strain) in some physical medium that propagates from particle to particle by creating local stresses that cause strain in neighboring particles too. For example, sound waves are variations of the local pressure and particle motion that propagate through the medium. Other examples of mechanical waves are seismic waves, gravity waves, surface waves and string vibrations. In an electromagnetic wave (such as light), coupling between the electric and magnetic fields sustains propagation of waves involving these fields according to Maxwell's equations. Electromagnetic waves can travel through a vacuum and through some dielectric media (at wavelengths where they are considered transparent). Electromagnetic waves, as determined by their frequencies (or wavelengths), have more specific designations including radio waves, infrared radiation, terahertz waves, visible light, ultraviolet radiation, X-rays and gamma rays.
Other types of waves include gravitational waves, which are disturbances in spacetime that propagate according to general relativity; heat diffusion waves; plasma waves that combine mechanical deformations and electromagnetic fields; reaction–diffusion waves, such as in the Belousov–Zhabotinsky reaction; and many more. Mechanical and electromagnetic waves transfer energy, momentum, and information, but they do not transfer particles in the medium. In mathematics and electronics waves are studied as signals. On the other hand, some waves have envelopes which do not move at all such as standing waves (which are fundamental to music) and hydraulic jumps.
A physical wave field is almost always confined to some finite region of space, called its domain. For example, the seismic waves generated by earthquakes are significant only in the interior and surface of the planet, so they can be ignored outside it. However, waves with infinite domain, that extend over the whole space, are commonly studied in mathematics, and are very valuable tools for understanding physical waves in finite domains.
A plane wave is an important mathematical idealization where the disturbance is identical along any (infinite) plane normal to a specific direction of travel. Mathematically, the simplest wave is a sinusoidal plane wave in which at any point the field experiences simple harmonic motion at one frequency. In linear media, complicated waves can generally be decomposed as the sum of many sinusoidal plane waves having different directions of propagation and/or different frequencies. A plane wave is classified as a transverse wave if the field disturbance at each point is described by a vector perpendicular to the direction of propagation (also the direction of energy transfer); or longitudinal wave if those vectors are aligned with the propagation direction. Mechanical waves include both transverse and longitudinal waves; on the other hand electromagnetic plane waves are strictly transverse while sound waves in fluids (such as air) can only be longitudinal. That physical direction of an oscillating field relative to the propagation direction is also referred to as the wave's polarization, which can be an important attribute.
== Mathematical description ==
=== Single waves ===
A wave can be described just like a field, namely as a function {\displaystyle F(x,t)} where {\displaystyle x} is a position and {\displaystyle t} is a time.
The value of {\displaystyle x} is a point of space, specifically in the region where the wave is defined. In mathematical terms, it is usually a vector in the Cartesian three-dimensional space {\displaystyle \mathbb {R} ^{3}}. However, in many cases one can ignore one dimension, and let {\displaystyle x} be a point of the Cartesian plane {\displaystyle \mathbb {R} ^{2}}. This is the case, for example, when studying vibrations of a drum skin. One may even restrict {\displaystyle x} to a point of the Cartesian line {\displaystyle \mathbb {R} } – that is, the set of real numbers. This is the case, for example, when studying vibrations in a violin string or recorder. The time {\displaystyle t}, on the other hand, is always assumed to be a scalar; that is, a real number.
The value of {\displaystyle F(x,t)} can be any physical quantity of interest assigned to the point {\displaystyle x} that may vary with time. For example, if {\displaystyle F} represents the vibrations inside an elastic solid, the value of {\displaystyle F(x,t)} is usually a vector that gives the current displacement from {\displaystyle x} of the material particles that would be at the point {\displaystyle x} in the absence of vibration. For an electromagnetic wave, the value of {\displaystyle F} can be the electric field vector {\displaystyle E}, or the magnetic field vector {\displaystyle H}, or any related quantity, such as the Poynting vector {\displaystyle E\times H}. In fluid dynamics, the value of {\displaystyle F(x,t)} could be the velocity vector of the fluid at the point {\displaystyle x}, or any scalar property like pressure, temperature, or density. In a chemical reaction, {\displaystyle F(x,t)} could be the concentration of some substance in the neighborhood of point {\displaystyle x} of the reaction medium.
For any dimension {\displaystyle d} (1, 2, or 3), the wave's domain is then a subset {\displaystyle D} of {\displaystyle \mathbb {R} ^{d}}, such that the function value {\displaystyle F(x,t)} is defined for any point {\displaystyle x} in {\displaystyle D}. For example, when describing the motion of a drum skin, one can consider {\displaystyle D} to be a disk (circle) on the plane {\displaystyle \mathbb {R} ^{2}} with center at the origin {\displaystyle (0,0)}, and let {\displaystyle F(x,t)} be the vertical displacement of the skin at the point {\displaystyle x} of {\displaystyle D} and at time {\displaystyle t}.
=== Superposition ===
Waves of the same type are often superposed and encountered simultaneously at a given point in space and time. The properties at that point are the sum of the properties of each component wave at that point. In general, the velocities are not the same, so the wave form will change over time and space.
=== Wave spectrum ===
=== Wave families ===
Sometimes one is interested in a single specific wave. More often, however, one needs to understand a large set of possible waves, like all the ways that a drum skin can vibrate after being struck once with a drum stick, or all the possible radar echoes one could get from an airplane that may be approaching an airport.
In some of those situations, one may describe such a family of waves by a function {\displaystyle F(A,B,\ldots ;x,t)} that depends on certain parameters {\displaystyle A,B,\ldots }, besides {\displaystyle x} and {\displaystyle t}. Then one can obtain different waves – that is, different functions of {\displaystyle x} and {\displaystyle t} – by choosing different values for those parameters.
For example, the sound pressure inside a recorder that is playing a "pure" note is typically a standing wave, that can be written as
{\displaystyle F(A,L,n,c;x,t)=A\left(\cos 2\pi x{\frac {2n-1}{4L}}\right)\left(\cos 2\pi ct{\frac {2n-1}{4L}}\right)}
The parameter {\displaystyle A} defines the amplitude of the wave (that is, the maximum sound pressure in the bore, which is related to the loudness of the note); {\displaystyle c} is the speed of sound; {\displaystyle L} is the length of the bore; and {\displaystyle n} is a positive integer (1, 2, 3, ...) that specifies the number of nodes in the standing wave. (The position {\displaystyle x} should be measured from the mouthpiece, and the time {\displaystyle t} from any moment at which the pressure at the mouthpiece is maximum. The quantity {\textstyle \lambda =4L/(2n-1)} is the wavelength of the emitted note, and {\textstyle f=c/\lambda } is its frequency.) Many general properties of these waves can be inferred from this general equation, without choosing specific values for the parameters.
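A short numeric check of this wave family (arbitrary bore length; a room-temperature speed of sound is assumed): each mode satisfies λf = c and the pressure oscillation has a node at the open end x = L for every mode n.

```python
import math

# Evaluate the recorder standing-wave family F(A, L, n, c; x, t) and check
# two properties stated in the text: lambda * f = c, and a pressure node
# at the open end x = L for every mode n.
def F(A, L, n, c, x, t):
    k = 2 * math.pi * (2 * n - 1) / (4 * L)
    return A * math.cos(k * x) * math.cos(k * c * t)

L, c = 0.3, 343.0                          # bore length (m), speed of sound (m/s)
for n in (1, 2, 3):
    lam = 4 * L / (2 * n - 1)              # wavelength of mode n
    f = c / lam                            # frequency of mode n
    assert abs(lam * f - c) < 1e-9
    assert abs(F(1.0, L, n, c, L, 0.123)) < 1e-9   # node at the open end
```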
As another example, it may be that the vibrations of a drum skin after a single strike depend only on the distance {\textstyle r} from the center of the skin to the strike point, and on the strength {\textstyle s} of the strike. Then the vibration for all possible strikes can be described by a function {\textstyle F(r,s;x,t)}.
Sometimes the family of waves of interest has infinitely many parameters. For example, one may want to describe what happens to the temperature in a metal bar when it is initially heated at various temperatures at different points along its length, and then allowed to cool by itself in vacuum. In that case, instead of a scalar or vector, the parameter would have to be a function {\textstyle h} such that {\textstyle h(x)} is the initial temperature at each point {\textstyle x} of the bar. Then the temperatures at later times can be expressed by a function {\textstyle F} that depends on the function {\textstyle h} (that is, a functional operator), so that the temperature at a later time is {\textstyle F(h;x,t)}.
=== Differential wave equations ===
Another way to describe and study a family of waves is to give a mathematical equation that, instead of explicitly giving the value of {\displaystyle F(x,t)}, only constrains how those values can change with time. Then the family of waves in question consists of all functions {\displaystyle F} that satisfy those constraints – that is, all solutions of the equation.
This approach is extremely important in physics, because the constraints usually are a consequence of the physical processes that cause the wave to evolve. For example, if {\displaystyle F(x,t)} is the temperature inside a block of some homogeneous and isotropic solid material, its evolution is constrained by the partial differential equation
{\displaystyle {\frac {\partial F}{\partial t}}(x,t)=\alpha \left({\frac {\partial ^{2}F}{\partial x_{1}^{2}}}(x,t)+{\frac {\partial ^{2}F}{\partial x_{2}^{2}}}(x,t)+{\frac {\partial ^{2}F}{\partial x_{3}^{2}}}(x,t)\right)+\beta Q(x,t)}
where {\displaystyle Q(x,t)} is the heat that is being generated per unit of volume and time in the neighborhood of {\displaystyle x} at time {\displaystyle t} (for example, by chemical reactions happening there); {\displaystyle x_{1},x_{2},x_{3}} are the Cartesian coordinates of the point {\displaystyle x}; {\displaystyle \partial F/\partial t} is the (first) derivative of {\displaystyle F} with respect to {\displaystyle t}; and {\displaystyle \partial ^{2}F/\partial x_{i}^{2}} is the second derivative of {\displaystyle F} relative to {\displaystyle x_{i}}. (The symbol "{\displaystyle \partial }" is meant to signify that, in the derivative with respect to some variable, all other variables must be considered fixed.)
This equation can be derived from the laws of physics that govern the diffusion of heat in solid media. For that reason, it is called the heat equation in mathematics, even though it applies to many other physical quantities besides temperatures.
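A minimal explicit finite-difference sketch (assuming a one-dimensional periodic bar and no source term, Q = 0) illustrates the behavior the heat equation constrains: an initial hot spot spreads out while the total heat is conserved.

```python
# Explicit finite-difference sketch of the 1-D heat equation
# dF/dt = alpha * d2F/dx2 with no source term (Q = 0) on a periodic bar:
# an initial hot spot diffuses while the total heat stays constant.
N = 50
alpha, dx, dt = 1.0, 0.1, 0.004          # dt < dx**2 / (2*alpha): stable scheme
F = [0.0] * N
F[N // 2] = 100.0                        # initial hot spot
total0 = sum(F)
for _ in range(200):
    lap = [F[(i - 1) % N] - 2 * F[i] + F[(i + 1) % N] for i in range(N)]
    F = [F[i] + alpha * dt / dx**2 * lap[i] for i in range(N)]
assert abs(sum(F) - total0) < 1e-6       # heat conserved on the closed bar
assert max(F) < 100.0                    # the peak has diffused
```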
For another example, we can describe all possible sounds echoing within a container of gas by a function {\displaystyle F(x,t)} that gives the pressure at a point {\displaystyle x} and time {\displaystyle t} within that container. If the gas was initially at uniform temperature and composition, the evolution of {\displaystyle F} is constrained by the formula
{\displaystyle {\frac {\partial ^{2}F}{\partial t^{2}}}(x,t)=\alpha \left({\frac {\partial ^{2}F}{\partial x_{1}^{2}}}(x,t)+{\frac {\partial ^{2}F}{\partial x_{2}^{2}}}(x,t)+{\frac {\partial ^{2}F}{\partial x_{3}^{2}}}(x,t)\right)+\beta P(x,t)}
Here {\displaystyle P(x,t)} is some extra compression force that is being applied to the gas near {\displaystyle x} by some external process, such as a loudspeaker or piston right next to {\displaystyle x}.
This same differential equation describes the behavior of mechanical vibrations and electromagnetic fields in a homogeneous isotropic non-conducting solid. Note that this equation differs from that of heat flow only in that the left-hand side is {\displaystyle \partial ^{2}F/\partial t^{2}}, the second derivative of {\displaystyle F} with respect to time, rather than the first derivative {\displaystyle \partial F/\partial t}. Yet this small change makes a huge difference to the set of solutions {\displaystyle F}. This differential equation is called "the" wave equation in mathematics, even though it describes only one very special kind of wave.
== Wave in elastic medium ==
Consider a traveling transverse wave (which may be a pulse) on a string (the medium). Consider the string to have a single spatial dimension. Consider this wave as traveling in the {\displaystyle x} direction in space. For example, let the positive {\displaystyle x} direction be to the right, and the negative {\displaystyle x} direction be to the left. This wave travels
with constant amplitude {\displaystyle u}
with constant velocity {\displaystyle v}, where {\displaystyle v} is independent of wavelength (no dispersion) and independent of amplitude (linear media, not nonlinear)
with constant waveform, or shape
This wave can then be described by the two-dimensional functions {\displaystyle u(x,t)=F(x-vt)} (waveform {\displaystyle F} traveling to the right) and {\displaystyle u(x,t)=G(x+vt)} (waveform {\displaystyle G} traveling to the left),
or, more generally, by d'Alembert's formula:
{\displaystyle u(x,t)=F(x-vt)+G(x+vt).}
representing two component waveforms {\displaystyle F} and {\displaystyle G} traveling through the medium in opposite directions. A generalized representation of this wave can be obtained as the partial differential equation
{\displaystyle {\frac {1}{v^{2}}}{\frac {\partial ^{2}u}{\partial t^{2}}}={\frac {\partial ^{2}u}{\partial x^{2}}}.}
General solutions are based upon Duhamel's principle.
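A finite-difference check of d'Alembert's formula (with arbitrary smooth profiles F and G): any superposition F(x − vt) + G(x + vt) satisfies the wave equation above.

```python
import math

# For any smooth profiles F and G, u(x, t) = F(x - v*t) + G(x + v*t)
# satisfies u_tt = v**2 * u_xx; check at an arbitrary point by
# central finite differences.
v, h = 2.0, 1e-4

def F(s): return math.exp(-s * s)      # a Gaussian pulse moving right
def G(s): return math.sin(s)           # a sinusoid moving left

def u(x, t): return F(x - v * t) + G(x + v * t)

x, t = 0.7, 0.3
u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
assert abs(u_tt - v * v * u_xx) < 1e-4
```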
=== Wave forms ===
The form or shape of F in d'Alembert's formula involves the argument x − vt. Constant values of this argument correspond to constant values of F, and these constant values occur if x increases at the same rate that vt increases. That is, the wave shaped like the function F will move in the positive x-direction at velocity v (and G will propagate at the same speed in the negative x-direction).
In the case of a periodic function F with period λ, that is, F(x + λ − vt) = F(x − vt), the periodicity of F in space means that a snapshot of the wave at a given time t finds the wave varying periodically in space with period λ (the wavelength of the wave). In a similar fashion, this periodicity of F implies a periodicity in time as well: F(x − v(t + T)) = F(x − vt) provided vT = λ, so an observation of the wave at a fixed location x finds the wave undulating periodically in time with period T = λ/v.
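The space-time periodicity relation vT = λ can be checked directly for any λ-periodic profile:

```python
import math

# For a profile F with spatial period lam, the traveling wave F(x - v*t)
# repeats in time with period T = lam / v: F(x - v*(t + T)) = F(x - v*t).
lam, v = 2.0, 3.0
T = lam / v
F = lambda s: math.sin(2 * math.pi * s / lam)   # any lam-periodic profile
x, t = 0.4, 1.7
assert abs(F(x - v * (t + T)) - F(x - v * t)) < 1e-12
```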
=== Amplitude and modulation ===
The amplitude of a wave may be constant (in which case the wave is a c.w. or continuous wave), or may be modulated so as to vary with time and/or position. The outline of the variation in amplitude is called the envelope of the wave. Mathematically, the modulated wave can be written in the form:
{\displaystyle u(x,t)=A(x,t)\sin \left(kx-\omega t+\phi \right),}
where {\displaystyle A(x,t)} is the amplitude envelope of the wave, {\displaystyle k} is the wavenumber and {\displaystyle \phi } is the phase. If the group velocity {\displaystyle v_{g}} (see below) is wavelength-independent, this equation can be simplified as:
{\displaystyle u(x,t)=A(x-v_{g}t)\sin \left(kx-\omega t+\phi \right),}
showing that the envelope moves with the group velocity and retains its shape. Otherwise, in cases where the group velocity varies with wavelength, the pulse shape changes in a manner often described using an envelope equation.
=== Phase velocity and group velocity ===
There are two velocities that are associated with waves, the phase velocity and the group velocity.
Phase velocity is the rate at which the phase of the wave propagates in space: any given phase of the wave (for example, the crest) will appear to travel at the phase velocity. The phase velocity is given in terms of the wavelength λ (lambda) and period T as
{\displaystyle v_{\mathrm {p} }={\frac {\lambda }{T}}.}
Group velocity is a property of waves that have a defined envelope; it measures the propagation through space of the overall shape of the waves' amplitudes, that is, of the modulation or envelope of the wave.
== Special waves ==
=== Sine waves ===
=== Plane waves ===
A plane wave is a kind of wave whose value varies only in one spatial direction. That is, its value is constant on a plane that is perpendicular to that direction. Plane waves can be specified by a vector of unit length {\displaystyle {\hat {n}}} indicating the direction that the wave varies in, and a wave profile describing how the wave varies as a function of the displacement along that direction ({\displaystyle {\hat {n}}\cdot {\vec {x}}}) and time ({\displaystyle t}). Since the wave profile only depends on the position {\displaystyle {\vec {x}}} in the combination {\displaystyle {\hat {n}}\cdot {\vec {x}}}, any displacement in directions perpendicular to {\displaystyle {\hat {n}}} cannot affect the value of the field.
Plane waves are often used to model electromagnetic waves far from a source. For electromagnetic plane waves, the electric and magnetic fields themselves are transverse to the direction of propagation, and also perpendicular to each other.
=== Standing waves ===
A standing wave, also known as a stationary wave, is a wave whose envelope remains in a constant position. This phenomenon arises as a result of interference between two waves traveling in opposite directions.
The sum of two counter-propagating waves (of equal amplitude and frequency) creates a standing wave. Standing waves commonly arise when a boundary blocks further propagation of the wave, thus causing wave reflection, and therefore introducing a counter-propagating wave. For example, when a violin string is displaced, transverse waves propagate out to where the string is held in place at the bridge and the nut, where the waves are reflected back. At the bridge and nut, the two opposed waves are in antiphase and cancel each other, producing a node. Halfway between two nodes there is an antinode, where the two counter-propagating waves enhance each other maximally. There is no net propagation of energy over time.
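The cancellation and enhancement described above follow from the identity sin(kx − ωt) + sin(kx + ωt) = 2 sin(kx) cos(ωt); a quick check (arbitrary k and ω) that the nodes stay fixed over time:

```python
import math

# Two counter-propagating sine waves of equal amplitude and frequency sum to
# a standing wave: sin(k*x - w*t) + sin(k*x + w*t) = 2*sin(k*x)*cos(w*t),
# so the nodes at k*x = m*pi remain at rest for all times t.
k, w = 3.0, 5.0
for t in (0.0, 0.17, 1.3):
    for m in range(4):
        x = m * math.pi / k                        # a node position
        s = math.sin(k*x - w*t) + math.sin(k*x + w*t)
        assert abs(s - 2 * math.sin(k*x) * math.cos(w*t)) < 1e-12
        assert abs(s) < 1e-12                      # zero amplitude at the node
```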
=== Solitary waves ===
A soliton or solitary wave is a self-reinforcing wave packet that maintains its shape while it propagates at a constant velocity. Solitons are caused by a cancellation of nonlinear and dispersive effects in the medium. (Dispersive effects are a property of certain systems where the speed of a wave depends on its frequency.) Solitons are the solutions of a widespread class of weakly nonlinear dispersive partial differential equations describing physical systems.
== Physical properties ==
=== Propagation ===
Wave propagation is any of the ways in which waves travel. With respect to the direction of the oscillation relative to the propagation direction, we can distinguish between longitudinal wave and transverse waves.
Electromagnetic waves propagate in vacuum as well as in material media. Propagation of other wave types such as sound may occur only in a transmission medium.
==== Reflection of plane waves in a half-space ====
The propagation and reflection of plane waves, e.g. pressure waves (P waves) or shear waves (SH or SV waves), are phenomena that were first characterized within the field of classical seismology, and are now considered fundamental concepts in modern seismic tomography. The analytical solution to this problem exists and is well known. The frequency domain solution can be obtained by first finding the Helmholtz decomposition of the displacement field, which is then substituted into the wave equation. From here, the plane wave eigenmodes can be calculated.
==== SV wave propagation ====
The analytical solution of the SV wave in a half-space indicates that, leaving out special cases, a plane SV wave reflects back into the domain as P and SV waves. The angle of the reflected SV wave is identical to that of the incident wave, while the angle of the reflected P wave is greater than that of the SV wave. For the same wave frequency, the SV wavelength is smaller than the P wavelength.
==== P wave propagation ====
Similar to the SV wave, the P incidence, in general, reflects as the P and SV wave. There are some special cases where the regime is different.
=== Wave velocity ===
Wave velocity is a general concept covering the various kinds of velocity associated with a wave, such as the velocity of its phase and the speed at which it transports energy (and information). The phase velocity is given as:
{\displaystyle v_{\rm {p}}={\frac {\omega }{k}},}
where:
vp is the phase velocity (with SI unit m/s),
ω is the angular frequency (with SI unit rad/s),
k is the wavenumber (with SI unit rad/m).
The phase speed gives the speed at which a point of constant phase of the wave will travel for a discrete frequency. The angular frequency ω cannot be chosen independently from the wavenumber k; both are related through the dispersion relationship:
{\displaystyle \omega =\Omega (k).}
In the special case Ω(k) = ck, with c a constant, the waves are called non-dispersive, since all frequencies travel at the same phase speed c. For instance electromagnetic waves in vacuum are non-dispersive. In case of other forms of the dispersion relation, we have dispersive waves. The dispersion relationship depends on the medium through which the waves propagate and on the type of waves (for instance electromagnetic, sound or water waves).
The speed at which a resultant wave packet from a narrow range of frequencies will travel is called the group velocity and is determined from the gradient of the dispersion relation:
{\displaystyle v_{\rm {g}}={\frac {\partial \omega }{\partial k}}}
In almost all cases, a wave is mainly a movement of energy through a medium. Most often, the group velocity is the velocity at which the energy moves through this medium.
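As an illustrative example (assuming the deep-water gravity-wave dispersion relation ω = √(gk), not given in the text above), the phase and group velocities can be computed from the relations just stated; here the group velocity works out to exactly half the phase velocity, so these waves are dispersive:

```python
# Phase and group velocity from a dispersion relation; as an example
# we assume the deep-water gravity-wave relation w(k) = sqrt(g*k).
g, k0, h = 9.81, 2.0, 1e-6
w = lambda k: (g * k) ** 0.5
v_p = w(k0) / k0                           # phase velocity omega / k
v_g = (w(k0 + h) - w(k0 - h)) / (2 * h)    # group velocity d(omega)/dk
assert abs(v_g - v_p / 2) < 1e-6           # v_g = v_p / 2: dispersive waves
```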
Waves exhibit common behaviors under a number of standard situations, for example:
=== Transmission and media ===
Waves normally move in a straight line (that is, rectilinearly) through a transmission medium. Such media can be classified into one or more of the following categories:
A bounded medium if it is finite in extent, otherwise an unbounded medium
A linear medium if the amplitudes of different waves at any particular point in the medium can be added
A uniform medium or homogeneous medium if its physical properties are unchanged at different locations in space
An anisotropic medium if one or more of its physical properties differ in one or more directions
An isotropic medium if its physical properties are the same in all directions
=== Absorption ===
Waves are usually defined in media which allow most or all of a wave's energy to propagate without loss. However materials may be characterized as "lossy" if they remove energy from a wave, usually converting it into heat. This is termed "absorption." A material which absorbs a wave's energy, either in transmission or reflection, is characterized by a refractive index which is complex. The amount of absorption will generally depend on the frequency (wavelength) of the wave, which, for instance, explains why objects may appear colored.
=== Reflection ===
When a wave strikes a reflective surface, it changes direction, such that the angle made by the incident wave and line normal to the surface equals the angle made by the reflected wave and the same normal line.
=== Refraction ===
Refraction is the phenomenon of a wave changing its speed. Mathematically, this means that the size of the phase velocity changes. Typically, refraction occurs when a wave passes from one medium into another. The amount by which a wave is refracted by a material is given by the refractive index of the material. The directions of incidence and refraction are related to the refractive indices of the two materials by Snell's law.
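A small worked example of Snell's law (air-to-water refractive indices assumed as round values):

```python
import math

# Snell's law n1*sin(theta1) = n2*sin(theta2): compute the refraction angle
# for light passing from air (n ~ 1.00) into water (n ~ 1.33).
n1, n2 = 1.00, 1.33
theta1 = math.radians(45.0)
theta2 = math.asin(n1 * math.sin(theta1) / n2)
assert math.isclose(n1 * math.sin(theta1), n2 * math.sin(theta2), rel_tol=1e-12)
assert theta2 < theta1   # the ray bends toward the normal in the denser medium
```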
=== Diffraction ===
A wave exhibits diffraction when it encounters an obstacle that bends the wave or when it spreads after emerging from an opening. Diffraction effects are more pronounced when the size of the obstacle or opening is comparable to the wavelength of the wave.
=== Interference ===
When waves in a linear medium (the usual case) cross each other in a region of space, they do not actually interact with each other, but continue on as if the other one were not present. However, at any point in that region the field quantities describing those waves add according to the superposition principle. If the waves are of the same frequency in a fixed phase relationship, then there will generally be positions at which the two waves are in phase and their amplitudes add, and other positions where they are out of phase and their amplitudes (partially or fully) cancel. This is called an interference pattern.
=== Polarization ===
The phenomenon of polarization arises when wave motion can occur simultaneously in two orthogonal directions. Transverse waves can be polarized, for instance. When polarization is used as a descriptor without qualification, it usually refers to the special, simple case of linear polarization. A transverse wave is linearly polarized if it oscillates in only one direction or plane. In the case of linear polarization, it is often useful to add the relative orientation of that plane, perpendicular to the direction of travel, in which the oscillation occurs, such as "horizontal" for instance, if the plane of polarization is parallel to the ground. Electromagnetic waves propagating in free space, for instance, are transverse; they can be polarized by the use of a polarizing filter.
Longitudinal waves, such as sound waves, do not exhibit polarization. For these waves there is only one direction of oscillation, that is, along the direction of travel.
=== Dispersion ===
Dispersion is the frequency dependence of the refractive index, a consequence of the atomic nature of materials.
A wave undergoes dispersion when either the phase velocity or the group velocity depends on the wave frequency. Dispersion is seen by letting white light pass through a prism, the result of which is to produce the spectrum of colors of the rainbow. Isaac Newton was the first to recognize that this meant that white light was a mixture of light of different colors.
=== Doppler effect ===
The Doppler effect or Doppler shift is the change in frequency of a wave in relation to an observer who is moving relative to the wave source. It is named after the Austrian physicist Christian Doppler, who described the phenomenon in 1842.
== Mechanical waves ==
A mechanical wave is an oscillation of matter, and therefore transfers energy through a medium. While waves can move over long distances, the movement of the medium of transmission—the material—is limited. Therefore, the oscillating material does not move far from its initial position. Mechanical waves can be produced only in media which possess elasticity and inertia. There are three types of mechanical waves: transverse waves, longitudinal waves, and surface waves.
=== Waves on strings ===
The transverse vibration of a string is a function of tension and inertia, and is constrained by the length of the string as the ends are fixed. This constraint limits the steady state modes that are possible, and thereby the frequencies.
The speed of a transverse wave traveling along a vibrating string (v) is directly proportional to the square root of the tension of the string (T) over the linear mass density (μ):
{\displaystyle v={\sqrt {\frac {T}{\mu }}},}
where the linear density μ is the mass per unit length of the string.
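The relation is straightforward to evaluate; a minimal Python sketch (the tension and linear density are made-up example values):

```python
import math

def string_wave_speed(tension, mu):
    # v = sqrt(T / mu): tension T in newtons, linear density mu in kg/m
    return math.sqrt(tension / mu)

# A string under 100 N of tension with mu = 0.01 kg/m carries
# transverse waves at 100 m/s.
print(string_wave_speed(100.0, 0.01))
```

Quadrupling the tension only doubles the speed, since the dependence is on the square root.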
=== Acoustic waves ===
Acoustic or sound waves are compression waves which travel as body waves at the speed given by:
{\displaystyle v={\sqrt {\frac {B}{\rho _{0}}}},}
or the square root of the adiabatic bulk modulus divided by the ambient density of the medium (see speed of sound).
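The same formula reproduces the familiar speed of sound in air. In the Python sketch below, the adiabatic bulk modulus of air is approximated as γP (an assumption on our part, with γ ≈ 1.4 and P the sea-level pressure):

```python
import math

def sound_speed(bulk_modulus, density):
    # v = sqrt(B / rho0): adiabatic bulk modulus over ambient density
    return math.sqrt(bulk_modulus / density)

# Air at sea level: B ~ gamma * P = 1.4 * 101325 Pa, rho0 ~ 1.2 kg/m^3,
# giving roughly the familiar ~340 m/s.
print(sound_speed(1.4 * 101325, 1.2))
```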
=== Water waves ===
Ripples on the surface of a pond are actually a combination of transverse and longitudinal waves; therefore, the points on the surface follow orbital paths.
Sound, a mechanical wave that propagates through gases, liquids, solids and plasmas.
Inertial waves, which occur in rotating fluids and are restored by the Coriolis effect.
Ocean surface waves, which are perturbations that propagate through water.
=== Body waves ===
Body waves travel through the interior of the medium along paths controlled by the material properties in terms of density and modulus (stiffness). The density and modulus, in turn, vary according to temperature, composition, and material phase. This effect resembles the refraction of light waves. Two types of particle motion result in two types of body waves: Primary and Secondary waves.
=== Seismic waves ===
Seismic waves are waves of energy that travel through the Earth's layers, and are a result of earthquakes, volcanic eruptions, magma movement, large landslides and large man-made explosions that give out low-frequency acoustic energy. They include body waves—the primary (P waves) and secondary waves (S waves)—and surface waves, such as Rayleigh waves, Love waves, and Stoneley waves.
=== Shock waves ===
A shock wave is a type of propagating disturbance. When a wave moves faster than the local speed of sound in a fluid, it is a shock wave. Like an ordinary wave, a shock wave carries energy and can propagate through a medium; however, it is characterized by an abrupt, nearly discontinuous change in pressure, temperature and density of the medium.
=== Shear waves ===
Shear waves are body waves due to shear rigidity and inertia. They can only be transmitted through solids and to a lesser extent through liquids with a sufficiently high viscosity.
=== Other ===
Waves of traffic, that is, propagation of different densities of motor vehicles, and so forth, which can be modeled as kinematic waves
Metachronal wave refers to the appearance of a traveling wave produced by coordinated sequential actions.
== Electromagnetic waves ==
An electromagnetic wave consists of two waves that are oscillations of the electric and magnetic fields. An electromagnetic wave travels in a direction that is at right angles to the oscillation direction of both fields. In the 19th century, James Clerk Maxwell showed that, in vacuum, the electric and magnetic fields both satisfy the wave equation with speed equal to the speed of light. From this emerged the idea that light is an electromagnetic wave. The unification of light and electromagnetic waves was experimentally confirmed by Hertz at the end of the 1880s. Electromagnetic waves can have different frequencies (and thus wavelengths), and are classified accordingly in wavebands, such as radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays. The range of frequencies in each of these bands is continuous, and the limits of each band are mostly arbitrary, with the exception of visible light, which must be visible to the normal human eye.
== Quantum mechanical waves ==
=== Schrödinger equation ===
The Schrödinger equation describes the wave-like behavior of particles in quantum mechanics. Solutions of this equation are wave functions which can be used to describe the probability density of a particle.
=== Dirac equation ===
The Dirac equation is a relativistic wave equation detailing electromagnetic interactions. Dirac waves accounted for the fine details of the hydrogen spectrum in a completely rigorous way. The wave equation also implied the existence of a new form of matter, antimatter, previously unsuspected and unobserved and which was experimentally confirmed. In the context of quantum field theory, the Dirac equation is reinterpreted to describe quantum fields corresponding to spin-1⁄2 particles.
=== de Broglie waves ===
Louis de Broglie postulated that all particles with momentum have a wavelength
{\displaystyle \lambda ={\frac {h}{p}},}
where h is the Planck constant, and p is the magnitude of the momentum of the particle. This hypothesis was the basis of quantum mechanics. Nowadays, this wavelength is called the de Broglie wavelength. For example, the electrons in a CRT display have a de Broglie wavelength of about 10⁻¹³ m.
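The relation λ = h/p is trivial to evaluate; the Python sketch below applies it to an everyday object (a baseball, our illustrative example rather than the CRT electron) to show why wave behavior is invisible at macroscopic scales:

```python
def de_broglie_wavelength(p):
    # lambda = h / p, with the Planck constant h in J*s
    h = 6.62607015e-34
    return h / p

# A 0.145 kg baseball at 40 m/s (p = 5.8 kg*m/s) has a de Broglie
# wavelength of about 1.1e-34 m, far too small to ever observe.
print(de_broglie_wavelength(0.145 * 40.0))
```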
A wave representing such a particle traveling in the k-direction is expressed by the wave function as follows:
{\displaystyle \psi (\mathbf {r} ,\,t=0)=Ae^{i\mathbf {k\cdot r} },}
where the wavelength is determined by the wave vector k as:
{\displaystyle \lambda ={\frac {2\pi }{k}},}
and the momentum by:
{\displaystyle \mathbf {p} =\hbar \mathbf {k} .}
However, a wave like this with definite wavelength is not localized in space, and so cannot represent a particle localized in space. To localize a particle, de Broglie proposed a superposition of different wavelengths ranging around a central value in a wave packet, a waveform often used in quantum mechanics to describe the wave function of a particle. In a wave packet, the wavelength of the particle is not precise, and the local wavelength deviates on either side of the main wavelength value.
In representing the wave function of a localized particle, the wave packet is often taken to have a Gaussian shape and is called a Gaussian wave packet. Gaussian wave packets also are used to analyze water waves.
For example, a Gaussian wavefunction ψ might take the form:
{\displaystyle \psi (x,\,t=0)=A\exp \left(-{\frac {x^{2}}{2\sigma ^{2}}}+ik_{0}x\right),}
at some initial time t = 0, where the central wavelength is related to the central wave vector k0 as λ0 = 2π / k0. It is well known from the theory of Fourier analysis, or from the Heisenberg uncertainty principle (in the case of quantum mechanics) that a narrow range of wavelengths is necessary to produce a localized wave packet, and the more localized the envelope, the larger the spread in required wavelengths. The Fourier transform of a Gaussian is itself a Gaussian. Given the Gaussian:
{\displaystyle f(x)=e^{-x^{2}/\left(2\sigma ^{2}\right)},}
the Fourier transform is:
{\displaystyle {\tilde {f}}(k)=\sigma e^{-\sigma ^{2}k^{2}/2}.}
The Gaussian in space therefore is made up of waves:
{\displaystyle f(x)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }\ {\tilde {f}}(k)e^{ikx}\ dk;}
that is, a number of waves of wavelengths λ such that kλ = 2π.
The parameter σ decides the spatial spread of the Gaussian along the x-axis, while the Fourier transform shows a spread in wave vector k determined by 1/σ. That is, the smaller the extent in space, the larger the extent in k, and hence in λ = 2π/k.
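This reciprocal relation between σ and the spread in k can be checked numerically. The Python sketch below integrates the transform on a grid (the grid limits and function names are ours) and recovers an rms width of 1/σ:

```python
import math

def gaussian_ft(k, sigma):
    # Fourier transform of exp(-x^2/(2 sigma^2)) is sigma * exp(-sigma^2 k^2 / 2)
    return sigma * math.exp(-(sigma * k) ** 2 / 2)

def k_spread(sigma, kmax=40.0, n=8001):
    # rms width of the transform, integrated numerically on a uniform grid
    dk = 2 * kmax / (n - 1)
    ks = [-kmax + i * dk for i in range(n)]
    w = [gaussian_ft(k, sigma) for k in ks]
    norm = sum(w) * dk
    second = sum(wi * k * k for wi, k in zip(w, ks)) * dk
    return math.sqrt(second / norm)

# Halving sigma doubles the spread in k: narrower in space, wider in k.
print(k_spread(0.5), k_spread(2.0))
```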
== Gravity waves ==
Gravity waves are waves generated in a fluid medium or at the interface between two media when the force of gravity or buoyancy works to restore equilibrium. Surface waves on water are the most familiar example.
== Gravitational waves ==
Gravitational waves also travel through space. The first observation of gravitational waves was announced on 11 February 2016.
Gravitational waves are disturbances in the curvature of spacetime, predicted by Einstein's theory of general relativity.
== See also ==
Index of wave articles
=== Waves in general ===
==== Parameters ====
==== Waveforms ====
=== Electromagnetic waves ===
=== In fluids ===
=== In quantum mechanics ===
=== In relativity ===
=== Other specific types of waves ===
=== Related topics ===
== References ==
== Sources ==
== External links ==
The Feynman Lectures on Physics: Waves
Linear and nonlinear waves
Science Aid: Wave properties – Concise guide aimed at teens Archived 2019-09-04 at the Wayback Machine
"AT&T Archives: Similiarities of Wave Behavior" demonstrated by J.N. Shive of Bell Labs (video on YouTube) | Wikipedia/Wave_(physics) |
In physics, mass–energy equivalence is the relationship between mass and energy in a system's rest frame. The two differ only by a multiplicative constant and the units of measurement. The principle is described by the physicist Albert Einstein's formula:
{\displaystyle E=mc^{2}}
In a reference frame where the system is moving, its relativistic energy and relativistic mass (instead of rest mass) obey the same formula.
The formula defines the energy (E) of a particle in its rest frame as the product of mass (m) with the speed of light squared (c2). Because the speed of light is a large number in everyday units (approximately 300000 km/s or 186000 mi/s), the formula implies that a small amount of mass corresponds to an enormous amount of energy.
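The scale involved is easy to make concrete; a minimal Python sketch evaluating E = mc² for one kilogram:

```python
def rest_energy(mass_kg):
    # E = m c^2, with c in m/s
    c = 299792458.0
    return mass_kg * c ** 2

# One kilogram of mass corresponds to about 9e16 J, roughly
# 21 megatons of TNT equivalent.
print(rest_energy(1.0))
```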
Rest mass, also called invariant mass, is a fundamental physical property of matter, independent of velocity. Massless particles such as photons have zero invariant mass, but massless free particles have both momentum and energy.
The equivalence principle implies that when mass is lost in chemical reactions or nuclear reactions, a corresponding amount of energy will be released. The energy can be released to the environment (outside of the system being considered) as radiant energy, such as light, or as thermal energy. The principle is fundamental to many fields of physics, including nuclear and particle physics.
Mass–energy equivalence arose from special relativity as a paradox described by the French polymath Henri Poincaré (1854–1912). Einstein was the first to propose the equivalence of mass and energy as a general principle and a consequence of the symmetries of space and time. The principle first appeared in "Does the inertia of a body depend upon its energy-content?", one of his annus mirabilis papers, published on 21 November 1905. The formula and its relationship to momentum, as described by the energy–momentum relation, were later developed by other physicists.
== Description ==
Mass–energy equivalence states that all objects having mass, or massive objects, have a corresponding intrinsic energy, even when they are stationary. In the rest frame of an object, where by definition it is motionless and so has no momentum, the mass and energy are equal or they differ only by a constant factor, the speed of light squared (c2). In Newtonian mechanics, a motionless body has no kinetic energy, and it may or may not have other amounts of internal stored energy, like chemical energy or thermal energy, in addition to any potential energy it may have from its position in a field of force. These energies tend to be much smaller than the mass of the object multiplied by c2, which is on the order of 10¹⁷ joules for a mass of one kilogram. Due to this principle, the mass of the atoms that come out of a nuclear reaction is less than the mass of the atoms that go in, and the difference in mass shows up as heat and light with the same equivalent energy as the difference. In analyzing these extreme events, Einstein's formula can be used with E as the energy released (removed), and m as the change in mass.
In relativity, all the energy that moves with an object (i.e., the energy as measured in the object's rest frame) contributes to the total mass of the body, which measures how much it resists acceleration. If an isolated box of ideal mirrors could contain light, the individually massless photons would contribute to the total mass of the box by the amount equal to their energy divided by c2. For an observer in the rest frame, removing energy is the same as removing mass and the formula m = E/c2 indicates how much mass is lost when energy is removed. In the same way, when any energy is added to an isolated system, the increase in the mass is equal to the added energy divided by c2.
== Mass in special relativity ==
An object moves at different speeds in different frames of reference, depending on the motion of the observer. This implies the kinetic energy, in both Newtonian mechanics and relativity, is 'frame dependent', so that the amount of relativistic energy that an object is measured to have depends on the observer. The relativistic mass of an object is given by the relativistic energy divided by c2. Because the relativistic mass is exactly proportional to the relativistic energy, relativistic mass and relativistic energy are nearly synonymous; the only difference between them is the units. The rest mass or invariant mass of an object is defined as the mass an object has in its rest frame, when it is not moving with respect to the observer. The rest mass is the same for all inertial frames; as it is independent of the motion of the observer, it is the smallest possible value of the relativistic mass of the object. Because of the attraction between components of a system, which results in potential energy, the rest mass is almost never additive; in general, the mass of an object is not the sum of the masses of its parts. The rest mass of an object is the total energy of all the parts, including kinetic energy, as observed from the center of momentum frame, and potential energy. The masses add up only if the constituents are at rest (as observed from the center of momentum frame) and do not attract or repel, so that they do not have any extra kinetic or potential energy. Massless particles are particles with no rest mass, and therefore have no intrinsic energy; their energy is due only to their momentum.
=== Relativistic mass ===
Relativistic mass depends on the motion of the object, so that different observers in relative motion see different values for it. The relativistic mass of a moving object is larger than the relativistic mass of an object at rest, because a moving object has kinetic energy. If the object moves slowly, the relativistic mass is nearly equal to the rest mass and both are nearly equal to the classical inertial mass (as it appears in Newton's laws of motion). If the object moves quickly, the relativistic mass is greater than the rest mass by an amount equal to the mass associated with the kinetic energy of the object. Massless particles also have relativistic mass derived from their kinetic energy, equal to their relativistic energy divided by c2, or mrel = E/c2. In natural units, where the speed of light is set to one, relativistic mass and energy are equal in value and dimension. As it is just another name for the energy, the use of the term relativistic mass is redundant and physicists generally reserve mass to refer to rest mass, or invariant mass, as opposed to relativistic mass. A consequence of this terminology is that the mass is not conserved in special relativity, whereas the conservation of momentum and conservation of energy are both fundamental laws.
=== Conservation of mass and energy ===
Conservation of energy is a universal principle in physics and holds for any interaction, along with the conservation of momentum. The classical conservation of mass, in contrast, is violated in certain relativistic settings. This concept has been experimentally proven in a number of ways, including the conversion of mass into kinetic energy in nuclear reactions and other interactions between elementary particles. While modern physics has discarded the expression 'conservation of mass', in older terminology a relativistic mass can also be defined to be equivalent to the energy of a moving system, allowing for a conservation of relativistic mass. Mass conservation breaks down when the energy associated with the mass of a particle is converted into other forms of energy, such as kinetic energy, thermal energy, or radiant energy.
=== Massless particles ===
Massless particles have zero rest mass. The Planck–Einstein relation for the energy of photons is given by the equation E = hf, where h is the Planck constant and f is the photon frequency. This frequency and thus the relativistic energy are frame-dependent. If an observer moves away from the source in the same direction as an emitted photon, so that the photon catches up with the observer, the observer sees it as having less energy than it had at the source. The faster the observer is traveling with regard to the source when the photon catches up, the less energy the photon would be seen to have. As an observer approaches the speed of light with regard to the source, the redshift of the photon increases, according to the relativistic Doppler effect. The energy of the photon is reduced and as the wavelength becomes arbitrarily large, the photon's energy approaches zero, because of the massless nature of photons, which does not permit any intrinsic energy.
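The Planck–Einstein relation E = hf is easy to evaluate; the Python sketch below uses a green 532 nm photon as an illustrative example (the wavelength is our choice, not from the text):

```python
def photon_energy(wavelength_m):
    # E = h f = h c / lambda
    h = 6.62607015e-34  # Planck constant, J*s
    c = 299792458.0     # speed of light, m/s
    return h * c / wavelength_m

# A green 532 nm photon carries about 3.7e-19 J (roughly 2.3 eV);
# redshifting it to longer wavelengths drives this energy toward zero.
print(photon_energy(532e-9))
```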
=== Composite systems ===
For closed systems made up of many parts, like an atomic nucleus, planet, or star, the relativistic energy is given by the sum of the relativistic energies of each of the parts, because energies are additive in these systems. If a system is bound by attractive forces, and the energy gained in excess of the work done is removed from the system, then mass is lost with this removed energy. The mass of an atomic nucleus is less than the total mass of the protons and neutrons that make it up. This mass decrease is also equivalent to the energy required to break up the nucleus into individual protons and neutrons. This effect can be understood by looking at the potential energy of the individual components. The individual particles have a force attracting them together, and forcing them apart increases the potential energy of the particles in the same way that lifting an object up on earth does. This energy is equal to the work required to split the particles apart. The mass of the Solar System is slightly less than the sum of its individual masses.
For an isolated system of particles moving in different directions, the invariant mass of the system is the analog of the rest mass, and is the same for all observers, even those in relative motion. It is defined as the total energy (divided by c2) in the center of momentum frame. The center of momentum frame is defined so that the system has zero total momentum; the term center of mass frame is also sometimes used, where the center of mass frame is a special case of the center of momentum frame where the center of mass is put at the origin. A simple example of an object with moving parts but zero total momentum is a container of gas. In this case, the mass of the container is given by its total energy (including the kinetic energy of the gas molecules), since the system's total energy and invariant mass are the same in any reference frame where the momentum is zero, and such a reference frame is also the only frame in which the object can be weighed. In a similar way, the theory of special relativity posits that the thermal energy in all objects, including solids, contributes to their total masses, even though this energy is present as the kinetic and potential energies of the atoms in the object, and it (in a similar way to the gas) is not seen in the rest masses of the atoms that make up the object. Similarly, even photons, if trapped in an isolated container, would contribute their energy to the mass of the container. Such extra mass, in theory, could be weighed in the same way as any other type of rest mass, even though individually photons have no rest mass. The property that trapped energy in any form adds weighable mass to systems that have no net momentum is one of the consequences of relativity. It has no counterpart in classical Newtonian physics, where energy never exhibits weighable mass.
=== Relation to gravity ===
Physics has two concepts of mass, the gravitational mass and the inertial mass. The gravitational mass is the quantity that determines the strength of the gravitational field generated by an object, as well as the gravitational force acting on the object when it is immersed in a gravitational field produced by other bodies. The inertial mass, on the other hand, quantifies how much an object accelerates if a given force is applied to it. The mass–energy equivalence in special relativity refers to the inertial mass. However, already in the context of Newtonian gravity, the weak equivalence principle is postulated: the gravitational and the inertial mass of every object are the same. Thus, the mass–energy equivalence, combined with the weak equivalence principle, results in the prediction that all forms of energy contribute to the gravitational field generated by an object. This observation is one of the pillars of the general theory of relativity.
The prediction that all forms of energy interact gravitationally has been subject to experimental tests. One of the first observations testing this prediction, called the Eddington experiment, was made during the solar eclipse of May 29, 1919. During the eclipse, the English astronomer and physicist Arthur Eddington observed that the light from stars passing close to the Sun was bent. The effect is due to the gravitational attraction of light by the Sun. The observation confirmed that the energy carried by light indeed is equivalent to a gravitational mass. Another seminal experiment, the Pound–Rebka experiment, was performed in 1960. In this test a beam of light was emitted from the top of a tower and detected at the bottom. The frequency of the light detected was higher than the light emitted. This result confirms that the energy of photons increases when they fall in the gravitational field of the Earth. The energy, and therefore the gravitational mass, of photons is proportional to their frequency as stated by the Planck's relation.
== Efficiency ==
In some reactions, matter particles can be destroyed and their associated energy released to the environment as other forms of energy, such as light and heat. One example of such a conversion takes place in elementary particle interactions, where the rest energy is transformed into kinetic energy. Such conversions between types of energy happen in nuclear weapons, in which the protons and neutrons in atomic nuclei lose a small fraction of their original mass, though the mass lost is not due to the destruction of any smaller constituents. Nuclear fission allows a tiny fraction of the energy associated with the mass to be converted into usable energy such as radiation; in the decay of the uranium, for instance, about 0.1% of the mass of the original atom is lost. In theory, it should be possible to destroy matter and convert all of the rest-energy associated with matter into heat and light, but none of the theoretically known methods are practical. One way to harness all the energy associated with mass is to annihilate matter with antimatter. Antimatter is rare in the universe, however, and the known mechanisms of production require more usable energy than would be released in annihilation. CERN estimated in 2011 that over a billion times more energy is required to make and store antimatter than could be released in its annihilation.
As most of the mass which comprises ordinary objects resides in protons and neutrons, converting all the energy of ordinary matter into more useful forms requires that the protons and neutrons be converted to lighter particles, or particles with no mass at all. In the Standard Model of particle physics, the number of protons plus neutrons is nearly exactly conserved. Despite this, Gerard 't Hooft showed that there is a process that converts protons and neutrons to antielectrons and neutrinos. This is the weak SU(2) instanton proposed by the physicists Alexander Belavin, Alexander Markovich Polyakov, Albert Schwarz, and Yu. S. Tyupkin. This process can, in principle, destroy matter and convert all the energy of matter into neutrinos and usable energy, but it is normally extraordinarily slow. It was later shown that the process occurs rapidly at extremely high temperatures that would only have been reached shortly after the Big Bang.
Many extensions of the standard model contain magnetic monopoles, and in some models of grand unification, these monopoles catalyze proton decay, a process known as the Callan–Rubakov effect. This process would be an efficient mass–energy conversion at ordinary temperatures, but it requires making monopoles and anti-monopoles, whose production is expected to be inefficient. Another method of completely annihilating matter uses the gravitational field of black holes. The British theoretical physicist Stephen Hawking theorized it is possible to throw matter into a black hole and use the emitted heat to generate power. According to the theory of Hawking radiation, however, larger black holes radiate less than smaller ones, so that usable power can only be produced by small black holes.
== Extension for systems in motion ==
Unlike a system's energy in an inertial frame, the relativistic energy ({\displaystyle E_{\rm {rel}}}) of a system depends on both the rest mass ({\displaystyle m_{0}}) and the total momentum of the system. The extension of Einstein's equation to these systems is given by:
{\displaystyle E_{\rm {rel}}^{2}-|\mathbf {p} |^{2}c^{2}=m_{0}^{2}c^{4}}
or
{\displaystyle E_{\rm {rel}}^{2}-(pc)^{2}=(m_{0}c^{2})^{2}}
or
{\displaystyle E_{\rm {rel}}={\sqrt {(m_{0}c^{2})^{2}+(pc)^{2}}}}
where the {\displaystyle (pc)^{2}} term represents the square of the Euclidean norm (total vector length) of the various momentum vectors in the system, which reduces to the square of the simple momentum magnitude, if only a single particle is considered. This equation is called the energy–momentum relation and reduces to
{\displaystyle E_{\rm {rel}}=mc^{2}}
when the momentum term is zero. For photons, where {\displaystyle m_{0}=0}, the equation reduces to {\displaystyle E_{\rm {rel}}=pc}.
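Both limiting cases of the energy–momentum relation can be checked directly; a Python sketch in SI units (the function name is ours):

```python
import math

C = 299792458.0  # speed of light, m/s

def relativistic_energy(m0, p):
    # E_rel = sqrt((m0 c^2)^2 + (p c)^2), with m0 in kg and p in kg*m/s
    return math.sqrt((m0 * C ** 2) ** 2 + (p * C) ** 2)

# At zero momentum the relation reduces to the rest energy E = m0 c^2,
# and for a massless particle (m0 = 0) it reduces to E = p c.
print(relativistic_energy(1.0, 0.0))
print(relativistic_energy(0.0, 1.0))
```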
== Low-speed approximation ==
Using the Lorentz factor, γ, the energy–momentum can be rewritten as E = γmc2 and expanded as a power series:
{\displaystyle E=m_{0}c^{2}\left[1+{\frac {1}{2}}\left({\frac {v}{c}}\right)^{2}+{\frac {3}{8}}\left({\frac {v}{c}}\right)^{4}+{\frac {5}{16}}\left({\frac {v}{c}}\right)^{6}+\ldots \right].}
For speeds much smaller than the speed of light, higher-order terms in this expression get smaller and smaller because v/c is small. For low speeds, all but the first two terms can be ignored:
{\displaystyle E\approx m_{0}c^{2}+{\frac {1}{2}}m_{0}v^{2}.}
In classical mechanics, both the m0c2 term and the high-speed corrections are ignored. The initial value of the energy is arbitrary, as only the change in energy can be measured and so the m0c2 term is ignored in classical physics. While the higher-order terms become important at higher speeds, the Newtonian equation is a highly accurate low-speed approximation; adding in the third term yields:
{\displaystyle E\approx m_{0}c^{2}+{\frac {1}{2}}m_{0}v^{2}\left(1+{\frac {3v^{2}}{4c^{2}}}\right).}
The difference between the two approximations is given by {\displaystyle {\tfrac {3v^{2}}{4c^{2}}}}, a number very small for everyday objects. In 2018 NASA announced the Parker Solar Probe was the fastest ever, with a speed of 153,454 miles per hour (68,600 m/s). The difference between the approximations for the Parker Solar Probe in 2018 is {\displaystyle {\tfrac {3v^{2}}{4c^{2}}}\approx 3.9\times 10^{-8}}, which accounts for an energy correction of four parts per hundred million. The gravitational constant, in contrast, has a standard relative uncertainty of about {\displaystyle 2.2\times 10^{-5}}.
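The numbers above can be reproduced directly; a Python sketch comparing the exact energy with the Newtonian approximation at the quoted probe speed (function names are ours):

```python
import math

C = 299792458.0  # speed of light, m/s

def exact_energy(m0, v):
    # full relativistic energy E = gamma * m0 * c^2
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * m0 * C ** 2

def newtonian_energy(m0, v):
    # first two terms of the expansion: rest energy + classical kinetic energy
    return m0 * C ** 2 + 0.5 * m0 * v ** 2

# Relative size of the next correction term at the Parker Solar Probe's
# 2018 speed of 68,600 m/s:
v = 68600.0
print(3 * v ** 2 / (4 * C ** 2))  # ~3.9e-8
```

At this speed the exact energy exceeds the Newtonian value only by this tiny fraction of the kinetic term, which is why the classical formula is so accurate in everyday use.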
== Applications ==
=== Application to nuclear physics ===
The nuclear binding energy is the minimum energy that is required to disassemble the nucleus of an atom into its component parts. The mass of an atom is less than the sum of the masses of its constituents due to the attraction of the strong nuclear force. The difference between the two masses is called the mass defect and is related to the binding energy through Einstein's formula. The principle is used in modeling nuclear fission reactions, and it implies that a great amount of energy can be released by the nuclear fission chain reactions used in both nuclear weapons and nuclear power.
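The mass defect is simple to compute from tabulated masses. The Python sketch below uses helium-4 as an illustrative case, with particle masses in atomic mass units taken from standard tables (the specific values are our inputs, not from the text):

```python
# Mass defect of helium-4: 2 protons + 2 neutrons vs. the bound nucleus.
# Masses in unified atomic mass units (u); 1 u ~ 931.494 MeV/c^2.
M_PROTON = 1.007276
M_NEUTRON = 1.008665
M_HE4_NUCLEUS = 4.001506

defect_u = 2 * M_PROTON + 2 * M_NEUTRON - M_HE4_NUCLEUS
binding_energy_mev = defect_u * 931.494

# About 28.3 MeV: the energy needed to pull the nucleus apart, and the
# energy released when it is assembled from free nucleons.
print(binding_energy_mev)
```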
A water molecule weighs a little less than two free hydrogen atoms and an oxygen atom. The minuscule mass difference is the energy needed to split the molecule into three individual atoms (divided by c2), which was given off as heat when the molecule formed (this heat had mass). Similarly, a stick of dynamite in theory weighs a little bit more than the fragments after the explosion; in this case the mass difference is the energy and heat that is released when the dynamite explodes. Such a change in mass may only happen when the system is open, and the energy and mass are allowed to escape. Thus, if a stick of dynamite is detonated in a hermetically sealed chamber, the mass of the chamber and fragments, the heat, sound, and light would still be equal to the original mass of the chamber and dynamite. If sitting on a scale, the weight and mass would not change. This would in theory also happen even with a nuclear bomb, if it could be kept in an ideal box of infinite strength, which did not rupture or pass radiation. Thus, a 21.5 kiloton (9×10¹³ joule) nuclear bomb produces about one gram of heat and electromagnetic radiation, but the mass of this energy would not be detectable in an exploded bomb in an ideal box sitting on a scale; instead, the contents of the box would be heated to millions of degrees without changing total mass and weight. If a transparent window passing only electromagnetic radiation were opened in such an ideal box after the explosion, and a beam of X-rays and other lower-energy light allowed to escape the box, it would eventually be found to weigh one gram less than it had before the explosion. This weight loss and mass loss would happen as the box was cooled by this process to room temperature. However, any surrounding mass that absorbed the X-rays (and other "heat") would gain this gram of mass from the resulting heating; thus, in this case, the mass "loss" would represent merely its relocation.
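The one-gram figure for a 21.5 kiloton explosion follows directly from E = mc2; a minimal check:

```python
# Mass equivalent of the ~9e13 J released by a 21.5 kiloton explosion,
# as discussed above: the heat and radiation carry about one gram of mass.
c = 299_792_458.0  # speed of light, m/s
E = 9e13           # energy released, joules

m = E / c**2                  # kilograms, from E = mc^2
print(f"{m * 1000:.2f} g")    # ≈ 1.00 g
```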
=== Practical examples ===
Einstein used the centimetre–gram–second system of units (cgs), but the formula is independent of the system of units. In natural units, the numerical value of the speed of light is set to equal 1, and the formula expresses an equality of numerical values: E = m. In the SI system (expressing the ratio E/m in joules per kilogram using the value of c in metres per second):
E/m = c2 = (299792458 m/s)² = 89875517873681764 J/kg (≈ 9.0 × 10¹⁶ joules per kilogram).
So the energy equivalent of one kilogram of mass is
89.9 petajoules
25.0 billion kilowatt-hours (or 25,000 GW·h)
21.5 trillion kilocalories (or 21.5 Pcal)
85.2 trillion BTUs (or 0.0852 quads)
or the energy released by combustion of any of the following:
21 500 kilotons of TNT-equivalent energy (or 21.5 Mt)
2 630 000 000 litres or 695 000 000 US gallons of automotive gasoline
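The listed equivalents all follow from dividing the roughly 9.0e16 joules per kilogram by the appropriate conversion factor; a sketch (the conversion factors are standard values, assumed here):

```python
# Energy equivalent of one kilogram of mass, expressed in the units listed above.
c = 299_792_458.0      # speed of light, m/s
E = 1.0 * c**2         # joules, from E = mc^2 with m = 1 kg

petajoules = E / 1e15      # ≈ 89.9 PJ
kwh        = E / 3.6e6     # ≈ 2.50e10 kWh, i.e. 25.0 billion kilowatt-hours
kilocal    = E / 4184.0    # ≈ 2.15e13 kcal, i.e. 21.5 trillion kilocalories
megatons   = E / 4.184e15  # ≈ 21.5 Mt of TNT-equivalent energy
print(petajoules, kwh, kilocal, megatons)
```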
Any time energy is released, the process can be evaluated from an E = mc2 perspective. For instance, the "gadget"-style bomb used in the Trinity test and the bombing of Nagasaki had an explosive yield equivalent to 21 kt of TNT. About 1 kg of the approximately 6.15 kg of plutonium in each of these bombs fissioned into lighter elements totaling almost exactly one gram less, after cooling. The electromagnetic radiation and kinetic energy (thermal and blast energy) released in this explosion carried the missing gram of mass.
Whenever energy is added to a system, the system gains mass, as shown when the equation is rearranged to m = E/c2:
A spring's mass increases whenever it is put into compression or tension. Its mass increase arises from the increased potential energy stored within it, which is bound in the stretched chemical (electron) bonds linking the atoms within the spring.
Raising the temperature of an object (increasing its thermal energy) increases its mass. For example, consider the world's primary mass standard for the kilogram, made of platinum and iridium. If its temperature is allowed to change by 1 °C, its mass changes by 1.5 picograms (1 pg = 1×10⁻¹² g).
A spinning ball has greater mass than when it is not spinning. Its increase of mass is exactly the equivalent of the mass of energy of rotation, which is itself the sum of the kinetic energies of all the moving parts of the ball. For example, the Earth itself is more massive due to its rotation, than it would be with no rotation. The rotational energy of the Earth is greater than 10²⁴ joules, which is over 10⁷ kg.
== History ==
While Einstein was the first to have correctly deduced the mass–energy equivalence formula, he was not the first to have related energy with mass, though nearly all previous authors thought that the energy that contributes to mass comes only from electromagnetic fields. Once discovered, Einstein's formula was initially written in many different notations, and its interpretation and justification was further developed in several steps.
=== Developments prior to Einstein ===
Eighteenth century theories on the correlation of mass and energy included that devised by the English scientist Isaac Newton in 1717, who speculated that light particles and matter particles were interconvertible in "Query 30" of the Opticks, where he asks: "Are not the gross bodies and light convertible into one another, and may not bodies receive much of their activity from the particles of light which enter their composition?" Swedish scientist and theologian Emanuel Swedenborg, in his Principia of 1734 theorized that all matter is ultimately composed of dimensionless points of "pure and total motion". He described this motion as being without force, direction or speed, but having the potential for force, direction and speed everywhere within it.
During the nineteenth century there were several speculative attempts to show that mass and energy were proportional in various ether theories. In 1873 the Russian physicist and mathematician Nikolay Umov pointed out a relation between mass and energy for ether in the form of Е = kmc2, where 0.5 ≤ k ≤ 1. English engineer Samuel Tolver Preston in 1875 and the Italian industrialist and geologist Olinto De Pretto in 1903, following physicist Georges-Louis Le Sage, imagined that the universe was filled with an ether of tiny particles that always move at speed c. Each of these particles has a kinetic energy of mc2 up to a small numerical factor, giving a mass–energy relation.
In 1905, independently of Einstein, French polymath Gustave Le Bon speculated that atoms could release large amounts of latent energy, reasoning from an all-encompassing qualitative philosophy of physics.
==== Electromagnetic mass ====
There were many attempts in the 19th and the beginning of the 20th century—like those of British physicists J. J. Thomson in 1881 and Oliver Heaviside in 1889, and George Frederick Charles Searle in 1897, German physicists Wilhelm Wien in 1900 and Max Abraham in 1902, and the Dutch physicist Hendrik Antoon Lorentz in 1904—to understand how the mass of a charged object depends on the electrostatic field. This concept was called electromagnetic mass, and was considered as being dependent on velocity and direction as well. Lorentz in 1904 gave the following expressions for longitudinal and transverse electromagnetic mass:
{\displaystyle m_{L}={\frac {m_{0}}{\left({\sqrt {1-{\frac {v^{2}}{c^{2}}}}}\right)^{3}}},\quad m_{T}={\frac {m_{0}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}},
where
{\displaystyle m_{0}={\frac {4}{3}}{\frac {E_{em}}{c^{2}}}}.
Another way of deriving a type of electromagnetic mass was based on the concept of radiation pressure. In 1900, French polymath Henri Poincaré associated electromagnetic radiation energy with a "fictitious fluid" having momentum and mass
{\displaystyle m_{em}={\frac {E_{em}}{c^{2}}}\,.}
By that, Poincaré tried to save the center of mass theorem in Lorentz's theory, though his treatment led to radiation paradoxes.
Austrian physicist Friedrich Hasenöhrl showed in 1904 that electromagnetic cavity radiation contributes the "apparent mass"
{\displaystyle m_{0}={\frac {4}{3}}{\frac {E_{em}}{c^{2}}}}
to the cavity's mass. He argued that this implies mass dependence on temperature as well.
=== Einstein: mass–energy equivalence ===
Einstein did not write the exact formula E = mc2 in his 1905 Annus Mirabilis paper "Does the Inertia of an object Depend Upon Its Energy Content?"; rather, the paper states that if a body gives off the energy L by emitting light, its mass diminishes by L/c2. This formulation relates only a change Δm in mass to a change L in energy without requiring the absolute relationship. The relationship convinced him that mass and energy can be seen as two names for the same underlying, conserved physical quantity. He has stated that the laws of conservation of energy and conservation of mass are "one and the same". Einstein elaborated in a 1946 essay that "the principle of the conservation of mass… proved inadequate in the face of the special theory of relativity. It was therefore merged with the energy conservation principle—just as, about 60 years before, the principle of the conservation of mechanical energy had been combined with the principle of the conservation of heat [thermal energy]. We might say that the principle of the conservation of energy, having previously swallowed up that of the conservation of heat, now proceeded to swallow that of the conservation of mass—and holds the field alone."
==== Mass–velocity relationship ====
In developing special relativity, Einstein found that the kinetic energy of a moving body is
{\displaystyle E_{k}=m_{0}c^{2}(\gamma -1)=m_{0}c^{2}\left({\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}-1\right),}
with v the velocity, m0 the rest mass, and γ the Lorentz factor.
He included the second term on the right to make sure that for small velocities the energy would be the same as in classical mechanics, thus satisfying the correspondence principle:
{\displaystyle E_{k}={\frac {1}{2}}m_{0}v^{2}+\cdots }
Without this second term, there would be an additional contribution in the energy when the particle is not moving.
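The correspondence can be checked numerically; a sketch comparing the relativistic and classical kinetic energies at one percent of the speed of light (the mass and speed are arbitrary illustrative values):

```python
# Low-speed check of E_k = m0*c^2*(gamma - 1) against the classical
# (1/2)*m0*v^2, for a 1 kg mass at ~1% of the speed of light.
import math

c, m0, v = 299_792_458.0, 1.0, 3e6
gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)

relativistic = m0 * c**2 * (gamma - 1.0)
classical    = 0.5 * m0 * v**2
print(relativistic, classical)
# The two agree to about 1 part in 13,000, matching the leading
# correction 3v^2/(4c^2) ≈ 7.5e-5 at this speed.
```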
==== Einstein's view on mass ====
Einstein, following Lorentz and Abraham, used velocity- and direction-dependent mass concepts in his 1905 electrodynamics paper and in another paper in 1906. In Einstein's first 1905 paper on E = mc2, he treated m as what would now be called the rest mass, and it has been noted that in his later years he did not like the idea of "relativistic mass".
In modern physics terminology, relativistic energy is used in lieu of relativistic mass and the term "mass" is reserved for the rest mass. Historically, there has been considerable debate over the use of the concept of "relativistic mass" and the connection of "mass" in relativity to "mass" in Newtonian dynamics. One view is that only rest mass is a viable concept and is a property of the particle; while relativistic mass is a conglomeration of particle properties and properties of spacetime. Another view, attributed to Norwegian physicist Kjell Vøyenli, is that the Newtonian concept of mass as a particle property and the relativistic concept of mass have to be viewed as embedded in their own theories and as having no precise connection.
==== Einstein's 1905 derivation ====
Already in his relativity paper "On the electrodynamics of moving bodies", Einstein derived the correct expression for the kinetic energy of particles:
{\displaystyle E_{k}=mc^{2}\left({\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}-1\right)}.
Now the question remained open as to which formulation applies to bodies at rest. This was tackled by Einstein in his paper "Does the inertia of a body depend upon its energy content?", one of his Annus Mirabilis papers. Here, Einstein used V to represent the speed of light in vacuum and L to represent the energy lost by a body in the form of radiation. Consequently, the equation E = mc2 was not originally written as a formula but as a sentence in German saying that "if a body gives off the energy L in the form of radiation, its mass diminishes by L/V2." A remark placed above it informed that the equation was approximated by neglecting "magnitudes of fourth and higher orders" of a series expansion. Einstein used a body emitting two light pulses in opposite directions, having energies of E0 before and E1 after the emission as seen in its rest frame. As seen from a moving frame, E0 becomes H0 and E1 becomes H1. Einstein obtained, in modern notation:
{\displaystyle \left(H_{0}-E_{0}\right)-\left(H_{1}-E_{1}\right)=E\left({\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}-1\right)}.
He then argued that H − E can only differ from the kinetic energy K by an additive constant, which gives
{\displaystyle K_{0}-K_{1}=E\left({\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}-1\right)}.
Neglecting effects higher than third order in v/c after a Taylor series expansion of the right side of this yields:
{\displaystyle K_{0}-K_{1}={\frac {E}{c^{2}}}{\frac {v^{2}}{2}}.}
Einstein concluded that the emission reduces the body's mass by E/c2, and that the mass of a body is a measure of its energy content.
The correctness of Einstein's 1905 derivation of E = mc2 was criticized by German theoretical physicist Max Planck in 1907, who argued that it is only valid to first approximation. Another criticism was formulated by American physicist Herbert Ives in 1952 and the Israeli physicist Max Jammer in 1961, asserting that Einstein's derivation is based on begging the question. Other scholars, such as American and Chilean philosophers John Stachel and Roberto Torretti, have argued that Ives' criticism was wrong, and that Einstein's derivation was correct. American physics writer Hans Ohanian, in 2008, agreed with Stachel/Torretti's criticism of Ives, though he argued that Einstein's derivation was wrong for other reasons.
==== Relativistic center-of-mass theorem of 1906 ====
Like Poincaré, Einstein concluded in 1906 that the inertia of electromagnetic energy is a necessary condition for the center-of-mass theorem to hold. On this occasion, Einstein referred to Poincaré's 1900 paper and wrote: "Although the merely formal considerations, which we will need for the proof, are already mostly contained in a work by H. Poincaré2, for the sake of clarity I will not rely on that work." In Einstein's more physical, as opposed to formal or mathematical, point of view, there was no need for fictitious masses. He could avoid the perpetual motion problem because, on the basis of the mass–energy equivalence, he could show that the transport of inertia that accompanies the emission and absorption of radiation solves the problem. Poincaré's rejection of the principle of action–reaction can be avoided through Einstein's E = mc2, because mass conservation appears as a special case of the energy conservation law.
==== Further developments ====
There were several further developments in the first decade of the twentieth century. In May 1907, Einstein explained that the expression for energy ε of a moving mass point assumes the simplest form when its expression for the state of rest is chosen to be ε0 = μV2 (where μ is the mass), which is in agreement with the "principle of the equivalence of mass and energy". In addition, Einstein used the formula μ = E0/V2, with E0 being the energy of a system of mass points, to describe the energy and mass increase of that system when the velocity of the differently moving mass points is increased. Max Planck rewrote Einstein's mass–energy relationship as M = (E0 + pV0)/c2 in June 1907, where p is the pressure and V0 the volume, to express the relation between mass, its latent energy, and thermodynamic energy within the body. Subsequently, in October 1907, this was rewritten as M0 = E0/c2 and given a quantum interpretation by German physicist Johannes Stark, who assumed its validity and correctness. In December 1907, Einstein expressed the equivalence in the form M = μ + E0/c2 and concluded: "A mass μ is equivalent, as regards inertia, to a quantity of energy μc2. […] It appears far more natural to consider every inertial mass as a store of energy." American physical chemists Gilbert N. Lewis and Richard C. Tolman used two variations of the formula in 1909: m = E/c2 and m0 = E0/c2, with E being the relativistic energy (the energy of an object when the object is moving), E0 the rest energy (the energy when not moving), m the relativistic mass (the rest mass plus the extra mass gained when moving), and m0 the rest mass. The same relations in different notation were used by Lorentz in 1913 and 1914, though he placed the energy on the left-hand side: ε = Mc2 and ε0 = mc2, with ε being the total energy (rest energy plus kinetic energy) of a moving material point, ε0 its rest energy, M the relativistic mass, and m the invariant mass.
In 1911, German physicist Max von Laue gave a more comprehensive proof of M0 = E0/c2 from the stress–energy tensor, which was later generalized by German mathematician Felix Klein in 1918.
Einstein returned to the topic once again after World War II and this time he wrote E = mc2 in the title of his article intended as an explanation for a general reader by analogy.
==== Alternative version ====
An alternative version of Einstein's thought experiment was proposed by American theoretical physicist Fritz Rohrlich in 1990, who based his reasoning on the Doppler effect. Like Einstein, he considered a body at rest with mass M. If the body is examined in a frame moving with nonrelativistic velocity v, it is no longer at rest and in the moving frame it has momentum P = Mv. Then he supposed the body emits two pulses of light to the left and to the right, each carrying an equal amount of energy E/2. In its rest frame, the object remains at rest after the emission since the two beams are equal in strength and carry opposite momentum. However, if the same process is considered in a frame that moves with velocity v to the left, the pulse moving to the left is redshifted, while the pulse moving to the right is blueshifted. The blue light carries more momentum than the red light, so that the momentum of the light in the moving frame is not balanced: the light is carrying some net momentum to the right. The object has not changed its velocity before or after the emission. Yet in this frame it has lost some right-momentum to the light. The only way it could have lost momentum is by losing mass. This also solves Poincaré's radiation paradox. The velocity is small, so the right-moving light is blueshifted by an amount equal to the nonrelativistic Doppler shift factor 1 + v/c. The momentum of the light is its energy divided by c, and it is increased by a factor of v/c. So the right-moving light is carrying an extra momentum ΔP given by:
{\displaystyle \Delta P={v \over c}{E \over 2c}.}
The left-moving light carries a little less momentum, by the same amount ΔP. So the total right-momentum in both light pulses is twice ΔP. This is the right-momentum that the object lost.
{\displaystyle 2\Delta P=v{E \over c^{2}}.}
The momentum of the object in the moving frame after the emission is reduced by this amount:
{\displaystyle P'=Mv-2\Delta P=\left(M-{E \over c^{2}}\right)v.}
So the change in the object's mass is equal to the total energy lost divided by c2. Since any emission of energy can be carried out by a two-step process, where first the energy is emitted as light and then the light is converted to some other form of energy, any emission of energy is accompanied by a loss of mass. Similarly, by considering absorption, a gain in energy is accompanied by a gain in mass.
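The bookkeeping above can be checked numerically; a small sketch with illustrative values (the particular M, E, and v are arbitrary choices, not from the source):

```python
# Momentum bookkeeping for Rohrlich's thought experiment: a body emits two
# light pulses of energy E/2 each; in a frame where the body moves at small v,
# the Doppler-shifted pulses carry net momentum 2*dP = v*E/c^2, so the
# body's mass must drop by E/c^2.
c = 299_792_458.0  # speed of light, m/s
M = 1.0            # kg, mass before emission (illustrative)
E = 9e13           # J, total energy emitted as light (illustrative)
v = 100.0          # m/s, frame velocity, nonrelativistic (illustrative)

dP = (v / c) * (E / (2 * c))  # extra momentum of the blueshifted pulse
momentum_lost = 2 * dP        # net momentum carried off by the light
mass_lost = momentum_lost / v # the body's velocity is unchanged, so mass drops
print(mass_lost, E / c**2)    # both equal E/c^2
```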
=== Radioactivity and nuclear energy ===
It was quickly noted after the discovery of radioactivity in 1897 that the total energy due to radioactive processes is about one million times greater than that involved in any known molecular change, raising the question of where the energy comes from. After eliminating the idea of absorption and emission of some sort of Lesagian ether particles, the existence of a huge amount of latent energy, stored within matter, was proposed by New Zealand physicist Ernest Rutherford and British radiochemist Frederick Soddy in 1903. Rutherford also suggested that this internal energy is stored within normal matter as well. He went on to speculate in 1904: "If it were ever found possible to control at will the rate of disintegration of the radio-elements, an enormous amount of energy could be obtained from a small quantity of matter."
Einstein's equation does not explain the large energies released in radioactive decay, but can be used to quantify them. The theoretical explanation for radioactive decay is given by the nuclear forces responsible for holding atoms together, though these forces were still unknown in 1905. The enormous energy released from radioactive decay had previously been measured by Rutherford and was much more easily measured than the small change in the gross mass of materials as a result. Einstein's equation, by theory, can give these energies by measuring mass differences before and after reactions, but in practice, these mass differences in 1905 were still too small to be measured in bulk. Prior to this, the ease of measuring radioactive decay energies with a calorimeter was thought likely to allow measurement of changes in mass difference, as a check on Einstein's equation itself. Einstein mentions in his 1905 paper that mass–energy equivalence might perhaps be tested with radioactive decay, which was known by then to release enough energy to possibly be "weighed," when missing from the system. However, radioactivity seemed to proceed at its own unalterable pace, and even when simple nuclear reactions became possible using proton bombardment, the idea that these great amounts of usable energy could be liberated at will with any practicality proved difficult to substantiate. Rutherford was reported in 1933 to have declared that this energy could not be exploited efficiently: "Anyone who expects a source of power from the transformation of the atom is talking moonshine."
This outlook changed dramatically in 1932 with the discovery of the neutron and its mass, allowing mass differences for single nuclides and their reactions to be calculated directly, and compared with the sum of masses for the particles that made up their composition. In 1933, the energy released from the reaction of lithium-7 plus protons giving rise to two alpha particles, allowed Einstein's equation to be tested to an error of ±0.5%. However, scientists still did not see such reactions as a practical source of power, due to the energy cost of accelerating reaction particles. After the very public demonstration of huge energies released from nuclear fission after the atomic bombings of Hiroshima and Nagasaki in 1945, the equation E = mc2 became directly linked in the public eye with the power and peril of nuclear weapons. The equation was featured on page 2 of the Smyth Report, the official 1945 release by the US government on the development of the atomic bomb, and by 1946 the equation was linked closely enough with Einstein's work that the cover of Time magazine prominently featured a picture of Einstein next to an image of a mushroom cloud emblazoned with the equation. Einstein himself had only a minor role in the Manhattan Project: he had cosigned a letter to the U.S. president in 1939 urging funding for research into atomic energy, warning that an atomic bomb was theoretically possible. The letter persuaded Roosevelt to devote a significant portion of the wartime budget to atomic research. Without a security clearance, Einstein's only scientific contribution was an analysis of an isotope separation method in theoretical terms. It was inconsequential, on account of Einstein not being given sufficient information to fully work on the problem.
While E = mc2 is useful for understanding the amount of energy potentially released in a fission reaction, it was not strictly necessary to develop the weapon, once the fission process was known, and its energy measured at 200 MeV (which was directly possible, using a quantitative Geiger counter, at that time). The physicist and Manhattan Project participant Robert Serber noted that somehow "the popular notion took hold long ago that Einstein's theory of relativity, in particular his equation E = mc2, plays some essential role in the theory of fission. Einstein had a part in alerting the United States government to the possibility of building an atomic bomb, but his theory of relativity is not required in discussing fission. The theory of fission is what physicists call a non-relativistic theory, meaning that relativistic effects are too small to affect the dynamics of the fission process significantly." There are other views on the equation's importance to nuclear reactions. In late 1938, the Austrian-Swedish and British physicists Lise Meitner and Otto Robert Frisch—while on a winter walk during which they solved the meaning of Hahn's experimental results and introduced the idea that would be called atomic fission—directly used Einstein's equation to help them understand the quantitative energetics of the reaction that overcame the "surface tension-like" forces that hold the nucleus together, and allowed the fission fragments to separate to a configuration from which their charges could force them into an energetic fission. To do this, they used packing fraction, or nuclear binding energy values for elements. These, together with use of E = mc2 allowed them to realize on the spot that the basic fission process was energetically possible.
=== Einstein's equation written ===
According to the Einstein Papers Project at the California Institute of Technology and the Hebrew University of Jerusalem, there remain only four known copies of this equation as written by Einstein. One of these is a letter written in German to Ludwik Silberstein, kept in Silberstein's archives, which sold at auction for $1.2 million on May 21, 2021, according to RR Auction of Boston, Massachusetts.
== External links ==
Einstein on the Inertia of Energy – MathPages
Einstein-on film explaining a mass energy equivalence
Mass and Energy – Conversations About Science with Theoretical Physicist Matt Strassler
The Equivalence of Mass and Energy – Entry in the Stanford Encyclopedia of Philosophy
Merrifield, Michael; Copeland, Ed; Bowley, Roger. "E=mc2 – Mass–Energy Equivalence". Sixty Symbols. Brady Haran for the University of Nottingham.
A set of equations describing the trajectories of objects subject to a constant gravitational force under normal Earth-bound conditions. Assuming constant acceleration g due to Earth's gravity, Newton's law of universal gravitation simplifies to F = mg, where F is the force exerted on a mass m by the Earth's gravitational field of strength g. Assuming constant g is reasonable for objects falling to Earth over the relatively short vertical distances of our everyday experience, but is not valid for greater distances involved in calculating more distant effects, such as spacecraft trajectories.
== History ==
Galileo was the first to demonstrate and then formulate these equations. He used a ramp to study rolling balls, the ramp slowing the acceleration enough to measure the time taken for the ball to roll a known distance. He measured elapsed time with a water clock, using an "extremely accurate balance" to measure the amount of water.
The equations ignore air resistance, which has a dramatic effect on objects falling an appreciable distance in air, causing them to quickly approach a terminal velocity. The effect of air resistance varies enormously depending on the size and geometry of the falling object—for example, the equations are hopelessly wrong for a feather, which has a low mass but offers a large resistance to the air. (In the absence of an atmosphere all objects fall at the same rate, as astronaut David Scott demonstrated by dropping a hammer and a feather on the surface of the Moon.)
The equations also ignore the rotation of the Earth, failing to describe the Coriolis effect for example. Nevertheless, they are usually accurate enough for dense and compact objects falling over heights not exceeding the tallest man-made structures.
== Overview ==
Near the surface of the Earth, the acceleration due to gravity is approximately g = 9.807 m/s2 (metres per second squared, which might be thought of as "metres per second, per second"; or 32.18 ft/s2, "feet per second per second"). A coherent set of units for g, d, t and v is essential. Assuming SI units, g is measured in metres per second squared, so d must be measured in metres, t in seconds and v in metres per second.
In all cases, the body is assumed to start from rest, and air resistance is neglected. Generally, in Earth's atmosphere, all results below will therefore be quite inaccurate after only 5 seconds of fall (at which time an object's velocity will be a little less than the vacuum value of 49 m/s (9.8 m/s2 × 5 s) due to air resistance). Air resistance induces a drag force on any body that falls through any atmosphere other than a perfect vacuum, and this drag force increases with velocity until it equals the gravitational force, leaving the object to fall at a constant terminal velocity.
Terminal velocity depends on atmospheric drag, the coefficient of drag for the object, the (instantaneous) velocity of the object, and the area presented to the airflow.
Apart from the last formula, these formulas also assume that g negligibly varies with height during the fall (that is, they assume constant acceleration). The last equation is more accurate where significant changes in fractional distance from the centre of the planet during the fall cause significant changes in g. This equation occurs in many applications of basic physics.
The following equations start from the general equations of linear motion:
{\displaystyle d(t)=d_{0}+v_{0}t+{1 \over 2}at^{2}}
{\displaystyle v(t)=v_{0}+at}
and the equation for universal gravitation (where r + d is the distance of the object from the planet's centre of mass, with r the planet's radius and d the object's height above the ground):
{\displaystyle F=G{{mM} \over {(r+d)^{2}}}=mg}
== Equations ==
== Example ==
The first equation shows that, after one second, an object will have fallen a distance of 1/2 × 9.8 × 1² = 4.9 m. After two seconds it will have fallen 1/2 × 9.8 × 2² = 19.6 m; and so on. On the other hand, the penultimate equation becomes grossly inaccurate at great distances. If an object fell 10 000 m to Earth, then the results of both equations differ by only 0.08%; however, if it fell from geosynchronous orbit, which is 42 164 km, then the difference changes to almost 64%.
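These worked figures follow directly from d = 1/2 gt²; a minimal sketch:

```python
# Distance fallen from rest under constant gravity, d = (1/2) g t^2,
# reproducing the figures above.
g = 9.8  # m/s^2
for t in (1, 2, 3):
    d = 0.5 * g * t**2
    print(t, "s:", d, "m")  # 4.9 m, 19.6 m, 44.1 m
```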
Based on wind resistance, for example, the terminal velocity of a skydiver in a belly-to-earth (i.e., face down) free-fall position is about 195 km/h (122 mph or 54 m/s). This velocity is the asymptotic limiting value of the acceleration process, because the effective forces on the body balance each other more and more closely as the terminal velocity is approached. In this example, a speed of 50 % of terminal velocity is reached after only about 3 seconds, while it takes 8 seconds to reach 90 %, 15 seconds to reach 99 % and so on.
Higher speeds can be attained if the skydiver pulls in his or her limbs (see also freeflying). In this case, the terminal velocity increases to about 320 km/h (200 mph or 90 m/s), which is almost the terminal velocity of the peregrine falcon diving down on its prey. The same terminal velocity is reached for a typical .30-06 bullet dropping downwards—when it is returning to earth having been fired upwards, or dropped from a tower—according to a 1920 U.S. Army Ordnance study.
For astronomical bodies other than Earth, and for short distances of fall at other than "ground" level, g in the above equations may be replaced by
{\displaystyle {\frac {G(M+m)}{r^{2}}}}
where G is the gravitational constant, M is the mass of the astronomical body, m is the mass of the falling body, and r is the radius from the falling object to the center of the astronomical body.
Removing the simplifying assumption of uniform gravitational acceleration provides more accurate results. We find from the formula for radial elliptic trajectories:
The time t taken for an object to fall from a height r to a height x, measured from the centers of the two bodies, is given by:
{\displaystyle t={\frac {{\frac {\pi }{2}}-\arcsin {\Big (}{\sqrt {\frac {x}{r}}}{\Big )}+{\sqrt {{\frac {x}{r}}\ (1-{\frac {x}{r}})}}}{\sqrt {2\mu }}}\,r^{3/2}}
where
{\displaystyle \mu =G(M+m)}
is the sum of the standard gravitational parameters of the two bodies. This equation should be used whenever there is a significant difference in the gravitational acceleration during the fall.
Note that when x = r this equation gives t = 0, as expected; and when x = 0 it gives {\displaystyle t={\frac {\pi }{2}}{\sqrt {\frac {r^{3}}{2\mu }}}}, which is the time to collision.
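The fall-time formula above is straightforward to evaluate numerically. The sketch below checks the two limiting cases just stated; the numeric value of μ for Earth is an assumed illustrative constant:

```python
import math

def fall_time(x, r, mu):
    """Time to fall from separation r to separation x along a radial
    elliptic trajectory, with mu = G(M + m), per the formula above."""
    u = x / r
    return (math.pi / 2 - math.asin(math.sqrt(u))
            + math.sqrt(u * (1 - u))) * r**1.5 / math.sqrt(2 * mu)

mu_earth = 3.986e14       # m^3/s^2, assumed value of G(M + m) for Earth
r = 42_164_000.0          # metres, the geosynchronous radius from the example

print(fall_time(r, r, mu_earth))    # 0.0: no fall has happened yet
print(fall_time(0.0, r, mu_earth))  # time to collision, (pi/2) sqrt(r^3/(2 mu))
```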
== Acceleration relative to the rotating Earth ==
Centripetal force causes the acceleration measured on the rotating surface of the Earth to differ from the acceleration that is measured for a free-falling body: the apparent acceleration in the rotating frame of reference is the total gravity vector minus a small vector toward the north–south axis of the Earth, corresponding to staying stationary in that frame of reference.
== See also ==
De motu antiquiora and Two New Sciences (the earliest modern investigations of the motion of falling bodies)
Equations of motion
Free fall
Gravity
Mean speed theorem, the foundation of the law of falling bodies
Radial trajectory
== Notes ==
== References ==
== External links ==
Falling body equations calculator | Wikipedia/Equations_for_a_falling_body |
In physics, Torricelli's equation, or Torricelli's formula, is an equation created by Evangelista Torricelli to find the final velocity of a moving object with constant acceleration along an axis (for example, the x axis) without having a known time interval.
The equation itself is:
{\displaystyle v_{f}^{2}=v_{i}^{2}+2a\Delta x\,}
where
{\displaystyle v_{f}} is the object's final velocity along the x axis, on which the acceleration is constant;
{\displaystyle v_{i}} is the object's initial velocity along the x axis;
{\displaystyle a} is the object's acceleration along the x axis, which is given as a constant;
{\displaystyle \Delta x\,} is the object's change in position along the x axis, also called displacement.
In this and all subsequent equations in this article, the subscript x (as in {\displaystyle {v_{f}}_{x}}) is implied but not written explicitly, for clarity in presenting the equations.
This equation is valid along any axis on which the acceleration is constant.
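A short numerical check, under the article's constant-acceleration assumption, that the relation holds for any elapsed time (the sample values of v_i, a, and t below are arbitrary):

```python
# Numerical sanity check of Torricelli's equation: for uniform acceleration,
# v_f^2 equals v_i^2 + 2*a*dx regardless of the (unknown) elapsed time.
def kinematics(v_i, a, t):
    """Return (displacement, final velocity) after time t under constant a."""
    dx = v_i * t + 0.5 * a * t**2
    v_f = v_i + a * t
    return dx, v_f

v_i, a = 3.0, 2.0
for t in (0.5, 1.0, 4.0):
    dx, v_f = kinematics(v_i, a, t)
    # Torricelli: v_f^2 - (v_i^2 + 2*a*dx) vanishes for every t
    print(v_f**2 - (v_i**2 + 2 * a * dx))  # 0.0
```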
== Derivation ==
=== Without differentials and integration ===
Begin with the following relations for the case of uniform acceleration:
{\displaystyle \Delta x={\tfrac {1}{2}}(v_{i}+v_{f})\,t\qquad (1)}
{\displaystyle v_{f}=v_{i}+at\qquad (2)}
Take (1), and multiply both sides with acceleration {\textstyle a}:
{\displaystyle a\,\Delta x={\tfrac {1}{2}}(v_{i}+v_{f})\,at}
Use (2) to substitute the product {\textstyle at}:
{\displaystyle a\,\Delta x={\tfrac {1}{2}}(v_{i}+v_{f})(v_{f}-v_{i})}
Work out the multiplications: the cross terms {\textstyle v_{i}v_{f}} drop away against each other, leaving only squared terms:
{\displaystyle a\,\Delta x={\tfrac {1}{2}}v_{f}^{2}-{\tfrac {1}{2}}v_{i}^{2}\qquad (7)}
(7) rearranges to the form of Torricelli's equation as presented at the start of the article:
{\displaystyle v_{f}^{2}=v_{i}^{2}+2a\,\Delta x}
=== Using differentials and integration ===
Begin with the definitions of velocity as the derivative of the position, and acceleration as the derivative of the velocity:
{\displaystyle v={\frac {dx}{dt}}\qquad (9)}
{\displaystyle a={\frac {dv}{dt}}\qquad (10)}
Set up the integral of the acceleration with respect to position, from initial position {\textstyle x_{i}} to final position {\textstyle x_{f}}:
{\displaystyle \int _{x_{i}}^{x_{f}}a\,dx}
In accordance with (9) we can substitute {\textstyle dx} with {\textstyle v\,dt}, with corresponding change of limits:
{\displaystyle \int _{t_{i}}^{t_{f}}a\,v\,dt}
Here changing the order of {\textstyle a} and {\textstyle v} makes it easier to recognize the upcoming substitution:
{\displaystyle \int _{t_{i}}^{t_{f}}v\,a\,dt}
In accordance with (10) we can substitute {\textstyle a\,dt} with {\textstyle dv}, with corresponding change of limits. So we have:
{\displaystyle \int _{x_{i}}^{x_{f}}a\,dx=\int _{v_{i}}^{v_{f}}v\,dv\qquad (16)}
Since the acceleration is constant, we can factor it out of the integration:
{\displaystyle a\int _{x_{i}}^{x_{f}}dx=\int _{v_{i}}^{v_{f}}v\,dv}
Evaluating the integration:
{\displaystyle a\,(x_{f}-x_{i})={\tfrac {1}{2}}v_{f}^{2}-{\tfrac {1}{2}}v_{i}^{2}}
The factor {\textstyle x_{f}-x_{i}} is the displacement {\textstyle \Delta x}:
{\displaystyle a\,\Delta x={\tfrac {1}{2}}v_{f}^{2}-{\tfrac {1}{2}}v_{i}^{2}}
== Application ==
Combining Torricelli's equation with {\textstyle F=ma} gives the work-energy theorem.
Torricelli's equation and the generalization to non-uniform acceleration have the same form. Repeat of (16):
{\displaystyle \int _{x_{i}}^{x_{f}}a\,dx=\int _{v_{i}}^{v_{f}}v\,dv}
Evaluating the right hand side:
{\displaystyle \int _{x_{i}}^{x_{f}}a\,dx={\tfrac {1}{2}}v_{f}^{2}-{\tfrac {1}{2}}v_{i}^{2}\qquad (22)}
To compare with Torricelli's equation, repeat of (7):
{\displaystyle a\,\Delta x={\tfrac {1}{2}}v_{f}^{2}-{\tfrac {1}{2}}v_{i}^{2}}
To derive the work-energy theorem: start with {\textstyle F=ma} and on both sides state the integral with respect to the position coordinate. If both sides are integrable then the resulting expression is valid:
{\displaystyle \int _{x_{i}}^{x_{f}}F\,dx=m\int _{x_{i}}^{x_{f}}a\,dx}
Use (22) to process the right hand side:
{\displaystyle \int _{x_{i}}^{x_{f}}F\,dx={\tfrac {1}{2}}mv_{f}^{2}-{\tfrac {1}{2}}mv_{i}^{2}}
The reason that the right hand sides of (22) and (23) are the same:
First consider the case with two consecutive stages of different uniform acceleration: first from {\textstyle s_{0}} to {\textstyle s_{1}}, and then from {\textstyle s_{1}} to {\textstyle s_{2}}.
Expressions for each of the two stages:
{\displaystyle a_{1}(s_{1}-s_{0})={\tfrac {1}{2}}v_{1}^{2}-{\tfrac {1}{2}}v_{0}^{2}}
{\displaystyle a_{2}(s_{2}-s_{1})={\tfrac {1}{2}}v_{2}^{2}-{\tfrac {1}{2}}v_{1}^{2}}
Since these expressions are for consecutive intervals they can be added; the result is a valid expression.
Upon addition the intermediate term {\textstyle {\tfrac {1}{2}}v_{1}^{2}} drops out; only the outer terms {\textstyle {\tfrac {1}{2}}v_{2}^{2}} and {\textstyle {\tfrac {1}{2}}v_{0}^{2}} remain:
{\displaystyle a_{1}(s_{1}-s_{0})+a_{2}(s_{2}-s_{1})={\tfrac {1}{2}}v_{2}^{2}-{\tfrac {1}{2}}v_{0}^{2}}
The above result generalizes: the total distance can be subdivided into any number of subdivisions; after adding everything together only the outer terms remain; all of the intermediate terms drop out.
The generalization of (26) to an arbitrary number of subdivisions of the total interval from {\textstyle s_{0}} to {\textstyle s_{n}} can be expressed as a summation:
{\displaystyle \sum _{k=1}^{n}a_{k}(s_{k}-s_{k-1})={\tfrac {1}{2}}v_{n}^{2}-{\tfrac {1}{2}}v_{0}^{2}}
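The telescoping argument can be checked numerically: apply Torricelli's equation within each stage of piecewise-constant acceleration and confirm that the a·Δs contributions sum to the change in ½v² (the stage values below are arbitrary):

```python
# Check of the telescoping sum: split a trip into stages of different
# constant acceleration and verify sum_k a_k*(s_k - s_{k-1}) = v_n^2/2 - v_0^2/2.
def run_stages(v0, stages):
    """stages: list of (acceleration, stage_length) pairs.
    Returns (sum of a*ds terms, final velocity)."""
    v, total = v0, 0.0
    for a, ds in stages:
        total += a * ds
        v = (v**2 + 2 * a * ds) ** 0.5   # Torricelli within each stage
    return total, v

v0 = 1.0
total, vn = run_stages(v0, [(2.0, 3.0), (-0.5, 1.0), (4.0, 2.0)])
print(total - (0.5 * vn**2 - 0.5 * v0**2))  # ~0.0
```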
== See also ==
Equation of motion
== References ==
== External links ==
Torricelli's theorem | Wikipedia/Torricelli_equation |
In mathematics, the Korteweg–De Vries (KdV) equation is a partial differential equation (PDE) which serves as a mathematical model of waves on shallow water surfaces. It is particularly notable as the prototypical example of an integrable PDE, exhibiting typical behaviors such as a large number of explicit solutions, in particular soliton solutions, and an infinite number of conserved quantities, despite the nonlinearity which typically renders PDEs intractable. The KdV can be solved by the inverse scattering method (ISM). In fact, Clifford Gardner, John M. Greene, Martin Kruskal and Robert Miura developed the classical inverse scattering method to solve the KdV equation.
The KdV equation was first introduced by Joseph Valentin Boussinesq (1877, footnote on page 360) and rediscovered by Diederik Korteweg and Gustav de Vries in 1895, who found the simplest solution, the one-soliton solution. Understanding of the equation and behavior of solutions was greatly advanced by the computer simulations of Norman Zabusky and Kruskal in 1965 and then the development of the inverse scattering transform in 1967.
In 1972, T. Kawahara proposed a fifth-order KdV-type equation, known as the Kawahara equation, which describes dispersive waves in cases where the coefficient of the third-derivative dispersion term of the KdV equation becomes very small or zero.
== Definition ==
The KdV equation is a partial differential equation that models (spatially) one-dimensional nonlinear dispersive nondissipative waves described by a function {\displaystyle \phi (x,t)} adhering to:
{\displaystyle \partial _{t}\phi +\partial _{x}^{3}\phi -6\,\phi \,\partial _{x}\phi =0\,\quad x\in \mathbb {R} ,\;t\geq 0,}
where {\displaystyle \partial _{x}^{3}\phi } accounts for dispersion and the nonlinear element {\displaystyle \phi \partial _{x}\phi } is an advection term.
For modelling shallow water waves, {\displaystyle \phi } is the height displacement of the water surface from its equilibrium height.
The constant 6 in front of the last term is conventional but of no great significance: multiplying {\displaystyle t}, {\displaystyle x}, and {\displaystyle \phi } by constants can be used to make the coefficients of any of the three terms equal to any given non-zero constants.
== Soliton solutions ==
=== One-soliton solution ===
Consider solutions in which a fixed waveform, given by {\displaystyle f(X)}, maintains its shape as it travels to the right at phase speed {\displaystyle c}. Such a solution is given by {\displaystyle \varphi (x,t)=f(x-ct-a)=f(X)}. Substituting it into the KdV equation gives the ordinary differential equation
{\displaystyle -c{\frac {df}{dX}}+{\frac {d^{3}f}{dX^{3}}}-6f{\frac {df}{dX}}=0,}
or, integrating with respect to {\displaystyle X},
{\displaystyle -cf+{\frac {d^{2}f}{dX^{2}}}-3f^{2}=A}
where {\displaystyle A} is a constant of integration. Interpreting the independent variable {\displaystyle X} above as a virtual time variable, this means {\displaystyle f} satisfies Newton's equation of motion for a particle of unit mass in a cubic potential
{\displaystyle V(f)=-\left(f^{3}+{\frac {1}{2}}cf^{2}+Af\right)}.
If {\displaystyle A=0,\,c>0} then the potential function {\displaystyle V(f)} has a local maximum at {\displaystyle f=0}; there is a solution in which {\displaystyle f(X)} starts at this point at 'virtual time' {\displaystyle -\infty }, eventually slides down to the local minimum, then back up the other side, reaching an equal height, and then reverses direction, ending up at the local maximum again at time {\displaystyle \infty }. In other words, {\displaystyle f(X)} approaches {\displaystyle 0} as {\displaystyle X\to \pm \infty }. This is the characteristic shape of the solitary wave solution.
More precisely, the solution is
{\displaystyle \phi (x,t)=-{\frac {1}{2}}\,c\,\operatorname {sech} ^{2}\left[{{\sqrt {c}} \over 2}(x-c\,t-a)\right]}
where {\displaystyle \operatorname {sech} } stands for the hyperbolic secant and {\displaystyle a} is an arbitrary constant. This describes a right-moving soliton with velocity {\displaystyle c}.
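As a sanity check, the sech² profile can be substituted into the KdV equation numerically. A minimal sketch using central finite differences; the choices c = 1 and the step size h are assumptions of the sketch, and the printed residual is only finite-difference error:

```python
import math

def phi(x, t, c=1.0, a=0.0):
    """One-soliton solution phi = -(c/2) sech^2(sqrt(c)/2 (x - c t - a))."""
    z = math.sqrt(c) / 2 * (x - c * t - a)
    return -0.5 * c / math.cosh(z) ** 2

def kdv_residual(x, t, h=1e-3):
    """partial_t phi + partial_x^3 phi - 6 phi partial_x phi, by central differences."""
    pt = (phi(x, t + h) - phi(x, t - h)) / (2 * h)
    px = (phi(x + h, t) - phi(x - h, t)) / (2 * h)
    pxxx = (phi(x + 2 * h, t) - 2 * phi(x + h, t)
            + 2 * phi(x - h, t) - phi(x - 2 * h, t)) / (2 * h**3)
    return pt + pxxx - 6 * phi(x, t) * px

# Residual is small at every sample point (finite-difference error only)
print(max(abs(kdv_residual(x, 0.3)) for x in (-1.0, 0.0, 0.7, 2.0)))
```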
=== N-soliton solution ===
There is a known expression for a solution which is an {\displaystyle N}-soliton solution, which at late times resolves into {\displaystyle N} separate single solitons. The solution depends on a set of decreasing positive parameters {\displaystyle \chi _{1}>\cdots >\chi _{N}>0} and a set of non-zero parameters {\displaystyle \beta _{1},\cdots ,\beta _{N}}. The solution is given in the form
{\displaystyle \phi (x,t)=-2{\frac {\partial ^{2}}{\partial x^{2}}}\mathrm {log} [\mathrm {det} A(x,t)]}
where the components of the matrix {\displaystyle A(x,t)} are
{\displaystyle A_{nm}(x,t)=\delta _{nm}+{\frac {\beta _{n}e^{8\chi _{n}^{3}t}e^{-(\chi _{n}+\chi _{m})x}}{\chi _{n}+\chi _{m}}}.}
This is derived using the inverse scattering method.
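For N = 1 the determinant formula should reproduce the sech² soliton of the previous section, with speed c = 4χ². A small numerical sketch; the choices χ = 1 and β = 2 (so that the soliton is centred at the origin at t = 0) are assumptions of the sketch:

```python
import math

def log_det_A(x, t, chi=1.0, beta=2.0):
    """N = 1 case: det A(x,t) = 1 + beta*exp(8 chi^3 t - 2 chi x)/(2 chi)."""
    return math.log(1.0 + beta * math.exp(8 * chi**3 * t - 2 * chi * x) / (2 * chi))

def phi_det(x, t, h=1e-4):
    """phi = -2 d^2/dx^2 log det A, second derivative by central differences."""
    return -2 * (log_det_A(x + h, t) - 2 * log_det_A(x, t)
                 + log_det_A(x - h, t)) / h**2

def phi_sech(x, t, c=4.0):
    """One-soliton sech^2 form with speed c = 4 chi^2 and centre a = 0."""
    return -0.5 * c / math.cosh(math.sqrt(c) / 2 * (x - c * t)) ** 2

# Agreement up to finite-difference error
print(max(abs(phi_det(x, 0.1) - phi_sech(x, 0.1)) for x in (-1.0, 0.4, 1.5)))
```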
== Integrals of motion ==
The KdV equation has infinitely many integrals of motion: functionals of a solution {\displaystyle \phi (t)} which do not change with time. They can be given explicitly as
{\displaystyle \int _{-\infty }^{+\infty }P_{2n-1}(\phi ,\,\partial _{x}\phi ,\,\partial _{x}^{2}\phi ,\,\ldots )\,{\text{d}}x\,}
where the polynomials {\displaystyle P_{n}} are defined recursively by
{\displaystyle {\begin{aligned}P_{1}&=\phi ,\\P_{n}&=-{\frac {dP_{n-1}}{dx}}+\sum _{i=1}^{n-2}\,P_{i}\,P_{n-1-i}\quad {\text{ for }}n\geq 2.\end{aligned}}}
The first few integrals of motion are:
the mass {\displaystyle \int \phi \,\mathrm {d} x,}
the momentum {\displaystyle \int \phi ^{2}\,\mathrm {d} x,}
the energy {\displaystyle \int \left[2\phi ^{3}-\left(\partial _{x}\phi \right)^{2}\right]\,\mathrm {d} x}.
Only the odd-numbered terms {\displaystyle P_{2n+1}} result in non-trivial (meaning non-zero) integrals of motion.
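The first two conserved quantities can be checked numerically on the one-soliton solution; since the profile merely translates, the integrals are manifestly time-independent. The window, grid size, and c = 1 below are assumptions of the sketch:

```python
import math

def phi(x, t, c=1.0):
    """One-soliton KdV solution, phi = -(c/2) sech^2(sqrt(c)/2 (x - c t))."""
    z = math.sqrt(c) / 2 * (x - c * t)
    return -0.5 * c / math.cosh(z) ** 2

def integral(f, t, lo=-40.0, hi=40.0, n=4000):
    """Trapezoidal approximation of the x-integral of f(phi(x, t))."""
    hstep = (hi - lo) / n
    total = 0.5 * (f(phi(lo, t)) + f(phi(hi, t)))
    total += sum(f(phi(lo + k * hstep, t)) for k in range(1, n))
    return total * hstep

mass_0 = integral(lambda p: p, 0.0)       # exact value is -2*sqrt(c) = -2
mass_2 = integral(lambda p: p, 2.0)
mom_0 = integral(lambda p: p * p, 0.0)    # exact value is (2/3)*c**1.5 = 2/3
mom_2 = integral(lambda p: p * p, 2.0)
print(abs(mass_0 - mass_2), abs(mom_0 - mom_2))  # both ~0: conserved
```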
== Lax pairs ==
The KdV equation
{\displaystyle \partial _{t}\phi =6\,\phi \,\partial _{x}\phi -\partial _{x}^{3}\phi }
can be reformulated as the Lax equation
{\displaystyle L_{t}=[L,A]\equiv LA-AL\,}
with {\displaystyle L} a Sturm–Liouville operator:
{\displaystyle {\begin{aligned}L&=-\partial _{x}^{2}+\phi ,\\A&=4\partial _{x}^{3}-6\phi \,\partial _{x}-3[\partial _{x},\phi ]\end{aligned}}}
where {\displaystyle [\partial _{x},\phi ]} is the commutator, acting as {\displaystyle [\partial _{x},\phi ]f=f\partial _{x}\phi }. The Lax pair accounts for the infinite number of first integrals of the KdV equation.
In fact, {\displaystyle L} is the time-independent Schrödinger operator (disregarding constants) with potential {\displaystyle \phi (x,t)}. It can be shown from this Lax formulation that the eigenvalues of {\displaystyle L} do not depend on {\displaystyle t}.
=== Zero-curvature representation ===
Setting the components of the Lax connection to be
{\displaystyle L_{x}={\begin{pmatrix}0&1\\\phi -\lambda &0\end{pmatrix}},\quad L_{t}={\begin{pmatrix}-\phi _{x}&2\phi +4\lambda \\2\phi ^{2}-\phi _{xx}+2\phi \lambda -4\lambda ^{2}&\phi _{x}\end{pmatrix}},}
the KdV equation is equivalent to the zero-curvature equation for the Lax connection,
∂
t
L
x
−
∂
x
L
t
+
[
L
x
,
L
t
]
=
0.
{\displaystyle \partial _{t}L_{x}-\partial _{x}L_{t}+[L_{x},L_{t}]=0.}
== Least action principle ==
The Korteweg–De Vries equation
{\displaystyle \partial _{t}\phi +6\phi \,\partial _{x}\phi +\partial _{x}^{3}\phi =0,}
is the Euler–Lagrange equation of motion derived from the Lagrangian density
{\displaystyle {\mathcal {L}}:={\frac {1}{2}}\partial _{x}\psi \,\partial _{t}\psi +\left(\partial _{x}\psi \right)^{3}-{\frac {1}{2}}\left(\partial _{x}^{2}\psi \right)^{2}}
with {\displaystyle \phi } defined by
{\displaystyle \phi :={\frac {\partial \psi }{\partial x}}.}
== Long-time asymptotics ==
It can be shown that any sufficiently fast decaying smooth solution will eventually split into a finite superposition of solitons travelling to the right plus a decaying dispersive part travelling to the left. This was first observed by Zabusky & Kruskal (1965) and can be rigorously proven using the nonlinear steepest descent analysis for oscillatory Riemann–Hilbert problems.
== History ==
The history of the KdV equation started with experiments by John Scott Russell in 1834, followed by theoretical investigations by Lord Rayleigh and Joseph Boussinesq around 1870 and, finally, Korteweg and De Vries in 1895.
The KdV equation was not studied much after this until Zabusky & Kruskal (1965) discovered numerically that its solutions seemed to decompose at large times into a collection of "solitons": well separated solitary waves. Moreover, the solitons seemed to be almost unaffected in shape by passing through each other (though passing could cause a change in their position). They also made the connection to earlier numerical experiments by Fermi, Pasta, Ulam, and Tsingou by showing that the KdV equation was the continuum limit of the FPUT system. Development of the analytic solution by means of the inverse scattering transform was done in 1967 by Gardner, Greene, Kruskal and Miura.
The KdV equation is now seen to be closely connected to Huygens' principle.
== Applications and connections ==
The KdV equation has several connections to physical problems. In addition to being the governing equation of the string in the Fermi–Pasta–Ulam–Tsingou problem in the continuum limit, it approximately describes the evolution of long, one-dimensional waves in many physical settings, including:
shallow-water waves with weakly non-linear restoring forces,
long internal waves in a density-stratified ocean,
ion acoustic waves in a plasma,
acoustic waves on a crystal lattice.
The KdV equation can also be solved using the inverse scattering transform such as those applied to the non-linear Schrödinger equation.
=== KdV equation and the Gross–Pitaevskii equation ===
Considering the simplified solutions of the form
{\displaystyle \phi (x,t)=\phi (x\pm t)}
we obtain the KdV equation as
{\displaystyle \pm \partial _{x}\phi +\partial _{x}^{3}\phi +6\,\phi \,\partial _{x}\phi =0\,}
or
{\displaystyle \pm \partial _{x}\phi +\partial _{x}(\partial _{x}^{2}\phi +3\phi ^{2})=0\,}
Integrating and taking the special case in which the integration constant is zero, we have:
{\displaystyle -\partial _{x}^{2}\phi -3\phi ^{2}=\pm \phi \,}
which is the {\displaystyle \lambda =1} special case of the generalized stationary Gross–Pitaevskii equation (GPE)
{\displaystyle -\partial _{x}^{2}\phi -3\phi ^{\lambda }\phi =\pm \phi \,}
Therefore, for a certain class of solutions of the generalized GPE ({\displaystyle \lambda =4} for the true one-dimensional condensate and {\displaystyle \lambda =2} while using the three-dimensional equation in one dimension), the two equations are one. Furthermore, taking the {\displaystyle \lambda =3} case with the minus sign and {\displaystyle \phi } real, one obtains an attractive self-interaction that should yield a bright soliton.
== Variations ==
Many different variations of the KdV equations have been studied. Some are listed in the following table.
== See also ==
== Notes ==
== References ==
Berest, Yuri Y.; Loutsenko, Igor M. (1997). "Huygens' Principle in Minkowski Spaces and Soliton Solutions of the Korteweg-de Vries Equation". Communications in Mathematical Physics. 190 (1): 113–132. arXiv:solv-int/9704012. doi:10.1007/s002200050235. ISSN 0010-3616.
Boussinesq, J. (1877), Essai sur la théorie des eaux courantes, Mémoires présentés par divers savants à l'Acad. des Sci. Inst. Nat. France, XXIII, pp. 1–680
Chalub, Fabio A.C.C.; Zubelli, Jorge P. (2006). "Huygens' principle for hyperbolic operators and integrable hierarchies" (PDF). Physica D: Nonlinear Phenomena. 213 (2): 231–245. doi:10.1016/j.physd.2005.11.008.
Darrigol, Olivier (2005). Worlds of Flow. Oxford; New York: Oxford University Press. ISBN 978-0-19-856843-8.
Dauxois, Thierry; Peyrard, Michel (2006). Physics of Solitons. Cambridge, UK; New York: Cambridge University Press. ISBN 0-521-85421-0. OCLC 61757137.
Dingemans, M. W. (1997). Water Wave Propagation Over Uneven Bottoms. River Edge, NJ: World Scientific. ISBN 981-02-0427-2.
Dunajski, Maciej (2009). Solitons, Instantons, and Twistors. Oxford; New York: OUP Oxford. ISBN 978-0-19-857063-9. OCLC 320199531.
Gardner, Clifford S.; Greene, John M.; Kruskal, Martin D.; Miura, Robert M. (1967). "Method for Solving the Korteweg-deVries Equation". Physical Review Letters. 19 (19): 1095–1097. doi:10.1103/PhysRevLett.19.1095. ISSN 0031-9007.
Grunert, Katrin; Teschl, Gerald (2009), "Long-Time Asymptotics for the Korteweg–De Vries Equation via Nonlinear Steepest Descent", Math. Phys. Anal. Geom., vol. 12, no. 3, pp. 287–324, arXiv:0807.5041, Bibcode:2009MPAG...12..287G, doi:10.1007/s11040-009-9062-2, S2CID 8740754
Korteweg, D. J.; de Vries, G. (1895). "XLI. On the change of form of long waves advancing in a rectangular canal, and on a new type of long stationary waves". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 39 (240): 422–443. doi:10.1080/14786449508620739. ISSN 1941-5982.
Lax, Peter D. (1968). "Integrals of nonlinear equations of evolution and solitary waves". Communications on Pure and Applied Mathematics. 21 (5): 467–490. doi:10.1002/cpa.3160210503. ISSN 0010-3640. OSTI 4522657.
Miura, Robert M.; Gardner, Clifford S.; Kruskal, Martin D. (1968), "Korteweg–De Vries equation and generalizations. II. Existence of conservation laws and constants of motion", J. Math. Phys., 9 (8): 1204–1209, Bibcode:1968JMP.....9.1204M, doi:10.1063/1.1664701, MR 0252826
Polyanin, Andrei D.; Zaitsev, Valentin F. (2003). Handbook of Nonlinear Partial Differential Equations. Boca Raton, Fla: Chapman and Hall/CRC. ISBN 978-1-58488-355-5.
Vakakis, Alexander F. (2002). Normal Modes and Localization in Nonlinear Systems. Dordrecht; Boston: Springer Science & Business Media. ISBN 978-0-7923-7010-9.
Zabusky, N. J.; Kruskal, M. D. (1965). "Interaction of "Solitons" in a Collisionless Plasma and the Recurrence of Initial States". Physical Review Letters. 15 (6): 240–243. doi:10.1103/PhysRevLett.15.240. ISSN 0031-9007.
== External links ==
Korteweg–De Vries equation at EqWorld: The World of Mathematical Equations.
Korteweg–De Vries equation at NEQwiki, the nonlinear equations encyclopedia.
Cylindrical Korteweg–De Vries equation at EqWorld: The World of Mathematical Equations.
Modified Korteweg–De Vries equation at EqWorld: The World of Mathematical Equations.
Modified Korteweg–De Vries equation at NEQwiki, the nonlinear equations encyclopedia.
Weisstein, Eric W. "Korteweg–deVries Equation". MathWorld.
Derivation of the Korteweg–De Vries equation for a narrow canal.
Three Solitons Solution of KdV Equation – [1]
Three Solitons (unstable) Solution of KdV Equation – [2]
Mathematical aspects of equations of Korteweg–De Vries type are discussed on the Dispersive PDE Wiki.
Solitons from the Korteweg–De Vries Equation by S. M. Blinder, The Wolfram Demonstrations Project.
Solitons & Nonlinear Wave Equations | Wikipedia/Korteweg–de_Vries_equation |
Two New Sciences! is the second album by Fire Flies, released on May 29, 2007.
== Track listing ==
"Mechanical Love" - 3:11
"Call Me Your Darkness" - 3:13
"They've All Forgotten You" - 3:33
"It's a Party!" - 3:15
"She Sings in Tune" - 3:32
"Worst Man I Can Be" - 3:41
"We're Alive" - 3:53
"Closer to the End" - 3:14
"Rapid Eye Radar" - 3:49
"STOP THE CAR!!!" - 4:03
"The Receiver" - 5:45
"Give Me Time" - 6:18
== Personnel ==
Dan Romer – vocals, synthesizers, keyboards, acoustic guitar
Wil Farr – lead guitar
Matt Krahula – bass guitar
Seth Faulk – drums
Flow plasticity is a solid mechanics theory that is used to describe the plastic behavior of materials. Flow plasticity theories are characterized by the assumption that a flow rule exists that can be used to determine the amount of plastic deformation in the material.
In flow plasticity theories it is assumed that the total strain in a body can be decomposed additively (or multiplicatively) into an elastic part and a plastic part. The elastic part of the strain can be computed from a linear elastic or hyperelastic constitutive model. However, determination of the plastic part of the strain requires a flow rule and a hardening model.
== Small deformation theory ==
Typical flow plasticity theories for unidirectional loading (for small deformation perfect plasticity or hardening plasticity) are developed on the basis of the following requirements:
The material has a linear elastic range.
The material has an elastic limit, defined as the stress at which plastic deformation first takes place, i.e., {\displaystyle \sigma =\sigma _{0}}.
Beyond the elastic limit the stress state always remains on the yield surface, i.e., {\displaystyle \sigma =\sigma _{y}}.
Loading is defined as the situation under which increments of stress are greater than zero, i.e., {\displaystyle d\sigma >0}. If loading takes the stress state to the plastic domain then the increment of plastic strain is always greater than zero, i.e., {\displaystyle d\varepsilon _{p}>0}.
Unloading is defined as the situation under which increments of stress are less than zero, i.e., {\displaystyle d\sigma <0}. The material is elastic during unloading and no additional plastic strain is accumulated.
The total strain is a linear combination of the elastic and plastic parts, i.e., {\displaystyle d\varepsilon =d\varepsilon _{e}+d\varepsilon _{p}}. The plastic part cannot be recovered while the elastic part is fully recoverable.
The work done over a loading-unloading cycle is positive or zero, i.e., {\displaystyle d\sigma \,d\varepsilon =d\sigma \,(d\varepsilon _{e}+d\varepsilon _{p})\geq 0}. This is also called the Drucker stability postulate and eliminates the possibility of strain-softening behavior.
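The unidirectional requirements above can be illustrated with a minimal strain-driven update for a one-dimensional elastic-perfectly-plastic material; the modulus and yield stress below are hypothetical numbers, not properties of any real material:

```python
# Minimal 1D elastic-perfectly-plastic model illustrating the requirements
# above (hypothetical material constants; strain-driven return mapping).
E = 200.0      # elastic modulus (assumed units)
SIGMA_0 = 1.0  # elastic limit / yield stress

def update(sigma, eps_p, d_eps):
    """One strain increment d_eps: elastic trial step, then projection
    back onto the yield surface |sigma| = SIGMA_0 if it is exceeded."""
    trial = sigma + E * d_eps
    if abs(trial) <= SIGMA_0:          # elastic step (loading or unloading)
        return trial, eps_p
    sigma_new = SIGMA_0 if trial > 0 else -SIGMA_0
    eps_p += (trial - sigma_new) / E   # excess strain becomes plastic
    return sigma_new, eps_p

sigma, eps_p = 0.0, 0.0
for d in [0.004, 0.004, -0.003]:       # load past yield, then unload
    sigma, eps_p = update(sigma, eps_p, d)
    print(sigma, eps_p)
```

The trace shows the three requirements in action: an elastic step, a plastic step pinned to the yield surface with plastic strain accumulating, and an elastic unloading step that leaves the plastic strain unchanged.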
The above requirements can be expressed in three dimensional states of stress and multidirectional loading as follows.
Elasticity (Hooke's law). In the linear elastic regime the stresses and strains in the material are related by
{\displaystyle {\boldsymbol {\sigma }}={\mathsf {D}}:{\boldsymbol {\varepsilon }}}
where the stiffness matrix {\displaystyle {\mathsf {D}}} is constant.
Elastic limit (yield surface). The elastic limit is defined by a yield surface that does not depend on the plastic strain and has the form
{\displaystyle f({\boldsymbol {\sigma }})=0\,.}
Beyond the elastic limit. For strain-hardening materials, the yield surface evolves with increasing plastic strain and the elastic limit changes. The evolving yield surface has the form
{\displaystyle f({\boldsymbol {\sigma }},{\boldsymbol {\varepsilon }}_{p})=0\,.}
Loading. For general states of stress, plastic loading is indicated if the state of stress is on the yield surface and the stress increment is directed toward the outside of the yield surface; this occurs if the inner product of the stress increment and the outward normal of the yield surface is positive, i.e.,
{\displaystyle d{\boldsymbol {\sigma }}:{\frac {\partial f}{\partial {\boldsymbol {\sigma }}}}\geq 0\,.}
The above equation, when it is equal to zero, indicates a state of neutral loading where the stress state moves along the yield surface.
Unloading. A similar argument is made for unloading, for which situation {\displaystyle f<0}, the material is in the elastic domain, and
{\displaystyle d{\boldsymbol {\sigma }}:{\frac {\partial f}{\partial {\boldsymbol {\sigma }}}}<0\,.}
Strain decomposition. The additive decomposition of the strain into elastic and plastic parts can be written as
{\displaystyle d{\boldsymbol {\varepsilon }}=d{\boldsymbol {\varepsilon }}_{e}+d{\boldsymbol {\varepsilon }}_{p}\,.}
Stability postulate. The stability postulate is expressed as
{\displaystyle d{\boldsymbol {\sigma }}:d{\boldsymbol {\varepsilon }}\geq 0\,.}
=== Flow rule ===
In metal plasticity, the assumption that the plastic strain increment and deviatoric stress tensor have the same principal directions is encapsulated in a relation called the flow rule. Rock plasticity theories also use a similar concept, except that the pressure-dependence of the yield surface requires a relaxation of the above assumption. Instead, it is typically assumed that the plastic strain increment and the normal to the pressure-dependent yield surface have the same direction, i.e.,
{\displaystyle d{\boldsymbol {\varepsilon }}_{p}=d\lambda \,{\frac {\partial f}{\partial {\boldsymbol {\sigma }}}}}
where {\displaystyle d\lambda >0} is a hardening parameter. This form of the flow rule is called an associated flow rule and the assumption of co-directionality is called the normality condition. The function {\displaystyle f} is also called a plastic potential.
The above flow rule is easily justified for perfectly plastic deformations, for which {\displaystyle d{\boldsymbol {\sigma }}=0} when {\displaystyle d{\boldsymbol {\varepsilon }}_{p}>0}, i.e., the yield surface remains constant under increasing plastic deformation. This implies that the increment of elastic strain is also zero, {\displaystyle d{\boldsymbol {\varepsilon }}_{e}=0}, because of Hooke's law. Therefore,
{\displaystyle d{\boldsymbol {\sigma }}:{\frac {\partial f}{\partial {\boldsymbol {\sigma }}}}=0\quad {\text{and}}\quad d{\boldsymbol {\sigma }}:d{\boldsymbol {\varepsilon }}_{p}=0\,.}
Hence, both the normal to the yield surface and the plastic strain increment are perpendicular to the stress increment and must have the same direction.
For a work-hardening material, the yield surface can expand with increasing stress. We assume Drucker's second stability postulate, which states that for an infinitesimal stress cycle the plastic work is positive, i.e.,
{\displaystyle d{\boldsymbol {\sigma }}:d{\boldsymbol {\varepsilon }}_{p}\geq 0\,.}
The above quantity is equal to zero for purely elastic cycles. Examination of the work done over a cycle of plastic loading-unloading can be used to justify the validity of the associated flow rule.
=== Consistency condition ===
The Prager consistency condition is needed to close the set of constitutive equations and to eliminate the unknown parameter {\displaystyle d\lambda } from the system of equations. The consistency condition states that {\displaystyle df=0} at yield, because {\displaystyle f({\boldsymbol {\sigma }},{\boldsymbol {\varepsilon }}_{p})=0}, and hence
{\displaystyle df={\frac {\partial f}{\partial {\boldsymbol {\sigma }}}}:d{\boldsymbol {\sigma }}+{\frac {\partial f}{\partial {\boldsymbol {\varepsilon }}_{p}}}:d{\boldsymbol {\varepsilon }}_{p}=0\,.}
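In one dimension with linear isotropic hardening, f = |σ| − (σ₀ + H εₚ), the consistency condition fixes the plastic multiplier in closed form: dλ = E dε/(E + H). A minimal sketch with hypothetical constants:

```python
# 1D illustration of the consistency condition with linear isotropic
# hardening (hypothetical constants): f = |sigma| - (SIGMA_0 + H*eps_p).
# Enforcing df = 0 during plastic flow gives d_lambda = E*d_eps/(E + H).
E, H, SIGMA_0 = 200.0, 50.0, 1.0

def plastic_step(sigma, eps_p, d_eps):
    """Strain increment applied at a tensile stress state on the yield surface."""
    d_lam = E * d_eps / (E + H)        # from the consistency condition df = 0
    sigma += E * (d_eps - d_lam)       # elastic law on the elastic part of d_eps
    eps_p += d_lam                     # flow rule (1D tension: normal is +1)
    return sigma, eps_p

sigma, eps_p = SIGMA_0, 0.0            # start exactly at yield
sigma, eps_p = plastic_step(sigma, eps_p, 0.01)
# The updated state must still satisfy f = 0:
print(abs(sigma) - (SIGMA_0 + H * eps_p))  # ~0.0
```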
== Large deformation theory ==
Large deformation flow theories of plasticity typically start with one of the following assumptions:
the rate of deformation tensor can be additively decomposed into an elastic part and a plastic part, or
the deformation gradient tensor can be multiplicatively decomposed in an elastic part and a plastic part.
The first assumption was widely used for numerical simulations of metals but has gradually been superseded by the multiplicative theory.
=== Kinematics of multiplicative plasticity ===
The concept of multiplicative decomposition of the deformation gradient into elastic and plastic parts was first proposed independently by B. A. Bilby and E. Kröner in the context of crystal plasticity, and extended to continuum plasticity by Erasmus Lee. The decomposition assumes that the total deformation gradient (F) can be decomposed as:
{\displaystyle {\boldsymbol {F}}={\boldsymbol {F}}^{e}\cdot {\boldsymbol {F}}^{p}}
where Fe is the elastic (recoverable) part and Fp is the plastic (unrecoverable) part of the deformation. The spatial velocity gradient is given by
{\displaystyle {\begin{aligned}{\boldsymbol {l}}&={\dot {\boldsymbol {F}}}\cdot {\boldsymbol {F}}^{-1}=\left({\dot {\boldsymbol {F}}}^{e}\cdot {\boldsymbol {F}}^{p}+{\boldsymbol {F}}^{e}\cdot {\dot {\boldsymbol {F}}}^{p}\right)\cdot \left[({\boldsymbol {F}}^{p})^{-1}\cdot ({\boldsymbol {F}}^{e})^{-1}\right]\\&={\dot {\boldsymbol {F}}}^{e}\cdot ({\boldsymbol {F}}^{e})^{-1}+{\boldsymbol {F}}^{e}\cdot [{\dot {\boldsymbol {F}}}^{p}\cdot ({\boldsymbol {F}}^{p})^{-1}]\cdot ({\boldsymbol {F}}^{e})^{-1}\,.\end{aligned}}}
where a superposed dot indicates a time derivative. We can write the above as
{\displaystyle {\boldsymbol {l}}={\boldsymbol {l}}^{e}+{\boldsymbol {F}}^{e}\cdot {\boldsymbol {L}}^{p}\cdot ({\boldsymbol {F}}^{e})^{-1}\,.}
The quantity
{\displaystyle {\boldsymbol {L}}^{p}:={\dot {\boldsymbol {F}}}^{p}\cdot ({\boldsymbol {F}}^{p})^{-1}}
is called a plastic velocity gradient and is defined in an intermediate (incompatible) stress-free configuration. The symmetric part (Dp) of Lp is called the plastic rate of deformation while the skew-symmetric part (Wp) is called the plastic spin:
{\displaystyle {\boldsymbol {D}}^{p}={\tfrac {1}{2}}[{\boldsymbol {L}}^{p}+({\boldsymbol {L}}^{p})^{T}]~,~~{\boldsymbol {W}}^{p}={\tfrac {1}{2}}[{\boldsymbol {L}}^{p}-({\boldsymbol {L}}^{p})^{T}]\,.}
The plastic spin is ignored in most descriptions of finite plasticity.
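As a numerical sanity check, the decomposition of the spatial velocity gradient and the symmetric/skew split of Lp can be verified with arbitrary tensors; a minimal sketch in NumPy, where all tensor values are illustrative stand-ins rather than data from any particular material model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative elastic/plastic parts and their time derivatives.
Fe = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
Fp = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
Fe_dot = rng.standard_normal((3, 3))
Fp_dot = rng.standard_normal((3, 3))

# Total deformation gradient F = Fe . Fp and its rate (product rule).
F = Fe @ Fp
F_dot = Fe_dot @ Fp + Fe @ Fp_dot

# Spatial velocity gradient, computed directly and via the elastic/plastic split.
l_direct = F_dot @ np.linalg.inv(F)
Lp = Fp_dot @ np.linalg.inv(Fp)  # plastic velocity gradient
l_split = Fe_dot @ np.linalg.inv(Fe) + Fe @ Lp @ np.linalg.inv(Fe)
assert np.allclose(l_direct, l_split)

# Plastic rate of deformation (symmetric) and plastic spin (skew-symmetric).
Dp = 0.5 * (Lp + Lp.T)
Wp = 0.5 * (Lp - Lp.T)
assert np.allclose(Dp, Dp.T) and np.allclose(Wp, -Wp.T)
assert np.allclose(Dp + Wp, Lp)  # the split recovers Lp
```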
=== Elastic regime ===
The elastic behavior in the finite strain regime is typically described by a hyperelastic material model. The elastic strain can be measured using an elastic right Cauchy-Green deformation tensor defined as:
{\displaystyle {\boldsymbol {C}}^{e}:=({\boldsymbol {F}}^{e})^{T}\cdot {\boldsymbol {F}}^{e}\,.}
The logarithmic or Hencky strain tensor may then be defined as
{\displaystyle {\boldsymbol {E}}^{e}:={\tfrac {1}{2}}\ln {\boldsymbol {C}}^{e}\,.}
The symmetrized Mandel stress tensor is a convenient stress measure for finite plasticity and is defined as
{\displaystyle {\boldsymbol {M}}:={\tfrac {1}{2}}({\boldsymbol {C}}^{e}\cdot {\boldsymbol {S}}+{\boldsymbol {S}}\cdot {\boldsymbol {C}}^{e})}
where S is the second Piola-Kirchhoff stress. A possible hyperelastic model in terms of the logarithmic strain is
{\displaystyle {\boldsymbol {M}}={\frac {\partial W}{\partial {\boldsymbol {E}}^{e}}}=J\,{\frac {dU}{dJ}}\,{\boldsymbol {\mathit {1}}}+2\mu \,{\text{dev}}({\boldsymbol {E}}^{e})}
where W is a strain energy density function, J = det(F), μ is a modulus, and "dev" indicates the deviatoric part of a tensor.
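These quantities can be computed directly; a sketch in NumPy, assuming the common volumetric energy U(J) = (κ/2)(ln J)² (this choice of U, and the moduli below, are illustrative assumptions — the article leaves U unspecified):

```python
import numpy as np

kappa, mu = 160e9, 80e9  # illustrative bulk and shear moduli (Pa)

rng = np.random.default_rng(1)
Fe = np.eye(3) + 0.05 * rng.standard_normal((3, 3))  # illustrative elastic part

# Elastic right Cauchy-Green tensor and Hencky strain.  The matrix logarithm
# is taken via eigendecomposition, valid because Ce is symmetric positive definite.
Ce = Fe.T @ Fe
w, V = np.linalg.eigh(Ce)
Ee = 0.5 * V @ np.diag(np.log(w)) @ V.T

# Handy identity for the volumetric response: tr(Ee) = ln J.
J = np.linalg.det(Fe)
assert np.isclose(np.trace(Ee), np.log(J))

# Mandel stress for U(J) = (kappa/2)(ln J)^2, so J dU/dJ = kappa ln J;
# the volumetric term multiplies the identity tensor.
dev_Ee = Ee - np.trace(Ee) / 3.0 * np.eye(3)
M = kappa * np.log(J) * np.eye(3) + 2.0 * mu * dev_Ee
assert np.allclose(M, M.T)  # symmetric for this model
```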
=== Flow rule ===
Application of the Clausius-Duhem inequality leads, in the absence of a plastic spin, to the finite strain flow rule
{\displaystyle {\boldsymbol {D}}^{p}={\dot {\lambda }}\,{\frac {\partial f}{\partial {\boldsymbol {M}}}}\,.}
=== Loading-unloading conditions ===
The loading-unloading conditions can be shown to be equivalent to the Karush-Kuhn-Tucker conditions
{\displaystyle {\dot {\lambda }}\geq 0~,~~f\leq 0~,~~{\dot {\lambda }}\,f=0\,.}
=== Consistency condition ===
The consistency condition is identical to that for the small strain case,
{\displaystyle {\dot {\lambda }}\,{\dot {f}}=0\,.}
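The loading-unloading (Karush-Kuhn-Tucker) and consistency conditions are what a numerical stress update enforces step by step. A minimal one-dimensional return-mapping sketch, assuming small strains and linear isotropic hardening (the constitutive law and parameter values here are illustrative, not taken from the article):

```python
# 1D return mapping (small strain, linear isotropic hardening) enforcing
# the KKT conditions: dλ >= 0, f <= 0, dλ f = 0.
E, H, sigma_y = 200e9, 10e9, 250e6  # Young's modulus, hardening modulus, yield stress

def return_map(eps, eps_p, alpha):
    """One implicit stress-update step."""
    sigma_trial = E * (eps - eps_p)
    f_trial = abs(sigma_trial) - (sigma_y + H * alpha)
    if f_trial <= 0.0:              # elastic step: dλ = 0, f < 0
        return sigma_trial, eps_p, alpha
    dlam = f_trial / (E + H)        # plastic step: consistency enforces f = 0
    sign = 1.0 if sigma_trial >= 0 else -1.0
    eps_p += dlam * sign
    alpha += dlam
    return E * (eps - eps_p), eps_p, alpha

sigma, eps_p, alpha = return_map(0.002, 0.0, 0.0)
# The stress is returned exactly to the (hardened) yield surface:
assert abs(abs(sigma) - (sigma_y + H * alpha)) < 1e-6 * sigma_y
```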
== References ==
== See also ==
Plasticity (physics)
In physics, electromagnetism is an interaction that occurs between particles with electric charge via electromagnetic fields. The electromagnetic force is one of the four fundamental forces of nature. It is the dominant force in the interactions of atoms and molecules. Electromagnetism can be thought of as a combination of electrostatics and magnetism, which are distinct but closely intertwined phenomena. Electromagnetic forces occur between any two charged particles. Electric forces cause an attraction between particles with opposite charges and repulsion between particles with the same charge, while magnetism is an interaction that occurs between charged particles in relative motion. These two forces are described in terms of electromagnetic fields. Macroscopic charged objects are described in terms of Coulomb's law for electricity and Ampère's force law for magnetism; the Lorentz force describes microscopic charged particles.
The electromagnetic force is responsible for many of the chemical and physical phenomena observed in daily life. The electrostatic attraction between atomic nuclei and their electrons holds atoms together. Electric forces also allow different atoms to combine into molecules, including the macromolecules such as proteins that form the basis of life. Meanwhile, magnetic interactions between the spin and angular momentum magnetic moments of electrons also play a role in chemical reactivity; such relationships are studied in spin chemistry. Electromagnetism also plays several crucial roles in modern technology: electrical energy production, transformation and distribution; light, heat, and sound production and detection; fiber optic and wireless communication; sensors; computation; electrolysis; electroplating; and mechanical motors and actuators.
Electromagnetism has been studied since ancient times. Many ancient civilizations, including the Greeks and the Mayans, created wide-ranging theories to explain lightning, static electricity, and the attraction between magnetized pieces of iron ore. However, it was not until the late 18th century that scientists began to develop a mathematical basis for understanding the nature of electromagnetic interactions. In the 18th and 19th centuries, prominent scientists and mathematicians such as Coulomb, Gauss and Faraday developed namesake laws which helped to explain the formation and interaction of electromagnetic fields. This process culminated in the 1860s with the discovery of Maxwell's equations, a set of four partial differential equations which provide a complete description of classical electromagnetic fields. Maxwell's equations provided a sound mathematical basis for the relationships between electricity and magnetism that scientists had been exploring for centuries, and predicted the existence of self-sustaining electromagnetic waves. Maxwell postulated that such waves make up visible light, which was later shown to be true. Gamma-rays, x-rays, ultraviolet, visible, infrared radiation, microwaves and radio waves were all determined to be electromagnetic radiation differing only in their range of frequencies.
In the modern era, scientists continue to refine the theory of electromagnetism to account for the effects of modern physics, including quantum mechanics and relativity. The theoretical implications of electromagnetism, particularly the requirement that observations remain consistent when viewed from various moving frames of reference (relativistic electromagnetism) and the establishment of the speed of light based on properties of the medium of propagation (permeability and permittivity), helped inspire Einstein's theory of special relativity in 1905. Quantum electrodynamics (QED) modifies Maxwell's equations to be consistent with the quantized nature of matter. In QED, changes in the electromagnetic field are expressed in terms of discrete excitations, particles known as photons, the quanta of light.
== History ==
=== Ancient world ===
Investigation into electromagnetic phenomena began about 5,000 years ago. There is evidence that the ancient Chinese, Mayan, and potentially even Egyptian civilizations knew that the naturally magnetic mineral magnetite had attractive properties, and many incorporated it into their art and architecture. Ancient people were also aware of lightning and static electricity, although they had no idea of the mechanisms behind these phenomena. The Greek philosopher Thales of Miletus discovered around 600 B.C.E. that amber could acquire an electric charge when it was rubbed with cloth, which allowed it to pick up light objects such as pieces of straw. Thales also experimented with the ability of magnetic rocks to attract one another, and hypothesized that this phenomenon might be connected to the attractive power of amber, foreshadowing the deep connections between electricity and magnetism that would be discovered over 2,000 years later. Despite all this investigation, ancient civilizations had no understanding of the mathematical basis of electromagnetism, and often analyzed its impacts through the lens of religion rather than science (lightning, for instance, was considered to be a creation of the gods in many cultures).
=== 19th century ===
Electricity and magnetism were originally considered to be two separate forces. This view changed with the publication of James Clerk Maxwell's 1873 A Treatise on Electricity and Magnetism in which the interactions of positive and negative charges were shown to be mediated by one force. There are four main effects resulting from these interactions, all of which have been clearly demonstrated by experiments:
Electric charges attract or repel one another with a force inversely proportional to the square of the distance between them: opposite charges attract, like charges repel.
Magnetic poles (or states of polarization at individual points) attract or repel one another in a manner similar to positive and negative charges and always exist as pairs: every north pole is yoked to a south pole.
An electric current inside a wire creates a corresponding circumferential magnetic field outside the wire. Its direction (clockwise or counter-clockwise) depends on the direction of the current in the wire.
A current is induced in a loop of wire when it is moved toward or away from a magnetic field, or a magnet is moved towards or away from it; the direction of current depends on that of the movement.
In April 1820, Hans Christian Ørsted observed that an electrical current in a wire caused a nearby compass needle to move. At the time of discovery, Ørsted did not suggest any satisfactory explanation of the phenomenon, nor did he try to represent the phenomenon in a mathematical framework. However, three months later he began more intensive investigations. Soon thereafter he published his findings, proving that an electric current produces a magnetic field as it flows through a wire. The CGS unit of magnetic induction (oersted) is named in honor of his contributions to the field of electromagnetism.
His findings resulted in intensive research throughout the scientific community in electrodynamics. They influenced French physicist André-Marie Ampère's developments of a single mathematical form to represent the magnetic forces between current-carrying conductors. Ørsted's discovery also represented a major step toward a unified concept of energy.
This unification, which was observed by Michael Faraday, extended by James Clerk Maxwell, and partially reformulated by Oliver Heaviside and Heinrich Hertz, is one of the key accomplishments of 19th-century mathematical physics. It has had far-reaching consequences, one of which was the understanding of the nature of light. Unlike what was proposed by the electromagnetic theory of that time, light and other electromagnetic waves are at present seen as taking the form of quantized, self-propagating oscillatory electromagnetic field disturbances called photons. Different frequencies of oscillation give rise to the different forms of electromagnetic radiation, from radio waves at the lowest frequencies, to visible light at intermediate frequencies, to gamma rays at the highest frequencies.
Ørsted was not the only person to examine the relationship between electricity and magnetism. In 1802, Gian Domenico Romagnosi, an Italian legal scholar, deflected a magnetic needle using a Voltaic pile. The details of the experimental setup are not completely clear, nor is it known whether current flowed through the needle. An account of the discovery was published in 1802 in an Italian newspaper, but it was largely overlooked by the contemporary scientific community, because Romagnosi seemingly did not belong to this community.
An earlier (1735), and often neglected, connection between electricity and magnetism was reported by a Dr. Cookson. The account stated:A tradesman at Wakefield in Yorkshire, having put up a great number of knives and forks in a large box ... and having placed the box in the corner of a large room, there happened a sudden storm of thunder, lightning, &c. ... The owner emptying the box on a counter where some nails lay, the persons who took up the knives, that lay on the nails, observed that the knives took up the nails. On this the whole number was tried, and found to do the same, and that, to such a degree as to take up large nails, packing needles, and other iron things of considerable weight ... E. T. Whittaker suggested in 1910 that this particular event was responsible for lightning to be "credited with the power of magnetizing steel; and it was doubtless this which led Franklin in 1751 to attempt to magnetize a sewing-needle by means of the discharge of Leyden jars."
== A fundamental force ==
The electromagnetic force is the second strongest of the four known fundamental forces and has unlimited range.
All other forces, known as non-fundamental forces (e.g., friction, contact forces), are derived from the four fundamental forces. At high energy, the weak force and electromagnetic force are unified as a single interaction called the electroweak interaction.
Most of the forces involved in interactions between atoms are explained by electromagnetic forces between electrically charged atomic nuclei and electrons. The electromagnetic force is also involved in all forms of chemical phenomena.
Electromagnetism explains how materials carry momentum despite being composed of individual particles and empty space. The forces we experience when "pushing" or "pulling" ordinary material objects result from intermolecular forces between individual molecules in our bodies and in the objects.
The effective forces generated by the momentum of electrons' movement is a necessary part of understanding atomic and intermolecular interactions. As electrons move between interacting atoms, they carry momentum with them. As a collection of electrons becomes more confined, their minimum momentum necessarily increases due to the Pauli exclusion principle. The behavior of matter at the molecular scale, including its density, is determined by the balance between the electromagnetic force and the force generated by the exchange of momentum carried by the electrons themselves.
== Classical electrodynamics ==
In 1600, William Gilbert proposed, in his De Magnete, that electricity and magnetism, while both capable of causing attraction and repulsion of objects, were distinct effects. Mariners had noticed that lightning strikes had the ability to disturb a compass needle. The link between lightning and electricity was not confirmed, however, until Benjamin Franklin's proposed experiments of 1752 were conducted on 10 May 1752 by Thomas-François Dalibard of France, who used a 40-foot-tall (12 m) iron rod instead of a kite and successfully extracted electrical sparks from a cloud.
One of the first to discover and publish a link between human-made electric current and magnetism was Gian Romagnosi, who in 1802 noticed that connecting a wire across a voltaic pile deflected a nearby compass needle. However, the effect did not become widely known until 1820, when Ørsted performed a similar experiment. Ørsted's work influenced Ampère to conduct further experiments, which eventually gave rise to a new area of physics: electrodynamics. By determining a force law for the interaction between elements of electric current, Ampère placed the subject on a solid mathematical foundation.
A theory of electromagnetism, known as classical electromagnetism, was developed by several physicists during the period between 1820 and 1873, when James Clerk Maxwell's treatise was published, which unified previous developments into a single theory, proposing that light was an electromagnetic wave propagating in the luminiferous ether. In classical electromagnetism, the behavior of the electromagnetic field is described by a set of equations known as Maxwell's equations, and the electromagnetic force is given by the Lorentz force law.
One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the speed of light in vacuum is a universal constant that is dependent only on the electrical permittivity and magnetic permeability of free space. This violates Galilean invariance, a long-standing cornerstone of classical mechanics. One way to reconcile the two theories (electromagnetism and classical mechanics) is to assume the existence of a luminiferous aether through which the light propagates. However, subsequent experimental efforts failed to detect the presence of the aether. After important contributions of Hendrik Lorentz and Henri Poincaré, in 1905, Albert Einstein solved the problem with the introduction of special relativity, which replaced classical kinematics with a new theory of kinematics compatible with classical electromagnetism. (For more information, see History of special relativity.)
In addition, relativity theory implies that in moving frames of reference, a magnetic field transforms to a field with a nonzero electric component and conversely, a moving electric field transforms to a nonzero magnetic component, thus firmly showing that the phenomena are two sides of the same coin. Hence the term "electromagnetism". (For more information, see Classical electromagnetism and special relativity and Covariant formulation of classical electromagnetism.)
Today few problems in electromagnetism remain unsolved. These include: the lack of magnetic monopoles, the Abraham–Minkowski controversy, the location in space of the electromagnetic field energy, and the mechanism by which some organisms can sense electric and magnetic fields.
== Extension to nonlinear phenomena ==
The Maxwell equations are linear, in that a change in the sources (the charges and currents) results in a proportional change of the fields. Nonlinear dynamics can occur when electromagnetic fields couple to matter that follows nonlinear dynamical laws. This is studied, for example, in the subject of magnetohydrodynamics, which combines Maxwell theory with the Navier–Stokes equations. Another branch of electromagnetism dealing with nonlinearity is nonlinear optics.
== Quantities and units ==
In the electromagnetic CGS system, electric current is a fundamental quantity defined via Ampère's law and takes the permeability as a dimensionless quantity (relative permeability) whose value in vacuum is unity. As a consequence, the square of the speed of light appears explicitly in some of the equations interrelating quantities in this system.
Formulas for physical laws of electromagnetism (such as Maxwell's equations) need to be adjusted depending on what system of units one uses. This is because there is no one-to-one correspondence between electromagnetic units in SI and those in CGS, as is the case for mechanical units. Furthermore, within CGS, there are several plausible choices of electromagnetic units, leading to different unit "sub-systems", including Gaussian, "ESU", "EMU", and Heaviside–Lorentz. Among these choices, Gaussian units are the most common today, and in fact the phrase "CGS units" is often used to refer specifically to CGS-Gaussian units.
== Applications ==
The study of electromagnetism informs the design of electric circuits, magnetic circuits, and semiconductor devices.
== See also ==
== References ==
== Further reading ==
=== Web sources ===
=== Textbooks ===
=== General coverage ===
== External links ==
Magnetic Field Strength Converter
Electromagnetic Force – from Eric Weisstein's World of Physics
In electromagnetism, the Lorentz force is the force exerted on a charged particle by electric and magnetic fields. It is the fundamental force that governs the motion of charged particles in electromagnetic fields and underlies many physical phenomena, from the operation of electric motors and particle accelerators to the behavior of plasmas.
The force has two components. The electric force acts in the direction of the electric field for positive charges and opposite to it for negative charges, tending to accelerate the particle in a straight line. The magnetic force is perpendicular to both the particle's velocity and the magnetic field, and it causes the particle to move along a curved trajectory, often circular or helical in form, depending on the directions of the fields.
Variations on the force law describe the magnetic force on a current-carrying wire (sometimes called Laplace force), and the electromotive force in a wire loop moving through a magnetic field (an aspect of Faraday's law of induction).
Historians suggest that the law is implicit in a paper by James Clerk Maxwell, published in 1865. Hendrik Lorentz arrived at a complete derivation in 1895, identifying the contribution of the electric force a few years after Oliver Heaviside correctly identified the contribution of the magnetic force.
== Definition ==
=== Charged particle ===
The force F acting on a particle of electric charge q with instantaneous velocity v, due to an external electric field E and magnetic field B, is given by (SI definition of quantities):
{\displaystyle \mathbf {F} =q\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right)}
where × is the vector cross product (all boldface quantities are vectors). In terms of Cartesian components, we have:
{\displaystyle {\begin{aligned}F_{x}&=q\left(E_{x}+v_{y}B_{z}-v_{z}B_{y}\right),\\[0.5ex]F_{y}&=q\left(E_{y}+v_{z}B_{x}-v_{x}B_{z}\right),\\[0.5ex]F_{z}&=q\left(E_{z}+v_{x}B_{y}-v_{y}B_{x}\right).\end{aligned}}}
In general, the electric and magnetic fields are functions of the position and time. Therefore, explicitly, the Lorentz force can be written as:
{\displaystyle \mathbf {F} \left(\mathbf {r} (t),{\dot {\mathbf {r} }}(t),t,q\right)=q\left[\mathbf {E} (\mathbf {r} ,t)+{\dot {\mathbf {r} }}(t)\times \mathbf {B} (\mathbf {r} ,t)\right]}
in which r is the position vector of the charged particle, t is time, and the overdot is a time derivative.
A positively charged particle will be accelerated in the same linear orientation as the E field, but will curve perpendicularly to both the instantaneous velocity vector v and the B field according to the right-hand rule (in detail, if the fingers of the right hand are extended to point in the direction of v and are then curled to point in the direction of B, then the extended thumb will point in the direction of F).
The term qE is called the electric force, while the term q(v × B) is called the magnetic force. According to some definitions, the term "Lorentz force" refers specifically to the formula for the magnetic force, with the total electromagnetic force (including the electric force) given some other (nonstandard) name. This article will not follow this nomenclature: in what follows, the term Lorentz force will refer to the expression for the total force.
The magnetic force component of the Lorentz force manifests itself as the force that acts on a current-carrying wire in a magnetic field. In that context, it is also called the Laplace force.
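The force law translates directly into code; a minimal sketch in NumPy with illustrative SI values (the particular charge, field, and velocity numbers below are arbitrary):

```python
import numpy as np

def lorentz_force(q, v, E, B):
    """F = q (E + v x B), SI units."""
    v, E, B = map(np.asarray, (v, E, B))
    return q * (E + np.cross(v, B))

q = 1.602176634e-19               # elementary charge (C)
v = np.array([1.0e5, 0.0, 0.0])   # velocity (m/s), illustrative
E = np.array([0.0, 1.0e3, 0.0])   # electric field (V/m)
B = np.array([0.0, 0.0, 1.0e-2])  # magnetic field (T)

F = lorentz_force(q, v, E, B)
# Check against the component form, e.g. F_y = q (E_y + v_z B_x - v_x B_z):
assert np.isclose(F[1], q * (E[1] + v[2] * B[0] - v[0] * B[2]))
```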
The Lorentz force is a force exerted by the electromagnetic field on the charged particle, that is, it is the rate at which linear momentum is transferred from the electromagnetic field to the particle. Associated with it is the power which is the rate at which energy is transferred from the electromagnetic field to the particle. That power is
{\displaystyle \mathbf {v} \cdot \mathbf {F} =q\,\mathbf {v} \cdot \mathbf {E} .}
Notice that the magnetic field does not contribute to the power because the magnetic force is always perpendicular to the velocity of the particle and does no work.
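This orthogonality is easy to confirm numerically; a short sketch with arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(2)
q = 1.0
v, E, B = rng.standard_normal((3, 3))  # three arbitrary 3-vectors

F = q * (E + np.cross(v, B))
# v . (v x B) = 0, so the delivered power reduces to q v . E:
assert np.isclose(np.dot(v, np.cross(v, B)), 0.0)
assert np.isclose(np.dot(v, F), q * np.dot(v, E))
```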
=== Continuous charge distribution ===
For a continuous charge distribution in motion, the Lorentz force equation becomes:
{\displaystyle \mathrm {d} \mathbf {F} =\mathrm {d} q\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right)}
where {\displaystyle \mathrm {d} \mathbf {F} } is the force on a small piece of the charge distribution with charge {\displaystyle \mathrm {d} q}. If both sides of this equation are divided by the volume of this small piece of the charge distribution {\displaystyle \mathrm {d} V}, the result is:
{\displaystyle \mathbf {f} =\rho \left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right)}
where {\displaystyle \mathbf {f} } is the force density (force per unit volume) and {\displaystyle \rho } is the charge density (charge per unit volume). Next, the current density corresponding to the motion of the charge continuum is
{\displaystyle \mathbf {J} =\rho \mathbf {v} }
so the continuous analogue to the equation is
{\displaystyle \mathbf {f} =\rho \mathbf {E} +\mathbf {J} \times \mathbf {B} }
The total force is the volume integral over the charge distribution:
{\displaystyle \mathbf {F} =\int \left(\rho \mathbf {E} +\mathbf {J} \times \mathbf {B} \right)\mathrm {d} V.}
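The volume integral can be approximated by summing the force density over a grid; a sketch in NumPy, where the charge density, current density, and fields are illustrative closed-form choices (not taken from the article) picked so the electric and magnetic contributions cancel exactly:

```python
import numpy as np

# Midpoint grid over a unit cube.
n = 20
x = (np.arange(n) + 0.5) / n
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
dV = (1.0 / n) ** 3

rho = X * Y                        # charge density (C/m^3), illustrative
E = np.stack([np.ones_like(X), np.zeros_like(X), np.zeros_like(X)], axis=-1)
J = np.stack([np.zeros_like(X), np.zeros_like(X), rho], axis=-1)  # J = rho z_hat
B = np.array([0.0, 1.0, 0.0])      # uniform magnetic field (T)

# Force density rho E + J x B, then the discretized volume integral.
f = rho[..., None] * E + np.cross(J, B)
F_total = f.sum(axis=(0, 1, 2)) * dV

# Total charge: integral of x*y over the unit cube is 1/4 (midpoint rule is exact here).
assert np.isclose(rho.sum() * dV, 0.25)
# J x B = -rho x_hat cancels rho E = rho x_hat, so the net force vanishes.
assert np.allclose(F_total, 0.0)
```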
By eliminating {\displaystyle \rho } and {\displaystyle \mathbf {J} }, using Maxwell's equations, and manipulating using the theorems of vector calculus, this form of the equation can be used to derive the Maxwell stress tensor {\displaystyle {\boldsymbol {\sigma }}}, which in turn can be combined with the Poynting vector {\displaystyle \mathbf {S} } to obtain the electromagnetic stress–energy tensor T used in general relativity.
In terms of {\displaystyle {\boldsymbol {\sigma }}} and {\displaystyle \mathbf {S} }, another way to write the Lorentz force (per unit volume) is
{\displaystyle \mathbf {f} =\nabla \cdot {\boldsymbol {\sigma }}-{\dfrac {1}{c^{2}}}{\dfrac {\partial \mathbf {S} }{\partial t}}}
where {\displaystyle \nabla \cdot } denotes the divergence of the tensor field and {\displaystyle c} is the speed of light. Rather than the amount of charge and its velocity in electric and magnetic fields, this equation relates the energy flux (flow of energy per unit time through a unit area) in the fields to the force exerted on a charge distribution. See Covariant formulation of classical electromagnetism for more details.
The density of power associated with the Lorentz force in a material medium is
{\displaystyle \mathbf {J} \cdot \mathbf {E} .}
If we separate the total charge and total current into their free and bound parts, we get that the density of the Lorentz force is
{\displaystyle \mathbf {f} =\left(\rho _{f}-\nabla \cdot \mathbf {P} \right)\mathbf {E} +\left(\mathbf {J} _{f}+\nabla \times \mathbf {M} +{\frac {\partial \mathbf {P} }{\partial t}}\right)\times \mathbf {B} .}
where {\displaystyle \rho _{f}} is the density of free charge, {\displaystyle \mathbf {P} } is the polarization density, {\displaystyle \mathbf {J} _{f}} is the density of free current, and {\displaystyle \mathbf {M} } is the magnetization density. In this way, the Lorentz force can explain the torque applied to a permanent magnet by the magnetic field. The density of the associated power is
{\displaystyle \left(\mathbf {J} _{f}+\nabla \times \mathbf {M} +{\frac {\partial \mathbf {P} }{\partial t}}\right)\cdot \mathbf {E} .}
=== Formulation in the Gaussian system ===
The above-mentioned formulae use the conventions for the definition of the electric and magnetic field used with the SI, which is the most common. However, other conventions with the same physics (i.e. forces on e.g. an electron) are possible and used. In the conventions used with the older CGS-Gaussian units, which are somewhat more common among some theoretical physicists as well as condensed matter experimentalists, one has instead
{\displaystyle \mathbf {F} =q_{\mathrm {G} }\left(\mathbf {E} _{\mathrm {G} }+{\frac {\mathbf {v} }{c}}\times \mathbf {B} _{\mathrm {G} }\right),}
where c is the speed of light. Although this equation looks slightly different, it is equivalent, since one has the following relations:
{\displaystyle q_{\mathrm {G} }={\frac {q_{\mathrm {SI} }}{\sqrt {4\pi \varepsilon _{0}}}},\quad \mathbf {E} _{\mathrm {G} }={\sqrt {4\pi \varepsilon _{0}}}\,\mathbf {E} _{\mathrm {SI} },\quad \mathbf {B} _{\mathrm {G} }={\sqrt {4\pi /\mu _{0}}}\,{\mathbf {B} _{\mathrm {SI} }},\quad c={\frac {1}{\sqrt {\varepsilon _{0}\mu _{0}}}}.}
where ε0 is the vacuum permittivity and μ0 the vacuum permeability. In practice, the subscripts "G" and "SI" are omitted, and the used convention (and unit) must be determined from context.
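The equivalence of the two conventions can be checked numerically: convert the SI quantities with the relations above and verify both force laws give the same result. A sketch (the charge, velocity, and field values are illustrative):

```python
import numpy as np

eps0 = 8.8541878128e-12  # vacuum permittivity (F/m)
mu0 = 4e-7 * np.pi       # vacuum permeability (H/m)
c = 1.0 / np.sqrt(eps0 * mu0)

# Illustrative SI quantities.
q_si = 1.602176634e-19
v = np.array([1.0e6, 2.0e5, 0.0])
E_si = np.array([5.0, -3.0, 1.0])
B_si = np.array([0.0, 0.0, 2.0e-3])

F_si = q_si * (E_si + np.cross(v, B_si))

# Convert to Gaussian-convention quantities and apply the Gaussian force law.
q_g = q_si / np.sqrt(4 * np.pi * eps0)
E_g = np.sqrt(4 * np.pi * eps0) * E_si
B_g = np.sqrt(4 * np.pi / mu0) * B_si
F_g = q_g * (E_g + np.cross(v / c, B_g))

assert np.allclose(F_si, F_g)  # same physics, different bookkeeping
```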
== History ==
Early attempts to quantitatively describe the electromagnetic force were made in the mid-18th century. It was proposed that the force on magnetic poles (by Johann Tobias Mayer and others in 1760) and on electrically charged objects (by Henry Cavendish in 1762) obeyed an inverse-square law. However, in both cases the experimental proof was neither complete nor conclusive. It was not until 1784 that Charles-Augustin de Coulomb, using a torsion balance, was able to definitively show through experiment that this was true. Soon after the discovery in 1820 by Hans Christian Ørsted that a magnetic needle is acted on by a voltaic current, André-Marie Ampère that same year was able to devise through experimentation the formula for the angular dependence of the force between two current elements. In all these descriptions, the force was always described in terms of the properties of the matter involved and the distances between two masses or charges rather than in terms of electric and magnetic fields.
The modern concept of electric and magnetic fields first arose in the theories of Michael Faraday, particularly his idea of lines of force, later to be given full mathematical description by Lord Kelvin and James Clerk Maxwell. From a modern perspective it is possible to identify in Maxwell's 1865 formulation of his field equations a form of the Lorentz force equation in relation to electric currents, although in the time of Maxwell it was not evident how his equations related to the forces on moving charged objects. J. J. Thomson was the first to attempt to derive from Maxwell's field equations the electromagnetic forces on a moving charged object in terms of the object's properties and external fields. Interested in determining the electromagnetic behavior of the charged particles in cathode rays, Thomson published a paper in 1881 wherein he gave the force on the particles due to an external magnetic field as
{\displaystyle \mathbf {F} ={\frac {q}{2}}\mathbf {v} \times \mathbf {B} .}
Thomson derived the correct basic form of the formula, but, because of some miscalculations and an incomplete description of the displacement current, included an incorrect scale-factor of a half in front of the formula. Oliver Heaviside invented the modern vector notation and applied it to Maxwell's field equations; he also (in 1885 and 1889) fixed the mistakes of Thomson's derivation and arrived at the correct form of the magnetic force on a moving charged object. Finally, in 1895, Hendrik Lorentz derived the modern form of the formula for the electromagnetic force which includes the contributions to the total force from both the electric and the magnetic fields. Lorentz began by abandoning the Maxwellian descriptions of the ether and conduction. Instead, Lorentz made a distinction between matter and the luminiferous aether and sought to apply the Maxwell equations at a microscopic scale. Using Heaviside's version of the Maxwell equations for a stationary ether and applying Lagrangian mechanics (see below), Lorentz arrived at the correct and complete form of the force law that now bears his name.
== Lorentz force law as the definition of E and B ==
In many textbook treatments of classical electromagnetism, the Lorentz force law is used as the definition of the electric and magnetic fields E and B. To be specific, the Lorentz force is understood to be the following empirical statement:
The electromagnetic force F on a test charge at a given point and time is a certain function of its charge q and velocity v, which can be parameterized by exactly two vectors E and B, in the functional form:
{\displaystyle \mathbf {F} =q(\mathbf {E} +\mathbf {v} \times \mathbf {B} )}
This is valid even for particles approaching the speed of light (that is, magnitude of v, |v| ≈ c). The two vector fields E and B are thereby defined throughout space and time, and are called the "electric field" and "magnetic field". The fields are defined everywhere in space and time by the force a test charge would experience, whether or not a charge is actually present to experience it.
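The defining formula is straightforward to evaluate directly. A minimal sketch (the numbers are illustrative, not tied to any particular experiment): the force on a proton moving along x through a uniform field along z.

```python
import numpy as np

def lorentz_force(q, E, v, B):
    """F = q (E + v x B); q in coulombs, E in V/m, v in m/s, B in tesla."""
    return q * (np.asarray(E, dtype=float) + np.cross(v, B))

# Illustrative: a proton (q ~ 1.602e-19 C) moving along +x at 1e5 m/s
# through B = 2 T along +z, with no electric field.
q = 1.602e-19
F = lorentz_force(q, E=[0.0, 0.0, 0.0], v=[1.0e5, 0.0, 0.0], B=[0.0, 0.0, 2.0])
# v x B points along -y, so the force is (0, -q * 2e5, 0).
```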
== Trajectories of particles due to the Lorentz force ==
In many cases of practical interest, the motion in a magnetic field of an electrically charged particle (such as an electron or ion in a plasma) can be treated as the superposition of a relatively fast circular motion around a point called the guiding center and a relatively slow drift of this point. The drift speeds may differ for various species depending on their charge states, masses, or temperatures, possibly resulting in electric currents or chemical separation.
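The gyration-plus-drift picture can be reproduced numerically. The sketch below uses the standard Boris pusher (a widely used integrator for the Lorentz-force equation of motion) with illustrative values E = 1 V/m x̂, B = 2 T ẑ, and charge-to-mass ratio 1; the guiding center should drift at v_d = E×B/|B|² = (0, −0.5, 0) m/s regardless of the fast circular motion.

```python
import numpy as np

def boris_push(x, v, q_m, E, B, dt, steps):
    """Advance dv/dt = (q/m)(E + v x B) with the Boris rotation scheme."""
    xs = [x.copy()]
    for _ in range(steps):
        v_minus = v + 0.5 * dt * q_m * E              # half electric kick
        tvec = 0.5 * dt * q_m * B                     # magnetic rotation vector
        svec = 2.0 * tvec / (1.0 + tvec @ tvec)
        v_prime = v_minus + np.cross(v_minus, tvec)
        v_plus = v_minus + np.cross(v_prime, svec)    # rotation by the B field
        v = v_plus + 0.5 * dt * q_m * E               # second half electric kick
        x = x + v * dt
        xs.append(x.copy())
    return np.array(xs), v

E = np.array([1.0, 0.0, 0.0])   # V/m (illustrative)
B = np.array([0.0, 0.0, 2.0])   # T   (illustrative)
dt, steps = 1e-3, 20000
xs, v_final = boris_push(np.zeros(3), np.array([0.0, 1.0, 0.0]), 1.0, E, B, dt, steps)
drift = (xs[-1] - xs[0]) / (steps * dt)  # average velocity ~ guiding-center drift
```

Averaged over many gyroperiods the circular motion cancels, leaving the E×B drift; the same integrator exhibits grad-B or curvature drifts if the fields are made nonuniform.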
== Significance of the Lorentz force ==
While the modern Maxwell's equations describe how electrically charged particles and currents or moving charged particles give rise to electric and magnetic fields, the Lorentz force law completes that picture by describing the force acting on a moving point charge q in the presence of electromagnetic fields. The Lorentz force law describes the effect of E and B upon a point charge, but such electromagnetic forces are not the entire picture. Charged particles are possibly coupled to other forces, notably gravity and nuclear forces. Thus, Maxwell's equations do not stand separate from other physical laws, but are coupled to them via the charge and current densities. The response of a point charge to the Lorentz law is one aspect; the generation of E and B by currents and charges is another.
In real materials the Lorentz force is inadequate to describe the collective behavior of charged particles, both in principle and as a matter of computation. The charged particles in a material medium not only respond to the E and B fields but also generate these fields. Complex transport equations must be solved to determine the time and spatial response of charges, for example, the Boltzmann equation or the Fokker–Planck equation or the Navier–Stokes equations. For example, see magnetohydrodynamics, fluid dynamics, electrohydrodynamics, superconductivity, stellar evolution. An entire physical apparatus for dealing with these matters has developed. See for example, Green–Kubo relations and Green's function (many-body theory).
== Force on a current-carrying wire ==
When a wire carrying an electric current is placed in an external magnetic field, each of the moving charges, which comprise the current, experiences the Lorentz force, and together they can create a macroscopic force on the wire (sometimes called the Laplace force). By combining the Lorentz force law above with the definition of electric current, the following equation results, in the case of a straight stationary wire in a homogeneous field:
{\displaystyle \mathbf {F} =I{\boldsymbol {\ell }}\times \mathbf {B} ,}
where ℓ is a vector whose magnitude is the length of the wire, and whose direction is along the wire, aligned with the direction of the conventional current I.
If the wire is not straight, the force on it can be computed by applying this formula to each infinitesimal segment of wire dℓ, then adding up all these forces by integration. This results in the same formal expression, but ℓ should now be understood as the vector connecting the end points of the curved wire, directed from the start to the end point of the conventional current. Usually, there will also be a net torque.
If, in addition, the magnetic field is inhomogeneous, the net force on a stationary rigid wire carrying a steady current I is given by integration along the wire,
{\displaystyle \mathbf {F} =I\int (\mathrm {d} {\boldsymbol {\ell }}\times \mathbf {B} ).}
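For a uniform field this integral telescopes to I(ℓ×B), with ℓ the endpoint-to-endpoint vector described above. A short numerical sketch (all values illustrative) checks this for a semicircular wire:

```python
import numpy as np

I_cur, R, B0 = 2.0, 0.5, 1.5                    # A, m, T (illustrative)
B = np.array([0.0, 0.0, B0])

# Semicircular wire of radius R in the xy-plane, discretized into chords.
theta = np.linspace(0.0, np.pi, 2001)
pts = np.stack([R * np.cos(theta), R * np.sin(theta), np.zeros_like(theta)], axis=1)
dl = np.diff(pts, axis=0)                       # segments d-ell along the current
F = I_cur * np.cross(dl, B).sum(axis=0)         # F = I * sum(d-ell x B)

# In a uniform field the sum telescopes exactly: F = I (ell x B), ell = end - start.
ell = pts[-1] - pts[0]
F_closed = I_cur * np.cross(ell, B)             # = (0, 2 I R B0, 0)
```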
One application of this is Ampère's force law, which describes how two current-carrying wires can attract or repel each other, since each experiences a Lorentz force from the other's generated magnetic field.
Another application is an induction motor. The stator winding AC current generates a moving magnetic field which induces a current in the rotor. The subsequent Lorentz force F acting on the rotor creates a torque, making the motor spin. Hence, though the Lorentz force law does not apply when the magnetic field B is generated by the current I, it does apply when the current I is induced by the movement of the magnetic field B.
== Electromotive force ==
The magnetic force (qv × B) component of the Lorentz force is responsible for motional electromotive force (or motional EMF), the phenomenon underlying many electrical generators. When a conductor is moved through a magnetic field, the magnetic field exerts opposite forces on electrons and nuclei in the wire, and this creates the EMF. The term "motional EMF" is applied to this phenomenon, since the EMF is due to the motion of the wire.
In other electrical generators, the magnets move, while the conductors do not. In this case, the EMF is due to the electric force (qE) term in the Lorentz Force equation. The electric field in question is created by the changing magnetic field, resulting in an induced EMF called the transformer EMF, as described by the Maxwell–Faraday equation (one of the four modern Maxwell's equations).
Both of these EMFs, despite their apparently distinct origins, are described by the same equation, namely, the EMF is the rate of change of magnetic flux through the wire. (This is Faraday's law of induction, see below.) Einstein's special theory of relativity was partially motivated by the desire to better understand this link between the two effects. In fact, the electric and magnetic fields are different facets of the same electromagnetic field, and in moving from one inertial frame to another, the solenoidal vector field portion of the E-field can change in whole or in part to a B-field or vice versa.
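For the textbook sliding-rod circuit the two viewpoints give the same number. A minimal sketch with illustrative values: a rod of length L moving at speed v perpendicular to a uniform B.

```python
# Motional EMF: the magnetic force per unit charge on charges in the rod has
# magnitude v*B directed along the rod, so EMF = integral of (v x B) . dl = B*L*v.
# Faraday's flux view gives the same: the circuit area grows at rate L*v,
# so dPhi/dt = B*L*v. Values below are illustrative.
B, L, v = 0.8, 0.25, 3.0        # T, m, m/s
emf_lorentz = B * L * v         # from the force (motional) picture
emf_faraday = B * (L * v)       # from the rate of change of flux
```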
== Lorentz force and Faraday's law of induction ==
Given a loop of wire in a magnetic field, Faraday's law of induction states the induced electromotive force (EMF) in the wire is:
{\displaystyle {\mathcal {E}}=-{\frac {\mathrm {d} \Phi _{B}}{\mathrm {d} t}}}
where
{\displaystyle \Phi _{B}=\int _{\Sigma (t)}\mathbf {B} (\mathbf {r} ,t)\cdot \mathrm {d} \mathbf {A} ,}
is the magnetic flux through the loop, B is the magnetic field, Σ(t) is a surface bounded by the closed contour ∂Σ(t), at time t, dA is an infinitesimal vector area element of Σ(t) (magnitude is the area of an infinitesimal patch of surface, direction is orthogonal to that surface patch).
The sign of the EMF is determined by Lenz's law. Note that this is valid not only for a stationary wire but also for a moving wire.
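The flux rule is easy to check numerically. A sketch with an illustrative geometry: a circular loop whose radius shrinks linearly in a uniform perpendicular field, for which the exact EMF is 2πB r(t) a.

```python
import numpy as np

B, r0, a = 1.2, 0.5, 0.1                       # T, m, m/s (illustrative)
flux = lambda t: B * np.pi * (r0 - a * t)**2   # Phi_B through the shrinking loop

t0, h = 2.0, 1e-6
emf_numeric = -(flux(t0 + h) - flux(t0 - h)) / (2 * h)  # -dPhi/dt, central difference
emf_exact = 2 * np.pi * B * (r0 - a * t0) * a           # differentiate by hand
```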
From Faraday's law of induction (which is valid for a moving wire, for instance in a motor) and the Maxwell equations, the Lorentz force can be deduced. The reverse is also true: the Lorentz force and the Maxwell equations can be used to derive Faraday's law.
Let ∂Σ(t) be the moving wire, moving together without rotation and with constant velocity v and Σ(t) be the internal surface of the wire. The EMF around the closed path ∂Σ(t) is given by:
{\displaystyle {\mathcal {E}}=\oint _{\partial \Sigma (t)}{\frac {\mathbf {F} }{q}}\cdot \mathrm {d} {\boldsymbol {\ell }}}
where
{\displaystyle \mathbf {E} '(\mathbf {r} ,t)=\mathbf {F} /q(\mathbf {r} ,t)}
is the electric field and dℓ is an infinitesimal vector element of the contour ∂Σ(t). Equating both integrals leads to the field theory form of Faraday's law, given by:
{\displaystyle {\mathcal {E}}=\oint _{\partial \Sigma (t)}\mathbf {E} '(\mathbf {r} ,t)\cdot \mathrm {d} {\boldsymbol {\ell }}=-{\frac {\mathrm {d} }{\mathrm {d} t}}\int _{\Sigma (t)}\mathbf {B} (\mathbf {r} ,t)\cdot \mathrm {d} \mathbf {A} .}
This result can be compared with the version of Faraday's law of induction that appears in the modern Maxwell's equations, called the (integral form of) Maxwell–Faraday equation:
{\displaystyle \oint _{\partial \Sigma (t)}\mathbf {E} (\mathbf {r} ,t)\cdot \mathrm {d} {\boldsymbol {\ell }}=-\int _{\Sigma (t)}{\frac {\partial \mathbf {B} (\mathbf {r} ,t)}{\partial t}}\cdot \mathrm {d} \mathbf {A} .}
The two equations are equivalent if the wire is not moving. If the circuit is moving with a velocity v in some direction, then using the Leibniz integral rule and the fact that div B = 0 gives
{\displaystyle \oint _{\partial \Sigma (t)}\mathbf {E} '(\mathbf {r} ,t)\cdot \mathrm {d} {\boldsymbol {\ell }}=-\int _{\Sigma (t)}{\frac {\partial \mathbf {B} (\mathbf {r} ,t)}{\partial t}}\cdot \mathrm {d} \mathbf {A} +\oint _{\partial \Sigma (t)}\left(\mathbf {v} \times \mathbf {B} (\mathbf {r} ,t)\right)\cdot \mathrm {d} {\boldsymbol {\ell }}.}
Substituting the Maxwell-Faraday equation then gives
{\displaystyle \oint _{\partial \Sigma (t)}\mathbf {E} '(\mathbf {r} ,t)\cdot \mathrm {d} {\boldsymbol {\ell }}=\oint _{\partial \Sigma (t)}\mathbf {E} (\mathbf {r} ,t)\cdot \mathrm {d} {\boldsymbol {\ell }}+\oint _{\partial \Sigma (t)}\left(\mathbf {v} \times \mathbf {B} (\mathbf {r} ,t)\right)\cdot \mathrm {d} {\boldsymbol {\ell }}}
Since this is valid for any wire position, it implies that
{\displaystyle \mathbf {F} =q\,\mathbf {E} (\mathbf {r} ,\,t)+q\,\mathbf {v} \times \mathbf {B} (\mathbf {r} ,\,t).}
Faraday's law of induction holds whether the loop of wire is rigid and stationary, or in motion or in process of deformation, and it holds whether the magnetic field is constant in time or changing. However, there are cases where Faraday's law is either inadequate or difficult to use, and application of the underlying Lorentz force law is necessary. See inapplicability of Faraday's law.
If the magnetic field is fixed in time and the conducting loop moves through the field, the magnetic flux ΦB linking the loop can change in several ways. For example, if the B-field varies with position, and the loop moves to a location with different B-field, ΦB will change. Alternatively, if the loop changes orientation with respect to the B-field, the B ⋅ dA differential element will change because of the different angle between B and dA, also changing ΦB. As a third example, if a portion of the circuit is swept through a uniform, time-independent B-field, and another portion of the circuit is held stationary, the flux linking the entire closed circuit can change due to the shift in relative position of the circuit's component parts with time (surface ∂Σ(t) time-dependent). In all three cases, Faraday's law of induction then predicts the EMF generated by the change in ΦB.
Note that the Maxwell–Faraday equation implies that the electric field E is non-conservative when the magnetic field B varies in time: it is not expressible as the gradient of a scalar field, and the gradient theorem does not apply, since its curl is not zero.
== Lorentz force in terms of potentials ==
The E and B fields can be replaced by the magnetic vector potential A and (scalar) electrostatic potential ϕ by
{\displaystyle {\begin{aligned}\mathbf {E} &=-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial t}}\\[1ex]\mathbf {B} &=\nabla \times \mathbf {A} \end{aligned}}}
where ∇ is the gradient, ∇⋅ is the divergence, and ∇× is the curl.
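These relations are mechanical to verify symbolically. A sketch using SymPy with an arbitrary, illustrative choice of ϕ and A (not any particular physical configuration); the derived fields then satisfy the Maxwell–Faraday equation ∇×E = −∂B/∂t automatically.

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
phi = x * y * sp.exp(-t)                 # illustrative scalar potential
A = sp.Matrix([y * t, -x * t, 0])        # illustrative vector potential

grad = lambda f: sp.Matrix([sp.diff(f, s) for s in (x, y, z)])
curl = lambda F: sp.Matrix([
    sp.diff(F[2], y) - sp.diff(F[1], z),
    sp.diff(F[0], z) - sp.diff(F[2], x),
    sp.diff(F[1], x) - sp.diff(F[0], y),
])

E = -grad(phi) - A.diff(t)               # E = -grad(phi) - dA/dt
B = curl(A)                              # B = curl(A); here (0, 0, -2t)
faraday_residual = sp.simplify(curl(E) + B.diff(t))   # should vanish identically
```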
The force becomes
{\displaystyle \mathbf {F} =q\left[-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial t}}+\mathbf {v} \times (\nabla \times \mathbf {A} )\right].}
Using an identity for the triple product this can be rewritten as
{\displaystyle \mathbf {F} =q\left[-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial t}}+\nabla \left(\mathbf {v} \cdot \mathbf {A} \right)-\left(\mathbf {v} \cdot \nabla \right)\mathbf {A} \right].}
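The triple-product step, with v held constant under ∇, can be checked symbolically for a completely generic field A. A SymPy sketch (the component names are arbitrary placeholders):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
vx, vy, vz = sp.symbols('v_x v_y v_z')       # velocity components: constants under del
fAx, fAy, fAz = sp.symbols('A_x A_y A_z', cls=sp.Function)
A = sp.Matrix([fAx(x, y, z), fAy(x, y, z), fAz(x, y, z)])  # generic smooth field
v = sp.Matrix([vx, vy, vz])

curlA = sp.Matrix([
    sp.diff(A[2], y) - sp.diff(A[1], z),
    sp.diff(A[0], z) - sp.diff(A[2], x),
    sp.diff(A[1], x) - sp.diff(A[0], y),
])
lhs = v.cross(curlA)                                            # v x (curl A)
grad_vA = sp.Matrix([sp.diff(v.dot(A), s) for s in (x, y, z)])  # grad(v . A)
v_grad_A = sp.Matrix([sum(v[j] * sp.diff(A[i], s)
                          for j, s in enumerate((x, y, z)))
                      for i in range(3)])                       # (v . grad) A
identity_residual = sp.simplify(lhs - (grad_vA - v_grad_A))     # should be zero
```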
(Notice that the coordinates and the velocity components should be treated as independent variables, so the del operator acts only on A, not on v; thus, there is no need to use Feynman's subscript notation in the equation above.) Using the chain rule, the convective derivative of A is:
{\displaystyle {\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}}={\frac {\partial \mathbf {A} }{\partial t}}+(\mathbf {v} \cdot \nabla )\mathbf {A} }
so that the above expression becomes:
{\displaystyle \mathbf {F} =q\left[-\nabla (\phi -\mathbf {v} \cdot \mathbf {A} )-{\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}}\right].}
With v = ẋ and
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left[{\frac {\partial }{\partial {\dot {\mathbf {x} }}}}\left(\phi -{\dot {\mathbf {x} }}\cdot \mathbf {A} \right)\right]=-{\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}},}
we can put the equation into the convenient Euler–Lagrange form
{\displaystyle \mathbf {F} =q\left[-\nabla _{\mathbf {x} }\left(\phi -{\dot {\mathbf {x} }}\cdot \mathbf {A} \right)+{\frac {\mathrm {d} }{\mathrm {d} t}}\nabla _{\dot {\mathbf {x} }}\left(\phi -{\dot {\mathbf {x} }}\cdot \mathbf {A} \right)\right]}
where
{\displaystyle \nabla _{\mathbf {x} }={\hat {x}}{\dfrac {\partial }{\partial x}}+{\hat {y}}{\dfrac {\partial }{\partial y}}+{\hat {z}}{\dfrac {\partial }{\partial z}}}
and
{\displaystyle \nabla _{\dot {\mathbf {x} }}={\hat {x}}{\dfrac {\partial }{\partial {\dot {x}}}}+{\hat {y}}{\dfrac {\partial }{\partial {\dot {y}}}}+{\hat {z}}{\dfrac {\partial }{\partial {\dot {z}}}}.}
== Lorentz force and analytical mechanics ==
The Lagrangian for a charged particle of mass m and charge q in an electromagnetic field equivalently describes the dynamics of the particle in terms of its energy, rather than the force exerted on it. The classical expression is given by:
{\displaystyle L={\frac {m}{2}}\mathbf {\dot {r}} \cdot \mathbf {\dot {r}} +q\mathbf {A} \cdot \mathbf {\dot {r}} -q\phi }
where A and ϕ are the potential fields as above. The quantity
{\displaystyle V=q(\phi -\mathbf {A} \cdot \mathbf {\dot {r}} )}
can be identified as a generalized, velocity-dependent potential energy and, accordingly, F as a non-conservative force. Using the Lagrangian, the equation for the Lorentz force given above can be obtained again.
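This derivation can be carried out symbolically. A sketch that recovers F = q(E + v×B) from the Lagrangian above via the Euler–Lagrange equations, for the illustrative choice of a uniform B = B0 ẑ (A = ½ B × r) and a uniform E = E0 x̂ (ϕ = −E0 x):

```python
import sympy as sp

t = sp.symbols('t')
m, q, B0, E0 = sp.symbols('m q B_0 E_0', real=True)
x, y, z = [sp.Function(s)(t) for s in 'xyz']
r = sp.Matrix([x, y, z])
v = r.diff(t)

A = sp.Matrix([-B0*y/2, B0*x/2, 0])     # gives curl A = B0 z-hat
phi = -E0*x                             # gives -grad(phi) = E0 x-hat
L = m/2 * v.dot(v) + q * A.dot(v) - q * phi

E = sp.Matrix([E0, 0, 0])
B = sp.Matrix([0, 0, B0])
lorentz = q * (E + v.cross(B))

# Euler-Lagrange: d/dt(dL/dv_i) - dL/dr_i = 0 should reproduce m*a = F
eoms = [sp.diff(sp.diff(L, v[i]), t) - sp.diff(L, r[i]) for i in range(3)]
residual = [sp.simplify(eoms[i] - (m*r[i].diff(t, 2) - lorentz[i])) for i in range(3)]
```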
The relativistic Lagrangian is
{\displaystyle L=-mc^{2}{\sqrt {1-\left({\frac {\dot {\mathbf {r} }}{c}}\right)^{2}}}+q\mathbf {A} (\mathbf {r} )\cdot {\dot {\mathbf {r} }}-q\phi (\mathbf {r} )}
The action is the relativistic arclength of the path of the particle in spacetime, minus the potential energy contribution, plus an extra contribution which quantum mechanically is an extra phase a charged particle gets when it is moving along a vector potential.
== Relativistic form of the Lorentz force ==
=== Covariant form of the Lorentz force ===
==== Field tensor ====
Using the metric signature (1, −1, −1, −1), the Lorentz force for a charge q can be written in covariant form:
{\displaystyle {\frac {\mathrm {d} p^{\alpha }}{\mathrm {d} \tau }}=qF^{\alpha \beta }U_{\beta },}
where pα is the four-momentum, defined as
{\displaystyle p^{\alpha }=\left(p_{0},p_{1},p_{2},p_{3}\right)=\left(\gamma mc,p_{x},p_{y},p_{z}\right),}
τ the proper time of the particle, Fαβ the contravariant electromagnetic tensor
{\displaystyle F^{\alpha \beta }={\begin{pmatrix}0&-E_{x}/c&-E_{y}/c&-E_{z}/c\\E_{x}/c&0&-B_{z}&B_{y}\\E_{y}/c&B_{z}&0&-B_{x}\\E_{z}/c&-B_{y}&B_{x}&0\end{pmatrix}}}
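The tensor is simple to construct and sanity-check numerically. A sketch with illustrative field values, verifying antisymmetry and the placement of the E and B components:

```python
import numpy as np

c = 299_792_458.0
E = np.array([1.0, 2.0, 3.0])   # V/m (illustrative)
B = np.array([0.1, 0.2, 0.3])   # T   (illustrative)

def field_tensor(E, B, c):
    """Contravariant F^{alpha beta} in the (1, -1, -1, -1) convention used above."""
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return np.array([
        [0.0,  -Ex/c, -Ey/c, -Ez/c],
        [Ex/c,  0.0,  -Bz,    By ],
        [Ey/c,  Bz,    0.0,  -Bx ],
        [Ez/c, -By,    Bx,    0.0],
    ])

F = field_tensor(E, B, c)
```

Antisymmetry leaves six independent components, exactly matching the three components of E and the three of B.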
and U is the covariant 4-velocity of the particle, defined as:
{\displaystyle U_{\beta }=\left(U_{0},U_{1},U_{2},U_{3}\right)=\gamma \left(c,-v_{x},-v_{y},-v_{z}\right),}
in which
{\displaystyle \gamma (v)={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}={\frac {1}{\sqrt {1-{\frac {v_{x}^{2}+v_{y}^{2}+v_{z}^{2}}{c^{2}}}}}}}
is the Lorentz factor.
The fields are transformed to a frame moving with constant relative velocity by:
{\displaystyle F'^{\mu \nu }={\Lambda ^{\mu }}_{\alpha }{\Lambda ^{\nu }}_{\beta }F^{\alpha \beta }\,,}
where Λμα is the Lorentz transformation tensor.
==== Translation to vector notation ====
The α = 1 component (x-component) of the force is
{\displaystyle {\frac {\mathrm {d} p^{1}}{\mathrm {d} \tau }}=qU_{\beta }F^{1\beta }=q\left(U_{0}F^{10}+U_{1}F^{11}+U_{2}F^{12}+U_{3}F^{13}\right).}
Substituting the components of the covariant electromagnetic tensor F yields
{\displaystyle {\frac {\mathrm {d} p^{1}}{\mathrm {d} \tau }}=q\left[U_{0}\left({\frac {E_{x}}{c}}\right)+U_{2}(-B_{z})+U_{3}(B_{y})\right].}
Using the components of covariant four-velocity yields
{\displaystyle {\frac {\mathrm {d} p^{1}}{\mathrm {d} \tau }}=q\gamma \left[c\left({\frac {E_{x}}{c}}\right)+(-v_{y})(-B_{z})+(-v_{z})(B_{y})\right]=q\gamma \left(E_{x}+v_{y}B_{z}-v_{z}B_{y}\right)=q\gamma \left[E_{x}+\left(\mathbf {v} \times \mathbf {B} \right)_{x}\right]\,.}
The calculation for α = 2, 3 (force components in the y and z directions) yields similar results, so collecting the three equations into one:
{\displaystyle {\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} \tau }}=q\gamma \left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right),}
and since differentials in coordinate time dt and proper time dτ are related by the Lorentz factor,
{\displaystyle dt=\gamma (v)\,d\tau ,}
so we arrive at
{\displaystyle {\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}}=q\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right).}
This is precisely the Lorentz force law; however, it is important to note that p is the relativistic expression,
{\displaystyle \mathbf {p} =\gamma (v)m_{0}\mathbf {v} \,.}
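The reduction above can be spot-checked numerically: contract F^{αβ} with the covariant four-velocity and compare the spatial components with γq(E + v×B). All field and velocity values below are illustrative.

```python
import numpy as np

c = 299_792_458.0
q = 1.0                                       # C (illustrative)
E = np.array([3.0e4, -1.0e4, 2.0e4])          # V/m (illustrative)
B = np.array([0.01, 0.02, -0.03])             # T   (illustrative)
v = np.array([0.5 * c, 0.1 * c, -0.2 * c])    # relativistic speeds

gamma = 1.0 / np.sqrt(1.0 - v @ v / c**2)
U_cov = gamma * np.array([c, -v[0], -v[1], -v[2]])   # covariant U_beta

Ex, Ey, Ez = E
Bx, By, Bz = B
F = np.array([
    [0.0,  -Ex/c, -Ey/c, -Ez/c],
    [Ex/c,  0.0,  -Bz,    By ],
    [Ey/c,  Bz,    0.0,  -Bx ],
    [Ez/c, -By,    Bx,    0.0],
])

dp_dtau = q * F @ U_cov                       # dp^alpha/dtau = q F^{ab} U_b
spatial = gamma * q * (E + np.cross(v, B))    # gamma * q (E + v x B) = dp/dtau
power = gamma * q * (E @ v) / c               # time component: energy gain rate / c
```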
=== Lorentz force in spacetime algebra (STA) ===
The electric and magnetic fields are dependent on the velocity of an observer, so the relativistic form of the Lorentz force law can best be exhibited starting from a coordinate-independent expression for the electromagnetic field F and an arbitrary time-direction γ0. This can be settled through spacetime algebra (or the geometric algebra of spacetime), a type of Clifford algebra defined on a pseudo-Euclidean space, as
{\displaystyle \mathbf {E} =\left({\mathcal {F}}\cdot \gamma _{0}\right)\gamma _{0}}
and
{\displaystyle i\mathbf {B} =\left({\mathcal {F}}\wedge \gamma _{0}\right)\gamma _{0}}
Here F is a spacetime bivector (an oriented plane segment, just as a vector is an oriented line segment), which has six degrees of freedom corresponding to boosts (rotations in spacetime planes) and rotations (rotations in space-space planes). The dot product with the vector γ0 pulls a vector (in the space algebra) from the translational part, while the wedge product creates a trivector (in the space algebra) which is dual to a vector, namely the usual magnetic field vector. The relativistic velocity is given by the (time-like) changes in a time-position vector
{\displaystyle v={\dot {x}}}
, where
{\displaystyle v^{2}=1,}
(which shows our choice for the metric) and the velocity is
{\displaystyle \mathbf {v} =cv\wedge \gamma _{0}/(v\cdot \gamma _{0}).}
The proper form of the Lorentz force law ('invariant' is an inadequate term because no transformation has been defined) is simply
{\displaystyle F=q{\mathcal {F}}\cdot v}
Note that the order is important because between a bivector and a vector the dot product is anti-symmetric. Upon a spacetime split, one can obtain the velocity and fields as above, yielding the usual expression.
=== Lorentz force in general relativity ===
In the general theory of relativity the equation of motion for a particle with mass m and charge e, moving in a space with metric tensor gab and electromagnetic field Fab, is given as
{\displaystyle m{\frac {du_{c}}{ds}}-m{\frac {1}{2}}g_{ab,c}u^{a}u^{b}=eF_{cb}u^{b},}
where
{\displaystyle u^{a}=dx^{a}/ds}
(dxa is taken along the trajectory),
{\displaystyle g_{ab,c}=\partial g_{ab}/\partial x^{c}}
and
{\displaystyle ds^{2}=g_{ab}dx^{a}dx^{b}}.
The equation can also be written as
{\displaystyle m{\frac {du_{c}}{ds}}-m\Gamma _{abc}u^{a}u^{b}=eF_{cb}u^{b},}
where Γabc is the Christoffel symbol (of the torsion-free metric connection in general relativity), or as
{\displaystyle m{\frac {Du_{c}}{ds}}=eF_{cb}u^{b},}
where D is the covariant differential in general relativity.
== Applications ==
The Lorentz force occurs in many devices, including:
Cyclotrons and other circular path particle accelerators
Mass spectrometers
Velocity filters
Magnetrons
Lorentz force velocimetry
In its manifestation as the Laplace force on an electric current in a conductor, this force occurs in many devices, including:
Electric motors
Railguns
Linear motors
Loudspeakers
Magnetoplasmadynamic thrusters
Electrical generators
Homopolar generators
Linear alternators
== See also ==
== Notes ==
=== Remarks ===
=== Citations ===
== References ==
Darrigol, Olivier (2000). Electrodynamics from Ampère to Einstein. Oxford ; New York: Clarendon Press. ISBN 0-19-850594-9.
Feynman, Richard Phillips; Leighton, Robert B.; Sands, Matthew L. (2006). The Feynman lectures on physics. Vol. 2. Pearson / Addison-Wesley. ISBN 0-8053-9047-2.
Griffiths, David J. (2023). Introduction to Electrodynamics. Cambridge University Press. doi:10.1017/9781009397735. ISBN 978-1-009-39773-5.
Jackson, John David (1998). Classical Electrodynamics. New York: John Wiley & Sons. ISBN 978-0-471-30932-1.
Purcell, Edward M.; Morin, David J. (2013). Electricity and Magnetism:. Cambridge University Press. doi:10.1017/cbo9781139012973. ISBN 978-1-139-01297-3.
Sadiku, Matthew N. O. (2018). Elements of electromagnetics (7th ed.). New York/Oxford: Oxford University Press. ISBN 978-0-19-069861-4.
Serway, Raymond A.; Jewett, John W. Jr. (2004). Physics for scientists and engineers, with modern physics. Belmont, California: Thomson Brooks/Cole. ISBN 0-534-40846-X.
Srednicki, Mark A. (2007). Quantum field theory. Cambridge, England; New York City: Cambridge University Press. ISBN 978-0-521-86449-7.
== External links ==
Lorentz force (demonstration)
Interactive Java applet on the magnetic deflection of a particle beam in a homogeneous magnetic field Archived 2011-08-13 at the Wayback Machine by Wolfgang Bauer
Proteins are large biomolecules and macromolecules that comprise one or more long chains of amino acid residues. Proteins perform a vast array of functions within organisms, including catalysing metabolic reactions, DNA replication, responding to stimuli, providing structure to cells and organisms, and transporting molecules from one location to another. Proteins differ from one another primarily in their sequence of amino acids, which is dictated by the nucleotide sequence of their genes, and which usually results in protein folding into a specific 3D structure that determines its activity.
A linear chain of amino acid residues is called a polypeptide. A protein contains at least one long polypeptide. Short polypeptides, containing fewer than 20–30 residues, are rarely considered to be proteins and are commonly called peptides. The individual amino acid residues are bonded together by peptide bonds between adjacent amino acid residues. The sequence of amino acid residues in a protein is defined by the sequence of a gene, which is encoded in the genetic code. In general, the genetic code specifies 20 standard amino acids; but in certain organisms the genetic code can include selenocysteine and—in certain archaea—pyrrolysine. Shortly after or even during synthesis, the residues in a protein are often chemically modified by post-translational modification, which alters the physical and chemical properties, folding, stability, activity, and ultimately, the function of the proteins. Some proteins have non-peptide groups attached, which can be called prosthetic groups or cofactors. Proteins can work together to achieve a particular function, and they often associate to form stable protein complexes.
Once formed, proteins only exist for a certain period and are then degraded and recycled by the cell's machinery through the process of protein turnover. A protein's lifespan is measured in terms of its half-life and covers a wide range. They can exist for minutes or years with an average lifespan of 1–2 days in mammalian cells. Abnormal or misfolded proteins are degraded more rapidly either due to being targeted for destruction or due to being unstable.
Like other biological macromolecules such as polysaccharides and nucleic acids, proteins are essential parts of organisms and participate in virtually every process within cells. Many proteins are enzymes that catalyse biochemical reactions and are vital to metabolism. Some proteins have structural or mechanical functions, such as actin and myosin in muscle, and the cytoskeleton's scaffolding proteins that maintain cell shape. Other proteins are important in cell signaling, immune responses, cell adhesion, and the cell cycle. In animals, proteins are needed in the diet to provide the essential amino acids that cannot be synthesized. Digestion breaks the proteins down for metabolic use.
== History and etymology ==
=== Discovery and early studies ===
Proteins have been studied and recognized since the 1700s by Antoine Fourcroy and others, who often collectively called them "albumins", or "albuminous materials" (Eiweisskörper, in German). Gluten, for example, was first separated from wheat in published research around 1747, and later determined to exist in many plants. In 1789, Antoine Fourcroy recognized three distinct varieties of animal proteins: albumin, fibrin, and gelatin. Vegetable (plant) proteins studied in the late 1700s and early 1800s included gluten, plant albumin, gliadin, and legumin.
Proteins were first described by the Dutch chemist Gerardus Johannes Mulder and named by the Swedish chemist Jöns Jacob Berzelius in 1838. Mulder carried out elemental analysis of common proteins and found that nearly all proteins had the same empirical formula, C400H620N100O120P1S1. He came to the erroneous conclusion that they might be composed of a single type of (very large) molecule. The term "protein" to describe these molecules was proposed by Mulder's associate Berzelius; protein is derived from the Greek word πρώτειος (proteios), meaning "primary", "in the lead", or "standing in front", + -in. Mulder went on to identify the products of protein degradation such as the amino acid leucine for which he found a (nearly correct) molecular weight of 131 Da.
Early nutritional scientists such as the German Carl von Voit believed that protein was the most important nutrient for maintaining the structure of the body, because it was generally believed that "flesh makes flesh." Around 1862, Karl Heinrich Ritthausen isolated the amino acid glutamic acid. Thomas Burr Osborne compiled a detailed review of the vegetable proteins at the Connecticut Agricultural Experiment Station. Osborne, alongside Lafayette Mendel, established several nutritionally essential amino acids in feeding experiments with laboratory rats. Diets lacking an essential amino acid stunted the rats' growth, consistent with Liebig's law of the minimum. The final essential amino acid to be discovered, threonine, was identified by William Cumming Rose.
The difficulty in purifying proteins impeded work by early protein biochemists. Proteins could be obtained in large quantities from blood, egg whites, and keratin, but individual proteins were unavailable. In the 1950s, the Armour Hot Dog Company purified 1 kg of bovine pancreatic ribonuclease A and made it freely available to scientists. This gesture helped ribonuclease A become a major target for biochemical study for the following decades.
=== Polypeptides ===
The understanding of proteins as polypeptides, or chains of amino acids, came through the work of Franz Hofmeister and Hermann Emil Fischer in 1902. The central role of proteins as enzymes in living organisms that catalyzed reactions was not fully appreciated until 1926, when James B. Sumner showed that the enzyme urease was in fact a protein.
Linus Pauling is credited with the successful prediction of regular protein secondary structures based on hydrogen bonding, an idea first put forth by William Astbury in 1933. Later work by Walter Kauzmann on denaturation, based partly on previous studies by Kaj Linderstrøm-Lang, contributed an understanding of protein folding and structure mediated by hydrophobic interactions.
The first protein to have its amino acid chain sequenced was insulin, by Frederick Sanger, in 1949. Sanger correctly determined the amino acid sequence of insulin, thus conclusively demonstrating that proteins consisted of linear polymers of amino acids rather than branched chains, colloids, or cyclols. He won the Nobel Prize for this achievement in 1958. Christian Anfinsen's studies of the oxidative folding process of ribonuclease A, for which he won the Nobel Prize in 1972, solidified the thermodynamic hypothesis of protein folding, according to which the folded form of a protein represents its free energy minimum.
=== Structure ===
With the development of X-ray crystallography, it became possible to determine protein structures as well as their sequences. The first protein structures to be solved were hemoglobin by Max Perutz and myoglobin by John Kendrew, in 1958. The use of computers and increasing computing power has supported the sequencing of complex proteins. In 1999, Roger Kornberg sequenced the highly complex structure of RNA polymerase using high intensity X-rays from synchrotrons.
Since then, cryo-electron microscopy (cryo-EM) of large macromolecular assemblies has been developed. Cryo-EM uses protein samples that are frozen rather than crystals, and beams of electrons rather than X-rays. It causes less damage to the sample, allowing scientists to obtain more information and analyze larger structures. Computational protein structure prediction of small protein structural domains has helped researchers to approach atomic-level resolution of protein structures.
As of April 2024, the Protein Data Bank contains 181,018 X-ray, 19,809 EM and 12,697 NMR protein structures.
== Classification ==
Proteins are primarily classified by sequence and structure, although other classifications are commonly used. Especially for enzymes, the EC number system provides a functional classification scheme. Similarly, gene ontology classifies both genes and proteins by their biological and biochemical function, and by their intracellular location.
Sequence similarity is used to classify proteins both in terms of evolutionary and functional similarity. This may use either whole proteins or protein domains, especially in multi-domain proteins. Protein domains allow protein classification by a combination of sequence, structure and function, and they can be combined in many ways. In an early study of 170,000 proteins, about two-thirds were assigned at least one domain, with larger proteins containing more domains (e.g. proteins larger than 600 amino acids having an average of more than 5 domains).
== Biochemistry ==
Most proteins consist of linear polymers built from series of up to 20 L-α-amino acids. All proteinogenic amino acids have a common structure where an α-carbon is bonded to an amino group, a carboxyl group, and a variable side chain. Only proline differs from this basic structure: its side chain is cyclic, bonding back to the amino group and limiting protein chain flexibility. The side chains of the standard amino acids have a variety of chemical structures and properties, and it is the combined effect of all the amino acids in a protein that determines its three-dimensional structure and chemical reactivity.
The amino acids in a polypeptide chain are linked by peptide bonds between the amino and carboxyl groups. An individual amino acid in a chain is called a residue, and the linked series of carbon, nitrogen, and oxygen atoms is known as the main chain or protein backbone.: 19 The peptide bond has two resonance forms that confer some double-bond character to the backbone. The alpha carbons are roughly coplanar with the nitrogen and the carbonyl (C=O) group. The other two dihedral angles in the peptide bond determine the local shape assumed by the protein backbone. One consequence of the N-C(O) double bond character is that proteins are somewhat rigid.: 31 A polypeptide chain ends with a free amino group, known as the N-terminus or amino terminus, and a free carboxyl group, known as the C-terminus or carboxy terminus. By convention, peptide sequences are written N-terminus to C-terminus, correlating with the order in which proteins are synthesized by ribosomes.
The words protein, polypeptide, and peptide are somewhat ambiguous and can overlap in meaning. Protein is generally used to refer to the complete biological molecule in a stable conformation, whereas peptide is generally reserved for short amino acid oligomers that often lack a stable 3D structure. The boundary between the two is not well defined and usually lies near 20–30 residues.
Proteins can interact with many types of molecules and ions, including with other proteins, with lipids, with carbohydrates, and with DNA.
=== Abundance in cells ===
A typical bacterial cell, e.g. E. coli or Staphylococcus aureus, is estimated to contain about 2 million protein molecules. Smaller bacteria, such as Mycoplasma or spirochetes, contain fewer molecules, on the order of 50,000 to 1 million. By contrast, eukaryotic cells are larger and thus contain much more protein. For instance, yeast cells have been estimated to contain about 50 million protein molecules and human cells on the order of 1 to 3 billion. The number of copies of an individual protein ranges from a few molecules per cell up to 20 million. Not all protein-coding genes are expressed in most cells; the number expressed depends on, for example, cell type and external stimuli. For instance, of the 20,000 or so proteins encoded by the human genome, only 6,000 are detected in lymphoblastoid cells. The most abundant protein in nature is thought to be RuBisCO, an enzyme that catalyzes the incorporation of carbon dioxide into organic matter in photosynthesis; it can make up as much as 1% of a plant's weight.
== Synthesis ==
=== Biosynthesis ===
Proteins are assembled from amino acids using information encoded in genes. Each protein has its own unique amino acid sequence that is specified by the nucleotide sequence of the gene encoding it. The genetic code comprises three-nucleotide units called codons, each of which designates an amino acid; for example, AUG (adenine–uracil–guanine) is the code for methionine. Because DNA contains four nucleotides, the total number of possible codons is 64; hence, there is some redundancy in the genetic code, with some amino acids specified by more than one codon.: 1002–42 Genes encoded in DNA are first transcribed into pre-messenger RNA (pre-mRNA) by proteins such as RNA polymerase. Most organisms then process the pre-mRNA (a primary transcript) using various forms of post-transcriptional modification to form the mature mRNA, which is then used as a template for protein synthesis by the ribosome. In prokaryotes the mRNA may either be used as soon as it is produced, or be bound by a ribosome after having moved away from the nucleoid. In contrast, eukaryotes make mRNA in the cell nucleus and then translocate it across the nuclear membrane into the cytoplasm, where protein synthesis then takes place. The rate of protein synthesis is higher in prokaryotes than eukaryotes and can reach up to 20 amino acids per second.
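The combinatorics of the code, and the codon-by-codon reading it implies, can be sketched as follows. The codon table here is a deliberately tiny, illustrative subset of the real 64-entry table, and the `translate` helper is a hypothetical name introduced for this sketch.

```python
from itertools import product

# All possible codons: 4 nucleotides taken 3 at a time -> 4**3 = 64 combinations.
codons = ["".join(p) for p in product("ACGU", repeat=3)]
assert len(codons) == 64

# A tiny, incomplete codon table for illustration only; the real genetic
# code maps all 64 codons onto 20 amino acids plus stop signals.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def translate(mrna: str) -> list:
    """Read an mRNA string three nucleotides at a time (5' to 3')."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")
        if residue == "STOP":          # stop codon terminates translation
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```

The redundancy mentioned above appears as soon as the full table is written out: 64 codons map onto only 20 amino acids plus stop signals, so most amino acids have several synonymous codons.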
The process of synthesizing a protein from an mRNA template is known as translation. The mRNA is loaded onto the ribosome and is read three nucleotides at a time by matching each codon to its base pairing anticodon located on a transfer RNA molecule, which carries the amino acid corresponding to the codon it recognizes. The enzyme aminoacyl tRNA synthetase "charges" the tRNA molecules with the correct amino acids. The growing polypeptide is often termed the nascent chain. Proteins are always biosynthesized from N-terminus to C-terminus.: 1002–42
The size of a synthesized protein can be measured by the number of amino acids it contains and by its total molecular mass, which is normally reported in units of daltons (synonymous with atomic mass units), or the derivative unit kilodalton (kDa). The average size of a protein increases from Archaea to Bacteria to Eukaryote (283, 311, 438 residues and 31, 34, 49 kDa respectively) due to the larger number of protein domains constituting proteins in higher organisms. For instance, yeast proteins are on average 466 amino acids long and 53 kDa in mass. The largest known proteins are the titins, a component of the muscle sarcomere, with a molecular mass of almost 3,000 kDa and a total length of almost 27,000 amino acids.
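These average sizes are consistent with a common rule of thumb in which each amino acid residue contributes roughly 110 Da to the total mass. The sketch below uses that assumed average (not a value from the text), so the results are only approximate.

```python
# Rule-of-thumb average residue mass; an assumption for estimation,
# exact values depend on amino acid composition.
AVG_RESIDUE_DA = 110

def approx_mass_kda(n_residues: int) -> float:
    """Estimate a protein's mass in kDa from its residue count."""
    return n_residues * AVG_RESIDUE_DA / 1000

for name, n in [("Archaea", 283), ("Bacteria", 311), ("Eukaryote", 438)]:
    print(f"{name}: {n} residues -> ~{approx_mass_kda(n):.0f} kDa")
```

The estimates (~31, ~34, and ~48 kDa) land close to the quoted averages of 31, 34, and 49 kDa, which is about as well as a single per-residue constant can do.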
=== Chemical synthesis ===
Short proteins can be synthesized chemically by a family of peptide synthesis methods. These rely on organic synthesis techniques such as chemical ligation to produce peptides in high yield. Chemical synthesis allows for the introduction of non-natural amino acids into polypeptide chains, such as attachment of fluorescent probes to amino acid side chains. These methods are useful in laboratory biochemistry and cell biology, though generally not for commercial applications. Chemical synthesis is inefficient for polypeptides longer than about 300 amino acids, and the synthesized proteins may not readily assume their native tertiary structure. Most chemical synthesis methods proceed from C-terminus to N-terminus, opposite the biological reaction.
== Structure ==
Most proteins fold into unique 3D structures. The shape into which a protein naturally folds is known as its native conformation.: 36 Although many proteins can fold unassisted, simply through the chemical properties of their amino acids, others require the aid of molecular chaperones to fold into their native states.: 37 Biochemists often refer to four distinct aspects of a protein's structure:: 30–34
Primary structure: the amino acid sequence. A protein is a polyamide.
Secondary structure: regularly repeating local structures stabilized by hydrogen bonds. The most common examples are the α-helix, β-sheet and turns. Because secondary structures are local, many regions of distinct secondary structure can be present in the same protein molecule.
Tertiary structure: the overall shape of a single protein molecule; the spatial relationship of the secondary structures to one another. Tertiary structure is generally stabilized by nonlocal interactions, most commonly the formation of a hydrophobic core, but also through salt bridges, hydrogen bonds, disulfide bonds, and even post-translational modifications. The term "tertiary structure" is often used as synonymous with the term fold. The tertiary structure is what controls the basic function of the protein.
Quaternary structure: the structure formed by several protein molecules (polypeptide chains), usually called protein subunits in this context, which function as a single protein complex.
Quinary structure: the signatures of protein surface that organize the crowded cellular interior. Quinary structure is dependent on transient, yet essential, macromolecular interactions that occur inside living cells.
Proteins are not entirely rigid molecules. In addition to these levels of structure, proteins may shift between several related structures while they perform their functions. In the context of these functional rearrangements, these tertiary or quaternary structures are usually referred to as "conformations", and transitions between them are called conformational changes. Such changes are often induced by the binding of a substrate molecule to an enzyme's active site, or the physical region of the protein that participates in chemical catalysis. In solution, protein structures vary because of thermal vibration and collisions with other molecules.: 368–75
Proteins can be informally divided into three main classes, which correlate with typical tertiary structures: globular proteins, fibrous proteins, and membrane proteins. Almost all globular proteins are soluble and many are enzymes. Fibrous proteins are often structural, such as collagen, the major component of connective tissue, or keratin, the protein component of hair and nails. Membrane proteins often serve as receptors or provide channels for polar or charged molecules to pass through the cell membrane.: 165–85
Intramolecular hydrogen bonds in proteins that are poorly shielded from water attack, and hence promote their own dehydration, are called dehydrons.
=== Protein domains ===
Many proteins are composed of several protein domains, i.e. segments of a protein that fold into distinct structural units.: 134 Domains usually have specific functions, such as enzymatic activities (e.g. kinase) or they serve as binding modules.: 155–156
=== Sequence motif ===
Short amino acid sequences within proteins often act as recognition sites for other proteins. For instance, SH3 domains typically bind to short PxxP motifs (i.e. two prolines [P] separated by two unspecified amino acids [x], although the surrounding amino acids may determine the exact binding specificity). Many such motifs have been collected in the Eukaryotic Linear Motif (ELM) database.
== Cellular functions ==
Proteins are the chief actors within the cell, said to be carrying out the duties specified by the information encoded in genes. With the exception of certain types of RNA, most other biological molecules are relatively inert elements upon which proteins act. Proteins make up half the dry weight of an Escherichia coli cell, whereas other macromolecules such as DNA and RNA make up only 3% and 20%, respectively. The set of proteins expressed in a particular cell or cell type is known as its proteome.: 120
The chief characteristic of proteins that allows their diverse set of functions is their ability to bind other molecules specifically and tightly. The region of the protein responsible for binding another molecule is known as the binding site and is often a depression or "pocket" on the molecular surface. This binding ability is mediated by the tertiary structure of the protein, which defines the binding site pocket, and by the chemical properties of the surrounding amino acids' side chains. Protein binding can be extraordinarily tight and specific; for example, the ribonuclease inhibitor protein binds to human angiogenin with a sub-femtomolar dissociation constant (<10⁻¹⁵ M) but does not bind at all to its amphibian homolog onconase (>1 M). Extremely minor chemical changes such as the addition of a single methyl group to a binding partner can sometimes suffice to nearly eliminate binding; for example, the aminoacyl tRNA synthetase specific to the amino acid valine discriminates against the very similar side chain of the amino acid isoleucine.
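For a sense of what a sub-femtomolar dissociation constant means energetically, it can be converted to a standard binding free energy via the textbook relation ΔG° = RT ln Kd. This relation and the room-temperature value below are added here for illustration; they are not taken from the text.

```python
import math

R = 8.314   # gas constant, J/(mol*K)
T = 298.0   # assumed room temperature, K

def binding_free_energy_kj(kd_molar: float) -> float:
    """Standard binding free energy (kJ/mol) from a dissociation constant (M)."""
    return R * T * math.log(kd_molar) / 1000

# Ribonuclease inhibitor binding angiogenin: Kd ~ 1e-15 M.
print(f"{binding_free_energy_kj(1e-15):.1f} kJ/mol")  # ~ -85.6 kJ/mol

# A Kd of 1 M corresponds to zero standard binding free energy.
print(f"{binding_free_energy_kj(1.0):.1f} kJ/mol")
```

The roughly -86 kJ/mol obtained for the femtomolar case illustrates how large the free-energy gap is between this interaction and the non-binding amphibian homolog.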
Proteins can bind to other proteins as well as to small-molecule substrates. When proteins bind specifically to other copies of the same molecule, they can oligomerize to form fibrils; this process occurs often in structural proteins that consist of globular monomers that self-associate to form rigid fibers. Protein–protein interactions regulate enzymatic activity, control progression through the cell cycle, and allow the assembly of large protein complexes that carry out many closely related reactions with a common biological function. Proteins can bind to, or be integrated into, cell membranes. The ability of binding partners to induce conformational changes in proteins allows the construction of enormously complex signaling networks.: 830–49
Because interactions between proteins are reversible and depend heavily on the availability of different groups of partner proteins to form aggregates capable of carrying out discrete sets of functions, studying the interactions between specific proteins is key to understanding important aspects of cellular function, and ultimately the properties that distinguish particular cell types.
=== Enzymes ===
The best-known role of proteins in the cell is as enzymes, which catalyse chemical reactions. Enzymes are usually highly specific and accelerate only one or a few chemical reactions. Enzymes carry out most of the reactions involved in metabolism, as well as manipulating DNA in processes such as DNA replication, DNA repair, and transcription. Some enzymes act on other proteins to add or remove chemical groups in a process known as posttranslational modification. About 4,000 reactions are known to be catalysed by enzymes. The rate acceleration conferred by enzymatic catalysis is often enormous—as much as a 10¹⁷-fold increase in rate over the uncatalysed reaction in the case of orotate decarboxylase (78 million years without the enzyme, 18 milliseconds with the enzyme).
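The two quoted timescales can be checked against the quoted rate acceleration with a quick back-of-the-envelope calculation; this sketch simply converts both times to seconds and takes their ratio.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# Half-times for orotate decarboxylation, from the figures quoted above.
uncatalysed_s = 78e6 * SECONDS_PER_YEAR   # 78 million years, in seconds
catalysed_s = 18e-3                       # 18 milliseconds, in seconds

speedup = uncatalysed_s / catalysed_s
print(f"rate acceleration ~ {speedup:.1e}")  # ~1.4e17, i.e. on the order of 10^17
```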
The molecules bound and acted upon by enzymes are called substrates. Although enzymes can consist of hundreds of amino acids, it is usually only a small fraction of the residues that come in contact with the substrate, and an even smaller fraction—three to four residues on average—that are directly involved in catalysis. The region of the enzyme that binds the substrate and contains the catalytic residues is known as the active site.: 389
Dirigent proteins are members of a class of proteins that dictate the stereochemistry of a compound synthesized by other enzymes.
=== Cell signaling and ligand binding ===
Many proteins are involved in the process of cell signaling and signal transduction. Some proteins, such as insulin, are extracellular proteins that transmit a signal from the cell in which they were synthesized to other cells in distant tissues. Others are membrane proteins that act as receptors whose main function is to bind a signaling molecule and induce a biochemical response in the cell. Many receptors have a binding site exposed on the cell surface and an effector domain within the cell, which may have enzymatic activity or may undergo a conformational change detected by other proteins within the cell.: 251–81
Antibodies are protein components of an adaptive immune system whose main function is to bind antigens, or foreign substances in the body, and target them for destruction. Antibodies can be secreted into the extracellular environment or anchored in the membranes of specialized B cells known as plasma cells. Whereas enzymes are limited in their binding affinity for their substrates by the necessity of conducting their reaction, antibodies have no such constraints. An antibody's binding affinity to its target is extraordinarily high.: 275–50
Many ligand transport proteins bind particular small biomolecules and transport them to other locations in the body of a multicellular organism. These proteins must have a high binding affinity when their ligand is present in high concentrations, and release the ligand when it is present at low concentrations in the target tissues. The canonical example of a ligand-binding protein is haemoglobin, which transports oxygen from the lungs to other organs and tissues in all vertebrates and has close homologs in every biological kingdom.: 222–29 Lectins are sugar-binding proteins which are highly specific for their sugar moieties. Lectins typically play a role in biological recognition phenomena involving cells and proteins. Receptors and hormones are highly specific binding proteins.
Transmembrane proteins can serve as ligand transport proteins that alter the permeability of the cell membrane to small molecules and ions. The membrane alone has a hydrophobic core through which polar or charged molecules cannot diffuse. Membrane proteins contain internal channels that allow such molecules to enter and exit the cell. Many ion channel proteins are specialized to select for only a particular ion; for example, potassium and sodium channels often discriminate for only one of the two ions.: 232–34
=== Structural proteins ===
Structural proteins confer stiffness and rigidity to otherwise-fluid biological components. Most structural proteins are fibrous proteins; for example, collagen and elastin are critical components of connective tissue such as cartilage, and keratin is found in hard or filamentous structures such as hair, nails, feathers, hooves, and some animal shells.: 178–81 Some globular proteins can play structural functions, for example, actin and tubulin are globular and soluble as monomers, but polymerize to form long, stiff fibers that make up the cytoskeleton, which allows the cell to maintain its shape and size.: 490
Other proteins that serve structural functions are motor proteins such as myosin, kinesin, and dynein, which are capable of generating mechanical forces. These proteins are crucial for the cellular motility of single-celled organisms and the sperm of many multicellular organisms which reproduce sexually. They generate the forces exerted by contracting muscles: 258–64, 272 and play essential roles in intracellular transport.: 481, 490
== Methods of study ==
Methods commonly used to study protein structure and function include immunohistochemistry, site-directed mutagenesis, X-ray crystallography, nuclear magnetic resonance and mass spectrometry. The activities and structures of proteins may be examined in vitro, in vivo, and in silico. In vitro studies of purified proteins in controlled environments are useful for learning how a protein carries out its function: for example, enzyme kinetics studies explore the chemical mechanism of an enzyme's catalytic activity and its relative affinity for various possible substrate molecules. By contrast, in vivo experiments can provide information about the physiological role of a protein in the context of a cell or even a whole organism, and can often provide more information about protein behavior in different contexts. In silico studies use computational methods to study proteins.
=== Protein purification ===
Proteins may be purified from other cellular components using a variety of techniques such as ultracentrifugation, precipitation, electrophoresis, and chromatography;: 21–24 the advent of genetic engineering has made possible a number of methods to facilitate purification.
To perform in vitro analysis, a protein must be purified away from other cellular components. This process usually begins with cell lysis, in which a cell's membrane is disrupted and its internal contents released into a solution known as a crude lysate. The resulting mixture can be purified using ultracentrifugation, which separates the cellular components into fractions containing soluble proteins; membrane lipids and proteins; cellular organelles; and nucleic acids. Precipitation by a method known as salting out can concentrate the proteins from this lysate. Various types of chromatography are then used to isolate the protein or proteins of interest based on properties such as molecular weight, net charge and binding affinity.: 21–24 The level of purification can be monitored using various types of gel electrophoresis if the desired protein's molecular weight and isoelectric point are known, by spectroscopy if the protein has distinguishable spectroscopic features, or by enzyme assays if the protein has enzymatic activity. Additionally, proteins can be isolated according to their charge using electrofocusing.
For natural proteins, a series of purification steps may be necessary to obtain protein sufficiently pure for laboratory applications. To simplify this process, genetic engineering is often used to add chemical features to proteins that make them easier to purify without affecting their structure or activity. Here, a "tag" consisting of a specific amino acid sequence, often a series of histidine residues (a "His-tag"), is attached to one terminus of the protein. As a result, when the lysate is passed over a chromatography column containing nickel, the histidine residues ligate the nickel and attach to the column while the untagged components of the lysate pass unimpeded. A number of tags have been developed to help researchers purify specific proteins from complex mixtures.
=== Cellular localization ===
The study of proteins in vivo is often concerned with the synthesis and localization of the protein within the cell. Although many intracellular proteins are synthesized in the cytoplasm and membrane-bound or secreted proteins in the endoplasmic reticulum, the specifics of how proteins are targeted to specific organelles or cellular structures is often unclear. A useful technique for assessing cellular localization uses genetic engineering to express in a cell a fusion protein or chimera consisting of the natural protein of interest linked to a "reporter" such as green fluorescent protein (GFP). The fused protein's position within the cell can then be cleanly and efficiently visualized using microscopy.
Other methods for elucidating the cellular location of proteins require the use of known compartmental markers for regions such as the ER, the Golgi, lysosomes or vacuoles, mitochondria, chloroplasts, plasma membrane, etc. With the use of fluorescently tagged versions of these markers or of antibodies to known markers, it becomes much simpler to identify the localization of a protein of interest. For example, indirect immunofluorescence will allow for fluorescence colocalization and demonstration of location. Fluorescent dyes are used to label cellular compartments for a similar purpose.
Other possibilities exist, as well. For example, immunohistochemistry usually uses an antibody to one or more proteins of interest that are conjugated to enzymes yielding either luminescent or chromogenic signals that can be compared between samples, allowing for localization information. Another applicable technique is cofractionation in sucrose (or other material) gradients using isopycnic centrifugation. While this technique does not prove colocalization of a compartment of known density and the protein of interest, it indicates an increased likelihood.
Finally, the gold-standard method of cellular localization is immunoelectron microscopy. This technique uses an antibody to the protein of interest, along with classical electron microscopy techniques. The sample is prepared for normal electron microscopic examination, and then treated with an antibody to the protein of interest that is conjugated to an extremely electro-dense material, usually gold. This allows for the localization of both ultrastructural details as well as the protein of interest.
Through another genetic engineering application known as site-directed mutagenesis, researchers can alter the protein sequence and hence its structure, cellular localization, and susceptibility to regulation. This technique even allows the incorporation of unnatural amino acids into proteins, using modified tRNAs, and may allow the rational design of new proteins with novel properties.
=== Proteomics ===
The total complement of proteins present at a time in a cell or cell type is known as its proteome, and the study of such large-scale data sets defines the field of proteomics, named by analogy to the related field of genomics. Key experimental techniques in proteomics include 2D electrophoresis, which allows the separation of many proteins, mass spectrometry, which allows rapid high-throughput identification of proteins and sequencing of peptides (most often after in-gel digestion), protein microarrays, which allow the detection of the relative levels of the various proteins present in a cell, and two-hybrid screening, which allows the systematic exploration of protein–protein interactions. The total complement of biologically possible such interactions is known as the interactome. A systematic attempt to determine the structures of proteins representing every possible fold is known as structural genomics.
=== Structure determination ===
Discovering the tertiary structure of a protein, or the quaternary structure of its complexes, can provide important clues about how the protein performs its function and how it can be affected, e.g. in drug design. As proteins are too small to be seen under a light microscope, other methods have to be employed to determine their structure. Common experimental methods include X-ray crystallography and NMR spectroscopy, both of which can produce structural information at atomic resolution. However, NMR experiments are able to provide information from which a subset of distances between pairs of atoms can be estimated, and the final possible conformations for a protein are determined by solving a distance geometry problem. Dual polarisation interferometry is a quantitative analytical method for measuring the overall protein conformation and conformational changes due to interactions or other stimuli. Circular dichroism is another laboratory technique for determining internal β-sheet / α-helical composition of proteins. Cryoelectron microscopy is used to produce lower-resolution structural information about very large protein complexes, including assembled viruses;: 340–41 a variant known as electron crystallography can produce high-resolution information in some cases, especially for two-dimensional crystals of membrane proteins. Solved structures are usually deposited in the Protein Data Bank (PDB), a freely available resource from which structural data about thousands of proteins can be obtained in the form of Cartesian coordinates for each atom in the protein.
Many more gene sequences are known than protein structures. Further, the set of solved structures is biased toward proteins that can be easily subjected to the conditions required in X-ray crystallography, one of the major structure determination methods. In particular, globular proteins are comparatively easy to crystallize in preparation for X-ray crystallography. Membrane proteins and large protein complexes, by contrast, are difficult to crystallize and are underrepresented in the PDB. Structural genomics initiatives have attempted to remedy these deficiencies by systematically solving representative structures of major fold classes. Protein structure prediction methods attempt to provide a means of generating a plausible structure for proteins whose structures have not been experimentally determined.
=== Structure prediction ===
Complementary to the field of structural genomics, protein structure prediction develops efficient mathematical models of proteins to predict their molecular conformations computationally, rather than determining structures by laboratory observation. The most successful type of structure prediction, known as homology modeling, relies on the existence of a "template" structure with sequence similarity to the protein being modeled; structural genomics' goal is to provide sufficient representation in solved structures to model most of those that remain. Although producing accurate models remains a challenge when only distantly related template structures are available, it has been suggested that sequence alignment is the bottleneck in this process, as quite accurate models can be produced if a "perfect" sequence alignment is known. Many structure prediction methods have served to inform the emerging field of protein engineering, in which novel protein folds have already been designed. Many proteins (in eukaryotes ~33%) contain large unstructured but biologically functional segments and can be classified as intrinsically disordered proteins. Predicting and analysing protein disorder is an important part of protein structure characterisation.
=== In silico simulation of dynamical processes ===
A more complex computational problem is the prediction of intermolecular interactions, such as in molecular docking, protein folding, protein–protein interaction and chemical reactivity. Mathematical models to simulate these dynamical processes involve molecular mechanics, in particular, molecular dynamics. In this regard, in silico simulations discovered the folding of small α-helical protein domains such as the villin headpiece and the HIV accessory protein. Hybrid methods combining standard molecular dynamics with quantum mechanical mathematics have explored the electronic states of rhodopsins.
Beyond classical molecular dynamics, quantum dynamics methods allow the simulation of proteins in atomistic detail with an accurate description of quantum mechanical effects. Examples include the multi-layer multi-configuration time-dependent Hartree method and the hierarchical equations of motion approach, which have been applied to plant cryptochromes and bacteria light-harvesting complexes, respectively. Both quantum and classical mechanical simulations of biological-scale systems are extremely computationally demanding, so distributed computing initiatives such as the Folding@home project facilitate the molecular modeling by exploiting advances in GPU parallel processing and Monte Carlo techniques.
=== Chemical analysis ===
The total nitrogen content of organic matter is mainly formed by the amino groups in proteins. The Total Kjeldahl Nitrogen (TKN) is a measure of nitrogen widely used in the analysis of (waste) water, soil, food, feed and organic matter in general. As the name suggests, the Kjeldahl method is applied. More sensitive methods are available.
== Digestion ==
In the absence of catalysts, proteins are slow to hydrolyze. The breakdown of proteins to small peptides and amino acids (proteolysis) is a step in digestion; these breakdown products are then absorbed in the small intestine. The hydrolysis of proteins relies on enzymes called proteases or peptidases. Proteases, which are themselves proteins, come in several types according to the particular peptide bonds that they cleave as well as their tendency to cleave peptide bonds at the terminus of a protein (exopeptidases) versus peptide bonds in the interior of the protein (endopeptidases). Pepsin is an endopeptidase in the stomach. Subsequent to the stomach, the pancreas secretes other proteases to complete the hydrolysis; these include trypsin and chymotrypsin.
Protein hydrolysis is employed commercially as a means of producing amino acids from bulk sources of protein, such as blood meal, feathers, and keratin. Such materials are treated with hot hydrochloric acid, which effects the hydrolysis of the peptide bonds.
== Mechanical properties ==
The mechanical properties of proteins are highly diverse and are often central to their biological function, as in the case of proteins like keratin and collagen. For instance, the ability of muscle tissue to continually expand and contract is directly tied to the elastic properties of their underlying protein makeup. Beyond fibrous proteins, the conformational dynamics of enzymes and the structure of biological membranes, among other biological functions, are governed by the mechanical properties of the proteins. Outside of their biological context, the unique mechanical properties of many proteins, along with their relative sustainability when compared to synthetic polymers, have made them desirable targets for next-generation materials design.
Young's modulus, E, is calculated as the axial stress σ over the resulting strain ε. It is a measure of the relative stiffness of a material. In the context of proteins, this stiffness often directly correlates to biological function. For example, collagen, found in connective tissue, bones, and cartilage, and keratin, found in nails, claws, and hair, have observed stiffnesses that are several orders of magnitude higher than that of elastin, which is thought to give elasticity to structures such as blood vessels, pulmonary tissue, and bladder tissue, among others. In comparison to this, globular proteins, such as Bovine Serum Albumin, which float relatively freely in the cytosol and often function as enzymes (and thus undergo frequent conformational changes), have comparably much lower Young's moduli.
The Young's modulus of a single protein can be found through molecular dynamics simulation. Using either atomistic force-fields, such as CHARMM or GROMOS, or coarse-grained forcefields like Martini, a single protein molecule can be stretched by a uniaxial force while the resulting extension is recorded in order to calculate the strain. Experimentally, methods such as atomic force microscopy can be used to obtain similar data. The internal dynamics of proteins involve subtle elastic and plastic deformations induced by viscoelastic forces, which can be probed by nano-rheology techniques. These estimates yield typical spring constants around k ≈ 100 pN/nm, equivalent to Young's moduli of E ≈ 100 MPa, and typical friction coefficients of γ ≈ 0.1 pN·s/nm, corresponding to a viscosity of η ≈ 0.01 pN·s/nm² = 10⁷ cP (that is, 10⁷ times more viscous than water).
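The quoted figures are mutually consistent: idealizing a protein as a uniform elastic rod of roughly nanometre dimensions (an illustrative assumption, not from the source), the spring constant k ≈ 100 pN/nm converts to E ≈ 100 MPa via E = kL/A:

```python
# Rough conversion between a measured spring constant and a Young's
# modulus, idealizing the protein as a uniform elastic rod (E = k*L/A).
# The ~1 nm length and ~1 nm^2 cross-section are illustrative assumptions.

k = 100e-12 / 1e-9   # spring constant: 100 pN/nm expressed in N/m
L = 1e-9             # rod length: 1 nm
A = 1e-18            # cross-sectional area: 1 nm^2

E = k * L / A        # Young's modulus in Pa
print(E / 1e6, "MPa")  # ~100 MPa, matching the quoted estimate
```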
At the macroscopic level, the Young's modulus of cross-linked protein networks can be obtained through more traditional mechanical testing. Experimentally observed values for a few proteins can be seen below.
== See also ==
== References ==
== Further reading ==
Textbooks
History
Tanford C, Reynolds JA (2001). Nature's Robots: A History of Proteins. Oxford New York: Oxford University Press, USA. ISBN 978-0-19-850466-5.
== External links ==
=== Databases and projects ===
NCBI Entrez Protein database
NCBI Protein Structure database
Human Protein Reference Database
Human Proteinpedia
Folding@Home (Stanford University) Archived 2012-09-08 at the Wayback Machine
Protein Databank in Europe (see also PDBeQuips, short articles and tutorials on interesting PDB structures)
Research Collaboratory for Structural Bioinformatics (see also Molecule of the Month Archived 2020-07-24 at the Wayback Machine, presenting short accounts on selected proteins from the PDB)
Proteopedia – Life in 3D: rotatable, zoomable 3D model with wiki annotations for every known protein molecular structure.
UniProt the Universal Protein Resource
=== Tutorials and educational websites ===
"An Introduction to Proteins" from HOPES (Huntington's Disease Outreach Project for Education at Stanford)
Proteins: Biogenesis to Degradation – The Virtual Library of Biochemistry and Cell Biology | Wikipedia/Proteins |
In convective heat transfer, the Churchill–Bernstein equation is used to estimate the surface averaged Nusselt number for a cylinder in cross flow at various velocities. The need for the equation arises from the inability to solve the Navier–Stokes equations in the turbulent flow regime, even for a Newtonian fluid. When the concentration and temperature profiles are independent of one another, the mass-heat transfer analogy can be employed. In the mass-heat transfer analogy, heat transfer dimensionless quantities are replaced with analogous mass transfer dimensionless quantities.
This equation is named after Stuart W. Churchill and M. Bernstein, who introduced it in 1977. This equation is also called the Churchill–Bernstein correlation.
== Heat transfer definition ==
{\displaystyle {\overline {\mathrm {Nu} }}_{D}\ =0.3+{\frac {0.62\mathrm {Re} _{D}^{1/2}\Pr ^{1/3}}{\left[1+(0.4/\Pr )^{2/3}\,\right]^{1/4}\,}}{\bigg [}1+{\bigg (}{\frac {\mathrm {Re} _{D}}{282000}}{\bigg )}^{5/8}{\bigg ]}^{4/5}\quad \Pr \mathrm {Re} _{D}\geq 0.2}
where:
{\displaystyle {\overline {\mathrm {Nu} }}_{D}}
is the surface averaged Nusselt number with characteristic length of diameter;
{\displaystyle \mathrm {Re} _{D}\,\!}
is the Reynolds number with the cylinder diameter as its characteristic length;
{\displaystyle \Pr }
is the Prandtl number.
The Churchill–Bernstein equation is valid for a wide range of Reynolds numbers and Prandtl numbers, as long as the product of the two is greater than or equal to 0.2, as defined above. The Churchill–Bernstein equation can be used for any object of cylindrical geometry in which boundary layers develop freely, without constraints imposed by other surfaces. Properties of the external free stream fluid are to be evaluated at the film temperature in order to account for the variation of the fluid properties at different temperatures. One should not expect much more than 20% accuracy from the above equation due to the wide range of flow conditions that the equation encompasses. The Churchill–Bernstein equation is a correlation and cannot be derived from principles of fluid dynamics. The equation yields the surface averaged Nusselt number, which is used to determine the average convective heat transfer coefficient. Newton's law of cooling (in the form of heat loss per surface area being equal to heat transfer coefficient multiplied by temperature gradient) can then be invoked to determine the heat loss or gain from the object, fluid and/or surface temperatures, and the area of the object, depending on what information is known.
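The correlation above can be evaluated directly. The following sketch (function name and sample values are illustrative, not from the source) computes the surface averaged Nusselt number and enforces the Pr·Re_D ≥ 0.2 validity condition; per the text, no better than roughly 20% accuracy should be expected.

```python
# Surface-averaged Nusselt number from the Churchill–Bernstein
# correlation for a cylinder in cross flow. Valid for Re_D * Pr >= 0.2.

def churchill_bernstein(re_d: float, pr: float) -> float:
    if re_d * pr < 0.2:
        raise ValueError("correlation requires Re_D * Pr >= 0.2")
    return (0.3
            + (0.62 * re_d**0.5 * pr**(1 / 3))
            / (1 + (0.4 / pr)**(2 / 3))**0.25
            * (1 + (re_d / 282000)**(5 / 8))**0.8)

# Air (Pr ~ 0.7, an illustrative value) across a cylinder at Re_D = 10,000:
nu = churchill_bernstein(10_000, 0.7)
print(round(nu, 1))
```

The result can then be converted to an average heat transfer coefficient via h = Nu·k/D and used in Newton's law of cooling, as described above.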
== Mass transfer definition ==
{\displaystyle \mathrm {Sh} _{D}=0.3+{\frac {0.62\mathrm {Re} _{D}^{1/2}\mathrm {Sc} ^{1/3}}{\left[1+(0.4/\mathrm {Sc} )^{2/3}\,\right]^{1/4}\,}}{\bigg [}1+{\bigg (}{\frac {\mathrm {Re} _{D}}{282000}}{\bigg )}^{5/8}{\bigg ]}^{4/5}\quad \mathrm {Sc} \,\mathrm {Re} _{D}\geq 0.2}
where:
{\displaystyle \mathrm {Sh} _{D}}
is the Sherwood number related to the hydraulic diameter;
{\displaystyle \mathrm {Sc} }
is the Schmidt number.
Using the mass-heat transfer analogy, the Nusselt number is replaced by the Sherwood number, and the Prandtl number is replaced by the Schmidt number. The same restrictions described in the heat transfer definition are applied to the mass transfer definition. The Sherwood number can be used to find an overall mass transfer coefficient and applied to Fick's law of diffusion to find concentration profiles and mass transfer fluxes.
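As a sketch of that last step, a Sherwood number can be turned into an overall mass transfer coefficient and then a flux via the film-model form of Fick's law. All numerical values below are illustrative assumptions, not from the source.

```python
# Convert a Sherwood number into an overall mass transfer coefficient
# (k_m = Sh * D_AB / d) and then into a molar flux (N = k_m * dC).
# Every numerical value here is an illustrative assumption.

Sh = 50.0          # Sherwood number, e.g. from the correlation above
D_AB = 2.5e-5      # diffusivity of water vapor in air, m^2/s (approx.)
d = 0.01           # cylinder diameter, m

k_m = Sh * D_AB / d      # overall mass transfer coefficient, m/s
delta_c = 0.8            # surface-to-stream concentration difference, mol/m^3
flux = k_m * delta_c     # molar flux, mol/(m^2 s)
print(k_m, flux)
```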
== See also ==
Prandtl number
Reynolds number
== Notes ==
== References ==
Churchill, S. W.; Bernstein, M. (1977), "A Correlating Equation for Forced Convection From Gases and Liquids to a Circular Cylinder in Crossflow", Journal of Heat Transfer, 99 (2): 300–306, Bibcode:1977ATJHT..99..300C, doi:10.1115/1.3450685
Incropera, F.P.; DeWitt, D.P.; Bergman, T.L.; Lavine, A.S. (2006). Fundamentals of Heat and Mass Transfer, 6th Ed. Wiley. ISBN 978-0-471-45728-2.
Tammet, Hannes; Kulmala, Markku (June 2007), Simulating aerosol nucleation bursts in a coniferous forest (PDF), archived from the original (PDF) on 18 August 2007, retrieved 10 Jul 2007
Ramachandran Venkatesan; Scott Fogler (2004). "Comments on Analogies for Correlated Heat and Mass Transfer in Turbulent Flow" (PDF). AIChE Journal. 50 (7): 1623–1626. doi:10.1002/aic.10146. hdl:2027.42/34252.
Martínez, Isidoro, Forced and Natural Convection (PDF), retrieved 2011-11-30 | Wikipedia/Churchill–Bernstein_equation |
In mathematics, a Whittaker function is a special solution of Whittaker's equation, a modified form of the confluent hypergeometric equation introduced by Whittaker (1903) to make the formulas involving the solutions more symmetric. More generally, Jacquet (1966, 1967) introduced Whittaker functions of reductive groups over local fields, where the functions studied by Whittaker are essentially the case where the local field is the real numbers and the group is SL2(R).
Whittaker's equation is
{\displaystyle {\frac {d^{2}w}{dz^{2}}}+\left(-{\frac {1}{4}}+{\frac {\kappa }{z}}+{\frac {1/4-\mu ^{2}}{z^{2}}}\right)w=0.}
It has a regular singular point at 0 and an irregular singular point at ∞.
Two solutions are given by the Whittaker functions Mκ,μ(z), Wκ,μ(z), defined in terms of Kummer's confluent hypergeometric functions M and U by
{\displaystyle M_{\kappa ,\mu }\left(z\right)=\exp \left(-z/2\right)z^{\mu +{\tfrac {1}{2}}}M\left(\mu -\kappa +{\tfrac {1}{2}},1+2\mu ,z\right)}
{\displaystyle W_{\kappa ,\mu }\left(z\right)=\exp \left(-z/2\right)z^{\mu +{\tfrac {1}{2}}}U\left(\mu -\kappa +{\tfrac {1}{2}},1+2\mu ,z\right).}
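The definition of Mκ,μ can be checked numerically. The pure-Python sketch below (function names are illustrative) computes Kummer's function M(a, b, z) from its power series, which converges for all finite z, and verifies the known special case M₀,₁/₂(z) = 2 sinh(z/2).

```python
import math

# Numerical sketch of M_{kappa,mu}(z) via its definition in terms of
# Kummer's confluent hypergeometric function M(a, b, z), computed here
# from its power series sum_n (a)_n z^n / ((b)_n n!).

def kummer_m(a: float, b: float, z: float, terms: int = 60) -> float:
    total, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * z / ((b + n) * (n + 1))  # ratio of consecutive terms
        total += term
    return total

def whittaker_m(kappa: float, mu: float, z: float) -> float:
    return math.exp(-z / 2) * z**(mu + 0.5) * kummer_m(mu - kappa + 0.5, 1 + 2 * mu, z)

# Known special case: M_{0, 1/2}(z) = 2 sinh(z/2)
z = 1.7
print(abs(whittaker_m(0.0, 0.5, z) - 2 * math.sinh(z / 2)) < 1e-12)  # True
```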
The Whittaker function
{\displaystyle W_{\kappa ,\mu }(z)}
is unchanged when μ is replaced by −μ; in other words, considered as a function of μ at fixed κ and z, it is an even function. When κ and z are real, the functions give real values for real and imaginary values of μ. These functions of μ play a role in so-called Kummer spaces.
Whittaker functions appear as coefficients of certain representations of the group SL2(R), called Whittaker models.
== References ==
Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 13". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. pp. 504, 537. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253. See also chapter 14.
Bateman, Harry (1953), Higher transcendental functions (PDF), vol. 1, McGraw-Hill, archived from the original (PDF) on 2011-08-11, retrieved 2011-07-30.
Brychkov, Yu.A.; Prudnikov, A.P. (2001) [1994], "Whittaker function", Encyclopedia of Mathematics, EMS Press.
Daalhuis, Adri B. Olde (2010), "Whittaker function", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248.
Jacquet, Hervé (1966), "Une interprétation géométrique et une généralisation P-adique des fonctions de Whittaker en théorie des groupes semi-simples", Comptes Rendus de l'Académie des Sciences, Série A et B, 262: A943 – A945, ISSN 0151-0509, MR 0200390
Jacquet, Hervé (1967), "Fonctions de Whittaker associées aux groupes de Chevalley", Bulletin de la Société Mathématique de France, 95: 243–309, doi:10.24033/bsmf.1654, ISSN 0037-9484, MR 0271275
Rozov, N.Kh. (2001) [1994], "Whittaker equation", Encyclopedia of Mathematics, EMS Press.
Slater, Lucy Joan (1960), Confluent hypergeometric functions, Cambridge University Press, MR 0107026.
Whittaker, Edmund T. (1903), "An expression of certain known functions as generalized hypergeometric functions", Bulletin of the A.M.S., 10 (3), Providence, R.I.: American Mathematical Society: 125–134, doi:10.1090/S0002-9904-1903-01077-5
== Further reading ==
Hatamzadeh-Varmazyar, Saeed; Masouri, Zahra (2012-11-01). "A fast numerical method for analysis of one- and two-dimensional electromagnetic scattering using a set of cardinal functions". Engineering Analysis with Boundary Elements. 36 (11): 1631–1639. doi:10.1016/j.enganabound.2012.04.014. ISSN 0955-7997.
Gerasimov, A. A.; Lebedev, Dmitrii R.; Oblezin, Sergei V. (2012). "New integral representations of Whittaker functions for classical Lie groups". Russian Mathematical Surveys. 67 (1): 1–92. arXiv:0705.2886. Bibcode:2012RuMaS..67....1G. doi:10.1070/RM2012v067n01ABEH004776. ISSN 0036-0279.
Baudoin, Fabrice; O'Connell, Neil (2011). "Exponential functionals of brownian motion and class-one Whittaker functions". Annales de l'Institut Henri Poincaré, Probabilités et Statistiques. 47 (4): 1096–1120. arXiv:0809.2506. Bibcode:2011AIHPB..47.1096B. doi:10.1214/10-AIHP401. S2CID 113388.
McKee, Mark (April 2009). "An Infinite Order Whittaker Function". Canadian Journal of Mathematics. 61 (2): 373–381. doi:10.4153/CJM-2009-019-x. ISSN 0008-414X. S2CID 55587239.
Mathai, A. M.; Pederzoli, Giorgio (1997-03-01). "Some properties of matrix-variate Laplace transforms and matrix-variate Whittaker functions". Linear Algebra and Its Applications. 253 (1): 209–226. doi:10.1016/0024-3795(95)00705-9. ISSN 0024-3795.
Whittaker, J. M. (May 1927). "On the Cardinal Function of Interpolation Theory". Proceedings of the Edinburgh Mathematical Society. 1 (1): 41–46. doi:10.1017/S0013091500007318. ISSN 1464-3839.
Cherednik, Ivan (2009). "Whittaker Limits of Difference Spherical Functions". International Mathematics Research Notices. 2009 (20): 3793–3842. arXiv:0807.2155. doi:10.1093/imrn/rnp065. ISSN 1687-0247. S2CID 6253357.
Slater, L. J. (October 1954). "Expansions of generalized Whittaker functions". Mathematical Proceedings of the Cambridge Philosophical Society. 50 (4): 628–631. Bibcode:1954PCPS...50..628S. doi:10.1017/S0305004100029765. ISSN 1469-8064. S2CID 122348447.
Etingof, Pavel (1999-01-12). "Whittaker functions on quantum groups and q-deformed Toda operators". arXiv:math/9901053.
McNamara, Peter J. (2011-01-15). "Metaplectic Whittaker functions and crystal bases". Duke Mathematical Journal. 156 (1): 1–31. arXiv:0907.2675. doi:10.1215/00127094-2010-064. ISSN 0012-7094. S2CID 979197.
Mathai, A. M.; Pederzoli, Giorgio (1998-01-15). "A whittaker function of matrix argument". Linear Algebra and Its Applications. 269 (1): 91–103. doi:10.1016/S0024-3795(97)00059-1. ISSN 0024-3795.
Frenkel, E.; Gaitsgory, D.; Kazhdan, D.; Vilonen, K. (1998). "Geometric realization of Whittaker functions and the Langlands conjecture". Journal of the American Mathematical Society. 11 (2): 451–484. arXiv:alg-geom/9703022. doi:10.1090/S0894-0347-98-00260-4. ISSN 0894-0347. S2CID 13221400. | Wikipedia/Whittaker_function |
Poisson's equation is an elliptic partial differential equation of broad utility in theoretical physics. For example, the solution to Poisson's equation is the potential field caused by a given electric charge or mass density distribution; with the potential field known, one can then calculate the corresponding electrostatic or gravitational (force) field. It is a generalization of Laplace's equation, which is also frequently seen in physics. The equation is named after French mathematician and physicist Siméon Denis Poisson who published it in 1823.
== Statement of the equation ==
Poisson's equation is
{\displaystyle \Delta \varphi =f,}
where {\displaystyle \Delta } is the Laplace operator, and {\displaystyle f} and {\displaystyle \varphi } are real or complex-valued functions on a manifold. Usually, {\displaystyle f} is given, and {\displaystyle \varphi } is sought. When the manifold is Euclidean space, the Laplace operator is often denoted as ∇2, and so Poisson's equation is frequently written as
{\displaystyle \nabla ^{2}\varphi =f.}
In three-dimensional Cartesian coordinates, it takes the form
{\displaystyle \left({\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {\partial ^{2}}{\partial y^{2}}}+{\frac {\partial ^{2}}{\partial z^{2}}}\right)\varphi (x,y,z)=f(x,y,z).}
When {\displaystyle f=0} identically, we obtain Laplace's equation.
Poisson's equation may be solved using a Green's function:
{\displaystyle \varphi (\mathbf {r} )=-\iiint {\frac {f(\mathbf {r} ')}{4\pi |\mathbf {r} -\mathbf {r} '|}}\,\mathrm {d} ^{3}r',}
where the integral is over all of space. A general exposition of the Green's function for Poisson's equation is given in the article on the screened Poisson equation. There are various methods for numerical solution, such as the relaxation method, an iterative algorithm.
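The relaxation method mentioned above can be sketched in a few lines. The following example (grid size and iteration count are illustrative choices, not from the source) applies Jacobi iteration to ∇²φ = f on the unit square with φ = 0 on the boundary.

```python
# Minimal Jacobi relaxation for the 2-D Poisson equation grad^2(phi) = f
# on the unit square with phi = 0 on the boundary. Grid size and
# iteration count are illustrative choices.

def solve_poisson(f, n=20, iters=2000):
    h = 1.0 / (n - 1)
    phi = [[0.0] * n for _ in range(n)]
    for _ in range(iters):
        new = [row[:] for row in phi]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # Jacobi update: average of neighbours minus h^2 f / 4
                new[i][j] = 0.25 * (phi[i + 1][j] + phi[i - 1][j]
                                    + phi[i][j + 1] + phi[i][j - 1]
                                    - h * h * f(i * h, j * h))
        phi = new
    return phi

# Uniform source f = 1: by the maximum principle the solution is
# negative in the interior and zero on the boundary.
phi = solve_poisson(lambda x, y: 1.0)
print(phi[10][10] < 0)  # True
```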
== Applications in physics and engineering ==
=== Newtonian gravity ===
In the case of a gravitational field g due to an attracting massive object of density ρ, Gauss's law for gravity in differential form can be used to obtain the corresponding Poisson equation for gravity. Gauss's law for gravity is
{\displaystyle \nabla \cdot \mathbf {g} =-4\pi G\rho .}
Since the gravitational field is conservative (and irrotational), it can be expressed in terms of a scalar potential ϕ:
{\displaystyle \mathbf {g} =-\nabla \phi .}
Substituting this into Gauss's law,
{\displaystyle \nabla \cdot (-\nabla \phi )=-4\pi G\rho ,}
yields Poisson's equation for gravity:
{\displaystyle \nabla ^{2}\phi =4\pi G\rho .}
If the mass density is zero, Poisson's equation reduces to Laplace's equation. The corresponding Green's function can be used to calculate the potential at distance r from a central point mass m (i.e., the fundamental solution). In three dimensions the potential is
{\displaystyle \phi (r)={\frac {-Gm}{r}},}
which is equivalent to Newton's law of universal gravitation.
=== Electrostatics ===
Many problems in electrostatics are governed by the Poisson equation, which relates the electric potential
φ to the free charge density {\displaystyle \rho _{f}}, such as those found in conductors.
The mathematical details of Poisson's equation, commonly expressed in SI units (as opposed to Gaussian units), describe how the distribution of free charges generates the electrostatic potential in a given region.
Starting with Gauss's law for electricity (also one of Maxwell's equations) in differential form, one has
{\displaystyle \mathbf {\nabla } \cdot \mathbf {D} =\rho _{f},}
where
{\displaystyle \mathbf {\nabla } \cdot }
is the divergence operator, D is the electric displacement field, and ρf is the free-charge density (describing charges brought from outside).
Assuming the medium is linear, isotropic, and homogeneous (see polarization density), we have the constitutive equation
{\displaystyle \mathbf {D} =\varepsilon \mathbf {E} ,}
where ε is the permittivity of the medium, and E is the electric field.
Substituting this into Gauss's law and assuming that ε is spatially constant in the region of interest yields
{\displaystyle \mathbf {\nabla } \cdot \mathbf {E} ={\frac {\rho _{f}}{\varepsilon }}.}
In electrostatics, we assume that there is no magnetic field (the argument that follows also holds in the presence of a constant magnetic field).
Then, we have that
{\displaystyle \nabla \times \mathbf {E} =0,}
where ∇× is the curl operator. This equation means that we can write the electric field as the gradient of a scalar function φ (called the electric potential), since the curl of any gradient is zero. Thus we can write
{\displaystyle \mathbf {E} =-\nabla \varphi ,}
where the minus sign is introduced so that φ is identified as the electric potential energy per unit charge.
The derivation of Poisson's equation under these circumstances is straightforward. Substituting the potential gradient for the electric field,
{\displaystyle \nabla \cdot \mathbf {E} =\nabla \cdot (-\nabla \varphi )=-\nabla ^{2}\varphi ={\frac {\rho _{f}}{\varepsilon }},}
directly produces Poisson's equation for electrostatics, which is
{\displaystyle \nabla ^{2}\varphi =-{\frac {\rho _{f}}{\varepsilon }}.}
Specifying the Poisson's equation for the potential requires knowing the charge density distribution. If the charge density is zero, then Laplace's equation results. If the charge density follows a Boltzmann distribution, then the Poisson–Boltzmann equation results. The Poisson–Boltzmann equation plays a role in the development of the Debye–Hückel theory of dilute electrolyte solutions.
Using a Green's function, the potential at distance r from a central point charge Q (i.e., the fundamental solution) is
{\displaystyle \varphi (r)={\frac {Q}{4\pi \varepsilon r}},}
which is Coulomb's law of electrostatics. (For historical reasons, and unlike gravity's model above, the
{\displaystyle 4\pi }
factor appears here and not in Gauss's law.)
The above discussion assumes that the magnetic field is not varying in time. The same Poisson equation arises even if it does vary in time, as long as the Coulomb gauge is used. In this more general class of cases, computing φ is no longer sufficient to calculate E, since E also depends on the magnetic vector potential A, which must be independently computed. See Maxwell's equation in potential formulation for more on φ and A in Maxwell's equations and how an appropriate Poisson's equation is obtained in this case.
==== Potential of a Gaussian charge density ====
If there is a static spherically symmetric Gaussian charge density
{\displaystyle \rho _{f}(r)={\frac {Q}{\sigma ^{3}{\sqrt {2\pi }}^{3}}}\,e^{-r^{2}/(2\sigma ^{2})},}
where Q is the total charge, then the solution φ(r) of Poisson's equation
{\displaystyle \nabla ^{2}\varphi =-{\frac {\rho _{f}}{\varepsilon }}}
is given by
{\displaystyle \varphi (r)={\frac {1}{4\pi \varepsilon }}{\frac {Q}{r}}\operatorname {erf} \left({\frac {r}{{\sqrt {2}}\sigma }}\right),}
where erf(x) is the error function. This solution can be checked explicitly by evaluating ∇2φ.
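A quick numerical version of that check uses the radial form of the Laplacian, ∇²φ = (1/r) d²(rφ)/dr², for a spherically symmetric φ. Units in which ε = Q = σ = 1 are an illustrative choice.

```python
import math

# Numerical check that phi(r) = (1/(4*pi*eps)) (Q/r) erf(r/(sqrt(2) sigma))
# solves grad^2(phi) = -rho_f / eps for the Gaussian charge density.
# Units with eps = Q = sigma = 1 are an illustrative choice.

eps = Q = sigma = 1.0

def phi(r):
    return Q / (4 * math.pi * eps * r) * math.erf(r / (math.sqrt(2) * sigma))

def rho(r):
    return Q / (sigma**3 * math.sqrt(2 * math.pi)**3) * math.exp(-r**2 / (2 * sigma**2))

def u(s):
    return s * phi(s)  # radial Laplacian: grad^2(phi) = u''(r) / r

r, h = 0.8, 1e-4
lap = (u(r + h) - 2 * u(r) + u(r - h)) / h**2 / r  # central difference for u''

print(abs(lap + rho(r) / eps) < 1e-6)  # True
```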
Note that for r much greater than σ,
{\textstyle \operatorname {erf} (r/{\sqrt {2}}\sigma )}
approaches unity, and the potential φ(r) approaches the point-charge potential,
{\displaystyle \varphi \approx {\frac {1}{4\pi \varepsilon }}{\frac {Q}{r}},}
as one would expect. Furthermore, the error function approaches 1 extremely quickly as its argument increases; in practice, for r > 3σ the relative error is smaller than one part in a thousand.
=== Surface reconstruction ===
Surface reconstruction is an inverse problem. The goal is to digitally reconstruct a smooth surface based on a large number of points pi (a point cloud) where each point also carries an estimate of the local surface normal ni. Poisson's equation can be utilized to solve this problem with a technique called Poisson surface reconstruction.
The goal of this technique is to reconstruct an implicit function f whose value is zero at the points pi and whose gradient at the points pi equals the normal vectors ni. The set of (pi, ni) is thus modeled as a continuous vector field V. The implicit function f is found by integrating the vector field V. Since not every vector field is the gradient of a function, the problem may or may not have a solution: the necessary and sufficient condition for a smooth vector field V to be the gradient of a function f is that the curl of V must be identically zero. In case this condition is difficult to impose, it is still possible to perform a least-squares fit to minimize the difference between V and the gradient of f.
In order to effectively apply Poisson's equation to the problem of surface reconstruction, it is necessary to find a good discretization of the vector field V. The basic approach is to bound the data with a finite-difference grid. For a function valued at the nodes of such a grid, its gradient can be represented as valued on staggered grids, i.e. on grids whose nodes lie in between the nodes of the original grid. It is convenient to define three staggered grids, each shifted in one and only one direction corresponding to the components of the normal data. On each staggered grid we perform trilinear interpolation on the set of points. The interpolation weights are then used to distribute the magnitude of the associated component of ni onto the nodes of the particular staggered grid cell containing pi. Kazhdan and coauthors give a more accurate method of discretization using an adaptive finite-difference grid, i.e. the cells of the grid are smaller (the grid is more finely divided) where there are more data points. They suggest implementing this technique with an adaptive octree.
=== Fluid dynamics ===
For the incompressible Navier–Stokes equations, given by
{\displaystyle {\begin{aligned}{\frac {\partial \mathbf {v} }{\partial t}}+(\mathbf {v} \cdot \nabla )\mathbf {v} &=-{\frac {1}{\rho }}\nabla p+\nu \Delta \mathbf {v} +\mathbf {g} ,\\\nabla \cdot \mathbf {v} &=0.\end{aligned}}}
The equation for the pressure field
{\displaystyle p}
is an example of a nonlinear Poisson equation:
{\displaystyle {\begin{aligned}\Delta p&=-\rho \nabla \cdot (\mathbf {v} \cdot \nabla \mathbf {v} )\\&=-\rho \operatorname {Tr} {\big (}(\nabla \mathbf {v} )(\nabla \mathbf {v} ){\big )}.\end{aligned}}}
Notice that the above trace is not sign-definite.
== See also ==
Discrete Poisson equation
Poisson–Boltzmann equation
Helmholtz equation
Uniqueness theorem for Poisson's equation
Weak formulation
Harmonic function
Heat equation
Potential theory
== References ==
== Further reading ==
Evans, Lawrence C. (1998). Partial Differential Equations. Providence (RI): American Mathematical Society. ISBN 0-8218-0772-2.
Mathews, Jon; Walker, Robert L. (1970). Mathematical Methods of Physics (2nd ed.). New York: W. A. Benjamin. ISBN 0-8053-7002-1.
Polyanin, Andrei D. (2002). Handbook of Linear Partial Differential Equations for Engineers and Scientists. Boca Raton (FL): Chapman & Hall/CRC Press. ISBN 1-58488-299-9.
== External links ==
"Poisson equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Poisson Equation at EqWorld: The World of Mathematical Equations | Wikipedia/Poisson_equation |
In the study of partial differential equations, particularly in fluid dynamics, a self-similar solution is a form of solution which is similar to itself if the independent and dependent variables are appropriately scaled. Self-similar solutions appear whenever the problem lacks a characteristic length or time scale (for example, the Blasius boundary layer of an infinite plate, but not of a finite-length plate). Examples include the Blasius boundary layer and the Sedov–Taylor shell.
== Concept ==
A powerful tool in physics is the concept of dimensional analysis and scaling laws. By examining the physical effects present in a system, we may estimate their size and hence which, for example, might be neglected. In some cases, the system may not have a fixed natural length or time scale, while the solution depends on space or time. It is then necessary to construct a scale using space or time and the other dimensional quantities present—such as the viscosity
{\displaystyle \nu }. These constructs are not 'guessed' but are derived immediately from the scaling of the governing equations.
== Classification ==
The normal self-similar solution is also referred to as a self-similar solution of the first kind, since another type of self-similar solution exists for finite-sized problems, which cannot be derived from dimensional analysis, known as a self-similar solution of the second kind.
=== Self-similar solution of the second kind ===
The early identification of self-similar solutions of the second kind can be found in problems of imploding shock waves (Guderley–Landau–Stanyukovich problem), analyzed by G. Guderley (1942) and Lev Landau and K. P. Stanyukovich (1944), and propagation of shock waves by a short impulse, analysed by Carl Friedrich von Weizsäcker and Yakov Borisovich Zel'dovich (1956), who also classified it as the second kind for the first time. An independent study about the same field was published by Leonid Ivanovich Sedov in 1959. A complete description was made in 1972 by Grigory Barenblatt and Yakov Borisovich Zel'dovich. The self-similar solution of the second kind also appears in different contexts such as in boundary-layer problems subjected to small perturbations, as was identified by Keith Stewartson, Paul A. Libby and Herbert Fox. Moffatt eddies are also a self-similar solution of the second kind.
== Examples ==
=== Rayleigh problem ===
A simple example is a semi-infinite domain bounded by a rigid wall and filled with viscous fluid. At time
{\displaystyle t=0}
the wall is made to move with constant speed
{\displaystyle U}
in a fixed direction (for definiteness, say the
{\displaystyle x}
direction and consider only the
{\displaystyle x-y}
plane), one can see that there is no distinguished length scale given in the problem. This is known as the Rayleigh problem. The no-slip boundary condition at the wall is
{\displaystyle u{(y\!=\!0)}=U}
Also, the condition that the plate has no effect on the fluid at infinity is enforced as
{\displaystyle u{(y\!\to \!\infty )}=0.}
Now, from the Navier-Stokes equations
{\displaystyle \rho \left({\dfrac {\partial {\vec {u}}}{\partial t}}+{\vec {u}}\cdot \nabla {\vec {u}}\right)=-\nabla p+\mu \nabla ^{2}{\vec {u}}}
one can observe that this flow will be rectilinear, with gradients in the
{\displaystyle y}
direction and flow in the
{\displaystyle x}
direction, and that the pressure term will have no tangential component so that
{\displaystyle {\dfrac {\partial p}{\partial y}}=0}
. The
{\displaystyle x}
component of the Navier-Stokes equations then becomes
{\displaystyle {\dfrac {\partial {\vec {u}}}{\partial t}}=\nu \partial _{y}^{2}{\vec {u}}}
and the scaling arguments can be applied to show that
{\displaystyle {\frac {U}{t}}\sim \nu {\frac {U}{y^{2}}}}
which gives the scaling of the
{\displaystyle y}
co-ordinate as
{\displaystyle y\sim (\nu t)^{1/2}}.
This allows one to pose a self-similar ansatz such that, with
{\displaystyle f} and {\displaystyle \eta } dimensionless,
{\displaystyle u=Uf\left(\eta \equiv {\dfrac {y}{(\nu t)^{1/2}}}\right)}
The above contains all the relevant physics and the next step is to solve the equations, which for many cases will include numerical methods. This equation is
{\displaystyle -\eta f'/2=f''}
with solution satisfying the boundary conditions that
{\displaystyle f=1-\operatorname {erf} (\eta /2)\quad {\text{ or }}\quad u=U\left(1-\operatorname {erf} \left(y/(4\nu t)^{1/2}\right)\right)}
which is a self-similar solution of the first kind.
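The solution can be verified numerically: the sketch below (with illustrative values of U and ν, not from the source) checks by finite differences that u = U(1 − erf(y/(4νt)^{1/2})) satisfies the diffusion equation ∂u/∂t = ν ∂²u/∂y² together with the boundary conditions.

```python
import math

# Finite-difference check that u(y, t) = U * (1 - erf(y / sqrt(4*nu*t)))
# satisfies u_t = nu * u_yy and the boundary conditions of the Rayleigh
# problem. U and nu are illustrative values.

U, nu = 2.0, 0.1

def u(y, t):
    return U * (1 - math.erf(y / math.sqrt(4 * nu * t)))

y, t, h = 0.5, 1.0, 1e-4
u_t = (u(y, t + h) - u(y, t - h)) / (2 * h)                  # time derivative
u_yy = (u(y + h, t) - 2 * u(y, t) + u(y - h, t)) / h**2      # second y-derivative

print(abs(u_t - nu * u_yy) < 1e-6)   # PDE satisfied
print(u(0.0, t) == U)                # no-slip at the wall
print(u(50.0, t) < 1e-12)            # vanishes far from the wall
```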
=== Semi-infinite solid approximation ===
In transient heat transfer applications, such as impingement heating on a ship deck during missile launches and the sizing of thermal protection systems, self-similar solutions can be found for semi-infinite solids. The governing equation when heat conduction is the primary heat transfer mechanism is the one-dimensional energy equation:
{\displaystyle \rho c_{p}{\frac {\partial T}{\partial t}}={\frac {\partial }{\partial x}}\left(k{\frac {\partial T}{\partial x}}\right)}
where ρ is the material's density, cp is the material's specific heat capacity, and k is the material's thermal conductivity. In the case when the material is assumed to be homogeneous and its properties constant, the energy equation reduces to the heat equation:
{\displaystyle {\frac {\partial T}{\partial t}}=\alpha {\frac {\partial ^{2}T}{\partial x^{2}}},\quad \alpha ={\frac {k}{\rho c_{p}}}}
with α being the thermal diffusivity. By introducing the similarity variable
{\displaystyle \eta =x/{\sqrt {t}}}
and assuming that
{\displaystyle T(t,x)=f(\eta )}
, the PDE can be transformed into the ODE:
{\displaystyle f''(\eta )+{\frac {1}{2\alpha }}\eta f'(\eta )=0}
If a simple model of thermal protection system sizing is assumed, where decomposition, pyrolysis gas flow, and surface recession are ignored, with the initial temperature
{\displaystyle T(0,x)=f(\infty )=T_{i}}
and a constant surface temperature
{\displaystyle T(t,0)=f(0)=T_{s}}
, then the ODE can be solved for the temperature at a depth x and time t:
{\displaystyle T(t,x)={\text{erf}}\left({\frac {x}{2{\sqrt {\alpha t}}}}\right)\left(T_{i}-T_{s}\right)+T_{s}}
where erf(·) is the error function.
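The closed-form temperature profile is easy to evaluate numerically. The sketch below uses assumed, steel-like property values (not taken from the article) and verifies the two boundary conditions: the surface stays at T_s and deep material remains at T_i.

```python
import math

def semi_infinite_T(x, t, T_i, T_s, alpha):
    """T(t,x) = erf(x / (2 sqrt(alpha t))) (T_i - T_s) + T_s for a semi-infinite solid."""
    return math.erf(x / (2.0 * math.sqrt(alpha * t))) * (T_i - T_s) + T_s

alpha = 1.2e-5            # thermal diffusivity [m^2/s] (assumed, steel-like)
T_i, T_s = 300.0, 1000.0  # initial and heated-surface temperatures [K] (assumed)

T_surface = semi_infinite_T(0.0, 10.0, T_i, T_s, alpha)  # erf(0) = 0, so T = T_s
T_deep    = semi_infinite_T(0.5, 10.0, T_i, T_s, alpha)  # erf(large) -> 1, so T -> T_i
```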
== References ==
Discretization of the Navier–Stokes equations of fluid dynamics is a reformulation of the equations in such a way that they can be applied to computational fluid dynamics. Several methods of discretization can be applied:
Finite volume method
Finite elements method
Finite difference method
== Finite volume method ==
=== Incompressible flow ===
We begin with the incompressible form of the momentum equation. The equation has been divided through by the density (P = p/ρ) and density has been absorbed into the body force term.
{\displaystyle {\frac {\partial u_{i}}{\partial t}}+{\frac {\partial u_{i}u_{j}}{\partial x_{j}}}=-{\frac {\partial P}{\partial x_{i}}}+\nu {\frac {\partial ^{2}u_{i}}{\partial x_{j}\partial x_{j}}}+f_{i}}
The equation is integrated over the control volume of a computational cell.
{\displaystyle \iiint _{V}\left[{\frac {\partial u_{i}}{\partial t}}+{\frac {\partial u_{i}u_{j}}{\partial x_{j}}}\right]dV=\iiint _{V}\left[-{\frac {\partial P}{\partial x_{i}}}+\nu {\frac {\partial ^{2}u_{i}}{\partial x_{j}\partial x_{j}}}+f_{i}\right]dV}
The time-dependent term and the body force term are assumed constant over the volume of the cell. The divergence theorem is applied to the advection, pressure gradient, and diffusion terms.
{\displaystyle {\frac {\partial u_{i}}{\partial t}}V+\iint _{A}u_{i}u_{j}n_{j}dA=-\iint _{A}Pn_{i}dA+\iint _{A}\nu {\frac {\partial u_{i}}{\partial x_{j}}}n_{j}dA+f_{i}V}
where n is the normal of the surface of the control volume and V is the volume. If the control volume is a polyhedron and values are assumed constant over each face, the area integrals can be written as summations over each face.
{\displaystyle {\frac {\partial u_{i}}{\partial t}}V+\sum _{nbr}\left(u_{i}u_{j}n_{j}A\right)_{nbr}=-\sum _{nbr}\left(Pn_{i}A\right)_{nbr}+\sum _{nbr}\left(\nu {\frac {\partial u_{i}}{\partial x_{j}}}n_{j}A\right)_{nbr}+f_{i}V}
where the subscript nbr denotes the value at any given face.
==== Two-dimensional uniformly-spaced Cartesian grid ====
For a two-dimensional Cartesian grid, the equation can be expanded to
{\displaystyle {\begin{aligned}&{\frac {\partial u_{i}}{\partial t}}\Delta x\Delta y-\left(u_{i}u\Delta y\right)_{w}+\left(u_{i}u\Delta y\right)_{e}-\left(u_{i}v\Delta x\right)_{s}+\left(u_{i}v\Delta x\right)_{n}=\\&-\left(Pn_{i}\Delta y\right)_{w}-\left(Pn_{i}\Delta y\right)_{e}-\left(Pn_{i}\Delta x\right)_{s}-\left(Pn_{i}\Delta x\right)_{n}\\&-\left(\nu {\frac {\partial u_{i}}{\partial x}}\Delta y\right)_{w}+\left(\nu {\frac {\partial u_{i}}{\partial x}}\Delta y\right)_{e}-\left(\nu {\frac {\partial u_{i}}{\partial y}}\Delta x\right)_{s}+\left(\nu {\frac {\partial u_{i}}{\partial y}}\Delta x\right)_{n}+f_{i}\end{aligned}}}
On a staggered grid, the x-momentum equation is
{\displaystyle {\begin{aligned}&{\frac {\partial u}{\partial t}}\Delta x\Delta y-\left(uu\Delta y\right)_{w}+\left(uu\Delta y\right)_{e}-\left(uv\Delta x\right)_{s}+\left(uv\Delta x\right)_{n}=\\&+\left(P\Delta y\right)_{w}-\left(P\Delta y\right)_{e}-\left(\nu {\frac {\partial u}{\partial x}}\Delta y\right)_{w}+\left(\nu {\frac {\partial u}{\partial x}}\Delta y\right)_{e}-\left(\nu {\frac {\partial u}{\partial y}}\Delta x\right)_{s}+\left(\nu {\frac {\partial u}{\partial y}}\Delta x\right)_{n}+f_{x}\end{aligned}}}
and the y-momentum equation is
{\displaystyle {\begin{aligned}&{\frac {\partial v}{\partial t}}\Delta x\Delta y-\left(vu\Delta y\right)_{w}+\left(vu\Delta y\right)_{e}-\left(vv\Delta x\right)_{s}+\left(vv\Delta x\right)_{n}=\\&+\left(P\Delta x\right)_{s}-\left(P\Delta x\right)_{n}-\left(\nu {\frac {\partial v}{\partial x}}\Delta y\right)_{w}+\left(\nu {\frac {\partial v}{\partial x}}\Delta y\right)_{e}-\left(\nu {\frac {\partial v}{\partial y}}\Delta x\right)_{s}+\left(\nu {\frac {\partial v}{\partial y}}\Delta x\right)_{n}+f_{y}\end{aligned}}}
The goal at this point is to determine expressions for the face values of u, v, and P and to approximate the derivatives using finite-difference approximations. For this example, we will use a backward difference for the time derivative and central differences for the spatial derivatives. For both momentum equations, the time derivative becomes
{\displaystyle {\frac {\partial u_{i}}{\partial t}}={\frac {u_{i}^{n}-u_{i}^{n-1}}{\Delta t}}}
where n is the current time index and Δt is the time-step. As an example for the spatial derivatives, the derivative in the west-face diffusion term of the x-momentum equation becomes
{\displaystyle \left({\frac {\partial u}{\partial x}}\right)_{w}={\frac {u_{I,J}-u_{I-1,J}}{\Delta x}}}
where I and J are the indices of the x-momentum cell of interest.
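The two difference formulas above can be written out in a couple of lines. The sketch below uses assumed grid spacings and illustrative nodal values (not from the article), evaluating the backward time difference and the west-face spatial derivative exactly as defined.

```python
# Minimal sketch of the two finite-difference approximations (assumed uniform grid).
dt, dx = 0.01, 0.1                  # time-step and cell width (illustrative)

u_now, u_prev = 2.05, 2.00          # u_i^n and u_i^{n-1} at one cell
dudt = (u_now - u_prev) / dt        # backward difference in time -> 5.0

u_IJ, u_Im1J = 1.4, 1.0             # u at cell (I,J) and its west neighbour (I-1,J)
dudx_w = (u_IJ - u_Im1J) / dx       # west-face derivative (du/dx)_w -> 4.0
```

In a solver these expressions are assembled for every face of every cell, giving one algebraic equation per cell per momentum component.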
== Finite elements method ==
== Finite difference method ==
In physics, the Spalart–Allmaras model is a one-equation model that solves a modelled transport equation for the kinematic eddy turbulent viscosity. The Spalart–Allmaras model was designed specifically for aerospace applications involving wall-bounded flows and has been shown to give good results for boundary layers subjected to adverse pressure gradients. It is also gaining popularity in turbomachinery applications.
In its original form, the model is effectively a low-Reynolds number model, requiring the viscosity-affected region of the boundary layer to be properly resolved ( y+ ~1 meshes).
The Spalart–Allmaras model was developed for aerodynamic flows. It is not calibrated for general industrial flows, and produces relatively large errors for some free shear flows, especially plane and round jet flows. In addition, it cannot be relied on to predict the decay of homogeneous, isotropic turbulence.
It solves a transport equation for a viscosity-like variable {\displaystyle {\tilde {\nu }}}. This may be referred to as the Spalart–Allmaras variable.
== Original model ==
The turbulent eddy viscosity is given by
{\displaystyle \nu _{t}={\tilde {\nu }}f_{v1},\quad f_{v1}={\frac {\chi ^{3}}{\chi ^{3}+C_{v1}^{3}}},\quad \chi :={\frac {\tilde {\nu }}{\nu }}}
{\displaystyle {\frac {\partial {\tilde {\nu }}}{\partial t}}+u_{j}{\frac {\partial {\tilde {\nu }}}{\partial x_{j}}}=C_{b1}[1-f_{t2}]{\tilde {S}}{\tilde {\nu }}+{\frac {1}{\sigma }}\{\nabla \cdot [(\nu +{\tilde {\nu }})\nabla {\tilde {\nu }}]+C_{b2}|\nabla {\tilde {\nu }}|^{2}\}-\left[C_{w1}f_{w}-{\frac {C_{b1}}{\kappa ^{2}}}f_{t2}\right]\left({\frac {\tilde {\nu }}{d}}\right)^{2}+f_{t1}\Delta U^{2}}
{\displaystyle {\tilde {S}}\equiv S+{\frac {\tilde {\nu }}{\kappa ^{2}d^{2}}}f_{v2},\quad f_{v2}=1-{\frac {\chi }{1+\chi f_{v1}}}}
{\displaystyle f_{w}=g\left[{\frac {1+C_{w3}^{6}}{g^{6}+C_{w3}^{6}}}\right]^{1/6},\quad g=r+C_{w2}(r^{6}-r),\quad r\equiv {\frac {\tilde {\nu }}{{\tilde {S}}\kappa ^{2}d^{2}}}}
{\displaystyle f_{t1}=C_{t1}g_{t}\exp \left(-C_{t2}{\frac {\omega _{t}^{2}}{\Delta U^{2}}}[d^{2}+g_{t}^{2}d_{t}^{2}]\right)}
{\displaystyle f_{t2}=C_{t3}\exp \left(-C_{t4}\chi ^{2}\right)}
{\displaystyle S={\sqrt {2\Omega _{ij}\Omega _{ij}}}}
The rotation tensor is given by
{\displaystyle \Omega _{ij}={\frac {1}{2}}(\partial u_{i}/\partial x_{j}-\partial u_{j}/\partial x_{i})}
where d is the distance from the closest surface and {\displaystyle \Delta U^{2}} is the norm of the difference between the velocity at the trip (usually zero) and that at the field point we are considering.
The constants are
{\displaystyle {\begin{matrix}\sigma &=&2/3\\C_{b1}&=&0.1355\\C_{b2}&=&0.622\\\kappa &=&0.41\\C_{w1}&=&C_{b1}/\kappa ^{2}+(1+C_{b2})/\sigma \\C_{w2}&=&0.3\\C_{w3}&=&2\\C_{v1}&=&7.1\\C_{t1}&=&1\\C_{t2}&=&2\\C_{t3}&=&1.1\\C_{t4}&=&2\end{matrix}}}
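The closure for the eddy viscosity itself is compact enough to evaluate directly. The sketch below computes ν_t = ν̃ f_v1 with C_v1 = 7.1, using an assumed air-like molecular viscosity, and shows the intended limiting behaviour: near a wall (small χ) the damping function suppresses ν_t, while for large χ it approaches ν̃.

```python
C_v1 = 7.1                                  # model constant from the table above

def eddy_viscosity(nu_tilde, nu):
    """nu_t = nu_tilde * f_v1 with f_v1 = chi^3 / (chi^3 + C_v1^3), chi = nu_tilde / nu."""
    chi = nu_tilde / nu
    f_v1 = chi ** 3 / (chi ** 3 + C_v1 ** 3)
    return nu_tilde * f_v1

nu = 1.5e-5                                 # molecular kinematic viscosity [m^2/s] (assumed, air-like)
nu_t_small = eddy_viscosity(0.1 * nu, nu)   # near-wall: chi << 1, nu_t strongly damped
nu_t_large = eddy_viscosity(1000 * nu, nu)  # far field: f_v1 -> 1, nu_t -> nu_tilde
```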
== Modifications to original model ==
According to Spalart it is safer to use the following values for the last two constants:
{\displaystyle {\begin{matrix}C_{t3}&=&1.2\\C_{t4}&=&0.5\end{matrix}}}
Other models related to the S-A model:
DES (1999) [1]
DDES (2006)
== Model for compressible flows ==
There are several approaches to adapting the model for compressible flows.
In all cases, the turbulent dynamic viscosity is computed from
{\displaystyle \mu _{t}=\rho {\tilde {\nu }}f_{v1}}
where {\displaystyle \rho } is the local density.
The first approach applies the original equation for {\displaystyle {\tilde {\nu }}}.
In the second approach, the convective terms in the equation for {\displaystyle {\tilde {\nu }}} are modified to
{\displaystyle {\frac {\partial {\tilde {\nu }}}{\partial t}}+{\frac {\partial }{\partial x_{j}}}({\tilde {\nu }}u_{j})={\mbox{RHS}}}
where the right hand side (RHS) is the same as in the original model.
The third approach involves inserting the density inside some of the derivatives on the LHS and RHS.
The second and third approaches are not recommended by the original authors, but they are found in several solvers.
== Boundary conditions ==
Walls:
{\displaystyle {\tilde {\nu }}=0}
Freestream:
Ideally {\displaystyle {\tilde {\nu }}=0}
, but some solvers can have problems with a zero value, in which case
{\displaystyle {\tilde {\nu }}\leq {\frac {\nu }{2}}}
can be used.
This is if the trip term is used to "start up" the model. A convenient option is to set
{\displaystyle {\tilde {\nu }}=5{\nu }}
in the freestream. The model then provides "Fully Turbulent" behavior, i.e., it becomes turbulent in any region that contains shear.
Outlet: convective outlet.
== References ==
Spalart, Philippe R. and Allmaras, Steven R., 1992, "A One-Equation Turbulence Model for Aerodynamic Flows" AIAA Paper 92-0439
== External links ==
This article was based on the Spalart-Allmaras model article in CFD-Wiki
What Are the Spalart-Allmaras Turbulence Models? from kxcad.net
The Spalart-Allmaras Turbulence Model at NASA's Langley Research Center Turbulence Modelling Resource site
In fluid dynamics, turbulence kinetic energy (TKE) is the mean kinetic energy per unit mass associated with eddies in turbulent flow. Physically, the turbulence kinetic energy is characterized by measured root-mean-square (RMS) velocity fluctuations. In the Reynolds-averaged Navier Stokes equations, the turbulence kinetic energy can be calculated based on the closure method, i.e. a turbulence model.
The TKE can be defined to be half the sum of the variances σ² (square of standard deviations σ) of the fluctuating velocity components:
{\displaystyle k={\frac {1}{2}}(\sigma _{u}^{2}+\sigma _{v}^{2}+\sigma _{w}^{2})={\frac {1}{2}}\left(\,{\overline {(u')^{2}}}+{\overline {(v')^{2}}}+{\overline {(w')^{2}}}\,\right),}
where each turbulent velocity component is the difference between the instantaneous and the average velocity:
{\displaystyle u'=u-{\overline {u}}}
(Reynolds decomposition). The mean and variance are
{\displaystyle {\begin{aligned}{\overline {u'}}&={\frac {1}{T}}\int _{0}^{T}(u(t)-{\overline {u}})\,dt=0,\\[4pt]{\overline {(u')^{2}}}&={\frac {1}{T}}\int _{0}^{T}(u(t)-{\overline {u}})^{2}\,dt=\sigma _{u}^{2}\geq 0,\end{aligned}}}
respectively.
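The definition of k translates directly into code: subtract the mean from each velocity component, average the squared fluctuations, and halve the sum of the three variances. The samples below are illustrative, not measured data.

```python
# k = (1/2)(var(u) + var(v) + var(w)) from sampled velocity components.
def tke(u, v, w):
    def var(x):
        m = sum(x) / len(x)                           # mean velocity
        return sum((s - m) ** 2 for s in x) / len(x)  # variance of the fluctuation
    return 0.5 * (var(u) + var(v) + var(w))

u = [10.0, 12.0, 8.0, 10.0]   # mean 10, variance 2 (illustrative samples)
v = [0.0, 1.0, -1.0, 0.0]     # mean 0, variance 0.5
w = [5.0, 5.0, 5.0, 5.0]      # no fluctuation, variance 0
k = tke(u, v, w)              # 0.5 * (2 + 0.5 + 0) = 1.25
```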
TKE can be produced by fluid shear, friction or buoyancy, or through external forcing at low-frequency eddy scales (integral scale). Turbulence kinetic energy is then transferred down the turbulence energy cascade, and is dissipated by viscous forces at the Kolmogorov scale. This process of production, transport and dissipation can be expressed as:
{\displaystyle {\frac {Dk}{Dt}}+\nabla \cdot T'=P-\varepsilon ,}
where:
{\displaystyle {\tfrac {Dk}{Dt}}} is the mean-flow material derivative of TKE;
∇ · T′ is the turbulence transport of TKE;
P is the production of TKE, and
ε is the TKE dissipation.
Assuming that molecular viscosity is constant, and making the Boussinesq approximation, the TKE equation is:
{\displaystyle \underbrace {\frac {\partial k}{\partial t}} _{{\text{Local}} \atop {\text{derivative}}}\!\!\!+\ \underbrace {{\overline {u}}_{j}{\frac {\partial k}{\partial x_{j}}}} _{{\text{Advection}} \atop {}}=-\underbrace {{\frac {1}{\rho _{o}}}{\frac {\partial {\overline {u'_{i}p'}}}{\partial x_{i}}}} _{{\text{Pressure}} \atop {\text{diffusion}}}-\underbrace {{\frac {1}{2}}{\frac {\partial {\overline {u_{j}'u_{j}'u_{i}'}}}{\partial x_{i}}}} _{{{\text{Turbulent}} \atop {\text{transport}}} \atop {\mathcal {T}}}+\underbrace {\nu {\frac {\partial ^{2}k}{\partial x_{j}^{2}}}} _{{{\text{Molecular}} \atop {\text{viscous}}} \atop {\text{transport}}}-\underbrace {{\overline {u'_{i}u'_{j}}}{\frac {\partial {\overline {u_{i}}}}{\partial x_{j}}}} _{{\text{Production}} \atop {\mathcal {P}}}-\underbrace {\nu {\overline {{\frac {\partial u'_{i}}{\partial x_{j}}}{\frac {\partial u'_{i}}{\partial x_{j}}}}}} _{{\text{Dissipation}} \atop \varepsilon _{k}}-\underbrace {{\frac {g}{\rho _{o}}}{\overline {\rho 'u'_{i}}}\delta _{i3}} _{{\text{Buoyancy flux}} \atop b}}
By examining these phenomena, the turbulence kinetic energy budget for a particular flow can be found.
== Computational fluid dynamics ==
In computational fluid dynamics (CFD), it is impossible to numerically simulate turbulence without discretizing the flow-field as far as the Kolmogorov microscales, which is called direct numerical simulation (DNS). Because DNS simulations are exorbitantly expensive due to memory, computational and storage overheads, turbulence models are used to simulate the effects of turbulence. A variety of models are used, but generally TKE is a fundamental flow property which must be calculated in order for fluid turbulence to be modelled.
=== Reynolds-averaged Navier–Stokes equations ===
Reynolds-averaged Navier–Stokes (RANS) simulations use the Boussinesq eddy viscosity hypothesis to calculate the Reynolds stress that results from the averaging procedure:
{\displaystyle {\overline {u'_{i}u'_{j}}}={\frac {2}{3}}k\delta _{ij}-\nu _{t}\left({\frac {\partial {\overline {u_{i}}}}{\partial x_{j}}}+{\frac {\partial {\overline {u_{j}}}}{\partial x_{i}}}\right),}
where
{\displaystyle \nu _{t}=c\cdot {\sqrt {k}}\cdot l_{m}.}
The exact method of resolving TKE depends upon the turbulence model used; k–ε (k–epsilon) models assume isotropy of turbulence whereby the normal stresses are equal:
{\displaystyle {\overline {(u')^{2}}}={\overline {(v')^{2}}}={\overline {(w')^{2}}}.}
This assumption makes modelling of the turbulence quantities (k and ε) simpler, but it is not accurate in scenarios where the anisotropic behaviour of the turbulence stresses dominates. It also leads to over-prediction of turbulence production, since the production then depends on the mean rate of strain rather than on the differences between the normal stresses (which are, by assumption, equal).
Reynolds-stress models (RSM) use a different method to close the Reynolds stresses, whereby the normal stresses are not assumed isotropic, so the issue with TKE production is avoided.
=== Initial conditions ===
Accurate prescription of TKE as an initial condition in CFD simulations is important to accurately predict flows, especially in high Reynolds-number simulations. A smooth duct example is given below.
{\displaystyle k={\frac {3}{2}}(UI)^{2},}
where I is the initial turbulence intensity [%] given below, and U is the initial velocity magnitude. As an example for pipe flows, with the Reynolds number based on the pipe diameter:
{\displaystyle I=0.16Re^{-{\frac {1}{8}}}.}
Here l is the turbulence or eddy length scale, given below, and cμ is a k–ε model parameter whose value is typically given as 0.09;
{\displaystyle \varepsilon ={c_{\mu }}^{\frac {3}{4}}k^{\frac {3}{2}}l^{-1}.}
The turbulent length scale can be estimated as
{\displaystyle l=0.07L,}
with L a characteristic length. For internal flows this may take the value of the inlet duct (or pipe) width (or diameter) or the hydraulic diameter.
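Chaining the four formulas above gives a complete set of inlet turbulence quantities from a velocity, a Reynolds number, and a characteristic length. The values below are assumed for illustration (U = 10 m/s, Re = 10^5, pipe diameter L = 0.1 m, c_mu = 0.09).

```python
# Inlet turbulence quantities for a pipe flow, following the formulas above.
U, Re, L, c_mu = 10.0, 1.0e5, 0.1, 0.09   # illustrative values (assumed)

I = 0.16 * Re ** (-1.0 / 8.0)             # turbulence intensity, I = 0.16 Re^(-1/8)
k = 1.5 * (U * I) ** 2                    # turbulence kinetic energy, k = (3/2)(UI)^2
l = 0.07 * L                              # turbulence length scale, l = 0.07 L
eps = c_mu ** 0.75 * k ** 1.5 / l         # dissipation rate, eps = c_mu^(3/4) k^(3/2) / l
```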
== References ==
== Further reading ==
Turbulence kinetic energy at CFD Online.
Absi, R. (2008). "Analytical solutions for the modeled k-equation". Journal of Applied Mechanics. 75 (44501): 044501. Bibcode:2008JAM....75d4501A. doi:10.1115/1.2912722.
Lacey, R. W. J.; Neary, V. S.; Liao, J. C.; Enders, E. C.; Tritico, H. M. (2012). "The IPOS framework: linking fish swimming performance in altered flows from laboratory experiments to rivers." River Res. Applic. 28 (4), pp. 429–443. doi:10.1002/rra.1584.
Wilcox, D. C. (2006). "Turbulence modeling for CFD". Third edition. DCW Industries, La Canada, USA. ISBN 978-1-928729-08-2.
In physics (specifically, the kinetic theory of gases), the Einstein relation is a previously unexpected connection revealed independently by William Sutherland in 1904, Albert Einstein in 1905, and by Marian Smoluchowski in 1906 in their works on Brownian motion. The more general form of the equation in the classical case is
{\displaystyle D=\mu \,k_{\text{B}}T,}
where
D is the diffusion coefficient;
μ is the "mobility", or the ratio of the particle's terminal drift velocity to an applied force, μ = vd/F;
kB is the Boltzmann constant;
T is the absolute temperature.
This equation is an early example of a fluctuation-dissipation relation.
Note that the equation above describes the classical case and should be modified when quantum effects are relevant.
Two frequently used important special forms of the relation are:
Einstein–Smoluchowski equation, for diffusion of charged particles:
{\displaystyle D={\frac {\mu _{q}\,k_{\text{B}}T}{q}}}
Stokes–Einstein–Sutherland equation, for diffusion of spherical particles through a liquid with low Reynolds number:
{\displaystyle D={\frac {k_{\text{B}}T}{6\pi \,\eta \,r}}}
Here
q is the electrical charge of a particle;
μq is the electrical mobility of the charged particle;
η is the dynamic viscosity;
r is the Stokes radius of the spherical particle.
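The Stokes–Einstein–Sutherland form is a one-line calculation once the viscosity and radius are known. The sketch below uses assumed values (water at 298 K, a sphere of 0.5 µm Stokes radius) and yields a diffusion coefficient on the order of 10^-13 m²/s, typical for micron-scale colloids.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant [J/K]
T = 298.0            # absolute temperature [K] (assumed)
eta = 8.9e-4         # dynamic viscosity of water [Pa s] (approximate, assumed)
r = 0.5e-6           # Stokes radius of the particle [m] (assumed)

# D = k_B T / (6 pi eta r), roughly 5e-13 m^2/s for these values
D = k_B * T / (6.0 * math.pi * eta * r)
```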
== Special cases ==
=== Electrical mobility equation (classical case) ===
For a particle with electrical charge q, its electrical mobility μq is related to its generalized mobility μ by the equation μ = μq/q. The parameter μq is the ratio of the particle's terminal drift velocity to an applied electric field. Hence, the equation in the case of a charged particle is given as
{\displaystyle D={\frac {\mu _{q}\,k_{\text{B}}T}{q}},}
where
{\displaystyle D} is the diffusion coefficient ({\displaystyle \mathrm {m^{2}s^{-1}} }).
{\displaystyle \mu _{q}} is the electrical mobility ({\displaystyle \mathrm {m^{2}V^{-1}s^{-1}} }).
{\displaystyle q} is the electric charge of particle (C, coulombs)
{\displaystyle T} is the electron temperature or ion temperature in plasma (K).
If the temperature is given in volts, which is more common for plasma:
{\displaystyle D={\frac {\mu _{q}\,T}{Z}},}
where {\displaystyle Z} is the charge number of the particle (unitless) and {\displaystyle T} is the electron temperature or ion temperature in plasma (V).
=== Electrical mobility equation (quantum case) ===
For the case of Fermi gas or a Fermi liquid, relevant for the electron mobility in normal metals like in the free electron model, Einstein relation should be modified:
{\displaystyle D={\frac {\mu _{q}\,E_{\mathrm {F} }}{q}},}
where {\displaystyle E_{\mathrm {F} }} is the Fermi energy.
=== Stokes–Einstein–Sutherland equation ===
In the limit of low Reynolds number, the mobility μ is the inverse of the drag coefficient {\displaystyle \zeta }. A damping constant {\displaystyle \gamma =\zeta /m} is frequently used for the inverse momentum relaxation time (time needed for the inertia momentum to become negligible compared to the random momenta) of the diffusive object. For spherical particles of radius r, Stokes' law gives
{\displaystyle \zeta =6\pi \,\eta \,r,}
where {\displaystyle \eta } is the viscosity of the medium. Thus the Einstein–Smoluchowski relation results in the Stokes–Einstein–Sutherland relation
{\displaystyle D={\frac {k_{\text{B}}T}{6\pi \,\eta \,r}}.}
This has been applied for many years to estimating the self-diffusion coefficient in liquids, and a version consistent with isomorph theory has been confirmed by computer simulations of the Lennard-Jones system.
In the case of rotational diffusion, the friction is {\displaystyle \zeta _{\text{r}}=8\pi \eta r^{3}}, and the rotational diffusion constant {\displaystyle D_{\text{r}}} is
{\displaystyle D_{\text{r}}={\frac {k_{\text{B}}T}{8\pi \,\eta \,r^{3}}}.}
This is sometimes referred to as the Stokes–Einstein–Debye relation.
=== Semiconductor ===
In a semiconductor with an arbitrary density of states, i.e. a relation of the form
{\displaystyle p=p(\varphi )}
between the density of holes or electrons {\displaystyle p} and the corresponding quasi Fermi level (or electrochemical potential) {\displaystyle \varphi }, the Einstein relation is
{\displaystyle D={\frac {\mu _{q}p}{q{\frac {dp}{d\varphi }}}},}
where {\displaystyle \mu _{q}} is the electrical mobility (see § Proof of the general case for a proof of this relation). As an example, assuming a parabolic dispersion relation for the density of states and Maxwell–Boltzmann statistics, which are often used to describe inorganic semiconductor materials, one can compute (see density of states):
{\displaystyle p(\varphi )=N_{0}e^{\frac {q\varphi }{k_{\text{B}}T}},}
where {\displaystyle N_{0}} is the total density of available energy states, which gives the simplified relation:
{\displaystyle D=\mu _{q}{\frac {k_{\text{B}}T}{q}}.}
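In this Maxwell–Boltzmann limit the ratio D/μ_q equals k_B T / q, the thermal voltage, about 25.9 mV at room temperature. A quick check with standard constants:

```python
# D / mu_q = k_B T / q (thermal voltage) in the Maxwell-Boltzmann limit.
k_B = 1.380649e-23    # Boltzmann constant [J/K]
q = 1.602176634e-19   # elementary charge [C]
T = 300.0             # room temperature [K] (assumed)

V_T = k_B * T / q     # thermal voltage [V], about 0.0259 V
```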
=== Nernst–Einstein equation ===
By replacing the diffusivities in the expressions of electric ionic mobilities of the cations and anions from the expressions of the equivalent conductivity of an electrolyte the Nernst–Einstein equation is derived:
{\displaystyle \Lambda _{e}={\frac {z_{i}^{2}F^{2}}{RT}}(D_{+}+D_{-}).}
where R is the gas constant.
== Proof of the general case ==
The proof of the Einstein relation can be found in many references, for example see the work of Ryogo Kubo.
Suppose some fixed, external potential energy {\displaystyle U} generates a conservative force {\displaystyle F(\mathbf {x} )=-\nabla U(\mathbf {x} )} (for example, an electric force) on a particle located at a given position {\displaystyle \mathbf {x} }. We assume that the particle would respond by moving with velocity {\displaystyle v(\mathbf {x} )=\mu (\mathbf {x} )F(\mathbf {x} )} (see Drag (physics)). Now assume that there are a large number of such particles, with local concentration {\displaystyle \rho (\mathbf {x} )} as a function of the position. After some time, equilibrium will be established: particles will pile up around the areas with lowest potential energy {\displaystyle U}, but still will be spread out to some extent because of diffusion. At equilibrium, there is no net flow of particles: the tendency of particles to get pulled towards lower {\displaystyle U}, called the drift current, perfectly balances the tendency of particles to spread out due to diffusion, called the diffusion current (see drift-diffusion equation).
The net flux of particles due to the drift current is
{\displaystyle \mathbf {J} _{\mathrm {drift} }(\mathbf {x} )=\mu (\mathbf {x} )F(\mathbf {x} )\rho (\mathbf {x} )=-\rho (\mathbf {x} )\mu (\mathbf {x} )\nabla U(\mathbf {x} ),}
i.e., the number of particles flowing past a given position equals the particle concentration times the average velocity.
The flow of particles due to the diffusion current is, by Fick's law,
{\displaystyle \mathbf {J} _{\mathrm {diffusion} }(\mathbf {x} )=-D(\mathbf {x} )\nabla \rho (\mathbf {x} ),}
where the minus sign means that particles flow from higher to lower concentration.
Now consider the equilibrium condition. First, there is no net flow, i.e. {\displaystyle \mathbf {J} _{\mathrm {drift} }+\mathbf {J} _{\mathrm {diffusion} }=0}. Second, for non-interacting point particles, the equilibrium density {\displaystyle \rho } is solely a function of the local potential energy {\displaystyle U}, i.e. if two locations have the same {\displaystyle U} then they will also have the same {\displaystyle \rho } (e.g. see Maxwell-Boltzmann statistics as discussed below). That means, applying the chain rule,
{\displaystyle \nabla \rho ={\frac {\mathrm {d} \rho }{\mathrm {d} U}}\nabla U.}
Therefore, at equilibrium:
{\displaystyle 0=\mathbf {J} _{\mathrm {drift} }+\mathbf {J} _{\mathrm {diffusion} }=-\mu \rho \nabla U-D\nabla \rho =\left(-\mu \rho -D{\frac {\mathrm {d} \rho }{\mathrm {d} U}}\right)\nabla U.}
As this expression holds at every position {\displaystyle \mathbf {x} }, it implies the general form of the Einstein relation:
{\displaystyle D=-\mu {\frac {\rho }{\frac {\mathrm {d} \rho }{\mathrm {d} U}}}.}
The relation between {\displaystyle \rho } and {\displaystyle U} for classical particles can be modeled through Maxwell-Boltzmann statistics
{\displaystyle \rho (\mathbf {x} )=Ae^{-{\frac {U(\mathbf {x} )}{k_{\text{B}}T}}},}
where {\displaystyle A} is a constant related to the total number of particles. Therefore
{\displaystyle {\frac {\mathrm {d} \rho }{\mathrm {d} U}}=-{\frac {1}{k_{\text{B}}T}}\rho .}
Under this assumption, plugging this equation into the general Einstein relation gives:
{\displaystyle D=-\mu {\frac {\rho }{\frac {\mathrm {d} \rho }{\mathrm {d} U}}}=\mu k_{\text{B}}T,}
which corresponds to the classical Einstein relation.
== See also ==
Smoluchowski factor
Conductivity (electrolytic)
Stokes radius
Ion transport number
== References ==
== External links ==
Einstein relation calculators
ion diffusivity
In fluid dynamics, the Hagen–Poiseuille equation, also known as the Hagen–Poiseuille law, Poiseuille law or Poiseuille equation, is a physical law that gives the pressure drop in an incompressible and Newtonian fluid in laminar flow flowing through a long cylindrical pipe of constant cross section.
It can be successfully applied to air flow in lung alveoli, or the flow through a drinking straw or through a hypodermic needle. It was experimentally derived independently by Jean Léonard Marie Poiseuille in 1838 and Gotthilf Heinrich Ludwig Hagen, and published by Hagen in 1839 and then by Poiseuille in 1840–41 and 1846. The theoretical justification of the Poiseuille law was given by George Stokes in 1845.
The assumptions of the equation are that the fluid is incompressible and Newtonian; the flow is laminar through a pipe of constant circular cross-section that is substantially longer than its diameter; and there is no acceleration of fluid in the pipe. For velocities and pipe diameters above a threshold, actual fluid flow is not laminar but turbulent, leading to larger pressure drops than calculated by the Hagen–Poiseuille equation.
Poiseuille's equation describes the pressure drop due to the viscosity of the fluid; other types of pressure drops may still occur in a fluid. For example, the pressure needed to drive a viscous fluid up against gravity would contain both that as needed in Poiseuille's law plus that as needed in Bernoulli's equation, such that any point in the flow would have a pressure greater than zero (otherwise no flow would happen).
Another example is when blood flows into a narrower constriction, its speed will be greater than in a larger diameter (due to continuity of volumetric flow rate), and its pressure will be lower than in a larger diameter (due to Bernoulli's equation). However, the viscosity of blood will cause additional pressure drop along the direction of flow, which is proportional to length traveled (as per Poiseuille's law). Both effects contribute to the actual pressure drop.
== Equation ==
In standard fluid-kinetics notation:
{\displaystyle \Delta p={\frac {8\mu LQ}{\pi R^{4}}}={\frac {8\pi \mu LQ}{A^{2}}},}
where
Δp is the pressure difference between the two ends,
L is the length of pipe,
μ is the dynamic viscosity,
Q is the volumetric flow rate,
R is the pipe radius,
A is the cross-sectional area of pipe.
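In code, the two equivalent forms of the equation are one-line computations. The values below are illustrative assumptions (a water-like viscosity through a narrow pipe), not data from the article.

```python
import math

def hagen_poiseuille_dp(mu, L, Q, R):
    """Pressure drop 8*mu*L*Q / (pi*R^4) for laminar flow in a circular pipe (SI units)."""
    return 8 * mu * L * Q / (math.pi * R**4)

def hagen_poiseuille_dp_area(mu, L, Q, A):
    """Equivalent form 8*pi*mu*L*Q / A^2 in terms of the cross-sectional area."""
    return 8 * math.pi * mu * L * Q / A**2

# Assumed illustrative values: water-like viscosity, 1 m pipe, 1 cm radius, 1 L/s.
dp = hagen_poiseuille_dp(mu=1e-3, L=1.0, Q=1e-3, R=0.01)
```

Substituting A = πR² into the second form recovers the first, so both return the same value.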
The equation does not hold close to the pipe entrance.
The equation fails in the limit of low viscosity, wide and/or short pipe. Low viscosity or a wide pipe may result in turbulent flow, making it necessary to use more complex models, such as the Darcy–Weisbach equation. The ratio of length to radius of a pipe should be greater than 1/48 of the Reynolds number for the Hagen–Poiseuille law to be valid. If the pipe is too short, the Hagen–Poiseuille equation may result in unphysically high flow rates; the flow is bounded by Bernoulli's principle, under less restrictive conditions, by
{\displaystyle {\begin{aligned}\Delta p={\frac {1}{2}}\rho {\overline {v}}_{\text{max}}^{2}&={\frac {1}{2}}\rho \left({\frac {Q_{\text{max}}}{\pi R^{2}}}\right)^{2}\\\Rightarrow \quad Q_{\max }{}&=\pi R^{2}{\sqrt {\frac {2\Delta p}{\rho }}},\end{aligned}}}
because it is impossible to have negative (absolute) pressure (not to be confused with gauge pressure) in an incompressible flow.
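A short sketch of this validity check, with assumed water-like parameters: for a long pipe the Poiseuille prediction stays below the Bernoulli bound, while for a very short pipe it exceeds the bound and the equation no longer applies.

```python
import math

def q_poiseuille(dp, mu, L, R):
    """Hagen-Poiseuille flow rate for a given pressure drop."""
    return math.pi * R**4 * dp / (8 * mu * L)

def q_max_bernoulli(dp, rho, R):
    """Upper bound pi*R^2*sqrt(2*dp/rho) from Bernoulli's principle."""
    return math.pi * R**2 * math.sqrt(2 * dp / rho)

# Assumed values: dp = 10 kPa, mu = 1 mPa*s, rho = 1000 kg/m^3, R = 1 cm.
# L = 100 m (bound respected) versus L = 1 mm (bound violated).
```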
== Relation to the Darcy–Weisbach equation ==
Normally, Hagen–Poiseuille flow implies not just the relation for the pressure drop, above, but also the full solution for the laminar flow profile, which is parabolic. However, the result for the pressure drop can be extended to turbulent flow by inferring an effective turbulent viscosity in the case of turbulent flow, even though the flow profile in turbulent flow is strictly speaking not actually parabolic. In both cases, laminar or turbulent, the pressure drop is related to the stress at the wall, which determines the so-called friction factor. The wall stress can be determined phenomenologically by the Darcy–Weisbach equation in the field of hydraulics, given a relationship for the friction factor in terms of the Reynolds number. In the case of laminar flow, for a circular cross section:
{\displaystyle \Lambda ={\frac {64}{\mathrm {Re} }},\quad \mathrm {Re} ={\frac {\rho vd}{\mu }},}
where Re is the Reynolds number, ρ is the fluid density, and v is the mean flow velocity, which is half the maximal flow velocity in the case of laminar flow. It proves more useful to define the Reynolds number in terms of the mean flow velocity because this quantity remains well defined even in the case of turbulent flow, whereas the maximal flow velocity may not be, or in any case, it may be difficult to infer. In this form the law approximates the Darcy friction factor, the energy (head) loss factor, friction loss factor or Darcy (friction) factor Λ in the laminar flow at very low velocities in cylindrical tube. The theoretical derivation of a slightly different form of the law was made independently by Wiedman in 1856 and Neumann and E. Hagenbach in 1858 (1859, 1860). Hagenbach was the first who called this law Poiseuille's law.
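For laminar flow the friction factor follows directly from the Reynolds number; a minimal sketch with assumed water-like values:

```python
def reynolds(rho, v, d, mu):
    """Reynolds number based on the mean flow velocity and pipe diameter."""
    return rho * v * d / mu

def darcy_friction_laminar(Re):
    """Darcy friction factor 64/Re, valid only for laminar pipe flow."""
    return 64.0 / Re

# Assumed values: water at v = 5 cm/s in a 1 cm pipe gives Re = 500 (laminar).
```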
The law is also very important in hemorheology and hemodynamics, both fields of physiology.
Poiseuille's law was later in 1891 extended to turbulent flow by L. R. Wilberforce, based on Hagenbach's work.
== Derivation ==
The Hagen–Poiseuille equation can be derived from the Navier–Stokes equations. The laminar flow through a pipe of uniform (circular) cross-section is known as Hagen–Poiseuille flow. The equations governing the Hagen–Poiseuille flow can be derived directly from the Navier–Stokes momentum equations in 3D cylindrical coordinates (r,θ,x) by making the following set of assumptions:
The flow is steady ( ∂.../∂t = 0 ).
The radial and azimuthal components of the fluid velocity are zero ( ur = uθ = 0 ).
The flow is axisymmetric ( ∂.../∂θ = 0 ).
The flow is fully developed ( ∂ux/∂x = 0 ). Here, however, this can be derived from mass conservation together with the above assumptions.
Then the angular equation in the momentum equations and the continuity equation are identically satisfied. The radial momentum equation reduces to ∂p/∂r = 0, i.e., the pressure p is a function of the axial coordinate x only. For brevity, use u instead of
{\displaystyle u_{x}}
. The axial momentum equation reduces to
{\displaystyle {\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial u}{\partial r}}\right)={\frac {1}{\mu }}{\frac {\mathrm {d} p}{\mathrm {d} x}}}
where μ is the dynamic viscosity of the fluid. In the above equation, the left-hand side is only a function of r and the right-hand side term is only a function of x, implying that both terms must be the same constant. Evaluating this constant is straightforward. If we take the length of the pipe to be L and denote the pressure difference between the two ends of the pipe by Δp (high pressure minus low pressure), then the constant is simply
{\displaystyle -{\frac {\mathrm {d} p}{\mathrm {d} x}}={\frac {\Delta p}{L}}=G}
defined such that G is positive. The solution is
{\displaystyle u=-{\frac {Gr^{2}}{4\mu }}+c_{1}\ln r+c_{2}}
Since u needs to be finite at r = 0, c1 = 0. The no slip boundary condition at the pipe wall requires that u = 0 at r = R (radius of the pipe), which yields c2 = GR2/4μ. Thus we have finally the following parabolic velocity profile:
{\displaystyle u={\frac {G}{4\mu }}\left(R^{2}-r^{2}\right).}
The maximum velocity occurs at the pipe centerline (r = 0), umax = GR2/4μ. The average velocity can be obtained by integrating over the pipe cross section,
{\displaystyle {u}_{\mathrm {avg} }={\frac {1}{\pi R^{2}}}\int _{0}^{R}2\pi ru\mathrm {d} r={\tfrac {1}{2}}{u}_{\mathrm {max} }.}
The easily measurable quantity in experiments is the volumetric flow rate Q = πR2 uavg. Rearrangement of this gives the Hagen–Poiseuille equation
{\displaystyle \Delta p={\frac {8\mu QL}{\pi R^{4}}}.}
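The relation u_avg = u_max/2 used in the last step can be verified by integrating the parabolic profile numerically (midpoint rule; arbitrary parameter values):

```python
import math

def u_profile(r, G, mu, R):
    """Parabolic Hagen-Poiseuille profile u = G*(R^2 - r^2)/(4*mu)."""
    return G / (4 * mu) * (R**2 - r**2)

def u_avg_numeric(G, mu, R, n=50000):
    """Average velocity (1/(pi R^2)) * integral of 2*pi*r*u dr, midpoint rule."""
    dr = R / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        total += 2 * math.pi * r * u_profile(r, G, mu, R) * dr
    return total / (math.pi * R**2)
```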
=== Startup of Poiseuille flow in a pipe ===
When a constant pressure gradient G = −dp/dx is applied between two ends of a long pipe, the flow will not immediately obtain Poiseuille profile, rather it develops through time and reaches the Poiseuille profile at steady state. The Navier–Stokes equations reduce to
{\displaystyle {\frac {\partial u}{\partial t}}={\frac {G}{\rho }}+\nu \left({\frac {\partial ^{2}u}{\partial r^{2}}}+{\frac {1}{r}}{\frac {\partial u}{\partial r}}\right)}
with initial and boundary conditions,
{\displaystyle u(r,0)=0,\quad u(R,t)=0.}
The velocity distribution is given by
{\displaystyle u(r,t)={\frac {G}{4\mu }}\left(R^{2}-r^{2}\right)-{\frac {2GR^{2}}{\mu }}\sum _{n=1}^{\infty }{\frac {1}{\lambda _{n}^{3}}}{\frac {J_{0}(\lambda _{n}r/R)}{J_{1}(\lambda _{n})}}e^{-\lambda _{n}^{2}\nu t/R^{2}},\quad J_{0}\left(\lambda _{n}\right)=0}
where J0(λnr/R) is the Bessel function of the first kind of order zero and λn are the positive roots of this function and J1(λn) is the Bessel function of the first kind of order one. As t → ∞, Poiseuille solution is recovered.
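A sketch of the transient solution, truncated after the first five roots of J0 (the roots are standard tabulated values; J0 and J1 are evaluated from their power series, which is adequate for small arguments). At t = 0 the truncated series nearly cancels the steady profile, and for large t the steady centerline velocity GR²/4μ is recovered.

```python
import math

def J0(x):
    """Bessel function of the first kind, order 0, by power series (x up to ~20)."""
    s, term = 0.0, 1.0
    for k in range(1, 40):
        s += term
        term *= -(x / 2) ** 2 / k**2
    return s

def J1(x):
    """Bessel function of the first kind, order 1, by power series."""
    s, term = 0.0, x / 2
    for k in range(1, 40):
        s += term
        term *= -(x / 2) ** 2 / (k * (k + 1))
    return s

# First five positive roots of J0 (standard tabulated values).
LAM = [2.404826, 5.520078, 8.653728, 11.791534, 14.930918]

def u_startup(r, t, G, mu, rho, R):
    """Transient startup profile, series truncated at five terms."""
    nu = mu / rho
    steady = G / (4 * mu) * (R**2 - r**2)
    series = sum(
        J0(lam * r / R) / (lam**3 * J1(lam)) * math.exp(-lam**2 * nu * t / R**2)
        for lam in LAM
    )
    return steady - 2 * G * R**2 / mu * series
```

With unit parameters, u(0, 0) differs from zero only by the truncation error of the five-term series, and u(0, t) approaches the steady value 0.25 once t exceeds a few viscous time scales R²/ν.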
== Poiseuille flow in an annular section ==
If R1 is the inner cylinder radii and R2 is the outer cylinder radii, with constant applied pressure gradient between the two ends G = −dp/dx, the velocity distribution and the volume flux through the annular pipe are
{\displaystyle {\begin{aligned}u(r)&={\frac {G}{4\mu }}\left(R_{1}^{2}-r^{2}\right)+{\frac {G}{4\mu }}\left(R_{2}^{2}-R_{1}^{2}\right){\frac {\ln r/R_{1}}{\ln R_{2}/R_{1}}},\\[6pt]Q&={\frac {G\pi }{8\mu }}\left[R_{2}^{4}-R_{1}^{4}-{\frac {\left(R_{2}^{2}-R_{1}^{2}\right)^{2}}{\ln R_{2}/R_{1}}}\right].\end{aligned}}}
When R2 = R, R1 = 0, the original problem is recovered.
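The annular solution can be checked directly: the velocity vanishes on both walls (no slip), is positive in between, and the flux is positive. Parameter values below are arbitrary.

```python
import math

def u_annulus(r, G, mu, R1, R2):
    """Velocity in an annulus R1 <= r <= R2 under constant pressure gradient G."""
    a = G / (4 * mu)
    return a * (R1**2 - r**2) + a * (R2**2 - R1**2) * math.log(r / R1) / math.log(R2 / R1)

def q_annulus(G, mu, R1, R2):
    """Volume flux through the annular section."""
    c = (R2**2 - R1**2) ** 2 / math.log(R2 / R1)
    return G * math.pi / (8 * mu) * (R2**4 - R1**4 - c)
```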
== Poiseuille flow in a pipe with an oscillating pressure gradient ==
Flow through pipes with an oscillating pressure gradient finds applications in blood flow through large arteries. The imposed pressure gradient is given by
{\displaystyle {\frac {\partial p}{\partial x}}=-G-\alpha \cos \omega t-\beta \sin \omega t}
where G, α and β are constants and ω is the frequency. The velocity field is given by
{\displaystyle u(r,t)={\frac {G}{4\mu }}\left(R^{2}-r^{2}\right)+[\alpha F_{2}+\beta (F_{1}-1)]{\frac {\cos \omega t}{\rho \omega }}+[\beta F_{2}-\alpha (F_{1}-1)]{\frac {\sin \omega t}{\rho \omega }}}
where
{\displaystyle {\begin{aligned}F_{1}(kr)&={\frac {\mathrm {ber} (kr)\mathrm {ber} (kR)+\mathrm {bei} (kr)\mathrm {bei} (kR)}{\mathrm {ber} ^{2}(kR)+\mathrm {bei} ^{2}(kR)}},\\[6pt]F_{2}(kr)&={\frac {\mathrm {ber} (kr)\mathrm {bei} (kR)-\mathrm {bei} (kr)\mathrm {ber} (kR)}{\mathrm {ber} ^{2}(kR)+\mathrm {bei} ^{2}(kR)}},\end{aligned}}}
where ber and bei are the Kelvin functions and k2 = ρω/μ.
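ber and bei have rapidly converging power series, so the no-slip condition u(R, t) = 0 can be verified directly: at r = R one has F1 = 1 and F2 = 0, so the oscillatory terms vanish along with the steady profile. The sketch below evaluates the series for small arguments; F1 and F2 take the evaluation point and the wall value as explicit arguments, a convention introduced here for clarity.

```python
import math

def ber(x):
    """Kelvin function ber(x): sum of (-1)^k (x/2)^(4k) / ((2k)!)^2."""
    s, term = 0.0, 1.0
    for k in range(1, 25):
        s += term
        term *= -(x / 2) ** 4 / ((2 * k - 1) * (2 * k)) ** 2
    return s

def bei(x):
    """Kelvin function bei(x): sum of (-1)^k (x/2)^(4k+2) / ((2k+1)!)^2."""
    s, term = 0.0, (x / 2) ** 2
    for k in range(1, 25):
        s += term
        term *= -(x / 2) ** 4 / ((2 * k) * (2 * k + 1)) ** 2
    return s

def F1(kr, kR):
    d = ber(kR) ** 2 + bei(kR) ** 2
    return (ber(kr) * ber(kR) + bei(kr) * bei(kR)) / d

def F2(kr, kR):
    d = ber(kR) ** 2 + bei(kR) ** 2
    return (ber(kr) * bei(kR) - bei(kr) * ber(kR)) / d

def u_oscillating(r, t, G, alpha, beta, mu, rho, omega, R):
    """Velocity field for the oscillating pressure gradient; k^2 = rho*omega/mu."""
    k = math.sqrt(rho * omega / mu)
    f1, f2 = F1(k * r, k * R), F2(k * r, k * R)
    steady = G / (4 * mu) * (R**2 - r**2)
    return (steady
            + (alpha * f2 + beta * (f1 - 1)) * math.cos(omega * t) / (rho * omega)
            + (beta * f2 - alpha * (f1 - 1)) * math.sin(omega * t) / (rho * omega))
```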
== Plane Poiseuille flow ==
Plane Poiseuille flow is flow created between two infinitely long parallel plates, separated by a distance h with a constant pressure gradient G = −dp/dx is applied in the direction of flow. The flow is essentially unidirectional because of infinite length. The Navier–Stokes equations reduce to
{\displaystyle {\frac {\mathrm {d} ^{2}u}{\mathrm {d} y^{2}}}=-{\frac {G}{\mu }}}
with no-slip condition on both walls
{\displaystyle u(0)=0,\quad u(h)=0}
Therefore, the velocity distribution and the volume flow rate per unit length are
{\displaystyle u(y)={\frac {G}{2\mu }}y(h-y),\quad Q={\frac {Gh^{3}}{12\mu }}.}
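Integrating the parabolic profile across the gap reproduces Q = Gh³/12μ; a minimal numerical check with arbitrary values:

```python
def u_plane(y, G, mu, h):
    """Plane Poiseuille profile between plates at y = 0 and y = h."""
    return G / (2 * mu) * y * (h - y)

def q_plane_numeric(G, mu, h, n=100000):
    """Flow rate per unit width by midpoint integration of u(y)."""
    dy = h / n
    return sum(u_plane((i + 0.5) * dy, G, mu, h) for i in range(n)) * dy
```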
== Poiseuille flow through some non-circular cross-sections ==
Joseph Boussinesq derived the velocity profile and volume flow rate in 1868 for rectangular channel and tubes of equilateral triangular cross-section and for elliptical cross-section. Joseph Proudman derived the same for isosceles triangles in 1914. Let G = −dp/dx be the constant pressure gradient acting in direction parallel to the motion.
The velocity and the volume flow rate in a rectangular channel of height 0 ≤ y ≤ h and width 0 ≤ z ≤ l are
{\displaystyle {\begin{aligned}u(y,z)&={\frac {G}{2\mu }}y(h-y)-{\frac {4Gh^{2}}{\mu \pi ^{3}}}\sum _{n=1}^{\infty }{\frac {1}{(2n-1)^{3}}}{\frac {\sinh(\beta _{n}z)+\sinh[\beta _{n}(l-z)]}{\sinh(\beta _{n}l)}}\sin(\beta _{n}y),\quad \beta _{n}={\frac {(2n-1)\pi }{h}},\\[6pt]Q&={\frac {Gh^{3}l}{12\mu }}-{\frac {16Gh^{4}}{\pi ^{5}\mu }}\sum _{n=1}^{\infty }{\frac {1}{(2n-1)^{5}}}{\frac {\cosh(\beta _{n}l)-1}{\sinh(\beta _{n}l)}}.\end{aligned}}}
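The flow-rate series is straightforward to sum; using the identity (cosh x − 1)/sinh x = tanh(x/2) avoids overflow for wide channels. For a square duct (l = h) the sum gives Q ≈ 0.0351 Gh⁴/μ, and for l ≫ h the flow per unit width approaches the plane-channel value Gh³/12μ. (A sketch in arbitrary units.)

```python
import math

def q_rectangular(G, mu, h, l, terms=200):
    """Volume flow rate in a rectangular duct of height h and width l."""
    s = 0.0
    for n in range(1, terms + 1):
        beta = (2 * n - 1) * math.pi / h
        # (cosh(x) - 1) / sinh(x) == tanh(x / 2); tanh never overflows
        s += math.tanh(beta * l / 2) / (2 * n - 1) ** 5
    return G * h**3 * l / (12 * mu) - 16 * G * h**4 / (math.pi**5 * mu) * s
```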
The velocity and the volume flow rate of tube with equilateral triangular cross-section of side length 2h/√3 are
{\displaystyle {\begin{aligned}u(y,z)&=-{\frac {G}{4\mu h}}(y-h)\left(y^{2}-3z^{2}\right),\\[6pt]Q&={\frac {Gh^{4}}{60{\sqrt {3}}\mu }}.\end{aligned}}}
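Since u is proportional to (y − h)(y² − 3z²), it vanishes on all three sides of the triangle (the line y = h and the lines y = ±√3 z); a one-line check:

```python
def u_triangle(y, z, G, mu, h):
    """Velocity in a tube of equilateral triangular cross-section (side 2h/sqrt(3))."""
    return -G / (4 * mu * h) * (y - h) * (y**2 - 3 * z**2)
```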
The velocity and the volume flow rate in the right-angled isosceles triangle y = π, y ± z = 0 are
{\displaystyle {\begin{aligned}u(y,z)&={\frac {G}{2\mu }}(y+z)(\pi -y)-{\frac {G}{\pi \mu }}\sum _{n=1}^{\infty }{\frac {1}{\beta _{n}^{3}\sinh(2\pi \beta _{n})}}\left\{\sinh[\beta _{n}(2\pi -y+z)]\sin[\beta _{n}(y+z)]-\sinh[\beta _{n}(y+z)]\sin[\beta _{n}(y-z)]\right\},\quad \beta _{n}=n+{\tfrac {1}{2}},\\[6pt]Q&={\frac {G\pi ^{4}}{12\mu }}-{\frac {G}{2\pi \mu }}\sum _{n=1}^{\infty }{\frac {1}{\beta _{n}^{5}}}\left[\coth(2\pi \beta _{n})+\csc(2\pi \beta _{n})\right].\end{aligned}}}
The velocity distribution for tubes of elliptical cross-section with semiaxes a and b is
{\displaystyle {\begin{aligned}u(y,z)&={\frac {G}{2\mu \left({\frac {1}{a^{2}}}+{\frac {1}{b^{2}}}\right)}}\left(1-{\frac {y^{2}}{a^{2}}}-{\frac {z^{2}}{b^{2}}}\right),\\[6pt]Q&={\frac {\pi Ga^{3}b^{3}}{4\mu \left(a^{2}+b^{2}\right)}}.\end{aligned}}}
Here, when a = b, Poiseuille flow for circular pipe is recovered and when a → ∞, plane Poiseuille flow is recovered. More explicit solutions with cross-sections such as snail-shaped sections, sections having the shape of a notch circle following a semicircle, annular sections between homofocal ellipses, annular sections between non-concentric circles are also available, as reviewed by Ratip Berker.
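The circular-pipe limit a = b is easy to confirm in code: the elliptical flow rate reduces to πGR⁴/8μ.

```python
import math

def q_ellipse(G, mu, a, b):
    """Flow rate through an elliptical cross-section with semiaxes a and b."""
    return math.pi * G * a**3 * b**3 / (4 * mu * (a**2 + b**2))
```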
== Poiseuille flow through arbitrary cross-section ==
The flow through arbitrary cross-section u(y,z) satisfies the condition that u = 0 on the walls. The governing equation reduces to
{\displaystyle {\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}=-{\frac {G}{\mu }}.}
If we introduce a new dependent variable as
{\displaystyle U=u+{\frac {G}{4\mu }}\left(y^{2}+z^{2}\right),}
then it is easy to see that the problem reduces to integrating a Laplace equation
{\displaystyle {\frac {\partial ^{2}U}{\partial y^{2}}}+{\frac {\partial ^{2}U}{\partial z^{2}}}=0}
satisfying the condition
{\displaystyle U={\frac {G}{4\mu }}\left(y^{2}+z^{2}\right)}
on the wall.
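Equivalently, one can solve the original Poisson problem directly on a grid. The sketch below uses a plain Gauss–Seidel iteration on a square cross-section and compares the resulting flow rate with the series value ≈ 0.0351 Gh⁴/μ for a square duct; the grid size and sweep count are arbitrary choices.

```python
def poiseuille_square(G=1.0, mu=1.0, h=1.0, n=24, sweeps=2000):
    """Gauss-Seidel solution of u_yy + u_zz = -G/mu with u = 0 on the walls
    of a square cross-section, followed by the flow rate as a grid sum."""
    d = h / n
    u = [[0.0] * (n + 1) for _ in range(n + 1)]
    for _ in range(sweeps):
        for i in range(1, n):
            for j in range(1, n):
                u[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                  + u[i][j - 1] + u[i][j + 1]
                                  + d * d * G / mu)
    # Boundary values are zero, so a plain sum is the trapezoidal integral.
    return sum(u[i][j] for i in range(n + 1) for j in range(n + 1)) * d * d
```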
== Poiseuille's equation for an ideal isothermal gas ==
For a compressible fluid in a tube the volumetric flow rate Q(x) and the axial velocity are not constant along the tube; but the mass flow rate is constant along the tube length. The volumetric flow rate is usually expressed at the outlet pressure. As fluid is compressed or expanded, work is done and the fluid is heated or cooled. This means that the flow rate depends on the heat transfer to and from the fluid. For an ideal gas in the isothermal case, where the temperature of the fluid is permitted to equilibrate with its surroundings, an approximate relation for the pressure drop can be derived. Using ideal gas equation of state for constant temperature process (i.e.,
{\displaystyle p/\rho }
is constant) and the conservation of mass flow rate (i.e.,
{\displaystyle {\dot {m}}=\rho Q}
is constant), the relation Qp = Q1p1 = Q2p2 can be obtained. Over a short section of the pipe, the gas flowing through the pipe can be assumed to be incompressible so that Poiseuille law can be used locally,
{\displaystyle -{\frac {\mathrm {d} p}{\mathrm {d} x}}={\frac {8\mu Q}{\pi R^{4}}}={\frac {8\mu Q_{2}p_{2}}{\pi pR^{4}}}\quad \Rightarrow \quad -p{\frac {\mathrm {d} p}{\mathrm {d} x}}={\frac {8\mu Q_{2}p_{2}}{\pi R^{4}}}.}
Here we assumed the local pressure gradient is not too great to have any compressibility effects. Though locally we ignored the effects of pressure variation due to density variation, over long distances these effects are taken into account. Since μ is independent of pressure, the above equation can be integrated over the length L to give
{\displaystyle p_{1}^{2}-p_{2}^{2}={\frac {16\mu LQ_{2}p_{2}}{\pi R^{4}}}.}
Hence the volumetric flow rate at the pipe outlet is given by
{\displaystyle Q_{2}={\frac {\pi R^{4}}{16\mu L}}\left({\frac {p_{1}^{2}-p_{2}^{2}}{p_{2}}}\right)={\frac {\pi R^{4}\left(p_{1}-p_{2}\right)}{8\mu L}}{\frac {\left(p_{1}+p_{2}\right)}{2p_{2}}}.}
This equation can be seen as Poiseuille's law with an extra correction factor (p1 + p2)/(2p2) expressing the average pressure relative to the outlet pressure.
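The correction-factor form is an exact algebraic rearrangement, as a short check confirms (assumed values: a 2:1 pressure ratio across a narrow tube):

```python
import math

def q_outlet_gas(p1, p2, mu, L, R):
    """Outlet volumetric flow rate for isothermal ideal-gas Poiseuille flow."""
    return math.pi * R**4 * (p1**2 - p2**2) / (16 * mu * L * p2)

def q_incompressible(p1, p2, mu, L, R):
    """Ordinary Hagen-Poiseuille flow rate for the same pressure drop."""
    return math.pi * R**4 * (p1 - p2) / (8 * mu * L)

# The gas result equals the incompressible one times (p1 + p2) / (2 * p2),
# which exceeds 1 whenever p1 > p2.
```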
== Electrical circuits analogy ==
Electricity was originally understood to be a kind of fluid. This hydraulic analogy is still conceptually useful for understanding circuits. This analogy is also used to study the frequency response of fluid-mechanical networks using circuit tools, in which case the fluid network is termed a hydraulic circuit. Poiseuille's law corresponds to Ohm's law for electrical circuits, V = IR. Since the net force acting on the fluid is equal to ΔF = SΔp, where S = πr2, i.e. ΔF = πr2 ΔP, then from Poiseuille's law, it follows that
{\displaystyle \Delta F={\frac {8\mu LQ}{r^{2}}}.}
For electrical circuits, let n be the concentration of free charged particles (in m−3) and let q* be the charge of each particle (in coulombs). (For electrons, q* = e = 1.6×10−19 C.) Then nQ is the number of particles in the volume Q, and nQq* is their total charge. This is the charge that flows through the cross section per unit time, i.e. the current I. Therefore, I = nQq*. Consequently, Q = I/nq*, and
{\displaystyle \Delta F={\frac {8\mu LI}{nr^{2}q^{*}}}.}
But ΔF = Eq, where q is the total charge in the volume of the tube. The volume of the tube is equal to πr2L, so the number of charged particles in this volume is equal to nπr2L, and their total charge is q = nπr2 Lq*. Since the voltage V = EL, it follows then
{\displaystyle V={\frac {8\mu LI}{n^{2}\pi r^{4}\left(q^{*}\right)^{2}}}.}
This is exactly Ohm's law, where the resistance R = V/I is described by the formula
{\displaystyle R={\frac {8\mu L}{n^{2}\pi r^{4}\left(q^{*}\right)^{2}}}.}
It follows that the resistance R is proportional to the length L of the resistor, which is true. However, it also follows that the resistance R is inversely proportional to the fourth power of the radius r, i.e. the resistance R is inversely proportional to the second power of the cross section area S = πr2 of the resistor, which is different from the electrical formula. The electrical relation for the resistance is
R
=
ρ
L
S
,
{\displaystyle R={\frac {\rho L}{S}},}
where ρ is the resistivity; i.e. the resistance R is inversely proportional to the cross section area S of the resistor. The reason why Poiseuille's law leads to a different formula for the resistance R is the difference between the fluid flow and the electric current. Electron gas is inviscid, so its velocity does not depend on the distance to the walls of the conductor. The resistance is due to the interaction between the flowing electrons and the atoms of the conductor. Therefore, Poiseuille's law and the hydraulic analogy are useful only within certain limits when applied to electricity. Both Ohm's law and Poiseuille's law illustrate transport phenomena.
== Medical applications – intravenous access and fluid delivery ==
The Hagen–Poiseuille equation is useful in determining the vascular resistance and hence flow rate of intravenous (IV) fluids that may be achieved using various sizes of peripheral and central cannulas. The equation states that flow rate is proportional to the radius to the fourth power, meaning that a small increase in the internal diameter of the cannula yields a significant increase in flow rate of IV fluids. The radius of IV cannulas is typically measured in "gauge", which is inversely proportional to the radius. Peripheral IV cannulas are typically available as (from large to small) 14G, 16G, 18G, 20G, 22G, 26G. As an example, assuming cannula lengths are equal, the flow of a 14G cannula is 1.73 times that of a 16G cannula, and 4.16 times that of a 20G cannula. It also states that flow is inversely proportional to length, meaning that longer lines have lower flow rates. This is important to remember as in an emergency, many clinicians favor shorter, larger catheters compared to longer, narrower catheters. While of less clinical importance, an increased change in pressure (∆p) — such as by pressurizing the bag of fluid, squeezing the bag, or hanging the bag higher (relative to the level of the cannula) — can be used to speed up flow rate. It is also useful to understand that viscous fluids will flow slower (e.g. in blood transfusion).
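The r⁴ dependence is the clinically important point: halving the internal diameter cuts the achievable flow sixteen-fold at fixed length and driving pressure. A sketch with hypothetical diameters (gauge-to-diameter conversions vary by manufacturer and are not taken from the article):

```python
def flow_ratio(d_a, d_b):
    """Relative flow of two cannulas of equal length under the same pressure,
    from Q proportional to (internal diameter)^4. Diameters in any common unit."""
    return (d_a / d_b) ** 4

# Hypothetical example: doubling the internal diameter gives 16x the flow.
```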
== See also ==
Couette flow
Darcy's law
Pulse
Wave
Hydraulic circuit
== Cited references ==
== References ==
Sutera, S. P.; Skalak, R. (1993). "The history of Poiseuille's law". Annual Review of Fluid Mechanics. 25: 1–19. Bibcode:1993AnRFM..25....1S. doi:10.1146/annurev.fl.25.010193.000245.
Pfitzner, J. (1976). "Poiseuille and his law". Anaesthesia. 31 (2): 273–275. doi:10.1111/j.1365-2044.1976.tb11804.x. PMID 779509.
Bennett, C. O.; Myers, J. E. (1962). Momentum, Heat, and Mass Transfer. McGraw-Hill.
== External links ==
Poiseuille's law for power-law non-Newtonian fluid
Poiseuille's law in a slightly tapered tube
Hagen–Poiseuille equation calculator | Wikipedia/Hagen-Poiseuille_equation |
In fluid dynamics, the Hagen–Poiseuille equation, also known as the Hagen–Poiseuille law, Poiseuille law or Poiseuille equation, is a physical law that gives the pressure drop in an incompressible and Newtonian fluid in laminar flow flowing through a long cylindrical pipe of constant cross section.
It can be successfully applied to air flow in lung alveoli, or the flow through a drinking straw or through a hypodermic needle. It was experimentally derived independently by Jean Léonard Marie Poiseuille in 1838 and Gotthilf Heinrich Ludwig Hagen, and published by Hagen in 1839 and then by Poiseuille in 1840–41 and 1846. The theoretical justification of the Poiseuille law was given by George Stokes in 1845.
The assumptions of the equation are that the fluid is incompressible and Newtonian; the flow is laminar through a pipe of constant circular cross-section that is substantially longer than its diameter; and there is no acceleration of fluid in the pipe. For velocities and pipe diameters above a threshold, actual fluid flow is not laminar but turbulent, leading to larger pressure drops than calculated by the Hagen–Poiseuille equation.
Poiseuille's equation describes the pressure drop due to the viscosity of the fluid; other types of pressure drops may still occur in a fluid (see a demonstration here). For example, the pressure needed to drive a viscous fluid up against gravity would contain both that as needed in Poiseuille's law plus that as needed in Bernoulli's equation, such that any point in the flow would have a pressure greater than zero (otherwise no flow would happen).
Another example is when blood flows into a narrower constriction, its speed will be greater than in a larger diameter (due to continuity of volumetric flow rate), and its pressure will be lower than in a larger diameter (due to Bernoulli's equation). However, the viscosity of blood will cause additional pressure drop along the direction of flow, which is proportional to length traveled (as per Poiseuille's law). Both effects contribute to the actual pressure drop.
== Equation ==
In standard fluid-kinetics notation:
Δ
p
=
8
μ
L
Q
π
R
4
=
8
π
μ
L
Q
A
2
,
{\displaystyle \Delta p={\frac {8\mu LQ}{\pi R^{4}}}={\frac {8\pi \mu LQ}{A^{2}}},}
where
Δp is the pressure difference between the two ends,
L is the length of pipe,
μ is the dynamic viscosity,
Q is the volumetric flow rate,
R is the pipe radius,
A is the cross-sectional area of pipe.
The equation does not hold close to the pipe entrance.: 3
The equation fails in the limit of low viscosity, wide and/or short pipe. Low viscosity or a wide pipe may result in turbulent flow, making it necessary to use more complex models, such as the Darcy–Weisbach equation. The ratio of length to radius of a pipe should be greater than 1/48 of the Reynolds number for the Hagen–Poiseuille law to be valid. If the pipe is too short, the Hagen–Poiseuille equation may result in unphysically high flow rates; the flow is bounded by Bernoulli's principle, under less restrictive conditions, by
Δ
p
=
1
2
ρ
v
¯
max
2
=
1
2
ρ
(
Q
max
π
R
2
)
2
⇒
Q
max
=
π
R
2
2
Δ
p
ρ
,
{\displaystyle {\begin{aligned}\Delta p={\frac {1}{2}}\rho {\overline {v}}_{\text{max}}^{2}&={\frac {1}{2}}\rho \left({\frac {Q_{\text{max}}}{\pi R^{2}}}\right)^{2}\\\Rightarrow \quad Q_{\max }{}&=\pi R^{2}{\sqrt {\frac {2\Delta p}{\rho }}},\end{aligned}}}
because it is impossible to have negative (absolute) pressure (not to be confused with gauge pressure) in an incompressible flow.
== Relation to the Darcy–Weisbach equation ==
Normally, Hagen–Poiseuille flow implies not just the relation for the pressure drop, above, but also the full solution for the laminar flow profile, which is parabolic. However, the result for the pressure drop can be extended to turbulent flow by inferring an effective turbulent viscosity in the case of turbulent flow, even though the flow profile in turbulent flow is strictly speaking not actually parabolic. In both cases, laminar or turbulent, the pressure drop is related to the stress at the wall, which determines the so-called friction factor. The wall stress can be determined phenomenologically by the Darcy–Weisbach equation in the field of hydraulics, given a relationship for the friction factor in terms of the Reynolds number. In the case of laminar flow, for a circular cross section:
Λ
=
64
R
e
,
R
e
=
ρ
v
d
μ
,
{\displaystyle \Lambda ={\frac {64}{\mathrm {Re} }},\quad \mathrm {Re} ={\frac {\rho vd}{\mu }},}
where Re is the Reynolds number, ρ is the fluid density, and v is the mean flow velocity, which is half the maximal flow velocity in the case of laminar flow. It proves more useful to define the Reynolds number in terms of the mean flow velocity because this quantity remains well defined even in the case of turbulent flow, whereas the maximal flow velocity may not be, or in any case, it may be difficult to infer. In this form the law approximates the Darcy friction factor, the energy (head) loss factor, friction loss factor or Darcy (friction) factor Λ in the laminar flow at very low velocities in cylindrical tube. The theoretical derivation of a slightly different form of the law was made independently by Wiedman in 1856 and Neumann and E. Hagenbach in 1858 (1859, 1860). Hagenbach was the first who called this law Poiseuille's law.
The law is also very important in hemorheology and hemodynamics, both fields of physiology.
Poiseuille's law was later in 1891 extended to turbulent flow by L. R. Wilberforce, based on Hagenbach's work.
== Derivation ==
The Hagen–Poiseuille equation can be derived from the Navier–Stokes equations. The laminar flow through a pipe of uniform (circular) cross-section is known as Hagen–Poiseuille flow. The equations governing the Hagen–Poiseuille flow can be derived directly from the Navier–Stokes momentum equations in 3D cylindrical coordinates (r,θ,x) by making the following set of assumptions:
The flow is steady ( ∂.../∂t = 0 ).
The radial and azimuthal components of the fluid velocity are zero ( ur = uθ = 0 ).
The flow is axisymmetric ( ∂.../∂θ = 0 ).
The flow is fully developed ( ∂ux/∂x = 0 ). Here however, this can be proved via mass conservation, and the above assumptions.
Then the angular equation in the momentum equations and the continuity equation are identically satisfied. The radial momentum equation reduces to ∂p/∂r = 0, i.e., the pressure p is a function of the axial coordinate x only. For brevity, use u instead of
u
x
{\displaystyle u_{x}}
. The axial momentum equation reduces to
1
r
∂
∂
r
(
r
∂
u
∂
r
)
=
1
μ
d
p
d
x
{\displaystyle {\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial u}{\partial r}}\right)={\frac {1}{\mu }}{\frac {\mathrm {d} p}{\mathrm {d} x}}}
where μ is the dynamic viscosity of the fluid. In the above equation, the left-hand side is only a function of r and the right-hand side term is only a function of x, implying that both terms must be the same constant. Evaluating this constant is straightforward. If we take the length of the pipe to be L and denote the pressure difference between the two ends of the pipe by Δp (high pressure minus low pressure), then the constant is simply
−
d
p
d
x
=
Δ
p
L
=
G
{\displaystyle -{\frac {\mathrm {d} p}{\mathrm {d} x}}={\frac {\Delta p}{L}}=G}
defined such that G is positive. The solution is
u
=
−
G
r
2
4
μ
+
c
1
ln
r
+
c
2
{\displaystyle u=-{\frac {Gr^{2}}{4\mu }}+c_{1}\ln r+c_{2}}
Since u needs to be finite at r = 0, c1 = 0. The no slip boundary condition at the pipe wall requires that u = 0 at r = R (radius of the pipe), which yields c2 = GR2/4μ. Thus we have finally the following parabolic velocity profile:
u
=
G
4
μ
(
R
2
−
r
2
)
.
{\displaystyle u={\frac {G}{4\mu }}\left(R^{2}-r^{2}\right).}
The maximum velocity occurs at the pipe centerline (r = 0), umax = GR2/4μ. The average velocity can be obtained by integrating over the pipe cross section,
u
a
v
g
=
1
π
R
2
∫
0
R
2
π
r
u
d
r
=
1
2
u
m
a
x
.
{\displaystyle {u}_{\mathrm {avg} }={\frac {1}{\pi R^{2}}}\int _{0}^{R}2\pi ru\mathrm {d} r={\tfrac {1}{2}}{u}_{\mathrm {max} }.}
The easily measurable quantity in experiments is the volumetric flow rate Q = πR2 uavg. Rearrangement of this gives the Hagen–Poiseuille equation
Δ
p
=
8
μ
Q
L
π
R
4
.
{\displaystyle \Delta p={\frac {8\mu QL}{\pi R^{4}}}.}
=== Startup of Poiseuille flow in a pipe ===
When a constant pressure gradient G = −dp/dx is applied between two ends of a long pipe, the flow will not immediately obtain Poiseuille profile, rather it develops through time and reaches the Poiseuille profile at steady state. The Navier–Stokes equations reduce to
∂
u
∂
t
=
G
ρ
+
ν
(
∂
2
u
∂
r
2
+
1
r
∂
u
∂
r
)
{\displaystyle {\frac {\partial u}{\partial t}}={\frac {G}{\rho }}+\nu \left({\frac {\partial ^{2}u}{\partial r^{2}}}+{\frac {1}{r}}{\frac {\partial u}{\partial r}}\right)}
with initial and boundary conditions,
u
(
r
,
0
)
=
0
,
u
(
R
,
t
)
=
0.
{\displaystyle u(r,0)=0,\quad u(R,t)=0.}
The velocity distribution is given by
u
(
r
,
t
)
=
G
4
μ
(
R
2
−
r
2
)
−
2
G
R
2
μ
∑
n
=
1
∞
1
λ
n
3
J
0
(
λ
n
r
/
R
)
J
1
(
λ
n
)
e
−
λ
n
2
ν
t
/
R
2
,
J
0
(
λ
n
)
=
0
{\displaystyle u(r,t)={\frac {G}{4\mu }}\left(R^{2}-r^{2}\right)-{\frac {2GR^{2}}{\mu }}\sum _{n=1}^{\infty }{\frac {1}{\lambda _{n}^{3}}}{\frac {J_{0}(\lambda _{n}r/R)}{J_{1}(\lambda _{n})}}e^{-\lambda _{n}^{2}\nu t/R^{2}},\quad J_{0}\left(\lambda _{n}\right)=0}
where J0(λnr/R) is the Bessel function of the first kind of order zero, λn are the positive roots of this function, and J1(λn) is the Bessel function of the first kind of order one. As t → ∞, the steady Poiseuille solution is recovered.
== Poiseuille flow in an annular section ==
If R1 is the inner cylinder radius and R2 is the outer cylinder radius, with a constant pressure gradient G = −dp/dx applied between the two ends, the velocity distribution and the volume flux through the annular pipe are
{\displaystyle {\begin{aligned}u(r)&={\frac {G}{4\mu }}\left(R_{1}^{2}-r^{2}\right)+{\frac {G}{4\mu }}\left(R_{2}^{2}-R_{1}^{2}\right){\frac {\ln r/R_{1}}{\ln R_{2}/R_{1}}},\\[6pt]Q&={\frac {G\pi }{8\mu }}\left[R_{2}^{4}-R_{1}^{4}-{\frac {\left(R_{2}^{2}-R_{1}^{2}\right)^{2}}{\ln R_{2}/R_{1}}}\right].\end{aligned}}}
When R2 = R, R1 = 0, the original problem is recovered.
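The annular flux formula can be cross-checked by numerically integrating the velocity profile over the cross-section; the pressure gradient, viscosity, and radii below are assumed illustrative values.

```python
import math

G, mu, R1, R2 = 100.0, 1.0e-3, 0.005, 0.01   # assumed illustrative values

def u(r):
    """Annular Poiseuille velocity profile from the formula above."""
    return (G / (4 * mu)) * (R1**2 - r**2) + \
           (G / (4 * mu)) * (R2**2 - R1**2) * math.log(r / R1) / math.log(R2 / R1)

# Closed-form volume flux
Q_formula = (G * math.pi / (8 * mu)) * (
    R2**4 - R1**4 - (R2**2 - R1**2)**2 / math.log(R2 / R1))

# Numerical check: Q = integral of 2*pi*r*u(r) dr over [R1, R2] (midpoint rule)
N = 20000
h = (R2 - R1) / N
Q_numeric = sum(2 * math.pi * (R1 + (i + 0.5) * h) * u(R1 + (i + 0.5) * h)
                for i in range(N)) * h

assert math.isclose(Q_formula, Q_numeric, rel_tol=1e-6)
```

Note that u(R1) = u(R2) = 0, so the no-slip condition holds on both cylinders.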
== Poiseuille flow in a pipe with an oscillating pressure gradient ==
Flow through pipes with an oscillating pressure gradient finds applications in blood flow through large arteries. The imposed pressure gradient is given by
{\displaystyle {\frac {\partial p}{\partial x}}=-G-\alpha \cos \omega t-\beta \sin \omega t}
where G, α and β are constants and ω is the frequency. The velocity field is given by
{\displaystyle u(r,t)={\frac {G}{4\mu }}\left(R^{2}-r^{2}\right)+[\alpha F_{2}+\beta (F_{1}-1)]{\frac {\cos \omega t}{\rho \omega }}+[\beta F_{2}-\alpha (F_{1}-1)]{\frac {\sin \omega t}{\rho \omega }}}
where
{\displaystyle {\begin{aligned}F_{1}(kr)&={\frac {\mathrm {ber} (kr)\mathrm {ber} (kR)+\mathrm {bei} (kr)\mathrm {bei} (kR)}{\mathrm {ber} ^{2}(kR)+\mathrm {bei} ^{2}(kR)}},\\[6pt]F_{2}(kr)&={\frac {\mathrm {ber} (kr)\mathrm {bei} (kR)-\mathrm {bei} (kr)\mathrm {ber} (kR)}{\mathrm {ber} ^{2}(kR)+\mathrm {bei} ^{2}(kR)}},\end{aligned}}}
where ber and bei are the Kelvin functions and k2 = ρω/μ.
== Plane Poiseuille flow ==
Plane Poiseuille flow is flow created between two infinitely long parallel plates, separated by a distance h, with a constant pressure gradient G = −dp/dx applied in the direction of flow. The flow is essentially unidirectional because of the infinite length. The Navier–Stokes equations reduce to
{\displaystyle {\frac {\mathrm {d} ^{2}u}{\mathrm {d} y^{2}}}=-{\frac {G}{\mu }}}
with no-slip condition on both walls
{\displaystyle u(0)=0,\quad u(h)=0}
Therefore, the velocity distribution and the volume flow rate per unit length are
{\displaystyle u(y)={\frac {G}{2\mu }}y(h-y),\quad Q={\frac {Gh^{3}}{12\mu }}.}
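These two closed forms can be checked against each other numerically; the gradient, viscosity, and gap below are assumed illustrative values.

```python
import math

G, mu, h = 100.0, 1.0e-3, 0.001   # assumed illustrative values

u = lambda y: (G / (2 * mu)) * y * (h - y)   # velocity profile between the plates

# Maximum velocity occurs at the mid-plane y = h/2
assert math.isclose(u(h / 2), G * h**2 / (8 * mu))

# Flow rate per unit width: integrate u over [0, h] (midpoint rule) and
# compare with the closed form Q = G h^3 / (12 mu)
N = 10000
dy = h / N
Q_numeric = sum(u((i + 0.5) * dy) for i in range(N)) * dy
assert math.isclose(Q_numeric, G * h**3 / (12 * mu), rel_tol=1e-6)
```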
== Poiseuille flow through some non-circular cross-sections ==
Joseph Boussinesq derived the velocity profile and volume flow rate in 1868 for rectangular channels, for tubes of equilateral triangular cross-section, and for elliptical cross-sections. Joseph Proudman derived the same for isosceles triangles in 1914. Let G = −dp/dx be the constant pressure gradient acting in the direction parallel to the motion.
The velocity and the volume flow rate in a rectangular channel of height 0 ≤ y ≤ h and width 0 ≤ z ≤ l are
{\displaystyle {\begin{aligned}u(y,z)&={\frac {G}{2\mu }}y(h-y)-{\frac {4Gh^{2}}{\mu \pi ^{3}}}\sum _{n=1}^{\infty }{\frac {1}{(2n-1)^{3}}}{\frac {\sinh(\beta _{n}z)+\sinh[\beta _{n}(l-z)]}{\sinh(\beta _{n}l)}}\sin(\beta _{n}y),\quad \beta _{n}={\frac {(2n-1)\pi }{h}},\\[6pt]Q&={\frac {Gh^{3}l}{12\mu }}-{\frac {16Gh^{4}}{\pi ^{5}\mu }}\sum _{n=1}^{\infty }{\frac {1}{(2n-1)^{5}}}{\frac {\cosh(\beta _{n}l)-1}{\sinh(\beta _{n}l)}}.\end{aligned}}}
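A numerical sketch of the rectangular-duct flow rate, using assumed illustrative parameters. The identity (cosh x − 1)/sinh x = tanh(x/2) is used so the series is stable for large βn·l, and the wide-duct limit l ≫ h should approach the plane Poiseuille value G h³ l / (12 μ).

```python
import math

def rect_Q(G, mu, h, l, terms=200):
    """Flow rate in a rectangular duct (series above); (cosh x - 1)/sinh x
    is rewritten as tanh(x/2) to avoid overflow for large beta_n * l."""
    s = 0.0
    for n in range(1, terms + 1):
        beta = (2 * n - 1) * math.pi / h
        s += math.tanh(beta * l / 2) / (2 * n - 1) ** 5
    return G * h**3 * l / (12 * mu) - (16 * G * h**4 / (math.pi**5 * mu)) * s

G, mu, h = 100.0, 1.0e-3, 0.001        # assumed illustrative values
l = 1000 * h                           # very wide duct
Q_plane = G * h**3 * l / (12 * mu)     # plane Poiseuille limit over width l
assert math.isclose(rect_Q(G, mu, h, l), Q_plane, rel_tol=1e-2)
```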
The velocity and the volume flow rate of tube with equilateral triangular cross-section of side length 2h/√3 are
{\displaystyle {\begin{aligned}u(y,z)&=-{\frac {G}{4\mu h}}(y-h)\left(y^{2}-3z^{2}\right),\\[6pt]Q&={\frac {Gh^{4}}{60{\sqrt {3}}\mu }}.\end{aligned}}}
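The triangular-duct flow rate can be verified by integrating the velocity profile over the cross-section 0 ≤ y ≤ h, |z| ≤ y/√3 (the triangle bounded by y = h and y = ±√3 z); parameters are assumed illustrative values.

```python
import math

G, mu, h = 100.0, 1.0e-3, 0.001   # assumed illustrative values

def u(y, z):
    """Velocity in the equilateral-triangle duct (formula above)."""
    return -(G / (4 * mu * h)) * (y - h) * (y**2 - 3 * z**2)

# Integrate u numerically over the triangular cross-section (midpoint rule)
Ny = Nz = 400
Q_num = 0.0
dy = h / Ny
for i in range(Ny):
    y = (i + 0.5) * dy
    zmax = y / math.sqrt(3)
    dz = 2 * zmax / Nz
    Q_num += sum(u(y, -zmax + (j + 0.5) * dz) for j in range(Nz)) * dz * dy

Q_formula = G * h**4 / (60 * math.sqrt(3) * mu)
assert math.isclose(Q_num, Q_formula, rel_tol=1e-3)
```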
The velocity and the volume flow rate in the right-angled isosceles triangle y = π, y ± z = 0 are
{\displaystyle {\begin{aligned}u(y,z)&={\frac {G}{2\mu }}(y+z)(\pi -y)-{\frac {G}{\pi \mu }}\sum _{n=1}^{\infty }{\frac {1}{\beta _{n}^{3}\sinh(2\pi \beta _{n})}}\left\{\sinh[\beta _{n}(2\pi -y+z)]\sin[\beta _{n}(y+z)]-\sinh[\beta _{n}(y+z)]\sin[\beta _{n}(y-z)]\right\},\quad \beta _{n}=n+{\tfrac {1}{2}},\\[6pt]Q&={\frac {G\pi ^{4}}{12\mu }}-{\frac {G}{2\pi \mu }}\sum _{n=1}^{\infty }{\frac {1}{\beta _{n}^{5}}}\left[\coth(2\pi \beta _{n})+\csc(2\pi \beta _{n})\right].\end{aligned}}}
The velocity distribution for tubes of elliptical cross-section with semiaxes a and b is
{\displaystyle {\begin{aligned}u(y,z)&={\frac {G}{2\mu \left({\frac {1}{a^{2}}}+{\frac {1}{b^{2}}}\right)}}\left(1-{\frac {y^{2}}{a^{2}}}-{\frac {z^{2}}{b^{2}}}\right),\\[6pt]Q&={\frac {\pi Ga^{3}b^{3}}{4\mu \left(a^{2}+b^{2}\right)}}.\end{aligned}}}
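The circular limit mentioned below can be checked directly: with a = b = R the elliptical flux must reduce to the Hagen–Poiseuille result πGR⁴/(8μ). Values are assumed illustrative ones.

```python
import math

def ellipse_Q(G, mu, a, b):
    """Flow rate in an elliptical duct (formula above)."""
    return math.pi * G * a**3 * b**3 / (4 * mu * (a**2 + b**2))

G, mu, R = 100.0, 1.0e-3, 0.01   # assumed illustrative values
# a = b = R must recover the Hagen-Poiseuille result for a circular pipe.
assert math.isclose(ellipse_Q(G, mu, R, R), math.pi * G * R**4 / (8 * mu))
```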
Here, when a = b, Poiseuille flow for circular pipe is recovered and when a → ∞, plane Poiseuille flow is recovered. More explicit solutions with cross-sections such as snail-shaped sections, sections having the shape of a notch circle following a semicircle, annular sections between homofocal ellipses, annular sections between non-concentric circles are also available, as reviewed by Ratip Berker.
== Poiseuille flow through arbitrary cross-section ==
The flow through arbitrary cross-section u(y,z) satisfies the condition that u = 0 on the walls. The governing equation reduces to
{\displaystyle {\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}=-{\frac {G}{\mu }}.}
If we introduce a new dependent variable as
{\displaystyle U=u+{\frac {G}{4\mu }}\left(y^{2}+z^{2}\right),}
then it is easy to see that the problem reduces to that of integrating a Laplace equation
{\displaystyle {\frac {\partial ^{2}U}{\partial y^{2}}}+{\frac {\partial ^{2}U}{\partial z^{2}}}=0}
satisfying the condition
{\displaystyle U={\frac {G}{4\mu }}\left(y^{2}+z^{2}\right)}
on the wall.
== Poiseuille's equation for an ideal isothermal gas ==
For a compressible fluid in a tube, the volumetric flow rate Q(x) and the axial velocity are not constant along the tube, but the mass flow rate is constant along the tube length. The volumetric flow rate is usually expressed at the outlet pressure. As fluid is compressed or expanded, work is done and the fluid is heated or cooled. This means that the flow rate depends on the heat transfer to and from the fluid. For an ideal gas in the isothermal case, where the temperature of the fluid is permitted to equilibrate with its surroundings, an approximate relation for the pressure drop can be derived. Using the ideal gas equation of state for a constant-temperature process (i.e.,
{\displaystyle p/\rho }
is constant) and the conservation of mass flow rate (i.e.,
{\displaystyle {\dot {m}}=\rho Q}
is constant), the relation Qp = Q1p1 = Q2p2 can be obtained. Over a short section of the pipe, the gas flowing through the pipe can be assumed to be incompressible so that Poiseuille law can be used locally,
{\displaystyle -{\frac {\mathrm {d} p}{\mathrm {d} x}}={\frac {8\mu Q}{\pi R^{4}}}={\frac {8\mu Q_{2}p_{2}}{\pi pR^{4}}}\quad \Rightarrow \quad -p{\frac {\mathrm {d} p}{\mathrm {d} x}}={\frac {8\mu Q_{2}p_{2}}{\pi R^{4}}}.}
Here we assumed the local pressure gradient is small enough that compressibility effects can be neglected locally; although the local effects of pressure variation on density are ignored, over long distances these effects are taken into account. Since μ is independent of pressure, the above equation can be integrated over the length L to give
{\displaystyle p_{1}^{2}-p_{2}^{2}={\frac {16\mu LQ_{2}p_{2}}{\pi R^{4}}}.}
Hence the volumetric flow rate at the pipe outlet is given by
{\displaystyle Q_{2}={\frac {\pi R^{4}}{16\mu L}}\left({\frac {p_{1}^{2}-p_{2}^{2}}{p_{2}}}\right)={\frac {\pi R^{4}\left(p_{1}-p_{2}\right)}{8\mu L}}{\frac {\left(p_{1}+p_{2}\right)}{2p_{2}}}.}
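The two forms of this result are algebraically identical, which is easy to confirm numerically; the viscosity, geometry, and pressures below are assumed illustrative values (air-like gas, 2:1 pressure ratio).

```python
import math

def gas_outlet_Q(mu, L, R, p1, p2):
    """Outlet volumetric flow of an isothermal ideal gas (equation above)."""
    return math.pi * R**4 * (p1**2 - p2**2) / (16 * mu * L * p2)

# Assumed illustrative values
mu, L, R = 1.8e-5, 1.0, 1.0e-3
p1, p2 = 2.0e5, 1.0e5
Q2 = gas_outlet_Q(mu, L, R, p1, p2)

# Same result via the incompressible Poiseuille flow times the
# correction factor (p1 + p2) / (2 p2)
Q_incomp = math.pi * R**4 * (p1 - p2) / (8 * mu * L)
assert math.isclose(Q2, Q_incomp * (p1 + p2) / (2 * p2))
```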
This equation can be seen as Poiseuille's law with an extra correction factor (p1 + p2)/(2p2) expressing the average pressure relative to the outlet pressure.
== Electrical circuits analogy ==
Electricity was originally understood to be a kind of fluid. This hydraulic analogy is still conceptually useful for understanding circuits. This analogy is also used to study the frequency response of fluid-mechanical networks using circuit tools, in which case the fluid network is termed a hydraulic circuit. Poiseuille's law corresponds to Ohm's law for electrical circuits, V = IR. Since the net force acting on the fluid is ΔF = SΔp, where S = πr2, i.e. ΔF = πr2Δp, it follows from Poiseuille's law that
{\displaystyle \Delta F={\frac {8\mu LQ}{r^{2}}}.}
For electrical circuits, let n be the concentration of free charged particles (in m−3) and let q* be the charge of each particle (in coulombs). (For electrons, q* = e = 1.6×10−19 C.) Then nQ is the number of particles in the volume Q, and nQq* is their total charge. This is the charge that flows through the cross section per unit time, i.e. the current I. Therefore, I = nQq*. Consequently, Q = I/nq*, and
{\displaystyle \Delta F={\frac {8\mu LI}{nr^{2}q^{*}}}.}
But ΔF = Eq, where q is the total charge in the volume of the tube. The volume of the tube is equal to πr2L, so the number of charged particles in this volume is equal to nπr2L, and their total charge is q = nπr2Lq*. Since the voltage V = EL, it then follows that
{\displaystyle V={\frac {8\mu LI}{n^{2}\pi r^{4}\left(q^{*}\right)^{2}}}.}
This is exactly Ohm's law, where the resistance R = V/I is described by the formula
{\displaystyle R={\frac {8\mu L}{n^{2}\pi r^{4}\left(q^{*}\right)^{2}}}.}
It follows that the resistance R is proportional to the length L of the resistor, which is true. However, it also follows that the resistance R is inversely proportional to the fourth power of the radius r, i.e. the resistance R is inversely proportional to the second power of the cross section area S = πr2 of the resistor, which is different from the electrical formula. The electrical relation for the resistance is
{\displaystyle R={\frac {\rho L}{S}},}
where ρ is the resistivity; i.e. the resistance R is inversely proportional to the cross section area S of the resistor. The reason why Poiseuille's law leads to a different formula for the resistance R is the difference between the fluid flow and the electric current. Electron gas is inviscid, so its velocity does not depend on the distance to the walls of the conductor. The resistance is due to the interaction between the flowing electrons and the atoms of the conductor. Therefore, Poiseuille's law and the hydraulic analogy are useful only within certain limits when applied to electricity. Both Ohm's law and Poiseuille's law illustrate transport phenomena.
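The scaling difference between the two laws can be made concrete: doubling the radius cuts the hydraulic resistance 16-fold but the electrical resistance only 4-fold. The numeric values below are assumed illustrative ones.

```python
import math

def hydraulic_resistance(mu, L, r):
    """Poiseuille flow resistance dp/Q = 8*mu*L / (pi*r^4)."""
    return 8 * mu * L / (math.pi * r**4)

def electrical_resistance(rho, L, r):
    """Ohmic resistance R = rho*L/S with cross-section S = pi*r^2."""
    return rho * L / (math.pi * r**2)

mu, rho, L, r = 1.0e-3, 1.7e-8, 1.0, 1.0e-3   # assumed illustrative values
# Hydraulic resistance scales as 1/r^4, electrical as 1/r^2.
assert math.isclose(hydraulic_resistance(mu, L, r) / hydraulic_resistance(mu, L, 2 * r), 16.0)
assert math.isclose(electrical_resistance(rho, L, r) / electrical_resistance(rho, L, 2 * r), 4.0)
```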
== Medical applications – intravenous access and fluid delivery ==
The Hagen–Poiseuille equation is useful in determining the vascular resistance and hence flow rate of intravenous (IV) fluids that may be achieved using various sizes of peripheral and central cannulas. The equation states that flow rate is proportional to the radius to the fourth power, meaning that a small increase in the internal diameter of the cannula yields a significant increase in flow rate of IV fluids. The radius of IV cannulas is typically measured in "gauge", which is inversely proportional to the radius. Peripheral IV cannulas are typically available as (from large to small) 14G, 16G, 18G, 20G, 22G, 26G. As an example, assuming cannula lengths are equal, the flow of a 14G cannula is 1.73 times that of a 16G cannula, and 4.16 times that of a 20G cannula. It also states that flow is inversely proportional to length, meaning that longer lines have lower flow rates. This is important to remember as in an emergency, many clinicians favor shorter, larger catheters compared to longer, narrower catheters. While of less clinical importance, an increased change in pressure (∆p) — such as by pressurizing the bag of fluid, squeezing the bag, or hanging the bag higher (relative to the level of the cannula) — can be used to speed up flow rate. It is also useful to understand that viscous fluids will flow slower (e.g. in blood transfusion).
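Since the text gives gauge numbers but not the corresponding internal radii, the fourth-power dependence can still be illustrated generically: for equal-length cannulas the flow ratio is simply the radius ratio to the fourth power. The radii below are hypothetical relative values, not actual cannula dimensions.

```python
def relative_flow(r_a, r_b):
    """Flow ratio of two equal-length cannulas: (r_a / r_b) ** 4."""
    return (r_a / r_b) ** 4

# A 10% wider cannula gives ~46% more flow; 20% wider roughly doubles it.
assert abs(relative_flow(1.1, 1.0) - 1.4641) < 1e-9
assert abs(relative_flow(1.2, 1.0) - 2.0736) < 1e-9
```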
== See also ==
Couette flow
Darcy's law
Pulse
Wave
Hydraulic circuit
== Cited references ==
== References ==
Sutera, S. P.; Skalak, R. (1993). "The history of Poiseuille's law". Annual Review of Fluid Mechanics. 25: 1–19. Bibcode:1993AnRFM..25....1S. doi:10.1146/annurev.fl.25.010193.000245.
Pfitzner, J (1976). "Poiseuille and his law". Anaesthesia. 31 (2): 273–5. doi:10.1111/j.1365-2044.1976.tb11804.x. PMID 779509.
Bennett, C. O.; Myers, J. E. (1962). Momentum, Heat, and Mass Transfer. McGraw-Hill.
== External links ==
Poiseuille's law for power-law non-Newtonian fluid
Poiseuille's law in a slightly tapered tube
Hagen–Poiseuille equation calculator
Hemodynamics or haemodynamics are the dynamics of blood flow. The circulatory system is controlled by homeostatic mechanisms of autoregulation, just as hydraulic circuits are controlled by control systems. The hemodynamic response continuously monitors and adjusts to conditions in the body and its environment. Hemodynamics explains the physical laws that govern the flow of blood in the blood vessels.
Blood flow ensures the transportation of nutrients, hormones, metabolic waste products, oxygen, and carbon dioxide throughout the body to maintain cell-level metabolism, the regulation of the pH, osmotic pressure and temperature of the whole body, and the protection from microbial and mechanical harm.
Blood is a non-Newtonian fluid, and is most efficiently studied using rheology rather than hydrodynamics. Because blood vessels are not rigid tubes, classic hydrodynamics and fluid mechanics based on the use of classical viscometers are not capable of fully explaining haemodynamics.
The study of the blood flow is called hemodynamics, and the study of the properties of the blood flow is called hemorheology.
== Blood ==
Blood is a complex liquid. Blood is composed of plasma and formed elements. The plasma contains 91.5% water, 7% proteins and 1.5% other solutes. The formed elements are platelets, white blood cells, and red blood cells. The presence of these formed elements and their interaction with plasma molecules are the main reasons why blood differs so much from ideal Newtonian fluids.
=== Viscosity of plasma ===
Normal blood plasma behaves like a Newtonian fluid at physiological rates of shear. A typical value for the viscosity of normal human plasma at 37 °C is 1.4 mN·s/m2. The viscosity of normal plasma varies with temperature in the same way as does that of its solvent, water; a 3 °C change in temperature in the physiological range (36.5 °C to 39.5 °C) reduces plasma viscosity by about 10%.
=== Osmotic pressure of plasma ===
The osmotic pressure of solution is determined by the number of particles present and by the temperature. For example, a 1 molar solution of a substance contains 6.022×1023 molecules per liter of that substance and at 0 °C it has an osmotic pressure of 2.27 MPa (22.4 atm). The osmotic pressure of the plasma affects the mechanics of the circulation in several ways. An alteration of the osmotic pressure difference across the membrane of a blood cell causes a shift of water and a change of cell volume. The changes in shape and flexibility affect the mechanical properties of whole blood. A change in plasma osmotic pressure alters the hematocrit, that is, the volume concentration of red cells in the whole blood by redistributing water between the intravascular and extravascular spaces. This in turn affects the mechanics of the whole blood.
=== Red blood cells ===
The red blood cell is highly flexible and biconcave in shape. Its membrane has a Young's modulus in the region of 10^6 Pa. Deformation in red blood cells is induced by shear stress. When a suspension is sheared, the red blood cells deform and spin because of the velocity gradient, with the rate of deformation and spin depending on the shear rate and the concentration.
This can influence the mechanics of the circulation and may complicate the measurement of blood viscosity. In the steady flow of a viscous fluid past a rigid sphere immersed in the fluid, where inertia is assumed negligible, the downward gravitational force on the particle is balanced by the viscous drag force. From this force balance the speed of fall can be shown to be given by Stokes' law
{\displaystyle U_{s}={\frac {2}{9}}{\frac {\left(\rho _{p}-\rho _{f}\right)}{\mu }}g\,a^{2}}
where a is the particle radius, ρp and ρf are the particle and fluid densities respectively, μ is the fluid viscosity, and g is the gravitational acceleration. From the above equation we can see that the sedimentation velocity of the particle depends on the square of the radius. If the particle is released from rest in the fluid, its sedimentation velocity increases until it attains the steady value called the terminal velocity (Us), as shown above.
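As a rough sketch of Stokes' law applied to a single red cell settling in plasma: the densities and cell radius below are assumed order-of-magnitude values, not figures from the text (only the plasma viscosity of 1.4 mN·s/m2 comes from the section above).

```python
def stokes_velocity(rho_p, rho_f, mu, g, a):
    """Terminal (sedimentation) velocity from Stokes' law above."""
    return (2.0 / 9.0) * (rho_p - rho_f) / mu * g * a**2

# Assumed illustrative values: cell density 1100 kg/m^3, plasma density
# 1025 kg/m^3, plasma viscosity 1.4e-3 Pa*s, cell radius ~4 micrometres.
U = stokes_velocity(1100.0, 1025.0, 1.4e-3, 9.81, 4.0e-6)

# Velocity scales with the square of the radius: doubling a quadruples U.
assert abs(stokes_velocity(1100.0, 1025.0, 1.4e-3, 9.81, 8.0e-6) / U - 4.0) < 1e-9
```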
=== Hemodilution ===
Hemodilution is the dilution of the concentration of red blood cells and plasma constituents by partially substituting the blood with colloids or crystalloids. It is a strategy to avoid exposure of patients to the potential hazards of homologous blood transfusions.
Hemodilution can be normovolemic, which implies the dilution of normal blood constituents by the use of expanders. During acute normovolemic hemodilution (ANH), blood subsequently lost during surgery contains proportionally fewer red blood cells per milliliter, thus minimizing intraoperative loss of the whole blood. Therefore, blood lost by the patient during surgery is not actually lost by the patient, for this volume is purified and redirected into the patient.
On the other hand, hypervolemic hemodilution (HVH) uses acute preoperative volume expansion without any blood removal. In choosing a fluid, however, it must be assured that when mixed, the remaining blood behaves in the microcirculation as in the original blood fluid, retaining all its properties of viscosity.
In deciding what volume of ANH should be applied, one study suggests a mathematical model of ANH which calculates the maximum possible RCM savings using ANH, given the patient's weight, the initial hematocrit (Hi), and the minimum safe hematocrit (Hm).
To maintain the normovolemia, the withdrawal of autologous blood must be simultaneously replaced by a suitable hemodilute. Ideally, this is achieved by isovolemia exchange transfusion of a plasma substitute with a colloid osmotic pressure (OP). A colloid is a fluid containing particles that are large enough to exert an oncotic pressure across the micro-vascular membrane.
When debating the use of colloid or crystalloid, it is imperative to think about all the components of the Starling equation:
{\displaystyle \ Q=K([P_{c}-P_{i}]-S[\pi _{c}-\pi _{i}])}
To identify the minimum safe hematocrit desirable for a given patient the following equation is useful:
{\displaystyle \ BL_{s}=EBV\ln {\frac {H_{i}}{H_{m}}}}
where EBV is the estimated blood volume (70 mL/kg was used in this model), Hi is the patient's initial hematocrit, and Hm is the minimum safe hematocrit.
From the equation above it is clear that the volume of blood removed during the ANH to the Hm is the same as the BLs.
How much blood is to be removed is usually based on the weight, not the volume. The number of units that need to be removed to hemodilute to the maximum safe hematocrit (ANH) can be found by
{\displaystyle ANH={\frac {BL_{s}}{450}}}
This is based on the assumption that each unit removed by hemodilution has a volume of 450 mL (the actual volume of a unit will vary somewhat since completion of collection is dependent on weight and not volume).
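The two formulas above can be exercised with the model's own parameters (a 70 kg patient at 70 mL/kg, Hi = 0.40, Hm = 0.25), which reproduces the 2303 mL allowable loss and roughly five withdrawn units discussed below.

```python
import math

def allowable_blood_loss(ebv_ml, h_i, h_m):
    """BL_s = EBV * ln(H_i / H_m), from the equation above."""
    return ebv_ml * math.log(h_i / h_m)

def anh_units(bl_s_ml, unit_ml=450.0):
    """Number of ~450 mL units withdrawn during hemodilution."""
    return bl_s_ml / unit_ml

ebv = 70 * 70.0                            # 4900 mL, the model's patient
bls = allowable_blood_loss(ebv, 0.40, 0.25)
# ln(0.40/0.25) = ln 1.6 ~ 0.47, so roughly 2300 mL
assert 2290 < bls < 2310
assert 5.0 < anh_units(bls) < 5.2          # about five 450 mL units
```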
The model assumes that the hemodilute value is equal to the Hm prior to surgery, therefore, the re-transfusion of blood obtained by hemodilution must begin when SBL begins.
The RCM available for retransfusion after ANH (RCMm) can be calculated from the patient's Hi and the final hematocrit after hemodilution (Hm):
{\displaystyle RCM=EBV\times (H_{i}-H_{m})}
The maximum SBL that is possible when ANH is used without falling below Hm (BLH) is found by assuming that all the blood removed during ANH is returned to the patient at a rate sufficient to maintain the hematocrit at the minimum safe level:
{\displaystyle BL_{H}={\frac {RCM_{H}}{H_{m}}}}
If ANH is used, as long as SBL does not exceed BLH there will not be any need for blood transfusion. We can conclude from the foregoing that surgical blood loss should therefore not exceed BLH.
The difference between the BLH and the BLs therefore is the incremental surgical blood loss (BLi) possible when using ANH.
{\displaystyle \ {BL_{i}}={BL_{H}}-{BL_{s}}}
When expressed in terms of the RCM
{\displaystyle {RCM_{i}}={BL_{i}}\times {H_{m}}}
Where RCMi is the red cell mass that would have to be administered using homologous blood to maintain the Hm if ANH is not used and blood loss equals BLH.
The model used assumes ANH used for a 70 kg patient with an estimated blood volume of 70 ml/kg (4900 ml). A range of Hi and Hm was evaluated to understand conditions where hemodilution is necessary to benefit the patient.
==== Result ====
The results of the model calculations are presented in a table given in the appendix for a range of Hi from 0.30 to 0.50, with ANH performed to minimum hematocrits from 0.30 to 0.15. Given a Hi of 0.40, if the Hm is assumed to be 0.25, then from the equation above the RCM count is still high and ANH is not necessary if BLs does not exceed 2303 mL, since the hematocrit will not fall below Hm, although five units of blood must be removed during hemodilution. Under these conditions, to achieve the maximum benefit from the technique, if ANH is used no homologous blood will be required to maintain the Hm if blood loss does not exceed 2940 mL. In such a case, ANH can save a maximum of 1.1 packed red blood cell unit equivalents, and homologous blood transfusion is necessary to maintain Hm, even if ANH is used.
This model can be used to identify when ANH may be used for a given patient and the degree of ANH necessary to maximize that benefit.
For example, if Hi is 0.30 or less it is not possible to save a red cell mass equivalent to two units of homologous PRBC even if the patient is hemodiluted to an Hm of 0.15. That is because, from the RCM equation given above, the patient's RCM falls short of the required amount.
If Hi is 0.40, one must remove at least 7.5 units of blood during ANH, resulting in an Hm of 0.20, to save two units equivalence. Clearly, the greater the Hi and the greater the number of units removed during hemodilution, the more effective ANH is for preventing homologous blood transfusion. The model here is designed to allow doctors to determine where ANH may be beneficial for a patient based on their knowledge of the Hi, the potential for SBL, and an estimate of the Hm. Though the model used a 70 kg patient, the results can be applied to any patient. To apply these results to any body weight, any of the values BLs, BLH and ANHH or PRBC given in the table need to be multiplied by a factor we will call T:
{\displaystyle T={\frac {\text{patient's weight in kg}}{70}}}
Basically, the model considered above is designed to predict the maximum RCM that ANH can save.
In summary, the efficacy of ANH has been described mathematically by means of measurements of surgical blood loss and blood volume flow measurement. This form of analysis permits accurate estimation of the potential efficiency of the techniques and shows the application of measurement in the medical field.
== Blood flow ==
=== Cardiac output ===
The heart is the driver of the circulatory system, pumping blood through rhythmic contraction and relaxation. The rate of blood flow out of the heart (often expressed in L/min) is known as the cardiac output (CO).
Blood being pumped out of the heart first enters the aorta, the largest artery of the body. It then proceeds to divide into smaller and smaller arteries, then into arterioles, and eventually capillaries, where oxygen transfer occurs. The capillaries connect to venules, and the blood then travels back through the network of veins to the venae cavae into the right heart. The micro-circulation — the arterioles, capillaries, and venules — constitutes most of the area of the vascular system and is the site of the transfer of O2, glucose, and enzyme substrates into the cells. The venous system returns the de-oxygenated blood to the right heart, where it is pumped into the lungs to become oxygenated, and CO2 and other gaseous wastes are exchanged and expelled during breathing. Blood then returns to the left side of the heart, where it begins the process again.
In a normal circulatory system, the volume of blood returning to the heart each minute is approximately equal to the volume that is pumped out each minute (the cardiac output). Because of this, the velocity of blood flow across each level of the circulatory system is primarily determined by the total cross-sectional area of that level.
Cardiac output is determined by two methods. One is to use the Fick equation:
{\displaystyle CO={\frac {VO_{2}}{C_{a}O_{2}-C_{v}O_{2}}}}
The other is the thermodilution method, which senses the temperature change of a liquid injected at the proximal port of a Swan–Ganz catheter as it passes the distal port.
Cardiac output is mathematically expressed by the following equation:
{\displaystyle CO=SV\times HR}
where
CO = cardiac output (L/min)
SV = stroke volume (ml)
HR = heart rate (bpm)
The normal human cardiac output is 5–6 L/min at rest. Not all blood that enters the left ventricle exits the heart. The end-diastolic volume (EDV) minus the stroke volume makes up the end-systolic volume (ESV).
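CO = SV × HR is a simple unit-conversion exercise; the stroke volume and heart rate below are typical resting values assumed for illustration, not figures from the text.

```python
def cardiac_output_l_min(stroke_volume_ml, heart_rate_bpm):
    """CO = SV x HR, converted from mL/min to L/min."""
    return stroke_volume_ml * heart_rate_bpm / 1000.0

# Assumed typical resting values: 70 mL/beat at 75 beats/min
co = cardiac_output_l_min(70.0, 75.0)
assert 5.0 <= co <= 6.0   # within the normal 5-6 L/min range quoted above
```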
==== Anatomical features ====
The circulatory system of species subjected to orthostatic blood pressure (such as arboreal snakes) has evolved physiological and morphological features to overcome the circulatory disturbance. For instance, in arboreal snakes the heart is closer to the head, in comparison with aquatic snakes. This facilitates blood perfusion to the brain.
=== Turbulence ===
Blood flow is also affected by the smoothness of the vessels, resulting in either turbulent (chaotic) or laminar (smooth) flow. Smoothness is reduced by the buildup of fatty deposits on the arterial walls.
The Reynolds number (denoted NR or Re) is a relationship that helps determine the behavior of a fluid in a tube, in this case blood in the vessel.
The equation for this dimensionless relationship is written as:
{\displaystyle NR={\frac {\rho vL}{\mu }}}
ρ: density of the blood
v: mean velocity of the blood
L: characteristic dimension of the vessel, in this case diameter
μ: viscosity of blood
The Reynolds number is directly proportional to the mean velocity as well as to the diameter of the tube. A Reynolds number of less than 2300 indicates laminar fluid flow, which is characterized by constant flow motion, whereas a value of over 4000 indicates turbulent flow. Due to their small radius and low velocity compared to other vessels, the Reynolds number in the capillaries is very low, resulting in laminar instead of turbulent flow.
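A quick comparison of the Reynolds number in a large artery and in a capillary; the blood density, viscosity, vessel diameters, and velocities below are assumed representative values, not figures from the text.

```python
def reynolds(rho, v, L, mu):
    """N_R = rho * v * L / mu for blood flowing in a vessel."""
    return rho * v * L / mu

rho, mu = 1060.0, 3.5e-3   # assumed whole-blood density (kg/m^3) and viscosity (Pa*s)
# Assumed representative values: aorta ~2.5 cm diameter, ~0.3 m/s mean velocity;
# capillary ~8 micrometre diameter, ~0.5 mm/s.
re_aorta = reynolds(rho, 0.3, 0.025, mu)
re_capillary = reynolds(rho, 5.0e-4, 8.0e-6, mu)
assert re_aorta < 2300       # mean aortic flow stays under the laminar threshold
assert re_capillary < 0.01   # capillary flow is far into the laminar regime
```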
=== Velocity ===
Blood flow velocity is often expressed in cm/s. This value is inversely related to the total cross-sectional area of the blood vessel and also differs per cross-section, because in normal conditions the blood flow has laminar characteristics. For this reason, the blood flow velocity is fastest in the middle of the vessel and slowest at the vessel wall. In most cases, the mean velocity is used. There are many ways to measure blood flow velocity, such as video capillary microscopy with frame-to-frame analysis, or laser Doppler anemometry.
Blood velocities in arteries are higher during systole than during diastole. One parameter to quantify this difference is the pulsatility index (PI), which is equal to the difference between the peak systolic velocity and the minimum diastolic velocity divided by the mean velocity during the cardiac cycle. This value decreases with distance from the heart.
{\displaystyle PI={\frac {v_{systole}-v_{diastole}}{v_{mean}}}}
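The pulsatility index is a one-line calculation; the Doppler readings below are hypothetical illustrative values chosen so that the more distal site shows the lower PI, as the text describes.

```python
def pulsatility_index(v_systole, v_diastole, v_mean):
    """PI = (v_systole - v_diastole) / v_mean."""
    return (v_systole - v_diastole) / v_mean

# Assumed illustrative Doppler readings (cm/s) at two sites along an artery
pi_proximal = pulsatility_index(80.0, 10.0, 30.0)
pi_distal = pulsatility_index(40.0, 15.0, 20.0)
assert pi_proximal > pi_distal   # PI decreases with distance from the heart
```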
== Blood vessels ==
=== Vascular resistance ===
Resistance is also related to vessel radius, vessel length, and blood viscosity. In a first approach, based on fluid mechanics, this relationship is captured by the Hagen–Poiseuille equation:
{\displaystyle \Delta P={\frac {8\mu lQ}{\pi r^{4}}}}
∆P: pressure drop/gradient
μ: viscosity
l: length of tube. In the case of vessels with infinitely long lengths, l is replaced with the diameter of the vessel.
Q: flow rate of the blood in the vessel
r: radius of the vessel
In a second approach, more realistic of the vascular resistance and coming from experimental observations on blood flows, according to Thurston, there is a plasma release-cell layering at the walls surrounding the plugged flow. It is a fluid layer in which, at a distance δ, the viscosity η is a function of δ, written as η(δ); these surrounding layers do not meet at the vessel centre in real blood flow. Instead, there is a plugged flow that is hyperviscous because it holds a high concentration of RBCs. Thurston coupled this layer to the flow resistance to describe blood flow by means of a viscosity η(δ) and a thickness δ of the wall layer.
The blood resistance law appears as R adapted to blood flow profile :
{\displaystyle R={\frac {cL\eta (\delta )}{(\pi \delta r^{3})}}}
where
R = resistance to blood flow
c = constant coefficient of flow
L = length of the vessel
η(δ) = viscosity of blood in the wall plasma release-cell layering
r = radius of the blood vessel
δ = distance in the plasma release-cell layer
Blood resistance varies depending on blood viscosity and its plugged flow (or sheath flow since they are complementary across the vessel section) size as well, and on the size of the vessels.
Assuming steady, laminar flow in the vessel, the blood vessel behaves like a pipe. For instance, if p1 and p2 are the pressures at the ends of the tube, the pressure drop per unit length (the pressure gradient) is:
{\displaystyle {\frac {p_{1}-p_{2}}{l}}=\Delta P}
The larger arteries, including all those large enough to see without magnification, are conduits with low vascular resistance (assuming no advanced atherosclerotic changes) and high flow rates, generating only small drops in pressure. The smaller arteries and arterioles have higher resistance, and produce the main drop in blood pressure between the major arteries and the capillaries in the circulatory system.
In the arterioles, blood pressure is lower than in the major arteries. This is due to bifurcations, which cause a drop in pressure: the more bifurcations, the higher the total cross-sectional area, and therefore the pressure across the surface drops. This is why the arterioles have the highest pressure drop. The pressure drop of the arterioles is the product of flow rate and resistance: ΔP = Q × resistance. The high resistance observed in the arterioles, which accounts for most of the ΔP, results from their small radius of about 30 μm. The smaller the radius of a tube, the larger the resistance to fluid flow.
Immediately following the arterioles are the capillaries. Following the logic observed in the arterioles, we expect the blood pressure to be lower in the capillaries than in the arterioles. Since pressure is a function of force per unit area (P = F/A), the larger the surface area, the lesser the pressure when an external force acts on it. Though the radii of the individual capillaries are very small, the network of capillaries has the largest total surface area (485 mm²) in the human vascular network. The larger the total cross-sectional area, the lower the mean velocity as well as the pressure.
Substances called vasoconstrictors can reduce the size of blood vessels, thereby increasing blood pressure. Vasodilators (such as nitroglycerin) increase the size of blood vessels, thereby decreasing arterial pressure.
If the blood viscosity increases (gets thicker), the result is an increase in arterial pressure. Certain medical conditions can change the viscosity of the blood. For instance, anemia (low red blood cell concentration) reduces viscosity, whereas increased red blood cell concentration increases viscosity. It had been thought that aspirin and related "blood thinner" drugs decreased the viscosity of blood, but instead studies found that they act by reducing the tendency of the blood to clot.
To determine the systemic vascular resistance (SVR) the formula for calculating all resistance is used.
{\displaystyle R=(\Delta pressure)/flow.}
This translates for SVR into:
{\displaystyle SVR=(MAP-CVP)/CO}
Where
SVR = systemic vascular resistance (mmHg·min/L)
MAP = mean arterial pressure (mmHg)
CVP = central venous pressure (mmHg)
CO = cardiac output (L/min)
This quotient is in Wood units (mmHg·min/L); multiplying it by 80 converts it to dyn·s·cm−5. Normal systemic vascular resistance is between 900 and 1440 dyn·s·cm−5.
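Putting the SVR formula and the unit conversion together, with illustrative patient values chosen to land in the normal range:

```python
def systemic_vascular_resistance(map_mmHg, cvp_mmHg, co_L_min):
    """SVR = (MAP - CVP) / CO, result in Wood units (mmHg*min/L)."""
    return (map_mmHg - cvp_mmHg) / co_L_min

WOOD_TO_DYN = 80.0  # 1 mmHg*min/L corresponds to ~80 dyn*s/cm^5

# Illustrative values (not from the article): MAP 93 mmHg, CVP 5 mmHg, CO 5.5 L/min.
svr_wood = systemic_vascular_resistance(93.0, 5.0, 5.5)
svr_dyn = svr_wood * WOOD_TO_DYN
```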
=== Wall tension ===
Regardless of site, blood pressure is related to the wall tension of the vessel according to the Young–Laplace equation (assuming that the thickness of the vessel wall is very small as compared to the diameter of the lumen):
{\displaystyle \sigma _{\theta }={\dfrac {Pr}{t}}\ }
where
P is the blood pressure
t is the wall thickness
r is the inside radius of the cylinder.
{\displaystyle \sigma _{\theta }\!} is the cylinder stress or "hoop stress".
For the thin-walled assumption to be valid the vessel must have a wall thickness of no more than about one-tenth (often cited as one twentieth) of its radius.
The cylinder stress, in turn, is the average force exerted circumferentially (perpendicular both to the axis and to the radius of the object) in the cylinder wall, and can be described as:
{\displaystyle \sigma _{\theta }={\dfrac {F}{tl}}\ }
where:
F is the force exerted circumferentially on an area of the cylinder wall that has the following two lengths as sides:
t is the radial thickness of the cylinder
l is the axial length of the cylinder
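A quick computation of the thin-walled Young–Laplace estimate, with illustrative vessel dimensions (not from the article), including the one-tenth rule of thumb quoted above:

```python
def hoop_stress(pressure, radius, thickness):
    """Thin-walled Young-Laplace estimate: sigma_theta = P * r / t."""
    return pressure * radius / thickness

# Illustrative large-artery numbers in SI units (not from the article):
p = 13.3e3   # ~100 mmHg mean pressure, Pa
r = 0.012    # lumen radius, m
t = 0.001    # wall thickness, m

sigma = hoop_stress(p, r, t)       # Pa
thin_wall_ok = t <= r / 10.0       # one-tenth rule of thumb for validity
```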
=== Stress ===
When force is applied to a material it starts to deform or move. The force needed to deform a material (e.g. to make a fluid flow) increases with the size of its surface: the magnitude of this force F is proportional to the area A of the portion of the surface on which it acts. Therefore, the quantity F/A, the force per unit area, is called the stress. The shear stress at the wall that is associated with blood flow through an artery depends on the artery size and geometry and can range between 0.5 and 4 Pa.
{\displaystyle \sigma ={\frac {F}{A}}}
Under normal conditions, shear stress maintains its magnitude and direction within an acceptable range, avoiding atherogenesis, thrombosis, smooth muscle proliferation and endothelial apoptosis. In some cases, such as blood hammer, shear stress reaches larger values, and its direction may also be reversed by the reverse flow, depending on the hemodynamic conditions. Such situations can promote atherosclerosis.
=== Capacitance ===
Veins are described as the "capacitance vessels" of the body because over 70% of the blood volume resides in the venous system. Veins are more compliant than arteries and expand to accommodate changing volume.
== Blood pressure ==
The blood pressure in the circulation is principally due to the pumping action of the heart. The pumping action of the heart generates pulsatile blood flow, which is conducted into the arteries, across the micro-circulation and eventually, back via the venous system to the heart. During each heartbeat, systemic arterial blood pressure varies between a maximum (systolic) and a minimum (diastolic) pressure. In physiology, these are often simplified into one value, the mean arterial pressure (MAP), which is calculated as follows:
{\displaystyle MAP=DP+1/3(PP)}
where:
MAP = Mean Arterial Pressure
DP = Diastolic blood pressure
PP = Pulse pressure which is systolic pressure minus diastolic pressure.
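The MAP estimate can be computed directly from a cuff reading; here using an ideal 120/80 mmHg brachial reading:

```python
def mean_arterial_pressure(systolic, diastolic):
    """MAP ~ DP + (1/3) * PP, with pulse pressure PP = SP - DP (all in mmHg)."""
    pulse_pressure = systolic - diastolic
    return diastolic + pulse_pressure / 3.0

map_ideal = mean_arterial_pressure(120.0, 80.0)  # ideal brachial cuff reading
```

The estimate weights diastole more heavily than systole because, at resting heart rates, the heart spends roughly twice as long in diastole.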
Differences in mean blood pressure are responsible for blood flow from one location to another in the circulation. The rate of mean blood flow depends on both blood pressure and the resistance to flow presented by the blood vessels. Mean blood pressure decreases as the circulating blood moves away from the heart through arteries and capillaries due to viscous losses of energy. Mean blood pressure drops over the whole circulation, although most of the fall occurs along the small arteries and arterioles. Gravity affects blood pressure via hydrostatic forces (e.g., during standing), and valves in veins, breathing, and pumping from contraction of skeletal muscles also influence blood pressure in veins.
The relationship between pressure, flow, and resistance is expressed in the following equation:
{\displaystyle Flow=Pressure/Resistance}
When applied to the circulatory system, we get:
{\displaystyle CO=(MAP-RAP)/SVR}
where
CO = cardiac output (in L/min)
MAP = mean arterial pressure (in mmHg), the average pressure of blood as it leaves the heart
RAP = right atrial pressure (in mmHg), the average pressure of blood as it returns to the heart
SVR = systemic vascular resistance (in mmHg * min/L)
A simplified form of this equation assumes right atrial pressure is approximately 0:
{\displaystyle CO\approx MAP/SVR}
The ideal blood pressure in the brachial artery, where standard blood pressure cuffs measure pressure, is <120/80 mmHg. Other major arteries have similar blood pressure readings, indicating very low disparities among major arteries: in the innominate artery, the average reading is 110/70 mmHg, the right subclavian artery averages 120/80 mmHg, and the abdominal aorta is 110/70 mmHg. The relatively uniform pressure in the arteries indicates that these blood vessels act as a pressure reservoir for the fluids transported within them.
Pressure drops gradually as blood flows from the major arteries through the arterioles and the capillaries, until blood is pushed back up to the heart via the venules, the veins and the vena cava, with the help of the muscles. At any given pressure drop, the flow rate is determined by the resistance to the blood flow. In the arteries, in the absence of disease, there is very little or no resistance to blood flow. The vessel diameter is the principal determinant of resistance. Compared to other, smaller vessels in the body, the artery has a much larger diameter (4 mm), so the resistance is low.
The arm–leg (blood pressure) gradient is the difference between the blood pressure measured in the arms and that measured in the legs. It is normally less than 10 mm Hg, but may be increased in e.g. coarctation of the aorta.
== Clinical significance ==
=== Pressure monitoring ===
Hemodynamic monitoring is the observation of hemodynamic parameters over time, such as blood pressure and heart rate. Blood pressure can be monitored either invasively through an inserted blood pressure transducer assembly (providing continuous monitoring), or noninvasively by repeatedly measuring the blood pressure with an inflatable blood pressure cuff.
Hypertension is diagnosed by the presence of arterial blood pressures of 140/90 mmHg or greater on two clinical visits.
Pulmonary Artery Wedge Pressure can show if there is congestive heart failure, mitral and aortic valve disorders, hypervolemia, shunts, or cardiac tamponade.
=== Remote, indirect monitoring of blood flow by laser Doppler ===
Noninvasive hemodynamic monitoring of eye fundus vessels can be performed by laser Doppler holography with near-infrared light. The eye offers a unique opportunity for the non-invasive exploration of cardiovascular diseases. Laser Doppler imaging by digital holography can measure blood flow in the retina and choroid, whose Doppler responses exhibit a pulse-shaped profile with time. This technique enables non-invasive functional microangiography by high-contrast measurement of Doppler responses from endoluminal blood flow profiles in vessels in the posterior segment of the eye. Differences in blood pressure drive the flow of blood throughout the circulation. The rate of mean blood flow depends on both blood pressure and the hemodynamic resistance to flow presented by the blood vessels.
== Glossary ==
ANH
Acute Normovolemic Hemodilution
ANHu
Number of Units During ANH
BLH
Maximum Blood Loss Possible When ANH Is Used Before Homologous Blood Transfusion Is Needed
BLI
Incremental Blood Loss Possible with ANH (BLH − BLs)
BLs
Maximum blood loss without ANH before homologous blood transfusion is required
EBV
Estimated Blood Volume (70 mL/kg)
Hct
Haematocrit Always Expressed Here As A Fraction
Hi
Initial Haematocrit
Hm
Minimum Safe Haematocrit
PRBC
Packed Red Blood Cell Equivalent Saved by ANH
RCM
Red cell mass.
RCMH
Cell Mass Available For Transfusion after ANH
RCMI
Red Cell Mass Saved by ANH
SBL
Surgical Blood Loss
== Etymology and pronunciation ==
The word hemodynamics uses combining forms of hemo- (which comes from the ancient Greek haima, meaning blood) and dynamics, thus "the dynamics of blood". The vowel of the hemo- syllable is variously written according to the ae/e variation.
== Notes and references ==
== Bibliography ==
Berne RM, Levy MN. Cardiovascular physiology. 7th Ed Mosby 1997
Rowell LB. Human Cardiovascular Control. Oxford University press 1993
Braunwald E (Editor). Heart Disease: A Textbook of Cardiovascular Medicine. 5th Ed. W.B.Saunders 1997
Siderman S, Beyar R, Kleber AG. Cardiac Electrophysiology, Circulation and Transport. Kluwer Academic Publishers 1991
American Heart Association
Otto CM, Stoddard M, Waggoner A, Zoghbi WA. Recommendations for Quantification of Doppler Echocardiography: A Report from the Doppler Quantification Task Force of the Nomenclature and Standards Committee of the American Society of Echocardiography. J Am Soc Echocardiogr 2002;15:167-184
Peterson LH, The Dynamics of Pulsatile Blood Flow, Circ. Res. 1954;2;127-139
Hemodynamic Monitoring, Bigatello LM, George E., Minerva Anestesiol, 2002 Apr;68(4):219-25
Claude Franceschi L'investigation vasculaire par ultrasonographie Doppler Masson 1979 ISBN Nr 2-225-63679-6
Claude Franceschi; Paolo Zamboni Principles of Venous Hemodynamics Nova Science Publishers 2009-01 ISBN Nr 1606924850/9781606924853
Claude Franceschi Venous Insufficiency of the pelvis and lower extremities-Hemodynamic Rationale
WR Milnor: Hemodynamics, Williams & Wilkins, 1982
B Bo Sramek: Systemic Hemodynamics and Hemodynamic Management, 4th Edition, ESBN 1-59196-046-0
== External links ==
Learn hemodynamics
In computational fluid dynamics, the k–omega (k–ω) turbulence model is a common two-equation turbulence model that is used as an approximation for the Reynolds-averaged Navier–Stokes equations (RANS equations). The model attempts to predict turbulence by two partial differential equations for two variables, k and ω, the first variable being the turbulence kinetic energy (k) while the second (ω) is the specific rate of dissipation (of the turbulence kinetic energy k into internal thermal energy).
== Standard (Wilcox) k–ω turbulence model ==
The eddy viscosity νT, as needed in the RANS equations, is given by: νT = k/ω, while the evolution of k and ω is modelled as:
{\displaystyle {\begin{aligned}&{\frac {\partial (\rho k)}{\partial t}}+{\frac {\partial (\rho u_{j}k)}{\partial x_{j}}}=\rho P-\beta ^{*}\rho \omega k+{\frac {\partial }{\partial x_{j}}}\left[\left(\mu +\sigma _{k}{\frac {\rho k}{\omega }}\right){\frac {\partial k}{\partial x_{j}}}\right],\qquad {\text{with }}P=\tau _{ij}{\frac {\partial u_{i}}{\partial x_{j}}},\\&\displaystyle {\frac {\partial (\rho \omega )}{\partial t}}+{\frac {\partial (\rho u_{j}\omega )}{\partial x_{j}}}={\frac {\alpha \omega }{k}}\rho P-\beta \rho \omega ^{2}+{\frac {\partial }{\partial x_{j}}}\left[\left(\mu +\sigma _{\omega }{\frac {\rho k}{\omega }}\right){\frac {\partial \omega }{\partial x_{j}}}\right]+{\frac {\rho \sigma _{d}}{\omega }}{\frac {\partial k}{\partial x_{j}}}{\frac {\partial \omega }{\partial x_{j}}}.\end{aligned}}}
For recommendations for the values of the different parameters, see Wilcox (2008).
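As a minimal illustration of the closure, the eddy viscosity follows directly from the two transported variables; the values of k and ω below are hypothetical, chosen only to show the arithmetic:

```python
def eddy_viscosity(k, omega):
    """Wilcox k-omega closure: kinematic eddy viscosity nu_T = k / omega."""
    return k / omega

# Hypothetical freestream-like values (not from Wilcox 2008):
k = 1.5e-3    # turbulence kinetic energy, m^2/s^2
omega = 10.0  # specific dissipation rate, 1/s

nu_t = eddy_viscosity(k, omega)  # m^2/s
```

In a RANS solver, this νT is evaluated at every grid point from the local k and ω fields and added to the molecular viscosity in the momentum equations.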
== Notes ==
== References ==
Wilcox, D. C. (2008), "Formulation of the k–ω Turbulence Model Revisited", AIAA Journal, 46 (11): 2823–2838, Bibcode:2008AIAAJ..46.2823W, doi:10.2514/1.36541
Wilcox, D. C. (1998), Turbulence Modeling for CFD (2nd ed.), DCW Industries, ISBN 0963605100
Bradshaw, P. (1971), An introduction to turbulence and its measurement, Pergamon Press, ISBN 0080166210
Versteeg, H.; Malalasekera, W. (2007), An Introduction to Computational Fluid Dynamics: The Finite Volume Method (2nd ed.), Pearson Education Limited, ISBN 978-0131274983
== External links ==
CFD Online Wilcox k–omega turbulence model description, retrieved May 12, 2014
The convection–diffusion equation is a parabolic partial differential equation that combines the diffusion and convection (advection) equations. It describes physical phenomena where particles, energy, or other physical quantities are transferred inside a physical system due to two processes: diffusion and convection. Depending on context, the same equation can be called the advection–diffusion equation, drift–diffusion equation, or (generic) scalar transport equation.
== Equation ==
The general equation in conservative form is
{\displaystyle {\frac {\partial c}{\partial t}}=\mathbf {\nabla } \cdot (D\mathbf {\nabla } c-\mathbf {v} c)+R}
where
c is the variable of interest (species concentration for mass transfer, temperature for heat transfer),
D is the diffusivity (also called diffusion coefficient), such as mass diffusivity for particle motion or thermal diffusivity for heat transport,
v is the velocity field that the quantity is moving with. It is a function of time and space. For example, in advection, c might be the concentration of salt in a river, and then v would be the velocity of the water flow as a function of time and location. Another example, c might be the concentration of small bubbles in a calm lake, and then v would be the velocity of bubbles rising towards the surface by buoyancy (see below) depending on time and location of the bubble. For multiphase flows and flows in porous media, v is the (hypothetical) superficial velocity.
R describes sources or sinks of the quantity c, i.e. the creation or destruction of the quantity. For example, for a chemical species, R > 0 means that a chemical reaction is creating more of the species, and R < 0 means that a chemical reaction is destroying the species. For heat transport, R > 0 might occur if thermal energy is being generated by friction.
∇ represents gradient and ∇ ⋅ represents divergence. In this equation, ∇c represents concentration gradient.
In general, D, v, and R may vary with space and time. In cases in which they depend on concentration as well, the equation becomes nonlinear, giving rise to many distinctive mixing phenomena such as Rayleigh–Bénard convection when v depends on temperature in the heat transfer formulation and reaction–diffusion pattern formation when R depends on concentration in the mass transfer formulation.
Often there are several quantities, each with its own convection–diffusion equation, where the destruction of one quantity entails the creation of another. For example, when methane burns, it involves not only the destruction of methane and oxygen but also the creation of carbon dioxide and water vapor. Therefore, while each of these chemicals has its own convection–diffusion equation, they are coupled together and must be solved as a system of differential equations.
=== Derivation ===
The convection–diffusion equation can be derived in a straightforward way from the continuity equation, which states that the rate of change for a scalar quantity in a differential control volume is given by flow and diffusion into and out of that part of the system along with any generation or consumption inside the control volume:
{\displaystyle {\frac {\partial c}{\partial t}}+\nabla \cdot \mathbf {j} =R,}
where j is the total flux and R is a net volumetric source for c. There are two sources of flux in this situation. First, diffusive flux arises due to diffusion. This is typically approximated by Fick's first law:
{\displaystyle \mathbf {j} _{\text{diff}}=-D\nabla c}
i.e., the flux of the diffusing material (relative to the bulk motion) in any part of the system is proportional to the local concentration gradient. Second, when there is overall convection or flow, there is an associated flux called advective flux:
{\displaystyle \mathbf {j} _{\text{adv}}=\mathbf {v} c}
The total flux (in a stationary coordinate system) is given by the sum of these two:
{\displaystyle \mathbf {j} =\mathbf {j} _{\text{diff}}+\mathbf {j} _{\text{adv}}=-D\nabla c+\mathbf {v} c.}
Plugging into the continuity equation:
{\displaystyle {\frac {\partial c}{\partial t}}+\nabla \cdot \left(-D\nabla c+\mathbf {v} c\right)=R.}
=== Common simplifications ===
In a common situation, the diffusion coefficient is constant, there are no sources or sinks, and the velocity field describes an incompressible flow (i.e., it has zero divergence). Then the formula simplifies to:
{\displaystyle {\frac {\partial c}{\partial t}}=D\nabla ^{2}c-\mathbf {v} \cdot \nabla c.}
In this case the equation can be put in the simple diffusion form:
{\displaystyle {\frac {dc}{dt}}=D\nabla ^{2}c,}
where the derivative of the left hand side is the material derivative of the variable c.
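The simplified equation (constant D, incompressible v, no sources) is easy to integrate numerically. Below is a minimal explicit finite-difference sketch in one dimension, using central differences for diffusion and first-order upwinding for advection; the grid size, coefficients, and time step are hypothetical choices that satisfy the usual explicit stability limits:

```python
import math

# One explicit time step of dc/dt = D*c_xx - v*c_x on a periodic 1D grid:
# central difference for the diffusion term, first-order upwind for advection.
def step(c, D, v, dx, dt):
    n = len(c)
    out = [0.0] * n
    for i in range(n):
        left, right = c[(i - 1) % n], c[(i + 1) % n]
        diffusion = D * (right - 2.0 * c[i] + left) / dx**2
        advection = -v * (c[i] - left) / dx if v > 0 else -v * (right - c[i]) / dx
        out[i] = c[i] + dt * (diffusion + advection)
    return out

n = 64
dx = 1.0 / n
D, v, dt = 1e-3, 0.5, 1e-3  # hypothetical values within the explicit stability limits

# Gaussian initial profile centred at x = 0.5:
c = [math.exp(-(((i * dx) - 0.5) / 0.1) ** 2) for i in range(n)]
total0 = sum(c)

for _ in range(200):
    c = step(c, D, v, dx, dt)

total1 = sum(c)  # with R = 0 and periodic boundaries, total "mass" is conserved
```

With R = 0 and periodic boundaries the scheme conserves the total amount of c, mirroring the conservative form of the equation, while the peak of the profile is both advected downstream and smeared out by diffusion.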
In non-interacting material, D=0 (for example, when temperature is close to absolute zero, dilute gas has almost zero mass diffusivity), hence the transport equation is simply the continuity equation:
{\displaystyle {\frac {\partial c}{\partial t}}+\mathbf {v} \cdot \nabla c=0.}
Using Fourier transform in both temporal and spatial domain (that is, with integral kernel
{\displaystyle e^{i\omega t+i\mathbf {k} \cdot \mathbf {x} }}
), its characteristic equation can be obtained:
{\displaystyle i\omega {\tilde {c}}+\mathbf {v} \cdot i\mathbf {k} {\tilde {c}}=0\rightarrow \omega =-\mathbf {k} \cdot \mathbf {v} ,}
which gives the general solution:
{\displaystyle c=f(\mathbf {x} -\mathbf {v} t),}
where
{\displaystyle f} is any differentiable scalar function. This is the basis of temperature measurement for gases near Bose–Einstein condensation via the time-of-flight method.
=== Stationary version ===
The stationary convection–diffusion equation describes the steady-state behavior of a convection–diffusion system. In a steady state, ∂c/∂t = 0, so the equation to solve becomes the second order equation:
{\displaystyle \nabla \cdot (-D\nabla c+\mathbf {v} c)=R.}
In one spatial dimension, the equation can be written as
{\displaystyle {\frac {d}{dx}}\left(-D(x){\frac {dc(x)}{dx}}+v(x)c(x)\right)=R(x)}
This can be integrated once in the space variable x to give:
{\displaystyle D(x){\frac {dc(x)}{dx}}-v(x)c(x)=-\int _{x}R(x')dx'}
Where D is not zero, this is an inhomogeneous first-order linear differential equation with variable coefficients in the variable c(x):
{\displaystyle y'(x)=f(x)y(x)+g(x).}
where the coefficients are:
{\displaystyle f(x)={\frac {v(x)}{D(x)}}}
and:
{\displaystyle g(x)=-{\frac {1}{D(x)}}\int _{x}R(x')dx'}
On the other hand, in the positions x where D=0, the first-order diffusion term disappears and the solution becomes simply the ratio:
{\displaystyle c(x)={\frac {1}{v(x)}}\int _{x}R(x')dx'}
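For constant D and v with R = 0, the once-integrated stationary equation D c′(x) − v c(x) = const admits the homogeneous solution c(x) = exp(vx/D). A short numerical check of this, with hypothetical values of D and v:

```python
import math

# For constant D, v and R = 0, c(x) = exp(v*x/D) satisfies D*c'(x) - v*c(x) = 0.
D, v = 0.1, 0.5  # hypothetical constant coefficients

def c(x):
    return math.exp(v * x / D)

def residual(x, h=1e-6):
    """D*c'(x) - v*c(x), with c' from a central difference."""
    dcdx = (c(x + h) - c(x - h)) / (2.0 * h)
    return D * dcdx - v * c(x)

# Residual should vanish (up to truncation error) across the interval:
res = max(abs(residual(x / 10.0)) for x in range(1, 10))
```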
== Velocity in response to a force ==
In some cases, the average velocity field v exists because of a force; for example, the equation might describe the flow of ions dissolved in a liquid, with an electric field pulling the ions in some direction (as in gel electrophoresis). In this situation, it is usually called the drift–diffusion equation or the Smoluchowski equation, after Marian Smoluchowski who described it in 1915 (not to be confused with the Einstein–Smoluchowski relation or Smoluchowski coagulation equation).
Typically, the average velocity is directly proportional to the applied force, giving the equation:
{\displaystyle {\frac {\partial c}{\partial t}}=\nabla \cdot (D\nabla c)-\nabla \cdot \left(\zeta ^{-1}\mathbf {F} c\right)+R}
where F is the force, and ζ characterizes the friction or viscous drag. (The inverse ζ−1 is called mobility.)
=== Derivation of Einstein relation ===
When the force is associated with a potential energy F = −∇U (see conservative force), a steady-state solution to the above equation (i.e. 0 = R = ∂c/∂t) is:
{\displaystyle c\propto \exp \left(-D^{-1}\zeta ^{-1}U\right)}
(assuming D and ζ are constant). In other words, there are more particles where the energy is lower. This concentration profile is expected to agree with the Boltzmann distribution (more precisely, the Gibbs measure). From this assumption, the Einstein relation can be proven:
{\displaystyle D\zeta =k_{\mathrm {B} }T.}
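The steady-state claim above can be checked numerically: for any potential U, the profile c ∝ exp(−U/(Dζ)) makes the total flux −D∇c + ζ⁻¹Fc vanish, so ∂c/∂t = 0. The double-well potential and parameter values below are hypothetical:

```python
import math

# Check that c(x) = exp(-U(x)/(D*zeta)) gives zero total flux
# j = -D*c'(x) + F(x)*c(x)/zeta with F = -U' (steady state, R = 0).
D, zeta = 0.2, 3.0  # hypothetical diffusivity and drag coefficient

def U(x):
    return x**4 - x**2  # a hypothetical double-well potential

def c(x):
    return math.exp(-U(x) / (D * zeta))

def flux(x, h=1e-6):
    dcdx = (c(x + h) - c(x - h)) / (2.0 * h)
    dUdx = (U(x + h) - U(x - h)) / (2.0 * h)
    return -D * dcdx + (-dUdx) * c(x) / zeta

max_flux = max(abs(flux(-1.0 + 0.2 * i)) for i in range(11))
```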
== Similar equations in other contexts ==
The convection–diffusion equation is a relatively simple equation describing flows, or alternatively, describing a stochastically-changing system. Therefore, the same or similar equation arises in many contexts unrelated to flows through space.
It is formally identical to the Fokker–Planck equation for the velocity of a particle.
It is closely related to the Black–Scholes equation and other equations in financial mathematics.
It is closely related to the Navier–Stokes equations, because the flow of momentum in a fluid is mathematically similar to the flow of mass or energy. The correspondence is clearest in the case of an incompressible Newtonian fluid, in which case the Navier–Stokes equation is:
{\displaystyle {\frac {\partial \mathbf {j} }{\partial t}}=\mu \nabla ^{2}\mathbf {j} -\mathbf {u} \cdot \nabla \mathbf {j} +(\mathbf {f} -\nabla P)}
where j is the momentum of the fluid (per unit volume) at each point (equal to the density ρ multiplied by the flow velocity u), μ is viscosity, P is fluid pressure, and f is any other body force such as gravity. In this equation, the term on the left-hand side describes the change in momentum at a given point; the first term on the right describes the diffusion of momentum by viscosity; the second term on the right describes the advective flow of momentum; and the last two terms on the right describes the external and internal forces which can act as sources or sinks of momentum.
=== In probability theory ===
The convection–diffusion equation (with R = 0) can be viewed as the Fokker-Planck equation, corresponding to random motion with diffusivity D and bias v. For example, the equation can describe the Brownian motion of a single particle, where the variable c describes the probability distribution for the particle to be in a given position at a given time. The reason the equation can be used that way is because there is no mathematical difference between the probability distribution of a single particle, and the concentration profile of a collection of infinitely many particles (as long as the particles do not interact with each other).
The Langevin equation describes advection, diffusion, and other phenomena in an explicitly stochastic way. One of the simplest forms of the Langevin equation is when its "noise term" is Gaussian; in this case, the Langevin equation is exactly equivalent to the convection–diffusion equation. However, the Langevin equation is more general.
=== In semiconductor physics ===
In semiconductor physics, this equation is called the drift–diffusion equation. The word "drift" is related to drift current and drift velocity. The equation is normally written:
{\displaystyle {\begin{aligned}{\frac {\mathbf {J} _{n}}{-q}}&=-D_{n}\nabla n-n\mu _{n}\mathbf {E} \\{\frac {\mathbf {J} _{p}}{q}}&=-D_{p}\nabla p+p\mu _{p}\mathbf {E} \\{\frac {\partial n}{\partial t}}&=-\nabla \cdot {\frac {\mathbf {J} _{n}}{-q}}+R\\{\frac {\partial p}{\partial t}}&=-\nabla \cdot {\frac {\mathbf {J} _{p}}{q}}+R\end{aligned}}}
where
n and p are the concentrations (densities) of electrons and holes, respectively,
q > 0 is the elementary charge,
Jn and Jp are the electric currents due to electrons and holes respectively,
Jn/−q and Jp/q are the corresponding "particle currents" of electrons and holes respectively,
R represents carrier generation and recombination (R > 0 for generation of electron-hole pairs, R < 0 for recombination.)
E is the electric field vector
{\displaystyle \mu _{n}} and {\displaystyle \mu _{p}} are the electron and hole mobilities.
The diffusion coefficient and mobility are related by the Einstein relation as above:
{\displaystyle {\begin{aligned}D_{n}&={\frac {\mu _{n}k_{\mathrm {B} }T}{q}},\\D_{p}&={\frac {\mu _{p}k_{\mathrm {B} }T}{q}},\end{aligned}}}
where kB is the Boltzmann constant and T is absolute temperature. The drift current and diffusion current refer separately to the two terms in the expressions for J, namely:
{\displaystyle {\begin{aligned}{\frac {\mathbf {J} _{n,{\text{drift}}}}{-q}}&=-n\mu _{n}\mathbf {E} ,\\{\frac {\mathbf {J} _{p,{\text{drift}}}}{q}}&=p\mu _{p}\mathbf {E} ,\\{\frac {\mathbf {J} _{n,{\text{diff}}}}{-q}}&=-D_{n}\nabla n,\\{\frac {\mathbf {J} _{p,{\text{diff}}}}{q}}&=-D_{p}\nabla p.\end{aligned}}}
This equation can be solved together with Poisson's equation numerically.
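In practice the Einstein relation is often used to obtain diffusion coefficients from measured mobilities. A sketch of that conversion, using textbook-ballpark silicon mobilities (illustrative values, not from this article):

```python
K_B = 1.380649e-23   # Boltzmann constant, J/K
Q = 1.602176634e-19  # elementary charge, C

def diffusion_from_mobility(mu_cm2, T=300.0):
    """Einstein relation D = mu * kB * T / q; mu in cm^2/(V*s) gives D in cm^2/s."""
    return mu_cm2 * K_B * T / Q

# Illustrative room-temperature silicon-like mobilities (ballpark values):
D_n = diffusion_from_mobility(1400.0)  # electrons
D_p = diffusion_from_mobility(450.0)   # holes
```

At 300 K the factor kB·T/q is the thermal voltage, about 26 mV, so the diffusion coefficient in cm²/s is roughly the mobility in cm²/(V·s) divided by 40.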
An example of solving the drift–diffusion equation: when light shines on the center of a semiconductor, carriers are generated in the middle and diffuse towards the two ends. Solving the drift–diffusion equation in this structure yields the electron density distribution, which shows the gradient of carriers from the center towards the two ends.
== See also ==
== Notes ==
== References ==
Wesseling, Pieter (2001). Principles of Computational Fluid Dynamics. Springer Series in Computational Mathematics. Vol. 29. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-05146-3. ISBN 978-3-642-05145-6.
== Further reading ==
Sewell, Granville (1988). The Numerical Solution of Ordinary and Partial Differential Equations. Academic Press. ISBN 0-12-637475-9.
The derivation of the Navier–Stokes equations as well as their application and formulation for different families of fluids, is an important exercise in fluid dynamics with applications in mechanical engineering, physics, chemistry, heat transfer, and electrical engineering. A proof explaining the properties and bounds of the equations, such as Navier–Stokes existence and smoothness, is one of the important unsolved problems in mathematics.
== Basic assumptions ==
The Navier–Stokes equations are based on the assumption that the fluid, at the scale of interest, is a continuum – a continuous substance rather than discrete particles. Another necessary assumption is that all the fields of interest including pressure, flow velocity, density, and temperature are at least weakly differentiable.
The equations are derived from the basic principles of continuity of mass, conservation of momentum, and conservation of energy. Sometimes it is necessary to consider a finite arbitrary volume, called a control volume, over which these principles can be applied. This finite volume is denoted by Ω and its bounding surface ∂Ω. The control volume can remain fixed in space or can move with the fluid.
== The material derivative ==
Changes in properties of a moving fluid can be measured in two different ways. One can measure a given property by either carrying out the measurement on a fixed point in space as particles of the fluid pass by, or by following a parcel of fluid along its streamline. The derivative of a field with respect to a fixed position in space is called the Eulerian derivative, while the derivative following a moving parcel is called the advective or material (or Lagrangian) derivative.
The material derivative is defined as the linear operator:
{\displaystyle {\frac {D}{Dt}}\ {\stackrel {\mathrm {def} }{=}}\ {\frac {\partial }{\partial t}}+\mathbf {u} \cdot \nabla }
where u is the flow velocity. The first term on the right-hand side of the equation is the ordinary Eulerian derivative (the derivative on a fixed reference frame, representing changes at a point with respect to time) whereas the second term represents changes of a quantity with respect to position (see advection). This "special" derivative is in fact the ordinary derivative of a function of many variables along a path following the fluid motion; it may be derived through application of the chain rule in which all independent variables are checked for change along the path (which is to say, the total derivative).
For example, the measurement of changes in wind velocity in the atmosphere can be obtained with the help of an anemometer in a weather station or by observing the movement of a weather balloon. The anemometer in the first case is measuring the velocity of all the moving particles passing through a fixed point in space, whereas in the second case the instrument is measuring changes in velocity as it moves with the flow.
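The defining property of the material derivative — that it vanishes for a quantity passively carried along by the flow — can be checked numerically. The following sketch (an illustrative setup, not from the source) evaluates Dφ/Dt = ∂φ/∂t + u·∇φ by finite differences for a 1-D field advected at constant speed:

```python
import numpy as np

# Sketch (assumed setup): evaluate the material derivative
# D(phi)/Dt = d(phi)/dt + u . grad(phi) by finite differences on a 1-D grid.
x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
dt = 1e-4
u0 = 2.0                                # constant advection velocity

phi = lambda t: x - u0 * t              # a field passively carried by the flow

dphi_dt = (phi(dt) - phi(0.0)) / dt     # Eulerian (fixed-point) time derivative
grad_phi = np.gradient(phi(0.0), dx)    # spatial gradient
material = dphi_dt + u0 * grad_phi      # D(phi)/Dt

# The field moves with the flow, so its material derivative vanishes
print(np.max(np.abs(material)))         # ~0 up to round-off
```

The Eulerian derivative alone is −u₀ (the field at a fixed point changes as the pattern sweeps past), while the advective term exactly cancels it.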
== Continuity equations ==
The Navier–Stokes equation is a special continuity equation. A continuity equation may be derived from conservation principles of:
mass,
momentum,
energy.
A continuity equation (or conservation law) is an integral relation stating that the rate of change of some integrated property φ defined over a control volume Ω must be equal to the rate at which it is lost or gained through the boundaries Γ of the volume plus the rate at which it is created or consumed by sources and sinks inside the volume. This is expressed by the following integral continuity equation:
{\displaystyle {\frac {d}{dt}}\int _{\Omega }\varphi \ d\Omega =-\int _{\Gamma }\varphi \mathbf {u\cdot n} \ d\Gamma -\int _{\Omega }s\ d\Omega }
where u is the flow velocity of the fluid, n is the outward-pointing unit normal vector, and s represents the sources and sinks in the flow, taking the sinks as positive.
The divergence theorem may be applied to the surface integral, changing it into a volume integral:
{\displaystyle {\frac {d}{dt}}\int _{\Omega }\varphi \ d\Omega =-\int _{\Omega }\nabla \cdot (\varphi \mathbf {u} )\ d\Omega -\int _{\Omega }s\ d\Omega .}
Applying the Reynolds transport theorem to the integral on the left and then combining all of the integrals:
{\displaystyle \int _{\Omega }{\frac {\partial \varphi }{\partial t}}\ d\Omega =-\int _{\Omega }\nabla \cdot (\varphi \mathbf {u} )\ d\Omega -\int _{\Omega }s\ d\Omega \quad \Rightarrow \quad \int _{\Omega }\left({\frac {\partial \varphi }{\partial t}}+\nabla \cdot (\varphi \mathbf {u} )+s\right)d\Omega =0.}
The integral must be zero for any control volume; this can only be true if the integrand itself is zero, so that:
{\displaystyle {\frac {\partial \varphi }{\partial t}}+\nabla \cdot (\varphi \mathbf {u} )+s=0.}
From this valuable relation (a very generic continuity equation), three important concepts may be concisely written: conservation of mass, conservation of momentum, and conservation of energy. Validity is retained if φ is a vector, in which case the vector-vector product in the second term will be a dyad.
=== Conservation of mass ===
When the intensive property φ is taken to be the mass density ρ, substitution into the general continuity equation with s = 0 (no sources or sinks of mass) gives:
{\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot (\rho \mathbf {u} )=0}
where ρ is the mass density (mass per unit volume), and u is the flow velocity. This equation is called the mass continuity equation, or simply the continuity equation. This equation generally accompanies the Navier–Stokes equation.
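The mass continuity equation can be verified numerically against an exact solution. The sketch below (an illustrative 1-D example, not from the source) uses a density pulse advected at constant speed, for which ∂ρ/∂t + ∂(ρu)/∂x = 0 holds identically:

```python
import numpy as np

# Sketch: check the 1-D mass continuity equation d(rho)/dt + d(rho*u)/dx = 0
# for a density pulse advected at constant speed (an exact solution).
x = np.linspace(-1.0, 1.0, 401)
dx = x[1] - x[0]
dt = 1e-5
u = 0.7                                          # constant flow velocity

rho = lambda t: np.exp(-((x - u * t) ** 2) / 0.1)

drho_dt = (rho(dt) - rho(-dt)) / (2 * dt)        # centered time derivative
dflux_dx = np.gradient(rho(0.0) * u, dx)         # divergence of the mass flux
residual = drho_dt + dflux_dx

print(np.max(np.abs(residual)))                  # small: O(dx^2) truncation error
```

The residual is nonzero only because of finite-difference truncation error; refining the grid shrinks it quadratically.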
In the case of an incompressible fluid, Dρ/Dt = 0 (the density following the path of a fluid element is constant) and the equation reduces to:
{\displaystyle \nabla \cdot \mathbf {u} =0}
which is in fact a statement of the conservation of volume.
=== Conservation of momentum ===
A general momentum equation is obtained when the conservation relation is applied to momentum. When the intensive property φ is considered as the mass flux (also momentum density), that is, the product of mass density and flow velocity ρu, by substitution into the general continuity equation:
{\displaystyle {\frac {\partial }{\partial t}}(\rho \mathbf {u} )+\nabla \cdot (\rho \mathbf {u} \otimes \mathbf {u} )=\mathbf {s} }
where u ⊗ u is a dyad, a special case of tensor product, which results in a second rank tensor; the divergence of a second rank tensor is again a vector (a first-rank tensor).
Using the formula for the divergence of a dyad,
{\displaystyle \nabla \cdot (\mathbf {a} \otimes \mathbf {b} )=(\nabla \cdot \mathbf {a} )\mathbf {b} +\mathbf {a} \cdot \nabla \mathbf {b} }
we then have
{\displaystyle \mathbf {u} {\frac {\partial \rho }{\partial t}}+\rho {\frac {\partial \mathbf {u} }{\partial t}}+\mathbf {u} \nabla \cdot (\rho \mathbf {u} )+\rho \mathbf {u} \cdot \nabla \mathbf {u} =\mathbf {s} }
Note that the gradient of a vector is a special case of the covariant derivative; the operation produces a second-rank tensor, and except in Cartesian coordinates it is not simply an element-by-element gradient. Rearranging:
{\displaystyle \mathbf {u} \left({\frac {\partial \rho }{\partial t}}+\nabla \cdot (\rho \mathbf {u} )\right)+\rho \left({\frac {\partial \mathbf {u} }{\partial t}}+\mathbf {u} \cdot \nabla \mathbf {u} \right)=\mathbf {s} }
The leftmost expression enclosed in parentheses is, by mass continuity (shown before), equal to zero. Noting that what remains on the left side of the equation is the material derivative of flow velocity:
{\displaystyle \rho {\frac {D\mathbf {u} }{Dt}}=\rho \left({\frac {\partial \mathbf {u} }{\partial t}}+\mathbf {u} \cdot \nabla \mathbf {u} \right)=\mathbf {s} }
This is simply an expression of Newton's second law (F = ma) in terms of body forces instead of point forces; each term in any form of the Navier–Stokes equations is a force per unit volume. A shorter though less rigorous way to arrive at this result would be the application of the chain rule to acceleration:
{\displaystyle {\begin{aligned}\rho {\frac {d}{dt}}{\bigl (}\mathbf {u} (x,y,z,t){\bigr )}=\mathbf {s} \quad &\Rightarrow &\rho \left({\frac {\partial \mathbf {u} }{\partial t}}+{\frac {\partial \mathbf {u} }{\partial x}}{\frac {dx}{dt}}+{\frac {\partial \mathbf {u} }{\partial y}}{\frac {dy}{dt}}+{\frac {\partial \mathbf {u} }{\partial z}}{\frac {dz}{dt}}\right)&=\mathbf {s} \\\quad &\Rightarrow &\rho \left({\frac {\partial \mathbf {u} }{\partial t}}+u{\frac {\partial \mathbf {u} }{\partial x}}+v{\frac {\partial \mathbf {u} }{\partial y}}+w{\frac {\partial \mathbf {u} }{\partial z}}\right)&=\mathbf {s} \\\quad &\Rightarrow &\rho \left({\frac {\partial \mathbf {u} }{\partial t}}+\mathbf {u} \cdot \nabla \mathbf {u} \right)&=\mathbf {s} \end{aligned}}}
where u = (u, v, w). The reason why this is "less rigorous" is that we haven't shown that the choice of
{\displaystyle \mathbf {u} =\left({\frac {dx}{dt}},{\frac {dy}{dt}},{\frac {dz}{dt}}\right)}
is correct; however it does make sense since with that choice of path the derivative is "following" a fluid "particle", and in order for Newton's second law to work, forces must be summed following a particle. For this reason the convective derivative is also known as the particle derivative.
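The chain-rule step above can be checked symbolically. The sketch below (an assumed, illustrative field, one spatial dimension for brevity) verifies that the total derivative along a particle path x = X(t) expands into the Eulerian derivative plus the advective term:

```python
import sympy as sp

# Sketch: check the chain-rule identity behind the material derivative for a
# concrete (assumed) field u(x, t) = sin(x)*exp(t) along an arbitrary particle
# path x = X(t):  d/dt u(X(t), t) = u_t + X'(t) * u_x, evaluated on the path.
x, t = sp.symbols('x t')
X = sp.Function('X')(t)
u = sp.sin(x) * sp.exp(t)

on_path = u.subs(x, X)
total = sp.diff(on_path, t)                 # chain-rule expansion
expected = u.diff(t).subs(x, X) + sp.diff(X, t) * u.diff(x).subs(x, X)

print(sp.simplify(total - expected))        # 0
```

Identifying X'(t) with the velocity component u then reproduces ∂u/∂t + u·∇u, exactly as in the derivation above.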
== Cauchy momentum equation ==
The generic density of the momentum source s seen previously is made specific first by breaking it up into two new terms, one to describe internal stresses and one for external forces, such as gravity. By examining the forces acting on a small cube in a fluid, it may be shown that
{\displaystyle \rho {\frac {D\mathbf {u} }{Dt}}=\nabla \cdot {\boldsymbol {\sigma }}+\rho \mathbf {f} }
where σ is the Cauchy stress tensor, and f accounts for body forces present. This equation is called the Cauchy momentum equation and describes the non-relativistic momentum conservation of any continuum that conserves mass. σ is a rank two symmetric tensor given by its covariant components. In orthogonal coordinates in three dimensions it is represented as the 3 × 3 matrix:
{\displaystyle \sigma _{ij}={\begin{pmatrix}\sigma _{xx}&\tau _{xy}&\tau _{xz}\\\tau _{yx}&\sigma _{yy}&\tau _{yz}\\\tau _{zx}&\tau _{zy}&\sigma _{zz}\end{pmatrix}}}
where the σ are normal stresses and τ shear stresses. This matrix is split up into two terms:
{\displaystyle \sigma _{ij}={\begin{pmatrix}\sigma _{xx}&\tau _{xy}&\tau _{xz}\\\tau _{yx}&\sigma _{yy}&\tau _{yz}\\\tau _{zx}&\tau _{zy}&\sigma _{zz}\end{pmatrix}}=-{\begin{pmatrix}p&0&0\\0&p&0\\0&0&p\end{pmatrix}}+{\begin{pmatrix}\sigma _{xx}+p&\tau _{xy}&\tau _{xz}\\\tau _{yx}&\sigma _{yy}+p&\tau _{yz}\\\tau _{zx}&\tau _{zy}&\sigma _{zz}+p\end{pmatrix}}=-p\mathbf {I} +{\boldsymbol {\tau }}}
where I is the 3 × 3 identity matrix and τ is the deviatoric stress tensor. Note that the mechanical pressure p is equal to the negative of the mean normal stress:
{\displaystyle p=-{\tfrac {1}{3}}\left(\sigma _{xx}+\sigma _{yy}+\sigma _{zz}\right).}
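The pressure/deviatoric split can be illustrated with a small numerical sketch (the stress values below are arbitrary, assumed for illustration):

```python
import numpy as np

# Sketch: split an (assumed, symmetric) stress tensor into an isotropic
# pressure part and a traceless deviatoric part, sigma = -p*I + tau.
sigma = np.array([[-3.0, 0.5, 0.2],
                  [ 0.5, -2.0, 0.1],
                  [ 0.2,  0.1, -4.0]])

p = -np.trace(sigma) / 3.0      # mechanical pressure = -(mean normal stress)
tau = sigma + p * np.eye(3)     # deviatoric stress tensor

print(p)                        # 3.0
print(np.trace(tau))            # 0.0: traceless by construction
```

Note that the deviatoric part is traceless by construction, regardless of the stress values chosen.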
The motivation for doing this is that pressure is typically a variable of interest, and also this simplifies application to specific fluid families later on since the rightmost tensor τ in the equation above must be zero for a fluid at rest. Note that τ is traceless. The Cauchy equation may now be written in another more explicit form:
{\displaystyle \rho {\frac {D\mathbf {u} }{Dt}}=-\nabla p+\nabla \cdot {\boldsymbol {\tau }}+\rho \mathbf {f} }
This equation is still incomplete. For completion, one must make hypotheses on the forms of τ and p, that is, one needs a constitutive law for the stress tensor which can be obtained for specific fluid families and on the pressure. Some of these hypotheses lead to the Euler equations (fluid dynamics), other ones lead to the Navier–Stokes equations. Additionally, if the flow is assumed compressible an equation of state will be required, which will likely further require a conservation of energy formulation.
== Application to different fluids ==
The general form of the equations of motion is not "ready for use": the stress tensor is still unknown, so more information is needed; this information is normally some knowledge of the viscous behavior of the fluid. For different types of fluid flow this results in specific forms of the Navier–Stokes equations.
=== Newtonian fluid ===
==== Compressible Newtonian fluid ====
The formulation for Newtonian fluids stems from an observation made by Newton that, for most fluids,
{\displaystyle \tau \propto {\frac {\partial u}{\partial y}}}
In order to apply this to the Navier–Stokes equations, three assumptions were made by Stokes:
The stress tensor is a linear function of the strain rate tensor or equivalently the velocity gradient.
The fluid is isotropic.
For a fluid at rest, ∇ ⋅ τ must be zero (so that hydrostatic pressure results).
The above list states the classic argument that the shear strain rate tensor (the (symmetric) shear part of the velocity gradient) is a pure shear tensor and does not include any inflow/outflow part (any compression/expansion part). This means that its trace is zero, and this is achieved by subtracting ∇ ⋅ u in a symmetric way from the diagonal elements of the tensor. The compressional contribution to viscous stress is added as a separate diagonal tensor.
Applying these assumptions leads to:
{\displaystyle {\boldsymbol {\tau }}=\mu \left(\nabla \mathbf {u} +\left(\nabla \mathbf {u} \right)^{\mathsf {T}}\right)+\lambda \left(\nabla \cdot \mathbf {u} \right)\mathbf {I} }
or in tensor form
{\displaystyle \tau _{ij}=\mu \left({\frac {\partial u_{i}}{\partial x_{j}}}+{\frac {\partial u_{j}}{\partial x_{i}}}\right)+\delta _{ij}\lambda {\frac {\partial u_{k}}{\partial x_{k}}}}
That is, the deviatoric part of the deformation rate tensor is identified with the deviatoric part of the stress tensor, up to a factor μ.
δij is the Kronecker delta. μ and λ are proportionality constants associated with the assumption that stress depends on strain linearly; μ is called the first coefficient of viscosity or shear viscosity (usually just called "viscosity") and λ is the second coefficient of viscosity or volume viscosity (and it is related to bulk viscosity). The value of λ, which produces a viscous effect associated with volume change, is very difficult to determine; not even its sign is known with absolute certainty. Even in compressible flows, the term involving λ is often negligible; however it can occasionally be important even in nearly incompressible flows and is a matter of controversy. When taken nonzero, the most common approximation is λ ≈ −2/3μ.
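The constitutive law τij = μ(∂ui/∂xj + ∂uj/∂xi) + δij λ ∂uk/∂xk can be sketched directly. The velocity gradient and viscosity values below are assumed for illustration; with the common approximation λ = −2/3 μ, the viscous stress comes out traceless even for compressible flow:

```python
import numpy as np

# Sketch: Newtonian constitutive law
#   tau = mu*(grad_u + grad_u^T) + lambda*(div u)*I,
# with the common (assumed) approximation lambda = -2/3 * mu.
mu = 1.8e-5                           # shear viscosity (illustrative value)
lam = -2.0 / 3.0 * mu                 # second coefficient of viscosity

grad_u = np.array([[0.3, 1.0, 0.0],   # velocity gradient d(u_i)/d(x_j), assumed
                   [0.2, -0.1, 0.4],
                   [0.0, 0.5, 0.6]])

div_u = np.trace(grad_u)              # d(u_k)/d(x_k)
tau = mu * (grad_u + grad_u.T) + lam * div_u * np.eye(3)

# (2*mu + 3*lambda)*div_u = 0, so tau is traceless under this choice of lambda
print(abs(np.trace(tau)) < 1e-12)     # True
```

The trace of τ is (2μ + 3λ)∇·u, so the choice λ = −2/3 μ makes the mechanical pressure coincide with the thermodynamic pressure.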
A straightforward substitution of τij into the momentum conservation equation will yield the Navier–Stokes equations, describing a compressible Newtonian fluid:
{\displaystyle \rho \left({\frac {\partial \mathbf {u} }{\partial t}}+\mathbf {u} \cdot \nabla \mathbf {u} \right)=-\nabla p+\nabla \cdot \left[\mu \left(\nabla \mathbf {u} +\left(\nabla \mathbf {u} \right)^{\mathsf {T}}\right)\right]+\nabla \cdot \left[\lambda \left(\nabla \cdot \mathbf {u} \right)\mathbf {I} \right]+\rho \mathbf {g} }
The body force has been decomposed into density and external acceleration, that is, f = ρg. The associated mass continuity equation is:
{\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot (\rho \mathbf {u} )=0}
In addition to this equation, an equation of state and an equation for the conservation of energy are needed. The equation of state to use depends on context (often the ideal gas law); the conservation of energy will read:
{\displaystyle \rho {\frac {Dh}{Dt}}={\frac {Dp}{Dt}}+\nabla \cdot (k\nabla T)+\Phi }
Here, h is the specific enthalpy, T is the temperature, and Φ is a function representing the dissipation of energy due to viscous effects:
{\displaystyle \Phi =\mu \left(2\left({\frac {\partial u}{\partial x}}\right)^{2}+2\left({\frac {\partial v}{\partial y}}\right)^{2}+2\left({\frac {\partial w}{\partial z}}\right)^{2}+\left({\frac {\partial v}{\partial x}}+{\frac {\partial u}{\partial y}}\right)^{2}+\left({\frac {\partial w}{\partial y}}+{\frac {\partial v}{\partial z}}\right)^{2}+\left({\frac {\partial u}{\partial z}}+{\frac {\partial w}{\partial x}}\right)^{2}\right)+\lambda (\nabla \cdot \mathbf {u} )^{2}.}
With a good equation of state and good functions for the dependence of parameters (such as viscosity) on the variables, this system of equations seems to properly model the dynamics of all known gases and most liquids.
==== Incompressible Newtonian fluid ====
For the special (but very common) case of incompressible flow, the momentum equations simplify significantly. Using the following assumptions:
Viscosity μ will now be a constant
The second viscosity effect λ = 0
The simplified mass continuity equation ∇ ⋅ u = 0
This gives the incompressible Navier–Stokes equations, describing an incompressible Newtonian fluid:
{\displaystyle \rho \left({\frac {\partial \mathbf {u} }{\partial t}}+\mathbf {u} \cdot \nabla \mathbf {u} \right)=-\nabla p+\nabla \cdot \left[\mu \left(\nabla \mathbf {u} +\left(\nabla \mathbf {u} \right)^{\mathsf {T}}\right)\right]+\rho \mathbf {g} }
then looking at the viscous terms of the x momentum equation for example we have:
{\displaystyle {\begin{aligned}&{\frac {\partial }{\partial x}}\left(2\mu {\frac {\partial u}{\partial x}}\right)+{\frac {\partial }{\partial y}}\left(\mu \left({\frac {\partial u}{\partial y}}+{\frac {\partial v}{\partial x}}\right)\right)+{\frac {\partial }{\partial z}}\left(\mu \left({\frac {\partial u}{\partial z}}+{\frac {\partial w}{\partial x}}\right)\right)\\[8px]&\qquad =2\mu {\frac {\partial ^{2}u}{\partial x^{2}}}+\mu {\frac {\partial ^{2}u}{\partial y^{2}}}+\mu {\frac {\partial ^{2}v}{\partial y\,\partial x}}+\mu {\frac {\partial ^{2}u}{\partial z^{2}}}+\mu {\frac {\partial ^{2}w}{\partial z\,\partial x}}\\[8px]&\qquad =\mu {\frac {\partial ^{2}u}{\partial x^{2}}}+\mu {\frac {\partial ^{2}u}{\partial y^{2}}}+\mu {\frac {\partial ^{2}u}{\partial z^{2}}}+\mu {\frac {\partial ^{2}u}{\partial x^{2}}}+\mu {\frac {\partial ^{2}v}{\partial y\,\partial x}}+\mu {\frac {\partial ^{2}w}{\partial z\,\partial x}}\\[8px]&\qquad =\mu \nabla ^{2}u+\mu {\frac {\partial }{\partial x}}{\cancelto {0}{\left({\frac {\partial u}{\partial x}}+{\frac {\partial v}{\partial y}}+{\frac {\partial w}{\partial z}}\right)}}\\[8px]&\qquad =\mu \nabla ^{2}u\end{aligned}}\,}
Similarly for the y and z momentum directions we have μ∇2v and μ∇2w.
This simplification is key to deriving the Navier–Stokes equations from the equation of motion in fluid dynamics when density and viscosity are constant.
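The collapse of the full viscous term to μ∇²u can be checked numerically for a divergence-free field. The sketch below uses an assumed Taylor–Green-like velocity field on a 2-D grid and compares the two forms of the x-viscous term in the interior of the domain:

```python
import numpy as np

# Sketch: for a divergence-free 2-D field (assumed, Taylor-Green-like), check
# numerically that the x-viscous term
#   d/dx(2*mu*du/dx) + d/dy(mu*(du/dy + dv/dx))
# collapses to mu * laplacian(u), as derived above.
mu = 0.01
x = np.linspace(0, 2 * np.pi, 257)
y = np.linspace(0, 2 * np.pi, 257)
dx, dy = x[1] - x[0], y[1] - y[0]
X, Y = np.meshgrid(x, y, indexing='ij')

u = np.sin(X) * np.cos(Y)
v = -np.cos(X) * np.sin(Y)               # so that du/dx + dv/dy = 0

du_dx = np.gradient(u, dx, axis=0)
du_dy = np.gradient(u, dy, axis=1)
dv_dx = np.gradient(v, dx, axis=0)

full = (np.gradient(2 * mu * du_dx, dx, axis=0)
        + np.gradient(mu * (du_dy + dv_dx), dy, axis=1))
lap = mu * (np.gradient(du_dx, dx, axis=0) + np.gradient(du_dy, dy, axis=1))

interior = (slice(2, -2), slice(2, -2))  # avoid one-sided boundary stencils
print(np.max(np.abs(full - lap)[interior]))   # small discretization residual
```

The difference between the two forms is μ ∂/∂x of the discrete divergence, which is zero only up to truncation error, hence the small but nonzero residual.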
=== Non-Newtonian fluids ===
A non-Newtonian fluid is a fluid whose flow properties differ in any way from those of Newtonian fluids. Most commonly the viscosity of non-Newtonian fluids is a function of shear rate or shear rate history. However, there are some non-Newtonian fluids with shear-independent viscosity, that nonetheless exhibit normal stress-differences or other non-Newtonian behaviour. Many salt solutions and molten polymers are non-Newtonian fluids, as are many commonly found substances such as ketchup, custard, toothpaste, starch suspensions, paint, blood, and shampoo. In a Newtonian fluid, the relation between the shear stress and the shear rate is linear, passing through the origin, the constant of proportionality being the coefficient of viscosity. In a non-Newtonian fluid, the relation between the shear stress and the shear rate is different, and can even be time-dependent. The study of the non-Newtonian fluids is usually called rheology. A few examples are given here.
==== Bingham fluid ====
In Bingham fluids, the situation is slightly different:
{\displaystyle {\frac {\partial u}{\partial y}}={\begin{cases}0,&\tau <\tau _{0}\\[5px]{\dfrac {\tau -\tau _{0}}{\mu }},&\tau \geq \tau _{0}\end{cases}}}
These are fluids capable of bearing some stress before they start flowing. Some common examples are toothpaste and clay.
==== Power-law fluid ====
A power law fluid is an idealised fluid for which the shear stress, τ, is given by
{\displaystyle \tau =K\left({\frac {\partial u}{\partial y}}\right)^{n}}
This form is useful for approximating all sorts of general fluids, including shear thinning (such as latex paint) and shear thickening (such as corn starch water mixture).
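The three constitutive laws discussed here can be compared side by side. The parameter values below are assumed for illustration (for the Bingham law, the stress–rate relation above is inverted to give stress as a function of shear rate in the flowing regime):

```python
import numpy as np

# Sketch: shear stress vs. shear rate for the three constitutive laws above,
# with illustrative (assumed) parameter values.
gamma = np.linspace(0.0, 10.0, 101)       # shear rate du/dy
mu, tau0, K, n = 0.5, 2.0, 0.5, 0.6

newtonian = mu * gamma                    # linear through the origin
bingham = np.where(gamma > 0, tau0 + mu * gamma, 0.0)  # yield stress tau0
power_law = K * gamma ** n                # n < 1: shear thinning

print(newtonian[-1])                      # 5.0
```

Plotting these three curves shows the defining differences: the Newtonian line passes through the origin, the Bingham line has a stress offset τ₀, and the power-law curve bends (downward for n < 1, upward for n > 1).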
== Stream function formulation ==
In the analysis of a flow, it is often desirable to reduce the number of equations and/or the number of variables. The incompressible Navier–Stokes equation with mass continuity (four equations in four unknowns) can be reduced to a single equation with a single dependent variable in 2D, or one vector equation in 3D. This is enabled by two vector calculus identities:
{\displaystyle {\begin{aligned}\nabla \times (\nabla \phi )&=0\\\nabla \cdot (\nabla \times \mathbf {A} )&=0\end{aligned}}}
for any differentiable scalar φ and vector A. The first identity implies that any term in the Navier–Stokes equation that may be represented as the gradient of a scalar will disappear when the curl of the equation is taken. Commonly, pressure p and external acceleration g will be eliminated, resulting in (this is true in 2D as well as 3D):
{\displaystyle \nabla \times \left({\frac {\partial \mathbf {u} }{\partial t}}+\mathbf {u} \cdot \nabla \mathbf {u} \right)=\nu \nabla \times \left(\nabla ^{2}\mathbf {u} \right)}
where it is assumed that all body forces are describable as gradients (for example it is true for gravity), and density has been divided so that viscosity becomes kinematic viscosity.
The second vector calculus identity above states that the divergence of the curl of a vector field is zero. Since the (incompressible) mass continuity equation specifies the divergence of flow velocity being zero, we can replace the flow velocity with the curl of some vector ψ so that mass continuity is always satisfied:
{\displaystyle \nabla \cdot \mathbf {u} =0\quad \Rightarrow \quad \nabla \cdot (\nabla \times {\boldsymbol {\psi }})=0\quad \Rightarrow \quad 0=0}
So, as long as flow velocity is represented through u = ∇ × ψ, mass continuity is unconditionally satisfied. With this new dependent vector variable, the Navier–Stokes equation (with curl taken as above) becomes a single fourth order vector equation, no longer containing the unknown pressure variable and no longer dependent on a separate mass continuity equation:
{\displaystyle \nabla \times \left({\frac {\partial }{\partial t}}(\nabla \times {\boldsymbol {\psi }})+(\nabla \times {\boldsymbol {\psi }})\cdot \nabla (\nabla \times {\boldsymbol {\psi }})\right)=\nu \nabla \times \left(\nabla ^{2}(\nabla \times {\boldsymbol {\psi }})\right)}
Apart from containing fourth order derivatives, this equation is fairly complicated, and is thus uncommon. Note that if the cross differentiation is left out, the result is a third order vector equation containing an unknown vector field (the gradient of pressure) that may be determined from the same boundary conditions that one would apply to the fourth order equation above.
=== 2D flow in orthogonal coordinates ===
The true utility of this formulation is seen when the flow is two-dimensional in nature and the equation is written in a general orthogonal coordinate system, in other words a system where the basis vectors are orthogonal. Note that this by no means limits application to Cartesian coordinates; in fact most of the common coordinate systems are orthogonal, including familiar ones like cylindrical and obscure ones like toroidal.
The 3D flow velocity is expressed as (note that the discussion has been coordinate-free up to this point):
{\displaystyle \mathbf {u} =u_{1}\mathbf {e} _{1}+u_{2}\mathbf {e} _{2}+u_{3}\mathbf {e} _{3}}
where ei are basis vectors, not necessarily constant and not necessarily normalized, and ui are flow velocity components; let also the coordinates of space be (x1, x2, x3).
Now suppose that the flow is 2D. This does not mean the flow is in a plane, rather it means that the component of flow velocity in one direction is zero and the remaining components are independent of the same direction. In that case (take component 3 to be zero):
{\displaystyle \mathbf {u} =u_{1}\mathbf {e} _{1}+u_{2}\mathbf {e} _{2};\qquad {\frac {\partial u_{1}}{\partial x_{3}}}={\frac {\partial u_{2}}{\partial x_{3}}}=0}
The vector function ψ is still defined via:
{\displaystyle \mathbf {u} =\nabla \times {\boldsymbol {\psi }}}
but this must simplify in some way also since the flow is assumed 2D. If orthogonal coordinates are assumed, the curl takes on a fairly simple form, and the equation above expanded becomes:
{\displaystyle u_{1}\mathbf {e} _{1}+u_{2}\mathbf {e} _{2}={\frac {\mathbf {e} _{1}}{h_{2}h_{3}}}\left[{\frac {\partial }{\partial x_{2}}}\left(h_{3}\psi _{3}\right)-{\frac {\partial }{\partial x_{3}}}\left(h_{2}\psi _{2}\right)\right]+{\frac {\mathbf {e} _{2}}{h_{3}h_{1}}}\left[{\frac {\partial }{\partial x_{3}}}\left(h_{1}\psi _{1}\right)-{\frac {\partial }{\partial x_{1}}}\left(h_{3}\psi _{3}\right)\right]+{\frac {\mathbf {e} _{3}}{h_{1}h_{2}}}\left[{\frac {\partial }{\partial x_{1}}}\left(h_{2}\psi _{2}\right)-{\frac {\partial }{\partial x_{2}}}\left(h_{1}\psi _{1}\right)\right]}
Examining this equation shows that we can set ψ1 = ψ2 = 0 and retain equality with no loss of generality, so that:
{\displaystyle u_{1}\mathbf {e} _{1}+u_{2}\mathbf {e} _{2}={\frac {\mathbf {e} _{1}}{h_{2}h_{3}}}{\frac {\partial }{\partial x_{2}}}\left(h_{3}\psi _{3}\right)-{\frac {\mathbf {e} _{2}}{h_{3}h_{1}}}{\frac {\partial }{\partial x_{1}}}\left(h_{3}\psi _{3}\right)}
The significance here is that only one component of ψ remains, so that 2D flow becomes a problem with only one dependent variable. The cross-differentiated Navier–Stokes equation becomes two 0 = 0 equations and one meaningful equation.
The remaining component ψ3 = ψ is called the stream function. The equation for ψ can simplify since a variety of quantities will now equal zero, for example:
{\displaystyle \nabla \cdot {\boldsymbol {\psi }}={\frac {1}{h_{1}h_{2}h_{3}}}{\frac {\partial }{\partial x_{3}}}\left(\psi h_{1}h_{2}\right)=0}
if the scale factors h1 and h2 also are independent of x3. Also, from the definition of the vector Laplacian
{\displaystyle \nabla \times (\nabla \times {\boldsymbol {\psi }})=\nabla (\nabla \cdot {\boldsymbol {\psi }})-\nabla ^{2}{\boldsymbol {\psi }}=-\nabla ^{2}{\boldsymbol {\psi }}}
Manipulating the cross differentiated Navier–Stokes equation using the above two equations and a variety of identities will eventually yield the 1D scalar equation for the stream function:
{\displaystyle {\frac {\partial }{\partial t}}\left(\nabla ^{2}\psi \right)+(\nabla \times {\boldsymbol {\psi }})\cdot \nabla \left(\nabla ^{2}\psi \right)=\nu \nabla ^{4}\psi }
where ∇4 is the biharmonic operator. This is very useful because it is a single self-contained scalar equation that describes both momentum and mass conservation in 2D. The only other equations that this partial differential equation needs are initial and boundary conditions.
The assumptions for the stream function equation are:
The flow is incompressible and Newtonian.
Coordinates are orthogonal.
Flow is 2D: u3 = ∂u1/∂x3 = ∂u2/∂x3 = 0
The first two scale factors of the coordinate system are independent of the last coordinate: ∂h1/∂x3 = ∂h2/∂x3 = 0, otherwise extra terms appear.
The stream function has some useful properties:
Since −∇2ψ = ∇ × (∇ × ψ) = ∇ × u, the vorticity of the flow is just the negative of the Laplacian of the stream function.
The level curves of the stream function are streamlines.
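Both properties can be demonstrated in 2-D Cartesian coordinates, where u = ∂ψ/∂y and v = −∂ψ/∂x. The stream function below is assumed for illustration:

```python
import numpy as np

# Sketch: in 2-D Cartesian coordinates, derive u = d(psi)/dy, v = -d(psi)/dx
# from a stream function and confirm the two properties above: the velocity
# field is divergence-free, and the vorticity equals -laplacian(psi).
x = np.linspace(0, 2 * np.pi, 257)
y = np.linspace(0, 2 * np.pi, 257)
dx, dy = x[1] - x[0], y[1] - y[0]
X, Y = np.meshgrid(x, y, indexing='ij')

psi = np.sin(X) * np.sin(Y)              # an assumed, illustrative stream function

u = np.gradient(psi, dy, axis=1)         # u = d(psi)/dy
v = -np.gradient(psi, dx, axis=0)        # v = -d(psi)/dx

div = np.gradient(u, dx, axis=0) + np.gradient(v, dy, axis=1)
vort = np.gradient(v, dx, axis=0) - np.gradient(u, dy, axis=1)
lap_psi = (np.gradient(np.gradient(psi, dx, axis=0), dx, axis=0)
           + np.gradient(np.gradient(psi, dy, axis=1), dy, axis=1))

interior = (slice(2, -2), slice(2, -2))
print(np.max(np.abs(div[interior])))               # ~0 (mixed partials commute)
print(np.max(np.abs((vort + lap_psi)[interior])))  # ~0 (same discrete stencils)
```

Both quantities vanish to round-off here because the discrete cross-derivatives commute and the same stencils build vorticity and the Laplacian; with independent discretizations one would instead see a small truncation residual.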
== The stress tensor ==
The derivation of the Navier–Stokes equation involves the consideration of forces acting on fluid elements, so that a quantity called the stress tensor appears naturally in the Cauchy momentum equation. Since the divergence of this tensor is taken, it is customary to write out the equation fully simplified, so that the original appearance of the stress tensor is lost.
However, the stress tensor still has some important uses, especially in formulating boundary conditions at fluid interfaces. Recalling that σ = −pI + τ, for a Newtonian fluid the stress tensor is:
{\displaystyle \sigma _{ij}=-p\delta _{ij}+\mu \left({\frac {\partial u_{i}}{\partial x_{j}}}+{\frac {\partial u_{j}}{\partial x_{i}}}\right)+\delta _{ij}\lambda \nabla \cdot \mathbf {u} .}
If the fluid is assumed to be incompressible, the tensor simplifies significantly. In 3D Cartesian coordinates for example:
{\displaystyle {\begin{aligned}{\boldsymbol {\sigma }}&=-{\begin{pmatrix}p&0&0\\0&p&0\\0&0&p\end{pmatrix}}+\mu {\begin{pmatrix}2\displaystyle {\frac {\partial u}{\partial x}}&\displaystyle {{\frac {\partial u}{\partial y}}+{\frac {\partial v}{\partial x}}}&\displaystyle {{\frac {\partial u}{\partial z}}+{\frac {\partial w}{\partial x}}}\\\displaystyle {{\frac {\partial v}{\partial x}}+{\frac {\partial u}{\partial y}}}&2\displaystyle {\frac {\partial v}{\partial y}}&\displaystyle {{\frac {\partial v}{\partial z}}+{\frac {\partial w}{\partial y}}}\\\displaystyle {{\frac {\partial w}{\partial x}}+{\frac {\partial u}{\partial z}}}&\displaystyle {{\frac {\partial w}{\partial y}}+{\frac {\partial v}{\partial z}}}&2\displaystyle {\frac {\partial w}{\partial z}}\end{pmatrix}}\\[6px]&=-p\mathbf {I} +\mu \left(\nabla \mathbf {u} +\left(\nabla \mathbf {u} \right)^{\mathsf {T}}\right)\\[6px]&=-p\mathbf {I} +2\mu \mathbf {e} \end{aligned}}}
e is the strain rate tensor, by definition:
{\displaystyle e_{ij}={\frac {1}{2}}\left({\frac {\partial u_{i}}{\partial x_{j}}}+{\frac {\partial u_{j}}{\partial x_{i}}}\right).}
== See also ==
Derivation of Navier–Stokes equation from discrete LBE
First law of thermodynamics (fluid mechanics)
== References ==
Batchelor, G. K. (2000). An Introduction to Fluid Dynamics. New York: Cambridge University Press. ISBN 978-0-521-66396-0.
White, Frank M. (2006). Viscous Fluid Flow (3rd ed.). New York: McGraw Hill. ISBN 0-07-240231-8.
Surface Tension Module Archived 2007-10-27 at the Wayback Machine, by John W. M. Bush, at MIT OCW
Galdi, An Introduction to the Mathematical Theory of the Navier–Stokes Equations: Steady-State Problems. Springer 2011.
The pressure-correction method is a class of methods used in computational fluid dynamics for numerically solving the Navier–Stokes equations, normally for incompressible flows.
== Common properties ==
The equations solved in this approach arise from the implicit time integration of the incompressible Navier–Stokes equations.
Due to the non-linearity of the convective term in the momentum equation written above, this problem is solved with a nested-loop approach. The so-called global or outer iterations represent the real time-steps and are used to update the variables
{\displaystyle \mathbf {v} }
and
{\displaystyle p}
based on a linearized system and the boundary conditions; there is also an inner loop for updating the coefficients of the linearized system.
The outer iterations comprise two steps:
Solve the momentum equation for a provisional velocity based on the velocity and pressure of the previous outer loop.
Plug the newly obtained velocity into the continuity equation to obtain a correction.
The correction for the velocity obtained from the second step, which for incompressible flow must satisfy the non-divergence criterion or continuity equation
{\displaystyle \nabla \cdot \mathbf {v} =0}
is computed by first calculating a residual value
m
˙
{\displaystyle {\dot {m}}}
, resulting from spurious mass flux, then using this mass imbalance to get a new pressure value. The pressure value that is attempted to compute, is such that when plugged into momentum equations a divergence-free velocity field results. The mass imbalance is often also used for control of the outer loop.
The name of this class of methods stems from the fact that the correction of the velocity field is computed through the pressure-field.
The discretization of this is typically done with either the finite element method or the finite volume method. With the latter, one might also encounter the dual mesh, i.e. the computational grid obtained by connecting the centers of the cells produced by the initial subdivision of the computational domain.
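The outer-loop correction step described above can be sketched numerically. The following is a minimal illustration, not any particular production scheme: on a doubly periodic grid, the divergence of a provisional velocity is computed spectrally, the pressure Poisson equation is solved with FFTs, and the pressure gradient is subtracted. The grid, the sample field, and the absorption of the time step into the pressure are all assumptions made for the example:

```python
import numpy as np

def pressure_correction(u, v, L=2 * np.pi):
    """Project a provisional velocity (u, v) onto its divergence-free part.

    Sketch of one correction step on a doubly periodic grid: solve the
    pressure Poisson equation lap(p) = div(u*) with FFTs, then subtract
    grad(p) from the provisional field (time step absorbed into p).
    """
    n = u.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                               # avoid 0/0 for the mean mode

    u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
    div_hat = 1j * kx * u_hat + 1j * ky * v_hat  # spectral mass imbalance
    p_hat = -div_hat / k2                        # lap(p) = div  =>  -k^2 p_hat = div_hat
    p_hat[0, 0] = 0.0                            # pressure fixed up to a constant

    u_new = np.real(np.fft.ifft2(u_hat - 1j * kx * p_hat))
    v_new = np.real(np.fft.ifft2(v_hat - 1j * ky * p_hat))
    return u_new, v_new

# A deliberately divergent sample field on [0, 2*pi)^2.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u, v = np.sin(X) + np.cos(Y), np.zeros_like(X)
u2, v2 = pressure_correction(u, v)

# The spectral divergence of the corrected field is machine zero.
k = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)
kx, ky = np.meshgrid(k, k, indexing="ij")
div2 = np.real(np.fft.ifft2(1j * kx * np.fft.fft2(u2) + 1j * ky * np.fft.fft2(v2)))
print(np.max(np.abs(div2)) < 1e-10)
```

In a real solver this projection sits inside the outer loop: the provisional velocity comes from the momentum equation, and the residual mass flux also serves as a convergence monitor.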
== Implicit split-update procedures ==
Another approach which is typically used in FEM is the following.
The aim of the correction step is to ensure conservation of mass. In continuous form, for compressible substances, conservation of mass is expressed by
{\displaystyle \nabla \cdot \left(\rho (\mathbf {x} )\mathbf {v} (\mathbf {x} )\right)=-{\frac {{\frac {d}{dt}}p(\mathbf {x} )}{c^{2}}}}
where {\displaystyle c^{2}} is the square of the "speed of sound". For low Mach numbers and incompressible media, {\displaystyle c} is assumed to be infinite, which is the reason for the above continuity equation to reduce to
{\displaystyle \nabla \cdot \mathbf {v} =0}
The way to obtain a velocity field satisfying the above is to compute a pressure which, when substituted into the momentum equation, leads to the desired correction of a preliminarily computed intermediate velocity.
Applying the divergence operator to the compressible momentum equation yields
{\displaystyle {\begin{aligned}\nabla \cdot \partial _{t}\mathbf {v} &=-\nabla \cdot (\mathbf {v} \cdot \nabla )\mathbf {v} +\nabla \cdot \nabla ^{2}\mathbf {v} -\nabla ^{2}p\\\partial _{t}\nabla \cdot \mathbf {v} &=-\nabla \cdot (\mathbf {v} \cdot \nabla )\mathbf {v} +\nabla ^{2}\nabla \cdot \mathbf {v} -\nabla ^{2}p\\0&=-\nabla \cdot (\mathbf {v} \cdot \nabla )\mathbf {v} -\nabla ^{2}p\\\nabla ^{2}p&=-\nabla \cdot (\mathbf {v} \cdot \nabla )\mathbf {v} &(\ast )\end{aligned}}}
{\displaystyle (\ast )} then provides the governing equation for pressure computation.
The idea of pressure-correction also exists in the case of variable density and high Mach numbers, although in this case there is a real physical meaning behind the coupling of dynamic pressure and velocity as arising from the continuity equation
{\displaystyle {\begin{aligned}\partial _{t}\rho &=-\nabla \cdot (\rho \mathbf {v} )\\\partial _{t}\rho &={\frac {1}{c^{2}}}\partial _{t}p\end{aligned}}}
With compressibility, {\displaystyle p} is still an additional variable that can be eliminated with algebraic operations, but its variability is not a pure artifice as in the incompressible case, and the methods for its computation differ significantly from those with {\displaystyle \rho ={\text{constant}}.}
== References ==
M. Thomadakis, M. Leschziner: A PRESSURE-CORRECTION METHOD FOR THE SOLUTION OF INCOMPRESSIBLE VISCOUS FLOWS ON UNSTRUCTURED GRIDS, Int. Journal for Numerical Meth. in Fluids, Vol. 22, 1996
A. Meister, J. Struckmeier: Hyperbolic Partial Differential Equations, 1st Edition, Vieweg, 2002
== External links ==
ISNaS – incompressible flow solver
Application of Temperature and/or Pressure Correction Factors in Gas Measurement
In fluid mechanics, non-dimensionalization of the Navier–Stokes equations is the conversion of the Navier–Stokes equation to a nondimensional form. This technique can ease the analysis of the problem at hand, and reduce the number of free parameters. Small or large sizes of certain dimensionless parameters indicate the importance of certain terms in the equations for the studied flow. This may provide possibilities to neglect terms in (certain areas of) the considered flow. Further, non-dimensionalized Navier–Stokes equations can be beneficial if one is posed with similar physical situations – that is problems where the only changes are those of the basic dimensions of the system.
Scaling of Navier–Stokes equation refers to the process of selecting the proper spatial scales – for a certain type of flow – to be used in the non-dimensionalization of the equation. Since the resulting equations need to be dimensionless, a suitable combination of parameters and constants of the equations and flow (domain) characteristics have to be found. As a result of this combination, the number of parameters to be analyzed is reduced and the results may be obtained in terms of the scaled variables.
== Need for non-dimensionalization and scaling ==
In addition to reducing the number of parameters, the non-dimensionalized equation helps to gain a greater insight into the relative size of the various terms present in the equation.
Following appropriate selection of scales for the non-dimensionalization process, this leads to the identification of small terms in the equation. Neglecting the smaller terms against the bigger ones allows for the simplification of the situation. For the case of flow without heat transfer, the non-dimensionalized Navier–Stokes equation depends only on the Reynolds number, and hence all physical realizations of the related experiment will have the same value of the non-dimensionalized variables for the same Reynolds number.
Scaling helps provide better understanding of the physical situation, with the variation in dimensions of the parameters involved in the equation. This allows for experiments to be conducted on smaller scale prototypes provided that any physical effects which are not included in the non-dimensionalized equation are unimportant.
== Non-dimensionalized Navier–Stokes equation ==
The incompressible Navier–Stokes momentum equation is written as:
{\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {u} =-{\frac {1}{\rho }}\nabla p+\nu \nabla ^{2}\mathbf {u} +\mathbf {g} .}
where ρ is the density, p is the pressure, ν is the kinematic viscosity, u is the flow velocity, and g is the body acceleration field.
The above equation can be non-dimensionalized through selection of appropriate scales as follows:
Substituting the scales, the non-dimensionalized equation obtained is:
where {\displaystyle Fr} is the Froude number and {\displaystyle Re} is the Reynolds number ({\displaystyle Re=UL/\nu }).
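As an illustration of these dimensionless groups, the snippet below evaluates Re and Fr for sample flow values; the numbers are assumptions chosen for the example (roughly water flowing past a half-metre body), not taken from the text:

```python
# Dimensionless groups for a sample flow (illustrative values).
U = 2.0       # characteristic velocity, m/s
L = 0.5       # characteristic length, m
nu = 1.0e-6   # kinematic viscosity of water, m^2/s
g = 9.81      # gravitational acceleration, m/s^2

Re = U * L / nu            # Reynolds number: inertial vs. viscous forces
Fr = U / (g * L) ** 0.5    # Froude number: inertial vs. gravitational forces

print(f"Re = {Re:.3g}")    # ~1e6: inertia-dominated, close to the Euler regime
print(f"Fr = {Fr:.3g}")
```

A large Re pushes the flow toward the Euler regime discussed below, while Re → 0 gives the Stokes regime.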
=== Flows with large viscosity ===
For flows where viscous forces are dominant i.e. slow flows with large viscosity, a viscous pressure scale μU/L is used. In the absence of a free surface, the equation obtained is
=== Stokes regime ===
Scaling of equation (1) can be done in a flow where the inertia term is smaller than the viscous term, i.e. when Re → 0; the inertia terms can then be neglected, leaving the equation of creeping motion:
{\displaystyle Re{\frac {\partial \mathbf {u^{*}} }{\partial t^{*}}}=-\nabla ^{*}p^{*}+\nabla ^{*2}\mathbf {u^{*}} .}
Such flows tend to have influence of viscous interaction over large distances from an object. At low Reynolds number the same equation reduces to a diffusion equation, named the Stokes equation:
{\displaystyle -\nabla ^{*}p^{*}+\nabla ^{*2}\mathbf {u^{*}} =\mathbf {0} .}
=== Euler regime ===
Similarly, if Re → ∞, i.e. when the inertial forces dominate, the viscous contribution can be neglected. The non-dimensionalized Euler equation for an inviscid flow is
{\displaystyle {\frac {\partial \mathbf {u^{*}} }{\partial t^{*}}}+(\mathbf {u^{*}} \cdot \nabla ^{*})\mathbf {u^{*}} =-\nabla ^{*}p^{*}.}
=== When density varies due to both concentration and temperature ===
Density variation due to both concentration and temperature is an important field of study in double diffusive convection. If density changes due to both temperature and salinity are taken into account, then some more terms are added to the Z-component of the momentum equation as follows:
{\displaystyle {\frac {\partial W}{\partial t}}+U{\frac {\partial W}{\partial X}}+W{\frac {\partial W}{\partial Z}}=-{\frac {1}{\rho _{o}}}{\frac {\partial p_{d}}{\partial Z}}+v\left({\frac {\partial ^{2}W}{\partial X^{2}}}+{\frac {\partial ^{2}W}{\partial Z^{2}}}\right)-g\left(\beta _{s}\nabla {S}-\beta _{T}\nabla {T}\right)}
where S is the salinity of the fluid, βT is the thermal expansion coefficient at constant pressure, and βS is the coefficient of saline expansion at constant pressure and temperature.
Non-dimensionalizing using the scales
{\displaystyle S^{*}={\frac {S-S_{B}}{S_{T}-S_{B}}}}
and
{\displaystyle T^{*}={\frac {T-T_{B}}{T_{T}-T_{B}}}}
we get
{\displaystyle {\frac {\partial W^{*}}{\partial t^{*}}}+U^{*}{\frac {\partial W^{*}}{\partial X^{*}}}+W^{*}{\frac {\partial W^{*}}{\partial Z^{*}}}=-{\frac {\partial p_{d}}{\partial Z^{*}}}+Pr\left({\frac {\partial ^{2}W^{*}}{\partial X^{*2}}}+{\frac {\partial ^{2}W^{*}}{\partial Z^{*2}}}\right)-{Ra_{s}Pr_{s}S}+{Ra_{T}Pr_{T}T}}
where ST, TT denote the salinity and temperature of the top layer, SB, TB denote the salinity and temperature of the bottom layer, Ra is the Rayleigh number, and Pr is the Prandtl number. The signs of RaS and RaT change depending on whether they stabilize or destabilize the system.
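The magnitudes of these dimensionless groups can be evaluated with their textbook definitions Ra = g·β·Δ·H³/(ν·κ) and Pr = ν/κ; these standard formulas and all the property values below are assumptions chosen for illustration (roughly seawater-like), not taken from the text:

```python
# Thermal and solutal Rayleigh numbers and the Prandtl number
# (illustrative, seawater-like property values).
g = 9.81          # gravitational acceleration, m/s^2
H = 0.1           # layer depth, m
nu = 1.0e-6       # kinematic viscosity, m^2/s
kappa_T = 1.4e-7  # thermal diffusivity, m^2/s
kappa_S = 1.4e-9  # salt diffusivity, m^2/s (~100x smaller: the basis of double diffusion)
beta_T = 2.0e-4   # thermal expansion coefficient, 1/K
beta_S = 7.6e-4   # saline contraction coefficient, 1/(g/kg)
dT, dS = 1.0, 0.5 # top-to-bottom temperature (K) and salinity (g/kg) differences

Ra_T = g * beta_T * dT * H**3 / (nu * kappa_T)
Ra_S = g * beta_S * dS * H**3 / (nu * kappa_S)
Pr = nu / kappa_T

print(f"Ra_T = {Ra_T:.3g}, Ra_S = {Ra_S:.3g}, Pr = {Pr:.3g}")
```

The hundredfold gap between the thermal and salt diffusivities is what makes the two buoyancy terms act on different time scales.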
== References ==
=== Footnotes ===
=== Other ===
Y. Cengel and J. Cimbala, FLUID MECHANICS: Fundamentals and Applications, 4th Edition, McGraw-Hill Education, 2018 (see p521, section 10.2. Nondimensionalized Equations of Motion).
== Further reading ==
The aircraft design process is a loosely defined method used to balance many competing and demanding requirements to produce an aircraft that is strong, lightweight, economical and can carry an adequate payload while being sufficiently reliable to safely fly for the design life of the aircraft. Similar to, but more exacting than, the usual engineering design process, the technique is highly iterative, involving high-level configuration tradeoffs, a mixture of analysis and testing and the detailed examination of the adequacy of every part of the structure. For some types of aircraft, the design process is regulated by civil airworthiness authorities.
This article deals with powered aircraft such as airplane and helicopter designs.
== Design constraints ==
=== Purpose ===
The design process starts with the aircraft's intended purpose. Commercial airliners are designed for carrying a passenger or cargo payload over long range with high fuel efficiency, whereas fighter jets are designed to perform high-speed maneuvers and provide close support to ground troops. Some aircraft have specific missions; for instance, amphibious airplanes have a unique design that allows them to operate from both land and water, some fighters, like the Harrier jump jet, have VTOL (vertical take-off and landing) ability, and helicopters can hover over an area for a period of time.
The purpose may be to fit a specific requirement, e.g. as in the historical case of a British Air Ministry specification, or fill a perceived "gap in the market"; that is, a class or design of aircraft which does not yet exist, but for which there would be significant demand.
=== Aircraft regulations ===
Another important factor that influences the design are the requirements for obtaining a type certificate for a new design of aircraft. These requirements are published by major national airworthiness authorities including the US Federal Aviation Administration and the European Aviation Safety Agency.
Airports may also impose limits on aircraft, for instance, the maximum wingspan allowed for a conventional aircraft is 80 metres (260 ft) to prevent collisions between aircraft while taxiing.
=== Financial factors and market ===
Budget limitations, market requirements and competition set constraints on the design process and comprise the non-technical influences on aircraft design along with environmental factors. Competition leads to companies striving for better efficiency in the design without compromising performance and incorporating new techniques and technology.
In the 1950s and '60s, unattainable project goals were regularly set, but then abandoned, whereas today troubled programs like the Boeing 787 and the Lockheed Martin F-35 have proven far more costly and complex to develop than expected.
More advanced and integrated design tools have been developed. Model-based systems engineering predicts potentially problematic interactions, while computational analysis and optimization allows designers to explore more options early in the process. Increasing automation in engineering and manufacturing allows faster and cheaper development.
Technology advances from materials to manufacturing enable more complex design variations like multifunction parts. Once impossible to design or construct, these can now be 3D printed, but they have yet to prove their utility in applications like the Northrop Grumman B-21 or the re-engined A320neo and 737 MAX. Airbus and Boeing also recognize the economic limits, that the next airliner generation cannot cost more than the previous ones did.
=== Environmental factors ===
An increase in the number of aircraft also means greater carbon emissions. Environmental scientists have voiced concern over the main kinds of pollution associated with aircraft, mainly noise and emissions. Aircraft engines have been historically notorious for creating noise pollution, and the expansion of airways over already congested and polluted cities has drawn heavy criticism, making it necessary to have environmental policies for aircraft noise. Noise also arises from the airframe, where the airflow directions are changed. Improved noise regulations have forced designers to create quieter engines and airframes. Emissions from aircraft include particulates, carbon dioxide (CO2), sulfur dioxide (SO2), carbon monoxide (CO), various oxides of nitrogen and unburnt hydrocarbons. To combat the pollution, ICAO set recommendations in 1981 to control aircraft emissions. Newer, environmentally friendly fuels have been developed, and the use of recyclable materials in manufacturing has helped reduce the ecological impact of aircraft. Environmental limitations also affect airfield compatibility. Airports around the world have been built to suit the topography of the particular region. Space limitations, pavement design, runway end safety areas and the unique location of each airport are some of the airport factors that influence aircraft design. However, changes in aircraft design also influence airfield design; for instance, the recent introduction of new large aircraft (NLAs) such as the superjumbo Airbus A380 has led airports worldwide to redesign their facilities to accommodate its large size and service requirements.
=== Safety ===
The high speeds, fuel tanks, atmospheric conditions at cruise altitudes, natural hazards (thunderstorms, hail and bird strikes) and human error are some of the many hazards that pose a threat to air travel.
Airworthiness is the standard by which aircraft are determined fit to fly. The responsibility for airworthiness lies with the national civil aviation regulatory bodies, manufacturers, as well as owners and operators.
The International Civil Aviation Organization sets international standards and recommended practices on which national authorities should base their regulations. The national regulatory authorities set standards for airworthiness, issue certificates to manufacturers and operators and the standards of personnel training. Every country has its own regulatory body such as the Federal Aviation Administration in USA, DGCA (Directorate General of Civil Aviation) in India, etc.
The aircraft manufacturer makes sure that the aircraft meets existing design standards, defines the operating limitations and maintenance schedules and provides support and maintenance throughout the operational life of the aircraft. The aviation operators include the passenger and cargo airliners, air forces and owners of private aircraft. They agree to comply with the regulations set by the regulatory bodies, understand the limitations of the aircraft as specified by the manufacturer, report defects and assist the manufacturers in keeping up the airworthiness standards.
Much of present-day design criticism is built on crashworthiness. Even with the greatest attention to airworthiness, accidents still occur. Crashworthiness is the qualitative evaluation of how aircraft survive an accident. The main objective is to protect the passengers or valuable cargo from the damage caused by an accident. In the case of airliners the stressed skin of the pressurized fuselage provides this feature, but in the event of a nose or tail impact, large bending moments build all the way through the fuselage, causing fractures in the shell and causing the fuselage to break up into smaller sections. So passenger aircraft are designed in such a way that seating arrangements are away from areas likely to be intruded in an accident, such as near a propeller, engine nacelle or undercarriage. The interior of the cabin is also fitted with safety features such as oxygen masks that drop down in the event of loss of cabin pressure, lockable luggage compartments, safety belts, lifejackets, emergency doors and luminous floor strips. Aircraft are sometimes designed with emergency water landing in mind; for instance, the Airbus A330 has a 'ditching' switch that closes valves and openings beneath the aircraft, slowing the ingress of water.
== Design optimization ==
Aircraft designers normally rough-out the initial design with consideration of all the constraints on their design. Historically, design teams used to be small, usually headed by a chief designer who knew all the design requirements and objectives and coordinated the team accordingly. As time progressed, the complexity of military and airline aircraft also grew. Modern military and airline design projects are of such a large scale that every design aspect is tackled by different teams and then brought together. In general aviation a large number of light aircraft are designed and built by amateur hobbyists and enthusiasts.
== Computer-aided design of aircraft ==
In the early years of aircraft design, designers generally used analytical theory to do the various engineering calculations that go into the design process along with a lot of experimentation. These calculations were labour-intensive and time-consuming. In the 1940s, several engineers started looking for ways to automate and simplify the calculation process and many relations and semi-empirical formulas were developed. Even after simplification, the calculations continued to be extensive. With the invention of the computer, engineers realized that a majority of the calculations could be automated, but the lack of design visualization and the huge amount of experimentation involved kept the field of aircraft design stagnant. With the rise of programming languages, engineers could now write programs that were tailored to design an aircraft. Originally this was done with mainframe computers and used low-level programming languages that required the user to be fluent in the language and know the architecture of the computer. With the introduction of personal computers, design programs began employing a more user-friendly approach.
== Design aspects ==
The main aspects of aircraft design are:
Aerodynamics
Propulsion
Controls
Mass
Structure
All aircraft designs involve compromises of these factors to achieve the design mission.
=== Wing design ===
The wing of a fixed-wing aircraft provides the lift necessary for flight. Wing geometry affects every aspect of an aircraft's flight. The wing area will usually be dictated by the desired stalling speed but the overall shape of the planform and other detail aspects may be influenced by wing layout factors. The wing can be mounted to the fuselage in high, low and middle positions. The wing design depends on many parameters such as selection of aspect ratio, taper ratio, sweepback angle, thickness ratio, section profile, washout and dihedral. The cross-sectional shape of the wing is its airfoil. The construction of the wing starts with the rib which defines the airfoil shape. Ribs can be made of wood, metal, plastic or even composites.
The wing must be designed and tested to ensure it can withstand the maximum loads imposed by maneuvering, and by atmospheric gusts.
=== Fuselage ===
The fuselage is the part of the aircraft that contains the cockpit, passenger cabin or cargo hold.
=== Empennage ===
=== Propulsion ===
Aircraft propulsion may be achieved by specially designed aircraft engines, adapted auto, motorcycle or snowmobile engines, electric engines or even human muscle power. The main parameters of engine design are:
Maximum engine thrust available
Fuel consumption
Engine mass
Engine geometry
The thrust provided by the engine must balance the drag at cruise speed and be greater than the drag to allow acceleration. The engine requirement varies with the type of aircraft. For instance, commercial airliners spend more time at cruise speed and need more engine efficiency. High-performance fighter jets need very high acceleration and therefore have very high thrust requirements.
=== Landing gear ===
=== Weight ===
The weight of the aircraft is the common factor that links all aspects of aircraft design such as aerodynamics, structure, and propulsion, all together. An aircraft's weight is derived from various factors such as empty weight, payload, useful load, etc. The various weights are used to then calculate the center of mass of the entire aircraft. The center of mass must fit within the established limits set by the manufacturer.
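The center-of-mass calculation described above reduces to a weighted average of component positions. The sketch below uses invented component weights and arms together with a hypothetical CG envelope; none of the numbers correspond to a real aircraft:

```python
# Weight-and-balance sketch: CG = sum(w_i * arm_i) / sum(w_i).
# All weights, arms, and limits are invented for illustration.
components = [
    # (name, weight in kg, arm in m aft of the reference datum)
    ("empty aircraft", 600.0, 2.00),
    ("pilot",           80.0, 1.80),
    ("passenger",       75.0, 2.60),
    ("fuel",           100.0, 2.20),
    ("baggage",         20.0, 3.10),
]

total_weight = sum(w for _, w, _ in components)
cg = sum(w * arm for _, w, arm in components) / total_weight

# Hypothetical manufacturer's CG envelope.
fwd_limit, aft_limit = 1.90, 2.25
print(f"weight = {total_weight:.0f} kg, CG = {cg:.3f} m aft of datum")
print(fwd_limit <= cg <= aft_limit)
```

In practice the check is repeated for every loading case, since burning fuel or moving payload shifts the CG in flight.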
=== Structure ===
The aircraft structure focuses not only on strength, aeroelasticity, durability, damage tolerance, stability, but also on fail-safety, corrosion resistance, maintainability and ease of manufacturing. The structure must be able to withstand the stresses caused by cabin pressurization, if fitted, turbulence and engine or rotor vibrations.
== Design process and simulation ==
The design of any aircraft starts out in three phases
=== Conceptual design ===
Aircraft conceptual design involves sketching a variety of possible configurations that meet the required design specifications. By drawing a set of configurations, designers seek to reach the design configuration that satisfactorily meets all requirements and works in harmony with factors such as aerodynamics, propulsion, flight performance, and structural and control systems. This is called design optimization. Fundamental aspects such as fuselage shape, wing configuration and location, and engine size and type are all determined at this stage. Constraints to design, like those mentioned above, are all taken into account at this stage as well. The final product is a conceptual layout of the aircraft configuration on paper or computer screen, to be reviewed by engineers and other designers.
=== Preliminary design phase ===
The design configuration arrived at in the conceptual design phase is then tweaked and remodeled to fit into the design parameters. In this phase, wind tunnel testing and computational fluid dynamic calculations of the flow field around the aircraft are done. Major structural and control analysis is also carried out in this phase. Aerodynamic flaws and structural instabilities, if any, are corrected and the final design is drawn and finalized. After the finalization of the design, the key decision lies with the manufacturer or individual designing it whether to actually go ahead with production of the aircraft. At this point several designs, though perfectly capable of flight and performance, might have been opted out of production due to their being economically nonviable.
=== Detail design phase ===
This phase simply deals with the fabrication aspect of the aircraft to be manufactured. It determines the number, design and location of ribs, spars, sections and other structural elements. All aerodynamic, structural, propulsion, control and performance aspects have already been covered in the preliminary design phase and only the manufacturing remains. Flight simulators for aircraft are also developed at this stage.
=== Delays ===
Some commercial aircraft have experienced significant schedule delays and cost overruns in the development phase. Examples of this include the Boeing 787 Dreamliner with a delay of 4 years with massive cost overruns, the Boeing 747-8 with a two-year delay, the Airbus A380 with a two-year delay and US$6.1 billion in cost overruns, the Airbus A350 with delays and cost overruns, the Bombardier C Series, Global 7000 and 8000, the Comac C919 with a four-year delay and the Mitsubishi Regional Jet, which was delayed by four years and ended up with empty weight issues.
== Program development ==
An existing aircraft program can be developed for performance and economy gains by stretching the fuselage, increasing the MTOW, enhancing the aerodynamics, installing new engines, new wings or new avionics.
For a 9,100 nmi long range at Mach 0.8/FL360, a 10% lower TSFC saves 13% of fuel, a 10% L/D increase saves 12%, a 10% lower OEW saves 6% and all combined saves 28%.
=== Re-engine ===
=== Fuselage stretch ===
== See also ==
Index of aviation articles
Aerospace engineering
Aircraft manufacturer
Iron bird (aviation)
== References ==
== External links ==
Egbert Torenbeek (1976), Synthesis of Subsonic Airplane Design, Delft University Press
Antonio Filippone (2000), "Data and performances of selected aircraft and rotorcraft", Progress in Aerospace Sciences, 36 (8), Elsevier: 629–654, Bibcode:2000PrAeS..36..629F, CiteSeerX 10.1.1.539.1597, doi:10.1016/S0376-0421(00)00011-7
"Aircraft Design: Synthesis and Analysis". Desktop Aeronautics, Inc. 2001.
Dennis F. Shanahan (8 Mar 2005). "Basic principles of Crashworthiness" (PDF). NATO.
M. Nila; D. Scholz (2010). "From preliminary aircraft cabin design to cabin optimization" (PDF). Deutscher Luft- und Raumfahrtkongress – via Hamburg University of Applied Sciences.
"Airman". Nonresident Training Courses. U.S. Navy. December 2012. Archived from the original on October 26, 2017.
"chapter 4: Aircraft Basic Construction" (PDF). Archived from the original (PDF) on December 28, 2016.
Guy Norris (Mar 10, 2014). "Boeing's 'Wonder Wall'". Aviation Week Network.
Dieter Scholz (9 July 2018). "Aircraft Design - an Open Educational Resource". Hamburg Open Online University.
=== Re-engine ===
Thomas C. Hayes (November 27, 1981). "BOEING'S 'RE-ENGINING' WORRY". NY Times.
Oliver Wyman (December 2010). "To Re-Engine or Not to Re-Engine: That is the Question". Aviation Week Network.
In continuum mechanics, the strain-rate tensor or rate-of-strain tensor is a physical quantity that describes the rate of change of the strain (i.e., the relative deformation) of a material in the neighborhood of a certain point, at a certain moment of time. It can be defined as the derivative of the strain tensor with respect to time, or as the symmetric component of the Jacobian matrix (derivative with respect to position) of the flow velocity. In fluid mechanics it also can be described as the velocity gradient, a measure of how the velocity of a fluid changes between different points within the fluid. Though the term can refer to a velocity profile (variation in velocity across layers of flow in a pipe), it is often used to mean the gradient of a flow's velocity with respect to its coordinates. The concept has implications in a variety of areas of physics and engineering, including magnetohydrodynamics, mining and water treatment.
The strain rate tensor is a purely kinematic concept that describes the macroscopic motion of the material. Therefore, it does not depend on the nature of the material, or on the forces and stresses that may be acting on it; and it applies to any continuous medium, whether solid, liquid or gas.
On the other hand, for any fluid except superfluids, any gradual change in its deformation (i.e. a non-zero strain rate tensor) gives rise to viscous forces in its interior, due to friction between adjacent fluid elements, that tend to oppose that change. At any point in the fluid, these stresses can be described by a viscous stress tensor that is, almost always, completely determined by the strain rate tensor and by certain intrinsic properties of the fluid at that point. Viscous stresses also occur in solids, in addition to the elastic stress observed in static deformation; when it is too large to be ignored, the material is said to be viscoelastic.
== Dimensional analysis ==
By performing dimensional analysis, the dimensions of the velocity gradient can be determined. The dimensions of velocity are {\displaystyle {\mathsf {L^{1}T^{-1}}}}, and the dimensions of distance are {\displaystyle {\mathsf {L^{1}}}}. Since the velocity gradient can be expressed as {\displaystyle {\frac {\Delta {\text{velocity}}}{\Delta {\text{distance}}}}}, it has the same dimensions as this ratio, i.e. {\displaystyle {\mathsf {T^{-1}}}}.
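The bookkeeping above can be mimicked with simple exponent arithmetic over the base dimensions; the dictionary representation here is just an illustrative convention, not a standard library feature:

```python
# Dimension bookkeeping as exponent dictionaries over base dimensions
# L (length) and T (time); a minimal sketch.
from collections import Counter

def divide(a, b):
    """Dimensions of a quantity with dimensions a divided by one with dimensions b."""
    out = Counter(a)
    out.subtract(b)               # subtract exponents: [a/b] = a_i - b_i
    return {k: v for k, v in out.items() if v != 0}

velocity = {"L": 1, "T": -1}      # m/s
distance = {"L": 1}               # m

velocity_gradient = divide(velocity, distance)
print(velocity_gradient)          # {'T': -1}, i.e. an inverse time (s^-1)
```

The length exponents cancel exactly, leaving the inverse-time dimension derived in the text.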
== In continuum mechanics ==
In 3 dimensions, the gradient {\displaystyle \nabla \mathbf {v} } of the velocity {\displaystyle \mathbf {v} } is a second-order tensor which can be expressed as the matrix {\displaystyle \mathbf {L} }:
{\displaystyle \mathbf {L} =\nabla \mathbf {v} ={\begin{bmatrix}{\frac {\partial v_{x}}{\partial x}}&{\frac {\partial v_{y}}{\partial x}}&{\frac {\partial v_{z}}{\partial x}}\\{\frac {\partial v_{x}}{\partial y}}&{\frac {\partial v_{y}}{\partial y}}&{\frac {\partial v_{z}}{\partial y}}\\{\frac {\partial v_{x}}{\partial z}}&{\frac {\partial v_{y}}{\partial z}}&{\frac {\partial v_{z}}{\partial z}}\end{bmatrix}}}
{\displaystyle \mathbf {L} } can be decomposed into the sum of a symmetric matrix {\displaystyle \mathbf {E} } and a skew-symmetric matrix {\displaystyle \mathbf {W} } as follows:
{\displaystyle {\begin{aligned}\mathbf {E} &={\frac {1}{2}}\left(\mathbf {L} +\mathbf {L} ^{\textsf {T}}\right)\\\mathbf {W} &={\frac {1}{2}}\left(\mathbf {L} -\mathbf {L} ^{\textsf {T}}\right)\end{aligned}}}
{\displaystyle \mathbf {E} } is called the strain rate tensor and describes the rate of stretching and shearing. {\displaystyle \mathbf {W} } is called the spin tensor and describes the rate of rotation.
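This decomposition is easy to verify numerically. The sketch below applies it to a sample velocity-gradient matrix whose entries are purely illustrative:

```python
import numpy as np

# A sample velocity-gradient matrix (illustrative entries: a uniform
# stretch along x balanced by a contraction along y, plus a shear).
L = np.array([[0.1, 2.0, 0.0],
              [0.0, -0.1, 0.0],
              [0.0, 0.0, 0.0]])

E = 0.5 * (L + L.T)   # strain-rate tensor: rate of stretching and shearing
W = 0.5 * (L - L.T)   # spin tensor: rate of rigid-body rotation

assert np.allclose(L, E + W)   # the decomposition is exact
assert np.allclose(E, E.T)     # E is symmetric
assert np.allclose(W, -W.T)    # W is skew-symmetric

# trace(E) = trace(L) = div(v); it vanishes here (incompressible example).
print(np.trace(E))
```

Note that the trace of E equals the divergence of the velocity field, so a traceless E corresponds to volume-preserving deformation.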
== Relationship between shear stress and the velocity field ==
Sir Isaac Newton proposed that shear stress is directly proportional to the velocity gradient:
{\displaystyle \tau =\mu {\frac {\partial u}{\partial y}}.}
The constant of proportionality, {\displaystyle \mu }, is called the dynamic viscosity.
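For example, in plane Couette flow a fluid layer of thickness h sheared by a plate moving at speed U has the linear profile u(y) = U·y/h, so ∂u/∂y = U/h everywhere. The fluid properties below are assumed, roughly water-like values chosen for illustration:

```python
# Newton's law of viscosity, tau = mu * du/dy, for plane Couette flow
# (illustrative, roughly water-like values).
mu = 1.0e-3   # dynamic viscosity, Pa*s
U = 0.5       # plate speed, m/s
h = 1.0e-3    # gap width, m

dudy = U / h          # constant velocity gradient, 1/s
tau = mu * dudy       # shear stress on the plate, Pa
print(tau)
```

The stress is uniform across the gap precisely because the gradient is constant, which is what makes Couette flow the standard setting for defining viscosity.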
== Formal definition ==
Consider a material body, solid or fluid, that is flowing and/or moving in space. Let v be the velocity field within the body; that is, a smooth function from R3 × R to R3 such that v(p, t) is the macroscopic velocity of the material that is passing through the point p at time t.
The velocity v(p + r, t) at a point displaced from p by a small vector r can be written as a Taylor series:
{\displaystyle \mathbf {v} (\mathbf {p} +\mathbf {r} ,t)=\mathbf {v} (\mathbf {p} ,t)+(\nabla \mathbf {v} )(\mathbf {p} ,t)(\mathbf {r} )+{\text{higher-order terms}},}
where ∇v is the gradient of the velocity field, understood as a linear map that takes a displacement vector r to the corresponding change in the velocity.
In an arbitrary reference frame, ∇v is related to the Jacobian matrix of the field, namely in 3 dimensions it is the 3 × 3 matrix
{\displaystyle \left(\nabla \mathbf {v} \right)^{\mathrm {T} }={\begin{bmatrix}\partial _{1}v_{1}&\partial _{2}v_{1}&\partial _{3}v_{1}\\\partial _{1}v_{2}&\partial _{2}v_{2}&\partial _{3}v_{2}\\\partial _{1}v_{3}&\partial _{2}v_{3}&\partial _{3}v_{3}\end{bmatrix}}=\mathbf {J} .}
where vi is the component of v parallel to axis i and ∂jf denotes the partial derivative of a function f with respect to the space coordinate xj. Note that J is a function of p and t.
In this coordinate system, the Taylor approximation for the velocity near p is
{\displaystyle v_{i}(\mathbf {p} +\mathbf {r} ,t)=v_{i}(\mathbf {p} ,t)+\sum _{j}J_{ij}(\mathbf {p} ,t)r_{j}=v_{i}(\mathbf {p} ,t)+\sum _{j}\partial _{j}v_{i}(\mathbf {p} ,t)r_{j};}
or simply
{\displaystyle \mathbf {v} (\mathbf {p} +\mathbf {r} ,t)=\mathbf {v} (\mathbf {p} ,t)+\mathbf {J} (\mathbf {p} ,t)\mathbf {r} }
if v and r are viewed as 3 × 1 matrices.
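The first-order Taylor approximation above can be verified numerically for a concrete (made-up) velocity field v = (xy, yz, zx); the approximation error is second order in |r|, so it shrinks quadratically as r shrinks.

```python
import numpy as np

# Check v(p + r) ~ v(p) + J(p) r for the illustrative field v = (x*y, y*z, z*x).
def v(p):
    x, y, z = p
    return np.array([x * y, y * z, z * x])

def jacobian(p):
    # J_ij = dv_i / dx_j, computed analytically for this field
    x, y, z = p
    return np.array([[y,   x,   0.0],
                     [0.0, z,   y],
                     [z,   0.0, x]])

p = np.array([1.0, 2.0, 3.0])
r = np.array([1e-4, -2e-4, 1e-4])   # small displacement

exact = v(p + r)
approx = v(p) + jacobian(p) @ r
assert np.allclose(exact, approx, atol=1e-7)   # error is O(|r|^2)
```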
=== Symmetric and antisymmetric parts ===
Any matrix can be decomposed into the sum of a symmetric matrix and an antisymmetric matrix. Applying this decomposition to the Jacobian matrix yields symmetric and antisymmetric components E and R respectively:
{\displaystyle {\begin{aligned}\mathbf {E} &={\frac {1}{2}}\left(\mathbf {J} +\mathbf {J} ^{\textsf {T}}\right)&\mathbf {R} &={\frac {1}{2}}\left(\mathbf {J} -\mathbf {J} ^{\textsf {T}}\right)\\E_{ij}&={\frac {1}{2}}\left(\partial _{j}v_{i}+\partial _{i}v_{j}\right)&R_{ij}&={\frac {1}{2}}\left(\partial _{j}v_{i}-\partial _{i}v_{j}\right)\end{aligned}}}
This decomposition is independent of coordinate system, and so has physical significance. Then the velocity field may be approximated as
{\displaystyle \mathbf {v} (\mathbf {p} +\mathbf {r} ,t)\approx \mathbf {v} (\mathbf {p} ,t)+\mathbf {E} (\mathbf {p} ,t)(\mathbf {r} )+\mathbf {R} (\mathbf {p} ,t)(\mathbf {r} ),}
that is,
{\displaystyle {\begin{aligned}v_{i}(\mathbf {p} +\mathbf {r} ,t)&=v_{i}(\mathbf {p} ,t)+\sum _{j}E_{ij}(\mathbf {p} ,t)r_{j}+\sum _{j}R_{ij}(\mathbf {p} ,t)r_{j}\\&=v_{i}(\mathbf {p} ,t)+{\frac {1}{2}}\sum _{j}\left(\partial _{j}v_{i}(\mathbf {p} ,t)+\partial _{i}v_{j}(\mathbf {p} ,t)\right)r_{j}+{\frac {1}{2}}\sum _{j}\left(\partial _{j}v_{i}(\mathbf {p} ,t)-\partial _{i}v_{j}(\mathbf {p} ,t)\right)r_{j}\end{aligned}}}
The antisymmetric term R represents a rigid-body-like rotation of the fluid about the point p. Its angular velocity {\displaystyle {\vec {\omega }}} is
{\displaystyle {\vec {\omega }}={\frac {1}{2}}\nabla \times \mathbf {v} ={\frac {1}{2}}{\begin{bmatrix}\partial _{2}v_{3}-\partial _{3}v_{2}\\\partial _{3}v_{1}-\partial _{1}v_{3}\\\partial _{1}v_{2}-\partial _{2}v_{1}\end{bmatrix}}.}
The product ∇ × v is called the vorticity of the vector field. A rigid rotation does not change the relative positions of the fluid elements, so the antisymmetric term R of the velocity gradient does not contribute to the rate of change of the deformation. The actual strain rate is therefore described by the symmetric E term, which is the strain rate tensor.
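This can be illustrated numerically with a rigid rotation v = ω × r (the angular velocity value is arbitrary): the strain-rate part vanishes and half the curl, read off from the antisymmetric part, recovers ω.

```python
import numpy as np

# For rigid rotation v = omega x r, the velocity gradient J_ij = dv_i/dx_j
# is the constant antisymmetric matrix below.
omega = np.array([0.0, 0.0, 1.5])   # illustrative angular velocity, rad/s

J = np.array([[0.0,       -omega[2],  omega[1]],
              [omega[2],   0.0,      -omega[0]],
              [-omega[1],  omega[0],  0.0]])

E = 0.5 * (J + J.T)   # strain-rate part
R = 0.5 * (J - J.T)   # rotation part

assert np.allclose(E, 0.0)   # rigid rotation produces no strain rate

# Half the curl, (1/2) * (d_j v_k - d_k v_j) arranged as a vector:
recovered = 0.5 * np.array([J[2, 1] - J[1, 2],
                            J[0, 2] - J[2, 0],
                            J[1, 0] - J[0, 1]])
assert np.allclose(recovered, omega)   # angular velocity is recovered
```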
=== Shear rate and compression rate ===
The symmetric term E (the rate-of-strain tensor) can be broken down further as the sum of a scalar times the unit tensor, that represents a gradual isotropic expansion or contraction; and a traceless symmetric tensor which represents a gradual shearing deformation, with no change in volume:
{\displaystyle \mathbf {E} (\mathbf {p} ,t)(\mathbf {r} )=\mathbf {S} (\mathbf {p} ,t)(\mathbf {r} )+\mathbf {D} (\mathbf {p} ,t)(\mathbf {r} ).}
That is,
{\displaystyle E_{ij}=\underbrace {{\frac {1}{3}}\left(\sum _{k}\partial _{k}v_{k}\right)\delta _{ij}} _{{\text{rate-of-expansion tensor }}S_{ij}}+\underbrace {\overbrace {{\frac {1}{2}}\left(\partial _{i}v_{j}+\partial _{j}v_{i}\right)} ^{E_{ij}}-S_{ij}} _{{\text{rate-of-shear tensor }}D_{ij}},}
Here δ is the unit tensor, such that δij is 1 if i = j and 0 if i ≠ j. This decomposition is independent of the choice of coordinate system, and is therefore physically significant.
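The split into an isotropic expansion part and a traceless shear part is easy to compute for any symmetric tensor; the matrix below is an arbitrary example, not taken from the article.

```python
import numpy as np

# Arbitrary symmetric strain-rate tensor (illustrative values, 1/s)
E = np.array([[2.0,  1.0, 0.0],
              [1.0, -1.0, 0.5],
              [0.0,  0.5, 3.0]])

S = (np.trace(E) / 3.0) * np.eye(3)   # rate-of-expansion tensor (isotropic)
D = E - S                             # rate-of-shear tensor (traceless)

assert np.allclose(S + D, E)          # decomposition is exact
assert abs(np.trace(D)) < 1e-12       # shear part carries no volume change
```

The trace of E (here 4.0 s⁻¹) is the divergence of the velocity field, i.e. the local relative rate of volume expansion; D describes the volume-preserving shearing.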
The trace of the expansion rate tensor is the divergence of the velocity field:
{\displaystyle \nabla \cdot \mathbf {v} =\partial _{1}v_{1}+\partial _{2}v_{2}+\partial _{3}v_{3};}
which is the rate at which the volume of a fixed amount of fluid increases at that point.
The shear rate tensor is represented by a symmetric 3 × 3 matrix, and describes a flow that combines compression and expansion flows along three orthogonal axes, such that there is no change in volume. This type of flow occurs, for example, when a rubber strip is stretched by pulling at the ends, or when honey falls from a spoon as a smooth unbroken stream.
For a two-dimensional flow, the divergence of v has only two terms and quantifies the change in area rather than volume. The factor 1/3 in the expansion rate term should be replaced by 1/2 in that case.
== Examples ==
The study of velocity gradients is useful in analysing path-dependent materials and in the subsequent study of stresses and strains, e.g., plastic deformation of metals. The near-wall velocity gradient of the unburned reactants flowing from a tube is a key parameter for characterising flame stability. The velocity gradient of a plasma can define conditions for the solutions to fundamental equations in magnetohydrodynamics.
=== Fluid in a pipe ===
Consider the velocity field of a fluid flowing through a pipe. The layer of fluid in contact with the pipe tends to be at rest with respect to the pipe; this is called the no-slip condition. If the velocity difference between fluid layers at the centre of the pipe and at the sides of the pipe is sufficiently small, then the fluid flows in continuous layers; this type of flow is called laminar flow.
The flow velocity difference between adjacent layers can be measured in terms of a velocity gradient, given by {\displaystyle \Delta u/\Delta y}, where {\displaystyle \Delta u} is the difference in flow velocity between the two layers and {\displaystyle \Delta y} is the distance between the layers.
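As a minimal numerical example (values chosen arbitrarily for illustration):

```python
# Velocity gradient between two adjacent laminar layers
du = 0.02        # velocity difference between layers, m/s
dy = 0.001       # distance between layers, m
grad = du / dy   # velocity gradient, 1/s
assert abs(grad - 20.0) < 1e-12
```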
== See also ==
Stress tensor (disambiguation)
Finite strain theory § Time-derivative of the deformation gradient, the spatial and material velocity gradient from continuum mechanics
== References ==
In fluid dynamics, the Stokes stream function is used to describe the streamlines and flow velocity in a three-dimensional incompressible flow with axisymmetry. A surface with a constant value of the Stokes stream function encloses a streamtube, everywhere tangential to the flow velocity vectors. Further, the volume flux within this streamtube is constant, and all the streamlines of the flow are located on this surface. The velocity field associated with the Stokes stream function is solenoidal—it has zero divergence. This stream function is named in honor of George Gabriel Stokes.
== Cylindrical coordinates ==
Consider a cylindrical coordinate system ( ρ , φ , z ), with the z–axis the line around which the incompressible flow is axisymmetrical, φ the azimuthal angle and ρ the distance to the z–axis. Then the flow velocity components uρ and uz can be expressed in terms of the Stokes stream function
{\displaystyle \Psi } by:
{\displaystyle {\begin{aligned}u_{\rho }&=-{\frac {1}{\rho }}\,{\frac {\partial \Psi }{\partial z}},\\u_{z}&=+{\frac {1}{\rho }}\,{\frac {\partial \Psi }{\partial \rho }}.\end{aligned}}}
The azimuthal velocity component uφ does not depend on the stream function. Due to the axisymmetry, all three velocity components ( uρ , uφ , uz ) only depend on ρ and z and not on the azimuth φ.
The volume flux, through the surface bounded by a constant value ψ of the Stokes stream function, is equal to 2π ψ.
== Spherical coordinates ==
In spherical coordinates ( r , θ , φ ), r is the radial distance from the origin, θ is the zenith angle and φ is the azimuthal angle. In axisymmetric flow, with θ = 0 the rotational symmetry axis, the quantities describing the flow are again independent of the azimuth φ. The flow velocity components ur and uθ are related to the Stokes stream function
{\displaystyle \Psi } through:
{\displaystyle {\begin{aligned}u_{r}&=+{\frac {1}{r^{2}\,\sin \theta }}\,{\frac {\partial \Psi }{\partial \theta }},\\u_{\theta }&=-{\frac {1}{r\,\sin \theta }}\,{\frac {\partial \Psi }{\partial r}}.\end{aligned}}}
Again, the azimuthal velocity component uφ is not a function of the Stokes stream function ψ. The volume flux through a stream tube, bounded by a surface of constant ψ, equals 2π ψ, as before.
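A standard concrete case: uniform flow of speed U along the symmetry axis has the Stokes stream function Ψ = (U/2) r² sin²θ. The SymPy sketch below recovers the expected velocity components u_r = U cos θ and u_θ = −U sin θ from the spherical-coordinate formulas above.

```python
import sympy as sp

# Uniform axial flow: Psi = (U/2) r^2 sin^2(theta)
r, theta, U = sp.symbols('r theta U', positive=True)
Psi = U / 2 * r**2 * sp.sin(theta)**2

# Velocity components from the Stokes stream function
u_r = sp.diff(Psi, theta) / (r**2 * sp.sin(theta))
u_theta = -sp.diff(Psi, r) / (r * sp.sin(theta))

assert sp.simplify(u_r - U * sp.cos(theta)) == 0
assert sp.simplify(u_theta + U * sp.sin(theta)) == 0
```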
=== Vorticity ===
The vorticity is defined as:
{\displaystyle {\boldsymbol {\omega }}=\nabla \times {\boldsymbol {u}}=\nabla \times \nabla \times {\boldsymbol {\psi }},}
where
{\displaystyle {\boldsymbol {\psi }}={\frac {\Psi }{r\sin \theta }}{\boldsymbol {\hat {\phi }}},}
with {\displaystyle {\boldsymbol {\hat {\phi }}}} the unit vector in the {\displaystyle \phi }–direction.
As a result, from the calculation the vorticity vector is found to be equal to:
{\displaystyle {\boldsymbol {\omega }}={\begin{pmatrix}0\\[1ex]0\\[1ex]\displaystyle -{\frac {1}{r\sin \theta }}\left({\frac {\partial ^{2}\Psi }{\partial r^{2}}}+{\frac {\sin \theta }{r^{2}}}{\partial \over \partial \theta }\left({\frac {1}{\sin \theta }}{\frac {\partial \Psi }{\partial \theta }}\right)\right)\end{pmatrix}}.}
=== Comparison with cylindrical ===
The cylindrical and spherical coordinate systems are related through
{\displaystyle z=r\,\cos \theta } and {\displaystyle \rho =r\,\sin \theta .}
== Alternative definition with opposite sign ==
As explained in the general stream function article, definitions using an opposite sign convention – for the relationship between the Stokes stream function and flow velocity – are also in use.
== Zero divergence ==
In cylindrical coordinates, the divergence of the velocity field u becomes:
{\displaystyle {\begin{aligned}\nabla \cdot {\boldsymbol {u}}&={\frac {1}{\rho }}{\frac {\partial }{\partial \rho }}{\Bigl (}\rho \,u_{\rho }{\Bigr )}+{\frac {\partial u_{z}}{\partial z}}\\&={\frac {1}{\rho }}{\frac {\partial }{\partial \rho }}\left(-{\frac {\partial \Psi }{\partial z}}\right)+{\frac {\partial }{\partial z}}\left({\frac {1}{\rho }}{\frac {\partial \Psi }{\partial \rho }}\right)=0,\end{aligned}}}
as expected for an incompressible flow.
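The cylindrical-coordinate calculation above can be reproduced symbolically with SymPy for an arbitrary (undefined) stream function Ψ(ρ, z):

```python
import sympy as sp

# Divergence-free check for the velocity field derived from an arbitrary
# Stokes stream function Psi(rho, z) in cylindrical coordinates.
rho, z = sp.symbols('rho z', positive=True)
Psi = sp.Function('Psi')(rho, z)

u_rho = -sp.diff(Psi, z) / rho      # u_rho = -(1/rho) dPsi/dz
u_z = sp.diff(Psi, rho) / rho       # u_z   = +(1/rho) dPsi/drho

# Cylindrical divergence for an axisymmetric field (no phi dependence)
div = sp.diff(rho * u_rho, rho) / rho + sp.diff(u_z, z)
assert sp.simplify(div) == 0        # mixed partials cancel
```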
And in spherical coordinates:
{\displaystyle {\begin{aligned}\nabla \cdot {\boldsymbol {u}}&={\frac {1}{r\,\sin \theta }}{\frac {\partial }{\partial \theta }}(u_{\theta }\,\sin \theta )+{\frac {1}{r^{2}}}{\frac {\partial }{\partial r}}{\Bigl (}r^{2}\,u_{r}{\Bigr )}\\&={\frac {1}{r\,\sin \theta }}{\frac {\partial }{\partial \theta }}\left(-{\frac {1}{r}}{\frac {\partial \Psi }{\partial r}}\right)+{\frac {1}{r^{2}}}{\frac {\partial }{\partial r}}\left({\frac {1}{\sin \theta }}{\frac {\partial \Psi }{\partial \theta }}\right)=0.\end{aligned}}}
== Streamlines as curves of constant stream function ==
From calculus it is known that the gradient vector {\displaystyle \nabla \Psi } is normal to the curve {\displaystyle \Psi =C} (see e.g. Level set § Level sets versus the gradient). If it is shown that everywhere {\displaystyle {\boldsymbol {u}}\cdot \nabla \Psi =0,} using the formula for {\displaystyle {\boldsymbol {u}}} in terms of {\displaystyle \Psi ,} then this proves that level curves of {\displaystyle \Psi } are streamlines.
=== Cylindrical coordinates ===
In cylindrical coordinates,
{\displaystyle \nabla \Psi ={\partial \Psi \over \partial \rho }{\boldsymbol {e}}_{\rho }+{\partial \Psi \over \partial z}{\boldsymbol {e}}_{z},}
and
{\displaystyle {\boldsymbol {u}}=u_{\rho }{\boldsymbol {e}}_{\rho }+u_{z}{\boldsymbol {e}}_{z}=-{1 \over \rho }{\partial \Psi \over \partial z}{\boldsymbol {e}}_{\rho }+{1 \over \rho }{\partial \Psi \over \partial \rho }{\boldsymbol {e}}_{z}.}
So that
{\displaystyle \nabla \Psi \cdot {\boldsymbol {u}}={\partial \Psi \over \partial \rho }\left(-{1 \over \rho }{\partial \Psi \over \partial z}\right)+{\partial \Psi \over \partial z}{1 \over \rho }{\partial \Psi \over \partial \rho }=0.}
=== Spherical coordinates ===
And in spherical coordinates
{\displaystyle \nabla \Psi ={\partial \Psi \over \partial r}{\boldsymbol {e}}_{r}+{1 \over r}{\partial \Psi \over \partial \theta }{\boldsymbol {e}}_{\theta }}
and
{\displaystyle {\boldsymbol {u}}=u_{r}{\boldsymbol {e}}_{r}+u_{\theta }{\boldsymbol {e}}_{\theta }={1 \over r^{2}\sin \theta }{\partial \Psi \over \partial \theta }{\boldsymbol {e}}_{r}-{1 \over r\sin \theta }{\partial \Psi \over \partial r}{\boldsymbol {e}}_{\theta }.}
So that
{\displaystyle \nabla \Psi \cdot {\boldsymbol {u}}={\partial \Psi \over \partial r}\cdot {1 \over r^{2}\sin \theta }{\partial \Psi \over \partial \theta }+{1 \over r}{\partial \Psi \over \partial \theta }\cdot {\Big (}-{1 \over r\sin \theta }{\partial \Psi \over \partial r}{\Big )}=0.}
== Notes ==
== References ==
Batchelor, G.K. (1967). An Introduction to Fluid Dynamics. Cambridge University Press. ISBN 0-521-66396-2.
Lamb, H. (1994). Hydrodynamics (6th ed.). Cambridge University Press. ISBN 978-0-521-45868-9. Originally published in 1879, the 6th extended edition appeared first in 1932.
Stokes, G.G. (1842). "On the steady motion of incompressible fluids". Transactions of the Cambridge Philosophical Society. 7: 439–453. Bibcode:1848TCaPS...7..439S. Reprinted in: Stokes, G.G. (1880). Mathematical and Physical Papers, Volume I. Cambridge University Press. pp. 1–16.