diff --git "a/domain_2.jsonl" "b/domain_2.jsonl"
new file mode 100644
--- /dev/null
+++ "b/domain_2.jsonl"
@@ -0,0 +1,5000 @@
+{"text":"A spectrum (plural \"spectra\" or \"spectrums\") is a condition that is not limited to a specific set of values but can vary, without steps, across a continuum. The word was first used scientifically in optics to describe the rainbow of colors in visible light after passing through a prism. As scientific understanding of light advanced, it came to apply to the entire electromagnetic spectrum. It thereby became a mapping of a range of magnitudes (wavelengths) to a range of qualities, which are the perceived \"colors of the rainbow\" and other properties which correspond to wavelengths that lie outside of the visible light spectrum."}
+{"text":"Spectrum has since been applied by analogy to topics outside optics. Thus, one might talk about the \"spectrum of political opinion\", or the \"spectrum of activity\" of a drug, or the \"autism spectrum\". In these uses, values within a spectrum may not be associated with precisely quantifiable numbers or definitions. Such uses imply a broad range of conditions or behaviors grouped together and studied under a single title for ease of discussion. Nonscientific uses of the term \"spectrum\" are sometimes misleading. For instance, a single left\u2013right spectrum of political opinion does not capture the full range of people's political beliefs. Political scientists use a variety of biaxial and multiaxial systems to more accurately characterize political opinion."}
+{"text":"In most modern usages of \"spectrum\" there is a unifying theme between the extremes at either end. This was not always true in older usage."}
+{"text":"In Latin, \"spectrum\" means \"image\" or \"apparition\", including the meaning \"spectre\". Spectral evidence is testimony about what was done by spectres of persons not present physically, or hearsay evidence about what ghosts or apparitions of Satan said. It was used to convict a number of persons of witchcraft at Salem, Massachusetts in the late 17th century. The word \"spectrum\" [Spektrum] was strictly used to designate a ghostly optical afterimage by Goethe in his \"Theory of Colors\" and Schopenhauer in \"On Vision and Colors\"."}
+{"text":"The prefix \"spectro-\" is used to form words relating to spectra. For example, a spectrometer is a device used to record spectra and spectroscopy is the use of a spectrometer for chemical analysis."}
+{"text":"In the 17th century, the word \"spectrum\" was introduced into optics by Isaac Newton, referring to the range of colors observed when white light was dispersed through a prism. Soon the term referred to a plot of light intensity or power as a function of frequency or wavelength, also known as a \"spectral density plot\"."}
+{"text":"The term \"spectrum\" was expanded to apply to other waves, such as sound waves, which can also be measured as a function of frequency, yielding the frequency spectrum and power spectrum of a signal. The term now applies to any signal that can be measured or decomposed along a continuous variable, such as energy in electron spectroscopy or mass-to-charge ratio in mass spectrometry. Spectrum is also used to refer to a graphical representation of the signal as a function of this variable."}
+{"text":"Light from many different sources contains various colors, each with its own brightness or intensity. A rainbow, or prism, sends these component colors in different directions, making them individually visible at different angles. A graph of the intensity plotted against the frequency (showing the brightness of each color) is the frequency spectrum of the light. When all the visible frequencies are present equally, the perceived color of the light is white, and the spectrum is a flat line. Therefore, flat-line spectra in general are often referred to as \"white\", whether they represent light or another type of wave phenomenon (sound, for example, or vibration in a structure)."}
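The flat ("white") spectrum described above can be illustrated numerically. The sketch below is not from the source; it assumes NumPy is available and uses illustrative names, estimating the power spectrum of white noise with an FFT and checking that it is roughly flat:

```python
# Sketch, not from the source: estimating a frequency spectrum with an FFT.
# White noise contains all frequencies equally, so its power spectrum is flat.
import numpy as np

rng = np.random.default_rng(0)
n = 1 << 16
signal = rng.standard_normal(n)                 # white noise

power = np.abs(np.fft.rfft(signal)) ** 2 / n    # one-sided power spectrum

# "Flat" check: mean power in the lower and upper halves of the band agree.
low, high = power[1:n // 4], power[n // 4:n // 2]
ratio = low.mean() / high.mean()
print(round(float(ratio), 1))                   # close to 1.0 for a white spectrum
```

The same comparison on, say, a low-pass-filtered signal would give a ratio well above 1, since power would concentrate in the lower band.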
+{"text":"In astronomical spectroscopy, the strength, shape, and position of absorption and emission lines, as well as the overall spectral energy distribution of the continuum, reveal many properties of astronomical objects. Stellar classification is the categorisation of stars based on their characteristic electromagnetic spectra. The spectral flux density is used to represent the spectrum of a light-source, such as a star."}
+{"text":"In radiometry and colorimetry (or color science more generally), the spectral power distribution (SPD) of a light source is a measure of the power contributed by each frequency or color in a light source. The light spectrum is usually measured at points (often 31) along the visible spectrum, in wavelength space instead of frequency space, which makes it not strictly a spectral density. Some spectrophotometers can measure increments as fine as one to two nanometers. The values are used to calculate other specifications and then plotted to show the spectral attributes of the source. This can be helpful in analyzing the color characteristics of a particular source."}
+{"text":"A plot of ion abundance as a function of mass-to-charge ratio is called a mass spectrum. It can be produced by a mass spectrometer instrument. The mass spectrum can be used to determine the quantity and mass of atoms and molecules. Tandem mass spectrometry is used to determine molecular structure."}
+{"text":"In physics, the energy spectrum of a particle is the number of particles or intensity of a particle beam as a function of particle energy. Examples of techniques that produce an energy spectrum are alpha-particle spectroscopy, electron energy loss spectroscopy, and mass-analyzed ion-kinetic-energy spectrometry."}
+{"text":"In physics, particularly in quantum mechanics, some differential operators have discrete spectra, with gaps between values. Common cases include the Hamiltonian and the angular momentum operator."}
+{"text":"In acoustics, a spectrogram is a visual representation of the frequency spectrum of sound as a function of time or another variable."}
+{"text":"A source of sound can have many different frequencies mixed. A musical tone's timbre is characterized by its harmonic spectrum. Sound in our environment that we refer to as \"noise\" includes many different frequencies. When a sound signal contains a mixture of all audible frequencies, distributed equally over the audio spectrum, it is called white noise."}
+{"text":"The spectrum analyzer is an instrument which can be used to convert the sound wave of the musical note into a visual display of the constituent frequencies. This visual display is referred to as an acoustic spectrogram. Software-based audio spectrum analyzers are available at low cost, providing easy access not only to industry professionals, but also to academics, students and hobbyists. The acoustic spectrogram generated by the spectrum analyzer provides an acoustic signature of the musical note. In addition to revealing the fundamental frequency and its overtones, the spectrogram is also useful for analysis of the temporal attack, decay, sustain, and release of the musical note."}
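A software spectrum analyzer of the kind described can be sketched with a hand-rolled short-time Fourier transform. The example below assumes only NumPy; the note itself (a 440 Hz fundamental with one overtone and a decaying envelope) is invented for illustration:

```python
# Sketch of a software spectrum analyzer (assumed NumPy only; the note is
# synthetic): a short-time Fourier transform turns a decaying 440 Hz tone
# with one overtone into a spectrogram (time x frequency array of powers).
import numpy as np

fs = 8000                                       # sample rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
note = np.exp(-3 * t) * (np.sin(2 * np.pi * 440 * t)
                         + 0.5 * np.sin(2 * np.pi * 880 * t))

frame, hop = 512, 256
frames = [note[i:i + frame] * np.hanning(frame)
          for i in range(0, len(note) - frame, hop)]
spectrogram = np.abs(np.fft.rfft(frames, axis=1)) ** 2

# The strongest bin of the first frame sits at the fundamental (~440 Hz):
freqs = np.fft.rfftfreq(frame, 1 / fs)
peak_hz = freqs[np.argmax(spectrogram[0])]
print(float(peak_hz))
```

Scanning `spectrogram` row by row also exposes the envelope: the total power per frame decays over time, which is the "temporal decay" the text mentions.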
+{"text":"Antibiotic spectrum of activity is a component of antibiotic classification. A broad-spectrum antibiotic is active against a wide range of bacteria, whereas a narrow-spectrum antibiotic is effective against specific families of bacteria. An example of a commonly used broad-spectrum antibiotic is ampicillin. An example of a narrow-spectrum antibiotic is dicloxacillin, which acts on beta-lactamase-producing Gram-positive bacteria such as \"Staphylococcus aureus\"."}
+{"text":"In psychiatry, the spectrum approach uses the term spectrum to describe a range of linked conditions, sometimes also extending to include singular symptoms and traits. For example, the autism spectrum describes a range of conditions classified as neurodevelopmental disorders."}
+{"text":"In mathematics, the spectrum of a matrix is the multiset of the eigenvalues of the matrix."}
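A minimal illustration of this definition, assuming NumPy is available:

```python
# Minimal illustration (assumed NumPy): the spectrum of a matrix is the
# multiset of its eigenvalues, counted with multiplicity.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])                      # eigenvalue 2, multiplicity 2

spectrum = np.linalg.eigvals(A)
print(sorted(spectrum.real.tolist()))           # the multiset {2, 2}
```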
+{"text":"In functional analysis, the concept of the spectrum of a bounded operator is a generalization of the eigenvalue concept for matrices."}
+{"text":"In algebraic topology, a spectrum is an object representing a generalized cohomology theory."}
+{"text":"In social science, economic spectrum is used to indicate the range of social class along some indicator of wealth or income. In political science, the term political spectrum refers to a system of classifying political positions in one or more dimensions, for example in a range including right wing and left wing."}
+{"text":"In dynamical system theory, a phase space is a space in which all possible states of a system are represented, with each possible state corresponding to one unique point in the phase space. For mechanical systems, the phase space usually consists of all possible values of position and momentum variables. The concept of phase space was developed in the late 19th century by Ludwig Boltzmann, Henri Poincar\u00e9, and Josiah Willard Gibbs."}
+{"text":"In classical mechanics, any choice of generalized coordinates q_i for the position (i.e. coordinates on configuration space) defines conjugate generalized momenta p_i, which together define coordinates on phase space. More abstractly, in classical mechanics phase space is the cotangent bundle of configuration space, and in this interpretation the procedure above expresses that a choice of local coordinates on configuration space induces a choice of natural local Darboux coordinates for the standard symplectic structure on a cotangent space."}
+{"text":"The motion of an ensemble of systems in this space is studied by classical statistical mechanics. The local density of points in such systems obeys Liouville's theorem, and so can be taken as constant. Within the context of a model system in classical mechanics, the phase space coordinates of the system at any given time are composed of all of the system's dynamic variables. Because of this, it is possible to calculate the state of the system at any given time in the future or the past, through integration of Hamilton's or Lagrange's equations of motion."}
+{"text":"For simple systems, there may be as few as one or two degrees of freedom. One degree of freedom occurs when one has an autonomous ordinary differential equation in a single variable, dx\/dt = f(x), with the resulting one-dimensional system being called a phase line, and the qualitative behaviour of the system being immediately visible from the phase line. The simplest non-trivial examples are the exponential growth\/decay model (one unstable\/stable equilibrium) and the logistic growth model (two equilibria, one stable, one unstable)."}
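The phase-line reading of the logistic model can be sketched in a few lines; the parameter values and the `stable` helper below are illustrative, classifying an equilibrium by the sign of the flow on either side of it:

```python
# Sketch (parameters and helper names are illustrative): equilibria and their
# stability for the logistic model dx/dt = r*x*(1 - x/K), read off the sign
# of the flow on either side of each equilibrium, as on a phase line.
r, K = 1.0, 10.0

def f(x):
    """Right-hand side of the logistic equation dx/dt = f(x)."""
    return r * x * (1 - x / K)

equilibria = [0.0, K]                           # where f(x) = 0

def stable(x_eq, eps=1e-3):
    # Stable if the flow points toward x_eq from both sides.
    return f(x_eq - eps) > 0 and f(x_eq + eps) < 0

print([stable(x) for x in equilibria])          # x=0 unstable, x=K stable
```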
+{"text":"The phase space of a two-dimensional system is called a phase plane, which occurs in classical mechanics for a single particle moving in one dimension, and where the two variables are position and velocity. In this case, a sketch of the phase portrait may give qualitative information about the dynamics of the system, such as the limit cycle of the Van der Pol oscillator shown in the diagram."}
+{"text":"Here, the horizontal axis gives the position and vertical axis the velocity. As the system evolves, its state follows one of the lines (trajectories) on the phase diagram."}
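The Van der Pol limit cycle mentioned above can be reproduced numerically. The sketch below assumes SciPy is available; the damping parameter and the two initial states are illustrative choices. It integrates the oscillator from inside and outside the cycle and checks that both trajectories settle onto a cycle of the same amplitude:

```python
# Sketch (assuming SciPy; mu and starting states are illustrative): the
# Van der Pol oscillator x'' - mu*(1 - x^2)*x' + x = 0, integrated from two
# initial states, both of which settle onto the same limit cycle.
import numpy as np
from scipy.integrate import solve_ivp

mu = 1.0

def van_der_pol(t, y):
    x, v = y
    return [v, mu * (1 - x ** 2) * v - x]

sol_inner = solve_ivp(van_der_pol, (0, 50), [0.1, 0.0], dense_output=True)
sol_outer = solve_ivp(van_der_pol, (0, 50), [4.0, 0.0], dense_output=True)

# Compare late-time amplitudes (max |x|) of the two trajectories:
t_late = np.linspace(40, 50, 500)
amp_inner = np.abs(sol_inner.sol(t_late)[0]).max()
amp_outer = np.abs(sol_outer.sol(t_late)[0]).max()
print(round(float(amp_inner), 1), round(float(amp_outer), 1))  # both near 2.0
```

Plotting position against velocity for either solution would draw the closed curve of the phase portrait described in the text.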
+{"text":"Classic examples of phase diagrams from chaos theory include the Lorenz attractor."}
+{"text":"A plot of position and momentum variables as a function of time is sometimes called a phase plot or a phase diagram. However, the latter expression, \"phase diagram\", is more usually reserved in the physical sciences for a diagram showing the various regions of stability of the thermodynamic phases of a chemical system as a function of pressure, temperature, and composition."}
+{"text":"In quantum mechanics, the coordinates \"p\" and \"q\" of phase space normally become Hermitian operators in a Hilbert space."}
+{"text":"But they may alternatively retain their classical interpretation, provided functions of them compose in novel algebraic ways (through Groenewold's 1946 star product). This is consistent with the uncertainty principle of quantum mechanics."}
+{"text":"Every quantum mechanical observable corresponds to a unique function or distribution on phase space, and vice versa, as specified by Hermann Weyl (1927) and supplemented by John von Neumann (1931); Eugene Wigner (1932); and, in a grand synthesis, by H J Groenewold (1946)."}
+{"text":"With J E Moyal (1949), these completed the foundations of the phase space formulation of quantum mechanics, a complete and logically autonomous reformulation of quantum mechanics. (Its modern abstractions include deformation quantization and geometric quantization.)"}
+{"text":"Expectation values in phase-space quantization are obtained isomorphically to tracing operator observables with the density matrix in Hilbert space: they are obtained by phase-space integrals of observables, with the Wigner quasi-probability distribution effectively serving as a measure."}
+{"text":"Thus, by expressing quantum mechanics in phase space (the same ambit as for classical mechanics), the Weyl map facilitates recognition of quantum mechanics as a deformation (generalization) of classical mechanics, with deformation parameter \"\u0127\/S\", where \"S\" is the action of the relevant process. (Other familiar deformations in physics involve the deformation of classical Newtonian into relativistic mechanics, with deformation parameter \"v\"\/\"c\"; or the deformation of Newtonian gravity into General Relativity, with deformation parameter Schwarzschild radius\/characteristic-dimension.)"}
+{"text":"Classical expressions, observables, and operations (such as Poisson brackets) are modified by \u0127-dependent quantum corrections, as the conventional commutative multiplication applying in classical mechanics is generalized to the noncommutative star-multiplication characterizing quantum mechanics and underlying its uncertainty principle."}
+{"text":"The phase space can also refer to the space that is parameterized by the \"macroscopic\" states of the system, such as pressure, temperature, etc. For instance, one may view the pressure-volume diagram or entropy-temperature diagram as describing part of this phase space. A point in this phase space is correspondingly called a macrostate. There may easily be more than one microstate with the same macrostate. For example, for a fixed temperature, the system could have many dynamic configurations at the microscopic level. When used in this sense, a phase is a region of phase space in which the system in question lies, for example, the liquid phase or solid phase."}
+{"text":"Since there are many more microstates than macrostates, the phase space in the first sense is usually a manifold of much larger dimensions than in the second sense. Clearly, many more parameters are required to register every detail of the system down to the molecular or atomic scale than to simply specify, say, the temperature or the pressure of the system."}
+{"text":"Phase space is extensively used in nonimaging optics, the branch of optics devoted to illumination. It is also an important concept in Hamiltonian optics."}
+{"text":"In classical statistical mechanics (continuous energies) the concept of phase space provides a classical analog to the partition function (sum over states) known as the phase integral. Instead of summing the Boltzmann factor over discretely spaced energy states (defined by appropriate integer quantum numbers for each degree of freedom), one may integrate over continuous phase space. Such integration essentially consists of two parts: integration of the momentum component of all degrees of freedom (momentum space) and integration of the position component of all degrees of freedom (configuration space). Once the phase integral is known, it may be related to the classical partition function by multiplication of a normalization constant representing the number of quantum energy states per unit phase space. This normalization constant is simply the inverse of Planck's constant raised to a power equal to the number of degrees of freedom for the system."}
+{"text":"The macroscopic scale is the length scale on which objects or phenomena are large enough to be visible with the naked eye, without magnifying optical instruments. It is the opposite of microscopic."}
+{"text":"When applied to physical phenomena and bodies, the macroscopic scale describes things as a person can directly perceive them, without the aid of magnifying devices. This is in contrast to observations (microscopy) or theories (microphysics, statistical physics) of objects of geometric lengths smaller than perhaps some hundreds of micrometers."}
+{"text":"A macroscopic view of a ball is just that: a ball. A microscopic view could reveal a thick round skin seemingly composed entirely of puckered cracks and fissures (as viewed through a microscope) or, further down in scale, a collection of molecules in a roughly spherical shape (as viewed through an electron microscope). An example of a physical theory that takes a deliberately macroscopic viewpoint is thermodynamics. An example of a topic that extends from macroscopic to microscopic viewpoints is histology."}
+{"text":"In the Quantum Measurement Problem the issue of what constitutes the macroscopic world and what constitutes the quantum world is unresolved and possibly unsolvable. The related Correspondence Principle can be articulated thus: every macroscopic phenomenon can be formulated as a problem in quantum theory. A violation of the Correspondence Principle would thus ensure an empirical distinction between the macroscopic and the quantum."}
+{"text":"In pathology, macroscopic diagnostics generally involves gross pathology, in contrast to microscopic histopathology."}
+{"text":"The term \"megascopic\" is a synonym. \"Macroscopic\" may also refer to a \"larger view\", namely a view available only from a large perspective (a hypothetical \"macroscope\"). A macroscopic position could be considered the \"big picture\"."}
+{"text":"High energy physics compared to low energy physics."}
+{"text":"Finally, when reaching the quantum particle level, the high energy domain is revealed. The proton has a mass-energy of ~938 MeV; some other massive quantum particles, both elementary and hadronic, have yet higher mass-energies. Quantum particles with lower mass-energies are also part of high energy physics; they also have a mass-energy that is far higher than that at the macroscopic scale (such as electrons), or are equally involved in reactions at the particle level (such as neutrinos). Relativistic effects, as in particle accelerators and cosmic rays, can further increase the accelerated particles' energy by many orders of magnitude, as well as the total energy of the particles emanating from their collision and annihilation."}
+{"text":"The term zero-point energy (ZPE) is a translation from the German Nullpunktsenergie."}
+{"text":"Sometimes used interchangeably with it are the terms zero-point radiation and ground state energy. The term zero-point field (ZPF) can be used when referring to a specific vacuum field, for instance the QED vacuum which specifically deals with quantum electrodynamics (e.g., electromagnetic interactions between photons, electrons and the vacuum) or the QCD vacuum which deals with quantum chromodynamics (e.g., color charge interactions between quarks, gluons and the vacuum). A vacuum can be viewed not as empty space but as the combination of all zero-point fields. In quantum field theory this combination of fields is called the vacuum state, its associated zero-point energy is called the vacuum energy and the average energy value is called the vacuum expectation value (VEV) also called its condensate."}
+{"text":"In classical mechanics all particles can be thought of as having some energy made up of their potential energy and kinetic energy. Temperature, for example, arises from the intensity of random particle motion caused by kinetic energy (known as Brownian motion). As temperature is reduced to absolute zero, it might be thought that all motion ceases and particles come completely to rest. In fact, however, kinetic energy is retained by particles even at the lowest possible temperature. The random motion corresponding to this zero-point energy never vanishes as a consequence of the uncertainty principle of quantum mechanics."}
+{"text":"The uncertainty principle states that no object can ever have precise values of position and velocity simultaneously. The total energy of a quantum mechanical object (potential and kinetic) is described by its Hamiltonian which also describes the system as a harmonic oscillator, or wave function, that fluctuates between various energy states (see wave-particle duality). All quantum mechanical systems undergo fluctuations even in their ground state, a consequence of their wave-like nature. The uncertainty principle requires every quantum mechanical system to have a fluctuating zero-point energy greater than the minimum of its classical potential well. This results in motion even at absolute zero. For example, liquid helium does not freeze under atmospheric pressure regardless of temperature due to its zero-point energy."}
+{"text":"Late in the 19th century, however, it became apparent that the evacuated region still contained thermal radiation. The existence of the aether as a substitute for a true void was the most prevalent theory of the time. According to the successful electromagnetic aether theory based upon Maxwell's electrodynamics, this all-encompassing aether was endowed with energy and hence very different from nothingness. The fact that electromagnetic and gravitational phenomena were easily transmitted in empty space indicated that their associated aethers were part of the fabric of space itself. Maxwell himself noted that:"}
+{"text":"However, the results of the Michelson\u2013Morley experiment in 1887 were the first strong evidence that the then-prevalent aether theories were seriously flawed, and initiated a line of research that eventually led to special relativity, which ruled out the idea of a stationary aether altogether. To scientists of the period, it seemed that a true vacuum in space might be achieved by cooling, thus eliminating all radiation or energy. From this idea evolved the second concept of achieving a real vacuum: cool a region of space down to absolute zero temperature after evacuation. Absolute zero was technically impossible to achieve in the 19th century, so the debate remained unresolved."}
+{"text":"In 1900, Max Planck derived the average energy \u03b5 of a single \"energy radiator\", e.g., a vibrating atomic unit, as a function of absolute temperature: \u03b5 = h\u03bd \/ (e^(h\u03bd\/(kT)) \u2212 1),"}
+{"text":"where h is Planck's constant, \u03bd is the frequency, k is Boltzmann's constant, and T is the absolute temperature. The zero-point energy makes no contribution to Planck's original law, as its existence was unknown to Planck in 1900."}
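Planck's average-energy formula can be checked numerically. The sketch below uses standard physical constants; the oscillator frequency is an assumed, illustrative value. It shows that at high temperature, where kT >> h*nu, the average energy approaches the classical value kT, with no zero-point term anywhere in the expression:

```python
# Sketch (standard constants; the frequency is an assumed illustrative value):
# Planck's 1900 average oscillator energy h*nu / (exp(h*nu/(k*T)) - 1),
# which approaches the classical value k*T at high temperature and contains
# no zero-point term.
import math

h = 6.626e-34     # Planck's constant, J*s
k = 1.381e-23     # Boltzmann's constant, J/K
nu = 1.0e12       # oscillator frequency, Hz (assumed)

def planck_avg_energy(T):
    x = h * nu / (k * T)
    return h * nu / math.expm1(x)

T = 300.0
ratio = planck_avg_energy(T) / (k * T)   # tends to 1 when k*T >> h*nu
print(round(ratio, 2))
```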
+{"text":"The concept of zero-point energy was developed by Max Planck in Germany in 1911 as a corrective term added to a zero-grounded formula developed in his original quantum theory in 1900."}
+{"text":"Contemporary physicists, when asked to give a physical explanation for spontaneous emission, generally invoke the zero-point energy of the electromagnetic field. This view was popularized by Victor Weisskopf who in 1935 wrote:"}
+{"text":"This view was also later supported by Theodore Welton (1948), who argued that spontaneous emission \"can be thought of as forced emission taking place under the action of the fluctuating field.\" This new theory, which Dirac coined quantum electrodynamics (QED), predicted a fluctuating zero-point or \"vacuum\" field existing even in the absence of sources."}
+{"text":"In 1951 Herbert Callen and Theodore Welton proved the quantum fluctuation-dissipation theorem (FDT), which was originally formulated in classical form by Nyquist (1928) as an explanation for observed Johnson noise in electric circuits. The fluctuation-dissipation theorem showed that when something dissipates energy, in an effectively irreversible way, a connected heat bath must also fluctuate. The fluctuations and the dissipation go hand in hand; it is impossible to have one without the other. The implication of the FDT is that the vacuum could be treated as a heat bath coupled to a dissipative force, and as such energy could, in part, be extracted from the vacuum for potentially useful work. The FDT has been shown to be true experimentally under certain quantum, non-classical, conditions."}
+{"text":"Zero-point energy is fundamentally related to the Heisenberg uncertainty principle. Roughly speaking, the uncertainty principle states that complementary variables (such as a particle's position and momentum, or a field's value and derivative at a point in space) cannot simultaneously be specified precisely by any given quantum state. In particular, there cannot exist a state in which the system simply sits motionless at the bottom of its potential well: for, then, its position and momentum would both be completely determined to arbitrarily great precision. Therefore, instead, the lowest-energy state (the ground state) of the system must have a distribution in position and momentum that satisfies the uncertainty principle, which implies its energy must be greater than the minimum of the potential well."}
+{"text":"Near the bottom of a potential well, the Hamiltonian of a general system (the quantum-mechanical operator giving its energy) can be approximated as a quantum harmonic oscillator, H = V(x_0) + (1\/2)m\u03c9^2(x \u2212 x_0)^2 + p^2\/(2m),"}
+{"text":"where V(x_0) is the minimum of the classical potential well."}
+{"text":"The uncertainty principle \u0394x \u0394p \u2265 \u0127\/2 makes the expectation values of the kinetic and potential terms above satisfy \u27e8p^2\/(2m)\u27e9 \u00b7 \u27e8(1\/2)m\u03c9^2(x \u2212 x_0)^2\u27e9 \u2265 (\u0127\u03c9\/4)^2."}
+{"text":"The expectation value of the energy must therefore be at least \u27e8H\u27e9 \u2265 V(x_0) + \u0127\u03c9\/2,"}
+{"text":"where \u03c9 is the angular frequency at which the system oscillates."}
+{"text":"A more thorough treatment, showing that the energy of the ground state actually saturates this bound and is exactly E_0 = V(x_0) + \u0127\u03c9\/2, requires solving for the ground state of the system."}
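That solution can be imitated numerically. The sketch below (natural units and grid parameters are assumptions, with the well minimum at zero) diagonalizes a finite-difference harmonic-oscillator Hamiltonian and recovers a ground-state energy of about hbar*omega/2:

```python
# Numerical sketch (natural units and grid size are assumptions): diagonalize
# a finite-difference harmonic-oscillator Hamiltonian and recover a
# ground-state energy of about hbar*omega/2.
import numpy as np

hbar = m = omega = 1.0
n, L = 1000, 16.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# Kinetic energy via the standard three-point second difference,
# potential energy 0.5*m*omega^2*x^2 on the diagonal:
main = hbar**2 / (m * dx**2) + 0.5 * m * omega**2 * x**2
off = np.full(n - 1, -hbar**2 / (2 * m * dx**2))
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E0 = np.linalg.eigvalsh(H)[0]
print(round(float(E0), 3))               # ~0.5 = hbar*omega/2
```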
+{"text":"The idea of a quantum harmonic oscillator and its associated energy can apply to either an atom or subatomic particle. In ordinary atomic physics, the zero-point energy is the energy associated with the ground state of the system. The professional physics literature tends to measure frequency, as denoted by \u03bd above, using angular frequency, denoted with \u03c9 and defined by \u03c9 = 2\u03c0\u03bd. This leads to a convention of writing Planck's constant with a bar through its top (\u0127) to denote the quantity h\/(2\u03c0). In these terms, the most famous such example of zero-point energy is the above E = \u0127\u03c9\/2 associated with the ground state of the quantum harmonic oscillator. In quantum mechanical terms, the zero-point energy is the expectation value of the Hamiltonian of the system in the ground state."}
+{"text":"If more than one ground state exists, they are said to be degenerate. Many systems have degenerate ground states. Degeneracy occurs whenever there exists a unitary operator which acts non-trivially on a ground state and commutes with the Hamiltonian of the system."}
+{"text":"According to the third law of thermodynamics, a system at absolute zero temperature exists in its ground state; thus, its entropy is determined by the degeneracy of the ground state. Many systems, such as a perfect crystal lattice, have a unique ground state and therefore have zero entropy at absolute zero. It is also possible for the highest excited state to have absolute zero temperature for systems that exhibit negative temperature."}
+{"text":"The wave function of the ground state of a particle in a one-dimensional well is a half-period sine wave which goes to zero at the two edges of the well. The energy of the particle is given by E_n = n^2h^2\/(8mL^2),"}
+{"text":"where h is the Planck constant, m is the mass of the particle, n is the energy state (n = 1 corresponds to the ground-state energy), and L is the width of the well."}
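As a worked example of this formula, one can evaluate the levels for an electron in a 1 nm wide well; the choice of particle and width is illustrative, not from the source:

```python
# Worked example of E_n = n^2 * h^2 / (8*m*L^2); an electron in a 1 nm wide
# well is an illustrative choice, not from the source.
h = 6.626e-34      # Planck constant, J*s
m_e = 9.109e-31    # electron mass, kg
L = 1.0e-9         # well width, m
eV = 1.602e-19     # joules per electronvolt

def E(n):
    return n**2 * h**2 / (8 * m_e * L**2)

ground = E(1) / eV                       # ground-state energy in eV
print(round(ground, 2))
```

Because the energy scales as n^2, the second level lies four times higher than this ground-state value.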
+{"text":"In quantum field theory (QFT), the fabric of \"empty\" space is visualized as consisting of fields, with the field at every point in space and time being a quantum harmonic oscillator, with neighboring oscillators interacting with each other. According to QFT the universe is made up of matter fields whose quanta are fermions (e.g. electrons and quarks), force fields whose quanta are bosons (i.e. photons and gluons) and a Higgs field whose quantum is the Higgs boson. The matter and force fields have zero-point energy. A related term is \"zero-point field\" (ZPF), which is the lowest energy state of a particular field. The vacuum can be viewed not as empty space, but as the combination of all zero-point fields."}
+{"text":"In QFT the zero-point energy of the vacuum state is called the vacuum energy and the average expectation value of the Hamiltonian is called the vacuum expectation value (also called condensate or simply VEV). The QED vacuum is a part of the vacuum state which specifically deals with quantum electrodynamics (e.g. electromagnetic interactions between photons, electrons and the vacuum) and the QCD vacuum deals with quantum chromodynamics (e.g. color charge interactions between quarks, gluons and the vacuum). Recent experiments advocate the idea that particles themselves can be thought of as excited states of the underlying quantum vacuum, and that all properties of matter are merely vacuum fluctuations arising from interactions with the zero-point field."}
+{"text":"Each point in space makes a contribution of \u0127\u03c9\/2, resulting in a calculation of infinite zero-point energy in any finite volume; this is one reason renormalization is needed to make sense of quantum field theories. In cosmology, the vacuum energy is one possible explanation for the cosmological constant and the source of dark energy."}
+{"text":"Scientists are not in agreement about how much energy is contained in the vacuum. Quantum mechanics requires the energy to be large, as Paul Dirac claimed it is: like a sea of energy. Other scientists specializing in General Relativity require the energy to be small enough for the curvature of space to agree with observed astronomy. The Heisenberg uncertainty principle allows the energy to be as large as needed to promote quantum actions for a brief moment of time, even if the average energy is small enough to satisfy relativity and flat space. To cope with these disagreements, the vacuum energy is described as a virtual energy potential of positive and negative energy."}
+{"text":"In quantum perturbation theory, it is sometimes said that the contribution of one-loop and multi-loop Feynman diagrams to elementary particle propagators is the contribution of vacuum fluctuations, or of the zero-point energy, to the particle masses."}
+{"text":"The oldest and best known quantized force field is the electromagnetic field. Maxwell's equations have been superseded by quantum electrodynamics (QED). By considering the zero-point energy that arises from QED it is possible to gain a characteristic understanding of zero-point energy that arises not just through electromagnetic interactions but in all quantum field theories."}
+{"text":"In the quantum theory of the electromagnetic field, classical wave amplitudes \u03b1 and \u03b1* are replaced by operators a and a\u2020 that satisfy the commutation relation [a, a\u2020] = 1."}
+{"text":"The classical quantity |\u03b1|^2 appearing in the classical expression for the energy of a field mode is replaced in quantum theory by the photon number operator a\u2020a. The fact that a and a\u2020 do not commute implies that quantum theory does not allow states of the radiation field for which the photon number and a field amplitude can be precisely defined, i.e., we cannot have simultaneous eigenstates for a\u2020a and a. The reconciliation of wave and particle attributes of the field is accomplished via the association of a probability amplitude with a classical mode pattern. The calculation of field modes is an entirely classical problem, while the quantum properties of the field are carried by the mode \"amplitudes\" a and a\u2020 associated with these classical modes."}
+{"text":"The zero-point energy of the field arises formally from the non-commutativity of and . This is true for any harmonic oscillator: the zero-point energy appears when we write the Hamiltonian:"}
+{"text":"It is often argued that the entire universe is completely bathed in the zero-point electromagnetic field, and as such it can add only some constant amount to expectation values. Physical measurements will therefore reveal only deviations from the vacuum state. Thus the zero-point energy can be dropped from the Hamiltonian by redefining the zero of energy, or by arguing that it is a constant and therefore has no effect on Heisenberg equations of motion. Thus we can choose to declare by fiat that the ground state has zero energy and a field Hamiltonian, for example, can be replaced by:"}
+{"text":"without affecting any physical predictions of the theory. The new Hamiltonian is said to be normally ordered (or Wick ordered) and is denoted by a double-dot symbol. The normally ordered Hamiltonian is denoted , i.e.:"}
+{"text":"In other words, within the normal ordering symbol we can commute and . Since zero-point energy is intimately connected to the non-commutativity of and , the normal ordering procedure eliminates any contribution from the zero-point field. This is especially reasonable in the case of the field Hamiltonian, since the zero-point term merely adds a constant energy which can be eliminated by a simple redefinition for the zero of energy. Moreover, this constant energy in the Hamiltonian obviously commutes with and and so cannot have any effect on the quantum dynamics described by the Heisenberg equations of motion."}
+{"text":"However, things are not quite that simple. The zero-point energy cannot be eliminated by dropping its energy from the Hamiltonian: When we do this and solve the Heisenberg equation for a field operator, we must include the vacuum field, which is the homogeneous part of the solution for the field operator. In fact we can show that the vacuum field is essential for the preservation of the commutators and the formal consistency of QED. When we calculate the field energy we obtain not only a contribution from particles and forces that may be present but also a contribution from the vacuum field itself, i.e. the zero-point field energy. In other words, the zero-point energy reappears even though we may have deleted it from the Hamiltonian."}
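The link between the non-commutativity of the ladder operators and the surviving zero-point term can be checked numerically in a truncated Fock space. This is my own illustrative sketch, not part of the original text; the dimension N and the units ħω = 1 are arbitrary choices, and the truncation spoils the commutator only at the last matrix element:

```python
import numpy as np

N = 40                                 # truncated Fock-space dimension (sketch artifact)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
ad = a.conj().T                              # creation operator

# [a, a†] = 1 holds exactly except at the truncation edge (last diagonal entry)
comm = a @ ad - ad @ a
assert np.allclose(np.diag(comm)[:-1], 1.0)

# H = ħω (a†a + 1/2); in units ħω = 1 the ground-state (zero-point) energy is 1/2
H = ad @ a + 0.5 * np.eye(N)
E0 = np.linalg.eigvalsh(H)[0]
print(E0)  # 0.5
```

Dropping the constant 0.5*np.eye(N) shifts every eigenvalue by the same amount, which is the normal-ordering redefinition of the zero of energy described above.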
+{"text":"From Maxwell's equations, the electromagnetic energy of a \"free\" field, i.e. one with no sources, is described by:"}
+{"text":"We introduce the \"mode function\" that satisfies the Helmholtz equation:"}
+{"text":"where and assume it is normalized such that:"}
+{"text":"We wish to \"quantize\" the electromagnetic energy of free space for a multimode field. The field intensity of free space should be independent of position such that should be independent of for each mode of the field. The mode function satisfying these conditions is:"}
+{"text":"where in order to have the transversality condition satisfied for the Coulomb gauge in which we are working."}
+{"text":"To achieve the desired normalization we pretend space is divided into cubes of volume and impose on the field the periodic boundary condition:"}
+{"text":"where can assume any integer value. This allows us to consider the field in any one of the imaginary cubes and to define the mode function:"}
+{"text":"which satisfies the Helmholtz equation, transversality, and the \"box normalization\":"}
+{"text":"where is chosen to be a unit vector which specifies the polarization of the field mode. The condition means that there are two independent choices of , which we call and where and . Thus we define the mode functions:"}
+{"text":"in terms of which the vector potential becomes:"}
+{"text":"where and , are photon annihilation and creation operators for the mode with wave vector and polarization . This gives the vector potential for a plane wave mode of the field. The condition for shows that there are infinitely many such modes. The linearity of Maxwell's equations allows us to write:"}
+{"text":"for the total vector potential in free space. Using the fact that:"}
+{"text":"This is the Hamiltonian for an infinite number of uncoupled harmonic oscillators. Thus different modes of the field are independent and satisfy the commutation relations:"}
+{"text":"This state describes the zero-point energy of the vacuum. It appears that this sum is divergent \u2013 in fact highly divergent, as putting in the density factor"}
+{"text":"shows. The summation becomes approximately the integral:"}
+{"text":"for high values of . It diverges proportional to for large ."}
+{"text":"Necessity of the vacuum field in QED."}
+{"text":"The vacuum state of the \"free\" electromagnetic field (that with no sources) is defined as the ground state in which for all modes . The vacuum state, like all stationary states of the field, is an eigenstate of the Hamiltonian but not of the electric and magnetic field operators. In the vacuum state, therefore, the electric and magnetic fields do not have definite values. We can imagine them to be fluctuating about their mean value of zero."}
+{"text":"In a process in which a photon is annihilated (absorbed), we can think of the photon as making a transition into the vacuum state. Similarly, when a photon is created (emitted), it is occasionally useful to imagine that the photon has made a transition out of the vacuum state. An atom, for instance, can be considered to be \"dressed\" by emission and reabsorption of \"virtual photons\" from the vacuum. The vacuum state energy described by is infinite. We can make the replacement:"}
+{"text":"or in other words the spectral energy density of the vacuum field:"}
+{"text":"The zero-point energy density in the frequency range from to is therefore:"}
+{"text":"This can be large even in relatively narrow \"low frequency\" regions of the spectrum. In the optical region from 400 to 700\u00a0nm, for instance, the above equation yields around 220\u00a0erg\/cm3."}
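The quoted figure can be reproduced by integrating the zero-point spectral density ρ₀(ω) = ħω³/2π²c³ over the optical band. A quick numerical check (the constants and band edges are my own choices, matching the 400–700 nm range in the text):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def zero_point_density(lam_min, lam_max):
    """Integral of rho0(omega) = hbar*omega^3 / (2*pi^2*c^3) over a wavelength band.

    Gives hbar*(omega2^4 - omega1^4) / (8*pi^2*c^3) in J/m^3."""
    w1 = 2 * math.pi * c / lam_max   # lower angular frequency (longer wavelength)
    w2 = 2 * math.pi * c / lam_min
    return hbar * (w2**4 - w1**4) / (8 * math.pi**2 * c**3)

rho = zero_point_density(400e-9, 700e-9)
print(rho * 10)  # in erg/cm^3 (1 J/m^3 = 10 erg/cm^3): ~218, i.e. "around 220"
```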
+{"text":"We showed in the above section that the zero-point energy can be eliminated from the Hamiltonian by the normal ordering prescription. However, this elimination does not mean that the vacuum field has been rendered unimportant or without physical consequences. To illustrate this point we consider a linear dipole oscillator in the vacuum. The Hamiltonian for the oscillator plus the field with which it interacts is:"}
+{"text":"This has the same form as the corresponding classical Hamiltonian and the Heisenberg equations of motion for the oscillator and the field are formally the same as their classical counterparts. For instance the Heisenberg equations for the coordinate and the canonical momentum of the oscillator are:"}
+{"text":"since the rate of change of the vector potential in the frame of the moving charge is given by the convective derivative"}
+{"text":"For nonrelativistic motion we may neglect the magnetic force and replace the expression for by:"}
+{"text":"Above we have made the electric dipole approximation in which the spatial dependence of the field is neglected. The Heisenberg equation for is found similarly from the Hamiltonian to be:"}
+{"text":"In deriving these equations for , , and we have used the fact that equal-time particle and field operators commute. This follows from the assumption that particle and field operators commute at some time (say, ) when the matter-field interaction is presumed to begin, together with the fact that a Heisenberg-picture operator evolves in time as , where is the time evolution operator satisfying"}
+{"text":"Alternatively, we can argue that these operators must commute if we are to obtain the correct equations of motion from the Hamiltonian, just as the corresponding Poisson brackets in classical theory must vanish in order to generate the correct Hamilton equations. The formal solution of the field equation is:"}
+{"text":"and therefore the equation for may be written:"}
+{"text":"It can be shown that in the radiation reaction field, if the mass is regarded as the \"observed\" mass then we can take:"}
+{"text":"The total field acting on the dipole has two parts, and . is the free or zero-point field acting on the dipole. It is the homogeneous solution of the Maxwell equation for the field acting on the dipole, i.e., the solution, at the position of the dipole, of the wave equation"}
+{"text":"satisfied by the field in the (source free) vacuum. For this reason is often referred to as the \"vacuum field\", although it is of course a Heisenberg-picture operator acting on whatever state of the field happens to be appropriate at . is the source field, the field generated by the dipole and acting on the dipole."}
+{"text":"Using the above equation for we obtain an equation for the Heisenberg-picture operator that is formally the same as the classical equation for a linear dipole oscillator:"}
+{"text":"where . In this instance we have considered a dipole in the vacuum, without any \"external\" field acting on it. The role of the external field in the above equation is played by the vacuum electric field acting on the dipole."}
+{"text":"Classically, a dipole in the vacuum is not acted upon by any \"external\" field: if there are no sources other than the dipole itself, then the only field acting on the dipole is its own radiation reaction field. In quantum theory however there is always an \"external\" field, namely the source-free or vacuum field ."}
+{"text":"According to our earlier equation for the free field is the only field in existence at , the time at which the interaction between the dipole and the field is \"switched on\". The state vector of the dipole-field system at is therefore of the form"}
+{"text":"where is the vacuum state of the field and is the initial state of the dipole oscillator. The expectation value of the free field is therefore at all times equal to zero:"}
+{"text":"since . However, the energy density associated with the free field is infinite:"}
+{"text":"The important point of this is that the zero-point field energy does not affect the Heisenberg equation for since it is a c-number or constant (i.e. an ordinary number rather than an operator) and commutes with . We can therefore drop the zero-point field energy from the Hamiltonian, as is usually done. But the zero-point field re-emerges as the homogeneous solution for the field equation. A charged particle in the vacuum will therefore always see a zero-point field of infinite density. This is the origin of one of the infinities of quantum electrodynamics, and it cannot be eliminated by the trivial expedient of dropping the term from the field Hamiltonian."}
+{"text":"The free field is in fact necessary for the formal consistency of the theory. In particular, it is necessary for the preservation of the commutation relations, which is required by the unitarity of time evolution in quantum theory:"}
+{"text":"We can calculate from the formal solution of the operator equation of motion"}
+{"text":"Using this solution and the fact that equal-time particle and field operators commute, we obtain:"}
+{"text":"For the dipole oscillator under consideration it can be assumed that the radiative damping rate is small compared with the natural oscillation frequency, i.e., . Then the integrand above is sharply peaked at and:"}
+{"text":"The necessity of the vacuum field can also be appreciated by making the small damping approximation in"}
+{"text":"Without the free field in this equation the operator would be exponentially damped, and commutators like would approach zero for . With the vacuum field included, however, the commutator is at all times, as required by unitarity, and as we have just shown. A similar result is easily worked out for the case of a free particle instead of a dipole oscillator."}
+{"text":"What we have here is an example of a \"fluctuation-dissipation relation\". Generally speaking, if a system is coupled to a bath that can take energy from the system in an effectively irreversible way, then the bath must also cause fluctuations. The fluctuations and the dissipation go hand in hand; we cannot have one without the other. In the current example the coupling of a dipole oscillator to the electromagnetic field has a dissipative component, in the form of the radiation reaction field, and a fluctuation component, in the form of the zero-point (vacuum) field; given the existence of radiation reaction, the vacuum field must also exist in order to preserve the canonical commutation rule and all it entails."}
+{"text":"The spectral density of the vacuum field is fixed by the form of the radiation reaction field, or vice versa: because the radiation reaction field varies with the third derivative of , the spectral energy density of the vacuum field must be proportional to the third power of in order for to hold. In the case of a dissipative force proportional to , by contrast, the fluctuation force must be proportional to in order to maintain the canonical commutation relation. This relation between the form of the dissipation and the spectral density of the fluctuation is the essence of the fluctuation-dissipation theorem."}
+{"text":"The fact that the canonical commutation relation for a harmonic oscillator coupled to the vacuum field is preserved implies that the zero-point energy of the oscillator is preserved. It is easy to show that after a few damping times the zero-point motion of the oscillator is in fact sustained by the driving zero-point field."}
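The point of the last few paragraphs, that dissipation alone would let the oscillator's motion die away while a fluctuating drive sustains it, can be illustrated with a classical Langevin toy model. This is an analogy I am adding, not the QED calculation itself; all parameter values are arbitrary:

```python
import numpy as np

def simulate(gamma=0.1, omega=1.0, noise=0.0, dt=0.01, steps=20000, seed=0):
    """Damped oscillator x'' + gamma*x' + omega^2*x = xi(t), Euler-Maruyama.

    Returns the mean energy over the second half of the run."""
    rng = np.random.default_rng(seed)
    x, v = 1.0, 0.0
    energies = []
    for step in range(steps):
        xi = noise * rng.standard_normal() / np.sqrt(dt)  # white-noise force
        v += (-omega**2 * x - gamma * v + xi) * dt
        x += v * dt
        if step >= steps // 2:
            energies.append(0.5 * v**2 + 0.5 * omega**2 * x**2)
    return float(np.mean(energies))

E_damped = simulate(noise=0.0)  # dissipation only: the motion decays to nothing
E_driven = simulate(noise=0.3)  # fluctuating drive: the motion is sustained
print(E_damped, E_driven)
```

In the quantum problem the roles of damping and noise are played by radiation reaction and the vacuum field, with the sustained "thermal" level replaced by the zero-point motion.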
+{"text":"The QCD vacuum is the vacuum state of quantum chromodynamics (QCD). It is an example of a \"non-perturbative\" vacuum state, characterized by non-vanishing condensates such as the gluon condensate and the quark condensate in the complete theory which includes quarks. The presence of these condensates characterizes the confined phase of quark matter. In technical terms, gluons are vector gauge bosons that mediate strong interactions of quarks in quantum chromodynamics (QCD). Gluons themselves carry the color charge of the strong interaction. This is unlike the photon, which mediates the electromagnetic interaction but lacks an electric charge. Gluons therefore participate in the strong interaction in addition to mediating it, making QCD significantly harder to analyze than QED (quantum electrodynamics) as it deals with nonlinear equations to characterize such interactions."}
+{"text":"The Higgs mechanism is a type of superconductivity which occurs in the vacuum. It occurs when all of space is filled with a sea of particles which are charged and thus the field has a nonzero vacuum expectation value. Interaction with the vacuum energy filling the space prevents certain forces from propagating over long distances (as it does in a superconducting medium; e.g., in the Ginzburg\u2013Landau theory)."}
+{"text":"A phenomenon that is commonly presented as evidence for the existence of zero-point energy in vacuum is the Casimir effect, proposed in 1948 by Dutch physicist Hendrik Casimir, who considered the quantized electromagnetic field between a pair of grounded, neutral metal plates. The vacuum energy contains contributions from all wavelengths, except those excluded by the spacing between plates. As the plates draw together, more wavelengths are excluded and the vacuum energy decreases. The decrease in energy means there must be a force doing work on the plates as they move."}
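For a sense of scale, the attractive pressure Casimir predicted for ideal, perfectly conducting plates is P = π²ħc/(240 d⁴), where d is the plate separation. A quick evaluation (the separations below are my own illustrative choices):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def casimir_pressure(d):
    """Ideal-plate Casimir pressure P = pi^2 * hbar * c / (240 * d^4), in pascals."""
    return math.pi**2 * hbar * c / (240 * d**4)

print(casimir_pressure(1e-6))   # ~1.3e-3 Pa at 1 micron: tiny
print(casimir_pressure(10e-9))  # ~1.3e5 Pa at 10 nm: the d^-4 scaling makes it
                                # large at the nanoscale, which is why it matters
                                # for MEMS devices discussed later
```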
+{"text":"Early experimental tests from the 1950s onwards gave positive results showing the force was real, but other external factors could not be ruled out as the primary cause, with the range of experimental error sometimes being nearly 100%. That changed in 1997 with Lamoreaux conclusively showing that the Casimir force was real. Results have been repeatedly replicated since then."}
+{"text":"In 2009, Munday et al. published experimental proof that (as predicted in 1961) the Casimir force could be repulsive as well as attractive. Repulsive Casimir forces could allow quantum levitation of objects in a fluid and lead to a new class of switchable nanoscale devices with ultra-low static friction."}
+{"text":"An interesting hypothetical side effect of the Casimir effect is the Scharnhorst effect, a hypothetical phenomenon in which light signals travel slightly faster than the speed of light between two closely spaced conducting plates."}
+{"text":"Taking \u0127 (Planck's constant divided by 2\u03c0), c (the speed of light), and e\u00b2\/4\u03c0\u03b5\u2080 (the electromagnetic coupling constant, i.e. a measure of the strength of the electromagnetic force, where e is the absolute value of the electronic charge and \u03b5\u2080 is the vacuum permittivity), we can form a dimensionless quantity called the fine-structure constant:"}
+{"text":"The fine-structure constant is the coupling constant of quantum electrodynamics (QED) determining the strength of the interaction between electrons and photons. It turns out that the fine-structure constant is not really a constant at all owing to the zero-point energy fluctuations of the electron-positron field. The quantum fluctuations caused by zero-point energy have the effect of screening electric charges: owing to (virtual) electron-positron pair production, the charge of the particle measured far from the particle is far smaller than the charge measured when close to it."}
+{"text":"The Heisenberg inequality, where \u0394x and \u0394p are the standard deviations of position and momentum, states that:"}
+{"text":"It means that a short distance implies large momentum and therefore high energy, i.e. particles of high energy must be used to explore short distances. QED concludes that the fine-structure constant is an increasing function of energy. It has been shown that at energies of the order of the Z0 boson rest energy, 90\u00a0GeV:"}
+{"text":"rather than the low-energy . The renormalization procedure of eliminating zero-point energy infinities allows the choice of an arbitrary energy (or distance) scale for defining . All in all, depends on the energy scale characteristic of the process under study, and also on details of the renormalization procedure. The energy dependence of has been observed for several years now in precision experiments in high-energy physics."}
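The running just described can be sketched with the standard one-loop QED formula. With only the electron-positron loop, α(Q) = α / (1 − (2α/3π) ln(Q/mₑ)); this is my own illustration, and the electron loop alone underestimates the effect — including all charged-fermion loops brings the value at the Z⁰ scale down to roughly 1/129, as the text states:

```python
import math

ALPHA0 = 1 / 137.035999   # low-energy fine-structure constant
ME = 0.511e-3             # electron mass in GeV

def alpha_electron_loop(q_gev):
    """One-loop running of alpha, keeping only the electron-positron loop."""
    return ALPHA0 / (1 - (2 * ALPHA0 / (3 * math.pi)) * math.log(q_gev / ME))

a_z = alpha_electron_loop(90.0)   # evaluated at the Z0 rest energy
print(1 / a_z)  # ~134.5: already visibly larger coupling than 1/137
```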
+{"text":"In the late 1990s it was discovered that very distant supernovae were dimmer than expected, suggesting that the universe's expansion was accelerating rather than slowing down. This revived discussion of whether Einstein's cosmological constant, long disregarded by physicists as being equal to zero, was in fact some small positive value. This would indicate that empty space exerted some form of negative pressure or energy."}
+{"text":"There is no natural candidate for what might cause what has been called dark energy but the current best guess is that it is the zero-point energy of the vacuum. One difficulty with this assumption is that the zero-point energy of the vacuum is absurdly large compared to the observed cosmological constant. In general relativity, mass and energy are equivalent; both produce a gravitational field and therefore the theorized vacuum energy of quantum field theory should have led to the universe ripping itself to pieces. This obviously has not happened and this issue, called the cosmological constant problem, is one of the greatest unsolved mysteries in physics."}
+{"text":"Cosmic inflation is a faster-than-light expansion of space just after the Big Bang. It explains the origin of the large-scale structure of the cosmos. It is believed that quantum vacuum fluctuations caused by zero-point energy arose in the microscopic inflationary period and were later magnified to a cosmic size, becoming the gravitational seeds for galaxies and structure in the Universe (see galaxy formation and evolution and structure formation). Many physicists also believe that inflation explains why the Universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the Universe is flat, and why no magnetic monopoles have been observed."}
+{"text":"The mechanism for inflation is unclear; it is similar in effect to dark energy but is a far more energetic and short-lived process. As with dark energy, the best explanation is some form of vacuum energy arising from quantum fluctuations. It may be that inflation caused baryogenesis, the hypothetical physical processes that produced an asymmetry (imbalance) between baryons and antibaryons produced in the very early universe, but this is far from certain."}
+{"text":"Physicists overwhelmingly reject any possibility that the zero-point energy field can be exploited to obtain useful energy (work) or uncompensated momentum; such efforts are seen as tantamount to perpetual motion machines."}
+{"text":"Nevertheless, the allure of free energy has motivated such research, usually falling in the category of fringe science. As long ago as 1889 (before quantum theory or the discovery of zero-point energy) Nikola Tesla proposed that useful energy could be obtained from free space, or what was assumed at that time to be an all-pervasive aether. Others have since claimed to exploit zero-point or vacuum energy, with a large amount of pseudoscientific literature causing ridicule around the subject. Despite rejection by the scientific community, harnessing zero-point energy remains an interest of research by non-scientific entities, particularly in the US where it has attracted the attention of major aerospace\/defence contractors and the U.S. Department of Defense, as well as in China, Germany, Russia and Brazil."}
+{"text":"A common assumption is that the Casimir force is of little practical use; the argument is made that the only way to actually gain energy from the two plates is to allow them to come together (getting them apart again would then require more energy), and therefore it is a one-use-only tiny force in nature. In 1984 Robert Forward published work showing how a \"vacuum-fluctuation battery\" could be constructed. The battery can be recharged by making the electrical forces slightly stronger than the Casimir force to reexpand the plates."}
+{"text":"In 1995 and 1998 Maclay et al. published the first models of a microelectromechanical system (MEMS) with Casimir forces. While not exploiting the Casimir force for useful work, the papers drew attention from the MEMS community due to the revelation that the Casimir effect needs to be considered as a vital factor in the future design of MEMS. In particular, the Casimir effect might be the critical factor in the stiction failure of MEMS."}
+{"text":"In 1999, Pinto, a former scientist at NASA's Jet Propulsion Laboratory at Caltech in Pasadena, published in \"Physical Review\" his thought experiment (Gedankenexperiment) for a \"Casimir engine\". The paper showed that continuous positive net exchange of energy from the Casimir effect was possible, even stating in the abstract \"In the event of no other alternative explanations, one should conclude that major technological advances in the area of endless, by-product free-energy production could be achieved.\""}
+{"text":"In 2001, Capasso et al. showed how the force can be used to control the mechanical motion of a MEMS device. The researchers suspended a polysilicon plate from a torsional rod \u2013 a twisting horizontal bar just a few microns in diameter. When they brought a metallized sphere close up to the plate, the attractive Casimir force between the two objects made the plate rotate. They also studied the dynamical behaviour of the MEMS device by making the plate oscillate. The Casimir force reduced the rate of oscillation and led to nonlinear phenomena, such as hysteresis and bistability in the frequency response of the oscillator. According to the team, the system's behaviour agreed well with theoretical calculations."}
+{"text":"Despite this and several similar peer-reviewed papers, there is no consensus as to whether such devices can produce a continuous output of work. Garret Moddel at the University of Colorado has highlighted that he believes such devices hinge on the assumption that the Casimir force is a nonconservative force; he argues that there is sufficient evidence (e.g. analysis by Scandurra (2001)) to say that the Casimir effect is a conservative force, and therefore even though such an engine can exploit the Casimir force for useful work, it cannot produce more output energy than has been input into the system."}
+{"text":"In 2008, DARPA solicited research proposals in the area of Casimir Effect Enhancement (CEE). The goal of the program is to develop new methods to control and manipulate attractive and repulsive forces at surfaces based on engineering of the Casimir force."}
+{"text":"A growing number of papers have shown that in some instances the classical laws of thermodynamics, such as limits on the Carnot efficiency, can be violated by exploiting the negative entropy of quantum fluctuations."}
+{"text":"Despite efforts to reconcile quantum mechanics and thermodynamics over the years, their compatibility is still an open fundamental problem. The full extent to which quantum properties can alter classical thermodynamic bounds is unknown."}
+{"text":"In 1986 the U.S. Air Force's then Rocket Propulsion Laboratory (RPL) at Edwards Air Force Base solicited \"Non Conventional Propulsion Concepts\" under a small business research and innovation program. One of the six areas of interest was \"Esoteric energy sources for propulsion, including the quantum dynamic energy of vacuum space...\" In the same year BAE Systems launched \"Project Greenglow\" to provide a \"focus for research into novel propulsion systems and the means to power them\"."}
+{"text":"In 2002 Phantom Works, Boeing's advanced research and development facility in Seattle, approached Evgeny Podkletnov directly. Phantom Works was blocked by Russian technology transfer controls. At this time Lieutenant General George Muellner, the outgoing head of the Boeing Phantom Works, confirmed that attempts by Boeing to work with Podkletnov had been blocked by Moscow, also commenting that \"The physical principles \u2013 and Podkletnov's device is not the only one \u2013 appear to be valid... There is basic science there. They're not breaking the laws of physics. The issue is whether the science can be engineered into something workable\"."}
+{"text":"Two-dimensional rotation can occur in two possible directions. Clockwise motion (abbreviated CW) proceeds in the same direction as a clock's hands: from the top to the right, then down and then to the left, and back up to the top. The opposite sense of rotation or revolution is (in Commonwealth English) anticlockwise (ACW) or (in North American English) counterclockwise (CCW)."}
+{"text":"Before clocks were commonplace, the terms \"sunwise\" and \"deasil\", \"deiseil\" and even \"deocil\" from the Scottish Gaelic language and from the same root as the Latin \"dexter\" (\"right\") were used for clockwise. \"Widdershins\" or \"withershins\" (from Middle Low German \"weddersinnes\", \"opposite course\") was used for counterclockwise."}
+{"text":"The terms clockwise and counterclockwise can only be applied to a rotational motion once a side of the rotational plane is specified, from which the rotation is observed. For example, the daily rotation of the Earth is clockwise when viewed from above the South Pole, and counterclockwise when viewed from above the North Pole (considering \"above a point\" to be defined as \"farther away from the center of earth and on the same ray\")."}
+{"text":"Clocks whose hands revolve counterclockwise are nowadays occasionally sold as a novelty. Historically, some Jewish clocks were built that way, for example in some synagogue towers in Europe such as the Jewish Town Hall in Prague, to accord with right-to-left reading in the Hebrew language. In 2014, under Bolivian president Evo Morales, the clock outside the Legislative Assembly in Plaza Murillo, La Paz, was shifted to counterclockwise motion to promote indigenous values."}
+{"text":"Typical nuts, screws, bolts, bottle caps, and jar lids are tightened (moved away from the observer) clockwise and loosened (moved towards the observer) counterclockwise in accordance with the right-hand rule."}
+{"text":"To apply the right-hand rule, place one's loosely clenched right hand above the object with the thumb pointing in the direction one wants the screw, nut, bolt, or cap ultimately to move, and the curl of the fingers, from the palm to the tips, will indicate in which way one needs to turn the screw, nut, bolt or cap to achieve the desired result. Almost all threaded objects obey this rule except for a few left-handed exceptions described below."}
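The right-hand rule can also be phrased with the cross product: if the thumb gives the angular-velocity vector \u03c9, a point at position r moves with velocity v = \u03c9 \u00d7 r, and the resulting sense of rotation depends on which side you view it from, as described above. A small numeric check of my own:

```python
import numpy as np

# Angular velocity along +z (the thumb pointing toward a viewer on the +z side).
omega = np.array([0.0, 0.0, 1.0])
p = np.array([1.0, 0.0, 0.0])   # a point on the rotating object

v = np.cross(omega, p)          # instantaneous velocity v = omega x p
print(v)                        # [0. 1. 0.]: the point moves toward +y,
                                # i.e. counterclockwise as seen from +z

# Seen from the opposite (-z) side, the very same motion appears clockwise:
# flipping the viewpoint is equivalent to negating omega's z-component.
```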
+{"text":"The reason for the clockwise standard for most screws and bolts is that supination of the arm, which is used by a right-handed person to tighten a screw clockwise, is generally stronger than pronation used to loosen."}
+{"text":"In trigonometry and mathematics in general, plane angles are conventionally measured counterclockwise, starting with 0\u00b0 or 0 radians pointing directly to the right (or east), and 90\u00b0 pointing straight up (or north). However, in navigation, compass headings increase clockwise around the compass face, starting with 0\u00b0 at the top of the compass (the northerly direction), with 90\u00b0 to the right (east)."}
+{"text":"A circle defined parametrically in a positive Cartesian plane by the equations x = cos t and y = sin t is traced counterclockwise as the angle \"t\" increases in value, from the right-most point at . An alternative formulation with sin and cos swapped gives a clockwise trace from the upper-most point."}
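The orientation of such a parametric trace can be verified with the signed (shoelace) area, which is positive for a counterclockwise path and negative for a clockwise one. A short check of the two parameterizations just described (my own sketch):

```python
import math

def signed_area(pts):
    """Shoelace formula: positive for counterclockwise, negative for clockwise."""
    return 0.5 * sum(x1 * y2 - x2 * y1
                     for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))

ts = [2 * math.pi * k / 360 for k in range(360)]
ccw = [(math.cos(t), math.sin(t)) for t in ts]  # x = cos t, y = sin t
cw = [(math.sin(t), math.cos(t)) for t in ts]   # sin and cos swapped

print(signed_area(ccw))  # ~ +pi: counterclockwise unit circle
print(signed_area(cw))   # ~ -pi: clockwise trace of the same circle
```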
+{"text":"In general, most card games, board games, parlor games, and multiple team sports play in a clockwise turn rotation in Western countries and Latin America, with a notable resistance to playing in the opposite direction (counterclockwise). Traditionally (and still continued for the most part), turns pass counterclockwise in many Asian countries. In Western countries, when speaking and discussion activities take place in a circle, turns tend to naturally pass in a clockwise motion even though there is no obligation to do so. Curiously, unlike with games, there is usually no objection when the activity uncharacteristically begins in a counterclockwise motion."}
+{"text":"Notably, the game of baseball is played counterclockwise."}
+{"text":"Most left-handed people prefer to draw circles and circulate in buildings clockwise, while most right-handed people prefer to draw circles and circulate in buildings counterclockwise. While this was theorized to result from dominant brain hemispheres, research shows little correlation and instead attributes it to muscle mechanics."}
+{"text":"In the mathematical field of differential geometry, one definition of a metric tensor is a type of function which takes as input a pair of tangent vectors and at a point of a surface (or higher dimensional differentiable manifold) and produces a real number scalar in a way that generalizes many of the familiar properties of the dot product of vectors in Euclidean space. In the same way as a dot product, metric tensors are used to define the length of and angle between tangent vectors. Through integration, the metric tensor allows one to define and compute the length of curves on the manifold."}
+{"text":"While the notion of a metric tensor was known in some sense to mathematicians such as Carl Gauss from the early 19th century, it was not until the early 20th century that its properties as a tensor were understood by, in particular, Gregorio Ricci-Curbastro and Tullio Levi-Civita, who first codified the notion of a tensor. The metric tensor is an example of a tensor field."}
+{"text":"The components of a metric tensor in a coordinate basis take on the form of a symmetric matrix whose entries transform covariantly under changes to the coordinate system. Thus a metric tensor is a covariant symmetric tensor. From the coordinate-independent point of view, a metric tensor field is defined to be a nondegenerate symmetric bilinear form on each tangent space that varies smoothly from point to point."}
+{"text":"Carl Friedrich Gauss in his 1827 \"Disquisitiones generales circa superficies curvas\" (\"General investigations of curved surfaces\") considered a surface parametrically, with the Cartesian coordinates , , and of points on the surface depending on two auxiliary variables and . Thus a parametric surface is (in today's terms) a vector-valued function"}
+{"text":"depending on an ordered pair of real variables , and defined in an open set in the -plane. One of the chief aims of Gauss's investigations was to deduce those features of the surface which could be described by a function which would remain unchanged if the surface underwent a transformation in space (such as bending the surface without stretching it), or a change in the particular parametric form of the same geometrical surface."}
+{"text":"One natural such invariant quantity is the length of a curve drawn along the surface. Another is the angle between a pair of curves drawn along the surface and meeting at a common point. A third such quantity is the area of a piece of the surface. The study of these invariants of a surface led Gauss to introduce the predecessor of the modern notion of the metric tensor."}
+{"text":"If the variables and are taken to depend on a third variable, , taking values in an interval , then will trace out a parametric curve in parametric surface . The arc length of that curve is given by the integral"}
+{"text":"where formula_3 represents the Euclidean norm. Here the chain rule has been applied, and the subscripts denote partial derivatives:"}
+{"text":"The integrand is the restriction to the curve of the square root of the (quadratic) differential"}
+{"text":"The quantity in () is called the line element, while is called the first fundamental form of . Intuitively, it represents the principal part of the square of the displacement undergone by when is increased by units, and is increased by units."}
+{"text":"Using matrix notation, the first fundamental form becomes"}
+{"text":"Suppose now that a different parameterization is selected, by allowing and to depend on another pair of variables and . Then the analog of () for the new variables is"}
+{"text":"The chain rule relates , , and to , , and via the matrix equation"}
+{"text":"where the superscript T denotes the matrix transpose. The matrix with the coefficients , , and arranged in this way therefore transforms by the Jacobian matrix of the coordinate change"}
+{"text":"A matrix which transforms in this way is one kind of what is called a tensor. The matrix"}
+{"text":"with the transformation law () is known as the metric tensor of the surface."}
+{"text":"first observed the significance of a system of coefficients , , and , that transformed in this way on passing from one system of coordinates to another. The upshot is that the first fundamental form () is \"invariant\" under changes in the coordinate system, and that this follows exclusively from the transformation properties of , , and . Indeed, by the chain rule,"}
+{"text":"Another interpretation of the metric tensor, also considered by Gauss, is that it provides a way in which to compute the length of tangent vectors to the surface, as well as the angle between two tangent vectors. In contemporary terms, the metric tensor allows one to compute the dot product of tangent vectors in a manner independent of the parametric description of the surface. Any tangent vector at a point of the parametric surface can be written in the form"}
+{"text":"for suitable real numbers and . If two tangent vectors are given:"}
+{"text":"then using the bilinearity of the dot product,"}
+{"text":"This is plainly a function of the four variables , , , and . It is more profitably viewed, however, as a function that takes a pair of arguments and which are vectors in the -plane. That is, put"}
+{"text":"This is a symmetric function in and , meaning that"}
+{"text":"It is also bilinear, meaning that it is linear in each variable and separately. That is,"}
+{"text":"for any vectors , , , and in the plane, and any real numbers and ."}
+{"text":"In particular, the length of a tangent vector is given by"}
+{"text":"and the angle between two vectors and is calculated by"}
+{"text":"The surface area is another numerical quantity which should depend only on the surface itself, and not on how it is parameterized. If the surface is parameterized by the function over the domain in the -plane, then the surface area of is given by the integral"}
+{"text":"where denotes the cross product, and the absolute value denotes the length of a vector in Euclidean space. By Lagrange's identity for the cross product, the integral can be written"}
+{"text":"Let be a smooth manifold of dimension ; for instance a surface (in the case ) or hypersurface in the Cartesian space . At each point there is a vector space , called the tangent space, consisting of all tangent vectors to the manifold at the point . A metric tensor at is a function which takes as inputs a pair of tangent vectors and at , and produces as an output a real number (scalar), so that the following conditions are satisfied:"}
+{"text":"A metric tensor field on assigns to each point of a metric tensor in the tangent space at in a way that varies smoothly with . More precisely, given any open subset of manifold and any (smooth) vector fields and on , the real function"}
+{"text":"The components of the metric in any basis of vector fields, or frame, are given by"}
+{"text":"The functions form the entries of an symmetric matrix, . If"}
+{"text":"are two vectors at , then the value of the metric applied to and is determined by the coefficients () by bilinearity:"}
+{"text":"Denoting the matrix by and arranging the components of the vectors and into column vectors and ,"}
+{"text":"where T and T denote the transpose of the vectors and , respectively. Under a change of basis of the form"}
+{"text":"for some invertible matrix , the matrix of components of the metric changes by as well. That is,"}
+{"text":"or, in terms of the entries of this matrix,"}
+{"text":"For this reason, the system of quantities is said to transform covariantly with respect to changes in the frame ."}
+{"text":"A system of real-valued functions , giving a local coordinate system on an open set in , determines a basis of vector fields on"}
+{"text":"The metric has components relative to this frame given by"}
+{"text":"Relative to a new system of local coordinates, say"}
+{"text":"the metric tensor will determine a different matrix of coefficients,"}
+{"text":"This new system of functions is related to the original by means of the chain rule"}
+{"text":"Or, in terms of the matrices and ,"}
+{"text":"where denotes the Jacobian matrix of the coordinate change."}
+{"text":"Associated to any metric tensor is the quadratic form defined in each tangent space by"}
+{"text":"If is positive for all non-zero , then the metric is positive-definite at . If the metric is positive-definite at every , then is called a Riemannian metric. More generally, if the quadratic forms have constant signature independent of , then the signature of is this signature, and is called a pseudo-Riemannian metric. If is connected, then the signature of does not depend on ."}
+{"text":"By Sylvester's law of inertia, a basis of tangent vectors can be chosen locally so that the quadratic form diagonalizes in the following manner"}
+{"text":"for some between 1 and . Any two such expressions of (at the same point of ) will have the same number of positive signs. The signature of is the pair of integers , signifying that there are positive signs and negative signs in any such expression. Equivalently, the metric has signature if the matrix of the metric has positive and negative eigenvalues."}
+{"text":"Certain metric signatures which arise frequently in applications are:"}
+{"text":"Let be a basis of vector fields, and as above let be the matrix of coefficients"}
+{"text":"One can consider the inverse matrix , which is identified with the inverse metric (or \"conjugate\" or \"dual metric\"). The inverse metric satisfies a transformation law when the frame is changed by a matrix via"}
+{"text":"The inverse metric transforms \"contravariantly\", or with respect to the inverse of the change of basis matrix . Whereas the metric itself provides a way to measure the length of (or angle between) vector fields, the inverse metric supplies a means of measuring the length of (or angle between) covector fields; that is, fields of linear functionals."}
+{"text":"To see this, suppose that is a covector field. To wit, for each point , determines a function defined on tangent vectors at so that the following linearity condition holds for all tangent vectors and , and all real numbers and :"}
+{"text":"As varies, is assumed to be a smooth function in the sense that"}
+{"text":"is a smooth function of for any smooth vector field ."}
+{"text":"Any covector field has components in the basis of vector fields . These are determined by"}
+{"text":"Denote the row vector of these components by"}
+{"text":"Under a change of by a matrix , changes by the rule"}
+{"text":"That is, the row vector of components transforms as a \"covariant\" vector."}
+{"text":"For a pair and of covector fields, define the inverse metric applied to these two covectors by"}
+{"text":"The resulting definition, although it involves the choice of basis , does not actually depend on in an essential way. Indeed, changing basis to gives"}
+{"text":"So that the right-hand side of equation () is unaffected by changing the basis to any other basis whatsoever. Consequently, the equation may be assigned a meaning independently of the choice of basis. The entries of the matrix are denoted by , where the indices and have been raised to indicate the transformation law ()."}
+{"text":"In a basis of vector fields , any smooth tangent vector field can be written in the form"}
+{"text":"for some uniquely determined smooth functions . Upon changing the basis by a nonsingular matrix , the coefficients change in such a way that equation () remains true. That is,"}
+{"text":"Consequently, . In other words, the components of a vector transform \"contravariantly\" (that is, inversely or in the opposite way) under a change of basis by the nonsingular matrix . The contravariance of the components of is notationally designated by placing the indices of in the upper position."}
+{"text":"A frame also allows covectors to be expressed in terms of their components. For the basis of vector fields define the dual basis to be the linear functionals such that"}
+{"text":"That is, , the Kronecker delta. Let"}
+{"text":"Under a change of basis for a nonsingular matrix , transforms via"}
+{"text":"Any linear functional on tangent vectors can be expanded in terms of the dual basis"}
+{"text":"where denotes the row vector . The components transform when the basis is replaced by in such a way that equation () continues to hold. That is,"}
+{"text":"whence, because , it follows that . That is, the components transform \"covariantly\" (by the matrix rather than its inverse). The covariance of the components of is notationally designated by placing the indices of in the lower position."}
+{"text":"Now, the metric tensor gives a means to identify vectors and covectors as follows. Holding fixed, the function"}
+{"text":"of tangent vector defines a linear functional on the tangent space at . This operation takes a vector at a point and produces a covector . In a basis of vector fields , if a vector field has components , then the components of the covector field in the dual basis are given by the entries of the row vector"}
+{"text":"Under a change of basis , the right-hand side of this equation transforms via"}
+{"text":"so that : transforms covariantly. The operation of associating to the (contravariant) components of a vector field T the (covariant) components of the covector field , where"}
+{"text":"To \"raise the index\", one applies the same construction but with the inverse metric instead of the metric. If are the components of a covector in the dual basis , then the column vector"}
+{"text":"Consequently, the quantity does not depend on the choice of basis in an essential way, and thus defines a vector field on . The operation () associating to the (covariant) components of a covector the (contravariant) components of a vector given is called raising the index. In components, () is"}
+{"text":"Let be an open set in , and let be a continuously differentiable function from into the Euclidean space , where . The mapping is called an immersion if its differential is injective at every point of . The image of is called an immersed submanifold. More specifically, for , which means that the ambient Euclidean space is , the induced metric tensor is called the first fundamental form."}
+{"text":"Suppose that is an immersion onto the submanifold . The usual Euclidean dot product in is a metric which, when restricted to vectors tangent to , gives a means for taking the dot product of these tangent vectors. This is called the induced metric."}
+{"text":"Suppose that is a tangent vector at a point of , say"}
+{"text":"where are the standard coordinate vectors in . When is applied to , the vector goes over to the vector tangent to given by"}
+{"text":"(This is called the pushforward of along .) Given two such vectors, and , the induced metric is defined by"}
+{"text":"It follows from a straightforward calculation that the matrix of the induced metric in the basis of coordinate vector fields is given by"}
+{"text":"The notion of a metric can be defined intrinsically using the language of fiber bundles and vector bundles. In these terms, a metric tensor is a function"}
+{"text":"from the fiber product of the tangent bundle of with itself to such that the restriction of to each fiber is a nondegenerate bilinear mapping"}
+{"text":"The mapping () is required to be continuous, and often continuously differentiable, smooth, or real analytic, depending on the case of interest, and whether can support such a structure."}
+{"text":"Metric as a section of a bundle."}
+{"text":"By the universal property of the tensor product, any bilinear mapping () gives rise naturally to a section of the dual of the tensor product bundle of with itself"}
+{"text":"The section is defined on simple elements of by"}
+{"text":"and is defined on arbitrary elements of by extending linearly to linear combinations of simple elements. The original bilinear form is symmetric if and only if"}
+{"text":"Since is finite-dimensional, there is a natural isomorphism"}
+{"text":"so that is regarded also as a section of the bundle of the cotangent bundle with itself. Since is symmetric as a bilinear mapping, it follows that is a symmetric tensor."}
+{"text":"More generally, one may speak of a metric in a vector bundle. If is a vector bundle over a manifold , then a metric is a mapping"}
+{"text":"from the fiber product of to which is bilinear in each fiber:"}
+{"text":"Using duality as above, a metric is often identified with a section of the tensor product bundle . (See metric (vector bundle).)"}
+{"text":"The metric tensor gives a natural isomorphism from the tangent bundle to the cotangent bundle, sometimes called the musical isomorphism. This isomorphism is obtained by setting, for each tangent vector ,"}
+{"text":"the linear functional on which sends a tangent vector at to . That is, in terms of the pairing between and its dual space ,"}
+{"text":"for all tangent vectors and . The mapping is a linear transformation from to . It follows from the definition of non-degeneracy that the kernel of is reduced to zero, and so by the rank\u2013nullity theorem, is a linear isomorphism. Furthermore, is a symmetric linear transformation in the sense that"}
+{"text":"Conversely, any linear isomorphism defines a non-degenerate bilinear form on by means of"}
+{"text":"This bilinear form is symmetric if and only if is symmetric. There is thus a natural one-to-one correspondence between symmetric bilinear forms on and symmetric linear isomorphisms of to the dual ."}
+{"text":"As varies over , defines a section of the bundle of vector bundle isomorphisms of the tangent bundle to the cotangent bundle. This section has the same smoothness as : it is continuous, differentiable, smooth, or real-analytic according as . The mapping , which associates to every vector field on a covector field on gives an abstract formulation of \"lowering the index\" on a vector field. The inverse of is a mapping which, analogously, gives an abstract formulation of \"raising the index\" on a covector field."}
+{"text":"which is nonsingular and symmetric in the sense that"}
+{"text":"for all covectors , . Such a nonsingular symmetric mapping gives rise (by the tensor-hom adjunction) to a map"}
+{"text":"or by the double dual isomorphism to a section of the tensor product"}
+{"text":"Suppose that is a Riemannian metric on . In a local coordinate system , , the metric tensor appears as a matrix, denoted here by , whose entries are the components of the metric tensor relative to the coordinate vector fields."}
+{"text":"Let be a piecewise-differentiable parametric curve in , for . The arclength of the curve is defined by"}
+{"text":"In connection with this geometrical application, the quadratic differential form"}
+{"text":"is called the first fundamental form associated to the metric, while is the line element. When is pulled back to the image of a curve in , it represents the square of the differential with respect to arclength."}
+{"text":"For a pseudo-Riemannian metric, the length formula above is not always defined, because the term under the square root may become negative. We generally only define the length of a curve when the quantity under the square root is always of one sign or the other. In this case, define"}
+{"text":"Note that, while these formulas use coordinate expressions, they are in fact independent of the coordinates chosen; they depend only on the metric, and the curve along which the formula is integrated."}
+{"text":"Given a segment of a curve, another frequently defined quantity is the (kinetic) energy of the curve:"}
+{"text":"This usage comes from physics, specifically, classical mechanics, where the integral can be seen to directly correspond to the kinetic energy of a point particle moving on the surface of a manifold. Thus, for example, in Jacobi's formulation of Maupertuis' principle, the metric tensor can be seen to correspond to the mass tensor of a moving particle."}
+{"text":"In many cases, whenever a calculation calls for the length to be used, a similar calculation using the energy may be done as well. This often leads to simpler formulas by avoiding the need for the square-root. Thus, for example, the geodesic equations may be obtained by applying variational principles to either the length or the energy. In the latter case, the geodesic equations are seen to arise from the principle of least action: they describe the motion of a \"free particle\" (a particle feeling no forces) that is confined to move on the manifold, but otherwise moves freely, with constant momentum, within the manifold."}
+{"text":"In analogy with the case of surfaces, a metric tensor on an -dimensional paracompact manifold gives rise to a natural way to measure the -dimensional volume of subsets of the manifold. The resulting natural positive Borel measure allows one to develop a theory of integrating functions on the manifold by means of the associated Lebesgue integral."}
+{"text":"A measure can be defined, by the Riesz representation theorem, by giving a positive linear functional on the space of compactly supported continuous functions on . More precisely, if is a manifold with a (pseudo-)Riemannian metric tensor , then there is a unique positive Borel measure such that for any coordinate chart ,"}
+{"text":"for all supported in . Here is the determinant of the matrix formed by the components of the metric tensor in the coordinate chart. That is well-defined on functions supported in coordinate neighborhoods is justified by Jacobian change of variables. It extends to a unique positive linear functional on by means of a partition of unity."}
+{"text":"If is also oriented, then it is possible to define a natural volume form from the metric tensor. In a positively oriented coordinate system the volume form is represented as"}
+{"text":"where the are the coordinate differentials and denotes the exterior product in the algebra of differential forms. The volume form also gives a way to integrate functions on the manifold, and this geometric integral agrees with the integral obtained by the canonical Borel measure."}
+{"text":"The most familiar example is that of elementary Euclidean geometry: the two-dimensional Euclidean metric tensor. In the usual coordinates, we can write"}
+{"text":"The length of a curve reduces to the formula:"}
+{"text":"The Euclidean metric in some other common coordinate systems can be written as follows."}
+{"text":"In general, in a Cartesian coordinate system on a Euclidean space, the partial derivatives are orthonormal with respect to the Euclidean metric. Thus the metric tensor is the Kronecker delta \u03b4\"ij\" in this coordinate system. The metric tensor with respect to arbitrary (possibly curvilinear) coordinates is given by"}
+{"text":"The unit sphere in comes equipped with a natural metric induced from the ambient Euclidean metric, through the process explained in the induced metric section. In standard spherical coordinates , with the colatitude, the angle measured from the -axis, and the angle from the -axis in the -plane, the metric takes the form"}
+{"text":"This is usually written in the form"}
+{"text":"In flat Minkowski space (special relativity), with coordinates"}
+{"text":"the metric is, depending on choice of metric signature,"}
+{"text":"For a curve with\u2014for example\u2014constant time coordinate, the length formula with this metric reduces to the usual length formula. For a timelike curve, the length formula gives the proper time along the curve."}
+{"text":"In this case, the spacetime interval is written as"}
+{"text":"The Schwarzschild metric describes the spacetime around a spherically symmetric body, such as a planet, or a black hole. With coordinates"}
+{"text":"where (inside the matrix) is the gravitational constant and represents the total mass-energy content of the central object."}
+{"text":"Classical fluids are systems of particles which retain a definite volume, and are at sufficiently high temperatures (compared to their Fermi energy) that quantum effects can be neglected. A system of hard spheres, interacting only by hard collisions (e.g., billiards, marbles), is a model classical fluid. Such a system is well described by the Percus\u2013Yevik equation. Common liquids, e.g., liquid air, gasoline etc., are essentially mixtures of classical fluids. Electrolytes, molten salts, salts dissolved in water, are classical charged fluids. A classical fluid when cooled undergoes a freezing transition. On heating it undergoes an evaporation transition and becomes a classical gas that obeys Boltzmann statistics."}
+{"text":"A system of charged classical particles moving in a uniform positive neutralizing background is known as a one-component plasma (OCP). This is well described by the Hyper-netted chain equation (see CHNC)."}
+{"text":"An essentially very accurate way of determining the properties of classical fluids is provided by the method of molecular dynamics."}
+{"text":"An electron gas confined in a metal is \"not\" a classical fluid, whereas a very high-temperature plasma of electrons could behave as a classical fluid. Such non-classical Fermi systems, i.e., quantum fluids, can be studied using quantum Monte Carlo methods, Feynman path integral equation methods, and approximately via CHNC integral-equation methods."}
+{"text":"The Wigner quasiprobability distribution (also called the Wigner function or the Wigner\u2013Ville distribution after Eugene Wigner and ) is a quasiprobability distribution. It was introduced by Eugene Wigner in 1932 to study quantum corrections to classical statistical mechanics. The goal was to link the wavefunction that appears in Schr\u00f6dinger's equation to a probability distribution in phase space."}
+{"text":"It is a generating function for all spatial autocorrelation functions of a given quantum-mechanical wavefunction ."}
+{"text":"Thus, it maps on the quantum density matrix in the map between real phase-space functions and Hermitian operators introduced by Hermann Weyl in 1927, in a context related to representation theory in mathematics (cf. Weyl quantization in physics). In effect, it is the Wigner\u2013Weyl transform of the density matrix, so the realization of that operator in phase space. It was later rederived by Jean Ville in 1948 as a quadratic (in signal) representation of the local time-frequency energy of a signal, effectively a spectrogram."}
+{"text":"In 1949, Jos\u00e9 Enrique Moyal, who had derived it independently, recognized it as the quantum moment-generating functional, and thus as the basis of an elegant encoding of all quantum expectation values, and hence quantum mechanics, in phase space (cf. phase space formulation). It has applications in statistical mechanics, quantum chemistry, quantum optics, classical optics and signal analysis in diverse fields such as electrical engineering, seismology, time\u2013frequency analysis for music signals, spectrograms in biology and speech processing, and engine design."}
+{"text":"A classical particle has a definite position and momentum, and hence it is represented by a point in phase space. Given a collection (ensemble) of particles, the probability of finding a particle at a certain position in phase space is specified by a probability distribution, the Liouville density. This strict interpretation fails"}
+{"text":"for a quantum particle, due to the uncertainty principle. Instead, the above quasiprobability Wigner distribution plays an analogous role, but does not satisfy all the properties of a conventional probability distribution; and, conversely, satisfies boundedness properties unavailable to classical distributions."}
+{"text":"For instance, the Wigner distribution can and normally does take on negative values for states which have no classical model\u2014and is a convenient indicator of quantum mechanical interference. (See below for a characterization of pure states whose Wigner functions are non-negative.)"}
+{"text":"Smoothing the Wigner distribution through a filter of size larger than (e.g., convolving with a"}
+{"text":"phase-space Gaussian, a Weierstrass transform, to yield the Husimi representation, below), results in a positive-semidefinite function, i.e., it may be thought to have been coarsened to a semi-classical one."}
+{"text":"Regions of such negative value are provable (by convolving them with a small Gaussian) to be \"small\": they cannot extend to compact regions larger than a few , and hence disappear in the classical limit. They are shielded by the uncertainty principle, which does not allow precise location within phase-space regions smaller than , and thus renders such \"negative probabilities\" less paradoxical."}
+{"text":"The Wigner distribution of a pure state is defined as:"}
+{"text":"where is the wavefunction and and are position and momentum but could be any conjugate variable pair (e.g. real and imaginary parts of the electric field or frequency and time of a signal). Note that it may have support in even in regions where has no support in (\"beats\")."}
+{"text":"where is the normalized momentum-space wave function, proportional to the Fourier transform of ."}
+{"text":"In the general case, which includes mixed states, it is the Wigner transform of the density matrix,"}
+{"text":"where \u27e8\"x\"|\"\u03c8\"\u27e9 = . This Wigner transformation (or map) is the inverse of the Weyl transform, which maps phase-space functions to Hilbert-space operators, in Weyl quantization."}
+{"text":"Thus, the Wigner function is the cornerstone of quantum mechanics in phase space."}
+{"text":"how the Wigner function provides the integration measure (analogous"}
+{"text":"to a probability density function) in phase space, to yield expectation values from phase-space c-number functions uniquely associated to suitably ordered operators through Weyl's transform (cf. Wigner\u2013Weyl transform and property 7 below), in a manner evocative of classical probability theory."}
+{"text":"Specifically, an operator's expectation value is a \"phase-space average\" of the Wigner transform of that operator,"}
+{"text":"1. \"W\"(\"x\",\u00a0\"p\") is a real valued function."}
+{"text":"2. The \"x\" and \"p\" probability distributions are given by the marginals:"}
+{"text":"3. \"W\"(\"x\", \"p\") has the following reflection symmetries:"}
+{"text":"5. The equation of motion for each point in the phase space is classical in the absence of forces:"}
+{"text":"7. Operator expectation values (averages) are calculated as phase-space averages of the respective Wigner transforms:"}
+{"text":"8. In order that \"W\"(\"x\", \"p\") represent physical (positive) density matrices:"}
+{"text":"9. By virtue of the Cauchy\u2013Schwarz inequality, for a pure state, it is constrained to be bounded,"}
+{"text":"10. The Wigner transformation is simply the Fourier transform of the antidiagonals of the density matrix, when that matrix is expressed in a position basis."}
+{"text":"Let formula_19 be the formula_20-th Fock state of a quantum harmonic oscillator. Groenewold (1946) discovered its associated Wigner function, in dimensionless variables, isformula_21"}
+{"text":"where formula_22 denotes the formula_20-th Laguerre polynomial."}
+{"text":"This may follow from the expression for the static eigenstate wavefunctions, formula_24,"}
+{"text":"where formula_25 is the formula_20-th Hermite polynomial. From the above definition of the Wigner function,"}
+{"text":"The expression then follows from the integral relation between Hermite and Laguerre polynomials."}
+{"text":"The Wigner transformation is a general invertible transformation of an operator on a Hilbert space to a function \"g(x,p)\" on phase space, and is given by"}
+{"text":"Hermitian operators map to real functions. The inverse of this transformation,"}
+{"text":"so from phase space to Hilbert space, is called the Weyl transformation,"}
+{"text":"(not to be confused with the distinct Weyl transformation in differential geometry)."}
+{"text":"The Wigner function \"W\"(\"x,p\") discussed here is thus seen to be the Wigner transform of the density matrix operator \"\u03c1\u0302\". Thus, the trace of an operator with the density matrix Wigner-transforms to the equivalent phase-space integral overlap of \"g\"(\"x\",\u00a0\"p\") with the Wigner function."}
+{"text":"The Wigner transform of the von Neumann evolution equation of the density matrix in the Schr\u00f6dinger picture is"}
+{"text":"where H(x,p) is Hamiltonian and { {\u2022, \u2022} } is the Moyal bracket. In the classical limit \"\u0127\" \u2192 0, the Moyal bracket reduces to the Poisson bracket, while this evolution equation reduces to the Liouville equation of classical statistical mechanics."}
+{"text":"Strictly formally, in terms of quantum characteristics, the solution of"}
+{"text":"where formula_32 and formula_33 are solutions of"}
+{"text":"so-called quantum Hamilton's equations, subject to initial conditions"}
+{"text":"composition is understood for all argument functions."}
+{"text":"Since, however, formula_36-composition is thoroughly nonlocal (the \"quantum probability fluid\" diffuses, as observed by Moyal), vestiges of local trajectories"}
+{"text":"are normally barely discernible in the evolution of the Wigner distribution function."}
+{"text":"In the integral representation of -products, successive operations by them have been adapted to a phase-space path-integral, to solve this evolution equation for the Wigner function (see also )."}
+{"text":"This non-trajectoral feature of Moyal time evolution is illustrated in the gallery below, for Hamiltonians more complex than the harmonic oscillator."}
+{"text":"In the special case of the quantum harmonic oscillator, however, the evolution is simple and appears identical to the classical motion: a rigid rotation in phase space with a frequency given by the oscillator frequency. This is illustrated in the gallery below. This same time evolution occurs with quantum states of light modes, which are harmonic oscillators."}
+{"text":"The Wigner function allows one to study the classical limit, offering a comparison of the classical and quantum dynamics in phase space."}
+{"text":"It has been suggested that the Wigner function approach can be viewed as a quantum analogy to the operatorial formulation of classical mechanics introduced in 1932 by Bernard Koopman and John von Neumann: the time evolution of the Wigner function approaches, in the limit \"\u0127\" \u2192 0, the time evolution of the Koopman\u2013von Neumann wavefunction of a classical particle."}
+{"text":"As already noted, the Wigner function of quantum state typically takes some negative values. Indeed, for a pure state in one variable, if formula_38 for all formula_39 and formula_40, then the wave function must have the form"}
+{"text":"for some complex numbers formula_42 with formula_43 (Hudson's theorem). Note that formula_44 is allowed to be complex, so that formula_45 is not necessarily a Gaussian wave packet in the usual sense. Thus, pure states with non-negative Wigner functions are not necessarily minimum uncertainty states in the sense of the Heisenberg uncertainty formula; rather, they give equality in the Schr\u00f6dinger uncertainty formula, which includes an anticommutator term in addition to the commutator term. (With careful definition of the respective variances, all pure state Wigner functions lead to Heisenberg's inequality all the same.)"}
+{"text":"In higher dimensions, the characterization of pure states with non-negative Wigner functions is similar; the wave function must have the form"}
+{"text":"where formula_47 is a symmetric complex matrix whose real part is positive definite, formula_48 is a complex vector, and is a complex number. The Wigner function of any such state is a Gaussian distribution on phase space."}
+{"text":"The cited paper of Soto and Claverie gives an elegant proof of this characterization, using the Segal\u2013Bargmann transform. The reasoning is as follows. The Husimi Q function of formula_45 may be computed as the squared magnitude of the Segal\u2013Bargmann transform of formula_45, multiplied by a Gaussian. Meanwhile, the Husimi Q function is the convolution of the Wigner function with a Gaussian. If the Wigner function of formula_45 is non-negative everywhere on phase space, then the Husimi Q function will be strictly positive everywhere on phase space. Thus, the Segal\u2013Bargmann transform formula_52 of formula_45 will be nowhere zero. Thus, by a standard result from complex analysis, we have"}
+{"text":"for some holomorphic function formula_55. But in order for formula_56 to belong to the Segal\u2013Bargmann space\u2014that is, for formula_56 to be square-integrable with respect to a Gaussian measure\u2014formula_55 must have at most quadratic growth at infinity. From this, elementary complex analysis can be used to show that formula_55 must actually be a quadratic polynomial. Thus, we obtain an explicit form of the Segal\u2013Bargmann transform of any pure state whose Wigner function is non-negative. We can then invert the Segal\u2013Bargmann transform to obtain the claimed form of the position wave function."}
+{"text":"There does not appear to be any simple characterization of mixed states with non-negative Wigner functions."}
+{"text":"The Wigner function in relation to other interpretations of quantum mechanics."}
+{"text":"It has been shown that the Wigner quasiprobability distribution function can be regarded as an -deformation of another phase space distribution function that describes an ensemble of de Broglie\u2013Bohm causal trajectories. Basil Hiley has shown that the quasi-probability distribution may be understood as the density matrix re-expressed in terms of a mean position and momentum of a \"cell\" in phase space, and the de Broglie\u2013Bohm interpretation allows one to describe the dynamics of the centers of such \"cells\"."}
+{"text":"There is a close connection between the description of quantum states in terms of the Wigner function and a method of quantum states reconstruction in terms of mutually unbiased bases."}
+{"text":"The Wigner distribution was the first quasiprobability distribution to be formulated, but many more followed, formally equivalent and transformable to and from it (viz. Transformation between distributions in time\u2013frequency analysis). As in the case of coordinate systems, on account of varying properties, several such have with various advantages for specific applications:"}
+{"text":"Nevertheless, in some sense, the Wigner distribution holds a privileged position among all these distributions, since it is the \"only one\" whose requisite star product drops out (integrates out by parts to effective unity) in the evaluation of expectation values, as illustrated above, and so \"can\" be visualized as a quasiprobability measure analogous to the classical ones."}
+{"text":"As indicated, the formula for the Wigner function was independently derived several times in different contexts. In fact, apparently, Wigner was unaware that even within the context of quantum theory, it had been introduced previously by Heisenberg and Dirac, albeit purely formally: these two missed its significance, and that of its negative values, as they merely considered it as an approximation to the full quantum description of a system such as the atom. (Incidentally, Dirac would later become Wigner's brother-in-law, marrying his sister Manci.) Symmetrically, in most of his legendary 18-month correspondence with Moyal in the mid-1940s, Dirac was unaware that Moyal's quantum-moment generating function was effectively the Wigner function, and it was Moyal who finally brought it to his attention."}
+{"text":"In physics, two wave sources are coherent if their frequency and waveform are identical. Coherence is an ideal property of waves that enables stationary (i.e. temporally and spatially constant) interference. It contains several distinct concepts, which are limiting cases that never quite occur in reality but allow an understanding of the physics of waves, and has become a very important concept in quantum physics. More generally, coherence describes all properties of the correlation between physical quantities of a single wave, or between several waves or wave packets."}
+{"text":"Spatial coherence describes the correlation (or predictable relationship) between waves at different points in space, either lateral or longitudinal. Temporal coherence describes the correlation between waves observed at different moments in time. Both are observed in the Michelson\u2013Morley experiment and Young's interference experiment. Once the fringes are obtained in the Michelson interferometer, when one of the mirrors is moved away gradually from the beam-splitter, the time for the beam to travel increases and the fringes become dull and finally disappear, showing temporal coherence. Similarly, in a double-slit experiment, if the space between the two slits is increased, the coherence dies gradually and finally the fringes disappear, showing spatial coherence. In both cases, the fringe amplitude slowly disappears, as the path difference increases past the coherence length."}
+{"text":"Coherence was originally conceived in connection with Thomas Young's double-slit experiment in optics but is now used in any field that involves waves, such as acoustics, electrical engineering, neuroscience, and quantum mechanics. Coherence describes the statistical similarity of a field (electromagnetic field, quantum wave packet etc.) at two points in space or time. The property of coherence is the basis for commercial applications such as holography, the Sagnac gyroscope, radio antenna arrays, optical coherence tomography and telescope interferometers (astronomical optical interferometers and radio telescopes)."}
+{"text":"A precise definition is given in the article on degree of coherence."}
+{"text":"The coherence function between two signals formula_1 and formula_2 is defined as"}
+{"text":"The coherence varies in the interval formula_11. If formula_12 it means that the signals are perfectly correlated or linearly related and if formula_13 they are totally uncorrelated. If a linear system is being measured, formula_1 being the input and formula_2 the output, the coherence function will be unitary all over the spectrum. However, if non-linearities are present in the system the coherence will vary in the limit given above."}
+{"text":"These states are unified by the fact that their behavior is described by a wave equation or some generalization thereof."}
+{"text":"In most of these systems, one can measure the wave directly. Consequently, its correlation with another wave can simply be calculated. However, in optics one cannot measure the electric field directly as it oscillates much faster than any detector's time resolution. Instead, one measures the intensity of the light. Most of the concepts involving coherence which will be introduced below were developed in the field of optics and then used in other fields. Therefore, many of the standard measurements of coherence are indirect measurements, even in fields where the wave can be measured directly."}
+{"text":"Temporal coherence is the measure of the average correlation between the value of a wave and itself delayed by \u03c4, at any pair of times. Temporal coherence tells us how monochromatic a source is. In other words, it characterizes how well a wave can interfere with itself at a different time. The delay over which the phase or amplitude wanders by a significant amount (and hence the correlation decreases by significant amount) is defined as the coherence time \"\u03c4c\". At a delay of \u03c4=0 the degree of coherence is perfect, whereas it drops significantly as the delay passes \"\u03c4=\u03c4c\". The coherence length \"Lc\" is defined as the distance the wave travels in time \u03c4c."}
+{"text":"One should be careful not to confuse the coherence time with the time duration of the signal, nor the coherence length with the coherence area (see below)."}
+{"text":"The relationship between coherence time and bandwidth."}
+{"text":"It can be shown that the larger the range of frequencies \u0394f a wave contains, the faster the wave decorrelates (and hence the smaller \u03c4c is). Thus there is a tradeoff:"}
+{"text":"Formally, this follows from the convolution theorem in mathematics, which relates the Fourier transform of the power spectrum (the intensity of each frequency) to its autocorrelation."}
+{"text":"We consider four examples of temporal coherence."}
+{"text":"The high degree of monochromaticity of lasers implies long coherence lengths (up to hundreds of meters). For example, a stabilized and monomode helium\u2013neon laser can easily produce light with coherence lengths of 300 m. Not all lasers have a high degree of monochromaticity, however (e.g. for a mode-locked Ti-sapphire laser, \u0394\u03bb \u2248 2\u00a0nm - 70\u00a0nm). LEDs are characterized by \u0394\u03bb \u2248 50\u00a0nm, and tungsten filament lights exhibit \u0394\u03bb \u2248 600\u00a0nm, so these sources have shorter coherence times than the most monochromatic lasers."}
+{"text":"Holography requires light with a long coherence time. In contrast, optical coherence tomography, in its classical version, uses light with a short coherence time."}
+{"text":"Holography requires temporally and spatially coherent light. Its inventor, Dennis Gabor, produced successful holograms more than ten years before lasers were invented. To produce coherent light he passed the monochromatic light from an emission line of a mercury-vapor lamp through a pinhole spatial filter."}
+{"text":"In February 2011 it was reported that helium atoms, cooled to near absolute zero \/ Bose\u2013Einstein condensate state, can be made to flow and behave as a coherent beam as occurs in a laser."}
+{"text":"Waves of different frequencies (in light these are different colours) can interfere to form a pulse if they have a fixed relative phase-relationship (see Fourier transform). Conversely, if waves of different frequencies are not coherent, then, when combined, they create a wave that is continuous in time (e.g. white light or white noise). The temporal duration of the pulse formula_19 is limited by the spectral bandwidth of the light formula_20 according to:"}
+{"text":"which follows from the properties of the Fourier transform and results in K\u00fcpfm\u00fcller's uncertainty principle (for quantum particles it also results in the Heisenberg uncertainty principle)."}
+{"text":"If the phase depends linearly on the frequency (i.e. formula_22) then the pulse will have the minimum time duration for its bandwidth (a \"transform-limited\" pulse), otherwise it is chirped (see dispersion)."}
+{"text":"Measurement of the spectral coherence of light requires a nonlinear optical interferometer, such as an intensity optical correlator, frequency-resolved optical gating (FROG), or spectral phase interferometry for direct electric-field reconstruction (SPIDER)."}
+{"text":"Light also has a polarization, which is the direction in which the electric field oscillates. Unpolarized light is composed of incoherent light waves with random polarization angles. The electric field of the unpolarized light wanders in every direction and changes in phase over the coherence time of the two light waves. An absorbing polarizer rotated to any angle will always transmit half the incident intensity when averaged over time."}
+{"text":"If the electric field wanders by a smaller amount the light will be partially polarized so that at some angle, the polarizer will transmit more than half the intensity. If a wave is combined with an orthogonally polarized copy of itself delayed by less than the coherence time, partially polarized light is created."}
+{"text":"The polarization of a light beam is represented by a vector in the Poincar\u00e9 sphere. For polarized light the end of the vector lies on the surface of the sphere, whereas the vector has zero length for unpolarized light. The vector for partially polarized light lies within the sphere"}
+{"text":"Coherent superpositions of \"optical wave fields\" include holography. Holographic objects are used frequently in daily life in television and credit card security."}
+{"text":"Further applications concern the coherent superposition of \"non-optical wave fields\". In quantum mechanics for example one considers a probability field, which is related to the wave function formula_23 (interpretation: density of the probability amplitude). Here the applications concern, among others, the future technologies of quantum computing and the already available technology of quantum cryptography. Additionally the problems of the following subchapter are treated."}
+{"text":"Coherence is used to check the quality of the transfer functions (FRFs) being measured. Low coherence can be caused by poor signal to noise ratio, and\/or inadequate frequency resolution."}
+{"text":"According to quantum mechanics, all objects can have wave-like properties (see de Broglie waves). For instance, in Young's double-slit experiment electrons can be used in the place of light waves. Each electron's wave-function goes through both slits, and hence has two separate split-beams that contribute to the intensity pattern on a screen. According to standard wave theory these two contributions give rise to an intensity pattern of bright bands due to constructive interference, interlaced with dark bands due to destructive interference, on a downstream screen. This ability to interfere and diffract is related to coherence (classical or quantum) of the waves produced at both slits. The association of an electron with a wave is unique to quantum theory."}
+{"text":"When the incident beam is represented by a quantum pure state, the split beams downstream of the two slits are represented as a superposition of the pure states representing each split beam. The quantum description of imperfectly coherent paths is called a mixed state. A perfectly coherent state has a density matrix (also called the \"statistical operator\") that is a projection onto the pure coherent state and is equivalent to a wave function, while a mixed state is described by a classical probability distribution for the pure states that make up the mixture."}
+{"text":"Macroscopic scale quantum coherence leads to novel phenomena, the so-called macroscopic quantum phenomena. For instance, the laser, superconductivity and superfluidity are examples of highly coherent quantum systems whose effects are evident at the macroscopic scale. The macroscopic quantum coherence (off-diagonal long-range order, ODLRO) for superfluidity, and laser light, is related to first-order (1-body) coherence\/ODLRO, while superconductivity is related to second-order coherence\/ODLRO. (For fermions, such as electrons, only even orders of coherence\/ODLRO are possible.) For bosons, a Bose\u2013Einstein condensate is an example of a system exhibiting macroscopic quantum coherence through a multiple occupied single-particle state."}
+{"text":"The classical electromagnetic field exhibits macroscopic quantum coherence. The most obvious example is the carrier signal for radio and TV. They satisfy Glauber's quantum description of coherence."}
+{"text":"Recently M. B. Plenio and co-workers constructed an operational formulation of quantum coherence as a resource theory. They introduced coherence monotones analogous to the entanglement monotones. Quantum coherence has been shown to be equivalent to quantum entanglement in the sense that coherence can be faithfully described as entanglement, and conversely that each entanglement measure corresponds to a coherence measure."}
+{"text":"In analytic geometry, spatial transformations in the 3-dimensional Euclidean space formula_1 are distinguished into active or alibi transformations, and passive or alias transformations. An active transformation is a transformation which actually changes the physical position (alibi, elsewhere) of a point, or rigid body, which can be defined in the absence of a coordinate system; whereas a passive transformation is merely a change in the coordinate system in which the object is described (alias, other name) (change of coordinate map, or change of basis). By \"transformation\", mathematicians usually refer to active transformations, while physicists and engineers could mean either. Both types of transformation can be represented by a combination of a translation and a linear transformation."}
+{"text":"Put differently, a \"passive\" transformation refers to description of the \"same\" object in two different coordinate systems."}
+{"text":"On the other hand, an \"active transformation\" is a transformation of one or more objects with respect to the same coordinate system. For instance, active transformations are useful to describe successive positions of a rigid body. On the other hand, passive transformations may be useful in human motion analysis to observe the motion of the tibia relative to the femur, that is, its motion relative to a (\"local\") coordinate system which moves together with the femur, rather than a (\"global\") coordinate system which is fixed to the floor."}
+{"text":"As an example, let the vector formula_2, be a vector in the plane. A rotation of the vector through an angle \u03b8 in counterclockwise direction is given by the rotation matrix:"}
+{"text":"which can be viewed either as an \"active transformation\" or a \"passive transformation\" (where the above matrix will be inverted), as described below."}
+{"text":"Spatial transformations in the Euclidean space formula_1."}
+{"text":"In general a spatial transformation formula_5 may consist of a translation and a linear transformation. In the following, the translation will be omitted, and the linear transformation will be represented by a 3\u00d73-matrix formula_6."}
+{"text":"As an active transformation, formula_6 transforms the initial vector formula_8 into a new vector formula_9."}
+{"text":"If one views formula_10 as a new basis, then the coordinates of the new vector formula_11 in the new basis are the same as those of formula_12 in the original basis. Note that active transformations make sense even as a linear transformation into a different vector space. It makes sense to write the new vector in the unprimed basis (as above) only when the transformation is from the space into itself."}
+{"text":"On the other hand, when one views formula_6 as a passive transformation, the initial vector formula_8 is left unchanged, while the coordinate system and its basis vectors are transformed in the opposite direction, that is, with the inverse transformation formula_15."}
+{"text":"This gives a new coordinate system XYZ with basis vectors:"}
+{"text":"The new coordinates formula_17 of formula_18 with respect to the new coordinate system XYZ are given by:"}
+{"text":"From this equation one sees that the new coordinates are given by"}
+{"text":"As a passive transformation formula_6 transforms the old coordinates into the new ones."}
+{"text":"Note the equivalence between the two kinds of transformations: the coordinates of the new point in the active transformation and the new coordinates of the point in the passive transformation are the same, namely"}
+{"text":"A point particle (ideal particle or point-like particle, often spelled pointlike particle) is an idealization of particles heavily used in physics. Its defining feature is that it lacks spatial extension; being dimensionless, it does not take up space. A point particle is an appropriate representation of any object whenever its size, shape, and structure are irrelevant in a given context. For example, from far enough away, any finite-size object will look and behave as a point-like object. A point particle can also be referred in the case of a moving body in terms of physics."}
+{"text":"In the theory of gravity, physicists often discuss a ', meaning a point particle with a nonzero mass and no other properties or structure. Likewise, in electromagnetism, physicists discuss a ', a point particle with a nonzero charge."}
+{"text":"Sometimes, due to specific combinations of properties, extended objects behave as point-like even in their immediate vicinity. For example, spherical objects interacting in 3-dimensional space whose interactions are described by the inverse square law behave in such a way as if all their matter were concentrated in their centers of mass. In Newtonian gravitation and classical electromagnetism, for example, the respective fields outside a spherical object are identical to those of a point particle of equal charge\/mass located at the center of the sphere."}
+{"text":"In quantum mechanics, the concept of a point particle is complicated by the Heisenberg uncertainty principle, because even an elementary particle, with no internal structure, occupies a nonzero volume. For example, the atomic orbit of an electron in the hydrogen atom occupies a volume of ~10\u221230 m3. There is nevertheless a distinction between elementary particles such as electrons or quarks, which have no known internal structure, versus composite particles such as protons, which do have internal structure: A proton is made of three quarks."}
+{"text":"Elementary particles are sometimes called \"point particles\", but this is in a different sense than discussed above."}
+{"text":"When a point particle has an additive property, such as mass or charge, concentrated at a single point in space, this can be represented by a Dirac delta function."}
+{"text":"Point mass (pointlike mass) is the concept, for example in classical physics, of a physical object (typically matter) that has nonzero mass, and yet explicitly and specifically is (or is being thought of or modeled as) infinitesimal (infinitely small) in its volume or linear dimensions."}
+{"text":"A common use for point mass lies in the analysis of the gravitational fields. When analyzing the gravitational forces in a system, it becomes impossible to account for every unit of mass individually. However, a spherically symmetric body affects external objects gravitationally as if all of its mass were concentrated at its center."}
+{"text":"A point mass in probability and statistics does not refer to mass in the sense of physics, but rather refers to a finite nonzero probability that is concentrated at a point in the probability mass distribution, where there is a discontinuous segment in a probability density function. To calculate such a point mass, an integration is carried out over the entire range of the random variable, on the probability density of the continuous part. After equating this integral to 1, the point mass can be found by further calculation."}
+{"text":"A point charge is an idealized model of a particle which has an electric charge. A point charge is an electric charge at a mathematical point with no dimensions."}
+{"text":"The fundamental equation of electrostatics is Coulomb's law, which describes the electric force between two point charges. The electric field associated with a classical point charge increases to infinity as the distance from the point charge decreases towards zero making energy (thus mass) of point charge infinite."}
+{"text":"Earnshaw's theorem states that a collection of point charges cannot be maintained in an equilibrium configuration solely by the electrostatic interaction of the charges."}
+{"text":"In quantum mechanics, there is a distinction between an elementary particle (also called \"point particle\") and a composite particle. An elementary particle, such as an electron, quark, or photon, is a particle with no internal structure. Whereas a composite particle, such as a proton or neutron, has an internal structure (see figure)."}
+{"text":"However, neither elementary nor composite particles are spatially localized, because of the Heisenberg uncertainty principle. The particle wavepacket always occupies a nonzero volume. For example, see atomic orbital: The electron is an elementary particle, but its quantum states form three-dimensional patterns."}
+{"text":"Nevertheless, there is good reason that an elementary particle is often called a point particle. Even if an elementary particle has a delocalized wavepacket, the wavepacket can be represented as a quantum superposition of quantum states wherein the particle is exactly localized. Moreover, the \"interactions\" of the particle can be represented as a superposition of interactions of individual states which are localized. This is not true for a composite particle, which can never be represented as a superposition of exactly-localized quantum states. It is in this sense that physicists can discuss the intrinsic \"size\" of a particle: The size of its internal structure, not the size of its wavepacket. The \"size\" of an elementary particle, in this sense, is exactly zero."}
+{"text":"For example, for the electron, experimental evidence shows that the size of an electron is less than 10\u221218 m. This is consistent with the expected value of exactly zero. (This should not be confused with the classical electron radius, which, despite the name, is unrelated to the actual size of an electron.)"}
+{"text":"In a relativistic theory of physics, a Lorentz scalar is an expression, formed from items of the theory, which evaluates to a scalar, invariant under any Lorentz transformation. A Lorentz scalar may be generated from e.g., the scalar product of vectors, or from contracting tensors of the theory. While the components of vectors and tensors are in general altered under Lorentz transformations, Lorentz scalars remain unchanged."}
+{"text":"A Lorentz scalar is not always immediately seen to be an invariant scalar in the mathematical sense, but the resulting scalar value is invariant under any basis transformation applied to the vector space, on which the considered theory is based. A simple Lorentz scalar in Minkowski spacetime is the \"spacetime distance\" (\"length\" of their difference) of two fixed events in spacetime. While the \"position\"-4-vectors of the events change between different inertial frames, their spacetime distance remains invariant under the corresponding Lorentz transformation. Other examples of Lorentz scalars are the \"length\" of 4-velocities (see below), or the Ricci curvature in a point in spacetime from General relativity, which is a contraction of the Riemann curvature tensor there."}
+{"text":"In special relativity the location of a particle in 4-dimensional spacetime is given by"}
+{"text":"where formula_2 is the position in 3-dimensional space of the particle, formula_3 is the velocity in 3-dimensional space and formula_4 is the speed of light."}
+{"text":"The \"length\" of the vector is a Lorentz scalar and is given by"}
+{"text":"where formula_6 is the proper time as measured by a clock in the rest frame of the particle and the Minkowski metric is given by"}
+{"text":"Often the alternate signature of the Minkowski metric is used in which the signs of the ones are reversed."}
+{"text":"In the Minkowski metric the space-like interval formula_9 is defined as"}
+{"text":"We use the space-like Minkowski metric in the rest of this article."}
+{"text":"The velocity in spacetime is defined as"}
+{"text":"The magnitude of the 4-velocity is a Lorentz scalar,"}
+{"text":"The inner product of acceleration and velocity."}
+{"text":"The 4-acceleration is always perpendicular to the 4-velocity"}
+{"text":"Therefore, we can regard acceleration in spacetime as simply a rotation of the 4-velocity. The inner product of the acceleration and the velocity is a Lorentz scalar and is zero. This rotation is simply an expression of energy conservation:"}
+{"text":"where formula_17 is the energy of a particle and formula_18 is the 3-force on the particle."}
+{"text":"Energy, rest mass, 3-momentum, and 3-speed from 4-momentum."}
+{"text":"where formula_20 is the particle rest mass, formula_21 is the momentum in 3-space, and"}
+{"text":"Measurement of the energy of a particle."}
+{"text":"Consider a second particle with 4-velocity formula_23 and a 3-velocity formula_24. In the rest frame of the second particle the inner product of formula_23 with formula_26 is proportional to the energy of the first particle"}
+{"text":"where the subscript 1 indicates the first particle."}
+{"text":"Since the relationship is true in the rest frame of the second particle, it is true in any reference frame. formula_28, the energy of the first particle in the frame of the second particle, is a Lorentz scalar. Therefore,"}
+{"text":"in any inertial reference frame, where formula_30 is still the energy of the first particle in the frame of the second particle ."}
+{"text":"Measurement of the rest mass of the particle."}
+{"text":"In the rest frame of the particle the inner product of the momentum is"}
+{"text":"Therefore, the rest mass (m) is a Lorentz scalar. The relationship remains true independent of the frame in which the inner product is calculated. In many cases the rest mass is written as formula_32 to avoid confusion with the relativistic mass, which is formula_33"}
+{"text":"Measurement of the 3-momentum of the particle."}
+{"text":"The square of the magnitude of the 3-momentum of the particle as measured in the frame of the second particle is a Lorentz scalar."}
+{"text":"Measurement of the 3-speed of the particle."}
+{"text":"The 3-speed, in the frame of the second particle, can be constructed from two Lorentz scalars"}
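The quantities above (rest mass, 3-momentum magnitude, and 3-speed built from the 4-momentum) can be checked numerically. A minimal sketch, assuming units with c = 1 and metric signature (+, -, -, -); the particular 4-momentum and boost velocity are invented for illustration:

```python
import numpy as np

# Illustrative sketch (values invented): extracting rest mass, 3-momentum
# magnitude, and 3-speed from a 4-momentum p = (E, px, py, pz), using
# units with c = 1 and metric signature (+, -, -, -).

def rest_mass(p4):
    E, px, py, pz = p4
    return np.sqrt(E**2 - (px**2 + py**2 + pz**2))  # m^2 = E^2 - |p|^2

def speed(p4):
    E = p4[0]
    p3 = np.linalg.norm(p4[1:])
    return p3 / E  # |v| = |p| / E when c = 1

def boost_x(p4, beta):
    """Lorentz boost along x with velocity beta."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    E, px, py, pz = p4
    return np.array([gamma * (E - beta * px),
                     gamma * (px - beta * E), py, pz])

p4 = np.array([5.0, 3.0, 0.0, 0.0])   # E = 5, |p| = 3  ->  m = 4
boosted = boost_x(p4, 0.6)

# The rest mass is a Lorentz scalar: the same in both frames.
print(rest_mass(p4), rest_mass(boosted), speed(p4))
```

The boost changes E and the 3-momentum components individually, but the combination E² − |p|² is unchanged, which is exactly the frame independence claimed in the text.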
+{"text":"Scalars may also be constructed from the tensors and vectors, from the contraction of tensors (such as formula_36), or combinations of contractions of tensors and vectors (such as formula_37)."}
+{"text":"In electromagnetism, the Lorenz gauge condition or Lorenz gauge, named after Ludvig Lorenz, is a partial gauge fixing of the electromagnetic vector potential by requiring formula_1 The name is frequently confused with that of Hendrik Lorentz, who has given his name to many concepts in this field. The condition is Lorentz invariant. The condition does not completely determine the gauge: one can still make a gauge transformation formula_2 where formula_3 is a harmonic scalar function (that is, a scalar function satisfying formula_4 the equation of a massless scalar field). The Lorenz condition is used to eliminate the redundant spin-0 component in the representation theory of the Lorentz group. It is equally used for massive spin-1 fields where the concept of gauge transformations does not apply at all."}
+{"text":"In electromagnetism, the Lorenz condition is generally used in calculations of time-dependent electromagnetic fields through retarded potentials. The condition is"}
+{"text":"where formula_6 is the four-potential, the comma denotes a partial differentiation and the repeated index indicates that the Einstein summation convention is being used. The condition has the advantage of being Lorentz invariant. It still leaves substantial gauge degrees of freedom."}
+{"text":"In ordinary vector notation and SI units, the condition is"}
+{"text":"where formula_8 is the magnetic vector potential and formula_9 is the electric potential; see also gauge fixing."}
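As a numerical illustration (not from the article), one can verify the vector-notation Lorenz condition div A + (1/c^2) dphi/dt = 0 for a plane-wave potential. The amplitude A0 and wave vector k below are arbitrary; the scalar-potential amplitude is fixed by phi0 = c^2 (k . A0) / omega so that the condition holds:

```python
import numpy as np

# Hedged sketch: finite-difference check of the Lorenz condition
#   div A + (1/c^2) dphi/dt = 0
# for a plane wave A = A0 cos(k.x - omega t), phi = phi0 cos(k.x - omega t),
# with phi0 chosen so the condition is satisfied. All values are invented.

c = 1.0
A0 = np.array([0.3, -0.2, 0.5])
k = np.array([1.0, 2.0, -1.0])
omega = c * np.linalg.norm(k)            # vacuum dispersion relation
phi0 = c**2 * np.dot(k, A0) / omega      # enforces the Lorenz condition

def A(x, t):
    return A0 * np.cos(np.dot(k, x) - omega * t)

def phi(x, t):
    return phi0 * np.cos(np.dot(k, x) - omega * t)

def lorenz_residual(x, t, h=1e-6):
    # central differences for div A and dphi/dt
    divA = sum((A(x + h * e, t)[i] - A(x - h * e, t)[i]) / (2 * h)
               for i, e in enumerate(np.eye(3)))
    dphi_dt = (phi(x, t + h) - phi(x, t - h)) / (2 * h)
    return divA + dphi_dt / c**2

print(lorenz_residual(np.array([0.4, -1.2, 2.0]), 0.7))  # ~ 0
```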
+{"text":"A quick justification of the Lorenz gauge can be found using Maxwell's equations and the relation between the magnetic vector potential and the magnetic field:"}
+{"text":"Since this curl is zero, there exists a scalar function formula_13 such that"}
+{"text":"This gives the well known equation for the electric field,"}
+{"text":"This result can be plugged into the Amp\u00e8re\u2013Maxwell equation,"}
+{"text":"To have Lorentz invariance, the time derivatives and spatial derivatives must be treated equally (i.e. of the same order). Therefore, it is convenient to choose the Lorenz gauge condition, which gives the result"}
+{"text":"A similar procedure with a focus on the electric scalar potential and making the same gauge choice will yield"}
+{"text":"These are simpler and more symmetric forms of the inhomogeneous Maxwell's equations. Note that the Coulomb gauge also fixes the problem of Lorentz invariance, but leaves a coupling term with first-order derivatives."}
+{"text":"is the vacuum velocity of light, and formula_21 is the d'Alembertian operator. These equations are not only valid under vacuum conditions, but also in polarized media, if formula_22 and formula_23 are source density and circulation density, respectively, of the electromagnetic induction fields formula_24 and formula_25 calculated as usual from formula_13 and formula_27 by the equations"}
+{"text":"The explicit solutions for formula_13 and formula_8 \u2013 unique, if all quantities vanish sufficiently fast at infinity \u2013 are known as retarded potentials."}
+{"text":"Time translation symmetry or temporal translation symmetry (TTS) is a mathematical transformation in physics that moves the times of events through a common interval. Time translation symmetry is the hypothesis that the laws of physics are unchanged (i.e. invariant) under such a transformation. Time translation symmetry is a rigorous way to formulate the idea that the laws of physics are the same throughout history. Time translation symmetry is closely connected, via the Noether theorem, to conservation of energy. In mathematics, the set of all time translations on a given system forms a Lie group."}
+{"text":"There are many symmetries in nature besides time translation, such as spatial translation or rotational symmetries. These symmetries can be broken and explain diverse phenomena such as crystals, superconductivity, and the Higgs mechanism. However, it was thought until very recently that time translation symmetry could not be broken. Time crystals, a state of matter first observed in 2017, break time translation symmetry."}
+{"text":"Symmetries are of prime importance in physics and are closely related to the hypothesis that certain physical quantities are only relative and unobservable. Symmetries apply to the equations that govern the physical laws (e.g. to a Hamiltonian or Lagrangian) rather than the initial conditions, values or magnitudes of the equations themselves and state that the laws remain unchanged under a transformation. If a symmetry is preserved under a transformation it is said to be \"invariant\". Symmetries in nature lead directly to conservation laws, something which is precisely formulated by the Noether theorem."}
+{"text":"To formally describe time translation symmetry we say the equations, or laws, that describe a system at times formula_1 and formula_2 are the same for any value of formula_1 and formula_4."}
+{"text":"For its solutions formula_6 one finds the combination:"}
+{"text":"The invariance of a Hamiltonian formula_10 of an isolated system under time translation implies its energy does not change with the passage of time. Conservation of energy implies, according to the Heisenberg equations of motion, that formula_11."}
+{"text":"where formula_14 is the time translation operator. This implies invariance of the Hamiltonian under the time translation operation and leads to the conservation of energy."}
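The link between a time-independent Hamiltonian and energy conservation can be made concrete with a classical toy system. A sketch, with an arbitrary harmonic oscillator H = p²/2m + kx²/2 and an invented step size, using a symplectic (leapfrog) integrator so the conserved energy is visible numerically:

```python
import numpy as np

# Illustrative sketch: for a time-independent Hamiltonian
#   H = p^2 / (2 m) + k x^2 / 2,
# time translation symmetry implies energy conservation. The parameters
# m, k, dt and the initial condition are arbitrary demonstration values.

m, k, dt = 1.0, 4.0, 1e-3
x, p = 1.0, 0.0

def energy(x, p):
    return p**2 / (2 * m) + 0.5 * k * x**2

E0 = energy(x, p)
for _ in range(10000):
    p -= 0.5 * dt * k * x      # half kick (force = -k x)
    x += dt * p / m            # drift
    p -= 0.5 * dt * k * x      # half kick

print(E0, energy(x, p))        # energy is conserved to high accuracy
```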
+{"text":"In many nonlinear field theories like general relativity or Yang\u2013Mills theories, the basic field equations are highly nonlinear and exact solutions are only known for \u2018sufficiently symmetric\u2019 distributions of matter (e.g. rotationally or axially symmetric configurations). Time translation symmetry is guaranteed only in spacetimes where the metric is static: that is, where there is a coordinate system in which the metric coefficients contain no time variable. Many general relativity systems are not static in any frame of reference so no conserved energy can be defined."}
+{"text":"Time crystals, a state of matter first observed in 2017, break discrete time translation symmetry."}
+{"text":"In physics and mathematics, an ansatz (German, meaning \"initial placement of a tool at a work piece\"; plural Ans\u00e4tze) is an educated guess or an additional assumption made to help solve a problem, and which may later be verified to be part of the solution by its results."}
+{"text":"An ansatz is the establishment of the starting equation(s), the theorem(s), or the value(s) describing a mathematical or physical problem or solution. It typically provides an initial estimate or framework to the solution of a mathematical problem, and can also take into consideration the boundary conditions (in fact, an ansatz is sometimes thought of as a \"trial answer\" and an important technique in solving differential equations)."}
+{"text":"After an ansatz, which constitutes nothing more than an assumption, has been established, the equations are solved more precisely for the general function of interest, which then constitutes a confirmation of the assumption. In essence, an ansatz makes assumptions about the form of the solution to a problem so as to make the solution easier to find."}
+{"text":"It has been demonstrated that machine learning techniques can be applied to provide initial estimates similar to those invented by humans and to discover new ones in case no ansatz is available."}
+{"text":"Given a set of experimental data that looks to be clustered about a line, a linear ansatz could be made to find the parameters of the line by a least squares curve fit. Variational approximation methods use ans\u00e4tze and then fit the parameters."}
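The linear-ansatz example can be sketched in a few lines. The synthetic data below (true slope 2 and intercept 1 plus noise) is invented for illustration; the ansatz y = ax + b is then fitted by least squares:

```python
import numpy as np

# Sketch of a linear ansatz: assume y = a x + b and determine the
# parameters a, b by a least squares fit. The data are synthetic.

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)   # true a = 2, b = 1

a, b = np.polyfit(x, y, 1)   # least squares fit of the linear ansatz
print(a, b)                  # close to the true values 2 and 1
```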
+{"text":"Another example could be the mass, energy, and entropy balance equations that, considered simultaneously for purposes of the elementary operations of linear algebra, are the \"ansatz\" to most basic problems of thermodynamics."}
+{"text":"Another example of an ansatz is to suppose the solution of a homogeneous linear differential equation to take an exponential form, or a power form in the case of a difference equation. More generally, one can guess a particular solution of a system of equations, and test such an ansatz by directly substituting the solution into the system of equations. In many cases, the assumed form of the solution is general enough that it can represent arbitrary functions, in such a way that the set of solutions found this way is a full set of all the solutions."}
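The exponential ansatz can be shown on a concrete equation (chosen here for illustration): for y'' − 3y' + 2y = 0, substituting y = exp(λt) reduces the differential equation to the characteristic polynomial λ² − 3λ + 2 = 0, whose roots give the solutions:

```python
import numpy as np

# Hedged example (the ODE is invented for illustration): for the linear
# equation y'' - 3 y' + 2 y = 0, the ansatz y = exp(lambda * t) yields
# the characteristic polynomial lambda^2 - 3 lambda + 2 = 0.

roots = np.roots([1.0, -3.0, 2.0])
print(sorted(roots.real))            # lambda = 1 and lambda = 2

# Substitute the ansatz back into the ODE to confirm it is a solution.
def residual(lam, t):
    y = np.exp(lam * t)
    return lam**2 * y - 3 * lam * y + 2 * y

for lam in roots.real:
    assert abs(residual(lam, 0.5)) < 1e-12
```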
+{"text":"In mathematics, a tensor is an algebraic object that describes a (multilinear) relationship between sets of algebraic objects related to a vector space. Objects that tensors may map between include vectors and scalars, and even other tensors. Tensors can take several different forms \u2013 for example: scalars and vectors (which are the simplest tensors), dual vectors, multilinear maps between vector spaces, and even some operations such as the dot product. Tensors are defined independent of any basis, although they are often referred to by their components in a basis related to a particular coordinate system."}
+{"text":"Tensors are important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as mechanics (stress, elasticity, fluid mechanics, moment of inertia, ...), electrodynamics (electromagnetic tensor, Maxwell tensor, permittivity, magnetic susceptibility, ...), or general relativity (stress\u2013energy tensor, curvature tensor, ...) and others. In applications, it is common to study situations in which a different tensor can occur at each point of an object; for example the stress within an object may vary from one location to another. This leads to the concept of a tensor field. In some areas, tensor fields are so ubiquitous that they are often simply called \"tensors\"."}
+{"text":"Tensors were conceived in 1900 by Tullio Levi-Civita and Gregorio Ricci-Curbastro, who continued the earlier work of Bernhard Riemann and Elwin Bruno Christoffel and others, as part of the \"absolute differential calculus\". The concept enabled an alternative formulation of the intrinsic differential geometry of a manifold in the form of the Riemann curvature tensor."}
+{"text":"Although seemingly different, the various approaches to defining tensors describe the same geometric concept using different language and at different levels of abstraction. For example, tensors are defined and discussed for statistical and machine learning applications."}
+{"text":"Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each type of tensor comes equipped with a \"transformation law\" that details how the components of the tensor respond to a change of basis. The components of a vector can respond in two distinct ways to a change of basis (see covariance and contravariance of vectors), where the new basis vectors formula_1 are expressed in terms of the old basis vectors formula_2 as,"}
+{"text":"Here \"R\"\"j\"\"i\" are the entries of the change of basis matrix, and in the rightmost expression the summation sign was suppressed: this is the Einstein summation convention, which will be used throughout this article. The components \"v\"\"i\" of a column vector v transform with the inverse of the matrix \"R\","}
+{"text":"where the hat denotes the components in the new basis. This is called a \"contravariant\" transformation law, because the vector components transform by the \"inverse\" of the change of basis. In contrast, the components, \"w\"\"i\", of a covector (or row vector), w transform with the matrix \"R\" itself,"}
+{"text":"This is called a \"covariant\" transformation law, because the covector components transform by the \"same matrix\" as the change of basis matrix. The components of a more general tensor transform by some combination of covariant and contravariant transformations, with one transformation law for each index. If the transformation matrix of an index is the inverse matrix of the basis transformation, then the index is called \"contravariant\" and is conventionally denoted with an upper index (superscript). If the transformation matrix of an index is the basis transformation itself, then the index is called \"covariant\" and is denoted with a lower index (subscript)."}
+{"text":"As a simple example, the matrix of a linear operator with respect to a basis is a rectangular array formula_6 that transforms under a change of basis matrix formula_7 by formula_8. For the individual matrix entries, this transformation law has the form formula_9 so the tensor corresponding to the matrix of a linear operator has one covariant and one contravariant index: it is of type (1,1)."}
+{"text":"Combinations of covariant and contravariant components with the same index allow us to express geometric invariants. For example, the fact that a vector is the same object in different coordinate systems can be captured by the following equations, using the formulas defined above:"}
+{"text":"where formula_11 is the Kronecker delta, which functions similarly to the identity matrix, and has the effect of renaming indices (\"j\" into \"k\" in this example). This shows several features of the component notation: the ability to re-arrange terms at will (commutativity), the need to use different indices when working with multiple objects in the same expression, the ability to rename indices, and the manner in which contravariant and covariant tensors combine so that all instances of the transformation matrix and its inverse cancel, so that expressions like formula_12 can immediately be seen to be geometrically identical in all coordinate systems."}
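The cancellation of the transformation matrix against its inverse can be demonstrated numerically. A minimal sketch, with an arbitrary invertible change-of-basis matrix R and example components v and w:

```python
import numpy as np

# Minimal numeric sketch of the two transformation laws: under a change
# of basis with matrix R, vector (contravariant) components transform
# with the inverse of R and covector (covariant) components with R
# itself, so the pairing w_i v^i is invariant. All values are invented.

rng = np.random.default_rng(1)
R = rng.normal(size=(3, 3))        # change-of-basis matrix (invertible here)
v = rng.normal(size=3)             # contravariant components of a vector
w = rng.normal(size=3)             # covariant components of a covector

v_new = np.linalg.inv(R) @ v       # contravariant law: new v = R^{-1} v
w_new = R.T @ w                    # covariant law: new w_i = R^j_i w_j

print(w @ v, w_new @ v_new)        # the scalar w_i v^i is unchanged
```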
+{"text":"Similarly, a linear operator, viewed as a geometric object, does not actually depend on a basis: it is just a linear map that accepts a vector as an argument and produces another vector. The transformation law for how the matrix of components of a linear operator changes with the basis is consistent with the transformation law for a contravariant vector, so that the action of a linear operator on a contravariant vector is represented in coordinates as the matrix product of their respective coordinate representations. That is, the components formula_13 are given by formula_14. These components transform contravariantly, since"}
+{"text":"The transformation law for an order \"p\" + \"q\" tensor with \"p\" contravariant indices and \"q\" covariant indices is thus given as,"}
+{"text":"Here the primed indices denote components in the new coordinates, and the unprimed indices denote the components in the old coordinates. Such a tensor is said to be of order \"p\" + \"q\", or of \"type\" (\"p\", \"q\"). The terms \"order\", \"type\", \"rank\", \"valence\", and \"degree\" are all sometimes used for the same concept. Here, the term \"order\" or \"total order\" will be used for the total dimension of the array (or its generalization in other definitions), \"p\" + \"q\" in the preceding example, and the term \"type\" for the pair giving the number of contravariant and covariant indices. A tensor of type (\"p\", \"q\") is also called a (\"p\", \"q\")-tensor for short."}
+{"text":"This discussion motivates the following formal definition:"}
+{"text":"to each basis of an \"n\"-dimensional vector space such that, if we apply the change of basis"}
+{"text":"then the multidimensional array obeys the transformation law"}
+{"text":"The definition of a tensor as a multidimensional array satisfying a transformation law traces back to the work of Ricci."}
+{"text":"An equivalent definition of a tensor uses the representations of the general linear group. There is an action of the general linear group on the set of all ordered bases of an \"n\"-dimensional vector space. If formula_23 is an ordered basis, and formula_24 is an invertible formula_25 matrix, then the action is given by"}
+{"text":"Let \"F\" be the set of all ordered bases. Then \"F\" is a principal homogeneous space for GL(\"n\"). Let \"W\" be a vector space and let formula_27 be a representation of GL(\"n\") on \"W\" (that is, a group homomorphism formula_28). Then a tensor of type formula_27 is an equivariant map formula_30. Equivariance here means that"}
+{"text":"When formula_27 is a tensor representation of the general linear group, this gives the usual definition of tensors as multidimensional arrays. This definition is often used to describe tensors on manifolds, and readily generalizes to other groups."}
+{"text":"A downside to the definition of a tensor using the multidimensional array approach is that it is not apparent from the definition that the defined object is indeed basis independent, as is expected from an intrinsically geometric object. Although it is possible to show that transformation laws indeed ensure independence from the basis, sometimes a more intrinsic definition is preferred. One approach that is common in differential geometry is to define tensors relative to a fixed (finite-dimensional) vector space \"V\", which is usually taken to be a particular vector space of some geometrical significance like the tangent space to a manifold. In this approach, a type (\"p\", \"q\") tensor \"T\" is defined as a multilinear map,"}
+{"text":"where \"V\"\u2217 is the corresponding dual space of covectors, which is linear in each of its arguments. The above assumes \"V\" is a vector space over the real numbers, \u211d. More generally, \"V\" can be taken over an arbitrary field of numbers, \"F\" (e.g. the complex numbers) with a one-dimensional vector space over \"F\" replacing \u211d as the codomain of the multilinear maps."}
+{"text":"By applying a multilinear map \"T\" of type (\"p\", \"q\") to a basis {e\"j\"} for \"V\" and a canonical cobasis {\u03b5\"i\"} for \"V\"\u2217,"}
+{"text":"a (\"p\" + \"q\")-dimensional array of components can be obtained. A different choice of basis will yield different components. But, because \"T\" is linear in all of its arguments, the components satisfy the tensor transformation law used in the multilinear array definition. The multidimensional array of components of \"T\" thus forms a tensor according to that definition. Moreover, such an array can be realized as the components of some multilinear map \"T\". This motivates viewing multilinear maps as the intrinsic objects underlying tensors."}
+{"text":"In viewing a tensor as a multilinear map, it is conventional to identify the double dual \"V\"\u2217\u2217 of the vector space \"V\", i.e., the space of linear functionals on the dual vector space \"V\"\u2217, with the vector space \"V\". There is always a natural linear map from \"V\" to its double dual, given by evaluating a linear form in \"V\"\u2217 against a vector in \"V\". This linear mapping is an isomorphism in finite dimensions, and it is often then expedient to identify \"V\" with its double dual."}
+{"text":"For some mathematical applications, a more abstract approach is sometimes useful. This can be achieved by defining tensors in terms of elements of tensor products of vector spaces, which in turn are defined through a universal property. A type (\"p\", \"q\") tensor is defined in this context as an element of the tensor product of vector spaces,"}
+{"text":"A basis of \"V\" and a basis of \"V\"\u2217 naturally induce a basis of the tensor product. The components of a tensor are the coefficients of the tensor with respect to the basis obtained from a basis for \"V\" and its dual basis, i.e."}
+{"text":"Using the properties of the tensor product, it can be shown that these components satisfy the transformation law for a type (\"p\", \"q\") tensor. Moreover, the universal property of the tensor product gives a one-to-one correspondence between tensors defined in this way and tensors defined as multilinear maps."}
+{"text":"Tensor products can be defined in great generality\u00a0\u2013 for example, involving arbitrary modules over a ring. In principle, one could define a \"tensor\" simply to be an element of any tensor product. However, the mathematics literature usually reserves the term \"tensor\" for an element of a tensor product of any number of copies of a single vector space and its dual, as above."}
+{"text":"In many applications, especially in differential geometry and physics, it is natural to consider a tensor with components that are functions of the point in a space. This was the setting of Ricci's original work. In modern mathematical terminology such an object is called a tensor field, often referred to simply as a tensor."}
+{"text":"In this context, a coordinate basis is often chosen for the tangent vector space. The transformation law may then be expressed in terms of partial derivatives of the coordinate functions,"}
+{"text":"This table shows important examples of tensors on vector spaces and tensor fields on manifolds. The tensors are classified according to their type (\"n\", \"m\"), where \"n\" is the number of contravariant indices, \"m\" is the number of covariant indices, and \"n\" + \"m\" gives the total order of the tensor. For example, a bilinear form is the same thing as a (0, 2)-tensor; an inner product is an example of a (0, 2)-tensor, but not all (0, 2)-tensors are inner products. In the (0, \"M\")-entry of the table, \"M\" denotes the dimensionality of the underlying vector space or manifold because for each dimension of the space, a separate index is needed to select that dimension to get a maximally covariant antisymmetric tensor."}
+{"text":"Raising an index on an (\"n\", \"m\")-tensor produces an (\"n\" + 1, \"m\" \u2212 1)-tensor; this corresponds to moving diagonally down and to the left on the table. Symmetrically, lowering an index corresponds to moving diagonally up and to the right on the table. Contraction of an upper with a lower index of an (\"n\", \"m\")-tensor produces an (\"n\" \u2212 1, \"m\" \u2212 1)-tensor; this corresponds to moving diagonally up and to the left on the table."}
+{"text":"Assuming a basis of a real vector space, e.g., a coordinate frame in the ambient space, a tensor can be represented as an organized multidimensional array of numerical values with respect to this specific basis. Changing the basis transforms the values in the array in a characteristic way that allows one to \"define\" tensors as objects adhering to this transformational behavior. For example, there are invariants of tensors that must be preserved under any change of the basis, thereby making only certain multidimensional arrays of numbers a tensor. Compare this to the array representing formula_40, which is not a tensor, because of its sign change under transformations that change the orientation."}
+{"text":"Because the components of vectors and their duals transform differently under the change of their dual bases, there is a covariant and\/or contravariant transformation law that relates the arrays, which represent the tensor with respect to one basis and that with respect to the other one. The numbers of, respectively, vectors (contravariant indices) and dual vectors (covariant indices) in the input and output of a tensor determine the \"type\" (or \"valence\") of the tensor, a pair of natural numbers (\"n\", \"m\"), which determine the precise form of the transformation law. The \"order\" of a tensor is the sum of these two numbers."}
+{"text":"The collection of tensors on a vector space and its dual forms a tensor algebra, which allows products of arbitrary tensors. Simple applications of tensors of order 2, which can be represented as a square matrix, can be solved by clever arrangement of transposed vectors and by applying the rules of matrix multiplication, but the tensor product should not be confused with this."}
+{"text":"There are several notational systems that are used to describe tensors and perform calculations involving them."}
+{"text":"Ricci calculus is the modern formalism and notation for tensor indices: indicating inner and outer products, covariance and contravariance, summations of tensor components, symmetry and antisymmetry, and partial and covariant derivatives."}
+{"text":"The Einstein summation convention dispenses with writing summation signs, leaving the summation implicit. Any repeated index symbol is summed over: if the index is used twice in a given term of a tensor expression, it means that the term is to be summed for all . Several distinct pairs of indices may be summed this way."}
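The convention maps directly onto code. A small sketch using NumPy's einsum, whose subscript strings mirror the index notation (the matrix and vector are arbitrary example values):

```python
import numpy as np

# Sketch of the Einstein summation convention in code: np.einsum sums
# over every index symbol that appears twice in a term, mirroring the
# notation. A and v are arbitrary example values.

A = np.arange(9.0).reshape(3, 3)
v = np.array([1.0, 2.0, 3.0])

Av = np.einsum('ij,j->i', A, v)   # A^i_j v^j : j appears twice, so summed
tr = np.einsum('ii->', A)         # A^i_i     : the trace, i summed

print(Av, tr)
```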
+{"text":"Penrose graphical notation is a diagrammatic notation which replaces the symbols for tensors with shapes, and their indices by lines and curves. It is independent of basis elements, and requires no symbols for the indices."}
+{"text":"The abstract index notation is a way to write tensors such that the indices are no longer thought of as numerical, but rather are indeterminates. This notation captures the expressiveness of indices and the basis-independence of index-free notation."}
+{"text":"A component-free treatment of tensors uses notation that emphasises that tensors do not rely on any basis, and is defined in terms of the tensor product of vector spaces."}
+{"text":"There are several operations on tensors that again produce a tensor. The linear nature of tensors implies that two tensors of the same type may be added together, and that tensors may be multiplied by a scalar with results analogous to the scaling of a vector. On components, these operations are simply performed component-wise. These operations do not change the type of the tensor; but there are also operations that produce a tensor of different type."}
+{"text":"The tensor product takes two tensors, \"S\" and \"T\", and produces a new tensor, \"S\" \u2297 \"T\", whose order is the sum of the orders of the original tensors. When described as multilinear maps, the tensor product simply multiplies the two tensors, i.e."}
+{"text":"which again produces a map that is linear in all its arguments. On components, the effect is to multiply the components of the two input tensors pairwise, i.e."}
+{"text":"If \"S\" is of type (\"k\", \"l\") and \"T\" is of type (\"m\", \"n\"), then the tensor product \"S\" \u2297 \"T\" has type (\"k\" + \"m\", \"l\" + \"n\")."}
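On components, the order-addition rule is visible in the shapes of the arrays. A sketch with arbitrary example arrays, using the outer (tensor) product:

```python
import numpy as np

# Illustrative sketch: the tensor (outer) product of an order-1 and an
# order-2 array is an order-3 array -- the orders add, matching the rule
# that the type of S (x) T is the sum of the types of S and T.

S = np.array([1.0, 2.0])             # order 1
T = np.arange(6.0).reshape(2, 3)     # order 2

ST = np.tensordot(S, T, axes=0)      # (S (x) T)_{ijk} = S_i T_{jk}
print(ST.shape)                      # (2, 2, 3): order 1 + 2 = 3
```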
+{"text":"Tensor contraction is an operation that reduces a type (\"n\", \"m\") tensor to a type (\"n\" \u2212 1, \"m\" \u2212 1) tensor, of which the trace is a special case. It thereby reduces the total order of a tensor by two. The operation is achieved by summing components for which one specified contravariant index is the same as one specified covariant index to produce a new component. Components for which those two indices are different are discarded. For example, a (1, 1)-tensor formula_43 can be contracted to a scalar through formula_44, where the summation is again implied. When the (1, 1)-tensor is interpreted as a linear map, this operation is known as the trace."}
+{"text":"The contraction is often used in conjunction with the tensor product to contract an index from each tensor."}
+{"text":"The contraction can also be understood using the definition of a tensor as an element of a tensor product of copies of the space \"V\" with the space \"V\"\u2217 by first decomposing the tensor into a linear combination of simple tensors, and then applying a factor from \"V\"\u2217 to a factor from \"V\". For example, a tensor formula_45 can be written as a linear combination"}
+{"text":"The contraction of \"T\" on the first and last slots is then the vector"}
+{"text":"In a vector space with an inner product (also known as a metric) \"g\", the term contraction is used for removing two contravariant or two covariant indices by forming a trace with the metric tensor or its inverse. For example, a (2, 0)-tensor formula_48 can be contracted to a scalar through formula_49 (yet again assuming the summation convention)."}
+{"text":"When a vector space is equipped with a nondegenerate bilinear form (or \"metric tensor\" as it is often called in this context), operations can be defined that convert a contravariant (upper) index into a covariant (lower) index and vice versa. A metric tensor is a (symmetric) (0, 2)-tensor; it is thus possible to contract an upper index of a tensor with one of the lower indices of the metric tensor in the product. This produces a new tensor with the same index structure as the previous tensor, but with lower index generally shown in the same position as the contracted upper index. This operation is quite graphically known as \"lowering an index\"."}
+{"text":"Conversely, the inverse operation can be defined, and is called \"raising an index\". This is equivalent to a similar contraction on the product with a (2, 0)-tensor. This \"inverse metric tensor\" has components that are the matrix inverse of those of the metric tensor."}
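Raising and lowering can be sketched on components: contract with the metric to lower an index, and with its matrix inverse to raise it back. The metric and vector below are arbitrary example values:

```python
import numpy as np

# Hedged sketch of raising and lowering indices: contract with a
# symmetric, nondegenerate metric g to lower an index, and with its
# matrix inverse to raise it. All values are invented for illustration.

g = np.array([[2.0, 1.0],
              [1.0, 3.0]])              # metric tensor g_{ij}
g_inv = np.linalg.inv(g)                # inverse metric g^{ij}

v = np.array([1.0, -2.0])               # contravariant components v^i

v_low = np.einsum('ij,j->i', g, v)          # lowering: v_i = g_{ij} v^j
v_up = np.einsum('ij,j->i', g_inv, v_low)   # raising restores v^i

print(v_low, v_up)   # raising after lowering is the identity
```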
+{"text":"If a particular surface element inside the material is singled out, the material on one side of the surface will apply a force on the other side. In general, this force will not be orthogonal to the surface, but it will depend on the orientation of the surface in a linear manner. This is described by a tensor of type (2, 0), in linear elasticity, or more precisely by a tensor field of type (2, 0), since the stresses may vary from point to point."}
+{"text":"Applications of tensors of order > 2."}
+{"text":"The concept of a tensor of order two is often conflated with that of a matrix. Tensors of higher order do however capture ideas important in science and engineering, as has been shown successively in numerous areas as they develop. This happens, for instance, in the field of computer vision, with the trifocal tensor generalizing the fundamental matrix."}
+{"text":"The field of nonlinear optics studies the changes to material polarization density under extreme electric fields. The polarization waves generated are related to the generating electric fields through the nonlinear susceptibility tensor. If the polarization P is not linearly proportional to the electric field E, the medium is termed \"nonlinear\". To a good approximation (for sufficiently weak fields, assuming no permanent dipole moments are present), P is given by a Taylor series in E whose coefficients are the nonlinear susceptibilities:"}
+{"text":"Here formula_51 is the linear susceptibility, formula_52 gives the Pockels effect and second harmonic generation, and formula_53 gives the Kerr effect. This expansion shows the way higher-order tensors arise naturally in the subject matter."}
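The way the susceptibilities act as tensors of increasing order can be sketched on components: each term contracts a susceptibility tensor against one more copy of the field, P_i = chi1_{ij} E_j + chi2_{ijk} E_j E_k + ... . The numbers below are random toy values, not physical data:

```python
import numpy as np

# Illustrative sketch (toy values, not physical data): the polarization
# expanded in powers of the field, with the susceptibilities as tensors
# of increasing order contracted against copies of E:
#   P_i = chi1_{ij} E_j + chi2_{ijk} E_j E_k + ...

rng = np.random.default_rng(2)
chi1 = rng.normal(size=(3, 3))       # linear susceptibility (order 2)
chi2 = rng.normal(size=(3, 3, 3))    # second-order susceptibility (order 3)
E = np.array([0.1, 0.0, 0.2])

P = (np.einsum('ij,j->i', chi1, E)
     + np.einsum('ijk,j,k->i', chi2, E, E))
print(P)
```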
+{"text":"The vector spaces of a tensor product need not be the same, and sometimes the elements of such a more general tensor product are called \"tensors\". For example, an element of the tensor product space \"V\" \u2297 \"W\" of two vector spaces is a second-order \"tensor\" in this more general sense, and an order-\"d\" tensor may likewise be defined as an element of a tensor product of \"d\" different vector spaces. A type (\"n\", \"m\") tensor, in the sense defined previously, is also a tensor of order \"n\" + \"m\" in this more general sense. The concept of tensor product can be extended to arbitrary modules over a ring."}
+{"text":"The notion of a tensor can be generalized in a variety of ways to infinite dimensions. One, for instance, is via the tensor product of Hilbert spaces. Another way of generalizing the idea of tensor, common in nonlinear analysis, is via the multilinear maps definition where instead of using finite-dimensional vector spaces and their algebraic duals, one uses infinite-dimensional Banach spaces and their continuous dual. Tensors thus live naturally on Banach manifolds and Fr\u00e9chet manifolds."}
+{"text":"Suppose that a homogeneous medium fills \u211d3, so that the density of the medium is described by a single scalar value, in kg\/m3. The mass, in kg, of a region is obtained by multiplying the density by the volume of the region, or equivalently integrating the constant density over the region:"}
+{"text":"where the Cartesian coordinates are measured in m. If the units of length are changed into cm, then the numerical values of the coordinate functions must be rescaled by a factor of 100:"}
+{"text":"The numerical value of the density must then also transform by formula_56 to compensate, so that the numerical value of the mass in kg is still given by integral of formula_57. Thus formula_58 (in units of )."}
+{"text":"More generally, if the Cartesian coordinates undergo a linear transformation, then the numerical value of the density must change by a factor of the reciprocal of the absolute value of the determinant of the coordinate transformation, so that the integral remains invariant, by the change of variables formula for integration. Such a quantity that scales by the reciprocal of the absolute value of the determinant of the coordinate transition map is called a scalar density. To model a non-constant density, the density \u03c1 is a function of the position variables (a scalar field), and under a curvilinear change of coordinates it transforms by the reciprocal of the Jacobian of the coordinate change. For more on the intrinsic meaning, see Density on a manifold."}
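To make the transformation law concrete, here is a minimal sketch (a hypothetical 2x2 example of my own, not from the text): under a linear coordinate change with matrix A, a scalar density picks up a factor 1/|det A|, and the integral (here, density times area for a constant density) is unchanged.

```python
# Hypothetical linear change of coordinates in the plane.
A = [[2.0, 1.0],
     [0.0, 3.0]]
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # determinant = 6.0

rho = 5.0        # constant scalar density in the old coordinates
area = 1.5       # area of a region in the old coordinates
mass = rho * area

rho_new = rho / abs(det_A)     # scalar-density transformation law
area_new = area * abs(det_A)   # areas are stretched by |det A|

assert abs(rho_new * area_new - mass) < 1e-12   # the integral is invariant
```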
+{"text":"A tensor density transforms like a tensor under a coordinate change, except that it in addition picks up a factor of the absolute value of the determinant of the coordinate transition:"}
+{"text":"Here \"w\" is called the weight. In general, any tensor multiplied by a power of this function or its absolute value is called a tensor density, or a weighted tensor. An example of a tensor density is the current density of electromagnetism."}
+{"text":"Under an affine transformation of the coordinates, a tensor transforms by the linear part of the transformation itself (or its inverse) on each index. These come from the rational representations of the general linear group. But this is not quite the most general linear transformation law that such an object may have: tensor densities are non-rational, but are still semisimple representations. A further class of transformations comes from the logarithmic representation of the general linear group, a reducible but not semisimple representation, consisting of a pair (\"x\", \"y\") with the transformation law"}
+{"text":"The transformation law for a tensor behaves as a functor on the category of admissible coordinate systems, under general linear transformations (or, other transformations within some class, such as local diffeomorphisms.) This makes a tensor a special case of a geometrical object, in the technical sense that it is a function of the coordinate system transforming functorially under coordinate changes. Examples of objects obeying more general kinds of transformation laws are jets and, more generally still, natural bundles."}
+{"text":"Succinctly, spinors are elements of the spin representation of the rotation group, while tensors are elements of its tensor representations. Other classical groups have tensor representations, and so also tensors that are compatible with the group, but all non-compact classical groups have infinite-dimensional unitary representations as well."}
+{"text":"The concepts of later tensor analysis arose from the work of Carl Friedrich Gauss in differential geometry, and the formulation was much influenced by the theory of algebraic forms and invariants developed during the middle of the nineteenth century. The word \"tensor\" itself was introduced in 1846 by William Rowan Hamilton to describe something different from what is now meant by a tensor. The contemporary usage was introduced by Woldemar Voigt in 1898."}
+{"text":"Tensor calculus was developed around 1890 by Gregorio Ricci-Curbastro under the title \"absolute differential calculus\", and originally presented by Ricci-Curbastro in 1892. It was made accessible to many mathematicians by the publication of Ricci-Curbastro and Tullio Levi-Civita's 1900 classic text \"M\u00e9thodes de calcul diff\u00e9rentiel absolu et leurs applications\" (Methods of absolute differential calculus and their applications)."}
+{"text":"In the 20th century, the subject came to be known as \"tensor analysis\", and achieved broader acceptance with the introduction of Einstein's theory of general relativity, around 1915. General relativity is formulated completely in the language of tensors. Einstein had learned about them, with great difficulty, from the geometer Marcel Grossmann. Levi-Civita then initiated a correspondence with Einstein to correct mistakes Einstein had made in his use of tensor analysis. The correspondence lasted 1915\u201317, and was characterized by mutual respect:"}
+{"text":"Tensors were also found to be useful in other fields such as continuum mechanics. Some well-known examples of tensors in differential geometry are quadratic forms such as metric tensors, and the Riemann curvature tensor. The exterior algebra of Hermann Grassmann, from the middle of the nineteenth century, is itself a tensor theory, and highly geometric, but it was some time before it was seen, with the theory of differential forms, as naturally unified with tensor calculus. The work of \u00c9lie Cartan made differential forms one of the basic kinds of tensors used in mathematics."}
+{"text":"From about the 1920s onwards, it was realised that tensors play a basic role in algebraic topology (for example in the K\u00fcnneth theorem). Correspondingly there are types of tensors at work in many branches of abstract algebra, particularly in homological algebra and representation theory. Multilinear algebra can be developed in greater generality than for scalars coming from a field. For example, scalars can come from a ring. But the theory is then less geometric and computations more technical and less algorithmic. Tensors are generalized within category theory by means of the concept of monoidal category, from the 1960s."}
+{"text":"Frame-dragging is an effect on spacetime, predicted by Albert Einstein's general theory of relativity, that is due to non-static stationary distributions of mass\u2013energy. A stationary field is one that is in a steady state, but the masses causing that field may be non-static (rotating, for instance). More generally, the subject that deals with the effects caused by mass\u2013energy currents is known as gravitoelectromagnetism, which is analogous to the magnetism of classical electromagnetism."}
+{"text":"The first frame-dragging effect was derived in 1918, in the framework of general relativity, by the Austrian physicists Josef Lense and Hans Thirring, and is also known as the Lense\u2013Thirring effect. They predicted that the rotation of a massive object would distort the spacetime metric, making the orbit of a nearby test particle precess. This does not happen in Newtonian mechanics for which the gravitational field of a body depends only on its mass, not on its rotation. The Lense\u2013Thirring effect is very small \u2013 about one part in a few trillion. To detect it, it is necessary to examine a very massive object, or build an instrument that is very sensitive."}
+{"text":"In 2015, new general-relativistic extensions of Newtonian rotation laws were formulated to describe geometric dragging of frames which incorporates a newly discovered antidragging effect."}
+{"text":"Rotational frame-dragging (the Lense\u2013Thirring effect) appears in the general principle of relativity and similar theories in the vicinity of rotating massive objects. Under the Lense\u2013Thirring effect, the frame of reference in which a clock ticks the fastest is one which is revolving around the object as viewed by a distant observer. This also means that light traveling in the direction of rotation of the object will move past the massive object faster than light moving against the rotation, as seen by a distant observer. It is now the best known frame-dragging effect, partly thanks to the Gravity Probe B experiment. Qualitatively, frame-dragging can be viewed as the gravitational analog of electromagnetic induction."}
+{"text":"Another interesting consequence is that, for an object constrained in an equatorial orbit, but not in freefall, it weighs more if orbiting anti-spinward, and less if orbiting spinward. For example, in a suspended equatorial bowling alley, a bowling ball rolled anti-spinward would weigh more than the same ball rolled in a spinward direction. Note that frame dragging will neither accelerate nor slow down the bowling ball in either direction; it is not a \"viscosity\". Similarly, a stationary plumb-bob suspended over the rotating object will not list; it will hang vertically. If it starts to fall, induction will push it in the spinward direction."}
+{"text":"Linear frame dragging is the similarly inevitable result of the general principle of relativity, applied to linear momentum. Although it arguably has equal theoretical legitimacy to the \"rotational\" effect, the difficulty of obtaining an experimental verification of the effect means that it receives much less discussion and is often omitted from articles on frame-dragging (but see Einstein, 1921)."}
+{"text":"Static mass increase is a third effect noted by Einstein in the same paper. The effect is an increase in inertia of a body when other masses are placed nearby. While not strictly a frame dragging effect (the term frame dragging is not used by Einstein), it is demonstrated by Einstein that it derives from the same equation of general relativity. It is also a tiny effect that is difficult to confirm experimentally."}
+{"text":"In 1976 Van Patten and Everitt proposed to implement a dedicated mission aimed at measuring the Lense\u2013Thirring node precession of a pair of counter-orbiting spacecraft to be placed in terrestrial polar orbits with drag-free apparatus. A somewhat equivalent, cheaper version of such an idea was put forth in 1986 by Ciufolini, who proposed to launch a passive, geodetic satellite in an orbit identical to that of the LAGEOS satellite, launched in 1976, apart from the orbital planes, which should have been displaced by 180 degrees: the so-called butterfly configuration. The measurable quantity was, in this case, the sum of the nodes of LAGEOS and of the new spacecraft, later named LAGEOS III, LARES, WEBER-SAT."}
+{"text":"The Gravity Probe B experiment was a satellite-based mission by a Stanford group and NASA, used to experimentally measure another gravitomagnetic effect, the Schiff precession of a gyroscope, to an expected 1% accuracy or better. Unfortunately such accuracy was not achieved. The first preliminary results released in April 2007 pointed towards an accuracy of 256\u2013128%, with the hope of reaching about 13% in December 2007."}
+{"text":"In 2008 the Senior Review Report of the NASA Astrophysics Division Operating Missions stated that it was unlikely that the Gravity Probe B team would be able to reduce the errors to the level necessary to produce a convincing test of currently untested aspects of general relativity (including frame-dragging)."}
+{"text":"On May 4, 2011, the Stanford-based analysis group and NASA announced the final report, and in it the data from GP-B demonstrated the frame-dragging effect with an error of about 19 percent, and Einstein's predicted value was at the center of the confidence interval."}
+{"text":"NASA published claims of success in the verification of frame dragging for the GRACE twin satellites and Gravity Probe B, both of which claims remain in public view. A research group in Italy, the USA, and the UK also claimed success in the verification of frame dragging with the GRACE gravity model, published in a peer-reviewed journal. All the claims include recommendations for further research at greater accuracy and with other gravity models."}
+{"text":"In the case of stars orbiting close to a spinning, supermassive black hole, frame dragging should cause the star's orbital plane to precess about the black hole spin axis. This effect should be detectable within the next few years via astrometric monitoring of stars at the center of the Milky Way galaxy."}
+{"text":"By comparing the rate of orbital precession of two stars on different orbits, it is possible in principle to test the no-hair theorems of general relativity, in addition to measuring the spin of the black hole."}
+{"text":"Relativistic jets may provide evidence for the reality of frame-dragging. Gravitomagnetic forces produced by the Lense\u2013Thirring effect (frame dragging) within the ergosphere of rotating black holes combined with the energy extraction mechanism by Penrose have been used to explain the observed properties of relativistic jets. The gravitomagnetic model developed by Reva Kay Williams predicts the observed high energy particles (~GeV) emitted by quasars and active galactic nuclei; the extraction of X-rays, \u03b3-rays, and relativistic e\u2212\u2013 e+ pairs; the collimated jets about the polar axis; and the asymmetrical formation of jets (relative to the orbital plane)."}
+{"text":"The Lense\u2013Thirring effect has been observed in a binary system that consists of a massive white dwarf and a pulsar."}
+{"text":"Frame-dragging may be illustrated most readily using the Kerr metric, which describes the geometry of spacetime in the vicinity of a mass \"M\" rotating with angular momentum \"J\", and Boyer\u2013Lindquist coordinates (see the link for the transformation):"}
+{"text":"and where the following shorthand variables have been introduced for brevity"}
+{"text":"In the non-relativistic limit where \"M\" (or, equivalently, the Schwarzschild radius \"r\"s) goes to zero, the Kerr metric becomes the orthogonal metric for the oblate spheroidal coordinates"}
+{"text":"We may rewrite the Kerr metric in the following form"}
+{"text":"This metric is equivalent to a co-rotating reference frame that is rotating with angular speed \u03a9 that depends on both the radius \"r\" and the colatitude \"\u03b8\""}
+{"text":"In the plane of the equator this simplifies to:"}
+{"text":"Thus, an inertial reference frame is entrained by the rotating central mass to participate in the latter's rotation; this is frame-dragging."}
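The dragging angular speed can be evaluated numerically. The following is a sketch of my own (hypothetical parameter values; geometric units G = c = 1 and spin parameter a = J/M are assumptions of this example), using the standard closed form for the angular speed of a locally non-rotating frame in Boyer–Lindquist coordinates.

```python
import math

def omega(r, theta, M, a):
    """Frame-dragging angular speed of the Kerr metric in Boyer-Lindquist
    coordinates (geometric units G = c = 1; a = J/M is the spin parameter)."""
    sigma = r**2 + (a * math.cos(theta)) ** 2
    return (2.0 * M * a * r) / (
        sigma * (r**2 + a**2) + 2.0 * M * r * (a * math.sin(theta)) ** 2
    )

# no rotation, no dragging:
assert omega(10.0, math.pi / 2, 1.0, 0.0) == 0.0
# the effect weakens with distance from the rotating mass:
assert omega(20.0, math.pi / 2, 1.0, 0.9) < omega(10.0, math.pi / 2, 1.0, 0.9)
```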
+{"text":"An extreme version of frame dragging occurs within the ergosphere of a rotating black hole. The Kerr metric has two surfaces on which it appears to be singular. The inner surface corresponds to a spherical event horizon similar to that observed in the Schwarzschild metric; this occurs at"}
+{"text":"where the purely radial component \"grr\" of the metric goes to infinity. The outer surface can be approximated by an oblate spheroid at lower spin parameters, and resembles a pumpkin shape at higher spin parameters. It touches the inner surface at the poles of the rotation axis, where the colatitude \"\u03b8\" equals 0 or \u03c0; its radius in Boyer\u2013Lindquist coordinates is defined by the formula"}
+{"text":"where the purely temporal component \"gtt\" of the metric changes sign from positive to negative. The space between these two surfaces is called the ergosphere. A moving particle experiences a positive proper time along its worldline, its path through spacetime. However, this is impossible within the ergosphere, where \"gtt\" is negative, unless the particle is co-rotating with the interior mass \"M\" with an angular speed of at least \u03a9. As seen above, though, frame-dragging occurs about every rotating mass and at every radius \"r\" and colatitude \"\u03b8\", not only within the ergosphere."}
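These two surfaces can be computed explicitly. The following sketch (my own, with hypothetical mass and spin values, geometric units G = c = 1) uses the standard Boyer–Lindquist radii of the outer horizon, M + sqrt(M^2 - a^2), and of the outer ergosurface, M + sqrt(M^2 - a^2 cos^2(theta)).

```python
import math

def kerr_surfaces(M, a, theta):
    """Outer event horizon (where g_rr blows up) and outer ergosurface
    (where g_tt changes sign) of a Kerr black hole, geometric units G = c = 1."""
    r_horizon = M + math.sqrt(M**2 - a**2)
    r_ergo = M + math.sqrt(M**2 - (a * math.cos(theta)) ** 2)
    return r_horizon, r_ergo

M, a = 1.0, 0.8                                # hypothetical mass and spin
rh, re_eq = kerr_surfaces(M, a, math.pi / 2)   # equator
_, re_pole = kerr_surfaces(M, a, 0.0)          # pole

assert abs(re_pole - rh) < 1e-12   # the two surfaces touch at the poles
assert re_eq > rh                  # elsewhere the ergosurface lies outside
```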
+{"text":"The Lense\u2013Thirring effect inside a rotating shell was taken by Albert Einstein as not just support for, but a vindication of, Mach's principle, in a letter he wrote to Ernst Mach in 1913 (five years before Lense and Thirring's work, and two years before he had attained the final form of general relativity). A reproduction of the letter can be found in Misner, Thorne, and Wheeler. The general effect, scaled up to cosmological distances, is still used as support for Mach's principle."}
+{"text":"Inside a rotating spherical shell the acceleration due to the Lense\u2013Thirring effect would be"}
+{"text":"for \"MG\" \u226a \"Rc\"2 or more precisely,"}
+{"text":"The spacetime inside the rotating spherical shell will not be flat. A flat spacetime inside a rotating mass shell is possible if the shell is allowed to deviate from a precisely spherical shape and the mass density inside the shell is allowed to vary."}
+{"text":"A node is a point along a standing wave where the wave has minimum amplitude. For instance, in a vibrating guitar string, the ends of the string are nodes. By changing the position of the end node through frets, the guitarist changes the effective length of the vibrating string and thereby the note played. The opposite of a node is an anti-node, a point where the amplitude of the standing wave is at maximum. These occur midway between the nodes."}
+{"text":"Standing waves result when two sinusoidal wave trains of the same frequency are moving in opposite directions in the same space and interfere with each other. They occur when waves are reflected at a boundary, such as sound waves reflected from a wall or electromagnetic waves reflected from the end of a transmission line, and particularly when waves are confined in a resonator at resonance, bouncing back and forth between two boundaries, such as in an organ pipe or guitar string."}
+{"text":"In a standing wave the nodes are a series of locations at equally spaced intervals where the wave amplitude (motion) is zero (see animation above). At these points the two waves add with opposite phase and cancel each other out. They occur at intervals of half a wavelength (\u03bb\/2). Midway between each pair of nodes are locations where the amplitude is maximum. These are called the antinodes. At these points the two waves add with the same phase and reinforce each other."}
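A small numerical sketch (my own, with an arbitrary wavelength) shows the half-wavelength spacing: adding two counter-propagating sine waves of equal amplitude gives the envelope 2|sin(kx)|, whose zeros are the nodes.

```python
import math

wavelength = 2.0
k = 2 * math.pi / wavelength

def amplitude(x):
    # sin(kx - wt) + sin(kx + wt) = 2 sin(kx) cos(wt):
    # the standing-wave amplitude at position x is |2 sin(kx)|
    return abs(2 * math.sin(k * x))

# sample every quarter wavelength and keep the points of (near-)zero amplitude
samples = [i * wavelength / 4 for i in range(9)]
nodes = [x for x in samples if amplitude(x) < 1e-9]

assert nodes == [0.0, 1.0, 2.0, 3.0, 4.0]   # nodes every lambda/2 = 1.0
```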
+{"text":"In cases where the two opposite wave trains are not the same amplitude, they do not cancel perfectly, so the amplitude of the standing wave at the nodes is not zero but merely a minimum. This occurs when the reflection at the boundary is imperfect. This is indicated by a finite standing wave ratio (SWR), the ratio of the amplitude of the wave at the antinode to the amplitude at the node."}
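As a quick sketch (my own illustration), the SWR follows directly from the incident and reflected amplitudes: the two wave trains add at the antinodes and subtract at the nodes.

```python
def standing_wave_ratio(incident, reflected):
    """SWR for two counter-propagating waves of unequal amplitude
    (reflected < incident models an imperfect reflection)."""
    antinode = incident + reflected   # in-phase addition at the antinodes
    node = incident - reflected       # opposite-phase addition at the nodes
    return antinode / node

assert standing_wave_ratio(1.0, 0.0) == 1.0   # no reflected wave: flat profile
assert standing_wave_ratio(1.0, 0.5) == 3.0   # partial reflection: finite SWR
```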
+{"text":"In resonance of a two dimensional surface or membrane, such as a drumhead or vibrating metal plate, the nodes become nodal lines, lines on the surface where the surface is motionless, dividing the surface into separate regions vibrating with opposite phase. These can be made visible by sprinkling sand on the surface, and the intricate patterns of lines resulting are called Chladni figures."}
+{"text":"In transmission lines a voltage node is a current antinode, and a voltage antinode is a current node."}
+{"text":"Nodes are the points of zero displacement, not the points where two constituent waves intersect."}
+{"text":"Where the nodes occur in relation to the boundary reflecting the waves depends on the end conditions or boundary conditions. Although there are many types of end conditions, the ends of resonators are usually one of two types that cause total reflection:"}
+{"text":"A sound wave consists of alternating cycles of compression and expansion of the wave medium. During compression, the molecules of the medium are forced together, resulting in increased pressure and density. During expansion, the molecules are forced apart, resulting in decreased pressure and density."}
+{"text":"The number of nodes in a specified length is directly proportional to the frequency of the wave."}
+{"text":"The characteristic sound that allows the listener to identify a particular instrument is largely due to the relative magnitude of the harmonics created by the instrument."}
+{"text":"In quantum field theory, a contact term is a radiatively induced point-like interaction."}
+{"text":"These typically occur when the vertex for the emission of a massless particle such as a photon, a graviton, or a gluon, is proportional to formula_1 (the invariant momentum of the radiated particle)."}
+{"text":"This factor cancels the formula_2 of the Feynman propagator, and causes the exchange of the massless particle to produce a point-like"}
+{"text":"formula_3-function effective interaction, rather than the usual"}
+{"text":"formula_4 long-range potential. A notable example occurs in the weak interactions where a W-boson radiative correction to a gluon vertex produces a formula_1 term, leading to"}
+{"text":"what is known as a \"penguin\" interaction. The contact term then generates a correction to the full action of the theory."}
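Schematically (my own sketch of the cancellation described above, not a formula from any specific calculation): a vertex factor of q^2 cancels the 1/q^2 of the propagator, and the momentum-independent remainder Fourier-transforms to a delta function in position space, i.e., a contact interaction.

```latex
q^2 \cdot \frac{1}{q^2} = 1
\qquad\Longrightarrow\qquad
\int \frac{\mathrm{d}^4 q}{(2\pi)^4}\, e^{i q \cdot (x-y)} = \delta^4(x-y)
```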
+{"text":"Contact terms occur in gravity when there are non-minimal interactions,"}
+{"text":"The non-minimal couplings are quantum equivalent to an \"Einstein frame,\" with a pure Einstein-Hilbert action,"}
+{"text":"owing to gravitational contact terms. These arise classically from graviton exchange interactions."}
+{"text":"The contact terms are an essential, yet hidden, part of the action and, if they are ignored, the Feynman diagram loops in different frames yield different results. At the leading order in formula_9 including the contact terms is equivalent to performing a Weyl Transformation to remove the non-minimal couplings and taking the theory to the Einstein-Hilbert form. In this sense, the Einstein-Hilbert form of the action is unique and \"frame ambiguities\" in loop calculations do not exist."}
+{"text":"Unification of the observable fundamental phenomena of nature is one of the primary goals of physics. The two great unifications to date are Isaac Newton\u2019s unification of gravity and astronomy, and James Clerk Maxwell\u2019s unification of electromagnetism; the latter has been further unified with the concept of electroweak interaction. This process of \"unifying\" forces continues today, with the ultimate goal of finding a theory of everything."}
+{"text":"The \"first great unification\" was Isaac Newton's 17th century unification of gravity, which brought together the understandings of the observable phenomena of gravity on Earth with the observable behaviour of celestial bodies in space."}
+{"text":"Unification of Magnetism, Electricity, Light and related radiation."}
+{"text":"The \"second great unification\" was James Clerk Maxwell's 19th century unification of electromagnetism. It brought together the understandings of the observable phenomena of magnetism, electricity and light (and more broadly, the spectrum of electromagnetic radiation). This was followed in the 20th century by Albert Einstein's unification of space and time, and of mass and energy. Later, quantum field theory unified quantum mechanics and special relativity."}
+{"text":"The ancient Chinese observed that certain rocks (lodestone and magnetite) were attracted to one another by an invisible force. This effect was later called magnetism, which was first rigorously studied in the 17th century. But even before the Chinese discovered magnetism, the ancient Greeks knew of other objects such as amber, that when rubbed with fur would cause a similar invisible attraction between the two. This was also first studied rigorously in the 17th century and came to be called electricity. Thus, physics had come to understand two observations of nature in terms of some root cause (electricity and magnetism). However, further work in the 19th century revealed that these two forces were just two different aspects of one force\u2014electromagnetism."}
+{"text":"This process of \"unifying\" forces continues today, and electromagnetism and the weak nuclear force are now considered to be two aspects of the electroweak interaction."}
+{"text":"Unification of the remaining fundamental forces: Theory of Everything."}
+{"text":"In mathematics and classical mechanics, the Poisson bracket is an important binary operation in Hamiltonian mechanics, playing a central role in Hamilton's equations of motion, which govern the time evolution of a Hamiltonian dynamical system. The Poisson bracket also distinguishes a certain class of coordinate transformations, called \"canonical transformations\", which map canonical coordinate systems into canonical coordinate systems. A \"canonical coordinate system\" consists of canonical position and momentum variables (below symbolized by formula_1 and formula_2, respectively) that satisfy canonical Poisson bracket relations. The set of possible canonical transformations is always very rich. For instance, it is often possible to choose the Hamiltonian formula_3 itself as one of the new canonical momentum coordinates."}
+{"text":"In a more general sense, the Poisson bracket is used to define a Poisson algebra, of which the algebra of functions on a Poisson manifold is a special case. There are other general examples, as well: it occurs in the theory of Lie algebras, where the tensor algebra of a Lie algebra forms a Poisson algebra; a detailed construction of how this comes about is given in the universal enveloping algebra article. Quantum deformations of the universal enveloping algebra lead to the notion of quantum groups."}
+{"text":"All of these objects are named in honor of Sim\u00e9on Denis Poisson."}
+{"text":"Given two functions f and g that depend on phase space and time, their Poisson bracket formula_4 is another function that depends on phase space and time. The following rules hold for any three functions formula_5 of phase space and time:"}
+{"text":"Also, if a function formula_10 is constant over phase space (but may depend on time), then formula_11 for any formula_12."}
+{"text":"In canonical coordinates (also known as Darboux coordinates) formula_13 on the phase space, given two functions formula_14 and formula_15, the Poisson bracket takes the form"}
+{"text":"The Poisson brackets of the canonical coordinates are"}
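In one degree of freedom the canonical form above can be checked numerically. The following sketch (my own, using central finite differences) verifies the canonical relations {q, p} = 1 and {q, q} = {p, p} = 0.

```python
def poisson(f, g, q, p, h=1e-6):
    """Canonical Poisson bracket {f, g} = df/dq dg/dp - df/dp dg/dq,
    approximated with central finite differences (one degree of freedom)."""
    dfdq = (f(q + h, p) - f(q - h, p)) / (2 * h)
    dfdp = (f(q, p + h) - f(q, p - h)) / (2 * h)
    dgdq = (g(q + h, p) - g(q - h, p)) / (2 * h)
    dgdp = (g(q, p + h) - g(q, p - h)) / (2 * h)
    return dfdq * dgdp - dfdp * dgdq

coord_q = lambda q, p: q
coord_p = lambda q, p: p

assert abs(poisson(coord_q, coord_p, 0.3, 0.7) - 1.0) < 1e-9   # {q, p} = 1
assert abs(poisson(coord_q, coord_q, 0.3, 0.7)) < 1e-9         # {q, q} = 0
assert abs(poisson(coord_p, coord_p, 0.3, 0.7)) < 1e-9         # {p, p} = 0
```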
+{"text":"Hamilton's equations of motion have an equivalent expression in terms of the Poisson bracket. This may be most directly demonstrated in an explicit coordinate frame. Suppose that formula_19 is a function on the solution\u2019s trajectory-manifold. Then from the multivariable chain rule,"}
+{"text":"Further, one may take formula_21 and formula_22 to be solutions to Hamilton's equations; that is,"}
+{"text":"Thus, the time evolution of a function formula_12 on a symplectic manifold can be given as a one-parameter family of symplectomorphisms (i.e., canonical transformations, area-preserving diffeomorphisms), with the time formula_26 being the parameter: Hamiltonian motion is a canonical transformation generated by the Hamiltonian. That is, Poisson brackets are preserved in it, so that \"any time formula_26\" in the solution to Hamilton's equations,"}
+{"text":"can serve as the bracket coordinates. \"Poisson brackets are canonical invariants\"."}
+{"text":"The operator in the convective part of the derivative, formula_30, is sometimes referred to as the Liouvillian (see Liouville's theorem (Hamiltonian))."}
+{"text":"An integrable dynamical system will have constants of motion in addition to the energy. Such constants of motion will commute with the Hamiltonian under the Poisson bracket. Suppose some function formula_31 is a constant of motion. This implies that if formula_32 is a trajectory or solution to Hamilton's equations of motion, then"}
+{"text":"where, as above, the intermediate step follows by applying the equations of motion and we supposed that formula_12 does not explicitly depend on time. This equation is known as the Liouville equation. The content of Liouville's theorem is that the time evolution of a measure (or \"distribution function\" on the phase space) is given by the above."}
+{"text":"If the Poisson bracket of formula_12 and formula_37 vanishes (formula_38), then formula_12 and formula_37 are said to be in involution. In order for a Hamiltonian system to be completely integrable, formula_41 independent constants of motion must be in mutual involution, where formula_41 is the number of degrees of freedom."}
+{"text":"Furthermore, according to Poisson's Theorem, if two quantities formula_43 and formula_44 are explicitly time independent (formula_45) constants of motion, so is their Poisson bracket formula_46. This does not always supply a useful result, however, since the number of possible constants of motion is limited (formula_47 for a system with formula_41 degrees of freedom), and so the result may be trivial (a constant, or a function of formula_43 and formula_44.)"}
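Poisson's theorem can be illustrated numerically with the angular momentum components, which are constants of motion in a central potential: their bracket reproduces the third component, {L_x, L_y} = L_z. This is a sketch of my own (three degrees of freedom, central finite differences, arbitrary phase-space point).

```python
def bracket3(f, g, x, p, h=1e-6):
    """Poisson bracket for three degrees of freedom, where f and g take
    position and momentum 3-vectors; central finite differences."""
    total = 0.0
    for i in range(3):
        xp, xm = list(x), list(x)
        pp, pm = list(p), list(p)
        xp[i] += h; xm[i] -= h
        pp[i] += h; pm[i] -= h
        dfdx = (f(xp, p) - f(xm, p)) / (2 * h)
        dfdp = (f(x, pp) - f(x, pm)) / (2 * h)
        dgdx = (g(xp, p) - g(xm, p)) / (2 * h)
        dgdp = (g(x, pp) - g(x, pm)) / (2 * h)
        total += dfdx * dgdp - dfdp * dgdx
    return total

Lx = lambda x, p: x[1] * p[2] - x[2] * p[1]
Ly = lambda x, p: x[2] * p[0] - x[0] * p[2]
Lz = lambda x, p: x[0] * p[1] - x[1] * p[0]

x0, p0 = [0.3, -1.2, 0.5], [0.7, 0.4, -0.9]   # arbitrary phase-space point
assert abs(bracket3(Lx, Ly, x0, p0) - Lz(x0, p0)) < 1e-6
```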
+{"text":"Let formula_51 be a symplectic manifold, that is, a manifold equipped with a symplectic form: a 2-form formula_52 which is both closed (i.e., its exterior derivative formula_53 vanishes) and non-degenerate. For example, in the treatment above, take formula_51 to be formula_55 and take"}
+{"text":"If formula_57 is the interior product or contraction operation defined by formula_58, then non-degeneracy is equivalent to saying that for every one-form formula_59 there is a unique vector field formula_60 such that formula_61. Alternatively, formula_62. Then if formula_63 is a smooth function on formula_51, the Hamiltonian vector field formula_65 can be defined to be formula_66. It is easy to see that"}
+{"text":"The Poisson bracket formula_68 on (\"M\", \u03c9) is a bilinear operation on differentiable functions, defined by formula_69; the Poisson bracket of two functions on \"M\" is itself a function on \"M\". The Poisson bracket is antisymmetric because:"}
+{"text":"Here \"Xgf\" denotes the vector field \"Xg\" applied to the function \"f\" as a directional derivative, and formula_71 denotes the (entirely equivalent) Lie derivative of the function \"f\"."}
+{"text":"If \u03b1 is an arbitrary one-form on \"M\", the vector field \u03a9\u03b1 generates (at least locally) a flow formula_72 satisfying the boundary condition formula_73 and the first-order differential equation"}
+{"text":"The formula_72 will be symplectomorphisms (canonical transformations) for every \"t\" as a function of \"x\" if and only if formula_76; when this is true, \u03a9\u03b1 is called a symplectic vector field. Recalling Cartan's identity formula_77 and \"d\"\u03c9 = 0, it follows that formula_78. Therefore, \u03a9\u03b1 is a symplectic vector field if and only if \u03b1 is a closed form. Since formula_79, it follows that every Hamiltonian vector field \"Xf\" is a symplectic vector field, and that the Hamiltonian flow consists of canonical transformations. From above, under the Hamiltonian flow \"XH\","}
+{"text":"This is a fundamental result in Hamiltonian mechanics, governing the time evolution of functions defined on phase space. As noted above, when \"{f,H} = 0\", \"f\" is a constant of motion of the system. In addition, in canonical coordinates (with formula_81 and formula_82), Hamilton's equations for the time evolution of the system follow immediately from this formula."}
+{"text":"It also follows that the Poisson bracket is a derivation; that is, it satisfies a non-commutative version of Leibniz's product rule:"}
+{"text":"The Poisson bracket is intimately connected to the Lie bracket of the Hamiltonian vector fields. Because the Lie derivative is a derivation,"}
+{"text":"Thus if \"v\" and \"w\" are symplectic, using formula_84, Cartan's identity, and the fact that formula_85 is a closed form,"}
+{"text":"Thus, the Poisson bracket on functions corresponds to the Lie bracket of the associated Hamiltonian vector fields. We have also shown that the Lie bracket of two symplectic vector fields is a Hamiltonian vector field and hence is also symplectic. In the language of abstract algebra, the symplectic vector fields form a subalgebra of the Lie algebra of smooth vector fields on \"M\", and the Hamiltonian vector fields form an ideal of this subalgebra. The symplectic vector fields are the Lie algebra of the (infinite-dimensional) Lie group of symplectomorphisms of \"M\"."}
+{"text":"It is widely asserted that the Jacobi identity for the Poisson bracket,"}
+{"text":"follows from the corresponding identity for the Lie bracket of vector fields, but this is true only up to a locally constant function. However, to prove the Jacobi identity for the Poisson bracket, it is sufficient to show that:"}
+{"text":"where the operator formula_90 on smooth functions on \"M\" is defined by formula_91 and the bracket on the right-hand side is the commutator of operators, formula_92. The operator formula_90 is equal to the operator \"Xg\". The proof of the Jacobi identity then follows, because the Lie bracket of vector fields is just their commutator as differential operators."}
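The identity can also be checked numerically for sample functions. The following sketch (my own, one degree of freedom, central finite differences, polynomial test functions chosen at will) evaluates the cyclic sum {f,{g,h}} + {g,{h,f}} + {h,{f,g}}, which should vanish up to finite-difference error.

```python
def pb(f, g, s=1e-4):
    """Return the canonical Poisson bracket {f, g} as a new function of (q, p),
    approximated with central finite differences."""
    def bracket(q, p):
        return ((f(q + s, p) - f(q - s, p)) * (g(q, p + s) - g(q, p - s))
                - (f(q, p + s) - f(q, p - s)) * (g(q + s, p) - g(q - s, p))) / (4 * s * s)
    return bracket

f = lambda q, p: q * q * p     # arbitrary smooth test functions
g = lambda q, p: p * p + q
h = lambda q, p: q * p

q0, p0 = 0.4, 1.3
jacobi = (pb(f, pb(g, h))(q0, p0)
          + pb(g, pb(h, f))(q0, p0)
          + pb(h, pb(f, g))(q0, p0))
assert abs(jacobi) < 1e-3      # the cyclic sum vanishes (Jacobi identity)
```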
+{"text":"The algebra of smooth functions on M, together with the Poisson bracket forms a Poisson algebra, because it is a Lie algebra under the Poisson bracket, which additionally satisfies Leibniz's rule . We have shown that every symplectic manifold is a Poisson manifold, that is a manifold with a \"curly-bracket\" operator on smooth functions such that the smooth functions form a Poisson algebra. However, not every Poisson manifold arises in this way, because Poisson manifolds allow for degeneracy which cannot arise in the symplectic case."}
+{"text":"Given a smooth vector field formula_94 on the configuration space, let formula_95 be its conjugate momentum. The conjugate momentum mapping is a Lie algebra anti-homomorphism from the Poisson bracket to the Lie bracket:"}
+{"text":"This important result is worth a short proof. Write a vector field formula_94 at point formula_98 in the configuration space as"}
+{"text":"where the formula_100 is the local coordinate frame. The conjugate momentum to formula_94 has the expression"}
+{"text":"where the formula_2 are the momentum functions conjugate to the coordinates. One then has, for a point formula_104 in the phase space,"}
+{"text":"The above holds for all formula_106, giving the desired result."}
+{"text":"Poisson brackets deform to Moyal brackets upon quantization, that is, they generalize to a different Lie algebra, the Moyal algebra, or, equivalently in Hilbert space, quantum commutators. The Wigner-\u0130n\u00f6n\u00fc group contraction of these (the classical limit, ) yields the above Lie algebra."}
+{"text":"To state this more explicitly and precisely, the universal enveloping algebra of the Heisenberg algebra is the Weyl algebra (modulo the relation that the center be the unit). The Moyal product is then a special case of the star product on the algebra of symbols. An explicit definition of the algebra of symbols, and the star product is given in the article on the universal enveloping algebra."}
+{"text":"In physics and thermodynamics, the ergodic hypothesis says that, over long periods of time, the time spent by a system in some region of the phase space of microstates with the same energy is proportional to the volume of this region, i.e., that all accessible microstates are equiprobable over a long period of time."}
+{"text":"Liouville's theorem states that, for Hamiltonian systems, the local density of microstates following a particle path through phase space is constant as viewed by an observer moving with the ensemble (i.e., the convective time derivative is zero). Thus, if the microstates are uniformly distributed in phase space initially, they will remain so at all times. But Liouville's theorem does \"not\" imply that the ergodic hypothesis holds for all Hamiltonian systems."}
+{"text":"The ergodic hypothesis is often assumed in the statistical analysis of computational physics. The analyst would assume that the average of a process parameter over time and the average over the statistical ensemble are the same. This assumption\u2014that it is as good to simulate a system over a long time as it is to make many independent realizations of the same system\u2014is not always correct. (See, for example, the Fermi\u2013Pasta\u2013Ulam\u2013Tsingou experiment of 1953.)"}
+{"text":"Assumption of the ergodic hypothesis allows proof that certain types of perpetual motion machines of the second kind are impossible."}
+{"text":"Systems that are ergodic are said to have the property of ergodicity; a broad range of systems in geometry, physics and stochastic probability theory are ergodic. Ergodic systems are studied in ergodic theory."}
+{"text":"In macroscopic systems, the timescales over which a system can truly explore the entirety of its own phase space can be sufficiently large that the thermodynamic equilibrium state exhibits some form of ergodicity breaking. A common example is that of spontaneous magnetisation in ferromagnetic systems, whereby below the Curie temperature the system preferentially adopts a non-zero magnetisation even though the ergodic hypothesis would imply that no net magnetisation should exist by virtue of the system exploring all states whose time-averaged magnetisation should be zero. The fact that macroscopic systems often violate the literal form of the ergodic hypothesis is an example of spontaneous symmetry breaking."}
+{"text":"However, complex disordered systems such as a spin glass show an even more complicated form of ergodicity breaking where the properties of the thermodynamic equilibrium state seen in practice are much more difficult to predict purely by symmetry arguments. Also conventional glasses (e.g. window glasses) violate ergodicity in a complicated manner. In practice this means that on sufficiently short time scales (e.g. those of parts of seconds, minutes, or a few hours) the systems may behave as \"solids\", i.e. with a positive shear modulus, but on extremely long scales, e.g. over millennia or eons, as \"liquids\", or with two or more time scales and \"plateaux\" in between."}
+{"text":"The \"commutative property\" (or \"commutative law\") is a property generally associated with binary operations and functions. If the commutative property holds for a pair of elements under a certain binary operation then the two elements are said to \"commute\" under that operation."}
+{"text":"The term \"commutative\" is used in several related senses."}
+{"text":"Two well-known examples of commutative binary operations:"}
+{"text":"Subtraction is noncommutative, since formula_10. However it is classified more precisely as anti-commutative, since formula_11."}
+{"text":"Some truth functions are noncommutative, since the truth tables for the functions are different when one changes the order of the operands. For example, the truth tables for and are"}
+{"text":"Function composition of linear functions from the real numbers to the real numbers is almost always noncommutative. For example, let formula_12 and formula_13. Then"}
+{"text":"This also applies more generally for linear and affine transformations from a vector space to itself (see below for the Matrix representation)."}
+{"text":"Matrix multiplication of square matrices is almost always noncommutative, for example:"}
+{"text":"The vector product (or cross product) of two vectors in three dimensions is anti-commutative; i.e., \"b\" \u00d7 \"a\" = \u2212(\"a\" \u00d7 \"b\")."}
+{"text":"Records of the implicit use of the commutative property go back to ancient times. The Egyptians used the commutative property of multiplication to simplify computing products. Euclid is known to have assumed the commutative property of multiplication in his book \"Elements\". Formal uses of the commutative property arose in the late 18th and early 19th centuries, when mathematicians began to work on a theory of functions. Today the commutative property is a well-known and basic property used in most branches of mathematics."}
+{"text":"The first recorded use of the term \"commutative\" was in a memoir by Fran\u00e7ois Servois in 1814, which used the word \"commutatives\" when describing functions that have what is now called the commutative property. The word is a combination of the French word \"commuter\" meaning \"to substitute or switch\" and the suffix \"-ative\" meaning \"tending to\" so the word literally means \"tending to substitute or switch\". The term then appeared in English in 1838 in Duncan Farquharson Gregory's article entitled \"On the real nature of symbolical algebra\" published in 1840 in the Transactions of the Royal Society of Edinburgh."}
+{"text":"In truth-functional propositional logic, \"commutation\", or \"commutativity\" refer to two valid rules of replacement. The rules allow one to transpose propositional variables within logical expressions in logical proofs. The rules are:"}
+{"text":"where \"formula_19\" is a metalogical symbol representing \"can be replaced in a proof with\"."}
+{"text":"\"Commutativity\" is a property of some logical connectives of truth functional propositional logic. The following logical equivalences demonstrate that commutativity is a property of particular connectives. The following are truth-functional tautologies."}
+{"text":"In group and set theory, many algebraic structures are called commutative when certain operands satisfy the commutative property. In higher branches of mathematics, such as analysis and linear algebra the commutativity of well-known operations (such as addition and multiplication on real and complex numbers) is often used (or implicitly assumed) in proofs."}
+{"text":"The associative property is closely related to the commutative property. The associative property of an expression containing two or more occurrences of the same operator states that the order operations are performed in does not affect the final result, as long as the order of terms does not change. In contrast, the commutative property states that the order of the terms does not affect the final result."}
+{"text":"Most commutative operations encountered in practice are also associative. However, commutativity does not imply associativity. A counterexample is the function"}
+{"text":"which is clearly commutative (interchanging \"x\" and \"y\" does not affect the result), but it is not associative (since, for example, formula_25 but formula_26). More such examples may be found in commutative non-associative magmas."}
+{"text":"Some forms of symmetry can be directly linked to commutativity. When a commutative operator is written as a binary function then the resulting function is symmetric across the line formula_27. As an example, if we let a function \"f\" represent addition (a commutative operation) so that formula_28 then formula_29 is a symmetric function, which can be seen in the adjacent image."}
+{"text":"For relations, a symmetric relation is analogous to a commutative operation, in that if a relation \"R\" is symmetric, then formula_30."}
+{"text":"In quantum mechanics as formulated by Schr\u00f6dinger, physical variables are represented by linear operators such as formula_31 (meaning multiply by formula_31), and formula_33. These two operators do not commute as may be seen by considering the effect of their compositions formula_34 and formula_35 (also called products of operators) on a one-dimensional wave function formula_36:"}
+{"text":"According to the uncertainty principle of Heisenberg, if the two operators representing a pair of variables do not commute, then that pair of variables are mutually complementary, which means they cannot be simultaneously measured or known precisely. For example, the position and the linear momentum in the formula_31-direction of a particle are represented by the operators formula_31 and formula_40, respectively (where formula_41 is the reduced Planck constant). This is the same example except for the constant formula_42, so again the operators do not commute and the physical meaning is that the position and linear momentum in a given direction are complementary."}
+{"text":"In geometry, the center of curvature of a curve is found at a point that is at a distance from the curve equal to the radius of curvature lying on the normal vector. It is the point at infinity if the curvature is zero. The osculating circle to the curve is centered at the centre of curvature. Cauchy defined the center of curvature \"C\" as the intersection point of two infinitely close normal lines to the curve. The locus of centers of curvature for each point on the curve comprise the evolute of the curve. This term is generally used in physics regarding to study of lenses and mirrors."}
+{"text":"It can also be defined as the spherical distance between the point at which all the rays falling on a lens or mirror either seems to converge to (in the case of convex lenses and concave mirrors) or diverge from (in the case of concave lenses or convex mirrors) and the lens\/mirror itself."}
+{"text":"A variable structure system, or VSS, is a discontinuous nonlinear system of the form"}
+{"text":"where formula_2 is the state vector, formula_3 is the time variable, and formula_4 is a \"piecewise continuous\" function. Due to the \"piecewise\" continuity of these systems, they behave like different continuous nonlinear systems in different regions of their state space. At the boundaries of these regions, their dynamics switch abruptly. Hence, their \"structure\" \"varies\" over different parts of their state space."}
+{"text":"The development of variable structure control depends upon methods of analyzing variable structure systems, which are special cases of hybrid dynamical systems."}
+{"text":"The covariant formulation of classical electromagnetism refers to ways of writing the laws of classical electromagnetism (in particular, Maxwell's equations and the Lorentz force) in a form that is manifestly invariant under Lorentz transformations, in the formalism of special relativity using rectilinear inertial coordinate systems. These expressions both make it simple to prove that the laws of classical electromagnetism take the same form in any inertial coordinate system, and also provide a way to translate the fields and forces from one frame to another. However, this is not as general as Maxwell's equations in curved spacetime or non-rectilinear coordinate systems."}
+{"text":"This article uses the classical treatment of tensors and Einstein summation convention throughout and the Minkowski metric has the form diag(+1, \u22121, \u22121, \u22121). Where the equations are specified as holding in a vacuum, one could instead regard them as the formulation of Maxwell's equations in terms of \"total\" charge and current."}
+{"text":"For a more general overview of the relationships between classical electromagnetism and special relativity, including various conceptual implications of this picture, see Classical electromagnetism and special relativity."}
+{"text":"Lorentz tensors of the following kinds may be used in this article to describe bodies or particles:"}
+{"text":"The signs in the following tensor analysis depend on the convention used for the metric tensor. The convention used here is , corresponding to the Minkowski metric tensor:"}
+{"text":"The electromagnetic tensor is the combination of the electric and magnetic fields into a covariant antisymmetric tensor whose entries are B-field quantities."}
+{"text":"and the result of raising its indices is"}
+{"text":"where E is the electric field, B the magnetic field, and \"c\" the speed of light."}
+{"text":"The four-current is the contravariant four-vector which combines electric charge density \"\u03c1\" and electric current density j:"}
+{"text":"The electromagnetic four-potential is a covariant four-vector containing the electric potential (also called the scalar potential) \"\u03d5\" and magnetic vector potential (or vector potential) A, as follows:"}
+{"text":"The differential of the electromagnetic potential is"}
+{"text":"In the language of differential forms, which provides the generalisation to curved spacetimes, these are the components of a 1-form formula_16 and a 2-form formula_17 respectively. Here, \"formula_18\" is the exterior derivative and formula_19 the wedge product."}
+{"text":"The electromagnetic stress\u2013energy tensor can be interpreted as the flux density of the momentum four-vector, and is a contravariant symmetric tensor that is the contribution of the electromagnetic fields to the overall stress\u2013energy tensor:"}
+{"text":"where \"formula_21\" is the electric permittivity of vacuum, \"\u03bc\"0 is the magnetic permeability of vacuum, the Poynting vector is"}
+{"text":"and the Maxwell stress tensor is given by"}
+{"text":"The electromagnetic field tensor \"F\" constructs the electromagnetic stress\u2013energy tensor \"T\" by the equation:"}
+{"text":"where \"\u03b7\" is the Minkowski metric tensor (with signature ). Notice that we use the fact that"}
+{"text":"In vacuum (or for the microscopic equations, not including macroscopic material descriptions), Maxwell's equations can be written as two tensor equations."}
+{"text":"The two inhomogeneous Maxwell's equations, Gauss's Law and Amp\u00e8re's law (with Maxwell's correction) combine into (with metric):"}
+{"text":"while the homogeneous equations \u2013 Faraday's law of induction and Gauss's law for magnetism combine to form formula_26, which may written using Levi-Civita duality as:"}
+{"text":"where \"F\"\"\u03b1\u03b2\" is the electromagnetic tensor, \"J\"\"\u03b1\" is the four-current, \"\u03b5\"\"\u03b1\u03b2\u03b3\u03b4\" is the Levi-Civita symbol, and the indices behave according to the Einstein summation convention."}
+{"text":"Each of these tensor equations corresponds to four scalar equations, one for each value of \"\u03b2\"."}
+{"text":"Using the antisymmetric tensor notation and comma notation for the partial derivative (see Ricci calculus), the second equation can also be written more compactly as:"}
+{"text":"In the absence of sources, Maxwell's equations reduce to:"}
+{"text":"which is an electromagnetic wave equation in the field strength tensor."}
+{"text":"The Lorenz gauge condition is a Lorentz-invariant gauge condition. (This can be contrasted with other gauge conditions such as the Coulomb gauge, which if it holds in one inertial frame will generally not hold in any other.) It is expressed in terms of the four-potential as follows:"}
+{"text":"In the Lorenz gauge, the microscopic Maxwell's equations can be written as:"}
+{"text":"Electromagnetic (EM) fields affect the motion of electrically charged matter: due to the Lorentz force. In this way, EM fields can be detected (with applications in particle physics, and natural occurrences such as in aurorae). In relativistic form, the Lorentz force uses the field strength tensor as follows."}
+{"text":"Expressed in terms of coordinate time \"t\", it is:"}
+{"text":"where \"p\"\"\u03b1\" is the four-momentum, \"q\" is the charge, and \"x\"\"\u03b2\" is the position."}
+{"text":"Expressed in frame-independent form, we have the four-force"}
+{"text":"where \"u\"\"\u03b2\" is the four-velocity, and \"\u03c4\" is the particle's proper time, which is related to coordinate time by ."}
+{"text":"The density of force due to electromagnetism, whose spatial part is the Lorentz force, is given by"}
+{"text":"and is related to the electromagnetic stress\u2013energy tensor by"}
+{"text":"Using the Maxwell equations, one can see that the electromagnetic stress\u2013energy tensor (defined above) satisfies the following differential equation, relating it to the electromagnetic tensor and the current four-vector"}
+{"text":"which expresses the conservation of linear momentum and energy by electromagnetic interactions."}
+{"text":"In order to solve the equations of electromagnetism given here, it is necessary to add information about how to calculate the electric current, \"J\"\"\u03bd\" Frequently, it is convenient to separate the current into two parts, the free current and the bound current, which are modeled by different equations;"}
+{"text":"Maxwell's macroscopic equations have been used, in addition the definitions of the electric displacement D and the magnetic intensity H:"}
+{"text":"where M is the magnetization and P the electric polarization."}
+{"text":"The bound current is derived from the P and M fields which form an antisymmetric contravariant magnetization-polarization tensor"}
+{"text":"If this is combined with \"F\"\u03bc\u03bd we get the antisymmetric contravariant electromagnetic displacement tensor which combines the D and H fields as follows:"}
+{"text":"The three field tensors are related by:"}
+{"text":"which is equivalent to the definitions of the D and H fields given above."}
+{"text":"The bound current and free current as defined above are automatically and separately conserved"}
+{"text":"In vacuum, the constitutive relations between the field tensor and displacement tensor are:"}
+{"text":"Antisymmetry reduces these 16 equations to just six independent equations. Because it is usual to define \"F\"\"\u03bc\u03bd\" by"}
+{"text":"the constitutive equations may, in \"vacuum\", be combined with the Gauss\u2013Amp\u00e8re law to get:"}
+{"text":"The electromagnetic stress\u2013energy tensor in terms of the displacement is:"}
+{"text":"where \"\u03b4\u03b1\u03c0\" is the Kronecker delta. When the upper index is lowered with \"\u03b7\", it becomes symmetric and is part of the source of the gravitational field."}
+{"text":"Thus we have reduced the problem of modeling the current, \"J\"\"\u03bd\" to two (hopefully) easier problems \u2014 modeling the free current, \"J\"\"\u03bd\"free and modeling the magnetization and polarization, formula_55. For example, in the simplest materials at low frequencies, one has"}
+{"text":"where one is in the instantaneously comoving inertial frame of the material, \"\u03c3\" is its electrical conductivity, \"\u03c7\"e is its electric susceptibility, and \"\u03c7\"m is its magnetic susceptibility."}
+{"text":"The constitutive relations between the formula_59 and \"F\" tensors, proposed by Minkowski for a linear materials (that is, E is proportional to D and B proportional to H), are:"}
+{"text":"where \"u\" is the four-velocity of material, \"\u03b5\" and \"\u03bc\" are respectively the proper permittivity and permeability of the material (i.e. in rest frame of material), formula_62 and denotes the Hodge dual."}
+{"text":"The Lagrangian density for classical electrodynamics is composed by two components: a field component and a source component:"}
+{"text":"In the interaction term, the four-current should be understood as an abbreviation of many terms expressing the electric currents of other charged fields in terms of their variables; the four-current is not itself a fundamental field."}
+{"text":"The Lagrange equations for the electromagnetic lagrangian density formula_64 can be stated as follows:"}
+{"text":"the expression inside the square bracket is"}
+{"text":"Therefore, the electromagnetic field's equations of motion are"}
+{"text":"Separating the free currents from the bound currents, another way to write the Lagrangian density is as follows:"}
+{"text":"Using Lagrange equation, the equations of motion for formula_73 can be derived."}
+{"text":"The equivalent expression in non-relativistic vector notation is"}
+{"text":"In continuum mechanics, the generalized Lagrangian mean (GLM) is a formalism \u2013 developed by \u2013 to unambiguously split a motion into a mean part and an oscillatory part. The method gives a mixed Eulerian\u2013Lagrangian description for the flow field, but appointed to fixed Eulerian coordinates."}
+{"text":"In general, it is difficult to decompose a combined wave\u2013mean motion into a mean and a wave part, especially for flows bounded by a wavy surface: e.g. in the presence of surface gravity waves or near another undulating bounding surface (like atmospheric flow over mountainous or hilly terrain). However, this splitting of the motion in a wave and mean part is often demanded in mathematical models, when the main interest is in the mean motion \u2013 slowly varying at scales much larger than those of the individual undulations. From a series of postulates, arrive at the (GLM) formalism to split the flow: into a generalised Lagrangian mean flow and an oscillatory-flow part."}
+{"text":"The GLM method does not suffer from the strong drawback of the Lagrangian specification of the flow field \u2013 following individual fluid parcels \u2013 that Lagrangian positions which are initially close gradually drift far apart. In the Lagrangian frame of reference, it therefore becomes often difficult to attribute Lagrangian-mean values to some location in space."}
+{"text":"The specification of mean properties for the oscillatory part of the flow, like: Stokes drift, wave action, pseudomomentum and pseudoenergy \u2013 and the associated conservation laws \u2013 arise naturally when using the GLM method."}
+{"text":"The GLM concept can also be incorporated into variational principles of fluid flow."}
+{"text":"The system size expansion, also known as van Kampen's expansion or the \u03a9-expansion, is a technique pioneered by Nico van Kampen used in the analysis of stochastic processes. Specifically, it allows one to find an approximation to the solution of a master equation with nonlinear transition rates. The leading order term of the expansion is given by the linear noise approximation, in which the master equation is approximated by a Fokker\u2013Planck equation with linear coefficients determined by the transition rates and stoichiometry of the system."}
+{"text":"Less formally, it is normally straightforward to write down a mathematical description of a system where processes happen randomly (for example, radioactive atoms randomly decay in a physical system, or genes that are expressed stochastically in a cell). However, these mathematical descriptions are often too difficult to solve for the study of the systems statistics (for example, the mean and variance of the number of atoms or proteins as a function of time). The system size expansion allows one to obtain an approximate statistical description that can be solved much more easily than the master equation."}
+{"text":"Systems that admit a treatment with the system size expansion may be described by a probability distribution formula_1, giving the probability of observing the system in state formula_2 at time formula_3. formula_2 may be, for example, a vector with elements corresponding to the number of molecules of different chemical species in a system. In a system of size formula_5 (intuitively interpreted as the volume), we will adopt the following nomenclature: formula_6 is a vector of macroscopic copy numbers, formula_7 is a vector of concentrations, and formula_8 is a vector of deterministic concentrations, as they would appear according to the rate equation in an infinite system. formula_9 and formula_6 are thus quantities subject to stochastic effects."}
+{"text":"A master equation describes the time evolution of this probability. Henceforth, a system of chemical reactions will be discussed to provide a concrete example, although the nomenclature of \"species\" and \"reactions\" is generalisable. A system involving formula_11 species and formula_12 reactions can be described with the master equation:"}
+{"text":"Here, formula_5 is the system size, formula_15 is an operator which will be addressed later, formula_16 is the stoichiometric matrix for the system (in which element formula_16 gives the stoichiometric coefficient for species formula_18 in reaction formula_19), and formula_20 is the rate of reaction formula_19 given a state formula_9 and system size formula_5."}
+{"text":"formula_24 is a step operator, removing formula_16 from the formula_18th element of its argument. For example, formula_27. This formalism will be useful later."}
+{"text":"The above equation can be interpreted as follows. The initial sum on the RHS is over all reactions. For each reaction formula_19, the brackets immediately following the sum give two terms. The term with the simple coefficient \u22121 gives the probability flux away from a given state formula_6 due to reaction formula_19 changing the state. The term preceded by the product of step operators gives the probability flux due to reaction formula_19 changing a different state formula_32 into state formula_6. The product of step operators constructs this state formula_32."}
+{"text":"For example, consider the (linear) chemical system involving two chemical species formula_35 and formula_36 and the reaction formula_37. In this system, formula_38 (species), formula_39 (reactions). A state of the system is a vector formula_40, where formula_41 are the number of molecules of formula_35 and formula_36 respectively. Let formula_44, so that the rate of reaction 1 (the only reaction) depends on the concentration of formula_35. The stoichiometry matrix is formula_46."}
+{"text":"where formula_48 is the shift caused by the action of the product of step operators, required to change state formula_6 to a precursor state formula_50."}
+{"text":"If the master equation possesses nonlinear transition rates, it may be impossible to solve it analytically. The system size expansion utilises the ansatz that the variance of the steady-state probability distribution of constituent numbers in a population scales like the system size. This ansatz is used to expand the master equation in terms of a small parameter given by the inverse system size."}
+{"text":"Specifically, let us write the formula_51, the copy number of component formula_18, as a sum of its \"deterministic\" value (a scaled-up concentration) and a random variable formula_53, scaled by formula_54:"}
+{"text":"The probability distribution of formula_6 can then be rewritten in the vector of random variables formula_53:"}
+{"text":"Consider how to write reaction rates formula_59 and the step operator formula_15 in terms of this new random variable. Taylor expansion of the transition rates gives:"}
+{"text":"The step operator has the effect formula_62 and hence formula_63:"}
+{"text":"We are now in a position to recast the master equation."}
+{"text":"This rather frightening expression makes a bit more sense when we gather terms in different powers of formula_5. First, terms of order formula_54 give"}
+{"text":"These terms cancel, due to the macroscopic reaction equation"}
+{"text":"The terms of order formula_70 are more interesting:"}
+{"text":"The time evolution of formula_75 is then governed by the linear Fokker\u2013Planck equation with coefficient matrices formula_76 and formula_77 (in the large-formula_5 limit, terms of formula_79 may be neglected, termed the linear noise approximation). With knowledge of the reaction rates formula_80 and stoichiometry formula_81, the moments of formula_75 can then be calculated."}
+{"text":"The approximation implies that fluctuations around the mean are Gaussian distributed. Non-Gaussian features of the distributions can be computed by taking into account higher order terms in the expansion."}
+{"text":"The wave equation is an important second-order linear partial differential equation for the description of waves\u2014as they occur in classical physics\u2014such as mechanical waves (e.g. water waves, sound waves and seismic waves) or light waves. It arises in fields like acoustics, electromagnetics, and fluid dynamics."}
+{"text":"Historically, the problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler, Daniel Bernoulli, and Joseph-Louis Lagrange. In 1746, d\u2019Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation."}
+{"text":"The wave equation is a partial differential equation that may constrain some scalar function of a time variable and one or more spatial variables . The quantity may be, for example, the pressure in a liquid or gas, or the displacement, along some specific direction, of the particles of a vibrating solid away from their resting positions. The equation is"}
+{"text":"where is a fixed non-negative real coefficient."}
+{"text":"Using the notations of Newtonian mechanics and vector calculus, the wave equation can be written more compactly as"}
+{"text":"where the double dot denotes double time derivative of , is the nabla operator, and is the (spatial) Laplacian operator:"}
+{"text":"A solution of this equation can be quite complicated, but it can be analyzed as a linear combination of simple solutions that are sinusoidal plane waves with various directions of propagation and wavelengths but all with the same propagation speed . This analysis is possible because the wave equation is linear; so that any multiple of a solution is also a solution, and the sum of any two solutions is again a solution. This property is called the superposition principle in physics."}
+{"text":"The wave equation alone does not specify a physical solution; a unique solution is usually obtained by setting a problem with further conditions, such as initial conditions, which prescribe the amplitude and phase of the wave. Another important class of problems occurs in enclosed spaces specified by boundary conditions, for which the solutions represent standing waves, or harmonics, analogous to the harmonics of musical instruments."}
+{"text":"The wave equation is the simplest example of a hyperbolic differential equation. It, and its modifications, play fundamental roles in continuum mechanics, quantum mechanics, plasma physics, general relativity, geophysics, and many other scientific and technical disciplines."}
+{"text":"The wave equation in one space dimension can be written as follows:"}
+{"text":"This equation is typically described as having only one space dimension , because the only other independent variable is the time . Nevertheless, the dependent variable may represent a second space dimension, if, for example, the displacement takes place in -direction, as in the case of a string that is located in the ."}
+{"text":"The wave equation in one space dimension can be derived in a variety of different physical settings. Most famously, it can be derived for the case of a string that is vibrating in a two-dimensional plane, with each of its elements being pulled in opposite directions by the force of tension."}
+{"text":"Another physical setting for derivation of the wave equation in one space dimension utilizes Hooke's Law. In the theory of elasticity, Hooke's Law is an approximation for certain materials, stating that the amount by which a material body is deformed (the strain) is linearly related to the force causing the deformation (the stress)."}
+{"text":"The wave equation in the one-dimensional case can be derived from Hooke's Law in the following way: imagine an array of little weights of mass interconnected with massless springs of length . The springs have a spring constant of :"}
+{"text":"Here the dependent variable measures the distance from the equilibrium of the mass situated at , so that essentially measures the magnitude of a disturbance (i.e. strain) that is traveling in an elastic material. The forces exerted on the mass at the location are:"}
+{"text":"The equation of motion for the weight at the location is given by equating these two forces:"}
+{"text":"where the time-dependence of has been made explicit."}
+{"text":"If the array of weights consists of weights spaced evenly over the length of total mass , and the total spring constant of the array we can write the above equation as:"}
+{"text":"Taking the limit and assuming smoothness one gets:"}
+{"text":"which is from the definition of a second derivative. is the square of the propagation speed in this particular case."}
+{"text":"In the case of a stress pulse propagating longitudinally through a bar, the bar acts much like an infinite number of springs in series and can be taken as an extension of the equation derived for Hooke's law. A uniform bar, i.e. of constant cross-section, made from a linear elastic material has a stiffness given by"}
+{"text":"Where is the cross-sectional area and is the Young's modulus of the material. The wave equation becomes"}
+{"text":"is equal to the volume of the bar and therefore"}
+{"text":"where is the density of the material. The wave equation reduces to"}
+{"text":"The speed of a stress wave in a bar is therefore ."}
+{"text":"The one-dimensional wave equation is unusual for a partial differential equation in that a relatively simple general solution may be found. Defining new variables:"}
+{"text":"In other words, solutions of the 1D wave equation are sums of a right traveling function and a left traveling function . \"Traveling\" means that the shape of these individual arbitrary functions with respect to stays constant, however the functions are translated left and right with time at the speed . This was derived by Jean le Rond d'Alembert."}
+{"text":"Another way to arrive at this result is to note that the wave equation may be \"factored\":"}
+{"text":"As a result, if we define thus,"}
+{"text":"From this, must have the form , and from this the correct form of the full solution can be deduced."}
+{"text":"For an initial value problem, the arbitrary functions and can be determined to satisfy initial conditions:"}
+{"text":"In the classical sense if and then . However, the waveforms and may also be generalized functions, such as the delta-function. In that case, the solution may be interpreted as an impulse that travels to the right or the left."}
+{"text":"The basic wave equation is a linear differential equation and so it will adhere to the superposition principle. This means that the net displacement caused by two or more waves is the sum of the displacements which would have been caused by each wave individually. In addition, the behavior of a wave can be analyzed by breaking up the wave into components, e.g. the Fourier transform breaks up a wave into sinusoidal components."}
+{"text":"Another way to solve for the solutions to the one-dimensional wave equation is to first analyze its frequency eigenmodes. A so-called eigenmode is a solution that oscillates in time with a well-defined \"constant\" angular frequency , so that the temporal part of the wave function takes the form , and the amplitude is a function of the spatial variable , giving a separation of variables for the wave function:"}
+{"text":"This produces an ordinary differential equation for the spatial part :"}
+{"text":"which is precisely an eigenvalue equation for , hence the name eigenmode. It has the well-known plane wave solutions"}
+{"text":"The total wave function for this eigenmode is then the linear combination"}
+{"text":"where complex numbers depend in general on any initial and boundary conditions of the problem."}
+{"text":"Eigenmodes are useful in constructing a full solution to the wave equation, because each of them evolves in time trivially with the phase factor formula_28. so that a full solution can be decomposed into an eigenmode expansion"}
+{"text":"or in terms of the plane waves,"}
+{"text":"Scalar wave equation in three space dimensions."}
+{"text":"A solution of the initial-value problem for the wave equation in three space dimensions can be obtained from the corresponding solution for a spherical wave. The result can then be also used to obtain the same solution in two space dimensions."}
+{"text":"The wave equation can be solved using the technique of separation of variables. To obtain a solution with constant frequencies, let us first Fourier-transform the wave equation in time as"}
+{"text":"This is the Helmholtz equation and can be solved using separation of variables. If spherical coordinates are used to describe a problem, then the solution to the angular part of the Helmholtz equation is given by spherical harmonics and the radial equation now becomes"}
+{"text":"Here and the complete solution is now given by"}
+{"text":"where and are the spherical Hankel functions."}
+{"text":"To gain a better understanding of the nature of these spherical waves, let us go back and look at the case when . In this case, there is no angular dependence and the amplitude depends only on the radial distance i.e. . In this case, the wave equation reduces to"}
+{"text":"where the quantity satisfies the one-dimensional wave equation. Therefore, there are solutions in the form"}
+{"text":"where and are general solutions to the one-dimensional wave equation, and can be interpreted as respectively an outgoing or incoming spherical wave. Such waves are generated by a point source, and they make possible sharp signals whose form is altered only by a decrease in amplitude as increases (see an illustration of a spherical wave on the top right). Such waves exist only in cases of space with odd dimensions."}
+{"text":"For physical examples of non-spherical wave solutions to the 3D wave equation that do possess angular dependence, see dipole radiation."}
+{"text":"Although the word \"monochromatic\" is not exactly accurate since it refers to light or electromagnetic radiation with well-defined frequency, the spirit is to discover the eigenmode of the wave equation in three dimensions. Following the derivation in the previous section on Plane wave eigenmodes, if we again restrict our solutions to spherical waves that oscillate in time with well-defined \"constant\" angular frequency , then the transformed function has simply plane wave solutions,"}
+{"text":"From this we can observe that the peak intensity of the spherical wave oscillation, characterized as the squared wave amplitude"}
+{"text":"A flexible string that is stretched between two points and satisfies the wave equation for and . On the boundary points, may satisfy a variety of boundary conditions. A general form that is appropriate for applications is"}
+{"text":"where and are non-negative. The case where u is required to vanish at an endpoint is the limit of this condition when the respective or approaches infinity. The method of separation of variables consists in looking for solutions of this problem in the special form"}
+{"text":"The eigenvalue must be determined so that there is a non-trivial solution of the boundary-value problem"}
+{"text":"This is a special case of the general problem of Sturm\u2013Liouville theory. If and are positive, the eigenvalues are all positive, and the solutions are trigonometric functions. A solution that satisfies square-integrable initial conditions for and can be obtained from expansion of these functions in the appropriate trigonometric series."}
+{"text":"Approximating the continuous string with a finite number of equidistant mass points one gets the following physical model:"}
+{"text":"If each mass point has the mass , the tension of the string is , the separation between the mass points is and are the offset of these points from their equilibrium points (i.e. their position on a straight line between the two attachment points of the string) the vertical component of the force towards point is"}
+{"text":"and the vertical component of the force towards point is"}
+{"text":"Taking the sum of these two forces and dividing with the mass one gets for the vertical motion:"}
+{"text":"The wave equation is obtained by letting in which case takes the form where is continuous function of two variables, takes the form and"}
+{"text":"with formula_53<\/ref> with all formula_54. The blue curve is the state at time formula_55 i.e. after a time that corresponds to the time a wave that is moving with the nominal wave velocity would need for one fourth of the length of the string."}
+{"text":"Figure 3 displays the shape of the string at the times formula_56. The wave travels in direction right with the speed without being actively constraint by the boundary conditions at the two extremes of the string. The shape of the wave is constant, i.e. the curve is indeed of the form ."}
+{"text":"Figure 4 displays the shape of the string at the times formula_57. The constraint on the right extreme starts to interfere with the motion preventing the wave to raise the end of the string."}
+{"text":"Figure 5 displays the shape of the string at the times formula_58 when the direction of motion is reversed. The red, green and blue curves are the states at the times formula_59 while the 3 black curves correspond to the states at times formula_60 with the wave starting to move back towards left."}
+{"text":"Figure 6 and figure 7 finally display the shape of the string at the times formula_61 and formula_62. The wave now travels towards left and the constraints at the end points are not active any more. When finally the other extreme of the string the direction will again be reversed in a way similar to what is displayed in figure 6."}
+{"text":"The one-dimensional initial-boundary value theory may be extended to an arbitrary number of space dimensions. Consider a domain in -dimensional space, with boundary . Then the wave equation is to be satisfied if is in and . On the boundary of , the solution shall satisfy"}
+{"text":"where is the unit outward normal to , and is a non-negative function defined on . The case where vanishes on is a limiting case for approaching infinity. The initial conditions are"}
+{"text":"where and are defined in . This problem may be solved by expanding and in the eigenfunctions of the Laplacian in , which satisfy the boundary conditions. Thus the eigenfunction satisfies"}
+{"text":"In the case of two space dimensions, the eigenfunctions may be interpreted as the modes of vibration of a drumhead stretched over the boundary . If is a circle, then these eigenfunctions have an angular component that is a trigonometric function of the polar angle , multiplied by a Bessel function (of integer order) of the radial component. Further details are in Helmholtz equation."}
+{"text":"If the boundary is a sphere in three space dimensions, the angular components of the eigenfunctions are spherical harmonics, and the radial components are Bessel functions of half-integer order."}
+{"text":"The inhomogeneous wave equation in one dimension is the following:"}
+{"text":"The function is often called the source function because in practice it describes the effects of the sources of waves on the medium carrying them. Physical examples of source functions include the force driving a wave on a string, or the charge or current density in the Lorenz gauge of electromagnetism."}
+{"text":"One method to solve the initial value problem (with the initial values as posed above) is to take advantage of a special property of the wave equation in an odd number of space dimensions, namely that its solutions respect causality. That is, for any point , the value of depends only on the values of and and the values of the function between and . This can be seen in d'Alembert's formula, stated above, where these quantities are the only ones that show up in it. Physically, if the maximum propagation speed is , then no part of the wave that can't propagate to a given point by a given time can affect the amplitude at the same point and time."}
+{"text":"In terms of finding a solution, this causality property means that for any given point on the line being considered, the only area that needs to be considered is the area encompassing all the points that could causally affect the point being considered. Denote the area that casually affects point as . Suppose we integrate the inhomogeneous wave equation over this region."}
+{"text":"To simplify this greatly, we can use Green's theorem to simplify the left side to get the following:"}
+{"text":"The left side is now the sum of three line integrals along the bounds of the causality region. These turn out to be fairly easy to compute"}
+{"text":"In the above, the term to be integrated with respect to time disappears because the time interval involved is zero, thus ."}
+{"text":"For the other two sides of the region, it is worth noting that is a constant, namely , where the sign is chosen appropriately. Using this, we can get the relation , again choosing the right sign:"}
+{"text":"And similarly for the final boundary segment:"}
+{"text":"Adding the three results together and putting them back in the original integral:"}
+{"text":"In the last equation of the sequence, the bounds of the integral over the source function have been made explicit. Looking at this solution, which is valid for all choices compatible with the wave equation, it is clear that the first two terms are simply d'Alembert's formula, as stated above as the solution of the homogeneous wave equation in one dimension. The difference is in the third term, the integral over the source."}
+{"text":"In three dimensions, the wave equation, when written in elliptic cylindrical coordinates, may be solved by separation of variables, leading to the Mathieu differential equation."}
+{"text":"The elastic wave equation (also known as the Navier\u2013Cauchy equation) in three dimensions describes the propagation of waves in an isotropic homogeneous elastic medium. Most solid materials are elastic, so this equation describes such phenomena as seismic waves in the Earth and ultrasonic waves used to detect flaws in materials. While linear, this equation has a more complex form than the equations given above, as it must account for both longitudinal and transverse motion:"}
+{"text":"By using the elastic wave equation can be rewritten into the more common form of the Navier\u2013Cauchy equation."}
+{"text":"Note that in the elastic wave equation, both force and displacement are vector quantities. Thus, this equation is sometimes known as the vector wave equation."}
+{"text":"As an aid to understanding, the reader will observe that if and are set to zero, this becomes (effectively) Maxwell's equation for the propagation of the electric field , which has only transverse waves."}
+{"text":"In dispersive wave phenomena, the speed of wave propagation varies with the wavelength of the wave, which is reflected by a dispersion relation"}
+{"text":"where is the angular frequency and is the wavevector describing plane wave solutions. For light waves, the dispersion relation is , but in general, the constant speed gets replaced by a variable phase velocity:"}
+{"text":"This article summarizes equations in the theory of fluid mechanics."}
+{"text":"Here formula_1 is a unit vector in the direction of the flow\/current\/flux."}
+{"text":"This article summarizes equations in the theory of gravitation."}
+{"text":"A common misconception occurs between centre of mass and centre of gravity. They are defined in similar ways but are not exactly the same quantity. Centre of mass is the mathematical description of placing all the mass in the region considered to one position, centre of gravity is a real physical quantity, the point of a body where the gravitational force acts. They are equal if and only if the external gravitational field is uniform."}
+{"text":"& = \\frac{1}{M \\left | \\mathbf{g} \\left ( \\mathbf{r}_\\mathrm{cog} \\right ) \\right |}\\sum_i \\mathbf{r}_i m_i \\left | \\mathbf{g} \\left ( \\mathbf{r}_i \\right ) \\right | \\end{align}\\,\\!<\/math>"}
+{"text":"Centre of gravity for a continuum of mass:"}
+{"text":"In the weak-field and slow motion limit of general relativity, the phenomenon of gravitoelectromagnetism (in short \"GEM\") occurs, creating a parallel between gravitation and electromagnetism. The \"gravitational field\" is the analogue of the electric field, while the \"gravitomagnetic field\", which results from circulations of masses due to their angular momentum, is the analogue of the magnetic field."}
+{"text":"It can be shown that a uniform spherically symmetric mass distribution generates an equivalent gravitational field to a point mass, so all formulae for point masses apply to bodies which can be modelled in this way."}
+{"text":"This article summarizes equations in the theory of waves."}
+{"text":"Relation between space, time, angle analogues used to describe the phase:"}
+{"text":"In what follows \"n, m\" are any integers (Z = set of integers); formula_2."}
+{"text":"Gravitational radiation for two orbiting bodies in the low-speed limit."}
+{"text":"A common misconception occurs between phase velocity and group velocity (analogous to centres of mass and gravity). They happen to be equal in non-dispersive media. In dispersive media the phase velocity is not necessarily the same as the group velocity. The phase velocity varies with frequency."}
+{"text":"Intuitively the wave envelope is the \"global profile\" of the wave, which \"contains\" changing \"local profiles inside the global profile\". Each propagates at generally different speeds determined by the important function called the \"Dispersion Relation\". The use of the explicit form \"\u03c9\"(\"k\") is standard, since the phase velocity \"\u03c9\"\/\"k\" and the group velocity d\"\u03c9\"\/d\"k\" usually have convenient representations by this function."}
+{"text":"Sinusoidal solutions to the 3d wave equation."}
+{"text":"Resultant complex amplitude of all \"N\" waves"}
+{"text":"The transverse displacements are simply the real parts of the complex amplitudes."}
+{"text":"The following may be deduced by applying the principle of superposition to two sinusoidal waves, using trigonometric identities. The \"angle addition\" and \"sum-to-product\" trigonometric formulae are useful; in more advanced work complex numbers and fourier series and transforms are used."}
+{"text":"The Vasiliev equations are generating equations and yield differential equations in the space-time upon solving them order by order with respect to certain auxiliary directions. The equations rely on several ingredients: unfolded equations and higher-spin algebras."}
+{"text":"The exposition below is organised in such a way as to split the Vasiliev's equations into the building blocks and then join them together. The example of the four-dimensional bosonic Vasiliev's equations is reviewed at length since all other dimensions and super-symmetric generalisations are simple modifications of this basic example."}
+{"text":"Three variations of Vasiliev's equations are known: four-dimensional, three-dimensional and d-dimensional. They differ by mild details that are discussed below."}
+{"text":"Higher-spin algebras are global symmetries of the higher-spin theory multiplet. The same time they can be defined as global symmetries of some conformal field theories (CFT), which underlies the kinematic part of the higher-spin AdS\/CFT correspondence, which is a particular case of the AdS\/CFT. Another definition is that higher-spin algebras are quotients of the universal enveloping algebra of the anti-de Sitter algebra formula_1 by certain two-sided ideals. Some more complicated examples of higher-spin algebras exist, but all of them can be obtained by tensoring the simplest higher-spin algebras with matrix algebras and then imposing further constraints. Higher-spin algebras originate as associative algebras and the Lie algebra can be constructed via the commutator."}
+{"text":"In the case of the four-dimensional bosonic higher-spin theory the relevant higher-spin algebra is very simple thanks to formula_2 and can be built upon two-dimensional quantum Harmonic oscillator. In the latter case two pairs of creation\/annihilation operators formula_3 are needed. These can be packed into the quartet"}
+{"text":"formula_4 of operators obeying the canonical commutation relations"}
+{"text":"where formula_6 is the formula_7 invariant tensor, i.e. it is anti-symmetric. As is well known, the bilinears provide an oscillator realization of formula_7:"}
+{"text":"The higher-spin algebra is defined as the algebra of all even functions formula_10 in formula_11. That the functions are even is in accordance with the bosonic content of the higher-spin theory as formula_11 will be shown to be related to the Majorana spinors from the space-time point of view and even powers of formula_11 correspond to tensors. It is an associative algebra and the product is conveniently realised by the Moyal star product:"}
+{"text":"with the meaning that the algebra of operators formula_15 can be replaced with the algebra of function formula_16 in ordinary commuting variables formula_17 (hats off) and the product needs to be replaced with the non-commutative star-product. For example, one finds"}
+{"text":"and therefore formula_19 as it would be the case for the operators. Another representation of the same star-product is more useful in practice:"}
+{"text":"The exponential formula can be derived by integrating by parts and dropping the boundary terms. The prefactor is chosen as to ensure formula_21. In the Lorentz-covariant base we can split formula_22 and we also split formula_23. Then the Lorentz generators are formula_24, formula_25 and the translation generators are formula_26. The formula_27-automorphism can be realized in two equivalent ways: either as formula_28 or as formula_29. In both the cases it leaves the Lorentz generators untouched and flips the sign of translations."}
+{"text":"The higher-spin algebra constructed above can be shown to be the symmetry algebra of the three-dimensional Klein-Gordon equation formula_30. Considering more general free CFT's, e.g. a number of scalars plus a number of fermions, the Maxwell field and other, one can construct more examples of higher-spin algebras."}
+{"text":"The Vasiliev equations are equations in certain bigger space endowed with auxiliary directions to be solved for. The additional directions are given by the doubles of formula_17, called formula_32,"}
+{"text":"which are furthermore entangled with Y. The star-product on the algebra of functions in formula_33 in formula_34-variables is"}
+{"text":"The integral formula here-above is a particular star-product that corresponds to the Weyl ordering among Y's and among Z's, with the opposite signs for the commutator:"}
+{"text":"Moreover, the Y-Z star product is normal ordered with respect to Y-Z and Y+Z as is seen from"}
+{"text":"The higher-spin algebra is an associative subalgebra in the extended algebra. In accordance with the bosonic projection is given by formula_38."}
+{"text":"The essential part of the Vasiliev equations relies on an interesting deformation of the Quantum harmonic oscillator, known as deformed oscillators. First of all, let us pack the usual creation and annihilation operators formula_39 in a doublet formula_40. The canonical commutation relations (the formula_41-factors are introduced to facilitate comparison with Vasiliev's equations)"}
+{"text":"can be used to prove that the bilinears in formula_43 form formula_44 generators"}
+{"text":"In particular, formula_46 rotates formula_43 as an formula_48-vector with formula_49 playing the role of the formula_48-invariant metric. The deformed oscillators are defined by appending the set of generators with an additional generating element formula_51 and postulating"}
+{"text":"Again, one can see that formula_46, as defined above, form formula_48-generators and rotate properly formula_43. At formula_56 we get back to the undeformed oscillators. In fact, formula_43 and formula_46 form the generators of the Lie superalgebra formula_59, where formula_43 should be viewed as odd generators. Then, formula_61 is the part of the defining relations of formula_59."}
+{"text":"One (or two) copies of the deformed oscillator relations form a part of the Vasiliev equations where the generators are replaced with fields and the commutation relations are imposed as field equations."}
+{"text":"The equations for higher-spin fields originate from the Vasiliev equations in the unfolded form."}
+{"text":"Any set of differential equations can be put in the first order form by introducing auxiliary fields to denote derivatives. Unfolded approach is an advanced reformulation of this idea that takes into account gauge symmetries and diffeomorphisms. Instead of just formula_63 the unfolded equations are written in the language of differential forms as"}
+{"text":"where the variables are differential forms formula_65 of various degrees, enumerated by an abstract index formula_66; formula_67 is the exterior derivative formula_68. The structure function formula_69 is assumed to be expandable in exterior product Taylor series as"}
+{"text":"where formula_71 has form degree formula_72 and the sum is over all forms whose form degrees add up to formula_73. The simplest example of unfolded equations are the zero curvature equations formula_74 for a one-form connection formula_75 of any Lie algebra formula_76. Here formula_66 runs over the base of the Lie algebra, and the structure function formula_78 encodes the structure constants of the Lie algebra."}
+{"text":"Since formula_79 the consistency of the unfolded equations requires"}
+{"text":"which is the Frobenius integrability condition. In the case of the zero curvature equation this is just the Jacobi identity. Once the system is integrable it can be shown to have certain gauge symmetries. Every field formula_71 that is a form of non-zero degree formula_72 possesses a gauge parameter formula_83 that is a form of degree formula_84 and the gauge transformations are"}
+{"text":"The Vasiliev equations generate the unfolded equations for a specific field content, which consists of a one-form formula_75 and a zero-form formula_87, both taking values in the higher-spin algebra. Therefore, formula_88 and formula_89, formula_90. The unfolded equations that describe interactions of higher-spin fields are"}
+{"text":"where formula_92 are the interaction vertices that are of higher and higher order in the formula_87-field. The product in the higher-spin algebra is denoted by formula_94. The explicit form of the vertices can be extracted from the Vasiliev equations. The vertices that are bilinear in the fields are determined by the higher-spin algebra. Automorphism formula_27 is induced by the automorphism of the anti-de Sitter algebra that flips the sign of translations, see below."}
+{"text":"If we truncate away higher orders in the formula_87-expansion, the equations are just the zero-curvature condition for a connection formula_75 of the higher-spin algebra and the covariant constancy equation for a zero-form formula_87 that takes values in the twisted-adjoint representation (twist is by the automorphism formula_27)."}
+{"text":"The field content of the Vasiliev equations is given by three fields all taking values in the extended algebra of functions in Y and Z:"}
+{"text":"As to avoid any confusion caused by the differential forms in the auxiliary Z-space and to reveal the relation to the deformed oscillators the Vasiliev equations are written below in the component form."}
+{"text":"The Vasiliev equations can be split into two parts. The first part contains only zero-curvature or covariant constancy equations:"}
+{"text":"where the higher-spin algebra automorphism formula_112 is extended to the full algebra as"}
+{"text":"the latter two forms being equivalent because of the bosonic projection imposed on formula_114."}
+{"text":"Therefore, the first part of the equations implies that there is no nontrivial curvature in the x-space since formula_115 is flat. The second part makes the system nontrivial and determines the curvature of the auxiliary connection formula_108:"}
+{"text":"The existence of the Klein operators is of utter importance for the system. They realise the formula_112 automorphism as an inner one"}
+{"text":"In other words, the Klein operator formula_121 behave as formula_122, i.e. it anti-commutes to odd functions and commute to even functions in y,z."}
+{"text":"These 3+2 equations are the Vasiliev equations for the four-dimensional bosonic higher-spin theory. Several comments are in order."}
+{"text":"To prove that the linearized Vasiliev equations do describe free massless higher-spin fields we need to consider the linearised fluctuations over the anti-de Sitter vacuum. First of all we take the exact solution where formula_150 is a flat connection of the anti-de Sitter algebra, formula_149 and formula_152 and add fluctuations"}
+{"text":"Above it was used several times that formula_155, i.e. the vacuum value of the S-field acts as the derivative under the commutator. It is convenient to split the four-component Y,Z into two-component variables as formula_156. Another trick that was used in the fourth equation is the invertibility of the Klein operators:"}
+{"text":"The fifth of the Vasiliev equations is now split into the last three equation above."}
+{"text":"The analysis of the linearized fluctuations is in solving the equations one by one in the right order. Recall that one expects to find unfolded equations for two fields: one-form formula_158 and zero-form formula_104. From the fourth equation it follows that"}
+{"text":"formula_160 does not depend on the auxiliary Z-direction. Therefore, one can identify formula_161."}
+{"text":"The second equation then immediately leads to"}
+{"text":"where formula_163 is the Lorentz covariant derivative"}
+{"text":"where ... denote the term with formula_165 that is similar to the first one. The Lorentz covariant derivative comes from the usual commutator action of the spin-connection part of formula_143. The term with the vierbein results from the formula_112-automorphism that flips the sign of the AdS-translations and produces anti-commutator formula_168."}
+{"text":"To read off the content of the C-equation one needs to expand it in Y and analyze the C-equation component-wise"}
+{"text":"Then various components can be seen to have the following interpretation:"}
+{"text":"The last three equations can be recognized to be the equations of the form formula_179 where formula_180 is the exterior derivative on the space of differential forms in the Z-space. Such equations can be solved with the help of the Poincare Lemma. In addition one needs to know how to multiply by the Klein operator from the right, which is easy to derive from the integral formula for the star-product:"}
+{"text":"I.e. the result is to exchange the half of the Y and Z variables and to flip the sign. The solution to the last three equations can be written as"}
+{"text":"where a similar formula exists for formula_183."}
+{"text":"Here the last term is the gauge ambiguity, i.e. the freedom to add exact forms in the Z-space, and formula_184."}
+{"text":"One can gauge fix it to have formula_185. Then, one plugs the solution to the third equation, which of the same type, i.e. a differential equation of the first order in the Z-space. Its general solution is again given by the Poincare Lemma"}
+{"text":"where formula_187 is the integration constant in the Z-space, i.e. the de-Rham cohomology. It is this integration constant that is to be identified with the one-form formula_188 as the name suggests. After some algebra one finds"}
+{"text":"where we again dropped a term with dotted and undotted indices exchanged. The last step is to plug the solution into the first equation to find"}
+{"text":"and again the second term on the right is omitted. It is important that formula_191 is not a flat connection, while formula_192 is a flat connection. To analyze the formula_191-equations it is useful to expand formula_191 in Y"}
+{"text":"The content of the formula_191-equation is as follows:"}
+{"text":"To conclude, anti-de Sitter space is an exact solution of the Vasiliev equations and upon linearization over it one finds unfolded equations that are equivalent to the Fronsdal equations for fields with s=0,1,2,3... ."}
+{"text":"so that the fields are now function of formula_208 and space-time coordinates. The components of the fields are required to have the right spin-statistic. The equations need to be slightly modified."}
+{"text":"There also exist Vasiliev's equations in other dimensions:"}
+{"text":"The equations are very similar to the four-dimensional ones, but there are some important modifications in the definition of the algebra that the fields take values in and there are further constraints in the d-dimensional case."}
+{"text":"Discrepancies between Vasiliev equations and Higher Spin Theories."}
+{"text":"Most of the studies concern with the four-dimensional Vasiliev equations."}
+{"text":"The correction to the free spin-2 equations due to the scalar field stress-tensor was extracted out of the four-dimensional Vasiliev equations and found to be"}
+{"text":"where formula_211 are symmetrized derivatives with traces subtracted. The most important information is in the coefficients formula_212 and in the prefactor formula_213, where formula_214 is a free parameter that the equations have, see Other dimensions, extensions, and generalisations. It is important to note that the usual stress-tensor has no more than two derivative and the terms formula_215 are not independent (for example, they contribute to the same formula_216 AdS\/CFT three-point function). This is a general property of field theories that one can perform nonlinear (and also higher derivative) field redefinitions and therefore there exist infinitely many ways to write the same interaction vertex at the classical level. The canonical stress-tensor has two derivatives and the terms with contracted derivatives can be related to it via such redefinitions."}
+{"text":"A surprising fact that had been noticed before its inconsistency with the AdS\/CFT was realized is that the stress-tensor can change sign and, in particular, vanishes for formula_217. This would imply that the corresponding correlation function in the Chern-Simons matter theories vanishes, formula_218, which is not the case."}
+{"text":"The most important and detailed tests were performed much later. It was first shown that some of the three-point AdS\/CFT functions, as obtained from the Vasiliev equations, turn out to be infinite or inconsistent with AdS\/CFT, while some other do agree. Those that agree, in the language of Unfolded equations correspond to formula_219 and the infinities\/inconsistencies resulted from formula_220. The terms of the first type are local and are fixed by the higher spin algebra. The terms of the second type can be non-local (when solved perturbatively the master field formula_221 is a generating functions of infinitely many derivatives of higher spin fields). These non-localities are not present in higher spin theories as can be seen from the explicit cubic action."}
+{"text":"As is briefly mentioned in Other dimensions, extensions, and generalisations there is an option to introduce infinitely many additional coupling constants that enter via phase factor formula_223. As was noted, the second such coefficient formula_224 will affect five-point AdS\/CFT correlation functions, but not the three-point ones, which seems to be in tension with the results obtained directly from imposing higher spin symmetry on the correlation functions. Later, it was shown that the terms in the equations that result from"}
+{"text":"formula_225 are too non-local and lead to an infinite result for the AdS\/CFT correlation functions."}
+{"text":"In three dimensions the Prokushkin-Vasiliev equations, which are supposed to describe interactions of matter fields with higher spin fields in three dimensions, are also affected by the aforementioned locality problem. For example, the perturbative corrections at the second order to the stress-tensors of the matter fields lead to infinite correlation functions. There is, however, another discrepancy: the spectrum of the Prokushkin-Vasiliev equations has, in addition to the matter fields (scalar and spinor) and higher spin fields, a set of unphysical fields that do not have any field theory interpretation, but interact with the physical fields."}
+{"text":"Since the Vasiliev equations are quite complicated there are few exact solutions known"}
+{"text":"In general relativity, the quadrupole formula describes the rate at which gravitational waves are emitted from a system of masses based on the change of the (mass) quadrupole moment. The formula reads"}
+{"text":"where formula_2 is the spatial part of the trace reversed perturbation of the metric, i.e. the gravitational wave. formula_3 is the gravitational constant, formula_4 the speed of light in vacuum, and formula_5 is the mass quadrupole moment."}
+{"text":"The formula was first obtained by Albert Einstein in 1918. After a long history of debate on its physical correctness, observations of energy loss due to gravitational radiation in the Hulse\u2013Taylor binary discovered in 1974 confirmed the result, with agreement up to 0.2 percent (by 2005)."}
+{"text":"In physics, Born reciprocity, also called reciprocal relativity or Born\u2013Green reciprocity, is a principle set up by theoretical physicist Max Born that calls for a duality-symmetry among space and momentum. Born and his co-workers expanded his principle to a framework that is also known as reciprocity theory."}
+{"text":"Born noticed a symmetry among configuration space and momentum space representations of a free particle, in that its wave function description is invariant to a change of variables \"x\"\u00a0\u2192\u00a0\"p\" and \"p\"\u00a0\u2192\u00a0\u2212\"x\". (It can also be worded such as to include scale factors, e.g. invariance to \"x\"\u00a0\u2192\u00a0\"ap\" and \"p\"\u00a0\u2192\u00a0\u2212\"bx\" where \"a\", \"b\" are constants.) Born hypothesized that such symmetry should apply to the four-vectors of special relativity, that is, to the four-vector space coordinates"}
+{"text":"Both in classical and in quantum mechanics, the Born reciprocity conjecture postulates that the transformation \"x\"\u00a0\u2192\u00a0\"p\" and \"p\"\u00a0\u2192\u00a0\u2212\"x\" leaves invariant the Hamilton equations:"}
+{"text":"From his reciprocity approach, Max Born conjectured the invariance of a space-time-momentum-energy line element. Born and H.S. Green similarly introduced the notion an invariant (quantum) metric operator formula_5 as extension of the Minkowski metric of special relativity to an invariant metric on phase space coordinates. The metric is invariant under the group of quaplectic transformations."}
+{"text":"Such a reciprocity as called for by Born can be observed in much, but not all, of the formalism of classical and quantum physics. Born's reciprocity theory was not developed much further for reason of difficulties in the mathematical foundations of the theory."}
+{"text":"However Born's idea of a quantum metric operator was later taken up by Hideki Yukawa when developing his nonlocal quantum theory in the 1950s. In 1981, Eduardo R. Caianiello proposed a \"maximal acceleration\", similarly as there is a minimal length at Planck scale, and this concept of maximal acceleration has been expanded upon by others. It has also been suggested that Born reciprocity may be the underlying physical reason for the T-duality symmetry in string theory, and that Born reciprocity may be of relevance to developing a quantum geometry."}
+{"text":"Born chose the term \"reciprocity\" for the reason that in a crystal lattice, the motion of a particle can be described in \"p\"-space by means of the reciprocal lattice."}
+{"text":"This article summarizes equations used in optics, including geometric optics, physical optics, radiometry, diffraction, and interferometry."}
+{"text":"There are different forms of the Poynting vector, the most common are in terms of the E and B or E and H fields."}
+{"text":"For spectral quantities two definitions are in use to refer to the same quantity, in terms of frequency or wavelength."}
+{"text":"In astrophysics, \"L\" is used for \"luminosity\" (energy per unit time, equivalent to \"power\") and \"F\" is used for \"energy flux\" (energy per unit time per unit area, equivalent to \"intensity\" in terms of area, not solid angle). They are not new quantities, simply different names."}
+{"text":"The capstan equation or belt friction equation, also known as Eytelwein's formula (after Johann Albert Eytelwein), relates the hold-force to the load-force if a flexible line is wound around a cylinder (a bollard, a winch or a capstan)."}
+{"text":"Because of the interaction of frictional forces and tension, the tension on a line wrapped around a capstan may be different on either side of the capstan. A small \"holding\" force exerted on one side can carry a much larger \"loading\" force on the other side; this is the principle by which a capstan-type device operates."}
+{"text":"A holding capstan is a ratchet device that can turn only in one direction; once a load is pulled into place in that direction, it can be held with a much smaller force. A powered capstan, also called a winch, rotates so that the applied tension is multiplied by the friction between rope and capstan. On a tall ship a holding capstan and a powered capstan are used in tandem so that a small force can be used to raise a heavy sail and then the rope can be easily removed from the powered capstan and tied off."}
+{"text":"In rock climbing with so-called top-roping, a lighter person can hold (belay) a heavier person due to this effect."}
+{"text":"where formula_2 is the applied tension on the line, formula_3 is the resulting force exerted at the other side of the capstan, formula_4 is the coefficient of friction between the rope and capstan materials, and formula_5 is the total angle swept by all turns of the rope, measured in radians (i.e., with one full turn the angle formula_6)."}
+{"text":"Several assumptions must be true for the formula to be valid:"}
+{"text":"It can be observed that the force gain increases exponentially with the coefficient of friction, the number of turns around the cylinder, and the angle of contact. Note that \"the radius of the cylinder has no influence on the force gain\"."}
+{"text":"The table below lists values of the factor formula_9 based on the number of turns and coefficient of friction \"\u03bc\"."}
+{"text":"From the table it is evident why one seldom sees a sheet (a rope to the loose side of a sail) wound more than three turns around a winch. The force gain would be extreme besides being counter-productive since there is risk of a riding turn, result being that the sheet will foul, form a knot and not run out when eased (by slacking grip on the tail (free end))."}
+{"text":"It is both ancient and modern practice for anchor capstans and jib winches to be slightly flared out at the base, rather than cylindrical, to prevent the rope (anchor warp or sail sheet) from sliding down. The rope wound several times around the winch can slip upwards gradually, with little risk of a riding turn, provided it is tailed (loose end is pulled clear), by hand or a self-tailer."}
+{"text":"For instance, the factor \"153,552,935\" (5 turns around a capstan with a coefficient of friction of 0.6) means, in theory, that a newborn baby would be capable of holding (not moving) the weight of two supercarriers (97,000 tons each, but for the baby it would be only a little more than 1\u00a0kg). The large number of turns around the capstan combined with such a high friction coefficient mean that very little additional force is necessary to hold such heavy weight in place. The cables necessary to support this weight, as well as the capstan's ability to withstand the crushing force of those cables, are separate considerations."}
+{"text":"Generalization of the capstan equation for a V-belt."}
+{"text":"The belt friction equation for a v-belt is:"}
+{"text":"where formula_11 is the angle (in radians) between the two flat sides of the pulley that the v-belt presses against. A flat belt has an effective angle of formula_12."}
+{"text":"The material of a V-belt or multi-V serpentine belt tends to wedge into the mating groove in a pulley as the load increases, improving torque transmission."}
+{"text":"For the same power transmission, a V-belt requires less tension than a flat belt, increasing bearing life."}
+{"text":"Generalization of the capstan equation for a rope lying on an arbitrary orthotropic surface."}
+{"text":"If a rope is lying in equilibrium under tangential forces on a rough orthotropic surface then all three following conditions are satisfied:"}
+{"text":"This generalization has been obtained by Konyukhov."}
+{"text":"In general relativity and tensor calculus, the Palatini identity is:"}
+{"text":"where formula_2 denotes the variation of Christoffel symbols and formula_3 indicates covariant differentiation."}
+{"text":"A proof can be found in the entry Einstein\u2013Hilbert action."}
+{"text":"The \"same\" identity holds for the Lie derivative formula_4. In fact, one has:"}
+{"text":"where formula_6 denotes any vector field on the spacetime manifold formula_7."}
+{"text":"The Breit equation is a relativistic wave equation derived by Gregory Breit in 1929 based on the Dirac equation, which formally describes two or more massive spin-1\/2 particles (electrons, for example) interacting electromagnetically to the first order in perturbation theory. It accounts for magnetic interactions and retardation effects to the order of \"1\/c2\". When other quantum electrodynamic effects are negligible, this equation has been shown to give results in good agreement with experiment. It was originally derived from the Darwin Lagrangian but later vindicated by the Wheeler\u2013Feynman absorber theory and eventually quantum electrodynamics."}
+{"text":"The Breit equation is not only an approximation in terms of quantum mechanics, but also in terms of relativity theory as it is not completely invariant with respect to the Lorentz transformation. Just as does the Dirac equation, it treats nuclei as point sources of an external field for the particles it describes. For \"N\" particles, the Breit equation has the form (\"rij\" is the distance between particle \"i\" and \"j\"):"}
+{"text":"is the Dirac Hamiltonian (see Dirac equation) for particle \"i\" at position r\"i\" and \"\u03c6\"(r\"i\") is the scalar potential at that position; \"qi\" is the charge of the particle, thus for electrons \"qi\" = \u2212\"e\"."}
+{"text":"The one-electron Dirac Hamiltonians of the particles, along with their instantaneous Coulomb interactions 1\/\"rij\", form the \"Dirac-Coulomb\" operator. To this, Breit added the operator (now known as the (frequency-independent) Breit operator):"}
+{"text":"where the Dirac matrices for electron \"i\": a(\"i\") = [\u03b1x(\"i\"),\u03b1y(\"i\"),\u03b1z(\"i\")]. The two terms in the Breit operator account for retardation effects to the first order."}
+{"text":"The wave function \u03a8 in the Breit equation is a spinor with 4\"N\" elements, since each electron is described by a Dirac bispinor with 4 elements as in the Dirac equation, and the total wave function is the tensor product of these."}
+{"text":"The total Hamiltonian of the Breit equation, sometimes called the Dirac-Coulomb-Breit Hamiltonian (\"HDCB\") can be decomposed into the following practical energy operators for electrons in electric and magnetic fields (also called the Breit-Pauli Hamiltonian), which have well-defined meanings in the interaction of molecules with magnetic fields (for instance for nuclear magnetic resonance):"}
+{"text":"in which the consecutive partial operators are:"}
+{"text":"Following is a list of the frequently occurring equations in the theory of special relativity."}
+{"text":"To derive the equations of special relativity, one must start with two postulates:"}
+{"text":"In this context, \"speed of light\" really refers to the speed supremum of information transmission or of the movement of ordinary (nonnegative mass) matter, locally, as in a classical vacuum. Thus, a more accurate description would refer to formula_1 rather than the speed of light per se. However, light and other massless particles do theoretically travel at formula_1 under vacuum conditions and experiment has nonfalsified this notion with fairly high precision. Regardless of whether light itself does travel at formula_1, though formula_1 does act as such a supremum, and that is the assumption which matters for Relativity."}
+{"text":"From these two postulates, all of special relativity follows."}
+{"text":"In the following, the relative velocity \"v\" between two inertial frames is restricted fully to the \"x\"-direction, of a Cartesian coordinate system."}
+{"text":"The following notations are used very often in special relativity:"}
+{"text":"where \u03b2 = formula_7 and \"v\" is the relative velocity between two inertial frames."}
+{"text":"For two frames at rest, \u03b3 = 1, and increases with relative velocity between the two inertial frames. As the relative velocity approaches the speed of light, \u03b3 \u2192 \u221e."}
+{"text":"In this example the time measured in the frame on the vehicle, \"t\", is known as the proper time. The proper time between two events - such as the event of light being emitted on the vehicle and the event of light being received on the vehicle - is the time between the two events in a frame where the events occur at the same location. So, above, the emission and reception of the light both took place in the vehicle's frame, making the time that an observer in the vehicle's frame would measure the proper time."}
+{"text":"This is the formula for length contraction. As there existed a proper time for time dilation, there exists a proper length for length contraction, which in this case is \"\". The proper length of an object is the length of the object in the frame in which the object is at rest. Also, this contraction only affects the dimensions of the object which are parallel to the relative velocity between the object and observer. Thus, lengths perpendicular to the direction of motion are unaffected by length contraction."}
+{"text":"In what follows, bold sans serif is used for 4-vectors while normal bold roman is used for ordinary 3-vectors."}
+{"text":"where formula_18 is known as the metric tensor. In special relativity, the metric tensor is the Minkowski metric:"}
+{"text":"In the above, \"ds\"2 is known as the spacetime interval. This inner product is invariant under the Lorentz transformation, that is,"}
+{"text":"The sign of the metric and the placement of the \"ct\", \"ct\"', \"cdt\", and \"cdt\u2032\" time-based terms can vary depending on the author's choice. For instance, many times the time-based terms are placed first in the four-vectors, with the spatial terms following. Also, sometimes \"\u03b7\" is replaced with \u2212\"\u03b7\", making the spatial terms produce negative contributions to the dot product or spacetime interval, while the time term makes a positive contribution. These differences can be used in any combination, so long as the choice of standards is followed completely throughout the computations performed."}
+{"text":"It is possible to express the above coordinate transformation via a matrix. To simplify things, it can be best to replace \"t\", \"t\u2032\", \"dt\", and \"dt\u2032\" with \"ct\", \"ct\"', \"cdt\", and \"cdt\u2032\", which has the dimensions of distance. So:"}
+{"text":"The vectors in the above transformation equation are known as four-vectors, in this case they are specifically the position four-vectors. In general, in special relativity, four-vectors can be transformed from one reference frame to another as follows:"}
+{"text":"In the above, formula_28 and formula_29 are the four-vector and the transformed four-vector, respectively, and \u039b is the transformation matrix, which, for a given transformation is the same for all four-vectors one might want to transform. So formula_28 can be a four-vector representing position, velocity, or momentum, and the same \u039b can be used when transforming between the same two frames. The most general Lorentz transformation includes boosts and rotations; the components are complicated and the transformation requires spinors."}
+{"text":"Invariance and unification of physical quantities both arise from four-vectors. The inner product of a 4-vector with itself is equal to a scalar (by definition of the inner product), and since the 4-vectors are physical quantities their magnitudes correspond to physical quantities also."}
+{"text":"Doppler shift for emitter and observer moving right towards each other (or directly away):"}
+{"text":"Doppler shift for emitter and observer moving in a direction perpendicular to the line connecting them:"}
+{"text":"The Elliott formula describes analytically, or with few adjustable parameters such as the dephasing constant, the light absorption or emission spectra of solids. It was originally derived by Roger James Elliott to describe linear absorption based on properties of a single electron\u2013hole pair. The analysis can be extended to a many-body investigation with full predictive powers when all parameters are computed microscopically using, e.g., the semiconductor Bloch equations (abbreviated as SBEs) or the semiconductor luminescence equations (abbreviated as SLEs)."}
+{"text":"One of the most accurate theories of semiconductor absorption and photoluminescence is provided by the SBEs and SLEs, respectively. Both of them are systematically derived starting from the many-body\/quantum-optical system Hamiltonian and fully describe the resulting quantum dynamics of optical and quantum-optical observables such as optical polarization (SBEs) and photoluminescence intensity (SLEs). All relevant many-body effects can be systematically included by using various techniques such as the cluster-expansion approach."}
+{"text":"These exciton eigenstates provide valuable insight to SBEs and SLEs, especially, when one analyses the linear semiconductor absorption spectrum or photoluminescence at steady-state conditions. One simply uses the constructed eigenstates to diagonalize the homogeneous parts of the SBEs and SLEs. Under the steady-state conditions, the resulting equations can be solved analytically when one further approximates dephasing due to higher-order many-body effects. When such effects are fully included, one must resort to a numeric approach. After the exciton states are obtained, one can eventually express the linear absorption and steady-state photoluminescence analytically."}
+{"text":"The same approach can be applied to compute absorption spectrum for fields that are in the terahertz (abbreviated as THz) range of electromagnetic radiation. Since the THz-photon energy lies within the meV range, it is mostly resonant with the many-body states, not the interband transitions that are typically in the eV range. Technically, the THz investigations are an extension of the ordinary SBEs and\/or involve solving the dynamics of two-particle correlations explicitly. Like for the optical absorption and emission problem, one can diagonalize the homogeneous parts that emerge analytically with the help of the exciton eigenstates. Once the diagonalization is completed, one can then compute the THz absorption analytically."}
+{"text":"All of these derivations rely on the steady-state conditions and the analytic knowledge of the exciton states. Furthermore, the effect of further many-body contributions, such as the excitation-induced dephasing, can be included microscopically to the Wannier solver, which removes the need to introduce phenomenological dephasing constant, energy shifts, or screening of the Coulomb interaction."}
+{"text":"Linear absorption of broadband weak optical probe can then be expressed as"}
+{"text":"where formula_5 is the probe-photon energy, formula_6 is the oscillator strength of the exciton state formula_2, and formula_8 is the dephasing constant associated with the exciton state formula_2. For a phenomenological description, formula_8 can be used as a single fit parameter, i.e., formula_11. However, a full microscopic computation generally produces formula_12 that depends on both exciton index formula_2 and photon frequency. As a general tendency, formula_12 increases for elevated formula_3 while the formula_16 dependence is often weak."}
+{"text":"Each of the exciton resonances can produce a peak to the absorption spectrum when the photon energy matches with formula_3. For direct-gap semiconductors, the oscillator strength is proportional to the product of dipole-matrix element squared and formula_18 that vanishes for all states except for those that are spherically symmetric. In other words, formula_6 is nonvanishing only for the formula_20-like states, following the quantum-number convention of the hydrogen problem. Therefore, optical spectrum of direct-gap semiconductors produces an absorption resonance only for the formula_20-like state. The width of the resonance is determined by the corresponding dephasing constant."}
+{"text":"In general, the exciton eigen energies consist of a series of bound states that emerge energetically well below the fundamental bandgap energy and a continuum of unbound states that appear for energies above the bandgap. Therefore, a typical semiconductor's low-density absorption spectrum shows a series of exciton resonances and then a continuum-absorption tail. For realistic situations, formula_8 increases more rapidly than the exciton-state spacing so that one typically resolves only few lowest exciton resonances in actual experiments."}
+{"text":"The concentration of charge carriers influence the shape of the absorption spectrum considerably. For high enough densities, all formula_3 energies correspond to continuum states and some of the oscillators strengths may become negative-valued due to the Pauli-blocking effect. Physically, this can be understood as the elementary property of Fermions; if a given electronic state is already excited it cannot be excited a second time due to the Pauli exclusion among Fermions. Therefore, the corresponding electronic states can produce only photon emission that is seen as negative absorption, i.e., gain that is the prerequisite to realizing semiconductor lasers."}
+{"text":"Even though one can understand the principal behavior of semiconductor absorption on the basis of the Elliott formula, detailed predictions of the exact formula_3, formula_6, and formula_12 requires a full many-body computation already for moderate carrier densities."}
+{"text":"After the semiconductor becomes electronically excited, the carrier system relaxes into a quasiequilibrium. At the same time, vacuum-field fluctuations trigger spontaneous recombination of electrons and holes (electronic vacancies) via spontaneous emission of photons. At quasiequilibrium, this yields a steady-state photon flux emitted by the semiconductor. By starting from the SLEs, the steady-state photoluminescence (abbreviated as PL) can be cast into the form"}
+{"text":"that is very similar to the Elliott formula for the optical absorption. As a major difference, the numerator has a new contribution \u2013 the spontaneous-emission source"}
+{"text":"that contains electron and hole distributions formula_28 and formula_29, respectively, where formula_30 is the carrier momentum. Additionally, formula_31 contains also a direct contribution from exciton populations formula_32 that describes truly bound electron\u2013hole pairs."}
+{"text":"The formula_33 term defines the probability to find an electron and a hole with same formula_34. Such a form is expected for a probability of two uncorrelated events to occur simultaneously at a desired formula_34 value. Therefore, formula_33 is the spontaneous-emission source originating from uncorrelated electron\u2013hole plasma. The possibility to have truly correlated electron\u2013hole pairs is defined by a two-particle exciton correlation formula_32; the corresponding probability is directly proportional to the correlation. Nevertheless, both the presence of electron\u2013hole plasma and excitons can equivalently induce the spontaneous emission. A further discussion of the relative weight and nature of plasma vs. exciton sources is presented in connection with the SLEs."}
+{"text":"Like for the absorption, a direct-gap semiconductor emits light only at the resonances corresponding to the formula_20-like states. As a typical trend, a quasiequilibrium emission is strongly peaked around the 1\"s\" resonance because formula_31 is usually largest for the formula_40 ground state. This emission peak often remains well below the fundamental bandgap energy even at the high excitations where all states are continuum states. This demonstrates that semiconductors are often subjects to massive Coulomb-induced renormalizations even when the system appears to have only electron\u2013hole plasma states as emission resonances. To make an accurate prediction of the exact position and shape at elevated carrier densities, one must resort to the full SLEs."}
+{"text":"As discussed above, it is often meaningful to tune the electromagnetic field to be resonant with the transitions between two many-body states. For example, one can follow how a bound exciton is excited from its 1\"s\" ground state to a 2\"p\" state. In several semiconductor systems, one needs THz fields to induce such transitions. By starting from a steady-state configuration of electron\u2013hole correlations, the diagonalization of THz-induced dynamics yields a THz absorption spectrum"}
+{"text":"(\\omega) = \\mathrm{Im}\\left[ \\frac{\\sum_{\\nu, \\lambda} S^{\\nu, \\lambda} (\\omega) \\Delta N_{\\nu,\\lambda} - \\left[ S^{\\nu, \\lambda}(-\\omega) \\Delta N_{\\nu,\\lambda}\\right]^{\\star} }{ \\omega (\\hbar \\omega + \\mathrm{i} \\gamma(\\omega))} \\right]\\;."}
+{"text":"In this notation, the diagonal contributions formula_41 determine the population of formula_2 excitons. The off-diagonal formula_43 elements formally determine transition amplitudes between two exciton states formula_44 and formula_45. For elevated densities, formula_43 build up spontaneously and they describe correlated electron\u2013hole plasma that is a state where electrons and holes move with respect to each other without forming bound pairs."}
+{"text":"In contrast to optical absorption and photoluminescence, THz absorption may involve all exciton states. This can be seen from the spectral response function"}
+{"text":"that contains the current-matrix elements formula_48 between two exciton states. The unit vector formula_49 is determined by the direction of the THz field. This leads to dipole selection rules among exciton states, in full analog to the atomic dipole selection rules. Each allowed transition produces a resonance in formula_50 and the resonance width is determined by a dephasing constant formula_51 that generally depends on exciton states involved and the THz frequency formula_16. The THz response also contains formula_53 that stems from the decay constant of macroscopic THz currents."}
+{"text":"In contrast to optical and photoluminescence spectroscopy, THz absorption can directly measure the presence of exciton populations in full analogy to atomic spectroscopy. For example, the presence of a pronounced 1\"s\"-to-2\"p\" resonance in THz absorption uniquely identifies the presence of excitons as detected experimentally in Ref. As a major difference to atomic spectroscopy, semiconductor resonances contain a strong excitation-induced dephasing that produces much broader resonances than in atomic spectroscopy. In fact, one typically can resolve only one 1\"s\"-to-2\"p\" resonance because the dephasing constant formula_54 is broader than energetic spacing of n-\"p\" and (n+1)-\"p\" states making 1\"s\"-to-n-\"p\" and 1\"s\"-to-(n+1)\"p\" resonances merge into one asymmetric tail."}
+{"text":"The sigma model was introduced by ; the name \u03c3-model comes from a field in their model corresponding to a spinless meson called , a scalar meson introduced earlier by Julian Schwinger. The model served as the dominant prototype of spontaneous symmetry breaking of O(4) down to O(3): the three axial generators broken are the simplest manifestation of chiral symmetry breaking, the surviving unbroken O(3) representing isospin."}
+{"text":"In conventional particle physics settings, the field is generally taken to be SU(N), or the vector subspace of quotient formula_1 of the product of left and right chiral fields. In condensed matter theories, the field is taken to be O(N). For the rotation group O(3), the sigma model describes the isotropic ferromagnet; more generally, the O(N) model shows up in the quantum Hall effect, superfluid Helium-3 and spin chains."}
+{"text":"In supergravity models, the field is taken to be a symmetric space. Since symmetric spaces are defined in terms of their involution, their tangent space naturally splits into even and odd parity subspaces. This splitting helps propel the dimensional reduction of Kaluza\u2013Klein theories."}
+{"text":"In its most basic form, the sigma model can be taken as being purely the kinetic energy of a point particle; as a field, this is just the Dirichlet energy in Euclidean space."}
+{"text":"In two spatial dimensions, the O(3) model is completely integrable."}
+{"text":"The Lagrangian density of the sigma model can be written in a variety of different ways, each suitable to a particular type of application. The simplest, most generic definition writes the Lagrangian as the metric trace of the pullback of the metric tensor on a Riemannian manifold. For formula_2 a field over a spacetime formula_3, this may be written as"}
+{"text":"where the formula_5 is the metric tensor on the field space formula_6, and the formula_7 are the derivatives on the underlying spacetime manifold."}
+{"text":"This expression can be unpacked a bit. The field space formula_8 can be chosen to be any Riemannian manifold. Historically, this is the \"sigma\" of the sigma model; the historically-appropriate symbol formula_9 is avoided here to prevent clashes with many other common usages of formula_9 in geometry. Riemannian manifolds always come with a metric tensor formula_11. Given an atlas of charts on formula_8, the field space can always be locally trivialized, in that given formula_13 in the atlas, one may write a map formula_14 giving explicit local coordinates formula_15 on that patch. The metric tensor on that patch is a matrix having components formula_16"}
+{"text":"The base manifold formula_3 must be a differentiable manifold; by convention, it is either Minkowski space in particle physics applications, flat two-dimensional Euclidean space for condensed matter applications, or a Riemann surface, the worldsheet in string theory. The formula_18 is just the plain-old covariant derivative on the base spacetime manifold formula_19 When formula_3 is flat, formula_21 is just the ordinary gradient of a scalar function (as formula_22 is a scalar field, from the point of view of formula_3 itself.) In more precise language, formula_24 is a section of the jet bundle of formula_25."}
+{"text":"Taking formula_26 the Kronecker delta, \"i.e.\" the scalar dot product in Euclidean space, one gets the formula_27 non-linear sigma model. That is, write formula_28 to be the unit vector in formula_29, so that formula_30, with formula_31 the ordinary Euclidean dot product. Then formula_32 the formula_33-sphere, the isometries of which are the rotation group formula_27. The Lagrangian can then be written as"}
+{"text":"For formula_36, this is the continuum limit of the isotropic ferromagnet on a lattice, i.e. of the classical Heisenberg model. For formula_37, this is the continuum limit of the classical XY model. See also the n-vector model and the Potts model for reviews of the lattice model equivalents. The continuum limit is taken by writing"}
+{"text":"as the finite difference on neighboring lattice locations formula_39 Then formula_40 in the limit formula_41, and formula_42 after dropping the constant terms formula_43 (the \"bulk magnetization\")."}
+{"text":"The sigma model can also be written in a more fully geometric notation, as a fiber bundle with fibers formula_8 over a differentiable manifold formula_3. Given a section formula_2, fix a point formula_47 The pushforward at formula_48 is a map of tangent bundles"}
+{"text":"where formula_51 is taken to be an orthonormal vector space basis on formula_52 and formula_53 the vector space basis on formula_54. The formula_55 is a differential form. The sigma model action is then just the conventional inner product on vector-valued \"k\"-forms"}
+{"text":"where the formula_57 is the wedge product, and the formula_58 is the Hodge star. This is an inner product in two different ways. In the first way, given \"any\" two differentiable forms formula_59 in formula_3, the Hodge dual defines an invariant inner product on the space of differential forms, commonly written as"}
+{"text":"The above is an inner product on the space of square-integrable forms, conventionally taken to be the Sobolev space formula_62 In this way, one may write"}
+{"text":"This makes it explicit and plainly evident that the sigma model is just the kinetic energy of a point particle. From the point of view of the manifold formula_3, the field formula_22 is a scalar, and so formula_55 can be recognized just the ordinary gradient of a scalar function. The Hodge star is merely a fancy device for keeping track of the volume form when integrating on curved spacetime. In the case that formula_3 is flat, it can be completely ignored, and so the action is"}
+{"text":"which is the Dirichlet energy of formula_22. Classical extrema of the action (the solutions to the Lagrange equations) are then those field configurations that minimize the Dirichlet energy of formula_22. Another way to convert this expression into a more easily-recognizable form is to observe that, for a scalar function formula_71 one has formula_72 and so one may also write"}
+{"text":"where formula_74 is the Laplace\u2013Beltrami operator, \"i.e.\" the ordinary Laplacian when formula_3 is flat."}
+{"text":"That there is \"another\", second inner product in play simply requires not forgetting that formula_55 is a vector from the point of view of formula_8 itself. That is, given \"any\" two vectors formula_78, the Riemannian metric formula_79 defines an inner product"}
+{"text":"Since formula_55 is vector-valued formula_82 on local charts, one also takes the inner product there as well. More verbosely,"}
+{"text":"The tension between these two inner products can be made even more explicit by noting that"}
+{"text":"is a bilinear form; it is a pullback of the Riemann metric formula_79. The individual formula_86 can be taken as vielbeins. The Lagrangian density of the sigma model is then"}
+{"text":"for formula_88 the metric on formula_19 Given this gluing-together, the formula_55 can be interpreted as a solder form; this is articulated more fully, below."}
+{"text":"Several interpretational and foundational remarks can be made about the classical (non-quantized) sigma model. The first of these is that the classical sigma model can be interpreted as a model of non-interacting quantum mechanics. The second concerns the interpretation of energy."}
+{"text":"given above. Taking formula_92, the function formula_93 can be interpreted as a wave function, and its Laplacian the kinetic energy of that wave function. The formula_94 is just some geometric machinery reminding one to integrate over all space. The corresponding quantum mechanical notation is formula_95 In flat space, the Laplacian is conventionally written as formula_96. Assembling all these pieces together, the sigma model action is equivalent to"}
+{"text":"which is just the grand-total kinetic energy of the wave-function formula_98, up to a factor of formula_99. To conclude, the classical sigma model on formula_100 can be interpreted as the quantum mechanics of a free, non-interacting quantum particle. Obviously, adding a term of formula_101 to the Lagrangian results in the quantum mechanics of a wave-function in a potential. Taking formula_102 is not enough to describe the formula_33-particle system, in that formula_33 particles require formula_33 distinct coordinates, which are not provided by the base manifold. This can be solved by taking formula_33 copies of the base manifold."}
+{"text":"It is very well-known that the geodesic structure of a Riemannian manifold is described by the Hamilton\u2013Jacobi equations. In thumbnail form, the construction is as follows. \"Both\" formula_3 and formula_8 are Riemannian manifolds; the below is written for formula_8, the same can be done for formula_3. The cotangent bundle formula_111, supplied with coordinate charts, can always be locally trivialized, \"i.e.\""}
+{"text":"The trivialization supplies canonical coordinates formula_113 on the cotangent bundle. Given the metric tensor formula_79 on formula_8, define the Hamiltonian function"}
+{"text":"where, as always, one is careful to note that the inverse of the mertric is used in this definition: formula_117 Famously, the geodesic flow on formula_8 is given by the Hamilton\u2013Jacobi equations"}
+{"text":"The geodesic flow is the Hamiltonian flow; the solutions to the above are the geodesics of the manifold. Note, incidentally, that formula_121 along geodesics; the time parameter formula_122 is the distance along the geodesic."}
+{"text":"The sigma model takes the momenta in the two manifolds formula_111 and formula_124 and solders them together, in that formula_55 is a solder form. In this sense, the interpretation of the sigma model as an energy functional is not surprising; it is in fact the gluing together of \"two\" energy functionals. Caution: the precise definition of a solder form requires it to be an isomorphism; this can only happen if formula_3 and formula_8 have the same real dimension. Furthermore, the conventional definition of a solder form takes formula_8 to be a Lie group. Both conditions are satisfied in various applications."}
+{"text":"The space formula_8 is often taken to be a Lie group, usually SU(N), in the conventional particle physics models, O(N) in condensed matter theories, or as a symmetric space in supergravity models. Since symmetric spaces are defined in terms of their involution, their tangent space (i.e. the place where formula_55 lives) naturally splits into even and odd parity subspaces. This splitting helps propel the dimensional reduction of Kaluza\u2013Klein theories."}
+{"text":"For the special case of formula_8 being a Lie group, the formula_79 is the metric tensor on the Lie group, formally called the Cartan tensor or the Killing form. The Lagrangian can then be written as the pullback of the Killing form. Note that the Killing form can be written as a trace over two matrices from the corresponding Lie algebra; thus, the Lagrangian can also be written in a form involving the trace. With slight re-arrangements, it can also be written as the pullback of the Maurer-Cartan form."}
+{"text":"A common variation of the sigma model is to present it on a symmetric space. The prototypical example is the chiral model, which takes the product"}
+{"text":"of the \"left\" and \"right\" chiral fields, and then constructs the sigma model on the \"diagonal\""}
+{"text":"Such a quotient space is a symmetric space, and so one can generically take formula_135 where formula_136 is the maximal subgroup of formula_137 that is invariant under the Cartan involution. The Lagrangian is still written exactly as the above, either in terms of the pullback of the metric on formula_137 to a metric on formula_139 or as a pullback of the Maurer-Cartan form."}
+{"text":"In physics, the most common and conventional statement of the sigma model begins with the definition"}
+{"text":"Here, the formula_141 is the pullback of the Maurer-Cartan form, for formula_142, onto the spacetime manifold. The formula_143 is a projection onto the odd-parity piece of the Cartan involution. That is, given the Lie algebra formula_144 of formula_137, the involution decompses the space into odd and even parity components formula_146 corresponding to the two eigenstates of the involution. The sigma model Lagrangian can then be written as"}
+{"text":"This is instantly recognizable as the first term of the Skyrme model."}
+{"text":"The equivalent metric form of this is to write a group element formula_142 as the geodesic formula_149 of an element formula_150 of the Lie algebra formula_144. The formula_152 are the basis elements for the Lie algebra; the formula_153 are the structure constants of formula_144."}
+{"text":"Plugging this directly into the above and applying the infinitesimal form of the Baker\u2013Campbell\u2013Hausdorff formula promptly leads to the equivalent expression"}
+{"text":"where formula_156 is now obviously (proportional to) the Killing form, and the formula_157 are the vielbeins that express the \"curved\" metric formula_79 in terms of the \"flat\" metric formula_156. The article on the Baker\u2013Campbell\u2013Hausdorff formula provides an explicit expression for the vielbeins. They can be written as"}
+{"text":"where formula_3 is a matrix whose matrix elements are formula_162."}
+{"text":"For the sigma model on a symmetric space, as opposed to a Lie group, the formula_163 are limited to span the subspace formula_164 instead of all of formula_146. The Lie commutator on formula_164 will \"not\" be within formula_164; indeed, one has formula_168 and so a projection is still needed."}
+{"text":"The model can be extended in a variety of ways. Besides the aforementioned Skyrme model, which introduces quartic terms, the model may be augmented by a torsion term to yield the Wess\u2013Zumino\u2013Witten model."}
+{"text":"Another possibility is frequently seen in supergravity models. Here, one notes that the Maurer-Cartan form formula_169 looks like \"pure gauge\". In the construction above for symmetric spaces, one can also consider the other projection"}
+{"text":"where, as before, the symmetric space corresponded to the split formula_171. This extra term can be interpreted as a connection on the fiber bundle formula_172 (it transforms as a gauge field). It is what is \"left over\" from the connection on formula_137. It can be endowed with its own dynamics, by writing"}
+{"text":"with formula_175. Note that the differential here is just \"d\", and not a covariant derivative; this is \"not\" the Yang-Mills stress-energy tensor. This term is not gauge invariant by itself; it must be taken together with the part of the connection that embeds into formula_176, so that taken together, the formula_176, now with the connection as a part of it, together with this term, forms a complete gauge invariant Lagrangian (which does have the Yang-Mills terms in it, when expanded out)."}
+{"text":"In physics, there are equations in every field to relate physical quantities to each other and perform calculations. Entire handbooks of equations can only summarize most of the full subject, else are highly specialized within a certain field. Physics is derived of formulae only."}
+{"text":"In physics, chemistry and related fields, master equations are used to describe the time evolution of a system that can be modelled as being in a probabilistic combination of states at any given time and the switching between states is determined by a transition rate matrix. The equations are a set of differential equations \u2013 over time \u2013 of the probabilities that the system occupies each of the different states."}
+{"text":"A master equation is a phenomenological set of first-order differential equations describing the time evolution of (usually) the probability of a system to occupy each one of a discrete set of states with regard to a continuous time variable \"t\". The most familiar form of a master equation is a matrix form:"}
+{"text":"where formula_2 is a column vector (where element \"i\" represents state \"i\"), and formula_3 is the matrix of connections. The way connections among states are made determines the dimension of the problem; it is either"}
+{"text":"When the connections are time-independent rate constants, the master equation represents a kinetic scheme, and the process is Markovian (any jumping time probability density function for state \"i\" is an exponential, with a rate equal to the value of the connection). When the connections depend on the actual time (i.e. matrix formula_3 depends on the time, formula_5 ), the process is not stationary and the master equation reads"}
+{"text":"When the connections represent multi exponential jumping time probability density functions, the process is semi-Markovian, and the equation of motion is an integro-differential equation termed the generalized master equation:"}
+{"text":"The matrix formula_3 can also represent birth and death, meaning that probability is injected (birth) or taken from (death) the system, where then, the process is not in equilibrium."}
+{"text":"Detailed description of the matrix and properties of the system."}
+{"text":"Let formula_3 be the matrix describing the transition rates (also known as kinetic rates or reaction rates). As always, the first subscript represents the row, the second subscript the column. That is, the source is given by the second subscript, and the destination by the first subscript. This is the opposite of what one might expect, but it is technically convenient."}
+{"text":"For each state \"k\", the increase in occupation probability depends on the contribution from all other states to \"k\", and is given by:"}
+{"text":"where formula_11 is the probability for the system to be in the state formula_12, while the matrix formula_3 is filled with a grid of transition-rate constants. Similarly, formula_14 contributes to the occupation of all other states formula_15"}
+{"text":"In probability theory, this identifies the evolution as a continuous-time Markov process, with the integrated master equation obeying a Chapman\u2013Kolmogorov equation."}
+{"text":"The master equation can be simplified so that the terms with \"\u2113\" = \"k\" do not appear in the summation. This allows calculations even if the main diagonal of the formula_3 is not defined or has been assigned an arbitrary value."}
+{"text":"The final equality arises from the fact that"}
+{"text":"because the summation over the probabilities formula_20 yields one, a constant function. Since this has to hold for any probability formula_2 (and in particular for any probability of the form formula_22 for some k) we get"}
+{"text":"Using this we can write the diagonal elements as"}
+{"text":"The master equation exhibits detailed balance if each of the terms of the summation disappears separately at equilibrium\u2014i.e. if, for all states \"k\" and \"\u2113\" having equilibrium probabilities formula_25 and formula_26,"}
+{"text":"These symmetry relations were proved on the basis of the time reversibility of microscopic dynamics (microscopic reversibility) as Onsager reciprocal relations."}
+{"text":"Many physical problems in classical, quantum mechanics and problems in other sciences, can be reduced to the form of a \"master equation\", thereby performing a great simplification of the problem (see mathematical model)."}
+{"text":"The Lindblad equation in quantum mechanics is a generalization of the master equation describing the time evolution of a density matrix. Though the Lindblad equation is often referred to as a \"master equation\", it is not one in the usual sense, as it governs not only the time evolution of probabilities (diagonal elements of the density matrix), but also of variables containing information about quantum coherence between the states of the system (non-diagonal elements of the density matrix)."}
+{"text":"Another special case of the master equation is the Fokker\u2013Planck equation which describes the time evolution of a continuous probability distribution. Complicated master equations which resist analytic treatment can be cast into this form (under various approximations), by using approximation techniques such as the system size expansion."}
+{"text":"Stochastic chemical kinetics are yet another example of the Master equation. A chemical Master equation is used to model a set of chemical reactions when the number of molecules of one or more species is small (of the order of 100 or 1000 molecules)."}
+{"text":"A quantum master equation is a generalization of the idea of a master equation. Rather than just a system of differential equations for a set of probabilities (which only constitutes the diagonal elements of a density matrix), quantum master equations are differential equations for the entire density matrix, including off-diagonal elements. A density matrix with only diagonal elements can be modeled as a classical random process, therefore such an \"ordinary\" master equation is considered classical. Off-diagonal elements represent quantum coherence which is a physical characteristic that is intrinsically quantum mechanical."}
+{"text":"The Redfield equation and Lindblad equation are examples of approximate quantum master equations assumed to be Markovian. More accurate quantum master equations for certain applications include the polaron transformed quantum master equation, and the VPQME (variational polaron transformed quantum master equation)."}
+{"text":"In a more precise sense, the PDF is used to specify the probability of the random variable falling \"within a particular range of values\", as opposed to taking on any one value. This probability is given by the integral of this variable's PDF over that range\u2014that is, it is given by the area under the density function but above the horizontal axis and between the lowest and greatest values of the range. The probability density function is nonnegative everywhere, and its integral over the entire space is equal to 1."}
+{"text":"The terms \"\"probability distribution function\" and \"probability function\"\" have also sometimes been used to denote the probability density function. However, this use is not standard among probabilists and statisticians. In other sources, \"probability distribution function\" may be used when the probability distribution is defined as a function over general sets of values or it may refer to the cumulative distribution function, or it may be a probability mass function (PMF) rather than the density. \"Density function\" itself is also used for the probability mass function, leading to further confusion. In general though, the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables."}
+{"text":"Suppose bacteria of a certain species typically live 4 to 6 hours. The probability that a bacterium lives 5 hours is equal to zero. A lot of bacteria live for approximately 5 hours, but there is no chance that any given bacterium dies at exactly 5.0000000000... hours. However, the probability that the bacterium dies between 5 hours and 5.01 hours is quantifiable. Suppose the answer is 0.02 (i.e., 2%). Then, the probability that the bacterium dies between 5 hours and 5.001 hours should be about 0.002, since this time interval is one-tenth as long as the previous. The probability that the bacterium dies between 5 hours and 5.0001 hours should be about 0.0002, and so on."}
+{"text":"There is a probability density function \"f\" with \"f\"(5 hours) = 2 hour\u22121. The integral of \"f\" over any window of time (not only infinitesimal windows but also large windows) is the probability that the bacterium dies in that window."}
+{"text":"A probability density function is most commonly associated with absolutely continuous univariate distributions. A random variable formula_1 has density formula_2, where formula_2 is a non-negative Lebesgue-integrable function, if:"}
+{"text":"Hence, if formula_5 is the cumulative distribution function of formula_1, then:"}
+{"text":"and (if formula_2 is continuous at formula_9)"}
+{"text":"Intuitively, one can think of formula_11 as being the probability of formula_1 falling within the infinitesimal interval formula_13."}
+{"text":"A random variable formula_1 with values in a measurable space formula_15 (usually formula_16 with the Borel sets as measurable subsets) has as probability distribution the measure \"X\"\u2217\"P\" on formula_15: the density of formula_1 with respect to a reference measure formula_19 on formula_15 is the Radon\u2013Nikodym derivative:"}
+{"text":"That is, \"f\" is any measurable function with the property that:"}
+{"text":"In the continuous univariate case above, the reference measure is the Lebesgue measure. The probability mass function of a discrete random variable is the density with respect to the counting measure over the sample space (usually the set of integers, or some subset thereof)."}
+{"text":"It is not possible to define a density with reference to an arbitrary measure (e.g. one can't choose the counting measure as a reference for a continuous random variable). Furthermore, when it does exist, the density is almost everywhere unique."}
+{"text":"Unlike a probability, a probability density function can take on values greater than one; for example, the uniform distribution on the interval [0,\u00a0] has probability density \"f\"(\"x\")\u00a0=\u00a02 for 0\u00a0\u2264\u00a0\"x\"\u00a0\u2264\u00a0 and \"f\"(\"x\")\u00a0=\u00a00 elsewhere."}
+{"text":"The standard normal distribution has probability density"}
+{"text":"If a random variable \"X\" is given and its distribution admits a probability density function \"f\", then the expected value of \"X\" (if the expected value exists) can be calculated as"}
+{"text":"Not every probability distribution has a density function: the distributions of discrete random variables do not; nor does the Cantor distribution, even though it has no discrete component, i.e., does not assign positive probability to any individual point."}
+{"text":"A distribution has a density function if and only if its cumulative distribution function \"F\"(\"x\") is absolutely continuous. In this case: \"F\" is almost everywhere differentiable, and its derivative can be used as probability density:"}
+{"text":"If a probability distribution admits a density, then the probability of every one-point set {\"a\"} is zero; the same holds for finite and countable sets."}
+{"text":"Two probability densities \"f\" and \"g\" represent the same probability distribution precisely if they differ only on a set of Lebesgue measure zero."}
+{"text":"In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:"}
+{"text":"If \"dt\" is an infinitely small number, the probability that \"X\" is included within the interval (\"t\",\u00a0\"t\"\u00a0+\u00a0\"dt\") is equal to \"f\"(\"t\")\u00a0\"dt\", or:"}
+{"text":"It is possible to represent certain discrete random variables as well as random variables involving both a continuous and a discrete part with a generalized probability density function, by using the Dirac delta function. (This is not possible with a probability density function in the sense defined above, it may be done with a distribution.) For example, consider a binary discrete random variable having the Rademacher distribution\u2014that is, taking \u22121 or 1 for values, with probability \u00bd each. The density of probability associated with this variable is:"}
+{"text":"More generally, if a discrete variable can take \"n\" different values among real numbers, then the associated probability density function is:"}
+{"text":"where formula_30 are the discrete values accessible to the variable and formula_31 are the probabilities associated with these values."}
+{"text":"This substantially unifies the treatment of discrete and continuous probability distributions. For instance, the above expression allows for determining statistical characteristics of such a discrete variable (such as its mean, its variance and its kurtosis), starting from the formulas given for a continuous distribution of the probability..."}
+{"text":"It is common for probability density functions (and probability mass functions) to"}
+{"text":"be parametrized\u2014that is, to be characterized by unspecified parameters. For example, the normal distribution is parametrized in terms of the mean and the variance, denoted by formula_19 and formula_33 respectively, giving the family of densities"}
+{"text":"Since the parameters are constants, reparametrizing a density in terms of different parameters, to give a characterization of a different random variable in the family, means simply substituting the new parameter values into the formula in place of the old ones. Changing the domain of a probability density, however, is trickier and requires more work: see the section below on change of variables."}
+{"text":"For continuous random variables \"X\"1, \u2026, \"Xn\", it is also possible to define a probability density function associated to the set as a whole, often called joint probability density function. This density function is defined as a function of the \"n\" variables, such that, for any domain \"D\" in the \"n\"-dimensional space of the values of the variables \"X\"1, \u2026, \"Xn\", the probability that a realisation of the set variables falls inside the domain \"D\" is"}
+{"text":"If \"F\"(\"x\"1,\u00a0\u2026,\u00a0\"x\"\"n\") =\u00a0Pr(\"X\"1\u00a0\u2264\u00a0\"x\"1,\u00a0\u2026,\u00a0\"X\"\"n\"\u00a0\u2264\u00a0\"x\"\"n\") is the cumulative distribution function of the vector (\"X\"1,\u00a0\u2026,\u00a0\"X\"\"n\"), then the joint probability density function can be computed as a partial derivative"}
+{"text":"For \"i\" = 1, 2, \u2026, \"n\", let \"f\"\"X\"\"i\"(\"x\"\"i\") be the probability density function associated with variable \"Xi\" alone. This is called the marginal density function, and can be deduced from the probability density associated with the random variables \"X\"1, \u2026, \"Xn\" by integrating over all values of the other \"n\"\u00a0\u2212\u00a01 variables:"}
+{"text":"Continuous random variables \"X\"1, \u2026, \"Xn\" admitting a joint density are all independent from each other if and only if"}
+{"text":"If the joint probability density function of a vector of \"n\" random variables can be factored into a product of \"n\" functions of one variable"}
+{"text":"(where each \"fi\" is not necessarily a density) then the \"n\" variables in the set are all independent from each other, and the marginal probability density function of each of them is given by"}
+{"text":"This elementary example illustrates the above definition of multidimensional probability density functions in the simple case of a function of a set of two variables. Let us call formula_41 a 2-dimensional random vector of coordinates (\"X\", \"Y\"): the probability to obtain formula_41 in the quarter plane of positive \"x\" and \"y\" is"}
+{"text":"Function of random variables and change of variables in the probability density function."}
+{"text":"If the probability density function of a random variable (or vector) \"X\" is given as \"fX\"(\"x\"), it is possible (but often not necessary; see below) to calculate the probability density function of some variable . This is also called a \u201cchange of variable\u201d and is in practice used to generate a random variable of arbitrary shape using a known (for instance, uniform) random number generator."}
+{"text":"It is tempting to think that in order to find the expected value \"E\"(\"g\"(\"X\")), one must first find the probability density \"f\"\"g\"(\"X\") of the new random variable . However, rather than computing"}
+{"text":"The values of the two integrals are the same in all cases in which both \"X\" and \"g\"(\"X\") actually have probability density functions. It is not necessary that \"g\" be a one-to-one function. In some cases the latter integral is computed much more easily than the former. See Law of the unconscious statistician."}
+{"text":"Let formula_46 be a monotonic function, then the resulting density function is"}
+{"text":"This follows from the fact that the probability contained in a differential area must be invariant under change of variables. That is,"}
+{"text":"For functions that are not monotonic, the probability density function for \"y\" is"}
+{"text":"where \"n\"(\"y\") is the number of solutions in \"x\" for the equation formula_51, and formula_52 are these solutions."}
+{"text":"Suppose x is an \"n\"-dimensional random variable with joint density \"f\". If , where \"H\" is a bijective, differentiable function, then \"y\" has density \"g\":"}
+{"text":"with the differential regarded as the Jacobian of the inverse of \"H\"(\u22c5), evaluated at y."}
+{"text":"For example, in the 2-dimensional case x\u00a0= (\"x\"1,\u00a0\"x\"2), suppose the transform \"H\" is given as \"y\"1\u00a0= \"H\"1(\"x\"1,\u00a0\"x\"2), \"y\"2\u00a0= \"H\"2(\"x\"1,\u00a0\"x\"2) with inverses \"x\"1\u00a0= \"H\"1\u22121(\"y\"1,\u00a0\"y\"2), \"x\"2\u00a0= \"H\"2\u22121(\"y\"1,\u00a0\"y\"2). The joint distribution for y\u00a0= (\"y\"1,\u00a0y2) has density"}
+{"text":"Let formula_55 be a differentiable function and formula_56 be a random vector taking values in formula_57, formula_58 be the probability density function of formula_56 and formula_60 be the Dirac delta function. It is possible to use the formulas above to determine formula_61, the probability density function of formula_62, which will be given by"}
+{"text":"This result leads to the law of the unconscious statistician:"}
+{"text":"which is an upper triangular matrix with ones on the main diagonal, therefore its determinant is 1. Applying the change of variable theorem from the previous section we obtain that"}
+{"text":"which if marginalized over formula_9 leads to the desired probability density function."}
+{"text":"The probability density function of the sum of two independent random variables \"U\" and \"V\", each of which has a probability density function, is the convolution of their separate density functions:"}
+{"text":"It is possible to generalize the previous relation to a sum of N independent random variables, with densities \"U\"1, \u2026, \"UN\":"}
+{"text":"This can be derived from a two-way change of variables involving \"Y=U+V\" and \"Z=V\", similarly to the example below for the quotient of independent random variables."}
+{"text":"Products and quotients of independent random variables."}
+{"text":"Given two independent random variables \"U\" and \"V\", each of which has a probability density function, the density of the product \"Y\"\u00a0=\u00a0\"UV\" and quotient \"Y\"=\"U\"\/\"V\" can be computed by a change of variables."}
+{"text":"To compute the quotient \"Y\"\u00a0=\u00a0\"U\"\/\"V\" of two independent random variables \"U\" and \"V\", define the following transformation:"}
+{"text":"Then, the joint density \"p\"(\"y\",\"z\") can be computed by a change of variables from \"U,V\" to \"Y,Z\", and \"Y\" can be derived by marginalizing out \"Z\" from the joint density."}
+{"text":"The Jacobian matrix formula_73 of this transformation is"}
+{"text":"And the distribution of \"Y\" can be computed by marginalizing out \"Z\":"}
+{"text":"This method crucially requires that the transformation from \"U\",\"V\" to \"Y\",\"Z\" be bijective. The above transformation meets this because \"Z\" can be mapped directly back to \"V\", and for a given \"V\" the quotient \"U\"\/\"V\" is monotonic. This is similarly the case for the sum \"U\"\u00a0+\u00a0\"V\", difference \"U\"\u00a0\u2212\u00a0\"V\" and product \"UV\"."}
+{"text":"Exactly the same method can be used to compute the distribution of other functions of multiple independent random variables."}
+{"text":"Given two standard normal variables \"U\" and \"V\", the quotient can be computed as follows. First, the variables have the following density functions:"}
+{"text":"This is the density of a standard Cauchy distribution."}
+{"text":"This article summarizes equations in the theory of electromagnetism."}
+{"text":"Here subscripts \"e\" and \"m\" are used to differ between electric and magnetic charges. The definitions for monopoles are of theoretical interest, although real magnetic dipoles can be described using pole strengths. There are two possible units for monopole strength, Wb (Weber) and A m (Ampere metre). Dimensional analysis shows that magnetic charges relate by \"qm\"(Wb) = \"\u03bc\"0 \"qm\"(Am)."}
+{"text":"Contrary to the strong analogy between (classical) gravitation and electrostatics, there are no \"centre of charge\" or \"centre of electrostatic attraction\" analogues."}
+{"text":"Below \"N\" = number of conductors or circuit components. Subcript \"net\" refers to the equivalent and resultant property value."}
+{"text":"Microplane model for constitutive laws of materials"}
+{"text":"The basic idea of the microplane model is to express the constitutive law not in terms of tensors, but in terms of the vectors of stress and strain acting on planes of various orientations called the microplanes. The use of vectors was inspired by G. I. Taylor's idea in 1938 which led to Taylor models for plasticity of polycrystalline metals. But the microplane models differ conceptually in two ways."}
+{"text":"Firstly, to prevent model instability in post-peak softening damage, the kinematic constraint must be used instead of the static one. Thus, the strain (rather than stress) vector on each microplane is the projection of the macroscopic strain tensor, i.e.,"}
+{"text":"where formula_2 and formula_3 are the normal vector and two strain vectors corresponding to each microplane, and formula_4 and formula_5 where formula_6 and formula_7 are three mutually orthogonal vectors, one normal and two tangential, characterizing each particular microplane (subscripts formula_8 refer to Cartesian coordinates)."}
+{"text":"Secondly, a variational principle (or the principle of virtual work) relates the stress vector components on the microplanes (formula_9 and formula_10) to the macro-continuum stress tensor formula_11, to ensure equilibrium. This yields for the stress tensor the expression:"}
+{"text":"Here formula_14 is the surface of a unit hemisphere, and the sum is an approximation of the integral. The weights, formula_15, are based on an optimal Gaussian integration formula for a spherical surface. At least 21 microplanes are needed for acceptable accuracy but 37 are distinctly more accurate."}
+{"text":"The inelastic or damage behavior is characterized by subjecting the microplane stresses formula_9 and formula_10 to strain-dependent strength limits called stress-strain boundaries imposed on each microplane. They are of four types, viz.:"}
+{"text":"Each step of explicit analysis begins with an elastic predictor and, if the boundary has been exceeded, the stress vector component on the microplane is then dropped at constant strain to the boundary."}
+{"text":"In physics, defining equations are equations that define new quantities in terms of base quantities. This article uses the current SI system of units, not natural or characteristic units."}
+{"text":"Physical quantities and units follow the same hierarchy; \"chosen base quantities\" have \"defined base units\", from these any other \"quantities may be derived\" and have corresponding \"derived units\"."}
+{"text":"Defining quantities is analogous to mixing colours, and could be classified a similar way, although this is not standard. Primary colours are to base quantities; as secondary (or tertiary etc.) colours are to derived quantities. Mixing colours is analogous to combining quantities using mathematical operations. But colours could be for light or paint, and analogously the system of units could be one of many forms: such as SI (now most common), CGS, Gaussian, old imperial units, a specific form of natural units or even arbitrarily defined units characteristic to the physical system in consideration (characteristic units)."}
+{"text":"The choice of a base system of quantities and units is arbitrary; but once chosen it \"must\" be adhered to throughout all analysis which follows for consistency. It makes no sense to mix up different systems of units. Choosing a system of units, one system out of the SI, CGS etc., is like choosing whether use paint or light colours."}
+{"text":"In light of this analogy, primary definitions are base quantities with no defining equation, but defined standardized condition, \"secondary\" definitions are quantities defined purely in terms of base quantities, \"tertiary\" for quantities in terms of both base and \"secondary\" quantities, \"quaternary\" for quantities in terms of base, \"secondary\", and \"tertiary\" quantities, and so on."}
+{"text":"Much of physics requires definitions to be made for the equations to make sense."}
+{"text":"Theoretical implications: Definitions are important since they can lead into new insights of a branch of physics. Two such examples occurred in classical physics. When entropy \"S\" was defined \u2013 the range of thermodynamics was greatly extended by associating chaos and disorder with a numerical quantity that could relate to energy and temperature, leading to the understanding of the second thermodynamic law and statistical mechanics."}
+{"text":"Also the action functional (also written \"S\") (together with generalized coordinates and momenta and the Lagrangian function), initially an alternative formulation of classical mechanics to Newton's laws, now extends the range of modern physics in general \u2013 notably quantum mechanics, particle physics, and general relativity."}
+{"text":"Analytical convenience: They allow other equations to be written more compactly and so allow easier mathematical manipulation; by including a parameter in a definition, occurrences of the parameter can be absorbed into the substituted quantity and removed from the equation."}
+{"text":"As an example consider Amp\u00e8re's circuital law (with Maxwell's correction) in integral form for an arbitrary current carrying conductor in a vacuum (so zero magnetization due medium, i.e. M = 0):"}
+{"text":"which is simpler to write, even if the equation is the same."}
+{"text":"Ease of comparison: They allow comparisons of measurements to be made when they might appear ambiguous and unclear otherwise."}
+{"text":"A basic example is mass density. It is not clear how compare how much matter constitutes a variety of substances given only their masses or only their volumes. Given both for each substance, the mass \"m\" per unit volume \"V\", or mass density \"\u03c1\" provides a meaningful comparison between the substances, since for each, a fixed amount of volume will correspond to an amount of mass depending on the substance. To illustrate this; if two substances A and B have masses \"mA\" and \"mB\" respectively, occupying volumes \"VA\" and \"VB\" respectively, using the definition of mass density gives:"}
+{"text":"Making such comparisons without using mathematics logically in this way would not be as systematic."}
+{"text":"Typically definitions are explicit, meaning the defining quantity is the subject of the equation, but sometimes the equation is not written explicitly \u2013 although the defining quantity can be solved for to make the equation explicit. For vector equations, sometimes the defining quantity is in a cross or dot product and cannot be solved for explicitly as a vector, but the components can."}
+{"text":"Electric current density is an example spanning all of these methods, Angular momentum is an example which doesn't require calculus. See the classical mechanics section below for nomenclature and diagrams to the right."}
+{"text":"Operations are simply multiplication and division. Equations may be written in a product or quotient form, both of course equivalent."}
+{"text":"There is no way to divide a vector by a vector, so there are no product or quotient forms."}
+{"text":"Vectors are rank-1 tensors. The formulae below are no more than the vector equations in the language of tensors."}
+{"text":"Sometimes there is still freedom within the chosen units system, to define one or more quantities in more than one way. The situation splits into two cases:"}
+{"text":"Mutually exclusive definitions: There are a number of possible choices for a quantity to be defined in terms of others, but only one can be used and not the others. Choosing more than one of the exclusive equations for a definition leads to a contradiction \u2013 one equation might demand a quantity \"X\" to be \"defined\" in one way \"using another\" quantity \"Y\", while another equation requires the \"reverse\", \"Y\" be defined using \"X\", but then another equation might falsify the use of both \"X\" and \"Y\", and so on. The mutual disagreement makes it impossible to say which equation defines what quantity."}
+{"text":"Equivalent definitions: Defining equations which are equivalent and self-consistent with other equations and laws within the physical theory, simply written in different ways."}
+{"text":"There are two possibilities for each case:"}
+{"text":"One defining equation \u2013 one defined quantity: A defining equation is used to define a single quantity in terms of a number of others."}
+{"text":"One defining equation \u2013 a number of defined quantities: A defining equation is used to define a number of quantities in terms of a number of others. A single defining equation shouldn't contain \"one\" quantity defining \"all other\" quantities in the \"same equation\", otherwise contradictions arise again. There is no definition of the defined quantities separately since they are defined by a single quantity in a single equation. Furthermore, the defined quantities may have already been defined before, so if another quantity defines these in the same equation, there is a clash between definitions."}
+{"text":"Contradictions can be avoided by defining quantities \"successively\"; the \"order\" in which quantities are defined must be accounted for. Examples spanning these instances occur in electromagnetism, and are given below."}
+{"text":"The magnetic induction field B can be defined in terms of electric charge \"q\" or current \"I\", and the Lorentz force (magnetic term) F experienced by the charge carriers due to the field,"}
+{"text":"where formula_9 is the change in position traversed by the charge carriers (assuming current is independent of position, if not so a line integral must be done along the path of current) or in terms of the magnetic flux \"\u03a6B\" through a surface \"S\", where the area is used as a scalar \"A\" and vector: formula_10 and formula_11 is a unit normal to \"A\", either in differential form"}
+{"text":"However, only one of the above equations can be used to define B for the following reason, given that A, r, v, and F have been defined elsewhere unambiguously (most likely mechanics and Euclidean geometry)."}
+{"text":"Another example is inductance \"L\" which has two equivalent equations to use as a definition."}
+{"text":"In terms of \"I\" and \"\u03a6B\", the inductance is given by"}
+{"text":"in terms of \"I\" and induced emf \"V\""}
+{"text":"These two are equivalent by Faraday's law of induction:"}
+{"text":"substituting into the first definition for \"L\""}
+{"text":"and so they are not mutually exclusive."}
+{"text":"One defining equation \u2013 a number of defined quantities"}
+{"text":"Notice that \"L\" cannot define \"I\" and \"\u03a6B\" simultaneously - this makes no sense. \"I\", \"\u03a6B\" and \"V\" have most likely all been defined before as (\"\u03a6B\" given above in flux equation);"}
+{"text":"where \"W\" = work done on charge \"q\". Furthermore, there is no definition of either \"I\" or \"\u03a6B\" separately \u2013 because \"L\" is defining them in the same equation."}
+{"text":"However, using the Lorentz force for the electromagnetic field:"}
+{"text":"as a single defining equation for the electric field E and magnetic field B is allowed, since E and B are not only defined by one variable, but \"three\"; force F, velocity v and charge \"q\". This is consistent with isolated definitions of E and B since E is defined using F and \"q\":"}
+{"text":"and B defined by F, v, and \"q\", as given above."}
+{"text":"Definitions vs. functions: Defining quantities can vary as a function of parameters other than those in the definition. A defining equation only defines how to calculate the defined quantity, it \"cannot\" describe how the quantity varies as a function of other parameters since the function would vary from one application to another. How the defined quantity varies as a function of other parameters is described by a constitutive equation or equations, since it varies from one application to another and from one approximation (or simplification) to another."}
+{"text":"Mass density \"\u03c1\" is defined using mass \"m\" and volume \"V\" by but can vary as a function of temperature \"T\" and pressure \"p\", \"\u03c1\" = \"\u03c1\"(\"p\", \"T\")"}
+{"text":"The angular frequency \"\u03c9\" of wave propagation is defined using the frequency (or equivalently time period \"T\") of the oscillation, as a function of wavenumber \"k\", \"\u03c9\" = \"\u03c9\"(\"k\"). This is the \"dispersion relation\" for wave propagation."}
+{"text":"The coefficient of restitution for an object colliding is defined using the speeds of separation and approach with respect to the collision point, but depends on the nature of the surfaces in question."}
+{"text":"Definitions vs. theorems: There is a very important difference between defining equations and general or derived results, theorems or laws. Defining equations \"do not find out any information\" about a physical system, they simply re-state one measurement in terms of others. Results, theorems, and laws, on the other hand \"do\" provide meaningful information, if only a little, since they represent a calculation for a quantity given other properties of the system, and describe how the system behaves as variables are changed."}
+{"text":"An example was given above for Ampere's law. Another is the conservation of momentum for \"N\"1 initial particles having initial momenta pi where \"i\" = 1, 2 ... \"N\"1, and \"N\"2 final particles having final momenta pi (some particles may explode or adhere) where \"j\" = 1, 2 ... \"N\"2, the equation of conservation reads:"}
+{"text":"Using the definition of momentum in terms of velocity:"}
+{"text":"the conservation equation can be written as"}
+{"text":"It is identical to the previous version. No information is lost or gained by changing quantities when definitions are substituted, but the equation itself does give information about the system."}
+{"text":"Some equations, typically results from a derivation, include useful quantities which serve as a one-off definition within its scope of application."}
+{"text":"In special relativity, relativistic mass has support and detraction by physicists. It is defined as:"}
+{"text":"where \"m\"0 is the rest mass of the object and \u03b3 is the Lorentz factor. This makes some quantities such as momentum p and energy \"E\" of a massive object in motion easy to obtain from other equations simply by using relativistic mass:"}
+{"text":"However, this does \"not\" always apply, for instance the kinetic energy \"T\" and force F of the same object is \"not\" given by:"}
+{"text":"The Lorentz factor has a deeper significance and origin, and is used in terms of proper time and coordinate time with four-vectors. The correct equations above are consequence of the applying definitions in the correct order."}
+{"text":"In electromagnetism, a charged particle (of mass \"m\" and charge \"q\") in a uniform magnetic field B is deflected by the field in a circular helical arc at velocity v and radius of curvature r, where the helical trajectory inclined at an angle \"\u03b8\" to B. The magnetic force is the centripetal force, so the force F acting on the particle is;"}
+{"text":"reducing to scalar form and solving for |B||r|;"}
+{"text":"serves as the definition for the magnetic rigidity of the particle. Since this depends on the mass and charge of the particle, it is useful for determining the extent a particle deflects in a B field, which occurs experimentally in mass spectrometry and particle detectors."}
+{"text":"In applications of the TBDE to QED, the two particles interact by way of four-vector potentials derived from the field theoretic electromagnetic interactions between the two particles. In applications to QCD, the two particles interact by way of four-vector potentials and Lorentz invariant scalar interactions, derived in part from the field theoretic chromomagnetic interactions between the quarks and in part by phenomenological considerations. As with the Breit equation a sixteen-component spinor \u03a8 is used."}
+{"text":"For QED, each equation has the same structure as the ordinary one-body Dirac equation in the presence of an external electromagnetic field, given by the 4-potential formula_1. For QCD, each equation has the same structure as the ordinary one-body Dirac equation in the presence of an external field similar to the electromagnetic field and an additional external field given by in terms of a Lorentz invariant scalar formula_2. In natural units: those two-body equations have the form."}
+{"text":"where, in coordinate space, \"p\"\u03bc is the 4-momentum, related to the 4-gradient by (the metric used here is formula_5)"}
+{"text":"and \u03b3\u03bc are the gamma matrices. The two-body Dirac equations (TBDE) have the property that if"}
+{"text":"one of the masses becomes very large, say formula_7 then the 16-component Dirac equation reduces to the 4-component one-body Dirac equation for particle one in an external potential."}
+{"text":"where \"c\" is the speed of light and"}
+{"text":"Natural units will be used below. A tilde symbol is used over the two sets of potentials to indicate that they may have additional gamma matrix dependencies not present in the one-body Dirac equation. Any coupling constants such as the electron charge are embodied in the vector potentials."}
+{"text":"This implies that in the c.m. frame formula_21, which has zero time component."}
+{"text":"Secondly, the mathematical consistency condition also eliminates the relative energy in the c.m. frame. It does this by imposing on each Dirac operator a structure such that in a particular combination they lead to this interaction independent form, eliminating in a covariant way the relative energy."}
+{"text":"In this expression formula_23 is the relative momentum having the form formula_24 for equal masses. In the c.m. frame (formula_25), the time component formula_26 of the relative momentum, that is the relative energy, is thus eliminated. in the sense that formula_27."}
+{"text":"A third consequence of the mathematical consistency is that each of the world scalar formula_28 and four vector formula_29 potentials has a term with a fixed dependence on formula_30 and formula_31 in addition to the gamma matrix independent forms of formula_32 and formula_33 which appear in the ordinary one-body Dirac equation for scalar and vector potentials."}
+{"text":"These extra terms correspond to additional recoil spin-dependence not present in the one-body Dirac equation and vanish when one of the particles becomes very heavy (the so-called static limit)."}
+{"text":"More on constraint dynamics: generalized mass shell constraints."}
+{"text":"Constraint dynamics arose from the work of Dirac and Bergmann. This section"}
+{"text":"shows how the elimination of relative time and energy takes place in the"}
+{"text":"c.m. system for the simple system of two relativistic spinless particles."}
+{"text":"Constraint dynamics was first applied to the classical relativistic two particle system by Todorov, Kalb"}
+{"text":"and Van Alstine, Komar, and Droz-Vincent. With constraint dynamics, these authors found a consistent and covariant approach to relativistic canonical Hamiltonian mechanics that also evades the Currie-Jordan-Sudarshan \"No Interaction\" theorem. That theorem states that without fields, one cannot have a relativistic Hamiltonian dynamics. Thus, the same covariant three-dimensional approach which allows the quantized version of constraint dynamics to remove quantum ghosts simultaneously circumvents at the classical level the C.J.S. theorem. Consider a constraint on the otherwise independent coordinate and momentum four vectors, written in the form formula_34. The symbolformula_35 is called a weak equality and implies that the constraint is to be imposed only after any needed Poisson brackets are performed. In the presence of such constraints, the total"}
+{"text":"Hamiltonian formula_36 is obtained from the Lagrangian formula_37 by adding to the Legendre Hamiltonian formula_38 the sum of the constraints times an appropriate set of Lagrange multipliers formula_39."}
+{"text":"This total Hamiltonian is traditionally called the Dirac Hamiltonian."}
+{"text":"Constraints arise naturally from parameter invariant actions of the form"}
+{"text":"In the case of four vector and Lorentz scalar interactions for a single"}
+{"text":"and by squaring leads to the generalized mass shell condition or generalized"}
+{"text":"Since, in this case, the Legendre Hamiltonian vanishes"}
+{"text":"the Dirac Hamiltonian is simply the generalized mass constraint (with no"}
+{"text":"interactions it would simply be the ordinary mass shell constraint)"}
+{"text":"One then postulates that for two bodies the Dirac Hamiltonian is the sum of"}
+{"text":"and that each constraint formula_50 be constant in the proper time associated with formula_36"}
+{"text":"Here the weak equality means that the Poisson bracket could result in terms proportional one of the constraints, the classical Poisson brackets for the relativistic two-body system being defined by"}
+{"text":"To see the consequences of having each constraint be a constant of the"}
+{"text":"which leads to (note the equality in this case is not a weak one in that no constraint need be imposed after the Poisson bracket is worked out)"}
+{"text":"(see Todorov, and Wong and Crater ) with the same formula_61 defined"}
+{"text":"In addition to replacing classical dynamical variables by their quantum counterparts, quantization of the constraint mechanics takes place by replacing the constraint on the dynamical variables with a restriction on the wave function"}
+{"text":"The first set of equations for \"i\"\u00a0=\u00a01,\u00a02 play the role for spinless particles that the two Dirac equations play for spin-one-half particles. The classical Poisson brackets are replaced by commutators"}
+{"text":"and we see in this case that the constraint formalism leads to the vanishing commutator of the wave operators for the two particlein. This is the analogue of the claim stated earlier that the two Dirac operators commute with one another."}
+{"text":"The vanishing of the above commutator ensures that the dynamics is"}
+{"text":"independent of the relative time in the c.m. frame. In order to"}
+{"text":"covariantly eliminate the relative energy, introduce the relative momentum formula_66 defined by"}
+{"text":"The above definition of the relative momentum forces the orthogonality of the total"}
+{"text":"which follows from taking the scalar product of either equation with formula_18."}
+{"text":"From Eqs.() and (), this relative momentum can be written in terms of"}
+{"text":"are the projections of the momenta formula_69 and formula_70 along the direction"}
+{"text":"of the total momentum formula_18. Subtracting the two constraints formula_77 and formula_78, gives"}
+{"text":"The equation formula_82 describes both the c.m. motion and the"}
+{"text":"internal relative motion. To characterize the former motion, observe that"}
+{"text":"since the potential formula_83 depends only on the difference of the two"}
+{"text":"(This does not require that formula_85 since the formula_86.) Thus, the total momentum formula_18 is a constant of motion and"}
+{"text":"formula_79 is an eigenstate state characterized by a total momentum"}
+{"text":"formula_89. In the c.m. system formula_90 with formula_15 the"}
+{"text":"invariant center of momentum (c.m.) energy. Thus"}
+{"text":"and so formula_79 is also an eigenstate of c.m. energy operators for each of"}
+{"text":"The above set of equations follow from the constraints formula_98 and the definition of the relative momenta given in Eqs.() and ()."}
+{"text":"If instead one chooses to define (for a more general choice see Horwitz),"}
+{"text":"and it is straight forward to show that the constraint Eq.() leads"}
+{"text":"in place of formula_67. This conforms with the earlier claim on the"}
+{"text":"vanishing of the relative energy in the c.m. frame made in conjunction with"}
+{"text":"the TBDE. In the second choice the c.m. value of the relative energy is"}
+{"text":"not defined as zero but comes from the original generalized mass shell"}
+{"text":"constraints. The above equations for the relative and constituent"}
+{"text":"four-momentum are the relativistic analogues of the nonrelativistic equations"}
+{"text":"Using Eqs.(),(),(), one can write formula_36 in terms of formula_18 and formula_23"}
+{"text":"Eq.() contains both the total momentum formula_18 [through the formula_112] and the relative momentum formula_23. Using Eq. (), one obtains the eigenvalue equation"}
+{"text":"so that formula_114 becomes the standard triangle"}
+{"text":"With the above constraint Eqs.() on formula_79 then formula_117 where formula_118. This allows"}
+{"text":"writing Eq. () in the form of an eigenvalue equation"}
+{"text":"having a structure very similar to that of the ordinary three-dimensional"}
+{"text":"nonrelativistic Schr\u00f6dinger equation. It is a manifestly covariant"}
+{"text":"equation, but at the same time its three-dimensional structure is evident."}
+{"text":"The four-vectors formula_120 and formula_121 have only"}
+{"text":"The similarity to the three-dimensional structure of the nonrelativistic"}
+{"text":"Schr\u00f6dinger equation can be made more explicit by writing the equation in"}
+{"text":"A plausible structure for the quasipotential formula_83 can be found by"}
+{"text":"observing that the one-body Klein-Gordon equation formula_127 takes the form formula_128 when one"}
+{"text":"introduces a scalar interaction and timelike vector interaction via formula_129 and formula_130. In the"}
+{"text":"two-body case, separate classical and quantum field theory"}
+{"text":"arguments show that when one includes world scalar and"}
+{"text":"vector interactions then formula_83 depends on two underlying invariant"}
+{"text":"functions formula_132 and formula_133 through the two-body Klein-Gordon-like potential"}
+{"text":"form with the same general structure, that is"}
+{"text":"Those field theories further yield the c.m. energy dependent forms"}
+{"text":"ones that Todorov introduced as the relativistic reduced mass"}
+{"text":"and effective particle energy for a two-body system. Similar to what"}
+{"text":"happens in the nonrelativistic two-body problem, in the relativistic case"}
+{"text":"we have the motion of this effective particle taking place as if it were in"}
+{"text":"an external field (here generated by formula_2 and formula_138). The two kinematical"}
+{"text":"variables formula_139 and formula_140 are related to one another by the"}
+{"text":"If one introduces the four-vectors, including a vector interaction formula_142"}
+{"text":"and scalar interaction formula_132, then the following classical minimal"}
+{"text":"Notice that the interaction in this \"reduced particle\" constraint depends"}
+{"text":"on two invariant scalars, formula_133 and formula_132, one guiding the time-like"}
+{"text":"vector interaction and one the scalar interaction."}
+{"text":"Is there a set of two-body Klein-Gordon equations analogous to the two-body Dirac"}
+{"text":"equations? The classical relativistic constraints that are analogous to the quantum"}
+{"text":"two-body Dirac equations (discussed in the introduction) and have the same structure as the above"}
+{"text":"Defining structures that display time-like vector and scalar interactions"}
+{"text":"and using the constraint formula_160, reproduces Eqs.() provided"}
+{"text":"and each, due to the constraint formula_167 is equivalent to"}
+{"text":"Hyperbolic versus external field form of the two-body Dirac equations."}
+{"text":"For the two body system there are numerous covariant forms of interaction."}
+{"text":"The simplest way of looking at these is from the point of view of the gamma"}
+{"text":"matrix structures of the corresponding interaction vertices of the single"}
+{"text":"particle exchange diagrams. For scalar, pseudoscalar, vector,"}
+{"text":"pseudovector, and tensor exchanges those matrix structures are respectively"}
+{"text":"The form of the Two-Body Dirac equations which most readily incorporates"}
+{"text":"each or any number of these interactions in concert is the so-called hyperbolic form of the TBDE"}
+{"text":"interactions those forms ultimately reduce to the ones given in the first"}
+{"text":"set of equations of this article. Those equations are called the external"}
+{"text":"field-like forms because their appearances are individually the same as"}
+{"text":"those for the usual one-body Dirac equation in the presence of external"}
+{"text":"The most general hyperbolic form for compatible TBDE is"}
+{"text":"where formula_172 represents any invariant interaction singly or in"}
+{"text":"combination. It has a matrix structure in addition to coordinate"}
+{"text":"dependence. Depending on what that matrix structure is one has either"}
+{"text":"scalar, pseudoscalar, vector, pseudovector, or tensor interactions. The"}
+{"text":"operators formula_173 and formula_174 are auxiliary constraints"}
+{"text":"in which the formula_176 are the free Dirac operators"}
+{"text":"This, in turn, leads to the two compatibility conditions"}
+{"text":"conditions do not restrict the gamma matrix structure of formula_172. That"}
+{"text":"matrix structure is determined by the type of vertex-vertex structure"}
+{"text":"incorporated in the interaction. For the two types of invariant"}
+{"text":"interactions formula_172 emphasized in this article they are"}
+{"text":"For general independent scalar and vector interactions"}
+{"text":"The vector interaction specified by the above matrix structure for an electromagnetic-like interaction would correspond to the Feynman gauge."}
+{"text":"If one inserts Eq.() into () and brings the free"}
+{"text":"Dirac operator () to the right of the matrix hyperbolic functions"}
+{"text":"and uses standard gamma matrix commutators and anticommutators and formula_186 one arrives at formula_187"}
+{"text":"The (covariant) structure of these equations is analogous to that of a Dirac equation for each of the two particles, with formula_194 and formula_195"}
+{"text":"playing the roles that formula_196 and formula_197 do in the single particle"}
+{"text":"Over and above the usual kinetic part formula_199 and"}
+{"text":"time-like vector and scalar potential portions, the spin-dependent"}
+{"text":"and the last set of derivative terms are two-body recoil effects absent for"}
+{"text":"the one-body Dirac equation but essential for the compatibility"}
+{"text":"(consistency) of the two-body equations. The connections between what"}
+{"text":"are designated as the vertex invariants formula_201 and the"}
+{"text":"Comparing Eq.() with the first equation of this article one finds"}
+{"text":"Note that the first portion of the vector potentials is timelike (parallel"}
+{"text":"to formula_209), while the next portion is spacelike (perpendicular to formula_210). The spin-dependent scalar potentials formula_211 are"}
+{"text":"The parametrization for formula_214 and formula_215 takes advantage of"}
+{"text":"the Todorov effective external potential forms (as seen in the above section"}
+{"text":"on the two-body Klein Gordon equations) and at the same time displays the"}
+{"text":"correct static limit form for the Pauli reduction to Schr\u00f6dinger-like"}
+{"text":"form. The choice for these parameterizations (as with the two-body Klein"}
+{"text":"Gordon equations) is closely tied to classical or quantum field"}
+{"text":"theories for separate scalar and vector interactions. This"}
+{"text":"amounts to working in the Feynman gauge with the simplest relation between"}
+{"text":"space- and timelike parts of the vector interaction."}
+{"text":"The mass and energy potentials are respectively"}
+{"text":"in which formula_223 is a Green function determined from the Schr\u00f6dinger equation. Because of the similarity between the Schr\u00f6dinger equation Eq. () and the relativistic constraint equation (), one can derive the same type of equation as the above"}
+{"text":"called the quasipotential equation with a formula_215 very similar to that given in the Lippmann-Schwinger equation. The difference is that with the quasipotential equation, one starts with the scattering amplitudes formula_226 of quantum field theory, as determined from Feynman diagrams, and deduces the quasipotential \u03a6 perturbatively. One can then use that \u03a6 in () to compute energy levels of two-particle systems that are implied by the field theory. Constraint dynamics provides one of many (in fact, an infinite number of) different types of quasipotential equations (three-dimensional truncations of the Bethe-Salpeter equation), differing from one another by the choice of formula_215."}
+{"text":"In general relativity, the Komar superpotential, corresponding to the invariance of the Hilbert\u2013Einstein Lagrangian formula_1, is the tensor density:"}
+{"text":"associated with a vector field formula_3, and where formula_4 denotes covariant derivative with respect to the Levi-Civita connection."}
+{"text":"where formula_6 denotes the interior product, generalizes the above Komar superpotential to an arbitrary vector field formula_7; the superpotential was originally derived for timelike Killing vector fields."}
+{"text":"The Komar superpotential is affected by the anomalous factor problem: when computed on the Kerr\u2013Newman solution, for example, it produces the correct angular momentum but only one-half of the expected mass."}
+{"text":"In physics, specifically statistical mechanics, an ensemble (also statistical ensemble) is an idealization consisting of a large number of virtual copies (sometimes infinitely many) of a system, considered all at once, each of which represents a possible state that the real system might be in. In other words, a statistical ensemble is a probability distribution for the state of the system. The concept of an ensemble was introduced by J. Willard Gibbs in 1902."}
+{"text":"A thermodynamic ensemble is a specific variety of statistical ensemble that, among other properties, is in statistical equilibrium (defined below), and is used to derive the properties of thermodynamic systems from the laws of classical or quantum mechanics."}
+{"text":"The ensemble formalises the notion that an experimenter repeating an experiment again and again under the same macroscopic conditions, but unable to control the microscopic details, may expect to observe a range of different outcomes."}
+{"text":"The notional size of ensembles in thermodynamics, statistical mechanics and quantum statistical mechanics can be very large, including every possible microscopic state the system could be in, consistent with its observed macroscopic properties. For many important physical cases, it is possible to calculate averages directly over the whole of the thermodynamic ensemble, to obtain explicit formulas for many of the thermodynamic quantities of interest, often in terms of the appropriate partition function."}
+{"text":"The concept of an equilibrium or stationary ensemble is crucial to many applications of statistical ensembles. Although a mechanical system certainly evolves over time, the ensemble does not necessarily have to evolve. In fact, the ensemble will not evolve if it contains all past and future phases of the system. Such a statistical ensemble, one that does not change over time, is called \"stationary\" and can be said to be in \"statistical equilibrium\"."}
+{"text":"The study of thermodynamics is concerned with systems that appear to human perception to be \"static\" (despite the motion of their internal parts), and which can be described simply by a set of macroscopically observable variables. These systems can be described by statistical ensembles that depend on a few observable parameters, and which are in statistical equilibrium. Gibbs noted that different macroscopic constraints lead to different types of ensembles, with particular statistical characteristics. Three important thermodynamic ensembles were defined by Gibbs:"}
+{"text":"The calculations that can be made using each of these ensembles are explored further in their respective articles."}
+{"text":"Other thermodynamic ensembles can also be defined, corresponding to different physical requirements, for which analogous formulae can often similarly be derived."}
+{"text":"For example, in the reaction ensemble, particle number fluctuations are only allowed to occur according to the stoichiometry of the chemical reactions present in the system."}
+{"text":"Representations of statistical ensembles in statistical mechanics."}
+{"text":"The precise mathematical expression for a statistical ensemble has a distinct form depending on the type of mechanics under consideration (quantum or classical). In the classical case, the ensemble is a probability distribution over the microstates. In quantum mechanics, this notion, due to von Neumann, is a way of assigning a probability distribution over the results of each complete set of commuting observables."}
+{"text":"In classical mechanics, the ensemble is instead written as a probability distribution in phase space; the microstates are the result of partitioning phase space into equal-sized units, although the size of these units can be chosen somewhat arbitrarily."}
+{"text":"Putting aside for the moment the question of how statistical ensembles are generated operationally, we should be able to perform the following two operations on ensembles \"A\", \"B\" of the same system:"}
+{"text":"Under certain conditions, therefore, equivalence classes of statistical ensembles have the structure of a convex set."}
+{"text":"A statistical ensemble in quantum mechanics (also known as a mixed state) is most often represented by a density matrix, denoted by formula_1. The density matrix provides a fully general tool that can incorporate both quantum uncertainties (present even if the state of the system were completely known) and classical uncertainties (due to a lack of knowledge) in a unified manner. Any physical observable in quantum mechanics can be written as an operator, . The expectation value of this operator on the statistical ensemble formula_2 is given by the following trace:"}
+{"text":"This can be used to evaluate averages (operator ), variances (using operator ), covariances (using operator ), etc. The density matrix must always have a trace of 1: formula_4 (this essentially is the condition that the probabilities must add up to one)."}
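As a minimal numerical sketch of these two facts (the unit trace and the expectation value as a trace), assuming an illustrative two-level system; the states and the Pauli-z observable below are choices for illustration, not taken from the text:

```python
import numpy as np

# Density matrix for a 50/50 classical mixture of two pure states:
# spin-up |0> and the superposition |+> = (|0> + |1>)/sqrt(2).
# Both states and the observable are illustrative choices.
up = np.array([1.0, 0.0], dtype=complex)
plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
rho = 0.5 * np.outer(up, up.conj()) + 0.5 * np.outer(plus, plus.conj())

# The trace must be 1: the probabilities add up to one.
assert np.isclose(np.trace(rho).real, 1.0)

# Expectation value of an observable A on the ensemble is Tr(rho A);
# here A is the Pauli-z operator.
sigma_z = np.diag([1.0, -1.0]).astype(complex)
expectation = np.trace(rho @ sigma_z).real
print(expectation)  # 0.5: the up state contributes 1, the |+> state 0
```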
+{"text":"In general, the ensemble evolves over time according to the von Neumann equation."}
+{"text":"Equilibrium ensembles (those that do not evolve over time, formula_5) can be written solely as a function of conserved variables. For example, the microcanonical ensemble and canonical ensemble are strictly functions of the total energy, which is measured by the total energy operator (Hamiltonian). The grand canonical ensemble is additionally a function of the particle number, measured by the total particle number operator . Such an equilibrium ensemble is a diagonal matrix in the orthogonal basis of states that simultaneously diagonalizes each conserved variable. In bra\u2013ket notation, the density matrix is"}
+{"text":"where the , indexed by , are the elements of a complete and orthogonal basis. (Note that in other bases, the density matrix is not necessarily diagonal.)"}
+{"text":"In classical mechanics, an ensemble is represented by a probability density function defined over the system's phase space. While an individual system evolves according to Hamilton's equations, the density function (the ensemble) evolves over time according to Liouville's equation."}
+{"text":"In a mechanical system with a defined number of parts, the phase space has generalized coordinates called , and associated canonical momenta called . The ensemble is then represented by a joint probability density function ."}
+{"text":"If the number of parts in the system is allowed to vary among the systems in the ensemble (as in a grand ensemble where the number of particles is a random quantity), then it is a probability distribution over an extended phase space that includes further variables such as particle numbers (first kind of particle), (second kind of particle), and so on up to (the last kind of particle; is how many different kinds of particles there are). The ensemble is then represented by a joint probability density function . The number of coordinates varies with the numbers of particles."}
+{"text":"Any mechanical quantity can be written as a function of the system's phase. The expectation value of any such quantity is given by an integral over the entire phase space of this quantity weighted by :"}
+{"text":"The condition of probability normalization applies, requiring"}
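As an illustrative numerical sketch of such a weighted phase-space average (the canonical weight exp(-H/kT), the one-dimensional harmonic oscillator, and the unit choices are assumptions for illustration, not from the text), one can check the equipartition result ⟨x²⟩ = kT/(mω²):

```python
import numpy as np

# Phase-space average <x^2> for a 1D harmonic oscillator,
# H = p^2/(2m) + m*w^2*x^2/2, weighted by the (unnormalized)
# canonical density rho ~ exp(-H/kT).  Units m = w = kT = 1
# are an illustrative choice.
m, w, kT = 1.0, 1.0, 1.0
x = np.linspace(-8.0, 8.0, 1601)
p = np.linspace(-8.0, 8.0, 1601)
dx, dp = x[1] - x[0], p[1] - p[0]
X, P = np.meshgrid(x, p)
rho = np.exp(-(P**2 / (2 * m) + 0.5 * m * w**2 * X**2) / kT)

# Normalization: divide by the integral of rho over all of phase space.
Z = rho.sum() * dx * dp
x2_avg = (X**2 * rho).sum() * dx * dp / Z
print(round(x2_avg, 4))  # equipartition predicts kT/(m*w^2) = 1.0
```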
+{"text":"Phase space is a continuous space containing an infinite number of distinct physical states within any small region. In order to connect the probability \"density\" in phase space to a probability \"distribution\" over microstates, it is necessary to partition the phase space into blocks that represent the different states of the system in a fair way. It turns out that the correct way to do this simply results in equal-sized blocks of canonical phase space, and so a microstate in classical mechanics is an extended region in the phase space of canonical coordinates that has a particular volume. In particular, the probability density function in phase space, , is related to the probability distribution over microstates, by a factor"}
+{"text":"Since can be chosen arbitrarily, the notional size of a microstate is also arbitrary. Still, the value of influences the offsets of quantities such as entropy and chemical potential, and so it is important to be consistent with the value of when comparing different systems."}
+{"text":"It is in general difficult to find a coordinate system that uniquely encodes each physical state. As a result, it is usually necessary to use a coordinate system with multiple copies of each state, and then to recognize and remove the overcounting."}
+{"text":"A crude way to remove the overcounting would be to manually define a subregion of phase space that includes each physical state only once and then exclude all other parts of phase space. In a gas, for example, one could include only those phases where the particles' coordinates are sorted in ascending order. While this would solve the problem, the resulting integral over phase space would be tedious to perform due to its unusual boundary shape. (In this case, the factor introduced above would be set to , and the integral would be restricted to the selected subregion of phase space.)"}
+{"text":"A simpler way to correct the overcounting is to integrate over all of phase space but to reduce the weight of each phase in order to exactly compensate the overcounting. This is accomplished by the factor introduced above, which is a whole number that represents how many ways a physical state can be represented in phase space. Its value does not vary with the continuous canonical coordinates, so overcounting can be corrected simply by integrating over the full range of canonical coordinates, then dividing the result by the overcounting factor. However, does vary strongly with discrete variables such as numbers of particles, and so it must be applied before summing over particle numbers."}
+{"text":"As mentioned above, the classic example of this overcounting is for a fluid system containing various kinds of particles, where any two particles of the same kind are indistinguishable and exchangeable. When the state is written in terms of the particles' individual positions and momenta, then the overcounting related to the exchange of identical particles is corrected by using"}
+{"text":"This is known as \"correct Boltzmann counting\"."}
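A tiny sketch of this overcounting factor, assuming the per-species particle counts are given as a list (the numbers used are illustrative):

```python
from math import factorial

# For indistinguishable particles, every permutation of identical
# particles labels the same physical state, so the phase-space
# integral is divided by N! for each species ("correct Boltzmann
# counting").  The particle numbers below are illustrative.
def boltzmann_overcounting(counts):
    """Product of N_s! over the species s, each with N_s particles."""
    result = 1
    for n in counts:
        result *= factorial(n)
    return result

print(boltzmann_overcounting([3, 2]))  # 3! * 2! = 12 equivalent labelings
```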
+{"text":"The formulation of statistical ensembles used in physics has now been widely adopted in other fields, in part because it has been recognized that the canonical ensemble or Gibbs measure serves to maximize the entropy of a system, subject to a set of constraints: this is the principle of maximum entropy. This principle has now been widely applied to problems in linguistics, robotics, and the like."}
+{"text":"In addition, statistical ensembles in physics are often built on a principle of locality: that all interactions are only between neighboring atoms or nearby molecules. Thus, for example, lattice models, such as the Ising model, model ferromagnetic materials by means of nearest-neighbor interactions between spins. The statistical formulation of the principle of locality is now seen to be a form of the Markov property in the broad sense; nearest neighbors are now Markov blankets. Thus, the general notion of a statistical ensemble with nearest-neighbor interactions leads to Markov random fields, which again find broad applicability; for example in Hopfield networks."}
+{"text":"The discussion so far, while rigorous, has taken for granted that the notion of an ensemble is valid a priori, as is commonly done in a physical context. What has not been shown is that the ensemble \"itself\" (not the consequent results) is a precisely defined mathematical object. For instance,"}
+{"text":"In this section, we attempt to partially answer this question."}
+{"text":"Suppose we have a \"preparation procedure\" for a system in a physics"}
+{"text":"lab: For example, the procedure might involve a physical apparatus and"}
+{"text":"some protocols for manipulating the apparatus. As a result of this preparation procedure, some system"}
+{"text":"is produced and maintained in isolation for some small period of time."}
+{"text":"By repeating this laboratory preparation procedure we obtain a"}
+{"text":"sequence of systems \"X\"1, \"X\"2, ..., \"X\"\"k\", which in our mathematical idealization we assume is an infinite sequence of systems. The systems are similar in that they were all produced in the same way. This infinite sequence is an ensemble."}
+{"text":"In a laboratory setting, each one of these prepared systems might be used as input"}
+{"text":"for \"one\" subsequent \"testing procedure\". Again, the testing procedure"}
+{"text":"involves a physical apparatus and some protocols; as a result of the"}
+{"text":"testing procedure we obtain a \"yes\" or \"no\" answer."}
+{"text":"Given a testing procedure \"E\" applied to each prepared system, we obtain a sequence of values"}
+{"text":"Meas(\"E\", \"X\"1), Meas(\"E\", \"X\"2), ..., Meas(\"E\", \"X\"\"k\"). Each one of these values is a 0 (no) or a 1 (yes)."}
+{"text":"For quantum mechanical systems, an important assumption made in the"}
+{"text":"quantum logic approach to quantum mechanics is the identification of \"yes-no\" questions with the"}
+{"text":"lattice of closed subspaces of a Hilbert space. With some additional"}
+{"text":"technical assumptions one can then infer that states are given by"}
+{"text":"We see this reflects the definition of quantum states in general: A quantum state is a mapping from the observables to their expectation values."}
+{"text":"Dirac equation in the algebra of physical space"}
+{"text":"The Dirac equation, as the relativistic equation that describes"}
+{"text":"spin 1\/2 particles in quantum mechanics, can be written in terms of the Algebra of physical space (APS), which is a case of a Clifford algebra or geometric algebra"}
+{"text":"that is based on the use of paravectors."}
+{"text":"The Dirac equation in APS, including the electromagnetic interaction, reads"}
+{"text":"Another form of the Dirac equation in terms of the Space time algebra was given earlier by David Hestenes."}
+{"text":"In general, the Dirac equation in the formalism of geometric algebra has the advantage of"}
+{"text":"The spinor can be written in a null basis as"}
+{"text":"such that the representation of the spinor in terms of the Pauli matrices is"}
+{"text":"The standard form of the Dirac equation can be recovered by decomposing the spinor in its right and left-handed spinor components, which are extracted with the help of the projector"}
+{"text":"The Dirac equation can be also written as"}
+{"text":"Without electromagnetic interaction, the following equation is obtained from"}
+{"text":"the two equivalent forms of the Dirac equation"}
+{"text":"where the second column of the right and left spinors can be dropped by defining the"}
+{"text":"The standard relativistic covariant form of the Dirac equation in the Weyl"}
+{"text":"Given two spinors formula_18 and formula_19 in APS and"}
+{"text":"their respective spinors in the standard form as formula_20 and"}
+{"text":"formula_21, one can verify the following identity"}
+{"text":"The Dirac equation is invariant under a global right rotation applied"}
+{"text":"so that the kinetic term of the Dirac equation transforms as"}
+{"text":"so that we can verify the invariance of the form of the Dirac equation."}
+{"text":"A more demanding requirement is that the Dirac equation should be"}
+{"text":"invariant under a local gauge transformation of the type formula_28"}
+{"text":"In this case, the kinetic term transforms as"}
+{"text":"so that the left side of the Dirac equation transforms covariantly as"}
+{"text":"where we identify the need to perform an electromagnetic gauge transformation."}
+{"text":"The mass term transforms as in the case with global rotation, so, the form"}
+{"text":"An application of the Dirac equation on itself leads to the second order Dirac equation"}
+{"text":"A solution for the free particle with momentum formula_34 and positive energy formula_35 is"}
+{"text":"and the current resembles the classical proper velocity"}
+{"text":"A solution for the free particle with negative energy and momentum"}
+{"text":"and the current resembles the classical proper velocity formula_38"}
+{"text":"but with a remarkable feature: \"the time runs backwards\""}
+{"text":"This article summarizes equations in the theory of quantum mechanics."}
+{"text":"A fundamental physical constant occurring in quantum mechanics is the Planck constant, \"h\". A common abbreviation is , also known as the \"reduced Planck constant\" or \"Dirac constant\"."}
+{"text":"The general form of wavefunction for a system of particles, each with position r\"i\" and z-component of spin \"sz i\". Sums are over the discrete variable \"sz\", integrals over continuous positions r."}
+{"text":"For clarity and brevity, the coordinates are collected into tuples; the indices label the particles (which cannot be done physically, but is mathematically necessary). Following are general mathematical results, used in calculations."}
+{"text":"Summarized below are the various forms the Hamiltonian takes, with the corresponding Schr\u00f6dinger equations and forms of wavefunction solutions. Notice in the case of one spatial dimension, for one particle, the partial derivative reduces to an ordinary derivative."}
+{"text":"In the physical sciences and electrical engineering, dispersion relations describe the effect of dispersion on the properties of waves in a medium. A dispersion relation relates the wavelength or wavenumber of a wave to its frequency. Given the dispersion relation, one can calculate the phase velocity and group velocity of waves in the medium, as a function of frequency. In addition to the geometry-dependent and material-dependent dispersion relations, the overarching Kramers\u2013Kronig relations describe the frequency dependence of wave propagation and attenuation."}
+{"text":"Dispersion may be caused either by geometric boundary conditions (waveguides, shallow water) or by interaction of the waves with the transmitting medium. Elementary particles, considered as matter waves, have a nontrivial dispersion relation even in the absence of geometric constraints and other media."}
+{"text":"In the presence of dispersion, wave velocity is no longer uniquely defined, giving rise to the distinction of phase velocity and group velocity."}
+{"text":"Dispersion occurs when pure plane waves of different wavelengths have different propagation velocities, so that a wave packet of mixed wavelengths tends to spread out in space. The speed of a plane wave, formula_1, is a function of the wave's wavelength formula_2:"}
+{"text":"The wave's speed, wavelength, and frequency, \"f\", are related by the identity"}
+{"text":"The function formula_5 expresses the dispersion relation of the given medium. Dispersion relations are more commonly expressed in terms of the angular frequency formula_6 and wavenumber formula_7. Rewriting the relation above in these variables gives"}
+{"text":"where we now view \"f\" as a function of \"k\". The use of \u03c9(\"k\") to describe the dispersion relation has become standard because both the phase velocity \u03c9\/\"k\" and the group velocity d\u03c9\/d\"k\" have convenient representations via this function."}
+{"text":"The plane waves being considered can be described by"}
+{"text":"Plane waves in vacuum are the simplest case of wave propagation: no geometric constraint, no interaction with a transmitting medium."}
+{"text":"For electromagnetic waves in vacuum, the angular frequency is proportional to the wavenumber:"}
+{"text":"This is a \"linear\" dispersion relation. In this case, the phase velocity and the group velocity are the same:"}
+{"text":"they are given by \"c\", the speed of light in vacuum, a frequency-independent constant."}
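This statement can be checked numerically; the sketch below samples the linear dispersion relation on a grid and takes a finite-difference derivative (the grid and the unit choice c = 1 are assumptions for illustration):

```python
import numpy as np

# For a linear dispersion relation w(k) = c*k (electromagnetic waves
# in vacuum, with c set to 1 for illustration), the phase velocity
# w/k and the group velocity dw/dk are the same constant,
# independent of frequency.
c = 1.0
k = np.linspace(0.1, 10.0, 1000)
w = c * k

v_phase = w / k                # phase velocity w/k
v_group = np.gradient(w, k)    # group velocity dw/dk (finite differences)

print(np.allclose(v_phase, c), np.allclose(v_group, c))  # True True
```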
+{"text":"Total energy, momentum, and mass of particles are connected through the relativistic dispersion relation:"}
+{"text":"where formula_15 is the invariant mass. In the nonrelativistic limit, formula_16 is a constant, and formula_17 is the familiar kinetic energy expressed in terms of the momentum formula_18."}
+{"text":"The transition from ultrarelativistic to nonrelativistic behaviour shows up as a slope change from \"p\" to \"p\"2 as shown in the log\u2013log dispersion plot of \"E\" vs. \"p\"."}
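Both limits of the relativistic dispersion relation can be verified numerically; the natural units c = m = 1 below are an illustrative assumption:

```python
import numpy as np

# Relativistic dispersion E = sqrt((p*c)^2 + (m*c^2)^2),
# in illustrative natural units c = m = 1.
c, m = 1.0, 1.0

def energy(p):
    return np.sqrt((p * c)**2 + (m * c**2)**2)

# Nonrelativistic regime p << m*c: E ~ m*c^2 + p^2/(2m).
p_small = 1e-3
assert abs(energy(p_small) - (m * c**2 + p_small**2 / (2 * m))) < 1e-9

# Ultrarelativistic regime p >> m*c: E ~ p*c, which is the slope
# change from p^2 to p seen in the log-log plot of E vs. p.
p_large = 1e3
assert abs(energy(p_large) / (p_large * c) - 1.0) < 1e-6
print("both limits check out")
```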
+{"text":"Elementary particles, atomic nuclei, atoms, and even molecules behave in some contexts as matter waves. According to the de Broglie relations, their kinetic energy \"E\" can be expressed as a frequency \"\u03c9\", and their momentum \"p\" as a wavenumber \"k\", using the reduced Planck constant \"\u0127\":"}
+{"text":"Accordingly, angular frequency and wavenumber are connected through a dispersion relation, which in the nonrelativistic limit reads"}
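Combining E = p²/(2m) with the de Broglie relations above gives the nonrelativistic matter-wave dispersion ω(k) = ħk²/(2m); the sketch below checks that its group velocity recovers the classical particle velocity (units ħ = m = 1 are an illustrative choice):

```python
import numpy as np

# Nonrelativistic matter-wave dispersion w(k) = hbar*k^2/(2m),
# obtained from E = p^2/(2m) with E = hbar*w and p = hbar*k.
# Units hbar = m = 1 are an illustrative choice.
hbar, m = 1.0, 1.0
k = np.linspace(0.1, 10.0, 2001)
w = hbar * k**2 / (2 * m)

# The group velocity dw/dk = hbar*k/m equals the classical particle
# velocity p/m; the phase velocity w/k is only half of it.
v_group = np.gradient(w, k)
v_classical = hbar * k / m
print(np.allclose(v_group[1:-1], v_classical[1:-1]))  # True away from grid edges
```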
+{"text":"As mentioned above, when the focus in a medium is on refraction rather than absorption\u2014that is, on the real part of the refractive index\u2014it is common to refer to the functional dependence of angular frequency on wavenumber as the \"dispersion relation\". For particles, this translates to a knowledge of energy as a function of momentum."}
+{"text":"The name \"dispersion relation\" originally comes from optics. It is possible to make the effective speed of light dependent on wavelength by making light pass through a material which has a non-constant index of refraction, or by using light in a non-uniform medium such as a waveguide. In this case, the waveform will spread over time, such that a narrow pulse will become an extended pulse, i.e., be dispersed. In these materials, formula_21 is known as the group velocity and corresponds to the speed at which the peak of the pulse propagates, a value different from the phase velocity."}
+{"text":"The dispersion relation for deep water waves is often written as"}
+{"text":"where \"g\" is the acceleration due to gravity. Deep water, in this respect, is commonly denoted as the case where the water depth is larger than half the wavelength. In this case the phase velocity is"}
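A short numerical sketch of the deep-water relation ω = √(gk) (the wavenumber range sampled below is an illustrative assumption): differentiating shows the group velocity is half the phase velocity, which is why wave groups at sea travel slower than the individual crests.

```python
import numpy as np

# Deep-water dispersion w(k) = sqrt(g*k): the phase velocity is
# w/k = sqrt(g/k), and the group velocity dw/dk works out to half
# the phase velocity.
g = 9.81                               # gravitational acceleration, m/s^2
k = np.linspace(0.01, 1.0, 2001)       # wavenumbers in 1/m (illustrative)
w = np.sqrt(g * k)

v_phase = w / k
v_group = np.gradient(w, k)            # numerical dw/dk

# Away from the grid edges, dw/dk is half of w/k.
print(np.allclose(v_group[1:-1], 0.5 * v_phase[1:-1], rtol=1e-3))  # True
```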
+{"text":"For an ideal string, the dispersion relation can be written as"}
+{"text":"where \"T\" is the tension force in the string, and \"\u03bc\" is the string's mass per unit length. As for the case of electromagnetic waves in vacuum, ideal strings are thus a non-dispersive medium, i.e. the phase and group velocities are equal and independent (to first order) of vibration frequency."}
+{"text":"For a nonideal string, where stiffness is taken into account, the dispersion relation is written as"}
+{"text":"where formula_27 is a constant that depends on the string."}
+{"text":"In the study of solids, the study of the dispersion relation of electrons is of paramount importance. The periodicity of crystals means that many levels of energy are possible for a given momentum and that some energies might not be available at any momentum. The collection of all possible energies and momenta is known as the band structure of a material. Properties of the band structure define whether the material is an insulator, semiconductor or conductor."}
+{"text":"Phonons are to sound waves in a solid what photons are to light: they are the quanta that carry it. The dispersion relation of phonons is also non-trivial and important, being directly related to the acoustic and thermal properties of a material. For most systems, the phonons can be categorized into two main types: those whose bands become zero at the center of the Brillouin zone are called acoustic phonons, since they correspond to classical sound in the limit of long wavelengths. The others are optical phonons, since they can be excited by electromagnetic radiation."}
+{"text":"With high-energy (e.g., ) electrons in a transmission electron microscope, the energy dependence of higher-order Laue zone (HOLZ) lines in convergent beam electron diffraction (CBED) patterns allows one, in effect, to \"directly image\" cross-sections of a crystal's three-dimensional dispersion surface. This dynamical effect has found application in the precise measurement of lattice parameters, beam energy, and more recently for the electronics industry: lattice strain."}
+{"text":"Isaac Newton studied refraction in prisms but failed to recognize the material dependence of the dispersion relation, dismissing the work of another researcher whose measurement of a prism's dispersion did not match Newton's own."}
+{"text":"Dispersion of waves on water was studied by Pierre-Simon Laplace in 1776."}
+{"text":"The universality of the Kramers\u2013Kronig relations (1926\u201327) became apparent with subsequent papers on the dispersion relation's connection to causality in the scattering theory of all types of waves and particles."}
+{"text":"In electrodynamics, the Larmor formula is used to calculate the total power radiated by a non relativistic point charge as it accelerates. It was first derived by J. J. Larmor in 1897, in the context of the wave theory of light."}
+{"text":"When any charged particle (such as an electron, a proton, or an ion) accelerates, it radiates away energy in the form of electromagnetic waves. For velocities that are small relative to the speed of light, the total power radiated is given by the Larmor formula:"}
+{"text":"where formula_3 or formula_4 is the proper acceleration, formula_5 is the charge, and formula_6 is the speed of light. A relativistic generalization is given by the Li\u00e9nard\u2013Wiechert potentials."}
+{"text":"In either unit system, the power radiated by a single electron can be expressed in terms of the classical electron radius and electron mass as:"}
+{"text":"One implication is that an electron orbiting around a nucleus, as in the Bohr model, should lose energy, fall to the nucleus and the atom should collapse. This puzzle was not solved until quantum theory was introduced."}
+{"text":"Derivation 1: Mathematical approach (using CGS units)."}
+{"text":"We first need to find the form of the electric and magnetic fields. The fields can be written (for a fuller derivation see Li\u00e9nard\u2013Wiechert potential)"}
+{"text":"where formula_10 is the charge's velocity divided by formula_11, formula_12 is the charge's acceleration divided by \"c\", formula_13 is a unit vector in the formula_14 direction, formula_15 is the magnitude of formula_16, formula_17 is the charge's location, and formula_18. The terms on the right are evaluated at the retarded time formula_19."}
+{"text":"The right-hand side is the sum of the electric fields associated with the velocity and the acceleration of the charged particle. The velocity field depends only upon formula_10 while the acceleration field depends on both formula_10 and formula_12 and the angular relationship between the two. Since the velocity field is proportional to formula_23, it falls off very quickly with distance. On the other hand, the acceleration field is proportional to formula_24, which means that it falls much more slowly with distance. Because of this, the acceleration field is representative of the radiation field and is responsible for carrying most of the energy away from the charge."}
+{"text":"We can find the energy flux density of the radiation field by computing its Poynting vector:"}
+{"text":"where the 'a' subscripts emphasize that we are taking only the acceleration field. Substituting in the relation between the magnetic and electric fields, while assuming that the particle is instantaneously at rest at time formula_26, and simplifying gives"}
+{"text":"If we let the angle between the acceleration and the observation vector be equal to formula_28, and we introduce the acceleration formula_29, then the power radiated per unit solid angle is"}
+{"text":"The total power radiated is found by integrating this quantity over all solid angles (that is, over formula_28 and formula_32). This gives"}
+{"text":"which is the Larmor result for a non-relativistic accelerated charge. It relates the power radiated by the particle to its acceleration. It clearly shows that the faster the charge accelerates the greater the radiation will be. We would expect this since the radiation field is dependent upon acceleration."}
+{"text":"The full derivation can be found here."}
+{"text":"Here is an explanation which can help understanding the above page."}
+{"text":"This approach is based on the finite speed of light. A charge moving with"}
+{"text":"constant velocity has a radial electric field formula_34"}
+{"text":"from the charge), always emerging from the future position of the charge,"}
+{"text":"and there is no tangential component of the electric field formula_36."}
+{"text":"This future position is completely deterministic as long as the velocity"}
+{"text":"is constant. When the velocity of the charge changes, (say it bounces back"}
+{"text":"during a short time) the future position \"jumps\", so from this moment and"}
+{"text":"on, the radial electric field formula_34 emerges from a new"}
+{"text":"position. Given the fact that the electric field must be continuous, a"}
+{"text":"non-zero tangential component of the electric field formula_38 appears,"}
+{"text":"which decreases like formula_24 (unlike the radial component which"}
+{"text":"Hence, at large distances from the charge, the radial component is negligible"}
+{"text":"relative to the tangential component, and in addition to that, fields which"}
+{"text":"behave like formula_23 cannot radiate, because the Poynting vector"}
+{"text":"associated with them will behave like formula_42."}
+{"text":"The tangential component comes out (SI units):"}
+{"text":"And to obtain the Larmour formula, one has to integrate over all angles, at"}
+{"text":"large distance formula_15 from the charge, the"}
+{"text":"Poynting vector associated with formula_38, which is:"}
+{"text":"Since formula_49, we recover the result quoted at the top of the article, namely"}
+{"text":"Written in terms of momentum, , the non-relativistic Larmor formula is (in CGS units)"}
+{"text":"The power can be shown to be Lorentz invariant. Any relativistic generalization of the Larmor formula must therefore relate to some other Lorentz invariant quantity. The quantity formula_52 appearing in the non-relativistic formula suggests that the relativistically correct formula should include the Lorentz scalar found by taking the inner product of the four-acceleration with itself [here is the four-momentum]. The correct relativistic generalization of the Larmor formula is (in CGS units)"}
+{"text":"It can be shown that this inner product is given by"}
+{"text":"and so in the limit , it reduces to formula_54, thus reproducing the nonrelativistic case."}
+{"text":"The above inner product can also be written in terms of and its time derivative. Then the relativistic generalization of the Larmor formula is (in CGS units)"}
+{"text":"This is the Li\u00e9nard result, which was first obtained in 1898. The formula_55 means that when the Lorentz factor formula_56 is very close to one (i.e. formula_57) the radiation emitted by the particle is likely to be negligible. However, as formula_58 the radiation grows like formula_55 as the particle tries to lose its energy in the form of EM waves. Also, when the acceleration and velocity are orthogonal the power is reduced by a factor of formula_60, i.e. the factor formula_55 becomes formula_62. The faster the motion becomes the greater this reduction gets."}
+{"text":"We can use Li\u00e9nard's result to predict what sort of radiation losses to expect in different kinds of motion."}
+{"text":"The angular distribution of radiated power is given by a general formula, applicable whether or not the particle is relativistic. In CGS units, this formula is"}
+{"text":"where formula_64 is a unit vector pointing from the particle towards the observer. In the case of linear motion (velocity parallel to acceleration), this simplifies to"}
+{"text":"where formula_28 is the angle between the observer and the particle's motion."}
+{"text":"The radiation from a charged particle carries energy and momentum. In order to satisfy energy and momentum conservation, the charged particle must experience a recoil at the time of emission. The radiation must exert an additional force on the charged particle. This force is known as the Abraham\u2013Lorentz force in the nonrelativistic limit and the Abraham\u2013Lorentz\u2013Dirac force in the relativistic setting."}
+{"text":"A classical electron in the Bohr model orbiting a nucleus experiences acceleration and should radiate. Consequently, the electron loses energy and the electron should eventually spiral into the nucleus. Atoms, according to classical mechanics, are consequently unstable. This classical prediction is violated by the observation of stable electron orbits. The problem is resolved with a quantum mechanical description of atomic physics, initially provided by the Bohr model. Classical solutions to the stability of electron orbitals can be demonstrated using Non-radiation conditions and in accordance with known physical laws."}
+{"text":"In fluid dynamics the Morison equation is a semi-empirical equation for the inline force on a body in oscillatory flow. It is sometimes called the MOJS equation after all four authors\u2014Morison, O'Brien, Johnson and Schaaf\u2014of the 1950 paper in which the equation was introduced. The Morison equation is used to estimate the wave loads in the design of oil platforms and other offshore structures."}
+{"text":"The Morison equation is the sum of two force components: an inertia force in phase with the local flow acceleration and a drag force proportional to the (signed) square of the instantaneous flow velocity. The inertia force is of the functional form as found in potential flow theory, while the drag force has the form as found for a body placed in a steady flow. In the heuristic approach of Morison, O'Brien, Johnson and Schaaf these two force components, inertia and drag, are simply added to describe the inline force in an oscillatory flow. The transverse force\u2014perpendicular to the flow direction, due to vortex shedding\u2014has to be addressed separately."}
+{"text":"The Morison equation contains two empirical hydrodynamic coefficients\u2014an inertia coefficient and a drag coefficient\u2014which are determined from experimental data. As shown by dimensional analysis and in experiments by Sarpkaya, these coefficients depend in general on the Keulegan\u2013Carpenter number, Reynolds number and surface roughness."}
+{"text":"The descriptions given below of the Morison equation are for uni-directional onflow conditions as well as body motion."}
+{"text":"In an oscillatory flow with flow velocity formula_1, the Morison equation gives the inline force parallel to the flow direction:"}
+{"text":"For instance for a circular cylinder of diameter \"D\" in oscillatory flow, the reference area per unit cylinder length is formula_12 and the cylinder volume per unit cylinder length is formula_13. As a result, formula_3 is the total force per unit cylinder length:"}
+{"text":"Besides the inline force, there are also oscillatory lift forces perpendicular to the flow direction, due to vortex shedding. These are not covered by the Morison equation, which is only for the inline forces."}
+{"text":"In case the body moves as well, with velocity formula_16, the Morison equation becomes:"}
+{"text":"Note that the added mass coefficient formula_18 is related to the inertia coefficient formula_19 as formula_10."}
+{"text":"Stokes flow (named after George Gabriel Stokes), also named creeping flow or creeping motion, is a type of fluid flow where advective inertial forces are small compared with viscous forces. The Reynolds number is low, i.e. formula_1. This is a typical situation in flows where the fluid velocities are very slow, the viscosities are very large, or the length-scales of the flow are very small. Creeping flow was first studied to understand lubrication. In nature this type of flow occurs in the swimming of microorganisms and sperm and the flow of lava. In technology, it occurs in paint, MEMS devices, and in the flow of viscous polymers generally."}
+{"text":"The equations of motion for Stokes flow, called the Stokes equations, are a linearization of the Navier\u2013Stokes equations, and thus can be solved by a number of well-known methods for linear differential equations. The primary Green's function of Stokes flow is the Stokeslet, which is associated with a singular point force embedded in a Stokes flow. From its derivatives, other fundamental solutions can be obtained. The Stokeslet was first derived by the Nobel Laureate Hendrik Lorentz, as far back as 1896. Despite its name, Stokes never knew about the Stokeslet; the name was coined by Hancock in 1953. The closed-form fundamental solutions for the generalized unsteady Stokes and Oseen flows associated with arbitrary time-dependent translational and rotational motions have been derived for the Newtonian and micropolar fluids."}
+{"text":"The equation of motion for Stokes flow can be obtained by linearizing the steady state Navier-Stokes equations. The inertial forces are assumed to be negligible in comparison to the viscous forces, and eliminating the inertial terms of the momentum balance in the Navier\u2013Stokes equations reduces it to the momentum balance in the Stokes equations:"}
+{"text":"where formula_3 is the stress (sum of viscous and pressure stresses), and formula_4 an applied body force. The full Stokes equations also include an equation for the conservation of mass, commonly written in the form:"}
+{"text":"where formula_6 is the fluid density and formula_7 the fluid velocity. To obtain the equations of motion for incompressible flow, it is assumed that the density, formula_6, is a constant."}
+{"text":"Furthermore, occasionally one might consider the unsteady Stokes equations, in which the term formula_9 is added to the left hand side of the momentum balance equation."}
+{"text":"The Stokes equations represent a considerable simplification of the full Navier\u2013Stokes equations, especially in the incompressible Newtonian case. They are the leading-order simplification of the full Navier\u2013Stokes equations, valid in the distinguished limit formula_10"}
+{"text":"While these properties are true for incompressible Newtonian Stokes flows, the non-linear and sometimes time-dependent nature of non-Newtonian fluids means that they do not hold in the more general case."}
+{"text":"An interesting property of Stokes flow is known as the Stokes' paradox: that there can be no Stokes flow of a fluid around a disk in two dimensions; or, equivalently, the fact there is no non-trivial solution for the Stokes equations around an infinitely long cylinder."}
+{"text":"A Taylor\u2013Couette system can create laminar flows in which concentric cylinders of fluid move past each other in an apparent spiral. A fluid such as corn syrup with high viscosity fills the gap between two cylinders, with colored regions of the fluid visible through the transparent outer cylinder."}
+{"text":"The cylinders are rotated relative to one another at a low speed, which together with the high viscosity of the fluid and thinness of the gap gives a low Reynolds number, so that the apparent mixing of colors is actually laminar and can then be reversed to approximately the initial state. This creates a dramatic demonstration of seemingly mixing a fluid and then unmixing it by reversing the direction of the mixer."}
+{"text":"In the common case of an incompressible Newtonian fluid, the Stokes equations take the (vectorized) form:"}
+{"text":"where formula_7 is the velocity of the fluid, formula_13 is the gradient of the pressure, formula_14 is the dynamic viscosity, and formula_4 an applied body force. The resulting equations are linear in velocity and pressure, and therefore can take advantage of a variety of linear differential equation solvers."}
+{"text":"With the velocity vector expanded as formula_16 and similarly the body force vector formula_17, we may write the vector equation explicitly,"}
+{"text":"We arrive at these equations by making the assumptions that formula_19 and the density formula_6 is a constant."}
+{"text":"The equation for an incompressible Newtonian Stokes flow can be solved by the stream function method in planar or in 3-D axisymmetric cases"}
+{"text":"The linearity of the Stokes equations in the case of an incompressible Newtonian fluid means that a Green's function, formula_21, exists. The Green's function is found by solving the Stokes equations with the forcing term replaced by a point force acting at the origin, and boundary conditions vanishing at infinity:"}
+{"text":"where formula_23 is the Dirac delta function, and formula_24 represents a point force acting at the origin. The solution for the pressure \"p\" and velocity u with |u| and \"p\" vanishing at infinity is given by"}
+{"text":"The terms Stokeslet and point-force solution are used to describe formula_27. Analogous to the point charge in electrostatics, the Stokeslet is force-free everywhere except at the origin, where it contains a force of strength formula_28."}
+{"text":"For a continuous-force distribution (density) formula_29 the solution (again vanishing at infinity) can then be constructed by superposition:"}
+{"text":"This integral representation of the velocity can be viewed as a reduction in dimensionality: from the three-dimensional partial differential equation to a two-dimensional integral equation for unknown densities."}
+{"text":"The Papkovich\u2013Neuber solution represents the velocity and pressure fields of an incompressible Newtonian Stokes flow in terms of two harmonic potentials."}
+{"text":"Certain problems, such as the evolution of the shape of a bubble in a Stokes flow, are conducive to numerical solution by the boundary element method. This technique can be applied to both 2- and 3-dimensional flows."}
+{"text":"Hele-Shaw flow is an example of a geometry for which inertia forces are negligible. It is defined by two parallel plates arranged very close together with the space between the plates occupied partly by fluid and partly by obstacles in the form of cylinders with generators normal to the plates."}
+{"text":"Slender-body theory in Stokes flow is a simple approximate method of determining the irrotational flow field around bodies whose length is large compared with their width. The basis of the method is to choose a distribution of flow singularities along a line (since the body is slender) so that their irrotational flow in combination with a uniform stream approximately satisfies the zero normal velocity condition."}
+{"text":"Lamb's general solution arises from the fact that the pressure formula_31 satisfies the Laplace equation, and can be expanded in a series of solid spherical harmonics in spherical coordinates. As a result, the solution to the Stokes equations can be written:"}
+{"text":"where formula_33 and formula_34 are solid spherical harmonics of order formula_35:"}
+{"text":"and the formula_37 are the associated Legendre polynomials. The Lamb's solution can be used to describe the motion of fluid either inside or outside a sphere. For example, it can be used to describe the motion of fluid around a spherical particle with prescribed surface flow, a so-called squirmer, or to describe the flow inside a spherical drop of fluid. For interior flows, the terms with formula_38 are dropped, while for exterior flows the terms with formula_39 are dropped (often the convention formula_40 is assumed for exterior flows to avoid indexing by negative numbers)."}
+{"text":"The drag resistance to a moving sphere, also known as Stokes' solution is here summarised. Given a sphere of radius formula_41, travelling at velocity formula_42, in a Stokes fluid with dynamic viscosity formula_14, the drag force formula_44 is given by:"}
+{"text":"The Stokes solution dissipates less energy than any other solenoidal vector field with the same boundary velocities: this is known as the Helmholtz minimum dissipation theorem."}
+{"text":"The Lorentz reciprocal theorem states a relationship between two Stokes flows in the same region. Consider fluid filled region formula_46 bounded by surface formula_47. Let the velocity fields formula_7 and formula_49 solve the Stokes equations in the domain formula_46, each with corresponding stress fields formula_51 and formula_52. Then the following equality holds:"}
+{"text":"Where formula_54 is the unit normal on the surface formula_47. The Lorentz reciprocal theorem can be used to show that Stokes flow \"transmits\" unchanged the total force and torque from an inner closed surface to an outer enclosing surface. The Lorentz reciprocal theorem can also be used to relate the swimming speed of a microorganism, such as cyanobacterium, to the surface velocity which is prescribed by deformations of the body shape via cilia or flagella."}
+{"text":"The Fax\u00e9n's laws are direct relations that express the multipole moments in terms of the ambient flow and its derivatives. First developed by Hilding Fax\u00e9n to calculate the force, formula_28, and torque, formula_57 on a sphere, they took the following form:"}
+{"text":"where formula_14 is the dynamic viscosity, formula_41 is the particle radius, formula_61 is the ambient flow, formula_62 is the speed of the particle, formula_63 is the angular velocity of the background flow, and formula_64 is the angular velocity of the particle."}
+{"text":"The Fax\u00e9n's laws can be generalized to describe the moments of other shapes, such as ellipsoids, spheroids, and spherical drops."}
+{"text":"The shallow-water equations are a set of hyperbolic partial differential equations (or parabolic if viscous shear is considered) that describe the flow below a pressure surface in a fluid (sometimes, but not necessarily, a free surface). The shallow-water equations in unidirectional form are also called Saint-Venant equations, after Adh\u00e9mar Jean Claude Barr\u00e9 de Saint-Venant (see the related section below)."}
+{"text":"The equations are derived from depth-integrating the Navier\u2013Stokes equations, in the case where the horizontal length scale is much greater than the vertical length scale. Under this condition, conservation of mass implies that the vertical velocity scale of the fluid is small compared to the horizontal velocity scale. It can be shown from the momentum equation that vertical pressure gradients are nearly hydrostatic, and that horizontal pressure gradients are due to the displacement of the pressure surface, implying that the horizontal velocity field is constant throughout the depth of the fluid. Vertically integrating allows the vertical velocity to be removed from the equations. The shallow-water equations are thus derived."}
+{"text":"While a vertical velocity term is not present in the shallow-water equations, note that this velocity is not necessarily zero. This is an important distinction because, for example, the vertical velocity cannot be zero when the floor changes depth, and thus if it were zero only flat floors would be usable with the shallow-water equations. Once a solution (i.e. the horizontal velocities and free surface displacement) has been found, the vertical velocity can be recovered via the continuity equation."}
+{"text":"Situations in fluid dynamics where the horizontal length scale is much greater than the vertical length scale are common, so the shallow-water equations are widely applicable. They are used with Coriolis forces in atmospheric and oceanic modeling, as a simplification of the primitive equations of atmospheric flow."}
+{"text":"Shallow-water equation models have only one vertical level, so they cannot directly encompass any factor that varies with height. However, in cases where the mean state is sufficiently simple, the vertical variations can be separated from the horizontal and several sets of shallow-water equations can describe the state."}
+{"text":"The shallow-water equations are derived from equations of conservation of mass and conservation of linear momentum (the Navier\u2013Stokes equations), which hold even when the assumptions of shallow-water break down, such as across a hydraulic jump. In the case of a horizontal bed, with negligible Coriolis forces, frictional and viscous forces, the shallow-water equations are:"}
+{"text":"Here \"\u03b7\" is the total fluid column height (instantaneous fluid depth as a function of \"x\", \"y\" and \"t\"), and the 2D vector (\"u\",\"v\") is the fluid's horizontal flow velocity, averaged across the vertical column. Further \"g\" is acceleration due to gravity and \u03c1 is the fluid density. The first equation is derived from mass conservation, the second two from momentum conservation."}
+{"text":"Expanding the derivatives in the above using the product rule, the non-conservative form of the shallow-water equations is obtained. Since velocities are not subject to a fundamental conservation equation, the non-conservative forms do not hold across a shock or hydraulic jump. Also included are the appropriate terms for Coriolis, frictional and viscous forces, to obtain (for constant fluid density):"}
+{"text":"It is often the case that the terms quadratic in \"u\" and \"v\", which represent the effect of bulk advection, are small compared to the other terms. This is called geostrophic balance, and is equivalent to saying that the Rossby number is small. Assuming also that the wave height is very small compared to the mean height (), we have (without lateral viscous forces):"}
+{"text":"The one-dimensional (1-D) Saint-Venant equations were derived by Adh\u00e9mar Jean Claude Barr\u00e9 de Saint-Venant, and are commonly used to model transient open-channel flow and surface runoff. They can be viewed as a contraction of the two-dimensional (2-D) shallow-water equations, which are also known as the two-dimensional Saint-Venant equations. The 1-D Saint-Venant equations contain to a certain extent the main characteristics of the channel cross-sectional shape."}
+{"text":"The 1-D equations are used extensively in computer models such as TUFLOW, Mascaret (EDF), SIC (Irstea), HEC-RAS, SWMM5, ISIS, InfoWorks, Flood Modeller, SOBEK 1DFlow, MIKE 11, and MIKE SHE because they are significantly easier to solve than the full shallow-water equations. Common applications of the 1-D Saint-Venant equations include flood routing along rivers (including evaluation of measures to reduce the risks of flooding), dam break analysis, storm pulses in an open channel, as well as storm runoff in overland flow."}
+{"text":"The system of partial differential equations which describe the 1-D incompressible flow in an open channel of arbitrary cross section \u2013 as derived and posed by Saint-Venant in his 1871 paper (equations 19 & 20) \u2013 is:"}
+{"text":"where \"x\" is the space coordinate along the channel axis, \"t\" denotes time, \"A\"(\"x\",\"t\") is the cross-sectional area of the flow at location \"x\", \"u\"(\"x\",\"t\") is the flow velocity, \"\u03b6\"(\"x\",\"t\") is the free surface elevation and \u03c4(\"x\",\"t\") is the wall shear stress along the wetted perimeter \"P\"(\"x\",\"t\") of the cross section at \"x\". Further \u03c1 is the (constant) fluid density and \"g\" is the gravitational acceleration."}
+{"text":"Closure of the hyperbolic system of equations ()\u2013() is obtained from the geometry of cross sections \u2013 by providing a functional relationship between the cross-sectional area \"A\" and the surface elevation \u03b6 at each position \"x\". For example, for a rectangular cross section, with constant channel width \"B\" and channel bed elevation \"z\"b, the cross sectional area is: . The instantaneous water depth is with \"z\"b(\"x\") the bed level (i.e. elevation of the lowest point in the bed above datum, see the cross-section figure). For non-moving channel walls the cross-sectional area \"A\" in equation () can be written as:"}
+{"text":"with \"b\"(\"x\",\"h\") the effective width of the channel cross section at location \"x\" when the fluid depth is \"h\" \u2013 so for rectangular channels."}
+{"text":"The wall shear stress \u03c4 is dependent on the flow velocity \"u\", they can be related by using e.g. the Darcy\u2013Weisbach equation, Manning formula or Ch\u00e9zy formula."}
+{"text":"Further, equation () is the continuity equation, expressing conservation of water volume for this incompressible homogeneous fluid. Equation () is the momentum equation, giving the balance between forces and momentum change rates."}
+{"text":"The bed slope \"S\"(\"x\"), friction slope \"S\"f(\"x\",\"t\") and hydraulic radius \"R\"(\"x\",\"t\") are defined as:"}
+{"text":"Consequently, the momentum equation () can be written as:"}
+{"text":"The momentum equation () can also be cast in the so-called conservation form, through some algebraic manipulations on the Saint-Venant equations, () and (). In terms of the discharge :"}
+{"text":"where \"A\", \"I\"1 and \"I\"2 are functions of the channel geometry, described in the terms of the channel width \"B\"(\u03c3,\"x\"). Here \u03c3 is the height above the lowest point in the cross section at location \"x\", see the cross-section figure. So \u03c3 is the height above the bed level \"z\"b(\"x\") (of the lowest point in the cross section):"}
+{"text":"Above \u2013 in the momentum equation () in conservation form \u2013 \"A\", \"I\"1 and \"I\"2 are evaluated at . The term describes the hydrostatic force in a certain cross section. And, for a non-prismatic channel, gives the effects of geometry variations along the channel axis \"x\"."}
+{"text":"In applications, depending on the problem at hand, there often is a preference for using either the momentum equation in non-conservation form, () or (), or the conservation form (). For instance in case of the description of hydraulic jumps, the conservation form is preferred since the momentum flux is continuous across the jump."}
+{"text":"The Saint-Venant equations ()\u2013() can be analysed using the method of characteristics. The two celerities d\"x\"\/d\"t\" on the characteristic curves are:"}
+{"text":"The Froude number determines whether the flow is subcritical () or supercritical ()."}
+{"text":"For a rectangular and prismatic channel of constant width \"B\", i.e. with and , the Riemann invariants are:"}
+{"text":"so the equations in characteristic form are:"}
+{"text":"The Riemann invariants and method of characteristics for a prismatic channel of arbitrary cross-section are described by Didenkulova & Pelinovsky (2011)."}
+{"text":"The characteristics and Riemann invariants provide important information on the behavior of the flow, as well as that they may be used in the process of obtaining (analytical or numerical) solutions."}
+{"text":"The dynamic wave is the full one-dimensional Saint-Venant equation. It is numerically challenging to solve, but is valid for all channel flow scenarios. The dynamic wave is used for modeling transient storms in modeling programs including Mascaret (EDF), SIC (Irstea), HEC-RAS, InfoWorks_ICM, MIKE 11, Wash 123d and SWMM5."}
+{"text":"In the order of increasing simplifications, by removing some terms of the full 1D Saint-Venant equations (aka Dynamic wave equation), we get the also classical Diffusive wave equation and Kinematic wave equation."}
+{"text":"For the diffusive wave it is assumed that the inertial terms are less than the gravity, friction, and pressure terms. The diffusive wave can therefore be more accurately described as a non-inertia wave, and is written as:"}
+{"text":"The diffusive wave is valid when the inertial acceleration is much smaller than all other forms of acceleration, or in other words when there is primarily subcritical flow, with low Froude values. Models that use the diffusive wave assumption include MIKE SHE and LISFLOOD-FP. In the SIC (Irstea) software this options is also available, since the 2 inertia terms (or any of them) can be removed in option from the interface."}
+{"text":"For the kinematic wave it is assumed that the flow is uniform, and that the friction slope is approximately equal to the slope of the channel. This simplifies the full Saint-Venant equation to the kinematic wave:"}
+{"text":"The kinematic wave is valid when the change in wave height over distance and velocity over distance and time is negligible relative to the bed slope, e.g. for shallow flows over steep slopes. The kinematic wave is used in HEC-HMS."}
+{"text":"The 1-D Saint-Venant momentum equation can be derived from the Navier\u2013Stokes equations that describe fluid motion. The \"x\"-component of the Navier\u2013Stokes equations \u2013 when expressed in Cartesian coordinates in the \"x\"-direction \u2013 can be written as:"}
+{"text":"where \"u\" is the velocity in the \"x\"-direction, \"v\" is the velocity in the \"y\"-direction, \"w\" is the velocity in the \"z\"-direction, \"t\" is time, \"p\" is the pressure, \u03c1 is the density of water, \u03bd is the kinematic viscosity, and \"f\"x is the body force in the \"x\"-direction."}
+{"text":"The local acceleration (a) can also be thought of as the \"unsteady term\" as this describes some change in velocity over time. The convective acceleration (b) is an acceleration caused by some change in velocity over position, for example the speeding up or slowing down of a fluid entering a constriction or an opening, respectively. Both these terms make up the inertia terms of the 1-dimensional Saint-Venant equation."}
+{"text":"The pressure gradient term (c) describes how pressure changes with position, and since the pressure is assumed hydrostatic, this is the change in head over position. The friction term (d) accounts for losses in energy due to friction, while the gravity term (e) is the acceleration due to bed slope."}
+{"text":"The primitive equations are a set of nonlinear differential equations that are used to approximate global atmospheric flow and are used in most atmospheric models. They consist of three main sets of balance equations:"}
+{"text":"The primitive equations may be linearized to yield Laplace's tidal equations, an eigenvalue problem from which the analytical solution to the latitudinal structure of the flow may be determined."}
+{"text":"In general, nearly all forms of the primitive equations relate the five variables \"u\", \"v\", \u03c9, \"T\", \"W\", and their evolution over space and time."}
+{"text":"The equations were first written down by Vilhelm Bjerknes."}
+{"text":"Forces that cause atmospheric motion include the pressure gradient force, gravity, and viscous friction. Together, they create the forces that accelerate our atmosphere."}
+{"text":"The pressure gradient force causes an acceleration forcing air from regions of high pressure to regions of low pressure. Mathematically, this can be written as:"}
+{"text":"The gravitational force accelerates objects at approximately 9.8\u00a0m\/s2 directly towards the center of the Earth."}
+{"text":"The force due to viscous friction can be approximated as:"}
+{"text":"Using Newton's second law, these forces (referenced in the equations above as the accelerations due to these forces) may be summed to produce an equation of motion that describes this system. This equation can be written in the form:"}
+{"text":"Therefore, to complete the system of equations and obtain 6 equations and 6 variables:"}
+{"text":"where n is the number density in mol, and T:=RT is the temperature equivalent value in Joule\/mol."}
+{"text":"The precise form of the primitive equations depends on the vertical coordinate system chosen, such as pressure coordinates, log pressure coordinates, or sigma coordinates. Furthermore, the velocity, temperature, and geopotential variables may be decomposed into mean and perturbation components using Reynolds decomposition."}
+{"text":"Pressure coordinate in vertical, Cartesian tangential plane."}
+{"text":"In this form pressure is selected as the vertical coordinate and the horizontal coordinates are written for the Cartesian tangential plane (i.e. a plane tangent to some point on the surface of the Earth). This form does not take the curvature of the Earth into account, but is useful for visualizing some of the physical processes involved in formulating the equations due to its relative simplicity."}
+{"text":"Note that the capital D time derivatives are material derivatives. Five equations in five unknowns comprise the system."}
+{"text":"When a statement of the conservation of water vapor substance is included, these six equations form the basis for any numerical weather prediction scheme."}
+{"text":"Primitive equations using sigma coordinate system, polar stereographic projection."}
+{"text":"According to the \"National Weather Service Handbook No. 1 \u2013 Facsimile Products\", the primitive equations can be simplified into the following equations:"}
+{"text":"This equation and notation works in much the same way as the temperature equation. This equation describes the motion of water from one place to another at a point without taking into account water that changes form. Inside a given system, the total change in water with time is zero. However, concentrations are allowed to move with the wind."}
+{"text":"These simplifications make it much easier to understand what is happening in the model. Things like the temperature (potential temperature), precipitable water, and to an extent the pressure thickness simply move from one spot on the grid to another with the wind. The wind is forecast slightly differently. It uses geopotential, specific heat, the exner function \"\u03c0\", and change in sigma coordinate."}
+{"text":"The analytic solution to the linearized primitive equations involves a sinusoidal oscillation in time and longitude, modulated by coefficients related to height and latitude."}
+{"text":"where \"s\" and formula_39 are the zonal wavenumber and angular frequency, respectively. The solution represents atmospheric waves and tides."}
+{"text":"When the coefficients are separated into their height and latitude components, the height dependence takes the form of propagating or evanescent waves (depending on conditions), while the latitude dependence is given by the Hough functions."}
+{"text":"This analytic solution is only possible when the primitive equations are linearized and simplified. Unfortunately many of these simplifications (i.e. no dissipation, isothermal atmosphere) do not correspond to conditions in the actual atmosphere. As a result, a numerical solution which takes these factors into account is often calculated using general circulation models and climate models."}
+{"text":"Collaborative Research and Training Site, Review of the Primitive Equations."}
+{"text":"The Kaufmann vortex, also known as the Scully model, is a mathematical model for a vortex taking account of viscosity. It uses an algebraic velocity profile."}
+{"text":"Kaufmann and Scully's model for the velocity in the \u0398 direction is:"}
+{"text":"The model was suggested by Scully and Sullivan in 1972 at Massachusetts Institute of Technology, and earlier by W. Kaufmann in 1962."}
+{"text":"The basic form of a 2-dimensional thin film equation is"}
+{"text":"and \"\u03bc\" is the viscosity (or dynamic viscosity) of the liquid, \"h\"(\"x\",\"y\",\"t\") is film thickness, \"\u03b3\" is the interfacial tension between the liquid and the gas phase above it, formula_8 is the liquid density and formula_9 the surface shear. The surface shear could be caused by flow of the overlying gas or surface tension gradients. The vectors formula_10 represent the unit vector in the surface co-ordinate directions, the dot product serving to identify the gravity component in each direction. The vector formula_11 is the unit vector perpendicular to the surface."}
+{"text":"A generalised thin film equation is discussed in"}
+{"text":"When formula_13 this may represent flow with slip at the solid surface whole formula_14 describes the thickness of a thin bridge between two masses of fluid in a Hele-Shaw cell. The value formula_15 represents surface tension driven flow."}
+{"text":"A form frequently investigated with regard to the rupture of thin liquid films involves the addition of a disjoining pressure \u03a0(\"h\") in the equation, as in"}
+{"text":"where the function \u03a0(\"h\") is usually very small in value for moderate-large film thicknesses \"h\" and grows very rapidly when \"h\" goes very close to zero."}
+{"text":"Physical applications, properties and solution behaviour of the thin-film equation are reviewed in. With the inclusion of phase change at the substrate a form of thin film equation for an arbitrary surface is derived in. A detailed study of the steady-flow of a thin film near a moving contact line is given in. For a yield-stress fluid flow driven by gravity and surface tension is investigated in."}
+{"text":"For purely surface tension driven flow it is easy to see that one static (time-independent) solution is a paraboloid of revolution"}
+{"text":"and this is consistent with the experimentally observed spherical cap shape of a static sessile drop, as a \"flat\" spherical cap that has small height can be accurately approximated in second order with a paraboloid. This, however, does not handle correctly the circumference of the droplet where the value of the function \"h\"(\"x\",\"y\") drops to zero and below, as a real physical liquid film can't have a negative thickness. This is one reason why the disjoining pressure term \u03a0(\"h\") is important in the theory."}
+{"text":"One possible realistic form of the disjoining pressure term is"}
+{"text":"where \"B\", \"h\"*, \"m\" and \"n\" are some parameters. These constants and the surface tension formula_19 can be approximately related to the equilibrium liquid-solid contact angle formula_20 through the equation"}
+{"text":"The thin film equation can be used to simulate several behaviors of liquids, such as the fingering instability in gravity driven flow."}
+{"text":"The lack of a second-order time derivative in the thin-film equation is a result of the assumption of small Reynold's number in its derivation, which allows the ignoring of inertial terms dependent on fluid density formula_8. This is somewhat similar to the situation with Washburn's equation, which describes the capillarity-driven flow of a liquid in a thin tube."}
+{"text":"The barotropic vorticity equation assumes the atmosphere is nearly barotropic, which means that the direction and speed of the geostrophic wind are independent of height. In other words, there is no vertical wind shear of the geostrophic wind. It also implies that thickness contours (a proxy for temperature) are parallel to upper level height contours. In this type of atmosphere, high and low pressure areas are centers of warm and cold temperature anomalies. Warm-core highs (such as the subtropical ridge and the Bermuda-Azores high) and cold-core lows have strengthening winds with height, with the reverse true for cold-core highs (shallow Arctic highs) and warm-core lows (such as tropical cyclones)."}
+{"text":"A simplified form of the vorticity equation for an inviscid, divergence-free flow (solenoidal velocity field), the barotropic vorticity equation can simply be stated as"}
+{"text":"is \"absolute vorticity\", with \"\u03b6\" being \"relative vorticity\", defined as the vertical component of the curl of the fluid velocity and \"f\" is the \"Coriolis parameter\""}
+{"text":"where \"\u03a9\" is the angular frequency of the planet's"}
+{"text":"rotation (\"\u03a9\" = for the earth) and \"\u03c6\" is latitude."}
+{"text":"In terms of \"relative vorticity\", the equation can be rewritten as"}
+{"text":"where \"\u03b2\"\u00a0=\u00a0 is the variation of the \"Coriolis parameter\" with distance \"y\" in the north\u2013south direction and \"v\" is the component of velocity in this direction."}
+{"text":"In 1950, Charney, Fj\u00f8rtoft, and von Neumann integrated this equation (with an added diffusion term on the right-hand side) on a computer for the first time, using an observed field of 500\u00a0hPa geopotential height for the first timestep. This was one of the first successful instances of numerical weather prediction."}
+{"text":"The Rankine vortex is a simple mathematical model of a vortex in a viscous fluid. It is named after its discoverer, William John Macquorn Rankine."}
+{"text":"A swirling flow in a viscous fluid can be characterized by a central core comprising a forced vortex, surrounded by a free vortex. In an inviscid fluid, on the other hand, a swirling flow consists entirely of a free vortex with a singularity at its center point. The tangential velocity"}
+{"text":"of a Rankine vortex with circulation formula_1 and radius formula_2 is"}
+{"text":"The remainder of the velocity components are identically zero, so that the total velocity field is formula_4."}
+{"text":"In the field of physics, engineering, and earth sciences, advection is the transport of a substance or quantity by bulk motion of a fluid. The properties of that substance are carried with it. Generally the majority of the advected substance is a fluid. The properties that are carried with the advected substance are conserved properties such as energy. An example of advection is the transport of pollutants or silt in a river by bulk water flow downstream. Another commonly advected quantity is energy or enthalpy. Here the fluid may be any material that contains thermal energy, such as water or air. In general, any substance or conserved, extensive quantity can be advected by a fluid that can hold or contain the quantity or substance."}
+{"text":"During advection, a fluid transports some conserved quantity or material via bulk motion. The fluid's motion is described mathematically as a vector field, and the transported material is described by a scalar field showing its distribution over space. Advection requires currents in the fluid, and so cannot happen in rigid solids. It does not include transport of substances by molecular diffusion."}
+{"text":"Advection is sometimes confused with the more encompassing process of convection which is the combination of advective transport and diffusive transport."}
+{"text":"In meteorology and physical oceanography, advection often refers to the transport of some property of the atmosphere or ocean, such as heat, humidity (see moisture) or salinity."}
+{"text":"Advection is important for the formation of orographic clouds and the precipitation of water from clouds, as part of the hydrological cycle."}
+{"text":"The term \"advection\" often serves as a synonym for \"convection\", and this correspondence of terms is used in the literature. More technically, convection applies to the movement of a fluid (often due to density gradients created by thermal gradients), whereas advection is the movement of some material by the velocity of the fluid. Thus, somewhat confusingly, it is technically correct to think of momentum being advected by the velocity field in the Navier-Stokes equations, although the resulting motion would be considered to be convection. Because of the specific use of the term convection to indicate transport in association with thermal gradients, it is probably safer to use the term advection if one is uncertain about which terminology best describes their particular system."}
+{"text":"In meteorology and physical oceanography, advection often refers to the horizontal transport of some property of the atmosphere or ocean, such as heat, humidity or salinity, and convection generally refers to vertical transport (vertical advection). Advection is important for the formation of orographic clouds (terrain-forced convection) and the precipitation of water from clouds, as part of the hydrological cycle."}
+{"text":"The advection equation also applies if the quantity being advected is represented by a probability density function at each point, although accounting for diffusion is more difficult."}
+{"text":"The advection equation is the partial differential equation that governs the motion of a conserved scalar field as it is advected by a known velocity vector field. It is derived using the scalar field's conservation law, together with Gauss's theorem, and taking the infinitesimal limit."}
+{"text":"One easily visualized example of advection is the transport of ink dumped into a river. As the river flows, ink will move downstream in a \"pulse\" via advection, as the water's movement itself transports the ink. If added to a lake without significant bulk water flow, the ink would simply disperse outwards from its source in a diffusive manner, which is not advection. Note that as it moves downstream, the \"pulse\" of ink will also spread via diffusion. The sum of these processes is called convection."}
+{"text":"In Cartesian coordinates the advection operator is"}
+{"text":"where formula_2 is the velocity field, and formula_3 is the del operator (note that Cartesian coordinates are used here)."}
+{"text":"The advection equation for a conserved quantity described by a scalar field formula_4 is expressed mathematically by a continuity equation:"}
+{"text":"where formula_5 is the divergence operator and again formula_6 is the velocity vector field. Frequently, it is assumed that the flow is incompressible, that is, the velocity field satisfies"}
+{"text":"In this case, formula_6 is said to be solenoidal. If this is so, the above equation can be rewritten as"}
+{"text":"In particular, if the flow is steady, then"}
+{"text":"which shows that formula_4 is constant along a streamline."}
+{"text":"Hence, formula_11 so formula_4 doesn't vary in time."}
+{"text":"If a vector quantity formula_13 (such as a magnetic field) is being advected by the solenoidal velocity field formula_6, the advection equation above becomes:"}
+{"text":"Here, formula_13 is a vector field instead of the scalar field formula_4."}
+{"text":"The advection equation is not simple to solve numerically: the system is a hyperbolic partial differential equation, and interest typically centers on discontinuous \"shock\" solutions (which are notoriously difficult for numerical schemes to handle)."}
+{"text":"Even with one space dimension and a constant velocity field, the system remains difficult to simulate. The equation becomes"}
+{"text":"where formula_19 is the scalar field being advected"}
+{"text":"and formula_20 is the formula_21 component of the vector formula_22."}
+{"text":"Treatment of the advection operator in the incompressible Navier\u2013Stokes equations."}
+{"text":"According to Zang, numerical simulation can be aided by considering the skew symmetric form for the advection operator."}
+{"text":"and formula_6 is the same as above."}
+{"text":"Since skew symmetry implies only imaginary eigenvalues, this form reduces the \"blow up\" and \"spectral blocking\" often experienced in numerical solutions with sharp discontinuities (see Boyd)."}
+{"text":"Using vector calculus identities, these operators can also be expressed in other ways, available in more software packages for more coordinate systems."}
+{"text":"This form also makes visible that the skew symmetric operator introduces error when the velocity field diverges. Solving the advection equation by numerical methods is very challenging and there is a large scientific literature about this."}
+{"text":"In fluid dynamics, the drag equation is a formula used to calculate the force of drag experienced by an object due to movement through a fully enclosing fluid. The equation is:"}
+{"text":"The equation is attributed to Lord Rayleigh, who originally used \"L\"2 in place of \"A\" (with \"L\" being some linear dimension)."}
+{"text":"For sharp-cornered bluff bodies, like square cylinders and plates held transverse to the flow direction, this equation is applicable with the drag coefficient as a constant value when the Reynolds number is greater than 1000. For smooth bodies, like a circular cylinder, the drag coefficient may vary significantly until Reynolds numbers up to 107 (ten million)."}
+{"text":"The equation is easier understood for the idealized situation where all of the fluid impinges on the reference area and comes to a complete stop, building up stagnation pressure over the whole area. No real object exactly corresponds to this behavior. \"CD\" is the ratio of drag for any real object to that of the ideal object. In practice a rough un-streamlined body (a bluff body) will have a \"CD\" around 1, more or less. Smoother objects can have much lower values of \"CD\". The equation is precise \u2013 it simply provides the definition of \"CD\" (drag coefficient), which varies with the Reynolds number and is found by experiment."}
+{"text":"Of particular importance is the formula_9 dependence on flow velocity, meaning that fluid drag increases with the square of flow velocity. When flow velocity is doubled, for example, not only does the fluid strike with twice the flow velocity, but twice the mass of fluid strikes per second. Therefore, the change of momentum per second is multiplied by four. Force is equivalent to the change of momentum divided by time. This is in contrast with solid-on-solid friction, which generally has very little flow velocity dependence."}
+{"text":"The drag force can also be specified as,"}
+{"text":"where, \"Pd\" is pressure exerted by fluid on area \"A\". Here the pressure \"Pd\" is referred to as dynamic pressure due to kinetic energy of fluid experiencing relative flow velocity \"u\". This is defined in similar form as kinetic energy equation:"}
+{"text":"The drag equation may be derived to within a multiplicative constant by the method of dimensional analysis. If a moving fluid meets an object, it exerts a force on the object. Suppose that the fluid is a liquid, and the variables involved \u2013 under some conditions \u2013 are the:"}
+{"text":"Using the algorithm of the Buckingham \u03c0 theorem, these five variables can be reduced to two dimensionless groups:"}
+{"text":"Alternatively, the dimensionless groups via direct manipulation of the variables."}
+{"text":"That this is so becomes apparent when the drag force \"FD\" is expressed as part of a function of the other variables in the problem:"}
+{"text":"This rather odd form of expression is used because it does not assume a one-to-one relationship. Here, \"fa\" is some (as-yet-unknown) function that takes five arguments. Now the right-hand side is zero in any system of units; so it should be possible to express the relationship described by \"fa\" in terms of only dimensionless groups."}
+{"text":"There are many ways of combining the five arguments of \"fa\" to form dimensionless groups, but the Buckingham \u03c0 theorem states that there will be two such groups. The most appropriate are the Reynolds number, given by"}
+{"text":"Thus the function of five variables may be replaced by another function of only two variables:"}
+{"text":"where \"fb\" is some function of two arguments."}
+{"text":"The original law is then reduced to a law involving only these two numbers."}
+{"text":"Because the only unknown in the above equation is the drag force \"FD\", it is possible to express it as"}
+{"text":"Thus the force is simply \u00bd \"\u03c1\" \"A\" \"u2\" times some (as-yet-unknown) function \"fc\" of the Reynolds number Re \u2013 a considerably simpler system than the original five-argument function given above."}
+{"text":"Dimensional analysis thus makes a very complex problem (trying to determine the behavior of a function of five variables) a much simpler one: the determination of the drag as a function of only one variable, the Reynolds number."}
+{"text":"If the fluid is a gas, certain properties of the gas influence the drag and those properties must also be taken into account. Those properties are conventionally considered to be the absolute temperature of the gas, and the ratio of its specific heats. These two properties determine the speed of sound in the gas at its given temperature. The Buckingham pi theorem then leads to a third dimensionless group, the ratio of the relative velocity to the speed of sound, which is known as the Mach number. Consequently when a body is moving relative to a gas, the drag coefficient varies with the Mach number and the Reynolds number."}
+{"text":"The analysis also gives other information for free, so to speak. The analysis shows that, other things being equal, the drag force will be proportional to the density of the fluid. This kind of information often proves to be extremely valuable, especially in the early stages of a research project."}
+{"text":"To empirically determine the Reynolds number dependence, instead of experimenting on a large body with fast-flowing fluids (such as real-size airplanes in wind tunnels), one may just as well experiment using a small model in a flow of higher velocity because these two systems deliver similitude by having the same Reynolds number. If the same Reynolds number and Mach number cannot be achieved just by using a flow of higher velocity it may be advantageous to use a fluid of greater density or lower viscosity."}
+{"text":"In mathematical physics, the Whitham equation is a non-local model for non-linear dispersive waves."}
+{"text":"The equation is notated as follows :"}
+{"text":"This integro-differential equation for the oscillatory variable \"\u03b7\"(\"x\",\"t\") is named after Gerald Whitham who introduced it as a model to study breaking of non-linear dispersive water waves in 1967. Wave breaking \u2013 bounded solutions with unbounded derivatives \u2013 for the Whitham equation has recently been proven."}
+{"text":"For a certain choice of the kernel \"K\"(\"x\"\u00a0\u2212\u00a0\"\u03be\") it becomes the Fornberg\u2013Whitham equation."}
+{"text":"Using the Fourier transform (and its inverse), with respect to the space coordinate \"x\" and in terms of the wavenumber \"k\":"}
+{"text":"From the mathematical point of view, Euler equations are notably hyperbolic conservation equations in the case without external field (i.e., in the limit of high Froude number). In fact, like any Cauchy equation, the Euler equations originally formulated in convective form (also called \"Lagrangian form\") can also be put in the \"conservation form\" (also called \"Eulerian form\"). The conservation form emphasizes the mathematical interpretation of the equations as conservation equations through a control volume fixed in space, and is the most important for these equations also from a numerical point of view. The convective form emphasizes changes to the state in a frame of reference moving with the fluid."}
+{"text":"The Euler equations first appeared in published form in Euler's article \"Principes g\u00e9n\u00e9raux du mouvement des fluides\", published in \"M\u00e9moires de l'Acad\u00e9mie des Sciences de Berlin\" in 1757 (in this article Euler actually published only the \"general\" form of the continuity equation and the momentum equation; the energy balance equation would be obtained a century later). They were among the first partial differential equations to be written down. At the time Euler published his work, the system of equations consisted of the momentum and continuity equations, and thus was underdetermined except in the case of an incompressible fluid. An additional equation, which was later to be called the adiabatic condition, was supplied by Pierre-Simon Laplace in 1816."}
+{"text":"During the second half of the 19th century, it was found that the equation related to the balance of energy must at all times be kept, while the adiabatic condition is a consequence of the fundamental laws in the case of smooth solutions. With the discovery of the special theory of relativity, the concepts of energy density, momentum density, and stress were unified into the concept of the stress\u2013energy tensor, and energy and momentum were likewise unified into a single concept, the energy\u2013momentum vector"}
+{"text":"Incompressible Euler equations with constant and uniform density."}
+{"text":"In convective form (i.e., the form with the convective operator made explicit in the momentum equation), the incompressible Euler equations in case of density constant in time and uniform in space are:"}
+{"text":"The first equation is the Euler momentum equation with uniform density (for this equation it could also not be constant in time). By expanding the material derivative, the equations become:"}
+{"text":"In fact for a flow with uniform density formula_13 the following identity holds:"}
+{"text":"where formula_15 is the mechanic pressure. The second equation is the incompressible constraint, stating the flow velocity is a solenoidal field (the order of the equations is not causal, but underlines the fact that the incompressible constraint is not a degenerate form of the continuity equation, but rather of the energy equation, as it will become clear in the following). Notably, the continuity equation would be required also in this incompressible case as an additional third equation in case of density varying in time \"or\" varying in space. For example, with density uniform but varying in time, the continuity equation to be added to the above set would correspond to:"}
+{"text":"So the case of constant and uniform density is the only one not requiring the continuity equation as additional equation regardless of the presence or absence of the incompressible constraint. In fact, the case of incompressible Euler equations with constant and uniform density being analyzed is a toy model featuring only two simplified equations, so it is ideal for didactical purposes even if with limited physical relevancy."}
+{"text":"The equations above thus represent respectively conservation of mass (1 scalar equation) and momentum (1 vector equation containing formula_17 scalar components, where formula_17 is the physical dimension of the space of interest). Flow velocity and pressure are the so-called \"physical variables\"."}
+{"text":"In a coordinate system given by formula_19 the velocity and external force vectors formula_1 and formula_21 have components formula_22 and formula_23, respectively. Then the equations may be expressed in subscript notation as:"}
+{"text":"where the formula_25 and formula_26 subscripts label the \"N\"-dimensional space components, and formula_27 is the Kronecker delta. The use of Einstein notation (where the sum is implied by repeated indices instead of sigma notation) is also frequent."}
+{"text":"Although Euler first presented these equations in 1755, many fundamental questions about them remain unanswered."}
+{"text":"In three space dimensions, in certain simplified scenarios, the Euler equations produce singularities."}
+{"text":"Smooth solutions of the free equations (in the sense of having no source term: g = 0) satisfy the conservation of specific kinetic energy:"}
+{"text":"In the one dimensional case without the source term (both pressure gradient and external force), the momentum equation becomes the inviscid Burgers equation:"}
+{"text":"This is a model equation giving many insights on Euler equations."}
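As an illustration, the inviscid Burgers equation can be advanced with a first-order conservative upwind scheme (a minimal sketch; the right-moving step initial data, grid size and CFL number are illustrative choices, not from the source):

```python
import numpy as np

def burgers_step(u, dx, dt):
    """One conservative first-order upwind step for u_t + (u^2/2)_x = 0,
    assuming u >= 0 so information travels to the right."""
    f = 0.5 * u**2
    un = u.copy()
    un[1:] = u[1:] - dt / dx * (f[1:] - f[:-1])  # leftmost cell kept as inflow
    return un

n = 200
x = np.linspace(0.0, 1.0, n)
u = np.where(x < 0.5, 1.0, 0.0)   # right-moving step (shock) at x = 0.5
dx = x[1] - x[0]
dt = 0.4 * dx                     # CFL condition with maximum speed 1

for _ in range(100):
    u = burgers_step(u, dx, dt)
# The discrete shock travels at the Rankine-Hugoniot speed (uL + uR)/2 = 0.5.
```

Because the scheme is conservative, the captured shock moves at the correct jump speed even though the profile is smeared over a few cells.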
+{"text":"In order to make the equations dimensionless, a characteristic length formula_30, and a characteristic velocity formula_31, need to be defined. These should be chosen such that the dimensionless variables are all of order one. The following dimensionless variables are thus obtained:"}
+{"text":"Substitution of these inverse relations in the Euler equations, defining the Froude number, yields (omitting the superscript asterisks):"}
+{"text":"Euler equations in the Froude limit (no external field) are named free equations and are conservative. The limit of high Froude numbers (low external field) is thus notable and can be studied with perturbation theory."}
+{"text":"The conservation form emphasizes the mathematical properties of the Euler equations, and especially the contracted form is often the most convenient one for computational fluid dynamics simulations. Computationally, there are some advantages in using the conserved variables. This gives rise to a large class of numerical methods called conservative methods."}
+{"text":"The free Euler equations are conservative, in the sense they are equivalent to a conservation equation:"}
+{"text":"where the conservation quantity formula_36 in this case is a vector, and formula_37 is a flux matrix. This can be simply proved."}
+{"text":"Finally, the Euler equations can be recast into the particular equation:"}
+{"text":"For certain problems, especially when used to analyze compressible flow in a duct or in case the flow is cylindrically or spherically symmetric, the one-dimensional Euler equations are a useful first approximation. Generally, the Euler equations are solved by Riemann's method of characteristics. This involves finding curves in the plane of independent variables (i.e., formula_38 and formula_39) along which partial differential equations (PDEs) degenerate into ordinary differential equations (ODEs). Numerical solutions of the Euler equations rely heavily on the method of characteristics."}
+{"text":"In convective form the incompressible Euler equations in case of density variable in space are:"}
+{"text":"The first equation, which is the new one, is the incompressible continuity equation. In fact the general continuity equation would be:"}
+{"text":"but here the last term is identically zero for the incompressibility constraint."}
+{"text":"The incompressible Euler equations in the Froude limit are equivalent to a single conservation equation with conserved quantity and associated flux respectively:"}
+{"text":"Here formula_36 has length formula_46 and formula_37 has size formula_48. In general (not only in the Froude limit) Euler equations are expressible as:"}
+{"text":"The variables for the equations in conservation form are not yet optimised. In fact we could define:"}
+{"text":"In differential convective form, the compressible (and most general) Euler equations can be written shortly with the material derivative notation:"}
+{"text":"The equations above thus represent conservation of mass, momentum, and energy: the energy equation expressed in terms of the internal energy makes the link with the incompressible case apparent, but it is not in its simplest form."}
+{"text":"Mass density, flow velocity and pressure are the so-called \"convective variables\" (or physical variables, or Lagrangian variables), while mass density, momentum density and total energy density are the so-called \"conserved variables\" (also called Eulerian, or mathematical variables)."}
+{"text":"If the material derivative is made explicit, the equations above become:"}
+{"text":"Coming back to the incompressible case, it now becomes apparent that the \"incompressible constraint\" typical of the former cases is actually a particular form, valid for incompressible flows, of the \"energy equation\", and not of the mass equation. In particular, the incompressible constraint corresponds to the following very simple energy equation:"}
+{"text":"Thus for an incompressible inviscid fluid the specific internal energy is constant along the flow lines, even in a time-dependent flow. The pressure in an incompressible flow acts like a Lagrange multiplier, being the multiplier of the incompressible constraint in the energy equation, and consequently in incompressible flows it has no thermodynamic meaning. In fact, thermodynamics is typical of compressible flows and degenerates in incompressible flows."}
+{"text":"Based on the mass conservation equation, one can put this equation in the conservation form:"}
+{"text":"meaning that for an incompressible inviscid nonconductive flow a continuity equation holds for the internal energy."}
+{"text":"Since by definition the specific enthalpy is:"}
+{"text":"The material derivative of the specific internal energy can be expressed as:"}
+{"text":"Then by substituting the momentum equation in this expression, one obtains:"}
+{"text":"And by substituting the latter in the energy equation, one obtains the enthalpy form of the Euler energy equation:"}
+{"text":"In a reference frame moving with an inviscid and nonconductive flow, the variation of enthalpy directly corresponds to a variation of pressure."}
+{"text":"In thermodynamics the independent variables are the specific volume, and the specific entropy, while the specific energy is a function of state of these two variables."}
+{"text":"This equation is the only one belonging to the general continuum equations, so only this equation has the same form, for example, also in the Navier-Stokes equations."}
+{"text":"On the other hand, the pressure in thermodynamics is the opposite of the partial derivative of the specific internal energy with respect to the specific volume:"}
+{"text":"since the internal energy in thermodynamics is a function of the two variables aforementioned, the pressure gradient contained in the momentum equation should be made explicit as:"}
+{"text":"It is convenient for brevity to switch the notation for the second order derivatives:"}
+{"text":"can be further simplified in convective form by changing variable from the specific energy to the specific entropy: in fact the first law of thermodynamics in local form can be written:"}
+{"text":"by substituting the material derivative of the internal energy, the energy equation becomes:"}
+{"text":"now the term in parentheses is identically zero according to the conservation of mass, so the Euler energy equation becomes simply:"}
+{"text":"For a thermodynamic fluid, the compressible Euler equations are consequently best written as:"}
+{"text":"In the general case and not only in the incompressible case, the energy equation means that for an inviscid thermodynamic fluid the specific entropy is constant along the flow lines, even in a time-dependent flow. Based on the mass conservation equation, one can put this equation in the conservation form:"}
+{"text":"meaning that for an inviscid nonconductive flow a continuity equation holds for the entropy."}
+{"text":"On the other hand, the two second-order partial derivatives of the specific internal energy in the momentum equation require the specification of the fundamental equation of state of the material considered, i.e. of the specific internal energy as a function of the two variables specific volume and specific entropy:"}
+{"text":"The \"fundamental\" equation of state contains all the thermodynamic information about the system (Callen, 1985), exactly like the pair of a \"thermal\" equation of state together with a \"caloric\" equation of state."}
+{"text":"The Euler equations in the Froude limit are equivalent to a single conservation equation with conserved quantity and associated flux respectively:"}
+{"text":"Here formula_36 has length N + 2 and formula_37 has size N(N + 2). In general (not only in the Froude limit) Euler equations are expressible as:"}
+{"text":"We remark that the Euler equations, even when conservative (no external field, Froude limit), have no Riemann invariants in general; some further assumptions are required to find them."}
+{"text":"However, we already mentioned that for a thermodynamic fluid the equation for the total energy density is equivalent to the conservation equation:"}
+{"text":"Then the conservation equations in the case of a thermodynamic fluid are more simply expressed as:"}
+{"text":"Another possible form for the energy equation, particularly useful for isobaric processes, is:"}
+{"text":"Expanding the fluxes can be an important part of constructing numerical solvers, for example by exploiting (approximate) solutions to the Riemann problem. In regions where the state vector y varies smoothly, the equations in conservative form can be put in quasilinear form:"}
+{"text":"where formula_85 are called the flux Jacobians defined as the matrices:"}
+{"text":"Obviously this Jacobian does not exist in discontinuity regions (e.g. contact discontinuities, shock waves in inviscid nonconductive flows). If the flux Jacobians formula_85 are not functions of the state vector formula_36, the equations turn out to be \"linear\"."}
+{"text":"The compressible Euler equations can be decoupled into a set of N + 2 wave equations that describe sound in an Eulerian continuum if they are expressed in characteristic variables instead of conserved variables."}
+{"text":"In fact the tensor A is always diagonalizable. If the eigenvalues (the case of Euler equations) are all real the system is defined \"hyperbolic\", and physically eigenvalues represent the speeds of propagation of information. If they are all distinct, the system is defined \"strictly hyperbolic\" (it will be proved to be the case for the one-dimensional Euler equations). Furthermore, diagonalisation of the compressible Euler equations is easier when the energy equation is expressed in the variable entropy (i.e. with the equations for thermodynamic fluids) than in other energy variables. This will become clear by considering the 1D case."}
+{"text":"If formula_89 is the right eigenvector of the matrix formula_90 corresponding to the eigenvalue formula_91, by building the projection matrix:"}
+{"text":"One can finally find the \"characteristic variables\" as:"}
+{"text":"Since A is constant, multiplying the original 1-D equation in flux-Jacobian form with P\u22121 yields the characteristic equations:"}
+{"text":"The original equations have been decoupled into N+2 characteristic equations each describing a simple wave, with the eigenvalues being the wave speeds. The variables \"w\"i are called the \"characteristic variables\" and are a subset of the conservative variables. The solution of the initial value problem in terms of characteristic variables is finally very simple. In one spatial dimension it is:"}
+{"text":"Then the solution in terms of the original conservative variables is obtained by transforming back:"}
+{"text":"this computation can be written explicitly as the linear combination of the eigenvectors:"}
+{"text":"Now it becomes apparent that the characteristic variables act as weights in the linear combination of the Jacobian eigenvectors. The solution can be seen as a superposition of waves, each of which is advected independently without change in shape. Each \"i\"-th wave has shape \"w\"\"i\"\"p\"\"i\" and speed of propagation \"\u03bb\"\"i\". In the following we show a very simple example of this solution procedure."}
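The superposition-of-waves picture can be sketched numerically for a constant-coefficient system y_t + A y_x = 0 (the 2×2 matrix A below is an illustrative choice, not the Euler flux Jacobian): diagonalize A, advect each characteristic variable at its eigenvalue speed, and transform back.

```python
import numpy as np

# Illustrative constant-coefficient hyperbolic system y_t + A y_x = 0.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])      # eigenvalues -1 and +1: strictly hyperbolic

lam, P = np.linalg.eig(A)       # columns of P are the right eigenvectors
Pinv = np.linalg.inv(P)

def solve(y0, x, t):
    """Exact solution: each characteristic variable w_i is advected without
    change of shape, w_i(x, t) = w_i(x - lambda_i t, 0); then y = P w."""
    w0 = lambda xs: Pinv @ np.array([f(xs) for f in y0])
    w = np.array([w0(x - lam[i] * t)[i] for i in range(len(lam))])
    return P @ w

# Initial data: a Gaussian bump in the first component only.
y0 = [lambda xs: np.exp(-xs**2), lambda xs: 0.0]
y = solve(y0, x=1.0, t=1.0)
# The bump splits into two half-amplitude waves moving at speeds +1 and -1.
```

The result is independent of how `numpy` orders or scales the eigenvectors, since any rescaling of a column of P is cancelled by the corresponding row of its inverse.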
+{"text":"Waves in 1D inviscid, nonconductive thermodynamic fluid."}
+{"text":"If one considers Euler equations for a thermodynamic fluid with the two further assumptions of one spatial dimension and free (no external field: \"g\"\u00a0=\u00a00) :"}
+{"text":"If one defines the vector of variables:"}
+{"text":"recalling that formula_69 is the specific volume, formula_101 the flow speed, formula_71 the specific entropy, the corresponding Jacobian matrix is:"}
+{"text":"At first one must find the eigenvalues of this matrix by solving the characteristic equation:"}
+{"text":"This determinant is very simple: the fastest computation starts on the last row, since it has the highest number of zero elements."}
+{"text":"This parameter is always real according to the second law of thermodynamics. In fact the second law of thermodynamics can be expressed by several postulates. The most elementary of them in mathematical terms is the statement of convexity of the fundamental equation of state, i.e. that the Hessian matrix of the specific energy expressed as a function of specific volume and specific entropy:"}
+{"text":"is positive definite. This statement corresponds to the two conditions:"}
+{"text":"The first condition is the one ensuring that the parameter \"a\" is real."}
+{"text":"Then the matrix has three real eigenvalues, all distinct: the 1D Euler equations are a strictly hyperbolic system."}
+{"text":"At this point one should determine the three eigenvectors: each one is obtained by substituting one eigenvalue in the eigenvalue equation and then solving it. By substituting the first eigenvalue \u03bb1 one obtains:"}
+{"text":"Based on the third equation, which simply has the solution s1 = 0, the system reduces to:"}
+{"text":"The two equations are redundant as usual, so the eigenvector is defined up to a multiplicative constant. We choose as right eigenvector:"}
+{"text":"The other two eigenvectors can be found by an analogous procedure as:"}
+{"text":"Then the projection matrix can be built:"}
+{"text":"Finally it becomes apparent that the real parameter \"a\" previously defined is the speed of propagation of the information characteristic of the hyperbolic system made of Euler equations, i.e. it is the \"wave speed\". It remains to be shown that the sound speed corresponds to the particular case of an isentropic transformation:"}
+{"text":"Sound speed is defined as the wavespeed of an isentropic transformation:"}
+{"text":"by the definition of the isentropic compressibility:"}
+{"text":"the sound speed always results as the square root of the ratio between the isentropic compressibility and the density:"}
+{"text":"The sound speed in an ideal gas depends only on its temperature:"}
+{"text":"Since the specific enthalpy in an ideal gas is proportional to its temperature:"}
+{"text":"the sound speed in an ideal gas can also be made dependent only on its specific enthalpy:"}
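The two ideal-gas expressions for the sound speed can be checked against each other in a few lines (a minimal sketch; the values for dry air, R ≈ 287.05 J/(kg·K) and γ = 1.4, are standard approximations used here purely for illustration):

```python
import math

def sound_speed_from_T(gamma, R_specific, T):
    """Ideal-gas sound speed from temperature: c = sqrt(gamma * R * T)."""
    return math.sqrt(gamma * R_specific * T)

def sound_speed_from_h(gamma, h):
    """Equivalent form from the specific enthalpy h = cp * T,
    with cp = gamma * R / (gamma - 1):  c = sqrt((gamma - 1) * h)."""
    return math.sqrt((gamma - 1.0) * h)

gamma, R, T = 1.4, 287.05, 293.15     # dry air at 20 degrees Celsius
h = gamma * R * T / (gamma - 1.0)     # specific enthalpy of the same state
c1 = sound_speed_from_T(gamma, R, T)
c2 = sound_speed_from_h(gamma, h)
# Both give roughly 343 m/s.
```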
+{"text":"Bernoulli's theorem is a direct consequence of the Euler equations."}
+{"text":"The vector calculus identity of the cross product of a curl holds:"}
+{"text":"where the Feynman subscript notation formula_127 is used, which means the subscripted gradient operates only on the factor formula_37."}
+{"text":"Lamb in his famous classical book Hydrodynamics (1895), still in print, used this identity to rewrite the convective term of the flow velocity in rotational form:"}
+{"text":"the Euler momentum equation in Lamb's form becomes:"}
+{"text":"the Euler momentum equation assumes a form that is optimal to demonstrate Bernoulli's theorem for steady flows:"}
+{"text":"In fact, in case of an external conservative field, by defining its potential \u03c6:"}
+{"text":"In case of a steady flow the time derivative of the flow velocity disappears, so the momentum equation becomes:"}
+{"text":"And by projecting the momentum equation on the flow direction, i.e. along a \"streamline\", the cross product disappears because its result is always perpendicular to the velocity:"}
+{"text":"In the steady incompressible case the mass equation is simply:"}
+{"text":"that is, mass conservation for a steady incompressible flow states that the density along a streamline is constant. Then the Euler momentum equation in the steady incompressible case becomes:"}
+{"text":"The convenience of defining the total head for an inviscid liquid flow is now apparent:"}
+{"text":"That is, the momentum balance for a steady inviscid and incompressible flow in an external conservative field states that the total head along a streamline is constant."}
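A minimal numerical check of this statement (the pipe-flow numbers below are illustrative): along a streamline of a steady incompressible inviscid flow, trading pressure for velocity leaves the total head unchanged.

```python
G = 9.81  # standard gravitational acceleration, m/s^2

def total_head(z, p, u, rho):
    """Total head H = z + p/(rho*g) + u^2/(2*g), constant along a
    streamline of a steady incompressible inviscid flow."""
    return z + p / (rho * G) + u**2 / (2.0 * G)

# Water in a horizontal pipe whose narrowing raises the speed from 1 to 3 m/s.
rho = 1000.0
p1, u1 = 200_000.0, 1.0
u2 = 3.0
p2 = p1 + 0.5 * rho * (u1**2 - u2**2)  # Bernoulli: same total head downstream
```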
+{"text":"In the most general steady (compressible) case the mass equation in conservation form is:"}
+{"text":"The right-hand side appears in the energy equation in convective form, which in the steady state reads:"}
+{"text":"so that the internal specific energy now features in the head."}
+{"text":"Since the external field potential is usually small compared to the other terms, it is convenient to group the latter ones in the total enthalpy:"}
+{"text":"and the Bernoulli invariant for an inviscid gas flow is:"}
+{"text":"That is, the energy balance for a steady inviscid flow in an external conservative field states that the sum of the total enthalpy and the external potential is constant along a streamline."}
+{"text":"In the usual case of small potential field, simply:"}
+{"text":"By substituting the pressure gradient with the entropy and enthalpy gradient, according to the first law of thermodynamics in the enthalpy form:"}
+{"text":"in the convective form of the Euler momentum equation, one arrives at:"}
+{"text":"Friedmann deduced this equation for the particular case of a perfect gas and published it in 1922. However, this equation is general for an inviscid nonconductive fluid and no equation of state is implicit in it."}
+{"text":"On the other hand, by substituting the enthalpy form of the first law of thermodynamics in the rotational form of Euler momentum equation, one obtains:"}
+{"text":"and by defining the specific total enthalpy:"}
+{"text":"one arrives at the Crocco\u2013Vazsonyi form (Crocco, 1937) of the Euler momentum equation:"}
+{"text":"In the steady case the two variables entropy and total enthalpy are particularly useful since Euler equations can be recast into the Crocco's form:"}
+{"text":"Finally if the flow is also isothermal:"}
+{"text":"by defining the specific total Gibbs free energy:"}
+{"text":"the Crocco's form can be reduced to:"}
+{"text":"From these relationships one deduces that the specific total free energy is uniform in a steady, irrotational, isothermal, isentropic, inviscid flow."}
+{"text":"Shock propagation is studied \u2013 among many other fields \u2013 in aerodynamics and rocket propulsion, where sufficiently fast flows occur."}
+{"text":"To properly compute the continuum quantities in discontinuous zones (for example shock waves or boundary layers) from the \"local\" forms of the Euler equations (all the above forms are local forms, since the variables being described are typical of one point in the space considered, i.e. they are \"local variables\") through finite difference methods, too many space points and time steps would generally be necessary for the memory of computers now and in the near future. In these cases it is mandatory to avoid the local forms of the conservation equations, passing to some weak forms, like the finite volume one."}
+{"text":"Starting from the simplest case, one considers a steady free conservation equation in conservation form in the space domain:"}
+{"text":"where in general F is the flux matrix. By integrating this local equation over a fixed volume Vm, it becomes:"}
+{"text":"Then, using the divergence theorem, we can transform this integral into a boundary integral of the flux:"}
+{"text":"This \"global form\" simply states that there is no net flux of a conserved quantity passing through a region in the steady case without a source. In 1D the volume reduces to an interval, its boundary being its extrema, and the divergence theorem reduces to the fundamental theorem of calculus:"}
+{"text":"that is the simple finite difference equation, known as the \"jump relation\":"}
+{"text":"Or, if one performs an indefinite integral:"}
+{"text":"On the other hand, a transient conservation equation:"}
+{"text":"For one-dimensional Euler equations the conservation variables and the flux are the vectors:"}
+{"text":"In the one-dimensional case the corresponding jump relations, called the Rankine\u2013Hugoniot equations, are:"}
+{"text":"In the steady one-dimensional case they become simply:"}
+{"text":"Thanks to the mass difference equation, the energy difference equation can be simplified without any restriction:"}
+{"text":"where formula_174 is the specific total enthalpy."}
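As a sketch, one can verify numerically that the shocked state satisfies all three jump conditions, using the standard normal-shock relations for an ideal gas (the upstream values below are illustrative):

```python
import math

GAMMA = 1.4  # diatomic ideal gas

def normal_shock(M1, rho1, p1):
    """Downstream state across a normal shock (standard ideal-gas
    normal-shock relations), given the upstream Mach number and state."""
    p2 = p1 * (1.0 + 2.0 * GAMMA / (GAMMA + 1.0) * (M1**2 - 1.0))
    rho2 = rho1 * (GAMMA + 1.0) * M1**2 / ((GAMMA - 1.0) * M1**2 + 2.0)
    u1 = M1 * math.sqrt(GAMMA * p1 / rho1)
    u2 = u1 * rho1 / rho2                  # mass conservation
    return rho2, u2, p2, u1

def fluxes(rho, u, p):
    """Mass flux, momentum flux, and total specific enthalpy h + u^2/2."""
    h = GAMMA / (GAMMA - 1.0) * p / rho    # ideal-gas specific enthalpy
    return rho * u, rho * u**2 + p, h + 0.5 * u**2

rho1, p1 = 1.0, 101_325.0
rho2, u2, p2, u1 = normal_shock(2.0, rho1, p1)
f1, f2 = fluxes(rho1, u1, p1), fluxes(rho2, u2, p2)
# f1 and f2 agree component by component: the jump relations hold.
```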
+{"text":"These are usually expressed in the convective variables:"}
+{"text":"The energy equation is an integral form of the Bernoulli equation in the compressible case."}
+{"text":"The former mass and momentum equations by substitution lead to the Rayleigh equation:"}
+{"text":"Since the second term is a constant, the Rayleigh equation always describes a simple line in the pressure\u2013volume plane independent of any equation of state, i.e. the Rayleigh line. By substitution in the Rankine\u2013Hugoniot equations, they can also be made explicit as:"}
+{"text":"One can also obtain the kinetic equation and the Hugoniot equation. The analytical passages are not shown here for brevity."}
+{"text":"The Hugoniot equation, coupled with the fundamental equation of state of the material:"}
+{"text":"describes in general, in the pressure\u2013volume plane, a curve passing through the conditions (v0, p0), i.e. the Hugoniot curve, whose shape strongly depends on the type of material considered."}
+{"text":"It is also customary to define a \"Hugoniot function\":"}
+{"text":"which allows one to quantify deviations from the Hugoniot equation, similarly to the previous definition of the \"hydraulic head\", useful for the deviations from the Bernoulli equation."}
+{"text":"On the other hand, by integrating a generic conservation equation:"}
+{"text":"on a fixed volume Vm, and then using the divergence theorem, it becomes:"}
+{"text":"By integrating this equation also over a time interval:"}
+{"text":"Now by defining the node conserved quantity:"}
+{"text":"In particular, for Euler equations, once the conserved quantities have been determined, the convective variables are deduced by back substitution:"}
+{"text":"Then the explicit finite volume expressions of the original convective variables are:"}
+{"text":"\\begin{align} \\rho_{m,n+1} &= \\rho_{m,n} - \\frac{1}{V_m}\\int_{t_n}^{t_{n+1}}\\oint_{\\partial V_m}\\rho\\mathbf{u} \\cdot \\hat{n}\\, ds\\, dt \\\\[1.2ex] \\mathbf u_{m,n+1} &= \\mathbf u_{m,n} - \\frac{1}{\\rho_{m,n} V_m}\\int_{t_n}^{t_{n+1}}\\oint_{\\partial V_m} (\\rho\\mathbf{u} \\otimes \\mathbf{u} - p\\mathbf{I}) \\cdot \\hat{n}\\, ds\\, dt \\\\[1.2ex] \\mathbf e_{m,n+1} &= \\mathbf e_{m,n} - \\frac{1}{2}\\left(u^2_{m,n+1} - u^2_{m,n}\\right) - \\frac{1}{\\rho_{m,n} V_m}\\int_{t_n}^{t_{n+1}}\\oint_{\\partial V_m} \\left(\\rho e + \\frac{1}{2}\\rho u^2 + p\\right)\\mathbf{u} \\cdot \\hat{n}\\, ds\\, dt \\end{align}"}
+{"text":"It has been shown that the Euler equations are not a complete set of equations, but they require some additional constraints to admit a unique solution: these are the equations of state of the material considered. To be consistent with thermodynamics these equations of state should satisfy the two laws of thermodynamics. On the other hand, by definition non-equilibrium systems are described by laws lying outside these laws. In the following we list some very simple equations of state and the corresponding influence on the Euler equations."}
+{"text":"For an ideal polytropic gas the fundamental equation of state is:"}
+{"text":"where formula_53 is the specific energy, formula_69 is the specific volume, formula_71 is the specific entropy, formula_193 is the molecular mass, formula_194 here is considered a constant (polytropic process), and can be shown to correspond to the heat capacity ratio. This equation can be shown to be consistent with the usual equations of state employed by thermodynamics."}
+{"text":"From this equation one can derive the equation for pressure by its thermodynamic definition:"}
+{"text":"By inverting it one arrives at the mechanical equation of state:"}
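In code, the resulting mechanical equation of state is simply p = (γ − 1)ρe (a minimal sketch; γ = 1.4 and the air-like state below are illustrative values):

```python
def pressure(rho, e, gamma=1.4):
    """Mechanical equation of state of an ideal polytropic gas:
    p = (gamma - 1) * rho * e, with e the specific internal energy."""
    return (gamma - 1.0) * rho * e

# Air-like state: rho = 1.225 kg/m^3, e chosen so that p is about 1 atm.
p = pressure(rho=1.225, e=206_785.0)
```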
+{"text":"Then for an ideal gas the compressible Euler equations can be simply expressed in the \"mechanical\" or \"primitive variables\" specific volume, flow velocity and pressure, by taking the set of the equations for a thermodynamic system and modifying the energy equation into a pressure equation through this mechanical equation of state. Finally, in convective form they read:"}
+{"text":"and in one-dimensional quasilinear form they read:"}
+{"text":"In the case of steady flow, it is convenient to choose the Frenet\u2013Serret frame along a streamline as the coordinate system for describing the steady momentum Euler equation:"}
+{"text":"where formula_1, formula_15 and formula_40 denote the flow velocity, the pressure and the density, respectively."}
+{"text":"Let formula_204 be a Frenet\u2013Serret orthonormal basis which consists of a tangential unit vector, a normal unit vector, and a binormal unit vector to the streamline, respectively. Since a streamline is a curve that is tangent to the velocity vector of the flow, the left-hand side of the above equation, the convective derivative of velocity, can be described as follows:"}
+{"text":"where formula_206 is the radius of curvature of the streamline."}
+{"text":"Therefore, the momentum part of the Euler equations for a steady flow is found to have a simple form:"}
+{"text":"For barotropic flow formula_208, Bernoulli's equation is derived from the first equation:"}
+{"text":"The second equation expresses that, in the case the streamline is curved, there should exist a pressure gradient normal to the streamline because the centripetal acceleration of the fluid parcel is only generated by the normal pressure gradient."}
+{"text":"The third equation expresses that pressure is constant along the binormal axis."}
+{"text":"Let formula_210 be the distance from the center of curvature of the streamline, then the second equation is written as follows:"}
+{"text":"In a steady flow of an inviscid fluid without external forces, the center of curvature of the streamline lies in the direction of decreasing radial pressure."}
+{"text":"Although this relationship between the pressure field and flow curvature is very useful, it doesn't have a name in the English-language scientific literature. Japanese fluid-dynamicists call the relationship the \"Streamline curvature theorem\"."}
+{"text":"This \"theorem\" explains clearly why there are such low pressures in the centre of vortices, which consist of concentric circles of streamlines."}
+{"text":"This also is a way to intuitively explain why airfoils generate lift forces."}
+{"text":"All potential flow solutions are also solutions of the Euler equations, and in particular the incompressible Euler equations when the potential is harmonic."}
+{"text":"Solutions to the Euler equations with vorticity are:"}
+{"text":"In fluid dynamics the Milne-Thomson circle theorem or the circle theorem is a statement giving a new stream function for a fluid flow when a cylinder is placed into that flow. It was named after the English mathematician L. M. Milne-Thomson."}
+{"text":"Let formula_1 be the complex potential for a fluid flow, where all singularities of formula_1 lie in formula_3. If a circle formula_4 is placed into that flow, the complex potential for the new flow is given by"}
+{"text":"with the same singularities as formula_1 in formula_3, and formula_4 is a streamline. On the circle formula_4, formula_10, therefore"}
+{"text":"Consider a uniform irrotational flow formula_12 with velocity formula_13 flowing in the positive formula_14 direction, and place an infinitely long cylinder of radius formula_15 in the flow with the center of the cylinder at the origin. Then formula_16, hence using the circle theorem,"}
+{"text":"represents the complex potential of uniform flow over a cylinder."}
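A quick numerical check of the circle theorem (with illustrative values U = 1 and a = 2): applying the theorem to the uniform-flow potential w(z) = Uz gives W(z) = U(z + a²/z), and the stream function Im W indeed vanishes on |z| = a, so the circle is a streamline.

```python
import numpy as np

U, a = 1.0, 2.0                      # free-stream speed and cylinder radius

def w_uniform(z):
    """Complex potential of a uniform flow in the +x direction."""
    return U * z

def w_circle(z):
    """Milne-Thomson circle theorem: W(z) = w(z) + conj(w(a^2/conj(z)))."""
    return w_uniform(z) + np.conjugate(w_uniform(a**2 / np.conjugate(z)))

theta = np.linspace(0.0, 2.0 * np.pi, 360)
z_on_circle = a * np.exp(1j * theta)
psi = np.imag(w_circle(z_on_circle))  # stream function on the circle
# psi is identically zero: |z| = a is a streamline of the new flow.
```

Far from the cylinder the added image term decays like a²/z, so the uniform flow is recovered at infinity, as the theorem requires.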
+{"text":"The Prony equation (named after Gaspard de Prony) is a historically important equation in hydraulics, used to calculate the head loss due to friction within a given run of pipe. It is an empirical equation developed by Frenchman Gaspard de Prony in the 19th century:"}
+{"text":"where \"hf\" is the head loss due to friction, calculated from: the ratio of the length to diameter of the pipe \"L\/D\", the velocity of the flow \"V\", and two empirical factors \"a\" and \"b\" to account for friction."}
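A direct transcription of the Prony formula (a sketch: the coefficients a and b below are placeholders for illustration, not Prony's historical fit):

```python
def prony_head_loss(L, D, V, a, b):
    """Prony's empirical head loss: h_f = (L/D) * (a*V + b*V**2)."""
    return (L / D) * (a * V + b * V**2)

# 100 m of 0.1 m diameter pipe at 2 m/s, with illustrative coefficients.
hf = prony_head_loss(L=100.0, D=0.1, V=2.0, a=1e-5, b=2e-4)
```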
+{"text":"This equation has been supplanted in modern hydraulics by the Darcy\u2013Weisbach equation, which used it as a starting point."}
+{"text":"In fluid dynamics, the Darcy friction factor formulae are equations that allow the calculation of the Darcy friction factor, a dimensionless quantity used in the Darcy\u2013Weisbach equation, for the description of friction losses in pipe flow as well as open-channel flow."}
+{"text":"The Darcy friction factor is also known as the \"Darcy\u2013Weisbach friction factor\", \"resistance coefficient\" or simply \"friction factor\"; by definition it is four times larger than the Fanning friction factor."}
+{"text":"In this article, the following conventions and definitions are to be understood:"}
+{"text":"Which friction factor formula may be applicable depends upon the type of flow that exists:"}
+{"text":"Transition (neither fully laminar nor fully turbulent) flow occurs in the range of Reynolds numbers between 2300 and 4000. The value of the Darcy friction factor is subject to large uncertainties in this flow regime."}
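The regime boundaries just quoted can be captured in a small helper (thresholds of 2300 and 4000 taken from the text; the placement of the boundary values themselves is a convention):

```python
def flow_regime(Re):
    """Classify pipe flow by Reynolds number, using the ranges above:
    laminar below 2300, transition between 2300 and 4000, turbulent above."""
    if Re < 2300.0:
        return "laminar"
    if Re <= 4000.0:
        return "transition"
    return "turbulent"
```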
+{"text":"The Blasius correlation is the simplest equation for computing the Darcy friction factor. Because the Blasius correlation has no term for pipe roughness, it is valid only for smooth pipes. However, the Blasius correlation is sometimes used in rough pipes because of its simplicity. The Blasius correlation is valid up to the Reynolds number 100000."}
+{"text":"The Darcy friction factor for fully turbulent flow (Reynolds number greater than 4000) in rough conduits can be modeled by the Colebrook\u2013White equation."}
+{"text":"The last formula in the \"Colebrook equation\" section of this article is for free surface flow. The approximations elsewhere in this article are not applicable for this type of flow."}
+{"text":"Before choosing a formula it is worth knowing that in the paper on the Moody chart, Moody stated the accuracy is about \u00b15% for smooth pipes and \u00b110% for rough pipes. If more than one formula is applicable in the flow regime under consideration, the choice of formula may be influenced by one or more of the following:"}
+{"text":"The phenomenological Colebrook\u2013White equation (or Colebrook equation) expresses the Darcy friction factor \"f\" as a function of Reynolds number Re and pipe relative roughness \u03b5 \/ \"D\"h, fitting the data of experimental studies of turbulent flow in smooth and rough pipes."}
+{"text":"The equation can be used to (iteratively) solve for the Darcy\u2013Weisbach friction factor \"f\"."}
+{"text":"For a conduit flowing completely full of fluid at Reynolds numbers greater than 4000, it is expressed as:"}
+{"text":"Note: Some sources use a constant of 3.71 in the denominator for the roughness term in the first equation above."}
+{"text":"The Colebrook equation is usually solved numerically due to its implicit nature. Recently, the Lambert W function has been employed to obtain explicit reformulation of the Colebrook equation."}
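A minimal fixed-point solver for the implicit Colebrook–White equation, iterating on x = 1/√f (the constants 3.7 and 2.51 are those quoted in this section; the initial guess and tolerances are illustrative choices):

```python
import math

def colebrook(Re, rel_roughness, tol=1e-12, max_iter=100):
    """Solve 1/sqrt(f) = -2 log10( rel_roughness/3.7 + 2.51/(Re sqrt(f)) )
    by fixed-point iteration on x = 1/sqrt(f)."""
    x = 8.0                               # initial guess (f about 0.016)
    for _ in range(max_iter):
        x_new = -2.0 * math.log10(rel_roughness / 3.7 + 2.51 * x / Re)
        if abs(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return 1.0 / x**2

f = colebrook(Re=1e5, rel_roughness=1e-4)  # a mildly rough pipe
```

The iteration map is a contraction in the turbulent range (its derivative is small there), so convergence to machine precision takes only a handful of iterations.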
+{"text":"Additional, mathematically equivalent forms of the Colebrook equation are:"}
+{"text":"The additional equivalent forms above assume that the constants 3.7 and 2.51 in the formula at the top of this section are exact. The constants are probably values which were rounded by Colebrook during his curve fitting; but they are effectively treated as exact when comparing (to several decimal places) results from explicit formulae (such as those found elsewhere in this article) to the friction factor computed via Colebrook's implicit equation."}
+{"text":"Equations similar to the additional forms above (with the constants rounded to fewer decimal places, or perhaps shifted slightly to minimize overall rounding errors) may be found in various references. It may be helpful to note that they are essentially the same equation."}
+{"text":"Another form of the Colebrook\u2013White equation exists for free surfaces. Such a condition may exist in a pipe that is flowing partially full of fluid. For free surface flow:"}
+{"text":"The above equation is valid only for turbulent flow. Another approach for estimating \"f\" in free surface flows, which is valid under all the flow regimes (laminar, transition and turbulent) is the following:"}
+{"text":"where \"Reh\" is Reynolds number where \"h\" is the characteristic hydraulic length (hydraulic radius for 1D flows or water depth for 2D flows) and \"Rh\" is the hydraulic radius (for 1D flows) or the water depth (for 2D flows). The Lambert W function can be calculated as follows:"}
+{"text":"The \"Haaland equation\" was proposed in 1983 by Professor S.E. Haaland of the Norwegian Institute of Technology. It is used to solve directly for the Darcy\u2013Weisbach friction factor \"f\" for a full-flowing circular pipe. It is an approximation of the implicit Colebrook\u2013White equation, but the discrepancy from experimental data is well within the accuracy of the data."}
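A sketch of the Haaland formula with its commonly quoted constants (1.8, 3.7, 1.11, 6.9); the function name is illustrative:

```python
import math

def haaland(Re, eD):
    # Explicit Haaland (1983) approximation to the Colebrook equation.
    # Re: Reynolds number, eD: relative roughness eps/D.
    inv_sqrt_f = -1.8 * math.log10((eD / 3.7)**1.11 + 6.9 / Re)
    return 1.0 / inv_sqrt_f**2
```

Being explicit, it needs no iteration, at the cost of a small (roughly percent-level) discrepancy from the implicit equation.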
+{"text":"The Swamee\u2013Jain equation is used to solve directly for the Darcy\u2013Weisbach friction factor \"f\" for a full-flowing circular pipe. It is an approximation of the implicit Colebrook\u2013White equation."}
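A sketch of the Swamee–Jain formula with its commonly quoted constants (0.25, 3.7, 5.74, exponent 0.9); the function name is illustrative:

```python
import math

def swamee_jain(Re, eD):
    # Explicit Swamee-Jain approximation to the Colebrook equation.
    return 0.25 / math.log10(eD / 3.7 + 5.74 / Re**0.9)**2
```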
+{"text":"Serghides's solution is used to solve directly for the Darcy\u2013Weisbach friction factor \"f\" for a full-flowing circular pipe. It is an approximation of the implicit Colebrook\u2013White equation. It was derived using Steffensen's method."}
+{"text":"The solution involves calculating three intermediate values and then substituting those values into a final equation."}
+{"text":"The equation was found to match the Colebrook\u2013White equation within 0.0023% for a test set with a 70-point matrix consisting of ten relative roughness values (in the range 0.00004 to 0.05) by seven Reynolds numbers (2500 to 10^8)."}
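The three-intermediate-value scheme can be sketched as follows, with the constants (3.7, 2.51, 12) as commonly quoted rather than verified against the original paper; the function name is illustrative:

```python
import math

def serghides(Re, eD):
    # Serghides's explicit approximation: three intermediate values
    # A, B, C, then a Steffensen-style extrapolation for 1/sqrt(f).
    A = -2.0 * math.log10(eD / 3.7 + 12.0 / Re)
    B = -2.0 * math.log10(eD / 3.7 + 2.51 * A / Re)
    C = -2.0 * math.log10(eD / 3.7 + 2.51 * B / Re)
    return (A - (B - A)**2 / (C - 2.0 * B + A))**-2
```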
+{"text":"The Goudar equation is among the most accurate approximations for solving directly for the Darcy\u2013Weisbach friction factor \"f\" for a full-flowing circular pipe. It is an approximation of the implicit Colebrook\u2013White equation. The equation has the following form:"}
+{"text":"Brki\u0107 shows one approximation of the Colebrook equation based on the Lambert W-function"}
+{"text":"The equation was found to match the Colebrook\u2013White equation within 3.15%."}
+{"text":"Brki\u0107 and Praks show one approximation of the Colebrook equation based on the Wright formula_40-function, a cognate of the Lambert W-function"}
+{"text":"The equation was found to match the Colebrook\u2013White equation within 0.0497%."}
+{"text":"Praks and Brki\u0107 show one approximation of the Colebrook equation based on the Wright formula_40-function, a cognate of the Lambert W-function"}
+{"text":"The equation was found to match the Colebrook\u2013White equation within 0.0012%."}
+{"text":"Since Serghides's solution was found to be one of the most accurate approximations of the implicit Colebrook\u2013White equation, Niazkar modified Serghides's solution to solve directly for the Darcy\u2013Weisbach friction factor \"f\" for a full-flowing circular pipe."}
+{"text":"Niazkar's solution is shown in the following:"}
+{"text":"Niazkar's solution was found to be the most accurate correlation in a comparative analysis, reported in the literature, of 42 different explicit equations for estimating the Colebrook friction factor."}
+{"text":"Early approximations for smooth pipes by Paul Richard Heinrich Blasius in terms of the Moody friction factor are given in one article of 1913:"}
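The Blasius correlation in Darcy (Moody) form is commonly quoted as f = 0.3164 Re^(-1/4) for smooth pipes at roughly 4000 < Re < 100000; the constant and validity range are stated here as assumptions, and the function name is illustrative:

```python
def blasius(Re):
    # Blasius (1913) smooth-pipe correlation, Darcy (Moody) form.
    return 0.3164 * Re**-0.25
```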
+{"text":"Johann Nikuradse in 1932 proposed that this corresponds to a power law correlation for the fluid velocity profile."}
+{"text":"Mishra and Gupta in 1979 proposed a correction for curved or helically coiled tubes, taking into account the equivalent curve radius, Rc:"}
+{"text":"The following table lists historical approximations to the Colebrook\u2013White relation for pressure-driven flow. The Churchill equation (1977) is the only one that can be evaluated for very slow flow (Reynolds number < 1), but the Cheng (2008) and Bellos et al. (2018) equations also return an approximately correct value for the friction factor in the laminar flow region (Reynolds number < 2300). All of the others are for transitional and turbulent flow only."}
+{"text":"A continuity equation or transport equation is an equation that describes the transport of some quantity. It is particularly simple and powerful when applied to a conserved quantity, but it can be generalized to apply to any extensive quantity. Since mass, energy, momentum, electric charge and other natural\u00a0quantities are conserved under their respective appropriate conditions, a variety of physical phenomena may be described using continuity equations."}
+{"text":"Continuity equations more generally can include \"source\" and \"sink\" terms, which allow them to describe quantities that are often but not always conserved, such as the density of a molecular species which can be created or destroyed by chemical reactions. In an everyday example, there is a continuity equation for the number of people alive; it has a \"source term\" to account for people being born, and a \"sink term\" to account for people dying."}
+{"text":"Any continuity equation can be expressed in an \"integral form\" (in terms of a flux integral), which applies to any finite region, or in a \"differential form\" (in terms of the divergence operator) which applies at a point."}
+{"text":"Continuity equations underlie more specific transport equations such as the convection\u2013diffusion equation, Boltzmann transport equation, and Navier\u2013Stokes equations."}
+{"text":"Flows governed by continuity equations can be visualized using a Sankey diagram."}
+{"text":"A continuity equation is useful when a \"flux\" can be defined. To define flux, first there must be a quantity which can flow or move, such as mass, energy, electric charge, momentum, number of molecules, etc. Let be the volume density of this quantity, that is, the amount of per unit volume."}
+{"text":"The way that this quantity is flowing is described by its flux. The flux of is a vector field, which we denote as j. Here are some examples and properties of flux:"}
+{"text":"The integral form of the continuity equation states that:"}
+{"text":"Mathematically, the integral form of the continuity equation expressing the rate of increase of within a volume is:"}
+{"text":"In a simple example, could be a building, and could be the number of people in the building. The surface would consist of the walls, doors, roof, and foundation of the building. Then the continuity equation states that the number of people in the building increases when people enter the building (an inward flux through the surface), decreases when people exit the building (an outward flux through the surface), increases when someone in the building gives birth (a source, ), and decreases when someone in the building dies (a sink, )."}
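The building example amounts to simple bookkeeping, which can be sketched as below (names and numbers are illustrative):

```python
def occupancy_change(entries, exits, births, deaths):
    # Discrete integral-form continuity for "people in a building":
    # rate of increase = inward flux - outward flux + source - sink.
    return entries - exits + births - deaths

people = 100
people += occupancy_change(entries=12, exits=7, births=1, deaths=0)
# people is now 106
```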
+{"text":"By the divergence theorem, a general continuity equation can also be written in a \"differential form\":"}
+{"text":"This general equation may be used to derive any continuity equation, ranging from as simple as the volume continuity equation to as complicated as the Navier\u2013Stokes equations. This equation also generalizes the advection equation. Other equations in physics, such as Gauss's law of the electric field and Gauss's law for gravity, have a similar mathematical form to the continuity equation, but are not usually referred to by the term \"continuity equation\", because in those cases does not represent the flow of a real physical quantity."}
+{"text":"In the case that is a conserved quantity that cannot be created or destroyed (such as energy), and the equations become:"}
+{"text":"In electromagnetic theory, the continuity equation is an empirical law expressing (local) charge conservation. Mathematically it is an automatic consequence of Maxwell's equations, although charge conservation is more fundamental than Maxwell's equations. It states that the divergence of the current density (in amperes per square metre) is equal to the negative rate of change of the charge density (in coulombs per cubic metre),"}
+{"text":"Current is the movement of charge. The continuity equation says that if charge is moving out of a differential volume (i.e. divergence of current density is positive) then the amount of charge within that volume is going to decrease, so the rate of change of charge density is negative. Therefore, the continuity equation amounts to a conservation of charge."}
+{"text":"If magnetic monopoles exist, there would be a continuity equation for monopole currents as well, see the monopole article for background and the duality between electric and magnetic currents."}
+{"text":"In fluid dynamics, the continuity equation states that the rate at which mass enters a system is equal to the rate at which mass leaves the system plus the accumulation of mass within the system."}
+{"text":"The differential form of the continuity equation is:"}
+{"text":"The time derivative can be understood as the accumulation (or loss) of mass in the system, while the divergence term represents the difference in flow in versus flow out. In this context, this equation is also one of the Euler equations (fluid dynamics). The Navier\u2013Stokes equations form a vector continuity equation describing the conservation of linear momentum."}
+{"text":"If the fluid is incompressible (volumetric strain rate is zero), the mass continuity equation simplifies to a volume continuity equation:"}
+{"text":"which means that the divergence of the velocity field is zero everywhere. Physically, this is equivalent to saying that the local volume dilation rate is zero, hence a flow of water through a converging pipe will adjust solely by increasing its velocity as water is largely incompressible."}
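A minimal numerical sketch of mass continuity in one dimension, using a first-order upwind discretization on a periodic domain (scheme and parameters are illustrative assumptions). With no sources or sinks, total mass is conserved to floating-point precision, because the flux leaving each cell enters its neighbour:

```python
import numpy as np

n, u, dx, dt = 100, 1.0, 0.01, 0.005  # u > 0, CFL = u*dt/dx = 0.5
rho = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(n) * dx)

mass0 = rho.sum() * dx
for _ in range(200):
    flux = rho * u                                    # mass flux j = rho*u
    rho = rho - dt / dx * (flux - np.roll(flux, 1))   # upwind divergence
print(abs(rho.sum() * dx - mass0))  # ~ 0: mass is conserved
```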
+{"text":"In computer vision, optical flow is the pattern of apparent motion of objects in a visual scene. Under the assumption that the brightness of the moving object does not change between two image frames, one can derive the optical flow equation as:"}
+{"text":"Conservation of energy says that energy cannot be created or destroyed. (See below for the nuances associated with general relativity.) Therefore, there is a continuity equation for energy flow:"}
+{"text":"An important practical example is the flow of heat. When heat flows inside a solid, the continuity equation can be combined with Fourier's law (heat flux is proportional to temperature gradient) to arrive at the heat equation. The equation of heat flow may also have source terms: Although \"energy\" cannot be created or destroyed, \"heat\" can be created from other types of energy, for example via friction or joule heating."}
+{"text":"If there is a quantity that moves continuously according to a stochastic (random) process, like the location of a single dissolved molecule with Brownian motion, then there is a continuity equation for its probability distribution. The flux in this case is the probability per unit area per unit time that the particle passes through a surface. According to the continuity equation, the negative divergence of this flux equals the rate of change of the probability density. The continuity equation reflects the fact that the molecule is always somewhere\u2014the integral of its probability distribution is always equal to 1\u2014and that it moves by a continuous motion (no teleporting)."}
+{"text":"Quantum mechanics is another domain where there is a continuity equation related to \"conservation of probability\". The terms in the equation require the following definitions, and are slightly less obvious than the other examples above, so they are outlined here:"}
+{"text":"With these definitions the continuity equation reads:"}
+{"text":"Either form may be quoted. Intuitively, the above quantities indicate that this represents the flow of probability. The \"chance\" of finding the particle at some position and time flows like a fluid; hence the term \"probability current\", a vector field. The particle itself does \"not\" flow deterministically in this vector field."}
+{"text":"The total current flow in the semiconductor consists of drift current and diffusion current of both the electrons in the conduction band and holes in the valence band."}
+{"text":"This section presents a derivation of the equation above for electrons. A similar derivation can be found for the equation for holes."}
+{"text":"Consider the fact that the number of electrons is conserved across a volume of semiconductor material with cross-sectional area, \"A\", and length, \"dx\", along the \"x\"-axis. More precisely, one can say:"}
+{"text":"Total electron current is the sum of drift current and diffusion current:"}
+{"text":"Applying the product rule results in the final expression:"}
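The drift-plus-diffusion sum for electrons can be sketched as below; the helper function and the silicon-like parameter values are illustrative assumptions, not from the text:

```python
q = 1.602e-19  # elementary charge, C

def electron_current_density(n, mu_n, E, D_n, dn_dx):
    # Total electron current density = drift term + diffusion term.
    drift = q * n * mu_n * E          # q * n * mobility * field
    diffusion = q * D_n * dn_dx       # q * diffusivity * gradient
    return drift + diffusion

# Order-of-magnitude silicon-like numbers: n = 1e22 m^-3,
# mu_n = 0.135 m^2/(V s), E = 1e3 V/m, D_n = 3.5e-3 m^2/s, dn/dx = 1e24 m^-4
J = electron_current_density(1e22, 0.135, 1e3, 3.5e-3, 1e24)
```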
+{"text":"The key to solving these equations in real devices is whenever possible to select regions in which most of the mechanisms are negligible so that the equations reduce to a much simpler form."}
+{"text":"The notation and tools of special relativity, especially 4-vectors and 4-gradients, offer a convenient way to write any continuity equation."}
+{"text":"The density of a quantity and its current can be combined into a 4-vector called a 4-current:"}
+{"text":"where is the speed of light. The 4-divergence of this current is:"}
+{"text":"where is the 4-gradient and is an index labelling the spacetime dimension. Then the continuity equation is:"}
+{"text":"in the usual case where there are no sources or sinks, that is, for perfectly conserved quantities like energy or charge. This continuity equation is manifestly (\"obviously\") Lorentz invariant."}
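As a concrete illustration (the symbols ρ for density and j for flux are assumptions, since the formulas are not reproduced in this excerpt), the 4-current and its vanishing 4-divergence read:

```latex
J^{\mu} = (c\rho,\ \mathbf{j}), \qquad
\partial_{\mu} J^{\mu} = \frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j} = 0 .
```

Lorentz invariance is manifest because the left-hand side is the contraction of a 4-vector with the 4-gradient, a scalar.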
+{"text":"Examples of continuity equations often written in this form include electric charge conservation"}
+{"text":"where is the electric 4-current; and energy\u2013momentum conservation"}
+{"text":"In general relativity, where spacetime is curved, the continuity equation (in differential form) for energy, charge, or other conserved quantities involves the \"covariant\" divergence instead of the ordinary divergence."}
+{"text":"For example, the stress\u2013energy tensor is a second-order tensor field containing energy\u2013momentum densities, energy\u2013momentum fluxes, and shear stresses, of a mass-energy distribution. The differential form of energy\u2013momentum conservation in general relativity states that the \"covariant\" divergence of the stress-energy tensor is zero:"}
+{"text":"This is an important constraint on the form the Einstein field equations take in general relativity."}
+{"text":"However, the \"ordinary\" divergence of the stress\u2013energy tensor does \"not\" necessarily vanish:"}
+{"text":"The right-hand side strictly vanishes for a flat geometry only."}
+{"text":"As a consequence, the \"integral\" form of the continuity equation is difficult to define and not necessarily valid for a region within which spacetime is significantly curved (e.g. around a black hole, or across the whole universe)."}
+{"text":"Quarks and gluons have \"color charge\", which is always conserved like electric charge, and there is a continuity equation for such color charge currents (explicit expressions for currents are given at gluon field strength tensor)."}
+{"text":"There are many other quantities in particle physics which are often or always conserved: baryon number (proportional to the number of quarks minus the number of antiquarks), electron number, mu number, tau number, isospin, and others. Each of these has a corresponding continuity equation, possibly including source \/ sink terms."}
+{"text":"One reason that conservation equations frequently occur in physics is Noether's theorem. This states that whenever the laws of physics have a continuous symmetry, there is a continuity equation for some conserved physical quantity. The three most famous examples are:"}
+{"text":"See Noether's theorem for proofs and details."}
+{"text":"In differential calculus, the Reynolds transport theorem (also known as the Leibniz\u2013Reynolds transport theorem), or in short Reynolds theorem, is a three-dimensional generalization of the Leibniz integral rule which is also known as differentiation under the integral sign."}
+{"text":"The theorem is named after Osborne Reynolds (1842\u20131912). It is used to recast derivatives of integrated quantities and is useful in formulating the basic equations of continuum mechanics."}
+{"text":"Consider integrating over the time-dependent region that has boundary , then taking the derivative with respect to time:"}
+{"text":"If we wish to move the derivative within the integral, there are two issues: the time dependence of , and the introduction of and removal of space from due to its dynamic boundary. Reynolds transport theorem provides the necessary framework."}
+{"text":"Reynolds transport theorem can be expressed as follows:"}
+{"text":"in which is the outward-pointing unit normal vector, is a point in the region and is the variable of integration, and are volume and surface elements at , and is the velocity of the area element (\"not\" the flow velocity). The function may be tensor-, vector- or scalar-valued. Note that the integral on the left hand side is a function solely of time, and so the total derivative has been used."}
+{"text":"In continuum mechanics, this theorem is often used for material elements. These are parcels of fluids or solids which no material enters or leaves. If is a material element then there is a velocity function , and the boundary elements obey"}
+{"text":"This condition may be substituted to obtain:"}
+{"text":"If we take to be constant with respect to time, then and the identity reduces to"}
+{"text":"as expected. (This simplification is not possible if the flow velocity is incorrectly used in place of the velocity of an area element.)"}
+{"text":"The theorem is the higher-dimensional extension of differentiation under the integral sign and reduces to that expression in some cases. Suppose is independent of and , and that is a unit square in the -plane and has limits and . Then Reynolds transport theorem reduces to"}
+{"text":"which, up to swapping and , is the standard expression for differentiation under the integral sign."}
+{"text":"The Fanning friction factor, named after John Thomas Fanning, is a dimensionless number used as a local parameter in continuum mechanics calculations. It is defined as the ratio between the local shear stress and the local flow kinetic energy density:"}
+{"text":"In particular the shear stress at the wall can, in turn, be related to the pressure loss by multiplying the wall shear stress by the wall area ( formula_12 for a pipe with circular cross section) and dividing by the cross-sectional flow area ( formula_13 for a pipe with circular cross section). Thus formula_14"}
+{"text":"This friction factor is one-fourth of the Darcy friction factor, so attention must be paid to note which one of these is meant in the \"friction factor\" chart or equation consulted. Of the two, the Fanning friction factor is the more commonly used by chemical engineers and those following the British convention."}
+{"text":"The formulas below may be used to obtain the Fanning friction factor for common applications."}
+{"text":"The Darcy friction factor can also be expressed as"}
+{"text":"For laminar flow in a round tube."}
+{"text":"From the chart, it is evident that the friction factor is never zero, even for smooth pipes because of some roughness at the microscopic level."}
+{"text":"The friction factor for laminar flow of Newtonian fluids in round tubes is often taken to be:"}
+{"text":"where Re is the Reynolds number of the flow."}
+{"text":"For a square channel the value used is:"}
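The two laminar values referred to above are not reproduced in this excerpt; the commonly quoted Fanning values, stated here as assumptions, are f = 16/Re for a round tube and f = 14.227/Re for a square channel:

```python
def fanning_laminar_round(Re):
    # Laminar Fanning friction factor in a round tube: f = 16/Re,
    # one quarter of the Darcy value 64/Re.
    return 16.0 / Re

def fanning_laminar_square(Re):
    # Square channel: the commonly quoted value f = 14.227/Re.
    return 14.227 / Re
```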
+{"text":"For turbulent flow in a round tube."}
+{"text":"Blasius developed an expression for the friction factor in 1913 for flow in the regime formula_21."}
+{"text":"Koo introduced another explicit formula in 1933 for turbulent flow in the region of formula_23"}
+{"text":"When the pipes have a certain roughness formula_25, this factor must be taken into account when the Fanning friction factor is calculated. The relationship between pipe roughness and the Fanning friction factor was developed by Haaland (1983) under flow conditions of formula_26"}
+{"text":"As the roughness extends into the turbulent core, the Fanning friction factor becomes independent of fluid viscosity at large Reynolds numbers, as illustrated by Nikuradse and Reichert (1943) for flow in the region of formula_30. The equation below has been modified from the original format, which was developed for the Darcy friction factor, by a factor of formula_31"}
+{"text":"For the turbulent flow regime, the relationship between the Fanning friction factor and the Reynolds number is more complex and is governed by the Colebrook equation which is implicit in formula_2:"}
+{"text":"Various explicit approximations of the related Darcy friction factor have been developed for turbulent flow."}
+{"text":"Stuart W. Churchill developed a formula that covers the friction factor for both laminar and turbulent flow. It was originally produced to describe the Moody chart, which plots the Darcy\u2013Weisbach friction factor against Reynolds number. The Darcy\u2013Weisbach friction factor formula_35, also called the Moody friction factor, is 4 times the Fanning friction factor formula_36, and so a factor of formula_37 has been applied to produce the formula given below."}
+{"text":"Due to the geometry of non-circular conduits, the Fanning friction factor can be estimated from the algebraic expressions above by using the hydraulic radius formula_41 when calculating the Reynolds number formula_42"}
+{"text":"The friction head can be related to the pressure loss due to friction by dividing the pressure loss by the product of the acceleration due to gravity and the density of the fluid. Accordingly, the relationship between the friction head and the Fanning friction factor is:"}
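A sketch of that relationship, assuming the standard pressure-loss form Δp = 4 f (L/D)(ρ v²/2) for the Fanning factor, so that dividing by ρg gives h_f = 2 f L v² / (g D); the function name and arguments are illustrative:

```python
g = 9.81  # standard gravity, m/s^2

def friction_head(f_fanning, L, D, v):
    # Friction head h_f = 2 f L v^2 / (g D), i.e. the Fanning pressure
    # loss divided by rho * g (rho cancels).
    return 2.0 * f_fanning * L * v**2 / (g * D)

h = friction_head(0.005, L=100.0, D=0.05, v=2.0)  # metres of fluid
```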
+{"text":"In fluid mechanics and astrophysics, the relativistic Euler equations are a generalization of the Euler equations that account for the effects of general relativity. They have applications in high-energy astrophysics and numerical relativity, where they are commonly used for describing phenomena such as gamma-ray bursts, accretion phenomena, and neutron stars, often with the addition of a magnetic field. \"Note: for consistency with the literature, this article makes use of natural units, namely the speed of light\" formula_1 \"and the Einstein summation convention.\""}
+{"text":"For most fluids observable on Earth, traditional fluid mechanics based on Newtonian mechanics is sufficient. However, as the fluid velocity approaches the speed of light or moves through strong gravitational fields, or the pressure approaches the energy density (formula_2), these equations are no longer valid. Such situations occur frequently in astrophysical applications. For example, gamma-ray bursts often feature speeds only formula_3 less than the speed of light, and neutron stars feature gravitational fields that are more than formula_4 times stronger than the Earth's. Under these extreme circumstances, only a relativistic treatment of fluids will suffice."}
+{"text":"The equations of motion are contained in the continuity equation of the stress\u2013energy tensor formula_5:"}
+{"text":"where formula_7 is the covariant derivative. For a perfect fluid,"}
+{"text":"Here formula_9 is the total mass-energy density (including both rest mass and internal energy density) of the fluid, formula_10 is the fluid pressure, formula_11 is the four-velocity of the fluid, and formula_12 is the metric tensor. To the above equations, a statement of conservation is usually added, usually conservation of baryon number. If formula_13 is the number density of baryons this may be stated"}
+{"text":"These equations reduce to the classical Euler equations if the fluid three-velocity is much less than the speed of light, the pressure is much less than the energy density, and the latter is dominated by the rest mass density. To close this system, an equation of state, such as an ideal gas or a Fermi gas, is also added."}
+{"text":"In the case of flat space, that is formula_15 and using a metric signature of formula_16, the equations of motion are,"}
+{"text":"Where formula_18 is the energy density of the system, with formula_10 being the pressure, and formula_20 being the four-velocity of the system."}
+{"text":"Expanding out the sums and equations, we have, (using formula_21 as the material derivative)"}
+{"text":"Then, picking formula_23 to observe the behavior of the velocity itself, we see that the equations of motion become"}
+{"text":"Note that taking the non-relativistic limit, we have formula_25. This says that the energy of the system is dominated by the rest energy of the fluid in question."}
+{"text":"In this limit, we have formula_26 and formula_27, and can see that we recover the Euler equation of formula_28."}
+{"text":"In order to determine the equations of motion, we take advantage of the following spatial projection tensor condition:"}
+{"text":"We prove this by looking at formula_30 and then multiplying each side by formula_31. Upon doing this, and noting that formula_32, we have formula_33. Relabeling the indices formula_34 as formula_35 shows that the two completely cancel. This cancellation is the expected result of contracting a temporal tensor with a spatial tensor."}
+{"text":"Where we have implicitly defined that formula_37."}
+{"text":"Next, note that formula_40 and formula_41; the second identity follows from the first. Under these simplifications, we find that"}
+{"text":"We have two cancellations, and are thus left with"}
+{"text":"The Starling equation describes the net flow of fluid across a semipermeable membrane. It is named after Ernest Starling. It describes the balance between capillary pressure, interstitial pressure, and osmotic pressure. The classic Starling equation has in recent years been revised. The Starling principle of fluid exchange is key to understanding how plasma fluid (solvent) within the bloodstream (intravascular fluid) moves to the space outside the bloodstream (extravascular space)."}
+{"text":"Transendothelial fluid exchange occurs predominantly in the capillaries, and is a process of plasma ultrafiltration across a semi-permeable membrane. It is now appreciated that the ultrafilter is the glycocalyx of the plasma membrane of the endothelium, whose interpolymer spaces function as a system of small pores, radius circa 5\u00a0nm. Where the endothelial glycocalyx overlies an interendothelial cell cleft, the plasma ultrafiltrate may pass to the interstitial space. Some continuous capillaries may feature fenestrations that provide an additional subglycocalyx pathway for solvent and small solutes. Discontinuous capillaries as found in sinusoidal tissues of bone marrow, liver and spleen have little or no filter function."}
+{"text":"The rate at which fluid is filtered across vascular endothelium (transendothelial filtration) is determined by the sum of two outward forces, capillary pressure (formula_1) and interstitial protein osmotic pressure (formula_2), and two absorptive forces, plasma protein osmotic pressure (formula_3) and interstitial pressure (formula_4). The Starling equation describes these forces in mathematical terms. It is one of the Kedem\u2013Katchalsky equations which bring nonsteady state thermodynamics to the theory of osmotic pressure across membranes that are at least partly permeable to the solute responsible for the osmotic pressure difference. The second Kedem\u2013Katchalsky equation explains the transendothelial transport of solutes, formula_5."}
+{"text":"The classic Starling equation reads as follows:"}
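The equation itself is not reproduced in this excerpt. In the notation defined above (capillary pressure Pc, interstitial pressure Pi, plasma and interstitial protein osmotic pressures πp and πi), the commonly quoted classic form, stated here as a reference rather than taken from this article, is:

```latex
J_v \;=\; K_f \left[\, (P_c - P_i) \;-\; \sigma\,(\pi_p - \pi_i) \,\right]
```

where Kf is the filtration coefficient and σ the Staverman reflection coefficient, both discussed below.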
+{"text":"By convention, outward force is defined as positive, and inward force is defined as negative. If Jv is positive, solvent is leaving the capillary (filtration). If negative, solvent is entering the capillary (absorption)."}
+{"text":"Applying the classic Starling equation, it had long been taught that continuous capillaries filter out fluid in their arteriolar section and reabsorb most of it in their venular section, as shown by the diagram."}
+{"text":"However, empirical evidence shows that, in most tissues, the flux of the intraluminal fluid of capillaries is continuous and, primarily, effluent. Efflux occurs along the whole length of a capillary. Fluid filtered to the space outside a capillary is mostly returned to the circulation via lymph nodes and the thoracic duct."}
+{"text":"A mechanism for this phenomenon is the Michel\u2013Weinbaum model, in honour of the two scientists who, independently, described the filtration function of the glycocalyx. Briefly, the colloid osmotic pressure \u03c0i of the interstitial fluid has been found to have no effect on Jv, and the colloid osmotic pressure difference that opposes filtration is now known to be \u03c0p minus the subglycocalyx \u03c0, which is close to zero while there is adequate filtration to flush interstitial proteins out of the interendothelial cleft. Consequently, Jv is much less than previously calculated, and the unopposed diffusion of interstitial proteins to the subglycocalyx space if and when filtration falls wipes out the colloid osmotic pressure difference necessary for reabsorption of fluid to the capillary."}
+{"text":"The revised Starling equation is compatible with the steady-state Starling principle:"}
+{"text":"Pressures are often measured in millimetres of mercury (mmHg), and the filtration coefficient in millilitres per minute per millimetre of mercury (ml\u00b7min\u22121\u00b7mmHg\u22121)."}
+{"text":"In some texts the product of hydraulic conductivity and surface area is called the filtration co-efficient Kfc."}
+{"text":"Staverman's reflection coefficient, \"\u03c3\", is a unitless constant that is specific to the permeability of a membrane to a given solute."}
+{"text":"The Starling equation, written without \"\u03c3\", describes the flow of a solvent across a membrane that is impermeable to the solutes contained within the solution."}
+{"text":"\"\u03c3n\" corrects for the partial permeability of a semipermeable membrane to a solute \"n\"."}
+{"text":"Where \"\u03c3\" is close to 1, the plasma membrane is less permeable to the denoted species (for example, larger molecules such as albumin and other plasma proteins), which may flow across the endothelial lining, from higher to lower concentrations, more slowly, while allowing water and smaller solutes through the glycocalyx filter to the extravascular space."}
+{"text":"Following are typically quoted values for the variables in the classic Starling equation:"}
+{"text":"It is reasoned that some albumin escapes from the capillaries and enters the interstitial fluid where it would produce a flow of water equivalent to that produced by a hydrostatic pressure of +3 mmHg. Thus, the difference in protein concentration would produce a flow of fluid into the vessel at the venous end equivalent to 28\u00a0\u2212\u00a03\u00a0=\u00a025\u00a0mmHg of hydrostatic pressure. The total oncotic pressure present at the venous end could be considered as\u00a0+25\u00a0mmHg."}
+{"text":"In the beginning (arteriolar end) of a capillary, there is a net driving force (formula_28) outwards from the capillary of +9 mmHg. In the end (venular end), on the other hand, there is a net driving force of\u00a0\u22128 mmHg."}
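The net driving pressures quoted above can be reproduced with a small sketch. The oncotic difference 28 − 3 = 25 mmHg comes from the text; the capillary and interstitial pressures below are assumed illustrative values chosen to give the classic +9 and −8 mmHg results:

```python
def net_driving_pressure(Pc, Pi, pi_p, pi_i, sigma=1.0):
    # Net outward Starling pressure: (Pc - Pi) - sigma * (pi_p - pi_i).
    # Positive = filtration (outward), negative = absorption (inward).
    return (Pc - Pi) - sigma * (pi_p - pi_i)

arteriolar = net_driving_pressure(Pc=34, Pi=0, pi_p=28, pi_i=3)  # +9 mmHg
venular = net_driving_pressure(Pc=17, Pi=0, pi_p=28, pi_i=3)     # -8 mmHg
```

Averaging the two ends under the linear-decline assumption gives a small net outward force, consistent with the excess fluid drained by the lymphatic system.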
+{"text":"Assuming that the net driving force declines linearly, then there is a mean net driving force outwards from the capillary as a whole, which also results in that more fluid exits a capillary than re-enters it. The lymphatic system drains this excess."}
+{"text":"J. Rodney Levick argues in his textbook that the interstitial force is often underestimated, and measurements used to populate the revised Starling equation show the absorbing forces to be consistently less than capillary or venular pressures."}
+{"text":"Glomerular capillaries have a continuous glycocalyx layer in health and the total transendothelial filtration rate of solvent (formula_7) to the renal tubules is normally around 125 ml\/ min (about 180 litres\/ day). Glomerular capillary formula_7 is more familiarly known as the glomerular filtration rate (GFR). In the rest of the body's capillaries, formula_7 is typically 5 ml\/ min (around 8 litres\/ day), and the fluid is returned to the circulation \"via\" afferent and efferent lymphatics."}
+{"text":"The Starling equation can describe the movement of fluid from pulmonary capillaries to the alveolar air space."}
+{"text":"The principles behind the equation are useful for explaining physiological phenomena in capillaries, such as the formation of edema."}
+{"text":"Woodcock and Woodcock showed in 2012 that the revised Starling equation (steady-state Starling principle) provides scientific explanations for clinical observations concerning intravenous fluid therapy."}
+{"text":"The Starling equation is named for the British physiologist Ernest Starling, who is also recognised for the Frank\u2013Starling law of the heart. Starling can be credited with identifying that the \"absorption of isotonic salt solutions (from the extravascular space) by the blood vessels is determined by this osmotic pressure of the serum proteins\" in 1896."}
+{"text":"In fluid dynamics, the mild-slope equation describes the combined effects of diffraction and refraction for water waves propagating over bathymetry and due to lateral boundaries\u2014like breakwaters and coastlines. It is an approximate model, deriving its name from being originally developed for wave propagation over mild slopes of the sea floor. The mild-slope equation is often used in coastal engineering to compute the wave-field changes near harbours and coasts."}
+{"text":"The mild-slope equation models the propagation and transformation of water waves, as they travel through waters of varying depth and interact with lateral boundaries such as cliffs, beaches, seawalls and breakwaters. As a result, it describes the variations in wave amplitude, or equivalently wave height. From the wave amplitude, the amplitude of the flow velocity oscillations underneath the water surface can also be computed. These quantities\u2014wave amplitude and flow-velocity amplitude\u2014may subsequently be used to determine the wave effects on coastal and offshore structures, ships and other floating objects, sediment transport and resulting bathymetric changes of the sea bed and coastline, mean flow fields and mass transfer of dissolved and floating materials. Most often, the mild-slope equation is solved by computer using methods from numerical analysis."}
+{"text":"A first form of the mild-slope equation was developed by Eckart in 1952, and an improved version\u2014the mild-slope equation in its classical formulation\u2014has been derived independently by Juri Berkhoff in 1972. Thereafter, many modified and extended forms have been proposed, to include the effects of, for instance: wave\u2013current interaction, wave nonlinearity, steeper sea-bed slopes, bed friction and wave breaking. Also parabolic approximations to the mild-slope equation are often used, in order to reduce the computational cost."}
+{"text":"In case of a constant depth, the mild-slope equation reduces to the Helmholtz equation for wave diffraction."}
+{"text":"For monochromatic waves according to linear theory\u2014with the free surface elevation given as formula_1 and the waves propagating on a fluid layer of mean water depth formula_2\u2014the mild-slope equation is:"}
+{"text":"The phase and group speed depend on the dispersion relation, and are derived from Airy wave theory as:"}
+{"text":"For a given angular frequency formula_7, the wavenumber formula_12 has to be solved from the dispersion equation, which relates these two quantities to the water depth formula_20."}
+{"text":"the mild slope equation can be cast in the form of an inhomogeneous Helmholtz equation:"}
+{"text":"In spatially coherent fields of propagating waves, it is useful to split the complex amplitude formula_4 in its amplitude and phase, both real valued:"}
+{"text":"This transforms the mild-slope equation in the following set of equations (apart from locations for which formula_30 is singular):"}
+{"text":"The last equation shows that wave energy is conserved in the mild-slope equation, and that the wave energy formula_32 is transported in the formula_33-direction normal to the wave crests (in this case of pure wave motion without mean currents). The effective group speed formula_40 is different from the group speed formula_41"}
+{"text":"The first equation states that the effective wavenumber formula_33 is irrotational, a direct consequence of the fact it is the derivative of the wave phase formula_43, a scalar field. The second equation is the eikonal equation. It shows the effects of diffraction on the effective wavenumber: only for more-or-less progressive waves, with formula_44 the splitting into amplitude formula_45 and phase formula_43 leads to consistent-varying and meaningful fields of formula_45 and formula_33. Otherwise, \"\u03ba\"2 can even become negative. When the diffraction effects are totally neglected, the effective wavenumber \"\u03ba\" is equal to formula_12, and the geometric optics approximation for wave refraction can be used."}
+{"text":"When formula_50 is used in the mild-slope equation, the result is, apart from a factor formula_51:"}
+{"text":"Now both the real part and the imaginary part of this equation have to be equal to zero:"}
+{"text":"The effective wavenumber vector formula_33 is \"defined\" as the gradient of the wave phase:"}
+{"text":"Note that formula_33 is an irrotational field, since the curl of the gradient is zero:"}
+{"text":"Now the real and imaginary parts of the transformed mild-slope equation become, first multiplying the imaginary part by formula_45:"}
+{"text":"The first equation directly leads to the eikonal equation above for formula_61, while the second gives:"}
+{"text":"which\u2014by noting that formula_63 in which the angular frequency formula_7 is a constant for time-harmonic motion\u2014leads to the wave-energy conservation equation."}
+{"text":"The mild-slope equation can be derived by the use of several methods. Here, we will use a variational approach. The fluid is assumed to be inviscid and incompressible, and the flow is assumed to be irrotational. These assumptions are valid ones for surface gravity waves, since the effects of vorticity and viscosity are only significant in the Stokes boundary layers (for the oscillatory part of the flow). Because the flow is irrotational, the wave motion can be described using potential flow theory."}
+{"text":"Luke's Lagrangian formulation gives a variational formulation for non-linear surface gravity waves."}
+{"text":"For the case of a horizontally unbounded domain with a constant density formula_36, a free fluid surface at formula_66 and a fixed sea bed at formula_67 Luke's variational principle formula_68 uses the Lagrangian"}
+{"text":"where formula_70 is the horizontal Lagrangian density, given by:"}
+{"text":"where formula_72 is the velocity potential, with the flow velocity components being formula_73 formula_74 and formula_75 in the formula_76, formula_77 and formula_78 directions, respectively."}
+{"text":"Luke's Lagrangian formulation can also be recast into a Hamiltonian formulation in terms of the surface elevation and velocity potential at the free surface."}
+{"text":"Taking the variations of formula_79 with respect to the potential formula_72 and surface elevation formula_81 leads to the Laplace equation for formula_82 in the fluid interior, as well as all the boundary conditions both on the free surface formula_66 as at the bed at formula_84"}
+{"text":"In case of linear wave theory, the vertical integral in the Lagrangian density formula_70 is split into a part from the bed formula_86 to the mean surface at formula_87 and a second part from formula_88 to the free surface formula_89. Using a Taylor series expansion for the second integral around the mean free-surface elevation formula_87 and only retaining quadratic terms in formula_82 and formula_92 the Lagrangian density formula_93 for linear wave motion becomes"}
+{"text":"The term formula_95 in the vertical integral is dropped since it has become dynamically uninteresting: it gives a zero contribution to the Euler\u2013Lagrange equations, with the upper integration limit now fixed. The same is true for the neglected bottom term proportional to formula_96 in the potential energy."}
+{"text":"The waves propagate in the horizontal formula_6 plane, while the structure of the potential formula_82 is not wave-like in the vertical formula_78-direction. This suggests the use of the following assumption on the form of the potential formula_100"}
+{"text":"Here formula_104 is the velocity potential at the mean free-surface level formula_103 Next, the mild-slope assumption is made, in that the vertical shape function formula_106 changes slowly in the formula_6-plane, and horizontal derivatives of formula_106 can be neglected in the flow velocity. So:"}
+{"text":"The Euler\u2013Lagrange equations for this Lagrangian density formula_93 are, with formula_114 representing either formula_115 or formula_116"}
+{"text":"Now formula_118 is first taken equal to formula_115 and then to formula_120"}
+{"text":"As a result, the evolution equations for the wave motion become:"}
+{"text":"with \u2207 the horizontal gradient operator: \u2207\u00a0\u2261(\u2202\/\u2202\"x\" \u2202\/\u2202\"y\")T where T denotes the transpose."}
+{"text":"The next step is to choose the shape function formula_106 and to determine formula_123 and formula_124"}
+{"text":"Vertical shape function from Airy wave theory."}
+{"text":"Since the objective is the description of waves over mildly sloping beds, the shape function formula_125 is chosen according to Airy wave theory. This is the linear theory of waves propagating in constant depth formula_126 The form of the shape function is:"}
+{"text":"with formula_128 now in general not a constant, but chosen to vary with formula_76 and formula_77 according to the local depth formula_2 and the linear dispersion relation:"}
+{"text":"Here formula_133 a constant angular frequency, chosen in accordance with the characteristics of the wave field under study. Consequently, the integrals formula_123 and formula_135 become:"}
+{"text":"The following time-dependent equations give the evolution of the free-surface elevation formula_81 and free-surface potential formula_138"}
+{"text":"From the two evolution equations, one of the variables formula_115 or formula_141 can be eliminated, to obtain the time-dependent form of the mild-slope equation:"}
+{"text":"and the corresponding equation for the free-surface potential is identical, with formula_141 replaced by formula_144 The time-dependent mild-slope equation can be used to model waves in a narrow band of frequencies around formula_145"}
+{"text":"Consider monochromatic waves with complex amplitude formula_4 and angular frequency formula_147"}
+{"text":"with formula_7 and formula_133 chosen equal to each other, formula_151 Using this in the time-dependent form of the mild-slope equation, recovers the classical mild-slope equation for time-harmonic wave motion:"}
+{"text":"Applicability and validity of the mild-slope equation."}
+{"text":"The standard mild slope equation, without extra terms for bed slope and bed curvature, provides accurate results for the wave field over bed slopes ranging from 0 to about 1\/3. However, some subtle aspects, like the amplitude of reflected waves, can be completely wrong, even for slopes going to zero. This mathematical curiosity has little practical importance in general since this reflection becomes vanishingly small for small bottom slopes."}
+{"text":"The vorticity equation of fluid dynamics describes evolution of the vorticity of a particle of a fluid as it moves with its flow, that is, the local rotation of the fluid (in terms of vector calculus this is the curl of the flow velocity)."}
+{"text":"where is the material derivative operator, is the flow velocity, is the local fluid density, is the local pressure, is the viscous stress tensor and represents the sum of the external body forces. The first source term on the right hand side represents vortex stretching."}
+{"text":"The equation is valid in the absence of any concentrated torques and line forces, for a compressible Newtonian fluid."}
+{"text":"In the case of incompressible (i.e. low Mach number) and isotropic fluids, with conservative body forces, the equation simplifies to the vorticity transport equation"}
+{"text":"where is the kinematic viscosity and is the Laplace operator."}
+{"text":"Thus for an inviscid, barotropic fluid with conservative body forces, the vorticity equation simplifies to"}
+{"text":"Alternately, in case of incompressible, inviscid fluid with conservative body forces,"}
+{"text":"For a brief review of additional cases and simplifications, see also. For the vorticity equation in turbulence theory, in context of the flows in oceans and atmosphere, refer to."}
+{"text":"The vorticity equation can be derived from the Navier\u2013Stokes equation for the conservation of angular momentum. In the absence of any concentrated torques and line forces, one obtains"}
+{"text":"Now, vorticity is defined as the curl of the flow velocity vector. Taking the curl of momentum equation yields the desired equation."}
+{"text":"The following identities are useful in derivation of the equation:"}
+{"text":"The vorticity equation can be expressed in tensor notation using Einstein's summation convention and the Levi-Civita symbol :"}
+{"text":"In the atmospheric sciences, the vorticity equation can be stated in terms of the absolute vorticity of air with respect to an inertial frame, or of the vorticity with respect to the rotation of the Earth. The absolute version is"}
+{"text":"Here, is the polar () component of the vorticity, is the atmospheric density, , , and w are the components of wind velocity, and is the 2-dimensional (i.e. horizontal-component-only) del."}
+{"text":"The Vogel-Fulcher-Tammann equation, also known as Vogel-Fulcher-Tammann-Hesse equation or Vogel-Fulcher equation (abbreviated: VFT equation), is used to describe the viscosity of liquids as a function of temperature, and especially its strongly temperature dependent variation in the supercooled regime, upon approaching the glass transition. In this regime the viscosity of certain liquids can increase by up to 13 orders of magnitude within a relatively narrow temperature interval."}
+{"text":"where formula_2 and formula_3 are empirical material-dependent parameters, and formula_4 is also an empirical fitting parameter, and typically lies about 50\u00a0\u00b0C below the glass transition temperature. These three parameters are normally used as adjustable parameters to fit the VFT equation to experimental data of specific systems."}
+{"text":"The VFT equation is named after Hans Vogel, Gordon Scott Fulcher (1884\u20131971) and Gustav Tammann (1861\u20131938)."}
+{"text":"In fluid dynamics, the Boussinesq approximation for water waves is an approximation valid for weakly non-linear and fairly long waves. The approximation is named after Joseph Boussinesq, who first derived them in response to the observation by John Scott Russell of the wave of translation (also known as solitary wave or soliton). The 1872 paper of Boussinesq introduces the equations now known as the Boussinesq equations."}
+{"text":"The Boussinesq approximation for water waves takes into account the vertical structure of the horizontal and vertical flow velocity. This results in non-linear partial differential equations, called Boussinesq-type equations, which incorporate frequency dispersion (as opposite to the shallow water equations, which are not frequency-dispersive). In coastal engineering, Boussinesq-type equations are frequently used in computer models for the simulation of water waves in shallow seas and harbours."}
+{"text":"While the Boussinesq approximation is applicable to fairly long waves \u2013 that is, when the wavelength is large compared to the water depth \u2013 the Stokes expansion is more appropriate for short waves (when the wavelength is of the same order as the water depth, or shorter)."}
+{"text":"The essential idea in the Boussinesq approximation is the elimination of the vertical coordinate from the flow equations, while retaining some of the influences of the vertical structure of the flow under water waves. This is useful because the waves propagate in the horizontal plane and have a different (not wave-like) behaviour in the vertical direction. Often, as in Boussinesq's case, the interest is primarily in the wave propagation."}
+{"text":"This elimination of the vertical coordinate was first done by Joseph Boussinesq in 1871, to construct an approximate solution for the solitary wave (or wave of translation). Subsequently, in 1872, Boussinesq derived the equations known nowadays as the Boussinesq equations."}
+{"text":"The steps in the Boussinesq approximation are:"}
+{"text":"Thereafter, the Boussinesq approximation is applied to the remaining flow equations, in order to eliminate the dependence on the vertical coordinate."}
+{"text":"As a result, the resulting partial differential equations are in terms of functions of the horizontal coordinates (and time)."}
+{"text":"As an example, consider potential flow over a horizontal bed in the (\"x,z\") plane, with \"x\" the horizontal and \"z\" the vertical coordinate. The bed is located at , where \"h\" is the mean water depth. A Taylor expansion is made of the velocity potential \"\u03c6(x,z,t)\" around the bed level :"}
+{"text":"where \"\u03c6b(x,t)\" is the velocity potential at the bed. Invoking Laplace's equation for \"\u03c6\", as valid for incompressible flow, gives:"}
+{"text":"since the vertical velocity is zero at the \u2013 impermeable \u2013 horizontal bed . This series may subsequently be truncated to a finite number of terms."}
+{"text":"For water waves on an incompressible fluid and irrotational flow in the (\"x\",\"z\") plane, the boundary conditions at the free surface elevation are:"}
+{"text":"Now the Boussinesq approximation for the velocity potential \"\u03c6\", as given above, is applied in these boundary conditions. Further, in the resulting equations only the linear and quadratic terms with respect to \"\u03b7\" and \"ub\" are retained (with the horizontal velocity at the bed ). The cubic and higher order terms are assumed to be negligible. Then, the following partial differential equations are obtained:"}
+{"text":"This set of equations has been derived for a flat horizontal bed, \"i.e.\" the mean depth \"h\" is a constant independent of position \"x\". When the right-hand sides of the above equations are set to zero, they reduce to the shallow water equations."}
+{"text":"Under some additional approximations, but at the same order of accuracy, the above set A can be reduced to a single partial differential equation for the free surface elevation \"\u03b7\":"}
+{"text":"From the terms between brackets, the importance of nonlinearity of the equation can be expressed in terms of the Ursell number."}
+{"text":"In dimensionless quantities, using the water depth \"h\" and gravitational acceleration \"g\" for non-dimensionalization, this equation reads, after normalization:"}
+{"text":"Water waves of different wave lengths travel with different phase speeds, a phenomenon known as frequency dispersion. For the case of infinitesimal wave amplitude, the terminology is \"linear frequency dispersion\". The frequency dispersion characteristics of a Boussinesq-type of equation can be used to determine the range of wave lengths, for which it is a valid approximation."}
+{"text":"The linear frequency dispersion characteristics for the above set A of equations are:"}
+{"text":"The relative error in the phase speed \"c\" for set A, as compared with linear theory for water waves, is less than 4% for a relative wave number . So, in engineering applications, set A is valid for wavelengths \"\u03bb\" larger than 4 times the water depth \"h\"."}
+{"text":"The linear frequency dispersion characteristics of equation B are:"}
+{"text":"The relative error in the phase speed for equation B is less than 4% for , equivalent to wave lengths \"\u03bb\" longer than 7 times the water depth \"h\", called fairly long waves."}
+{"text":"For short waves with equation B become physically meaningless, because there are no longer real-valued solutions of the phase speed."}
+{"text":"The original set of two partial differential equations (Boussinesq, 1872, equation 25, see set A above) does not have this shortcoming."}
+{"text":"The shallow water equations have a relative error in the phase speed less than 4% for wave lengths \"\u03bb\" in excess of 13 times the water depth \"h\"."}
+{"text":"There are an overwhelming number of mathematical models which are referred to as Boussinesq equations. This may easily lead to confusion, since often they are loosely referenced to as \"the\" Boussinesq equations, while in fact a variant thereof is considered. So it is more appropriate to call them Boussinesq-type equations. Strictly speaking, \"the\" Boussinesq equations is the above-mentioned set B, since it is used in the analysis in the remainder of his 1872 paper."}
+{"text":"Some directions, into which the Boussinesq equations have been extended, are:"}
+{"text":"While the Boussinesq equations allow for waves traveling simultaneously in opposing directions, it is often advantageous to only consider waves traveling in one direction. Under small additional assumptions, the Boussinesq equations reduce to:"}
+{"text":"Besides solitary wave solutions, the Korteweg\u2013de Vries equation also has periodic and exact solutions, called cnoidal waves. These are approximate solutions of the Boussinesq equation."}
+{"text":"For the simulation of wave motion near coasts and harbours, numerical models \u2013 both commercial and academic \u2013 employing Boussinesq-type equations exist. Some commercial examples are the Boussinesq-type wave modules in MIKE 21 and SMS. Some of the free Boussinesq models are Celeris, COULWAVE, and FUNWAVE. Most numerical models employ finite-difference, finite-volume or finite element techniques for the discretization of the model equations. Scientific reviews and intercomparisons of several Boussinesq-type equations, their numerical approximation and performance are e.g. , and ."}
+{"text":"In fluid dynamics, Batchelor vortices, first described by George Batchelor in a 1964 article, have been found useful in analyses of airplane vortex wake hazard problems."}
+{"text":"The Batchelor vortex is an approximate solution to the Navier-Stokes equations obtained using a boundary layer approximation. The physical reasoning behind this approximation is the assumption that the axial gradient of the flow field of interest is of much smaller magnitude than the radial gradient."}
+{"text":"The axial, radial and azimuthal velocity components of the vortex are denoted formula_1,formula_2 and formula_3 respectively and can be represented in cylindrical coordinates formula_4 as follows:
"}
+{"text":"The parameters in the above equations are"}
+{"text":"Note that the radial component of the velocity is zero and that the axial and azimuthal components depend only on formula_13."}
+{"text":"We now write the system above in dimensionless form by scaling time by a factor formula_14. Using the same symbols for the dimensionless variables, the Batchelor vortex can be expressed in terms of the dimensionless variables as"}
+{"text":"where formula_16 denotes the free stream axial velocity and formula_17 is the Reynolds number."}
+{"text":"If one lets formula_18 and considers an infinitely large swirl number then the Batchelor vortex simplifies to the Lamb\u2013Oseen vortex for the azimuthal velocity:"}
+{"text":"When using the notation formula_1 for dynamic viscosity, formula_2 for the liquid-solid contact angle, formula_3 for surface tension , formula_4 for the fluid density, \"t\" for time, and \"r\" for the cross-sectional radius of the capillary and \"x\" for the distance the fluid has advanced, the Bosanquet equation of motion is"}
+{"text":"assuming that the motion is completely driven by surface tension, with no applied pressure at either end of the capillary tube."}
+{"text":"The solution of the Bosanquet equation can be split into two timescales, firstly to account for the initial motion of the fluid by considering a solution in the limit of time approaching 0 giving the form"}
+{"text":"For the condition of short time this shows a meniscus front position proportional to time rather than the Lucas-Washburn square root of time, and the independence of viscosity demonstrates plug flow."}
+{"text":"As time increases after the initial time of acceleration, the equation decays to the familiar Lucas-Washburn form dependent on viscosity and the square root of time."}
+{"text":"The black-oil equations are a set of partial differential equations that describe fluid flow in a petroleum reservoir, constituting the mathematical framework for a black-oil reservoir simulator."}
+{"text":"The term \"black-oil\" refers to the fluid model, in which water is modeled explicitly together with two hydrocarbon components, one (pseudo) oil phase and one (pseudo-)gas phase."}
+{"text":"This is in contrast with a compositional formulation, in which each hydrocarbon component (arbitrary number) is handled separately"}
+{"text":"The equations of an extended black-oil model are"}
+{"text":"formula_4 is a porosity of the porous medium,"}
+{"text":"and vapor (\"gas\") phases in the reservoir,"}
+{"text":"Darcy velocities of the liquid phase, water phase and vapor phase in the reservoir."}
+{"text":"The oil and gas at the surface (standard conditions) could be produced from both liquid and vapor phases existing at high pressure and temperature of reservoir conditions. This is characterized by the following quantities:"}
+{"text":"(ratio of some volume of reservoir liquid"}
+{"text":"to the volume of oil at standard conditions"}
+{"text":"obtained from the same volume of reservoir liquid),"}
+{"text":"formula_9 is a water formation volume factor"}
+{"text":"(ratio of volume of water at reservoir conditions to volume of water at standard conditions),"}
+{"text":"formula_10 is a gas formation volume factor"}
+{"text":"(ratio of some volume of reservoir vapor"}
+{"text":"to the volume of gas at standard conditions obtained from the same volume of reservoir vapor),"}
+{"text":"formula_11 is a solution of gas in oil phase"}
+{"text":"(ratio of volume of gas to the volume of oil at standard conditions"}
+{"text":"obtained from some amount of liquid phase at reservoir conditions),"}
+{"text":"formula_12 is a vaporized oil in gas phase"}
+{"text":"(ratio of volume of oil to the volume of gas at standard conditions"}
+{"text":"obtained from some amount of vapor phase at reservoir conditions)."}
+{"text":"In fluid dynamics, the Oseen equations (or Oseen flow) describe the flow of a viscous and incompressible fluid at small Reynolds numbers, as formulated by Carl Wilhelm Oseen in 1910. Oseen flow is an improved description of these flows, as compared to Stokes flow, with the (partial) inclusion of convective acceleration."}
+{"text":"Oseen's work is based on the experiments of G.G. Stokes, who had studied the falling of a sphere through a viscous fluid. He developed a correction term, which included inertial factors, for the flow velocity used in Stokes' calculations, to solve the problem known as Stokes' paradox. His approximation leads to an improvement to Stokes' calculations."}
+{"text":"The Oseen equations are, in case of an object moving with a steady flow velocity U through the fluid\u2014which is at rest far from the object\u2014and in a frame of reference attached to the object:"}
+{"text":"The boundary conditions for the Oseen flow around a rigid object are:"}
+{"text":"with \"r\" the distance from the object's center, and \"p\"\u221e the undisturbed pressure far from the object."}
+{"text":"A fundamental property of Oseen's equation is that the general solution can be split into \"longitudinal\" and \"transversal\" waves."}
+{"text":"A solution formula_3 is a longitudinal wave if the velocity is irrotational and hence the viscous term drops out. The equations become"}
+{"text":"Velocity is derived from potential theory and pressure is from linearized Bernoulli's equations."}
+{"text":"A solution formula_6 is a transversal wave if the pressure formula_7 is identically zero and the velocity field is solenoidal. The equations are"}
+{"text":"Then the complete Oseen solution is given by"}
+{"text":"a splitting theorem due to Horace Lamb. The splitting is unique if conditions at infinity (say formula_10) are specified."}
+{"text":"For certain Oseen flows, further splitting of transversal wave into irrotational and rotational component is possible formula_11 Let formula_12 be the scalar function which satisfies formula_13 and vanishes at infinity and conversely let formula_14 be given such that formula_15, then the transversal wave is"}
+{"text":"where formula_12 is determined from formula_18 and formula_19 is the unit vector. Neither formula_20 or formula_21 are transversal by itself, but formula_22 is transversal. Therefore,"}
+{"text":"The only rotational component is being formula_21."}
+{"text":"The fundamental solution due to a singular point force embedded in an Oseen flow is the Oseenlet. The closed-form fundamental solutions for the generalized unsteady Stokes and Oseen flows associated with arbitrary time-dependent translational and rotational motions have been derived for the Newtonian and micropolar fluids."}
+{"text":"Using the Oseen equation, Horace Lamb was able to derive improved expressions for the viscous flow around a sphere in 1911, improving on Stokes law towards somewhat higher Reynolds numbers. Also, Lamb derived\u2014for the first time\u2014a solution for the viscous flow around a circular cylinder."}
+{"text":"The solution to the response of a singular force formula_25 when no external boundaries are present be written as"}
+{"text":"If formula_27, where formula_28 is the singular force concentrated at the point formula_29 and formula_30 is an arbitrary point and formula_31 is the given vector, which gives the direction of the singular force, then in the absence of boundaries, the velocity and pressure is derived from the fundamental tensor formula_32 and the fundamental vector formula_33"}
+{"text":"Now if formula_25 is arbitrary function of space, the solution for an unbounded domain is"}
+{"text":"where formula_37 is the infinitesimal volume\/area element around the point formula_29."}
+{"text":"Without loss of generality formula_39 taken at the origin and formula_40. Then the fundamental tensor and vector are"}
+{"text":"where formula_43 is the modified Bessel function of the second kind of order zero."}
+{"text":"Without loss of generality formula_44 taken at the origin and formula_45. Then the fundamental tensor and vector are"}
+{"text":"Oseen considered the sphere to be stationary and the fluid to be flowing with a flow velocity (formula_48) at an infinite distance from the sphere. Inertial terms were neglected in Stokes\u2019 calculations. It is a limiting solution when the Reynolds number tends to zero. When the Reynolds number is small and finite, such as 0.1, correction for the inertial term is needed. Oseen substituted the following flow velocity values into the Navier-Stokes equations."}
+{"text":"Inserting these into the Navier-Stokes equations and neglecting the quadratic terms in the primed quantities leads to the derivation of Oseen's approximation:"}
+{"text":"Since the motion is symmetric with respect to formula_51 axis and the divergence of the vorticity vector is always zero we get:"}
+{"text":"the function formula_53 can be eliminated by adding a suitable function of formula_51; formula_53 is the vorticity function, and the previous expression can be written as:"}
+{"text":"and by some integration the solution for formula_12 is:"}
+{"text":"thus letting formula_51 be the \"privileged direction\" produces:"}
+{"text":"then by applying the three boundary conditions we obtain"}
+{"text":"the new, improved drag coefficient now becomes:"}
+{"text":"and finally, when Stokes' solution was solved on the basis of Oseen's approximation, it showed that the resultant drag force is given by"}
+{"text":"The force from Oseen's equation differs from that of Stokes by a factor of"}
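As a rough numerical sketch of this correction (using illustrative fluid and sphere parameters, not values from the text, and the common diameter-based form of the Reynolds number):

```python
import math

def stokes_drag(mu, a, U):
    """Stokes drag on a sphere of radius a: F = 6*pi*mu*a*U (Re -> 0 limit)."""
    return 6 * math.pi * mu * a * U

def oseen_drag(mu, rho, a, U):
    """Oseen-corrected drag: Stokes drag times (1 + 3*Re/16),
    with Re based on the sphere diameter, Re = 2*a*U*rho/mu."""
    Re = 2 * a * U * rho / mu
    return stokes_drag(mu, a, U) * (1 + 3 * Re / 16)

# Illustrative values (water-like fluid, 0.1 mm sphere at 1 mm/s) -- assumptions
mu, rho = 1.0e-3, 1.0e3    # dynamic viscosity (Pa*s), density (kg/m^3)
a, U = 1.0e-4, 1.0e-3      # sphere radius (m), free-stream speed (m/s)
Re = 2 * a * U * rho / mu
print(f"Re = {Re:.2f}, Oseen/Stokes drag ratio = {1 + 3 * Re / 16:.4f}")
```

At Re = 0.2 the correction is under 4 percent, consistent with Oseen's result being a small-Reynolds-number refinement of Stokes' law.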
+{"text":"In the far field formula_71 \u226b 1, the viscous stress is dominated by the last term. That is:"}
+{"text":"The inertia term is dominated by:"}
+{"text":"The error is then given by the ratio:"}
+{"text":"This becomes unbounded for formula_71 \u226b 1, therefore the inertia cannot be ignored in the far field. By taking the curl, the Stokes equation gives formula_76. Since the body is a source of vorticity, formula_77 would become unbounded logarithmically for large formula_78. This is certainly unphysical and is known as Stokes' paradox."}
+{"text":"Solution for a moving sphere in incompressible fluid."}
+{"text":"Consider the case of a solid sphere moving in a stationary liquid with a constant velocity."}
+{"text":"The liquid is modeled as an incompressible fluid (i.e. with constant density), and being stationary means that its velocity tends towards zero as the distance from the sphere approaches infinity."}
+{"text":"For a real body there will be a transient effect due to its acceleration as it begins its motion; however after enough time it will tend towards zero, so that the fluid velocity everywhere will approach the one obtained in the hypothetical case in which the body is already moving for infinite time."}
+{"text":"Thus we assume a sphere of radius \"a\" moving at a constant velocity formula_79, in an incompressible fluid that is at rest at infinity. We will work in coordinates formula_80 that move along with the sphere with the coordinate center located at the sphere's center. We have:"}
+{"text":"Since these boundary conditions, as well as the equations of motion, are time invariant (i.e. they are unchanged by shifting the time formula_82) when expressed in the formula_80 coordinates, the solution depends upon the time only through these coordinates."}
+{"text":"The equations of motion are the Navier-Stokes equations defined in the resting frame coordinates formula_84. While spatial derivatives are equal in both coordinate systems, the time derivative that appears in the equations satisfies:"}
+{"text":"where the derivative formula_86 is with respect to the moving coordinates formula_80. We henceforth omit the \"m\" subscript."}
+{"text":"Oseen's approximation amounts to neglecting the term that is non-linear in formula_88. Thus the incompressible Navier-Stokes equations become:"}
+{"text":"for a fluid having density \u03c1 and kinematic viscosity \u03bd = \u03bc\/\u03c1 (\u03bc being the dynamic viscosity). \"p\" is the pressure."}
+{"text":"Due to the continuity equation for an incompressible fluid formula_90, the solution can be expressed using a vector potential formula_91. This potential is directed in the formula_92 direction, and its magnitude is equivalent to the stream function used in two-dimensional problems. It turns out to be:"}
+{"text":"where formula_94 is the Reynolds number for the flow close to the sphere."}
+{"text":"Note that in some notations formula_95 is replaced by formula_96 so that the derivation of formula_88 from formula_98 is more similar to its derivation from the stream function in the two-dimensional case (in polar coordinates)."}
+{"text":"The vector Laplacian of a vector of the type formula_104 reads:"}
+{"text":"where we have used the vanishing of the divergence of formula_91 to relate the vector laplacian and a double curl."}
+{"text":"The equation of motion's left hand side is the curl of the following:"}
+{"text":"We calculate the derivative separately for each term in formula_95."}
+{"text":"Taking the curl, we find an expression that is equal to formula_116 times the gradient of the following function, which is the pressure:"}
+{"text":"where formula_118 is the pressure at infinity and formula_119 is the polar angle measured from the side opposite the front stagnation point (formula_120 at the front stagnation point)."}
+{"text":"Also, the velocity is derived by taking the curl of formula_91:"}
+{"text":"These \"p\" and \"u\" satisfy the equation of motion and thus constitute the solution to Oseen's approximation."}
+{"text":"One may question, however, whether the correction term was chosen by chance, because in a frame of reference moving with the sphere, the fluid near the sphere is almost at rest, and in that region inertial force is negligible and Stokes' equation is well justified. Far away from the sphere, the flow velocity approaches \"u\" and Oseen's approximation is more accurate. But Oseen's equation was obtained applying the equation for the entire flow field. This question was answered by Proudman and Pearson in 1957, who solved the Navier-Stokes equations and gave an improved Stokes' solution in the neighborhood of the sphere and an improved Oseen's solution at infinity, and matched the two solutions in a supposed common region of their validity. They obtained:"}
+{"text":"The method and formulation for analysis of flow at a very low Reynolds number is important. The slow motion of small particles in a fluid is common in bio-engineering. Oseen's drag formulation can be used in connection with flow of fluids under various special conditions, such as: containing particles, sedimentation of particles, centrifugation or ultracentrifugation of suspensions, colloids, and blood through isolation of tumors and antigens. The fluid does not even have to be a liquid, and the particles do not need to be solid. It can be used in a number of applications, such as smog formation and atomization of liquids."}
+{"text":"Blood flow in small vessels, such as capillaries, is characterized by small Reynolds and Womersley numbers. A vessel of diameter of with a flow of , viscosity of for blood, density of and a heart rate of , will have a Reynolds number of 0.005 and a Womersley number of 0.0126. At these small Reynolds and Womersley numbers, the viscous effects of the fluid become predominant. Understanding the movement of these particles is essential for drug delivery and studying metastasis movements of cancers."}
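Since the specific vessel figures are elided above, the two dimensionless numbers can still be illustrated with assumed capillary-scale values (the numbers below are hypothetical, not those from the text):

```python
import math

def reynolds(rho, v, D, mu):
    """Reynolds number Re = rho*v*D/mu for a vessel of diameter D."""
    return rho * v * D / mu

def womersley(D, f, rho, mu):
    """Womersley number alpha = (D/2)*sqrt(omega*rho/mu), omega = 2*pi*f."""
    omega = 2 * math.pi * f
    return (D / 2) * math.sqrt(omega * rho / mu)

# Assumed, typical-order values for blood in a capillary
rho, mu = 1060.0, 3.5e-3   # density (kg/m^3), viscosity (Pa*s)
D, v, f = 8e-6, 1e-3, 1.2  # diameter (m), mean speed (m/s), heart rate (Hz)
print(f"Re = {reynolds(rho, v, D, mu):.1e}")
print(f"alpha = {womersley(D, f, rho, mu):.1e}")
```

Both come out far below one, which is why viscous effects dominate in such vessels.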
+{"text":"In fluid dynamics, stream thrust averaging is a process used to convert three-dimensional flow through a duct into one-dimensional uniform flow. It makes the assumptions that the flow is mixed adiabatically and without friction. However, due to the mixing process, there is a net increase in the entropy of the system. Although there is an increase in entropy, the stream thrust averaged values are more representative of the flow than a simple average, as a simple average would violate the second law of thermodynamics."}
+{"text":"Solving for formula_5 yields two solutions. They must both be analyzed to determine which is the physical solution. One will usually be a subsonic root and the other a supersonic root. If it is not clear which value of velocity is correct, the second law of thermodynamics may be applied."}
+{"text":"The values formula_10 and formula_11 are unknown and may be dropped from the formulation. The value of entropy is not necessary, only that the value is positive."}
+{"text":"One possible solution for the stream thrust averaged velocity is unphysical, yielding a negative entropy change. Another method of determining the proper solution is to take a simple average of the velocity and determine which root is closer to it."}
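Under the usual perfect-gas closure, conservation of mass, momentum (stream thrust), and energy reduce to a quadratic in the averaged velocity. A sketch of solving it and exposing both roots (the symbol names and gas constants below are assumptions for illustration, not from the text):

```python
import math

def stream_thrust_velocity(mdot, F, h0, R=287.05, cp=1004.5):
    """Roots of the stream-thrust quadratic for a perfect gas:
        mdot*(1 - R/(2*cp))*V**2 - F*V + mdot*R*h0/cp = 0
    where mdot is the mass flow, F the stream thrust (momentum flux plus
    pressure force), and h0 the stagnation enthalpy. Returns
    (subsonic root, supersonic root); the physical one is then chosen,
    e.g. by requiring a non-negative entropy rise."""
    a = mdot * (1 - R / (2 * cp))
    b = -F
    c = mdot * R * h0 / cp
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("no real solution for the given stream thrust")
    root = math.sqrt(disc)
    return (-b - root) / (2 * a), (-b + root) / (2 * a)

# Illustrative duct flow (assumed values)
v_sub, v_sup = stream_thrust_velocity(mdot=1.0, F=600.0, h0=3.0e5)
print(v_sub, v_sup)  # one subsonic, one supersonic root
```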
+{"text":"In physics, equations of motion are equations that describe the behavior of a physical system in terms of its motion as a function of time. More specifically, the equations of motion describe the behaviour of a physical system as a set of mathematical functions in terms of dynamic variables. These variables are usually spatial coordinates and time, but may include momentum components. The most general choice is generalized coordinates, which can be any convenient variables characteristic of the physical system. The functions are defined in a Euclidean space in classical mechanics, but are replaced by curved spaces in relativity. If the dynamics of a system is known, the equations are the solutions of the differential equations describing its motion."}
+{"text":"There are two main descriptions of motion: dynamics and kinematics. Dynamics is general, since the momenta, forces and energy of the particles are taken into account. In this instance, sometimes the term \"dynamics\" refers to the differential equations that the system satisfies (e.g., Newton's second law or Euler\u2013Lagrange equations), and sometimes to the solutions to those equations."}
+{"text":"However, kinematics is simpler. It concerns only variables derived from the positions of objects and time. In circumstances of constant acceleration, these simpler equations of motion are usually referred to as the SUVAT equations, arising from the definitions of kinematic quantities: displacement (), initial velocity (), final velocity (), acceleration (), and time ()."}
+{"text":"Equations of motion can therefore be grouped under these main classifiers of motion. In all cases, the main types of motion are translations, rotations, oscillations, or any combinations of these."}
+{"text":"A differential equation of motion, usually identified as some physical law and applying definitions of physical quantities, is used to set up an equation for the problem. Solving the differential equation will lead to a general solution with arbitrary constants, the arbitrariness corresponding to a family of solutions. A particular solution can be obtained by setting the initial values, which fixes the values of the constants."}
+{"text":"To state this formally, in general an equation of motion is a function of the position of the object, its velocity (the first time derivative of , ), and its acceleration (the second derivative of , ), and time . Euclidean vectors in 3D are denoted throughout in bold. This is equivalent to saying an equation of motion in is a second-order ordinary differential equation (ODE) in ,"}
+{"text":"where is time, and each overdot denotes one time derivative. The initial conditions are given by the \"constant\" values at ,"}
+{"text":"The solution to the equation of motion, with specified initial values, describes the system for all times after . Other dynamical variables like the momentum of the object, or quantities derived from and like angular momentum, can be used in place of as the quantity to solve for from some equation of motion, although the position of the object at time is by far the most sought-after quantity."}
+{"text":"Sometimes, the equation will be linear and is more likely to be exactly solvable. In general, the equation will be non-linear, and cannot be solved exactly so a variety of approximations must be used. The solutions to nonlinear equations may show chaotic behavior depending on how \"sensitive\" the system is to the initial conditions."}
+{"text":"Kinematics, dynamics and the mathematical models of the universe developed incrementally over three millennia, thanks to many thinkers, only some of whose names we know. In antiquity, priests, astrologers and astronomers predicted solar and lunar eclipses, the solstices and the equinoxes of the Sun and the period of the Moon. But they had nothing other than a set of algorithms to guide them. Equations of motion were not written down for another thousand years."}
+{"text":"Medieval scholars in the thirteenth century \u2014 for example at the relatively new universities in Oxford and Paris \u2014 drew on ancient mathematicians (Euclid and Archimedes) and philosophers (Aristotle) to develop a new body of knowledge, now called physics."}
+{"text":"At Oxford, Merton College sheltered a group of scholars devoted to natural science, mainly physics, astronomy and mathematics, who were of similar stature to the intellectuals at the University of Paris. Thomas Bradwardine extended Aristotelian quantities such as distance and velocity, and assigned intensity and extension to them. Bradwardine suggested an exponential law involving force, resistance, distance, velocity and time. Nicholas Oresme further extended Bradwardine's arguments. The Merton school proved that the quantity of motion of a body undergoing a uniformly accelerated motion is equal to the quantity of a uniform motion at the speed achieved halfway through the accelerated motion."}
+{"text":"Discourses such as these spread throughout Europe, shaping the work of Galileo Galilei and others, and helped in laying the foundation of kinematics. Galileo deduced the equation in his work geometrically, using the Merton rule, now known as a special case of one of the equations of kinematics."}
+{"text":"The term \"inertia\" was used by Kepler who applied it to bodies at rest. (The first law of motion is now often called the law of inertia.)"}
+{"text":"Galileo did not fully grasp the third law of motion, the law of the equality of action and reaction, though he corrected some errors of Aristotle. With Stevin and others Galileo also wrote on statics. He formulated the principle of the parallelogram of forces, but he did not fully recognize its scope."}
+{"text":"Galileo was also interested in the laws of the pendulum, his first observations of which he made as a young man. In 1583, while he was praying in the cathedral at Pisa, his attention was caught by the motion of the great lamp, lighted and left swinging, which he timed against his own pulse. The period appeared to him to be the same even after the motion had greatly diminished; he had discovered the isochronism of the pendulum."}
+{"text":"More careful experiments carried out by him later, and described in his Discourses, revealed that the period of oscillation varies with the square root of the length but is independent of the mass of the pendulum."}
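This square-root dependence, and the absence of the mass, is easy to check with the small-angle formula T = 2*pi*sqrt(L/g):

```python
import math

def pendulum_period(length, g=9.81):
    """Small-angle pendulum period T = 2*pi*sqrt(L/g); the mass does not
    appear, matching Galileo's observation."""
    return 2 * math.pi * math.sqrt(length / g)

# Quadrupling the length should double the period, regardless of the bob's mass
print(pendulum_period(4.0) / pendulum_period(1.0))  # ~2.0
```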
+{"text":"Thus we arrive at Ren\u00e9 Descartes, Isaac Newton, Gottfried Leibniz, et al.; and the evolved forms of the equations of motion that begin to be recognized as the modern ones."}
+{"text":"Later the equations of motion also appeared in electrodynamics, when describing the motion of charged particles in electric and magnetic fields; the Lorentz force is the general equation which serves as the definition of what is meant by an electric field and magnetic field. With the advent of special relativity and general relativity, the theoretical modifications to spacetime meant the classical equations of motion were also modified to account for the finite speed of light and the curvature of spacetime. In all these cases the differential equations were in terms of a function describing the particle's trajectory in terms of space and time coordinates, as influenced by forces or energy transformations."}
+{"text":"However, the equations of quantum mechanics can also be considered \"equations of motion\", since they are differential equations of the wavefunction, which describes how a quantum state behaves analogously using the space and time coordinates of the particles. There are analogs of equations of motion in other areas of physics, for collections of physical phenomena that can be considered waves, fluids, or fields."}
+{"text":"From the instantaneous position , instantaneous meaning at an instant value of time , the instantaneous velocity and acceleration have the general, coordinate-independent definitions;"}
+{"text":"Notice that velocity always points in the direction of motion, in other words for a curved path it is the tangent vector. Loosely speaking, first order derivatives are related to tangents of curves. Still for curved paths, the acceleration is directed towards the center of curvature of the path. Again, loosely speaking, second order derivatives are related to curvature."}
+{"text":"The rotational analogues are the \"angular vector\" (angle the particle rotates about some axis) , angular velocity , and angular acceleration :"}
+{"text":"where is a unit vector in the direction of the axis of rotation, and is the angle the object turns through about the axis."}
+{"text":"The following relation holds for a point-like particle, orbiting about some axis with angular velocity :"}
+{"text":"where is the position vector of the particle (radial from the rotation axis) and the tangential velocity of the particle. For a rotating continuum rigid body, these relations hold for each point in the rigid body."}
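A minimal check of this relation, v = omega x r, with plain tuples (the numerical values are illustrative):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Angular velocity of 3 rad/s about the z-axis; point at radius 2 m on the x-axis
omega = (0.0, 0.0, 3.0)
r = (2.0, 0.0, 0.0)
v = cross(omega, r)  # tangential velocity, magnitude omega*r = 6 m/s
print(v)  # (0.0, 6.0, 0.0)
```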
+{"text":"The differential equation of motion for a particle of constant or uniform acceleration in a straight line is simple: the acceleration is constant, so the second derivative of the position of the object is constant. The results of this case are summarized below."}
+{"text":"Constant translational acceleration in a straight line."}
+{"text":"These equations apply to a particle moving linearly in three dimensions, in a straight line with constant acceleration. Since the position, velocity, and acceleration are collinear (parallel, and lying on the same line), only the magnitudes of these vectors are necessary, and because the motion is along a straight line, the problem effectively reduces from three dimensions to one."}
+{"text":"Equations [1] and [2] are from integrating the definitions of velocity and acceleration, subject to the initial conditions and ;"}
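The same integrations can be sketched directly: v = u + a*t and s = u*t + a*t**2/2, with the time-free relation v**2 = u**2 + 2*a*s as a consistency check (numbers are illustrative):

```python
def suvat(u, a, t):
    """Constant-acceleration kinematics: returns (final velocity, displacement)
    from v = u + a*t and s = u*t + a*t**2/2."""
    v = u + a * t
    s = u * t + 0.5 * a * t * t
    return v, s

v, s = suvat(u=2.0, a=9.81, t=3.0)
# Cross-check the time-free relation v**2 = u**2 + 2*a*s
print(v, s, abs(v ** 2 - (2.0 ** 2 + 2 * 9.81 * s)) < 1e-9)
```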
+{"text":"which breaks into the radial acceleration , centripetal acceleration , Coriolis acceleration , and angular acceleration ."}
+{"text":"Special cases of motion described by these equations are summarized qualitatively in the table below. Two have already been discussed above, in the cases that either the radial components or the angular components are zero, and the non-zero component of motion describes uniform acceleration."}
+{"text":"In 3D space, the equations in spherical coordinates with corresponding unit vectors , and , the position, velocity, and acceleration generalize respectively to"}
+{"text":"In the case of a constant this reduces to the planar equations above."}
+{"text":"The first general equation of motion developed was Newton's second law of motion. In its most general form it states the rate of change of momentum of an object equals the force acting on it,"}
+{"text":"The force in the equation is \"not\" the force the object exerts. Replacing momentum by mass times velocity, the law is also written more famously as"}
+{"text":"since is a constant in Newtonian mechanics."}
+{"text":"Newton's second law applies to point-like particles, and to all points in a rigid body. It also applies to each point in a mass continuum, like deformable solids or fluids, but the motion of the system must be accounted for; see material derivative. In the case that the mass is not constant, it is not sufficient to use the product rule for the time derivative on the mass and velocity, and Newton's second law requires some modification consistent with conservation of momentum; see variable-mass system."}
+{"text":"It may be simple to write down the equations of motion in vector form using Newton's laws of motion, but the components may vary in complicated ways with spatial coordinates and time, and solving them is not easy. Often there is an excess of variables to solve for the problem completely, so Newton's laws are not always the most efficient way to determine the motion of a system. In simple cases of rectangular geometry, Newton's laws work fine in Cartesian coordinates, but in other coordinate systems can become dramatically complex."}
+{"text":"The momentum form is preferable since this is readily generalized to more complex systems, such as special and general relativity (see four-momentum). It can also be used with momentum conservation. However, Newton's laws are not more fundamental than momentum conservation, because Newton's laws are merely consistent with the fact that zero resultant force acting on an object implies constant momentum, while a resultant force implies the momentum is not constant. Momentum conservation is always true for an isolated system not subject to resultant forces."}
+{"text":"For a number of particles (see many body problem), the equation of motion for one particle influenced by other particles is"}
+{"text":"where is the momentum of particle , is the force on particle by particle , and is the resultant external force due to any agent not part of the system. Particle does not exert a force on itself."}
+{"text":"Euler's laws of motion are similar to Newton's laws, but they are applied specifically to the motion of rigid bodies. The Newton\u2013Euler equations combine the forces and torques acting on a rigid body into a single equation."}
+{"text":"Newton's second law for rotation takes a similar form to the translational case,"}
+{"text":"by equating the torque acting on the body to the rate of change of its angular momentum . Analogous to mass times acceleration, the moment of inertia tensor depends on the distribution of mass about the axis of rotation, and the angular acceleration is the rate of change of angular velocity,"}
+{"text":"Again, these equations apply to point-like particles, or to each point of a rigid body."}
+{"text":"Likewise, for a number of particles, the equation of motion for one particle is"}
+{"text":"where is the angular momentum of particle , the torque on particle by particle , and is the resultant external torque (due to any agent not part of the system). Particle does not exert a torque on itself."}
+{"text":"Some examples of Newton's law include describing the motion of a simple pendulum,"}
+{"text":"and a damped, sinusoidally driven harmonic oscillator,"}
+{"text":"For describing the motion of masses due to gravity, Newton's law of gravity can be combined with Newton's second law. For two examples, a ball of mass thrown in the air, in air currents (such as wind) described by a vector field of resistive forces ,"}
+{"text":"where is the gravitational constant, the mass of the Earth, and is the acceleration of the projectile due to the air currents at position and time ."}
+{"text":"The classical -body problem for particles each interacting with each other due to gravity is a set of nonlinear coupled second order ODEs,"}
+{"text":"where labels the quantities (mass, position, etc.) associated with each particle."}
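The right-hand side of these coupled ODEs, the acceleration of each particle due to all the others, can be sketched as a naive O(N^2) pairwise sum (values are illustrative):

```python
def nbody_accelerations(masses, positions, G=6.674e-11):
    """Acceleration of each particle under pairwise Newtonian gravity:
    a_i = sum over j != i of G * m_j * (r_j - r_i) / |r_j - r_i|**3."""
    n = len(masses)
    accs = []
    for i in range(n):
        ax = ay = az = 0.0
        xi, yi, zi = positions[i]
        for j in range(n):
            if i == j:
                continue  # a particle exerts no force on itself
            dx = positions[j][0] - xi
            dy = positions[j][1] - yi
            dz = positions[j][2] - zi
            d3 = (dx * dx + dy * dy + dz * dz) ** 1.5
            ax += G * masses[j] * dx / d3
            ay += G * masses[j] * dy / d3
            az += G * masses[j] * dz / d3
        accs.append((ax, ay, az))
    return accs

# Two unit masses 1 m apart: each is pulled toward the other with magnitude G
print(nbody_accelerations([1.0, 1.0], [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]))
```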
+{"text":"Using all three coordinates of 3D space is unnecessary if there are constraints on the system. If the system has degrees of freedom, then one can use a set of generalized coordinates , to define the configuration of the system. They can be in the form of arc lengths or angles. They are a considerable simplification to describe motion, since they take advantage of the intrinsic constraints that limit the system's motion, and the number of coordinates is reduced to a minimum. The time derivatives of the generalized coordinates are the \"generalized velocities\""}
+{"text":"where the \"Lagrangian\" is a function of the configuration and its time rate of change (and possibly time )"}
+{"text":"Setting up the Lagrangian of the system, then substituting into the equations and evaluating the partial derivatives and simplifying, a set of coupled second order ODEs in the coordinates are obtained."}
+{"text":"is a function of the configuration and conjugate \"\"generalized\" momenta\""}
+{"text":"in which is a shorthand notation for a vector of partial derivatives with respect to the indicated variables (see for example matrix calculus for this denominator notation), and possibly time ,"}
+{"text":"Setting up the Hamiltonian of the system, then substituting into the equations and evaluating the partial derivatives and simplifying, a set of coupled first order ODEs in the coordinates and momenta are obtained."}
+{"text":"is \"Hamilton's principal function\", also called the \"classical action\", a functional of . In this case, the momenta are given by"}
+{"text":"Although the equation has a simple general form, for a given Hamiltonian it is actually a single first order \"non-linear\" PDE, in variables. The action allows identification of conserved quantities for mechanical systems, even when the mechanical problem itself cannot be solved fully, because any differentiable symmetry of the action of a physical system has a corresponding conservation law, a theorem due to Emmy Noether."}
+{"text":"All classical equations of motion can be derived from the variational principle known as Hamilton's principle of least action"}
+{"text":"stating the path the system takes through the configuration space is the one with the least action ."}
+{"text":"In electrodynamics, the force on a charged particle of charge is the Lorentz force:"}
+{"text":"Combining with Newton's second law gives a second-order differential equation of motion in terms of the position of the particle:"}
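A minimal time-stepping sketch of this equation of motion (a crude explicit Euler integrator, chosen only for brevity; the field values and charge-to-mass ratio are illustrative):

```python
def lorentz_step(q, m, v, E, B, dt):
    """One explicit Euler step of m*dv/dt = q*(E + v x B)."""
    vx, vy, vz = v
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    # components of v x B
    cx = vy * Bz - vz * By
    cy = vz * Bx - vx * Bz
    cz = vx * By - vy * Bx
    f = q / m
    return (vx + f * (Ex + cx) * dt,
            vy + f * (Ey + cy) * dt,
            vz + f * (Ez + cz) * dt)

# Uniform B along z, no E: the velocity direction rotates (cyclotron motion)
# while the speed stays nearly constant (Euler drifts slightly).
v = (1.0, 0.0, 0.0)
for _ in range(1000):
    v = lorentz_step(q=1.0, m=1.0, v=v, E=(0.0, 0.0, 0.0), B=(0.0, 0.0, 1.0), dt=1e-3)
speed = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
print(speed)  # close to 1.0
```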
+{"text":"The same equation can be obtained using the Lagrangian (and applying Lagrange's equations above) for a charged particle of mass and charge :"}
+{"text":"where and are the electromagnetic scalar and vector potential fields. The Lagrangian indicates an additional detail: the canonical momentum in Lagrangian mechanics is given by:"}
+{"text":"instead of just , implying the motion of a charged particle is fundamentally determined by the mass and charge of the particle. The Lagrangian expression was first used to derive the force equation."}
+{"text":"Alternatively the Hamiltonian (and substituting into the equations):"}
+{"text":"The above equations are valid in flat spacetime. In curved spacetime, things become mathematically more complicated since there is no straight line; this is generalized and replaced by a \"geodesic\" of the curved spacetime (the shortest length of curve between two points). For curved manifolds with a metric tensor , the metric provides the notion of arc length (see line element for details). The differential arc length is given by:"}
+{"text":"and the geodesic equation is a second-order differential equation in the coordinates. The general solution is a family of geodesics:"}
+{"text":"where is a Christoffel symbol of the second kind, which contains the metric (with respect to the coordinate system)."}
+{"text":"Given the mass-energy distribution provided by the stress\u2013energy tensor , the Einstein field equations are a set of non-linear second-order partial differential equations in the metric, and imply the curvature of spacetime is equivalent to a gravitational field (see equivalence principle). Mass falling in curved spacetime is equivalent to a mass falling in a gravitational field - because gravity is a fictitious force. The \"relative acceleration\" of one geodesic to another in curved spacetime is given by the \"geodesic deviation equation\":"}
+{"text":"where is the separation vector between two geodesics, (\"not\" just ) is the covariant derivative, and is the Riemann curvature tensor, containing the Christoffel symbols. In other words, the geodesic deviation equation is the equation of motion for masses in curved spacetime, analogous to the Lorentz force equation for charges in an electromagnetic field."}
+{"text":"For flat spacetime, the metric is a constant tensor so the Christoffel symbols vanish, and the geodesic equation has the solutions of straight lines. This is also the limiting case when masses move according to Newton's law of gravity."}
+{"text":"In general relativity, rotational motion is described by the relativistic angular momentum tensor, including the spin tensor, which enter the equations of motion under covariant derivatives with respect to proper time. The Mathisson\u2013Papapetrou\u2013Dixon equations describe the motion of spinning objects moving in a gravitational field."}
+{"text":"Unlike the equations of motion for describing particle mechanics, which are systems of coupled ordinary differential equations, the analogous equations governing the dynamics of waves and fields are always partial differential equations, since the waves or fields are functions of space and time. For a particular solution, boundary conditions along with initial conditions need to be specified."}
+{"text":"Sometimes in the following contexts, the wave or field equations are also called \"equations of motion\"."}
+{"text":"Equations that describe the spatial dependence and time evolution of fields are called \"field equations\". These include"}
+{"text":"This terminology is not universal: for example although the Navier\u2013Stokes equations govern the velocity field of a fluid, they are not usually called \"field equations\", since in this context they represent the momentum of the fluid and are called the \"momentum equations\" instead."}
+{"text":"Equations of wave motion are called \"wave equations\". The solutions to a wave equation give the time-evolution and spatial dependence of the amplitude. Boundary conditions determine if the solutions describe traveling waves or standing waves."}
+{"text":"From the classical equations of motion and field equations, mechanical, gravitational wave, and electromagnetic wave equations can be derived. The general linear wave equation in 3D is:"}
+{"text":"where is any mechanical or electromagnetic field amplitude, say:"}
+{"text":"and is the phase velocity. Nonlinear equations model the dependence of phase velocity on amplitude, replacing by . There are other linear and nonlinear wave equations for very specific applications, see for example the Korteweg\u2013de Vries equation."}
+{"text":"In quantum theory, the wave and field concepts both appear."}
+{"text":"In quantum mechanics, in which particles also have wave-like properties according to wave\u2013particle duality, the analogue of the classical equations of motion (Newton's law, Euler\u2013Lagrange equation, Hamilton\u2013Jacobi equation, etc.) is the Schr\u00f6dinger equation in its most general form:"}
+{"text":"where is the wavefunction of the system, is the quantum Hamiltonian operator, rather than a function as in classical mechanics, and is the Planck constant divided by 2\u03c0. Setting up the Hamiltonian and inserting it into the equation results in a wave equation; the solution is the wavefunction as a function of space and time. The Schr\u00f6dinger equation itself reduces to the Hamilton\u2013Jacobi equation when one considers the correspondence principle, in the limit that becomes zero."}
+{"text":"Throughout all aspects of quantum theory, relativistic or non-relativistic, there are various formulations alternative to the Schr\u00f6dinger equation that govern the time evolution and behavior of a quantum system, for instance:"}
+{"text":"In physics, a partition function describes the statistical properties of a system in thermodynamic equilibrium. Partition functions are functions of the thermodynamic state variables, such as the temperature and volume. Most of the aggregate thermodynamic variables of the system, such as the total energy, free energy, entropy, and pressure, can be expressed in terms of the partition function or its derivatives. The partition function is dimensionless; it is a pure number."}
+{"text":"Each partition function is constructed to represent a particular statistical ensemble (which, in turn, corresponds to a particular free energy). The most common statistical ensembles have named partition functions. The canonical partition function applies to a canonical ensemble, in which the system is allowed to exchange heat with the environment at fixed temperature, volume, and number of particles. The grand canonical partition function applies to a grand canonical ensemble, in which the system can exchange both heat and particles with the environment, at fixed temperature, volume, and chemical potential. Other types of partition functions can be defined for different circumstances; see partition function (mathematics) for generalizations. The partition function has many physical meanings, as discussed in Meaning and significance."}
+{"text":"Initially, let us assume that a thermodynamically large system is in thermal contact with the environment, with a temperature \"T\", and both the volume of the system and the number of constituent particles are fixed. A collection of this kind of system comprises an ensemble called a canonical ensemble. The appropriate mathematical expression for the canonical partition function depends on the degrees of freedom of the system, whether the context is classical mechanics or quantum mechanics, and whether the spectrum of states is discrete or continuous."}
+{"text":"For a canonical ensemble that is classical and discrete, the canonical partition function is defined as"}
+{"text":"The exponential factor formula_7 is otherwise known as the Boltzmann factor."}
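As an illustrative sketch (not part of the original text), the discrete canonical partition function Z = \u03a3 exp(\u2212E\u209b/(k\u0299T)) and the resulting Boltzmann probabilities can be computed directly for a hypothetical three-level system:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def partition_function(energies, T):
    """Canonical partition function Z = sum over states of exp(-E_s / (k_B T))."""
    return sum(math.exp(-E / (K_B * T)) for E in energies)

def boltzmann_probabilities(energies, T):
    """Probability of each microstate, p_s = exp(-E_s / (k_B T)) / Z."""
    Z = partition_function(energies, T)
    return [math.exp(-E / (K_B * T)) / Z for E in energies]

# Hypothetical three-level system with level spacing k_B * 300 K
energies = [0.0, 1.0 * K_B * 300, 2.0 * K_B * 300]
probs = boltzmann_probabilities(energies, 300.0)
assert abs(sum(probs) - 1.0) < 1e-12  # Z normalizes the probabilities
```

Lower-energy states receive larger Boltzmann factors, so `probs` is a decreasing list here.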
+{"text":"In classical mechanics, the position and momentum variables of a particle can vary continuously, so the set of microstates is actually uncountable. In \"classical\" statistical mechanics, it is rather inaccurate to express the partition function as a sum of discrete terms. In this case we must describe the partition function using an integral rather than a sum. For a canonical ensemble that is classical and continuous, the canonical partition function is defined as"}
+{"text":"To make it into a dimensionless quantity, we must divide it by \"h\", which is some quantity with units of action (usually taken to be Planck's constant)."}
+{"text":"For a gas of formula_15 identical classical particles in three dimensions, the partition function is"}
+{"text":"The reason for the factorial factor \"N\"! is discussed below. The extra constant factor in the denominator was introduced because, unlike the discrete form, the continuous form shown above is not dimensionless. As stated in the previous section, to make it into a dimensionless quantity, we must divide it by \"h\"3\"N\" (where \"h\" is usually taken to be Planck's constant)."}
+{"text":"For a canonical ensemble that is quantum mechanical and discrete, the canonical partition function is defined as the trace of the Boltzmann factor:"}
+{"text":"The dimension of formula_32 is the number of energy eigenstates of the system."}
+{"text":"For a canonical ensemble that is quantum mechanical and continuous, the canonical partition function is defined as"}
+{"text":"In systems with multiple quantum states \"s\" sharing the same energy \"Es\", it is said that the energy levels of the system are degenerate. In the case of degenerate energy levels, we can write the partition function in terms of the contribution from energy levels (indexed by \"j\") as follows:"}
+{"text":"where \"gj\" is the degeneracy factor, or number of quantum states \"s\" that have the same energy level defined by \"Ej\" = \"Es\"."}
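A small numeric check (a sketch added here, with an invented spectrum) that summing the Boltzmann factor over individual states equals summing g\u2c7c exp(\u2212\u03b2E\u2c7c) over energy levels:

```python
import math
from collections import Counter

BETA = 1.0  # 1/(k_B T), in units where k_B T = 1 (illustrative choice)

# Hypothetical microstate energies: the level E = 1 is doubly degenerate
state_energies = [0.0, 1.0, 1.0, 2.0]

# Sum over individual microstates s
Z_states = sum(math.exp(-BETA * E) for E in state_energies)

# Equivalent sum over levels j with degeneracy factors g_j
degeneracy = Counter(state_energies)  # {0.0: 1, 1.0: 2, 2.0: 1}
Z_levels = sum(g * math.exp(-BETA * E) for E, g in degeneracy.items())

assert abs(Z_states - Z_levels) < 1e-12
```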
+{"text":"The above treatment applies to \"quantum\" statistical mechanics, where a physical system inside a finite-sized box will typically have a discrete set of energy eigenstates, which we can use as the states \"s\" above. In quantum mechanics, the partition function can be more formally written as a trace over the state space (which is independent of the choice of basis):"}
+{"text":"where \"\u0124\" is the quantum Hamiltonian operator. The exponential of an operator can be defined using the exponential power series."}
+{"text":"The classical form of \"Z\" is recovered when the trace is expressed in terms of coherent states"}
+{"text":"and when quantum-mechanical uncertainties in the position and momentum of a particle"}
+{"text":"are regarded as negligible. Formally, using bra\u2013ket notation, one inserts under the trace for each degree of freedom the identity:"}
+{"text":"where \"x\", \"p\" is a normalised Gaussian wavepacket centered at position \"x\" and momentum \"p\"."}
+{"text":"A coherent state is an approximate eigenstate of both operators formula_44 and formula_45, hence also of the Hamiltonian \"\u0124\", with errors of the size of the uncertainties. If \u0394\"x\" and \u0394\"p\" can be regarded as zero, the action of \"\u0124\" reduces to multiplication by the classical Hamiltonian, and \"Z\" reduces to the classical configuration integral."}
+{"text":"For simplicity, we will use the discrete form of the partition function in this section. Our results will apply equally well to the continuous form."}
+{"text":"Consider a system \"S\" embedded into a heat bath \"B\". Let the total energy of both systems be \"E\". Let \"pi\" denote the probability that the system \"S\" is in a particular microstate, \"i\", with energy \"Ei\". According to the fundamental postulate of statistical mechanics (which states that all attainable microstates of a system are equally probable), the probability \"pi\" will be proportional to the number of microstates of the total closed system (\"S\", \"B\") in which \"S\" is in microstate \"i\" with energy \"Ei\". Equivalently, \"pi\" will be proportional to the number of microstates of the heat bath \"B\" with energy \"E\" \u2212 \"Ei\":"}
+{"text":"Assuming that the heat bath's internal energy is much larger than the energy of \"S\" (\"E\" \u226b \"Ei\"), we can Taylor-expand formula_47 to first order in \"Ei\" and use the thermodynamic relation formula_48, where formula_49 and formula_50 are the entropy and temperature of the bath respectively:"}
+{"text":"Since the total probability to find the system in \"some\" microstate (the sum of all \"pi\") must be equal to\u00a01, we know that the constant of proportionality must be the normalization constant, and so, we can define the partition function to be this constant:"}
+{"text":"In order to demonstrate the usefulness of the partition function, let us calculate the thermodynamic value of the total energy. This is simply the expected value, or ensemble average for the energy, which is the sum of the microstate energies weighted by their probabilities:"}
+{"text":"Incidentally, one should note that if the microstate energies depend on a parameter \u03bb in the manner"}
+{"text":"then the expected value of \"A\" is"}
+{"text":"This provides us with a method for calculating the expected values of many microscopic quantities. We add the quantity artificially to the microstate energies (or, in the language of quantum mechanics, to the Hamiltonian), calculate the new partition function and expected value, and then set \"\u03bb\" to zero in the final expression. This is analogous to the source field method used in the path integral formulation of quantum field theory."}
+{"text":"In this section, we will state the relationships between the partition function and the various thermodynamic parameters of the system. These results can be derived using the method of the previous section and the various thermodynamic relations."}
+{"text":"As we have already seen, the thermodynamic energy is"}
+{"text":"The variance in the energy (or \"energy fluctuation\") is"}
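To illustrate these relations (a sketch with a hypothetical spectrum, not from the original text): the thermodynamic energy is \u27e8E\u27e9 = \u2212\u2202 ln Z/\u2202\u03b2 and the energy fluctuation is \u2202\u00b2 ln Z/\u2202\u03b2\u00b2, which can be checked numerically against the direct ensemble averages:

```python
import math

def ln_Z(beta, energies):
    return math.log(sum(math.exp(-beta * E) for E in energies))

def mean_energy(beta, energies, h=1e-6):
    # <E> = -d(ln Z)/d(beta), via a central finite difference
    return -(ln_Z(beta + h, energies) - ln_Z(beta - h, energies)) / (2 * h)

def energy_variance(beta, energies, h=1e-4):
    # Var(E) = d^2(ln Z)/d(beta)^2, via a second-order finite difference
    return (ln_Z(beta + h, energies) - 2 * ln_Z(beta, energies)
            + ln_Z(beta - h, energies)) / h**2

# Hypothetical spectrum and inverse temperature
energies = [0.0, 1.0, 3.0]
beta = 0.7
Z = sum(math.exp(-beta * E) for E in energies)
probs = [math.exp(-beta * E) / Z for E in energies]
E_avg = sum(p * E for p, E in zip(probs, energies))
E2_avg = sum(p * E**2 for p, E in zip(probs, energies))

assert abs(mean_energy(beta, energies) - E_avg) < 1e-6
assert abs(energy_variance(beta, energies) - (E2_avg - E_avg**2)) < 1e-4
```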
+{"text":"In general, consider the extensive variable X and intensive variable Y where X and Y form a pair of conjugate variables. In ensembles where Y is fixed (and X is allowed to fluctuate), then the average value of X will be:"}
+{"text":"The sign will depend on the specific definitions of the variables X and Y. An example would be X = volume and Y = pressure. Additionally, the variance in X will be"}
+{"text":"In the special case of entropy, it is given by"}
+{"text":"where \"A\" is the Helmholtz free energy defined as \"A\" = \"U\" \u2212 \"TS\", where \"U\" = \"E\" is the total energy and \"S\" is the entropy, so that"}
+{"text":"Furthermore, the heat capacity can be expressed as"}
+{"text":"Suppose a system is subdivided into \"N\" sub-systems with negligible interaction energy, that is, we can assume the particles are essentially non-interacting. If the partition functions of the sub-systems are \"\u03b6\"1, \"\u03b6\"2, ..., \"\u03b6\"N, then the partition function of the entire system is the \"product\" of the individual partition functions:"}
+{"text":"If the sub-systems have the same physical properties, then their partition functions are equal, \u03b61 = \u03b62 = ... = \u03b6, in which case"}
+{"text":"However, there is a well-known exception to this rule. If the sub-systems are actually identical particles, in the quantum mechanical sense that they are impossible to distinguish even in principle, the total partition function must be divided by \"N\"! (\"N\" factorial):"}
+{"text":"This is to ensure that we do not \"over-count\" the number of microstates. While this may seem like a strange requirement, it is actually necessary to preserve the existence of a thermodynamic limit for such systems. This is known as the Gibbs paradox."}
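The product rule and the Gibbs correction above can be sketched as follows (an added illustration with a hypothetical single-particle partition function):

```python
import math

def total_partition_function(zeta, N, identical=True):
    """Partition function of N non-interacting sub-systems with equal zeta.

    For indistinguishable (identical) particles, the product zeta**N is
    divided by N! to avoid over-counting microstates (the Gibbs correction).
    """
    Z = zeta ** N
    return Z / math.factorial(N) if identical else Z

zeta = 2.5  # hypothetical single-particle partition function
N = 4
assert total_partition_function(zeta, N, identical=False) == zeta ** 4
assert math.isclose(total_partition_function(zeta, N), zeta ** 4 / 24)
```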
+{"text":"It may not be obvious why the partition function, as we have defined it above, is an important quantity. First, consider what goes into it. The partition function is a function of the temperature \"T\" and the microstate energies \"E\"1, \"E\"2, \"E\"3, etc. The microstate energies are determined by other thermodynamic variables, such as the number of particles and the volume, as well as microscopic quantities like the mass of the constituent particles. This dependence on microscopic variables is the central point of statistical mechanics. With a model of the microscopic constituents of a system, one can calculate the microstate energies, and thus the partition function, which will then allow us to calculate all the other thermodynamic properties of the system."}
+{"text":"The partition function can be related to thermodynamic properties because it has a very important statistical meaning. The probability \"Ps\" that the system occupies microstate \"s\" is"}
+{"text":"Thus, as shown above, the partition function plays the role of a normalizing constant (note that it does \"not\" depend on \"s\"), ensuring that the probabilities sum up to one:"}
+{"text":"This is the reason for calling \"Z\" the \"partition function\": it encodes how the probabilities are partitioned among the different microstates, based on their individual energies. The letter \"Z\" stands for the German word \"Zustandssumme\", \"sum over states\". The usefulness of the partition function stems from the fact that it can be used to relate macroscopic thermodynamic quantities to the microscopic details of a system through the derivatives of its partition function. Finding the partition function is also equivalent to performing a Laplace transform of the density of states function from the energy domain to the \u03b2 domain, and the inverse Laplace transform of the partition function reclaims the state density function of energies."}
+{"text":"We can define a grand canonical partition function for a grand canonical ensemble, which describes the statistics of a constant-volume system that can exchange both heat and particles with a reservoir. The reservoir has a constant temperature \"T\", and a chemical potential \"\u03bc\"."}
+{"text":"The grand canonical partition function, denoted by formula_71, is the following sum over microstates"}
+{"text":"Here, each microstate is labelled by formula_73, and has total particle number formula_74 and total energy formula_75. This partition function is closely related to the grand potential, formula_76, by the relation"}
+{"text":"This can be contrasted to the canonical partition function above, which is related instead to the Helmholtz free energy."}
+{"text":"It is important to note that the number of microstates in the grand canonical ensemble may be much larger than in the canonical ensemble, since here we consider not only variations in energy but also in particle number. Again, the utility of the grand canonical partition function is that it is related to the probability that the system is in state formula_73:"}
+{"text":"An important application of the grand canonical ensemble is in deriving exactly the statistics of a non-interacting many-body quantum gas (Fermi\u2013Dirac statistics for fermions, Bose\u2013Einstein statistics for bosons); however, it is much more generally applicable than that. The grand canonical ensemble may also be used to describe classical systems, or even interacting quantum gases."}
+{"text":"The grand partition function is sometimes written (equivalently) in terms of alternate variables as"}
+{"text":"where formula_81 is known as the absolute activity (or fugacity) and formula_82 is the canonical partition function."}
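As a minimal worked example (added here, not from the original text): for a single fermionic energy level \u03b5, the occupation number is 0 or 1, so the grand partition function is \u039e = 1 + exp(\u2212\u03b2(\u03b5 \u2212 \u03bc)), and the mean occupancy it implies is exactly the Fermi\u2013Dirac distribution:

```python
import math

def grand_partition_single_fermion_level(beta, eps, mu):
    # Xi = sum over n in {0, 1} of exp(-beta * n * (eps - mu))
    return 1.0 + math.exp(-beta * (eps - mu))

def mean_occupancy(beta, eps, mu):
    # <n> = probability of occupation = exp(-beta*(eps - mu)) / Xi,
    # which reduces algebraically to the Fermi-Dirac distribution
    Xi = grand_partition_single_fermion_level(beta, eps, mu)
    return math.exp(-beta * (eps - mu)) / Xi

beta, eps, mu = 2.0, 1.0, 0.5  # hypothetical values
fermi_dirac = 1.0 / (math.exp(beta * (eps - mu)) + 1.0)
assert abs(mean_occupancy(beta, eps, mu) - fermi_dirac) < 1e-12
```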
+{"text":"In combustion, the Williams spray equation, also known as the Williams\u2013Boltzmann equation, describes the statistical evolution of sprays contained in another fluid, analogous to the Boltzmann equation for molecules. It is named after Forman A. Williams, who derived the equation in 1958."}
+{"text":"The sprays are assumed to be spherical with radius formula_1, even though the assumption is valid for solid particles (liquid droplets) when their shape has no consequence on the combustion. For liquid droplets to be nearly spherical, the spray has to be dilute (the total volume occupied by the sprays is much less than the volume of the gas) and the Weber number formula_2, where formula_3 is the gas density, formula_4 is the spray droplet velocity, formula_5 is the gas velocity and formula_6 is the surface tension of the liquid spray, should be formula_7."}
+{"text":"The equation is described by a number density function formula_8, which represents the probable number of spray particles (droplets) of chemical species formula_9 (of formula_10 total species), that one can find with radii between formula_1 and formula_12, located in the spatial range between formula_13 and formula_14, traveling with a velocity in between formula_4 and formula_16, having the temperature in between formula_17 and formula_18 at time formula_19. Then the spray equation for the evolution of this density function is given by"}
+{"text":"A simplified model for a liquid-propellant rocket."}
+{"text":"This model for the rocket motor was developed by Probert, Williams and Tanasawa. It is reasonable to neglect formula_31 for distances not very close to the spray atomizer, where the major portion of combustion occurs. Consider a one-dimensional liquid-propellant rocket motor situated at formula_32, where fuel is sprayed. Neglecting formula_33 (the density function is then defined without the temperature, so the dimensions of formula_34 change accordingly) and using the fact that the mean flow is parallel to the formula_35 axis, the steady spray equation reduces to"}
+{"text":"where formula_37 is the velocity in the formula_35 direction. Integrating with respect to the velocity results in"}
+{"text":"The contribution from the last term (spray acceleration term) becomes zero (using Divergence theorem) since formula_40 when formula_41 is very large, which is typically the case in rocket motors. The drop size rate formula_42 is well modeled using vaporization mechanisms as"}
+{"text":"where formula_44 is independent of formula_1, but can depend on the surrounding gas. Defining the number of droplets per unit volume per unit radius and average quantities averaged over velocities,"}
+{"text":"If further assumed that formula_48 is independent of formula_1, and with a transformed coordinate"}
+{"text":"If the combustion chamber has varying cross-section area formula_51, a known function for formula_52 and with area formula_53 at the spraying location, then the solution is given by"}
+{"text":"where formula_55 are the number distribution and mean velocity at formula_32 respectively."}
+{"text":"The ideal gas law, also called the general gas equation, is the equation of state of a hypothetical ideal gas. It is a good approximation of the behavior of many gases under many conditions, although it has several limitations. It was first stated by Beno\u00eet Paul \u00c9mile Clapeyron in 1834 as a combination of the empirical Boyle's law, Charles's law, Avogadro's law, and Gay-Lussac's law. The ideal gas law is often written in an empirical form:"}
+{"text":"where formula_2, formula_3 and formula_4 are the pressure, volume and temperature; formula_5 is the amount of substance; and formula_6 is the ideal gas constant. It is the same for all gases."}
+{"text":"It can also be derived from the microscopic kinetic theory, as was achieved (apparently independently) by August Kr\u00f6nig in 1856 and Rudolf Clausius in 1857."}
+{"text":"Note that this law makes no comment as to whether a gas heats or cools during compression or expansion. An ideal gas does not change temperature during a free expansion, but most gases like air are not ideal and exhibit the Joule\u2013Thomson effect."}
+{"text":"The state of an amount of gas is determined by its pressure, volume, and temperature. The modern form of the equation relates these simply in two main forms. The temperature used in the equation of state is an absolute temperature: the appropriate SI unit is the kelvin."}
+{"text":"In SI units, \"p\" is measured in pascals, \"V\" is measured in cubic metres, \"n\" is measured in moles, and \"T\" in kelvins (the Kelvin scale is a shifted Celsius scale, where 0.00 K = \u2212273.15\u00a0\u00b0C, the lowest possible temperature). \"R\" has the value 8.314 J\/(K\u00b7mol) \u2248 2 cal\/(K\u00b7mol), or 0.0821 l\u00b7atm\/(mol\u00b7K)."}
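A quick numeric illustration of the law in SI units (added here as a sketch): solving pV = nRT for the volume of one mole at 273.15 K and 101,325 Pa recovers the familiar molar volume of about 22.4 litres.

```python
R = 8.314  # ideal gas constant, J/(K*mol)

def ideal_gas_volume(p, n, T):
    """Volume in cubic metres from pV = nRT (all quantities in SI units)."""
    return n * R * T / p

# One mole at 273.15 K and one standard atmosphere
V = ideal_gas_volume(p=101325.0, n=1.0, T=273.15)
assert abs(V - 0.0224) < 0.0002  # about 22.4 litres
```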
+{"text":"How much gas is present could be specified by giving the mass instead of the chemical amount of gas. Therefore, an alternative form of the ideal gas law may be useful. The chemical amount (\"n\") (in moles) is equal to total mass of the gas (\"m\") (in kilograms) divided by the molar mass (\"M\") (in kilograms per mole):"}
+{"text":"By replacing \"n\" with \"m\"\/\"M\" and subsequently introducing density \"\u03c1\" = \"m\"\/\"V\", we get:"}
+{"text":"Defining the specific gas constant \"R\"specific as the ratio \"R\"\/\"M\","}
+{"text":"This form of the ideal gas law is very useful because it links pressure, density, and temperature in a unique formula independent of the quantity of the considered gas. Alternatively, the law may be written in terms of the specific volume \"v\", the reciprocal of density, as"}
+{"text":"It is common, especially in engineering and meteorological applications, to represent the specific gas constant by the symbol \"R\". In such cases, the universal gas constant is usually given a different symbol such as formula_21 or formula_22 to distinguish it. In any case, the context and\/or units of the gas constant should make it clear as to whether the universal or specific gas constant is being referred to."}
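As a sketch of the density form p = \u03c1R_specific T (the molar mass of dry air used below is an assumed approximate value, not from the text):

```python
R_UNIVERSAL = 8.314   # universal gas constant, J/(K*mol)
M_AIR = 0.028964      # molar mass of dry air, kg/mol (approximate, assumed)

R_SPECIFIC_AIR = R_UNIVERSAL / M_AIR  # specific gas constant, about 287 J/(kg*K)

def gas_density(p, T, R_specific):
    """Density from p = rho * R_specific * T."""
    return p / (R_specific * T)

# Sea-level standard conditions: 101,325 Pa and 15 degrees C (288.15 K)
rho = gas_density(p=101325.0, T=288.15, R_specific=R_SPECIFIC_AIR)
assert abs(R_SPECIFIC_AIR - 287.0) < 1.0
assert abs(rho - 1.225) < 0.01  # standard sea-level air density, kg/m^3
```

This is why meteorological formulas can use a single constant per gas instead of carrying the molar mass around.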
+{"text":"In statistical mechanics the following molecular equation is derived from first principles"}
+{"text":"where \"p\" is the absolute pressure of the gas, \"n\" is the number density of the molecules (given by the ratio \"n\" = \"N\"\/\"V\", in contrast to the previous formulation in which \"n\" is the number of moles), \"T\" is the absolute temperature, and \"k\"B is the Boltzmann constant relating temperature and energy, given by:"}
+{"text":"From this we notice that for a gas of mass \"m\", with an average particle mass of \"\u03bc\" times the atomic mass constant, \"m\"u (i.e., the mass is \"\u03bc\" u), the number of molecules will be given by"}
+{"text":"and since \"\u03c1\" = \"m\"\/\"V\", we find that the ideal gas law can be rewritten as"}
+{"text":"In SI units, \"p\" is measured in pascals, \"V\" in cubic metres, \"T\" in kelvins, and"}
+{"text":"Combining the laws of Charles, Boyle and Gay-Lussac gives the combined gas law, which takes the same functional form as the ideal gas law save that the number of moles is unspecified, and the ratio of formula_27 to formula_4 is simply taken as a constant:"}
+{"text":"where formula_2 is the pressure of the gas, formula_3 is the volume of the gas, formula_4 is the absolute temperature of the gas, and formula_33 is a constant. When comparing the same substance under two different sets of conditions, the law can be written as"}
+{"text":"According to assumptions of the kinetic theory of ideal gases, we assume that there are no intermolecular attractions between the molecules of an ideal gas. In other words, its potential energy is zero. Hence, all the energy possessed by the gas is in the kinetic energy of the molecules of the gas."}
+{"text":"This is the kinetic energy of \"n\" moles of a monatomic gas having 3 degrees of freedom; \"x\", \"y\", \"z\"."}
+{"text":"The table below essentially simplifies the ideal gas equation for particular processes, thus making this equation easier to solve using numerical methods."}
+{"text":"A thermodynamic process is defined as a system that moves from state 1 to state 2, where the state number is denoted by subscript. As shown in the first column of the table, basic thermodynamic processes are defined such that one of the gas properties (\"P\", \"V\", \"T\", \"S\", or \"H\") is constant throughout the process."}
+{"text":"For a given thermodynamics process, in order to specify the extent of a particular process, one of the properties ratios (which are listed under the column labeled \"known ratio\") must be specified (either directly or indirectly). Also, the property for which the ratio is known must be distinct from the property held constant in the previous column (otherwise the ratio would be unity, and not enough information would be available to simplify the gas law equation)."}
+{"text":"In the final three columns, the properties (\"p\", \"V\", or \"T\") at state 2 can be calculated from the properties at state 1 using the equations listed."}
+{"text":"a. In an isentropic process, system entropy (\"S\") is constant. Under these conditions, \"p\"1 \"V\"1\"\u03b3\" = \"p\"2 \"V\"2\"\u03b3\", where \"\u03b3\" is defined as the heat capacity ratio, which is constant for a calorifically perfect gas. The value used for \"\u03b3\" is typically 1.4 for diatomic gases like nitrogen (N2) and oxygen (O2) (and air, which is 99% diatomic), while \"\u03b3\" is typically 1.6 for monatomic gases like the noble gases helium (He) and argon (Ar). In internal combustion engines \"\u03b3\" varies between 1.35 and 1.15, depending on the constituent gases and the temperature."}
+{"text":"b. In an isenthalpic process, system enthalpy (\"H\") is constant. In the case of free expansion for an ideal gas, there are no molecular interactions, and the temperature remains constant. For real gases, the molecules do interact via attraction or repulsion depending on temperature and pressure, and heating or cooling does occur. This is known as the Joule\u2013Thomson effect. For reference, the Joule\u2013Thomson coefficient \u03bcJT for air at room temperature and sea level is 0.22\u00a0\u00b0C\/bar."}
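The isentropic relation in note a can be sketched numerically (an added illustration with hypothetical state-1 values):

```python
def isentropic_pressure2(p1, V1, V2, gamma=1.4):
    """Pressure after an isentropic volume change: p1*V1**gamma = p2*V2**gamma."""
    return p1 * (V1 / V2) ** gamma

def isentropic_temperature_ratio(V1, V2, gamma=1.4):
    """T2/T1 for the same process, combining p V**gamma = const with pV = nRT."""
    return (V1 / V2) ** (gamma - 1)

# Hypothetical 10:1 compression of a diatomic gas (gamma = 1.4)
p2 = isentropic_pressure2(p1=100000.0, V1=1.0, V2=0.1)
assert abs(p2 - 100000.0 * 10 ** 1.4) < 1.0      # pressure rises by 10**1.4
assert abs(isentropic_temperature_ratio(1.0, 0.1) - 10 ** 0.4) < 1e-9
```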
+{"text":"Deviations from ideal behavior of real gases."}
+{"text":"A residual property is defined as the difference between a real gas property and an ideal gas property, both considered at the same pressure, temperature, and composition."}
+{"text":"The empirical laws that led to the derivation of the ideal gas law were discovered with experiments that changed only 2 state variables of the gas and kept every other one constant."}
+{"text":"All the possible gas laws that could have been discovered with this kind of setup are:"}
+{"text":"where \"P\" stands for pressure, \"V\" for volume, \"N\" for number of particles in the gas and \"T\" for temperature; where formula_48 are not actual constants but are in this context because of each equation requiring only the parameters explicitly noted in it changing."}
+{"text":"To derive the ideal gas law one does not need to know all 6 formulas: knowing any 3 of them is enough to derive the rest, and knowing 4 is enough to obtain the ideal gas law itself."}
+{"text":"Since each formula only holds when only the state variables involved in that formula change while the others remain constant, we cannot simply use algebra and directly combine them all. For example, Boyle did his experiments while keeping \"N\" and \"T\" constant, and this must be taken into account."}
+{"text":"Keeping this in mind, to carry the derivation on correctly, one must imagine the gas being altered by one process at a time. The derivation using 4 formulas can look like this:"}
+{"text":"at first the gas has parameters formula_49"}
+{"text":"Say, starting to change only pressure and volume, according to Boyle's law, then:"}
+{"text":"Using then Eq. (5) to change the number of particles in the gas and the temperature,"}
+{"text":"Using then Eq. (6) to change the pressure and the number of particles,"}
+{"text":"Using then Charles's law to change the volume and temperature of the gas,"}
+{"text":"Using simple algebra on equations (7), (8), (9) and (10) yields the result:"}
+{"text":"Another equivalent result, using the fact that formula_61, where \"n\" is the number of moles in the gas and \"R\" is the universal gas constant, is:"}
+{"text":"where the numbers represent the gas laws numbered above."}
+{"text":"If you were to use the same method used above on 2 of the 3 laws on the vertices of one triangle that has an \"O\" inside it, you would get the third."}
+{"text":"Change only pressure and volume first: formula_37 (1\u00b4)"}
+{"text":"then only volume and temperature: formula_64 (2\u00b4)"}
+{"text":"then as we can choose any value for formula_65, if we set formula_66, Eq. (2\u00b4) becomes: formula_67(3\u00b4)"}
+{"text":"combining equations (1\u00b4) and (3\u00b4) yields formula_43, which is Eq. (4), of which we had no prior knowledge until this derivation."}
+{"text":"The ideal gas law can also be derived from first principles using the kinetic theory of gases, in which several simplifying assumptions are made, chief among which are that the molecules, or atoms, of the gas are point masses, possessing mass but no significant volume, and undergo only elastic collisions with each other and the sides of the container in which both linear momentum and kinetic energy are conserved."}
+{"text":"The fundamental assumptions of the kinetic theory of gases imply that"}
+{"text":"Using the Maxwell\u2013Boltzmann distribution, the fraction of molecules that have a speed in the range formula_70 to formula_71 is formula_72, where"}
+{"text":"and formula_33 denotes the Boltzmann constant. The root-mean-square speed can be calculated by"}
+{"text":"from which we get the ideal gas law:"}
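The root-mean-square speed v_rms = \u221a(3k\u0299T/m) can be evaluated directly; for example (an added sketch, using the approximate mass of an N2 molecule), nitrogen at 300 K moves at roughly 517 m/s:

```python
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K
M_N2 = 28.0 * 1.66054e-27   # mass of one N2 molecule, kg (28 u, approximate)

def rms_speed(T, m):
    """Root-mean-square molecular speed, v_rms = sqrt(3 k_B T / m)."""
    return math.sqrt(3.0 * K_B * T / m)

v = rms_speed(300.0, M_N2)
assert 500.0 < v < 530.0  # roughly 517 m/s for nitrogen at 300 K
```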
+{"text":"Let q = (\"q\"x, \"q\"y, \"q\"z) and p = (\"p\"x, \"p\"y, \"p\"z) denote the position vector and momentum vector of a particle of an ideal gas, respectively. Let F denote the net force on that particle. Then the time-averaged kinetic energy of the particle is:"}
+{"text":"where the first equality is Newton's second law, and the second line uses Hamilton's equations and the equipartition theorem. Summing over a system of \"N\" particles yields"}
+{"text":"By Newton's third law and the ideal gas assumption, the net force of the system is the force applied by the walls of the container, and this force is given by the pressure \"P\" of the gas. Hence"}
+{"text":"where dS is the infinitesimal area element along the walls of the container. Since the divergence of the position vector q is"}
+{"text":"where \"dV\" is an infinitesimal volume within the container and \"V\" is the total volume of the container."}
+{"text":"which immediately implies the ideal gas law for \"N\" particles:"}
+{"text":"where \"n\" = \"N\"\/\"N\"A is the number of moles of gas and \"R\" = \"N\"A\"k\"B is the gas constant."}
+{"text":"For a \"d\"-dimensional system, the ideal gas pressure is:"}
+{"text":"where formula_87 is the volume of the \"d\"-dimensional domain in which the gas exists. Note that the dimensions of the pressure change with dimensionality."}
+{"text":"In cosmology, the equation of state of a perfect fluid is characterized by a dimensionless number formula_1, equal to the ratio of its pressure formula_2 to its energy density formula_3:"}
+{"text":"It is closely related to the thermodynamic equation of state and ideal gas law."}
+{"text":"The perfect gas equation of state may be written as"}
+{"text":"where formula_6 is the mass density, formula_7 is the particular gas constant, formula_8 is the temperature and formula_9 is a characteristic thermal speed of the molecules. Thus"}
+{"text":"where formula_11 is the speed of light, formula_12 and formula_13 for a \"cold\" gas."}
+{"text":"FLRW equations and the equation of state."}
+{"text":"The equation of state may be used in Friedmann\u2013Lema\u00eetre\u2013Robertson\u2013Walker (FLRW) equations to describe the evolution of an isotropic universe filled with a perfect fluid. If formula_14 is the scale factor then"}
+{"text":"If the fluid is the dominant form of matter in a flat universe, then"}
+{"text":"In general the Friedmann acceleration equation is"}
+{"text":"where formula_19 is the cosmological constant and formula_20 is Newton's constant, and formula_21 is the second proper time derivative of the scale factor."}
+{"text":"If we define (what might be called \"effective\") energy density and pressure as"}
+{"text":"the acceleration equation may be written as"}
+{"text":"The equation of state for ordinary non-relativistic 'matter' (e.g. cold dust) is formula_26, which means that its energy density decreases as formula_27, where formula_28 is a volume. In an expanding universe, the total energy of non-relativistic matter remains constant, with its density decreasing as the volume increases."}
+{"text":"The equation of state for ultra-relativistic 'radiation' (including neutrinos, and in the very early universe other particles that later became non-relativistic) is formula_29 which means that its energy density decreases as formula_30. In an expanding universe, the energy density of radiation decreases more quickly than the volume expansion, because its wavelength is red-shifted."}
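The scaling behavior described above follows from the standard relation \u03c1(a) \u221d a^(\u22123(1+w)) for a fluid with constant equation-of-state parameter w, which can be sketched numerically (an added illustration):

```python
def density_scaling(rho0, a, w):
    """Energy density as the universe expands: rho(a) = rho0 * a**(-3*(1+w))."""
    return rho0 * a ** (-3.0 * (1.0 + w))

a = 2.0  # scale factor doubled
# Matter (w = 0): dilutes as 1/a^3, tracking the volume
assert abs(density_scaling(1.0, a, 0.0) - 2.0 ** -3) < 1e-12
# Radiation (w = 1/3): an extra 1/a factor from redshifting, so 1/a^4
assert abs(density_scaling(1.0, a, 1.0 / 3.0) - 2.0 ** -4) < 1e-12
# Cosmological constant (w = -1): energy density stays fixed
assert abs(density_scaling(1.0, a, -1.0) - 1.0) < 1e-12
```

Components with larger w thus dilute away faster as the universe expands, which is the point made in the flatness/monopole discussion below.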
+{"text":"Cosmic inflation and the accelerated expansion of the universe can be characterized by the equation of state of dark energy. In the simplest case, the equation of state of the cosmological constant is formula_31. In this case, the above expression for the scale factor is not valid and formula_32, where the constant \"H\" is the Hubble parameter. More generally, the expansion of the universe is accelerating for any equation of state formula_33. The accelerated expansion of the Universe has indeed been observed. According to observations, the value of the equation of state of the cosmological constant is near \u22121."}
+{"text":"Hypothetical phantom energy would have an equation of state formula_34, and would cause a Big Rip. Using the existing data, it is still impossible to distinguish between phantom formula_35 and non-phantom formula_36."}
+{"text":"In an expanding universe, fluids with larger equations of state disappear more quickly than those with smaller equations of state. This is the origin of the flatness and monopole problems of the Big Bang: curvature has formula_37 and monopoles have formula_26, so if they were around at the time of the early Big Bang, they should still be visible today. These problems are solved by cosmic inflation which has formula_39. Measuring the equation of state of dark energy is one of the largest efforts of observational cosmology. By accurately measuring formula_1, it is hoped that the cosmological constant could be distinguished from quintessence which has formula_41."}
+{"text":"A scalar field formula_42 can be viewed as a sort of perfect fluid with equation of state"}
+{"text":"where formula_44 is the time-derivative of formula_42 and formula_46 is the potential energy. A free formula_47 scalar field has formula_48, and one with vanishing kinetic energy is equivalent to a cosmological constant: formula_31. Any equation of state in between, but not crossing the formula_31 barrier known as the Phantom Divide Line (PDL), is achievable, which makes scalar fields useful models for many phenomena in cosmology."}
+{"text":"In fluid mechanics, the Tait equation is an equation of state, used to relate liquid density to pressure. The equation was originally published by Peter Guthrie Tait in 1888 in the form"}
+{"text":"where formula_2 is the reference pressure (taken to be 1 atmosphere), formula_3 is the current pressure, formula_4 is the volume of fresh water at the reference pressure, formula_5 is the volume at the current pressure, and formula_6 are experimentally determined parameters."}
+{"text":"Around 1895, the original isothermal Tait equation was replaced by Tammann with an equation of the form"}
+{"text":"The temperature-dependent version of the above equation is popularly known as the Tait equation and is commonly written as"}
+{"text":"The expression for the pressure in terms of the specific volume is"}
+{"text":"The tangent bulk modulus at pressure formula_3 is given by"}
+{"text":"Another popular isothermal equation of state that goes by the name \"Tait equation\" is the Murnaghan model which is sometimes expressed as"}
+{"text":"where formula_20 is the specific volume at pressure formula_3, formula_22 is the specific volume at pressure formula_2, formula_24 is the bulk modulus at formula_2, and formula_26 is a material parameter."}
+{"text":"This equation, in pressure form, can be written as"}
+{"text":"where formula_28 are mass densities at formula_29, respectively."}
+{"text":"For pure water, typical parameters are formula_2 = 101,325 Pa, formula_31 = 1000\u00a0kg\/cu.m, formula_24 = 2.15 GPa, and formula_26 = 7.15."}
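The pressure form mentioned above can be sketched numerically with the water parameters just quoted; the explicit expression is elided in the text, so the standard Murnaghan-form Tait relation P = P0 + (K0/n)((rho/rho0)^n - 1) is assumed here:

```python
# Murnaghan-form Tait equation in pressure form, with the pure-water
# parameters quoted in the text. A sketch, assuming the standard form
# P = P0 + (K0/n) * ((rho/rho0)**n - 1), since the formula is elided.

P0 = 101_325.0  # reference pressure (Pa)
rho0 = 1000.0   # reference density (kg/m^3)
K0 = 2.15e9     # bulk modulus at P0 (Pa)
n = 7.15        # material parameter

def tait_pressure(rho):
    """Pressure as a function of density for water."""
    return P0 + (K0 / n) * ((rho / rho0) ** n - 1.0)

print(tait_pressure(1000.0))  # equals P0 at the reference density
print(tait_pressure(1050.0))  # roughly 1.26e8 Pa for 5% compression
```

The second print illustrates why water is usually treated as incompressible: raising its density by 5% takes on the order of a thousand atmospheres.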
+{"text":"Note that this form of the Tait equation of state is identical to that of the Murnaghan equation of state."}
+{"text":"The tangent bulk modulus predicted by the MacDonald-Tait model is"}
+{"text":"A related equation of state that can be used to model liquids is the Tumlirz equation (sometimes called the Tammann equation and originally proposed by Tumlirz in 1909 and Tammann in 1911 for pure water). This relation has the form"}
+{"text":"where formula_36 is the specific volume, formula_3 is the pressure, formula_38 is the salinity, formula_39 is the temperature, and formula_40 is the specific volume when formula_41, and formula_42 are parameters that can be fit to experimental data."}
+{"text":"The Tumlirz-Tammann version of the Tait equation for fresh water, i.e., when formula_43, is"}
+{"text":"For pure water, the temperature dependences of formula_45 are:"}
+{"text":"In the above fits, the temperature formula_39 is in degrees Celsius, formula_2 is in bars, formula_40 is in cc\/gm, and formula_50 is in bars-cc\/gm."}
+{"text":"The inverse Tumlirz-Tammann-Tait relation for the pressure as a function of specific volume is"}
+{"text":"The Tumlirz-Tammann-Tait formula for the instantaneous tangent bulk modulus of pure water is a quadratic function of formula_3"}
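The Tumlirz-Tammann-Tait relation and its inverse (both formulas elided above) have the standard forms V = V_inf + lambda/(P0 + P) and P = lambda/(V - V_inf) - P0. A sketch with placeholder coefficients, which are illustrative only and not the fitted values referenced in the text:

```python
# Tumlirz-Tammann-Tait relation and its inverse. The coefficients below are
# hypothetical placeholder values for illustration, NOT the fits from the text.

V_inf = 0.70   # specific volume in the infinite-pressure limit (cc/g), hypothetical
P0 = 2700.0    # pressure offset (bar), hypothetical
lam = 810.0    # lambda (bar*cc/g), hypothetical

def volume(P):
    """Specific volume as a function of pressure: V = V_inf + lam/(P0 + P)."""
    return V_inf + lam / (P0 + P)

def pressure(V):
    """Inverse relation: P = lam/(V - V_inf) - P0."""
    return lam / (V - V_inf) - P0

# Round trip: pressure(volume(P)) recovers P
print(pressure(volume(500.0)))  # recovers 500.0
```

The round trip verifies that the two forms are exact algebraic inverses of each other.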
+{"text":"Like Wilson (1964), Renon & Prausnitz (1968) began with local composition theory, but instead of using the Flory\u2013Huggins volumetric expression as Wilson did, they assumed local compositions followed"}
+{"text":"with a new \"non-randomness\" parameter \u03b1. The excess Gibbs free energy was then determined to be"}
+{"text":"Unlike Wilson's equation, this can predict partially miscible mixtures. However the cross term, like Wohl's expansion, is more suitable for formula_7 than formula_8, and experimental data is not always sufficiently plentiful to yield three meaningful values, so later attempts to extend Wilson's equation to partial miscibility (or to extend Guggenheim's quasichemical theory for nonrandom mixtures to Wilson's different-sized molecules) eventually yielded variants like UNIQUAC."}
+{"text":"For a binary mixture the following functions are used:"}
+{"text":"Here, formula_11 and formula_12 are the dimensionless interaction parameters, which are related to the interaction energy parameters formula_13 and formula_14 by:"}
+{"text":"Here \"R\" is the gas constant and \"T\" the absolute temperature, and \"Uij\" is the energy between molecular surface \"i\" and \"j\". \"Uii\" is the energy of evaporation. Here \"Uij\" has to be equal to \"Uji\", but formula_16 is not necessarily equal to formula_17."}
+{"text":"The parameters formula_18 and formula_19 are the so-called non-randomness parameters, for which usually formula_18 is set equal to formula_19. For a liquid in which the local distribution is random around the center molecule, formula_22. In that case the equations reduce to the one-parameter Margules activity model:"}
+{"text":"In practice, formula_18 is set to 0.2, 0.3 or 0.48. The latter value is frequently used for aqueous systems; the high value reflects the ordered structure caused by hydrogen bonds. In the description of liquid-liquid equilibria, however, the non-randomness parameter is set to 0.2 to avoid an incorrect description of the liquid-liquid equilibrium. In some cases a better description of the phase equilibria is obtained by setting formula_25, but this mathematical solution is physically impossible, since no system can be more random than random (formula_18 =0). In general NRTL offers more flexibility in the description of phase equilibria than other activity models due to the extra non-randomness parameters. In practice, however, this flexibility is reduced in order to avoid incorrect equilibrium descriptions outside the range of the regressed data."}
+{"text":"The limiting activity coefficients, also known as the activity coefficients at infinite dilution, are calculated by:"}
+{"text":"The expressions show that at formula_22 the limiting activity coefficients are equal; this situation occurs for molecules of equal size but different polarities. They also show that, since three parameters are available, multiple sets of solutions are possible."}
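The binary NRTL expressions and the limiting activity coefficients themselves are elided above (formula placeholders); the standard two-parameter forms can be sketched as follows, with illustrative tau values that are not regressed parameters:

```python
import math

# Binary NRTL activity coefficients (standard two-parameter forms, stated as
# an assumption since the source's formulas are elided). tau12, tau21 and
# alpha below are illustrative placeholders, not fitted parameters.

def nrtl_gamma(x1, tau12, tau21, alpha=0.3):
    """Return (gamma1, gamma2) for a binary mixture at mole fraction x1."""
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2 ** 2 * (tau21 * (G21 / (x1 + x2 * G21)) ** 2
                       + tau12 * G12 / (x2 + x1 * G12) ** 2)
    ln_g2 = x1 ** 2 * (tau12 * (G12 / (x2 + x1 * G12)) ** 2
                       + tau21 * G21 / (x1 + x2 * G21) ** 2)
    return math.exp(ln_g1), math.exp(ln_g2)

# With both interaction parameters zero the mixture is ideal: gamma = 1.
print(nrtl_gamma(0.4, 0.0, 0.0))

# Infinite-dilution limit: ln(gamma1_inf) = tau21 + tau12*exp(-alpha*tau12)
g1_inf, _ = nrtl_gamma(1e-12, 1.2, 0.8)
print(math.log(g1_inf))
```

The second print checks the infinite-dilution expression numerically by evaluating the full model at a vanishingly small mole fraction.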
+{"text":"The general equation for formula_29 for species formula_30 in a mixture of formula_31 components is:"}
+{"text":"There are several different equation forms for formula_36 and formula_37, the most general of which are shown above."}
+{"text":"To describe phase equilibria over a large temperature regime, i.e. larger than 50 K, the interaction parameter has to be made temperature dependent."}
+{"text":"Two formats are frequently used. The extended Antoine equation format:"}
+{"text":"Here the logarithmic and linear terms are mainly used in the description of liquid-liquid equilibria (miscibility gap)."}
+{"text":"The other format is a second-order polynomial format:"}
+{"text":"The NRTL parameters are fitted to activity coefficients that have been derived from experimentally determined phase equilibrium data (vapor\u2013liquid, liquid\u2013liquid, solid\u2013liquid) as well as from heats of mixing. The source of the experimental data are often factual data banks like the Dortmund Data Bank. Other options are direct experimental work and predicted activity coefficients with UNIFAC and similar models."}
+{"text":"Determination of NRTL parameters from LLE data is more complicated than parameter regression from VLE data as it involves solving isoactivity equations which are highly non-linear. In addition, parameters obtained from LLE may not always represent the real activity of components, because the activity values of the components are not known in the data regression. For this reason it is necessary to confirm the consistency of the obtained parameters in the whole range of compositions (including binary subsystems, experimental and calculated tie-lines, Hessian matrix, etc.)."}
+{"text":"An equation of state introduced by R. H. Cole"}
+{"text":"where formula_2 is a reference density, formula_3 is the adiabatic index, and formula_4 is a parameter with pressure units."}
+{"text":"In chemistry and thermodynamics, the Van der Waals equation (or Van der Waals equation of state; named after Dutch physicist Johannes Diderik van der Waals) is an equation of state that generalizes the ideal gas law to account for the fact that real gases do not behave ideally. The ideal gas law treats gas molecules as point particles that interact with their containers but not each other, meaning they neither take up space nor change kinetic energy during collisions (i.e. all collisions are perfectly elastic). The ideal gas law states that the volume (\"V\") occupied by \"n\" moles of any gas, its pressure (\"P\"), and its temperature (\"T\") in kelvins are related by the following equation, where \"R\" is the gas constant:"}
+{"text":"To account for the volume that a real gas molecule takes up, the Van der Waals equation replaces \"V\" in the ideal gas law with formula_2, where \"Vm\" is the molar volume of the gas and \"b\" is the volume that is occupied by one mole of the molecules. This leads to:"}
+{"text":"The second modification made to the ideal gas law accounts for the fact that gas molecules do in fact interact with each other (they usually experience attraction at low pressures and repulsion at high pressures) and that real gases therefore show different compressibility than ideal gases. Van der Waals provided for intermolecular interaction by adding to the observed pressure \"P\" in the equation of state a term formula_4, where \"a\" is a constant whose value depends on the gas. The Van der Waals equation is therefore written as:"}
+{"text":"and can also be written as the equation below:"}
+{"text":"where \"Vm\" is the molar volume of the gas, \"R\" is the universal gas constant, \"T\" is temperature, \"P\" is pressure, and \"V\" is volume. When the molar volume \"Vm\" is large, \"b\" becomes negligible in comparison with \"Vm\", \"a\/Vm2\" becomes negligible with respect to \"P\", and the Van der Waals equation reduces to the ideal gas law, \"PVm=RT\"."}
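The two limits just described can be checked numerically; a minimal sketch comparing the Van der Waals and ideal-gas pressures, using illustrative constants of the magnitude typical for CO2 (a value of a near 0.364 Pa m^6/mol^2 and b near 4.267e-5 m^3/mol, quoted here as assumptions):

```python
# Van der Waals pressure P = RT/(Vm - b) - a/Vm**2 versus the ideal gas law.
# The a, b constants are illustrative CO2-like values, used only as examples.

R = 8.314  # universal gas constant, J/(mol*K)

def vdw_pressure(Vm, T, a=0.3640, b=4.267e-5):
    """Van der Waals pressure for molar volume Vm (m^3/mol) and T (K)."""
    return R * T / (Vm - b) - a / Vm ** 2

def ideal_pressure(Vm, T):
    return R * T / Vm

# At large molar volume the corrections are negligible and the two agree;
# near the co-volume b they diverge strongly.
print(vdw_pressure(1.0, 300.0), ideal_pressure(1.0, 300.0))
print(vdw_pressure(1e-4, 300.0), ideal_pressure(1e-4, 300.0))
```

The first line illustrates the reduction to \"PVm = RT\" at large Vm; the second shows the attractive term pulling the pressure well below the ideal value at small molar volumes.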
+{"text":"The Van der Waals equation is a thermodynamic equation of state based on the theory that fluids are composed of particles with non-zero volumes, and subject to a (not necessarily pairwise) inter-particle attractive force. It was based on work in theoretical physical chemistry performed in the late 19th century by Johannes Diderik van der Waals, who did related work on the attractive force that also bears his name. The equation can be obtained both from the traditional derivation going back to Van der Waals and related efforts, and from a derivation based in statistical thermodynamics; see below."}
+{"text":"The equation relates four state variables: the pressure of the fluid \"p\", the total volume of the fluid's container \"V\", the number of particles \"N\", and the absolute temperature of the system \"T\"."}
+{"text":"The intensive, microscopic form of the equation is:"}
+{"text":"is the volume of the container occupied by each particle (not the velocity of a particle), and \"k\"B is the Boltzmann constant. It introduces two new parameters: \"a\"\u2032, a measure of the average attraction between particles, and \"b\"\u2032, the volume excluded from \"v\" by one particle."}
+{"text":"The equation can be also written in extensive, molar form:"}
+{"text":"is a measure of the average attraction between particles,"}
+{"text":"is the volume excluded by a mole of particles,"}
+{"text":"is the universal gas constant, \"k\"B is the Boltzmann constant, and \"N\"A is the Avogadro constant."}
+{"text":"A careful distinction must be drawn between the volume \"available to\" a particle and the volume \"of\" a particle. In the intensive equation, \"v\" equals the total space available to each particle, while the parameter \"b\"\u2032 is proportional to the proper volume of a single particle \u2013 the volume bounded by the atomic radius. This is subtracted from \"v\" because of the space taken up by one particle. In Van der Waals' original derivation, given below, \"b\"' is four times the proper volume of the particle. Observe further that the pressure \"p\" goes to infinity when the container is completely filled with particles so that there is no void space left for the particles to move; this occurs when \"V\" = \"nb\"."}
+{"text":"If a mixture of formula_14 gases is being considered, and each gas has its own formula_15 (attraction between molecules) and formula_16 (volume occupied by molecules) values, then formula_15 and formula_16 for the mixture can be calculated as"}
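The mixture formula itself is elided above; a commonly used van der Waals one-fluid mixing rule (stated here as an assumption, not taken from the source) is a_mix = sum over i,j of x_i x_j sqrt(a_i a_j) with b_mix a mole-fraction average, which can be sketched as:

```python
import math

# Van der Waals one-fluid mixing rules (an assumed, commonly used choice):
#   a_mix = (sum_i x_i * sqrt(a_i))**2   (equivalent to sum_ij x_i x_j sqrt(a_i a_j))
#   b_mix = sum_i x_i * b_i

def vdw_mixture(x, a, b):
    """Mixture a and b from mole fractions x and pure-component a_i, b_i."""
    a_mix = sum(xi * math.sqrt(ai) for xi, ai in zip(x, a)) ** 2
    b_mix = sum(xi * bi for xi, bi in zip(x, b))
    return a_mix, b_mix

# Equimolar mixture of two gases with illustrative constants
a_mix, b_mix = vdw_mixture([0.5, 0.5], [0.3640, 0.1382], [4.267e-5, 3.19e-5])
print(a_mix, b_mix)
```

A sanity check on the rule: for a pure component the mixture constants reduce to the pure-component values, and the mixture values always lie between the pure-component extremes.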
+{"text":"and the rule of adding partial pressures becomes invalid if the numerical result of the equation formula_26 is significantly different from the ideal gas equation formula_27."}
+{"text":"The Van der Waals equation can also be expressed in terms of reduced properties:"}
+{"text":"This yields a critical compressibility factor of 3\/8. The modifications to the ideal gas law are needed because its derivation from the kinetic theory of gases rests on assumptions (point-like particles, no intermolecular forces) that real gases do not satisfy."}
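The 3/8 value follows from the critical constants of the Van der Waals gas, Vc = 3b, Pc = a/(27 b^2), Tc = 8a/(27 R b); a quick numerical check:

```python
# Critical constants of the Van der Waals gas in terms of a and b, from which
# the critical compressibility factor Zc = Pc*Vc/(R*Tc) = 3/8 for any a, b.

R = 8.314  # J/(mol*K)

def critical_point(a, b):
    """Return (Pc, Vc, Tc) of a Van der Waals fluid with constants a, b."""
    Vc = 3.0 * b
    Pc = a / (27.0 * b ** 2)
    Tc = 8.0 * a / (27.0 * R * b)
    return Pc, Vc, Tc

Pc, Vc, Tc = critical_point(0.3640, 4.267e-5)  # illustrative CO2-like constants
print(Pc * Vc / (R * Tc))  # 0.375 regardless of a and b
```

That the result is independent of a and b is exactly the invariance exploited by the reduced form of the equation discussed below.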
+{"text":"The \"Van der Waals equation\" is mathematically simple, but it nevertheless predicts the experimentally observed transition between vapor and liquid, and predicts critical behaviour. It also adequately predicts and explains the Joule\u2013Thomson effect (temperature change during adiabatic expansion), which does not occur in an ideal gas."}
+{"text":"However, the values of physical quantities as predicted with the Van der Waals equation of state \"are in very poor agreement with experiment\", so the model's utility is limited to qualitative rather than quantitative purposes. Empirically-based corrections can easily be inserted into the Van der Waals model (see Maxwell's correction, below), but in so doing, the modified expression is no longer as simple an analytical model; in this regard, other models, such as those based on the principle of corresponding states, achieve a better fit with roughly the same work."}
+{"text":"Even with its acknowledged shortcomings, the pervasive use of the \"Van der Waals equation\" in standard university physical chemistry textbooks makes clear its importance as a pedagogic tool to aid understanding fundamental physical chemistry ideas involved in developing theories of vapour\u2013liquid behavior and equations of state. In addition, other (more accurate) equations of state such as the Redlich\u2013Kwong and Peng\u2013Robinson equation of state are essentially modifications of the Van der Waals equation of state."}
+{"text":"Textbooks in physical chemistry generally give two derivations of the title equation. One is the conventional derivation that goes back to Van der Waals, a mechanical equation of state that cannot be used to specify all thermodynamic functions; the other is a statistical mechanics derivation that makes explicit the intermolecular potential neglected in the first derivation. A particular advantage of the statistical mechanical derivation is that it yields the partition function for the system, and allows all thermodynamic functions to be specified (including the mechanical equation of state)."}
+{"text":"Consider one mole of gas composed of non-interacting point particles that satisfy the ideal gas law (see any standard physical chemistry text, op. cit.):"}
+{"text":"Next, assume that all particles are hard spheres of the same finite radius \"r\" (the Van der Waals radius). The effect of the finite volume of the particles is to decrease the available void space in which the particles are free to move. We must replace \"V\" by \"V\"\u00a0\u2212\u00a0\"b\", where \"b\" is called the \"excluded volume\" (per mole) or \"co-volume\". The corrected equation becomes"}
+{"text":"The excluded volume formula_16 is not just equal to the volume occupied by the solid, finite-sized particles, but actually four times the total molecular volume for one mole of a Van der Waals gas. To see this, we must realize that a particle is surrounded by a sphere of radius 2\"r\" (two times the original radius) that is forbidden for the centers of the other particles. If the distance between two particle centers were to be smaller than 2\"r\", it would mean that the two particles penetrate each other, which, by definition, hard spheres are unable to do."}
+{"text":"The excluded volume for the two particles (of average diameter \"d\" or radius \"r\") is"}
+{"text":"which, divided by two (the number of colliding particles), gives the excluded volume per particle:"}
+{"text":"So \"b\u2032\" is four times the proper volume of the particle. It was a point of concern to Van der Waals that the factor four yields an upper bound; empirical values for \"b\u2032\" are usually lower. Of course, molecules are not infinitely hard, as Van der Waals thought, and are often fairly soft. To obtain the excluded volume per mole we just need to multiply by the number of molecules in a mole, i.e. by the Avogadro number:"}
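The factor-of-four argument can be checked numerically; a sketch using an illustrative hard-sphere radius (the 1.5 angstrom value below is an assumption, not from the text):

```python
import math

# Excluded volume per particle: half the volume of a sphere of radius 2r,
# which works out to 4x the proper particle volume; multiplying by the
# Avogadro number gives the molar co-volume b.

N_A = 6.02214076e23  # Avogadro constant, 1/mol

def excluded_volume_per_mole(r):
    """Molar excluded volume for hard spheres of radius r (in meters)."""
    v_particle = (4.0 / 3.0) * math.pi * r ** 3   # proper volume of one sphere
    return 4.0 * v_particle * N_A                 # 4x per particle, per mole

# For an illustrative hard-sphere radius of 1.5 angstrom:
print(excluded_volume_per_mole(1.5e-10))  # ~3.4e-5 m^3/mol (~34 cm^3/mol)
```

The result lands in the few-tens-of-cm^3/mol range, the same order of magnitude as tabulated Van der Waals b values, consistent with the remark that the factor four is an upper bound.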
+{"text":"The number of particles in the surface layers is, again by assuming homogeneity, also proportional to the density. In total, the force on the walls is decreased by a factor proportional to the square of the density, and the pressure (force per unit surface) is decreased by"}
+{"text":"Upon writing \"n\" for the number of moles and \"nV\"m = \"V\", the equation obtains the second form given above,"}
+{"text":"It is of some historical interest to point out that Van der Waals, in his Nobel prize lecture, gave credit to Laplace for the argument that pressure is reduced proportional to the square of the density."}
+{"text":"The canonical partition function \"Z\" of an ideal gas consisting of \"N = nN\"A identical (non-interacting) particles, is:"}
+{"text":"where formula_40 is the thermal de Broglie wavelength,"}
+{"text":"with the usual definitions: \"h\" is Planck's constant, \"m\" the mass of a particle, \"k\" Boltzmann's constant and \"T\" the absolute temperature. In an ideal gas \"z\" is the partition function of a single particle in a container of volume \"V\". In order to derive the Van der Waals equation we assume now that each particle moves independently in an average potential field offered by the other particles. The averaging over the particles is easy because we will assume that the particle density of the Van der Waals fluid is homogeneous."}
+{"text":"The interaction between a pair of particles, which are hard spheres, is taken to be"}
+{"text":"\"r\" is the distance between the centers of the spheres and \"d\" is the distance where the hard spheres touch each other (twice the Van der Waals radius). The depth of the Van der Waals well is formula_43."}
+{"text":"Because the particles are not coupled under the mean field Hamiltonian, the mean field approximation of the total partition function still factorizes,"}
+{"text":"but the intermolecular potential necessitates two modifications to \"z\". First, because of the finite size of the particles, not all of \"V\" is available, but only \"V \u2212 Nb\"', where (just as in the conventional derivation above)"}
+{"text":"Second, a Boltzmann factor exp[\" - \u03d5\/2kT\"] is inserted to take care of the average intermolecular potential. The potential is divided by two because this interaction energy is shared between two particles. Thus"}
+{"text":"The total attraction felt by a single particle is"}
+{"text":"where we assumed that in a shell of thickness d\"r\" there are \"N\/V\"\u00a04\"\u03c0\" \"r\"2\"dr\" particles. This is a mean field approximation; the position of the particles is averaged. In reality the density close to the particle differs from the density far away, as described by a pair correlation function. Furthermore, it is neglected that the fluid is enclosed between walls."}
+{"text":"Performing the integral we get"}
+{"text":"so that we only have to differentiate the terms containing V. We get"}
+{"text":"Below the critical temperature, the Van der Waals equation seems to predict qualitatively incorrect relationships. Unlike for ideal gases, the p-V isotherms oscillate with a relative minimum (\"d\") and a relative maximum (\"e\"). Any pressure between \"pd\" and \"pe\" appears to have 3 stable volumes, contradicting the experimental observation that two state variables completely determine a one-component system's state. Moreover, the isothermal compressibility is negative between \"d\" and \"e\" (equivalently formula_52), which cannot describe a system at equilibrium."}
+{"text":"To address these problems, James Clerk Maxwell replaced the isotherm between points \"a\" and \"c\" with a horizontal line positioned so that the areas of the two shaded regions would be equal (replacing the \"a\"-\"d\"-\"b\"-\"e\"-\"c\" curve with a straight line from \"a\" to \"c\"); this portion of the isotherm corresponds to the liquid-vapor equilibrium. The regions of the isotherm from \"a\"\u2013\"d\" and from \"c\"\u2013\"e\" are interpreted as metastable states of super-heated liquid and super-cooled vapor, respectively. The equal area rule can be expressed as:"}
+{"text":"where \"pV\" is the vapor pressure (flat portion of the curve), \"VL\" is the volume of the pure liquid phase at point \"a\" on the diagram, and \"VG\" is the volume of the pure gas phase at point \"c\" on the diagram. A two-phase mixture at \"pV\" will occupy a total volume between \"VL\" and \"VG\", as determined by Maxwell's lever rule."}
+{"text":"Maxwell justified the rule based on the fact that the area on a \"pV\" diagram corresponds to mechanical work, saying that work done on the system in going from \"c\" to \"b\" should equal work released on going from \"a\" to \"b\". This is because the change in free energy \"A\"(\"T\",\"V\") equals the work done during a reversible process, and, as a state variable, the free energy must be path-independent. In particular, the value of \"A\" at point \"b\" should be the same regardless of whether the path taken is from left or right across the horizontal isobar, or follows the original Van der Waals isotherm."}
+{"text":"This derivation is not entirely rigorous, since it requires a reversible path through a region of thermodynamic instability, while \"b\" is unstable. Nevertheless, modern derivations from chemical potential reach the same conclusion, and it remains a necessary modification to the Van der Waals and to any other analytic equation of state."}
+{"text":"The Maxwell equal area rule can also be derived from an assumption of equal chemical potential \"\u03bc\" of coexisting liquid and vapour phases. On the isotherm shown in the above plot, points \"a\" and \"c\" are the only pair of points which fulfill the equilibrium condition of having equal pressure, temperature and chemical potential. It follows that systems with volumes intermediate between these two points will consist of a mixture of the pure liquid and gas with specific volumes equal to the pure liquid and gas phases at points \"a\" and \"c\"."}
+{"text":"The Van der Waals equation may be solved for \"VG\" and \"VL\" as functions of the temperature and the vapor pressure \"pV\". Since:"}
+{"text":"where \"A\" is the Helmholtz free energy, it follows that the equal area rule can be expressed as:"}
+{"text":"Since the gas and liquid volumes are functions of \"pV\" and \"T\" only, this equation is then solved numerically to obtain \"pV\" as a function of temperature (and number of particles \"N\"), which may then be used to determine the gas and liquid volumes."}
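The numerical solution described above can be sketched in reduced variables, where the isotherm is p(v) = 8T/(3v - 1) - 3/v^2 and its antiderivative is (8T/3) ln(3v - 1) + 3/v. The bracketing intervals and tolerances below are ad hoc choices; plain bisection is used throughout:

```python
import math

# Maxwell equal-area construction for the reduced Van der Waals isotherm,
# a minimal numerical sketch (bracket bounds and tolerances are ad hoc).

def p_red(v, T):
    """Reduced Van der Waals pressure."""
    return 8.0 * T / (3.0 * v - 1.0) - 3.0 / v ** 2

def bisect(f, lo, hi, tol=1e-12, iters=200):
    """Plain bisection; assumes f changes sign on [lo, hi]."""
    flo = f(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        fm = f(mid)
        if fm * flo > 0.0:
            lo, flo = mid, fm
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def maxwell(T):
    """Return (p_sat, vL, vG) for a subcritical reduced temperature T."""
    dp = lambda v: -24.0 * T / (3.0 * v - 1.0) ** 2 + 6.0 / v ** 3
    v_min = bisect(dp, 0.34, 1.0)   # spinodal point: local minimum of p(v)
    v_max = bisect(dp, 1.0, 10.0)   # spinodal point: local maximum of p(v)

    def area_mismatch(p):
        # Roots of p(v) = p on the liquid and gas branches
        vL = bisect(lambda v: p_red(v, T) - p, 1.0 / 3.0 + 1e-9, v_min)
        vG = bisect(lambda v: p_red(v, T) - p, v_max, 50.0)
        # Integral of p dv from vL to vG via the antiderivative
        integral = (8.0 * T / 3.0) * math.log((3.0 * vG - 1.0) / (3.0 * vL - 1.0)) \
                   + 3.0 / vG - 3.0 / vL
        return p * (vG - vL) - integral, vL, vG

    p_lo = max(p_red(v_min, T), 0.0) + 1e-6
    p_hi = p_red(v_max, T) - 1e-6
    p_sat = bisect(lambda p: area_mismatch(p)[0], p_lo, p_hi)
    _, vL, vG = area_mismatch(p_sat)
    return p_sat, vL, vG

p_sat, vL, vG = maxwell(0.9)
print(p_sat, vL, vG)  # reduced vapor pressure and coexisting volumes at Tr = 0.9
```

At Tr = 0.9 the construction gives a reduced vapor pressure near 0.65, with the liquid volume below and the gas volume above the critical value of 1, matching the isotherm shown in the accompanying figure.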
+{"text":"A pseudo-3D plot of the locus of liquid and vapor volumes versus temperature and pressure is shown"}
+{"text":"in the accompanying figure. One sees that the two loci meet at the critical point (1,1,1) smoothly. An isotherm of the Van der Waals fluid taken at \"T r\" = 0.90 is also shown where the intersections of the isotherm with the loci illustrate the construction's requirement that the two areas (red and blue, shown) are equal."}
+{"text":"We reiterate that the extensive volume \"V\"\u00a0 is related to the volume per particle \"v=V\/N\"\u00a0 where \"N = nN\"A\u00a0 is the number of particles in the system."}
+{"text":"The equation of state does not give us all the thermodynamic parameters of the system. We can take the equation for the Helmholtz energy \"A\""}
+{"text":"From the equation derived above for ln\"Q\", we find"}
+{"text":"where \u03a6 is an undetermined constant, which may be taken from the Sackur\u2013Tetrode equation for an ideal gas to be:"}
+{"text":"This equation expresses \"A\"\u00a0 in terms of its natural variables \"V\"\u00a0 and \"T\"\u00a0, and therefore gives us all thermodynamic information about the system. The mechanical equation of state was already derived above"}
+{"text":"The entropy equation of state yields the entropy (\"S\"\u00a0)"}
+{"text":"from which we can calculate the internal energy"}
+{"text":"Similar equations can be written for the other thermodynamic potentials and the chemical potential, but expressing any potential as a function of pressure \"p\"\u00a0 will require the solution of a third-order polynomial, which yields a complicated expression. Therefore, expressing the enthalpy and the Gibbs energy as functions of their natural variables will be complicated."}
+{"text":"Although the material constants \"a\" and \"b\" in the usual form of the Van der Waals equation differ for every single fluid considered, the equation can be recast into an invariant form applicable to \"all\" fluids."}
+{"text":"Defining the following reduced variables (\"fR\", \"fC\" are the reduced and critical variable versions of \"f\", respectively),"}
+{"text":"The first form of the Van der Waals equation of state given above can be recast in the following reduced form:"}
+{"text":"This equation is \"invariant\" for all fluids; that is, the same reduced form equation of state applies, no matter what \"a\" and \"b\" may be for the particular fluid."}
+{"text":"This invariance may also be understood in terms of the principle of corresponding states. If two fluids have the same reduced pressure, reduced volume, and reduced temperature, we say that their states are corresponding. The states of two fluids may be corresponding even if their measured pressure, volume, and temperature are very different. If the two fluids' states are corresponding, they exist in the same regime of the reduced form equation of state. Therefore, they will respond to changes in roughly the same way, even though their measurable physical characteristics may differ significantly."}
+{"text":"The Van der Waals equation is a cubic equation of state; in the reduced formulation the cubic equation is:"}
+{"text":"At the critical temperature, where formula_66 we get as expected"}
+{"text":"For \"TR\" < 1, there are 3 values for \"vR\"."}
+{"text":"For \"TR\" > 1, there is 1 real value for \"vR\"."}
+{"text":"The solution of this equation for the case where there are three separate roots may be found at Maxwell construction"}
+{"text":"The equation is also usable as a PVT equation for compressible fluids (e.g. polymers). In this case specific volume changes are small and it can be written in a simplified form:"}
+{"text":"where \"p\" is the pressure, \"V\" is specific volume, \"T\" is the temperature and \"A, B, C\" are parameters."}
+{"text":"The Murnaghan equation of state is a relationship between the volume of a body and the pressure to which it is subjected. This is one of many state equations that have been used in earth sciences and shock physics to model the behavior of matter under conditions of high pressure. It owes its name to Francis D. Murnaghan, who proposed it in 1944 to model material behavior over as wide a pressure range as possible and to reflect an experimentally established fact: the more a solid is compressed, the more difficult it is to compress further."}
+{"text":"The Murnaghan equation is derived, under certain assumptions, from the equations of continuum mechanics. It involves two adjustable parameters: the modulus of incompressibility \"K\"0 and its first derivative with respect to the pressure, \"K\"'0, both measured at ambient pressure. In general, these coefficients are determined by a regression on experimentally obtained values of volume \"V\" as a function of the pressure \"P\". These experimental data can be obtained by X-ray diffraction or by shock tests. Regression can also be performed on the values of the energy as a function of the volume obtained from ab-initio and molecular dynamics calculations."}
+{"text":"The Murnaghan equation of state is typically expressed as:"}
+{"text":"If the reduction in volume under compression is low, i.e., for \"V\"\/\"V\"0 greater than about 90%, the Murnaghan equation can model experimental data with satisfactory accuracy. Moreover, unlike many proposed equations of state, it gives an explicit expression of the volume as a function of pressure \"V\"(\"P\"). But its range of validity is limited and its physical interpretation is inadequate. However, this equation of state continues to be widely used in models of solid explosives. Of the more elaborate equations of state, the most used in earth physics is the Birch\u2013Murnaghan equation of state. In shock physics of metals and alloys, another widely used equation of state is the Mie\u2013Gr\u00fcneisen equation of state."}
+{"text":"The study of the internal structure of the earth through the knowledge of the mechanical properties of the constituents of the inner layers of the planet involves extreme conditions; the pressure can be counted in hundreds of gigapascals and the temperature in thousands of degrees. The study of the properties of matter under these conditions can be done experimentally through devices such as the diamond anvil cell for static pressures, or by subjecting the material to shock waves. It has also given rise to theoretical work to determine the equation of state, that is to say the relations among the different parameters that define in this case the state of matter: the volume (or density), temperature and pressure."}
+{"text":"Dozens of equations have been proposed by various authors. These are empirical relationships whose quality and relevance depend on the use made of them; they can be judged by different criteria: the number of independent parameters that are involved, the physical meaning that can be assigned to these parameters, the quality of the experimental data, and the consistency of the theoretical assumptions that underlie their ability to extrapolate the behavior of solids at high compression."}
+{"text":"Generally, at constant temperature, the bulk modulus is defined by:"}
+{"text":"The easiest way to get an equation of state linking \"P\" and \"V\" is to assume that \"K\" is constant, that is to say, independent of pressure and deformation of the solid; we then simply recover Hooke's law. In this case, the volume decreases exponentially with pressure. This is not a satisfactory result because it is experimentally established that as a solid is compressed, it becomes more difficult to compress. To go further, we must take into account the variations of the elastic properties of the solid with compression."}
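The exponential decrease mentioned above follows from integrating the bulk-modulus definition K = -V dP/dV with K held constant, giving V(P) = V0 exp(-(P - P0)/K); a minimal sketch:

```python
import math

# Constant bulk modulus: integrating K = -V dP/dV gives
#   V(P) = V0 * exp(-(P - P0)/K)
# so the material never "stiffens" under compression, contrary to experiment.

def volume_constant_K(P, V0=1.0, P0=0.0, K=2.15e9):
    """Volume at pressure P for a material with pressure-independent K."""
    return V0 * math.exp(-(P - P0) / K)

# One bulk-modulus worth of overpressure compresses the volume by a factor 1/e
print(volume_constant_K(2.15e9))  # 1/e, roughly 0.368
```

Each additional increment of pressure removes the same fraction of the remaining volume, which is exactly the behavior the Murnaghan assumption below is designed to correct.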
+{"text":"Murnaghan's assumption is that the bulk modulus is a linear function of pressure:"}
+{"text":"The Murnaghan equation is the result of integrating the differential equation:"}
+{"text":"We can also express the volume as a function of the pressure:"}
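The explicit P(V) and V(P) expressions are elided above; the standard forms obtained by integrating K(P) = K0 + K'0 P are sketched below, with illustrative coefficients of the magnitude typical for mantle minerals (labeled as assumptions):

```python
# Murnaghan equation of state, standard forms from integrating K = K0 + K0p*P:
#   P(V) = (K0/K0p) * ((V0/V)**K0p - 1)
#   V(P) = V0 * (1 + K0p*P/K0)**(-1/K0p)
# K0 and K0p below are illustrative values, not fitted to any data set.

K0 = 160e9   # bulk modulus at ambient pressure (Pa), illustrative
K0p = 4.0    # pressure derivative of K at ambient pressure, illustrative
V0 = 1.0     # volume at ambient pressure (normalized)

def murnaghan_P(V):
    return (K0 / K0p) * ((V0 / V) ** K0p - 1.0)

def murnaghan_V(P):
    return V0 * (1.0 + K0p * P / K0) ** (-1.0 / K0p)

# Round trip, and the stiffening behavior the equation was built to capture:
print(murnaghan_P(murnaghan_V(50e9)))         # recovers 50 GPa
print(murnaghan_V(10e9), murnaghan_V(20e9))   # each extra 10 GPa removes less volume
```

Unlike the constant-K exponential, successive pressure increments here remove progressively less volume, reflecting the experimental fact that compressed solids become harder to compress.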
+{"text":"This simplified presentation is however criticized by Poirier as lacking rigor. The same relationship can be shown in a different way from the fact that the product of the incompressibility modulus and the thermal expansion coefficient does not depend on the pressure for a given material. This equation of state is also a general case of the older polytropic relation, which also has a constant power relation."}
+{"text":"In some circumstances, particularly in connection with ab initio calculations, the expression of the energy as a function of the volume will be preferred, which can be obtained by integrating the above equation according to the relationship \"P\" = \u2212\"dE\"\/\"dV\". For \"K\"'0 different from 3, it can be written as"}
+{"text":"Despite its simplicity, the Murnaghan equation is able to reproduce the experimental data for a range of pressures that can be quite large, on the order of \"K\"0\/2. It also remains satisfactory as the ratio \"V\"\/\"V\"0 remains above about 90%. In this range, the Murnaghan equation has an advantage compared to other equations of state if one wants to express the volume as a function of pressure."}
+{"text":"Nevertheless, other equations may provide better results and several theoretical and experimental studies show that the Murnaghan equation is unsatisfactory for many problems. Thus, to the extent that the ratio \"V\"\/\"V\"0 becomes very low, the theory predicts that \"K\"' goes to 5\/3, which is the Thomas\u2013Fermi limit. However, in the Murnaghan equation, \"K\"' is constant and set to its initial value. In particular, the value \"K\"'0 = 5\/3 becomes inconsistent with the theory under some situations. In fact, when extrapolated, the behavior predicted by the Murnaghan equation becomes quite quickly unlikely."}
+{"text":"Regardless of this theoretical argument, experience clearly shows that \"K\"' decreases with pressure, or in other words that the second derivative of the incompressibility modulus \"K\"\" is strictly negative. A second order theory based on the same principle (see next section) can account for this observation, but this approach is still unsatisfactory. Indeed, it leads to a negative bulk modulus in the limit where the pressure tends to infinity. In fact, this is an inevitable contradiction whatever polynomial expansion is chosen because there will always be a dominant term that diverges to infinity."}
+{"text":"These important limitations have led to the abandonment of the Murnaghan equation, which W. Holzapfel calls \"a useful mathematical form without any physical justification\". In practice, the analysis of compression data is done by using more sophisticated equations of state. The most commonly used within the science community is the Birch\u2013Murnaghan equation, second or third order in the quality of data collected."}
+{"text":"Finally, a very general limitation of this type of equation of state is their inability to take into account the phase transitions induced by the pressure and temperature of melting, but also multiple solid-solid transitions that can cause abrupt changes in the density and bulk modulus based on the pressure."}
+{"text":"In practice, the Murnaghan equation is used to perform a regression on a data set, where one gets the values of the coefficients \"K\"0 and \"K\"'0. These coefficients obtained, and knowing the value of the volume to ambient conditions, then we are in principle able to calculate the volume, density and bulk modulus for any pressure."}
+{"text":"The data set is mostly a series of volume measurements for different values of applied pressure, obtained mostly by X-ray diffraction. It is also possible to work on theoretical data, calculating the energy for different values of volume by ab initio methods, and then regressing these results. This gives a theoretical value of the modulus of elasticity which can be compared to experimental results."}
+{"text":"The following table lists some of the results of different materials, with the sole purpose of illustrating some numerical analyses that have been made using the Murnaghan equation, without prejudice to the quality of the models obtained. Given the criticisms that have been made in the previous section on the physical meaning of the Murnaghan equation, these results should be considered with caution."}
+{"text":"To improve the models or avoid criticism outlined above, several generalizations of the Murnaghan equation have been proposed. They usually consist in dropping a simplifying assumption and adding another adjustable parameter. This can improve the qualities of refinement, but also lead to complicated expressions. The question of the physical meaning of these additional parameters is also raised."}
+{"text":"A possible strategy is to include an additional term \"P\"2 in the previous development, requiring that formula_7. Solving this differential equation gives the equation of the second-order Murnaghan:"}
+{"text":"where formula_9. Found naturally in the first order equation taking formula_10. Developments to an order greater than 2 are possible in principle, but at the cost of adding an adjustable parameter for each term."}
+{"text":"In physics and thermodynamics, an equation of state is a thermodynamic equation relating state variables which describe the state of matter under a given set of physical conditions, such as pressure, volume, temperature (PVT), or internal energy. Equations of state are useful in describing the properties of fluids, mixtures of fluids, solids, and the interior of stars."}
+{"text":"At present, there is no single equation of state that accurately predicts the properties of all substances under all conditions. An example of an equation of state correlates densities of gases and liquids to temperatures and pressures, known as the ideal gas law, which is roughly accurate for weakly polar gases at low pressures and moderate temperatures. This equation becomes increasingly inaccurate at higher pressures and lower temperatures, and fails to predict condensation from a gas to a liquid."}
+{"text":"Another common use is in modeling the interior of stars, including neutron stars, dense matter (quark\u2013gluon plasmas) and radiation fields. A related concept is the perfect fluid equation of state used in cosmology."}
+{"text":"Equations of state can also describe solids, including the transition of solids from one crystalline state to another."}
+{"text":"In a practical context, equations of state are instrumental for PVT calculations in process engineering problems, such as petroleum gas\/liquid equilibrium calculations. A successful PVT model based on a fitted equation of state can be helpful to determine the state of the flow regime, the parameters for handling the reservoir fluids, and pipe sizing."}
+{"text":"Measurements of equation-of-state parameters, especially at high pressures, can be made using lasers."}
+{"text":"Boyle's Law was perhaps the first expression of an equation of state. In 1662, the Irish physicist and chemist Robert Boyle performed a series of experiments employing a J-shaped glass tube, which was sealed on one end. Mercury was added to the tube, trapping a fixed quantity of air in the short, sealed end of the tube. Then the volume of gas was measured as additional mercury was added to the tube. The pressure of the gas could be determined by the difference between the mercury level in the short end of the tube and that in the long, open end. Through these experiments, Boyle noted that the gas volume varied inversely with the pressure. In mathematical form, this can be stated as:"}
+{"text":"The above relationship has also been attributed to Edme Mariotte and is sometimes referred to as Mariotte's law. However, Mariotte's work was not published until 1676."}
+{"text":"Charles's law or Law of Charles and Gay-Lussac (1787)."}
+{"text":"In 1787 the French physicist Jacques Charles found that oxygen, nitrogen, hydrogen, carbon dioxide, and air expand to roughly the same extent over the same 80-kelvin interval. Later, in 1802, Joseph Louis Gay-Lussac published results of similar experiments, indicating a linear relationship between volume and temperature (Charles's Law):"}
+{"text":"Dalton's Law of partial pressure states that the pressure of a mixture of gases is equal to the sum of the pressures of all of the constituent gases alone."}
+{"text":"Mathematically, this can be represented for \"n\" species as:"}
+{"text":"In 1834, \u00c9mile Clapeyron combined Boyle's Law and Charles' law into the first statement of the \"ideal gas law\". Initially, the law was formulated as \"pVm\" = \"R\"(\"TC\" + 267) (with temperature expressed in degrees Celsius), where \"R\" is the gas constant. However, later work revealed that the number should actually be closer to 273.2, and then the Celsius scale was defined with 0\u00b0C = 273.15K, giving:"}
+{"text":"Van der Waals equation of state (1873)."}
+{"text":"In 1873, J. D. van der Waals introduced the first equation of state derived by the assumption of a finite volume occupied by the constituent molecules. His new formula revolutionized the study of equations of state, and was most famously continued via the Redlich\u2013Kwong equation of state and the Soave modification of Redlich-Kwong."}
+{"text":"General form of an equation of state."}
+{"text":"For a given amount of substance contained in a system, the temperature, volume, and pressure are not independent quantities; they are connected by a relationship of the general form"}
+{"text":"An equation used to model this relationship is called an equation of state. In the following sections major equations of state are described, and the variables used here are defined as follows. Any consistent set of units may be used, although SI units are preferred. Absolute temperature refers to use of the Kelvin (K) or Rankine (\u00b0R) temperature scales, with zero being absolute zero."}
+{"text":"The classical ideal gas law may be written"}
+{"text":"In the form shown above, the equation of state is thus"}
+{"text":"If the calorically perfect gas approximation is used, then the ideal gas law may also be expressed as follows"}
+{"text":"where formula_19 is the density, formula_20 is the adiabatic index (ratio of specific heats), formula_21 is the internal energy per unit mass (the \"specific internal energy\"), formula_22 is the specific heat at constant volume, and formula_23 is the specific heat at constant pressure."}
+{"text":"Since for atomic and molecular gases, the classical ideal gas law is well suited in most cases, let us describe the equation of state for elementary particles with mass formula_24 and spin formula_25 that takes into account of quantum effects. In the following, the upper sign will always correspond to Fermi-Dirac statistics and the lower sign to Bose\u2013Einstein statistics. The equation of state of such gases with formula_26 particles occupying a volume formula_27 with temperature formula_28 and pressure formula_29 is given by"}
+{"text":"where formula_31 is the Boltzmann constant and formula_32 the chemical potential is given by the following implicit function"}
+{"text":"In the limiting case where formula_34, this equation of state will reduce to that of the classical ideal gas. It can be shown that the above equation of state in the limit formula_34 reduces to"}
+{"text":"With a fixed number density formula_37, decreasing the temperature causes in Fermi gas, a increase in the value for pressure from its classical value implying an effective repulsion between particles (this is an apparent repulsion due to quantum exchange effects not because of actual interactions between particles since in ideal gas, interactional forces are neglected) and in Bose gas, a decrease in pressure from its classical value implying an effective attraction."}
+{"text":"Cubic equations of state are called such because they can be rewritten as a cubic function of formula_38."}
+{"text":"The Van der Waals equation of state may be written:"}
+{"text":"where formula_38 is molar volume. The substance-specific constants formula_41 and formula_42 can be calculated from the critical properties formula_43, formula_44, and formula_45 (noting that formula_45 is the molar volume at the critical point) as:"}
+{"text":"Proposed in 1873, the van der Waals equation of state was one of the first to perform markedly better than the ideal gas law. In this landmark equation formula_41 is called the attraction parameter and formula_42 the repulsion parameter or the effective molecular volume. While the equation is definitely superior to the ideal gas law and does predict the formation of a liquid phase, the agreement with experimental data is limited for conditions where the liquid forms. While the van der Waals equation is commonly referenced in text-books and papers for historical reasons, it is now obsolete. Other modern equations of only slightly greater complexity are much more accurate."}
+{"text":"The van der Waals equation may be considered as the ideal gas law, \"improved\" due to two independent reasons:"}
+{"text":"With the reduced state variables, i.e. formula_61, formula_62 and formula_63, the reduced form of the Van der Waals equation can be formulated:"}
+{"text":"The benefit of this form is that for given formula_65 and formula_66, the reduced volume of the liquid and gas can be calculated directly using Cardano's method for the reduced cubic form:"}
+{"text":"For formula_68 and formula_69, the system is in a state of vapor\u2013liquid equilibrium. The reduced cubic equation of state yields in that case 3 solutions. The largest and the lowest solution are the gas and liquid reduced volume."}
+{"text":"Introduced in 1949, the Redlich-Kwong equation of state was a considerable improvement over other equations of the time. It is still of interest primarily due to its relatively simple form. While superior to the van der Waals equation of state, it performs poorly with respect to the liquid phase and thus cannot be used for accurately calculating vapor\u2013liquid equilibria. However, it can be used in conjunction with separate liquid-phase correlations for this purpose."}
+{"text":"The Redlich-Kwong equation is adequate for calculation of gas phase properties when the ratio of the pressure to the critical pressure (reduced pressure) is less than about one-half of the ratio of the temperature to the critical temperature (reduced temperature):"}
+{"text":"Where \"\u03c9\" is the acentric factor for the species."}
+{"text":"This formulation for formula_77 is due to Graboski and Daubert. The original formulation from Soave is:"}
+{"text":"We can also write it in the polynomial form, with:"}
+{"text":"where formula_83 is the universal gas constant and \"Z\"=\"PV\"\/(\"RT\") is the compressibility factor."}
+{"text":"In 1972 G. Soave replaced the 1\/ term of the Redlich-Kwong equation with a function \"\u03b1\"(\"T\",\"\u03c9\") involving the temperature and the acentric factor (the resulting equation is also known as the Soave-Redlich-Kwong equation of state; SRK EOS). The \"\u03b1\" function was devised to fit the vapor pressure data of hydrocarbons and the equation does fairly well for these materials."}
+{"text":"Note especially that this replacement changes the definition of \"a\" slightly, as the formula_44 is now to the second power."}
+{"text":"Volume translation of Peneloux et al. (1982)."}
+{"text":"The SRK EOS may be written as"}
+{"text":"where formula_77 and other parts of the SRK EOS is defined in the SRK EOS section."}
+{"text":"A downside of the SRK EOS, and other cubic EOS, is that the liquid molar volume is significantly less accurate than the gas molar volume. Peneloux et alios (1982) proposed a simple correction for this by introducing a volume translation"}
+{"text":"where formula_89 is an additional fluid component parameter that translates the molar volume slightly. On the liquid branch of the EOS, a small change in molar volume corresponds to a large change in pressure. On the gas branch of the EOS, a small change in molar volume corresponds to a much smaller change in pressure than for the liquid branch. Thus, the perturbation of the molar gas volume is small. Unfortunately, there are two versions that occur in science and industry."}
+{"text":"In the first version only formula_90 is translated,"}
+{"text":"In the second version both formula_90 and formula_93 are translated, or the translation of formula_90 is followed by a renaming of the composite parameter . This gives"}
+{"text":"The c-parameter of a fluid mixture is calculated by"}
+{"text":"The c-parameter of the individual fluid components in a petroleum gas and oil can be estimated by the correlation"}
+{"text":"where the Rackett compressibility factor formula_98 can be estimated by"}
+{"text":"A nice feature with the volume translation method of Peneloux et al. (1982) is that it does not affect the vapor-liquid equilibrium calculations. This method of volume translation can also be applied to other cubic EOSs if the c-parameter correlation is adjusted to match the selected EOS."}
+{"text":"where formula_104 is the acentric factor of the species, formula_83 is the universal gas constant and formula_106 is compressibility factor."}
+{"text":"The Peng\u2013Robinson equation of state (PR EOS) was developed in 1976 at The University of Alberta by Ding-Yu Peng and Donald Robinson in order to satisfy the following goals:"}
+{"text":"For the most part the Peng\u2013Robinson equation exhibits performance similar to the Soave equation, although it is generally superior in predicting the liquid densities of many materials, especially nonpolar ones. The departure functions of the Peng\u2013Robinson equation are given on a separate article."}
+{"text":"The analytic values of its characteristic constants are:"}
+{"text":"A modification to the attraction term in the Peng\u2013Robinson equation of state published by Stryjek and Vera in 1986 (PRSV) significantly improved the model's accuracy by introducing an adjustable pure component parameter and by modifying the polynomial fit of the acentric factor."}
+{"text":"where formula_111 is an adjustable pure component parameter. Stryjek and Vera published pure component parameters for many compounds of industrial interest in their original journal article. At reduced temperatures above 0.7, they recommend to set formula_112 and simply use formula_113. For alcohols and water the value of formula_114 may be used up to the critical temperature and set to zero at higher temperatures."}
+{"text":"A subsequent modification published in 1986 (PRSV2) further improved the model's accuracy by introducing two additional pure component parameters to the previous attraction term modification."}
+{"text":"where formula_111, formula_117, and formula_118 are adjustable pure component parameters."}
+{"text":"PRSV2 is particularly advantageous for VLE calculations. While PRSV1 does offer an advantage over the Peng\u2013Robinson model for describing thermodynamic behavior, it is still not accurate enough, in general, for phase equilibrium calculations. The highly non-linear behavior of phase-equilibrium calculation methods tends to amplify what would otherwise be acceptably small errors. It is therefore recommended that PRSV2 be used for equilibrium calculations when applying these models to a design. However, once the equilibrium state has been determined, the phase specific thermodynamic values at equilibrium may be determined by one of several simpler models with a reasonable degree of accuracy."}
+{"text":"One thing to note is that in the PRSV equation, the parameter fit is done in a particular temperature range which is usually below the critical temperature. Above the critical temperature, the PRSV alpha function tends to diverge and become arbitrarily large instead of tending towards 0. Because of this, alternate equations for alpha should be employed above the critical point. This is especially important for systems containing hydrogen which is often found at temperatures far above its critical point. Several alternate formulations have been proposed. Some well known ones are by Twu et al or by Mathias and Copeman."}
+{"text":"Babalola modified the Peng\u2013Robinson Equation of state as:"}
+{"text":"The attractive force parameter \u2018a\u2019, which was considered to be a constant with respect to pressure in Peng\u2013Robinson EOS. The modification, in which parameter \u2018a\u2019 was treated as a variable with respect to pressure for multicomponent multi-phase high density reservoir systems was to improve accuracy in the prediction of properties of complex reservoir fluids for PVT modeling. The variation was represented with a linear equation where a1 and a2 represent the slope and the intercept respectively of the straight line obtained when values of parameter \u2018a\u2019 are plotted against pressure."}
+{"text":"This modification increases the accuracy of Peng\u2013Robinson equation of state for heavier fluids particularly at pressure ranges (>30MPa) and eliminates the need for tuning the original Peng-Robinson equation of state. Values for a"}
+{"text":"The Elliott, Suresh, and Donohue (ESD) equation of state was proposed in 1990. The equation seeks to correct a shortcoming in the Peng\u2013Robinson EOS in that there was an inaccuracy in the van der Waals repulsive term. The EOS accounts for the effect of the shape of a non-polar molecule and can be extended to polymers with the addition of an extra term (not shown). The EOS itself was developed through modeling computer simulations and should capture the essential physics of the size, shape, and hydrogen bonding."}
+{"text":"The characteristic size parameter is related to the shape parameter formula_89 through"}
+{"text":"Noting the relationships between Boltzmann's constant and the Universal gas constant, and observing that the number of molecules can be expressed in terms of Avogadro's number and the molar mass, the reduced number density formula_127 can be expressed in terms of the molar volume as"}
+{"text":"The shape parameter formula_138 appearing in the Attraction term and the term formula_139 are given by"}
+{"text":"where formula_142 is the depth of the square-well potential and is given by"}
+{"text":"The model can be extended to associating components and mixtures of nonassociating components. Details are in the paper by J.R. Elliott, Jr. \"et al.\" (1990)."}
+{"text":"The Cubic-Plus-Association (CPA) equation of state combines the Soave-Redlich-Kwong equation with an association term from Wertheim theory. The development of the equation began in 1995 as a research project that was funded by Shell, and in 1996 an article was published which presented the CPA equation of state."}
+{"text":"In the association term formula_153 is the mole fraction of molecules not bonded at site A."}
+{"text":"where \"a\" is associated with the interaction between molecules and \"b\" takes into account the finite size of the molecules, similar to the Van der Waals equation."}
+{"text":"Although usually not the most convenient equation of state, the virial equation is important because it can be derived directly from statistical mechanics. This equation is also called the Kamerlingh Onnes equation. If appropriate assumptions are made about the mathematical form of intermolecular forces, theoretical expressions can be developed for each of the coefficients. \"A\" is the first virial coefficient, which has a constant value of 1 and makes the statement that when volume is large, all fluids behave like ideal gases. The second virial coefficient \"B\" corresponds to interactions between pairs of molecules, \"C\" to triplets, and so on. Accuracy can be increased indefinitely by considering higher order terms. The coefficients \"B\", \"C\", \"D\", etc. are functions of temperature only."}
+{"text":"One of the most accurate equations of state is that from Benedict-Webb-Rubin-Starling shown next. It was very close to a virial equation of state. If the exponential term in it is expanded to two Taylor terms, a virial equation can be derived:"}
+{"text":"Note that in this virial equation, the fourth and fifth virial terms are zero. The second virial coefficient is monotonically decreasing as temperature is lowered. The third virial coefficient is monotonically increasing as temperature is lowered."}
+{"text":"Values of the various parameters for 15 substances can be found in"}
+{"text":"The Lee-Kesler equation of state is based on the corresponding states principle, and is a modification of the BWR equation of state."}
+{"text":"Multiparameter equations of state (MEOS) can be used to represent pure fluids with high accuracy, in both the liquid and gaseous states. MEOS's represent the Helmholtz function of the fluid as the sum of ideal gas and residual terms. Both terms are explicit in reduced temperature and reduced density - thus:"}
+{"text":"The reduced density and temperature are typically, though not always, the critical values for the pure fluid."}
+{"text":"Other thermodynamic functions can be derived from the MEOS by using appropriate derivatives of the Helmholtz function; hence, because integration of the MEOS is not required, there are few restrictions as to the functional form of the ideal or residual terms. Typical MEOS use upwards of 50 fluid specific parameters, but are able to represent the fluid's properties with high accuracy. MEOS are available currently for about 50 of the most common industrial fluids including refrigerants. The IAPWS95 reference equation of state for water is also an MEOS. Mixture models for MEOS exist, as well."}
+{"text":"One example of such an equation of state is the form proposed by Span and Wagner."}
+{"text":"This is a somewhat simpler form that is intended to be used more in technical applications. Reference equations of state require a higher accuracy and use a more complicated form with more terms."}
+{"text":"When considering water under very high pressures, in situations such as underwater nuclear explosions, sonic shock lithotripsy, and sonoluminescence, the stiffened equation of state is often used:"}
+{"text":"where formula_164 is the internal energy per unit mass, formula_165 is an empirically determined constant typically taken to be about 6.1, and formula_166 is another constant, representing the molecular attraction between water molecules. The magnitude of the correction is about 2 gigapascals (20,000 atmospheres)."}
+{"text":"The equation is stated in this form because the speed of sound in water is given by formula_167."}
+{"text":"Thus water behaves as though it is an ideal gas that is \"already\" under about 20,000 atmospheres (2\u00a0GPa) pressure, and explains why water is commonly assumed to be incompressible: when the external pressure changes from 1 atmosphere to 2 atmospheres (100\u00a0kPa to 200\u00a0kPa), the water behaves as an ideal gas would when changing from 20,001 to 20,002 atmospheres (2000.1\u00a0MPa to 2000.2\u00a0MPa)."}
+{"text":"This equation mispredicts the specific heat capacity of water but few simple alternatives are available for severely nonisentropic processes such as strong shocks."}
+{"text":"An ultrarelativistic fluid has equation of state"}
+{"text":"where formula_29 is the pressure, formula_170 is the mass density, and formula_171 is the speed of sound."}
+{"text":"The equation of state for an ideal Bose gas is"}
+{"text":"where \u03b1 is an exponent specific to the system (e.g. in the absence of a potential field, \u03b1 = 3\/2), \"z\" is exp(\"\u03bc\"\/\"kT\") where \"\u03bc\" is the chemical potential, Li is the polylogarithm, \u03b6 is the Riemann zeta function, and \"T\"\"c\" is the critical temperature at which a Bose\u2013Einstein condensate begins to form."}
+{"text":"Jones\u2013Wilkins\u2013Lee equation of state for explosives (JWL equation)."}
+{"text":"The equation of state from Jones\u2013Wilkins\u2013Lee is used to describe the detonation products of explosives."}
+{"text":"The ratio formula_174 is defined by using formula_175 = density of the explosive (solid part) and formula_176 = density of the detonation products. The parameters formula_177, formula_178, formula_179, formula_180 and formula_181 are given by several references. In addition, the initial density (solid part) formula_182, speed of detonation formula_183, Chapman\u2013Jouguet pressure formula_184 and the chemical energy of the explosive formula_185 are given in such references. These parameters are obtained by fitting the JWL-EOS to experimental results. Typical parameters for some explosives are listed in the table below."}
+{"text":"Equations of state for solids and liquids."}
+{"text":"The Anton-Schmidt equation is an empirical equation of state for crystalline solids, e.g. for pure metals or intermetallic compounds."}
+{"text":"Quantum mechanical investigations of intermetallic compounds show that the dependency of the pressure under isotropic deformation can be described empirically by"}
+{"text":"Integration of formula_2 leads to equation of the state for the total energy. The energy formula_3 required to compress a solid to volume formula_4 is"}
+{"text":"The fitting parameters formula_7 and formula_8 are related to material properties, where"}
+{"text":"However, the fitting parameter formula_14 does not reproduce the total energy of the free atoms."}
+{"text":"The total energy equation is used to determine elastic and thermal material constants in quantum chemical simulation packages."}
+{"text":"In differential geometry and gauge theory, the Nahm equations are a system of ordinary differential equations introduced by Werner Nahm in the context of the \"Nahm transform\" \u2013 an alternative to Ward's twistor construction of monopoles. The Nahm equations are formally analogous to the algebraic equations in the ADHM construction of instantons, where finite order matrices are replaced by differential operators."}
+{"text":"Deep study of the Nahm equations was carried out by Nigel Hitchin and Simon Donaldson. Conceptually, the equations arise in the process of infinite-dimensional hyperk\u00e4hler reduction. Among their many applications we can mention: Hitchin's construction of monopoles, where this approach is critical for establishing nonsingularity of monopole solutions; Donaldson's description of the moduli space of monopoles; and the existence of hyperk\u00e4hler structure on coadjoint orbits of complex semisimple Lie groups, proved by Peter Kronheimer, Olivier Biquard, and A.G. Kovalev."}
+{"text":"Let \"T\"1(\"z\"),\"T\"2(\"z\"), \"T\"3(\"z\") be three matrix-valued meromorphic functions of a complex variable \"z\". The Nahm equations are a system of matrix differential equations"}
+{"text":"together with certain analyticity properties, reality conditions, and boundary conditions. The three equations can be written concisely using the Levi-Civita symbol, in the form"}
+{"text":"More generally, instead of considering \"N\" by \"N\" matrices, one can consider Nahm's equations with values in a Lie algebra g."}
+{"text":"The variable \"z\" is restricted to the open interval (0,2), and the following conditions are imposed:"}
+{"text":"The Nahm equations can be written in the Lax form as follows. Set"}
+{"text":"then the system of Nahm equations is equivalent to the Lax equation"}
+{"text":"As an immediate corollary, we obtain that the spectrum of the matrix \"A\" does not depend on \"z\". Therefore, the characteristic equation"}
+{"text":"which determines the so-called spectral curve in the twistor space \"TP\"1, is invariant under the flow in \"z\"."}
+{"text":"In mathematical physics, the quantum KZ equations or quantum Knizhnik\u2013Zamolodchikov equations or qKZ equations are the analogue for quantum affine algebras of the Knizhnik\u2013Zamolodchikov equations for affine Kac\u2013Moody algebras. They are a consistent system of difference equations satisfied by the \"N\"-point functions, the vacuum expectations of products of primary fields. In the limit as the deformation parameter \"q\" approaches 1, the \"N\"-point functions of the quantum affine algebra tend to those of the affine Kac\u2013Moody algebra and the difference equations become partial differential equations. The quantum KZ equations have been used to study exactly solved models in quantum statistical mechanics."}
+{"text":"In mathematical physics, the Gordon decomposition (named after Walter Gordon) of the Dirac current is a splitting of the charge or particle-number current into a part that arises from the motion of the center of mass of the particles and a part that arises from gradients of the spin density. It makes explicit use of the Dirac equation and so it applies only to \"on-shell\" solutions of the Dirac equation."}
+{"text":"For any solution formula_1 of the massive Dirac equation,"}
+{"text":"the Lorentz covariant number-current formula_3 may be expressed as"}
+{"text":"is the spinor generator of Lorentz transformations."}
+{"text":"The corresponding momentum-space version for plane wave solutions formula_6 and formula_7 obeying"}
+{"text":"One sees that from Dirac's equation that"}
+{"text":"and, from the conjugate of Dirac's equation,"}
+{"text":"From Dirac algebra, one may show that Dirac matrices satisfy"}
+{"text":"which amounts to just the Gordon decomposition, after some algebra."}
+{"text":"The second, spin-dependent, part of the current coupled to the photon field, formula_17 yields, up to an ignorable total divergence,"}
+{"text":"that is, an effective Pauli moment term, formula_19."}
+{"text":"This decomposition of the current into a particle number-flux (first term) and bound spin contribution (second term) requires formula_20."}
+{"text":"If one assumed that the given solution has energy formula_21 so that formula_22, one might obtain a decomposition that is valid for both massive and massless cases."}
+{"text":"Using the Dirac equation again, one finds that"}
+{"text":"where formula_28 is the vector of Pauli matrices."}
+{"text":"With the particle-number density identified with formula_29, and for a near plane-wave"}
+{"text":"solution of finite extent, one may interpret the first term in the decomposition as the current formula_30, due to particles moving at speed formula_31."}
+{"text":"The second term, formula_32 is the current due to the gradients in the intrinsic magnetic moment density. The magnetic moment itself is found by integrating by parts to show that"}
+{"text":"For a single massive particle in its rest frame, where formula_34, the magnetic moment reduces to"}
+{"text":"where formula_36 and formula_37 is the Dirac value of the gyromagnetic ratio."}
+{"text":"For a single massless particle obeying the right-handed Weyl equation, the spin-1\/2 is locked to the direction formula_38 of its kinetic momentum and the magnetic moment becomes"}
+{"text":"For the both massive and massless cases, one also has an expression for the momentum density as part of the symmetric Belinfante\u2013Rosenfeld stress\u2013energy tensor"}
+{"text":"Using the Dirac equation one may evaluate formula_41 to find the energy density to be formula_42, and the momentum density,"}
+{"text":"If one used the non-symmetric canonical energy-momentum tensor"}
+{"text":"one would not find the bound spin-momentum contribution."}
+{"text":"By an integration by parts one finds that the spin contribution to the total angular momentum is"}
+{"text":"This is what is expected, so the division by 2 in the spin contribution to the momentum density is necessary. The absence of a division by 2 in the formula for the current reflects the formula_37 gyromagnetic ratio of the electron. In other words, a spin-density gradient is twice as effective at making an electric current as it is at contributing to the linear momentum."}
+{"text":"Motivated by the Riemann\u2013Silberstein vector form of Maxwell's equations, Michael Berry uses the Gordon strategy to obtain gauge-invariant expressions for the intrinsic spin angular-momentum density for solutions to Maxwell's equations."}
+{"text":"He assumes that the solutions are monochromatic and uses the phasor expressions formula_47, formula_48. The time average of the Poynting vector momentum density is then given by"}
+{"text":"We have used Maxwell's equations in passing from the first to the second and third lines, and in expression such as formula_52 the scalar product is between the fields so that the vector character is determined by the formula_53."}
+{"text":"and for a fluid with intrinsic angular momentum density formula_55 we have"}
+{"text":"these identities suggest that the spin density can be identified as either"}
+{"text":"The two decompositions coincide when the field is paraxial. They also coincide when the field is a pure helicity state \u2013 i.e. when formula_59 where the helicity formula_60 takes the values formula_61 for light that is right or left circularly polarized respectively. In other cases they may differ."}
+{"text":"In physics, specifically relativistic quantum mechanics (RQM) and its applications to particle physics, relativistic wave equations predict the behavior of particles at high energies and velocities comparable to the speed of light. In the context of quantum field theory (QFT), the equations determine the dynamics of quantum fields."}
+{"text":"The solutions to the equations, universally denoted as or (Greek psi), are referred to as \"wave functions\" in the context of RQM, and \"fields\" in the context of QFT. The equations themselves are called \"wave equations\" or \"field equations\", because they have the mathematical form of a wave equation or are generated from a Lagrangian density and the field-theoretic Euler\u2013Lagrange equations (see classical field theory for background)."}
+{"text":"In the Schr\u00f6dinger picture, the wave function or field is the solution to the Schr\u00f6dinger equation;"}
+{"text":"one of the postulates of quantum mechanics. All relativistic wave equations can be constructed by specifying various forms of the Hamiltonian operator \"\u0124\" describing the quantum system. Alternatively, Feynman's path integral formulation uses a Lagrangian rather than a Hamiltonian operator."}
+{"text":"More generally \u2013 the modern formalism behind relativistic wave equations is Lorentz group theory, wherein the spin of the particle has a correspondence with the representations of the Lorentz group."}
+{"text":"Late 1920s: Relativistic quantum mechanics of spin-0 and spin- particles."}
+{"text":"A description of quantum mechanical systems which could account for \"relativistic\" effects was sought for by many theoretical physicists; from the late 1920s to the mid-1940s. The first basis for relativistic quantum mechanics, i.e. special relativity applied with quantum mechanics together, was found by all those who discovered what is frequently called the Klein\u2013Gordon equation:"}
+{"text":"by inserting the energy operator and momentum operator into the relativistic energy\u2013momentum relation:"}
+{"text":"The solutions to () are scalar fields. The KG equation is undesirable due to its prediction of \"negative\" energies and probabilities, as a result of the quadratic nature of () \u2013 inevitable in a relativistic theory. This equation was initially proposed by Schr\u00f6dinger, and he discarded it for such reasons, only to realize a few months later that its non-relativistic limit (what is now called the Schr\u00f6dinger equation) was still of importance. Nevertheless, \u2013 () is applicable to spin-0 bosons."}
+{"text":"Neither the non-relativistic nor relativistic equations found by Schr\u00f6dinger could predict the fine structure in the Hydrogen spectral series. The mysterious underlying property was \"spin\". The first two-dimensional \"spin matrices\" (better known as the Pauli matrices) were introduced by Pauli in the Pauli equation; the Schr\u00f6dinger equation with a non-relativistic Hamiltonian including an extra term for particles in magnetic fields, but this was \"phenomenological\". Weyl found a relativistic equation in terms of the Pauli matrices; the Weyl equation, for \"massless\" spin- fermions. The problem was resolved by Dirac in the late 1920s, when he furthered the application of equation () to the electron \u2013 by various manipulations he factorized the equation into the form:"}
+{"text":"and one of these factors is the Dirac equation (see below), upon inserting the energy and momentum operators. For the first time, this introduced new four-dimensional spin matrices and in a relativistic wave equation, and explained the fine structure of hydrogen. The solutions to () are multi-component spinor fields, and each component satisfies (). A remarkable result of spinor solutions is that half of the components describe a particle while the other half describe an antiparticle; in this case the electron and positron. The Dirac equation is now known to apply for all massive spin- fermions. In the non-relativistic limit, the Pauli equation is recovered, while the massless case results in the Weyl equation."}
+{"text":"Although a landmark in quantum theory, the Dirac equation is only true for spin- fermions, and still predicts negative energy solutions, which caused controversy at the time (in particular \u2013 not all physicists were comfortable with the \"Dirac sea\" of negative energy states)."}
+{"text":"1930s\u20131960s: Relativistic quantum mechanics of higher-spin particles."}
+{"text":"The natural problem became clear: to generalize the Dirac equation to particles with \"any spin\"; both fermions and bosons, and in the same equations their antiparticles (possible because of the spinor formalism introduced by Dirac in his equation, and then-recent developments in spinor calculus by van der Waerden in 1929), and ideally with positive energy solutions."}
+{"text":"This was introduced and solved by Majorana in 1932, by a deviated approach to Dirac. Majorana considered one \"root\" of ():"}
+{"text":"where is a spinor field now with infinitely many components, irreducible to a finite number of tensors or spinors, to remove the indeterminacy in sign. The matrices and are infinite-dimensional matrices, related to infinitesimal Lorentz transformations. He did not demand that each component of to satisfy equation (), instead he regenerated the equation using a Lorentz-invariant action, via the principle of least action, and application of Lorentz group theory."}
+{"text":"Majorana produced other important contributions that were unpublished, including wave equations of various dimensions (5, 6, and 16). They were anticipated later (in a more involved way) by de Broglie (1934), and Duffin, Kemmer, and Petiau (around 1938\u20131939) see Duffin\u2013Kemmer\u2013Petiau algebra. The Dirac\u2013Fierz\u2013Pauli formalism was more sophisticated than Majorana\u2019s, as spinors were new mathematical tools in the early twentieth century, although Majorana\u2019s paper of 1932 was difficult to fully understand; it took Pauli and Wigner some time to understand it, around 1940."}
+{"text":"Dirac in 1936, and Fierz and Pauli in 1939, built equations from irreducible spinors and , symmetric in all indices, for a massive particle of spin for integer (see Van der Waerden notation for the meaning of the dotted indices):"}
+{"text":"where is the momentum as a covariant spinor operator. For , the equations reduce to the coupled Dirac equations and and together transform as the original Dirac spinor. Eliminating either or shows that and each fulfill ()."}
+{"text":"In 1941, Rarita and Schwinger focussed on spin- particles and derived the Rarita\u2013Schwinger equation, including a Lagrangian to generate it, and later generalized the equations analogous to spin for integer . In 1945, Pauli suggested Majorana's 1932 paper to Bhabha, who returned to the general ideas introduced by Majorana in 1932. Bhabha and Lubanski proposed a completely general set of equations by replacing the mass terms in () and () by an arbitrary constant, subject to a set of conditions which the wave functions must obey."}
+{"text":"Finally, in the year 1948 (the same year as Feynman's path integral formulation was cast), Bargmann and Wigner formulated the general equation for massive particles which could have any spin, by considering the Dirac equation with a totally symmetric finite-component spinor, and using Lorentz group theory (as Majorana did): the Bargmann\u2013Wigner equations. In the early 1960s, a reformulation of the Bargmann\u2013Wigner equations was made by H. Joos and Steven Weinberg, the Joos\u2013Weinberg equation. Various theorists at this time did further research in relativistic Hamiltonians for higher spin particles."}
+{"text":"The relativistic description of spin particles has been a difficult problem in quantum theory. It is still an area of the present-day research because the problem is only partially solved; including interactions in the equations is problematic, and paradoxical predictions (even from the Dirac equation) are still present."}
+{"text":"The following equations have solutions which satisfy the superposition principle, that is, the wave functions are additive."}
+{"text":"Throughout, the standard conventions of tensor index notation and Feynman slash notation are used, including Greek indices which take the values 1, 2, 3 for the spatial components and 0 for the timelike component of the indexed quantities. The wave functions are denoted \", and are the components of the four-gradient operator."}
+{"text":"In matrix equations, the Pauli matrices are denoted by \" in which , where is the identity matrix:"}
+{"text":"and the other matrices have their usual representations. The expression"}
+{"text":"is a matrix operator which acts on 2-component spinor fields."}
+{"text":"The gamma matrices are denoted by \"\", in which again , and there are a number of representations to select from. The matrix is \"not\" necessarily the identity matrix. The expression"}
+{"text":"is a matrix operator which acts on 4-component spinor fields."}
+{"text":"Note that terms such as \"\" scalar multiply an identity matrix of the relevant dimension, the common sizes are or , and are \"conventionally\" not written for simplicity."}
+{"text":"The Duffin\u2013Kemmer\u2013Petiau equation is an alternative equation for spin-0 and spin-1 particles:"}
+{"text":"Start with the standard special relativity (SR) 4-vectors"}
+{"text":"Note that each 4-vector is related to another by a Lorentz scalar:"}
+{"text":"Now, just apply the standard Lorentz scalar product rule to each one:"}
+{"text":"The last equation is a fundamental quantum relation."}
+{"text":"When applied to a Lorentz scalar field formula_21, one gets the Klein\u2013Gordon equation, the most basic of the quantum relativistic wave equations."}
+{"text":"The Schr\u00f6dinger equation is the low-velocity limiting case (\"v\"\u00a0\u00ab\u00a0\"c\") of the Klein\u2013Gordon equation."}
+{"text":"When the relation is applied to a four-vector field formula_25 instead of a Lorentz scalar field formula_21, then one gets the Proca equation (in Lorenz gauge):"}
+{"text":"If the rest mass term is set to zero (light-like particles), then this gives the free Maxwell equation (in Lorenz gauge)"}
+{"text":"Under a proper orthochronous Lorentz transformation in Minkowski space, all one-particle quantum states of spin with spin z-component locally transform under some representation of the Lorentz group:"}
+{"text":"where is some finite-dimensional representation, i.e. a matrix. Here is thought of as a column vector containing components with the allowed values of . The quantum numbers and as well as other labels, continuous or discrete, representing other quantum numbers are suppressed. One value of may occur more than once depending on the representation. Representations with several possible values for are considered below."}
+{"text":"The irreducible representations are labeled by a pair of half-integers or integers . From these all other representations can be built up using a variety of standard methods, like taking tensor products and direct sums. In particular, space-time itself constitutes a 4-vector representation so that . To put this into context; Dirac spinors transform under the representation. In general, the representation space has subspaces that under the subgroup of spatial rotations, SO(3), transform irreducibly like objects of spin \"j\", where each allowed value:"}
+{"text":"occurs exactly once. In general, \"tensor products of irreducible representations\" are reducible; they decompose as direct sums of irreducible representations."}
+{"text":"The representations and can each separately represent particles of spin . A state or quantum field in such a representation would satisfy no field equation except the Klein\u2013Gordon equation."}
+{"text":"There are equations which have solutions that do not satisfy the superposition principle."}
+{"text":"In physics, the acoustic wave equation governs the propagation of acoustic waves through a material medium. The form of the equation is a second order partial differential equation. The equation describes the evolution of acoustic pressure formula_1 or particle velocity u as a function of position x and time formula_2. A simplified form of the equation describes acoustic waves in only one spatial dimension, while a more general form describes waves in three dimensions."}
+{"text":"For lossy media, more intricate models need to be applied in order to take into account frequency-dependent attenuation and phase speed. Such models include acoustic wave equations that incorporate fractional derivative terms, see also the acoustic attenuation article or the survey paper."}
+{"text":"The wave equation describing sound in one dimension (position formula_3) is"}
+{"text":"where formula_1 is the acoustic pressure (the local decoration the from the ambient pressure), and where formula_6 is the speed of sound.same"}
+{"text":"Provided that the speed formula_6 is a constant, not dependent on frequency (the dispersionless case), then the most general solution is"}
+{"text":"where formula_9 and formula_10 are any two twice-differentiable functions. This may be pictured as the superposition of two waveforms of arbitrary profile, one (formula_9) travelling up the x-axis and the other (formula_10) down the x-axis at the speed formula_6. The particular case of a sinusoidal wave travelling in one direction is obtained by choosing either formula_9 or formula_10 to be a sinusoid, and the other to be zero, giving"}
+{"text":"where formula_17 is the angular frequency of the wave and formula_18 is its wave number."}
+{"text":"The derivation of the wave equation involves three steps: derivation of the equation of state, the linearized one-dimensional continuity equation, and the linearized one-dimensional force equation."}
+{"text":"The equation of state (ideal gas law)"}
+{"text":"In an adiabatic process, pressure \"P\" as a function of density formula_20 can be linearized to"}
+{"text":"where \"C\" is some constant. Breaking the pressure and density into their mean and total components and noting that formula_22:"}
+{"text":"The adiabatic bulk modulus for a fluid is defined as"}
+{"text":"Condensation, \"s\", is defined as the change in density for a given ambient fluid density."}
+{"text":"The continuity equation (conservation of mass) in one dimension is"}
+{"text":"Where \"u\" is the flow velocity of the fluid."}
+{"text":"Again the equation must be linearized and the variables split into mean and variable components."}
+{"text":"Rearranging and noting that ambient density changes with neither time nor position and that the condensation multiplied by the velocity is a very small number:"}
+{"text":"Euler's Force equation (conservation of momentum) is the last needed component. In one dimension the equation is:"}
+{"text":"where formula_33 represents the convective, substantial or material derivative, which is the derivative at a point moving along with the medium rather than at a fixed point."}
+{"text":"Rearranging and neglecting small terms, the resultant equation becomes the linearized one-dimensional Euler Equation:"}
+{"text":"Taking the time derivative of the continuity equation and the spatial derivative of the force equation results in:"}
+{"text":"Multiplying the first by formula_38, subtracting the two, and substituting the linearized equation of state,"}
+{"text":"where formula_41 is the speed of propagation."}
+{"text":"Feynman provides a derivation of the wave equation for sound in three dimensions as"}
+{"text":"where formula_43 is the Laplace operator, formula_1 is the acoustic pressure (the local deviation from the ambient pressure), and formula_6 is the speed of sound."}
+{"text":"A similar looking wave equation but for the vector field particle velocity is given by"}
+{"text":"In some situations, it is more convenient to solve the wave equation for an abstract scalar field velocity potential which has the form"}
+{"text":"and then derive the physical quantities particle velocity and acoustic pressure by the equations (or definition, in the case of particle velocity):"}
+{"text":"The following solutions are obtained by separation of variables in different coordinate systems. They are phasor solutions, that is they have an implicit time-dependence factor of formula_50 where formula_51 is the angular frequency. The explicit time dependence is given by"}
+{"text":"where the asymptotic approximations to the Hankel functions, when formula_56, are"}
+{"text":"Depending on the chosen Fourier convention, one of these represents an outward travelling wave and the other a nonphysical inward travelling wave. The inward travelling solution wave is only nonphysical because of the singularity that occurs at r=0; inward travelling waves do exist."}
+{"text":"Sound pressure or acoustic pressure is the local pressure deviation from the ambient (average or equilibrium) atmospheric pressure, caused by a sound wave. In air, sound pressure can be measured using a microphone, and in water with a hydrophone. The SI unit of sound pressure is the pascal (Pa)."}
+{"text":"A sound wave in a transmission medium causes a deviation (sound pressure, a \"dynamic\" pressure) in the local ambient pressure, a \"static\" pressure."}
+{"text":"Sound pressure, denoted \"p\", is defined by"}
+{"text":"In a sound wave, the complementary variable to sound pressure is the particle velocity. Together, they determine the sound intensity of the wave."}
+{"text":"\"Sound intensity\", denoted I and measured in W\u00b7m\u22122 in SI units, is defined by"}
+{"text":"\"Acoustic impedance\", denoted \"Z\" and measured in Pa\u00b7m\u22123\u00b7s in SI units, is defined by"}
+{"text":"\"Specific acoustic impedance\", denoted \"z\" and measured in Pa\u00b7m\u22121\u00b7s in SI units, is defined by"}
+{"text":"The \"particle displacement\" of a \"progressive sine wave\" is given by"}
+{"text":"It follows that the particle velocity and the sound pressure along the direction of propagation of the sound wave \"x\" are given by"}
+{"text":"Taking the Laplace transforms of \"v\" and \"p\" with respect to time yields"}
+{"text":"Since formula_18, the amplitude of the specific acoustic impedance is given by"}
+{"text":"Consequently, the amplitude of the particle displacement is related to that of the acoustic velocity and the sound pressure by"}
+{"text":"When measuring the sound pressure created by a sound source, it is important to measure the distance from the object as well, since the sound pressure of a \"spherical\" sound wave decreases as 1\/\"r\" from the centre of the sphere (and not as 1\/\"r\"2, like the sound intensity):"}
+{"text":"If the sound pressure \"p\"1 is measured at a distance \"r\"1 from the centre of the sphere, the sound pressure \"p\"2 at another position \"r\"2 can be calculated:"}
+{"text":"The inverse-proportional law for sound pressure comes from the inverse-square law for sound intensity:"}
+{"text":"The sound pressure may vary in direction from the centre of the sphere as well, so measurements at different angles may be necessary, depending on the situation. An obvious example of a sound source whose spherical sound wave varies in level in different directions is a bullhorn."}
+{"text":"Sound pressure level (SPL) or acoustic pressure level is a logarithmic measure of the effective pressure of a sound relative to a reference value."}
+{"text":"Sound pressure level, denoted \"L\"\"p\" and measured in dB, is defined by"}
+{"text":"The commonly used reference sound pressure in air is"}
+{"text":"which is often considered as the threshold of human hearing (roughly the sound of a mosquito flying 3\u00a0m away). The proper notations for sound pressure level using this reference are or , but the suffix notations , , dBSPL, or dBSPL are very common, even if they are not accepted by the SI."}
+{"text":"Most sound-level measurements will be made relative to this reference, meaning will equal an SPL of . In other media, such as underwater, a reference level of is used. These references are defined in ANSI S1.1-2013."}
+{"text":"The main instrument for measuring sound levels in the environment is the sound level meter. Most sound level meters provide readings in A, C, and Z-weighted decibels and must meet international standards such as IEC 61672-2013."}
+{"text":"The lower limit of audibility is defined as SPL of , but the upper limit is not as clearly defined. While ( or ) is the largest pressure variation an undistorted sound wave can have in Earth's atmosphere (i.e. if the thermodynamic properties of the air are disregarded, in reality the sound wave become progressively non-linear starting over 150\u00a0dB), larger sound waves can be present in other atmospheres or other media such as under water or through the Earth."}
+{"text":"Ears detect changes in sound pressure. Human hearing does not have a flat spectral sensitivity (frequency response) relative to frequency versus amplitude. Humans do not perceive low- and high-frequency sounds as well as they perceive sounds between 3,000 and 4,000\u00a0Hz, as shown in the equal-loudness contour. Because the frequency response of human hearing changes with amplitude, three weightings have been established for measuring sound pressure: A, B and C. A-weighting applies to sound pressures levels up to , B-weighting applies to sound pressures levels between and , and C-weighting is for measuring sound pressure levels above ."}
+{"text":"In order to distinguish the different sound measures, a suffix is used: A-weighted sound pressure level is written either as dBA or LA. B-weighted sound pressure level is written either as dBB or LB, and C-weighted sound pressure level is written either as dBC or LC. Unweighted sound pressure level is called \"linear sound pressure level\" and is often written as dBL or just L. Some sound measuring instruments use the letter \"Z\" as an indication of linear SPL."}
+{"text":"According to the inverse proportional law, when sound level \"L\"\"p\"1 is measured at a distance \"r\"1, the sound level \"L\"\"p\"2 at the distance \"r\"2 is"}
+{"text":"The formula for the sum of the sound pressure levels of \"n\" incoherent radiating sources is"}
+{"text":"in the formula for the sum of the sound pressure levels yields"}
+{"text":"A Dynamical Theory of the Electromagnetic Field"}
+{"text":"\"A Dynamical Theory of the Electromagnetic Field\" is a paper by James Clerk Maxwell on electromagnetism, published in 1865. In the paper, Maxwell derives an electromagnetic wave equation with a velocity for light in close agreement with measurements made by experiment, and deduces that light is an electromagnetic wave."}
+{"text":"In part III of the paper, which is entitled \"General Equations of the Electromagnetic Field\", Maxwell formulated twenty equations which were to become known as Maxwell's equations, until this term became applied instead to a vectorized set of four equations selected in 1884, which had all appeared in \"On Physical Lines of Force\"."}
+{"text":"Heaviside's versions of Maxwell's equations are distinct by virtue of the fact that they are written in modern vector notation. They actually only contain one of the original eight\u2014equation \"G\" (Gauss's Law). Another of Heaviside's four equations is an amalgamation of Maxwell's law of total currents (equation \"A\") with Amp\u00e8re's circuital law (equation \"C\"). This amalgamation, which Maxwell himself had actually originally made at equation (112) in \"On Physical Lines of Force\", is the one that modifies Amp\u00e8re's Circuital Law to include Maxwell's displacement current."}
+{"text":"Eighteen of Maxwell's twenty original equations can be vectorized into six equations, labeled (A) to (F) below, each of which represents a group of three original equations in component form. The 19th and 20th of Maxwell's component equations appear as (G) and (H) below, making a total of eight vector equations. These are listed below in Maxwell's original order, designated by the letters that Maxwell assigned to them in his 1864 paper."}
+{"text":"Maxwell did not consider completely general materials; his initial formulation used linear, isotropic, nondispersive media with permittivity \"\u03f5\" and permeability \"\u03bc\", although he also discussed the possibility of anisotropic materials."}
+{"text":"Gauss's law for magnetism () is not included in the above list, but follows directly from equation\u00a0(B) by taking divergences (because the divergence of the curl is zero)."}
+{"text":"Substituting (A) into (C) yields the familiar differential form of the ."}
+{"text":"Equation (D) implicitly contains the Lorentz force law and the differential form of Faraday's law of induction. For a \"static\" magnetic field, formula_11 vanishes, and the electric field becomes conservative and is given by , so that (D) reduces to"}
+{"text":"This is simply the Lorentz force law on a per-unit-charge basis \u2014 although Maxwell's equation\u00a0(D) first appeared at equation (77) in \"On Physical Lines of Force\" in 1861, 34\u00a0years before Lorentz derived his force law, which is now usually presented as a supplement to the four \"Maxwell's equations\". The cross-product term in the Lorentz force law is the source of the so-called \"motional emf\" in electric generators (see also \"Moving magnet and conductor problem\"). Where there is no motion through the magnetic field \u2014 e.g., in transformers \u2014 we can drop the cross-product term, and the force per unit charge (called ) reduces to the electric field , so that Maxwell's equation\u00a0(D) reduces to"}
+{"text":"Taking curls, noting that the curl of a gradient is zero, we obtain"}
+{"text":"which is the differential form of Faraday's law. Thus the three terms on the right side of equation\u00a0(D) may be described, from left to right, as the motional term, the transformer term, and the conservative term."}
+{"text":"In deriving the electromagnetic wave equation, Maxwell considers the situation only from the rest frame of the medium, and accordingly drops the cross-product term. But he still works from equation\u00a0(D), in contrast to modern textbooks which tend to work from Faraday's law (see below)."}
+{"text":"The constitutive equations (E) and (F) are now usually written in the rest frame of the medium as and ."}
+{"text":"Maxwell's equation (G), viewed in isolation as printed in the 1864 paper, at first seems to say that .\u00a0 However, if we trace the signs through the previous two triplets of equations, we see that what seem to be the components of are in fact the components of\u00a0. The notation used in Maxwell's later \"Treatise on Electricity and Magnetism\" is different, and avoids the misleading first impression."}
+{"text":"In part VI of \"A Dynamical Theory of the Electromagnetic Field\", subtitled \"Electromagnetic theory of light\", Maxwell uses the correction to Amp\u00e8re's Circuital Law made in part III of his 1862 paper, \"On Physical Lines of Force\", which is defined as displacement current, to derive the electromagnetic wave equation."}
+{"text":"He obtained a wave equation with a speed in close agreement to experimental determinations of the speed of light. He commented,"}
+{"text":"Maxwell's derivation of the electromagnetic wave equation has been replaced in modern physics by a much less cumbersome method which combines the corrected version of Amp\u00e8re's Circuital Law with Faraday's law of electromagnetic induction."}
+{"text":"To obtain the electromagnetic wave equation in a vacuum using the modern method, we begin with the modern 'Heaviside' form of Maxwell's equations. Using (SI units) in a vacuum, these equations are"}
+{"text":"If we take the curl of the curl equations we obtain"}
+{"text":"where formula_20 is any vector function of space, we recover the wave equations"}
+{"text":"is the speed of light in free space."}
+{"text":"Of this paper and Maxwell's related works, fellow physicist Richard Feynman said: \"From the long view of this history of mankind \u2013 seen from, say, 10,000 years from now \u2013 there can be little doubt that the most significant event of the 19th century will be judged as Maxwell's discovery of the laws of electromagnetism.\""}
+{"text":"Albert Einstein used Maxwell's equations as the starting point for his special theory of relativity, presented in \"The Electrodynamics of Moving Bodies\", one of Einstein's 1905 \"Annus Mirabilis\" papers. In it is stated:"}
+{"text":"Maxwell's equations can also be derived by extending general relativity into five physical dimensions."}
+{"text":"Electromagnetic or magnetic induction is the production of an electromotive force across an electrical conductor in a changing magnetic field."}
+{"text":"Michael Faraday is generally credited with the discovery of induction in 1831, and James Clerk Maxwell mathematically described it as Faraday's law of induction. Lenz's law describes the direction of the induced field. Faraday's law was later generalized to become the Maxwell\u2013Faraday equation, one of the four Maxwell equations in his theory of electromagnetism."}
+{"text":"Electromagnetic induction has found many applications, including electrical components such as inductors and transformers, and devices such as electric motors and generators."}
+{"text":"Electromagnetic induction was discovered by Michael Faraday, who published his findings in 1831. It was discovered independently by Joseph Henry in 1832."}
+{"text":"Faraday explained electromagnetic induction using a concept he called lines of force. However, scientists at the time widely rejected his theoretical ideas, mainly because they were not formulated mathematically. An exception was James Clerk Maxwell, who used Faraday's ideas as the basis of his quantitative electromagnetic theory. In Maxwell's model, the time varying aspect of electromagnetic induction is expressed as a differential equation, which Oliver Heaviside referred to as Faraday's law even though it is slightly different from Faraday's original formulation and does not describe motional EMF. Heaviside's version (see Maxwell\u2013Faraday equation below) is the form recognized today in the group of equations known as Maxwell's equations."}
+{"text":"In 1834 Heinrich Lenz formulated the law named after him to describe the \"flux through the circuit\". Lenz's law gives the direction of the induced EMF and current resulting from electromagnetic induction."}
+{"text":"Faraday's law of induction and Lenz's law."}
+{"text":"Faraday's law of induction makes use of the magnetic flux \u03a6B through a region of space enclosed by a wire loop. The magnetic flux is defined by a surface integral:"}
+{"text":"where \"d\"A is an element of the surface \u03a3 enclosed by the wire loop, and B is the magnetic field. The dot product B\u00b7\"d\"A corresponds to an infinitesimal amount of magnetic flux. In more visual terms, the magnetic flux through the wire loop is proportional to the number of magnetic flux lines that pass through the loop."}
+{"text":"When the flux through the surface changes, Faraday's law of induction says that the wire loop acquires an electromotive force (EMF). The most widespread version of this law states that the induced electromotive force in any closed circuit is equal to the rate of change of the magnetic flux enclosed by the circuit:"}
+{"text":"where formula_3 is the EMF and \u03a6B is the magnetic flux. The direction of the electromotive force is given by Lenz's law which states that an induced current will flow in the direction that will oppose the change which produced it. This is due to the negative sign in the previous equation. To increase the generated EMF, a common approach is to exploit flux linkage by creating a tightly wound coil of wire, composed of \"N\" identical turns, each with the same magnetic flux going through them. The resulting EMF is then \"N\" times that of one single wire."}
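As a concrete illustration of the flux-linkage rule, a sketch with hypothetical numbers (a 100-turn coil whose flux per turn rises linearly by 1 mWb over 10 ms):

```python
# Hypothetical example values, not taken from the text
N = 100          # number of identical turns in the coil
d_phi = 1e-3     # change in magnetic flux through one turn, Wb
d_t = 10e-3      # time over which the flux changes, s

# Faraday's law with flux linkage: EMF = -N * dPhi/dt
emf = -N * d_phi / d_t
print(emf)  # -10.0 V; the minus sign encodes Lenz's law
```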
+{"text":"Generating an EMF through a variation of the magnetic flux through the surface of a wire loop can be achieved in several ways:"}
+{"text":"In general, the relation between the EMF formula_5 in a wire loop encircling a surface \u03a3, and the electric field E in the wire is given by"}
+{"text":"where \"d\"\u2113 is an element of the contour of the surface \u03a3. Combining this with the definition of flux"}
+{"text":"we can write the integral form of the Maxwell\u2013Faraday equation"}
+{"text":"It is one of the four Maxwell's equations, and therefore plays a fundamental role in the theory of classical electromagnetism."}
+{"text":"Faraday's law describes two different phenomena: the \"motional EMF\" generated by a magnetic force on a moving wire (see Lorentz force), and the \"transformer EMF\" which is generated by an electric force due to a changing magnetic field (due to the differential form of the Maxwell\u2013Faraday equation). James Clerk Maxwell drew attention to the separate physical phenomena in 1861. This is believed to be a unique example in physics of where such a fundamental law is invoked to explain two such different phenomena."}
+{"text":"Albert Einstein noticed that the two situations both corresponded to a relative movement between a conductor and a magnet, and the outcome was unaffected by which one was moving. This was one of the principal paths that led him to develop special relativity."}
+{"text":"The principles of electromagnetic induction are applied in many devices and systems, including:"}
+{"text":"The EMF generated by Faraday's law of induction due to relative movement of a circuit and a magnetic field is the phenomenon underlying electrical generators. When a permanent magnet is moved relative to a conductor, or vice versa, an electromotive force is created. If the wire is connected through an electrical load, current will flow, and thus electrical energy is generated, converting the mechanical energy of motion to electrical energy. For example, the \"drum generator\" is based upon the figure to the bottom-right. A different implementation of this idea is the Faraday's disc, shown in simplified form on the right."}
+{"text":"When the electric current in a loop of wire changes, the changing current creates a changing magnetic field. A second wire in reach of this magnetic field will experience this change in magnetic field as a change in its coupled magnetic flux, \"d\" \u03a6B \/ \"d t\". Therefore, an electromotive force is set up in the second loop called the induced EMF or transformer EMF. If the two ends of this loop are connected through an electrical load, current will flow."}
+{"text":"A current clamp is a type of transformer with a split core which can be spread apart and clipped onto a wire or coil to either measure the current in it or, in reverse, to induce a voltage. Unlike conventional instruments the clamp does not make electrical contact with the conductor or require it to be disconnected during attachment of the clamp."}
+{"text":"Faraday's law is used for measuring the flow of electrically conductive liquids and slurries. Such instruments are called magnetic flow meters. The induced voltage \u2107 generated in the magnetic field \"B\" due to a conductive liquid moving at velocity \"v\" is thus given by:"}
+{"text":"where \u2113 is the distance between electrodes in the magnetic flow meter."}
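A sketch of the flow-meter relation \u2107 = B\u2113v, with made-up values for the field, electrode spacing, and liquid velocity:

```python
# Hypothetical magnetic flow meter, illustrative values only
B = 0.1    # magnetic field strength, T
l = 0.05   # distance between the electrodes, m
v = 2.0    # velocity of the conductive liquid, m/s

# Induced voltage from Faraday's law for a moving conductor
voltage = B * l * v
print(voltage)  # about 0.01 V (10 mV)
```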
+{"text":"Electrical conductors moving through a steady magnetic field, or stationary conductors within a changing magnetic field, will have circular currents induced within them by induction, called eddy currents. Eddy currents flow in closed loops in planes perpendicular to the magnetic field. They have useful applications in eddy current brakes and induction heating systems. However eddy currents induced in the metal magnetic cores of transformers and AC motors and generators are undesirable since they dissipate energy (called core losses) as heat in the resistance of the metal. Cores for these devices use a number of methods to reduce eddy currents:"}
+{"text":"Eddy currents occur when a solid metallic mass is rotated in a magnetic field, because the outer portion of the metal cuts more magnetic lines of force than the inner portion; hence the induced electromotive force is not uniform; this tends to cause electric currents between the points of greatest and least potential. Eddy currents consume a considerable amount of energy and often cause a harmful rise in temperature."}
+{"text":"Only five laminations or plates are shown in this example, so as to show the subdivision of the eddy currents. In practical use, the number of laminations or punchings ranges from 40 to 66 per inch (16 to 26 per centimetre), and brings the eddy current loss down to about one percent. While the plates can be separated by insulation, the voltage is so low that the natural rust\/oxide coating of the plates is enough to prevent current flow across the laminations."}
+{"text":"This is a rotor approximately 20\u00a0mm in diameter from a DC motor. Note the laminations of the electromagnet pole pieces, used to limit parasitic inductive losses."}
+{"text":"In this illustration, a solid copper bar conductor on a rotating armature is just passing under the tip of the pole piece N of the field magnet. Note the uneven distribution of the lines of force across the copper bar. The magnetic field is more concentrated and thus stronger on the left edge of the copper bar (a,b) while the field is weaker on the right edge (c,d). Since the two edges of the bar move with the same velocity, this difference in field strength across the bar creates whorls or current eddies within the copper bar."}
+{"text":"High current power-frequency devices, such as electric motors, generators and transformers, use multiple small conductors in parallel to break up the eddy flows that can form within large solid conductors. The same principle is applied to transformers used at higher than power frequency, for example, those used in switch-mode power supplies and the intermediate frequency coupling transformers of radio receivers."}
+{"text":"In physics (specifically in electromagnetism) the Lorentz force (or electromagnetic force) is the combination of electric and magnetic force on a point charge due to electromagnetic fields. A particle of charge \"q\" moving with a velocity v in an electric field E and a magnetic field B experiences a force of"}
+{"text":"(in SI units). It says that the electromagnetic force on a charge is a combination of a force in the direction of the electric field proportional to the magnitude of the field and the quantity of charge, and a force at right angles to the magnetic field and the velocity of the charge, proportional to the magnitude of the field, the charge, and the velocity. Variations on this basic formula describe the magnetic force on a current-carrying wire (sometimes called Laplace force), the electromotive force in a wire loop moving through a magnetic field (an aspect of Faraday's law of induction), and the force on a moving charged particle."}
+{"text":"Historians suggest that the law is implicit in a paper by James Clerk Maxwell, published in 1865. Hendrik Lorentz arrived at a complete derivation in 1895, identifying the contribution of the electric force a few years after Oliver Heaviside correctly identified the contribution of the magnetic force."}
+{"text":"Lorentz force law as the definition of E and B."}
+{"text":"In many textbook treatments of classical electromagnetism, the Lorentz force law is used as the \"definition\" of the electric and magnetic fields E and B. To be specific, the Lorentz force is understood to be the following empirical statement:"}
+{"text":"This is valid even for particles approaching the speed of light (that is, magnitude of v, |v| \u2248 \"c\"). So the two vector fields E and B are thereby defined throughout space and time, and these are called the \"electric field\" and \"magnetic field\". The fields are defined everywhere in space and time by the force a test charge would receive, regardless of whether a charge is present to experience the force."}
+{"text":"As a definition of E and B, the Lorentz force is only a definition in principle because a real particle (as opposed to the hypothetical \"test charge\" of infinitesimally-small mass and charge) would generate its own finite E and B fields, which would alter the electromagnetic force that it experiences. In addition, if the charge experiences acceleration, as if forced into a curved trajectory, it emits radiation that causes it to lose kinetic energy. See for example Bremsstrahlung and synchrotron light. These effects occur through both a direct effect (called the radiation reaction force) and indirectly (by affecting the motion of nearby charges and currents)."}
+{"text":"The force F acting on a particle of electric charge \"q\" with instantaneous velocity v, due to an external electric field E and magnetic field B, is given by (in SI units):"}
+{"text":"where \u00d7 is the vector cross product (all boldface quantities are vectors). In terms of Cartesian components, we have:"}
+{"text":"In general, the electric and magnetic fields are functions of the position and time. Therefore, explicitly, the Lorentz force can be written as:"}
+{"text":"in which r is the position vector of the charged particle, \"t\" is time, and the overdot is a time derivative."}
+{"text":"A positively charged particle will be accelerated in the \"same\" linear orientation as the E field, but will curve perpendicularly to both the instantaneous velocity vector v and the B field according to the right-hand rule (in detail, if the fingers of the right hand are extended to point in the direction of v and are then curled to point in the direction of B, then the extended thumb will point in the direction of F)."}
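The component form can be evaluated directly with a cross product; a minimal sketch with assumed field and velocity values:

```python
import numpy as np

q = 1.602e-19                      # charge, C (about one elementary charge)
E = np.array([0.0, 0.0, 1.0e3])    # electric field, V/m (assumed values)
B = np.array([0.0, 0.5, 0.0])      # magnetic field, T (assumed values)
v = np.array([1.0e5, 0.0, 0.0])    # particle velocity, m/s

# Lorentz force: F = q (E + v x B)
F = q * (E + np.cross(v, B))
print(F)  # for these values, v x B adds to E and the force points along +z
```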
+{"text":"The term \"q\"E is called the electric force, while the term \"q\"(v \u00d7 B) is called the magnetic force. According to some definitions, the term \"Lorentz force\" refers specifically to the formula for the magnetic force, with the \"total\" electromagnetic force (including the electric force) given some other (nonstandard) name. This article will \"not\" follow this nomenclature: In what follows, the term \"Lorentz force\" will refer to the expression for the total force."}
+{"text":"The magnetic force component of the Lorentz force manifests itself as the force that acts on a current-carrying wire in a magnetic field. In that context, it is also called the Laplace force."}
+{"text":"The Lorentz force is a force exerted by the electromagnetic field on the charged particle, that is, it is the rate at which linear momentum is transferred from the electromagnetic field to the particle. Associated with it is the power which is the rate at which energy is transferred from the electromagnetic field to the particle. That power is"}
+{"text":"Notice that the magnetic field does not contribute to the power because the magnetic force is always perpendicular to the velocity of the particle."}
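That the magnetic term never contributes to the power can be checked numerically: for any v and B, (qv \u00d7 B) \u00b7 v vanishes. A small sketch with random values, purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility
q = 2.0                         # arbitrary test charge

for _ in range(5):
    v = rng.normal(size=3)      # random velocity
    B = rng.normal(size=3)      # random magnetic field
    F_mag = q * np.cross(v, B)  # magnetic part of the Lorentz force
    # F_mag is perpendicular to v, so it transfers no power to the particle
    print(np.dot(F_mag, v))     # zero up to floating-point rounding
```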
+{"text":"For a continuous charge distribution in motion, the Lorentz force equation becomes:"}
+{"text":"where formula_9 is the force on a small piece of the charge distribution with charge formula_10. If both sides of this equation are divided by the volume of this small piece of the charge distribution formula_11, the result is:"}
+{"text":"where formula_13 is the \"force density\" (force per unit volume) and formula_14 is the charge density (charge per unit volume). Next, the current density corresponding to the motion of the charge continuum is"}
+{"text":"so the continuous analogue to the equation is"}
+{"text":"The total force is the volume integral over the charge distribution:"}
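For uniform fields and a uniform charge distribution, the volume integral reduces to multiplying the force density by the volume; a sketch with invented values:

```python
import numpy as np

# Invented, uniform values for illustration only
rho = 1e-6                       # charge density, C/m^3
J = np.array([10.0, 0.0, 0.0])   # current density, A/m^2
E = np.array([0.0, 0.0, 100.0])  # electric field, V/m
B = np.array([0.0, 0.2, 0.0])    # magnetic field, T
volume = 1e-3                    # volume of the charge distribution, m^3

# Force density f = rho*E + J x B; with uniform fields the integral is f * V
f = rho * E + np.cross(J, B)
F_total = f * volume
print(F_total)
```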
+{"text":"By eliminating formula_14 and formula_18, using Maxwell's equations, and manipulating using the theorems of vector calculus, this form of the equation can be used to derive the Maxwell stress tensor formula_19; in turn, this can be combined with the Poynting vector formula_20 to obtain the electromagnetic stress\u2013energy tensor T used in general relativity."}
+{"text":"In terms of formula_19 and formula_20, another way to write the Lorentz force (per unit volume) is"}
+{"text":"where formula_24 is the speed of light and \u2207\u00b7 denotes the divergence of a tensor field. Rather than the amount of charge and its velocity in electric and magnetic fields, this equation relates the energy flux (flow of \"energy\" per unit time per unit distance) in the fields to the force exerted on a charge distribution. See Covariant formulation of classical electromagnetism for more details."}
+{"text":"The density of power associated with the Lorentz force in a material medium is"}
+{"text":"If we separate the total charge and total current into their free and bound parts, we get that the density of the Lorentz force is"}
+{"text":"where: formula_27 is the density of free charge; formula_28 is the polarization density; formula_29 is the density of free current; and formula_30 is the magnetization density. In this way, the Lorentz force can explain the torque applied to a permanent magnet by the magnetic field. The density of the associated power is"}
+{"text":"The above-mentioned formulae use SI units which are the most common. In older cgs-Gaussian units, which are somewhat more common among some theoretical physicists as well as condensed matter experimentalists, one has instead"}
+{"text":"where \"c\" is the speed of light."}
+{"text":"Although this equation looks slightly different, it is completely equivalent, since"}
+{"text":"where \u03b50 is the vacuum permittivity and \u03bc0 the vacuum permeability. In practice, the subscripts \"cgs\" and \"SI\" are always omitted, and the unit system has to be assessed from context."}
+{"text":"Trajectories of particles due to the Lorentz force."}
+{"text":"In many cases of practical interest, the motion in a magnetic field of an electrically charged particle (such as an electron or ion in a plasma) can be treated as the superposition of a relatively fast circular motion around a point called the guiding center and a relatively slow drift of this point. The drift speeds may differ for various species depending on their charge states, masses, or temperatures, possibly resulting in electric currents or chemical separation."}
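The simplest such guiding-center drift is the E\u00d7B drift, v_d = (E \u00d7 B)\/|B|\u00b2, which is the same for every species regardless of charge or mass (a standard plasma-physics result, supplied here for illustration):

```python
import numpy as np

E = np.array([100.0, 0.0, 0.0])  # electric field, V/m (assumed values)
B = np.array([0.0, 0.0, 0.01])   # magnetic field, T (assumed values)

# Guiding-center E x B drift velocity: independent of charge and mass,
# so by itself it produces no net current
v_d = np.cross(E, B) / np.dot(B, B)
print(v_d)  # drift along -y for these fields
```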
+{"text":"In real materials the Lorentz force is inadequate to describe the collective behavior of charged particles, both in principle and as a matter of computation. The charged particles in a material medium not only respond to the E and B fields but also generate these fields. Complex transport equations must be solved to determine the time and spatial response of charges, for example, the Boltzmann equation or the Fokker\u2013Planck equation or the Navier\u2013Stokes equations. For example, see magnetohydrodynamics, fluid dynamics, electrohydrodynamics, superconductivity, stellar evolution. An entire physical apparatus for dealing with these matters has developed. See for example, Green\u2013Kubo relations and Green's function (many-body theory)."}
+{"text":"When a wire carrying an electric current is placed in a magnetic field, each of the moving charges, which comprise the current, experiences the Lorentz force, and together they can create a macroscopic force on the wire (sometimes called the Laplace force). By combining the Lorentz force law above with the definition of electric current, the following equation results, in the case of a straight, stationary wire:"}
+{"text":"where is a vector whose magnitude is the length of the wire, and whose direction is along the wire, aligned with the direction of conventional current flow."}
+{"text":"If the wire is not straight but curved, the force on it can be computed by applying this formula to each infinitesimal segment of wire , then adding up all these forces by integration. Formally, the net force on a stationary, rigid wire carrying a steady current is"}
+{"text":"This is the net force. In addition, there will usually be torque, plus other effects if the wire is not perfectly rigid."}
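A numerical sketch (illustrative values) of this integral, which also shows the known consequence that in a uniform field the net force on a curved wire depends only on the straight chord joining its endpoints:

```python
import numpy as np

I = 2.0                           # current, A (assumed)
B = np.array([0.0, 0.0, 0.5])     # uniform magnetic field, T (assumed)
R = 0.1                           # radius of a semicircular wire, m

# Integrate dF = I dl x B along a semicircle from (R,0,0) to (-R,0,0)
theta = np.linspace(0.0, np.pi, 10001)
pts = np.stack([R * np.cos(theta), R * np.sin(theta),
                np.zeros_like(theta)], axis=1)
dl = np.diff(pts, axis=0)                 # infinitesimal wire segments
F = I * np.cross(dl, B).sum(axis=0)       # net force on the curved wire

# In a uniform field this equals I * (chord vector) x B
F_chord = I * np.cross(np.array([-2 * R, 0.0, 0.0]), B)
print(F, F_chord)
```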
+{"text":"One application of this is Amp\u00e8re's force law, which describes how two current-carrying wires can attract or repel each other, since each experiences a Lorentz force from the other's magnetic field. For more information, see the article: Amp\u00e8re's force law."}
+{"text":"The magnetic force () component of the Lorentz force is responsible for \"motional\" electromotive force (or \"motional EMF\"), the phenomenon underlying many electrical generators. When a conductor is moved through a magnetic field, the magnetic field exerts opposite forces on electrons and nuclei in the wire, and this creates the EMF. The term \"motional EMF\" is applied to this phenomenon, since the EMF is due to the \"motion\" of the wire."}
+{"text":"In other electrical generators, the magnets move, while the conductors do not. In this case, the EMF is due to the electric force (\"q\"E) term in the Lorentz Force equation. The electric field in question is created by the changing magnetic field, resulting in an \"induced\" EMF, as described by the Maxwell\u2013Faraday equation (one of the four modern Maxwell's equations)."}
+{"text":"Both of these EMFs, despite their apparently distinct origins, are described by the same equation, namely, the EMF is the rate of change of magnetic flux through the wire. (This is Faraday's law of induction, see below.) Einstein's special theory of relativity was partially motivated by the desire to better understand this link between the two effects. In fact, the electric and magnetic fields are different facets of the same electromagnetic field, and in moving from one inertial frame to another, the solenoidal vector field portion of the \"E\"-field can change in whole or in part to a \"B\"-field or \"vice versa\"."}
+{"text":"Lorentz force and Faraday's law of induction."}
+{"text":"Given a loop of wire in a magnetic field, Faraday's law of induction states the induced electromotive force (EMF) in the wire is:"}
+{"text":"where \u03a6B is the magnetic flux through the loop, B is the magnetic field, \u03a3(\"t\") is a surface bounded by the closed contour \u2202\u03a3(\"t\") at time \"t\", and dA is an infinitesimal vector area element of \u03a3(\"t\") (its magnitude is the area of an infinitesimal patch of surface, its direction is orthogonal to that surface patch)."}
+{"text":"The \"sign\" of the EMF is determined by Lenz's law. Note that this is valid not only for a \"stationary\" wire but also for a \"moving\" wire."}
+{"text":"From Faraday's law of induction (which is valid for a moving wire, for instance in a motor) and the Maxwell equations, the Lorentz force can be deduced. The reverse is also true: the Lorentz force and the Maxwell equations can be used to derive Faraday's law."}
+{"text":"Let \u2202\u03a3(\"t\") be the moving wire, moving without rotation and with constant velocity v, and \u03a3(\"t\") be the internal surface of the wire. The EMF around the closed path \u2202\u03a3(\"t\") is given by:"}
+{"text":"where E is the electric field and d\u2113 is an infinitesimal vector element of the contour \u2202\u03a3(\"t\")."}
+{"text":"NB: Both d\u2113 and dA have a sign ambiguity; to get the correct sign, the right-hand rule is used, as explained in the article Kelvin\u2013Stokes theorem."}
+{"text":"The above result can be compared with the version of Faraday's law of induction that appears in the modern Maxwell's equations, called here the \"Maxwell\u2013Faraday equation\":"}
+{"text":"The Maxwell\u2013Faraday equation also can be written in an \"integral form\" using the Kelvin\u2013Stokes theorem."}
+{"text":"So we have the Maxwell\u2013Faraday equation:"}
+{"text":"The two are equivalent if the wire is not moving. Using the Leibniz integral rule and that \"div\" B = 0, results in,"}
+{"text":"since this is valid for any wire position it implies that,"}
+{"text":"Faraday's law of induction holds whether the loop of wire is rigid and stationary, or in motion or in process of deformation, and it holds whether the magnetic field is constant in time or changing. However, there are cases where Faraday's law is either inadequate or difficult to use, and application of the underlying Lorentz force law is necessary. See inapplicability of Faraday's law."}
+{"text":"Note that the Maxwell\u2013Faraday equation implies that the electric field E is non-conservative when the magnetic field B varies in time: it is not expressible as the gradient of a scalar field and is not subject to the gradient theorem, since its curl is not zero."}
+{"text":"The E and B fields can be replaced by the magnetic vector potential A and (scalar) electrostatic potential \"\u03d5\" by"}
+{"text":"where \u2207 is the gradient, \u2207\u22c5 is the divergence, \u2207\u00d7 is the curl."}
+{"text":"Using an identity for the triple product this can be rewritten as,"}
+{"text":"(Notice that the coordinates and the velocity components should be treated as independent variables, so the del operator acts only on formula_50, not on formula_51; thus, there is no need of using Feynman's subscript notation in the equation above). Using the chain rule, the total derivative of formula_50 is:"}
+{"text":"With v = \u1e8b, we can put the equation into the convenient Euler\u2013Lagrange form"}
+{"text":"The Lagrangian for a charged particle of mass \"m\" and charge \"q\" in an electromagnetic field equivalently describes the dynamics of the particle in terms of its \"energy\", rather than the force exerted on it. The classical expression is given by:"}
+{"text":"where A and \"\u03d5\" are the potential fields as above. The quantity formula_58 can be thought as a velocity-dependent potential function. Using Lagrange's equations, the equation for the Lorentz force given above can be obtained again."}
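The expression itself is elided above; the standard classical Lagrangian being referred to is (a standard result, written out here for reference):

```latex
L = \frac{1}{2} m \dot{\mathbf{r}}^2 + q \, \dot{\mathbf{r}} \cdot \mathbf{A}(\mathbf{r}, t) - q \, \phi(\mathbf{r}, t)
```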
+{"text":"The potential energy depends on the velocity of the particle, so the force is velocity dependent, so it is not conservative."}
+{"text":"The action is the relativistic arclength of the path of the particle in spacetime, minus the potential energy contribution, plus an extra contribution which quantum mechanically is an extra phase a charged particle gets when it is moving along a vector potential."}
+{"text":"Using the metric signature , the Lorentz force for a charge \"q\" can be written in covariant form:"}
+{"text":"where \"p\u03b1\" is the four-momentum, defined as"}
+{"text":"\"\u03c4\" the proper time of the particle, \"F\u03b1\u03b2\" the contravariant electromagnetic tensor"}
+{"text":"and \"U\" is the covariant 4-velocity of the particle, defined as:"}
+{"text":"The fields are transformed to a frame moving with constant relative velocity by:"}
+{"text":"where \u039b\"\u03bc\u03b1\" is the Lorentz transformation tensor."}
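Written out in three-vector form, the transformation of the fields under a boost with velocity v decomposes into components parallel and perpendicular to v (a standard result, supplied here since the formulas are elided above):

```latex
\mathbf{E}'_{\parallel} = \mathbf{E}_{\parallel}, \qquad
\mathbf{B}'_{\parallel} = \mathbf{B}_{\parallel}, \qquad
\mathbf{E}'_{\perp} = \gamma \left( \mathbf{E} + \mathbf{v} \times \mathbf{B} \right)_{\perp}, \qquad
\mathbf{B}'_{\perp} = \gamma \left( \mathbf{B} - \frac{\mathbf{v} \times \mathbf{E}}{c^{2}} \right)_{\perp}
```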
+{"text":"The \"x\"-component of the force is"}
+{"text":"Substituting the components of the covariant electromagnetic tensor \"F\" yields"}
+{"text":"Using the components of covariant four-velocity yields"}
+{"text":"The calculation of the force components in the \"y\" and \"z\" directions yields similar results, so collecting the 3 equations into one:"}
+{"text":"and since differentials in coordinate time \"dt\" and proper time \"d\u03c4\" are related by the Lorentz factor,"}
+{"text":"This is precisely the Lorentz force law, however, it is important to note that p is the relativistic expression,"}
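The relativistic momentum referred to is (standard expression, supplied for reference):

```latex
\mathbf{p} = \gamma m \mathbf{v} = \frac{m \mathbf{v}}{\sqrt{1 - v^{2}/c^{2}}}
```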
+{"text":"The electric and magnetic fields are dependent on the velocity of an observer, so the relativistic form of the Lorentz force law can best be exhibited starting from a coordinate-independent expression for the electromagnetic and magnetic fields formula_72, and an arbitrary time-direction, formula_73. This can be settled through Space-Time Algebra (or the geometric algebra of space-time), a type of Clifford algebra defined on a pseudo-Euclidean space, as"}
+{"text":"formula_76 is a space-time bivector (an oriented plane segment, just like a vector is an oriented line segment), which has six degrees of freedom corresponding to boosts (rotations in space-time planes) and rotations (rotations in space-space planes). The dot product with the vector formula_73 pulls a vector (in the space algebra) from the translational part, while the wedge-product creates a trivector (in the space algebra) which is dual to a vector, namely the usual magnetic field vector."}
+{"text":"The relativistic velocity is given by the (time-like) changes in a time-position vector formula_78, where"}
+{"text":"(which shows our choice for the metric) and the velocity is"}
+{"text":"The proper (invariant is an inadequate term because no transformation has been defined) form of the Lorentz force law is simply"}
+{"text":"Note that the order is important because between a bivector and a vector the dot product is anti-symmetric. Upon a spacetime split like one can obtain the velocity, and fields as above yielding the usual expression."}
+{"text":"In the general theory of relativity the equation of motion for a particle with mass formula_81 and charge formula_82, moving in a space with metric tensor formula_83 and electromagnetic field formula_84, is given as"}
+{"text":"where formula_86 (formula_87 is taken along the trajectory), formula_88, and formula_89."}
+{"text":"The equation can also be written as"}
+{"text":"where formula_91 is the Christoffel symbol (of the torsion-free metric connection in general relativity), or as"}
+{"text":"where formula_93 is the covariant differential in general relativity (metric, torsion-free)."}
+{"text":"The Lorentz force occurs in many devices, including:"}
+{"text":"In its manifestation as the Laplace force on an electric current in a conductor, this force occurs in many devices including:"}
+{"text":"In mathematical physics, Minkowski space (or Minkowski spacetime) () is a combination of three-dimensional Euclidean space and time into a four-dimensional manifold where the spacetime interval between any two events is independent of the inertial frame of reference in which they are recorded. Although initially developed by mathematician Hermann Minkowski for Maxwell's equations of electromagnetism, the mathematical structure of Minkowski spacetime was shown to be implied by the postulates of special relativity."}
+{"text":"Minkowski space is closely associated with Einstein's theories of special relativity and general relativity and is the most common mathematical structure on which special relativity is formulated. While the individual components in Euclidean space and time may differ due to length contraction and time dilation, in Minkowski spacetime, all frames of reference will agree on the total distance in spacetime between events. Because it treats time differently than it treats the 3 spatial dimensions, Minkowski space differs from four-dimensional Euclidean space."}
+{"text":"In 3-dimensional Euclidean space (e.g., simply \"space\" in Galilean relativity), the isometry group (the maps preserving the regular Euclidean distance) is the Euclidean group. It is generated by rotations, reflections and translations. When time is amended as a fourth dimension, the further transformations of translations in time and Galilean boosts are added, and the group of all these transformations is called the Galilean group. All Galilean transformations preserve the \"3-dimensional\" Euclidean distance. This distance is purely spatial. Time differences are \"separately\" preserved as well. This changes in the spacetime of special relativity, where space and time are interwoven."}
+{"text":"Spacetime is equipped with an indefinite non-degenerate bilinear form, variously called the \"Minkowski metric\", the \"Minkowski norm squared\" or \"Minkowski inner product\" depending on the context. The Minkowski inner product is defined so as to yield the spacetime interval between two events when given their coordinate difference vector as argument. Equipped with this inner product, the mathematical model of spacetime is called Minkowski space. The analogue of the Galilean group for Minkowski space, preserving the spacetime interval (as opposed to the spatial Euclidean distance) is the Poincar\u00e9 group."}
+{"text":"As manifolds, Galilean spacetime and Minkowski spacetime are \"the same\". They differ in what further structures are defined \"on\" them. The former has the Euclidean distance function and time interval (separately) together with inertial frames whose coordinates are related by Galilean transformations, while the latter has the Minkowski metric together with inertial frames whose coordinates are related by Poincar\u00e9 transformations."}
+{"text":"In his second relativity paper in 1905\u201306, Henri Poincar\u00e9 showed how, by taking time to be an imaginary fourth spacetime coordinate ict, where c is the speed of light and i is the imaginary unit, Lorentz transformations can be visualized as ordinary rotations of the four-dimensional Euclidean sphere"}
+{"text":"Poincar\u00e9 set c = 1 for convenience. Rotations in planes spanned by two space unit vectors appear in coordinate space as well as in physical spacetime as Euclidean rotations and are interpreted in the ordinary sense. The \"rotation\" in a plane spanned by a space unit vector and a time unit vector, while formally still a rotation in coordinate space, is a Lorentz boost in physical spacetime with \"real\" inertial coordinates. The analogy with Euclidean rotations is only partial, since the radius of the sphere is actually imaginary, which turns rotations into rotations in hyperbolic space."}
+{"text":"This idea, which was mentioned only very briefly by Poincar\u00e9, was elaborated in great detail by Minkowski in an extensive and influential paper in German in 1908 called \"The Fundamental Equations for Electromagnetic Processes in Moving Bodies\". Minkowski, using this formulation, restated the then-recent theory of relativity of Einstein. In particular, by restating the Maxwell equations as a symmetrical set of equations in the four variables (x, y, z, ict), combined with redefined vector variables for electromagnetic quantities, he was able to show directly and very simply their invariance under Lorentz transformations. He also made other important contributions and used matrix notation for the first time in this context."}
+{"text":"From his reformulation he concluded that time and space should be treated equally, and so arose his concept of events taking place in a unified four-dimensional spacetime continuum."}
+{"text":"In a further development in his 1908 \"Space and Time\" lecture, Minkowski gave an alternative formulation of this idea that used a real time coordinate instead of an imaginary one, representing the four variables of space and time in coordinate form in a four-dimensional real vector space. Points in this space correspond to events in spacetime. In this space, there is a defined light-cone associated with each point, and events not on the light-cone are classified by their relation to the apex as \"spacelike\" or \"timelike\". It is principally this view of spacetime that is current nowadays, although the older view involving imaginary time has also influenced special relativity."}
+{"text":"In the English translation of Minkowski's paper, the Minkowski metric as defined below is referred to as the \"line element\". The Minkowski inner product below appears unnamed when referring to orthogonality (which he calls \"normality\") of certain vectors, and the Minkowski norm squared is referred to (somewhat cryptically, perhaps this is translation dependent) as \"sum\"."}
+{"text":"Minkowski, aware of the fundamental restatement of the theory which he had made, said"}
+{"text":"Though Minkowski took an important step for physics, Albert Einstein saw its limitation:"}
+{"text":"Where v is velocity, and x, y, and z are Cartesian coordinates in 3-dimensional space, and c is the constant representing the universal speed limit, and t is time, the four-dimensional vector v = (ct, x, y, z) = (ct, r) is classified according to the sign of c^2t^2 - r^2. A vector is timelike if c^2t^2 > r^2, spacelike if c^2t^2 < r^2, and null or lightlike if c^2t^2 = r^2. This can be expressed in terms of the sign of \u03b7(v, v) as well, which depends on the signature. The classification of any vector will be the same in all frames of reference that are related by a Lorentz transformation (but not by a general Poincar\u00e9 transformation because the origin may then be displaced) because of the invariance of the interval."}
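The classification above can be sketched in code. A minimal sketch (not from the source; signature (+, -, -, -) and a hypothetical helper name `classify` are assumed, with c = 1 by default):

```python
def classify(v, c=1.0):
    """Classify a four-vector v = (t, x, y, z) by the sign of
    (ct)^2 - (x^2 + y^2 + z^2), i.e. signature (+, -, -, -)."""
    t, x, y, z = v
    s = (c * t) ** 2 - (x ** 2 + y ** 2 + z ** 2)
    if s > 0:
        return "timelike"
    if s < 0:
        return "spacelike"
    return "null"

print(classify((2, 1, 0, 0)))  # timelike
print(classify((1, 2, 0, 0)))  # spacelike
print(classify((1, 1, 0, 0)))  # null
```

Because only the sign of the invariant is used, the result is unchanged under any Lorentz transformation of `v`.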
+{"text":"The set of all null vectors at an event of Minkowski space constitutes the light cone of that event. Given a timelike vector v, there is a worldline of constant velocity associated with it, represented by a straight line in a Minkowski diagram."}
+{"text":"Once a direction of time is chosen, timelike and null vectors can be further decomposed into various classes. For timelike vectors one has"}
+{"text":"Together with spacelike vectors there are 6 classes in all."}
+{"text":"An orthonormal basis for Minkowski space necessarily consists of one timelike and three spacelike unit vectors. If one wishes to work with non-orthonormal bases it is possible to have other combinations of vectors. For example, one can easily construct a (non-orthonormal) basis consisting entirely of null vectors, called a null basis."}
+{"text":"Vector fields are called timelike, spacelike or null if the associated vectors are timelike, spacelike or null at each point where the field is defined."}
+{"text":"Time-like vectors have special importance in the theory of relativity as they correspond to events which are accessible to the observer at (0, 0, 0, 0) with a speed less than that of light."}
+{"text":"Of most interest are time-like vectors which are \"similarly directed\", i.e. all either in the forward or in the backward cones. Such vectors have several properties not shared by space-like vectors. These arise because both forward and backward cones are convex, whereas the space-like region is not convex."}
+{"text":"The scalar product of two time-like vectors u_1 = (t_1, x_1, y_1, z_1) and u_2 = (t_2, x_2, y_2, z_2) is u_1 \u00b7 u_2 = t_1 t_2 - x_1 x_2 - y_1 y_2 - z_1 z_2."}
+{"text":"\"Positivity of scalar product\": An important property is that the scalar product of two similarly directed time-like vectors is always positive. This can be seen from the reversed Cauchy\u2013Schwarz inequality below. It follows that if the scalar product of two vectors is zero then one of these at least, must be space-like. The scalar product of two space-like vectors can be positive or negative as can be seen by considering the product of two space-like vectors having orthogonal spatial components and times either of different or the same signs."}
+{"text":"Using the positivity property of time-like vectors it is easy to verify that a linear sum with positive coefficients of similarly directed time-like vectors is also similarly directed time-like (the sum remains within the light-cone because of convexity)."}
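Both claims (positivity of the scalar product, and closure of the forward cone under positive linear combinations) can be spot-checked numerically. A minimal sketch, not from the source; signature (+, -, -, -) with c = 1 and hypothetical helper names are assumed:

```python
import random

def minkowski_dot(u, v):
    # Minkowski scalar product, signature (+, -, -, -), c = 1
    return u[0]*v[0] - u[1]*v[1] - u[2]*v[2] - u[3]*v[3]

random.seed(0)

def random_forward_timelike():
    # Spatial part strictly shorter than the (positive) time component,
    # which guarantees a forward-directed time-like vector.
    x, y, z = (random.uniform(-1, 1) for _ in range(3))
    r = (x*x + y*y + z*z) ** 0.5
    t = r + random.uniform(0.1, 1.0)
    return (t, x, y, z)

for _ in range(1000):
    u, v = random_forward_timelike(), random_forward_timelike()
    assert minkowski_dot(u, v) > 0           # positivity of the scalar product
    w = tuple(a + b for a, b in zip(u, v))   # positive-coefficient sum
    # The sum stays inside the forward light-cone (convexity):
    assert minkowski_dot(w, w) > 0 and w[0] > 0
print("all checks passed")
```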
+{"text":"The norm of a time-like vector u is defined as ||u|| = \u221a(u \u00b7 u)."}
+{"text":"\"The reversed Cauchy inequality\" is another consequence of the convexity of either light-cone. For two distinct similarly directed time-like vectors u_1 and u_2 this inequality is (u_1 \u00b7 u_2)^2 > (u_1 \u00b7 u_1)(u_2 \u00b7 u_2)."}
+{"text":"From this the positivity property of the scalar product can be seen."}
+{"text":"For two similarly directed time-like vectors u_1 and u_2, the inequality is ||u_1 + u_2|| \u2265 ||u_1|| + ||u_2||,"}
+{"text":"where the equality holds when the vectors are linearly dependent."}
+{"text":"The proof uses the algebraic definition with the reversed Cauchy inequality:"}
+{"text":"The result now follows by taking the square root on both sides."}
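Both reversed inequalities can be verified on a concrete pair of forward time-like vectors. A minimal numeric spot-check (not from the source; signature (+, -, -, -) with c = 1 and hypothetical helper names assumed):

```python
def mdot(u, v):
    # Minkowski inner product, signature (+, -, -, -)
    return u[0]*v[0] - sum(a*b for a, b in zip(u[1:], v[1:]))

def mnorm(u):
    # Norm of a time-like vector: square root of the positive norm squared
    return mdot(u, u) ** 0.5

u = (3.0, 1.0, 0.5, -0.5)   # forward time-like: 9 - 1.5 > 0
v = (2.0, -0.5, 1.0, 0.0)   # forward time-like: 4 - 1.25 > 0

# Reversed Cauchy-Schwarz inequality:  u.v >= |u| |v|
assert mdot(u, v) >= mnorm(u) * mnorm(v)

# Reversed triangle inequality:  |u + v| >= |u| + |v|
w = tuple(a + b for a, b in zip(u, v))
assert mnorm(w) >= mnorm(u) + mnorm(v)
print("both reversed inequalities hold")
```

Note the direction of both inequalities is opposite to the Euclidean case, which is exactly the content of the convexity argument above.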
+{"text":"It is assumed below that spacetime is endowed with a coordinate system corresponding to an inertial frame. This provides an \"origin\", which is necessary in order to be able to refer to spacetime as being modeled as a vector space. This is not really \"physically\" motivated in that a canonical origin (\"central\" event in spacetime) should exist. One can get away with less structure, that of an affine space, but this would needlessly complicate the discussion and would not reflect how flat spacetime is normally treated mathematically in modern introductory literature."}
+{"text":"For an overview, Minkowski space is a 4-dimensional real vector space equipped with a nondegenerate, symmetric bilinear form on the tangent space at each point in spacetime, here simply called the \"Minkowski inner product\", with metric signature either (+ - - -) or (- + + +). The tangent space at each event is a vector space of the same dimension as spacetime, 4."}
+{"text":"In practice, one need not be concerned with the tangent spaces. The vector space nature of Minkowski space allows for the canonical identification of vectors in tangent spaces at points (events) with vectors (points, events) in Minkowski space itself. These identifications are routinely done in mathematics. They can be expressed formally in Cartesian coordinates as"}
+{"text":"with basis vectors in the tangent spaces defined by"}
+{"text":"Here and are any two events and the last identification is referred to as parallel transport. The first identification is the canonical identification of vectors in the tangent space at any point with vectors in the space itself. The appearance of basis vectors in tangent spaces as first order differential operators is due to this identification. It is motivated by the observation that a geometrical tangent vector can be associated in a one-to-one manner with a directional derivative operator on the set of smooth functions. This is promoted to a \"definition\" of tangent vectors in manifolds \"not\" necessarily being embedded in . This definition of tangent vectors is not the only possible one as ordinary \"n\"-tuples can be used as well."}
+{"text":"A tangent vector at a point may be defined, here specialized to Cartesian coordinates in Lorentz frames, as column vectors associated to \"each\" Lorentz frame related by Lorentz transformation such that the vector in a frame related to some frame by transforms according to . This is the \"same\" way in which the coordinates transform. Explicitly,"}
+{"text":"This definition is equivalent to the definition given above under a canonical isomorphism."}
+{"text":"For some purposes it is desirable to identify tangent vectors at a point with \"displacement vectors\" at , which is, of course, admissible by essentially the same canonical identification. The identifications of vectors referred to above in the mathematical setting can correspondingly be found in a more physical and explicitly geometrical setting in . They offer various degree of sophistication (and rigor) depending on which part of the material one chooses to read."}
+{"text":"The metric signature refers to which sign the Minkowski inner product yields when given space (\"spacelike\" to be specific, defined further down) and time basis vectors (\"timelike\") as arguments. Further discussion about this theoretically inconsequential, but practically necessary choice for purposes of internal consistency and convenience is deferred to the hide box below."}
+{"text":"Mathematically associated to the bilinear form is a tensor of type (0, 2) at each point in spacetime, called the \"Minkowski metric\". The Minkowski metric, the bilinear form, and the Minkowski inner product are all the same object; it is a bilinear function that accepts two (contravariant) vectors and returns a real number. In coordinates, this is the 4 \u00d7 4 matrix representing the bilinear form."}
+{"text":"For comparison, in general relativity, a Lorentzian manifold is likewise equipped with a metric tensor, which is a nondegenerate symmetric bilinear form on the tangent space at each point of the manifold. In coordinates, it may be represented by a 4 \u00d7 4 matrix \"depending on spacetime position\". Minkowski space is thus a comparatively simple special case of a Lorentzian manifold. Its metric tensor is in coordinates the same symmetric matrix at every point of M, and its arguments can, per above, be taken as vectors in spacetime itself."}
+{"text":"Introducing more terminology (but not more structure), Minkowski space is thus a pseudo-Euclidean space with total dimension n = 4 and signature (1, 3) or (3, 1). Elements of Minkowski space are called events. Minkowski space is often denoted R^{1,3} or R^{3,1} to emphasize the chosen signature, or just M. It is perhaps the simplest example of a pseudo-Riemannian manifold."}
+{"text":"An interesting example of non-inertial coordinates for (part of) Minkowski spacetime are the Born coordinates. Another useful set of coordinates are the light-cone coordinates."}
+{"text":"The Minkowski inner product is not an inner product, since it is not positive-definite, i.e. the quadratic form need not be positive for nonzero . The positive-definite condition has been replaced by the weaker condition of non-degeneracy. The bilinear form is said to be \"indefinite\"."}
+{"text":"The Minkowski metric is the metric tensor of Minkowski space. It is a pseudo-Euclidean metric, or more generally a \"constant\" pseudo-Riemannian metric in Cartesian coordinates. As such it is a nondegenerate symmetric bilinear form, a type (0, 2) tensor. It accepts two arguments u, v, vectors in the tangent space at a point in M. Due to the above-mentioned canonical identification of the tangent space with M itself, it accepts arguments with both u and v in M."}
+{"text":"As a notational convention, vectors v in M, called 4-vectors, are denoted in italics, and not, as is common in the Euclidean setting, with boldface v. The latter is generally reserved for the 3-vector part (to be introduced below) of a 4-vector."}
+{"text":"The Minkowski metric yields an inner product-like structure on M, previously and also henceforth called the \"Minkowski inner product\", similar to the Euclidean inner product, but describing a different geometry. It is also called the \"relativistic dot product\". If the two arguments are the same,"}
+{"text":"the resulting quantity will be called the \"Minkowski norm squared\". The Minkowski inner product satisfies the following properties."}
+{"text":"The first two conditions imply bilinearity. The defining \"difference\" between a pseudo-inner product and an inner product proper is that the former is \"not\" required to be positive definite, that is, a negative norm squared is allowed."}
+{"text":"The most important feature of the inner product and norm squared is that \"these are quantities unaffected by Lorentz transformations\". In fact, it can be taken as the defining property of a Lorentz transformation that it preserves the inner product (i.e. the value of the corresponding bilinear form on two vectors). This approach is taken more generally for \"all\" classical groups definable this way in classical group. There, the matrix is identical in the case (the Lorentz group) to the matrix to be displayed below."}
+{"text":"Two vectors v and w are said to be orthogonal if v \u00b7 w = 0. For a geometric interpretation of orthogonality in the special case when one vector is timelike and the other is spacelike (or vice versa), see hyperbolic orthogonality."}
+{"text":"A vector e is called a unit vector if e \u00b7 e = \u00b11. A basis for M consisting of mutually orthogonal unit vectors is called an orthonormal basis."}
+{"text":"For a given inertial frame, an orthonormal basis in space, combined with the unit time vector, forms an orthonormal basis in Minkowski space. The number of positive and negative unit vectors in any such basis is a fixed pair of numbers, equal to the signature of the bilinear form associated with the inner product. This is Sylvester's law of inertia."}
+{"text":"More terminology (but not more structure): The Minkowski metric is a pseudo-Riemannian metric, more specifically, a Lorentzian metric, even more specifically, \"the\" Lorentz metric, reserved for 4-dimensional flat spacetime with the remaining ambiguity only being the signature convention."}
+{"text":"From the second postulate of special relativity, together with homogeneity of spacetime and isotropy of space, it follows that the spacetime interval between two arbitrary events, called event 1 and event 2, is: c^2(t_1 - t_2)^2 - (x_1 - x_2)^2 - (y_1 - y_2)^2 - (z_1 - z_2)^2."}
+{"text":"This quantity is not consistently named in the literature. The interval is sometimes referred to as the square of the interval as defined here. It is not possible to give an exhaustive list of notational inconsistencies. One has to first check out the definitions when consulting the relativity literature."}
+{"text":"The invariance of the interval under coordinate transformations between inertial frames follows from the invariance of"}
+{"text":"(with either sign preserved), provided the transformations are linear. This quadratic form can be used to define a bilinear form"}
+{"text":"via the polarization identity. This bilinear form can in turn be written as"}
+{"text":"where \u03b7 is a 4 \u00d7 4 matrix associated with the bilinear form. While possibly confusing, it is common practice to denote the bilinear form itself with just \u03b7. The matrix is read off from the explicit bilinear form as"}
+{"text":"The Minkowski metric \u03b7, with which this section started by assuming its existence, is now identified."}
+{"text":"For definiteness and shorter presentation, the signature (+ - - -) is adopted below. This choice (or the other possible choice) has no (known) physical implications. The symmetry group preserving the bilinear form with one choice of signature is isomorphic (under the map given here) with the symmetry group preserving the other choice of signature. This means that both choices are in accord with the two postulates of relativity. Switching between the two conventions is straightforward. If the metric tensor \u03b7 has been used in a derivation, go back to the earliest point where it was used, substitute -\u03b7 for \u03b7, and retrace forward to the desired formula with the desired metric signature."}
+{"text":"A standard basis for Minkowski space is a set of four mutually orthogonal vectors such that"}
+{"text":"These conditions can be written compactly in the form"}
+{"text":"Relative to a standard basis, the components of a vector v are written (v^0, v^1, v^2, v^3), where the Einstein notation is used to write v = v^\u03bc e_\u03bc. The component v^0 is called the timelike component of v while the other three components are called the spatial components. The spatial components of a 4-vector v may be identified with a 3-vector v = (v^1, v^2, v^3)."}
+{"text":"In terms of components, the Minkowski inner product between two vectors and is given by"}
+{"text":"Here lowering of an index with the metric was used."}
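The component formula and the index lowering can be made concrete. A minimal sketch (not from the source; signature (+, -, -, -) with \u03b7 = diag(1, -1, -1, -1) and hypothetical helper names assumed):

```python
# Minkowski metric in coordinates, signature (+, -, -, -)
eta = [[1, 0, 0, 0],
       [0, -1, 0, 0],
       [0, 0, -1, 0],
       [0, 0, 0, -1]]

def lower(u):
    """Lower an index with the metric: u_mu = eta_{mu nu} u^nu."""
    return [sum(eta[m][n] * u[n] for n in range(4)) for m in range(4)]

def inner(u, v):
    """Minkowski inner product eta_{mu nu} u^mu v^nu,
    computed as the lowered u contracted with the contravariant v."""
    return sum(ul * vn for ul, vn in zip(lower(u), v))

u = [2, 1, 0, 0]
v = [1, 1, 0, 0]
print(inner(u, v))  # 2*1 - 1*1 = 1
```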
+{"text":"Technically, a non-degenerate bilinear form provides a map between a vector space and its dual; in this context, the map is between the tangent spaces of M and the cotangent spaces of M. At a point in M, the tangent and cotangent spaces are dual vector spaces (so the dimension of the cotangent space at an event is also 4). Just as an authentic inner product on a vector space with one argument fixed, by the Riesz representation theorem, may be expressed as the action of a linear functional on the vector space, the same holds for the Minkowski inner product of Minkowski space."}
+{"text":"Contravariant and covariant vectors are geometrically very different objects. The first can and should be thought of as arrows. A linear functional can be characterized by two objects: its kernel, which is a hyperplane passing through the origin, and its norm. Geometrically thus, covariant vectors should be viewed as a set of hyperplanes, with spacing depending on the norm (bigger = smaller spacing), with one of them (the kernel) passing through the origin. The mathematical term for a covariant vector is 1-covector or 1-form (though the latter is usually reserved for covector \"fields\")."}
+{"text":"MTW uses a vivid analogy with wave fronts of a de Broglie wave (scaled by a factor of Planck's reduced constant) quantum mechanically associated to a momentum four-vector to illustrate how one could imagine a covariant version of a contravariant vector. The inner product of two contravariant vectors could equally well be thought of as the action of the covariant version of one of them on the contravariant version of the other. The inner product is then how many times the arrow pierces the planes. The mathematical literature offers the same geometrical view of these objects (but mentions no piercing)."}
+{"text":"The electromagnetic field tensor is a differential 2-form, whose geometrical description can likewise be found in MTW."}
+{"text":"One may, of course, ignore geometrical views altogether (as is the style in much of the literature) and proceed algebraically in a purely formal fashion. The time-proven robustness of the formalism itself, sometimes referred to as index gymnastics, ensures that moving vectors around and changing from contravariant to covariant vectors and vice versa (as well as higher order tensors) is mathematically sound. Incorrect expressions tend to reveal themselves quickly."}
+{"text":"The present purpose is to show semi-rigorously how \"formally\" one may apply the Minkowski metric to two vectors and obtain a real number, i.e. to display the role of the differentials, and how they disappear in a calculation. The setting is that of smooth manifold theory, and concepts such as covector fields and exterior derivatives are introduced."}
+{"text":"A full-blown version of the Minkowski metric in coordinates as a tensor field on spacetime has the appearance"}
+{"text":"Explanation: The coordinate differentials dx^\u03bc are 1-form fields. They are defined as the exterior derivative of the coordinate functions x^\u03bc. These quantities evaluated at a point p provide a basis for the cotangent space at p. The tensor product (denoted by the symbol \u2297) yields a tensor field of type (0, 2), i.e. the type that expects two contravariant vectors as arguments. On the right hand side, the symmetric product (denoted by the symbol \u2299 or by juxtaposition) has been taken. The equality holds since, by definition, the Minkowski metric is symmetric. The notation on the far right is also sometimes used for the related, but different, line element. It is \"not\" a tensor. For elaboration on the differences and similarities, see the literature on differential geometry."}
+{"text":"\"Tangent\" vectors are, in this formalism, given in terms of a basis of differential operators of the first order,"}
+{"text":"where is an event. This operator applied to a function gives the directional derivative of at in the direction of increasing with fixed. They provide a basis for the tangent space at ."}
+{"text":"The exterior derivative of a function is a covector field, i.e. an assignment of a cotangent vector to each point , by definition such that"}
+{"text":"for each vector field . A vector field is an assignment of a tangent vector to each point . In coordinates can be expanded at each point in the basis given by the . Applying this with , the coordinate function itself, and , called a \"coordinate vector field\", one obtains"}
+{"text":"Since this relation holds at each point , the provide a basis for the cotangent space at each and the bases and are dual to each other,"}
+{"text":"for general one-forms on a tangent space and general tangent vectors . (This can be taken as a definition, but may also be proved in a more general setting.)"}
+{"text":"Thus when the metric tensor is fed two vectors fields , both expanded in terms of the basis coordinate vector fields, the result is"}
+{"text":"where , are the \"component functions\" of the vector fields. The above equation holds at each point , and the relation may as well be interpreted as the Minkowski metric at applied to two tangent vectors at ."}
+{"text":"As mentioned, in a vector space, such as that modelling the spacetime of special relativity, tangent vectors can be canonically identified with vectors in the space itself, and vice versa. This means that the tangent spaces at each point are canonically identified with each other and with the vector space itself. This explains how the right hand side of the above equation can be employed directly, without regard to the spacetime point at which the metric is to be evaluated or to the tangent space from which the vectors come."}
+{"text":"This situation changes in general relativity. There one has"}
+{"text":"where now , i.e. is still a metric tensor but now depending on spacetime and is a solution of Einstein's field equations. Moreover, \"must\" be tangent vectors at spacetime point and can no longer be moved around freely."}
+{"text":"Suppose x \u2208 M is timelike. Then the simultaneous hyperplane for x is {y : \u03b7(x, y) = 0}. Since this hyperplane varies as x varies, there is a relativity of simultaneity in Minkowski space."}
+{"text":"A Lorentzian manifold is a generalization of Minkowski space in two ways. The total number of spacetime dimensions is not restricted to be 4 (it can be 2 or more), and a Lorentzian manifold need not be flat, i.e. it allows for curvature."}
+{"text":"Minkowski space refers to a mathematical formulation in four dimensions. However, the mathematics can easily be extended or simplified to create an analogous generalized Minkowski space in any number of dimensions. If n \u2265 2, n-dimensional Minkowski space is a vector space of real dimension n on which there is a constant Minkowski metric of signature (n - 1, 1) or (1, n - 1). These generalizations are used in theories where spacetime is assumed to have more or fewer than 4 dimensions. String theory and M-theory are two examples where n > 4. In string theory, there appear conformal field theories with 1 + 1 spacetime dimensions."}
+{"text":"de Sitter space can be formulated as a submanifold of generalized Minkowski space as can the model spaces of hyperbolic geometry (see below)."}
+{"text":"As a \"flat spacetime\", the three spatial components of Minkowski spacetime always obey the Pythagorean Theorem. Minkowski space is a suitable basis for special relativity, a good description of physical systems over finite distances in systems without significant gravitation. However, in order to take gravity into account, physicists use the theory of general relativity, which is formulated in the mathematics of a non-Euclidean geometry. When this geometry is used as a model of physical space, it is known as curved space."}
+{"text":"Even in curved space, Minkowski space is still a good description in an infinitesimal region surrounding any point (barring gravitational singularities). More abstractly, we say that in the presence of gravity spacetime is described by a curved 4-dimensional manifold for which the tangent space to any point is a 4-dimensional Minkowski space. Thus, the structure of Minkowski space is still essential in the description of general relativity."}
+{"text":"The meaning of the term \"geometry\" for the Minkowski space depends heavily on the context. Minkowski space is not endowed with a Euclidean geometry, and not with any of the generalized Riemannian geometries with intrinsic curvature, such as those exemplified by the model spaces of hyperbolic geometry (negative curvature) and the geometry modeled by the sphere (positive curvature). The reason is the indefiniteness of the Minkowski metric. Minkowski space is, in particular, not a metric space and not a Riemannian manifold with a Riemannian metric. However, Minkowski space contains submanifolds endowed with a Riemannian metric yielding hyperbolic geometry."}
+{"text":"Model spaces of hyperbolic geometry of low dimension, say 2 or 3, \"cannot\" be isometrically embedded in Euclidean space with one more dimension, i.e. 3 or 4 respectively, with the Euclidean metric, disallowing easy visualization. By comparison, model spaces with positive curvature are just spheres in Euclidean space of one higher dimension. Hyperbolic spaces \"can\" be isometrically embedded in spaces of one more dimension when the embedding space is endowed with the Minkowski metric."}
+{"text":"Define H to be the upper sheet (ct > 0) of the hyperboloid"}
+{"text":"in generalized Minkowski space of spacetime dimension . This is one of the surfaces of transitivity of the generalized Lorentz group. The induced metric on this submanifold,"}
+{"text":"the pullback of the Minkowski metric under inclusion, is a Riemannian metric. With this metric H is a Riemannian manifold. It is one of the model spaces of Riemannian geometry, the hyperboloid model of hyperbolic space. It is a space of constant negative curvature -1/R^2. The 1 in the upper index refers to an enumeration of the different model spaces of hyperbolic geometry, and the n for its dimension. A 2 corresponds to the Poincar\u00e9 disk model, while 3 corresponds to the Poincar\u00e9 half-space model of dimension n."}
+{"text":"In the definition above is the inclusion map and the superscript star denotes the pullback. The present purpose is to describe this and similar operations as a preparation for the actual demonstration that actually is a hyperbolic space."}
+{"text":"In order to exhibit the metric it is necessary to pull it back via a suitable \"parametrization\". A parametrization of a submanifold of is a map whose range is an open subset of . If has the same dimension as , a parametrization is just the inverse of a coordinate map . The parametrization to be used is the inverse of \"hyperbolic stereographic projection\". This is illustrated in the figure to the left for . It is instructive to compare to stereographic projection for spheres."}
+{"text":"Stereographic projection and its inverse are given by"}
+{"text":"where, for simplicity, R = 1. The u^i are coordinates on the projection image and the x^i are coordinates on the hyperboloid."}
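The elided projection formulas can be restored in one common convention (an assumption, not taken from the source): projecting the upper sheet from the point (-1, 0, ..., 0), with R = 1 and \u03c4 denoting the timelike coordinate:

```latex
% Hyperbolic stereographic projection (one common convention,
% projecting from (-1, 0, ..., 0); tau > 0 on the upper sheet):
u^i = \frac{x^i}{1 + \tau}, \qquad i = 1, \dots, n,
\qquad \text{with inverse} \qquad
\tau = \frac{1 + |u|^2}{1 - |u|^2}, \quad
x^i = \frac{2u^i}{1 - |u|^2}.
```

One checks directly that the inverse lands on the hyperboloid: \u03c4\u00b2 - |x|\u00b2 = [(1 + |u|\u00b2)\u00b2 - 4|u|\u00b2]/(1 - |u|\u00b2)\u00b2 = 1.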
+{"text":"The rotational partition function relates the rotational degrees of freedom to the rotational part of the energy."}
+{"text":"The total canonical partition function Z of a system of N identical, indistinguishable, noninteracting atoms or molecules can be divided into the atomic or molecular partition functions q:"}
+{"text":"where g_j is the degeneracy of the jth quantum level of an individual particle, k_B is the Boltzmann constant, and T is the absolute temperature of the system."}
+{"text":"For molecules, under the assumption that the total energy levels \u03b5_j can be partitioned into contributions from different degrees of freedom (weakly coupled degrees of freedom)"}
+{"text":"and the number of degenerate states are given as products of the single contributions"}
+{"text":"where \"trans\", \"ns\", \"rot\", \"vib\" and \"e\" denote translational, nuclear spin, rotational and vibrational contributions as well as electron excitation, the molecular partition functions"}
+{"text":"can be written as a product itself"}
+{"text":"Rotational energies are quantized. For a diatomic molecule like CO or HCl or a linear polyatomic molecule like OCS in its ground vibrational state, the allowed rotational energies in the rigid rotor approximation are"}
+{"text":"J is the quantum number for total rotational angular momentum and takes all integer values starting at zero, i.e. J = 0, 1, 2, ...; B = \u0127^2/(2I) is the rotational constant, and I is the moment of inertia. Here we are using B in energy units. If it is expressed in frequency units, replace B by hB in all the expressions that follow, where h is Planck's constant. If B is given in units of cm^-1 (wavenumbers), then replace B by hcB, where c is the speed of light in vacuum."}
+{"text":"For each value of J, we have rotational degeneracy g_J = 2J + 1, so the rotational partition function is therefore"}
+{"text":"For all but the lightest molecules or the very lowest temperatures we have B \u226a k_B T. This suggests we can approximate the sum by replacing the sum over J by an integral over J, treating J as a continuous variable."}
+{"text":"This approximation is known as the high temperature limit. It is also called the classical approximation as this is the result for the canonical partition function for a classical rigid rod."}
+{"text":"Using the Euler\u2013Maclaurin formula an improved estimate can be found"}
+{"text":"For the CO molecule at T = 300 K, the (unitless) contribution q_rot to the molecular partition function q turns out to be in the range of 10^2."}
+{"text":"The mean thermal rotational energy per molecule can now be computed by taking the derivative of the logarithm of q_rot with respect to temperature T. In the high temperature limit approximation, the mean thermal rotational energy of a linear rigid rotor is k_B T."}
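The agreement between the exact sum and the high-temperature (classical) limit can be checked numerically. A minimal sketch (not from the source; B for CO taken as approximately 1.93 cm^-1, a commonly quoted literature value, and k_B = 0.69504 cm^-1/K; function names are hypothetical):

```python
import math

def q_rot_exact(B, T, jmax=200):
    """Exact sum q = sum_J (2J+1) exp(-B J(J+1) / kT),
    with B in cm^-1 and kT converted via k_B = 0.69504 cm^-1/K."""
    kT = 0.69504 * T
    return sum((2*J + 1) * math.exp(-B * J * (J + 1) / kT)
               for J in range(jmax + 1))

def q_rot_highT(B, T):
    """High-temperature (classical) limit: q ~ kT / B."""
    return 0.69504 * T / B

B_CO = 1.93   # rotational constant of CO in cm^-1 (assumed value)
T = 300.0
exact, approx = q_rot_exact(B_CO, T), q_rot_highT(B_CO, T)
print(exact, approx)   # both are on the order of 10^2, within about 1%
```

The small residual difference is exactly what the Euler-Maclaurin correction terms mentioned above account for.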
+{"text":"A rigid, nonlinear molecule has rotational energy levels determined by three rotational constants, conventionally written A, B and C, which can often be determined by rotational spectroscopy. In terms of these constants, the rotational partition function can be written in the high temperature limit as"}
+{"text":"with \u03c3 again known as the rotational symmetry number, which in general equals the number of ways a molecule can be rotated to overlap itself in an indistinguishable way, i.e. that at most interchanges identical atoms. Like in the case of the diatomic treated explicitly above, this factor corrects for the fact that only a fraction of the nuclear spin functions can be used for any given molecular level to construct wavefunctions that overall obey the required exchange symmetries. The expression for q_rot works for asymmetric, symmetric and spherical top rotors."}
+{"text":"In quantum mechanics the delta potential is a potential well mathematically described by the Dirac delta function - a generalized function. Qualitatively, it corresponds to a potential which is zero everywhere, except at a single point, where it takes an infinite value. This can be used to simulate situations where a particle is free to move in two regions of space with a barrier between the two regions. For example, an electron can move almost freely in a conducting material, but if two conducting surfaces are put close together, the interface between them acts as a barrier for the electron that can be approximated by a delta potential."}
+{"text":"The delta potential well is a limiting case of the finite potential well, which is obtained if one maintains the product of the width of the well and the potential constant while decreasing the well's width and increasing the potential."}
+{"text":"This article, for simplicity, only considers a one-dimensional potential well, but analysis could be expanded to more dimensions."}
+{"text":"The time-independent Schr\u00f6dinger equation for the wave function of a particle in one dimension in a potential is"}
+{"text":"where is the reduced Planck constant, and is the energy of the particle."}
+{"text":"It is called a \"delta potential well\" if is negative, and a \"delta potential barrier\" if is positive. The delta has been defined to occur at the origin for simplicity; a shift in the delta function's argument does not change any of the proceeding results."}
+{"text":"The potential splits the space in two parts (\u00a0<\u00a00 and \u00a0>\u00a00). In each of these parts the potential energy is zero, and the Schr\u00f6dinger equation reduces to"}
+{"text":"this is a linear differential equation with constant coefficients, whose solutions are linear combinations of and , where the wave number is related to the energy by"}
+{"text":"In general, due to the presence of the delta potential in the origin, the coefficients of the solution need not be the same in both half-spaces:"}
+{"text":"where, in the case of positive energies (real ), represents a wave traveling to the right, and one traveling to the left."}
+{"text":"One obtains a relation between the coefficients by imposing that the wavefunction be continuous at the origin:"}
+{"text":"A second relation can be found by studying the derivative of the wavefunction. Normally, we could also impose differentiability at the origin, but this is not possible because of the delta potential. However, if we integrate the Schr\u00f6dinger equation around \u00a0=\u00a00, over an interval [\u2212\"\u03b5\",\u00a0+\"\u03b5\"]:"}
+{"text":"In the limit as \"\u03b5\"\u00a0\u2192\u00a00, the right-hand side of this equation vanishes; the left-hand side becomes"}
+{"text":"Substituting the definition of into this expression yields"}
+{"text":"The boundary conditions thus give the following restrictions on the coefficients"}
+{"text":"In any one-dimensional attractive potential there will be a bound state. To find its energy, note that for \u00a0<\u00a00, \u00a0=\u00a0\u00a0=\u00a0 is imaginary, and the wave functions which were oscillating for positive energies in the calculation above are now exponentially increasing or decreasing functions of \"x\" (see above). Requiring that the wave functions do not diverge at infinity eliminates half of the terms: \u00a0= \u00a0=\u00a00. The wave function is then"}
+{"text":"From the boundary conditions and normalization conditions, it follows that"}
+{"text":"from which it follows that must be negative, that is, the bound state only exists for the well, and not for the barrier. The Fourier transform of this wave function is a Lorentzian function."}
+{"text":"The energy of the bound state is then"}
+{"text":"For positive energies, the particle is free to move in either half-space: \u00a0<\u00a00 or \u00a0>\u00a00. It may be scattered at the delta-function potential."}
+{"text":"The quantum case can be studied in the following situation: a particle incident on the barrier from the left side . It may be reflected or transmitted ."}
+{"text":"To find the amplitudes for reflection and transmission for incidence from the left, we put in the above equations \u00a0=\u00a01 (incoming particle), \u00a0=\u00a0 (reflection), \u00a0=\u00a00 (no incoming particle from the right) and \u00a0=\u00a0 (transmission), and solve for and even though we do not have any equations in ."}
+{"text":"Due to the mirror symmetry of the model, the amplitudes for incidence from the right are the same as those from the left. The result is that there is a non-zero probability"}
+{"text":"for the particle to be reflected. This does not depend on the sign of , that is, a barrier has the same probability of reflecting the particle as a well. This is a significant difference from classical mechanics, where the reflection probability would be 1 for the barrier (the particle simply bounces back), and 0 for the well (the particle passes through the well undisturbed)."}
+{"text":"In summary, the probability for transmission is"}
+{"text":"The calculation presented above may at first seem unrealistic and hardly useful. However, it has proved to be a suitable model for a variety of real-life systems."}
+{"text":"One such example regards the interfaces between two conducting materials. In the bulk of the materials, the motion of the electrons is quasi-free and can be described by the kinetic term in the above Hamiltonian with an effective mass . Often, the surfaces of such materials are covered with oxide layers or are not ideal for other reasons. This thin, non-conducting layer may then be modeled by a local delta-function potential as above. Electrons may then tunnel from one material to the other giving rise to a current."}
+{"text":"The operation of a scanning tunneling microscope (STM) relies on this tunneling effect. In that case, the barrier is due to the air between the tip of the STM and the underlying object. The strength of the barrier is related to the separation being stronger the further apart the two are. For a more general model of this situation, see Finite potential barrier (QM). The delta function potential barrier is the limiting case of the model considered there for very high and narrow barriers."}
+{"text":"The above model is one-dimensional while the space around us is three-dimensional. So, in fact, one should solve the Schr\u00f6dinger equation in three dimensions. On the other hand, many systems only change along one coordinate direction and are translationally invariant along the others. The Schr\u00f6dinger equation may then be reduced to the case considered here by an Ansatz for the wave function of the type formula_18."}
+{"text":"Alternatively, it is possible to generalize the delta function to exist on the surface of some domain \"D\" (see Laplacian of the indicator)."}
+{"text":"The delta function model is actually a one-dimensional version of the Hydrogen atom according to the \"dimensional scaling\" method developed by the group of Dudley R. Herschbach"}
+{"text":"The delta function model becomes particularly useful with the \"double-well\" Dirac Delta function model which represents a one-dimensional version of the Hydrogen molecule ion, as shown in the following section."}
+{"text":"The double-well Dirac delta function models a diatomic hydrogen molecule by the corresponding Schr\u00f6dinger equation:"}
+{"text":"where formula_21 is the \"internuclear\" distance with Dirac delta-function (negative) peaks located at \u00a0=\u00a0\u00b1\/2 (shown in brown in the diagram). Keeping in mind the relationship of this model with its three-dimensional molecular counterpart, we use atomic units and set formula_22. Here formula_23 is a formally adjustable parameter. From the single-well case, we can infer the \"ansatz\" for the solution to be"}
+{"text":"Matching of the wavefunction at the Dirac delta-function peaks yields the determinant"}
+{"text":"Thus, formula_26 is found to be governed by the \"pseudo-quadratic\" equation"}
+{"text":"which has two solutions formula_28. For the case of equal charges (symmetric homonuclear case), \u00a0=\u00a01, and the pseudo-quadratic reduces to"}
+{"text":"The \"+\" case corresponds to a wave function symmetric about the midpoint (shown in red in the diagram), where \u00a0=\u00a0, and is called \"gerade\". Correspondingly, the \"\u2212\" case is the wave function that is anti-symmetric about the midpoint, where = \u2212, and is called \"ungerade\" (shown in green in the diagram). They represent an approximation of the two lowest discrete energy states of the three-dimensional H2^+<\/chem> and are useful in its analysis. Analytical solutions for the energy eigenvalues for the case of symmetric charges are given by"}
+{"text":"where \"W\" is the standard Lambert \"W\" function. Note that the lowest energy corresponds to the symmetric solution formula_31. In the case of \"unequal\" charges, and for that matter the three-dimensional molecular problem, the solutions are given by a \"generalization\" of the Lambert \"W\" function (see section on generalization of Lambert W function and references herein)."}
+{"text":"One of the most interesting cases is when \"qR\"\u00a0\u2264\u00a01, which results in formula_32. Thus, one has a non-trivial bound state solution with \u00a0=\u00a00. For these specific parameters, there are many interesting properties that occur, one of which is the unusual effect that the transmission coefficient is unity at zero energy."}
+{"text":"The Kundu equation is a general form of integrable system that is gauge-equivalent to the mixed nonlinear Schr\u00f6dinger equation. It was proposed by Anjan Kundu as"}
+{"text":"with arbitrary function formula_2 and the subscripts denoting partial derivatives. Equation (1) is shown to be reducible for the choice of formula_3 to an integrable class of mixed nonlinear Schr\u00f6dinger equation with cubic\u2013quintic nonlinearity, given in a representative form"}
+{"text":"Here formula_5 are independent parameters, while formula_6 Equation (1), more specifically equation (2) is known as the Kundu equation."}
+{"text":"The Kundu equation is a completely integrable system, allowing Lax pair representation, exact solutions, and higher conserved quantity."}
+{"text":"Along with its different particular cases, this equation has been investigated for finding its exact travelling wave solutions, exact solitary wave solutions via bilinearization, and Darboux transformation together with the orbital stability for such solitary wave solutions."}
+{"text":"The Kundu equation has been applied to various physical processes such as fluid dynamics, plasma physics, and nonlinear optics. It is linked to the mixed nonlinear Schr\u00f6dinger equation through a gauge transformation and is reducible to a variety of known integrable equations such as the nonlinear Schr\u00f6dinger equation (NLSE), derivative NLSE, higher nonlinear derivative NLSE, Chen\u2013Lee\u2013Liu, Gerjikov-Vanov, and Kundu\u2013Eckhaus equations, for different choices of the parameters."}
+{"text":"A generalization of the nonlinear Schr\u00f6dinger equation with additional quintic nonlinearity and a nonlinear dispersive term was proposed in the form"}
+{"text":"which may be obtained from the Kundu Equation (2), when restricted to formula_8. The same equation, limited further to the particular case formula_9 was introduced later as the Eckhaus equation, following which equation (3) is presently known as the Kundu-Ekchaus equation. The Kundu-Ekchaus equation can be reduced to the nonlinear Schr\u00f6dinger equation through a nonlinear transformation of the field and known therefore to be gauge equivalent integrable systems, since they are equivalent under the gauge transformation."}
+{"text":"The Kundu-Ekchaus equation is associated with a Lax pair, higher conserved quantity, exact soliton solution, rogue wave solution etc. Over the years various aspects of this equation, its generalizations and link with other equations have been studied. In particular, relationship of Kundu-Ekchaus equation with the Johnson's hydrodynamic equation near criticality is established, its discretizations, reduction via Lie symmetry, complex structure via Bernoulli subequation, bright and dark soliton solutions via B\u00e4cklund transformation and Darboux transformation with the associated rogue wave solutions, are studied."}
+{"text":"A multi-component generalisation of the Kundu-Ekchaus equation (3), known as Radhakrishnan, Kundu and Laskshmanan (RKL) equation was proposed in nonlinear optics for fiber optics communication through soliton pulses in a birefringent non-Kerr medium and analysed subsequently for its exact soliton solution and other aspects in a series of papers."}
+{"text":"Though the Kundu-Ekchaus equation (3) is gauge equivalent to the nonlinear Schr\u00f6dinger equation, they differ with respect to their Hamiltonian structures and field commutation relations. The Hamiltonian operator of the Kundu-Ekchaus equation quantum field model given by"}
+{"text":"and defined through the bosonic field operator commutation relation formula_11, is more complicated than the well-known bosonic Hamiltonian of the quantum nonlinear Schr\u00f6dinger equation. Here formula_12 indicates normal ordering in bosonic operators. This model corresponds to a double formula_13-function interacting Bose gas and is difficult to solve directly."}
+{"text":"However, under a nonlinear transformation of the field below:"}
+{"text":"i.e. in the same form as the quantum model of the Nonlinear Schr\u00f6dinger equation (NLSE), though it differs from the NLSE in its contents, since now the fields involved are no longer bosonic operators but exhibit anion like properties."}
+{"text":"though at the coinciding points the bosonic commutation relation still holds. In analogy with the Lieb Limiger model of formula_19 function bose gas, the quantum Kundu-Ekchaus model in the N-particle sector therefore corresponds to a one-dimensional (1D) anion gas interacting via a formula_20 function interaction. This model of interacting anion gas was proposed and exactly solved by the Bethe ansatz in"}
+{"text":"and this basic anion model is studied further for investigating various aspects of the 1D anion gas as well as extended in different directions."}
+{"text":"In mathematical physics, the Eckhaus equation \u2013 or the Kundu\u2013Eckhaus equation \u2013 is a nonlinear partial differential equations within the nonlinear Schr\u00f6dinger class:"}
+{"text":"The equation was independently introduced by Wiktor Eckhaus and by Anjan Kundu to model the propagation of waves in dispersive media."}
+{"text":"The Eckhaus equation can be linearized to the linear Schr\u00f6dinger equation:"}
+{"text":"This linearization also implies that the Eckhaus equation is integrable."}
+{"text":"In quantum mechanics, the rectangular (or, at times, square) potential barrier is a standard one-dimensional problem that demonstrates the phenomena of wave-mechanical tunneling (also called \"quantum tunneling\") and wave-mechanical reflection. The problem consists of solving the one-dimensional time-independent Schr\u00f6dinger equation for a particle encountering a rectangular potential energy barrier. It is usually assumed, as here, that a free particle impinges on the barrier from the left."}
+{"text":"Although classically a particle behaving as a point mass would be reflected if its energy is less than formula_1, a particle actually behaving as a matter wave has a non-zero probability of penetrating the barrier and continuing its travel as a wave on the other side. In classical wave-physics, this effect is known as evanescent wave coupling. The likelihood that the particle will pass through the barrier is given by the transmission coefficient, whereas the likelihood that it is reflected is given by the reflection coefficient. Schr\u00f6dinger's wave-equation allows these coefficients to be calculated."}
+{"text":"The time-independent Schr\u00f6dinger equation for the wave function formula_2 reads"}
+{"text":"where formula_4 is the Hamiltonian, formula_5 is the (reduced)"}
+{"text":"Planck constant, formula_6 is the mass, formula_7 the energy of the particle and"}
+{"text":"is the barrier potential with height formula_9 and width formula_10. formula_11"}
+{"text":"The barrier is positioned between formula_13 and formula_14. The barrier can be shifted to any formula_15 position without changing the results. The first term in the Hamiltonian, formula_16 is the kinetic energy."}
+{"text":"The barrier divides the space in three parts (formula_17). In any of these parts, the potential is constant, meaning that the particle is quasi-free, and the solution of the Schr\u00f6dinger equation can be written as a superposition of left and right moving waves (see free particle). If formula_18"}
+{"text":"where the wave numbers are related to the energy via"}
+{"text":"The coefficients formula_29 have to be found from the boundary conditions of the wave function at formula_13 and formula_14. The wave function and its derivative have to be continuous everywhere, so"}
+{"text":"Inserting the wave functions, the boundary conditions give the following restrictions on the coefficients"}
+{"text":"If the energy equals the barrier height, the second differential of the wavefunction inside the barrier region is 0, and hence the solutions of the Schr\u00f6dinger equation are not exponentials anymore but linear functions of the space coordinate"}
+{"text":"The complete solution of the Schr\u00f6dinger equation is found in the same way as above by matching wave functions and their derivatives at formula_13 and formula_14. That results in the following restrictions on the coefficients:"}
+{"text":"At this point, it is instructive to compare the situation to the classical case. In both cases, the particle behaves as a free particle outside of the barrier region. A classical particle with energy formula_7 larger than the barrier height formula_1 would \"always\" pass the barrier, and a classical particle with formula_48 incident on the barrier would \"always\" get reflected."}
+{"text":"To study the quantum case, consider the following situation: a particle incident on the barrier from the left side (formula_49). It may be reflected (formula_50) or transmitted (formula_51)."}
+{"text":"To find the amplitudes for reflection and transmission for incidence from the left, we put in the above equations formula_52 (incoming particle), formula_53 (reflection), formula_54=0 (no incoming particle from the right), and formula_55 (transmission). We then eliminate the coefficients formula_56 from the equation and solve for formula_57 and formula_58."}
+{"text":"Due to the mirror symmetry of the model, the amplitudes for incidence from the right are the same as those from the left. Note that these expressions hold for any energy formula_61."}
+{"text":"The surprising result is that for energies less than the barrier height, formula_48 there is a non-zero probability"}
+{"text":"for the particle to be transmitted through the barrier, with formula_64. This effect, which differs from the classical case, is called quantum tunneling. The transmission is exponentially suppressed with the barrier width, which can be understood from the functional form of the wave function: Outside of the barrier it oscillates with wave vector formula_65, whereas within the barrier it is exponentially damped over a distance formula_66. If the barrier is much wider than this decay length, the left and right part are virtually independent and tunneling as a consequence is suppressed."}
+{"text":"Equally surprising is that for energies larger than the barrier height, formula_18, the particle may be reflected from the barrier with a non-zero probability"}
+{"text":"The transmission and reflection probabilities are in fact oscillating with formula_71. The classical result of perfect transmission without any reflection (formula_72, formula_73) is reproduced not only in the limit of high energy formula_74 but also when the energy and barrier width satisfy formula_75, where formula_76 (see peaks near formula_77 and 1.8 in the above figure). Note that the probabilities and amplitudes as written are for any energy (above\/below) the barrier height."}
+{"text":"The transmission probability at formula_28 evaluates to"}
+{"text":"The calculation presented above may at first seem unrealistic and hardly"}
+{"text":"useful. However it has proved to be a suitable model for a variety of real-life"}
+{"text":"systems. One such example are interfaces between two conducting materials. In the bulk of the materials, the motion of the electrons is quasi-free and can be described by the kinetic term in the above Hamiltonian with an effective mass formula_6. Often the surfaces of such materials are covered with oxide layers or are not ideal for other reasons. This thin, non-conducting layer may then be modeled by a barrier potential as above. Electrons may then tunnel from one material to the other giving rise to a current."}
+{"text":"The operation of a scanning tunneling microscope (STM) relies on this tunneling effect. In that case, the barrier is due to the gap between the tip of the STM and the underlying object. Since the tunnel current depends exponentially on the barrier width, this device is extremely sensitive to height variations on the examined sample."}
+{"text":"The above model is one-dimensional, while space is three-dimensional. One should solve the Schr\u00f6dinger equation in three dimensions. On the other hand, many systems only change along one coordinate direction and are translationally invariant along the others; they are separable. The Schr\u00f6dinger equation may then be reduced to the case considered here by an ansatz for the wave function of the type: formula_81."}
+{"text":"For another, related model of a barrier, see Delta potential barrier (QM), which can be regarded as a special case of the finite potential barrier. All results from this article immediately apply to the delta potential barrier by taking the limits formula_82 while keeping formula_83 constant."}
+{"text":"In the general theory of relativity the Einstein field equations (EFE; also known as Einstein's equations) relate the geometry of spacetime to the distribution of matter within it."}
+{"text":"The equations were first published by Einstein in 1915 in the form of a tensor equation which related the local \"\" (expressed by the Einstein tensor) with the local energy, momentum and stress within that spacetime (expressed by the stress\u2013energy tensor)."}
+{"text":"Analogously to the way that electromagnetic fields are related to the distribution of charges and currents via Maxwell's equations, the EFE relate the spacetime geometry to the distribution of mass\u2013energy, momentum and stress, that is, they determine the metric tensor of spacetime for a given arrangement of stress\u2013energy\u2013momentum in the spacetime. The relationship between the metric tensor and the Einstein tensor allows the EFE to be written as a set of non-linear partial differential equations when used in this way. The solutions of the EFE are the components of the metric tensor. The inertial trajectories of particles and radiation (geodesics) in the resulting geometry are then calculated using the geodesic equation."}
+{"text":"As well as implying local energy\u2013momentum conservation, the EFE reduce to Newton's law of gravitation in the limit of a weak gravitational field and velocities that are much less than the speed of light."}
+{"text":"Exact solutions for the EFE can only be found under simplifying assumptions such as symmetry. Special classes of exact solutions are most often studied since they model many gravitational phenomena, such as rotating black holes and the expanding universe. Further simplification is achieved in approximating the spacetime as having only small deviations from flat spacetime, leading to the linearized EFE. These equations are used to study phenomena such as gravitational waves."}
+{"text":"The Einstein field equations (EFE) may be written in the form:"}
+{"text":"where is the Einstein tensor, is the metric tensor, is the stress\u2013energy tensor, is the cosmological constant and is the Einstein gravitational constant."}
+{"text":"where is the Ricci curvature tensor, and is the scalar curvature. This is a symmetric second-degree tensor that depends on only the metric tensor and its first- and second derivatives."}
+{"text":"The Einstein gravitational constant is defined as"}
+{"text":"where is the Newtonian constant of gravitation and is the speed of light in vacuum."}
+{"text":"The EFE can thus also be written as"}
+{"text":"In standard units, each term on the left has units of 1\/length2."}
+{"text":"The expression on the left represents the curvature of spacetime as determined by the metric; the expression on the right represents the stress\u2013energy\u2013momentum content of spacetime. The EFE can then be interpreted as a set of equations dictating how stress\u2013energy\u2013momentum determines the curvature of spacetime."}
+{"text":"These equations, together with the geodesic equation, which dictates how freely falling matter moves through spacetime, form the core of the mathematical formulation of general relativity."}
+{"text":"The EFE is a tensor equation relating a set of symmetric 4\u00a0\u00d7\u00a04 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system."}
+{"text":"Although the Einstein field equations were initially formulated in the context of a four-dimensional theory, some theorists have explored their consequences in dimensions. The equations in contexts outside of general relativity are still referred to as the Einstein field equations. The vacuum field equations (obtained when is everywhere zero) define Einstein manifolds."}
+{"text":"The equations are more complex than they appear. Given a specified distribution of matter and energy in the form of a stress\u2013energy tensor, the EFE are understood to be equations for the metric tensor , since both the Ricci tensor and scalar curvature depend on the metric in a complicated nonlinear manner. When fully written out, the EFE are a system of ten coupled, nonlinear, hyperbolic-elliptic partial differential equations."}
+{"text":"The above form of the EFE is the standard established by Misner, Thorne, and Wheeler (MTW). The authors analyzed conventions that exist and classified these according to three signs ([S1] [S2] [S3]):"}
+{"text":"The third sign above is related to the choice of convention for the Ricci tensor:"}
+{"text":"With these definitions Misner, Thorne, and Wheeler classify themselves as , whereas Weinberg (1972) is , Peebles (1980) and Efstathiou et al. (1990) are , Rindler (1977), Atwater (1974), Collins Martin & Squires (1989) and Peacock (1999) are ."}
+{"text":"Authors including Einstein have used a different sign in their definition for the Ricci tensor which results in the sign of the constant on the right side being negative:"}
+{"text":"The sign of the cosmological term would change in both these versions if the metric sign convention is used rather than the MTW metric sign convention adopted here."}
+{"text":"Taking the trace with respect to the metric of both sides of the EFE one gets"}
+{"text":"where is the spacetime dimension. Solving for and substituting this in the original EFE, one gets the following equivalent \"trace-reversed\" form:"}
+{"text":"Reversing the trace again would restore the original EFE. The trace-reversed form may be more convenient in some cases (for example, when one is interested in weak-field limit and can replace in the expression on the right with the Minkowski metric without significant loss of accuracy)."}
+{"text":"the term containing the cosmological constant was absent from the version in which he originally published them. Einstein then included the term with the cosmological constant to allow for a universe that is not expanding or contracting. This effort was unsuccessful because:"}
+{"text":"Einstein then abandoned , remarking to George Gamow \"that the introduction of the cosmological term was the biggest blunder of his life\"."}
+{"text":"The inclusion of this term does not create inconsistencies. For many years the cosmological constant was almost universally assumed to be zero. More recent astronomical observations have shown an accelerating expansion of the universe, and to explain this a positive value of is needed. The cosmological constant is negligible at the scale of a galaxy or smaller."}
+{"text":"Einstein thought of the cosmological constant as an independent parameter, but its term in the field equation can also be moved algebraically to the other side and incorporated as part of the stress\u2013energy tensor:"}
+{"text":"This tensor describes a vacuum state with an energy density and isotropic pressure that are fixed constants and given by"}
+{"text":"where it is assumed that has SI unit m and is defined as above."}
+{"text":"The existence of a cosmological constant is thus equivalent to the existence of a vacuum energy and a pressure of opposite sign. This has led to the terms \"cosmological constant\" and \"vacuum energy\" being used interchangeably in general relativity."}
+{"text":"General relativity is consistent with the local conservation of energy and momentum expressed as"}
+{"text":"which expresses the local conservation of stress\u2013energy. This conservation law is a physical requirement. With his field equations Einstein ensured that general relativity is consistent with this conservation condition."}
+{"text":"The nonlinearity of the EFE distinguishes general relativity from many other fundamental physical theories. For example, Maxwell's equations of electromagnetism are linear in the electric and magnetic fields, and charge and current distributions (i.e. the sum of two solutions is also a solution); another example is Schr\u00f6dinger's equation of quantum mechanics, which is linear in the wavefunction."}
+{"text":"The EFE reduce to Newton's law of gravity by using both the weak-field approximation and the slow-motion approximation. In fact, the constant appearing in the EFE is determined by making these two approximations."}
+{"text":"If the energy\u2013momentum tensor is zero in the region under consideration, then the field equations are also referred to as the vacuum field equations. By setting in the trace-reversed field equations, the vacuum equations can be written as"}
+{"text":"In the case of nonzero cosmological constant, the equations are"}
+{"text":"The solutions to the vacuum field equations are called vacuum solutions. Flat Minkowski space is the simplest example of a vacuum solution. Nontrivial examples include the Schwarzschild solution and the Kerr solution."}
+{"text":"Manifolds with a vanishing Ricci tensor, , are referred to as Ricci-flat manifolds and manifolds with a Ricci tensor proportional to the metric as Einstein manifolds."}
+{"text":"If the energy\u2013momentum tensor is that of an electromagnetic field in free space, i.e. if the electromagnetic stress\u2013energy tensor"}
+{"text":"is used, then the Einstein field equations are called the \"Einstein\u2013Maxwell equations\" (with cosmological constant , taken to be zero in conventional relativity theory):"}
+{"text":"Additionally, the covariant Maxwell equations are also applicable in free space:"}
+{"text":"where the semicolon represents a covariant derivative, and the brackets denote anti-symmetrization. The first equation asserts that the 4-divergence of the 2-form is zero, and the second that its exterior derivative is zero. From the latter, it follows by the Poincar\u00e9 lemma that in a coordinate chart it is possible to introduce an electromagnetic field potential such that"}
+{"text":"in which the comma denotes a partial derivative. This is often taken as equivalent to the covariant Maxwell equation from which it is derived. However, there are global solutions of the equation that may lack a globally defined potential."}
+{"text":"The solutions of the Einstein field equations are metrics of spacetime. These metrics describe the structure of the spacetime including the inertial motion of objects in the spacetime. As the field equations are non-linear, they cannot always be completely solved (i.e. without making approximations). For example, there is no known complete solution for a spacetime with two massive bodies in it (which is a theoretical model of a binary star system, for example). However, approximations are usually made in these cases. These are commonly referred to as post-Newtonian approximations. Even so, there are several cases where the field equations have been solved completely, and those are called exact solutions."}
+{"text":"The study of exact solutions of Einstein's field equations is one of the activities of cosmology. It leads to the prediction of black holes and to different models of evolution of the universe."}
+{"text":"One can also discover new solutions of the Einstein field equations via the method of orthonormal frames as pioneered by Ellis and MacCallum. In this approach, the Einstein field equations are reduced to a set of coupled, nonlinear, ordinary differential equations. As discussed by Hsu and Wainwright, self-similar solutions to the Einstein field equations are fixed points of the resulting dynamical system. New solutions have been discovered using these methods by LeBlanc and Kohli and Haslam."}
+{"text":"The nonlinearity of the EFE makes finding exact solutions difficult. One way of solving the field equations is to make an approximation, namely, that far from the source(s) of gravitating matter, the gravitational field is very weak and the spacetime approximates that of Minkowski space. The metric is then written as the sum of the Minkowski metric and a term representing the deviation of the true metric from the Minkowski metric, ignoring higher-power terms. This linearization procedure can be used to investigate the phenomena of gravitational radiation."}
+{"text":"Despite the EFE as written containing the inverse of the metric tensor, they can be arranged in a form that contains the metric tensor in polynomial form and without its inverse. First, the determinant of the metric in 4 dimensions can be written"}
+{"text":"using the Levi-Civita symbol; and the inverse of the metric in 4 dimensions can be written as:"}
+{"text":"Substituting this definition of the inverse of the metric into the equations then multiplying both sides by a suitable power of to eliminate it from the denominator results in polynomial equations in the metric tensor and its first and second derivatives. The action from which the equations are derived can also be written in polynomial form by suitable redefinitions of the fields."}
+{"text":"In physics and engineering, a constitutive equation or constitutive relation is a relation between two physical quantities (especially kinetic quantities as related to kinematic quantities) that is specific to a material or substance, and approximates the response of that material to external stimuli, usually as applied fields or forces. They are combined with other equations governing physical laws to solve physical problems; for example in fluid mechanics the flow of a fluid in a pipe, in solid state physics the response of a crystal to an electric field, or in structural analysis, the connection between applied stresses or forces to strains or deformations."}
+{"text":"Some constitutive equations are simply phenomenological; others are derived from first principles. A common approximate constitutive equation frequently is expressed as a simple proportionality using a parameter taken to be a property of the material, such as electrical conductivity or a spring constant. However, it is often necessary to account for the directional dependence of the material, and the scalar parameter is generalized to a tensor. Constitutive relations are also modified to account for the rate of response of materials and their non-linear behavior. See the article Linear response function."}
+{"text":"The first constitutive equation (constitutive law) was developed by Robert Hooke and is known as Hooke's law. It deals with the case of linear elastic materials. Following this discovery, this type of equation, often called a \"stress-strain relation\" in this example, but also called a \"constitutive assumption\" or an \"equation of state\" was commonly used. Walter Noll advanced the use of constitutive equations, clarifying their classification and the role of invariance requirements, constraints, and definitions of terms"}
+{"text":"like \"material\", \"isotropic\", \"aeolotropic\", etc. The class of \"constitutive relations\" of the form \"stress rate = f (velocity gradient, stress, density)\" was the subject of Walter Noll's dissertation in 1954 under Clifford Truesdell."}
+{"text":"In modern condensed matter physics, the constitutive equation plays a major role. See Linear constitutive equations and Nonlinear correlation functions."}
+{"text":"Friction is a complicated phenomenon. Macroscopically, the friction force \"F\" between the interface of two materials can be modelled as proportional to the reaction force \"R\" at a point of contact between two interfaces through a dimensionless coefficient of friction \"\u03bc\"f, which depends on the pair of materials:"}
+{"text":"This can be applied to static friction (friction preventing two stationary objects from slipping on their own), kinetic friction (friction between two objects scraping\/sliding past each other), or rolling (frictional force which prevents slipping but causes a torque to exert on a round object)."}
+{"text":"The stress-strain constitutive relation for linear materials is commonly known as Hooke's law. In its simplest form, the law defines the spring constant (or elasticity constant) \"k\" in a scalar equation, stating the tensile\/compressive force is proportional to the extended (or contracted) displacement \"x\":"}
+{"text":"meaning the material responds linearly. Equivalently, in terms of the stress \"\u03c3\", Young's modulus \"E\", and strain \"\u03b5\" (dimensionless):"}
+{"text":"In general, forces which deform solids can be normal to a surface of the material (normal forces), or tangential (shear forces), this can be described mathematically using the stress tensor:"}
+{"text":"where \"C\" is the elasticity tensor and \"S\" is the compliance tensor."}
+{"text":"Several classes of deformations in elastic materials are the following:"}
+{"text":"The relative speed of separation \"v\"separation of an object A after a collision with another object B is related to the relative speed of approach \"v\"approach by the coefficient of restitution, defined by Newton's experimental impact law:"}
+{"text":"The electromagnetic wave equation is a second-order partial differential equation that describes the propagation of electromagnetic waves through a medium or in a vacuum. It is a three-dimensional form of the wave equation. The homogeneous form of the equation, written in terms of either the electric field or the magnetic field , takes the form:"}
+{"text":"is the speed of light (i.e. phase velocity) in a medium with permeability , and permittivity , and is the Laplace operator. In a vacuum, meters per second, a fundamental physical constant. The electromagnetic wave equation derives from Maxwell's equations. In most older literature, is called the \"magnetic flux density\" or \"magnetic induction\"."}
+{"text":"The origin of the electromagnetic wave equation."}
+{"text":"In his 1865 paper titled A Dynamical Theory of the Electromagnetic Field, James Clerk Maxwell utilized the correction to Amp\u00e8re's circuital law that he had made in part III of his 1861 paper On Physical Lines of Force. In \"Part VI\" of his 1864 paper titled \"Electromagnetic Theory of Light\", Maxwell combined displacement current with some of the other equations of electromagnetism and he obtained a wave equation with a speed equal to the speed of light. He commented:"}
+{"text":"The agreement of the results seems to show that light and magnetism are affections of the same substance, and that light is an electromagnetic disturbance propagated through the field according to electromagnetic laws."}
+{"text":"Maxwell's derivation of the electromagnetic wave equation has been replaced in modern physics education by a much less cumbersome method involving combining the corrected version of Amp\u00e8re's circuital law with Faraday's law of induction."}
+{"text":"To obtain the electromagnetic wave equation in a vacuum using the modern method, we begin with the modern 'Heaviside' form of Maxwell's equations. In a vacuum- and charge-free space, these equations are:"}
+{"text":"These are the general Maxwell's equations specialized to the case with charge and current both set to zero."}
+{"text":"Taking the curl of the curl equations gives:"}
+{"text":"where is any vector function of space. And"}
+{"text":"where is a dyadic which when operated on by the divergence operator yields a vector. Since"}
+{"text":"then the first term on the right in the identity vanishes and we obtain the wave equations:"}
+{"text":"Covariant form of the homogeneous wave equation."}
+{"text":"These relativistic equations can be written in contravariant form as"}
+{"text":"The electromagnetic wave equation is modified in two ways, the derivative is replaced with the covariant derivative and a new term that depends on the curvature appears."}
+{"text":"where formula_15 is the Ricci curvature tensor and the semicolon indicates covariant differentiation."}
+{"text":"The generalization of the Lorenz gauge condition in curved spacetime is assumed:"}
+{"text":"Localized time-varying charge and current densities can act as sources of electromagnetic waves in a vacuum. Maxwell's equations can be written in the form of a wave equation with sources. The addition of sources to the wave equations makes the partial differential equations inhomogeneous."}
+{"text":"Solutions to the homogeneous electromagnetic wave equation."}
+{"text":"The general solution to the electromagnetic wave equation is a linear superposition of waves of the form"}
+{"text":"for virtually \"any\" well-behaved function of dimensionless argument , where is the angular frequency (in radians per second), and is the wave vector (in radians per meter)."}
+{"text":"Although the function can be and often is a monochromatic sine wave, it does not have to be sinusoidal, or even periodic. In practice, cannot have infinite periodicity because any real electromagnetic wave must always have a finite extent in time and space. As a result, and based on the theory of Fourier decomposition, a real wave must consist of the superposition of an infinite set of sinusoidal frequencies."}
+{"text":"In addition, for a valid solution, the wave vector and the angular frequency are not independent; they must adhere to the dispersion relation:"}
+{"text":"where is the wavenumber and is the wavelength. The variable can only be used in this equation when the electromagnetic wave is in a vacuum."}
+{"text":"The simplest set of solutions to the wave equation result from assuming sinusoidal waveforms of a single frequency in separable form:"}
+{"text":"Consider a plane defined by a unit normal vector"}
+{"text":"Then planar traveling wave solutions of the wave equations are"}
+{"text":"where is the position vector (in meters)."}
+{"text":"These solutions represent planar waves traveling in the direction of the normal vector . If we define the z direction as the direction of . and the x direction as the direction of , then by Faraday's Law the magnetic field lies in the y direction and is related to the electric field by the relation"}
+{"text":"Because the divergence of the electric and magnetic fields are zero, there are no fields in the direction of propagation."}
+{"text":"This solution is the linearly polarized solution of the wave equations. There are also circularly polarized solutions in which the fields rotate about the normal vector."}
+{"text":"Because of the linearity of Maxwell's equations in a vacuum, solutions can be decomposed into a superposition of sinusoids. This is the basis for the Fourier transform method for the solution of differential equations. The sinusoidal solution to the electromagnetic wave equation takes the form"}
+{"text":"The wave vector is related to the angular frequency by"}
+{"text":"where is the wavenumber and is the wavelength."}
+{"text":"The electromagnetic spectrum is a plot of the field magnitudes (or energies) as a function of wavelength."}
+{"text":"Assuming monochromatic fields varying in time as formula_30, if one uses Maxwell's Equations to eliminate , the electromagnetic wave equation reduces to the Helmholtz Equation for :"}
+{"text":"with \"k = \u03c9\/c\" as given above. Alternatively, one can eliminate in favor of to obtain:"}
+{"text":"A generic electromagnetic field with frequency can be written as a sum of solutions to these two equations. The three-dimensional solutions of the Helmholtz Equation can be expressed as expansions in spherical harmonics with coefficients proportional to the spherical Bessel functions. However, applying this expansion to each vector component of or will give solutions that are not generically divergence-free (), and therefore require additional restrictions on the coefficients."}
+{"text":"The multipole expansion circumvents this difficulty by expanding not or , but or into spherical harmonics. These expansions still solve the original Helmholtz equations for and because for a divergence-free field , . The resulting expressions for a generic electromagnetic field are:"}
+{"text":"where formula_35 and formula_36 are the \"electric multipole fields of order (l, m)\", and formula_37 and formula_38 are the corresponding \"magnetic multipole fields\", and and are the coefficients of the expansion. The multipole fields are given by"}
+{"text":"where \"h\"l(1,2)(\"x\") are the spherical Hankel functions, \"E\"l(1,2) and \"B\"l(1,2) are determined by boundary conditions, and"}
+{"text":"are vector spherical harmonics normalized so that"}
+{"text":"The multipole expansion of the electromagnetic field finds application in a number of problems involving spherical symmetry, for example antennae radiation patterns, or nuclear gamma decay. In these applications, one is often interested in the power radiated in the far-field. In this regions, the and fields asymptote to"}
+{"text":"The angular distribution of the time-averaged radiated power is then given by"}
+{"text":"The Fresnel equations (or Fresnel coefficients) describe the reflection and transmission of light (or electromagnetic radiation in general) when incident on an interface between different optical media. They were deduced by Augustin-Jean Fresnel () who was the first to understand that light is a transverse wave, even though no one realized that the \"vibrations\" of the wave were electric and magnetic fields. For the first time, polarization could be understood quantitatively, as Fresnel's equations correctly predicted the differing behaviour of waves of the \"s\" and \"p\" polarizations incident upon a material interface."}
+{"text":"When light strikes the interface between a medium with refractive index \"n\"1 and a second medium with refractive index \"n\"2, both reflection and refraction of the light may occur. The Fresnel equations give the ratio of the \"reflected\" wave's electric field to the incident wave's electric field, and the ratio of the \"transmitted\" wave's electric field to the incident wave's electric field, for each of two components of polarization. (The \"magnetic\" fields can also be related using similar coefficients.) These ratios are generally complex, describing not only the relative amplitudes but also the phase shifts at the interface."}
+{"text":"The equations assume the interface between the media is flat and that the media are homogeneous and isotropic. The incident light is assumed to be a plane wave, which is sufficient to solve any problem since any incident light field can be decomposed into plane waves and polarizations."}
+{"text":"There are two sets of Fresnel coefficients for two different linear polarization components of the incident wave. Since any polarization state can be resolved into a combination of two orthogonal linear polarizations, this is sufficient for any problem. Likewise, unpolarized (or \"randomly polarized\") light has an equal amount of power in each of two linear polarizations."}
+{"text":"The s polarization refers to polarization of a wave's electric field \"normal\" to the plane of incidence (the direction in the derivation below); then the magnetic field is \"in\" the plane of incidence. The p polarization refers to polarization of the electric field \"in\" the plane of incidence (the plane in the derivation below); then the magnetic field is \"normal\" to the plane of incidence."}
+{"text":"Although the reflectivity and transmission are dependent on polarization, at normal incidence (\"\u03b8\"\u00a0=\u00a00) there is no distinction between them so all polarization states are governed by a single set of Fresnel coefficients (and another special case is mentioned below in which that is true)."}
+{"text":"In the diagram on the right, an incident plane wave in the direction of the ray IO strikes the interface between two media of refractive indices \"n\"1 and \"n\"2 at point O. Part of the wave is reflected in the direction OR, and part refracted in the direction OT. The angles that the incident, reflected and refracted rays make to the normal of the interface are given as \"\u03b8\"i, \"\u03b8\"r and \"\u03b8\"t, respectively."}
+{"text":"The relationship between these angles is given by the law of reflection:"}
+{"text":"The behavior of light striking the interface is solved by considering the electric and magnetic fields that constitute an electromagnetic wave, and the laws of electromagnetism, as shown below. The ratio of waves' electric field (or magnetic field) amplitudes are obtained, but in practice one is more often interested in formulae which determine \"power\" coefficients, since power (or irradiance) is what can be directly measured at optical frequencies. The power of a wave is generally proportional to the square of the electric (or magnetic) field amplitude."}
+{"text":"We call the fraction of the incident power that is reflected from the interface the reflectance (or \"reflectivity\", or \"power reflection coefficient\") \"R\", and the fraction that is refracted into the second medium is called the transmittance (or \"transmissivity\", or \"power transmission coefficient\") \"T\". Note that these are what would be measured right \"at\" each side of an interface and do not account for attenuation of a wave in an absorbing medium \"following\" transmission or reflection."}
+{"text":"while the reflectance for p-polarized light is"}
+{"text":"where and are the wave impedances of media 1 and 2, respectively."}
+{"text":"We assume that the media are non-magnetic (i.e., \"\u03bc\"1 = \"\u03bc\"2 = \"\u03bc\"0), which is typically a good approximation at optical frequencies (and for transparent media at other frequencies). Then the wave impedances are determined solely by the refractive indices \"n\"1 and \"n\"2:"}
+{"text":"where is the impedance of free space and =1,2. Making this substitution, we obtain equations using the refractive indices:"}
+{"text":"The second form of each equation is derived from the first by eliminating \"\u03b8\"t using Snell's law and trigonometric identities."}
+{"text":"As a consequence of conservation of energy, one can find the transmitted power (or more correctly, irradiance: power per unit area) simply as the portion of the incident power that isn't reflected:"}
+{"text":"Note that all such intensities are measured in terms of a wave's irradiance in the direction normal to the interface; this is also what is measured in typical experiments. That number could be obtained from irradiances \"in the direction of an incident or reflected wave\" (given by the magnitude of a wave's Poynting vector) multiplied by cos\"\u03b8\" for a wave at an angle \"\u03b8\" to the normal direction (or equivalently, taking the dot product of the Poynting vector with the unit vector normal to the interface). This complication can be ignored in the case of the reflection coefficient, since cos\"\u03b8\"i\u00a0=\u00a0cos\"\u03b8\"r, so that the ratio of reflected to incident irradiance in the wave's direction is the same as in the direction normal to the interface."}
+{"text":"Although these relationships describe the basic physics, in many practical applications one is concerned with \"natural light\" that can be described as unpolarized. That means that there is an equal amount of power in the \"s\" and \"p\" polarizations, so that the \"effective\" reflectivity of the material is just the average of the two reflectivities:"}
+{"text":"For low-precision applications involving unpolarized light, such as computer graphics, rather than rigorously computing the effective reflection coefficient for each angle, Schlick's approximation is often used."}
+{"text":"For the case of normal incidence, formula_11, and there is no distinction between s and p polarization. Thus, the reflectance simplifies to"}
+{"text":"For common glass (\"n\"2 \u2248 1.5) surrounded by air (\"n\"1=1), the power reflectance at normal incidence can be seen to be about 4%, or 8% accounting for both sides of a glass pane."}
+{"text":"At a dielectric interface from to , there is a particular angle of incidence at which goes to zero and a p-polarised incident wave is purely refracted, thus all reflected light is s-polarised. This angle is known as Brewster's angle, and is around 56\u00b0 for \"n\"1=1 and \"n\"2=1.5 (typical glass)."}
+{"text":"When light travelling in a denser medium strikes the surface of a less dense medium (i.e., ), beyond a particular incidence angle known as the \"critical angle\", all light is reflected and . This phenomenon, known as total internal reflection, occurs at incidence angles for which Snell's law predicts that the sine of the angle of refraction would exceed unity (whereas in fact sin\"\u03b8\"\u00a0\u2264\u00a01 for all real \"\u03b8\"). For glass with \"n\"=1.5 surrounded by air, the critical angle is approximately 41\u00b0."}
+{"text":"The above equations relating powers (which could be measured with a photometer for instance) are derived from the Fresnel equations which solve the physical problem in terms of electromagnetic field complex amplitudes, i.e., considering phase in addition to power (which is important in multipath propagation for instance). Those underlying equations supply generally complex-valued ratios of those EM fields and may take several different forms, depending on formalisms used. The complex amplitude coefficients are usually represented by lower case \"r\" and \"t\" (whereas the power coefficients are capitalized)."}
+{"text":"In the following, the reflection coefficient is the ratio of the reflected wave's electric field complex amplitude to that of the incident wave. The transmission coefficient is the ratio of the transmitted wave's electric field complex amplitude to that of the incident wave. We require separate formulae for the \"s\" and \"p\" polarizations. In each case we assume an incident plane wave at an angle of incidence formula_13 on a plane interface, reflected at an angle formula_14, and with a transmitted wave at an angle formula_15, corresponding to the above figure. Note that in the cases of an interface into an absorbing material (where \"n\" is complex) or total internal reflection, the angle of transmission might not evaluate to a real number."}
+{"text":"We consider the sign of a wave's electric field in relation to a wave's direction. Consequently, for \"p\" polarization at normal incidence, the positive direction of electric field for an incident wave (to the left) is \"opposite\" that of a reflected wave (also to its left); for \"s\" polarization both are the same (upward)."}
+{"text":"One can see that and . One can write similar equations applying to the ratio of magnetic fields of the waves, but these are usually not required."}
+{"text":"Because the reflected and incident waves propagate in the same medium and make the same angle with the normal to the surface, the power reflection coefficient R is just the squared magnitude of \"r\":"}
+{"text":"On the other hand, calculation of the power transmission coefficient is less straightforward, since the light travels in different directions in the two media. What's more, the wave impedances in the two media differ; power is only proportional to the square of the amplitude when the media's impedances are the same (as they are for the reflected wave). This results in:"}
+{"text":"The factor of is the reciprocal of the ratio of the media's wave impedances (since we assume \"\u03bc\"=\"\u03bc\"0). The factor of is from expressing power \"in the direction\" normal to the interface, for both the incident and transmitted waves."}
+{"text":"In the case of total internal reflection where the power transmission is zero, nevertheless describes the electric field (including its phase) just beyond the interface. This is an evanescent field which does not propagate as a wave (thus =0) but has nonzero values very close to the interface. The phase shift of the reflected wave on total internal reflection can similarly be obtained from the phase angles of and (whose magnitudes are unity). These phase shifts are different for \"s\" and \"p\" waves, which is the well-known principle by which total internal reflection is used to effect polarization transformations."}
+{"text":"In the above formula for , if we put formula_19 (Snell's law) and multiply the numerator and denominator by , we\u00a0obtain"}
+{"text":"If we do likewise with the formula for , the result is easily shown to be equivalent to"}
+{"text":"These formulas are known respectively as \"Fresnel's sine law\" and \"Fresnel's tangent law\". Although at normal incidence these expressions reduce to 0\/0, one can see that they yield the correct results in the limit as ."}
+{"text":"When light makes multiple reflections between two or more parallel surfaces, the multiple beams of light generally interfere with one another, resulting in net transmission and reflection amplitudes that depend on the light's wavelength. The interference, however, is seen only when the surfaces are at distances comparable to or smaller than the light's coherence length, which for ordinary white light is few micrometers; it can be much larger for light from a laser."}
+{"text":"An example of interference between reflections is the iridescent colours seen in a soap bubble or in thin oil films on water. Applications include Fabry\u2013P\u00e9rot interferometers, antireflection coatings, and optical filters. A quantitative analysis of these effects is based on the Fresnel equations, but with additional calculations to account for interference."}
+{"text":"The transfer-matrix method, or the recursive Rouard method can be used to solve multiple-surface problems."}
+{"text":"In 1808, \u00c9tienne-Louis Malus discovered that when a ray of light was reflected off a non-metallic surface at the appropriate angle, it behaved like \"one\" of the two rays emerging from a doubly-refractive calcite crystal. He later coined the term \"polarization\" to describe this behavior.\u00a0 In 1815, the dependence of the polarizing angle on the refractive index was determined experimentally by David Brewster. But the \"reason\" for that dependence was such a deep mystery that in late 1817, Thomas\u00a0Young was moved to write:"}
+{"text":"In 1821, however, Augustin-Jean Fresnel derived results equivalent to his sine and tangent laws (above), by modeling light waves as transverse elastic waves with vibrations perpendicular to what had previously been called the plane of polarization. Fresnel promptly confirmed by experiment that the equations correctly predicted the direction of polarization of the reflected beam when the incident beam was polarized at 45\u00b0 to the plane of incidence, for light incident from air onto glass or water; in particular, the equations gave the correct polarization at Brewster's angle. The experimental confirmation was reported in a \"postscript\" to the work in which Fresnel first revealed his theory that light waves, including \"unpolarized\" waves, were \"purely\" transverse."}
+{"text":"Details of Fresnel's derivation, including the modern forms of the sine law and tangent law, were given later, in a memoir read to the French Academy of Sciences in January 1823. That derivation combined conservation of energy with continuity of the \"tangential\" vibration at the interface, but failed to allow for any condition on the \"normal\" component of vibration. The first derivation from \"electromagnetic\" principles was given by Hendrik Lorentz in 1875."}
+{"text":"In the same memoir of January 1823, Fresnel found that for angles of incidence greater than the critical angle, his formulas for the reflection coefficients ( and ) gave complex values with unit magnitudes. Noting that the magnitude, as usual, represented the ratio of peak amplitudes, he guessed that the argument represented the phase shift, and verified the hypothesis experimentally. The verification involved"}
+{"text":"Thus he finally had a quantitative theory for what we now call the \"Fresnel rhomb\" \u2014 a device that he had been using in experiments, in one form or another, since 1817 (see \"Fresnel rhomb \u00a7History\")."}
+{"text":"The success of the complex reflection coefficient inspired James MacCullagh and Augustin-Louis Cauchy, beginning in 1836, to analyze reflection from metals by using the Fresnel equations with a complex refractive index."}
+{"text":"Four weeks before he presented his completed theory of total internal reflection and the rhomb, Fresnel submitted a memoir in which he introduced the needed terms \"linear polarization\", \"circular polarization\", and \"elliptical polarization\", and in which he explained optical rotation as a species of birefringence: linearly-polarized light can be resolved into two circularly-polarized components rotating in opposite directions, and if these propagate at different speeds, the phase difference between them \u2014 hence the orientation of their linearly-polarized resultant \u2014 will vary continuously with distance."}
+{"text":"Thus Fresnel's interpretation of the complex values of his reflection coefficients marked the confluence of several streams of his research and, arguably, the essential completion of his reconstruction of physical optics on the transverse-wave hypothesis (see \"Augustin-Jean Fresnel\")."}
+{"text":"Here we systematically derive the above relations from electromagnetic premises."}
+{"text":"In order to compute meaningful Fresnel coefficients, we must assume that the medium is (approximately) linear and homogeneous. If the medium is also isotropic, the four field vectors are related by"}
+{"text":"where \"\u03f5\" and \"\u03bc\" are scalars, known respectively as the (electric) \"permittivity\" and the (magnetic) \"permeability\" of the medium. For a vacuum, these have the values \"\u03f5\"0 and \"\u03bc\"0, respectively. Hence we define the \"relative\" permittivity (or dielectric constant) , and the \"relative\" permeability ."}
+{"text":"In optics it is common to assume that the medium is non-magnetic, so that \"\u03bc\"rel=1. For ferromagnetic materials at radio\/microwave frequencies, larger values of \"\u03bc\"rel must be taken into account. But, for optically transparent media, and for all other materials at optical frequencies (except possible metamaterials), \"\u03bc\"rel is indeed very close to 1; that is, \"\u03bc\"\u2248\"\u03bc\"0."}
+{"text":"In optics, one usually knows the refractive index \"n\" of the medium, which is the ratio of the speed of light in a vacuum (\"c\") to the speed of light in the medium. In the analysis of partial reflection and transmission, one is also interested in the electromagnetic wave impedance \"Z\", which is the ratio of the amplitude of \"E\" to the amplitude of \"H\". It is therefore desirable to express \"n\" and \"Z\" in terms of \"\u03f5\" and \"\u03bc\", and thence to relate \"Z\" to \"n\". The last-mentioned relation, however, will make it convenient to derive the reflection coefficients in terms of the wave \"admittance\" \"Y\", which is the reciprocal of the wave impedance \"Z\"."}
+{"text":"In the case of \"uniform plane sinusoidal\" waves, the wave impedance or admittance is known as the \"intrinsic\" impedance or admittance of the medium. This case is the one for which the Fresnel coefficients are to be derived."}
+{"text":"In a uniform plane sinusoidal electromagnetic wave, the electric field has the form"}
+{"text":"where \"E\"k is the (constant) complex amplitude vector,\u00a0 \"i\" is the imaginary unit,\u00a0 \"k\" is the wave vector (whose magnitude is the angular wavenumber \"k\"),\u00a0 \"r\" is the position vector,\u00a0 \"\u03c9\" is the angular frequency,\u00a0 \"t\" is time, and it is understood that the \"real part\" of the expression is the physical field.\u00a0 The value of the expression is unchanged if the position \"r\" varies in a direction normal to \"k\"; hence \"k\" \"is normal to the wavefronts\"."}
+{"text":"To advance the phase by the angle \"\u03d5\", we replace \"\u03c9t\" by \"\u03c9t\" + \"\u03d5\" (that is, we replace \u2212\"\u03c9t\" by \u2212(\"\u03c9t\" + \"\u03d5\")), with the result that the (complex) field is multiplied by \"e\"\u2212i\"\u03d5\". So a phase \"advance\" is equivalent to multiplication by a complex constant with a \"negative\" argument. This becomes more obvious when the field () is factored as \"E\"k\"e\"i\"k\u22c5r\"\"e\"\u2212i\"\u03c9t\", where the last factor contains the time-dependence. That factor also implies that differentiation w.r.t.\u00a0time corresponds to multiplication by \u2212i\"\u03c9\"."}
+{"text":"If \"\u2113\" is the component of \"r\" in the direction of \"k\", the field () can be written \"E\"k\"e\"i(\"k\u2113\"\u2212\"\u03c9t\").\u00a0 If the argument of the exponential factor is to be constant,\u00a0 \"\u2113\"\u00a0must increase at the velocity \"\u03c9\"\/\"k\", known as the \"phase velocity\" (\"v\"p). This in turn is equal to \"c\"\/\"n\". Solving for \"k\" gives"}
+{"text":"As usual, we drop the time-dependent factor \"e\"\u2212i\"\u03c9t\", which is understood to multiply every complex field quantity. The electric field for a uniform plane sine wave will then be represented by the location-dependent \"phasor\""}
+{"text":"For fields of that form, Faraday's law and the Maxwell-Amp\u00e8re law respectively reduce to"}
+{"text":"Putting \"D\" = \"\u03f5E\" and \"B\" = \"\u03bcH\" as above, we can eliminate \"D\" and \"B\" to obtain equations in only \"E\" and \"H\":"}
+{"text":"If the material parameters \"\u03f5\" and \"\u03bc\" are real (as in a lossless dielectric), these equations show that \"k\", \"E\", \"H\" form a \"right-handed orthogonal triad\", so that the same equations apply to the magnitudes of the respective vectors. Taking the magnitude equations and substituting from (), we obtain"}
+{"text":"where \"E\" and \"H\" are the magnitudes of \"E\" and \"H\". Multiplying the last two equations gives"}
+{"text":"Dividing (or cross-multiplying) the same two equations gives \"H\" = \"YE\", where"}
+{"text":"From () we obtain the phase velocity \"v\"p = 1\/\u221a(\"\u03bc\u03f5\"). For a vacuum this reduces to \"c\" = 1\/\u221a(\"\u03bc\"0\"\u03f5\"0). Dividing the second result by the first gives \"n\" = \u221a(\"\u03bc\"rel\"\u03f5\"rel)."}
+{"text":"For a \"non-magnetic\" medium (the usual case), this becomes \"n\" = \u221a\"\u03f5\"rel."}
+{"text":"Taking the reciprocal of (), we find that the intrinsic \"impedance\" is \"Z\" = \u221a(\"\u03bc\"\/\"\u03f5\").\u00a0 In a vacuum this takes the value \"Z\"0 = \u221a(\"\u03bc\"0\/\"\u03f5\"0) \u2248 377 \u03a9, known as the impedance of free space. By division, \"Z\"\/\"Z\"0 = \"\u03bc\"rel\/\"n\". For a \"non-magnetic\" medium, this becomes \"Z\" = \"Z\"0\/\"n\"."}
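The chain of relations just derived (n = √(μrel·ϵrel), Z = √(μ/ϵ), Y = 1/Z, and Z = Z0/n for non-magnetic media) is easy to check numerically. The following is a minimal sketch, not part of the source article; the function name and structure are ours:

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity epsilon_0, F/m
MU0 = 4 * math.pi * 1e-7  # vacuum permeability mu_0, H/m

def medium_properties(eps_rel, mu_rel=1.0):
    """Return (n, Z, Y): refractive index, intrinsic impedance, intrinsic admittance."""
    eps = eps_rel * EPS0
    mu = mu_rel * MU0
    n = math.sqrt(mu_rel * eps_rel)  # n = sqrt(mu_rel * eps_rel)
    Z = math.sqrt(mu / eps)          # Z = sqrt(mu / eps)
    return n, Z, 1.0 / Z

# Vacuum: Z reduces to the impedance of free space, about 376.73 ohms
n_vac, Z0, Y0 = medium_properties(1.0)

# A non-magnetic glass with eps_rel = 2.25 has n = 1.5 and Z = Z0 / n
n_glass, Z_glass, _ = medium_properties(2.25)
```

For a non-magnetic medium the two material constants collapse to the single parameter n, which is why the purely optical forms of the Fresnel equations contain only refractive indices.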
+{"text":"In Cartesian coordinates (\"x\", \"y\", \"z\"), let the region \"y\" < 0 have refractive index \"n\"1, intrinsic admittance \"Y\"1, etc., and let the region \"y\" > 0 have refractive index \"n\"2, intrinsic admittance \"Y\"2, etc. Then the \"xz\" plane is the interface, and the \"y\" axis is normal to the interface (see diagram). Let \"i\" and \"j\" (in bold roman type) be the unit vectors in the \"x\" and \"y\" directions, respectively. Let the plane of incidence be the \"xy\" plane (the plane of the page), with the angle of incidence \"\u03b8\"i measured from \"j\" towards \"i\". Let the angle of refraction, measured in the same sense, be \"\u03b8\"t, where the subscript t stands for \"transmitted\" (reserving r for \"reflected\")."}
+{"text":"In the absence of Doppler shifts, \"\u03c9\" does not change on reflection or refraction. Hence, by (), the magnitude of the wave vector is proportional to the refractive index."}
+{"text":"So, for a given \"\u03c9\", if we \"redefine\" \"k\" as the magnitude of the wave vector in the \"reference\" medium (for which \"n\" = 1), then the wave vector has magnitude \"n\"1\"k\" in the first medium (region \"y\" < 0 in the diagram) and magnitude \"n\"2\"k\" in the second medium. From the magnitudes and the geometry, we find that the wave vectors are"}
+{"text":"where the last step uses Snell's law. The corresponding dot products in the phasor form () are"}
+{"text":"For the \"s\" polarization, the \"E\" field is parallel to the \"z\" axis and may therefore be described by its component in the \"z\"\u00a0direction. Let the reflection and transmission coefficients be \"r\"s and \"t\"s, respectively. Then, if the incident \"E\" field is taken to have unit amplitude, the phasor form () of its \"z\"\u00a0component is"}
+{"text":"and the reflected and transmitted fields, in the same form, are"}
+{"text":"Under the sign convention used in this article, a positive reflection or transmission coefficient is one that preserves the direction of the \"transverse\" field, meaning (in this context) the field normal to the plane of incidence. For the \"s\" polarization, that means the field. If the incident, reflected, and transmitted fields (in the above equations) are in the \u00a0direction (\"out of the page\"), then the respective fields are in the directions of the red arrows, since form a right-handed orthogonal triad. The fields may therefore be described by their components in the directions of those arrows, denoted by .\u00a0 Then, since"}
+{"text":"At the interface, by the usual interface conditions for electromagnetic fields, the tangential components of the \"E\" and \"H\" fields must be continuous; that is,"}
+{"text":"When we substitute from equations () to () and then from (), the exponential factors cancel out, so that the interface conditions reduce to the simultaneous equations"}
+{"text":"which are easily solved for \"r\"s and \"t\"s, yielding"}
+{"text":"At \"normal incidence\" (\"\u03b8\"i = 0), indicated by an additional subscript 0, these results become"}
+{"text":"At \"grazing incidence\" (\"\u03b8\"i \u2192 90\u00b0), we have cos\"\u03b8\"i \u2192 0, hence \"r\"s \u2192 \u22121 and \"t\"s \u2192 0."}
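For lossless non-magnetic media, where the admittances are proportional to the refractive indices, the s-polarization coefficients take the familiar forms r_s = (n1 cos θi − n2 cos θt)/(n1 cos θi + n2 cos θt) and t_s = 1 + r_s. A sketch (our own function, assuming incidence below any critical angle):

```python
import math

def fresnel_s(n1, n2, theta_i):
    """s-polarization amplitude coefficients (r_s, t_s) for lossless,
    non-magnetic media, below the critical angle."""
    sin_t = n1 * math.sin(theta_i) / n2   # Snell's law
    cos_t = math.sqrt(1.0 - sin_t ** 2)   # real only below the critical angle
    cos_i = math.cos(theta_i)
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    t_s = 2.0 * n1 * cos_i / (n1 * cos_i + n2 * cos_t)
    return r_s, t_s

# Normal incidence: r = (n1 - n2)/(n1 + n2), t = 2 n1/(n1 + n2)
r0, t0 = fresnel_s(1.0, 1.5, 0.0)

# Near grazing incidence, r_s approaches -1 and t_s approaches 0
rg, tg = fresnel_s(1.0, 1.5, math.radians(89.9))
```

The interface condition 1 + r_s = t_s (continuity of the tangential E field) holds at every angle, which is a quick sanity check on any implementation.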
+{"text":"For the \"p\" polarization, the incident, reflected, and transmitted \"E\" fields are parallel to the red arrows and may therefore be described by their components in the directions of those arrows. Let those components be (redefining the symbols for the new context). Let the reflection and transmission coefficients be \"r\"p and \"t\"p. Then, if the incident field is taken to have unit amplitude, we have"}
+{"text":"If the \"E\" fields are in the directions of the red arrows, then, in order for \"k\", \"E\", \"H\" to form a right-handed orthogonal triad, the respective \"H\" fields must be in the \u2212\"z\"\u00a0direction (\"into the page\") and may therefore be described by their components in that direction. This is consistent with the adopted sign convention, namely that a positive reflection or transmission coefficient is one that preserves the direction of the transverse field (the \"H\" field in the case of the \"p\" polarization). The agreement of the \"other\" field with the red arrows reveals an alternative definition of the sign convention: that a positive reflection or transmission coefficient is one for which the field vector in the plane of incidence points towards the same medium before and after reflection or transmission."}
+{"text":"So, for the incident, reflected, and transmitted fields, let the respective components in the \u00a0direction be .\u00a0 Then, since"}
+{"text":"At the interface, the tangential components of the \"E\" and \"H\" fields must be continuous; that is,"}
+{"text":"When we substitute from equations () and () and then from (), the exponential factors again cancel out, so that the interface conditions reduce to"}
+{"text":"At \"grazing incidence\" (\"\u03b8\"i \u2192 90\u00b0), we again have cos\"\u03b8\"i \u2192 0, hence \"r\"p \u2192 \u22121 and \"t\"p \u2192 0."}
+{"text":"Comparing () and () with () and (), we see that at \"normal\" incidence, under the adopted sign convention, the transmission coefficients for the two polarizations are equal, whereas the reflection coefficients have equal magnitudes but opposite signs. While this clash of signs is a disadvantage of the convention, the attendant advantage is that the signs agree at \"grazing\" incidence."}
+{"text":"From equations () and (), taking squared magnitudes, we find that the \"reflectivity\" (ratio of reflected power to incident power) is \"R\"s = |\"r\"s|\u00b2 for the s polarization, and \"R\"p = |\"r\"p|\u00b2"}
+{"text":"for the p polarization. Note that when comparing the powers of two such waves in the same medium and with the same cos\"\u03b8\", the impedance and geometric factors mentioned above are identical and cancel out. But in computing the power \"transmission\" (below), these factors must be taken into account."}
+{"text":"The simplest way to obtain the power transmission coefficient (\"transmissivity\", the ratio of transmitted power to incident power \"in the direction normal to the interface\", i.e. the \"y\" direction) is to use \"R\" + \"T\" = 1 (conservation of energy). In this way we find"}
+{"text":"In the case of an interface between two lossless media (for which \u03f5 and \u03bc are \"real\" and positive), one can obtain these results directly using the squared magnitudes of the amplitude transmission coefficients that we found earlier in equations () and (). But, for given \"E\" amplitude (as noted above), the component of the Poynting vector in the \"y\" direction is proportional to the geometric factor cos\"\u03b8\" and inversely proportional to the wave impedance \"Z\". Applying these corrections to each wave, we obtain two ratios multiplying the square of the amplitude transmission coefficient:"}
+{"text":"for the p polarization. The last two equations apply only to lossless dielectrics, and only at incidence angles smaller than the critical angle (beyond which, of course, \"T\" = 0)."}
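The impedance and geometric corrections described above can be verified numerically: for lossless non-magnetic media the transmissivity is T_s = (n2 cos θt)/(n1 cos θi) · t_s², and R_s + T_s = 1 at every angle below the critical angle. A sketch with our own function names:

```python
import math

def power_coefficients_s(n1, n2, theta_i):
    """Reflectivity R and transmissivity T (s polarization),
    lossless non-magnetic media, below the critical angle."""
    cos_i = math.cos(theta_i)
    sin_t = n1 * math.sin(theta_i) / n2   # Snell's law
    cos_t = math.sqrt(1.0 - sin_t ** 2)
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    t_s = 2.0 * n1 * cos_i / (n1 * cos_i + n2 * cos_t)
    R = r_s ** 2
    # Geometric (cos theta) and admittance (n) corrections to |t_s|^2:
    T = (n2 * cos_t) / (n1 * cos_i) * t_s ** 2
    return R, T

R, T = power_coefficients_s(1.0, 1.5, math.radians(30.0))
# Conservation of energy at the interface: R + T = 1
```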
+{"text":"From equations () and (), we see that two dissimilar media will have the same refractive index, but different admittances, if the ratio of their permeabilities is the inverse of the ratio of their permittivities. In that unusual situation we have \"\u03b8\"t = \"\u03b8\"i (that is, the transmitted ray is undeviated), so that the cosines in equations (), (), (), (), and () to () cancel out, and all the reflection and transmission ratios become independent of the angle of incidence; in other words, the ratios for normal incidence become applicable to all angles of incidence. When extended to spherical reflection or scattering, this results in the Kerker effect for Mie scattering."}
+{"text":"Since the Fresnel equations were developed for optics, they are usually given for non-magnetic materials. Dividing () by () yields"}
+{"text":"For non-magnetic media we can substitute the vacuum permeability \"\u03bc\"0 for \"\u03bc\", so that"}
+{"text":"that is, the admittances are simply proportional to the corresponding refractive indices. When we make these substitutions in equations () to () and equations () to (), the factor \"c\u03bc\"0 cancels out. For the amplitude coefficients we obtain:"}
+{"text":"For the case of normal incidence these reduce to:"}
+{"text":"The power transmissions can then be found from \"T\" = 1 \u2212 \"R\"."}
+{"text":"For equal permeabilities (e.g., non-magnetic media), if \"\u03b8\"i and \"\u03b8\"t are \"complementary\", we can substitute sin\"\u03b8\"t for cos\"\u03b8\"i, and sin\"\u03b8\"i for cos\"\u03b8\"t, so that the numerator in equation () becomes \"n\"2sin\"\u03b8\"t \u2212 \"n\"1sin\"\u03b8\"i, which is zero (by Snell's law). Hence \"r\"p = 0 and only the s-polarized component is reflected. This is what happens at the Brewster angle. Substituting 90\u00b0 \u2212 \"\u03b8\"i for \"\u03b8\"t in Snell's law, we readily obtain"}
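The Brewster-angle result can be checked directly: θB = arctan(n2/n1), and r_p vanishes there (the sign convention for r_p varies between authors, but the zero does not). A sketch assuming lossless, non-magnetic media:

```python
import math

def r_p(n1, n2, theta_i):
    """p-polarization reflection coefficient, lossless non-magnetic media.
    (One common sign convention; the Brewster zero is convention-independent.)"""
    cos_i = math.cos(theta_i)
    sin_t = n1 * math.sin(theta_i) / n2   # Snell's law
    cos_t = math.sqrt(1.0 - sin_t ** 2)
    return (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)

n1, n2 = 1.0, 1.5             # air to glass
theta_B = math.atan(n2 / n1)  # Brewster angle, about 56.3 degrees
# At theta_B the reflected light is purely s-polarized
```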
+{"text":"This switch of polarizations has an analog in the old mechanical theory of light waves (see \"\u00a7History\", above). One could predict reflection coefficients that agreed with observation by supposing (like Fresnel) that different refractive indices were due to different \"densities\" and that the vibrations were \"normal\" to what was then called the plane of polarization, or by supposing (like MacCullagh and Neumann) that different refractive indices were due to different \"elasticities\" and that the vibrations were \"parallel\" to that plane. Thus the condition of equal permittivities and unequal permeabilities, although not realistic, is of some historical interest."}
+{"text":"The second law of thermodynamics establishes the concept of entropy as a physical property of a thermodynamic system. Entropy predicts the direction of spontaneous processes, and determines whether they are irreversible or impossible, despite obeying the requirement of conservation of energy, which is established in the first law of thermodynamics. The second law may be formulated by the observation that the entropy of isolated systems left to spontaneous evolution cannot decrease, as they always arrive at a state of thermodynamic equilibrium, where the entropy is highest. If all processes in the system are reversible, the entropy is constant."}
+{"text":"An increase in entropy accounts for the irreversibility of natural processes, often referred to in the concept of the arrow of time."}
+{"text":"Historically, the second law was an empirical finding that was accepted as an axiom of thermodynamic theory. Statistical mechanics provides a microscopic explanation of the law in terms of probability distributions of the states of large assemblies of atoms or molecules."}
+{"text":"The second law has been expressed in many ways. Its first formulation, which preceded the proper definition of entropy and was based on caloric theory, is Carnot's theorem, credited to the French scientist Sadi Carnot, who in 1824 showed that the efficiency of conversion of heat to work in a heat engine has an upper limit. The first rigorous definition of the second law based on the concept of entropy came from German scientist Rudolph Clausius in the 1850s including his statement that heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time."}
+{"text":"The second law of thermodynamics can also be used to define the concept of thermodynamic temperature, but this is usually delegated to the zeroth law of thermodynamics."}
+{"text":"The first law of thermodynamics provides the definition of the internal energy of a thermodynamic system, and expresses the law of conservation of energy. The second law is concerned with the direction of natural processes. It asserts that a natural process runs only in one sense, and is not reversible. For example, when a path for conduction and radiation is made available, heat always flows spontaneously from a hotter to a colder body. Such phenomena are accounted for in terms of entropy. If an isolated system is held initially in internal thermodynamic equilibrium by internal partitioning with impermeable walls, and then some operation makes the walls more permeable, then the system spontaneously evolves to reach a final new internal thermodynamic equilibrium, and its total entropy, \"S\", increases."}
+{"text":"In a fictive reversible process, an infinitesimal increment in the entropy (d\"S\") of a system is defined to result from an infinitesimal transfer of heat (\"\u03b4Q\") to a closed system (which allows the entry or exit of energy \u2013 but not transfer of matter) divided by the common temperature (\"T\") of the system in equilibrium and the surroundings which supply the heat:"}
+{"text":"Different notations are used for infinitesimal amounts of heat (\"\u03b4Q\") and infinitesimal amounts of entropy (d\"S\") because entropy is a function of state, while heat, like work, is not. For an actually possible infinitesimal process without exchange of mass with the surroundings, the second law requires that the increment in system entropy fulfills the inequality"}
+{"text":"This is because a general process for this case may include work being done on the system by its surroundings, which can have frictional or viscous effects inside the system, because a chemical reaction may be in progress, or because heat transfer actually occurs only irreversibly, driven by a finite difference between the system temperature (\"T\") and the temperature of the surroundings (\"T\"surr). Note that the equality still applies for pure heat flow,"}
+{"text":"which is the basis of the accurate determination of the absolute entropy of pure substances from measured heat capacity curves and entropy changes at phase transitions, i.e. by calorimetry. Introducing a set of internal variables \"\u03be\"1, \"\u03be\"2, \u2026 to describe the deviation of a thermodynamic system in physical equilibrium (with the required well-defined uniform pressure \"P\" and temperature \"T\") from the chemical equilibrium state, one can record the equality"}
+{"text":"The second term represents work of internal variables that can be perturbed by external influences, but the system cannot perform any positive work via internal variables. This statement introduces the impossibility of the reversion of evolution of the thermodynamic system in time and can be considered as a formulation of \"the second principle of thermodynamics\" \u2013 the formulation, which is, of course, equivalent to the formulation of the principle in terms of entropy."}
+{"text":"The zeroth law of thermodynamics in its usual short statement allows recognition that two bodies in a relation of thermal equilibrium have the same temperature, especially that a test body has the same temperature as a reference thermometric body. For a body in thermal equilibrium with another, there are indefinitely many empirical temperature scales, in general respectively depending on the properties of a particular reference thermometric body. The second law allows a distinguished temperature scale, which defines an absolute, thermodynamic temperature, independent of the properties of any particular reference thermometric body."}
+{"text":"The second law of thermodynamics may be expressed in many specific ways, the most prominent classical statements being the statement by Rudolf Clausius (1854), the statement by Lord Kelvin (1851), and the statement in axiomatic thermodynamics by Constantin Carath\u00e9odory (1909). These statements cast the law in general physical terms citing the impossibility of certain processes. The Clausius and the Kelvin statements have been shown to be equivalent."}
+{"text":"In modern terms, Carnot's principle may be stated more precisely:"}
+{"text":"The German scientist Rudolf Clausius laid the foundation for the second law of thermodynamics in 1850 by examining the relation between heat transfer and work. His formulation of the second law, which was published in German in 1854, is known as the \"Clausius statement\":"}
+{"text":"Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time."}
+{"text":"The statement by Clausius uses the concept of 'passage of heat'. As is usual in thermodynamic discussions, this means 'net transfer of energy as heat', and does not refer to contributory transfers one way and the other."}
+{"text":"Heat cannot spontaneously flow from cold regions to hot regions without external work being performed on the system, which is evident from ordinary experience of refrigeration, for example. In a refrigerator, heat flows from cold to hot, but only when forced by an external agent, the refrigeration system."}
+{"text":"Lord Kelvin expressed the second law in several wordings."}
+{"text":"Equivalence of the Clausius and the Kelvin statements."}
+{"text":"Planck offered the following proposition as derived directly from experience. This is sometimes regarded as his statement of the second law, but he regarded it as a starting point for the derivation of the second law."}
+{"text":"Relation between Kelvin's statement and Planck's proposition."}
+{"text":"It is almost customary in textbooks to speak of the \"Kelvin-Planck statement\" of the law, as for example in the text by ter Haar and Wergeland."}
+{"text":"The Kelvin\u2013Planck statement (or the \"heat engine statement\") of the second law of thermodynamics states that"}
+{"text":"Planck stated the second law as follows."}
+{"text":"Rather like Planck's statement is that of Uhlenbeck and Ford for \"irreversible phenomena\"."}
+{"text":"Constantin Carath\u00e9odory formulated thermodynamics on a purely mathematical axiomatic foundation. His statement of the second law is known as the Principle of Carath\u00e9odory, which may be formulated as follows:"}
+{"text":"In every neighborhood of any state S of an adiabatically enclosed system there are states inaccessible from S."}
+{"text":"With this formulation, he described the concept of adiabatic accessibility for the first time and provided the foundation for a new subfield of classical thermodynamics, often called geometrical thermodynamics. It follows from Carath\u00e9odory's principle that quantity of energy quasi-statically transferred as heat is a holonomic process function, in other words, \"\u03b4Q\" = \"T\"d\"S\"."}
+{"text":"Though it is almost customary in textbooks to say that Carath\u00e9odory's principle expresses the second law and to treat it as equivalent to the Clausius or to the Kelvin-Planck statements, such is not the case. To get all the content of the second law, Carath\u00e9odory's principle needs to be supplemented by Planck's principle, that isochoric work always increases the internal energy of a closed system that was initially in its own internal thermodynamic equilibrium."}
+{"text":"In 1926, Max Planck wrote an important paper on the basics of thermodynamics. He indicated the principle"}
+{"text":"This formulation does not mention heat and does not mention temperature, nor even entropy, and does not necessarily implicitly rely on those concepts, but it implies the content of the second law. A closely related statement is that \"Frictional pressure never does positive work.\" Planck wrote: \"The production of heat by friction is irreversible.\""}
+{"text":"Not mentioning entropy, this principle of Planck is stated in physical terms. It is very closely related to the Kelvin statement given just above. It is relevant that for a system at constant volume and mole numbers, the entropy is a monotonic function of the internal energy. Nevertheless, this principle of Planck is not actually Planck's preferred statement of the second law, which is quoted above, in a previous sub-section of the present section of this present article, and relies on the concept of entropy."}
+{"text":"A statement that in a sense is complementary to Planck's principle is made by Borgnakke and Sonntag. They do not offer it as a full statement of the second law:"}
+{"text":"Differing from Planck's just foregoing principle, this one is explicitly in terms of entropy change. Removal of matter from a system can also decrease its entropy."}
+{"text":"Statement for a system that has a known expression of its internal energy as a function of its extensive state variables."}
+{"text":"The second law has been shown to be equivalent to the internal energy \"U\" being a weakly convex function, when written as a function of extensive properties (mass, volume, entropy, ...)."}
+{"text":"Before the establishment of the second law, many people who were interested in inventing a perpetual motion machine had tried to circumvent the restrictions of the first law of thermodynamics by extracting the massive internal energy of the environment as the power of the machine. Such a machine is called a \"perpetual motion machine of the second kind\". The second law declared the impossibility of such machines."}
+{"text":"Carnot's theorem (1824) is a principle that limits the maximum efficiency for any possible engine. The efficiency solely depends on the temperature difference between the hot and cold thermal reservoirs. Carnot's theorem states:"}
+{"text":"In his ideal model, the heat of caloric converted into work could be reinstated by reversing the motion of the cycle, a concept subsequently known as thermodynamic reversibility. Carnot, however, further postulated that some caloric is lost, not being converted to mechanical work. Hence, no real heat engine could realize the Carnot cycle's reversibility and was condemned to be less efficient."}
+{"text":"Though formulated in terms of caloric (see the obsolete caloric theory), rather than entropy, this was an early insight into the second law."}
+{"text":"The Clausius theorem (1854) states that in a cyclic process"}
+{"text":"The equality holds in the reversible case and the strict inequality holds in the irreversible case. The reversible case is used to introduce the state function entropy. This is because in a cyclic process the variation of a state function is zero, since the system returns to its initial state."}
+{"text":"For an arbitrary heat engine, the efficiency is \"\u03b7\" = \"W\"n\/\"q\"H = (\"q\"H \u2212 \"q\"C)\/\"q\"H = 1 \u2212 \"q\"C\/\"q\"H,"}
+{"text":"where \"W\"n is the net work done per cycle, \"q\"H is the heat absorbed from the hot reservoir, and \"q\"C is the heat rejected to the cold reservoir. Thus the efficiency depends only on \"q\"C\/\"q\"H."}
+{"text":"Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, any reversible heat engine operating between temperatures \"T\"1 and \"T\"2 must have the same efficiency, that is to say, the efficiency is a function of the temperatures only:"}
+{"text":"In addition, a reversible heat engine operating between temperatures \"T\"1 and \"T\"3 must have the same efficiency as one consisting of two cycles, one between \"T\"1 and another (intermediate) temperature \"T\"2, and the second between \"T\"2 and \"T\"3. This can only be the case if"}
+{"text":"Now consider the case where \"T\"1 is a fixed reference temperature: the temperature of the triple point of water. Then for any \"T\"2 and \"T\"3,"}
+{"text":"Therefore, if thermodynamic temperature is defined by"}
+{"text":"then the function \"f\", viewed as a function of thermodynamic temperature, is simply"}
+{"text":"and the reference temperature \"T\"1 will have the value 273.16. (Any reference temperature and any positive numerical value could be used; the choice here corresponds to the Kelvin scale.)"}
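With thermodynamic temperature defined this way, Carnot's upper bound on efficiency takes the simple form η = 1 − T_C/T_H for reservoirs at absolute temperatures T_H > T_C. A minimal sketch (the function name is ours):

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum efficiency of any heat engine between two reservoirs,
    with temperatures in kelvins."""
    if not 0.0 < t_cold < t_hot:
        raise ValueError("require 0 < t_cold < t_hot (absolute temperatures)")
    return 1.0 - t_cold / t_hot

# An engine between 600 K and 300 K can convert at most half the heat to work
eta = carnot_efficiency(600.0, 300.0)
```

No real engine reaches this bound; irreversibilities such as friction and finite-rate heat transfer only lower the achievable efficiency.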
+{"text":"According to the Clausius equality, for a reversible process"}
+{"text":"That means the line integral \u222b\"\u03b4Q\"\/\"T\" is path independent for reversible processes."}
+{"text":"So we can define a state function S called entropy, which for a reversible process or for pure heat transfer satisfies"}
+{"text":"With this we can only obtain the difference of entropy by integrating the above formula. To obtain the absolute value, we need the third law of thermodynamics, which states that \"S\" = 0 at absolute zero for perfect crystals."}
+{"text":"For any irreversible process, since entropy is a state function, we can always connect the initial and terminal states with an imaginary reversible process and integrate along that path to calculate the difference in entropy."}
+{"text":"Now reverse the reversible process and combine it with the said irreversible process. Applying the Clausius inequality on this loop,"}
+{"text":"where the equality holds if the transformation is reversible."}
+{"text":"Notice that if the process is an adiabatic process, then \"\u03b4Q\" = 0, so \u0394\"S\" \u2265 0."}
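The "imaginary reversible path" technique above is how entropy changes are computed in practice. For example, heating a substance of constant heat capacity C from T1 to T2 gives ΔS = ∫ C dT/T = C ln(T2/T1), and a numerical integration along the reversible path reproduces the closed form. A sketch (the heat-capacity value is only illustrative):

```python
import math

def entropy_change_heating(heat_capacity, t1, t2, steps=100_000):
    """Integrate dS = C dT / T numerically along a reversible heating path."""
    dT = (t2 - t1) / steps
    s = 0.0
    for k in range(steps):
        T = t1 + (k + 0.5) * dT   # midpoint rule
        s += heat_capacity * dT / T
    return s

C = 75.3  # J/(mol K), roughly the molar heat capacity of liquid water
dS_numeric = entropy_change_heating(C, 300.0, 350.0)
dS_exact = C * math.log(350.0 / 300.0)  # closed form for constant C
```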
+{"text":"An important and revealing idealized special case is to consider applying the Second Law to the scenario of an isolated system (called the total system or universe), made up of two parts: a sub-system of interest, and the sub-system's surroundings. These surroundings are imagined to be so large that they can be considered as an \"unlimited\" heat reservoir at temperature \"TR\" and pressure \"PR\" so that no matter how much heat is transferred to (or from) the sub-system, the temperature of the surroundings will remain \"TR\"; and no matter how much the volume of the sub-system expands (or contracts), the pressure of the surroundings will remain \"PR\"."}
+{"text":"Whatever changes to \"dS\" and \"dSR\" occur in the entropies of the sub-system and the surroundings individually, according to the Second Law the entropy \"S\"tot of the isolated total system must not decrease:"}
+{"text":"According to the first law of thermodynamics, the change \"dU\" in the internal energy of the sub-system is the sum of the heat \"\u03b4q\" added to the sub-system, \"less\" any work \"\u03b4w\" done \"by\" the sub-system, \"plus\" any net chemical energy entering the sub-system \u2211\"\u03bc\"iRd\"N\"i, so that:"}
+{"text":"where \"\u03bc\"\"iR\" are the chemical potentials of chemical species in the external surroundings."}
+{"text":"Now the heat leaving the reservoir and entering the sub-system is"}
+{"text":"where we have first used the definition of entropy in classical thermodynamics (alternatively, in statistical thermodynamics, the relation between entropy change, temperature and absorbed heat can be derived); and then the Second Law inequality from above."}
+{"text":"It therefore follows that any net work \"\u03b4w\" done by the sub-system must obey"}
+{"text":"It is useful to separate the work \"\u03b4w\" done by the subsystem into the \"useful\" work \"\u03b4wu\" that can be done \"by\" the sub-system, over and beyond the work \"pR dV\" done merely by the sub-system expanding against the surrounding external pressure, giving the following relation for the useful work (exergy) that can be done:"}
+{"text":"It is convenient to define the right-hand-side as the exact derivative of a thermodynamic potential, called the \"availability\" or \"exergy\" \"E\" of the subsystem,"}
+{"text":"The Second Law therefore implies that for any process which can be considered as divided simply into a subsystem, and an unlimited temperature and pressure reservoir with which it is in contact,"}
+{"text":"i.e. the change in the subsystem's exergy plus the useful work done \"by\" the subsystem (or, the change in the subsystem's exergy less any work, additional to that done by the pressure reservoir, done \"on\" the system) must be less than or equal to zero."}
+{"text":"In sum, if a proper \"infinite-reservoir-like\" reference state is chosen as the system surroundings in the real world, then the Second Law predicts a decrease in \"E\" for an irreversible process and no change for a reversible process."}
+{"text":"This expression together with the associated reference state permits a design engineer working at the macroscopic scale (above the thermodynamic limit) to utilize the Second Law without directly measuring or considering entropy change in a total isolated system. (\"Also, see process engineer\"). Those changes have already been considered by the assumption that the system under consideration can reach equilibrium with the reference state without altering the reference state. An efficiency for a process or collection of processes that compares it to the reversible ideal may also be found (\"See second law efficiency\".)"}
+{"text":"This approach to the Second Law is widely utilized in engineering practice, environmental accounting, systems ecology, and other disciplines."}
+{"text":"For a spontaneous chemical process in a closed system at constant temperature and pressure without non-\"PV\" work, the Clausius inequality \u0394\"S\" > \"Q\/T\"surr transforms into a condition for the change in Gibbs free energy"}
+{"text":"or d\"G\" < 0. For a similar process at constant temperature and volume, the change in Helmholtz free energy must be negative, d\"A\" < 0. Thus, a negative value of the change in free energy (\"G\" or \"A\") is a necessary condition for a process to be spontaneous. This is the most useful form of the second law of thermodynamics in chemistry, where free-energy changes can be calculated from tabulated enthalpies of formation and standard molar entropies of reactants and products. The chemical equilibrium condition at constant \"T\" and \"p\" without electrical work is d\"G\" = 0."}
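The criterion ΔG = ΔH − TΔS < 0 is how spontaneity is judged from tabulated data in practice. A sketch; the numbers below are the standard enthalpy and entropy of fusion of ice, used here only as an illustration:

```python
def gibbs_change(delta_h, temperature, delta_s):
    """Delta G = Delta H - T * Delta S (inputs in J/mol, K, J/(mol K))."""
    return delta_h - temperature * delta_s

def is_spontaneous(delta_h, temperature, delta_s):
    """Spontaneous at constant T and p (no non-PV work) iff Delta G < 0."""
    return gibbs_change(delta_h, temperature, delta_s) < 0.0

# Ice melting: Delta H = +6010 J/mol, Delta S = +22.0 J/(mol K);
# endothermic but entropy-driven, so spontaneous only above about 273 K
melts_at_300K = is_spontaneous(6010.0, 300.0, 22.0)
melts_at_250K = is_spontaneous(6010.0, 250.0, 22.0)
```

The crossover temperature ΔH/ΔS ≈ 6010/22.0 ≈ 273 K falls out of the same two tabulated numbers, which is why melting-point estimates from thermochemical tables work at all.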
+{"text":"The first theory of the conversion of heat into mechanical work is due to Nicolas L\u00e9onard Sadi Carnot in 1824. He was the first to realize correctly that the efficiency of this conversion depends on the difference of temperature between an engine and its environment."}
+{"text":"Recognizing the significance of James Prescott Joule's work on the conservation of energy, Rudolf Clausius was the first to formulate the second law during 1850, in this form: heat does not flow \"spontaneously\" from cold to hot bodies. While common knowledge now, this was contrary to the caloric theory of heat popular at the time, which considered heat as a fluid. From there he was able to infer the principle of Sadi Carnot and the definition of entropy (1865)."}
+{"text":"Established during the 19th century, the Kelvin-Planck statement of the Second Law says, \"It is impossible for any device that operates on a cycle to receive heat from a single reservoir and produce a net amount of work.\" This was shown to be equivalent to the statement of Clausius."}
+{"text":"The ergodic hypothesis is also important for the Boltzmann approach. It says that, over long periods of time, the time spent in some region of the phase space of microstates with the same energy is proportional to the volume of this region, i.e. that all accessible microstates are equally probable over a long period of time. Equivalently, it says that time average and average over the statistical ensemble are the same."}
+{"text":"There is a traditional doctrine, starting with Clausius, that entropy can be understood in terms of molecular 'disorder' within a macroscopic system. This doctrine is obsolescent."}
+{"text":"In 1856, the German physicist Rudolf Clausius stated what he called the \"second fundamental theorem in the mechanical theory of heat\" in the following form:"}
+{"text":"where \"Q\" is heat, \"T\" is temperature and \"N\" is the \"equivalence-value\" of all uncompensated transformations involved in a cyclical process. Later, in 1865, Clausius would come to define \"equivalence-value\" as entropy. On the heels of this definition, that same year, the most famous version of the second law was read in a presentation at the Philosophical Society of Zurich on April 24, in which, in the end of his presentation, Clausius concludes:"}
+{"text":"The entropy of the universe tends to a maximum."}
+{"text":"This statement is the best-known phrasing of the second law. Because of the looseness of its language, e.g. universe, as well as lack of specific conditions, e.g. open, closed, or isolated, many people take this simple statement to mean that the second law of thermodynamics applies virtually to every subject imaginable. This is not true; this statement is only a simplified version of a more extended and precise description."}
+{"text":"In terms of time variation, the mathematical statement of the second law for an isolated system undergoing an arbitrary transformation is:"}
+{"text":"The equality sign applies after equilibration. An alternative way of formulating of the second law for isolated systems is:"}
+{"text":"with formula_38 the sum of the rate of entropy production by all processes inside the system. The advantage of this formulation is that it shows the effect of the entropy production. The rate of entropy production is a very important concept since it determines (limits) the efficiency of thermal machines. Multiplied with ambient temperature formula_39 it gives the so-called dissipated energy formula_40."}
+{"text":"The expression of the second law for closed systems (so, allowing heat exchange and moving boundaries, but not exchange of matter) is:"}
+{"text":"The equality sign holds in the case that only reversible processes take place inside the system. If irreversible processes take place (which is the case in real systems in operation) the >-sign holds. If heat is supplied to the system at several places we have to take the algebraic sum of the corresponding terms."}
+{"text":"For open systems (also allowing exchange of matter):"}
+{"text":"Here formula_47 is the flow of entropy into the system associated with the flow of matter entering the system. It should not be confused with the time derivative of the entropy. If matter is supplied at several places we have to take the algebraic sum of these contributions."}
+{"text":"The first mechanical argument of the Kinetic theory of gases that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium was due to James Clerk Maxwell in 1860; Ludwig Boltzmann with his H-theorem of 1872 also argued that due to collisions gases should over time tend toward the Maxwell\u2013Boltzmann distribution."}
+{"text":"Due to Loschmidt's paradox, derivations of the Second Law have to make an assumption regarding the past, namely that the system is uncorrelated at some time in the past; this allows for simple probabilistic treatment. This assumption is usually thought as a boundary condition, and thus the second Law is ultimately a consequence of the initial conditions somewhere in the past, probably at the beginning of the universe (the Big Bang), though other scenarios have also been suggested."}
+{"text":"Given these assumptions, in statistical mechanics, the Second Law is not a postulate, rather it is a consequence of the fundamental postulate, also known as the equal prior probability postulate, so long as one is clear that simple probability arguments are applied only to the future, while for the past there are auxiliary sources of information which tell us that it was low entropy. The first part of the second law, which states that the entropy of a thermally isolated system can only increase, is a trivial consequence of the equal prior probability postulate, if we restrict the notion of the entropy to systems in thermal equilibrium. The entropy of an isolated system in thermal equilibrium containing an amount of energy of formula_48 is:"}
+{"text":"where formula_50 is the number of quantum states in a small interval between formula_48 and formula_52. Here formula_53 is a macroscopically small energy interval that is kept fixed. Strictly speaking this means that the entropy depends on the choice of formula_53. However, in the thermodynamic limit (i.e. in the limit of infinitely large system size), the specific entropy (entropy per unit volume or per unit mass) does not depend on formula_53."}
+{"text":"Suppose we have an isolated system whose macroscopic state is specified by a number of variables. These macroscopic variables can, e.g., refer to the total volume, the positions of pistons in the system, etc. Then formula_56 will depend on the values of these variables. If a variable is not fixed, (e.g. we do not clamp a piston in a certain position), then because all the accessible states are equally likely in equilibrium, the free variable in equilibrium will be such that formula_56 is maximized as that is the most probable situation in equilibrium."}
+{"text":"If the variable was initially fixed to some value then upon release and when the new equilibrium has been reached, the fact the variable will adjust itself so that formula_56 is maximized, implies that the entropy will have increased or it will have stayed the same (if the value at which the variable was fixed happened to be the equilibrium value)."}
+{"text":"Suppose we start from an equilibrium situation and we suddenly remove a constraint on a variable. Then right after we do this, there are a number formula_56 of accessible microstates, but equilibrium has not yet been reached, so the actual probabilities of the system being in some accessible state are not yet equal to the prior probability of formula_60. We have already seen that in the final equilibrium state, the entropy will have increased or have stayed the same relative to the previous equilibrium state. Boltzmann's H-theorem, however, proves that the quantity increases monotonically as a function of time during the intermediate out of equilibrium state."}
+{"text":"Derivation of the entropy change for reversible processes."}
+{"text":"The second part of the Second Law states that the entropy change of a system undergoing a reversible process is given by:"}
+{"text":"See here for the justification for this definition. Suppose that the system has some external parameter, \"x\", that can be changed. In general, the energy eigenstates of the system will depend on \"x\". According to the adiabatic theorem of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in."}
+{"text":"The generalized force, \"X\", corresponding to the external variable \"x\" is defined such that formula_63 is the work performed by the system if \"x\" is increased by an amount \"dx\". For example, if \"x\" is the volume, then \"X\" is the pressure. The generalized force for a system known to be in energy eigenstate formula_64 is given by:"}
+{"text":"Since the system can be in any energy eigenstate within an interval of formula_53, we define the generalized force for the system as the expectation value of the above expression:"}
+{"text":"To evaluate the average, we partition the formula_50 energy eigenstates by counting how many of them have a value for formula_69 within a range between formula_70 and formula_71. Calling this number formula_72, we have:"}
+{"text":"The average defining the generalized force can now be written:"}
+{"text":"We can relate this to the derivative of the entropy with respect to \"x\" at constant energy \"E\" as follows. Suppose we change \"x\" to \"x\" + \"dx\". Then formula_50 will change because the energy eigenstates depend on \"x\", causing energy eigenstates to move into or out of the range between formula_48 and formula_77. Let's focus again on the energy eigenstates for which formula_78 lies within the range between formula_70 and formula_71. Since these energy eigenstates increase in energy by \"Y dx\", all such energy eigenstates that are in the interval ranging from \"E\" \u2013 \"Y\" \"dx\" to \"E\" move from below \"E\" to above \"E\". There are"}
+{"text":"such energy eigenstates. If formula_82, all these energy eigenstates will move into the range between formula_48 and formula_77 and contribute to an increase in formula_56. The number of energy eigenstates that move from below formula_77 to above formula_77 is given by formula_88. The difference"}
+{"text":"is thus the net contribution to the increase in formula_56. Note that if \"Y dx\" is larger than formula_53 there will be the energy eigenstates that move from below \"E\" to above formula_77. They are counted in both formula_93 and formula_88, therefore the above expression is also valid in that case."}
+{"text":"Expressing the above expression as a derivative with respect to \"E\" and summing over \"Y\" yields the expression:"}
+{"text":"The logarithmic derivative of formula_56 with respect to \"x\" is thus given by:"}
+{"text":"The first term is intensive, i.e. it does not scale with system size. In contrast, the last term scales as the inverse system size and will thus vanish in the thermodynamic limit. We have thus found that:"}
+{"text":"Derivation for systems described by the canonical ensemble."}
+{"text":"If a system is in thermal contact with a heat bath at some temperature \"T\" then, in equilibrium, the probability distribution over the energy eigenvalues are given by the canonical ensemble:"}
+{"text":"Here \"Z\" is a factor that normalizes the sum of all the probabilities to 1, this function is known as the partition function. We now consider an infinitesimal reversible change in the temperature and in the external parameters on which the energy levels depend. It follows from the general formula for the entropy:"}
+{"text":"Inserting the formula for formula_104 for the canonical ensemble in here gives:"}
+{"text":"As elaborated above, it is thought that the second law of thermodynamics is a result of the very low-entropy initial conditions at the Big Bang. From a statistical point of view, these were very special conditions. On the other hand, they were quite simple, as the universe - or at least the part thereof from which the observable universe developed - seem to have been extremely uniform."}
+{"text":"This may seem somewhat paradoxical, since in many physical systems uniform conditions (e.g. mixed rather than separated gases) have high entropy. The paradox is solved once realizing that gravitational systems have negative heat capacity, so that when gravity is important, uniform conditions (e.g. gas of uniform density) in fact have lower entropy compared to non-uniform ones (e.g. black holes in empty space). Yet another approach is that the universe had high (or even maximal) entropy given its size, but as the universe grew it rapidly came out of thermodynamic equilibrium, its entropy only slightly increased compared to the increase in maximal possible entropy, and thus it has arrived at a very low entropy when compared to the much larger possible maximum given its later size."}
+{"text":"As for the reason why initial conditions were such, one suggestion is that cosmological inflation was enough to wipe off non-smoothness, while another is that the universe was created spontaneously where the mechanism of creation implies low-entropy initial conditions."}
+{"text":"There are two principal ways of formulating thermodynamics, (a) through passages from one state of thermodynamic equilibrium to another, and (b) through cyclic processes, by which the system is left unchanged, while the total entropy of the surroundings is increased. These two ways help to understand the processes of life. The thermodynamics of living organisms has been considered by many authors, such as Erwin Schr\u00f6dinger, L\u00e9on Brillouin and Isaac Asimov."}
+{"text":"Furthermore, the ability of living organisms to grow and increase in complexity, as well as to form correlations with their environment in the form of adaption and memory, is not opposed to the second law \u2013 rather, it is akin to general results following from it: Under some definitions, an increase in entropy also results in an increase in complexity, and for a finite system interacting with finite reservoirs, an increase in entropy is equivalent to an increase in correlations between the system and the reservoirs."}
+{"text":"Living organisms may be considered as open systems, because matter passes into and out from them. Thermodynamics of open systems is currently often considered in terms of passages from one state of thermodynamic equilibrium to another, or in terms of flows in the approximation of local thermodynamic equilibrium. The problem for living organisms may be further simplified by the approximation of assuming a steady state with unchanging flows. General principles of entropy production for such approximations are subject to unsettled current debate or research."}
+{"text":"Commonly, systems for which gravity is not important have a positive heat capacity, meaning that their temperature rises with their internal energy. Therefore, when energy flows from a high-temperature object to a low-temperature object, the source temperature decreases while the sink temperature is increased; hence temperature differences tend to diminish over time."}
+{"text":"This is not always the case for systems in which the gravitational force is important: systems that are bound by their own gravity, such as stars, can have negative heat capacities. As they contract, both their total energy and their entropy decrease but their internal temperature may increase. This can be significant for protostars and even gas giant planets such as Jupiter."}
+{"text":"As gravity is the most important force operating on cosmological scales, it may be difficult or impossible to apply the second law to the universe as a whole."}
+{"text":"The theory of classical or equilibrium thermodynamics is idealized. A main postulate or assumption, often not even explicitly stated, is the existence of systems in their own internal states of thermodynamic equilibrium. In general, a region of space containing a physical system at a given time, that may be found in nature, is not in thermodynamic equilibrium, read in the most stringent terms. In looser terms, nothing in the entire universe is or has ever been truly in exact thermodynamic equilibrium."}
+{"text":"In all cases, the assumption of thermodynamic equilibrium, once made, implies as a consequence that no putative candidate \"fluctuation\" alters the entropy of the system."}
+{"text":"It can easily happen that a physical system exhibits internal macroscopic changes that are fast enough to invalidate the assumption of the constancy of the entropy. Or that a physical system has so few particles that the particulate nature is manifest in observable fluctuations. Then the assumption of thermodynamic equilibrium is to be abandoned. There is no unqualified general definition of entropy for non-equilibrium states."}
+{"text":"There are intermediate cases, in which the assumption of local thermodynamic equilibrium is a very good approximation, but strictly speaking it is still an approximation, not theoretically ideal."}
+{"text":"For non-equilibrium situations in general, it may be useful to consider statistical mechanical definitions of other quantities that may be conveniently called 'entropy', but they should not be confused or conflated with thermodynamic entropy properly defined for the second law. These other quantities indeed belong to statistical mechanics, not to thermodynamics, the primary realm of the second law."}
+{"text":"The physics of macroscopically observable fluctuations is beyond the scope of this article."}
+{"text":"The second law of thermodynamics is a physical law that is not symmetric to reversal of the time direction. This does not conflict with symmetries observed in the fundamental laws of physics (particularly CPT symmetry) since the second law applies statistically on time-asymmetric boundary conditions. The second law has been related to the difference between moving forwards and backwards in time, or to the principle that cause precedes effect (the causal arrow of time, or causality)."}
+{"text":"Irreversibility in thermodynamic processes is a consequence of the asymmetric character of thermodynamic operations, and not of any internally irreversible microscopic properties of the bodies. Thermodynamic operations are macroscopic external interventions imposed on the participating bodies, not derived from their internal properties. There are reputed \"paradoxes\" that arise from failure to recognize this."}
+{"text":"Loschmidt's paradox, also known as the reversibility paradox, is the objection that it should not be possible to deduce an irreversible process from the time-symmetric dynamics that describe the microscopic evolution of a macroscopic system."}
+{"text":"James Clerk Maxwell imagined one container divided into two parts, \"A\" and \"B\". Both parts are filled with the same gas at equal temperatures and placed next to each other, separated by a wall. Observing the molecules on both sides, an imaginary demon guards a microscopic trapdoor in the wall. When a faster-than-average molecule from \"A\" flies towards the trapdoor, the demon opens it, and the molecule will fly from \"A\" to \"B\". The average speed of the molecules in \"B\" will have increased while in \"A\" they will have slowed down on average. Since average molecular speed corresponds to temperature, the temperature decreases in \"A\" and increases in \"B\", contrary to the second law of thermodynamics."}
+{"text":"One response to this question was suggested in 1929 by Le\u00f3 Szil\u00e1rd and later by L\u00e9on Brillouin. Szil\u00e1rd pointed out that a real-life Maxwell's demon would need to have some means of measuring molecular speed, and that the act of acquiring information would require an expenditure of energy."}
+{"text":"Maxwell's 'demon' repeatedly alters the permeability of the wall between \"A\" and \"B\". It is therefore performing thermodynamic operations on a microscopic scale, not just observing ordinary spontaneous or natural macroscopic thermodynamic processes."}
+{"text":"In particle physics, the Dirac equation is a relativistic wave equation derived by British physicist Paul Dirac in 1928. In its free form, or including electromagnetic interactions, it describes all spin-\u00bd massive particles such as electrons and quarks for which parity is a symmetry. It is consistent with both the principles of quantum mechanics and the theory of special relativity, and was the first theory to account fully for special relativity in the context of quantum mechanics. It was validated by accounting for the fine details of the hydrogen spectrum in a completely rigorous way."}
+{"text":"The equation also implied the existence of a new form of matter, \"antimatter\", previously unsuspected and unobserved and which was experimentally confirmed several years later. It also provided a \"theoretical\" justification for the introduction of several component wave functions in Pauli's phenomenological theory of spin. The wave functions in the Dirac theory are vectors of four complex numbers (known as bispinors), two of which resemble the Pauli wavefunction in the non-relativistic limit, in contrast to the Schr\u00f6dinger equation which described wave functions of only one complex value. Moreover, in the limit of zero mass, the Dirac equation reduces to the Weyl equation."}
+{"text":"Although Dirac did not at first fully appreciate the importance of his results, the entailed explanation of spin as a consequence of the union of quantum mechanics and relativity\u2014and the eventual discovery of the positron\u2014represents one of the great triumphs of theoretical physics. This accomplishment has been described as fully on a par with the works of Newton, Maxwell, and Einstein before him. In the context of quantum field theory, the Dirac equation is reinterpreted to describe quantum fields corresponding to spin-\u00bd particles."}
+{"text":"The Dirac equation appears on the floor of Westminster Abbey on the plaque commemorating Paul Dirac's life, which was unveiled on 13\u00a0November 1995."}
+{"text":"The Dirac equation in the form originally proposed by Dirac is:"}
+{"text":"where is the wave function for the electron of rest mass with spacetime coordinates . The are the components of the momentum, understood to be the momentum operator in the Schr\u00f6dinger equation. Also, is the speed of light, and is the reduced Planck constant. These fundamental physical constants reflect special relativity and quantum mechanics, respectively."}
+{"text":"Dirac's purpose in casting this equation was to explain the behavior of the relativistically moving electron, and so to allow the atom to be treated in a manner consistent with relativity. His rather modest hope was that the corrections introduced this way might have a bearing on the problem of atomic spectra."}
+{"text":"Up until that time, attempts to make the old quantum theory of the atom compatible with the theory of relativity, attempts based on discretizing the angular momentum stored in the electron's possibly non-circular orbit of the atomic nucleus, had failed \u2013 and the new quantum mechanics of Heisenberg, Pauli, Jordan, Schr\u00f6dinger, and Dirac himself had not developed sufficiently to treat this problem. Although Dirac's original intentions were satisfied, his equation had far deeper implications for the structure of matter and introduced new mathematical classes of objects that are now essential elements of fundamental physics."}
+{"text":"The new elements in this equation are the four matrices , , and , and the four-component wave function . There are four components in because the evaluation of it at any given point in configuration space is a bispinor. It is interpreted as a superposition of a spin-up electron, a spin-down electron, a spin-up positron, and a spin-down positron (see below for further discussion)."}
+{"text":"The matrices and are all Hermitian and are involutory:"}
+{"text":"These matrices and the form of the wave function have a deep mathematical significance. The algebraic structure represented by the gamma matrices had been created some 50\u00a0years earlier by the English mathematician W. K. Clifford. In turn, Clifford's ideas had emerged from the mid-19th-century work of the German mathematician Hermann Grassmann in his \"Lineale Ausdehnungslehre\" (\"Theory of Linear Extensions\"). The latter had been regarded as well-nigh incomprehensible by most of his contemporaries. The appearance of something so seemingly abstract, at such a late date, and in such a direct physical manner, is one of the most remarkable chapters in the history of physics."}
+{"text":"The single symbolic equation thus unravels into four coupled linear first-order partial differential equations for the four quantities that make up the wave function. The equation can be written more explicitly in Planck units as:"}
+{"text":"which makes it clearer that it is a set of four partial differential equations with four unknown functions."}
+{"text":"The Dirac equation is superficially similar to the Schr\u00f6dinger equation for a massive free particle:"}
+{"text":"The left side represents the square of the momentum operator divided by twice the mass, which is the non-relativistic kinetic energy. Because relativity treats space and time as a whole, a relativistic generalization of this equation requires that space and time derivatives must enter symmetrically as they do in the Maxwell equations that govern the behavior of light \u2014 the equations must be differentially of the \"same order\" in space and time. In relativity, the momentum and the energies are the space and time parts of a spacetime vector, the four-momentum, and they are related by the relativistically invariant relation"}
+{"text":"which says that the length of this four-vector is proportional to the rest mass . Substituting the operator equivalents of the energy and momentum from the Schr\u00f6dinger theory, we get the Klein\u2013Gordon equation describing the propagation of waves, constructed from relativistically invariant objects,"}
+{"text":"with the wave function being a relativistic scalar: a complex number which has the same numerical value in all frames of reference. Space and time derivatives both enter to second order. This has a telling consequence for the interpretation of the equation. Because the equation is second order in the time derivative, one must specify initial values both of the wave function itself and of its first time-derivative in order to solve definite problems. Since both may be specified more or less arbitrarily, the wave function cannot maintain its former role of determining the probability density of finding the electron in a given state of motion. In the Schr\u00f6dinger theory, the probability density is given by the positive definite expression"}
+{"text":"and this density is convected according to the probability current vector"}
+{"text":"with the conservation of probability current and density following from the continuity equation:"}
+{"text":"The fact that the density is positive definite and convected according to this continuity equation implies that we may integrate the density over a certain domain and set the total to 1, and this condition will be maintained by the conservation law. A proper relativistic theory with a probability density current must also share this feature. Now, if we wish to maintain the notion of a convected density, then we must generalize the Schr\u00f6dinger expression of the density and current so that space and time derivatives again enter symmetrically in relation to the scalar wave function. We are allowed to keep the Schr\u00f6dinger expression for the current, but must replace the probability density by the symmetrically formed expression"}
+{"text":"which now becomes the 4th component of a spacetime vector, and the entire probability 4-current density has the relativistically covariant expression"}
+{"text":"The continuity equation is as before. Everything is compatible with relativity now, but we see immediately that the expression for the density is no longer positive definite \u2013 the initial values of both and may be freely chosen, and the density may thus become negative, something that is impossible for a legitimate probability density. Thus, we cannot get a simple generalization of the Schr\u00f6dinger equation under the naive assumption that the wave function is a relativistic scalar, and the equation it satisfies, second order in time."}
+{"text":"Although it is not a successful relativistic generalization of the Schr\u00f6dinger equation, this equation is resurrected in the context of quantum field theory, where it is known as the Klein\u2013Gordon equation, and describes a spinless particle field (e.g. pi meson or Higgs boson). Historically, Schr\u00f6dinger himself arrived at this equation before the one that bears his name but soon discarded it. In the context of quantum field theory, the indefinite density is understood to correspond to the \"charge\" density, which can be positive or negative, and not the probability density."}
+{"text":"Dirac thus thought to try an equation that was \"first order\" in both space and time. One could, for example, formally (i.e. by abuse of notation) take the relativistic expression for the energy"}
+{"text":"replace by its operator equivalent, expand the square root in an infinite series of derivative operators, set up an eigenvalue problem, then solve the equation formally by iterations. Most physicists had little faith in such a process, even if it were technically possible."}
+{"text":"As the story goes, Dirac was staring into the fireplace at Cambridge, pondering this problem, when he hit upon the idea of taking the square root of the wave operator thus:"}
+{"text":"On multiplying out the right side we see that, in order to get all the cross-terms such as to vanish, we must assume"}
+{"text":"Dirac, who had just then been intensely involved with working out the foundations of Heisenberg's matrix mechanics, immediately understood that these conditions could be met if , , and are \"matrices\", with the implication that the wave function has \"multiple components\". This immediately explained the appearance of two-component wave functions in Pauli's phenomenological theory of spin, something that up until then had been regarded as mysterious, even to Pauli himself. However, one needs at least matrices to set up a system with the properties required \u2014 so the wave function had \"four\" components, not two, as in the Pauli theory, or one, as in the bare Schr\u00f6dinger theory. The four-component wave function represents a new class of mathematical object in physical theories that makes its first appearance here."}
+{"text":"Given the factorization in terms of these matrices, one can now write down immediately an equation"}
+{"text":"with formula_19 to be determined. Applying again the matrix operator on both sides yields"}
+{"text":"On taking formula_21 we find that all the components of the wave function \"individually\" satisfy the relativistic energy\u2013momentum relation. Thus the sought-for equation that is first-order in both space and time is"}
+{"text":"we get the Dirac equation as written above."}
+{"text":"To demonstrate the relativistic invariance of the equation, it is advantageous to cast it into a form in which the space and time derivatives appear on an equal footing. New matrices are introduced as follows:"}
+{"text":"and the equation takes the form (remembering the definition of the covariant components of the 4-gradient and especially that = )"}
+{"text":"where there is an implied summation over the values of the twice-repeated index , and is the 4-gradient. In practice one often writes the gamma matrices in terms of 2 \u00d7 2 sub-matrices taken from the Pauli matrices and the 2 \u00d7 2 identity matrix. Explicitly the standard representation is"}
+{"text":"The complete system is summarized using the Minkowski metric on spacetime in the form"}
+{"text":"denotes the anticommutator. These are the defining relations of a Clifford algebra over a pseudo-orthogonal 4-dimensional space with metric signature . The specific Clifford algebra employed in the Dirac equation is known today as the Dirac algebra. Although not recognized as such by Dirac at the time the equation was formulated, in hindsight the introduction of this \"geometric algebra\" represents an enormous stride forward in the development of quantum theory."}
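The defining anticommutation relations can be verified numerically. The following sketch (illustrative, not part of the original text; all names are ours) builds the standard-representation gamma matrices from the Pauli matrices and checks the Clifford-algebra relations for the metric signature (+, −, −, −):

```python
import numpy as np

# A sketch: the standard (Dirac) representation of the gamma matrices,
# assembled from the 2 x 2 Pauli matrices and the 2 x 2 identity.
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

gamma = [np.block([[I2, Z2], [Z2, -I2]])]                # gamma^0
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sigma]  # gamma^1..gamma^3

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, signature (+,-,-,-)

# Defining relations of the Dirac algebra:
# {gamma^mu, gamma^nu} = 2 eta^{mu nu} * identity
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
```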
+{"text":"The Dirac equation may now be interpreted as an eigenvalue equation, where the rest mass is proportional to an eigenvalue of the 4-momentum operator, the proportionality constant being the speed of light:"}
+{"text":"Using formula_31 (formula_32 is pronounced \"d-slash\"), according to Feynman slash notation, the Dirac equation becomes:"}
+{"text":"In practice, physicists often use units of measure such that , known as natural units. The equation then takes the simple form"}
+{"text":"A fundamental theorem states that if two distinct sets of matrices are given that both satisfy the Clifford relations, then they are connected to each other by a similarity transformation:"}
+{"text":"If in addition the matrices are all unitary, as are the Dirac set, then itself is unitary;"}
+{"text":"The transformation is unique up to a multiplicative factor of absolute value 1. Let us now imagine a Lorentz transformation to have been performed on the space and time coordinates, and on the derivative operators, which form a covariant vector. For the operator to remain invariant, the gammas must transform among themselves as a contravariant vector with respect to their spacetime index. These new gammas will themselves satisfy the Clifford relations, because of the orthogonality of the Lorentz transformation. By the fundamental theorem, we may replace the new set by the old set subject to a unitary transformation. In the new frame, remembering that the rest mass is a relativistic scalar, the Dirac equation will then take the form"}
+{"text":"If we now define the transformed spinor"}
+{"text":"then we have the transformed Dirac equation in a way that demonstrates manifest relativistic invariance:"}
+{"text":"Thus, once we settle on any unitary representation of the gammas, it is final provided we transform the spinor according to the unitary transformation that corresponds to the given Lorentz transformation."}
+{"text":"The various representations of the Dirac matrices employed will bring into focus particular aspects of the physical content in the Dirac wave function (see below). The representation shown here is known as the \"standard\" representation \u2013 in it, the wave function's upper two components go over into Pauli's 2\u00a0spinor wave function in the limit of low energies and small velocities in comparison to light."}
+{"text":"The considerations above reveal the origin of the gammas in \"geometry\", hearkening back to Grassmann's original motivation \u2013 they represent a fixed basis of unit vectors in spacetime. Similarly, products of the gammas such as represent \"oriented surface elements\", and so on. With this in mind, we can find the form of the unit volume element on spacetime in terms of the gammas as follows. By definition, it is"}
+{"text":"For this to be an invariant, the epsilon symbol must be a tensor, and so must contain a factor of , where is the determinant of the metric tensor. Since this is negative, that factor is \"imaginary\". Thus"}
+{"text":"This matrix is given the special symbol , owing to its importance when one is considering improper transformations of space-time, that is, those that change the orientation of the basis vectors. In the standard representation, it is"}
+{"text":"This matrix will also be found to anticommute with the other four Dirac matrices:"}
+{"text":"It takes a leading role when questions of \"parity\" arise because the volume element as a directed magnitude changes sign under a space-time reflection. Taking the positive square root above thus amounts to choosing a handedness convention on spacetime."}
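As a quick illustrative check (a sketch with our own naming, not part of the original derivation), one can construct this matrix in the standard representation and confirm both its block form and its anticommutation with the four Dirac matrices:

```python
import numpy as np

# Sketch: gamma_5 = i * gamma^0 gamma^1 gamma^2 gamma^3 in the standard
# representation, and a check of its anticommutation properties.
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
gamma = [np.block([[I2, Z2], [Z2, -I2]])]
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

gamma5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]

# In the standard representation gamma_5 is the off-diagonal block matrix
assert np.allclose(gamma5, np.block([[Z2, I2], [I2, Z2]]))
# It anticommutes with each of the four Dirac matrices and squares to 1
for mu in range(4):
    assert np.allclose(gamma5 @ gamma[mu] + gamma[mu] @ gamma5,
                       np.zeros((4, 4)))
assert np.allclose(gamma5 @ gamma5, np.eye(4))
```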
+{"text":"where is the conjugate transpose of , and noticing that"}
+{"text":"we obtain, by taking the Hermitian conjugate of the Dirac equation and multiplying from the right by , the adjoint equation:"}
+{"text":"where is understood to act to the left. Multiplying the Dirac equation by from the left, and the adjoint equation by from the right, and adding, produces the law of conservation of the Dirac current:"}
+{"text":"Now we see the great advantage of the first-order equation over the one Schr\u00f6dinger had tried \u2013 this is the conserved current density required by relativistic invariance, only now its 4th component is \"positive definite\" and thus suitable for the role of a probability density:"}
+{"text":"Because the probability density now appears as the fourth component of a relativistic vector and not a simple scalar as in the Schr\u00f6dinger equation, it will be subject to the usual effects of the Lorentz transformations such as time dilation. Thus, for example, atomic processes that are observed as rates, will necessarily be adjusted in a way consistent with relativity, while those involving the measurement of energy and momentum, which themselves form a relativistic vector, will undergo parallel adjustment which preserves the relativistic covariance of the observed values. The Dirac current itself is then the spacetime-covariant four-vector:"}
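A small numerical sketch (illustrative only, with our own naming) of the positivity claim: for an arbitrary 4-component spinor, the time component of the Dirac current equals the spinor's squared norm, and all four components come out real:

```python
import numpy as np

# Sketch: for an arbitrary 4-component spinor, the time component of the
# Dirac current j^mu = psibar gamma^mu psi equals psi^dagger psi (positive
# definite), and every component of j^mu is real.
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
gamma = [np.block([[I2, Z2], [Z2, -I2]])]
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

rng = np.random.default_rng(1)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psibar = psi.conj() @ gamma[0]                  # the Dirac adjoint
j = np.array([psibar @ gamma[mu] @ psi for mu in range(4)])

assert np.allclose(j[0], psi.conj() @ psi)      # time component = |psi|^2
assert j[0].real > 0                            # positive definite
assert np.allclose(j.imag, 0)                   # all components real
```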
+{"text":"See Dirac spinor for details of solutions to the Dirac equation. Note that since the Dirac operator acts on 4-tuples of square-integrable functions, its solutions should be members of the same Hilbert space. The fact that the energies of the solutions do not have a lower bound is unexpected \u2013 see the hole theory section below for more details."}
+{"text":"Here and formula_52 represent the components of the electromagnetic four-potential in their standard SI units, and the three sigmas are the Pauli matrices. On squaring out the first term, a residual interaction with the magnetic field is found, along with the usual classical Hamiltonian of a charged particle interacting with an applied field in SI units:"}
+{"text":"This Hamiltonian is now a matrix, so the Schr\u00f6dinger equation based on it must use a two-component wave function. On introducing the external electromagnetic 4-vector potential into the Dirac equation in a similar way, known as minimal coupling, it takes the form:"}
+{"text":"A second application of the Dirac operator will now reproduce the Pauli term exactly as before, because the spatial Dirac matrices multiplied by , have the same squaring and commutation properties as the Pauli matrices. What is more, the value of the gyromagnetic ratio of the electron, standing in front of Pauli's new term, is explained from first principles. This was a major achievement of the Dirac equation and gave physicists great faith in its overall correctness. There is more however. The Pauli theory may be seen as the low energy limit of the Dirac theory in the following manner. First the equation is written in the form of coupled equations for 2-spinors with the SI units restored:"}
+{"text":"Assuming the field is weak and the motion of the electron non-relativistic, we have the total energy of the electron approximately equal to its rest energy, and the momentum going over to the classical value,"}
+{"text":"and so the second equation may be written"}
+{"text":"which is of order \u2013 thus at typical energies and velocities, the bottom components of the Dirac spinor in the standard representation are much suppressed in comparison to the top components. Substituting this expression into the first equation gives after some rearrangement"}
+{"text":"It should be strongly emphasized that this separation of the Dirac spinor into large and small components depends explicitly on a low-energy approximation. The entire Dirac spinor represents an \"irreducible\" whole, and the components we have just neglected to arrive at the Pauli theory will bring in new phenomena in the relativistic regime \u2013 antimatter and the idea of creation and annihilation of particles."}
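The suppression of the lower components can be illustrated with a plane-wave sketch (our own toy setup, hbar = c = 1, momentum chosen along z): diagonalizing the free Dirac Hamiltonian for a small momentum shows the lower components scaled down by p/(E + m), roughly v/2c in the non-relativistic regime:

```python
import numpy as np

# Sketch: plane-wave Dirac Hamiltonian H = alpha.p + beta*m (hbar = c = 1)
# in the standard representation, momentum p along z. For p << m the lower
# ("small") components of a positive-energy spinor are suppressed by the
# factor p / (E + m) relative to the upper ("large") components.
m, p = 1.0, 0.1
I2 = np.eye(2)
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
H = np.block([[m * I2, p * sz], [p * sz, -m * I2]])

evals, evecs = np.linalg.eigh(H)
E = np.sqrt(p**2 + m**2)
psi = evecs[:, np.argmax(evals)]        # a positive-energy eigenstate
small = np.linalg.norm(psi[2:]) / np.linalg.norm(psi[:2])
assert abs(small - p / (E + m)) < 1e-9  # suppression ratio p/(E+m) ~ 0.05
```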
+{"text":"In the limit , the Dirac equation reduces to the Weyl equation, which describes relativistic massless spin- particles."}
+{"text":"Both the Dirac equation and the Adjoint Dirac equation can be obtained from (varying) the action with a specific Lagrangian density that is given by:"}
+{"text":"If one varies this with respect to one gets the Adjoint Dirac equation. Meanwhile, if one varies this with respect to one gets the Dirac equation."}
+{"text":"The critical physical question in a quantum theory is\u2014what are the physically observable quantities defined by the theory? According to the postulates of quantum mechanics, such quantities are defined by Hermitian operators that act on the Hilbert space of possible states of a system. The eigenvalues of these operators are then the possible results of measuring the corresponding physical quantity. In the Schr\u00f6dinger theory, the simplest such object is the overall Hamiltonian, which represents the total energy of the system. If we wish to maintain this interpretation on passing to the Dirac theory, we must take the Hamiltonian to be"}
+{"text":"where, as always, there is an implied summation over the twice-repeated index . This looks promising, because we see by inspection the rest energy of the particle and, in the case of , the energy of a charge placed in an electric potential . What about the term involving the vector potential? In classical electrodynamics, the energy of a charge moving in an applied potential is"}
+{"text":"Thus, the Dirac Hamiltonian is fundamentally distinguished from its classical counterpart, and we must take great care to correctly identify what is observable in this theory. Much of the apparently paradoxical behavior implied by the Dirac equation amounts to a misidentification of these observables."}
+{"text":"The negative solutions to the equation are problematic, for it was assumed that the particle has a positive energy. Mathematically speaking, however, there seems to be no reason for us to reject the negative-energy solutions. Since they exist, we cannot simply ignore them, for once we include the interaction between the electron and the electromagnetic field, any electron placed in a positive-energy eigenstate would decay into negative-energy eigenstates of successively lower energy. Real electrons obviously do not behave in this way, or they would disappear by emitting energy in the form of photons."}
+{"text":"To cope with this problem, Dirac introduced the hypothesis, known as hole theory, that the vacuum is the many-body quantum state in which all the negative-energy electron eigenstates are occupied. This description of the vacuum as a \"sea\" of electrons is called the Dirac sea. Since the Pauli exclusion principle forbids electrons from occupying the same state, any additional electron would be forced to occupy a positive-energy eigenstate, and positive-energy electrons would be forbidden from decaying into negative-energy eigenstates."}
+{"text":"Dirac further reasoned that if the negative-energy eigenstates are incompletely filled, each unoccupied eigenstate \u2013 called a hole \u2013 would behave like a positively charged particle. The hole possesses a \"positive\" energy since energy is required to create a particle\u2013hole pair from the vacuum. As noted above, Dirac initially thought that the hole might be the proton, but Hermann Weyl pointed out that the hole should behave as if it had the same mass as an electron, whereas the proton is over 1800 times heavier. The hole was eventually identified as the positron, experimentally discovered by Carl Anderson in 1932."}
+{"text":"It is not entirely satisfactory to describe the \"vacuum\" using an infinite sea of negative-energy electrons. The infinitely negative contributions from the sea of negative-energy electrons have to be canceled by an infinite positive \"bare\" energy and the contribution to the charge density and current coming from the sea of negative-energy electrons is exactly canceled by an infinite positive \"jellium\" background so that the net electric charge density of the vacuum is zero. In quantum field theory, a Bogoliubov transformation on the creation and annihilation operators (turning an occupied negative-energy electron state into an unoccupied positive energy positron state and an unoccupied negative-energy electron state into an occupied positive energy positron state) allows us to bypass the Dirac sea formalism even though, formally, it is equivalent to it."}
+{"text":"In certain applications of condensed matter physics, however, the underlying concepts of \"hole theory\" are valid. The sea of conduction electrons in an electrical conductor, called a Fermi sea, contains electrons with energies up to the chemical potential of the system. An unfilled state in the Fermi sea behaves like a positively charged electron, though it is referred to as a \"hole\" rather than a \"positron\". The negative charge of the Fermi sea is balanced by the positively charged ionic lattice of the material."}
+{"text":"In quantum field theories such as quantum electrodynamics, the Dirac field is subject to a process of second quantization, which resolves some of the paradoxical features of the equation."}
+{"text":"The Dirac equation is Lorentz covariant. Articulating this helps illuminate not only the Dirac equation, but also the Majorana spinor and Elko spinor, which although closely related, have subtle and important differences."}
+{"text":"Understanding Lorentz covariance is simplified by keeping in mind the geometric character of the process. Let formula_65 be a single, fixed point in the spacetime manifold. Its location can be expressed in multiple coordinate systems. In the physics literature, these are written as formula_66 and formula_67, with the understanding that both formula_66 and formula_67 describe \"the same\" point formula_65, but in different local frames of reference (a frame of reference over a small extended patch of spacetime)."}
+{"text":"One can imagine formula_65 as having a fiber of different coordinate frames above it. In geometric terms, one says that spacetime can be characterized as a fiber bundle, and specifically, the frame bundle. The difference between two points formula_66 and formula_67 in the same fiber is a combination of rotations and Lorentz boosts. A choice of coordinate frame is a (local) section through that bundle."}
+{"text":"The presentation here follows that of Itzykson and Zuber. It is very nearly identical to that of Bjorken and Drell. A similar derivation in a general relativistic setting can be found in Weinberg. Under a Lorentz transformation formula_74 the Dirac spinor is defined to transform as"}
+{"text":"It can be shown that an explicit expression for formula_76 is given by"}
+{"text":"where formula_78 parameterizes the Lorentz transformation, and formula_79 is the 4x4 matrix"}
+{"text":"This matrix can be interpreted as the intrinsic angular momentum of the Dirac field. That it deserves this interpretation arises by contrasting it to the generator formula_81 of Lorentz transformations, having the form"}
+{"text":"This can be interpreted as the total angular momentum. It acts on the spinor field as"}
+{"text":"Note the formula_66 above does \"not\" have a prime on it: the above is obtained by transforming formula_85 obtaining the change to formula_86 and then returning to the original coordinate system formula_87."}
+{"text":"The geometrical interpretation of the above is that the frame field is affine, having no preferred origin. The generator formula_81 generates the symmetries of this space: it provides a relabelling of a fixed point formula_89 The generator formula_79 generates a movement from one point in the fiber to another: a movement from formula_85 with both formula_66 and formula_67 still corresponding to the same spacetime point formula_94 These perhaps obtuse remarks can be elucidated with explicit algebra."}
+{"text":"Let formula_95 be a Lorentz transformation. The Dirac equation is"}
+{"text":"If the Dirac equation is to be covariant, then it should have exactly the same form in all Lorentz frames:"}
+{"text":"The two spinors formula_98 and formula_99 should both describe the same physical field, and so should be related by a transformation that does not change any physical observables (charge, current, mass, \"etc.\") The transformation should encode only the change of coordinate frame. It can be shown that such a transformation is a 4x4 unitary matrix. Thus, one may presume that the relation between the two frames can be written as"}
+{"text":"Inserting this into the transformed equation, the result is"}
+{"text":"The original Dirac equation is then regained if"}
+{"text":"An explicit expression for formula_104 (equal to the expression given above) can be obtained by considering an infinitesimal Lorentz transformation"}
+{"text":"where formula_106 is the metric tensor and formula_107 is antisymmetric. After plugging and chugging, one obtains"}
+{"text":"which is the (infinitesimal) form for formula_76 above. To obtain the affine relabelling, write"}
+{"text":"After properly antisymmetrizing, one obtains the generator of symmetries formula_81 given earlier. Thus, both formula_81 and formula_79 can be said to be the \"generators of Lorentz transformations\", but with a subtle distinction: the first corresponds to a relabelling of points on the affine frame bundle, which forces a translation along the fiber of the spinor on the spin bundle, while the second corresponds to translations along the fiber of the spin bundle (taken as a movement formula_85 along the frame bundle, as well as a movement formula_115 along the fiber of the spin bundle.) Weinberg provides additional arguments for the physical interpretation of these as total and intrinsic angular momentum."}
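For a concrete, easily checked case (a sketch using our own conventions, not from the original text), take a spatial rotation about the z axis: the corresponding spinor transformation is unitary, and conjugation by it carries the gammas into each other exactly as the Lorentz matrix mixes the vector indices:

```python
import numpy as np

# Sketch: rotation by theta about z. The spinor transformation is
# S = exp(-(theta/2) gamma^1 gamma^2); since (gamma^1 gamma^2)^2 = -1 the
# exponential closes into cos/sin, so no matrix-exponential routine is needed.
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
gamma = [np.block([[I2, Z2], [Z2, -I2]])]
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

theta = 0.7
g12 = gamma[1] @ gamma[2]
S = np.cos(theta / 2) * np.eye(4) - np.sin(theta / 2) * g12
Sinv = np.cos(theta / 2) * np.eye(4) + np.sin(theta / 2) * g12
Lam = np.array([[1, 0, 0, 0],
                [0, np.cos(theta), np.sin(theta), 0],
                [0, -np.sin(theta), np.cos(theta), 0],
                [0, 0, 0, 1]])

# S^{-1} gamma^mu S = Lambda^mu_nu gamma^nu
for mu in range(4):
    rhs = sum(Lam[mu, nu] * gamma[nu] for nu in range(4))
    assert np.allclose(Sinv @ gamma[mu] @ S, rhs)
# S is unitary, as expected for a spatial rotation
assert np.allclose(S.conj().T @ S, np.eye(4))
```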
+{"text":"The Dirac equation can be formulated in a number of other ways."}
+{"text":"This article has developed the Dirac equation in flat spacetime according to special relativity. It is possible to formulate the Dirac equation in curved spacetime."}
+{"text":"This article developed the Dirac equation using four vectors and Schr\u00f6dinger operators. The Dirac equation in the algebra of physical space uses a Clifford algebra over the real numbers, a type of geometric algebra."}
+{"text":"In quantum field theory, the nonlinear Dirac equation is a model of self-interacting Dirac fermions."}
+{"text":"This model is widely considered in quantum physics as a toy model of self-interacting electrons."}
+{"text":"The nonlinear Dirac equation appears in the Einstein-Cartan-Sciama-Kibble theory of gravity, which extends general relativity to matter with intrinsic angular momentum (spin). This theory removes a constraint of the symmetry of the affine connection and treats its antisymmetric part, the torsion tensor, as a variable in varying the action. In the resulting field equations, the torsion tensor is a homogeneous, linear function of the spin tensor. The minimal coupling between torsion and Dirac spinors thus generates an axial-axial, spin\u2013spin interaction in fermionic matter, which becomes significant only at extremely high densities. Consequently, the Dirac equation becomes nonlinear (cubic) in the spinor field, which causes fermions to be spatially extended and may remove the ultraviolet divergence in quantum field theory."}
+{"text":"Two common examples are the massive Thirring model and the Soler model."}
+{"text":"The Thirring model was originally formulated as a model in (1 + 1) space-time dimensions and is characterized by the Lagrangian density"}
+{"text":"where is the spinor field, is the Dirac adjoint spinor,"}
+{"text":"(Feynman slash notation is used), is the coupling constant, is the mass, and are the \"two\"-dimensional gamma matrices; finally, is an index."}
+{"text":"The Soler model was originally formulated in (3 + 1) space-time dimensions. It is characterized by the Lagrangian density"}
+{"text":"is now the four-gradient operator contracted with the \"four\"-dimensional Dirac gamma matrices , so therein ."}
+{"text":"In Einstein-Cartan theory the Lagrangian density for a Dirac spinor field is given by (formula_5)"}
+{"text":"is the Fock-Ivanenko covariant derivative of a spinor with respect to the affine connection, formula_8 is the spin connection, formula_9 is the determinant of the metric tensor formula_10, and the Dirac matrices satisfy"}
+{"text":"The Einstein-Cartan field equations for the spin connection yield an algebraic constraint between the spin connection and the spinor field rather than a partial differential equation, which allows the spin connection to be explicitly eliminated from the theory. The final result is a nonlinear Dirac equation containing an effective \"spin-spin\" self-interaction,"}
+{"text":"where formula_13 is the general-relativistic covariant derivative of a spinor, and formula_14 is the Einstein gravitational constant, formula_15. The cubic term in this equation becomes significant at densities on the order of formula_16."}
+{"text":"In mathematical physics, the Dirac equation in curved spacetime generalizes the original Dirac equation to curved space."}
+{"text":"It can be written by using vierbein fields and the gravitational spin connection. The vierbein defines a local rest frame, allowing the constant Gamma matrices to act at each spacetime point. In this way, Dirac's equation takes the following form in curved spacetime:"}
+{"text":"Here is the vierbein and is the covariant derivative for fermionic fields, defined as follows"}
+{"text":"where is the commutator of Gamma matrices:"}
+{"text":"Note that here Latin indices denote the \"Lorentzian\" vierbein labels while Greek indices denote manifold coordinate indices."}
+{"text":"The optical response of a semiconductor follows if one can determine its macroscopic polarization formula_1 as a function of the electric field formula_2 that excites it. The connection between formula_1 and the microscopic polarization formula_4 is given by"}
+{"text":"where the sum involves crystal-momenta formula_6 of all relevant electronic states. In semiconductor optics, one typically excites transitions between a valence and a conduction band. In this connection, formula_7 is the dipole matrix element between the conduction and valence band and formula_8 defines the corresponding transition amplitude."}
+{"text":"The derivation of the SBEs starts from a system Hamiltonian that fully includes the free particles, the Coulomb interaction, the dipole interaction between classical light and electronic states, as well as the phonon contributions. As almost always in many-body physics, it is most convenient to apply the second-quantization formalism once the appropriate system Hamiltonian formula_9 is identified. One can then derive the quantum dynamics of relevant observables formula_10 by using the Heisenberg equation of motion"}
+{"text":"Due to the many-body interactions within formula_9, the dynamics of the observable formula_10 couples to new observables and the equation structure cannot be closed. This is the well-known BBGKY hierarchy problem that can be systematically truncated with different methods such as the cluster-expansion approach."}
+{"text":"At operator level, the microscopic polarization is defined by an expectation value for a single electronic transition between a valence and a conduction band. In second quantization, conduction-band electrons are defined by fermionic creation and annihilation operators formula_14 and formula_15, respectively. An analogous identification, i.e., formula_16 and formula_17, is made for the valence band electrons. The corresponding electronic interband transition then becomes"}
+{"text":"that describe transition amplitudes for moving an electron from conduction to valence band (formula_19 term) or vice versa (formula_8 term). At the same time, an electron distribution follows from"}
+{"text":"It is also convenient to follow the distribution of electronic vacancies, i.e., the holes,"}
+{"text":"that are left in the valence band due to optical excitation processes."}
+{"text":"The quantum dynamics of optical excitations yields integro-differential equations that constitute the SBEs"}
+{"text":"+ \\mathrm{i} \\hbar \\left. \\frac{\\partial}{\\partial t} P_{\\mathbf{k}} \\right|_{\\mathrm{scatter}}\\,"}
+{"text":"as well as the renormalized carrier energy"}
+{"text":"where formula_27 corresponds to the energy of free electron\u2013hole pairs and formula_28 is the Coulomb matrix element, given here in terms of the carrier wave vector formula_29."}
+{"text":"The symbolically denoted formula_30 contributions stem from the hierarchical coupling due to many-body interactions. Conceptually, formula_31, formula_32, and formula_33 are single-particle expectation values while the hierarchical coupling originates from two-particle correlations such as polarization-density correlations or polarization-phonon correlations. Physically, these two-particle correlations introduce several nontrivial effects such as screening of Coulomb interaction, Boltzmann-type scattering of formula_34 and formula_35 toward Fermi\u2013Dirac distribution, excitation-induced dephasing, and further renormalization of energies due to correlations."}
+{"text":"All these correlation effects can be systematically included by solving also the dynamics of the two-particle correlations. At this level of sophistication, one can use the SBEs to predict the optical response of semiconductors without phenomenological parameters, which gives the SBEs a very high degree of predictability. Indeed, one can use the SBEs to predict suitable laser designs through the accurate knowledge they produce about the semiconductor's gain spectrum. One can even use the SBEs to deduce the existence of correlations, such as bound excitons, from quantitative measurements."}
+{"text":"The presented SBEs are formulated in momentum space since the carrier's crystal momentum follows from formula_36. An equivalent set of equations can also be formulated in position space. However, the correlation computations, in particular, are much simpler to perform in momentum space."}
+{"text":"The formula_31 dynamics show a structure in which an individual formula_31 is coupled to \"all\" other microscopic polarizations due to the Coulomb interaction formula_28. Therefore, the transition amplitude formula_31 is collectively modified by the presence of other transition amplitudes. Only if one sets formula_28 to zero does one find isolated transitions within each formula_42 state that follow exactly the same dynamics as the optical Bloch equations predict. Therefore, the Coulomb interaction among the formula_31 already produces a new solid-state effect compared with optical transitions in simple atoms."}
+{"text":"Conceptually, formula_31 is just a transition amplitude for exciting an electron from the valence to the conduction band. At the same time, the homogeneous part of the formula_31 dynamics yields an eigenvalue problem that can be expressed through the generalized Wannier equation. The eigenstates of the Wannier equation are analogous to the bound solutions of the hydrogen problem of quantum mechanics. These are often referred to as exciton solutions, and they formally describe the Coulombic binding of oppositely charged electrons and holes."}
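The analogy with the hydrogen problem can be made concrete with a toy sketch (a 1D model with a softened Coulomb attraction in dimensionless units, our own construction rather than the actual semiconductor kernel): diagonalizing a discretized relative-motion Hamiltonian produces discrete bound levels below the continuum edge, the analogues of exciton states:

```python
import numpy as np

# Toy sketch (dimensionless units, hbar^2/2m = 1): a 1D "Wannier-like"
# eigenvalue problem for the relative electron-hole motion with a softened
# Coulomb attraction. Bound "exciton" levels appear below the continuum
# edge at zero energy, in analogy with the hydrogen problem.
N, L = 400, 80.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Kinetic energy by second-order finite differences
T = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / dx**2
V = np.diag(-1.0 / (np.abs(x) + 1.0))   # softened electron-hole attraction

energies = np.linalg.eigvalsh(T + V)
bound = energies[energies < 0]          # discrete levels below the edge
assert bound.size >= 1 and energies[0] < 0
```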
+{"text":"However, a real exciton is a true two-particle correlation because one must then have a correlation between one electron to another hole. Therefore, the appearance of exciton resonances in the polarization does not signify the presence of excitons because formula_31 is a single-particle transition amplitude. The excitonic resonances are a direct consequence of Coulomb coupling among all transitions possible in the system. In other words, the single-particle transitions themselves are influenced by Coulomb interaction making it possible to detect exciton resonance in optical response even when true excitons are not present."}
+{"text":"Therefore, it is often customary to specify optical resonances as exciton\"ic\" instead of exciton resonances. The actual role of excitons in the optical response can only be deduced from the quantitative changes they induce in the linewidth and energy shift of the excitonic resonances."}
+{"text":"The solutions of the Wannier equation can provide valuable insight into the basic properties of a semiconductor's optical response. In particular, one can solve the steady-state solutions of the SBEs to predict the optical absorption spectrum analytically with the so-called Elliott formula. In this form, one can verify that an unexcited semiconductor shows several excitonic absorption resonances well below the fundamental bandgap energy. Obviously, this situation cannot be probing excitons because the initial many-body system does not contain electrons and holes to begin with. Furthermore, the probing can, in principle, be performed so gently that one essentially does not excite electron\u2013hole pairs. This gedanken experiment illustrates nicely why one can detect excitonic resonances without having excitons in the system, all by virtue of the Coulomb coupling among the transition amplitudes."}
+{"text":"The SBEs are particularly useful when solving the light propagation through a semiconductor structure. In this case, one needs to solve the SBEs together with Maxwell's equations driven by the optical polarization. This self-consistent set is called the Maxwell\u2013SBEs and is frequently applied to analyze present-day experiments and to simulate device designs."}
+{"text":"At this level, the SBEs provide an extremely versatile method that describes linear as well as nonlinear phenomena such as excitonic effects, propagation effects, semiconductor microcavity effects, four-wave-mixing, polaritons in semiconductor microcavities, gain spectroscopy, and so on. One can also generalize the SBEs by including excitation with terahertz (THz) fields that are typically resonant with intraband transitions. One can also quantize the light field and investigate quantum-optical effects that result. In this situation, the SBEs become coupled to the semiconductor luminescence equations."}
+{"text":"The Cauchy momentum equation is a vector partial differential equation put forth by Cauchy that describes the non-relativistic momentum transport in any continuum."}
+{"text":"In convective (or Lagrangian) form the Cauchy momentum equation is written as:"}
+{"text":"Note that we use column vectors (in the Cartesian coordinate system) above only for clarity, but the equation is written using physical components (which are neither covariant (\"row\") nor contravariant (\"column\") vectors). However, if we choose a non-orthogonal curvilinear coordinate system, then we should calculate and write the equations in covariant (\"row vector\") or contravariant (\"column vector\") form."}
+{"text":"After an appropriate change of variables, it can also be written in conservation form:"}
+{"text":"where is the momentum density at a given space-time point, is the flux associated to the momentum density, and contains all of the body forces per unit volume."}
+{"text":"Let us start with the generalized momentum conservation principle which can be written as follows: \"The change in system momentum is proportional to the resulting force acting on this system\". It is expressed by the formula:"}
+{"text":"where formula_13 is the momentum at time t, and formula_14 is the force averaged over the interval formula_15. After dividing by formula_15 and passing to the limit formula_17 we get (the derivative):"}
+{"text":"Let us analyse each side of the equation above."}
+{"text":"We split the forces into body forces formula_19 and surface forces formula_20"}
+{"text":"Surface forces act on walls of the cubic fluid element. For each wall, the X component of these forces was marked in the figure with a cubic element (in the form of a product of stress and surface area e.g. formula_22)."}
+{"text":"Adding forces (their X components) acting on each of the cube walls, we get:"}
+{"text":"After ordering formula_24 and performing similar reasoning for components formula_25 (they have not been shown in the figure, but these would be vectors parallel to the Y and Z axes, respectively) we get:"}
+{"text":"We can then write it in the symbolic operational form:"}
+{"text":"There are mass forces acting on the inside of the control volume. We can write them using the acceleration field formula_30 (e.g. gravitational acceleration):"}
+{"text":"Let us calculate momentum of the cube:"}
+{"text":"Because we assume that tested mass (cube) formula_33 is constant in time, so"}
+{"text":"Divide both sides by formula_38, and because formula_39 we get:"}
+{"text":"Applying Newton's second law (th component) to a control volume in the continuum being modeled gives:"}
+{"text":"Then, based on the Reynolds transport theorem and using material derivative notation, one can write"}
+{"text":"where represents the control volume. Since this equation must hold for any control volume, it must be true that the integrand is zero, from this the Cauchy momentum equation follows. The main step (not done above) in deriving this equation is establishing that the derivative of the stress tensor is one of the forces that constitutes ."}
+{"text":"The Cauchy momentum equation can also be put in the following form:"}
+{"text":"where is the momentum density at the point considered in the continuum (for which the continuity equation holds), is the flux associated to the momentum density, and contains all of the body forces per unit volume. is the dyad of the velocity."}
+{"text":"Here and have same number of dimensions as the flow speed and the body acceleration, while , being a tensor, has ."}
+{"text":"In the Eulerian forms it is apparent that the assumption of no deviatoric stress brings Cauchy equations to the Euler equations."}
+{"text":"A significant feature of the Navier\u2013Stokes equations is the presence of convective acceleration: the effect of time-independent acceleration of a flow with respect to space. While individual continuum particles indeed experience time dependent acceleration, the convective acceleration of the flow field is a spatial effect, one example being fluid speeding up in a nozzle."}
+{"text":"Regardless of what kind of continuum is being dealt with, convective acceleration is a nonlinear effect. Convective acceleration is present in most flows (exceptions include one-dimensional incompressible flow), but its dynamic effect is disregarded in creeping flow (also called Stokes flow). Convective acceleration is represented by the nonlinear quantity , which may be interpreted either as or as , with the tensor derivative of the velocity vector . Both interpretations give the same result."}
+{"text":"The convection term formula_44 can be written as , where is the advection operator. This representation can be contrasted to the one in terms of the tensor derivative."}
+{"text":"The tensor derivative is the component-by-component derivative of the velocity vector, defined by , so that"}
+{"text":"where the Feynman subscript notation is used, which means the subscripted gradient operates only on the factor ."}
+{"text":"Lamb in his famous classical book Hydrodynamics (1895), used this identity to change the convective term of the flow velocity in rotational form, i.e. without a tensor derivative:"}
+{"text":"where the vector formula_48 is called the Lamb vector. The Cauchy momentum equation becomes:"}
+{"text":"In fact, in case of an external conservative field, by defining its potential :"}
+{"text":"And by projecting the momentum equation on the flow direction, i.e. along a \"streamline\", the cross product disappears due to a vector calculus identity of the triple scalar product:"}
+{"text":"If the stress tensor is isotropic, then only the pressure enters: formula_55 (where is the identity tensor), and the Euler momentum equation in the steady incompressible case becomes:"}
+{"text":"that is, \"the mass conservation for a steady incompressible flow states that the density along a streamline is constant\". This leads to a considerable simplification of the Euler momentum equation:"}
+{"text":"in fact, the above equation can be simply written as:"}
+{"text":"That is, \"the momentum balance for a steady inviscid and incompressible flow in an external conservative field states that the total head along a streamline is constant\"."}
+{"text":"The Lamb form is also useful in irrotational flow, where the curl of the velocity (called vorticity) is equal to zero. In that case, the convection term in formula_44 reduces to"}
+{"text":"The effect of stress in the continuum flow is represented by the and terms; these are gradients of surface forces, analogous to stresses in a solid. Here is the pressure gradient and arises from the isotropic part of the Cauchy stress tensor. This part is given by the normal stresses that occur in almost all situations. The anisotropic part of the stress tensor gives rise to , which usually describes viscous forces; for incompressible flow, this is only a shear effect. Thus, is the deviatoric stress tensor, and the stress tensor is equal to:"}
+{"text":"where is the identity matrix in the space considered and the shear tensor."}
+{"text":"All non-relativistic momentum conservation equations, such as the Navier\u2013Stokes equation, can be derived by beginning with the Cauchy momentum equation and specifying the stress tensor through a constitutive relation. By expressing the shear tensor in terms of viscosity and fluid velocity, and assuming constant density and viscosity, the Cauchy momentum equation will lead to the Navier\u2013Stokes equations. By assuming inviscid flow, the Navier\u2013Stokes equations can further simplify to the Euler equations."}
+{"text":"The divergence of the stress tensor can be written as"}
+{"text":"The effect of the pressure gradient on the flow is to accelerate the flow in the direction from high pressure to low pressure."}
+{"text":"As written in the Cauchy momentum equation, the stress terms and are yet unknown, so this equation alone cannot be used to solve problems. Besides the equations of motion\u2014Newton's second law\u2014a force model is needed relating the stresses to the flow motion. For this reason, assumptions based on natural observations are often applied to specify the stresses in terms of the other flow variables, such as velocity and density."}
+{"text":"The vector field represents body forces per unit mass. Typically, these consist of only gravity acceleration, but may include others, such as electromagnetic forces. In non-inertial coordinate frames, other \"inertial accelerations\" associated with rotating coordinates may arise."}
+{"text":"Often, these forces may be represented as the gradient of some scalar quantity , with in which case they are called conservative forces. Gravity in the direction, for example, is the gradient of . Because pressure from such gravitation arises only as a gradient, we may include it in the pressure term as a body force . The pressure and force terms on the right-hand side of the Navier\u2013Stokes equation become"}
+{"text":"It is also possible to include external influences into the stress term formula_66 rather than the body force term. This may even include antisymmetric stresses (inputs of angular momentum), in contrast to the usually symmetrical internal contributions to the stress tensor."}
+{"text":"In order to make the equations dimensionless, a characteristic length and a characteristic velocity need to be defined. These should be chosen such that the dimensionless variables are all of order one. The following dimensionless variables are thus obtained:"}
+{"text":"Substitution of these inverted relations in the Euler momentum equations yields:"}
+{"text":"and by dividing for the first coefficient:"}
+{"text":"and the coefficient of skin-friction or the one usually referred as 'drag' co-efficient in the field of aerodynamics:"}
+{"text":"by passing respectively to the conservative variables, i.e. the momentum density and the force density:"}
+{"text":"the equations are finally expressed (now omitting the indexes):"}
+{"text":"2 \\nabla \\cdot \\boldsymbol \\tau + \\frac 1 {\\mathrm{Fr}} \\mathbf g"}
+{"text":"Cauchy equations in the Froude limit (corresponding to negligible external field) are named free Cauchy equations:"}
+{"text":"and can be eventually conservation equations. The limit of high Froude numbers (low external field) is thus notable for such equations and is studied with perturbation theory."}
+{"text":"Finally in convective form the equations are:"}
+{"text":"For asymmetric stress tensors, equations in general take the following forms:"}
+{"text":"Below, we write the main equation in pressure-tau form assuming that the stress tensor is symmetrical (formula_75):"}
+{"text":"Plastic limit theorems in continuum mechanics provide two bounds that can be used to determine whether material failure is possible by means of plastic deformation for a given external loading scenario. According to the theorems, to find the range within which the true solution must lie, it is necessary to find both a stress field that balances the external forces and a velocity field or flow pattern that corresponds to those stresses. If the upper and lower bounds provided by the velocity field and stress field coincide, the exact value of the collapse load is determined."}
+{"text":"The two plastic limit theorems apply to any elastic-perfectly plastic body or assemblage of bodies."}
+{"text":"If an equilibrium distribution of stress can be found which balances the applied load and nowhere violates the yield criterion, the body (or bodies) will not fail, or will be just at the point of failure."}
+{"text":"The body (or bodies) will collapse if there is any compatible pattern of plastic deformation for which the rate of work done by the external loads exceeds the internal plastic dissipation."}
+{"text":"In particle physics, the baryon number is a strictly conserved additive quantum number of a system. It is defined as"}
+{"text":"where \"n\"q is the number of quarks, and \"n\" is the number of antiquarks. Baryons (three quarks) have a baryon number of +1, mesons (one quark, one antiquark) have a baryon number of 0, and antibaryons (three antiquarks) have a baryon number of \u22121. Exotic hadrons like pentaquarks (four quarks, one antiquark) and tetraquarks (two quarks, two antiquarks) are also classified as baryons and mesons depending on their baryon number."}
+{"text":"Quarks carry not only electric charge, but also charges such as color charge and weak isospin. Because of a phenomenon known as \"color confinement\", a hadron cannot have a net color charge; that is, the total color charge of a particle has to be zero (\"white\"). A quark can have one of three \"colors\", dubbed \"red\", \"green\", and \"blue\"; while an antiquark may be either \"anti-red\", \"anti-green\" or \"anti-blue\"."}
+{"text":"For normal hadrons, a white color can thus be achieved in one of three ways:"}
+{"text":"The baryon number was defined long before the quark model was established, so rather than changing the definitions, particle physicists simply gave quarks one third the baryon number. Nowadays it might be more accurate to speak of the conservation of quark number."}
+{"text":"In theory, exotic hadrons can be formed by adding pairs of quarks and antiquarks, provided that each pair has a matching color\/anticolor. For example, a pentaquark (four quarks, one antiquark) could have the individual quark colors: red, green, blue, blue, and antiblue. In 2015, the LHCb collaboration at CERN reported results consistent with pentaquark states in the decay of bottom Lambda baryons ()."}
+{"text":"Particles without any quarks have a baryon number of zero. Such particles are"}
+{"text":"The baryon number is conserved in all the interactions of the Standard Model, with one possible exception. 'Conserved' means that the sum of the baryon number of all incoming particles is the same as the sum of the baryon numbers of all particles resulting from the reaction. The one exception is the hypothesized Adler\u2013Bell\u2013Jackiw anomaly in electroweak interactions; however, sphalerons are not all that common and could occur at high energy and temperature levels and can explain electroweak baryogenesis and leptogenesis. Electroweak sphalerons can only change the baryon and\/or lepton number by 3 or multiples of 3 (collision of three baryons into three leptons\/antileptons and vice versa). No experimental evidence of sphalerons has yet been observed."}
+{"text":"The hypothetical concepts of grand unified theory (GUT) models and supersymmetry allows for the changing of a baryon into leptons and antiquarks (see \"B\" \u2212 \"L\"), thus violating the conservation of both baryon and lepton numbers. Proton decay would be an example of such a process taking place, but has never been observed."}
+{"text":"The conservation of baryon number is not consistent with the physics of black hole evaporation via Hawking radiation. It is expected in general that quantum gravitational effects violate the conservation of all charges associated to global symmetries. The violation of conservation of baryon number led John Archibald Wheeler to speculate on a principle of mutability for all physical properties."}
+{"text":"In particle physics, CP violation is a violation of CP-symmetry (or charge conjugation parity symmetry): the combination of C-symmetry (charge symmetry) and P-symmetry (parity symmetry). CP-symmetry states that the laws of physics should be the same if a particle is interchanged with its antiparticle (C symmetry) while its spatial coordinates are inverted (\"mirror\" or P symmetry). The discovery of CP violation in 1964 in the decays of neutral kaons resulted in the Nobel Prize in Physics in 1980 for its discoverers James Cronin and Val Fitch."}
+{"text":"It plays an important role both in the attempts of cosmology to explain the dominance of matter over antimatter in the present universe, and in the study of weak interactions in particle physics."}
+{"text":"Until the 1950s, parity conservation was believed to be one of the fundamental geometric conservation laws (along with conservation of energy and conservation of momentum). After the discovery of parity violation in 1956, CP-symmetry was proposed to restore order. However, while the strong interaction and electromagnetic interaction seem to be invariant under the combined CP transformation operation, further experiments showed that this symmetry is slightly violated during certain types of weak decay."}
+{"text":"Only a weaker version of the symmetry could be preserved by physical phenomena, which was CPT symmetry. Besides C and P, there is a third operation, time reversal T, which corresponds to reversal of motion. Invariance under time reversal implies that whenever a motion is allowed by the laws of physics, the reversed motion is also an allowed one and occurs at the same rate forwards and backwards."}
+{"text":"The combination of CPT is thought to constitute an exact symmetry of all types of fundamental interactions. Because of the CPT symmetry, a violation of the CP-symmetry is equivalent to a violation of the T symmetry. CP violation implied nonconservation of T, provided that the long-held CPT theorem was valid. In this theorem, regarded as one of the basic principles of quantum field theory, charge conjugation, parity, and time reversal are applied together. Direct observation of the time reversal symmetry violation without any assumption of CPT theorem was done in 1998 by two groups, CPLEAR and KTeV collaborations, at CERN and Fermilab, respectively. Already in 1970 Klaus Schubert observed T violation independent of assuming CPT symmetry by using the Bell-Steinberger unitarity relation."}
+{"text":"The idea behind parity symmetry was that the equations of particle physics are invariant under mirror inversion. This led to the prediction that the mirror image of a reaction (such as a chemical reaction or radioactive decay) occurs at the same rate as the original reaction. However, in 1956 a careful critical review of the existing experimental data by theoretical physicists Tsung-Dao Lee and Chen-Ning Yang revealed that while parity conservation had been verified in decays by the strong or electromagnetic interactions, it was untested in the weak interaction. They proposed several possible direct experimental tests."}
+{"text":"The first test based on beta decay of cobalt-60 nuclei was carried out in 1956 by a group led by Chien-Shiung Wu, and demonstrated conclusively that weak interactions violate the P symmetry or, as the analogy goes, some reactions did not occur as often as their mirror image. However, parity symmetry still appears to be valid for all reactions involving electromagnetism and strong interactions."}
+{"text":"Overall, the symmetry of a quantum mechanical system can be restored if another approximate symmetry \"S\" can be found such that the combined symmetry \"PS\" remains unbroken. This rather subtle point about the structure of Hilbert space was realized shortly after the discovery of \"P\" violation, and it was proposed that charge conjugation, \"C\", which transforms a particle into its antiparticle, was the suitable symmetry to restore order."}
+{"text":"In 1956 Reinhard Oehme in a letter to Yang and shortly after, Ioffe, Okun and Rudik showed that the parity violation meant that charge conjugation invariance must also be violated in weak decays."}
+{"text":"Charge violation was confirmed in the Wu experiment and in experiments performed by Valentine Telegdi and Jerome Friedman and Garwin and Lederman who observed parity non-conservation in\u00a0pion and muon decay and found that C is also violated. Charge violation was more explicitly shown in experiments done by John Riley Holt at the University of Liverpool."}
+{"text":"Oehme then wrote up a paper with Lee and Yang in which they discussed the interplay of non-invariance under P, C and T. The same result was also independently obtained by B.L. Ioffe, Okun and A.P. Rudik. Both groups also discussed possible CP violations in neutral kaon decays."}
+{"text":"Lev Landau proposed in 1957 \"CP-symmetry\", often called just \"CP\" as the true symmetry between matter and antimatter. \"CP-symmetry\" is the product of two transformations: C for charge conjugation and P for parity. In other words, a process in which all particles are exchanged with their antiparticles was assumed to be equivalent to the mirror image of the original proces and so the combined CP symmetry would be conserved in the weak interaction."}
+{"text":"In 1962, a group of experimentalists at Dubna, on Okun's insistence, unsuccessfully searched for CP-violating kaon decay."}
+{"text":"In 1964, James Cronin, Val Fitch and coworkers provided clear evidence from kaon decay that CP-symmetry could be broken. This work won them the 1980 Nobel Prize. This discovery showed that weak interactions violate not only the charge-conjugation symmetry C between particles and antiparticles and the P or parity, but also their combination. The discovery shocked particle physics and opened the door to questions still at the core of particle physics and of cosmology today. The lack of an exact CP-symmetry, but also the fact that it is so close to a symmetry, introduced a great puzzle."}
+{"text":"The kind of CP violation discovered in 1964 was linked to the fact that neutral kaons can transform into their antiparticles (in which each quark is replaced with the other's antiquark) and vice versa, but such transformation does not occur with exactly the same probability in both directions; this is called \"indirect\" CP violation."}
+{"text":"Despite many searches, no other manifestation of CP violation was discovered until the 1990s, when the NA31 experiment at CERN suggested evidence for CP violation in the decay process of the very same neutral kaons (\"direct\" CP violation). The observation was somewhat controversial, and final proof for it came in 1999 from the KTeV experiment at Fermilab and the NA48 experiment at CERN."}
+{"text":"Starting in 2001, a new generation of experiments, including the BaBar experiment at the Stanford Linear Accelerator Center (SLAC) and the Belle Experiment at the High Energy Accelerator Research Organisation (KEK) in Japan, observed direct CP violation in a different system, namely in decays of the B mesons. A large number of CP violation processes in B meson decays have now been discovered. Before these \"B-factory\" experiments, there was a logical possibility that all CP violation was confined to kaon physics. However, this raised the question of why CP violation did \"not\" extend to the strong force, and furthermore, why this was not predicted by the unextended Standard Model, despite the model's accuracy for \"normal\" phenomena."}
+{"text":"In 2011, a hint of CP violation in decays of neutral D mesons was reported by the LHCb experiment at CERN using 0.6\u00a0fb\u22121 of Run 1 data. However, the same measurement using the full 3.0\u00a0fb\u22121 Run 1 sample was consistent with CP symmetry."}
+{"text":"In 2013 LHCb announced discovery of CP violation in strange B meson decays."}
+{"text":"In March 2019, LHCb announced discovery of CP violation in charmed formula_1 decays with a deviation from zero of 5.3 standard deviations."}
+{"text":"In 2020, the T2K Collaboration reported some indications of CP violation in leptons for the first time."}
+{"text":"In this experiment, beams of muon neutrinos () and muon antineutrinos () were alternately produced by an accelerator. By the time they got to the detector, a significantly higher proportion of electron neutrinos () were detected from the beams, than electron antineutrinos () were from the beams. The results were not yet precise enough to determine the size of the CP violation, relative to that seen in quarks. In addition, another similar experiment, NOvA sees no evidence of CP violation in neutrino oscillations and is in slight tension with T2K."}
+{"text":"\"Direct\" CP violation is allowed in the Standard Model if a complex phase appears in the CKM matrix describing quark mixing, or the PMNS matrix describing neutrino mixing. A necessary condition for the appearance of the complex phase is the presence of at least three generations of quarks. If fewer generations are present, the complex phase parameter can be absorbed into redefinitions of the quark fields. A popular rephasing invariant whose vanishing signals absence of CP violation and occurs in most CP violating amplitudes is the \"Jarlskog invariant\","}
+{"text":"The reason why such a complex phase causes CP violation is not immediately obvious, but can be seen as follows. Consider any given particles (or sets of particles) formula_3 and formula_4, and their antiparticles formula_5 and formula_6. Now consider the processes formula_7 and the corresponding antiparticle process formula_8, and denote their amplitudes formula_9 and formula_10 respectively. Before CP violation, these terms must be the \"same\" complex number. We can separate the magnitude and phase by writing formula_11. If a phase term is introduced from (e.g.) the CKM matrix, denote it formula_12. Note that formula_10 contains the conjugate matrix to formula_9, so it picks up a phase term formula_15."}
+{"text":"Physically measurable reaction rates are proportional to formula_18, thus so far nothing is different. However, consider that there are \"two different routes\": formula_19 and formula_20 or equivalently, two unrelated intermediate states: formula_21 and formula_22. Now we have:"}
+{"text":"Thus, we see that a complex phase gives rise to processes that proceed at different rates for particles and antiparticles, and CP is violated."}
+{"text":"From the theoretical end, the CKM matrix is defined as = . , where and are unitary transformation matrices which diagonalize the fermion mass matrices and , respectively."}
+{"text":"Thus, there are two necessary conditions for getting a complex CKM matrix:"}
+{"text":"There is no experimentally known violation of the CP-symmetry in quantum chromodynamics. As there is no known reason for it to be conserved in QCD specifically, this is a \"fine tuning\" problem known as the strong CP problem."}
+{"text":"QCD does not violate the CP-symmetry as easily as the electroweak theory; unlike the electroweak theory in which the gauge fields couple to chiral currents constructed from the fermionic fields, the gluons couple to vector currents. Experiments do not indicate any CP violation in the QCD sector. For example, a generic CP violation in the strongly interacting sector would create the electric dipole moment of the neutron which would be comparable to 10\u221218\u00a0e\u00b7m while the experimental upper bound is roughly one trillionth that size."}
+{"text":"This is a problem because at the end, there are natural terms in the QCD Lagrangian that are able to break the CP-symmetry."}
+{"text":"For a nonzero choice of the \u03b8 angle and the chiral phase of the quark mass \u03b8\u2032 one expects the CP-symmetry to be violated. One usually assumes that the chiral quark mass phase can be converted to a contribution to the total effective formula_27 angle, but it remains to be explained why this angle is extremely small instead of being of order one; the particular value of the \u03b8 angle that must be very close to zero (in this case) is an example of a fine-tuning problem in physics, and is typically solved by physics beyond the Standard Model."}
+{"text":"There are several proposed solutions to solve the strong CP problem. The most well-known is Peccei\u2013Quinn theory, involving new scalar particles called axions. A newer, more radical approach not requiring the axion is a theory involving two time dimensions first proposed in 1998 by Bars, Deliduman, and Andreev."}
+{"text":"The universe is made chiefly of matter, rather than consisting of equal parts of matter and antimatter as might be expected. It can be demonstrated that, to create an imbalance in matter and antimatter from an initial condition of balance, the Sakharov conditions must be satisfied, one of which is the existence of CP violation during the extreme conditions of the first seconds after the Big Bang. Explanations which do not involve CP violation are less plausible, since they rely on the assumption that the matter\u2013antimatter imbalance was present at the beginning, or on other admittedly exotic assumptions."}
+{"text":"The Big Bang should have produced equal amounts of matter and antimatter if CP-symmetry was preserved; as such, there should have been total cancellation of both\u2014protons should have cancelled with antiprotons, electrons with positrons, neutrons with antineutrons, and so on. This would have resulted in a sea of radiation in the universe with no matter. Since this is not the case, after the Big Bang, physical laws must have acted differently for matter and antimatter, i.e. violating CP-symmetry."}
+{"text":"If CP violation in the lepton sector is experimentally determined to be too small to account for matter-antimatter asymmetry, some new physics beyond the Standard Model would be required to explain additional sources of CP violation. Adding new particles and\/or interactions to the Standard Model generally introduces new sources of CP violation since CP is not a symmetry of nature."}
+{"text":"Sakharov proposed a way to restore CP-symmetry using T-symmetry, extending spacetime \"before\" the Big Bang. He described complete \"CPT reflections\" of events on each side of what he called the \"initial singularity\". Because of this, phenomena with an opposite arrow of time at \"t\" < 0 would undergo an opposite CP violation, so the CP-symmetry would be preserved as a whole. The anomalous excess of matter over antimatter after the Big Bang in the orthochronous (or positive) sector, becomes an excess of antimatter before the Big Bang (antichronous or negative sector) as both charge conjugation, parity and arrow of time are reversed due to CPT reflections of all phenomena occurring over the initial singularity:"}
+{"text":"Electric charge is the physical property of matter that causes it to experience a force when placed in an electromagnetic field. There are two types of electric charge: \"positive\" and \"negative\" (commonly carried by protons and electrons respectively). Like charges repel each other and unlike charges attract each other. An object with an absence of net charge is referred to as neutral. Early knowledge of how charged substances interact is now called classical electrodynamics, and is still accurate for problems that do not require consideration of quantum effects."}
+{"text":"Electric charges produce electric fields. A moving charge also produces a magnetic field. The interaction of electric charges with an electromagnetic field (combination of electric and magnetic fields) is the source of the electromagnetic (or Lorentz) force, which is one of the four fundamental forces in physics. The study of photon-mediated interactions among charged particles is called quantum electrodynamics."}
+{"text":"The SI derived unit of electric charge is the coulomb (C) named after French physicist Charles-Augustin de Coulomb. In electrical engineering it is also common to use the ampere-hour (Ah). In physics and chemistry it is common to use the elementary charge (\"e\" as a unit). Chemistry also uses the Faraday constant as the charge on a mole of electrons. The lowercase symbol \"q\" often denotes charge."}
+{"text":"Charge is the fundamental property of matter that exhibit electrostatic attraction or repulsion in the presence of other matter with charge. Electric charge is a characteristic property of many subatomic particles. The charges of free-standing particles are integer multiples of the elementary charge \"e\"; we say that electric charge is \"quantized\". Michael Faraday, in his electrolysis experiments, was the first to note the discrete nature of electric charge. Robert Millikan's oil drop experiment demonstrated this fact directly, and measured the elementary charge. It has been discovered that one type of particle, quarks, have fractional charges of either \u2212 or +, but it is believed they always occur in multiples of integral charge; free-standing quarks have never been observed."}
+{"text":"By convention, the charge of an electron is negative, \"\u2212e\", while that of a proton is positive, \"+e\". Charged particles whose charges have the same sign repel one another, and particles whose charges have different signs attract. Coulomb's law quantifies the electrostatic force between two particles by asserting that the force is proportional to the product of their charges, and inversely proportional to the square of the distance between them. The charge of an antiparticle equals that of the corresponding particle, but with opposite sign."}
+{"text":"The electric charge of a macroscopic object is the sum of the electric charges of the particles that make it up. This charge is often small, because matter is made of atoms, and atoms typically have equal numbers of protons and electrons, in which case their charges cancel out, yielding a net charge of zero, thus making the atom neutral."}
+{"text":"An \"ion\" is an atom (or group of atoms) that has lost one or more electrons, giving it a net positive charge (cation), or that has gained one or more electrons, giving it a net negative charge (anion). \"Monatomic ions\" are formed from single atoms, while \"polyatomic ions\" are formed from two or more atoms that have been bonded together, in each case yielding an ion with a positive or negative net charge."}
+{"text":"During the formation of macroscopic objects, constituent atoms and ions usually combine to form structures composed of neutral \"ionic compounds\" electrically bound to neutral atoms. Thus macroscopic objects tend toward being neutral overall, but macroscopic objects are rarely perfectly net neutral."}
+{"text":"Even when an object's net charge is zero, the charge can be distributed non-uniformly in the object (e.g., due to an external electromagnetic field, or bound polar molecules). In such cases, the object is said to be polarized. The charge due to polarization is known as bound charge, while the charge on an object produced by electrons gained or lost from outside the object is called \"free charge\". The motion of electrons in conductive metals in a specific direction is known as electric current."}
+{"text":"The SI derived unit of quantity of electric charge is the coulomb (symbol: C). The coulomb is defined as the quantity of charge that passes through the cross section of an electrical conductor carrying one ampere for one second. This unit was proposed in 1946 and ratified in 1948. In modern practice, the phrase \"amount of charge\" is used instead of \"quantity of charge\". The lowercase symbol \"q\" is often used to denote a quantity of electricity or charge. The quantity of electric charge can be directly measured with an electrometer, or indirectly measured with a ballistic galvanometer."}
+{"text":"The amount of charge in 1 electron (elementary charge) is defined as a fundamental constant in the SI system of units, (effective from 20 May 2019). The value for elementary charge, when expressed in the SI unit for electric charge (coulomb), is \"exactly\" ."}
+{"text":"After finding the quantized character of charge, in 1891 George Stoney proposed the unit 'electron' for this fundamental unit of electrical charge. This was before the discovery of the particle by J. J. Thomson in 1897. The unit is today referred to as , , or simply as . A measure of charge should be a multiple of the elementary charge \"e\", even if at large scales charge seems to behave as a real quantity. In some contexts it is meaningful to speak of fractions of a charge; for example in the charging of a capacitor, or in the fractional quantum Hall effect."}
+{"text":"The unit faraday is sometimes used in electrochemistry. One faraday of charge is the magnitude of the charge of one mole of electrons, i.e. 96485.33289(59) C."}
+{"text":"In systems of units other than SI such as cgs, electric charge is expressed as combination of only three fundamental quantities (length, mass, and time), and not four, as in SI, where electric charge is a combination of length, mass, time, and electric current."}
+{"text":"In late 1100s, the substance jet, a compacted form of coal, was noted to have an amber effect, and in the middle of the 1500s, Girolamo Fracastoro, discovered that diamond also showed this effect. Some efforts were made by Fracastoro and others, especially Gerolamo Cardano to develop explanations for this phenomenon."}
+{"text":"Around 1663 Otto von Guericke invented what was probably the first electrostatic generator, but he did not recognize it primarily as an electrical device and only conducted minimal electrical experiments with it. Other European pioneers were Robert Boyle, who in 1675 published the first book in English that was devoted solely to electrical phenomena. His work was largely a repetition of Gilbert's studies, but he also identified several more \"electrics\", and noted mutual attraction between two bodies."}
+{"text":"Up until about 1745, the main explanation for electrical attraction and repulsion was the idea that electrified bodies gave off an effluvium."}
+{"text":"It is now known that the Franklin model was fundamentally correct. There is only one kind of electrical charge, and only one variable is required to keep track of the amount of charge."}
+{"text":"Until 1800 it was only possible to study conduction of electric charge by using an electrostatic discharge. In 1800 Alessandro Volta was the first to show that charge could be maintained in continuous motion through a closed path."}
+{"text":"In 1833, Michael Faraday sought to remove any doubt that electricity is identical, regardless of the source by which it is produced. He discussed a variety of known forms, which he characterized as common electricity (e.g., static electricity, piezoelectricity, magnetic induction), voltaic electricity (e.g., electric current from a voltaic pile), and animal electricity (e.g., bioelectricity)."}
+{"text":"In 1838, Faraday raised a question about whether electricity was a fluid or fluids or a property of matter, like gravity. He investigated whether matter could be charged with one kind of charge independently of the other. He came to the conclusion that electric charge was a relation between two or more bodies, because he could not charge one body without having an opposite charge in another body."}
+{"text":"In 1838, Faraday also put forth a theoretical explanation of electric force, while expressing neutrality about whether it originates from one, two, or no fluids. He focused on the idea that the normal state of particles is to be nonpolarized, and that when polarized, they seek to return to their natural, nonpolarized state."}
+{"text":"In developing a field theory approach to electrodynamics (starting in the mid-1850s), James Clerk Maxwell stops considering electric charge as a special substance that accumulates in objects, and starts to understand electric charge as a consequence of the transformation of energy in the field. This pre-quantum understanding considered magnitude of electric charge to be a continuous quantity, even at the microscopic level."}
+{"text":"The role of charge in static electricity."}
+{"text":"Static electricity refers to the electric charge of an object and the related electrostatic discharge when two objects are brought together that are not at equilibrium. An electrostatic discharge creates a change in the charge of each of the two objects."}
+{"text":"When a piece of glass and a piece of resin\u2014neither of which exhibit any electrical properties\u2014are rubbed together and left with the rubbed surfaces in contact, they still exhibit no electrical properties. When separated, they attract each other."}
+{"text":"A second piece of glass rubbed with a second piece of resin, then separated and suspended near the former pieces of glass and resin causes these phenomena:"}
+{"text":"This attraction and repulsion is an \"electrical phenomenon\", and the bodies that exhibit them are said to be \"electrified\", or \"electrically charged\". Bodies may be electrified in many other ways, as well as by friction. The electrical properties of the two pieces of glass are similar to each other but opposite to those of the two pieces of resin: The glass attracts what the resin repels and repels what the resin attracts."}
+{"text":"If a body electrified in any manner whatsoever behaves as the glass does, that is, if it repels the glass and attracts the resin, the body is said to be \"vitreously\" electrified, and if it attracts the glass and repels the resin it is said to be \"resinously\" electrified. All electrified bodies are either vitreously or resinously electrified."}
+{"text":"An established convention in the scientific community defines vitreous electrification as positive, and resinous electrification as negative. The exactly opposite properties of the two kinds of electrification justify our indicating them by opposite signs, but the application of the positive sign to one rather than to the other kind must be considered as a matter of arbitrary convention\u2014just as it is a matter of convention in mathematical diagram to reckon positive distances towards the right hand."}
+{"text":"No force, either of attraction or of repulsion, can be observed between an electrified body and a body not electrified."}
+{"text":"The role of charge in electric current."}
+{"text":"Electric current is the flow of electric charge through an object, which produces no net loss or gain of electric charge. The most common charge carriers are the positively charged proton and the negatively charged electron. The movement of any of these charged particles constitutes an electric current. In many situations, it suffices to speak of the \"conventional current\" without regard to whether it is carried by positive charges moving in the direction of the conventional current or by negative charges moving in the opposite direction. This macroscopic viewpoint is an approximation that simplifies electromagnetic concepts and calculations."}
+{"text":"At the opposite extreme, if one looks at the microscopic situation, one sees there are many ways of carrying an electric current, including: a flow of electrons; a flow of electron holes that act like positive particles; and both negative and positive particles (ions or other charged particles) flowing in opposite directions in an electrolytic solution or a plasma."}
+{"text":"Beware that, in the common and important case of metallic wires, the direction of the conventional current is opposite to the drift velocity of the actual charge carriers; i.e., the electrons. This is a source of confusion for beginners."}
+{"text":"The total electric charge of an isolated system remains constant regardless of changes within the system itself. This law is inherent to all processes known to physics and can be derived in a local form from gauge invariance of the wave function. The conservation of charge results in the charge-current continuity equation. More generally, the rate of change in charge density \"\u03c1\" within a volume of integration \"V\" is equal to the area integral over the current density J through the closed surface \"S\" = \u2202\"V\", which is in turn equal to the net current \"I\":"}
+{"text":"Thus, the conservation of electric charge, as expressed by the continuity equation, gives the result:"}
+{"text":"The charge transferred between times formula_2 and formula_3 is obtained by integrating both sides:"}
+{"text":"where \"I\" is the net outward current through a closed surface and \"q\" is the electric charge contained within the volume defined by the surface."}
+{"text":"Aside from the properties described in articles about electromagnetism, charge is a relativistic invariant. This means that any particle that has charge \"q\" has the same charge regardless of how fast it is travelling. This property has been experimentally verified by showing that the charge of one helium nucleus (two protons and two neutrons bound together in a nucleus and moving around at high speeds) is the same as two deuterium nuclei (one proton and one neutron bound together, but moving much more slowly than they would if they were in a helium nucleus)."}
+{"text":"In physics, a conservation law states that a particular measurable property of an isolated physical system does not change as the system evolves over time. Exact conservation laws include conservation of energy, conservation of linear momentum, conservation of angular momentum, and conservation of electric charge. There are also many approximate conservation laws, which apply to such quantities as mass, parity, lepton number, baryon number, strangeness, hypercharge, etc. These quantities are conserved in certain classes of physics processes, but not in all."}
+{"text":"A local conservation law is usually expressed mathematically as a continuity equation, a partial differential equation which gives a relation between the amount of the quantity and the \"transport\" of that quantity. It states that the amount of the conserved quantity at a point or within a volume can only change by the amount of the quantity which flows in or out of the volume."}
+{"text":"From Noether's theorem, each conservation law is associated with a symmetry in the underlying physics."}
+{"text":"Conservation laws as fundamental laws of nature."}
+{"text":"Conservation laws are considered to be fundamental laws of nature, with broad application in physics, as well as in other fields such as chemistry, biology, geology, and engineering."}
+{"text":"Most conservation laws are exact, or absolute, in the sense that they apply to all possible processes. Some conservation laws are partial, in that they hold for some processes but not for others."}
+{"text":"One particularly important result concerning conservation laws is Noether's theorem, which states that there is a one-to-one correspondence between each one of them and a differentiable symmetry of nature. For example, the conservation of energy follows from the time-invariance of physical systems, and the conservation of angular momentum arises from the fact that physical systems behave the same regardless of how they are oriented in space."}
+{"text":"A partial listing of physical conservation equations due to symmetry that are said to be exact laws, or more precisely \"have never been proven to be violated:\""}
+{"text":"There are also approximate conservation laws. These are approximately true in particular situations, such as low speeds, short time scales, or certain interactions."}
+{"text":"In continuum mechanics, the most general form of an exact conservation law is given by a continuity equation. For example, conservation of electric charge \"q\" is"}
+{"text":"where \u2207\u22c5 is the divergence operator, \"\u03c1\" is the density of \"q\" (amount per unit volume), j is the flux of \"q\" (amount crossing a unit area in unit time), and \"t\" is time."}
+{"text":"If we assume that the motion u of the charge is a continuous function of position and time, then"}
+{"text":"In one space dimension this can be put into the form of a homogeneous first-order quasilinear hyperbolic equation:"}
+{"text":"where the dependent variable \"y\" is called the \"density\" of a \"conserved quantity\", and \"A\"(\"y\") is called the \"current Jacobian\", and the subscript notation for partial derivatives has been employed. The more general inhomogeneous case:"}
+{"text":"is not a conservation equation but the general kind of balance equation describing a dissipative system. The dependent variable \"y\" is called a \"nonconserved quantity\", and the inhomogeneous term \"s\"(\"y\",\"x\",\"t\") is the-\"source\", or dissipation. For example, balance equations of this kind are the momentum and energy Navier-Stokes equations, or the entropy balance for a general isolated system."}
+{"text":"In the one-dimensional space a conservation equation is a first-order quasilinear hyperbolic equation that can be put into the \"advection\" form:"}
+{"text":"where the dependent variable \"y\"(\"x\",\"t\") is called the density of the \"conserved\" (scalar) quantity, and \"a\"(\"y\") is called the current coefficient, usually corresponding to the partial derivative in the conserved quantity of a current density of the conserved quantity \"j\"(\"y\"):"}
+{"text":"In this case since the chain rule applies:"}
+{"text":"the conservation equation can be put into the current density form:"}
+{"text":"In a space with more than one dimension the former definition can be extended to an equation that can be put into the form:"}
+{"text":"where the \"conserved quantity\" is \"y\"(r,\"t\"), formula_11 denotes the scalar product, \"\u2207\" is the nabla operator, here indicating a gradient, and \"a\"(\"y\") is a vector of current coefficients, analogously corresponding to the divergence of a vector current density associated to the conserved quantity j(\"y\"):"}
+{"text":"This is the case for the continuity equation:"}
+{"text":"Here the conserved quantity is the mass, with density \"\u03c1\"(r,\"t\") and current density \"\u03c1\"u, identical to the momentum density, while u(r,\"t\") is the flow velocity."}
+{"text":"In the general case a conservation equation can be also a system of this kind of equations (a vector equation) in the form:"}
+{"text":"where y is called the \"conserved\" (vector) quantity, \u2207 y is its gradient, 0 is the zero vector, and A(y) is called the Jacobian of the current density. In fact as in the former scalar case, also in the vector case A(y) usually corresponding to the Jacobian of a current density matrix J(y):"}
+{"text":"and the conservation equation can be put into the form:"}
+{"text":"For example, this the case for Euler equations (fluid dynamics). In the simple incompressible case they are:"}
+{"text":"It can be shown that the conserved (vector) quantity and the current density matrix for these equations are respectively:"}
+{"text":"Conservation equations can be also expressed in integral form: the advantage of the latter is substantially that it requires less smoothness of the solution, which paves the way to weak form, extending the class of admissible solutions to include discontinuous solutions. By integrating in any space-time domain the current density form in 1-D space:"}
+{"text":"and by using Green's theorem, the integral form is:"}
+{"text":"In a similar fashion, for the scalar multidimensional space, the integral form is:"}
+{"text":"where the line integration is performed along the boundary of the domain, in an anticlockwise manner."}
+{"text":"Moreover, by defining a test function \"\u03c6\"(r,\"t\") continuously differentiable both in time and space with compact support, the weak form can be obtained pivoting on the initial condition. In 1-D space it is:"}
+{"text":"Note that in the weak form all the partial derivatives of the density and current density have been passed on to the test function, which with the former hypothesis is sufficiently smooth to admit these derivatives."}
+{"text":"In physics, charge conservation is the principle that the total electric charge in an isolated system never changes. The net quantity of electric charge, the amount of positive charge minus the amount of negative charge in the universe, is always \"conserved\". Charge conservation, considered as a physical conservation law, implies that the change in the amount of electric charge in any volume of space is exactly equal to the amount of charge flowing into the volume minus the amount of charge flowing out of the volume. In essence, charge conservation is an accounting relationship between the amount of charge in a region and the flow of charge into and out of that region, given by a continuity equation between charge density formula_1 and current density formula_2."}
+{"text":"This does not mean that individual positive and negative charges cannot be created or destroyed. Electric charge is carried by subatomic particles such as electrons and protons. Charged particles can be created and destroyed in elementary particle reactions. In particle physics, charge conservation means that in reactions that create charged particles, equal numbers of positive and negative particles are always created, keeping the net amount of charge unchanged. Similarly, when particles are destroyed, equal numbers of positive and negative charges are destroyed. This property is supported without exception by all empirical observations so far."}
+{"text":"Although conservation of charge requires that the total quantity of charge in the universe is constant, it leaves open the question of what that quantity is. Most evidence indicates that the net charge in the universe is zero; that is, there are equal quantities of positive and negative charge."}
+{"text":"Charge conservation was first proposed by British scientist William Watson in 1746 and American statesman and scientist Benjamin Franklin in 1747, although the first convincing proof was given by Michael Faraday in 1843."}
+{"text":"Mathematically, we can state the law of charge conservation as a continuity equation:"}
+{"text":"where formula_4 is the electric charge accumulation rate in a specific volume at time , formula_5 is the amount of charge flowing into the volume and formula_6 is the amount of charge flowing out of the volume; both amounts are regarded as generic functions of time."}
+{"text":"The integrated continuity equation between two time values reads:"}
+{"text":"The general solution is obtained by fixing the initial condition time formula_8, leading to the integral equation:"}
+{"text":"The condition formula_10 corresponds to the absence of charge quantity change in the control volume: the system has reached a steady state. From the above condition, the following must hold true:"}
+{"text":"therefore, formula_5 and formula_6 are equal (not necessarily constant) over time, then the overall charge inside the control volume does not change. This deduction could be derived directly from the continuity equation, since at steady state formula_14 holds, and implies formula_15."}
+{"text":"In electromagnetic field theory, vector calculus can be used to express the law in terms of charge density (in coulombs per cubic meter) and electric current density (in amperes per square meter). This is called the charge density continuity equation"}
+{"text":"The term on the left is the rate of change of the charge density at a point. The term on the right is the divergence of the current density at the same point. The equation equates these two factors, which says that the only way for the charge density at a point to change is for a current of charge to flow into or out of the point. This statement is equivalent to a conservation of four-current."}
+{"text":"The net current into a volume is"}
+{"text":"where is the boundary of oriented by outward-pointing normals, and is shorthand for , the outward pointing normal of the boundary . Here is the current density (charge per unit area per unit time) at the surface of the volume. The vector points in the direction of the current."}
+{"text":"From the Divergence theorem this can be written"}
+{"text":"Charge conservation requires that the net current into a volume must necessarily equal the net change in charge within the volume."}
+{"text":"The total charge \"q\" in volume \"V\" is the integral (sum) of the charge density in \"V\""}
+{"text":"Since this is true for every volume, we have in general"}
+{"text":"Charge conservation can also be understood as a consequence of symmetry through Noether's theorem, a central result in theoretical physics that asserts that each conservation law is associated with a symmetry of the underlying physics. The symmetry that is associated with charge conservation is the global gauge invariance of the electromagnetic field. This is related to the fact that the electric and magnetic fields are not changed by different choices of the value representing the zero point of electrostatic potential formula_24. However the full symmetry is more complicated, and also involves the vector potential formula_25. The full statement of gauge invariance is that the physics of an electromagnetic field are unchanged when the scalar and vector potential are shifted by the gradient of an arbitrary scalar field formula_26:"}
+{"text":"In quantum mechanics the scalar field is equivalent to a phase shift in the wavefunction of the charged particle:"}
+{"text":"so gauge invariance is equivalent to the well known fact that changes in the phase of a wavefunction are unobservable, and only changes in the magnitude of the wavefunction result in changes to the probability function formula_29. This is the ultimate theoretical origin of charge conservation."}
+{"text":"Gauge invariance is a very important, well established property of the electromagnetic field and has many testable consequences. The theoretical justification for charge conservation is greatly strengthened by being linked to this symmetry. For example, gauge invariance also requires that the photon be massless, so the good experimental evidence that the photon has zero mass is also strong evidence that charge is conserved."}
+{"text":"Even if gauge symmetry is exact, however, there might be apparent electric charge non-conservation if charge could leak from our normal 3-dimensional space into hidden extra dimensions."}
+{"text":"Simple arguments rule out some types of charge nonconservation. For example, the magnitude of the elementary charge on positive and negative particles must be extremely close to equal, differing by no more than a factor of 10\u221221 for the case of protons and electrons. Ordinary matter contains equal numbers of positive and negative particles, protons and electrons, in enormous quantities. If the elementary charge on the electron and proton were even slightly different, all matter would have a large electric charge and would be mutually repulsive."}
+{"text":"The best experimental tests of electric charge conservation are searches for particle decays that would be allowed if electric charge is not always conserved. No such decays have ever been seen."}
+{"text":"The best experimental test comes from searches for the energetic photon from an electron decaying into a neutrino and a single photon:"}
+{"text":"but there are theoretical arguments that such single-photon decays will never occur even if charge is not conserved."}
+{"text":"Charge disappearance tests are sensitive to decays without energetic photons, other unusual charge violating processes such as an electron spontaneously changing into a positron,"}
+{"text":"and to electric charge moving into other dimensions."}
+{"text":"The best experimental bounds on charge disappearance are:"}
+{"text":"Conservation of energy can be rigorously proven by Noether's theorem as a consequence of continuous time translation symmetry; that is, from the fact that the laws of physics do not change over time."}
+{"text":"A consequence of the law of conservation of energy is that a perpetual motion machine of the first kind cannot exist, that is to say, no system without an external energy supply can deliver an unlimited amount of energy to its surroundings. For systems which do not have time translation symmetry, it may not be possible to define \"conservation of energy\". Examples include curved spacetimes in general relativity or time crystals in condensed matter physics."}
+{"text":"In 1605, Simon Stevinus was able to solve a number of problems in statics based on the principle that perpetual motion was impossible."}
+{"text":"In 1639, Galileo published his analysis of several situations\u2014including the celebrated \"interrupted pendulum\"\u2014which can be described (in modern language) as conservatively converting potential energy to kinetic energy and back again. Essentially, he pointed out that the height a moving body rises is equal to the height from which it falls, and used this observation to infer the idea of inertia. The remarkable aspect of this observation is that the height to which a moving body ascends on a frictionless surface does not depend on the shape of the surface."}
+{"text":"The fact that kinetic energy is scalar, unlike linear momentum which is a vector, and hence easier to work with did not escape the attention of Gottfried Wilhelm Leibniz. It was Leibniz during 1676\u20131689 who first attempted a mathematical formulation of the kind of energy which is connected with \"motion\" (kinetic energy). Using Huygens' work on collision, Leibniz noticed that in many mechanical systems (of several masses, \"mi\" each with velocity \"vi\"),"}
+{"text":"was conserved so long as the masses did not interact. He called this quantity the \"vis viva\" or \"living force\" of the system. The principle represents an accurate statement of the approximate conservation of kinetic energy in situations where there is no friction. Many physicists at that time, such as Newton, held that the conservation of momentum, which holds even in systems with friction, as defined by the momentum:"}
+{"text":"was the conserved \"vis viva\". It was later shown that both quantities are conserved simultaneously, given the proper conditions such as an elastic collision."}
+{"text":"In 1687, Isaac Newton published his \"Principia\", which was organized around the concept of force and momentum. However, the researchers were quick to recognize that the principles set out in the book, while fine for point masses, were not sufficient to tackle the motions of rigid and fluid bodies. Some other principles were also required."}
+{"text":"The law of conservation of vis viva was championed by the father and son duo, Johann and Daniel Bernoulli. The former enunciated the principle of virtual work as used in statics in its full generality in 1715, while the latter based his \"Hydrodynamica\", published in 1738, on this single conservation principle. Daniel's study of loss of vis viva of flowing water led him to formulate the Bernoulli's principle, which relates the loss to be proportional to the change in hydrodynamic pressure. Daniel also formulated the notion of work and efficiency for hydraulic machines; and he gave a kinetic theory of gases, and linked the kinetic energy of gas molecules with the temperature of the gas."}
+{"text":"This focus on the vis viva by the continental physicists eventually led to the discovery of stationarity principles governing mechanics, such as the D'Alembert's principle, Lagrangian, and Hamiltonian formulations of mechanics."}
+{"text":"Engineers such as John Smeaton, Peter Ewart, , Gustave-Adolphe Hirn and Marc Seguin recognized that conservation of momentum alone was not adequate for practical calculation and made use of Leibniz's principle. The principle was also championed by some chemists such as William Hyde Wollaston. Academics such as John Playfair were quick to point out that kinetic energy is clearly not conserved. This is obvious to a modern analysis based on the second law of thermodynamics, but in the 18th and 19th centuries, the fate of the lost energy was still unknown."}
+{"text":"Gradually it came to be suspected that the heat inevitably generated by motion under friction was another form of \"vis viva\". In 1783, Antoine Lavoisier and Pierre-Simon Laplace reviewed the two competing theories of \"vis viva\" and caloric theory. Count Rumford's 1798 observations of heat generation during the boring of cannons added more weight to the view that mechanical motion could be converted into heat and (as important) that the conversion was quantitative and could be predicted (allowing for a universal conversion constant between kinetic energy and heat). \"Vis viva\" then started to be known as \"energy\", after the term was first used in that sense by Thomas Young in 1807."}
+{"text":"which can be understood as converting kinetic energy to work, was largely the result of Gaspard-Gustave Coriolis and Jean-Victor Poncelet over the period 1819\u20131839. The former called the quantity \"quantit\u00e9 de travail\" (quantity of work) and the latter, \"travail m\u00e9canique\" (mechanical work), and both championed its use in engineering calculation."}
+{"text":"In a paper \"\u00dcber die Natur der W\u00e4rme\"(German \"On the Nature of Heat\/Warmth\"), published in the \"Zeitschrift f\u00fcr Physik\" in 1837, Karl Friedrich Mohr gave one of the earliest general statements of the doctrine of the conservation of energy: \"besides the 54 known chemical elements there is in the physical world one agent only, and this is called \"Kraft\" [energy or work]. It may appear, according to circumstances, as motion, chemical affinity, cohesion, electricity, light and magnetism; and from any one of these forms it can be transformed into any of the others.\""}
+{"text":"A key stage in the development of the modern conservation principle was the demonstration of the \"mechanical equivalent of heat\". The caloric theory maintained that heat could neither be created nor destroyed, whereas conservation of energy entails the contrary principle that heat and mechanical work are interchangeable."}
+{"text":"In the middle of the eighteenth century, Mikhail Lomonosov, a Russian scientist, postulated his corpusculo-kinetic theory of heat, which rejected the idea of a caloric. Through the results of empirical studies, Lomonosov came to the conclusion that heat was not transferred through the particles of the caloric fluid."}
+{"text":"In 1798, Count Rumford (Benjamin Thompson) performed measurements of the frictional heat generated in boring cannons, and developed the idea that heat is a form of kinetic energy; his measurements refuted caloric theory, but were imprecise enough to leave room for doubt."}
+{"text":"The mechanical equivalence principle was first stated in its modern form by the German surgeon Julius Robert von Mayer in 1842. Mayer reached his conclusion on a voyage to the Dutch East Indies, where he found that his patients' blood was a deeper red because they were consuming less oxygen, and therefore less energy, to maintain their body temperature in the hotter climate. He discovered that heat and mechanical work were both forms of energy and in 1845, after improving his knowledge of physics, he published a monograph that stated a quantitative relationship between them."}
+{"text":"Meanwhile, in 1843, James Prescott Joule independently discovered the mechanical equivalent in a series of experiments. In the most famous, now called the \"Joule apparatus\", a descending weight attached to a string caused a paddle immersed in water to rotate. He showed that the gravitational potential energy lost by the weight in descending was equal to the internal energy gained by the water through friction with the paddle."}
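The bookkeeping behind Joule's apparatus can be sketched numerically. The figures below (weight, drop height, water mass) are illustrative assumptions, not Joule's actual experimental values; the point is only that a definite amount of mechanical energy corresponds to a definite temperature rise.

```python
# Hypothetical Joule-style experiment: a falling weight stirs a mass of water.
g = 9.81           # gravitational acceleration, m/s^2
m_weight = 10.0    # mass of the descending weight, kg (assumed)
h = 2.0            # distance it descends, m (assumed)
pe_lost = m_weight * g * h   # gravitational potential energy lost, joules

m_water = 0.5      # mass of the stirred water, kg (assumed)
c_water = 4186.0   # specific heat of water, J/(kg*K)
# If all the lost potential energy becomes internal energy of the water:
delta_T = pe_lost / (m_water * c_water)
print(pe_lost, delta_T)   # potential energy lost and resulting temperature rise
```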
+{"text":"Over the period 1840\u20131843, similar work was carried out by engineer Ludwig A. Colding, although it was little known outside his native Denmark."}
+{"text":"Both Joule's and Mayer's work suffered from resistance and neglect but it was Joule's that eventually drew the wider recognition."}
+{"text":"In 1844, William Robert Grove postulated a relationship between mechanics, heat, light, electricity and magnetism by treating them all as manifestations of a single \"force\" (\"energy\" in modern terms). In 1846, Grove published his theories in his book \"The Correlation of Physical Forces\". In 1847, drawing on the earlier work of Joule, Sadi Carnot and \u00c9mile Clapeyron, Hermann von Helmholtz arrived at conclusions similar to Grove's and published his theories in his book \"\u00dcber die Erhaltung der Kraft\" (\"On the Conservation of Force\", 1847). The general modern acceptance of the principle stems from this publication."}
+{"text":"In 1850, William Rankine first used the phrase \"the law of the conservation of energy\" for the principle."}
+{"text":"In 1877, Peter Guthrie Tait claimed that the principle originated with Sir Isaac Newton, based on a creative reading of propositions 40 and 41 of the \"Philosophiae Naturalis Principia Mathematica\". This is now regarded as an example of Whig history."}
+{"text":"Matter is composed of atoms and what makes up atoms. Matter has \"intrinsic\" or \"rest\" mass. In the limited range of recognized experience of the nineteenth century it was found that such rest mass is conserved. Einstein's 1905 theory of special relativity showed that rest mass corresponds to an equivalent amount of \"rest energy\". This means that \"rest mass\" can be converted to or from equivalent amounts of (non-material) forms of energy, for example kinetic energy, potential energy, and electromagnetic radiant energy. When this happens, as recognized in twentieth century experience, rest mass is not conserved, unlike the \"total\" mass or \"total\" energy. All forms of energy contribute to the total mass and total energy."}
+{"text":"For example, an electron and a positron each have rest mass. They can perish together, converting their combined rest energy into photons having electromagnetic radiant energy, but no rest mass. If this occurs within an isolated system that does not release the photons or their energy into the external surroundings, then neither the total \"mass\" nor the total \"energy\" of the system will change. The produced electromagnetic radiant energy contributes just as much to the inertia (and to any weight) of the system as did the rest mass of the electron and positron before their demise. Likewise, non-material forms of energy can perish into matter, which has rest mass."}
+{"text":"Thus, conservation of energy (\"total\", including material or \"rest\" energy), and conservation of mass (\"total\", not just \"rest\"), each still holds as an (equivalent) law. In the 18th century these had appeared as two seemingly-distinct laws."}
+{"text":"The discovery in 1911 that electrons emitted in beta decay have a continuous rather than a discrete spectrum appeared to contradict conservation of energy, under the then-current assumption that beta decay is the simple emission of an electron from a nucleus. This problem was eventually resolved in 1933 by Enrico Fermi who proposed the correct description of beta-decay as the emission of both an electron and an antineutrino, which carries away the apparently missing energy."}
+{"text":"For a closed thermodynamic system, the first law of thermodynamics may be stated as:"}
+{"text":"where formula_10 is the quantity of energy added to the system by a heating process, formula_11 is the quantity of energy lost by the system due to work done by the system on its surroundings and formula_12 is the change in the internal energy of the system."}
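The sign convention in the first law is a frequent source of error; a minimal sketch (with hypothetical numbers) makes it explicit:

```python
def internal_energy_change(q_added, w_by_system):
    """First law for a closed system: dU = Q - W,
    where Q is heat added TO the system and W is work done BY the system."""
    return q_added - w_by_system

# Hypothetical process: 500 J of heat flows in while the system does 200 J
# of work on its surroundings, so the internal energy rises by 300 J.
print(internal_energy_change(500.0, 200.0))
```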
+{"text":"Entropy is a function of the state of a system which tells of limitations of the possibility of conversion of heat into work."}
+{"text":"For a simple compressible system, the work performed by the system may be written:"}
+{"text":"where formula_18 is the pressure and formula_19 is a small change in the volume of the system, each of which are system variables. In the fictive case in which the process is idealized and infinitely slow, so as to be called \"quasi-static\", and regarded as reversible, the heat being transferred from a source with temperature infinitesimally above the system temperature, then the heat energy may be written"}
+{"text":"where formula_21 is the temperature and formula_22 is a small change in the entropy of the system. Temperature and entropy are variables of state of a system."}
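As a concrete check of the quasi-static work expression, one can numerically sum P dV along a slow isothermal path of an ideal gas and compare with the closed form W = nRT ln(V2/V1). The gas amount, temperature, and volumes below are illustrative assumptions.

```python
import math

n, R, T = 1.0, 8.314, 300.0   # amount (mol), gas constant (J/(mol*K)), temperature (K)
V1, V2 = 2.0e-3, 1.0e-3       # initial and final volume, m^3 (a compression)

# Quasi-static work done BY the gas: sum of P dV along the path, with P = nRT/V.
steps = 100_000
dV = (V2 - V1) / steps
work, V = 0.0, V1
for _ in range(steps):
    work += (n * R * T / (V + dV / 2)) * dV   # midpoint rule for each small step
    V += dV

work_exact = n * R * T * math.log(V2 / V1)    # closed form for an isotherm
print(work, work_exact)   # both negative: work is done ON the gas in compression
```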
+{"text":"If an open system (in which mass may be exchanged with the environment) has several walls such that the mass transfer is through rigid walls separate from the heat and work transfers, then the first law may be written:"}
+{"text":"where formula_24 is the added mass and formula_25 is the internal energy per unit mass of the added mass, measured in the surroundings before the process."}
+{"text":"With the discovery of special relativity by Henri Poincar\u00e9 and Albert Einstein, the energy was proposed to be one component of an energy-momentum 4-vector. Each of the four components (one of energy and three of momentum) of this vector is separately conserved across time, in any closed system, as seen from any given inertial reference frame. Also conserved is the vector length (Minkowski norm), which is the rest mass for single particles, and the invariant mass for systems of particles (where momenta and energy are separately summed before the length is calculated)."}
+{"text":"The relativistic energy of a single massive particle contains a term related to its rest mass in addition to its kinetic energy of motion. In the limit of zero kinetic energy (or equivalently in the rest frame) of a massive particle, or else in the center of momentum frame for objects or systems which retain kinetic energy, the total energy of particle or object (including internal kinetic energy in systems) is related to its rest mass or its invariant mass via the famous equation formula_26."}
+{"text":"Thus, the rule of \"conservation of energy\" over time in special relativity continues to hold, so long as the reference frame of the observer is unchanged. This applies to the total energy of systems, although different observers disagree as to the energy value. Also conserved, and invariant to all observers, is the invariant mass, which is the minimal system mass and energy that can be seen by any observer, and which is defined by the energy\u2013momentum relation."}
+{"text":"In general relativity, energy\u2013momentum conservation is not well-defined except in certain special cases. Energy-momentum is typically expressed with the aid of a stress\u2013energy\u2013momentum pseudotensor. However, since pseudotensors are not tensors, they do not transform cleanly between reference frames. If the metric under consideration is static (that is, does not change with time) or asymptotically flat (that is, at an infinite distance away spacetime looks empty), then energy conservation holds without major pitfalls. In practice, some metrics such as the Friedmann\u2013Lema\u00eetre\u2013Robertson\u2013Walker metric do not satisfy these constraints and energy conservation is not well defined. The theory of general relativity leaves open the question of whether there is a conservation of energy for the entire universe."}
+{"text":"In Newtonian mechanics, linear momentum, translational momentum, or simply momentum (pl. momenta) is the product of the mass and velocity of an object. It is a vector quantity, possessing a magnitude and a direction. If m is an object's mass and v is its velocity (also a vector quantity), then the object's momentum is p = mv."}
+{"text":"In SI units, momentum is measured in kilogram meters per second (kg\u22c5m\/s)."}
+{"text":"Newton's second law of motion states that the rate of change of a body's momentum is equal to the net force acting on it. Momentum depends on the frame of reference, but in any inertial frame it is a \"conserved\" quantity, meaning that if a closed system is not affected by external forces, its total linear momentum does not change. Momentum is also conserved in special relativity (with a modified formula) and, in a modified form, in electrodynamics, quantum mechanics, quantum field theory, and general relativity. It is an expression of one of the fundamental symmetries of space and time: translational symmetry."}
+{"text":"Advanced formulations of classical mechanics, Lagrangian and Hamiltonian mechanics, allow one to choose coordinate systems that incorporate symmetries and constraints. In these systems the conserved quantity is generalized momentum, and in general this is different from the kinetic momentum defined above. The concept of generalized momentum is carried over into quantum mechanics, where it becomes an operator on a wave function. The momentum and position operators are related by the Heisenberg uncertainty principle."}
+{"text":"In continuous systems such as electromagnetic fields, fluid dynamics and deformable bodies, a momentum density can be defined, and a continuum version of the conservation of momentum leads to equations such as the Navier\u2013Stokes equations for fluids or the Cauchy momentum equation for deformable solids or fluids."}
+{"text":"Momentum is a vector quantity: it has both magnitude and direction. Since momentum has a direction, it can be used to predict the resulting direction and speed of motion of objects after they collide. Below, the basic properties of momentum are described in one dimension. The vector equations are almost identical to the scalar equations (see multiple dimensions)."}
+{"text":"The momentum of a particle is conventionally represented by the letter p. It is the product of two quantities, the particle's mass (represented by the letter m) and its velocity (v):"}
+{"text":"The unit of momentum is the product of the units of mass and velocity. In SI units, if the mass is in kilograms and the velocity is in meters per second then the momentum is in kilogram meters per second (kg\u22c5m\/s). In cgs units, if the mass is in grams and the velocity in centimeters per second, then the momentum is in gram centimeters per second (g\u22c5cm\/s)."}
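The unit bookkeeping can be checked directly: 1 kg\u22c5m/s equals 10\u2075 g\u22c5cm/s, since 1 kg = 1000 g and 1 m/s = 100 cm/s. A small sketch:

```python
def momentum(mass, velocity):
    """Momentum as the product of mass and velocity (any consistent units)."""
    return mass * velocity

p_si = momentum(2.0, 3.0)          # 2 kg at 3 m/s -> 6 kg*m/s
p_cgs = momentum(2000.0, 300.0)    # same body in grams and cm/s -> 600000 g*cm/s
print(p_si, p_cgs, p_cgs == p_si * 1e5)
```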
+{"text":"Being a vector, momentum has magnitude and direction. For example, a 1\u00a0kg model airplane, traveling due north at 1\u00a0m\/s in straight and level flight, has a momentum of 1\u00a0kg\u22c5m\/s due north measured with reference to the ground."}
+{"text":"The momentum of a system of particles is the vector sum of their momenta. If two particles have respective masses m1 and m2, and velocities v1 and v2, the total momentum is"}
+{"text":"The momenta of more than two particles can be added more generally with the following:"}
+{"text":"A system of particles has a center of mass, a point determined by the weighted sum of their positions:"}
+{"text":"If one or more of the particles is moving, the center of mass of the system will generally be moving as well (unless the system is in pure rotation around it). If the total mass of the particles is formula_6, and the center of mass is moving at velocity v_cm, the momentum of the system is:"}
+{"text":"This is known as Euler's first law."}
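The equivalence between summing individual momenta and using the center-of-mass velocity can be verified numerically; the masses and velocities below are arbitrary one-dimensional example values.

```python
masses = [1.0, 2.0, 3.0]         # kg (arbitrary example values)
velocities = [4.0, -1.0, 2.0]    # m/s, one-dimensional for simplicity

# Total momentum as a sum over particles
p_particles = sum(m * v for m, v in zip(masses, velocities))

# The same total via the center of mass: p = M * v_cm
M = sum(masses)
v_cm = sum(m * v for m, v in zip(masses, velocities)) / M   # mass-weighted velocity
p_com = M * v_cm
print(p_particles, p_com)
```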
+{"text":"If the net force F applied to a particle is constant, and is applied for a time interval \u0394t, the momentum of the particle changes by an amount"}
+{"text":"In differential form, this is Newton's second law; the rate of change of the momentum of a particle is equal to the instantaneous force acting on it,"}
+{"text":"If the net force experienced by a particle changes as a function of time, F(t), the change in momentum (or impulse J) between times t1 and t2 is"}
+{"text":"Impulse is measured in the derived units of the newton second (1\u00a0N\u22c5s = 1\u00a0kg\u22c5m\/s) or dyne second (1 dyne\u22c5s = 1 g\u22c5cm\/s)."}
+{"text":"Under the assumption of constant mass m, it is equivalent to write"}
+{"text":"hence the net force is equal to the mass of the particle times its acceleration."}
+{"text":"\"Example\": A model airplane of mass 1\u00a0kg accelerates from rest to a velocity of 6\u00a0m\/s due north in 2\u00a0s. The net force required to produce this acceleration is 3\u00a0newtons due north. The change in momentum is 6\u00a0kg\u22c5m\/s due north. The rate of change of momentum is 3\u00a0(kg\u22c5m\/s)\/s due north which is numerically equivalent to 3\u00a0newtons."}
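The airplane example above can be reproduced in a few lines:

```python
m = 1.0             # mass of the model airplane, kg
v0, v1 = 0.0, 6.0   # initial and final speed due north, m/s
t = 2.0             # duration of the acceleration, s

delta_p = m * (v1 - v0)   # change in momentum: 6 kg*m/s due north
force = delta_p / t       # constant net force needed: 3 N due north
print(delta_p, force)
```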
+{"text":"In a closed system (one that does not exchange any matter with its surroundings and is not acted on by external forces) the total momentum remains constant. This fact, known as the \"law of conservation of momentum\", is implied by Newton's laws of motion. Suppose, for example, that two particles interact. As explained by the third law, the forces between them are equal in magnitude but opposite in direction. If the particles are numbered 1 and 2, the second law states that F1 = dp1\/dt and F2 = dp2\/dt. Therefore,"}
+{"text":"with the negative sign indicating that the forces oppose. Equivalently,"}
+{"text":"If the velocities of the particles are u1 and u2 before the interaction, and afterwards they are v1 and v2, then"}
+{"text":"This law holds no matter how complicated the force is between particles. Similarly, if there are several particles, the momentum exchanged between each pair of particles adds to zero, so the total change in momentum is zero. This conservation law applies to all interactions, including collisions and separations caused by explosive forces. It can also be generalized to situations where Newton's laws do not hold, for example in the theory of relativity and in electrodynamics."}
+{"text":"Momentum is a measurable quantity, and the measurement depends on the motion of the observer. For example, if an apple is sitting in a glass elevator that is descending, an outside observer looking into the elevator sees the apple moving, so to that observer the apple has a non-zero momentum. To someone inside the elevator, the apple does not move, so it has zero momentum. The two observers each have a frame of reference in which they observe motions, and if the elevator is descending steadily, they will both see behavior that is consistent with the same physical laws."}
+{"text":"Suppose a particle has position x in a stationary frame of reference. From the point of view of another frame of reference, moving at a uniform speed u, the position (represented by a primed coordinate) changes with time as"}
+{"text":"This is called a Galilean transformation. If the particle is moving at speed v in the first frame of reference, in the second it is moving at speed"}
+{"text":"Since u does not change, the accelerations are the same:"}
+{"text":"Thus, momentum is conserved in both reference frames. Moreover, as long as the force has the same form, in both frames, Newton's second law is unchanged. Forces such as Newtonian gravity, which depend only on the scalar distance between objects, satisfy this criterion. This independence of reference frame is called Newtonian relativity or Galilean invariance."}
+{"text":"A change of reference frame can often simplify calculations of motion. For example, in a collision of two particles, a reference frame can be chosen where one particle begins at rest. Another commonly used reference frame is the center of mass frame, one that is moving with the center of mass. In this frame,"}
+{"text":"By itself, the law of conservation of momentum is not enough to determine the motion of particles after a collision. Another property of the motion, kinetic energy, must be known. This is not necessarily conserved. If it is conserved, the collision is called an \"elastic collision\"; if not, it is an \"inelastic collision\"."}
+{"text":"An elastic collision is one in which no kinetic energy is absorbed in the collision. Perfectly elastic \"collisions\" can occur when the objects do not touch each other, as for example in atomic or nuclear scattering where electric repulsion keeps them apart. A slingshot maneuver of a satellite around a planet can also be viewed as a perfectly elastic collision. A collision between two pool balls is a good example of an \"almost\" totally elastic collision, due to their high rigidity, but when bodies come in contact there is always some dissipation."}
+{"text":"A head-on elastic collision between two bodies can be represented by velocities in one dimension, along a line passing through the bodies. If the velocities are u1 and u2 before the collision and v1 and v2 after, the equations expressing conservation of momentum and kinetic energy are:"}
+{"text":"In general, when the initial velocities are known, the final velocities are given by"}
+{"text":"If one body has much greater mass than the other, its velocity will be little affected by a collision while the other body will experience a large change."}
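The standard one-dimensional elastic-collision formulas can be written out and checked against both conservation laws:

```python
def elastic_collision_1d(m1, u1, m2, u2):
    """Final velocities after a head-on elastic collision (textbook formulas)."""
    v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
    v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
    return v1, v2

# Equal masses simply exchange velocities, as with pool balls:
print(elastic_collision_1d(1.0, 5.0, 1.0, 0.0))  # (0.0, 5.0)
```

For any masses, these formulas preserve both the total momentum m1·u1 + m2·u2 and the total kinetic energy, which is what makes the collision elastic.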
+{"text":"In an inelastic collision, some of the kinetic energy of the colliding bodies is converted into other forms of energy (such as heat or sound). Examples include traffic collisions, in which the effect of loss of kinetic energy can be seen in the damage to the vehicles; electrons losing some of their energy to atoms (as in the Franck\u2013Hertz experiment); and particle accelerators in which the kinetic energy is converted into mass in the form of new particles."}
+{"text":"In a perfectly inelastic collision (such as a bug hitting a windshield), both bodies have the same motion afterwards. A head-on inelastic collision between two bodies can be represented by velocities in one dimension, along a line passing through the bodies. If the velocities are u1 and u2 before the collision then in a perfectly inelastic collision both bodies will be travelling with velocity v after the collision. The equation expressing conservation of momentum is:"}
+{"text":"If one body is motionless to begin with (e.g. formula_23), the equation for conservation of momentum is"}
+{"text":"In a different situation, if the frame of reference is moving at the final velocity such that formula_26, the objects would be brought to rest by a perfectly inelastic collision and 100% of the kinetic energy is converted to other forms of energy. In this instance the initial velocities of the bodies would be non-zero, or the bodies would have to be massless."}
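A perfectly inelastic collision conserves momentum but not kinetic energy; the sketch below (with arbitrary example values) computes the shared final velocity and the energy dissipated:

```python
def perfectly_inelastic_1d(m1, u1, m2, u2):
    """Shared final velocity after a perfectly inelastic head-on collision."""
    return (m1 * u1 + m2 * u2) / (m1 + m2)

m1, u1, m2, u2 = 2.0, 3.0, 1.0, 0.0   # 2 kg at 3 m/s hits 1 kg at rest (example)
v = perfectly_inelastic_1d(m1, u1, m2, u2)
ke_before = 0.5 * m1 * u1 ** 2 + 0.5 * m2 * u2 ** 2
ke_after = 0.5 * (m1 + m2) * v ** 2
print(v, ke_before - ke_after)   # 2.0 m/s, 3.0 J lost to heat, sound, deformation
```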
+{"text":"One measure of the inelasticity of the collision is the coefficient of restitution e, defined as the ratio of relative velocity of separation to relative velocity of approach. Applied to a ball bouncing from a solid surface, e can be easily measured using the following formula:"}
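For a ball dropped onto a solid surface, the impact and rebound speeds are sqrt(2gh) for the drop and bounce heights, so the coefficient of restitution reduces to a ratio of square roots of heights; the heights below are illustrative.

```python
import math

def restitution_from_heights(h_drop, h_bounce):
    """e = v_separation / v_approach = sqrt(2*g*h_bounce) / sqrt(2*g*h_drop),
    so the factor 2g cancels and only the heights remain."""
    return math.sqrt(h_bounce / h_drop)

# A ball dropped from 1.0 m that rebounds to 0.64 m has e = 0.8
print(restitution_from_heights(1.0, 0.64))
```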
+{"text":"The momentum and energy equations also apply to the motions of objects that begin together and then move apart. For example, an explosion is the result of a chain reaction that transforms potential energy stored in chemical, mechanical, or nuclear form into kinetic energy, acoustic energy, and electromagnetic radiation. Rockets also make use of conservation of momentum: propellant is thrust outward, gaining momentum, and an equal and opposite momentum is imparted to the rocket."}
+{"text":"Real motion has both direction and speed and must be represented by a vector. In a coordinate system with x, y, z axes, velocity has components v_x in the x-direction, v_y in the y-direction, v_z in the z-direction. The vector is represented by a boldface symbol:"}
+{"text":"Similarly, the momentum is a vector quantity and is represented by a boldface symbol:"}
+{"text":"The equations in the previous sections work in vector form if the scalars p and v are replaced by vectors p and v. Each vector equation represents three scalar equations. For example,"}
+{"text":"The kinetic energy equations are exceptions to the above replacement rule. The equations are still one-dimensional, but each scalar represents the magnitude of the vector, for example,"}
+{"text":"Each vector equation represents three scalar equations. Often coordinates can be chosen so that only two components are needed, as in the figure. Each component can be obtained separately and the results combined to produce a vector result."}
+{"text":"A simple construction involving the center of mass frame can be used to show that if a stationary elastic sphere is struck by a moving sphere, the two will head off at right angles after the collision (as in the figure)."}
+{"text":"The concept of momentum plays a fundamental role in explaining the behavior of variable-mass objects such as a rocket ejecting fuel or a star accreting gas. In analyzing such an object, one treats the object's mass as a function that varies with time: m(t). The momentum of the object at time t is therefore p(t) = m(t)v(t). One might then try to invoke Newton's second law of motion by saying that the external force F on the object is related to its momentum by F = dp\/dt, but this is incorrect, as is the related expression found by applying the product rule to d(mv)\/dt:"}
+{"text":"This equation does not correctly describe the motion of variable-mass objects. The correct equation is"}
+{"text":"where u is the velocity of the ejected\/accreted mass \"as seen in the object's rest frame\". This is distinct from v, which is the velocity of the object itself as seen in an inertial frame."}
+{"text":"This equation is derived by keeping track of both the momentum of the object as well as the momentum of the ejected\/accreted mass (\"dm\"). When considered together, the object and the mass (\"dm\") constitute a closed system in which total momentum is conserved."}
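Integrating the variable-mass equation with no external force yields the well-known Tsiolkovsky rocket equation, delta-v = u ln(m0/m1); the exhaust speed and masses below are illustrative numbers.

```python
import math

def rocket_delta_v(u_exhaust, m_initial, m_final):
    """Tsiolkovsky rocket equation: speed gained by ejecting propellant at
    speed u_exhaust relative to the rocket, with no external force acting."""
    return u_exhaust * math.log(m_initial / m_final)

# Illustrative: 3 km/s exhaust speed, half the initial mass burned as propellant
print(rocket_delta_v(3000.0, 10.0, 5.0))   # about 2079 m/s
```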
+{"text":"Newtonian physics assumes that absolute time and space exist outside of any observer; this gives rise to Galilean invariance. It also results in a prediction that the speed of light can vary from one reference frame to another. This is contrary to observation. In the special theory of relativity, Einstein keeps the postulate that the equations of motion do not depend on the reference frame, but assumes that the speed of light is invariant. As a result, position and time in two reference frames are related by the Lorentz transformation instead of the Galilean transformation."}
+{"text":"Consider, for example, one reference frame moving relative to another at velocity v in the x direction. The Galilean transformation gives the coordinates of the moving frame as"}
+{"text":"Newton's second law, with mass fixed, is not invariant under a Lorentz transformation. However, it can be made invariant by making the \"inertial mass\" of an object a function of velocity:"}
+{"text":"Within the domain of classical mechanics, relativistic momentum closely approximates Newtonian momentum: at low velocity, \u03b3mv is approximately equal to mv, the Newtonian expression for momentum."}
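The low-velocity agreement is easy to quantify: relativistic momentum differs from the Newtonian mv only by the Lorentz factor, which is indistinguishable from 1 at everyday speeds but grows without bound near the speed of light.

```python
import math

C = 299_792_458.0   # speed of light, m/s

def relativistic_momentum(m, v):
    """p = gamma * m * v with gamma = 1/sqrt(1 - v^2/c^2)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * m * v

m = 1.0
p_slow = relativistic_momentum(m, 30.0)      # highway speed: gamma barely above 1
p_fast = relativistic_momentum(m, 0.8 * C)   # 0.8c: gamma = 5/3, a large correction
print(p_slow / (m * 30.0), p_fast / (m * 0.8 * C))
```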
+{"text":"In the theory of special relativity, physical quantities are expressed in terms of four-vectors that include time as a fourth coordinate along with the three space coordinates. These vectors are generally represented by capital letters, for example R for position. The expression for the \"four-momentum\" depends on how the coordinates are expressed. Time may be given in its normal units or multiplied by the speed of light so that all the components of the four-vector have dimensions of length. If the latter scaling is used, an interval of proper time, \u03c4, defined by"}
+{"text":"is invariant under Lorentz transformations (in this expression and in what follows the metric signature has been used; different authors use different conventions). Mathematically this invariance can be ensured in one of two ways: by treating the four-vectors as Euclidean vectors and multiplying time by the square root of -1; or by keeping time a real quantity and embedding the vectors in a Minkowski space. In a Minkowski space, the scalar product of two four-vectors U and V is defined as"}
+{"text":"In all the coordinate systems, the (contravariant) relativistic four-velocity is defined by"}
+{"text":"where m is the invariant mass. If (in Minkowski space), then"}
+{"text":"Using Einstein's mass-energy equivalence, E = mc\u00b2, this can be rewritten as"}
+{"text":"Thus, conservation of four-momentum is Lorentz-invariant and implies conservation of both mass and energy."}
+{"text":"The magnitude of the momentum four-vector is equal to mc:"}
+{"text":"and is invariant across all reference frames."}
+{"text":"The relativistic energy\u2013momentum relationship holds even for massless particles such as photons; by setting m = 0 it follows that"}
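The energy\u2013momentum relation E\u00b2 = (pc)\u00b2 + (mc\u00b2)\u00b2 covers both cases: for m = 0 it reduces to E = pc, and for p = 0 it reduces to the rest energy E = mc\u00b2. A quick numerical check (the photon momentum and particle mass are arbitrary example values):

```python
import math

C = 299_792_458.0   # speed of light, m/s

def total_energy(m, p):
    """Relativistic energy-momentum relation: E = sqrt((p*c)^2 + (m*c^2)^2)."""
    return math.sqrt((p * C) ** 2 + (m * C ** 2) ** 2)

p_photon = 1.0e-27                      # an arbitrary photon momentum, kg*m/s
print(math.isclose(total_energy(0.0, p_photon), p_photon * C))       # massless: E = pc
print(math.isclose(total_energy(9.11e-31, 0.0), 9.11e-31 * C ** 2))  # at rest: E = mc^2
```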
+{"text":"In a game of relativistic \"billiards\", if a stationary particle is hit by a moving particle in an elastic collision, the paths formed by the two afterwards will form an acute angle. This is unlike the non-relativistic case where they travel at right angles."}
+{"text":"The four-momentum of a planar wave can be related to a wave four-vector"}
+{"text":"For a particle, the relationship between temporal components, E = \u0127\u03c9, is the Planck\u2013Einstein relation, and the relation between spatial components, p = \u0127k, describes a de Broglie matter wave."}
+{"text":"In Lagrangian mechanics, a Lagrangian is defined as the difference between the kinetic energy T and the potential energy V:"}
+{"text":"If the generalized coordinates are represented as a vector q and time differentiation is represented by a dot over the variable, then the equations of motion (known as the Lagrange or Euler\u2013Lagrange equations) are a set of equations:"}
+{"text":"If a coordinate q_i is not a Cartesian coordinate, the associated generalized momentum component p_i does not necessarily have the dimensions of linear momentum. Even if q_i is a Cartesian coordinate, p_i will not be the same as the mechanical momentum if the potential depends on velocity. Some sources represent the kinematic momentum by the symbol \u03a0."}
+{"text":"In this mathematical framework, a generalized momentum is associated with the generalized coordinates. Its components are defined as"}
+{"text":"Each component p_i is said to be the \"conjugate momentum\" for the coordinate q_i."}
+{"text":"Now if a given coordinate q_i does not appear in the Lagrangian (although its time derivative might appear), then"}
+{"text":"This is the generalization of the conservation of momentum."}
+{"text":"Even if the generalized coordinates are just the ordinary spatial coordinates, the conjugate momenta are not necessarily the ordinary momentum coordinates. An example is found in the section on electromagnetism."}
+{"text":"In Hamiltonian mechanics, the Lagrangian (a function of generalized coordinates and their derivatives) is replaced by a Hamiltonian that is a function of generalized coordinates and momentum. The Hamiltonian is defined as"}
+{"text":"where the momentum is obtained by differentiating the Lagrangian as above. The Hamiltonian equations of motion are"}
+{"text":"As in Lagrangian mechanics, if a generalized coordinate does not appear in the Hamiltonian, its conjugate momentum component is conserved."}
+{"text":"Conservation of momentum is a mathematical consequence of the homogeneity (shift symmetry) of space (position in space is the canonical conjugate quantity to momentum). That is, conservation of momentum is a consequence of the fact that the laws of physics do not depend on position; this is a special case of Noether's theorem. For systems that do not have this symmetry, it may not be possible to define conservation of momentum. Examples where conservation of momentum does not apply include curved spacetimes in general relativity or time crystals in condensed matter physics."}
+{"text":"In Maxwell's equations, the forces between particles are mediated by electric and magnetic fields. The electromagnetic force (\"Lorentz force\") on a particle with charge q due to a combination of electric field E and magnetic field B is"}
+{"text":"The electromagnetic field has an electric potential \u03c6(r, t) and magnetic vector potential A(r, t)."}
+{"text":"In the non-relativistic regime, its generalized momentum is"}
+{"text":"The quantity formula_60 is sometimes called the \"potential momentum\". It is the momentum due to the interaction of the particle with the electromagnetic fields. The name is an analogy with the potential energy formula_61, which is the energy due to the interaction of the particle with the electromagnetic fields. These quantities form a four-vector, so the analogy is consistent; besides, the concept of potential momentum is important in explaining the so-called hidden momentum of the electromagnetic fields."}
+{"text":"In Newtonian mechanics, the law of conservation of momentum can be derived from the law of action and reaction, which states that every force has a reciprocating equal and opposite force. Under some circumstances, moving charged particles can exert forces on each other in non-opposite directions. Nevertheless, the combined momentum of the particles and the electromagnetic field is conserved."}
+{"text":"The Lorentz force imparts a momentum to the particle, so by Newton's second law the particle must impart a momentum to the electromagnetic fields."}
+{"text":"In a vacuum, the momentum per unit volume is"}
+{"text":"where \u03bc0 is the vacuum permeability and c is the speed of light. The momentum density is proportional to the Poynting vector S which gives the directional rate of energy transfer per unit area:"}
+{"text":"If momentum is to be conserved over a volume V, changes in the momentum of matter through the Lorentz force must be balanced by changes in the momentum of the electromagnetic field and outflow of momentum. If P is the momentum of all the particles in V, and the particles are treated as a continuum, then Newton's second law gives"}
+{"text":"and the equation for conservation of each component of the momentum is"}
+{"text":"The term on the right is an integral over the surface area of the surface representing momentum flow into and out of the volume, and n_j is a component of the surface normal. The quantity T_ij is called the Maxwell stress tensor, defined as"}
+{"text":"The above results are for the \"microscopic\" Maxwell equations, applicable to electromagnetic forces in a vacuum (or on a very small scale in media). It is more difficult to define momentum density in media because the division into electromagnetic and mechanical is arbitrary. The definition of electromagnetic momentum density is modified to"}
+{"text":"where the H-field H is related to the B-field B and the magnetization M by"}
+{"text":"The electromagnetic stress tensor depends on the properties of the media."}
+{"text":"In quantum mechanics, momentum is defined as a self-adjoint operator on the wave function. The Heisenberg uncertainty principle defines limits on how accurately the momentum and position of a single observable system can be known at once. In quantum mechanics, position and momentum are conjugate variables."}
+{"text":"For a single particle described in the position basis the momentum operator can be written as"}
+{"text":"where is the gradient operator, is the reduced Planck constant, and is the imaginary unit. This is a commonly encountered form of the momentum operator, though the momentum operator in other bases can take other forms. For example, in momentum space the momentum operator is represented as"}
+{"text":"where the operator acting on a wave function yields that wave function multiplied by the value , in an analogous fashion to the way that the position operator acting on a wave function yields that wave function multiplied by the value \"x\"."}
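As a small symbolic sketch of the position-basis momentum operator described above (the symbols below are my own choices): applying p = \u2212i\u0127 d/dx to a plane wave exp(ikx) returns \u0127k times the same wave function, i.e. the plane wave is a momentum eigenfunction.

```python
import sympy as sp

# Sketch: the momentum operator in the position basis is -i*hbar*d/dx.
# Applying it to a plane wave exp(i*k*x) should give hbar*k * exp(i*k*x).
x, k, hbar = sp.symbols('x k hbar', real=True, positive=True)
psi = sp.exp(sp.I * k * x)       # plane-wave wave function

p_psi = -sp.I * hbar * sp.diff(psi, x)

# Eigenvalue equation: p_psi / psi simplifies to the eigenvalue hbar*k.
assert sp.simplify(p_psi / psi - hbar * k) == 0
```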
+{"text":"For both massive and massless objects, relativistic momentum is related to the phase constant formula_72 by"}
+{"text":"Electromagnetic radiation (including visible light, ultraviolet light, and radio waves) is carried by photons. Even though photons (the particle aspect of light) have no mass, they still carry momentum. This leads to applications such as the solar sail. The calculation of the momentum of light within dielectric media is somewhat controversial (see Abraham\u2013Minkowski controversy)."}
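The solar-sail application can be quantified with a back-of-the-envelope calculation. Since a photon of energy E carries momentum p = E/c, an absorbed irradiance I exerts pressure I/c (and 2I/c on a perfect reflector). The irradiance value below is the standard solar constant near Earth; the rest is arithmetic.

```python
# Photon momentum p = E/c implies radiation pressure P = I/c for an
# absorbing surface and P = 2*I/c for a perfectly reflecting sail.
c = 299_792_458.0   # speed of light, m/s
I = 1361.0          # solar irradiance near Earth, W/m^2

p_absorb = I / c       # pressure on a perfectly absorbing sail, Pa
p_reflect = 2 * I / c  # pressure on a perfectly reflecting sail, Pa

# The pressure is tiny (a few micropascals), which is why solar sails
# need very large, very light surfaces.
assert 4.4e-6 < p_absorb < 4.7e-6
```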
+{"text":"In fields such as fluid dynamics and solid mechanics, it is not feasible to follow the motion of individual atoms or molecules. Instead, the materials must be approximated by a continuum in which there is a particle or fluid parcel at each point that is assigned the average of the properties of atoms in a small region nearby. In particular, it has a density \u03c1 and velocity v that depend on time t and position r. The momentum per unit volume is \u03c1v."}
+{"text":"Consider a column of water in hydrostatic equilibrium. All the forces on the water are in balance and the water is motionless. On any given drop of water, two forces are balanced. The first is gravity, which acts directly on each atom and molecule inside. The gravitational force per unit volume is \u03c1g, where g is the gravitational acceleration. The second force is the sum of all the forces exerted on its surface by the surrounding water. The force from below is greater than the force from above by just the amount needed to balance gravity. The normal force per unit area is the pressure p. The average force per unit volume inside the droplet is the gradient of the pressure, so the force balance equation is"}
+{"text":"If the forces are not balanced, the droplet accelerates. This acceleration is not simply the partial derivative because the fluid in a given volume changes with time. Instead, the material derivative is needed:"}
+{"text":"Applied to any physical quantity, the material derivative includes the rate of change at a point and the changes due to advection as fluid is carried past the point. Per unit volume, the rate of change in momentum is equal to \u03c1 Dv\/Dt. This is equal to the net force on the droplet."}
+{"text":"Forces that can change the momentum of a droplet include the gradient of the pressure and gravity, as above. In addition, surface forces can deform the droplet. In the simplest case, a shear stress, exerted by a force parallel to the surface of the droplet, is proportional to the rate of deformation or strain rate. Such a shear stress occurs if the fluid has a velocity gradient because the fluid is moving faster on one side than another. If the speed in the x direction varies with z, the tangential force in the x direction per unit area normal to the z direction is"}
+{"text":"where \u03bc is the viscosity. This is also a flux, or flow per unit area, of x-momentum through the surface."}
+{"text":"Including the effect of viscosity, the momentum balance equations for the incompressible flow of a Newtonian fluid are"}
+{"text":"These are known as the Navier\u2013Stokes equations."}
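For reference, a standard textbook form of the incompressible Navier\u2013Stokes equations (my reconstruction from the surrounding discussion, since the formula itself does not survive in the text), with velocity v, pressure p, density \u03c1, dynamic viscosity \u03bc, and gravitational body force \u03c1g:

```latex
% Incompressible Navier--Stokes equations (standard form):
\rho\left(\frac{\partial \mathbf{v}}{\partial t}
        + \mathbf{v}\cdot\nabla\mathbf{v}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{v} + \rho\,\mathbf{g},
\qquad
\nabla\cdot\mathbf{v} = 0
```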
+{"text":"The momentum balance equations can be extended to more general materials, including solids. For each surface with normal in direction and force in direction , there is a stress component . The nine components make up the Cauchy stress tensor , which includes both pressure and shear. The local conservation of momentum is expressed by the Cauchy momentum equation:"}
+{"text":"The Cauchy momentum equation is broadly applicable to deformations of solids and liquids. The relationship between the stresses and the strain rate depends on the properties of the material (see Types of viscosity)."}
+{"text":"A disturbance in a medium gives rise to oscillations, or waves, that propagate away from their source. In a fluid, small changes in pressure can often be described by the acoustic wave equation:"}
+{"text":"where is the speed of sound. In a solid, similar equations can be obtained for propagation of pressure (P-waves) and shear (S-waves)."}
+{"text":"The flux, or transport per unit area, of a momentum component by a velocity is equal to . In the linear approximation that leads to the above acoustic equation, the time average of this flux is zero. However, nonlinear effects can give rise to a nonzero average. It is possible for momentum flux to occur even though the wave itself does not have a mean momentum."}
+{"text":"The work of Philoponus, and possibly that of Ibn S\u012bn\u0101, was read and refined by the European philosophers Peter Olivi and Jean Buridan. Buridan, who in about 1350 was made rector of the University of Paris, referred to impetus being proportional to the weight times the speed. Moreover, Buridan's theory was different from his predecessor's in that he did not consider impetus to be self-dissipating, asserting that a body would be arrested by the forces of air resistance and gravity which might be opposing its impetus."}
+{"text":"Ren\u00e9 Descartes believed that the total \"quantity of motion\" () in the universe is conserved, where the quantity of motion is understood as the product of size and speed. This should not be read as a statement of the modern law of momentum, since he had no concept of mass as distinct from weight and size, and more important, he believed that it is speed rather than velocity that is conserved. So for Descartes if a moving object were to bounce off a surface, changing its direction but not its speed, there would be no change in its quantity of motion. Galileo, in his \"Two New Sciences\", used the Italian word \"impeto\" to similarly describe Descartes' quantity of motion."}
+{"text":"Leibniz, in his \"Discourse on Metaphysics\", gave an argument against Descartes' construction of the conservation of the \"quantity of motion\" using an example of dropping blocks of different sizes different distances. He points out that force is conserved but quantity of motion, construed as the product of size and speed of an object, is not conserved."}
+{"text":"Christiaan Huygens concluded quite early that Descartes's laws for the elastic collision of two bodies must be wrong, and he formulated the correct laws. An important step was his recognition of the Galilean invariance of the problems. His views then took many years to be circulated. He passed them on in person to William Brouncker and Christopher Wren in London, in 1661. What Spinoza wrote to Henry Oldenburg about them in 1666, during the Second Anglo-Dutch War, was guarded. Huygens had actually worked them out in a manuscript \"De motu corporum ex percussione\" in the period 1652\u20131656. The war ended in 1667, and Huygens announced his results to the Royal Society in 1668. He published them in the \"Journal des s\u00e7avans\" in 1669."}
+{"text":"In physics, zilch is a conserved quantity of the electromagnetic field."}
+{"text":"Daniel M. Lipkin observed that if he defined the quantities"}
+{"text":"which implies that the total \"zilch\" formula_3 is constant (formula_4 is the \"zilch current\"). Generalising the result, Lipkin found nine related conservation laws, all unrelated to the stress\u2013energy tensor. He named the quantity zilch because of the apparent lack of physical significance."}
+{"text":"Zilch can also be expressed using the dual electromagnetic tensor formula_5 as"}
+{"text":"It was later demonstrated that zilch is part of an infinite number of zilch-like conserved quantities, a general property of free fields."}
+{"text":"Zilch has occasionally been rediscovered. It has been called \"optical chirality\", since it determines the degree of chiral asymmetry in the rate of excitation of a small chiral molecule by an incident electromagnetic field. A physical interpretation of zilch was offered in 2012; zilch is to the curl or time derivative of the electromagnetic field what helicity, spin and related quantities are to the electromagnetic field itself. The conservation of zilch is not associated with duality transformations, but instead with a more subtle symmetry transformation, which has no special name."}
+{"text":"Magnetic braking is a theory explaining the loss of stellar angular momentum due to material getting captured by the stellar magnetic field and thrown out at great distance from the surface of the star. It plays an important role in the evolution of binary star systems."}
+{"text":"The currently accepted theory of the solar system's evolution states that the Solar System originates from a contracting gas cloud. As the cloud contracts, the angular momentum formula_1 must be conserved. Any small net rotation of the cloud will cause the spin to increase as the cloud collapses, forcing the material into a rotating disk. At the dense center of this disk a protostar forms, which gains heat from the gravitational energy of the collapse. As the collapse continues, the rotation rate can increase to the point where the accreting protostar can break up due to centrifugal force at the equator."}
+{"text":"Thus the rotation rate must be braked during the first 100,000 years of the star's life to avoid this scenario. One possible explanation for the braking is the interaction of the protostar's magnetic field with the stellar wind. In the case of our own Sun, when the planets' angular momenta are compared to the Sun's own, the Sun carries less than 1% of the Solar System's total angular momentum. In other words, the Sun has slowed down its spin while the planets have not."}
+{"text":"As ionized material follows the Sun's magnetic field lines, due to the effect of the field lines being frozen in the plasma, the charged particles feel a force formula_2 of the magnitude:"}
+{"text":"where formula_4 is the charge, formula_5 is the velocity and formula_6 is the magnetic field vector. This bending action forces the particles to \"corkscrew\" around the magnetic field lines while held in place by a \"magnetic pressure\" formula_7, or \"energy density\", while rotating together with the Sun as a solid body:"}
+{"text":"Since magnetic field strength decreases with the cube of the distance there will be a place where the kinetic gas pressure formula_9 of the ionized gas is great enough to break away from the field lines:"}
+{"text":"where n is the number of particles, m is the mass of the individual particle and v is the radial velocity away from the Sun, or the speed of the solar wind."}
+{"text":"Due to the high conductivity of the stellar wind, the magnetic field outside the Sun declines with radius like the mass density of the wind, i.e. it declines as an inverse square law. The magnetic field is therefore given by"}
+{"text":"where formula_12 is the magnetic field on the surface of the Sun and formula_13 is its radius. The critical distance where the material will break away from the field lines can then be calculated as the distance where the kinetic pressure and the magnetic pressure are equal, i.e."}
+{"text":"If the solar mass loss is omnidirectional, then the mass loss formula_17; plugging this into the above equation and isolating the critical radius, it follows that"}
+{"text":"This leads to a critical radius formula_23. This means that the ionized plasma will rotate together with the Sun as a solid body until it reaches a distance of nearly 15 times the radius of the Sun; from there the material will break off and stop affecting the Sun."}
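An order-of-magnitude sketch of the critical (co-rotation) radius described above, equating the wind's kinetic pressure with the magnetic pressure of a field falling off as 1/r\u00b2. All input values below are illustrative assumptions of my own, not numbers from the text.

```python
import math

# Assumed inputs (rough solar values, for illustration only):
mu0 = 4e-7 * math.pi   # vacuum permeability, H/m
R_sun = 6.957e8        # solar radius, m
B_s = 2e-4             # surface magnetic field, T (~2 gauss, assumed)
v_wind = 4e5           # solar wind speed, m/s (assumed)
Mdot = 1.3e9           # mass-loss rate, kg/s (~2e-14 solar masses/yr)

# Equating B(r)^2 / (2*mu0) with rho*v^2, where B(r) = B_s*(R_sun/r)^2
# and rho = Mdot / (4*pi*r^2*v), gives:
#   r_c = R_sun^2 * B_s * sqrt(2*pi / (mu0 * Mdot * v))
r_c = R_sun**2 * B_s * math.sqrt(2 * math.pi / (mu0 * Mdot * v_wind))

print(r_c / R_sun)  # of order 10 solar radii for these inputs
```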
+{"text":"The amount of solar mass needed to be thrown out along the field lines to make the Sun completely stop rotating can then be calculated using the specific angular momentum:"}
+{"text":"It has been suggested that the Sun lost a comparable amount of material over the course of its lifetime."}
+{"text":"In 2016 scientists at Carnegie Observatories published research suggesting that stars at a similar stage of life as the Sun were spinning faster than magnetic braking theories predicted. To calculate this they pinpointed the dark spots on the surface of stars and tracked them as they moved with the stars' spin. While this method has been successful for measuring the spin of younger stars, the \"weakened\" magnetic braking in older stars proved harder to confirm, as the latter notoriously have fewer starspots. In a study published in Nature Astronomy in 2021, researchers at the University of Birmingham used a different approach, namely asteroseismology, to confirm that older stars do appear to rotate faster than expected."}
+{"text":"In mathematics, a function defined on an inner product space is said to have rotational invariance if its value does not change when arbitrary rotations are applied to its argument."}
+{"text":"is invariant under rotations of the plane around the origin, because for a set of coordinates rotated through any angle \"\u03b8\""}
+{"text":"the function, after some cancellation of terms, takes exactly the same form"}
+{"text":"The rotation of coordinates can be expressed in matrix form using the rotation matrix,"}
+{"text":"or symbolically x\u2032 = Rx. Symbolically, the rotation invariance of a real-valued function of two real variables is"}
+{"text":"In words, the function of the rotated coordinates takes exactly the same form as it did with the initial coordinates, the only difference is the rotated coordinates replace the initial ones. For a real-valued function of three or more real variables, this expression extends easily using appropriate rotation matrices."}
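The invariance statement above can be verified numerically. The function and values below are my own example (a squared-distance function, which depends only on distance from the origin and is therefore rotationally invariant).

```python
import math

# Check that f(x, y) = x^2 + y^2 is unchanged when the coordinates are
# rotated through an arbitrary angle theta.
def f(x, y):
    return x * x + y * y

def rotate(x, y, theta):
    """Apply the 2D rotation matrix: x' = R x."""
    c, s = math.cos(theta), math.sin(theta)
    return c * x - s * y, s * x + c * y

x, y, theta = 1.3, -0.7, 0.9
xp, yp = rotate(x, y, theta)

assert math.isclose(f(xp, yp), f(x, y))  # rotational invariance
```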
+{"text":"The concept also extends to a vector-valued function f of one or more variables;"}
+{"text":"In all the above cases, the arguments (here called \"coordinates\" for concreteness) are rotated, not the function itself."}
+{"text":"For a function which maps elements from a subset \"X\" of the real line \u211d to itself, rotational invariance may also mean that the function commutes with rotations of elements in \"X\". This also applies for an operator that acts on such functions. An example is the two-dimensional Laplace operator"}
+{"text":"which acts on a function \"f\" to obtain another function \u22072\"f\". This operator is invariant under rotations."}
+{"text":"If \"g\" is the function \"g\"(\"p\") = \"f\"(\"R\"(\"p\")), where \"R\" is any rotation, then (\u22072\"g\")(\"p\") = (\u22072\"f\" )(\"R\"(\"p\")); that is, rotating a function merely rotates its Laplacian."}
+{"text":"In physics, if a system behaves the same regardless of how it is oriented in space, then its Lagrangian is rotationally invariant. According to Noether's theorem, if the action (the integral over time of its Lagrangian) of a physical system is invariant under rotation, then angular momentum is conserved."}
+{"text":"In quantum mechanics, rotational invariance is the property that after a rotation the new system still obeys Schr\u00f6dinger's equation. That is"}
+{"text":"for any rotation \"R\". Since the rotation does not depend explicitly on time, it commutes with the energy operator. Thus for rotational invariance we must have [\"R\",\u00a0\"H\"] = 0."}
+{"text":"For infinitesimal rotations (in the \"xy\"-plane for this example; it may be done likewise for any plane) by an angle \"d\u03b8\" the (infinitesimal) rotation operator is"}
+{"text":"in other words, angular momentum is conserved."}
+{"text":"In astrodynamics, the \"vis-viva\" equation, also referred to as the orbital-energy-invariance law, is one of the equations that model the motion of orbiting bodies. It is the direct result of the principle of conservation of mechanical energy, which applies when the only force acting on an object is its own weight."}
+{"text":"\"Vis viva\" (Latin for \"living force\") is a term from the history of mechanics, and it survives in this sole context. It represents the principle that the difference between the total work of the accelerating forces of a system and that of the retarding forces is equal to one half the \"vis viva\" accumulated or lost in the system while the work is being done."}
+{"text":"For any Keplerian orbit (elliptic, parabolic, hyperbolic, or radial), the \"vis-viva\" equation is as follows:"}
+{"text":"The product GM can also be expressed as the standard gravitational parameter, denoted by the Greek letter \u03bc."}
+{"text":"Derivation for elliptic orbits (0 \u2264 eccentricity < 1)."}
+{"text":"In the vis-viva equation the mass \"m\" of the orbiting body (e.g., a spacecraft) is taken to be negligible in comparison to the mass \"M\" of the central body (e.g., the Earth). The central body and orbiting body are also often referred to as the primary and a particle respectively. In the specific cases of an elliptical or circular orbit, the vis-viva equation may be readily derived from conservation of energy and momentum."}
+{"text":"Specific total energy is constant throughout the orbit. Thus, using the subscripts \"a\" and \"p\" to denote apoapsis (apogee) and periapsis (perigee), respectively,"}
+{"text":"Recalling that for an elliptical orbit (and hence also a circular orbit) the velocity and radius vectors are perpendicular at apoapsis and periapsis, conservation of angular momentum requires specific angular momentum formula_4, thus formula_5:"}
+{"text":"Isolating the kinetic energy at apoapsis and simplifying,"}
+{"text":"From the geometry of an ellipse, formula_11 where \"a\" is the length of the semimajor axis. Thus,"}
+{"text":"Substituting this into our original expression for specific orbital energy,"}
+{"text":"Thus, formula_14 and the vis-viva equation may be written"}
+{"text":"Therefore, the conserved angular momentum L = mh can be derived using formula_17 and formula_18,"}
+{"text":"where a is the semi-major axis and b is the semi-minor axis of the elliptical orbit, as follows:"}
+{"text":"Given the total mass and the scalars \"r\" and \"v\" at a single point of the orbit, one can compute \"r\" and \"v\" at any other point in the orbit."}
+{"text":"Given the total mass and the scalars \"r\" and \"v\" at a single point of the orbit, one can compute the specific orbital energy formula_23, allowing an object orbiting a larger object to be classified as having not enough energy to remain in orbit, hence being \"suborbital\" (a ballistic missile, for example), having enough energy to be \"orbital\", but without the possibility to complete a full orbit anyway because it eventually collides with the other body, or having enough energy to come from and\/or go to infinity (as a meteor, for example)."}
+{"text":"The formula for escape velocity can be obtained from the vis-viva equation by taking the limit as formula_24 approaches formula_25:"}
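The limiting procedure just described can be sketched numerically: the vis-viva relation v\u00b2 = GM(2/r \u2212 1/a) reduces to v_esc = \u221a(2GM/r) as the semi-major axis a goes to infinity. The Earth values below are standard constants; the helper function name is my own.

```python
import math

GM = 3.986004418e14   # Earth's standard gravitational parameter, m^3/s^2
r = 6.371e6           # Earth's mean radius, m

def vis_viva_speed(GM, r, a):
    """Orbital speed at radius r for an orbit with semi-major axis a."""
    return math.sqrt(GM * (2.0 / r - 1.0 / a))

v_circ = vis_viva_speed(GM, r, r)   # circular orbit: a = r
v_esc = math.sqrt(2.0 * GM / r)     # limit a -> infinity (parabolic orbit)

# Escape speed is sqrt(2) times circular speed at the same radius,
# giving the familiar ~11.2 km/s at the Earth's surface.
assert math.isclose(v_esc, math.sqrt(2.0) * v_circ)
```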
+{"text":"In quantum mechanics, a parity transformation (also called parity inversion) is the flip in the sign of \"one\" spatial coordinate. In three dimensions, it can also refer to the simultaneous flip in the sign of all three spatial coordinates (a point reflection):"}
+{"text":"It can also be thought of as a test for chirality of a physical phenomenon, in that a parity inversion transforms a phenomenon into its mirror image. All fundamental interactions of elementary particles, with the exception of the weak interaction, are symmetric under parity. The weak interaction is chiral and thus provides a means for probing chirality in physics. In interactions that are symmetric under parity, such as electromagnetism in atomic and molecular physics, parity serves as a powerful controlling principle underlying quantum transitions."}
+{"text":"A matrix representation of P (in any number of dimensions) has determinant equal to \u22121, and hence is distinct from a rotation, which has a determinant equal to 1. In a two-dimensional plane, a simultaneous flip of all coordinates in sign is \"not\" a parity transformation; it is the same as a 180\u00b0-rotation."}
+{"text":"In quantum mechanics, wave functions that are unchanged by a parity transformation are described as even functions, while those that change sign under a parity transformation are odd functions."}
+{"text":"Under rotations, classical geometrical objects can be classified into scalars, vectors, and tensors of higher rank. In classical physics, physical configurations need to transform under representations of every symmetry group."}
+{"text":"Quantum theory predicts that states in a Hilbert space do not need to transform under representations of the group of rotations, but only under projective representations. The word \"projective\" refers to the fact that if one projects out the phase of each state, where we recall that the overall phase of a quantum state is not observable, then a projective representation reduces to an ordinary representation. All representations are also projective representations, but the converse is not true, therefore the projective representation condition on quantum states is weaker than the representation condition on classical states."}
+{"text":"The projective representations of any group are isomorphic to the ordinary representations of a central extension of the group. For example, projective representations of the 3-dimensional rotation group, which is the special orthogonal group SO(3), are ordinary representations of the special unitary group SU(2) (see Representation theory of SU(2)). Projective representations of the rotation group that are not representations are called spinors and so quantum states may transform not only as tensors but also as spinors."}
+{"text":"If one adds to this a classification by parity, these can be extended, for example, into notions of"}
+{"text":"which also have negative determinant and form a valid parity transformation. Then, combining them with rotations (or successively performing \"x\"-, \"y\"-, and \"z\"-reflections) one can recover the particular parity transformation defined earlier. The first parity transformation given does not work in an even number of dimensions, though, because it results in a positive determinant. In even dimensions only the latter example of a parity transformation (or any reflection of an odd number of coordinates) can be used."}
+{"text":"Parity forms the abelian group formula_3 due to the relation formula_4. All abelian groups have only one-dimensional irreducible representations. For formula_3, there are two irreducible representations: one is even under parity, formula_6, the other is odd, formula_7. These are useful in quantum mechanics. However, as is elaborated below, in quantum mechanics states need not transform under actual representations of parity but only under projective representations, and so in principle a parity transformation may rotate a state by any phase."}
+{"text":"Newton's equation of motion formula_8 (if the mass is constant) equates two vectors, and hence is invariant under parity. The law of gravity also involves only vectors and is also, therefore, invariant under parity."}
+{"text":"However, angular momentum formula_9 is an axial vector,"}
+{"text":"In classical electrodynamics, the charge density formula_11 is a scalar, the electric field, formula_12, and current formula_13 are vectors, but the magnetic field, formula_14 is an axial vector. However, Maxwell's equations are invariant under parity because the curl of an axial vector is a vector."}
+{"text":"Effect of spatial inversion on some variables of classical physics."}
+{"text":"Classical variables, predominantly scalar quantities, which do not change upon spatial inversion include:"}
+{"text":"Classical variables, predominantly vector quantities, which have their sign flipped by spatial inversion include:"}
+{"text":"In quantum mechanics, spacetime transformations act on quantum states. The parity transformation, formula_40, is a unitary operator, in general acting on a state formula_41 as follows: formula_42."}
+{"text":"For electronic wavefunctions, even states are usually indicated by a subscript g for \"gerade\" (German: even) and odd states by a subscript u for \"ungerade\" (German: odd). For example, the lowest energy level of the hydrogen molecule ion (H2+) is labelled formula_56 and the next-closest (higher) energy level is labelled formula_57."}
+{"text":"The wave functions of a particle moving into an external potential, which is centrosymmetric (potential energy invariant with respect to a space inversion, symmetric to the origin), either remain invariable or change signs: these two possible states are called the even state or odd state of the wave functions."}
+{"text":"The law of conservation of parity of particles (not true for the beta decay of nuclei) states that, if an isolated ensemble of particles has a definite parity, then the parity remains invariable in the process of ensemble evolution."}
+{"text":"The parity of the states of a particle moving in a spherically symmetric external field is determined by the angular momentum, and the particle state is defined by three quantum numbers: total energy, angular momentum and the projection of angular momentum."}
+{"text":"When parity generates the Abelian group \u21242, one can always take linear combinations of quantum states such that they are either even or odd under parity (see the figure). Thus the parity of such states is \u00b11. The parity of a multiparticle state is the product of the parities of each state; in other words parity is a multiplicative quantum number."}
+{"text":"In quantum mechanics, Hamiltonians are invariant (symmetric) under a parity transformation if formula_58 commutes with the Hamiltonian. In non-relativistic quantum mechanics, this happens for any scalar potential, i.e., formula_59, hence the potential is spherically symmetric. The following facts can be easily proven:"}
+{"text":"Some of the non-degenerate eigenfunctions of formula_70 are unaffected (invariant) by parity formula_58 and the others are merely reversed in sign when the Hamiltonian operator and the parity operator commute:"}
+{"text":"where formula_77 is a constant, the eigenvalue of formula_58,"}
+{"text":"The overall parity of a many-particle system is the product of the parities of the one-particle states. It is \u22121 if an odd number of particles are in odd-parity states, and +1 otherwise. Different notations are in use to denote the parity of nuclei, atoms, and molecules."}
+{"text":"Atomic orbitals have parity (\u22121)\u2113, where the exponent \u2113 is the azimuthal quantum number. The parity is odd for orbitals p, f, \u2026 with \u2113 = 1, 3, \u2026, and an atomic state has odd parity if an odd number of electrons occupy these orbitals. For example, the ground state of the nitrogen atom has the electron configuration 1s22s22p3, and is identified by the term symbol 4So, where the superscript o denotes odd parity. However, the third excited term at about 83,300\u00a0cm\u22121 above the ground state, with electron configuration 1s22s22p23s, has even parity since there are only two 2p electrons, and its term symbol is 4P (without an o superscript)."}
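The (\u22121)^\u2113 rule above is easy to mechanize. The helper below is my own illustration (names and data layout are assumptions): it sums \u2113 over all electrons in a configuration and returns the overall parity.

```python
# Parity of an atomic configuration is (-1) raised to the sum of the
# azimuthal quantum numbers l of all electrons; s,p,d,f have l = 0,1,2,3.
L_OF = {'s': 0, 'p': 1, 'd': 2, 'f': 3}

def parity(config):
    """config: list of (subshell_letter, electron_count) pairs."""
    total_l = sum(L_OF[sub] * n for sub, n in config)
    return (-1) ** total_l

# Nitrogen ground state 1s2 2s2 2p3: three p electrons -> odd parity.
assert parity([('s', 2), ('s', 2), ('p', 3)]) == -1
# Excited configuration 1s2 2s2 2p2 3s1: two p electrons -> even parity.
assert parity([('s', 2), ('s', 2), ('p', 2), ('s', 1)]) == 1
```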
+{"text":"The complete (rotational-vibrational-electronic-nuclear spin) electromagnetic Hamiltonian of any molecule commutes with (or is invariant to) the parity operation P (or E*, in the notation introduced by Longuet-Higgins) and its eigenvalues can be given the parity symmetry label + or - as they are even or odd, respectively. The parity operation involves the inversion of electronic and nuclear spatial coordinates at the molecular center of mass."}
+{"text":"does not commute with the point group inversion operation i because of the effect of the nuclear hyperfine Hamiltonian. The nuclear hyperfine Hamiltonian can mix the rotational levels of g and u vibronic states (called ortho-para mixing) and give rise to ortho-para transitions."}
+{"text":"If we can show that the vacuum state is invariant under parity, formula_80, the Hamiltonian is parity invariant formula_81 and the quantization conditions remain unchanged under parity, then it follows that every state has good parity, and this parity is conserved in any reaction."}
+{"text":"To show that quantum electrodynamics is invariant under parity, we have to prove that the action is invariant and the quantization is also invariant. For simplicity we will assume that canonical quantization is used; the vacuum state is then invariant under parity by construction. The invariance of the action follows from the classical invariance of Maxwell's equations. The invariance of the canonical quantization procedure can be worked out, and turns out to depend on the transformation of the annihilation operator:"}
+{"text":"where p denotes the momentum of a photon and \u00b1 refers to its polarization state. This is equivalent to the statement that the photon has odd intrinsic parity. Similarly all vector bosons can be shown to have odd intrinsic parity, and all axial-vectors to have even intrinsic parity."}
+{"text":"A straightforward extension of these arguments to scalar field theories shows that scalars have even parity, since"}
+{"text":"This is true even for a complex scalar field. (\"Details of spinors are dealt with in the article on the \"Dirac equation\", where it is shown that fermions and antifermions have opposite intrinsic parity.\")"}
+{"text":"With fermions, there is a slight complication because there is more than one pin group."}
+{"text":"In the Standard Model of fundamental interactions there are precisely three global internal U(1) symmetry groups available, with charges equal to the baryon number \"B\", the lepton number \"L\" and the electric charge \"Q\". The product of the parity operator with any combination of these rotations is another parity operator. It is conventional to choose one specific combination of these rotations to define a standard parity operator, and other parity operators are related to the standard one by internal rotations. One way to fix a standard parity operator is to assign the parities of three particles with linearly independent charges \"B\", \"L\" and \"Q\". In general, one assigns the parity of the most common massive particles, the proton, the neutron and the electron, to be +1."}
+{"text":"Steven Weinberg has shown that if , where \"F\" is the fermion number operator, then, since the fermion number is the sum of the lepton number plus the baryon number, , for all particles in the Standard Model and since lepton number and baryon number are charges \"Q\" of continuous symmetries \"e\"\"iQ\", it is possible to redefine the parity operator so that . However, if there exist Majorana neutrinos, which experimentalists today believe is possible, their fermion number is equal to one because they are neutrinos while their baryon and lepton numbers are zero because they are Majorana, and so (\u22121)\"F\" would not be embedded in a continuous symmetry group. Thus Majorana neutrinos would have parity \u00b1\"i\"."}
+{"text":"In 1954, a paper by William Chinowsky and Jack Steinberger demonstrated that the pion has negative parity. They studied the decay of an \"atom\" made from a deuteron () and a negatively charged pion () in a state with zero orbital angular momentum formula_82 into two neutrons (formula_83)."}
+{"text":"Although parity is conserved in electromagnetism, strong interactions and gravity, it is violated in weak interactions. The Standard Model incorporates parity violation by expressing the weak interaction as a chiral gauge interaction. Only the left-handed components of particles and right-handed components of antiparticles participate in charged weak interactions in the Standard Model. This implies that parity is not a symmetry of our universe, unless a hidden mirror sector exists in which parity is violated in the opposite way."}
+{"text":"By the mid-20th century, it had been suggested by several scientists that parity might not be conserved (in different contexts), but without solid evidence these suggestions were not considered important. Then, in 1956, a careful review and analysis by theoretical physicists Tsung-Dao Lee and Chen-Ning Yang went further, showing that while parity conservation had been verified in decays by the strong or electromagnetic interactions, it was untested in the weak interaction. They proposed several possible direct experimental tests. They were mostly ignored, but Lee was able to convince his Columbia colleague Chien-Shiung Wu to try it. She needed special cryogenic facilities and expertise, so the experiment was done at the National Bureau of Standards."}
+{"text":"After the fact, it was noted that an obscure 1928 experiment, done by R. T. Cox, G. C. McIlwraith, and B. Kurrelmeyer, had in effect reported parity violation in weak decays, but since the appropriate concepts had not yet been developed, those results had no impact. The discovery of parity violation immediately explained the outstanding \u03c4\u2013\u03b8 puzzle in the physics of kaons."}
+{"text":"In 2010, it was reported that physicists working with the Relativistic Heavy Ion Collider (RHIC) had created a short-lived parity symmetry-breaking bubble in quark-gluon plasmas. An experiment conducted by several physicists, including Yale's Jack Sandweiss, as part of the STAR collaboration, suggested that parity may also be violated in the strong interaction. It is predicted that this local parity violation, which would be analogous to the effect induced by fluctuation of the axion field, manifests itself through the chiral magnetic effect."}
+{"text":"To every particle one can assign an intrinsic parity as long as nature preserves parity. Although weak interactions do not, one can still assign a parity to any hadron by examining the strong interaction reaction that produces it, or through decays not involving the weak interaction, such as rho meson decay to pions."}
+{"text":"In physics, angular momentum (rarely, moment of momentum or rotational momentum) is the rotational equivalent of linear momentum. It is an important quantity in physics because it is a conserved quantity\u2014the total angular momentum of a closed system remains constant."}
+{"text":"In three dimensions, the angular momentum for a point particle is a pseudovector , the cross product of the particle's position vector r (relative to some origin) and its momentum vector; the latter is in Newtonian mechanics. This definition can be applied to each point in continua like solids or fluids, or physical fields. Unlike momentum, angular momentum does depend on where the origin is chosen, since the particle's position is measured from it."}
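The cross-product definition can be sketched directly; the mass, position and velocity values below are illustrative:

```python
# Orbital angular momentum of a point particle: L = r x p, where
# p = m * v is the Newtonian linear momentum and r is the position
# relative to the chosen origin.

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

m = 2.0                          # mass in kg (illustrative)
r = (3.0, 0.0, 0.0)              # position relative to the origin, m
v = (0.0, 4.0, 0.0)              # velocity, m/s
p = tuple(m * vi for vi in v)    # linear momentum

L = cross(r, p)
print(L)  # (0.0, 0.0, 24.0): perpendicular to both r and p
```

Shifting the origin changes r and hence L, illustrating that angular momentum, unlike linear momentum, depends on the choice of origin.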
+{"text":"Angular momentum is an extensive quantity; i.e. the total angular momentum of any composite system is the sum of the angular momenta of its constituent parts. For a continuous rigid body, the total angular momentum is the volume integral of angular momentum density (i.e. angular momentum per unit volume in the limit as volume shrinks to zero) over the entire body."}
+{"text":"In quantum mechanics, angular momentum (like other quantities) is expressed as an operator, and its one-dimensional projections have quantized eigenvalues. Angular momentum is subject to the Heisenberg uncertainty principle, implying that at any time, only one projection (also called \"component\") can be measured with definite precision; the other two then remain uncertain. Because of this, the notion of a quantum particle literally \"spinning\" about an axis does not exist. Quantum particles \"do\" possess a type of non-orbital angular momentum called \"spin\", but this angular momentum does not correspond to actual physical spinning motion."}
+{"text":"Angular momentum is a vector quantity (more precisely, a pseudovector) that represents the product of a body's rotational inertia and rotational velocity (in radians\/sec) about a particular axis. However, if the particle's trajectory lies in a single plane, it is sufficient to discard the vector nature of angular momentum, and treat it as a scalar (more precisely, a pseudoscalar). Angular momentum can be considered a rotational analog of linear momentum. Thus, where linear momentum is proportional to mass and linear speed"}
+{"text":"angular momentum is proportional to moment of inertia and angular speed measured in radians per second."}
+{"text":"Unlike mass, which depends only on amount of matter, moment of inertia also depends on the position of the axis of rotation and the shape of the matter. Unlike linear velocity, which does not depend upon the choice of origin, orbital angular velocity is always measured with respect to a fixed origin. Therefore, strictly speaking, \"L\" should be referred to as the angular momentum \"relative to that center\"."}
+{"text":"Because formula_3 for a single particle and formula_4 for circular motion, angular momentum can be expanded, formula_5 and reduced to,"}
+{"text":"the product of the radius of rotation and the linear momentum of the particle formula_7, where formula_8 in this case is the equivalent linear (tangential) speed at the radius (formula_9)."}
+{"text":"This simple analysis can also apply to non-circular motion if only the component of the motion which is perpendicular to the radius vector is considered. In that case,"}
+{"text":"where formula_11 is the perpendicular component of the motion. Expanding, formula_12 rearranging, formula_13 and reducing, angular momentum can also be expressed,"}
+{"text":"where formula_15 is the length of the \"moment arm\", a line dropped perpendicularly from the origin onto the path of the particle. It is this definition to which the term \"moment of momentum\" refers."}
+{"text":"Another approach is to define angular momentum as the conjugate momentum (also called canonical momentum) of the angular coordinate formula_16 expressed in the Lagrangian of the mechanical system. Consider a mechanical system with a mass formula_17 constrained to move in a circle of radius formula_18 in the absence of any external force field. The kinetic energy of the system is"}
+{"text":"The \"generalized momentum\" \"canonically conjugate to\" the coordinate formula_16 is defined by"}
+{"text":"To completely define orbital angular momentum in three dimensions, it is required to know the rate at which the position vector sweeps out angle, the direction perpendicular to the instantaneous plane of angular displacement, and the mass involved, as well as how this mass is distributed in space. By retaining this vector nature of angular momentum, the general nature of the equations is also retained, and can describe any sort of three-dimensional motion about the center of rotation \u2013 circular, linear, or otherwise. In vector notation, the orbital angular momentum of a point particle in motion about the origin can be expressed as:"}
+{"text":"This can be expanded, reduced, and by the rules of vector algebra, rearranged:"}
+{"text":"which is the cross product of the position vector formula_27 and the linear momentum formula_33 of the particle. By the definition of the cross product, the formula_34 vector is perpendicular to both formula_27 and formula_36. It is directed perpendicular to the plane of angular displacement, as indicated by the right-hand rule \u2013 so that the angular velocity is seen as counter-clockwise from the head of the vector. Conversely, the formula_34 vector defines the plane in which formula_27 and formula_36 lie."}
+{"text":"By defining a unit vector formula_40 perpendicular to the plane of angular displacement, a scalar angular speed formula_41 results, where"}
+{"text":"The two-dimensional scalar equations of the previous section can thus be given direction:"}
+{"text":"and formula_46 for circular motion, where all of the motion is perpendicular to the radius formula_47."}
+{"text":"In the spherical coordinate system the angular momentum vector is expressed as"}
+{"text":"Angular momentum can be described as the rotational analog of linear momentum. Like linear momentum it involves elements of mass and displacement. Unlike linear momentum it also involves elements of position and shape."}
+{"text":"Many problems in physics involve matter in motion about some certain point in space, be it in actual rotation about it, or simply moving past it, where it is desired to know what effect the moving matter has on the point\u2014can it exert energy upon it or perform work about it? Energy, the ability to do work, can be stored in matter by setting it in motion\u2014a combination of its inertia and its displacement. Inertia is measured by its mass, and displacement by its velocity. Their product,"}
+{"text":"is the matter's momentum. Referring this momentum to a central point introduces a complication: the momentum is not applied to the point directly. For instance, a particle of matter at the outer edge of a wheel is, in effect, at the end of a lever of the same length as the wheel's radius, its momentum turning the lever about the center point. This imaginary lever is known as the \"moment arm\". It has the effect of multiplying the momentum's effort in proportion to its length, an effect known as a \"moment\". Hence, the particle's momentum referred to a particular point,"}
+{"text":"is the \"angular momentum\", sometimes called, as here, the \"moment of momentum\" of the particle versus that particular center point. The equation formula_51 combines a moment (a mass formula_17 turning moment arm formula_47) with a linear (straight-line equivalent) speed formula_8. Linear speed referred to the central point is simply the product of the distance formula_47 and the angular speed formula_41 versus the point: formula_57 another moment. Hence, angular momentum contains a double moment: formula_58 Simplifying slightly, formula_59 the quantity formula_60 is the particle's moment of inertia, sometimes called the second moment of mass. It is a measure of rotational inertia."}
+{"text":"Because moment of inertia is a crucial part of the spin angular momentum, the latter necessarily includes all of the complications of the former, which is calculated by multiplying elementary bits of the mass by the squares of their distances from the center of rotation. Therefore, the total moment of inertia, and the angular momentum, is a complex function of the configuration of the matter about the center of rotation and the orientation of the rotation for the various bits."}
+{"text":"For a rigid body, for instance a wheel or an asteroid, the orientation of rotation is simply the position of the rotation axis versus the matter of the body. It may or may not pass through the center of mass, or it may lie completely outside of the body. For the same body, angular momentum may take a different value for every possible axis about which rotation may take place. It reaches a minimum when the axis passes through the center of mass."}
+{"text":"For a collection of objects revolving about a center, for instance all of the bodies of the Solar System, the orientations may be somewhat organized, as is the Solar System, with most of the bodies' axes lying close to the system's axis. Their orientations may also be completely random."}
+{"text":"In brief, the more mass and the farther it is from the center of rotation (the longer the moment arm), the greater the moment of inertia, and therefore the greater the angular momentum for a given angular velocity. In many cases the moment of inertia, and hence the angular momentum, can be simplified by,"}
+{"text":"Similarly, for a point mass formula_17 the moment of inertia is defined as,"}
+{"text":"and for any collection of particles formula_67 as the sum,"}
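The point-mass formula and its sum over a collection of particles can be sketched as follows; the two-mass rod is an illustrative example, not taken from the text:

```python
# Moment of inertia about the z-axis for point masses:
# I = sum_i m_i * r_i**2, with r_i the perpendicular distance of
# particle i from the axis.

def moment_of_inertia(particles):
    """particles: iterable of (mass, (x, y, z)); axis of rotation = z."""
    return sum(m * (x ** 2 + y ** 2) for m, (x, y, z) in particles)

# Two 1 kg masses on a light rod of length 2 m, spun about its center:
rod = [(1.0, (1.0, 0.0, 0.0)), (1.0, (-1.0, 0.0, 0.0))]
print(moment_of_inertia(rod))  # 2.0 kg*m^2
```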
+{"text":"The plane perpendicular to the axis of angular momentum and passing through the center of mass is sometimes called the \"invariable plane\", because the direction of the axis remains fixed if only the interactions of the bodies within the system, free from outside influences, are considered. One such plane is the invariable plane of the Solar System."}
+{"text":"Newton's second law of motion can be expressed mathematically,"}
+{"text":"or force = mass \u00d7 acceleration. The rotational equivalent for point particles may be derived as follows:"}
+{"text":"which means that the torque (i.e. the time derivative of the angular momentum) is"}
+{"text":"Because the moment of inertia is formula_72, it follows that formula_73, and formula_74, which reduces to"}
+{"text":"This is the rotational analog of Newton's Second Law. Note that the torque is not necessarily proportional or parallel to the angular acceleration (as one might expect). The reason for this is that the moment of inertia of a particle can change with time, something that cannot occur for ordinary mass."}
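For the simple case of a fixed moment of inertia and constant angular acceleration, the relation torque = dL/dt can be checked numerically; the values below are illustrative:

```python
# Rotational analog of Newton's second law: torque = dL/dt.
# For constant moment of inertia I and angular acceleration alpha,
# L(t) = I * alpha * t, so the torque should equal I * alpha.

I = 0.5        # moment of inertia, kg*m^2 (illustrative)
alpha = 3.0    # angular acceleration, rad/s^2 (illustrative)

def L(t):
    """Angular momentum at time t, starting from rest."""
    return I * (alpha * t)

dt = 1e-6
t = 2.0
tau = (L(t + dt) - L(t - dt)) / (2 * dt)  # central finite difference
print(tau)  # ~1.5, i.e. I * alpha
```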
+{"text":"A rotational analog of Newton's third law of motion might be written, \"In a closed system, no torque can be exerted on any matter without the exertion on some other matter of an equal and opposite torque.\" Hence, \"angular momentum can be exchanged between objects in a closed system, but total angular momentum before and after an exchange remains constant (is conserved).\""}
+{"text":"Seen another way, a rotational analogue of Newton's first law of motion might be written, \"A rigid body continues in a state of uniform rotation unless acted upon by an external influence.\" Thus \"with no external influence to act upon it, the original angular momentum of the system remains constant\"."}
+{"text":"For a planet, angular momentum is distributed between the spin of the planet and its revolution in its orbit, and these are often exchanged by various mechanisms. The conservation of angular momentum in the Earth\u2013Moon system results in the transfer of angular momentum from Earth to Moon, due to tidal torque the Moon exerts on the Earth. This in turn results in the slowing down of the rotation rate of Earth, at about 65.7 nanoseconds per day, and in gradual increase of the radius of Moon's orbit, at about 3.82\u00a0centimeters per year."}
+{"text":"The conservation of angular momentum explains the angular acceleration of an ice skater as she brings her arms and legs close to the vertical axis of rotation. By bringing part of the mass of her body closer to the axis, she decreases her body's moment of inertia. Because angular momentum is the product of moment of inertia and angular velocity, if the angular momentum remains constant (is conserved), then the angular velocity (rotational speed) of the skater must increase."}
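The skater argument can be sketched numerically; the moments of inertia below are illustrative, not measured values:

```python
# Conservation of angular momentum: I1 * w1 = I2 * w2.
# Pulling mass toward the rotation axis lowers the moment of
# inertia, so the angular velocity must rise to keep L constant.

I1, w1 = 4.0, 2.0   # arms out: moment of inertia (kg*m^2), rad/s
I2 = 1.0            # arms pulled in

L = I1 * w1         # conserved angular momentum
w2 = L / I2
print(w2)           # 8.0 rad/s: a quarter of the inertia, four times the spin
```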
+{"text":"The same phenomenon results in the extremely fast spin of compact stars (like white dwarfs, neutron stars and black holes) when they are formed out of much larger, more slowly rotating stars. Decreasing the size of an object \"n\" times increases its angular velocity by a factor of \"n\"2."}
+{"text":"Conservation is not always a full explanation for the dynamics of a system but is a key constraint. For example, a spinning top is subject to gravitational torque making it lean over and change the angular momentum about the nutation axis, but neglecting friction at the point of spinning contact, it has a conserved angular momentum about its spinning axis, and another about its precession axis. Also, in any planetary system, the planets, star(s), comets, and asteroids can all move in numerous complicated ways, but only so that the angular momentum of the system is conserved."}
+{"text":"Noether's theorem states that every conservation law is associated with a symmetry (invariant) of the underlying physics. The symmetry associated with conservation of angular momentum is rotational invariance. The fact that the physics of a system is unchanged if it is rotated by any angle about an axis implies that angular momentum is conserved."}
+{"text":"Relation to Newton's second law of motion."}
+{"text":"As an example, consider decreasing of the moment of inertia, e.g. when a figure skater is pulling in her\/his hands, speeding up the circular motion. In terms of angular momentum conservation, we have, for angular momentum \"L\", moment of inertia \"I\" and angular velocity \"\u03c9\":"}
+{"text":"Using this, we see that the change requires an energy of:"}
+{"text":"so that a decrease in the moment of inertia requires investing energy."}
+{"text":"This can be compared to the work done as calculated using Newton's laws. Each point in the rotating body is accelerating, at each point of time, with radial acceleration of:"}
+{"text":"Let us observe a point of mass \"m\", whose position vector relative to the center of motion is parallel to the z-axis at a given point of time, and is at a distance \"z\". The centripetal force on this point, keeping the circular motion, is:"}
+{"text":"Thus the work required for moving this point to a distance \"dz\" farther from the center of motion is:"}
+{"text":"For a non-pointlike body one must integrate over this, with \"m\" replaced by the mass density per unit \"z\". This gives:"}
+{"text":"which is exactly the energy required for keeping the angular momentum conserved."}
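With L conserved, rotational kinetic energy is E = L^2/(2I), so the energy the skater must invest when shrinking the moment of inertia follows directly; a minimal sketch with illustrative values:

```python
# With angular momentum L fixed, rotational kinetic energy is
# E = L**2 / (2 * I).  Shrinking the moment of inertia from I1 to I2
# therefore costs dE = (L**2 / 2) * (1/I2 - 1/I1) of work.

I1, I2 = 4.0, 1.0   # illustrative moments of inertia, kg*m^2
L = 8.0             # conserved angular momentum, kg*m^2/s

E1 = L ** 2 / (2 * I1)
E2 = L ** 2 / (2 * I2)
print(E2 - E1)      # 24.0 J invested in pulling the mass inwards
```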
+{"text":"Note that the above calculation can also be performed per unit mass, using kinematics only. Thus the phenomenon of a figure skater gaining tangential velocity while pulling her\/his hands in can be understood in layman's language as follows: the skater's palms are not moving in a straight line, so they are constantly accelerating inwards, but they do not gain additional speed because the acceleration occurs only while their inward motion is zero. However, this changes when the palms are pulled closer to the body: the acceleration due to rotation now increases the speed; and because of the rotation, the increase in speed does not translate into a significant inward speed, but into an increase of the rotation speed."}
+{"text":"In Lagrangian mechanics, angular momentum for rotation around a given axis, is the conjugate momentum of the generalized coordinate of the angle around the same axis. For example, formula_85, the angular momentum around the z axis, is:"}
+{"text":"where formula_87 is the Lagrangian and formula_88 is the angle around the z axis."}
+{"text":"Note that formula_89, the time derivative of the angle, is the angular velocity formula_90. Ordinarily, the Lagrangian depends on the angular velocity through the kinetic energy: The latter can be written by separating the velocity to its radial and tangential part, with the tangential part at the x-y plane, around the z-axis, being equal to:"}
+{"text":"Since the Lagrangian depends on the angles of the object only through the potential, we have:"}
+{"text":"which is the torque on the i-th object."}
+{"text":"Suppose the system is invariant to rotations, so that the potential is independent of an overall rotation by the angle \"\u03b8\"z (thus it may depend on the angles of objects only through their differences, in the form formula_93). We therefore get for the total angular momentum:"}
+{"text":"And thus the angular momentum around the z-axis is conserved."}
+{"text":"This analysis can be repeated separately for each axis, giving conservation of the angular momentum vector. However, the angles around the three axes cannot be treated simultaneously as generalized coordinates, since they are not independent; in particular, two angles per point suffice to determine its position. While it is true that in the case of a rigid body a full description requires, in addition to the three translational degrees of freedom, the specification of three rotational degrees of freedom, these cannot be defined as rotations around the Cartesian axes (see Euler angles). This caveat is reflected in quantum mechanics in the non-trivial commutation relations of the different components of the angular momentum operator."}
+{"text":"Equivalently, in Hamiltonian mechanics the Hamiltonian can be described as a function of the angular momentum. As before, the part of the kinetic energy related to rotation around the z-axis for the i-th object is:"}
+{"text":"In theoretical physics, a chiral anomaly is the anomalous nonconservation of a chiral current. In everyday terms, it is equivalent to a sealed box that contained equal numbers of left and right-handed bolts, but when opened was found to have more left than right, or vice versa."}
+{"text":"Such events are expected to be prohibited according to classical conservation laws, but we know there must be ways they can be broken, because we have evidence of charge\u2013parity non-conservation (\"CP violation\"). It is possible that other imbalances have been caused by breaking of a \"chiral law\" of this kind. Many physicists suspect that the fact that the observable universe contains more matter than antimatter is caused by a chiral anomaly. Research into chiral symmetry breaking laws is a major endeavor in particle physics research at this time."}
+{"text":"The chiral anomaly originally referred to the anomalous decay rate of the neutral pion, as computed in the current algebra of the chiral model. These calculations suggested that the decay of the pion was suppressed, clearly contradicting experimental results. The nature of the anomalous calculations was first explained by Adler and Bell & Jackiw. This is now termed the Adler\u2013Bell\u2013Jackiw anomaly of quantum electrodynamics. This is a symmetry of classical electrodynamics that is violated by quantum corrections."}
+{"text":"At the time that the Adler\u2013Bell\u2013Jackiw anomaly was being explored in physics, there were related developments in differential geometry that appeared to involve the same kinds of expressions. These were not in any way related to quantum corrections of any sort, but rather were the exploration of the global structure of fiber bundles, and specifically, of the Dirac operators on spin structures having curvature forms resembling that of the electromagnetic tensor, both in four and three dimensions (the Chern\u2013Simons theory). After considerable back and forth, it became clear that the structure of the anomaly could be described with bundles with a non-trivial homotopy group, or, in physics lingo, in terms of instantons."}
+{"text":"Instantons are a form of topological soliton; they are a solution to the \"classical\" field theory, having the property that they are stable and cannot decay (into plane waves, for example). Put differently: conventional field theory is built on the idea of a vacuum - roughly speaking, a flat empty space. Classically, this is the \"trivial\" solution; all fields vanish. However, one can also arrange the (classical) fields in such a way that they have a non-trivial global configuration. These non-trivial configurations are also candidates for the vacuum, for empty space; yet they are no longer flat or trivial; they contain a twist, the instanton. The quantum theory is able to interact with these configurations; when it does so, it manifests as the chiral anomaly."}
+{"text":"In mathematics, non-trivial configurations are found during the study of Dirac operators in their fully generalized setting, namely, on Riemannian manifolds in arbitrary dimensions. Mathematical tasks include finding and classifying structures and configurations. Famous results include the Atiyah\u2013Singer index theorem for Dirac operators. Roughly speaking, the symmetries of Minkowski spacetime, Lorentz invariance, Laplacians, Dirac operators and the U(1)xSU(2)xSU(3) fiber bundles can be taken to be a special case of a far more general setting in differential geometry; the exploration of the various possibilities accounts for much of the excitement in theories such as string theory; the richness of possibilities accounts for a certain perception of lack of progress."}
+{"text":"Besides explaining the decay of the pion, the anomaly has a second very important role. The one-loop amplitude includes a factor that counts the grand total number of quarks that can circulate in the loop. In order to get the correct decay width, there must be exactly three colors of quarks, and not four or more. In this way, the anomaly plays an important role in constraining the Standard Model: it provides a direct physical prediction of the number of quark colors that can exist in nature."}
+{"text":"Current day research is focused on similar phenomena in different settings, including non-trivial topological configurations of the electroweak theory, that is, the sphalerons. Other applications include the hypothetical non-conservation of baryon number in GUTS and other theories."}
+{"text":"In some theories of fermions with chiral symmetry, the quantization may lead to the breaking of this (global) chiral symmetry. In that case, the charge associated with the chiral symmetry is not conserved. The non-conservation happens in a process of tunneling from one vacuum to another. Such a process is called an instanton."}
+{"text":"In the case of a symmetry related to the conservation of a fermionic particle number, one may understand the creation of such particles as follows. The definition of a particle is different in the two vacuum states between which the tunneling occurs; therefore a state of no particles in one vacuum corresponds to a state with some particles in the other vacuum. In particular, there is a Dirac sea of fermions and, when such a tunneling happens, it causes the energy levels of the sea fermions to gradually shift upwards for the particles and downwards for the anti-particles, or vice versa. This means particles which once belonged to the Dirac sea become real (positive energy) particles and particle creation happens."}
+{"text":"Technically, in the path integral formulation, an anomalous symmetry is a symmetry of the action formula_3, but not of the measure and therefore \"not\" of the generating functional"}
+{"text":"of the quantized theory (\u210f is Planck's action-quantum divided by 2\u03c0). The measure formula_5 consists of a part depending on the fermion field formula_6 and a part depending on its complex conjugate formula_7. The transformations of both parts under a chiral symmetry do not cancel in general. Note that if formula_8 is a Dirac fermion, then the chiral symmetry can be written as formula_9 where formula_10 is the chiral gamma matrix acting on formula_8. From the formula for formula_12 one also sees explicitly that in the classical limit, anomalies don't come into play, since in this limit only the extrema of formula_3 remain relevant."}
+{"text":"The anomaly is proportional to the instanton number of a gauge field to which the fermions are coupled. (Note that the gauge symmetry is always non-anomalous and is exactly respected, as is required for the theory to be consistent.)"}
+{"text":"The chiral anomaly can be calculated exactly by one-loop Feynman diagrams, e.g. Steinberger's \"triangle diagram\", contributing to the pion decays, and formula_14. The amplitude for this process can be calculated directly from the change in the measure of the fermionic fields under the chiral transformation."}
+{"text":"Wess and Zumino developed a set of conditions on how the partition function ought to behave under gauge transformations called the Wess\u2013Zumino consistency condition."}
+{"text":"Fujikawa derived this anomaly using the correspondence between functional determinants and the partition function using the Atiyah\u2013Singer index theorem. See Fujikawa's method."}
+{"text":"The Standard Model of electroweak interactions has all the necessary ingredients for successful baryogenesis, although these interactions have never been observed and may be insufficient to explain the total baryon number of the observed universe if the initial baryon number of the universe at the time of the Big Bang is zero. Beyond the violation of charge conjugation formula_15 and CP violation formula_16 (charge+parity), baryonic charge violation appears through the Adler\u2013Bell\u2013Jackiw anomaly of the formula_17 group."}
+{"text":"Baryons are not conserved by the usual electroweak interactions, due to the quantum chiral anomaly. The classical electroweak Lagrangian conserves baryonic charge. Quarks always enter in bilinear combinations formula_18, so that a quark can disappear only in collision with an antiquark. In other words, the classical baryonic current formula_19 is conserved:"}
+{"text":"However, quantum corrections known as the sphaleron destroy this conservation law: instead of zero in the right hand side of this equation, there is a non-vanishing quantum term,"}
+{"text":"where the coefficient is a numerical constant vanishing for \u210f = 0,"}
+{"text":"and the gauge field strength formula_23 is given by the expression"}
+{"text":"Electroweak sphalerons can only change the baryon and\/or lepton number by 3 or multiples of 3 (collision of three baryons into three leptons\/antileptons and vice versa)."}
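A minimal bookkeeping sketch of this rule (assuming, for illustration, the process that converts three baryons into three antileptons, so B and L each change by three while B - L is unchanged):

```python
# Electroweak sphalerons change B and L in steps of 3.  In the
# illustrative process below, three baryons turn into three
# antileptons, so B and L each drop by 3 while B - L is conserved.

def sphaleron_step(B, L, n=1):
    """Apply n sphaleron transitions of the kind described above."""
    return B - 3 * n, L - 3 * n

B0, L0 = 9, 0
B1, L1 = sphaleron_step(B0, L0)
print(B1, L1)              # 6 -3
print(B1 - L1 == B0 - L0)  # True: B - L unchanged
```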
+{"text":"An important fact is that the anomalous current non-conservation is proportional to the total derivative of a vector operator, formula_25 (this is non-vanishing due to instanton configurations of the gauge field, which are pure gauge at the infinity), where the anomalous current formula_26 is"}
+{"text":"which is the Hodge dual of the Chern\u2013Simons 3-form."}
+{"text":"In the language of differential forms, to any self-dual curvature form formula_28 we may assign the abelian 4-form formula_29. Chern\u2013Weil theory shows that this 4-form is locally \"but not globally\" exact, with potential given locally by the Chern\u2013Simons 3-form:"}
+{"text":"Again, this is true only on a single chart, and is false for the global form formula_31 unless the instanton number vanishes."}
+{"text":"To proceed further, we attach a \"point at infinity\" onto formula_32 to yield formula_33, and use the clutching construction to chart principal A-bundles, with one chart on a neighborhood of the point at infinity and a second on formula_34. The thickened region where these charts intersect is trivial, so their intersection is essentially formula_35. Thus instantons are classified by the third homotopy group formula_36, which for formula_37 is simply the third homotopy group of the 3-sphere, formula_38."}
+{"text":"The divergence of the baryon number current is (ignoring numerical constants)"}
+{"text":"Gravitational energy or gravitational potential energy is the potential energy a massive object has in relation to another massive object due to gravity. It is the potential energy associated with the gravitational field, which is released (converted into kinetic energy) when the objects fall towards each other. Gravitational potential energy increases when two objects are brought further apart."}
+{"text":"For two pairwise interacting point particles, the gravitational potential energy formula_1 is given by"}
+{"text":"where formula_3 and formula_4 are the masses of the two particles, formula_5 is the distance between them, and formula_6 is the gravitational constant."}
+{"text":"Close to the Earth's surface, the gravitational field is approximately constant, and the gravitational potential energy of an object reduces to"}
+{"text":"where formula_4 is the object's mass, formula_9 is the gravity of Earth, and formula_10 is the height of the object's center of mass above a chosen reference level."}
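The two expressions can be compared numerically near the Earth's surface. The constants below are standard values; the test mass and height are illustrative:

```python
# General form U = -G*M*m/r versus the near-surface approximation
# dU ~ m*g*h for a small height change h above the Earth's surface.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of the Earth, kg
R = 6.371e6          # mean radius of the Earth, m
g = G * M / R ** 2   # surface gravity, ~9.8 m/s^2

m = 10.0             # test mass, kg (illustrative)
h = 100.0            # height change, m (illustrative)

def U(r):
    """Gravitational potential energy of m at distance r from Earth's center."""
    return -G * M * m / r

exact = U(R + h) - U(R)   # exact change in potential energy
approx = m * g * h        # near-surface approximation
print(exact, approx)      # agree to a few parts in 10^5 for h << R
```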
+{"text":"In classical mechanics, two or more masses always have a gravitational potential. Conservation of energy requires that this gravitational field energy is always negative, so that it is zero when the objects are infinitely far apart. The gravitational potential energy is the potential energy an object has because it is within a gravitational field."}
+{"text":"The force between a point mass, formula_3, and another point mass, formula_4, is given by Newton's law of gravitation:"}
+{"text":"To get the total work done by an external force to bring point mass formula_4 from infinity to the final distance formula_5 (for example the radius of Earth) of the two mass points, the force is integrated with respect to displacement:"}
+{"text":"Because formula_18, the total work done on the object can be written as:"}
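The integration step above can be checked numerically. This is a sketch in toy units (G = M = m = 1, a large but finite starting distance standing in for infinity): the work done bringing the small mass in to a final separation r comes out as -G*M*m/r.

```python
import math

# Numeric sketch (toy units G = M = m = 1): the work done by an external
# force bringing the mass in from far away to separation r_final
# approaches -G*M*m/r_final.
G, M, m = 1.0, 1.0, 1.0

def work_external(r_final, r_far=1e8, n=200_000):
    """Midpoint rule for W = -integral of G*M*m/s**2 from r_final to r_far,
    on a logarithmic grid so the 1/s**2 integrand is well resolved."""
    log_a, log_b = math.log(r_final), math.log(r_far)
    h = (log_b - log_a) / n
    total = 0.0
    for i in range(n):
        s = math.exp(log_a + (i + 0.5) * h)
        total += (G * M * m / s**2) * s * h   # ds = s d(log s)
    return -total
```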
+{"text":"In general relativity gravitational energy is extremely complex, and there is no single agreed upon definition of the concept. It is sometimes modelled via the Landau\u2013Lifshitz pseudotensor that allows retention for the energy\u2013momentum conservation laws of classical mechanics. Addition of the matter stress\u2013energy tensor to the Landau\u2013Lifshitz pseudotensor results in a combined matter plus gravitational energy pseudotensor that has a vanishing 4-divergence in all frames\u2014ensuring the conservation law. Some people object to this derivation on the grounds that pseudotensors are inappropriate in general relativity, but the divergence of the combined matter plus gravitational energy pseudotensor is a tensor."}
+{"text":"In particle physics, lepton number (historically also called lepton charge) is a conserved quantum number representing the difference between the number of leptons and the number of antileptons in an elementary particle reaction. Lepton number is an additive quantum number, so its sum is preserved in interactions (as opposed to multiplicative quantum numbers such as parity, where the product is preserved instead). Mathematically, the lepton number formula_1 is defined by formula_2, where formula_3 is the number of leptons and formula_4 is the number of antileptons."}
+{"text":"Lepton number was introduced in 1953 to explain the absence of reactions such as formula_5 in the Cowan\u2013Reines neutrino experiment, which instead observed formula_6. This process, inverse beta decay, conserves lepton number, as the incoming antineutrino has lepton number \u20131, while the outgoing positron (antielectron) also has lepton number \u20131."}
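The additive bookkeeping L = n_leptons - n_antileptons can be sketched in a few lines; the particle names and the lookup table below are illustrative choices, not a standard API.

```python
# Sketch: additive lepton number, L = (# leptons) - (# antileptons).
# Names and assignments in this table are illustrative.
LEPTON_NUMBER = {
    "e-": +1, "e+": -1,
    "mu-": +1, "mu+": -1,
    "nu_e": +1, "nubar_e": -1,
    "nu_mu": +1, "nubar_mu": -1,
    "p": 0, "n": 0,           # baryons carry no lepton number
}

def total_lepton_number(particles):
    return sum(LEPTON_NUMBER[p] for p in particles)

def conserves_lepton_number(initial, final):
    return total_lepton_number(initial) == total_lepton_number(final)

# Inverse beta decay (nubar_e + p -> n + e+), as observed by Cowan-Reines,
# has L = -1 on both sides, while nubar_e + n -> p + e- would not balance.
```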
+{"text":"In addition to lepton number, lepton family numbers are defined as"}
+{"text":"Prominent examples of lepton flavor conservation are the muon decays formula_7 and formula_8. In these, the creation of an electron is accompanied by the creation of an electron antineutrino, and the creation of a positron is accompanied by the creation of an electron neutrino. Likewise, a decaying negative muon results in the creation of a muon neutrino, while a decaying positive muon results in the creation of a muon antineutrino."}
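The same bookkeeping extends to the family numbers; this sketch tracks (L_e, L_mu) pairs for the muon decay discussed above (the photon in mu- -> e- + gamma carries no lepton number, so it is omitted from the lists).

```python
# Sketch of lepton family-number bookkeeping; names are illustrative.
FAMILY = {  # (L_e, L_mu) per particle
    "e-": (1, 0), "e+": (-1, 0),
    "nu_e": (1, 0), "nubar_e": (-1, 0),
    "mu-": (0, 1), "mu+": (0, -1),
    "nu_mu": (0, 1), "nubar_mu": (0, -1),
}

def family_numbers(particles):
    le = sum(FAMILY[p][0] for p in particles)
    lmu = sum(FAMILY[p][1] for p in particles)
    return le, lmu

# mu- -> e- + nubar_e + nu_mu conserves L_e and L_mu separately,
# whereas the unobserved mu- -> e- (+ gamma) would violate both.
```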
+{"text":"Violations of the lepton number conservation laws."}
+{"text":"Lepton flavor is only approximately conserved, and is notably not conserved in neutrino oscillation. However, total lepton number is still conserved in the Standard Model."}
+{"text":"Numerous searches for physics beyond the Standard Model incorporate searches for lepton number or lepton flavor violation, such as the decays formula_9. Experiments such as MEGA and SINDRUM have searched for lepton number violation in muon decays to electrons; MEG set the current branching limit of order 10\u221213 and plans to lower the limit to 10\u221214 after 2016. Some theories beyond the Standard Model, such as supersymmetry, predict branching ratios of order 10\u221212 to 10\u221214. The Mu2e experiment, in construction as of 2017, has a planned sensitivity of order 10\u221217."}
+{"text":"Because the lepton number conservation law in fact is violated by chiral anomalies, there are problems applying this symmetry universally over all energy scales. However, the quantum number \"B\" \u2212 \"L\" is commonly conserved in Grand Unified Theory models."}
+{"text":"If neutrinos turn out to be Majorana fermions, neither the lepton number nor \"B\" \u2212 \"L\" would be conserved, e.g. in neutrinoless double beta decay."}
+{"text":"In mathematical physics, inversion transformations are a natural extension of Poincar\u00e9 transformations to include all conformal one-to-one transformations on coordinate space-time. They are less studied in physics because, unlike the rotations and translations of Poincar\u00e9 symmetry, an object cannot be physically transformed by the inversion symmetry. Some physical theories are invariant under this symmetry; in these cases it is what is known as a 'hidden symmetry'. Other hidden symmetries of physics include gauge symmetry and general covariance."}
+{"text":"In 1831 the mathematician Ludwig Immanuel Magnus began to publish on transformations of the plane generated by inversion in a circle of radius \"R\". His work initiated a large body of publications, now called inversive geometry. The most prominent mathematician associated with the subject became August Ferdinand M\u00f6bius once he reduced the planar transformations to complex number arithmetic. Among the physicists employing the inversion transformation early on was Lord Kelvin, and the association with him leads it to be called the Kelvin transform."}
+{"text":"In the following we shall use imaginary time (formula_1) so that space-time is Euclidean and the equations are simpler. The Poincar\u00e9 transformations are given by the coordinate transformation on space-time parametrized by the 4-vectors\u00a0\"V\""}
+{"text":"where formula_3 is an orthogonal matrix and formula_4 is a 4-vector. Applying this transformation twice on a 4-vector gives a third transformation of the same form. The basic invariant under this transformation is the space-time length given by the distance between two space-time points given by 4-vectors \"x\" and\u00a0\"y\":"}
+{"text":"These transformations are subgroups of general 1-1 conformal transformations on space-time. It is possible to extend these transformations to include all 1-1 conformal transformations on space-time"}
+{"text":"We must also have an equivalent condition to the orthogonality condition of the Poincar\u00e9 transformations:"}
+{"text":"Because one can divide the top and bottom of the transformation by formula_8 we lose no generality by setting formula_9 to the unit matrix. We end up with"}
+{"text":"Applying this transformation twice on a 4-vector gives a transformation of the same form. The new symmetry of 'inversion' is given by the 3-tensor formula_11. This symmetry becomes Poincar\u00e9 symmetry if we set formula_12. When formula_13, the second condition requires that formula_3 is an orthogonal matrix. This transformation is one-to-one, meaning that each point is mapped to a unique point, only if we theoretically include the points at infinity."}
+{"text":"The invariants for this symmetry in 4 dimensions are unknown; however, it is known that the invariant requires a minimum of 4 space-time points. In one dimension, the invariant is the well-known cross-ratio from M\u00f6bius transformations:"}
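The one-dimensional invariance can be checked directly. This is a minimal numerical sketch: the cross-ratio of four points is unchanged under any Möbius transformation x -> (a*x + b)/(c*x + d) with ad - bc nonzero; the sample points and coefficients below are arbitrary.

```python
# Sketch: the cross-ratio is invariant under Mobius transformations.
def cross_ratio(x1, x2, x3, x4):
    return ((x1 - x3) * (x2 - x4)) / ((x1 - x4) * (x2 - x3))

def mobius(x, a, b, c, d):
    """Mobius map x -> (a*x + b)/(c*x + d); requires a*d - b*c != 0."""
    return (a * x + b) / (c * x + d)

pts = [0.5, 1.7, -2.3, 4.0]
a, b, c, d = 2.0, -1.0, 3.0, 5.0          # a*d - b*c = 13, nonzero
images = [mobius(x, a, b, c, d) for x in pts]
# cross_ratio(*pts) and cross_ratio(*images) agree to rounding error.
```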
+{"text":"Because the only invariants under this symmetry involve a minimum of 4 points, this symmetry cannot be a symmetry of point particle theory. Point particle theory relies on knowing the lengths of paths of particles through space-time (e.g., from formula_16 to formula_17). The symmetry can be a symmetry of a string theory in which the strings are uniquely determined by their endpoints. The propagator for this theory for a string starting at the endpoints formula_18 and ending at the endpoints formula_19 is a conformal function of the 4-dimensional invariant. A string field in endpoint-string theory is a function over the endpoints."}
+{"text":"In theoretical physics, an invariant is an observable of a physical system which remains unchanged under some transformation. Invariance, as a broader term, also applies to the no change of form of physical laws under a transformation, and is closer in scope to the mathematical definition. Invariants of a system are deeply tied to the symmetries imposed by its environment."}
+{"text":"Invariance is an important concept in modern theoretical physics, and many theories are expressed in terms of their symmetries and invariants."}
+{"text":"In classical and quantum mechanics, invariance of space under translation results in momentum being an invariant and the conservation of momentum, whereas invariance of the origin of time, i.e. translation in time, results in energy being an invariant and the conservation of energy. In general, by Noether's theorem, any invariance of a physical system under a continuous symmetry leads to a fundamental conservation law."}
+{"text":"In crystals, the electron density is periodic and invariant with respect to discrete translations by unit cell vectors. In very few materials, this symmetry can be broken due to enhanced electron correlations."}
+{"text":"Other examples of physical invariants are the speed of light, and the charge and mass of a particle observed from two reference frames moving with respect to one another (invariance under a spacetime Lorentz transformation), and the invariance of time and acceleration under a Galilean transformation between two such frames moving at low velocities."}
+{"text":"Quantities can be invariant under some common transformations but not under others. For example, the velocity of a particle is invariant when switching coordinate representations from rectangular to curvilinear coordinates, but is not invariant when transforming between frames of reference that are moving with respect to each other. Other quantities, like the speed of light, are always invariant."}
+{"text":"Physical laws are said to be invariant under transformations when their predictions remain unchanged. This generally means that the form of the law (e.g. the type of differential equations used to describe the law) is unchanged in transformations so that no additional or different solutions are obtained."}
+{"text":"Covariance and contravariance generalize the mathematical properties of invariance in tensor mathematics, and are frequently used in electromagnetism, special relativity, and general relativity."}
+{"text":"In physics, a symmetry of a physical system is a physical or mathematical feature of the system (observed or intrinsic) that is preserved or remains unchanged under some transformation."}
+{"text":"A family of particular transformations may be \"continuous\" (such as rotation of a circle) or \"discrete\" (e.g., reflection of a bilaterally symmetric figure, or rotation of a regular polygon). Continuous and discrete transformations give rise to corresponding types of symmetries. Continuous symmetries can be described by Lie groups while discrete symmetries are described by finite groups (see \"Symmetry group\")."}
+{"text":"These two concepts, Lie and finite groups, are the foundation for the fundamental theories of modern physics. Symmetries are frequently amenable to mathematical formulations such as group representations and can, in addition, be exploited to simplify many problems."}
+{"text":"Arguably the most important example of a symmetry in physics is that the speed of light has the same value in all frames of reference, which is known in mathematical terms as the Poincar\u00e9 group, the symmetry group of special relativity. Another important example is the invariance of the form of physical laws under arbitrary differentiable coordinate transformations, which is an important idea in general relativity."}
+{"text":"Invariance is specified mathematically by transformations that leave some property (e.g. quantity) unchanged. This idea can apply to basic real-world observations. For example, temperature may be homogeneous throughout a room. Since the temperature does not depend on the position of an observer within the room, we say that the temperature is \"invariant\" under a shift in an observer's position within the room."}
+{"text":"Similarly, a uniform sphere rotated about its center will appear exactly as it did before the rotation. The sphere is said to exhibit spherical symmetry. A rotation about any axis of the sphere will preserve how the sphere \"looks\"."}
+{"text":"The above ideas lead to the useful idea of \"invariance\" when discussing observed physical symmetry; this can be applied to symmetries in forces as well."}
+{"text":"For example, an electric field due to an electrically charged wire of infinite length is said to exhibit cylindrical symmetry, because the electric field strength at a given distance \"r\" from the wire will have the same magnitude at each point on the surface of a cylinder (whose axis is the wire) with radius \"r\". Rotating the wire about its own axis does not change its position or charge density, hence it will preserve the field. The field strength at a rotated position is the same. This is not true in general for an arbitrary system of charges."}
+{"text":"In Newton's theory of mechanics, given two bodies, each with mass \"m\", starting at the origin and moving along the \"x\"-axis in opposite directions, one with speed \"v\"1 and the other with speed \"v\"2, the total kinetic energy of the system (as calculated by an observer at the origin) is \"m\"(\"v\"1\u00b2 + \"v\"2\u00b2)\/2, and it remains the same if the velocities are interchanged. The total kinetic energy is preserved under a reflection in the \"y\"-axis."}
+{"text":"The last example above illustrates another way of expressing symmetries, namely through the equations that describe some aspect of the physical system. The above example shows that the total kinetic energy will be the same if \"v\"1 and \"v\"2 are interchanged."}
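The interchange symmetry above is easy to state as code; this is a minimal sketch of the total kinetic energy of the two equal-mass bodies.

```python
# Sketch: total kinetic energy of two bodies of equal mass m with speeds
# v1 and v2; swapping the speeds leaves it unchanged.
def total_kinetic_energy(m, v1, v2):
    return 0.5 * m * v1**2 + 0.5 * m * v2**2

# total_kinetic_energy(m, v1, v2) == total_kinetic_energy(m, v2, v1)
```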
+{"text":"Symmetries may be broadly classified as \"global\" or \"local\". A \"global symmetry\" is one that keeps a property invariant for a transformation that is applied simultaneously at all points of spacetime, whereas a \"local symmetry\" is one that keeps a property invariant when a possibly different symmetry transformation is applied at each point of spacetime; specifically a local symmetry transformation is parameterised by the spacetime co-ordinates, whereas a global symmetry is not. This implies that a global symmetry is also a local symmetry. Local symmetries play an important role in physics as they form the basis for gauge theories."}
+{"text":"The two examples of rotational symmetry described above \u2013\u00a0spherical and cylindrical\u00a0\u2013 are each instances of continuous symmetry. These are characterised by invariance following a continuous change in the geometry of the system. For example, the wire may be rotated through any angle about its axis and the field strength will be the same on a given cylinder. Mathematically, continuous symmetries are described by transformations that change continuously as a function of their parameterization. An important subclass of continuous symmetries in physics are spacetime symmetries."}
+{"text":"Continuous \"spacetime symmetries\" are symmetries involving transformations of space and time. These may be further classified as \"spatial symmetries\", involving only the spatial geometry associated with a physical system; \"temporal symmetries\", involving only changes in time; or \"spatio-temporal symmetries\", involving changes in both space and time."}
+{"text":"Mathematically, spacetime symmetries are usually described by smooth vector fields on a smooth manifold. The underlying local diffeomorphisms associated with the vector fields correspond more directly to the physical symmetries, but the vector fields themselves are more often used when classifying the symmetries of the physical system."}
+{"text":"Some of the most important vector fields are Killing vector fields which are those spacetime symmetries that preserve the underlying metric structure of a manifold. In rough terms, Killing vector fields preserve the distance between any two points of the manifold and often go by the name of isometries."}
+{"text":"A \"discrete symmetry\" is a symmetry that describes non-continuous changes in a system. For example, a square possesses discrete rotational symmetry, as only rotations by multiples of right angles will preserve the square's original appearance. Discrete symmetries sometimes involve some type of 'swapping'; these swaps are usually called \"reflections\" or \"interchanges\"."}
+{"text":"The Standard Model of particle physics has three related natural near-symmetries. These state that the universe in which we live should be indistinguishable from one where a certain type of change is introduced."}
+{"text":"These symmetries are near-symmetries because each is broken in the present-day universe. However, the Standard Model predicts that the combination of the three (that is, the simultaneous application of all three transformations) must be a symmetry, called CPT symmetry. CP violation, the violation of the combination of C- and P-symmetry, is necessary for the presence of significant amounts of baryonic matter in the universe. CP violation is a fruitful area of current research in particle physics."}
+{"text":"A type of symmetry known as supersymmetry has been used to try to make theoretical advances in the Standard Model. Supersymmetry is based on the idea that there is another physical symmetry beyond those already developed in the Standard Model, specifically a symmetry between bosons and fermions. Supersymmetry asserts that each type of boson has, as a supersymmetric partner, a fermion, called a superpartner, and vice versa. Supersymmetry has not yet been experimentally verified: no known particle has the correct properties to be a superpartner of any other known particle. Currently, the LHC is preparing for a run that tests supersymmetry."}
+{"text":"The transformations describing physical symmetries typically form a mathematical group. Group theory is an important area of mathematics for physicists."}
+{"text":"Continuous symmetries are specified mathematically by \"continuous groups\" (called Lie groups). Many physical symmetries are isometries and are specified by symmetry groups. Sometimes this term is used for more general types of symmetries. The set of all proper rotations (about any angle) through any axis of a sphere form a Lie group called the special orthogonal group SO(3). (The '3' refers to the three-dimensional space of an ordinary sphere.) Thus, the symmetry group of the sphere with proper rotations is SO(3). Any rotation preserves distances on the surface of the ball. The set of all Lorentz transformations form a group called the Lorentz group (this may be generalised to the Poincar\u00e9 group)."}
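Two properties mentioned above, that rotations preserve distances and that composing rotations gives another rotation of the group, can be checked with a small sketch for rotations about the z-axis (a one-parameter subgroup of SO(3)).

```python
import math

def rot_z(theta):
    """3x3 rotation matrix about the z-axis, an element of SO(3)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply(R, v):
    """Matrix-vector product R @ v for 3-vectors."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

v = [1.0, 2.0, 2.0]            # a sample vector with |v| = 3
# Rotations are isometries: norm(apply(rot_z(t), v)) == norm(v), and
# rot_z(a) followed by rot_z(b) acts like rot_z(a + b) (group closure).
```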
+{"text":"Discrete groups describe discrete symmetries. For example, the symmetries of an equilateral triangle are characterized by the symmetric group S3."}
+{"text":"A type of physical theory based on \"local\" symmetries is called a \"gauge\" theory and the symmetries natural to such a theory are called gauge symmetries. Gauge symmetries in the Standard Model, used to describe three of the fundamental interactions, are based on the SU(3) \u00d7 SU(2) \u00d7 U(1) group. (Roughly speaking, the symmetries of the SU(3) group describe the strong force, the SU(2) group describes the weak interaction and the U(1) group describes the electromagnetic force.)"}
+{"text":"Also, the reduction by symmetry of the energy functional under the action by a group and spontaneous symmetry breaking of transformations of symmetric groups appear to elucidate topics in particle physics (for example, the unification of electromagnetism and the weak force in physical cosmology)."}
+{"text":"The symmetry properties of a physical system are intimately related to the conservation laws characterizing that system. Noether's theorem gives a precise description of this relation. The theorem states that each continuous symmetry of a physical system implies that some physical property of that system is conserved. Conversely, each conserved quantity has a corresponding symmetry. For example, spatial translation symmetry (i.e. homogeneity of space) gives rise to conservation of (linear) momentum, and temporal translation symmetry (i.e. homogeneity of time) gives rise to conservation of energy."}
+{"text":"The following table summarizes some fundamental symmetries and the associated conserved quantity."}
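Noether's translation-momentum pairing can be illustrated numerically. This is a sketch with made-up masses and a harmonic pair potential (an assumption for the example): because the potential depends only on the separation, the pair forces are equal and opposite and the total momentum stays constant over the integration.

```python
# Sketch: a translation-invariant interaction V = k/2 * (x2 - x1)**2
# conserves total momentum p1 + p2 under symplectic-Euler integration.
def simulate(steps=1000, dt=1e-3):
    m1, m2 = 1.0, 2.0              # illustrative masses
    x1, x2 = 0.0, 1.0
    p1, p2 = 0.5, -0.3             # total momentum starts at 0.2
    k = 3.0                        # spring constant of the pair potential
    for _ in range(steps):
        f = k * (x2 - x1)          # force on particle 1, toward particle 2
        p1 += f * dt
        p2 += -f * dt              # equal and opposite reaction
        x1 += p1 / m1 * dt
        x2 += p2 / m2 * dt
    return p1 + p2                 # total momentum after the run
```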
+{"text":"Continuous symmetries in physics preserve transformations. One can specify a symmetry by showing how a very small transformation affects various particle fields. The commutator of two of these infinitesimal transformations is equivalent to a third infinitesimal transformation of the same kind; hence they form a Lie algebra."}
+{"text":"A general coordinate transformation described as the general field formula_6 (also known as a diffeomorphism) has the infinitesimal effect on a scalar formula_7, spinor formula_8 or vector field formula_9 that can be expressed (using the index summation convention):"}
+{"text":"Without gravity only the Poincar\u00e9 symmetries are preserved which restricts formula_6 to be of the form:"}
+{"text":"where M is an antisymmetric matrix (giving the Lorentz and rotational symmetries) and P is a general vector (giving the translational symmetries). Other symmetries affect multiple fields simultaneously. For example, local gauge transformations apply to both a vector and spinor field:"}
+{"text":"where formula_17 are generators of a particular Lie group. So far the transformations on the right have only included fields of the same type. Supersymmetries are defined according to how they mix fields of \"different\" types."}
+{"text":"Another symmetry which is part of some theories of physics and not in others is scale invariance, which involves Weyl transformations of the following kind:"}
+{"text":"If the fields have this symmetry then it can be shown that the field theory is almost certainly also conformally invariant. This means that in the absence of gravity h(x) would be restricted to the form:"}
+{"text":"with D generating scale transformations and K generating special conformal transformations. For example, super-Yang\u2013Mills theory has this symmetry while general relativity does not, although other theories of gravity such as conformal gravity do. The 'action' of a field theory is an invariant under all the symmetries of the theory. Much of modern theoretical physics involves speculating on the various symmetries the Universe may have and finding the invariants needed to construct field theories as models."}
+{"text":"In string theories, since a string can be decomposed into an infinite number of particle fields, the symmetries on the string world sheet are equivalent to special transformations which mix an infinite number of fields."}
+{"text":"There is a natural connection between particle physics and representation theory, as first noted in the 1930s by Eugene Wigner. It links the properties of elementary particles to the structure of Lie groups and Lie algebras. According to this connection, the different quantum states of an elementary particle give rise to an irreducible representation of the Poincar\u00e9 group. Moreover, the properties of the various particles, including their spectra, can be related to representations of Lie algebras, corresponding to \"approximate symmetries\" of the universe."}
+{"text":"By definition of a symmetry of a quantum system, there is a group action on formula_7. For each formula_12, there is a corresponding transformation formula_13 of formula_7. More specifically, if formula_15 is some symmetry of the system (say, rotation about the x-axis by 12\u00b0), then the corresponding transformation formula_13 of formula_7 is a map on ray space. For example, when rotating a \"stationary\" (zero momentum) spin-5 particle about its center, formula_15 is a rotation in 3D space (an element of formula_19), while formula_13 is an operator whose domain and range are each the space of possible quantum states of this particle, in this example the projective space formula_7 associated with an 11-dimensional complex Hilbert space formula_1."}
+{"text":"Each map formula_13 preserves, by definition of symmetry, the ray product on formula_7 induced by the inner product on formula_1; according to Wigner's theorem, this transformation of formula_7 comes from a unitary or anti-unitary transformation formula_27 of formula_1. Note, however, that the formula_27 associated to a given formula_13 is not unique, but only unique \"up to a phase factor\". The composition of the operators formula_27 should, therefore, reflect the composition law in formula_4, but only up to a phase factor:"}
+{"text":"where formula_34 will depend on formula_15 and formula_36. Thus, the map sending formula_15 to formula_27 is a \"projective unitary representation\" of formula_4, or possibly a mixture of unitary and anti-unitary, if formula_4 is disconnected. In practice, anti-unitary operators are always associated with time-reversal symmetry."}
+{"text":"It is important physically that in general formula_41 does not have to be an ordinary representation of formula_4; it may not be possible to choose the phase factors in the definition of formula_27 to eliminate the phase factors in their composition law. An electron, for example, is a spin-one-half particle; its Hilbert space consists of wave functions on formula_44 with values in a two-dimensional spinor space. The action of formula_19 on the spinor space is only projective: It does not come from an ordinary representation of formula_19. There is, however, an associated ordinary representation of the universal cover formula_47 of formula_19 on spinor space."}
+{"text":"For many interesting classes of groups formula_4, Bargmann's theorem tells us that every projective unitary representation of formula_4 comes from an ordinary representation of the universal cover formula_51 of formula_4. Actually, if formula_1 is finite dimensional, then regardless of the group formula_4, every projective unitary representation of formula_4 comes from an ordinary unitary representation of formula_51. If formula_1 is infinite dimensional, then to obtain the desired conclusion, some algebraic assumptions must be made on formula_4 (see below). In this setting the result is a theorem of Bargmann. Fortunately, in the crucial case of the Poincar\u00e9 group, Bargmann's theorem applies. (See Wigner's classification of the representations of the universal cover of the Poincar\u00e9 group.)"}
+{"text":"The requirement referred to above is that the Lie algebra formula_59 does not admit a nontrivial one-dimensional central extension. This is the case if and only if the second cohomology group of formula_59 is trivial. In this case, it may still be true that the group admits a central extension by a \"discrete\" group. But extensions of formula_4 by discrete groups are covers of formula_4. For instance, the universal cover formula_63 is related to formula_4 through the quotient formula_65 with the central subgroup formula_66 being the center of formula_63 itself, isomorphic to the fundamental group of the covered group."}
+{"text":"Thus, in favorable cases, the quantum system will carry a unitary representation of the universal cover formula_51 of the symmetry group formula_4. This is desirable because formula_1 is much easier to work with than the non-vector space formula_7. If the representations of formula_51 can be classified, much more information about the possibilities and properties of formula_1 are available."}
+{"text":"In this case, to obtain an ordinary representation, one has to pass to the Heisenberg group, which is a nontrivial one-dimensional central extension of formula_75."}
+{"text":"The group of translations and Lorentz transformations form the Poincar\u00e9 group, and this group should be a symmetry of a relativistic quantum system (neglecting general relativity effects, or in other words, in flat space). Representations of the Poincar\u00e9 group are in many cases characterized by a nonnegative mass and a half-integer spin (see Wigner's classification); this can be thought of as the reason that particles have quantized spin. (Note that there are in fact other possible representations, such as tachyons, infraparticles, etc., which in some cases do not have quantized spin or fixed mass.)"}
+{"text":"While the spacetime symmetries in the Poincar\u00e9 group are particularly easy to visualize and believe, there are also other types of symmetries, called internal symmetries. One example is color SU(3), an exact symmetry corresponding to the continuous interchange of the three quark colors."}
+{"text":"Many (but not all) symmetries or approximate symmetries form Lie groups. Rather than study the representation theory of these Lie groups, it is often preferable to study the closely related representation theory of the corresponding Lie algebras, which are usually simpler to compute."}
+{"text":"Now, representations of the Lie algebra correspond to representations of the universal cover of the original group. In the finite-dimensional case\u2014and the infinite-dimensional case, provided that Bargmann's theorem applies\u2014irreducible projective representations of the original group correspond to ordinary unitary representations of the universal cover. In those cases, computing at the Lie algebra level is appropriate. This is the case, notably, for studying the irreducible projective representations of the rotation group SO(3). These are in one-to-one correspondence with the ordinary representations of the universal cover SU(2) of SO(3). The representations of SU(2) are then in one-to-one correspondence with the representations of its Lie algebra su(2), which is isomorphic to the Lie algebra so(3) of SO(3)."}
+{"text":"Thus, to summarize, the irreducible projective representations of SO(3) are in one-to-one correspondence with the irreducible ordinary representations of its Lie algebra so(3). The two-dimensional \"spin 1\/2\" representation of the Lie algebra so(3), for example, does not correspond to an ordinary (single-valued) representation of the group SO(3). (This fact is the origin of statements to the effect that \"if you rotate the wave function of an electron by 360 degrees, you get the negative of the original wave function.\") Nevertheless, the spin 1\/2 representation does give rise to a well-defined \"projective\" representation of SO(3), which is all that is required physically."}
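The parenthetical statement about rotating an electron's wave function by 360 degrees can be verified directly in the spin-1/2 representation: the SU(2) matrix for a rotation about the z-axis is diag(e^{-i theta/2}, e^{+i theta/2}), so a 2*pi rotation gives -I while a 4*pi rotation returns to the identity.

```python
import cmath
import math

def su2_rot_z(theta):
    """Spin-1/2 (SU(2)) rotation about z: diag(exp(-i*theta/2), exp(+i*theta/2))."""
    return [[cmath.exp(-1j * theta / 2), 0.0],
            [0.0, cmath.exp(1j * theta / 2)]]

# A 360-degree rotation is the identity in SO(3) but -I in SU(2),
# which is why spin 1/2 is only a projective representation of SO(3);
# a 720-degree rotation returns to the identity.
U_360 = su2_rot_z(2 * math.pi)
U_720 = su2_rot_z(4 * math.pi)
```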
+{"text":"Although the above symmetries are believed to be exact, other symmetries are only approximate."}
+{"text":"As an example of what an approximate symmetry means, suppose an experimentalist lived inside an infinite ferromagnet, with magnetization in some particular direction. The experimentalist in this situation would find not one but two distinct types of electrons: one with spin along the direction of the magnetization, with a slightly lower energy (and consequently, a lower mass), and one with spin anti-aligned, with a higher mass. Our usual SO(3) rotational symmetry, which ordinarily connects the spin-up electron with the spin-down electron, has in this hypothetical case become only an \"approximate\" symmetry, relating \"different types of particles\" to each other."}
+{"text":"In general, an approximate symmetry arises when there are very strong interactions that obey that symmetry, along with weaker interactions that do not. In the electron example above, the two \"types\" of electrons behave identically under the strong and weak forces, but differently under the electromagnetic force."}
+{"text":"An example from the real world is isospin symmetry, an SU(2) group corresponding to the similarity between up quarks and down quarks. This is an approximate symmetry: While up and down quarks are identical in how they interact under the strong force, they have different masses and different electroweak interactions. Mathematically, there is an abstract two-dimensional vector space"}
+{"text":"and the laws of physics are \"approximately\" invariant under applying a determinant-1 unitary transformation to this space:"}
+{"text":"For example, formula_85 would turn all up quarks in the universe into down quarks and vice versa. Some examples help clarify the possible effects of these transformations:"}
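A determinant-1 unitary acting on the abstract (up, down) flavor space can be sketched concretely. The particular matrix below is an illustrative choice (not necessarily the formula_85 of the text): it has determinant +1 and exchanges the up and down basis states, up to a sign.

```python
# Sketch: a determinant-1 unitary on the two-dimensional (up, down)
# flavor space. The matrix [[0, 1], [-1, 0]] is an illustrative example.
U = [[0.0, 1.0], [-1.0, 0.0]]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

up, down = [1.0, 0.0], [0.0, 1.0]
# apply(U, up) -> -down and apply(U, down) -> up: the transformation
# interchanges the two flavors (up to a phase), as described in the text.
```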
+{"text":"In general, particles form isospin multiplets, which correspond to irreducible representations of the Lie algebra su(2). Particles in an isospin multiplet have very similar but not identical masses, because the up and down quarks are very similar but not identical."}
+{"text":"Isospin symmetry can be generalized to flavour symmetry, an SU(3) group corresponding to the similarity between up quarks, down quarks, and strange quarks. This is, again, an approximate symmetry, violated by quark mass differences and electroweak interactions\u2014in fact, it is a poorer approximation than isospin, because of the strange quark's noticeably higher mass."}
+{"text":"Nevertheless, particles can indeed be neatly divided into groups that form irreducible representations of the Lie algebra SU(3), as first noted by Murray Gell-Mann and independently by Yuval Ne'eman."}
+{"text":"In geometry, to translate a geometric figure is to move it from one place to another without rotating it. A translation \"slides\" a figure by a vector a: \"T\"a(p) = p + a."}
+{"text":"In physics and mathematics, continuous translational symmetry is the invariance of a system of equations under any translation; discrete translational symmetry is invariance under translation by any element of a discrete set of vectors."}
+{"text":"Analogously an operator \"A\" on functions is said to be translationally invariant with respect to a translation operator formula_1 if the result after applying \"A\" doesn't change if the argument function is translated."}
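A minimal numerical sketch of this definition, on a periodic grid: the discrete Laplacian is a translationally invariant operator, so applying it and then translating gives the same result as translating first and then applying it (the operator and grid size are illustrative choices).

```python
import numpy as np

def translate(f, n):
    """Discrete translation operator: (T_n f)[x] = f[x - n], periodic."""
    return np.roll(f, n)

def laplacian(f):
    """A translation-invariant operator: the periodic discrete Laplacian."""
    return np.roll(f, 1) - 2 * f + np.roll(f, -1)

f = np.sin(2 * np.pi * np.arange(32) / 32)

# Translational invariance: A(T_n f) == T_n(A f).
lhs = translate(laplacian(f), 5)
rhs = laplacian(translate(f, 5))
assert np.allclose(lhs, rhs)
```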
+{"text":"Laws of physics are translationally invariant under a spatial translation if they do not distinguish different points in space. According to Noether's theorem, space translational symmetry of a physical system is equivalent to the momentum conservation law."}
+{"text":"Translational symmetry of an object means that a particular translation does not change the object. For a given object, the translations for which this applies form a group, the symmetry group of the object, or, if the object has more kinds of symmetry, a subgroup of the symmetry group."}
+{"text":"Translational invariance implies that, at least in one direction, the object is infinite: for any given point p, the set of points with the same properties due to the translational symmetry form the infinite discrete set {p\u00a0+\u00a0\"n\"a\u00a0|\u00a0\"n\"\u00a0\u2208\u00a0Z} = p\u00a0+\u00a0Z a. Fundamental domains are e.g. H\u00a0+\u00a0[0,\u00a01] a for any hyperplane H for which a has an independent direction. This is in 1D a line segment, in 2D an infinite strip, and in 3D a slab, such that the vector starting at one side ends at the other side. Note that the strip and slab need not be perpendicular to the vector, hence can be narrower or thinner than the length of the vector."}
+{"text":"In spaces with dimension higher than 1, there may be translational symmetry in more than one independent direction. For each set of \"k\" independent translation vectors, the symmetry group is isomorphic with Z\"k\"."}
+{"text":"In particular, the multiplicity may be equal to the dimension. This implies that the object is infinite in all directions. In this case, the set of all translations forms a lattice. Different bases of translation vectors generate the same lattice if and only if one is transformed into the other by a matrix of integer coefficients of which the absolute value of the determinant is 1. The absolute value of the determinant of the matrix formed by a set of translation vectors is the hypervolume of the \"n\"-dimensional parallelepiped the set subtends (also called the \"covolume\" of the lattice). This parallelepiped is a fundamental region of the symmetry: any pattern on or in it is possible, and this defines the whole object."}
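The integer-matrix criterion above can be sketched numerically: two bases (stored as matrix columns) generate the same lattice exactly when the change-of-basis matrix between them is an integer matrix with determinant of absolute value 1 (the helper name and example bases are illustrative).

```python
import numpy as np

def same_lattice(B1, B2, tol=1e-9):
    """Do the columns of B1 and B2 generate the same lattice?
    True iff B2 = B1 @ M for an integer matrix M with |det M| = 1."""
    M = np.linalg.solve(B1, B2)              # change-of-basis matrix
    if not np.allclose(M, np.round(M), atol=tol):
        return False                          # not integer: different lattice
    return bool(np.isclose(abs(np.linalg.det(np.round(M))), 1.0))

B1 = np.array([[1.0, 0.0],
               [0.0, 1.0]]).T                # standard square lattice Z^2
B2 = np.array([[1.0, 0.0],
               [1.0, 1.0]]).T                # sheared basis, same lattice
B3 = np.array([[2.0, 0.0],
               [0.0, 1.0]]).T                # proper sublattice, not the same

print(same_lattice(B1, B2))  # True
print(same_lattice(B1, B3))  # False
```

The determinant condition reflects the fact that the fundamental parallelepipeds of two bases for the same lattice must have equal hypervolume.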
+{"text":"Alternatively, e.g. a rectangle may define the whole object, even if the translation vectors are not perpendicular, if it has two sides parallel to one translation vector, while the other translation vector starting at one side of the rectangle ends at the opposite side."}
+{"text":"For example, consider a tiling with equal rectangular tiles carrying an asymmetric pattern, all oriented the same way, in rows, with each row shifted relative to the previous one by a fixed fraction, not one half, of a tile. Then we have only translational symmetry, wallpaper group \"p\"1 (the same applies without the shift). With rotational symmetry of order two of the pattern on the tile we have \"p\"2 (more symmetry of the pattern on the tile does not change that, because of the arrangement of the tiles). The rectangle is a more convenient unit to consider as fundamental domain (or set of two of them) than a parallelogram consisting of part of one tile and part of another."}
+{"text":"In 2D there may be translational symmetry in one direction for vectors of any length. One line, not in the same direction, fully defines the whole object. Similarly, in 3D there may be translational symmetry in one or two directions for vectors of any length. One plane (cross-section) or line, respectively, fully defines the whole object."}
+{"text":"An example of translational symmetry in one direction in 2D is:"}
+{"text":"Note: the example has no rotational symmetry."}
+{"text":"(get the same by moving one line down and two positions to the right), and of translational symmetry in two directions in 2D (wallpaper group p1):"}
+{"text":"(get the same by moving three positions to the right, or one line down and two positions to the right; consequently get also the same moving three lines down)."}
+{"text":"In both cases there is neither mirror-image symmetry nor rotational symmetry."}
+{"text":"For a given translation of space we can consider the corresponding translation of objects. The objects with at least the corresponding translational symmetry are the fixed points of the latter, not to be confused with fixed points of the translation of space, which are non-existent."}
+{"text":"In physics and chemistry, the law of conservation of mass or principle of mass conservation states that for any system closed to all transfers of matter and energy, the mass of the system must remain constant over time: since mass can neither be added to nor removed from a closed system, the quantity of mass is conserved over time."}
+{"text":"The law implies that mass can neither be created nor destroyed, although it may be rearranged in space, or the entities associated with it may be changed in form. For example, in chemical reactions, the mass of the chemical components before the reaction is equal to the mass of the components after the reaction. Thus, during any chemical reaction and low-energy thermodynamic processes in an isolated system, the total mass of the reactants, or starting materials, must be equal to the mass of the products."}
+{"text":"The concept of mass conservation is widely used in many fields such as chemistry, mechanics, and fluid dynamics. Historically, mass conservation was demonstrated in chemical reactions independently by Mikhail Lomonosov and later rediscovered by Antoine Lavoisier in the late 18th century. The formulation of this law was of crucial importance in the progress from alchemy to the modern natural science of chemistry."}
+{"text":"The conservation of mass only holds approximately and is considered part of a series of assumptions coming from classical mechanics. The law has to be modified to comply with the laws of quantum mechanics and special relativity under the principle of mass-energy equivalence, which states that energy and mass form one conserved quantity. For very energetic systems the conservation of mass alone is shown not to hold, as is the case in nuclear reactions and particle-antiparticle annihilation in particle physics."}
+{"text":"Mass is also not generally conserved in open systems. Such is the case when various forms of energy and matter are allowed into, or out of, the system. However, unless radioactivity or nuclear reactions are involved, the amount of energy escaping (or entering) such systems as heat, mechanical work, or electromagnetic radiation is usually too small to be measured as a decrease (or increase) in the mass of the system."}
+{"text":"For systems where large gravitational fields are involved, general relativity has to be taken into account, where mass-energy conservation becomes a more complex concept, subject to different definitions, and neither mass nor energy is as strictly and simply conserved as is the case in special relativity."}
+{"text":"The law of conservation of mass can only be formulated in classical mechanics when the energy scales associated to an isolated system are much smaller than formula_1, where formula_2 is the mass of a typical object in the system, measured in the frame of reference where the object is at rest, and formula_3 is the speed of light."}
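To make that scale concrete, a rough back-of-the-envelope sketch (the 890 kJ/mol heat of combustion of methane and the 80 g/mol reactant mass are approximate textbook values assumed here) shows how tiny the relativistic mass change of a chemical reaction is compared with the rest mass involved:

```python
# Rough check (assumed approximate values) that chemical energies are
# tiny compared with m*c^2, so classical mass conservation holds well.
c = 2.998e8              # speed of light, m/s
E_chem = 890e3           # heat of combustion of one mole of methane, J (approx.)
m_react = 0.080          # mass of reactants per mole (CH4 + 2 O2), kg (approx.)

delta_m = E_chem / c**2  # mass equivalent of the released energy
fraction = delta_m / m_react
print(f"mass change: {delta_m:.2e} kg ({fraction:.1e} of the total)")
```

The fractional change comes out around 10^-10, far below what 18th- and 19th-century balances could resolve, consistent with the historical invisibility of mass-energy effects in chemistry.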
+{"text":"The law can be formulated mathematically in the fields of fluid mechanics and continuum mechanics, where the conservation of mass is usually expressed using the continuity equation, given in differential form asformula_4where formula_5 is the density (mass per unit volume), formula_6 is the time, formula_7 is the divergence, and formula_8 is the flow velocity field."}
+{"text":"The interpretation of the continuity equation for mass is the following: For a given closed surface in the system, the change in time of the mass enclosed by the surface is equal to the mass that traverses the surface, positive if matter goes in and negative if matter goes out. For the whole isolated system, this condition implies that the total mass formula_9, the sum of the masses of all components in the system, does not change in time, i.e. formula_10, where formula_11 is the differential that defines the integral over the whole volume of the system."}
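A minimal 1-D numerical sketch of this statement (grid size, velocity, and time step are illustrative assumptions): discretizing the continuity equation in conservative flux-difference form on a periodic domain keeps the total mass constant to machine precision, because every unit of mass leaving one cell enters its neighbour.

```python
import numpy as np

# 1-D continuity equation d(rho)/dt + d(rho*u)/dx = 0 on a periodic grid,
# with a conservative (upwind, u > 0) flux-difference update.
N, L, u, dt = 200, 1.0, 1.0, 0.002
dx = L / N
x = np.arange(N) * dx
rho = 1.0 + 0.5 * np.exp(-((x - 0.5) / 0.1) ** 2)   # initial density bump

m0 = rho.sum() * dx                   # total mass before
for _ in range(100):
    flux = rho * u                    # mass flux at cell faces
    rho = rho - dt / dx * (flux - np.roll(flux, 1))

m1 = rho.sum() * dx                   # total mass after
print(abs(m1 - m0))                   # close to zero: mass is conserved
```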
+{"text":"The continuity equation for the mass is part of Euler equations of fluid dynamics. Many other convection\u2013diffusion equations describe the conservation and flow of mass and matter in a given system."}
+{"text":"In chemistry, the calculation of the amount of reactant and products in a chemical reaction, or stoichiometry, is founded on the principle of conservation of mass. The principle implies that during a chemical reaction the total mass of the reactants is equal to the total mass of the products. For example, in the following reaction"}
+{"text":"where one molecule of methane (CH4) and two oxygen molecules (O2) are converted into one molecule of carbon dioxide (CO2) and two of water (H2O). The number of molecules resulting from the reaction can be derived from the principle of conservation of mass: since four hydrogen atoms, four oxygen atoms and one carbon atom are initially present (and must also be present in the final state), the number of water molecules produced must be exactly two per molecule of carbon dioxide produced."}
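The atom-counting argument above can be sketched with a small hypothetical helper that tallies each element on both sides of CH4 + 2 O2 -> CO2 + 2 H2O and checks that the tallies agree:

```python
from collections import Counter

def count_atoms(side):
    """Tally atoms over a side of a reaction: list of (coefficient, formula)."""
    total = Counter()
    for coeff, formula in side:
        for element, n in formula.items():
            total[element] += coeff * n
    return total

reactants = [(1, {"C": 1, "H": 4}), (2, {"O": 2})]   # CH4 + 2 O2
products  = [(1, {"C": 1, "O": 2}), (2, {"H": 2, "O": 1})]   # CO2 + 2 H2O

# Conservation of mass at the level of atom counts:
assert count_atoms(reactants) == count_atoms(products)
print(count_atoms(reactants))   # 1 C, 4 H, 4 O on each side
```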
+{"text":"Many engineering problems are solved by following the mass distribution in time of a given system; this practice is known as mass balance."}
+{"text":"An important idea in ancient Greek philosophy was that \"Nothing comes from nothing\", so that what exists now has always existed: no new matter can come into existence where there was none before. An explicit statement of this, along with the further principle that nothing can pass away into nothing, is found in Empedocles (c. 5th century BC): \"For it is impossible for anything to come to be from what is not, and it cannot be brought about or heard of that what is should be utterly destroyed.\""}
+{"text":"A further principle of conservation was stated by Epicurus around 3rd century BC, who, describing the nature of the Universe, wrote that \"the totality of things was always such as it is now, and always will be\"."}
+{"text":"Jain philosophy, a non-creationist philosophy based on the teachings of Mahavira (6th century BC), states that the universe and its constituents such as matter cannot be destroyed or created. The Jain text Tattvarthasutra (2nd century AD) states that a substance is permanent, but its modes are characterised by creation and destruction. A principle of the conservation of matter was also stated by Nas\u012br al-D\u012bn al-T\u016bs\u012b (around 13th century AD). He wrote that \"A body of matter cannot disappear completely. It only changes its form, condition, composition, color and other properties and turns into a different complex or elementary matter\"."}
+{"text":"The conservation of mass was obscure for millennia because of the buoyancy effect of the Earth's atmosphere on the weight of gases. For example, a piece of wood weighs less after burning; this seemed to suggest that some of its mass disappears, or is transformed or lost. This was not disproved until careful experiments were performed in which chemical reactions such as rusting were allowed to take place in sealed glass ampoules; it was found that the chemical reaction did not change the weight of the sealed container and its contents. Weighing of gases using scales was not possible until the invention of the vacuum pump in the 17th century."}
+{"text":"Once understood, the conservation of mass was of great importance in progressing from alchemy to modern chemistry. Once early chemists realized that chemical substances never disappeared but were only transformed into other substances with the same weight, these scientists could for the first time embark on quantitative studies of the transformations of substances. The idea of mass conservation plus a surmise that certain \"elemental substances\" also could not be transformed into others by chemical reactions, in turn led to an understanding of chemical elements, as well as the idea that all chemical processes and transformations (such as burning and metabolic reactions) are reactions between invariant amounts or weights of these chemical elements."}
+{"text":"Following the pioneering work of Lavoisier, the exhaustive experiments of Jean Stas supported the consistency of this law in chemical reactions, even though they were carried out with other intentions. His research indicated that in certain reactions the loss or gain could not have been more than from 2 to 4 parts in 100,000. The difference in the accuracy aimed at and attained by Lavoisier on the one hand, and by Morley and Stas on the other, is enormous."}
+{"text":"The law of conservation of mass and the analogous law of conservation of energy were finally superseded by a more general principle known as mass\u2013energy equivalence. Special relativity also redefines the concepts of mass and energy, which can be used interchangeably and are relative to the frame of reference. Several definitions had to be introduced for consistency, such as the \"rest mass\" of a particle (mass in the rest frame of the particle) and \"relativistic mass\" (in another frame). The latter term is now less frequently used."}
+{"text":"In special relativity, the conservation of mass does not apply if the system is open and energy escapes. However, it does continue to apply to totally closed (isolated) systems. If energy cannot escape a system, its mass cannot decrease. In relativity theory, so long as any type of energy is retained within a system, this energy exhibits mass."}
+{"text":"Also, mass must be differentiated from matter, since matter may \"not\" be perfectly conserved in isolated systems, even though mass is always conserved in such systems. However, matter is so nearly conserved in chemistry that violations of matter conservation were not measured until the nuclear age, and the assumption of matter conservation remains an important practical concept in most systems in chemistry and other studies that do not involve the high energies typical of radioactivity and nuclear reactions."}
+{"text":"The mass associated with chemical amounts of energy is too small to measure."}
+{"text":"The change in mass of certain kinds of open systems where atoms or massive particles are not allowed to escape, but other types of energy (such as light or heat) are allowed to enter, escape or be merged, went unnoticed during the 19th century, because the change in mass associated with addition or loss of small quantities of thermal or radiant energy in chemical reactions is very small. (In theory, mass would not change at all for experiments conducted in isolated systems where heat and work were not allowed in or out.)"}
+{"text":"Mass conservation remains correct if energy is not lost."}
+{"text":"The conservation of relativistic mass implies the viewpoint of a single observer (or the view from a single inertial frame) since changing inertial frames may result in a change of the total energy (relativistic energy) for systems, and this quantity determines the relativistic mass."}
+{"text":"The principle that the mass of a system of particles must be equal to the sum of their rest masses, even though true in classical physics, may be false in special relativity. The reason that rest masses cannot be simply added is that this does not take into account other forms of energy, such as kinetic and potential energy, and massless particles such as photons, all of which may (or may not) affect the total mass of systems."}
+{"text":"For moving massive particles in a system, examining the rest masses of the various particles also amounts to introducing many different inertial observation frames (which is prohibited if total system energy and momentum are to be conserved), and also when in the rest frame of one particle, this procedure ignores the momenta of other particles, which affect the system mass if the other particles are in motion in this frame."}
+{"text":"The conservation of both relativistic and invariant mass applies even to systems of particles created by pair production, where energy for new particles may come from kinetic energy of other particles, or from one or more photons as part of a system that includes other particles besides a photon. Again, neither the relativistic nor the invariant mass of totally closed (that is, isolated) systems changes when new particles are created. However, different inertial observers will disagree on the value of this conserved mass, if it is the relativistic mass (i.e., relativistic mass is conserved but not invariant). However, all observers agree on the value of the conserved mass if the mass being measured is the invariant mass (i.e., invariant mass is both conserved and invariant)."}
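A short sketch of why rest masses do not simply add (units with c = 1; the four-momentum convention (E, px, py, pz) is an assumption of this illustration): each photon is individually massless, yet a system of two back-to-back photons has a nonzero invariant mass, because the invariant mass is computed from the total energy and total momentum.

```python
import numpy as np

def invariant_mass(particles):
    """particles: list of four-momenta (E, px, py, pz), c = 1.
    Returns sqrt(E_total^2 - |p_total|^2)."""
    total = np.sum(particles, axis=0)
    E, p = total[0], total[1:]
    return np.sqrt(E**2 - p @ p)

photon_right = np.array([1.0, 1.0, 0.0, 0.0])   # E = |p|: massless
photon_left  = np.array([1.0, -1.0, 0.0, 0.0])

print(invariant_mass([photon_right]))                # 0.0
print(invariant_mass([photon_right, photon_left]))   # 2.0
```

All inertial observers agree on the value 2.0 here, illustrating that the invariant mass of the closed two-photon system is both conserved and frame-independent.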
+{"text":"The mass-energy equivalence formula gives a different prediction in non-isolated systems, since if energy is allowed to escape a system, both relativistic mass and invariant mass will escape also. In this case, the mass-energy equivalence formula predicts that the \"change\" in mass of a system is associated with the \"change\" in its energy due to energy being added or subtracted: formula_12 This form involving changes was the form in which this famous equation was originally presented by Einstein. In this sense, mass changes in any system are explained simply if the mass of the energy added to or removed from the system is taken into account."}
+{"text":"In general relativity, the total invariant mass of photons in an expanding volume of space will decrease, due to the red shift of such an expansion. The conservation of both mass and energy therefore depends on various corrections made to energy in the theory, due to the changing gravitational potential energy of such systems."}
+{"text":"The groundwater energy balance is the energy balance of a groundwater body in terms of incoming hydraulic energy associated with groundwater inflow into the body, energy associated with the outflow, energy conversion into heat due to friction of flow, and the resulting change of energy status and groundwater level."}
+{"text":"When multiplying the horizontal velocity of groundwater (dimension, for example, m3\/day per m2 cross-sectional area) with the groundwater potential (dimension energy per m3 water, or \"E\"\/m3) one obtains an energy flow (flux) in \"E\"\/day per m2 cross-sectional area."}
+{"text":"Summation or integration of the energy flux in a vertical cross-section of unit width (say 1 m) from the lower flow boundary (the impermeable layer or base) up to the water table in an unconfined aquifer gives the energy flow \"Fe\" through the cross-section in \"E\"\/day per m width of the aquifer."}
+{"text":"While flowing, the groundwater loses energy due to friction of flow, i.e. hydraulic energy is converted into heat. At the same time, energy may be added with the recharge of water coming into the aquifer through the water table. Thus one can make a hydraulic energy balance of a block of soil between two nearby cross-sections. In steady state, i.e. without change in energy status and without accumulation or depletion of water stored in the soil body, the energy flow in the first section plus the energy added by groundwater recharge between the sections minus the energy flow in the second section must equal the energy loss due to friction of flow."}
+{"text":"In mathematical terms this balance can be obtained by differentiating the cross-sectional integral of \"Fe\" in the direction of flow using the Leibniz rule, taking into account that the level of the water table may change in the direction of flow."}
+{"text":"The mathematics is simplified using the Dupuit\u2013Forchheimer assumption."}
+{"text":"The hydraulic friction losses can be described in analogy to \"Joule's law\" in electricity (see Joule's law#Hydraulic equivalent), where the friction losses are proportional to the square value of the current (flow) and the electrical resistance of the material through which the current occurs. In groundwater hydraulics (fluid dynamics, hydrodynamics) one often works with hydraulic conductivity (i.e. permeability of the soil for water), which is inversely proportional to the hydraulic resistance."}
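The quadratic dependence described above can be sketched as follows (the function and its arguments are hypothetical illustrations of the analogy, not the API of any groundwater program): friction loss grows with the square of the flow, and the hydraulic resistance is inversely proportional to the hydraulic conductivity.

```python
def friction_loss(Q, K, length, area):
    """Hydraulic analogue of Joule's law (illustrative form):
    energy dissipated per unit time for flow Q through a soil block,
    with resistance inversely proportional to conductivity K."""
    R = length / (K * area)   # hydraulic resistance, analogue of electrical R
    return Q**2 * R

# Doubling the flow quadruples the loss, as in Joule's law.
assert friction_loss(2.0, 1e-4, 10.0, 1.0) == 4 * friction_loss(1.0, 1e-4, 10.0, 1.0)
```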
+{"text":"The trial and error procedure is cumbersome and therefore computer programs may be developed to aid in the calculations."}
+{"text":"The energy balance of groundwater flow can be applied to flow of groundwater to subsurface drains. The computer program \"EnDrain\" compares the outcome of the traditional drain spacing equation, based on Darcy's law together with the continuity equation (i.e. conservation of mass), with the solution obtained by the energy balance and it can be seen that drain spacings are wider in the latter case. This is owing to the introduction of the energy supplied by the incoming recharge."}
+{"text":"The Carter constant is a conserved quantity for motion around black holes in the general relativistic formulation of gravity. Carter's constant was derived for a spinning, charged black hole by Australian theoretical physicist Brandon Carter in 1968. Carter's constant along with the energy, axial angular momentum, and particle rest mass provide the four conserved quantities necessary to uniquely determine all orbits in the Kerr\u2013Newman spacetime (even those of charged particles)."}
+{"text":"Carter noticed that the Hamiltonian for motion in Kerr spacetime was separable in Boyer\u2013Lindquist coordinates, allowing the constants of such motion to be easily identified using Hamilton-Jacobi theory. The Carter constant can be written as follows:"}
+{"text":"where formula_2 is the latitudinal component of the particle's angular momentum, formula_3 is the energy of the particle, formula_4 is the particle's axial angular momentum, formula_5 is the rest mass of the particle, and formula_6 is the spin parameter of the black hole. Because functions of conserved quantities are also conserved, any function of formula_7 and the three other constants of the motion can be used as a fourth constant in place of formula_7. This results in some confusion as to the form of Carter's constant. For example, it is sometimes more convenient to use:"}
+{"text":"in place of formula_7. The quantity formula_11 is useful because it is always non-negative. In general any fourth conserved quantity for motion in the Kerr family of spacetimes may be referred to as \"Carter's constant\"."}
+{"text":"Noether's theorem states that all conserved quantities are related to spacetime symmetries. Carter's constant is related to a higher order symmetry of the Kerr metric generated by a second order Killing tensor field formula_11 (different formula_11 than used above). In component form:"}
+{"text":"where formula_15 is the four-velocity of the particle in motion. The components of the Killing tensor in Boyer\u2013Lindquist coordinates are:"}
+{"text":"where formula_17 are the components of the metric tensor and formula_18 and formula_19 are the components of the principal null vectors:"}
+{"text":"The spherical symmetry of the Schwarzschild metric for non-spinning black holes allows one to reduce the problem of finding the trajectories of particles to three dimensions. In this case one only needs formula_3, formula_4, and formula_5 to determine the motion; however, the symmetry leading to Carter's constant still exists. Carter's constant for Schwarzschild space is:"}
+{"text":"By a rotation of coordinates we can put any orbit in the formula_27 plane so formula_28. In this case formula_29, the square of the orbital angular momentum."}
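A minimal numerical sketch of the statement above, using the Schwarzschild-limit form C = p_theta^2 + L_z^2 / sin^2(theta) (an assumption of this illustration, in the flat-space analogue with unit mass, radius, and angular speed): along an orbit inclined to the equatorial plane, p_theta and theta both vary, but C stays equal to the square of the total orbital angular momentum.

```python
import numpy as np

inc = 0.7          # inclination of the orbit plane (assumed value)
eps = 1e-6         # step for numerical differentiation

def angles(psi):
    """Spherical angles (theta, phi) of a point on a unit circular
    orbit of phase psi, in a plane inclined at angle inc."""
    x = np.cos(psi)
    y = np.cos(inc) * np.sin(psi)
    z = np.sin(inc) * np.sin(psi)
    return np.arccos(z), np.arctan2(y, x)

L_total_sq = 1.0            # |L|^2 with m = r = d(psi)/dt = 1
L_z = np.cos(inc)           # conserved axial angular momentum

for psi in [0.3, 1.1, 2.0, 2.8]:
    th0, _ = angles(psi - eps)
    th1, _ = angles(psi + eps)
    p_theta = (th1 - th0) / (2 * eps)        # p_theta = d(theta)/dt
    theta, _ = angles(psi)
    C = p_theta**2 + L_z**2 / np.sin(theta)**2
    assert np.isclose(C, L_total_sq, atol=1e-5)   # constant along the orbit
```

For an equatorial orbit (inc = 0) one has p_theta = 0 and sin(theta) = 1, so C reduces to L_z^2, as stated above.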
+{"text":"The Bohr\u2013Kramers\u2013Slater theory (BKS theory) was perhaps the final attempt at understanding the interaction of matter and electromagnetic radiation on the basis of the so-called old quantum theory, in which quantum phenomena are treated by imposing quantum restrictions on classically describable behaviour. It was advanced in 1924, and sticks to a \"classical\" wave description of the electromagnetic field. It was perhaps more a research program than a full physical theory, since the ideas it developed were never worked out in a quantitative way."}
+{"text":"One aspect, the idea of modelling atomic behaviour under incident electromagnetic radiation using \"virtual oscillators\" at the absorption and emission frequencies, rather than the (different) apparent frequencies of the Bohr orbits, proved significant: it led Born, Heisenberg and Kramers to explore mathematics that strongly inspired the subsequent development of matrix mechanics, the first form of modern quantum mechanics. The provocativeness of the theory also generated great discussion and renewed attention to the difficulties in the foundations of the old quantum theory. However, physically the most provocative element of the theory, that momentum and energy would not necessarily be conserved in each interaction but only overall, statistically, was soon shown to be in conflict with experiment."}
+{"text":"The initial idea of the BKS theory originated with Slater, who proposed to Bohr and Kramers the following elements of a theory of emission and absorption of radiation by atoms, to be developed during his stay in Copenhagen:"}
+{"text":"Slater's main intention seems to have been to reconcile the two conflicting models of radiation, viz. the wave and particle models. He may have had good hopes that his idea with respect to oscillators vibrating at the \"differences\" of the frequencies of electron rotations (rather than at the rotation frequencies themselves) might be attractive to Bohr because it solved a problem of the latter's atomic model, even though the physical meaning of these oscillators was far from clear. Nevertheless, Bohr and Kramers had two objections to Slater's proposal:"}
+{"text":"As Max Jammer puts it, this refocussed the theory \"to harmonize the physical picture of the continuous electromagnetic field with the physical picture, not as Slater had proposed of light quanta, but of the discontinuous quantum transitions in the atom.\" Bohr and Kramers hoped to be able to evade the photon hypothesis on the basis of ongoing work by Kramers to describe \"dispersion\" (in present-day terms inelastic scattering) of light by means of a classical theory of interaction of radiation and matter. But abandoning the concept of the photon, they instead chose to squarely accept the possibility of non-conservation of energy, and momentum."}
+{"text":"In particle physics, flavour or flavor refers to the \"species\" of an elementary particle. The Standard Model counts six flavours of quarks and six flavours of leptons. They are conventionally parameterized with \"flavour quantum numbers\" that are assigned to all subatomic particles. They can also be described by some of the family symmetries proposed for the quark-lepton generations."}
+{"text":"In classical mechanics, a force acting on a point-like particle can only alter the particle's dynamical state, i.e., its momentum, angular momentum, etc. Quantum field theory, however, allows interactions that can alter other facets of a particle's nature described by non-dynamical, discrete quantum numbers. In particular, the action of the weak force is such that it allows the conversion of quantum numbers describing mass and electric charge of both quarks and leptons from one discrete type to another. This is known as a flavour change, or flavour transmutation. Due to their quantum description, flavour states may also undergo quantum superposition."}
+{"text":"In atomic physics the principal quantum number of an electron specifies the electron shell in which it resides, which determines the energy level of the whole atom. Analogously, the five flavour quantum numbers (isospin, strangeness, charm, bottomness and topness) can characterize the quantum state of a quark, by the degree to which it exhibits each of the six distinct flavours (u, d, s, c, b, t)."}
+{"text":"Composite particles can be created from multiple quarks, forming hadrons, such as mesons and baryons, each possessing unique aggregate characteristics, such as different masses, electric charges, and decay modes. A hadron's overall flavour quantum numbers depend on the numbers of constituent quarks of each particular flavour."}
+{"text":"All of the various charges discussed above are conserved by the fact that the corresponding charge operators can be understood as \"generators of symmetries\" that commute with the Hamiltonian. Thus, the eigenvalues of the various charge operators are conserved."}
+{"text":"Absolutely conserved flavour quantum numbers in the Standard Model are:"}
+{"text":"In some theories, such as the grand unified theory, the individual baryon and lepton number conservation can be violated, if the difference between them () is conserved (see chiral anomaly)."}
+{"text":"Strong interactions conserve all flavours, but all flavour quantum numbers (other than and ) are violated (changed, non-conserved) by electroweak interactions."}
+{"text":"If there are two or more particles which have identical interactions, then they may be interchanged without affecting the physics. Any (complex) linear combination of these two particles gives the same physics, as long as the combinations are orthogonal, or perpendicular, to each other."}
+{"text":"In other words, the theory possesses symmetry transformations such as formula_1, where and are the two fields (representing the various \"generations\" of leptons and quarks, see below), and is any unitary matrix with a unit determinant. Such matrices form a Lie group called SU(2) (see special unitary group). This is an example of flavour symmetry."}
+{"text":"In quantum chromodynamics, flavour is a conserved global symmetry. In the electroweak theory, on the other hand, this symmetry is broken, and flavour changing processes exist, such as quark decay or neutrino oscillations."}
+{"text":"All leptons carry a lepton number of +1. In addition, leptons carry weak isospin, which is \u22121\/2 for the three charged leptons (i.e. electron, muon and tau) and +1\/2 for the three associated neutrinos. Each doublet of a charged lepton and a neutrino with opposite weak isospin is said to constitute one generation of leptons. In addition, one defines a quantum number called weak hypercharge, which is \u22121 for all left-handed leptons. Weak isospin and weak hypercharge are gauged in the Standard Model."}
+{"text":"Leptons may be assigned the six flavour quantum numbers: electron number, muon number, tau number, and corresponding numbers for the neutrinos. These are conserved in strong and electromagnetic interactions, but violated by weak interactions. Therefore, such flavour quantum numbers are not of great use. A separate quantum number for each generation is more useful: electronic lepton number (+1 for electrons and electron neutrinos), muonic lepton number (+1 for muons and muon neutrinos), and tauonic lepton number (+1 for tau leptons and tau neutrinos). However, even these numbers are not absolutely conserved, as neutrinos of different generations can mix; that is, a neutrino of one flavour can transform into another flavour. The strength of such mixings is specified by a matrix called the Pontecorvo\u2013Maki\u2013Nakagawa\u2013Sakata matrix (PMNS matrix)."}
+{"text":"All quarks carry a baryon number and all anti-quarks have They also all carry weak isospin, The positively charged quarks (up, charm, and top quarks) are called \"up-type quarks\" and have the negatively charged quarks (down, strange, and bottom quarks) are called \"down-type quarks\" and have Each doublet of up and down type quarks constitutes one generation of quarks."}
+{"text":"For all the quark flavour quantum numbers listed below, the convention is that the flavour charge and the electric charge of a quark have the same sign. Thus any flavour carried by a charged meson has the same sign as its charge. Quarks have the following flavour quantum numbers:"}
+{"text":"These five quantum numbers, together with baryon number (which is not a flavour quantum number), completely specify numbers of all 6 quark flavours separately (as i.e. an antiquark is counted with the minus sign). They are conserved by both the electromagnetic and strong interactions (but not the weak interaction). From them can be built the derived quantum numbers:"}
+{"text":"The terms \"strange\" and \"strangeness\" predate the discovery of the quark, but continued to be used after its discovery for the sake of continuity (i.e. the strangeness of each type of hadron remained the same); strangeness of anti-particles being referred to as +1, and particles as \u22121 as per the original definition. Strangeness was introduced to explain the rate of decay of newly discovered particles, such as the kaon, and was used in the Eightfold Way classification of hadrons and in subsequent quark models. These quantum numbers are preserved under strong and electromagnetic interactions, but not under weak interactions."}
+{"text":"For first-order weak decays, that is processes involving only one quark decay, these quantum numbers (e.g. charm) can only vary by 1, that is, for a decay involving a charmed quark or antiquark either as the incident particle or as a decay byproduct, likewise, for a decay involving a bottom quark or antiquark Since first-order processes are more common than second-order processes (involving two quark decays), this can be used as an approximate \"selection rule\" for weak decays."}
+{"text":"A special mixture of quark flavours is an eigenstate of the weak interaction part of the Hamiltonian, so will interact in a particularly simple way with the W bosons (charged weak interactions violate flavour). On the other hand, a fermion of a fixed mass (an eigenstate of the kinetic and strong interaction parts of the Hamiltonian) is an eigenstate of flavour. The transformation from the former basis to the flavour-eigenstate\/mass-eigenstate basis for quarks underlies the Cabibbo\u2013Kobayashi\u2013Maskawa matrix (CKM matrix). This matrix is analogous to the PMNS matrix for neutrinos, and quantifies flavour changes under charged weak interactions of quarks."}
+{"text":"The CKM matrix allows for CP violation if there are at least three generations."}
+{"text":"Flavour quantum numbers are additive. Hence antiparticles have flavour equal in magnitude to the particle but opposite in sign. Hadrons inherit their flavour quantum number from their valence quarks: this is the basis of the classification in the quark model. The relations between the hypercharge, electric charge and other flavour quantum numbers hold for hadrons as well as quarks."}
+{"text":"Quantum chromodynamics (QCD) contains six flavours of quarks. However, their masses differ and as a result they are not strictly interchangeable with each other. The up and down flavours are close to having equal masses, and the theory of these two quarks possesses an approximate SU(2) symmetry (isospin symmetry)."}
+{"text":"Under some circumstances (for instance when the quark masses are much smaller than the chiral symmetry breaking scale of 250\u00a0MeV), the masses of quarks do not meaningfully contribute to the system's behavior, and can be ignored to zeroth approximation. The simplified behavior of flavour transformations can then be successfully modeled as acting independently on the left- and right-handed parts of each quark field. This approximate description of the flavour symmetry is described by a chiral group ."}
+{"text":"If all quarks had non-zero but equal masses, then this chiral symmetry is broken to the \"vector symmetry\" of the \"diagonal flavour group\" , which applies the same transformation to both helicities of the quarks. This reduction of symmetry is a form of \"explicit symmetry breaking\". The strength of explicit symmetry breaking is controlled by the current quark masses in QCD."}
+{"text":"Even if quarks are massless, chiral flavour symmetry can be spontaneously broken if the vacuum of the theory contains a chiral condensate (as it does in low-energy QCD). This gives rise to an effective mass for the quarks, often identified with the valence quark mass in QCD."}
+{"text":"Analysis of experiments indicate that the current quark masses of the lighter flavours of quarks are much smaller than the QCD scale, \u039bQCD, hence chiral flavour symmetry is a good approximation to QCD for the up, down and strange quarks. The success of chiral perturbation theory and the even more naive chiral models spring from this fact. The valence quark masses extracted from the quark model are much larger than the current quark mass. This indicates that QCD has spontaneous chiral symmetry breaking with the formation of a chiral condensate. Other phases of QCD may break the chiral flavour symmetries in other ways."}
+{"text":"Some of the historical events that led to the development of flavour symmetry are discussed in the article on isospin, the eightfold way (physics) and chiral symmetry. Chief among these would be the November Revolution (physics) in 1974, when the fourth (charm) quark was found."}
+{"text":"Reciprocity in electrical networks is a property of a circuit that relates voltages and currents at two points. The reciprocity theorem states that the current at one point in a circuit due to a voltage at a second point is the same as the current at the second point due to the same voltage at the first. The reciprocity theorem is valid for almost all passive networks. The reciprocity theorem is a feature of a more general principle of reciprocity in electromagnetism."}
+{"text":"If a current, formula_1, injected into port A produces a voltage, formula_2, at port B and formula_1 injected into port B produces formula_2 at port A, then the network is said to be reciprocal. Equivalently, reciprocity can be defined by the dual situation; applying voltage, formula_5, at port A producing current formula_6 at port B and formula_5 at port B producing current formula_6 at port A. In general, passive networks are reciprocal. Any network that consists entirely of ideal capacitances, inductances (including mutual inductances), and resistances, that is, elements that are linear and bilateral, will be reciprocal. However, passive components that are non-reciprocal do exist. Any component containing ferromagnetic material is likely to be non-reciprocal. Examples of passive components deliberately designed to be non-reciprocal include circulators and isolators."}
+{"text":"The transfer function of a reciprocal network has the property that it is symmetrical about the main diagonal if expressed in terms of a z-parameter, y-parameter, or s-parameter matrix. A non-symmetrical matrix implies a non-reciprocal network. A symmetric matrix does not imply a symmetric network."}
+{"text":"In some parametisations of networks, the representative matrix is not symmetrical for reciprocal networks. Common examples are h-parameters and ABCD-parameters, but they all have some other condition for reciprocity that can be calculated from the parameters. For h-parameters the condition is formula_9 and for the ABCD parameters it is formula_10. These representations mix voltages and currents in the same column vector and therefore do not even have matching units in transposed elements."}
+{"text":"An example of reciprocity can be demonstrated using an asymmetrical resistive attenuator. An asymmetrical network is chosen as the example because a symmetrical network is fairly self-evidently reciprocal."}
+{"text":"Injecting six amps into port 1 of this network produces 24 volts at port 2."}
+{"text":"Injecting six amps into port 2 produces 24 volts at port 1."}
+{"text":"Hence, the network is reciprocal. In this example, the port that is not injecting current is left open circuit. This is because a current generator applying zero current is an open circuit. If, on the other hand, one wished to apply voltages and measure the resulting current, then the port to which the voltage is not applied would be made short circuit. This is because a voltage generator applying zero volts is a short circuit."}
+{"text":"Reciprocity of electrical networks is a special case of Lorentz reciprocity, but it can also be proven more directly from network theorems. This proof shows reciprocity for a two-node network in terms of its admittance matrix, and then shows reciprocity for a network with an arbitrary number of nodes by an induction argument. A linear network can be represented as a set of linear equations through nodal analysis. These equations can be expressed in the form of an admittance matrix,"}
+{"text":"If we further require that network is made up of passive, bilateral elements, then"}
+{"text":"since the admittance connected between nodes \"j\" and \"k\" is the same element as the admittance connected between nodes \"k\" and \"j\". The matrix is therefore symmetrical. For the case where formula_17 the matrix reduces to,"}
+{"text":"From which it can be seen that,"}
+{"text":"which is synonymous with the condition for reciprocity. In words, the ratio of the current at one port to the voltage at another is the same ratio if the ports being driven and measured are interchanged. Thus reciprocity is proven for the case of formula_17."}
+{"text":"For the case of a matrix of arbitrary size, the order of the matrix can be reduced through node elimination. After eliminating the \"s\"th node, the new admittance matrix will have the form,"}
+{"text":"It can be seen that this new matrix is also symmetrical. Nodes can continue to be eliminated in this way until only a 2\u00d72 symmetrical matrix remains involving the two nodes of interest. Since this matrix is symmetrical it is proved that reciprocity applies to a matrix of arbitrary size when one node is driven by a voltage and current measured at another. A similar process using the impedance matrix from mesh analysis demonstrates reciprocity where one node is driven by a current and voltage is measured at another."}
+{"text":"The Extra Element Theorem (EET) is an analytic technique developed by R. D. Middlebrook for simplifying the process of deriving driving point and transfer functions for linear electronic circuits. Much like Th\u00e9venin's theorem, the extra element theorem breaks down one complicated problem into several simpler ones."}
+{"text":"Driving point and transfer functions can generally be found using Kirchhoff's circuit laws. However several complicated equations may result that offer little insight into the circuit's behavior. Using the extra element theorem, a circuit element (such as a resistor) can be removed from a circuit and the desired driving point or transfer function found. By removing the element that most complicates the circuit (such as an element that creates feedback), the desired function can be easier to obtain. Next two correctional factors must be found and combined with the previously derived function to find the exact expression."}
+{"text":"The general form of the extra element theorem is called the N-extra element theorem and allows multiple circuit elements to be removed at once."}
+{"text":"The (single) extra element theorem expresses any transfer function as a product of the transfer function with that element removed and a correction factor. The correction factor term consists of the impedance of the extra element and two driving point impedances seen by the extra element: The double null injection driving point impedance and the single injection driving point impedance. Because an extra element can be removed in general by either short-circuiting or open-circuiting the element, there are two equivalent forms of the EET:"}
+{"text":"Where the Laplace-domain transfer functions and impedances in the above expressions are defined as follows: is the transfer function with the extra element present. is the transfer function with the extra element open-circuited. is the transfer function with the extra element short-circuited. is the impedance of the extra element. is the single-injection driving point impedance \"seen\" by the extra element. is the double-null-injection driving point impedance \"seen\" by the extra element."}
+{"text":"The extra element theorem incidentally proves that any electric circuit transfer function can be expressed as no more than a bilinear function of any particular circuit element."}
+{"text":"is found by making the input to the system's transfer function zero (short circuit a voltage source or open circuit a current source) and determining the impedance across the terminals to which the extra element will be connected with the extra element absent. This impedance is same as the Th\u00e9venin's equivalent impedance."}
+{"text":"is found by replacing the extra element with a second test signal source (either current source or voltage source as appropriate). Then, is defined as the ratio of voltage across the terminals of this second test source to the current leaving its positive terminal when the output of the system's transfer function is nulled for any value of the primary input to the system's transfer function."}
+{"text":"In practice, can be found from working backwards from the facts that the output of the transfer function is made zero and that the primary input to the transfer function is unknown. Then using conventional circuit analysis techniques to express both the voltage across the extra element test source's terminals, , and the current leaving the extra element test source's positive terminals, , and calculating formula_3. Although computation of is an unfamiliar process for many engineers, its expressions are often much simpler than those for because the nulling of the transfer function's output often leads to other voltages\/currents in the circuit being zero, which may allow exclusion of certain components from analysis."}
+{"text":"Special case with transfer function as a self-impedance."}
+{"text":"As a special case, the EET can be used to find the input impedance of a network with the addition of an element designated as \"extra\". In this case, is same as the impedance of the input test current source signal made zero or equivalently with the input open circuited. Likewise, since the transfer function output signal can be considered to be the voltage at the input terminals, is found when the input voltage is zero i.e. the input terminals are short-circuited. Thus, for this particular application the EET can be written as:"}
+{"text":"Computing these three terms may seem like extra effort, but they are often easier to compute than the overall input impedance."}
+{"text":"Consider the problem of finding formula_9 for the circuit in Figure 1 using the EET (note all component values are unity for simplicity). If the capacitor (gray shading) is denoted the extra element then"}
+{"text":"Calculating the impedance seen by the capacitor with the input shorted,"}
+{"text":"Calculating the impedance seen by the capacitor with the input open,"}
+{"text":"This problem was solved by calculating three simple driving point impedances by inspection."}
+{"text":"The EET is also useful for analyzing single and multi-loop feedback amplifiers. In this case the EET can take the form of the asymptotic gain model."}
+{"text":"The star-mesh transform, or star-polygon transform, is a mathematical circuit analysis technique to transform a resistive network into an equivalent network with one less node. The equivalence follows from the Schur complement identity applied to the Kirchhoff matrix of the network."}
+{"text":"The equivalent impedance betweens nodes A and B is given by:"}
+{"text":"where formula_2 is the impedance between node A and the central node being removed."}
+{"text":"The transform replaces \"N\" resistors with formula_3 resistors. For formula_4, the result is an increase in the number of resistors, so the transform has no general inverse without additional constraints."}
+{"text":"It is possible, though not necessarily efficient, to transform an arbitrarily complex two-terminal resistive network into a single equivalent resistor by repeatedly applying the star-mesh transform to eliminate each non-terminal node."}
+{"text":"Source transformation is the process of simplifying a circuit solution, especially with mixed sources, by transforming voltage sources into current sources, and vice versa, using Th\u00e9venin's theorem and Norton's theorem respectively."}
+{"text":"Performing a source transformation consists of using Ohm's law to take an existing voltage source in series with a resistance, and replacing it with a current source in parallel with the same resistance, or vice versa. The transformed sources are considered identical and can be substituted for one another in a circuit."}
+{"text":"Source transformations are not limited to resistive circuits. They can be performed on a circuit involving capacitors and inductors as well, by expressing circuit elements as impedances and sources in the frequency domain. In general, the concept of source transformation is an application of Th\u00e9venin's theorem to a current source, or Norton's theorem to a voltage source. However, this means that source transformation is bound by the same conditions as Thevenin's theorem and Norton's theorem; namely that the load behaves linearly, and does not contain dependent voltage or current sources."}
+{"text":"Source transformations are easy to compute using Ohm's law. If there is a voltage source in series with an impedance, it is possible to find the value of the equivalent current source in parallel with the impedance by dividing the value of the voltage source by the value of the impedance. The converse also holds: if a current source in parallel with an impedance is present, multiplying the value of the current source with the value of the impedance provides the equivalent voltage source in series with the impedance. A visual example of a source transformation can be seen in Figure 1."}
+{"text":"The transformation can be derived from the uniqueness theorem. In the present context, it implies that a black box with two terminals must have a unique well-defined relation between its voltage and current. It is readily to verify that the above transformation indeed gives the same V-I curve, and therefore the transformation is valid."}
+{"text":"In the mathematical theory of bifurcations, a Hopf bifurcation is a critical point where a system's stability switches and a periodic solution arises. More accurately, it is a local bifurcation in which a fixed point of a dynamical system loses stability, as a pair of complex conjugate eigenvalues\u2014of the linearization around the fixed point\u2014crosses the complex plane imaginary axis. Under reasonably generic assumptions about the dynamical system, a small-amplitude limit cycle branches from the fixed point."}
+{"text":"A Hopf bifurcation is also known as a Poincar\u00e9\u2013Andronov\u2013Hopf bifurcation, named after Henri Poincar\u00e9, Aleksandr Andronov and Eberhard Hopf."}
+{"text":"The limit cycle is orbitally stable if a specific quantity called the first Lyapunov coefficient is negative, and the bifurcation is supercritical. Otherwise it is unstable and the bifurcation is subcritical."}
+{"text":"The normal form of a Hopf bifurcation is:"}
+{"text":"Write: formula_2 The number \"\u03b1\" is called the first Lyapunov coefficient."}
+{"text":"Hopf bifurcations occur in the Lotka\u2013Volterra model of predator\u2013prey interaction (known as paradox of enrichment), the Hodgkin\u2013Huxley model for nerve membrane, the Selkov model of glycolysis, the Belousov\u2013Zhabotinsky reaction, the Lorenz attractor, the Brusselator and Classical electromagnetism."}
+{"text":"The phase portrait illustrating the Hopf bifurcation in the Selkov model is shown on the right."}
+{"text":"In railway vehicle systems, Hopf bifurcation analysis is notably important. Conventionally a railway vehicle's stable motion at low speeds crosses over to unstable at high speeds. One aim of the nonlinear analysis of these systems is to perform an analytical investigation of bifurcation, nonlinear lateral stability and hunting behavior of rail vehicles on a tangent track, which uses the Bogoliubov method."}
+{"text":"The appearance or the disappearance of a periodic orbit through a local change in the stability properties of a fixed point is known as the Hopf bifurcation. The following theorem works for fixed points with one pair of conjugate nonzero purely imaginary eigenvalues. It tells the conditions under which this bifurcation phenomenon occurs."}
+{"text":"Theorem (see section 11.2 of ). Let formula_6 be the Jacobian of a continuous parametric dynamical system evaluated at a steady point formula_7. Suppose that all eigenvalues of formula_6 have negative real part except one conjugate nonzero purely imaginary pair formula_9. A \"Hopf bifurcation\" arises when these two eigenvalues cross the imaginary axis because of a variation of the system parameters."}
+{"text":"Routh\u2013Hurwitz criterion (section I.13 of ) gives necessary conditions so that a Hopf bifurcation occurs. Let us see how one can use concretely this idea."}
+{"text":"Let formula_10 be Sturm series associated to a characteristic polynomial formula_11. They can be written in the form:"}
+{"text":"The coefficients formula_13 for formula_14 in formula_15 correspond to what is called Hurwitz determinants. Their definition is related to the associated Hurwitz matrix."}
+{"text":"Proposition 1. If all the Hurwitz determinants formula_13 are positive, apart perhaps formula_17 then the associated Jacobian has no pure imaginary eigenvalues."}
+{"text":"Proposition 2. If all Hurwitz determinants formula_13 (for all formula_14 in formula_20 are positive, formula_21 and formula_22 then all the eigenvalues of the associated Jacobian have negative real parts except a purely imaginary conjugate pair."}
+{"text":"The conditions that we are looking for so that a Hopf bifurcation occurs (see theorem above) for a parametric continuous dynamical system are given by this last proposition."}
+{"text":"Consider the classical Van der Pol oscillator written with ordinary differential equations:"}
+{"text":"The Jacobian matrix associated to this system follows:"}
+{"text":"The characteristic polynomial (in formula_25) of the linearization at (0,0) is equal to:"}
+{"text":"The Sturm polynomials can be written as (here formula_29):"}
+{"text":"The above proposition 2 tells that one must have:"}
+{"text":"Because 1\u00a0>\u00a00 and \u22121\u00a0<\u00a00 are obvious, one can conclude that a Hopf bifurcation may occur for Van der Pol oscillator if formula_32."}
+{"text":"In electrical engineering and science, an equivalent circuit refers to a theoretical circuit that retains all of the electrical characteristics of a given circuit. Often, an equivalent circuit is sought that simplifies calculation, and more broadly, that is a simplest form of a more complex circuit in order to aid analysis. In its most common form, an equivalent circuit is made up of linear, passive elements. However, more complex equivalent circuits are used that approximate the nonlinear behavior of the original circuit as well. These more complex circuits often are called \"macromodels\" of the original circuit. An example of a macromodel is the Boyle circuit for the 741 operational amplifier."}
+{"text":"One of linear circuit theory's most surprising properties relates to the ability to treat any two-terminal circuit no matter how complex as behaving as only a source and an impedance, which have either of two simple equivalent circuit forms:"}
+{"text":"However, the single impedance can be of arbitrary complexity (as a function of frequency) and may be irreducible to a simpler form."}
+{"text":"In linear circuits, due to the superposition principle, the output of a circuit is equal to the sum of the output due to its DC sources alone, and the output from its AC sources alone. Therefore, the DC and AC response of a circuit is often analyzed independently, using separate DC and AC equivalent circuits which have the same response as the original circuit to DC and AC currents respectively. The composite response is calculated by adding the DC and AC responses:"}
+{"text":"This technique is often extended to small-signal nonlinear circuits like tube and transistor circuits, by linearizing the circuit about the DC bias point Q-point, using an AC equivalent circuit made by calculating the equivalent \"small signal\" AC resistance of the nonlinear components at the bias point."}
+{"text":"Linear four-terminal circuits in which a signal is applied to one pair of terminals and an output is taken from another, are often modeled as two-port networks. These can be represented by simple equivalent circuits of impedances and dependent sources. To be analyzed as a two port network the currents applied to the circuit must satisfy the \"port condition\": the current entering one terminal of a port must be equal to the current leaving the other terminal of the port. By linearizing a nonlinear circuit about its operating point, such a two-port representation can be made for transistors: see hybrid pi and h-parameter circuits."}
+{"text":"In three phase power circuits, three phase sources and loads can be connected in two different ways, called a \"delta\" connection and a \"wye\" connection. In analyzing circuits, sometimes it simplifies the analysis to convert between equivalent wye and delta circuits. This can be done with the wye-delta transform."}
+{"text":"Equivalent circuits can be used to electrically describe and model either a) continuous materials or biological systems in which current does not actually flow in defined circuits, or, b) distributed reactances, such as found in electrical lines or windings, that do not represent actual discrete components. For example, a cell membrane can be modelled as a capacitance (i.e. the lipid bilayer) in parallel with resistance-DC voltage source combinations (i.e. ion channels powered by an ion gradient across the membrane)."}
+{"text":"In classical electromagnetism, reciprocity refers to a variety of related theorems involving the interchange of time-harmonic electric current densities (sources) and the resulting electromagnetic fields in Maxwell's equations for time-invariant linear media under certain constraints. Reciprocity is closely related to the concept of Hermitian operators from linear algebra, applied to electromagnetism."}
+{"text":"Reciprocity is useful in optics, which (apart from quantum effects) can be expressed in terms of classical electromagnetism, but also in terms of radiometry."}
+{"text":"There is also an analogous theorem in electrostatics, known as Green's reciprocity, relating the interchange of electric potential and electric charge density."}
+{"text":"Forms of the reciprocity theorems are used in many electromagnetic applications, such as analyzing electrical networks and antenna systems. For example, reciprocity implies that antennas work equally well as transmitters or receivers, and specifically that an antenna's radiation and receiving patterns are identical. Reciprocity is also a basic lemma that is used to prove other theorems about electromagnetic systems, such as the symmetry of the impedance matrix and scattering matrix, symmetries of Green's functions for use in boundary-element and transfer-matrix computational methods, as well as orthogonality properties of harmonic modes in waveguide systems (as an alternative to proving those properties directly from the symmetries of the eigen-operators)."}
+{"text":"Specifically, suppose that one has a current density formula_1 that produces an electric field formula_2 and a magnetic field formula_3, where all three are periodic functions of time with angular frequency \u03c9, and in particular they have time-dependence formula_4. Suppose that we similarly have a second current formula_5 at the same frequency \u03c9 which (by itself) produces fields formula_6 and formula_7. The Lorentz reciprocity theorem then states, under certain simple conditions on the materials of the medium described below, that for an arbitrary surface \"S\" enclosing a volume \"V\":"}
+{"text":"Equivalently, in differential form (by the divergence theorem):"}
+{"text":"This general form is commonly simplified for a number of special cases. In particular, one usually assumes that formula_1 and formula_5 are localized (i.e. have compact support), and that there are no incoming waves from infinitely far away. In this case, if one integrates throughout space then the surface-integral terms cancel (see below) and one obtains:"}
+{"text":"This result (along with the following simplifications) is sometimes called the Rayleigh-Carson reciprocity theorem, after Lord Rayleigh's work on sound waves and an extension by John R. Carson (1924; 1930) to applications for radio frequency antennas. Often, one further simplifies this relation by considering point-like dipole sources, in which case the integrals disappear and one simply has the product of the electric field with the corresponding dipole moments of the currents. Or, for wires of negligible thickness, one obtains the applied current in one wire multiplied by the resulting voltage across another and vice versa; see also below."}
+{"text":"Another special case of the Lorentz reciprocity theorem applies when the volume \"V\" entirely contains \"both\" of the localized sources (or alternatively if \"V\" intersects \"neither\" of the sources). In this case:"}
+{"text":"Above, Lorentz reciprocity was phrased in terms of an externally applied current source and the resulting field. Often, especially for electrical networks, one instead prefers to think of an externally applied voltage and the resulting currents. The Lorentz reciprocity theorem describes this case as well, assuming ohmic materials (i.e. currents that respond linearly to the applied field) with a 3\u00d73 conductivity matrix \u03c3 that is required to be symmetric, which is implied by the other conditions below. In order to properly describe this situation, one must carefully distinguish between the externally \"applied\" fields (from the driving voltages) and the \"total\" fields that result (King, 1963)."}
+{"text":"More specifically, the formula_14 above only consisted of external \"source\" terms introduced into Maxwell's equations. We now denote this by formula_15 to distinguish it from the \"total\" current produced by both the external source and by the resulting electric fields in the materials. If this external current is in a material with a conductivity \u03c3, then it corresponds to an externally applied electric field formula_16 where, by definition of \u03c3:"}
+{"text":"Moreover, the electric field formula_18 above only consisted of the \"response\" to this current, and did not include the \"external\" field formula_16. Therefore, we now denote the field from before as formula_20, where the \"total\" field is given by formula_21."}
+{"text":"Now, the equation on the left-hand side of the Lorentz reciprocity theorem can be rewritten by moving the \u03c3 from the external current term formula_15 to the response field terms formula_20, and also adding and subtracting a formula_24 term, to obtain the external field multiplied by the \"total\" current formula_25:"}
+{"text":"For the limit of thin wires, this gives the product of the externally applied voltage (1) multiplied by the resulting total current (2) and vice versa. In particular, the Rayleigh-Carson reciprocity theorem becomes a simple summation:"}
+{"text":"where \"V\" and \"I\" denote the complex amplitudes of the AC applied voltages and the resulting currents, respectively, in a set of circuit elements (indexed by \"n\") for two possible sets of voltages formula_28 and formula_29."}
+{"text":"Most commonly, this is simplified further to the case where each system has a \"single\" voltage source \"V\", at formula_30 and formula_31. Then the theorem becomes simply"}
+{"text":"The Lorentz reciprocity theorem is simply a reflection of the fact that the linear operator formula_33 relating formula_14 and formula_18 at a fixed frequency formula_36 (in linear media):"}
+{"text":"For any Hermitian operator formula_33 under an inner product formula_43, we have formula_44 by definition, and the Rayleigh-Carson reciprocity theorem is merely the vectorial version of this statement for this particular operator formula_45: that is, formula_46. The Hermitian property of the operator here can be derived by integration by parts. For a finite integration volume, the surface terms from this integration by parts yield the more-general surface-integral theorem above. In particular, the key fact is that, for vector fields formula_39 and formula_40, integration by parts (or the divergence theorem) over a volume \"V\" enclosed by a surface \"S\" gives the identity:"}
+{"text":"This identity is then applied twice to formula_50 to yield formula_51 plus the surface term, giving the Lorentz reciprocity relation."}
+{"text":"Conditions and proof of Lorenz reciprocity using Maxwell's equations and vector operations"}
+{"text":"We shall prove a general form of the electromagnetic reciprocity theorem due to Lorenz which states that fields formula_52 and formula_53 generated by two different sinusoidal current densities respectively formula_54 and formula_55 of the same frequency, satisfy the condition"}
+{"text":"Let us take a region in which dielectric constant and permeability may be functions of position but not of time. Maxwell's equations, written in terms of the total fields, currents and charges of the region describe the electromagnetic behavior of the region. The two curl equations are:"}
+{"text":"Under steady constant frequency conditions we get from the two curl equations the Maxwell's equations for the Time-Periodic case:"}
+{"text":"It must be recognized that the symbols in the equations of this article represent the complex multipliers of formula_59, giving the in-phase and out-of-phase parts with respect to the chosen reference. The complex vector multipliers of formula_59 may be called \"vector phasors\" by analogy to the complex scalar quantities which are commonly referred to as \"phasors\"."}
+{"text":"An equivalence of vector operations shows that"}
+{"text":"formula_61 for every vectors formula_62 and formula_63."}
+{"text":"If we apply this equivalence to formula_64 and formula_65 we get:"}
+{"text":"If products in the Time-Periodic equations are taken as indicated by this last equivalence, and added,"}
+{"text":"This now may be integrated over the volume of concern,"}
+{"text":"From the divergence theorem the volume integral of formula_69 equals the surface integral of formula_70 over the boundary."}
+{"text":"This form is valid for general media, but in the common case of linear, isotropic, time-invariant materials, formula_72 is a scalar independent of time. Then generally as physical magnitudes formula_73 and formula_74."}
+{"text":"In an exactly analogous way we get for vectors formula_6 and formula_3 the following expression:"}
+{"text":"Subtracting the two last equations by members we get"}
+{"text":"The cancellation of the surface terms on the right-hand side of the Lorentz reciprocity theorem, for an integration over all space, is not entirely obvious but can be derived in a number of ways."}
+{"text":"Another simple argument would be that the fields goes to zero at infinity for a localized source, but this argument fails in the case of lossless media: in the absence of absorption, radiated fields decay inversely with distance, but the surface area of the integral increases with the square of distance, so the two rates balance one another in the integral."}
+{"text":"Instead, it is common (e.g. King, 1963) to assume that the medium is homogeneous and isotropic sufficiently far away. In this case, the radiated field asymptotically takes the form of planewaves propagating radially outward (in the formula_81 direction) with formula_82 and formula_83 where \"Z\" is the impedance formula_84 of the surrounding medium. Then it follows that formula_85, which by a simple vector identity equals formula_86. Similarly, formula_87 and the two terms cancel one another."}
+{"text":"The above argument shows explicitly why the surface terms can cancel, but lacks generality. Alternatively, one can treat the case of lossless surrounding media by taking the limit as the losses (the imaginary part of \u03b5) go to zero. For any nonzero loss, the fields decay exponentially with distance and the surface integral vanishes, regardless of whether the medium is homogeneous. Since the left-hand side of the Lorentz reciprocity theorem vanishes for integration over all space with any non-zero losses, it must also vanish in the limit as the losses go to zero. (Note that we implicitly assumed the standard boundary condition of zero incoming waves from infinity, because otherwise even an infinitesimal loss would eliminate the incoming waves and the limit would not give the lossless solution.)"}
+{"text":"One case in which \u03b5 is \"not\" a symmetric matrix is for magneto-optic materials, in which case the usual statement of Lorentz reciprocity does not hold (see below for a generalization, however). If we allow magneto-optic materials, but restrict ourselves to the situation where material \"absorption is negligible\", then \u03b5 and \u03bc are in general 3\u00d73 complex Hermitian matrices. In this case, the operator formula_99 is Hermitian under the \"conjugated\" inner product formula_100, and a variant of the reciprocity theorem still holds:"}
+{"text":"where the sign changes come from the formula_102 in the equation above, which makes the operator formula_33 anti-Hermitian (neglecting surface terms). For the special case of formula_104, this gives a re-statement of conservation of energy or Poynting's theorem (since here we have assumed lossless materials, unlike above): the time-average rate of work done by the current (given by the real part of formula_105) is equal to the time-average outward flux of power (the integral of the Poynting vector). By the same token, however, the surface terms do not in general vanish if one integrates over all space for this reciprocity variant, so a Rayleigh-Carson form does not hold without additional assumptions."}
+{"text":"The fact that magneto-optic materials break Rayleigh-Carson reciprocity is the key to devices such as Faraday isolators and circulators. A current on one side of a Faraday isolator produces a field on the other side but \"not\" vice versa."}
+{"text":"For a combination of lossy and magneto-optic materials, and in general when the \u03b5 and \u03bc tensors are neither symmetric nor Hermitian matrices, one can still obtain a generalized version of Lorentz reciprocity by considering formula_106 and formula_107 to exist in \"different systems.\""}
+{"text":"In particular, if formula_106 satisfy Maxwell's equations at \u03c9 for a system with materials formula_109, and formula_107 satisfy Maxwell's equations at \u03c9 for a system with materials formula_111, where \"T\" denotes the transpose, then the equation of Lorentz reciprocity holds. This can be further generalized to bi-anisotropic materials by transposing the full 6\u00d76 susceptibility tensor."}
+{"text":"For nonlinear media, no reciprocity theorem generally holds. Reciprocity also does not generally apply for time-varying (\"active\") media; for example, when \u03b5 is modulated in time by some external process. (In both of these cases, the frequency \u03c9 is not generally a conserved quantity.)"}
+{"text":"A closely related reciprocity theorem was articulated independently by Y. A. Feld and C. T. Tai in 1992 and is known as Feld-Tai reciprocity or the Feld-Tai lemma. It relates two time-harmonic localized current sources and the resulting magnetic fields:"}
+{"text":"However, the Feld-Tai lemma is only valid under much more restrictive conditions than Lorentz reciprocity. It generally requires time-invariant linear media with an isotropic homogeneous impedance, i.e. a constant scalar \u03bc\/\u03b5 ratio, with the possible exception of regions of perfectly conducting material."}
+{"text":"More precisely, Feld-Tai reciprocity requires the Hermitian (or rather, complex-symmetric) symmetry of the electromagnetic operators as above, but also relies on the assumption that the operator relating formula_18 and formula_114 is a constant scalar multiple of the operator relating formula_115 and formula_116, which is true when \u03b5 is a constant scalar multiple of \u03bc (the two operators generally differ by an interchange of \u03b5 and \u03bc). As above, one can also construct a more general formulation for integrals over a finite volume."}
+{"text":"Apart from quantal effects, classical theory covers near-, middle-, and far-field electric and magnetic phenomena with arbitrary time courses. Optics refers to far-field nearly-sinusoidal oscillatory electromagnetic effects. Instead of paired electric and magnetic variables, optics, including optical reciprocity, can be expressed in polarization-paired radiometric variables, such as spectral radiance, traditionally called specific intensity."}
+{"text":"This is sometimes called the Helmholtz reciprocity (or reversion) principle. When the wave propagates through a material acted upon by an applied magnetic field, reciprocity can be broken so this principle will not apply. Similarly, when there are moving objects in the path of the ray, the principle may be entirely inapplicable. Historically, in 1849, Sir George Stokes stated his optical reversion principle without attending to polarization."}
+{"text":"Like the principles of thermodynamics, this principle is reliable enough to use as a check on the correct performance of experiments, in contrast with the usual situation in which the experiments are tests of a proposed law."}
+{"text":"The simplest statement of the principle is 'if I can see you, then you can see me.' The principle was used by Gustav Kirchhoff in his derivation of his law of thermal radiation and by Max Planck in his analysis of his law of thermal radiation."}
+{"text":"For ray-tracing global illumination algorithms, incoming and outgoing light can be considered as reversals of each other, without affecting the bidirectional reflectance distribution function (BRDF) outcome."}
+{"text":"Whereas the above reciprocity theorems were for oscillating fields, Green's reciprocity is an analogous theorem for electrostatics with a fixed distribution of electric charge (Panofsky and Phillips, 1962)."}
+{"text":"In particular, let formula_117 denote the electric potential resulting from a total charge density formula_118. The electric potential satisfies Poisson's equation, formula_119, where formula_120 is the vacuum permittivity. Similarly, let formula_121 denote the electric potential resulting from a total charge density formula_122, satisfying formula_123. In both cases, we assume that the charge distributions are localized, so that the potentials can be chosen to go to zero at infinity. Then, Green's reciprocity theorem states that, for integrals over all space:"}
+{"text":"This theorem is easily proven from Green's second identity. Equivalently, it is the statement that formula_125, i.e. that formula_126 is a Hermitian operator (as follows by integrating by parts twice)."}
+{"text":"The Miller theorem refers to the process of creating equivalent circuits. It asserts that a floating impedance element, supplied by two voltage sources connected in series, may be split into two grounded elements with corresponding impedances. There is also a dual Miller theorem with regards to impedance supplied by two current sources connected in parallel. The two versions are based on the two Kirchhoff's circuit laws."}
+{"text":"Miller theorems are not only pure mathematical expressions. These arrangements explain important circuit phenomena about modifying impedance (Miller effect, virtual ground, bootstrapping, negative impedance, etc.) and help in designing and understanding various commonplace circuits (feedback amplifiers, resistive and time-dependent converters, negative impedance converters, etc.). The theorems are useful in 'circuit analysis' especially for analyzing circuits with feedback and certain transistor amplifiers at high frequencies."}
+{"text":"There is a close relationship between Miller theorem and Miller effect: the theorem may be considered as a generalization of the effect and the effect may be thought as of a special case of the theorem."}
+{"text":"The Miller theorem establishes that in a linear circuit, if there exists a branch with impedance \"Z\", connecting two nodes with nodal voltages \"V1\" and \"V2\", we can replace this branch by two branches connecting the corresponding nodes to ground by impedances respectively Z\/(1 \u2212 K) and KZ\/(K \u2212 1), where K = V2\/V1. The Miller theorem may be proved by using the equivalent two-port network technique to replace the two-port to its equivalent and by applying the source absorption theorem. This version of the Miller theorem is based on Kirchhoff's voltage law; for that reason, it is named also \"Miller theorem for voltages\"."}
+{"text":"The Miller theorem implies that an impedance element is supplied by two arbitrary (not necessarily dependent) voltage sources that are connected in series through the common ground. In practice, one of them acts as a main (independent) voltage source with voltage \"V1\" and the other \u2013 as an additional (linearly dependent) voltage source with voltage formula_1. The idea of the Miller theorem (modifying circuit impedances seen from the sides of the input and output sources) is revealed below by comparing the two situations \u2013 without and with connecting an additional voltage source V2."}
+{"text":"If \"V2\" were zero (there was not a second voltage source or the right end of the element with impedance \"Z\" was just grounded), the input current flowing through the element would be determined, according to Ohm's law, only by \"V1\""}
+{"text":"and the input impedance of the circuit would be"}
+{"text":"As a second voltage source is included, the input current depends on both the voltages. According to its polarity, \"V2\" is subtracted from or added to \"V1\"; so, the input current decreases\/increases"}
+{"text":"and the input impedance of the circuit seen from the side of the input source accordingly increases\/decreases"}
+{"text":"So, the Miller theorem expresses the fact that connecting a second voltage source with proportional voltage formula_1 in series with the input voltage source changes the effective voltage, the current and respectively, the circuit impedance seen from the side of the input source. Depending on the polarity, \"V2\" acts as a supplemental voltage source helping or opposing the main voltage source to pass the current through the impedance."}
+{"text":"Besides by presenting the combination of the two voltage sources as a new composed voltage source, the theorem may be explained by \"combining the actual element and the second voltage source into a new virtual element with dynamically modified impedance\". From this viewpoint, \"V2\" is an additional voltage that artificially increases\/decreases the voltage drop \"Vz\" across the impedance \"Z\" thus decreasing\/increasing the current. The proportion between the voltages determines the value of the obtained impedance (see the tables below) and gives in total six groups of typical applications."}
+{"text":"The circuit impedance, seen from the side of the output source, may be defined similarly, if the voltages \"V1\" and \"V2\" are swapped and the coefficient \"K\" is replaced by 1\/\"K\""}
+{"text":"Most frequently, the Miller theorem may be observed in, and implemented by, an arrangement consisting of an element with impedance \"Z\" connected between the two terminals of a grounded general linear network. Usually, a voltage amplifier with gain of formula_8 serves as such a linear network, but also other devices can play this role: a man and a potentiometer in a potentiometric null-balance meter, an electromechanical integrator (servomechanisms using potentiometric feedback sensors), etc."}
+{"text":"In the amplifier implementation, the input voltage \"Vi\" serves as \"V1\" and the output voltage \"Vo\" \u2013 as \"V2\". In many cases, the input voltage source has some internal impedance formula_9 or an additional input impedance is connected that, in combination with \"Z\", introduces a feedback. Depending on the kind of amplifier (non-inverting, inverting or differential), the feedback can be positive, negative or mixed."}
+{"text":"The Miller amplifier arrangement has two aspects:"}
+{"text":"The introduction of an impedance that connects amplifier input and output ports adds a great"}
+{"text":"deal of complexity in the analysis process. Miller theorem helps reduce the"}
+{"text":"complexity in some circuits particularly with feedback by converting them to simpler equivalent circuits. But Miller theorem is not only an effective tool for creating equivalent circuits; it is also a powerful tool for designing and understanding circuits based on \"modifying impedance by additional voltage\". Depending on the polarity of the output voltage versus the input voltage and the proportion between their magnitudes, there are six groups of typical situations. In some of them, the Miller phenomenon appears as desired (bootstrapping) or undesired (Miller effect) unintentional effects; in other cases it is intentionally introduced."}
+{"text":"Applications based on subtracting \"V2\" from \"V1\"."}
+{"text":"In these applications, the output voltage \"Vo\" is inserted with an opposite polarity in respect to the input voltage \"Vi\" travelling along the loop (but in respect to ground, the polarities are the same). As a result, the effective voltage across, and the current through, the impedance decrease; the input impedance increases."}
+{"text":"Increased impedance is implemented by a non-inverting amplifier with gain of 0 < Av < 1. The (magnitude of) output voltage is less than the input voltage \"Vi\" and partially neutralizes it. Examples are imperfect voltage followers (emitter, source, cathode follower, etc.) and amplifiers with series negative feedback (emitter degeneration), whose input impedance is moderately increased."}
+{"text":"Infinite impedance uses a non-inverting amplifier with Av = 1. The output voltage is equal to the input voltage \"Vi\" and completely neutralizes it. Examples are potentiometric null-balance meters and op-amp followers and amplifiers with series negative feedback (op-amp follower and non-inverting amplifier) where the circuit input impedance is enormously increased. This technique is referred to as bootstrapping and is intentionally used in biasing circuits, input guarding circuits, etc."}
+{"text":"Negative impedance obtained by current inversion is implemented by a non-inverting amplifier with Av > 1. The current changes its direction, as the output voltage is higher than the input voltage. If the input voltage source has some internal impedance formula_9 or if it is connected through another impedance element, a positive feedback appears. A typical application is the negative impedance converter with current inversion (INIC) that uses both negative and positive feedback (the negative feedback is used to realize a non-inverting amplifier and the positive feedback \u2013 to modify the impedance)."}
+{"text":"Applications based on adding \"V2\" to \"V1\"."}
+{"text":"In these applications, the output voltage \"Vo\" is inserted with the same polarity in respect to the input voltage \"Vi\" travelling along the loop (but in respect to ground, the polarities are opposite). As a result, the effective voltage across and the current through the impedance increase; the input impedance decreases."}
+{"text":"Decreased impedance is implemented by an inverting amplifier having some moderate gain, usually 10 < Av < 1000. It may be observed as an undesired Miller effect in common-emitter, common-source and \"common-cathode\" amplifying stages where effective input capacitance is increased. Frequency compensation for general purpose operational amplifiers and transistor Miller integrator are examples of useful usage of the Miller effect."}
+{"text":"Zeroed impedance uses an inverting (usually op-amp) amplifier with enormously high gain Av \u2192 \u221e. The output voltage is almost equal to the voltage drop \"VZ\" across the impedance and completely neutralizes it. The circuit behaves as a short connection and a virtual ground appears at the input; so, it should not be driven by a constant voltage source. For this purpose, some circuits are driven by a constant current source or by a real voltage source with internal impedance: current-to-voltage converter (transimpedance amplifier), capacitive integrator (named also current integrator or charge amplifier), resistance-to-voltage converter (a resistive sensor connected in the place of the impedance \"Z\")."}
+{"text":"The rest of them have additional impedance connected in series to the input: voltage-to-current converter (transconductance amplifier), inverting amplifier, summing amplifier, inductive integrator, capacitive differentiator, resistive-capacitive integrator, capacitive-resistive differentiator, inductive-resistive differentiator, etc. The inverting integrators from this list are examples of useful and desired applications of the Miller effect in its extreme manifestation."}
+{"text":"In all these \"op-amp inverting circuits with parallel negative feedback\", the input current is increased to its maximum. It is determined only by the input voltage and the input impedance according to Ohm's law; it does not depend on the impedance \"Z\"."}
+{"text":"Negative impedance with voltage inversion is implemented by applying both negative and positive feedback to an op-amp amplifier with a differential input. The input voltage source has to have internal impedance formula_9 > 0 or it has to be connected through another impedance element to the input. Under these conditions, the input voltage \"Vi\" of the circuit changes its polarity as the output voltage exceeds the voltage drop \"VZ\" across the impedance (\"Vi\" = \"Vz\" \u2013 \"Vo\" < 0)."}
+{"text":"A typical application is a negative impedance converter with voltage inversion (VNIC). It is interesting that the circuit input voltage has the same polarity as the output voltage, although it is applied to the inverting op-amp input; the input source has an opposite polarity to both the circuit input and output voltages."}
+{"text":"The original Miller effect is implemented by capacitive impedance connected between the two nodes. Miller theorem generalizes Miller effect as it implies arbitrary impedance Z connected between the nodes. It is supposed also a constant coefficient K; then the expressions above are valid. But modifying properties of Miller theorem exist even when these requirements are violated and this arrangement can be generalized further by dynamizing the impedance and the coefficient."}
+{"text":"Non-linear element. Besides impedance, Miller arrangement can modify the IV characteristic of an arbitrary element. The circuit of a diode log converter is an example of a non-linear virtually zeroed resistance where the logarithmic forward IV curve of a diode is transformed to a vertical straight line overlapping the Y axis."}
+{"text":"Not constant coefficient. If the coefficient K varies, some exotic virtual elements can be obtained. A is an example of such a virtual element where the resistance RL is modified so that to mimic inductance, capacitance or inversed resistance."}
+{"text":"There is also a dual version of Miller theorem that is based on Kirchhoff's current law (\"Miller theorem for currents\"): if there is a branch in a circuit with impedance Z connecting a node, where two currents I1 and I2 converge to ground, we can replace this branch by two conducting the referred currents, with imperespectively equal to (1 + \u03b1)Z and (1 + \u03b1)Z\/\u03b1, where \u03b1 = I2\/I1. The dual theorem may be proved by replacing the two-port network by its equivalent and by applying the source absorption theorem."}
+{"text":"Dual Miller theorem actually expresses the fact that connecting a second current source producing proportional current formula_12 in parallel with the main input source and the impedance element changes the current flowing through it, the voltage and accordingly, the circuit impedance seen from the side of the input source. Depending on the direction, \"I2\" acts as a supplemental current source helping or opposing the main current source \"I1\" to create voltage across the impedance. The combination of the actual element and the second current source may be thought as of a new virtual element with dynamically modified impedance."}
+{"text":"Dual Miller theorem is usually implemented by an arrangement consisting of two voltage sources supplying the grounded impedance \"Z\" through floating impedances (see Fig. 3). The combinations of the voltage sources and belonging impedances form the two current sources \u2013 the main and the auxiliary one. As in the case of the main Miller theorem, the second voltage is usually produced by a voltage amplifier. Depending on the kind of the amplifier (inverting, non-inverting or differential) and the gain, the circuit input impedance may be virtually increased, infinite, decreased, zero or negative."}
+{"text":"List of specific applications based on Miller theorems."}
+{"text":"Below is a list of circuit solutions, phenomena and techniques based on the two Miller theorems."}
+{"text":"In electrical engineering, the maximum power transfer theorem states that, to obtain \"maximum\" external power from a source with a finite internal resistance, the resistance of the load must equal the resistance of the source as viewed from its output terminals. Moritz von Jacobi published the maximum power (transfer) theorem around 1840; it is also referred to as \"Jacobi's law\"."}
+{"text":"The theorem results in maximum \"power\" transfer across the circuit, and not maximum \"efficiency\". If the resistance of the load is made larger than the resistance of the source then efficiency is higher, since a higher percentage of the source power is transferred to the load, but the \"magnitude\" of the load power is lower since the total circuit resistance increases."}
+{"text":"If the load resistance is smaller than the source resistance, then most of the power ends up being dissipated in the source, and although the total power dissipated is higher, due to a lower total resistance, it turns out that the amount dissipated in the load is reduced."}
+{"text":"The theorem states how to choose (so as to maximize power transfer) the load resistance, once the source resistance is given. It is a common misconception to apply the theorem in the opposite scenario. It does \"not\" say how to choose the source resistance for a given load resistance. In fact, the source resistance that maximizes power transfer from a voltage source is always zero, regardless of the value of the load resistance."}
+{"text":"The theorem can be extended to alternating current circuits that include reactance, and states that maximum power transfer occurs when the load impedance is equal to the complex conjugate of the source impedance."}
+{"text":"Recent expository articles illustrate how the fundamental mathematics of the maximum power theorem also applies to other physical situations, such as:"}
+{"text":"The theorem was originally misunderstood (notably by Joule) to imply that a system consisting of an electric motor driven by a battery could not be more than 50% efficient since, when the impedances were matched, the power lost as heat in the battery would always be equal to the power delivered to the motor."}
+{"text":"In 1880 this assumption was shown to be false by either Edison or his colleague Francis Robbins Upton, who realized that maximum efficiency was not the same as maximum power transfer."}
+{"text":"To achieve maximum efficiency, the resistance of the source (whether a battery or a dynamo) could be (or should be) made as close to zero as possible. Using this new understanding, they obtained an efficiency of about 90%, and proved that the electric motor was a practical alternative to the heat engine."}
+{"text":"The condition of maximum power transfer does not result in maximum efficiency."}
+{"text":"If we define the efficiency as the ratio of power dissipated by the load, \"R\", to power developed by the source, \"V\", then it is straightforward to calculate from the above circuit diagram that"}
+{"text":"The efficiency is only 50% when maximum power transfer is achieved, but approaches 100% as the load resistance approaches infinity, though the total power level tends towards zero."}
+{"text":"Efficiency also approaches 100% if the source resistance approaches zero, and 0% if the load resistance approaches zero. In the latter case, all the power is consumed inside the source (unless the source also has no resistance), so the power dissipated in a short circuit is zero."}
+{"text":"A related concept is reflectionless impedance matching."}
+{"text":"In radio frequency transmission lines, and other electronics, there is often a requirement to match the source impedance (at the transmitter) to the load impedance (such as an antenna) to avoid reflections in the transmission line that could overload or damage the transmitter."}
+{"text":"In the diagram opposite, power is being transferred from the source, with voltage and fixed source resistance , to a load with resistance , resulting in a current . By Ohm's law, is simply the source voltage divided by the total circuit resistance:"}
+{"text":"The power dissipated in the load is the square of the current multiplied by the resistance:"}
+{"text":"The value of for which this expression is a maximum could be calculated by differentiating it, but it is easier to calculate the value of for which the denominator"}
+{"text":"is a minimum. The result will be the same in either case. Differentiating the denominator with respect to :"}
+{"text":"For a maximum or minimum, the first derivative is zero, so"}
+{"text":"In practical resistive circuits, and are both positive, so the positive sign in the above is the correct solution."}
+{"text":"To find out whether this solution is a minimum or a maximum, the denominator expression is differentiated again:"}
+{"text":"This is always positive for positive values of formula_16 and formula_17, showing that the denominator is a minimum, and the power is therefore a maximum, when"}
+{"text":"The above proof assumes fixed source resistance formula_16. When the source resistance can be varied, power transferred to the load can be increased by reducing formula_20. For example, a 100 Volt source with an formula_20 of formula_22 will deliver 250 watts of power to a formula_22 load; reducing formula_20 to formula_25 increases the power delivered to 1000 watts."}
+{"text":"Note that this shows that maximum power transfer can also be interpreted as the load voltage being equal to one-half of the Thevenin voltage equivalent of the source."}
+{"text":"The power transfer theorem also applies when the source and\/or load are not purely resistive."}
+{"text":"A refinement of the maximum power theorem says that any reactive components of source and load should be of equal magnitude but opposite sign. (\"See below for a derivation.\")"}
+{"text":"Physically realizable sources and loads are not usually purely resistive, having some inductive or capacitive components, and so practical applications of this theorem, under the name of complex conjugate impedance matching, do, in fact, exist."}
+{"text":"If the source is totally inductive (capacitive), then a totally capacitive (inductive) load, in the absence of resistive losses, would receive 100% of the energy from the source but send it back after a quarter cycle."}
+{"text":"The resultant circuit is nothing other than a resonant LC circuit in which the energy continues to oscillate to and fro. This oscillation is called reactive power."}
+{"text":"Power factor correction (where an inductive reactance is used to \"balance out\" a capacitive one), is essentially the same idea as complex conjugate impedance matching although it is done for entirely different reasons."}
+{"text":"For a fixed reactive \"source\", the maximum power theorem maximizes the real power (P) delivered to the load by complex conjugate matching the load to the source."}
+{"text":"For a fixed reactive \"load\", power factor correction minimizes the apparent power (S) (and unnecessary current) conducted by the transmission lines, while maintaining the same amount of real power transfer."}
+{"text":"This is done by adding a reactance to the load to balance out the load's own reactance, changing the reactive load impedance into a resistive load impedance."}
+{"text":"In this diagram, AC power is being transferred from the source, with phasor magnitude of voltage formula_26 (positive peak voltage) and fixed source impedance formula_27 (S for source), to a load with impedance formula_28 (L for load), resulting in a (positive) magnitude formula_29 of the current phasor formula_30. This magnitude formula_29 results from dividing the magnitude of the source voltage by the magnitude of the total circuit impedance:"}
+{"text":"The average power formula_33 dissipated in the load is the square of the current multiplied by the resistive portion (the real part) formula_34 of the load impedance formula_28:"}
+{"text":"where formula_37 and formula_34 denote the resistances, that is the real parts, and formula_39 and formula_40 denote the reactances, that is the imaginary parts, of respectively the source and load impedances formula_27 and formula_28."}
+{"text":"To determine, for a given source voltage formula_43 and impedance formula_44 the value of the load impedance formula_45 for which this expression for the power yields a maximum, one first finds, for each fixed positive value of formula_34, the value of the reactive term formula_40 for which the denominator"}
+{"text":"is a minimum. Since reactances can be negative, this is achieved by adapting the load reactance to"}
+{"text":"and it remains to find the value of formula_34 which maximizes this expression. This problem has the same form as in the purely resistive case, and the maximizing condition therefore is formula_52"}
+{"text":"describe the complex conjugate of the source impedance, denoted by formula_55 and thus can be concisely combined to:"}
+{"text":"Commensurate line circuits are electrical circuits composed of transmission lines that are all the same length; commonly one-eighth of a wavelength. Lumped element circuits can be directly converted to distributed-element circuits of this form by the use of Richards' transformation. This transformation has a particularly simple result; inductors are replaced with transmission lines terminated in short-circuits and capacitors are replaced with lines terminated in open-circuits. Commensurate line theory is particularly useful for designing distributed-element filters for use at microwave frequencies."}
+{"text":"It is usually necessary to carry out a further transformation of the circuit using Kuroda's identities. There are several reasons for applying one of the Kuroda transformations; the principal reason is usually to eliminate series connected components. In some technologies, including the widely used microstrip, series connections are difficult or impossible to implement."}
+{"text":"The frequency response of commensurate line circuits, like all distributed-element circuits, will periodically repeat, limiting the frequency range over which they are effective. Circuits designed by the methods of Richards and Kuroda are not the most compact. Refinements to the methods of coupling elements together can produce more compact designs. Nevertheless, the commensurate line theory remains the basis for many of these more advanced filter designs."}
+{"text":"Commensurate lines are transmission lines that are all the same electrical length, but not necessarily the same characteristic impedance (\"Z\"0). A commensurate line circuit is an electrical circuit composed only of commensurate lines terminated with resistors or short- and open-circuits. In 1948, Paul I. Richards published a theory of commensurate line circuits by which a passive lumped element circuit could be transformed into a distributed element circuit with precisely the same characteristics over a certain frequency range."}
+{"text":"Electrical length can also be expressed as the phase change between the start and the end of the line. Phase is measured in angle units. formula_1, the mathematical symbol for an angle variable, is used as the symbol for electrical length when expressed as an angle. In this convention \u03bb represents 360\u00b0, or 2\u03c0 radians."}
+{"text":"The advantage of using commensurate lines is that the commensurate line theory allows circuits to be synthesised from a prescribed frequency function. While any circuit using arbitrary transmission line lengths can be analysed to determine its frequency function, that circuit cannot necessarily be easily synthesised starting from the frequency function. The fundamental problem is that using more than one length generally requires more than one frequency variable. Using commensurate lines requires only one frequency variable. A well developed theory exists for synthesising lumped-element circuits from a given frequency function. Any circuit so synthesised can be converted to a commensurate line circuit using Richards' transformation and a new frequency variable."}
+{"text":"Richards' transformation transforms the angular frequency variable, \u03c9, according to,"}
+{"text":"or, more usefully for further analysis, in terms of the complex frequency variable, \"s\","}
+{"text":"Comparing this transform with expressions for the driving point impedance of stubs terminated, respectively, with a short circuit and an open circuit,"}
+{"text":"it can be seen that (for \u03b8 < \u03c0\/2) a short circuit stub has the impedance of a lumped inductance and an open circuit stub has the impedance of a lumped capacitance. Richards' transformation substitutes inductors with short circuited UEs and capacitors with open circuited UEs."}
+{"text":"When the length is \u03bb\/8 (or \u03b8=\u03c0\/4), this simplifies to,"}
+{"text":"\"L\" and \"C\" are conventionally the symbols for inductance and capacitance, but here they represent respectively the characteristic impedance of an inductive stub and the characteristic admittance of a capacitive stub. This convention is used by numerous authors, and later in this article."}
+{"text":"Richards' transformation can be viewed as transforming from a s-domain representation to a new domain called the \u03a9-domain where,"}
+{"text":"If \u03a9 is normalised so that \u03a9=1 when \u03c9=\u03c9c, then it is required that,"}
+{"text":"and the length in distance units becomes,"}
+{"text":"Any circuit composed of discrete, linear, lumped components will have a transfer function \"H\"(\"s\") that is a rational function in \"s\". A circuit composed of transmission line UEs derived from the lumped circuit by Richards' transformation will have a transfer function \"H\"(\"j\"\u03a9) that is a rational function of precisely the same form as \"H\"(\"s\"). That is, the shape of the frequency response of the lumped circuit against the \"s\" frequency variable will be precisely the same as the shape of the frequency response of the transmission line circuit against the \"j\"\u03a9 frequency variable and the circuit will be functionally the same."}
+{"text":"However, infinity in the \u03a9 domain is transformed to \u03c9=\u03c0\/4\"k\" in the \"s\" domain. The entire frequency response is squeezed down to this finite interval. Above this frequency, the same response is repeated in the same intervals, alternately in reverse. This is a consequence of the periodic nature of the tangent function. This multiple passband result is a general feature of all distributed-element circuits, not just those arrived at through Richards' transformation."}
+{"text":"A UE connected in cascade is a two-port network that has no exactly corresponding circuit in lumped elements. It is functionally a fixed delay. There are lumped-element circuits that can approximate a fixed delay such as the Bessel filter, but they only work within a prescribed passband, even with ideal components. Alternatively, lumped-element all-pass filters can be constructed that pass all frequencies (with ideal components), but they have constant delay only within a narrow band of frequencies. Examples are the lattice phase equaliser and bridged T delay equaliser."}
+{"text":"There is consequently no lumped circuit that Richard's transformation can transform into a cascade-connected line, and there is no reverse transformation for this element. Commensurate line theory thus introduces a new element of \"delay\", or \"length\"."}
+{"text":"Two or more UEs connected in cascade with the same \"Z\"0 are equivalent to a single, longer, transmission line. Thus, lines of length \"n\"\u03b8 for integer \"n\" are allowable in commensurate circuits. Some circuits can be implemented \"entirely\" as a cascade of UEs: impedance matching networks, for instance, can be done this way, as can most filters."}
+{"text":"Kuroda's identities are a set of four equivalent circuits that overcome certain difficulties with applying Richards' transformations directly. The four basic transformations are shown in the figure. Here the symbols for capacitors and inductors are used to represent open-circuit and short-circuit stubs. Likewise, the symbols \"C\" and \"L\" here represent respectively the susceptance of an open circuit stub and the reactance of a short circuit stub, which, for \u03b8=\u03bb\/8, are respectively equal to the characteristic admittance and characteristic impedance of the stub line. The boxes with thick lines represent cascade connected commensurate lengths of line with the marked characteristic impedance."}
+{"text":"The first difficulty solved is that all the UEs are required to be connected together at the same point. This arises because the lumped-element model assumes that all the elements take up zero space (or no significant space) and that there is no delay in signals between the elements. Applying Richards' transformation to convert the lumped circuit into a distributed circuit allows the element to now occupy a finite space (its length) but does not remove the requirement for zero distance between the interconnections. By repeatedly applying the first two Kuroda identities, UE lengths of the lines feeding into the ports of the circuit can be moved between the circuit components to physically separate them."}
+{"text":"A second difficulty that Kuroda's identities can overcome is that series connected lines are not always practical. While series connection of lines can easily be done in, for instance, coaxial technology, it is not possible in the widely used microstrip technology and other planar technologies. Filter circuits frequently use a ladder topology with alternating series and shunt elements. Such circuits can be converted to all shunt components in the same step used to space the components with the first two identities."}
+{"text":"The third and fourth identities allow characteristic impedances to be scaled down or up respectively. These can be useful for transforming impedances that are impractical to implement. However, they have the disadvantage of requiring the addition of an ideal transformer with a turns ratio equal to the scaling factor."}
+{"text":"In the decade after Richards' publication, advances in the theory of distributed circuits took place mostly in Japan. K. Kuroda published these identities in 1955 in his Ph.D thesis. However, they did not appear in English until 1958 in a paper by Ozaki and Ishii on stripline filters."}
+{"text":"Coupling elements together with impedance transformer lines is not the most compact design. Other methods of coupling have been developed, especially for band-pass filters that are far more compact. These include parallel lines filters, interdigital filters, hairpin filters, and the semi-lumped design combline filters."}
+{"text":"A generator in electrical circuit theory is one of two ideal elements: an ideal voltage source, or an ideal current source. These are two of the fundamental elements in circuit theory. Real electrical generators are most commonly modelled as a non-ideal source consisting of a combination of an ideal source and a resistor. Voltage generators are modelled as an ideal voltage source in series with a resistor. Current generators are modelled as an ideal current source in parallel with a resistor. The resistor is referred to as the internal resistance of the source. Real world equipment may not perfectly follow these models, especially at extremes of loading (both high and low) but for most purposes they suffice."}
+{"text":"The two models of non-ideal generators are interchangeable, either can be used for any given generator. Th\u00e9venin's theorem allows a non-ideal current source model to be converted to a non-ideal voltage source model and Norton's theorem allows a non-ideal voltage source model to be converted to a non-ideal current source model. Both models are equally valid, but the voltage source model is more applicable to when the internal resistance is low (that is, much lower than the load impedance) and the current source model is more applicable when the internal resistance is high (compared to the load)."}
+{"text":"Symbols commonly used for ideal sources are shown in the figure. Symbols do vary from region to region and time period to time period. Another common symbol for a current source is two interlocking circles."}
+{"text":"The model used to represent \"h\"-parameters is shown in the figure. \"h\"-parameters are frequently used in transistor data sheets to specify the device. The \"h\"-parameters are defined as the matrix"}
+{"text":"where the voltage and current variables are as shown in the figure. The circuit model using dependent generators is just an alternative way of representing this matrix."}
+{"text":"The superposition theorem is a derived result of the superposition principle suited to the network analysis of electrical circuits. The superposition theorem states that for a linear system (notably including the subcategory of time-invariant linear systems) the response (voltage or current) in any branch of a bilateral linear circuit having more than one independent source equals the algebraic sum of the responses caused by each independent source acting alone, where all the other independent sources are replaced by their internal impedances."}
+{"text":"To ascertain the contribution of each individual source, all of the other sources first must be \"turned off\" (set to zero) by:"}
+{"text":"This procedure is followed for each source in turn, then the resultant responses are added to determine the true operation of the circuit. The resultant circuit operation is the superposition of the various voltage and current sources."}
+{"text":"The superposition theorem is very important in circuit analysis. It is used in converting any circuit into its Norton equivalent or Thevenin equivalent."}
+{"text":"The theorem is applicable to linear networks (time varying or time invariant) consisting of independent sources, linear dependent sources, linear passive elements (resistors, inductors, capacitors) and linear transformers."}
+{"text":"Superposition works for voltage and current but not power. In other words, the sum of the powers of each source with the other sources turned off is not the real consumed power. To calculate power we first use superposition to find both current and voltage of each linear element and then calculate the sum of the multiplied voltages and currents."}
+{"text":"However, if the linear network is operating in steady-state and each external independent source has a different frequency, then superposition can be applied to compute the average power or active power. If at least two independent sources have the same frequency (for example in power systems, where many generators operate at 50 Hz or 60 Hz), then superposition can't be used to determine average power."}
+{"text":"The electric circuit superposition theorem is analogous to Dalton's law of partial pressure which can be stated as the total pressure exerted by an ideal gas mixture in a given volume is the algebraic sum of all the pressures exerted by each gas if it were alone in that volume."}
+{"text":"In electrical circuit theory, a port is a pair of terminals connecting an electrical network or circuit to an external circuit, as a point of entry or exit for electrical energy. A port consists of two nodes (terminals) connected to an outside circuit which meets the \"port condition\" - the currents flowing into the two nodes must be equal and opposite."}
+{"text":"The use of ports helps to reduce the complexity of circuit analysis. Many common electronic devices and circuit blocks, such as transistors, transformers, electronic filters, and amplifiers, are analyzed in terms of ports. In multiport network analysis, the circuit is regarded as a \"black box\" connected to the outside world through its ports. The ports are points where input signals are applied or output signals taken. Its behavior is completely specified by a matrix of parameters relating the voltage and current at its ports, so the internal makeup or design of the circuit need not be considered, or even known, in determining the circuit's response to applied signals."}
+{"text":"The concept of ports can be extended to waveguides, but the definition in terms of current is not appropriate and the possible existence of multiple waveguide modes must be accounted for."}
+{"text":"Any node of a circuit that is available for connection to an external circuit is called a pole (or terminal if it is a physical object). The port condition is that a pair of poles of a circuit is considered a port if and only if the current flowing into one pole from outside the circuit is equal to the current flowing out of the other pole into the external circuit. Equivalently, the algebraic sum of the currents flowing into the two poles from the external circuit must be zero."}
+{"text":"It cannot be determined if a pair of nodes meets the port condition by analysing the internal properties of the circuit itself. The port condition is dependent entirely on the external connections of the circuit. What are ports under one set of external circumstances may well not be ports under another. Consider the circuit of four resistors in the figure for example. If generators are connected to the pole pairs (1, 2) and (3, 4) then those two pairs are ports and the circuit is a box attenuator. On the other hand, if generators are connected to pole pairs (1, 4) and (2, 3) then those pairs are ports, the pairs (1, 2) and (3, 4) are no longer ports, and the circuit is a bridge circuit."}
+{"text":"It is even possible to arrange the inputs so that \"no\" pair of poles meets the port condition. However, it is possible to deal with such a circuit by splitting one or more poles into a number of separate poles joined to the same node. If only one external generator terminal is connected to each pole (whether a split pole or otherwise) then the circuit can again be analysed in terms of ports. The most common arrangement of this type is to designate one pole of an \"n\"-pole circuit as the common and split it into \"n\"\u22121 poles. This latter form is especially useful for unbalanced circuit topologies and the resulting circuit has \"n\"\u22121 ports."}
+{"text":"In the most general case, it is possible to have a generator connected to every pair of poles, that is, \"n\"C2 generators, then every pole must be split into \"n\"\u22121 poles. For instance, in the figure example (c), if the poles 2 and 4 are each split into two poles each then the circuit can be described as a 3-port. However, it is also possible to connect generators to pole pairs , , and making generators in all and the circuit has to be treated as a 6-port."}
+{"text":"Any two-pole circuit is guaranteed to meet the port condition by virtue of Kirchhoff's current law and they are therefore one-ports unconditionally. All of the basic electrical elements (inductance, resistance, capacitance, voltage source, current source) are one-ports, as is a general impedance."}
+{"text":"Study of one-ports is an important part of the foundation of network synthesis, most especially in filter design. Two-element one-ports (that is RC, RL and LC circuits) are easier to synthesise than the general case. For a two-element one-port Foster's canonical form or Cauer's canonical form can be used. In particular, LC circuits are studied since these are lossless and are commonly used in filter design."}
+{"text":"Linear two port networks have been widely studied and a large number of ways of representing them have been developed. One of these representations is the z-parameters which can be described in matrix form by;"}
+{"text":"where \"Vn\" and \"In\" are the voltages and currents respectively at port \"n\". Most of the other descriptions of two-ports can likewise be described with a similar matrix but with a different arrangement of the voltage and current column vectors."}
+{"text":"Common circuit blocks which are two-ports include amplifiers, attenuators and filters."}
+{"text":"In general, a circuit can consist of any number of ports\u2014a multiport. Some, but not all, of the two-port parameter representations can be extended to arbitrary multiports. Of the voltage and current based matrices, the ones that can be extended are z-parameters and y-parameters. Neither of these are suitable for use at microwave frequencies because voltages and currents are not convenient to measure in formats using conductors and are not relevant at all in waveguide formats. Instead, s-parameters are used at these frequencies and these too can be extended to an arbitrary number of ports."}
+{"text":"Circuit blocks which have more than two ports include directional couplers, power splitters, circulators, diplexers, duplexers, multiplexers, hybrids and directional filters."}
+{"text":"RF and microwave circuit topologies are commonly unbalanced circuit topologies such as coaxial or microstrip. In these formats, one pole of each port in a circuit is connected to a common node such as a ground plane. It is assumed in the circuit analysis that all these commoned poles are at the same potential and that current is sourced to or sunk into the ground plane that is equal and opposite to that going into the other pole of any port. In this topology a port is treated as being just a single pole. The corresponding balancing pole is imagined to be incorporated into the ground plane."}
+{"text":"The one-pole representation of a port will start to fail if there are significant ground plane loop currents. The assumption in the model is that the ground plane is perfectly conducting and that there is no potential difference between two locations on the ground plane. In reality, the ground plane is not perfectly conducting and loop currents in it will cause potential differences. If there is a potential difference between the commoned poles of two ports then the port condition is broken and the model is invalid."}
+{"text":"The idea of ports can be (and is) extended to waveguide devices, but a port can no longer be defined in terms of circuit poles because in waveguides the electromagnetic waves are not guided by electrical conductors. They are, instead guided by the walls of the waveguide. Thus, the concept of a circuit conductor pole does not exist in this format. Ports in waveguides consist of an aperture or break in the waveguide through which the electromagnetic waves can pass. The bounded plane through which the wave passes is the definition of the port."}
+{"text":"Waveguides have an additional complication in port analysis in that it is possible (and sometimes desirable) for more than one waveguide mode to exist at the same time. In such cases, for each physical port, a separate port must be added to the analysis model for each of the modes present at that physical port."}
+{"text":"The concept of ports can be extended into other energy domains. The generalised definition of a port is a place where energy can flow from one element or subsystem to another element or subsystem. This generalised view of the port concept helps to explain why the port condition is so defined in electrical analysis. If the algebraic sum of the currents is not zero, such as in example diagram (c), then the energy delivered from an external generator is not equal to the energy entering the pair of circuit poles. The energy transfer at that place is thus more complex than a simple flow from one subsystem to another and does not meet the generalised definition of a port."}
+{"text":"An equivalent impedance is an equivalent circuit of an electrical network of impedance elements which presents the same impedance between all pairs of terminals as did the given network. This article describes mathematical transformations between some passive, linear impedance networks commonly found in electronic circuits."}
+{"text":"There are a number of very well known and often used equivalent circuits in linear network analysis. These include resistors in series, resistors in parallel and the extension to series and parallel circuits for capacitors, inductors and general impedances. Also well known are the Norton and Th\u00e9venin equivalent current generator and voltage generator circuits respectively, as is the Y-\u0394 transform. None of these are discussed in detail here; the individual linked articles should be consulted."}
+{"text":"One-element networks are trivial and two-element, two-terminal networks are either two elements in series or two elements in parallel, also trivial. The smallest number of elements that is non-trivial is three, and there are two 2-element-kind non-trivial transformations possible, one being both the reverse transformation and the topological dual, of the other."}
+{"text":"Example 3 shows the result is a \u03a0-network rather than an L-network. The reason for this is that the shunt element has more capacitance than is required by the transform so some is still left over after applying the transform. If the excess were instead, in the element nearest the transformer, this could be dealt with by first shifting the excess to the other side of the transformer before carrying out the transform."}
+{"text":"\"spooky action at a distance\" on the assumption of QM's completeness)."}
+{"text":"The no-communication theorem states that, within the context of quantum mechanics, it is not possible to transmit classical bits of information by means of carefully prepared mixed or pure states, whether entangled or not. The theorem disallows all communication, not just faster-than-light communication, by means of shared quantum states. The theorem disallows not only the communication of whole bits, but even fractions of a bit. This is important to take note of, as there are many classical radio communications encoding techniques that can send arbitrarily small fractions of a bit across arbitrarily narrow, noisy communications channels. In particular, one may imagine that there is some ensemble that can be prepared, with small portions of the ensemble communicating a fraction of a bit; this, too, is not possible."}
+{"text":"The theorem is built on the basic presumption that the laws of quantum mechanics hold. Similar theorems may or may not hold for other related theories, such as hidden variable theories. The no-communication theorem is not meant to constrain other, non-quantum-mechanical theories."}
+{"text":"The basic assumption entering into the theorem is that a quantum-mechanical system is prepared in an initial state, and that this initial state is describable as a mixed or pure state in a Hilbert space \"H\". The system then evolves over time in such a way that there are two spatially distinct parts, \"A\" and \"B\", sent to two distinct observers, Alice and Bob, who are free to perform quantum mechanical measurements on their portion of the total system (viz, A and B). The question is: is there any action that Alice can perform on A that would be detectable by Bob making an observation of B? The theorem replies 'no'."}
+{"text":"An important assumption going into the theorem is that neither Alice nor Bob is allowed, in any way, to affect the preparation of the initial state. If Alice were allowed to take part in the preparation of the initial state, it would be trivially easy for her to encode a message into it; thus neither Alice nor Bob participates in the preparation of the initial state. The theorem does not require that the initial state be somehow 'random' or 'balanced' or 'uniform': indeed, a third party preparing the initial state could easily encode messages in it, received by Alice and Bob. Simply, the theorem states that, given some initial state, prepared in some way, there is no action that Alice can take that would be detectable by Bob."}
+{"text":"The proof proceeds by defining how the total Hilbert space \"H\" can be split into two parts, \"H\"\"A\" and \"H\"\"B\", describing the subspaces accessible to Alice and Bob. The total state of the system is assumed to be described by a density matrix \u03c3. This appears to be a reasonable assumption, as a density matrix is sufficient to describe both pure and mixed states in quantum mechanics. Another important part of the theorem is that measurement is performed by applying a generalized projection operator \"P\" to the state \u03c3. This again is reasonable, as projection operators give the appropriate mathematical description of quantum measurements. After a measurement by Alice, the state of the total system is said to have \"collapsed\" to a state \"P\"(\u03c3)."}
+{"text":"The proof of the theorem is commonly illustrated for the setup of Bell tests in which two observers Alice and Bob perform local observations on a common bipartite system, and uses the statistical machinery of quantum mechanics, namely density states and quantum operations."}
+{"text":"Alice and Bob perform measurements on system S whose underlying Hilbert space is"}
+{"text":"It is also assumed that everything is finite-dimensional to avoid convergence issues. The state of the composite system is given by a density operator on \"H\". Any density operator \u03c3 on \"H\" is a sum of the form:"}
+{"text":"where \"Ti\" and \"Si\" are operators on \"H\"\"A\" and \"H\"\"B\" respectively. For the following, it is not required to assume that \"Ti\" and \"Si\" are state projection operators: \"i.e.\" they need not necessarily be non-negative, nor have a trace of one. That is, \u03c3 can have a definition somewhat broader than that of a density matrix; the theorem still holds. Note that the theorem holds trivially for separable states. If the shared state \u03c3 is separable, it is clear that any local operation by Alice will leave Bob's system intact. Thus the point of the theorem is no communication can be achieved via a shared entangled state."}
+{"text":"Alice performs a local measurement on her subsystem. In general, this is described by a quantum operation, on the system state, of the following kind"}
+{"text":"where \"V\"\"k\" are called Kraus matrices which satisfy"}
+{"text":"means that Alice's measurement apparatus does not interact with Bob's subsystem."}
+{"text":"Supposing the combined system is prepared in state \u03c3 and assuming, for purposes of argument, a non-relativistic situation, immediately (with no time delay) after Alice performs her measurement, the relative state of Bob's system is given by the partial trace of the overall state with respect to Alice's system. In symbols, the relative state of Bob's system after Alice's operation is"}
+{"text":"where formula_8 is the partial trace mapping with respect to Alice's system."}
+{"text":"From this it is argued that, statistically, Bob cannot tell the difference between what Alice did and a random measurement (or whether she did anything at all)."}
+{"text":"The scallop theorem is a consequence of the subsequent forces applied to the organism as it swims from the surrounding fluid. For an incompressible Newtonian fluid with density formula_1 and viscosity formula_2, the flow satisfies the Navier\u2013Stokes equations"}
+{"text":"where formula_4 denotes the velocity of the swimmer. However, at the low Reynolds number limit, the inertial terms of the Navier-Stokes equation on the left-hand side tend to zero. This is made more apparent by nondimensionalizing the Navier\u2013Stokes equation. By defining a characteristic velocity and length, formula_5 and formula_6, we can cast our variables to dimensionless form:"}
+{"text":"By plugging back into the Navier-Stokes equation and performing some algebra, we arrive at a dimensionless form:"}
+{"text":"where formula_9 is the Reynolds number, formula_10. In the low Reynolds number limit (as formula_11), the LHS tends to zero and we arrive at a dimensionless form of Stokes equations. Redimensionalizing yields"}
+{"text":"The proof of the scallop theorem can be represented in a mathematically elegant way. To do this, we must first understand the mathematical consequences of the linearity of Stokes equations. To summarize, the linearity of Stokes equations allows us to use the reciprocal theorem to relate the swimming velocity of the swimmer to the velocity field of the fluid around its surface (known as the swimming gait), which changes according to the periodic motion it exhibits. This relation allows us to conclude that locomotion is independent of swimming rate. Subsequently, this leads to the discovery that reversal of periodic motion is identical to the forward motion due to symmetry, allowing us to conclude that there can be no net displacement."}
+{"text":"The reciprocal theorem describes the relationship between two flows in the same geometry where inertial effects are insignificant compared to viscous effects. Consider a fluid filled region formula_13 bounded by surface formula_14 with a unit normal formula_15. Suppose we have solutions to Stokes equations in the domain formula_13 possessing the form of the velocity fields formula_4 and formula_18. The velocity fields harbor corresponding stress fields formula_19 and formula_20 respectively. Then the following equality holds:"}
+{"text":"The reciprocal theorem allows us to obtain information about a certain flow by using information from another flow. This is preferable to solving Stokes equations, which is difficult due to not having a known boundary condition. This particularly useful if one wants to understand flow from a complicated problem by studying the flow of a simpler problem in the same geometry."}
+{"text":"One can use the reciprocal theorem to relate the swimming velocity, formula_22, of a swimmer subject to a force formula_23 to its swimming gait formula_24:"}
+{"text":"Now that we have established that the relationship between the instantaneous swimming velocity in the direction of the force acting on the body and its swimming gate follow the general form"}
+{"text":"where formula_27 and formula_28 denote the positions of points"}
+{"text":"on the surface of the swimmer, we can establish that locomotion is independent of rate. Consider a swimmer that deforms in a periodic fashion through a sequence of motions between the times formula_29 and formula_30. The net displacement of the swimmer is"}
+{"text":"Now consider the swimmer deforming in the same manner but at a different rate. We describe this with the mapping"}
+{"text":"This result means that the net distance traveled by the swimmer does not depend on the rate"}
+{"text":"at which it is being deformed, but only on the geometrical sequence of shape. This is the first key result."}
+{"text":"If a swimmer is moving in a periodic fashion that is time invariant, we know that the average displacement during one period must be zero. To illustrate the proof, let us consider a swimmer deforming during one period that starts and ends at times formula_29 and formula_30. That means its shape at the start and end are the same, i.e. formula_37. Next, we consider motion obtained by time-reversal"}
+{"text":"symmetry of the first motion that occurs during the period starting and ending at times formula_38 and formula_39. using a similar mapping as in the previous section, we define formula_40 and formula_41 and define the shape in the reverse motion to be the same as the shape in the forward motion, formula_42. Now we find the relationship between the net displacements in these two cases:"}
+{"text":"This is the second key result. Combining with our first key result from the previous section, we see that formula_44. We see that a swimmer that reverses its motion by reversing its sequence of shape changes leads to the opposite distance traveled. In addition, since the swimmer exhibits reciprocal body deformation, the sequence of motion is the same between formula_38 and formula_39 and formula_29 and formula_30. Thus, the distance traveled should"}
+{"text":"be the same independently of the direction of time, meaning that reciprocal motion cannot be used for net motion in low Reynolds number environments."}
+{"text":"The scallop theorem holds if we assume that a swimmer undergoes reciprocal motion in an infinite quiescent Newtonian fluid in the absence of inertia and external body forces. However, there are instances where the assumptions for the scallop theorem are violated. In one case, successful swimmers in viscous environments must display non-reciprocal body kinematics. In another case, if a swimmer is in a non-Newtonian fluid, locomotion can be achieved as well."}
+{"text":"In his original paper, Purcell proposed a simple example of non-reciprocal body deformation, now commonly known as the Purcell swimmer. This simple swimmer possess two degrees of freedom for motion: a two-hinged body composed of three rigid links rotating out-of-phase with each other. However, any body with more than one degree of freedom of motion can achieve locomotion as well."}
+{"text":"In general, microscopic organisms like bacteria have evolved different mechanisms to perform non-reciprocal motion:"}
+{"text":"The assumption of a Newtonian fluid is essential since Stokes equations will not remain linear and time-independent in an environment that possesses complex mechanical and rheological properties. It is also common knowledge that many living microorganisms live in complex non-Newtonian fluids, which are common in biologically relevant environments. For instance, crawling cells often"}
+{"text":"Non-Newtonian fluids have several properties that can be manipulated to produce small scale locomotion."}
+{"text":"First, one such exploitable property is normal"}
+{"text":"stress differences. These differences will arise from the stretching of the fluid by the flow of"}
+{"text":"The Clausius theorem (1855) states that for a thermodynamic system (e.g. heat engine or heat pump) exchanging heat with external reservoirs and undergoing a thermodynamic cycle,"}
+{"text":"where formula_2 is the infinitesimal amount of heat absorbed by the system from the reservoir and formula_3 is the temperature of the external reservoir (surroundings) at a particular instant in time. The closed integral is carried out along a thermodynamic process path from the initial\/final state to the same initial\/final state. In principle, the closed integral can start and end at an arbitrary point along the path."}
+{"text":"If there are multiple reservoirs with different temperatures formula_4, then Clausius inequality reads:"}
+{"text":"In the special case of a reversible process, the equality holds. The reversible case is used to introduce the state function known as entropy. This is because in a cyclic process the variation of a state function is zero. In other words, the Clausius statement states that it is impossible to construct a device whose sole effect is the transfer of heat from a cool reservoir to a hot reservoir. Equivalently, heat spontaneously flows from a hot body to a cooler one, not the other way around."}
+{"text":"for an infinitesimal change in entropy \"S\" applies not only to cyclic processes, but to any process that occurs in a closed system."}
+{"text":"The Clausius theorem is a mathematical explanation of the second law of thermodynamics. It was developed by Rudolf Clausius who intended to explain the relationship between the heat flow in a system and the entropy of the system and its surroundings. Clausius developed this in his efforts to explain entropy and define it quantitatively. In more direct terms, the theorem gives us a way to determine if a cyclical process is reversible or irreversible. The Clausius theorem provides a quantitative formula for understanding the second law."}
+{"text":"Clausius was one of the first to work on the idea of entropy and is even responsible for giving it that name. What is now known as the Clausius theorem was first published in 1862 in Clausius' sixth memoir, \"On the Application of the Theorem of the Equivalence of Transformations to Interior Work\". Clausius sought to show a proportional relationship between entropy and the energy flow by heating (\u03b4\"Q\") into a system. In a system, this heat energy can be transformed into work, and work can be transformed into heat through a cyclical process. Clausius writes that \"The algebraic sum of all the transformations occurring in a cyclical process can only be less than zero, or, as an extreme case, equal to nothing.\" In other words, the equation"}
+{"text":"with \ud835\udeff\"Q\" being energy flow into the system due to heating and \"T\" being absolute temperature of the body when that energy is absorbed, is found to be true for any process that is cyclical and reversible. Clausius then took this a step further and determined that the following relation must be found true for any cyclical process that is possible, reversible or not. This relation is the \"Clausius inequality\"."}
+{"text":"Now that this is known, there must be a relation developed between the Clausius inequality and entropy. The amount of entropy \"S\" added to the system during the cycle is defined as"}
+{"text":"If the amount of energy added by heating can be measured during the process, and the temperature can be measured during the process, the Clausius inequality can be used to determine whether the process is reversible or irreversible by carrying out the integration in the Clausius inequality."}
+{"text":"The temperature that enters in the denominator of the integrand in the Clausius inequality is actually the temperature of the external reservoir with which the system exchanges heat. At each instant of the process, the system is in contact with an external reservoir."}
+{"text":"Because of the Second Law of Thermodynamics, in each infinitesimal heat exchange process between the system and the reservoir, the net change in entropy of the \"universe\", so to say, is formula_12."}
+{"text":"When the system takes in heat by an infinitesimal amount formula_13(formula_14), for the net change in entropy formula_15 in this step to be positive, the temperature of the \"hot\" reservoir formula_16 needs to be slightly greater than the temperature of the system at that instant. If the temperature of the system is given by formula_17 at that instant, then formula_18, and formula_19 forces us to have:"}
+{"text":"This means the magnitude of the entropy \"loss\" from the reservoir, formula_21 is less than the magnitude of the entropy gain formula_22(formula_14) by the system:"}
+{"text":"Similarly, when the system at temperature formula_24 expels heat in magnitude formula_25 (formula_26) into a colder reservoir (at temperature formula_27) in an infinitesimal step, then again, for the Second Law of Thermodynamics to hold, one would have, in an exactly similar manner:"}
+{"text":"Here, the amount of heat 'absorbed' by the system is given by formula_29(formula_30), signifying that heat is transferring from the system to the reservoir, with formula_31. The magnitude of the entropy gained by the reservoir, formula_32 is greater than the magnitude of the entropy loss of the system formula_33"}
+{"text":"Since the total change in entropy for the system is 0 in a cyclic process, if one adds all the infinitesimal steps of heat intake and heat expulsion from the reservoir, signified by the previous two equations, with the temperature of the reservoir at each instant given by formula_34, one gets,"}
+{"text":"In summary, (the inequality in the third statement below, being obviously guaranteed by the second law of thermodynamics, which is the basis of our calculation),"}
+{"text":"For a reversible cyclic process, there is no generation of entropy in each of the infinitesimal heat transfer processes, so the following equality holds,"}
+{"text":"Thus, the Clausius inequality is a consequence of applying the second law of thermodynamics at each infinitesimal stage of heat transfer, and is thus in a sense a weaker condition than the Second Law itself."}
+{"text":"Adiabatic quantum computation (AQC) is a form of quantum computing which relies on the adiabatic theorem to do calculations and is closely related to quantum annealing."}
+{"text":"First, a (potentially complicated) Hamiltonian is found whose ground state describes the solution to the problem of interest. Next, a system with a simple Hamiltonian is prepared and initialized to the ground state. Finally, the simple Hamiltonian is adiabatically evolved to the desired complicated Hamiltonian. By the adiabatic theorem, the system remains in the ground state, so at the end the state of the system describes the solution to the problem. Adiabatic quantum computing has been shown to be polynomially equivalent to conventional quantum computing in the circuit model."}
+{"text":"The time complexity for an adiabatic algorithm is the time taken to complete the adiabatic evolution which is dependent on the gap in the energy eigenvalues (spectral gap) of the Hamiltonian. Specifically, if the system is to be kept in the ground state, the energy gap between the ground state and the first excited state of formula_1 provides an upper bound on the rate at which the Hamiltonian can be evolved at time When the spectral gap is small, the Hamiltonian has to be evolved slowly. The runtime for the entire algorithm can be bounded by:"}
+{"text":"where formula_3 is the minimum spectral gap for"}
+{"text":"AQC is a possible method to get around the problem of energy relaxation. Since the quantum system is in the ground state, interference with the outside world cannot make it move to a lower state. If the energy of the outside world (that is, the \"temperature of the bath\") is kept lower than the energy gap between the ground state and the next higher energy state, the system has a proportionally lower probability of going to a higher energy state. Thus the system can stay in a single system eigenstate as long as needed."}
+{"text":"Universality results in the adiabatic model are tied to quantum complexity and QMA-hard problems. The k-local Hamiltonian is QMA-complete for k \u2265 2. QMA-hardness results are known for physically realistic lattice models of qubits such as"}
+{"text":"where formula_5represent the Pauli matrices Such models are used for universal adiabatic quantum computation. The Hamiltonians for the QMA-complete problem can also be restricted to act on a two dimensional grid of qubits or a line of quantum particles with 12 states per particle. If such models were found to be physically realisable, they too could be used to form the building blocks of a universal adiabatic quantum computer."}
+{"text":"In practice, there are problems during a computation. As the Hamiltonian is gradually changed, the interesting parts (quantum behaviour as opposed to classical) occur when multiple qubits are close to a tipping point. It is exactly at this point when the ground state (one set of qubit orientations) gets very close to a first energy state (a different arrangement of orientations). Adding a slight amount of energy (from the external bath, or as a result of slowly changing the Hamiltonian) could take the system out of the ground state, and ruin the calculation. Trying to perform the calculation more quickly increases the external energy; scaling the number of qubits makes the energy gap at the tipping points smaller."}
+{"text":"Adiabatic quantum computation solves satisfiability problems and other combinatorial search problems. Specifically, these kind of problems seek a state that satisfies"}
+{"text":"This expression contains the satisfiability of M clauses, for which clause formula_7 has the value True or False, and can involve n bits. Each bit is a variable formula_8 such that formula_7 is a Boolean value function of formula_10. QAA solves this kind of problem using quantum adiabatic evolution. It starts with an Initial Hamiltonian formula_11:"}
+{"text":"where formula_13 shows the Hamiltonian corresponding to the clause formula_7. Usually, the choice of formula_13 won't depend on different clauses, so only the total number of times each bit is involved in all clauses matters. Next, it goes through an adiabatic evolution, ending in the Problem Hamiltonian formula_16:"}
+{"text":"where formula_18 is the satisfying Hamiltonian of clause C."}
+{"text":"For a simple path of adiabatic evolution with run time T, consider:"}
+{"text":"which is the adiabatic evolution Hamiltonian of our algorithm."}
+{"text":"According to the adiabatic theorem, we start from the ground state of Hamiltonian formula_11 at the beginning, proceed through an adiabatic process, and end in the ground state of problem Hamiltonian formula_16."}
+{"text":"We then measure the z-component of each of the n spins in the final state. This will produce a string formula_25 which is highly likely to be the result of our satisfiability problem. The run time T must be sufficiently long to assure correctness of the result. According to adiabatic theorem, T is about formula_26, where"}
+{"text":"is the minimum energy gap between ground state and first excited state."}
+{"text":"Adiabatic quantum computing is equivalent in power to standard gate-based quantum computing that implements arbitrary unitary operations. However, the mapping challenge on gate-based quantum devices differs substantially from quantum annealers as logical variables are mapped only to single qubits and not to chains."}
+{"text":"The D-Wave One is a device made by Canadian company D-Wave Systems, which claims that it uses quantum annealing to solve optimization problems. On 25 May 2011, Lockheed-Martin purchased a D-Wave One for about US$10 million. In May 2013, Google purchased a 512 qubit D-Wave Two."}
+{"text":"The question of whether the D-Wave processors offer a speedup over a classical processor is still unanswered. Tests performed by researchers at Quantum Artificial Intelligence Lab (NASA), USC, ETH Zurich, and Google show that as of 2015, there is no evidence of a quantum advantage."}
+{"text":"The positive energy theorem (also known as the positive mass theorem) refers to a collection of foundational results in general relativity and differential geometry. Its standard form, broadly speaking, asserts that the gravitational energy of an isolated system is nonnegative, and can only be zero when the system has no gravitating objects. Although these statements are often thought of as being primarily physical in nature, they can be formalized as mathematical theorems which can be proven using techniques of differential geometry, partial differential equations, and geometric measure theory."}
+{"text":"Richard Schoen and Shing-Tung Yau, in 1979 and 1981, were the first to give proofs of the positive mass theorem. Edward Witten, in 1982, gave the outlines of an alternative proof, which were later filled in rigorously by mathematicians. Witten and Yau were awarded the Fields medal in mathematics in part for their work on this topic."}
+{"text":"An imprecise formulation of the Schoen-Yau \/ Witten positive energy theorem states the following:"}
+{"text":"The meaning of these terms is discussed below. There are alternative and non-equivalent formulations for different notions of energy-momentum and for different classes of initial data sets. Not all of these formulations have been rigorously proven, and it is currently an open problem whether the above formulation holds for initial data sets of arbitrary dimension."}
+{"text":"The original proof of the theorem for ADM mass was provided by Richard Schoen and Shing-Tung Yau in 1979 using variational methods and minimal surfaces. Edward Witten gave another proof in 1981 based on the use of spinors, inspired by positive energy theorems in the context of supergravity. An extension of the theorem for the Bondi mass was given by Ludvigsen and James Vickers, Gary Horowitz and Malcolm Perry, and Schoen and Yau."}
+{"text":"Gary Gibbons, Stephen Hawking, Horowitz and Perry proved extensions of the theorem to asymptotically anti-de Sitter spacetimes and to Einstein\u2013Maxwell theory. The mass of an asymptotically anti-de Sitter spacetime is non-negative and only equal to zero for anti-de Sitter spacetime. In Einstein\u2013Maxwell theory, for a spacetime with electric charge formula_1 and magnetic charge formula_2, the mass of the spacetime satisfies (in Gaussian units)"}
+{"text":"with equality for the Majumdar\u2013Papapetrou extremal black hole solutions."}
+{"text":"An initial data set consists of a Riemannian manifold and a symmetric 2-tensor field on . One says that an initial data set :"}
+{"text":"Note that a time-symmetric initial data set satisfies the dominant energy condition if and only if the scalar curvature of is nonnegative. One says that a Lorentzian manifold is a development of an initial data set if there is a (necessarily spacelike) hypersurface embedding of into , together with a continuous unit normal vector field, such that the induced metric is and the second fundamental form with respect to the given unit normal is ."}
+{"text":"This definition is motivated from Lorentzian geometry. Given a Lorentzian manifold of dimension and a spacelike immersion from a connected -dimensional manifold into which has a trivial normal bundle, one may consider the induced Riemannian metric as well as the second fundamental form of with respect to either of the two choices of continuous unit normal vector field along . The triple is an initial data set. According to the Gauss-Codazzi equations, one has"}
+{"text":"where denotes the Einstein tensor of and denotes the continuous unit normal vector field along used to define . So the dominant energy condition as given above is, in this Lorentzian context, identical to the assertion that , when viewed as a vector field along , is timelike or null and is oriented in the same direction as ."}
+{"text":"The ends of asymptotically flat initial data sets."}
+{"text":"In the literature there are several different notions of \"asymptotically flat\" which are not mutually equivalent. Usually it is defined in terms of weighted H\u00f6lder spaces or weighted Sobolev spaces."}
+{"text":"However, there are some features which are common to virtually all approaches. One considers an initial data set which may or may not have a boundary; let denote its dimension. One requires that there is a compact subset of such that each connected component of the complement is diffeomorphic to the complement of a closed ball in Euclidean space . Such connected components are called the ends of ."}
+{"text":"Let be a time-symmetric initial data set satisfying the dominant energy condition. Suppose that is an oriented three-dimensional smooth Riemannian manifold-with-boundary, and that each boundary component has positive mean curvature. Suppose that it has one end, and it is \"asymptotically Schwarzschild\" in the following sense:"}
+{"text":"Schoen and Yau's theorem asserts that must be nonnegative. If, in addition, the functions formula_6 formula_7 and formula_8 are bounded for any formula_9 then must be positive unless the boundary of is empty and is isometric to with its standard Riemannian metric."}
+{"text":"Note that the conditions on are asserting that , together with some of its derivatives, are small when is large. Since is measuring the defect between in the coordinates and the standard representation of the slice of the Schwarzschild metric, these conditions are a quantification of the term \"asymptotically Schwarzschild\". This can be interpreted in a purely mathematical sense as a strong form of \"asymptotically flat\", where the coefficient of the part of the expansion of the metric is declared to be a constant multiple of the Euclidean metric, as opposed to a general symmetric 2-tensor."}
+{"text":"Note also that Schoen and Yau's theorem, as stated above, is actually (despite appearances) a strong form of the \"multiple ends\" case. If is a complete Riemannian manifold with multiple ends, then the above result applies to any single end, provided that there is a positive mean curvature sphere in every other end. This is guaranteed, for instance, if each end is asymptotically flat in the above sense; one can choose a large coordinate sphere as a boundary, and remove the corresponding remainder of each end until one has a Riemannian manifold-with-boundary with a single end."}
+{"text":"Let be an initial data set satisfying the dominant energy condition. Suppose that is an oriented three-dimensional smooth complete Riemannian manifold (without boundary); suppose that it has finitely many ends, each of which is asymptotically flat in the following sense."}
+{"text":"Suppose that formula_10 is an open precompact subset such that formula_11 has finitely many connected components formula_12 and for each formula_13 there is a diffeomorphism formula_14 such that the symmetric 2-tensor formula_15 satisfies the following conditions:"}
+{"text":"The conclusion is that the ADM energy of each formula_12 defined as"}
+{"text":"For each formula_30 consider this as a vector formula_31 in Minkowski space. Witten's conclusion is that for each formula_32 it is necessarily a future-pointing non-spacelike vector. If this vector is zero for any formula_33 then formula_34 formula_35 is diffeomorphic to formula_36 and the maximal globally hyperbolic development of the initial data set formula_37 has zero curvature."}
+{"text":"According to the above statements, Witten's conclusion is stronger than Schoen and Yau's. However, a third paper by Schoen and Yau shows that their 1981 result implies Witten's, retaining only the extra assumption that formula_20 and formula_21 are bounded for any formula_40 It also must be noted that Schoen and Yau's 1981 result relies on their"}
+{"text":"As of April 2017, Schoen and Yau have released a preprint which proves the general higher-dimensional case in the special case formula_42 without any restriction on dimension or topology. However, it has not yet (as of May 2020) appeared in an academic journal."}
+{"text":"A black hole firewall is a hypothetical phenomenon where an observer falling into a black hole encounters high-energy quanta at (or near) the event horizon. The \"firewall\" phenomenon was proposed in 2012 by physicists Ahmed Almheiri, Donald Marolf, Joseph Polchinski, and James Sully as a possible solution to an apparent inconsistency in black hole complementarity. The proposal is sometimes referred to as the AMPS firewall, an acronym for the names of the authors of the 2012 paper. The use of a firewall to resolve this inconsistency remains controversial, with physicists divided as to the solution to the paradox.[Astrophysics: Fire in the hole!<\/ref>"}
+{"text":"According to quantum field theory in curved spacetime, a single emission of Hawking radiation involves two mutually entangled particles. The outgoing particle escapes and is emitted as a quantum of Hawking radiation; the infalling particle is swallowed by the black hole. Assume a black hole formed a finite time in the past and will fully evaporate away in some finite time in the future. Then, it will only emit a finite amount of information encoded within its Hawking radiation. Assume that at time formula_1, more than half of the information had already been emitted."}
+{"text":"According to widely accepted research by physicists like Don Page and Leonard Susskind, an outgoing particle emitted at time formula_1 must be entangled with all the Hawking radiation the black hole has previously emitted. This creates a paradox: a principle called \"monogamy of entanglement\" requires that, like any quantum system, the outgoing particle cannot be fully entangled with two independent systems at the same time; yet here the outgoing particle appears to be entangled with both the infalling particle and, independently, with past Hawking radiation."}
+{"text":"In order to resolve the paradox, physicists may eventually be forced to give up one of three time-tested theories: Einstein's equivalence principle, unitarity, or existing quantum field theory.][ Originally published in Quanta, December 21, 2012.<\/ref>"}
+{"text":"Some scientists suggest that the entanglement must somehow get immediately broken between the infalling particle and the outgoing particle. Breaking this entanglement would release large amounts of energy, thus creating a searing \"black hole firewall\" at the black hole event horizon. This resolution requires a violation of Einstein's equivalence principle, which states that free-falling is indistinguishable from floating in empty space. This violation has been characterized as \"outrageous\"; theoretical physicist Raphael Bousso has complained that \"a firewall simply can't appear in empty space, any more than a brick wall can suddenly appear in an empty field and smack you in the face.\""}
+{"text":"Some scientists suggest that there is in fact no entanglement between the emitted particle and previous Hawking radiation. This resolution would require black hole information loss, a controversial violation of unitarity."}
+{"text":"Others, such as Steve Giddings, suggest modifying quantum field theory so that entanglement would be gradually lost as the outgoing and infalling particles separate, resulting in a more gradual release of energy inside the black hole, and consequently no firewall."}
+{"text":"Juan Maldacena and Leonard Susskind have suggested in ER=EPR that the outgoing and infalling particles are somehow connected by wormholes, and therefore are not independent systems; however, , this hypothesis is still a \"work in progress\".][<\/ref>"}
+{"text":"The fuzzball picture resolves the dilemma by replacing the 'no-hair' vacuum with a stringy quantum state, thus explicitly coupling any outgoing Hawking radiation with the formation history of the black hole."}
+{"text":"Stephen Hawking received widespread mainstream media coverage in January 2014 with an informal proposal to replace the event horizon of a black hole with an \"apparent horizon\" where infalling matter is suspended and then released; however, some scientists have expressed confusion about what precisely is being proposed and how the proposal would solve the paradox."}
+{"text":"The firewall would exist at the black hole's event horizon, and would be invisible to observers outside the event horizon. Matter passing through the event horizon into the black hole would immediately be \"burned to a crisp\" by an arbitrarily hot \"seething maelstrom of particles\" at the firewall."}
+{"text":"In a merger of two black holes, the characteristics of a firewall (if any) may leave a mark on the outgoing gravitational radiation as \"echoes\" when waves bounce in the vicinity of the fuzzy event horizon. The expected quantity of such echoes is theoretically unclear, as physicists do not currently have a good physical model of firewalls. In 2016, cosmologist Niayesh Afshordi and others argued there were tentative signs of some such echo in the data from the first black hole merger detected by LIGO; more recent work has argued there is no statistically significant evidence for such echoes in the data."}
+{"text":"The no-hair theorem states that all black hole solutions of the Einstein\u2013Maxwell equations of gravitation and electromagnetism in general relativity can be completely characterized by only three \"externally\" observable classical parameters: mass, electric charge, and angular momentum. All other information (for which \"hair\" is a metaphor) about the matter that formed a black hole or is falling into it \"disappears\" behind the black-hole event horizon and is therefore permanently inaccessible to external observers. Physicist John Archibald Wheeler expressed this idea with the phrase \"black holes have no hair\", which was the origin of the name. In a later interview, Wheeler said that Jacob Bekenstein coined this phrase."}
+{"text":"The first version of the no-hair theorem for the simplified case of the uniqueness of the Schwarzschild metric was shown by Werner Israel in 1967. The result was quickly generalized to the cases of charged or spinning black holes. There is still no rigorous mathematical proof of a general no-hair theorem, and mathematicians refer to it as the no-hair conjecture. Even in the case of gravity alone (i.e., zero electric fields), the conjecture has only been partially resolved by results of Stephen Hawking, Brandon Carter, and David C. Robinson, under the additional hypothesis of non-degenerate event horizons and the technical, restrictive and difficult-to-justify assumption of real analyticity of the space-time continuum."}
+{"text":"Suppose two black holes have the same masses, electrical charges, and angular momenta, but the first black hole was made by collapsing ordinary matter whereas the second was made out of antimatter; the conjecture states that they will nevertheless be completely indistinguishable to an observer \"outside the event horizon\". None of the special particle physics pseudo-charges (i.e., the global charges baryonic number, leptonic number, etc., all of which would be different for the originating masses of matter that created the black holes) are conserved in the black hole; or, if they are somehow conserved, then their values would be unobservable from the outside."}
+{"text":"Every isolated unstable black hole decays rapidly to a stable black hole; and (excepting quantum fluctuations) stable black holes can be completely described (in a Cartesian coordinate system) at any moment in time by these eleven numbers:"}
+{"text":"These numbers represent the conserved attributes of an object which can be determined from a distance by examining its gravitational and electromagnetic fields. All other variations in the black hole will either escape to infinity or be swallowed up by the black hole."}
+{"text":"By changing the reference frame one can set the linear momentum and position to zero and orient the spin angular momentum along the positive \"z\" axis. This eliminates eight of the eleven numbers, leaving three which are independent of the reference frame: mass, angular momentum magnitude, and electric charge. Thus any black hole that has been isolated for a significant period of time can be described by the Kerr\u2013Newman metric in an appropriately chosen reference frame."}
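The bookkeeping above can be sketched in code. The eleven numbers are standardly taken to be mass (1), position (3), linear momentum (3), spin angular momentum (3), and electric charge (1); the state representation and function names below are illustrative assumptions, not from the source:

```python
import numpy as np

# Eleven numbers describing an isolated, settled black hole:
# mass (1) + position (3) + momentum (3) + spin (3) + charge (1) = 11.
state = {
    "mass": 10.0,
    "position": np.array([1.0, -2.0, 3.0]),
    "momentum": np.array([0.5, 0.0, -0.5]),
    "spin": np.array([3.0, 4.0, 0.0]),
    "charge": -1.0,
}

def to_canonical_frame(s):
    """Translate so position = 0, boost so momentum = 0, rotate spin onto +z.

    Translation removes 3 numbers, the boost removes 3 more, and the
    rotation removes 2 (only |J| survives), eliminating 8 of the 11.
    """
    j = np.linalg.norm(s["spin"])
    return {
        "mass": s["mass"],                     # frame-independent
        "position": np.zeros(3),               # translated away
        "momentum": np.zeros(3),               # boosted away
        "spin": np.array([0.0, 0.0, j]),       # rotated onto +z
        "charge": s["charge"],                 # frame-independent
    }

canonical = to_canonical_frame(state)
# The three Kerr-Newman parameters that remain: M, |J|, Q.
print(canonical["mass"], canonical["spin"][2], canonical["charge"])
```

This is only a counting sketch, of course; the physical statement is that the Kerr–Newman metric in this frame is fixed by the three surviving numbers.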
+{"text":"The no-hair theorem was originally formulated for black holes within the context of a four-dimensional spacetime, obeying the Einstein field equation of general relativity with zero cosmological constant, in the presence of electromagnetic fields, or optionally other fields such as scalar fields and massive vector fields (Proca fields, etc.)."}
+{"text":"It has since been extended to include the case where the cosmological constant is positive (which recent observations are tending to support)."}
+{"text":"Magnetic charge, if detected as predicted by some theories, would form the fourth parameter possessed by a classical black hole."}
+{"text":"Counterexamples in which the theorem fails are known in spacetime dimensions higher than four; in the presence of non-abelian Yang\u2013Mills fields, non-abelian Proca fields, some non-minimally coupled scalar fields, or skyrmions; or in some theories of gravity other than Einstein\u2019s general relativity. However, these exceptions are often unstable solutions and\/or do not lead to conserved quantum numbers so that \"The 'spirit' of the no-hair conjecture, however, seems to be maintained\". It has been proposed that \"hairy\" black holes may be considered to be bound states of hairless black holes and solitons."}
+{"text":"In 2004, the exact analytical solution of a (3+1)-dimensional spherically symmetric black hole with minimally coupled self-interacting scalar field was derived. This showed that, apart from mass, electrical charge and angular momentum, black holes can carry a finite scalar charge which might be a result of interaction with cosmological scalar fields such as the inflaton. The solution is stable and does not possess any unphysical properties; however, the existence of a scalar field with the desired properties is only speculative."}
+{"text":"The LIGO results provide some experimental evidence consistent with the uniqueness predicted by the no-hair theorem. This observation is consistent with Stephen Hawking's theoretical work on black holes in the 1970s."}
+{"text":"A study by Stephen Hawking, Malcolm Perry and Andrew Strominger postulates that black holes might contain \"soft hair\", giving the black hole more degrees of freedom than previously thought. This hair exists at a very low energy state, which is why it did not show up in the earlier calculations that supported the no-hair theorem."}
+{"text":"In particle and condensed matter physics, Goldstone bosons or Nambu\u2013Goldstone bosons (NGBs) are bosons that appear necessarily in models exhibiting spontaneous breakdown of continuous symmetries. They were discovered by Yoichiro Nambu in particle physics within the context of the BCS superconductivity mechanism, and subsequently elucidated by Jeffrey Goldstone, and systematically generalized in the context of quantum field theory. In condensed matter physics such bosons are quasiparticles and are known as Anderson-Bogoliubov modes."}
+{"text":"These spinless bosons correspond to the spontaneously broken internal symmetry generators, and are characterized by the quantum numbers of these generators."}
+{"text":"They transform nonlinearly (shift) under the action of these generators, and can thus be excited out of the asymmetric vacuum by these generators. Thus, they can be thought of as the excitations of the field in the broken symmetry directions in group space\u2014and are massless if the spontaneously broken symmetry is not also broken explicitly."}
+{"text":"If, instead, the symmetry is not exact, i.e. if it is explicitly broken as well as spontaneously broken, then the Nambu\u2013Goldstone bosons are not massless, though they typically remain relatively light; they are then called pseudo-Goldstone bosons or pseudo-Nambu\u2013Goldstone bosons (abbreviated PNGBs)."}
+{"text":"Goldstone's theorem examines a generic continuous symmetry which is spontaneously broken; i.e., its currents are conserved, but the ground state is not invariant under the action of the corresponding charges. Then, necessarily, new massless (or light, if the symmetry is not exact) scalar particles appear in the spectrum of possible excitations. There is one scalar particle\u2014called a Nambu\u2013Goldstone boson\u2014for each generator of the symmetry that is broken, i.e., that does not preserve the ground state. The Nambu\u2013Goldstone mode is a long-wavelength fluctuation of the corresponding order parameter."}
+{"text":"By virtue of their special properties in coupling to the vacuum of the respective symmetry-broken theory, vanishing momentum (\"soft\") Goldstone bosons involved in field-theoretic amplitudes make such amplitudes vanish (\"Adler zeros\")."}
+{"text":"Consider a complex scalar field , with the constraint that formula_1, a constant. One way to impose a constraint of this sort is by including a potential interaction term in its Lagrangian density,"}
+{"text":"and taking the limit as . This is called the \"Abelian nonlinear \u03c3-model\"."}
+{"text":"The constraint, and the action, below, are invariant under a \"U\"(1) phase transformation, . The field can be redefined to give a real scalar field (i.e., a spin-zero particle) without any constraint by"}
+{"text":"where is the Nambu\u2013Goldstone boson (actually formula_4 is) and the \"U\"(1) symmetry transformation effects a shift on , namely"}
+{"text":"but does not preserve the ground state (i.e. the above infinitesimal transformation \"does not annihilate it\"\u2014the hallmark of invariance), as evident in the charge of the current below."}
+{"text":"Thus, the vacuum is degenerate and noninvariant under the action of the spontaneously broken symmetry."}
+{"text":"The corresponding Lagrangian density is given by"}
+{"text":"Note that the constant term formula_8 in the Lagrangian density has no physical significance, and the other term in it is simply the kinetic term for a massless scalar."}
+{"text":"The charge, \"Q\", resulting from this current shifts and the ground state to a new, degenerate, ground state. Thus, a vacuum with will shift to a \"different vacuum\" with . The current connects the original vacuum with the Nambu\u2013Goldstone boson state, ."}
+{"text":"In general, in a theory with several scalar fields, , the Nambu\u2013Goldstone mode is massless, and parameterises the curve of possible (degenerate) vacuum states. Its hallmark under the broken symmetry transformation is \"nonvanishing vacuum expectation\" , an order parameter, for vanishing , at some ground state |0\u3009 chosen at the minimum of the potential, . Symmetry dictates that all variations of the potential with respect to the fields in all symmetry directions vanish. The vacuum value of the first order variation in any direction vanishes as just seen; while the vacuum value of the second order variation must also vanish, as follows. Vanishing vacuum values of field symmetry transformation increments add no new information."}
+{"text":"By contrast, however, \"nonvanishing vacuum expectations of transformation increments\", , specify the relevant (Goldstone) \"null eigenvectors of the mass matrix\","}
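The statement that transformation increments with nonvanishing vacuum expectation are null eigenvectors of the mass matrix can be checked symbolically for the U(1) example above. The sketch below (real-component notation and symbol names are mine) uses the "Mexican hat" potential for a complex scalar written as two real fields:

```python
import sympy as sp

# Real components of the complex scalar: phi = (phi1 + i*phi2)/sqrt(2).
phi1, phi2, lam, v = sp.symbols('phi1 phi2 lamda v', positive=True)
V = sp.Rational(1, 4) * lam * (phi1**2 + phi2**2 - v**2)**2   # "Mexican hat"

# Mass matrix = Hessian of V, evaluated at the chosen vacuum <phi1>=v, <phi2>=0.
hess = sp.hessian(V, (phi1, phi2)).subs({phi1: v, phi2: 0})
print(hess)   # diag(2*lamda*v**2, 0): one massive (radial) mode, one massless

# The infinitesimal U(1) rotation delta(phi1, phi2) = (-phi2, phi1) shifts the
# vacuum by (0, v): a nonvanishing transformation increment, and exactly a
# null eigenvector of the mass matrix -- the Goldstone direction.
shift = sp.Matrix([0, v])
print(hess * shift)   # the zero vector
```

The zero eigenvalue is the Goldstone boson; the nonzero one, 2λv², is the mass-squared of the radial ("Higgs-like") mode.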
+{"text":"The principle behind Goldstone's argument is that the ground state is not unique. Normally, by current conservation, the charge operator for any symmetry current is time-independent,"}
+{"text":"Acting with the charge operator on the vacuum either \"annihilates the vacuum\", if that is symmetric; else, if \"not\", as is the case in spontaneous symmetry breaking, it produces a zero-frequency state out of it, through its shift transformation feature illustrated above. Actually, here, the charge itself is ill-defined, cf. the Fabri\u2013Picasso argument below."}
+{"text":"But its better behaved commutators with fields, that is, the nonvanishing transformation shifts , are, nevertheless, \"time-invariant\","}
+{"text":"thus generating a in its Fourier transform. (This ensures that, inserting a complete set of intermediate states in a nonvanishing current commutator can lead to vanishing time-evolution only when one or more of these states is massless.)"}
+{"text":"Thus, if the vacuum is not invariant under the symmetry, action of the charge operator produces a state which is different from the vacuum chosen, but which has zero frequency. This is a long-wavelength oscillation of a field which is nearly stationary: there are physical states with zero frequency, , so that the theory cannot have a mass gap."}
+{"text":"This argument is further clarified by taking the limit carefully. If an approximate charge operator acting in a huge but finite region is applied to the vacuum,"}
+{"text":"a state with approximately vanishing time derivative is produced,"}
+{"text":"Assuming a nonvanishing mass gap , the frequency of any state like the above, which is orthogonal to the vacuum, is at least ,"}
+{"text":"Letting become large leads to a contradiction. Consequently 0\u00a0=\u00a00. However this argument fails when the symmetry is gauged, because then the symmetry generator is only performing a gauge transformation. A gauge transformed state is the same exact state, so that acting with a symmetry generator does not get one out of the vacuum."}
+{"text":"The argument requires both the vacuum and the charge to be translationally invariant, , ."}
+{"text":"Consider the correlation function of the charge with itself,"}
+{"text":"so the integrand in the right hand side does not depend on the position."}
+{"text":"Thus, its value is proportional to the total space volume, formula_16 \u2014 unless the symmetry is unbroken, . Consequently, does not properly exist in the Hilbert space."}
+{"text":"There is an arguable loophole in the theorem. If one reads the theorem carefully, it only states that there exist non-vacuum states with arbitrarily small energies. Take for example a chiral N = 1 super QCD model with a nonzero squark VEV which is conformal in the IR. The chiral symmetry is a global symmetry which is (partially) spontaneously broken. Some of the \"Goldstone bosons\" associated with this spontaneous symmetry breaking are charged under the unbroken gauge group and hence, these composite bosons have a continuous mass spectrum with arbitrarily small masses but yet there is no Goldstone boson with exactly zero mass. In other words, the Goldstone bosons are infraparticles."}
+{"text":"A version of Goldstone's theorem also applies to nonrelativistic theories, and to relativistic theories with spontaneously broken spacetime symmetries, such as Lorentz symmetry, conformal symmetry, or rotational or translational invariance."}
+{"text":"It essentially states that, for each spontaneously broken symmetry, there corresponds some quasiparticle with no energy gap\u2014the nonrelativistic version of the mass gap. (Note that the energy here is really and not .) However, two \"different\" spontaneously broken generators may now give rise to the \"same\" Nambu\u2013Goldstone boson. For example, in a superfluid, both the \"U(1)\" particle number symmetry and Galilean symmetry are spontaneously broken. However, the phonon is the Goldstone boson for both."}
+{"text":"In general, the phonon is effectively the Nambu\u2013Goldstone boson for spontaneously broken Galilean\/Lorentz symmetry. However, in contrast to the case of internal symmetry breaking, when spacetime symmetries are broken, the order parameter \"need not\" be a scalar field, but may be a tensor field, and the corresponding independent massless modes may now be \"fewer\" than the number of spontaneously broken generators, because the Goldstone modes may now be linearly dependent among themselves: e.g., the Goldstone modes for some generators might be expressed as gradients of Goldstone modes for other broken generators."}
+{"text":"Spontaneously broken global fermionic symmetries, which occur in some supersymmetric models, lead to Nambu\u2013Goldstone fermions, or \"goldstinos\". These have spin \u00bd, instead of 0, and carry all quantum numbers of the respective supersymmetry generators broken spontaneously."}
+{"text":"Spontaneous supersymmetry breaking smashes up (\"reduces\") supermultiplet structures into the characteristic nonlinear realizations of broken supersymmetry, so that goldstinos are superpartners of \"all\" particles in the theory, of \"any spin\", and the only superpartners, at that. That is to say, two non-goldstino particles are connected to only goldstinos through supersymmetry transformations, and not to each other, even if they were so connected before the breaking of supersymmetry. As a result, the masses and spin multiplicities of such particles are then arbitrary."}
+{"text":"The theorem was proved first by Gell-Mann and Low in 1951, making use of the Dyson series. In 1969 Klaus Hepp provided an alternative derivation for the case where the original Hamiltonian describes free particles and the interaction is norm bounded. In 1989 Nenciu and Rasche proved it using the adiabatic theorem. A proof that does not rely on the Dyson expansion was given in 2007 by Molinari."}
+{"text":"Let formula_1 be an eigenstate of formula_2 with energy formula_3 and let the 'interacting' Hamiltonian be formula_4, where formula_5 is a coupling constant and formula_6 the interaction term. We define a Hamiltonian formula_7 which effectively interpolates between formula_8 and formula_2 in the limit formula_10 and formula_11. Let formula_12 denote the evolution operator in the interaction picture. The Gell-Mann and Low theorem asserts that if the limit as formula_13 of"}
+{"text":"exists, then formula_15 are eigenstates of formula_8."}
+{"text":"Note that when applied to, say, the ground-state, the theorem does not guarantee that the evolved state will be a ground state. In other words, level crossing is not excluded."}
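The theorem's content can be illustrated numerically with a two-level toy model (the model, parameters, and code below are my own illustrative assumptions, not from the source): switch the interaction on adiabatically as exp(−ε|t|), evolve an eigenstate of H₀ from large negative time to t = 0, and check that the result is approximately an eigenstate of the full Hamiltonian:

```python
import numpy as np

# Toy H0 with known eigenstates, and a perturbation g*V switched on
# adiabatically as exp(-eps*|t|), mimicking the interpolating Hamiltonian.
H0 = np.diag([0.0, 1.0])
V = np.array([[0.0, 1.0], [1.0, 0.0]])
g, eps, dt, T = 0.3, 0.01, 0.05, 600.0

psi = np.array([1.0 + 0j, 0.0])              # eigenstate of H0 at t = -T
for t in np.arange(-T, 0.0, dt):
    H = H0 + g * np.exp(-eps * abs(t)) * V   # slowly switched interaction
    w, u = np.linalg.eigh(H)                 # exact propagator for one step
    psi = u @ (np.exp(-1j * w * dt) * (u.conj().T @ psi))

# The theorem's normalization by <psi0|U|psi0> only removes a divergent
# phase; it does not affect whether psi is an eigenvector of the full H.
H_full = H0 + g * V
energy = float(np.real(psi.conj() @ H_full @ psi))
residual = float(np.linalg.norm(H_full @ psi - energy * psi))
exact = np.linalg.eigvalsh(H_full)[0]
print(energy, exact, residual)   # energy near the exact eigenvalue, small residual
```

In this toy model no level crossing occurs, so the evolved state does land on the ground state of the full Hamiltonian; as the text notes, the theorem itself guarantees only an eigenstate, not which one.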
+{"text":"As in the original paper, the theorem is typically proved making use of Dyson's expansion of the evolution operator. Its validity however extends beyond the scope of perturbation theory as has been demonstrated by Molinari. We follow Molinari's method here. Focus on formula_17 and let formula_18. From Schr\u00f6dinger's equation for the time-evolution operator"}
+{"text":"and the boundary condition formula_20 we can formally write"}
+{"text":"Focus for the moment on the case formula_22. Through a change of variables formula_23 we can write"}
+{"text":"This result can be combined with the Schr\u00f6dinger equation and its adjoint"}
+{"text":"The corresponding equation between formula_28 is the same. It can be obtained by pre-multiplying both sides with formula_29, post-multiplying with formula_30 and making use of"}
+{"text":"The other case we are interested in, namely formula_32, can be treated in an analogous fashion"}
+{"text":"and yields an additional minus sign in front of the commutator (we are not concerned here with the case where formula_33 have mixed signs). In summary, we obtain"}
+{"text":"We proceed for the negative-times case. Abbreviating the various operators for clarity"}
+{"text":"Now using the definition of formula_36 we differentiate and eliminate derivatives formula_37 using the above expression, finding"}
+{"text":"i \\hbar \\epsilon g \\partial_g | \\Psi_\\epsilon \\rangle ="}
+{"text":"In mathematics, Bogoliubov's edge-of-the-wedge theorem implies that holomorphic functions on two \"wedges\" with an \"edge\" in common are analytic continuations of each other provided they both give the same continuous function on the edge. It is used in quantum field theory to construct the analytic continuation of Wightman functions. The formulation and the first proof of the theorem were presented by Nikolay Bogoliubov at the International Conference on Theoretical Physics, Seattle, USA (September, 1956) and also published in the book \"Problems in the Theory of Dispersion Relations\". Further proofs and generalizations of the theorem were given by R.\u00a0Jost and H.\u00a0Lehmann (1957), F.\u00a0Dyson (1958), H.\u00a0Epstein (1960), and by other researchers."}
+{"text":"In one dimension, a simple case of the edge-of-the-wedge theorem can be stated as follows."}
+{"text":"In this example, the two wedges are the upper half-plane and the lower half-plane, and their common edge is the real axis. This result can be proved from Morera's theorem. Indeed, a function is holomorphic provided its integral around any closed contour vanishes; a contour which crosses the real axis can be broken up into contours in the upper and lower half-planes, and the integral around these vanishes by hypothesis."}
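The Morera-style step can be illustrated numerically (the example functions, rectangle, and code are my own illustrative assumptions, not from the source): a contour integral around a rectangle straddling the real axis vanishes when the upper and lower pieces agree on the edge, and does not when they disagree:

```python
import numpy as np

def contour_integral(f, corners, n=4001):
    """Trapezoid-rule line integral around the closed polygon `corners`."""
    total = 0.0 + 0.0j
    for a, b in zip(corners, corners[1:] + corners[:1]):
        z = a + (b - a) * np.linspace(0.0, 1.0, n)
        fz = f(z)
        total += np.sum(0.5 * (fz[1:] + fz[:-1]) * np.diff(z))
    return total

rect = [-1 - 1j, 1 - 1j, 1 + 1j, -1 + 1j]    # rectangle straddling the real axis

# Upper and lower pieces whose boundary values match glue to the entire
# function exp(z) (here both branches use the same formula, for clarity)...
matched = lambda z: np.where(z.imag >= 0, np.exp(z), np.exp(z))
# ...while a mismatch on the edge obstructs analytic continuation.
mismatched = lambda z: np.where(z.imag >= 0, np.exp(z), np.exp(z) + 1.0)

i_matched = contour_integral(matched, rect)
i_mismatched = contour_integral(mismatched, rect)
print(abs(i_matched), abs(i_mismatched))   # ~0 versus ~2
```

The residual ~2 for the mismatched pair is exactly the line integral of the jump along the part of the contour below the axis, which is what the hypothesis of matching boundary values eliminates.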
+{"text":"The more general case is phrased in terms of distributions. This is technically simplest in the case where the common boundary is the unit circle formula_1 in the complex plane. In that case holomorphic functions \"f\", \"g\" in the regions formula_2 and formula_3 have Laurent expansions"}
+{"text":"absolutely convergent in the same regions and have distributional boundary values given by the formal Fourier series"}
+{"text":"Their distributional boundary values are equal if formula_6 for all \"n\". It is then elementary that the common Laurent series converges absolutely in the whole region formula_7."}
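The matching of boundary Fourier coefficients can be checked numerically for a concrete function (the function, radii, and code are my own illustrative assumptions, not from the source). For F(z) = 1/((z−2)(z+1/2)), holomorphic on the annulus 1/2 < |z| < 2, the Laurent coefficients recovered from circles just inside and just outside the unit circle agree, so the two expansions glue into one series:

```python
import numpy as np

F = lambda z: 1.0 / ((z - 2.0) * (z + 0.5))   # poles at 2 and -1/2 only
N = 256
theta = 2 * np.pi * np.arange(N) / N
# Laurent index n for each FFT bin: 0..N/2-1, then negative frequencies.
n = np.where(np.arange(N) < N // 2, np.arange(N), np.arange(N) - N)

def laurent_coeffs(r):
    """Approximate Laurent coefficients a_n from samples on the circle |z|=r."""
    c = np.fft.fft(F(r * np.exp(1j * theta))) / N   # c[k] ~ a_n * r**n
    return c / r ** n

inner = laurent_coeffs(0.95)   # boundary values approached from inside
outer = laurent_coeffs(1.05)   # ... and from outside
print(np.max(np.abs(inner - outer)))   # tiny: the expansions agree
```

By partial fractions, F = (1/2.5)[1/(z−2) − 1/(z+1/2)] on the annulus, so for instance a₀ = −0.2, which the recovered coefficients reproduce.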
+{"text":"In general given an open interval formula_8 on the real axis and holomorphic functions formula_9 defined in formula_10 and formula_11 satisfying"}
+{"text":"for some non-negative integer \"N\", the boundary values formula_13 of formula_14 can be defined as distributions on the real axis by the formulas"}
+{"text":"Existence can be proved by noting that, under the hypothesis, formula_16 is the formula_17-th complex derivative of a holomorphic function which extends to a continuous function on the boundary. If \"f\" is defined as formula_14 above and below the real axis and \"F\" is the distribution defined on the rectangle formula_19"}